Sample records for executed large generator

  1. Dynamic Test Generation for Large Binary Programs

    DTIC Science & Technology

    2009-11-12

    the fuzzing@whitestar.linuxbox.org mailing list, including Jared DeMott, Disco Jonny, and Ari Takanen, for discussions on fuzzing tradeoffs. Martin...as is the case for large applications where exercising all execution paths is virtually hopeless anyway. This point will be further discussed in...consumes trace files generated by iDNA and virtually re-executes the recorded runs. TruScan offers several features that substantially simplify symbolic

  2. Targeted enrichment strategies for next-generation plant biology

    Treesearch

    Richard Cronn; Brian J. Knaus; Aaron Liston; Peter J. Maughan; Matthew Parks; John V. Syring; Joshua Udall

    2012-01-01

    The dramatic advances offered by modern DNA sequencers continue to redefine the limits of what can be accomplished in comparative plant biology. Even with recent achievements, however, plant genomes present obstacles that can make it difficult to execute large-scale population and phylogenetic studies on next-generation sequencing platforms. Factors like large genome...

  3. Industry/government seminar on Large Space systems technology: Executive summary

    NASA Technical Reports Server (NTRS)

    Scala, S. M.

    1978-01-01

    The critical technology developments which the participating experts recommend as being required to support the early generation large space systems envisioned as space missions during the years 1985-2000 are summarized.

  4. Symbolic Analysis of Concurrent Programs with Polymorphism

    NASA Technical Reports Server (NTRS)

    Rungta, Neha Shyam

    2010-01-01

    The current trend of multi-core and multi-processor computing is causing a paradigm shift from inherently sequential to highly concurrent and parallel applications. Certain thread interleavings, data input values, or combinations of both often cause errors in the system. Systematic verification techniques such as explicit state model checking and symbolic execution are extensively used to detect errors in such systems [7, 9]. Explicit state model checking enumerates possible thread schedules and input data values of a program in order to check for errors [3, 9]. To partially mitigate the state space explosion from data input values, symbolic execution techniques substitute data input values with symbolic values [5, 7, 6]. Explicit state model checking and symbolic execution techniques used in conjunction with exhaustive search techniques such as depth-first search are unable to detect errors in medium to large-sized concurrent programs because the number of behaviors caused by data and thread non-determinism is extremely large. We present an overview of abstraction-guided symbolic execution for concurrent programs that detects errors manifested by a combination of thread schedules and data values [8]. The technique generates a set of key program locations relevant in testing the reachability of the target locations. The symbolic execution is then guided along these locations in an attempt to generate a feasible execution path to the error state. This allows the execution to focus in parts of the behavior space more likely to contain an error.

  5. Effectiveness comparison of partially executed t-way test suites generated by existing strategies

    NASA Astrophysics Data System (ADS)

    Othman, Rozmie R.; Ahmad, Mohd Zamri Zahir; Ali, Mohd Shaiful Aziz Rashid; Zakaria, Hasneeza Liza; Rahman, Md. Mostafijur

    2015-05-01

    Consuming 40 to 50 percent of software development cost, software testing is one of the most resource-consuming activities in the software development lifecycle. To ensure an acceptable level of quality and reliability of a typical software product, it is desirable to test every possible combination of input data under various configurations. Due to the combinatorial explosion problem, exhaustive testing is practically impossible. Resource constraints, costing factors, as well as strict time-to-market deadlines are amongst the main factors that inhibit such consideration. Earlier work suggests that a sampling strategy (i.e., based on t-way parameter interaction, also called t-way testing) can be effective in reducing the number of test cases without affecting the fault detection capability. However, for a very large system, even a t-way strategy will produce a large test suite that needs to be executed. In the end, only part of the planned test suite can be executed in order to meet the aforementioned constraints. Hence, test engineers need to measure the effectiveness of the partially executed test suite in order to assess the risk they have to take. Motivated by this problem, this paper presents an effectiveness comparison of partially executed t-way test suites generated by existing strategies using the tuples coverage method. With it, test engineers can predict the effectiveness of the testing process if only part of the original test cases is executed.
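
    The tuples coverage measurement referred to above can be illustrated with a short sketch. The following Python is a hypothetical, minimal illustration of counting how many t-way value combinations a partially executed suite covers; the parameter model and test cases are invented for the example and are not taken from the paper.

```python
# Hypothetical illustration of tuples coverage for a partially executed test suite;
# the parameter model and test cases are invented, not taken from the paper.
from itertools import combinations, product

def tway_coverage(params, executed_tests, t=2):
    """Fraction of all t-way value tuples covered by the executed tests."""
    total, covered = 0, 0
    for group in combinations(list(params), t):            # every t-subset of parameters
        all_tuples = set(product(*(params[p] for p in group)))
        seen = {tuple(test[p] for p in group) for test in executed_tests}
        total += len(all_tuples)
        covered += len(all_tuples & seen)
    return covered / total

# Toy configuration model and a partially executed suite of two tests.
params = {"os": ["linux", "win"], "db": ["pg", "mysql"], "browser": ["ff", "chrome"]}
executed = [
    {"os": "linux", "db": "pg",    "browser": "ff"},
    {"os": "win",   "db": "mysql", "browser": "chrome"},
]
print(f"2-way tuple coverage: {tway_coverage(params, executed, t=2):.0%}")
```

    With these two tests only half of the 2-way tuples are covered, which is the kind of figure a test engineer could use to judge the risk of stopping early.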

  6. The contribution of executive control to semantic cognition: Convergent evidence from semantic aphasia and executive dysfunction.

    PubMed

    Thompson, Hannah E; Almaghyuli, Azizah; Noonan, Krist A; Barak, Ohr; Lambon Ralph, Matthew A; Jefferies, Elizabeth

    2018-01-03

    Semantic cognition, as described by the controlled semantic cognition (CSC) framework (Rogers et al., Neuropsychologia, 76, 220), involves two key components: activation of coherent, generalizable concepts within a heteromodal 'hub' in combination with modality-specific features (spokes), and a constraining mechanism that manipulates and gates this knowledge to generate time- and task-appropriate behaviour. Executive-semantic goal representations, largely supported by executive regions such as frontal and parietal cortex, are thought to allow the generation of non-dominant aspects of knowledge when these are appropriate for the task or context. Semantic aphasia (SA) patients have executive-semantic deficits, and these are correlated with general executive impairment. If the CSC proposal is correct, patients with executive impairment should not only exhibit impaired semantic cognition, but should also show characteristics that align with those observed in SA. This possibility remains largely untested, as patients selected on the basis that they show executive impairment (i.e., with 'dysexecutive syndrome') have not been extensively tested on tasks tapping semantic control and have not been previously compared with SA cases. We explored conceptual processing in 12 patients showing symptoms consistent with dysexecutive syndrome (DYS) and 24 SA patients, using a range of multimodal semantic assessments which manipulated control demands. Patients with executive impairments, despite not being selected to show semantic impairments, nevertheless showed parallel patterns to SA cases. They showed strong effects of distractor strength, cues and miscues, and probe-target distance, plus minimal effects of word frequency on comprehension (unlike semantic dementia patients with degradation of conceptual knowledge). This supports a component process account of semantic cognition in which retrieval is shaped by control processes, and confirms that deficits in SA patients reflect difficulty controlling semantic retrieval. © 2018 The Authors. Journal of Neuropsychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.

  7. Concept For Generation Of Long Pseudorandom Sequences

    NASA Technical Reports Server (NTRS)

    Wang, C. C.

    1990-01-01

    Conceptual very-large-scale integrated (VLSI) digital circuit performs exponentiation in finite field. Algorithm that generates unusually long sequences of pseudorandom numbers executed by digital processor that includes such circuits. Concepts particularly advantageous for such applications as spread-spectrum communications, cryptography, and generation of ranging codes, synthetic noise, and test data, where usually desirable to make pseudorandom sequences as long as possible.
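
    As a rough illustration of the underlying idea (not the VLSI exponentiation circuit described in the brief), successive powers of a primitive element of a finite field cycle through every nonzero element, so the output sequence has period p - 1; the modulus and generator below are small, hypothetical choices.

```python
# Toy illustration only (not the VLSI exponentiation circuit from the brief):
# successive powers of a primitive element of GF(p) visit every nonzero element,
# so the output has period p - 1. The defaults are small examples; 7 is a known
# primitive root of the Mersenne prime 2**31 - 1.
def finite_field_sequence(p=2_147_483_647, g=7, length=10):
    """Yield `length` terms of g, g^2, g^3, ... (mod p) by repeated multiplication."""
    x = 1
    for _ in range(length):
        x = (x * g) % p
        yield x

print(list(finite_field_sequence(p=31, g=3, length=10)))   # 3 is a primitive root of 31
```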

  8. Designing Next Generation Massively Multithreaded Architectures for Irregular Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumeo, Antonino; Secchi, Simone; Villa, Oreste

    Irregular applications, such as data mining or graph-based computations, show unpredictable memory/network access patterns and control structures. Massively multi-threaded architectures with large node count, like the Cray XMT, have been shown to address their requirements better than commodity clusters. In this paper we present the approaches that we are currently pursuing to design future generations of these architectures. First, we introduce the Cray XMT and compare it to other multithreaded architectures. We then propose an evolution of the architecture, integrating multiple cores per node and next generation network interconnect. We advocate the use of hardware support for remote memory reference aggregation to optimize network utilization. For this evaluation we developed a highly parallel, custom simulation infrastructure for multi-threaded systems. Our simulator executes unmodified XMT binaries with very large datasets, capturing effects due to contention and hot-spotting, while predicting execution times with greater than 90% accuracy. We also discuss the FPGA prototyping approach that we are employing to study efficient support for irregular applications in next generation manycore processors.

  9. Performance regression manager for large scale systems

    DOEpatents

    Faraj, Daniel A.

    2017-10-17

    System and computer program product to perform an operation comprising generating, based on a first output generated by a first execution instance of a command, a first output file specifying a value of at least one performance metric, wherein the first output file is formatted according to a predefined format, comparing the value of the at least one performance metric in the first output file to a value of the performance metric in a second output file, the second output file having been generated based on a second output generated by a second execution instance of the command, and outputting for display an indication of a result of the comparison of the value of the at least one performance metric of the first output file to the value of the at least one performance metric of the second output file.
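
    The comparison step recited in the claim can be sketched in a few lines. The snippet below is only an illustrative stand-in, assuming a JSON "predefined format", hypothetical file names, and a hypothetical wall_time_s metric; it is not the patented implementation.

```python
# Illustrative stand-in for the comparison step in the claim; the JSON "predefined
# format", the file names, and the wall_time_s metric are assumptions, not the
# patented implementation.
import json
import sys

def read_metric(path, metric="wall_time_s"):
    """Load an output file in the predefined (here: JSON) format and return one metric."""
    with open(path) as fh:
        return float(json.load(fh)[metric])

def compare(first_file, second_file, metric="wall_time_s", tolerance=0.05):
    """Compare the metric from two execution instances and print the verdict."""
    new = read_metric(first_file, metric)
    old = read_metric(second_file, metric)
    change = (new - old) / old
    status = "REGRESSION" if change > tolerance else "OK"
    print(f"{metric}: {old:.3f} -> {new:.3f} ({change:+.1%}) {status}")

if __name__ == "__main__":
    # e.g. python compare.py run_new.json run_baseline.json
    compare(sys.argv[1], sys.argv[2])
```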

  10. Performance regression manager for large scale systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faraj, Daniel A.

    Methods comprising generating, based on a first output generated by a first execution instance of a command, a first output file specifying a value of at least one performance metric, wherein the first output file is formatted according to a predefined format, comparing the value of the at least one performance metric in the first output file to a value of the performance metric in a second output file, the second output file having been generated based on a second output generated by a second execution instance of the command, and outputting for display an indication of a result of the comparison of the value of the at least one performance metric of the first output file to the value of the at least one performance metric of the second output file.

  11. A novel adaptive Cuckoo search for optimal query plan generation.

    PubMed

    Gomathi, Ramalingam; Sharmila, Dhandapani

    2014-01-01

    The emergence of ever more web pages day by day has driven the development of semantic web technology. A World Wide Web Consortium (W3C) standard for storing semantic web data is the Resource Description Framework (RDF). To improve execution time when querying large RDF graphs, evolving metaheuristic algorithms have become an alternative to traditional query optimization methods. This paper focuses on the problem of query optimization of semantic web data. An efficient algorithm called adaptive Cuckoo search (ACS) for querying and generating optimal query plans for large RDF graphs is designed in this research. Experiments were conducted on different datasets with varying numbers of predicates. The experimental results show that the proposed approach provides significant improvements in query execution time. The extent to which the algorithm is efficient is tested and the results are documented.
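
    A bare-bones cuckoo-search loop over predicate orderings conveys the flavour of this kind of approach. The cost model, the single-swap stand-in for the Lévy flight, and the fixed abandonment probability below are simplifying assumptions, not the adaptive mechanism of ACS.

```python
# Bare-bones cuckoo-search sketch over predicate orderings. The cost model, the
# single-swap stand-in for the Levy flight, and the fixed abandonment probability
# are simplifications, not the adaptive mechanism of ACS.
import random

def cost(plan, selectivity):
    """Toy cost: cumulative rows examined when predicates are applied in this order."""
    rows, total = 1.0, 0.0
    for p in plan:
        total += rows
        rows *= selectivity[p]
    return total

def cuckoo_search(predicates, selectivity, nests=15, iters=200, pa=0.25):
    population = [random.sample(predicates, len(predicates)) for _ in range(nests)]
    best = min(population, key=lambda s: cost(s, selectivity))
    for _ in range(iters):
        # "Flight" step: perturb the best plan with one random swap.
        cand = best[:]
        i, j = random.sample(range(len(cand)), 2)
        cand[i], cand[j] = cand[j], cand[i]
        worst = max(range(nests), key=lambda k: cost(population[k], selectivity))
        if cost(cand, selectivity) < cost(population[worst], selectivity):
            population[worst] = cand
        # Abandon a fraction pa of nests and rebuild them at random.
        for k in range(nests):
            if random.random() < pa:
                population[k] = random.sample(predicates, len(predicates))
        best = min(population + [best], key=lambda s: cost(s, selectivity))
    return best

selectivity = {"?p1": 0.1, "?p2": 0.5, "?p3": 0.9}      # hypothetical triple patterns
print(cuckoo_search(list(selectivity), selectivity))     # most selective first is cheapest
```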

  12. HiQuant: Rapid Postquantification Analysis of Large-Scale MS-Generated Proteomics Data.

    PubMed

    Bryan, Kenneth; Jarboui, Mohamed-Ali; Raso, Cinzia; Bernal-Llinares, Manuel; McCann, Brendan; Rauch, Jens; Boldt, Karsten; Lynn, David J

    2016-06-03

    Recent advances in mass-spectrometry-based proteomics are now facilitating ambitious large-scale investigations of the spatial and temporal dynamics of the proteome; however, the increasing size and complexity of these data sets is overwhelming current downstream computational methods, specifically those that support the postquantification analysis pipeline. Here we present HiQuant, a novel application that enables the design and execution of a postquantification workflow, including common data-processing steps, such as assay normalization and grouping, and experimental replicate quality control and statistical analysis. HiQuant also enables the interpretation of results generated from large-scale data sets by supporting interactive heatmap analysis and also the direct export to Cytoscape and Gephi, two leading network analysis platforms. HiQuant may be run via a user-friendly graphical interface and also supports complete one-touch automation via a command-line mode. We evaluate HiQuant's performance by analyzing a large-scale, complex interactome mapping data set and demonstrate a 200-fold improvement in the execution time over current methods. We also demonstrate HiQuant's general utility by analyzing proteome-wide quantification data generated from both a large-scale public tyrosine kinase siRNA knock-down study and an in-house investigation into the temporal dynamics of the KSR1 and KSR2 interactomes. Download HiQuant, sample data sets, and supporting documentation at http://hiquant.primesdb.eu .

  13. Gene Expression Analysis: Teaching Students to Do 30,000 Experiments at Once with Microarray

    ERIC Educational Resources Information Center

    Carvalho, Felicia I.; Johns, Christopher; Gillespie, Marc E.

    2012-01-01

    Genome scale experiments routinely produce large data sets that require computational analysis, yet there are few student-based labs that illustrate the design and execution of these experiments. In order for students to understand and participate in the genomic world, teaching labs must be available where students generate and analyze large data…

  14. Performance regression manager for large scale systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faraj, Daniel A.

    System and computer program product to perform an operation comprising generating, based on a first output generated by a first execution instance of a command, a first output file specifying a value of at least one performance metric, wherein the first output file is formatted according to a predefined format, comparing the value of the at least one performance metric in the first output file to a value of the performance metric in a second output file, the second output file having been generated based on a second output generated by a second execution instance of the command, and outputting for display an indication of a result of the comparison of the value of the at least one performance metric of the first output file to the value of the at least one performance metric of the second output file.

  15. An empirical analysis of executive behaviour with hospital executive information systems in Taiwan.

    PubMed

    Huang, Wei-Min

    2013-01-01

    Existing health information systems largely only support the daily operations of a medical centre, and are unable to generate the information required by executives for decision-making. Building on past research concerning information retrieval behaviour and learning through mental models, this study examines the use of information systems by hospital executives in medical centres. It uses a structural equation model to help find ways hospital executives might use information systems more effectively. The results show that computer self-efficacy directly affects the maintenance of mental models, and that system characteristics directly impact learning styles and information retrieval behaviour. Other results include the significant impact of perceived environmental uncertainty on scan searches; information retrieval behaviour and focused searches on mental models and perceived efficiency; scan searches on mental model building; learning styles and model building on perceived efficiency; and finally the impact of mental model maintenance on perceived efficiency and effectiveness.

  16. Self-assembling software generator

    DOEpatents

    Bouchard, Ann M [Albuquerque, NM]; Osbourn, Gordon C [Albuquerque, NM]

    2011-11-25

    A technique to generate an executable task includes inspecting a task specification data structure to determine what software entities are to be generated to create the executable task, inspecting the task specification data structure to determine how the software entities will be linked after generating the software entities, inspecting the task specification data structure to determine logic to be executed by the software entities, and generating the software entities to create the executable task.
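
    A toy sketch of the three inspections described in the abstract (what entities to generate, how to link them, and what logic each executes) might look as follows; the specification layout, entity names, and operations are hypothetical and are not the patented self-assembly mechanism.

```python
# Hypothetical sketch of the three inspections in the abstract; the specification
# layout, entity names, and operations are invented, not the patented mechanism.
OPS = {"double": lambda x: 2 * x, "inc": lambda x: x + 1}

task_spec = {
    "entities": ["stage_a", "stage_b"],                    # what entities to generate
    "links":    [("stage_a", "stage_b")],                  # how the entities are linked
    "logic":    {"stage_a": "double", "stage_b": "inc"},   # what logic each entity executes
}

def build_executable_task(spec):
    # Inspection 1: which software entities to generate.
    entities = {name: OPS[spec["logic"][name]] for name in spec["entities"]}
    # Inspection 2: how the entities are linked (assumes a simple linear chain).
    order = [src for src, _ in spec["links"]] + [spec["links"][-1][1]]
    # Inspection 3: compose the generated entities into one executable task.
    def task(value):
        for name in order:
            value = entities[name](value)
        return value
    return task

run = build_executable_task(task_spec)
print(run(5))   # double(5) -> 10, inc(10) -> 11
```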

  17. Optimization of large matrix calculations for execution on the Cray X-MP vector supercomputer

    NASA Technical Reports Server (NTRS)

    Hornfeck, William A.

    1988-01-01

    A considerable volume of large computational computer code was developed for NASA over the past twenty-five years. This code represents algorithms developed for machines of an earlier generation. With the emergence of the vector supercomputer as a viable, commercially available machine, an opportunity exists to evaluate optimization strategies to improve the efficiency of existing software. This opportunity arises primarily from architectural differences between the latest generation of large-scale machines and the earlier, mostly uniprocessor, machines. A software package being used by NASA to perform computations on large matrices is described, and a strategy for conversion to the Cray X-MP vector supercomputer is also described.

  18. Analyzing the test process using structural coverage

    NASA Technical Reports Server (NTRS)

    Ramsey, James; Basili, Victor R.

    1985-01-01

    A large, commercially developed FORTRAN program was modified to produce structural coverage metrics. The modified program was executed on a set of functionally generated acceptance tests and a large sample of operational usage cases. The resulting structural coverage metrics are combined with fault and error data to evaluate structural coverage. It was shown that in the software environment the functionally generated tests seem to be a good approximation of operational use. The relative proportions of the exercised statement subclasses change as the structural coverage of the program increases. A method was also proposed for evaluating whether two sets of input data exercise a program in a similar manner. Evidence was provided that implies that in this environment, faults revealed in a procedure are independent of the number of times the procedure is executed and that it may be reasonable to use procedure coverage in software models that use statement coverage. Finally, the evidence suggests that it may be possible to use structural coverage to aid in the management of the acceptance test process.
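
    The proposal to evaluate whether two sets of input data exercise a program similarly can be sketched with plain set arithmetic over executed-statement identifiers; the toy traces below are invented, and the paper's FORTRAN instrumentation is not reproduced.

```python
# Invented traces; a sketch of comparing how similarly two input sets exercise a
# program using sets of executed statement identifiers.
def coverage(executed, all_statements):
    return len(executed & all_statements) / len(all_statements)

def jaccard(a, b):
    """Similarity of the statement sets exercised by two input sets."""
    return len(a & b) / len(a | b)

all_stmts = set(range(1, 101))                        # toy program with 100 statements
acceptance = {s for s in all_stmts if s % 2 == 0}     # pretend acceptance-test trace
operational = {s for s in all_stmts if s % 2 == 0 or s % 7 == 0}   # pretend operational trace

print(f"acceptance coverage : {coverage(acceptance, all_stmts):.0%}")
print(f"operational coverage: {coverage(operational, all_stmts):.0%}")
print(f"similarity (Jaccard): {jaccard(acceptance, operational):.2f}")
```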

  19. PVT: An Efficient Computational Procedure to Speed up Next-generation Sequence Analysis

    PubMed Central

    2014-01-01

    Background: High-throughput Next-Generation Sequencing (NGS) techniques are advancing genomics and molecular biology research. This technology generates substantially large data, which poses a major challenge to scientists seeking an efficient, cost- and time-effective solution to analyse such data. Further, for the different types of NGS data, there are certain common challenging steps involved in analysing those data. Spliced alignment is one such fundamental step in NGS data analysis which is extremely computationally intensive as well as time consuming. Serious problems exist even with the most widely used spliced alignment tools. TopHat is one such widely used spliced alignment tool which, although it supports multithreading, does not efficiently utilize computational resources in terms of CPU utilization and memory. Here we have introduced PVT (Pipelined Version of TopHat), where we take up a modular approach by breaking TopHat’s serial execution into a pipeline of multiple stages, thereby increasing the degree of parallelization and computational resource utilization. Thus we address the discrepancies in TopHat so as to analyze large NGS data efficiently. Results: We analysed the SRA dataset (SRX026839 and SRX026838) consisting of single-end reads and SRA data SRR1027730 consisting of paired-end reads. We used TopHat v2.0.8 to analyse these datasets and noted the CPU usage, memory footprint and execution time during spliced alignment. With this basic information, we designed PVT, a pipelined version of TopHat that removes the redundant computational steps during ‘spliced alignment’ and breaks the job into a pipeline of multiple stages (each comprising different step(s)) to improve its resource utilization, thus reducing the execution time. Conclusions: PVT provides an improvement over TopHat for spliced alignment of NGS data analysis. PVT thus resulted in the reduction of the execution time to ~23% for the single-end read dataset. Further, PVT designed for paired-end reads showed an improved performance of ~41% over TopHat (for the chosen data) with respect to execution time. Moreover, we propose PVT-Cloud, which implements the PVT pipeline in a cloud computing system. PMID:24894600

  20. PVT: an efficient computational procedure to speed up next-generation sequence analysis.

    PubMed

    Maji, Ranjan Kumar; Sarkar, Arijita; Khatua, Sunirmal; Dasgupta, Subhasis; Ghosh, Zhumur

    2014-06-04

    High-throughput Next-Generation Sequencing (NGS) techniques are advancing genomics and molecular biology research. This technology generates substantially large data, which poses a major challenge to scientists seeking an efficient, cost- and time-effective solution to analyse such data. Further, for the different types of NGS data, there are certain common challenging steps involved in analysing those data. Spliced alignment is one such fundamental step in NGS data analysis which is extremely computationally intensive as well as time consuming. Serious problems exist even with the most widely used spliced alignment tools. TopHat is one such widely used spliced alignment tool which, although it supports multithreading, does not efficiently utilize computational resources in terms of CPU utilization and memory. Here we have introduced PVT (Pipelined Version of TopHat), where we take up a modular approach by breaking TopHat's serial execution into a pipeline of multiple stages, thereby increasing the degree of parallelization and computational resource utilization. Thus we address the discrepancies in TopHat so as to analyze large NGS data efficiently. We analysed the SRA dataset (SRX026839 and SRX026838) consisting of single-end reads and SRA data SRR1027730 consisting of paired-end reads. We used TopHat v2.0.8 to analyse these datasets and noted the CPU usage, memory footprint and execution time during spliced alignment. With this basic information, we designed PVT, a pipelined version of TopHat that removes the redundant computational steps during 'spliced alignment' and breaks the job into a pipeline of multiple stages (each comprising different step(s)) to improve its resource utilization, thus reducing the execution time. PVT provides an improvement over TopHat for spliced alignment of NGS data analysis. PVT thus resulted in the reduction of the execution time to ~23% for the single-end read dataset. Further, PVT designed for paired-end reads showed an improved performance of ~41% over TopHat (for the chosen data) with respect to execution time. Moreover, we propose PVT-Cloud, which implements the PVT pipeline in a cloud computing system.
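
    The pipelining idea, independent of TopHat's internals, can be sketched with two stages connected by a queue so that stage one of the next chunk overlaps with stage two of the current chunk; the stage functions below are placeholders, not PVT's actual decomposition.

```python
# Generic two-stage pipelining sketch (placeholder stages, not PVT's decomposition
# of TopHat): a queue connects the stages so stage 1 of the next chunk overlaps
# with stage 2 of the current chunk.
import queue
import threading

def stage1_align(chunk):            # stand-in for the alignment-type stage
    return f"{chunk}.aligned"

def stage2_splice(aligned):         # stand-in for the splice-resolution stage
    return f"{aligned}.spliced"

def run_pipeline(chunks):
    q, results = queue.Queue(maxsize=2), []

    def producer():
        for c in chunks:
            q.put(stage1_align(c))
        q.put(None)                 # sentinel: no more work

    def consumer():
        while (item := q.get()) is not None:
            results.append(stage2_splice(item))

    threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(run_pipeline([f"reads_{i}" for i in range(4)]))
```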

  1. [Ecologic evaluation in the cognitive assessment of brain injury patients: generation and execution of script].

    PubMed

    Baguena, N; Thomas-Antérion, C; Sciessere, K; Truche, A; Extier, C; Guyot, E; Paris, N

    2006-06-01

    Assessment of executive functions in an everyday life activity, evaluating brain-injured subjects with script generation and execution tasks. We compared a script generation task to a script execution task in which subjects had to prepare a cooked dish. Two scoring grids, one qualitative and one quantitative, were used, together with the calculation of an anosognosia score. We checked whether the execution task was more sensitive to a dysexecutive disorder than the script generation task and compared the scores obtained in this evaluation with those from classical frontal tests. Twelve subjects with a brain injury sustained 6±4.79 years earlier and 12 healthy control subjects were tested. The subjects carried out a script generation task in which they had to explain the stages necessary to make a chocolate cake. They also had to perform a script execution task corresponding to the cake making. The two scoring grids were operational and complementary. The quantitative grid is more sensitive to a dysexecutive disorder. The brain-injured subjects made more errors in the execution task. It is important to evaluate the executive functions of subjects with brain injury in everyday life tasks, not just in psychometric or script-generation tests. Indeed, the ecological performance of a very simple task can reveal executive function difficulties, such as the planning or sequencing of actions, which are under-evaluated in laboratory tests.

  2. Using Planning, Scheduling and Execution for Autonomous Mars Rover Operations

    NASA Technical Reports Server (NTRS)

    Estlin, Tara A.; Gaines, Daniel M.; Chouinard, Caroline M.; Fisher, Forest W.; Castano, Rebecca; Judd, Michele J.; Nesnas, Issa A.

    2006-01-01

    With each new rover mission to Mars, rovers are traveling significantly longer distances. This distance increase raises not only the opportunities for science data collection, but also amplifies the amount of environment and rover state uncertainty that must be handled in rover operations. This paper describes how planning, scheduling and execution techniques can be used onboard a rover to autonomously generate and execute rover activities and in particular to handle new science opportunities that have been identified dynamically. We also discuss some of the particular challenges we face in supporting autonomous rover decision-making. These include interaction with rover navigation and path-planning software and handling large amounts of uncertainty in state and resource estimations. Finally, we describe our experiences in testing this work using several Mars rover prototypes in a realistic environment.

  3. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce

    PubMed Central

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity, as real-time and streaming data, in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew-mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223

  4. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.

    PubMed

    Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity, as real-time and streaming data, in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew-mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement.
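
    The multi-key idea behind MRPack can be illustrated outside Hadoop with a toy map and reduce phase in which intermediate records are tagged with an algorithm identifier, so several related algorithms share one pass over the input; the two algorithms below are placeholders, not MRPack's implementation.

```python
# Toy, in-process illustration of the multi-key idea (not MRPack's Hadoop code):
# one map pass tags intermediate records with an algorithm identifier so several
# related algorithms share a single job and a single scan of the input.
from collections import defaultdict

ALGORITHMS = {
    "wordcount": lambda rec: [(w, 1) for w in rec.split()],
    "linelen":   lambda rec: [("chars", len(rec))],
}

def map_phase(records):
    for rec in records:
        for algo, fn in ALGORITHMS.items():
            for k, v in fn(rec):
                yield (algo, k), v            # multi-key: (algorithm_id, key)

def reduce_phase(pairs):
    out = defaultdict(int)
    for key, value in pairs:
        out[key] += value                     # one reduce handles all algorithms
    return dict(out)

data = ["to be or not to be", "that is the question"]
print(reduce_phase(map_phase(data)))
```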

  5. Metaphorically speaking: cognitive abilities and the production of figurative language.

    PubMed

    Beaty, Roger E; Silvia, Paul J

    2013-02-01

    Figurative language is one of the most common expressions of creative behavior in everyday life. However, the cognitive mechanisms behind figures of speech such as metaphors remain largely unexplained. Recent evidence suggests that fluid and executive abilities are important to the generation of conventional and creative metaphors. The present study investigated whether several factors of the Cattell-Horn-Carroll model of intelligence contribute to generating these different types of metaphors. Specifically, the roles of fluid intelligence (Gf), crystallized intelligence (Gc), and broad retrieval ability (Gr) were explored. Participants completed a series of intelligence tests and were asked to produce conventional and creative metaphors. Structural equation modeling was used to assess the contribution of the different factors of intelligence to metaphor production. For creative metaphor, there were large effects of Gf (β = .45) and Gr (β = .52); for conventional metaphor, there was a moderate effect of Gc (β = .30). Creative and conventional metaphors thus appear to be anchored in different patterns of abilities: Creative metaphors rely more on executive processes, whereas conventional metaphors primarily draw from acquired vocabulary knowledge.

  6. A distributed computing environment with support for constraint-based task scheduling and scientific experimentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahrens, J.P.; Shapiro, L.G.; Tanimoto, S.L.

    1997-04-01

    This paper describes a computing environment which supports computer-based scientific research work. Key features include support for automatic distributed scheduling and execution and computer-based scientific experimentation. A new flexible and extensible scheduling technique that is responsive to a user's scheduling constraints, such as the ordering of program results and the specification of task assignments and processor utilization levels, is presented. An easy-to-use constraint language for specifying scheduling constraints, based on the relational database query language SQL, is described along with a search-based algorithm for fulfilling these constraints. A set of performance studies show that the environment can schedule and execute program graphs on a network of workstations as the user requests. A method for automatically generating computer-based scientific experiments is described. Experiments provide a concise method of specifying a large collection of parameterized program executions. The environment achieved significant speedups when executing experiments; for a large collection of scientific experiments an average speedup of 3.4 on an average of 5.5 scheduled processors was obtained.

  7. The Use and Validation of Qualitative Methods Used in Program Evaluation.

    ERIC Educational Resources Information Center

    Plucker, Frank E.

    When conducting a two-year college program review, there are several advantages to supplementing the standard quantitative research approach with qualitative measures. Qualitative research does not depend on a large number of random samples, it uses a flexible design which can be refined as the research is executed, and it generates findings in a…

  8. Task Decomposition Module For Telerobot Trajectory Generation

    NASA Astrophysics Data System (ADS)

    Wavering, Albert J.; Lumia, Ron

    1988-10-01

    A major consideration in the design of trajectory generation software for a Flight Telerobotic Servicer (FTS) is that the FTS will be called upon to perform tasks which require a diverse range of manipulator behaviors and capabilities. In a hierarchical control system where tasks are decomposed into simpler and simpler subtasks, the task decomposition module which performs trajectory planning and execution should therefore be able to accommodate a wide range of algorithms. In some cases, it will be desirable to plan a trajectory for an entire motion before manipulator motion commences, as when optimizing over the entire trajectory. Many FTS motions, however, will be highly sensory-interactive, such as moving to attain a desired position relative to a non-stationary object whose position is periodically updated by a vision system. In this case, the time-varying nature of the trajectory may be handled either by frequent replanning using updated sensor information, or by using an algorithm which creates a less specific state-dependent plan that determines the manipulator path as the trajectory is executed (rather than a priori). This paper discusses a number of trajectory generation techniques from these categories and how they may be implemented in a task decomposition module of a hierarchical control system. The structure, function, and interfaces of the proposed trajectory generation module are briefly described, followed by several examples of how different algorithms may be performed by the module. The proposed task decomposition module provides a logical structure for trajectory planning and execution, and supports a large number of published trajectory generation techniques.

  9. "My Mind Is Doing It All": No "Brake" to Stop Speech Generation in Jargon Aphasia.

    PubMed

    Robinson, Gail A; Butterworth, Brian; Cipolotti, Lisa

    2015-12-01

    To study whether pressure of speech in jargon aphasia arises out of disturbances to core language or executive processes, or at the intersection of conceptual preparation. Conceptual preparation mechanisms for speech have not been well studied. Several mechanisms have been proposed for jargon aphasia, a fluent, well-articulated, logorrheic propositional speech that is almost incomprehensible. We studied the vast quantity of jargon speech produced by patient J.A., who had suffered an infarct after the clipping of a middle cerebral artery aneurysm. We gave J.A. baseline cognitive tests and experimental word- and sentence-generation tasks that we had designed for patients with dynamic aphasia, a severely reduced but otherwise fairly normal propositional speech thought to result from deficits in conceptual preparation. J.A. had cognitive dysfunction, including executive difficulties, and a language profile characterized by poor repetition and naming in the context of relatively intact single-word comprehension. J.A.'s spontaneous speech was fluent but jargon. He had no difficulty generating sentences; in contrast to dynamic aphasia, his sentences were largely meaningless and not significantly affected by stimulus constraint level. This patient with jargon aphasia highlights that voluminous speech output can arise from disturbances of both language and executive functions. Our previous studies have identified three conceptual preparation mechanisms for speech: generation of novel thoughts, their sequencing, and selection. This study raises the possibility that a "brake" to stop message generation may be a fourth conceptual preparation mechanism behind the pressure of speech characteristic of jargon aphasia.

  10. An Adaptive Memory Interface Controller for Improving Bandwidth Utilization of Hybrid and Reconfigurable Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellana, Vito G.; Tumeo, Antonino; Ferrandi, Fabrizio

    Emerging applications such as data mining, bioinformatics, knowledge discovery, and social network analysis are irregular. They use data structures based on pointers or linked lists, such as graphs, unbalanced trees or unstructured grids, which generate unpredictable memory accesses. These data structures usually are large, but difficult to partition. These applications mostly are memory-bandwidth bounded and have high synchronization intensity. However, they also have large amounts of inherent dynamic parallelism, because they potentially perform a task for each one of the elements they are exploring. Several efforts are looking at accelerating these applications on hybrid architectures, which integrate general purpose processors with reconfigurable devices. Some solutions, which demonstrated significant speedups, include custom hand-tuned accelerators or even full processor architectures on the reconfigurable logic. In this paper we present an approach for the automatic synthesis of accelerators from C, targeted at irregular applications. In contrast to typical High Level Synthesis paradigms, which construct a centralized Finite State Machine, our approach generates dynamically scheduled hardware components. While parallelism exploitation in typical HLS-generated accelerators is usually bound within a single execution flow, our solution allows concurrently running multiple execution flows, thus also exploiting the coarser-grain task parallelism of irregular applications. Our approach supports multiple, multi-ported and distributed memories, and atomic memory operations. Its main objective is parallelizing as many memory operations as possible, independently from their execution time, to maximize the memory bandwidth utilization. This significantly differs from current HLS flows, which usually consider a single memory port and require precise scheduling of memory operations. A key innovation of our approach is the generation of a memory interface controller, which dynamically maps concurrent memory accesses to multiple ports. We present a case study on a typical irregular kernel, Graph Breadth First Search (BFS), exploring different tradeoffs in terms of parallelism and number of memories.

  11. Directed Incremental Symbolic Execution

    NASA Technical Reports Server (NTRS)

    Person, Suzette; Yang, Guowei; Rungta, Neha; Khurshid, Sarfraz

    2011-01-01

    The last few years have seen a resurgence of interest in the use of symbolic execution -- a program analysis technique developed more than three decades ago to analyze program execution paths. Scaling symbolic execution and other path-sensitive analysis techniques to large systems remains challenging despite recent algorithmic and technological advances. An alternative to solving the problem of scalability is to reduce the scope of the analysis. One approach that is widely studied in the context of regression analysis is to analyze the differences between two related program versions. While such an approach is intuitive in theory, finding efficient and precise ways to identify program differences, and characterize their effects on how the program executes has proved challenging in practice. In this paper, we present Directed Incremental Symbolic Execution (DiSE), a novel technique for detecting and characterizing the effects of program changes. The novelty of DiSE is to combine the efficiencies of static analysis techniques to compute program difference information with the precision of symbolic execution to explore program execution paths and generate path conditions affected by the differences. DiSE is a complementary technique to other reduction or bounding techniques developed to improve symbolic execution. Furthermore, DiSE does not require analysis results to be carried forward as the software evolves -- only the source code for two related program versions is required. A case-study of our implementation of DiSE illustrates its effectiveness at detecting and characterizing the effects of program changes.

  12. Testing large flats with computer generated holograms

    NASA Astrophysics Data System (ADS)

    Pariani, Giorgio; Tresoldi, Daniela; Spanò, Paolo; Bianco, Andrea

    2012-09-01

    We describe the optical test of a large flat based on a spherical mirror and a dedicated CGH. The spherical mirror, which can be accurately manufactured and tested in an absolute way, makes it possible to obtain a quasi-collimated light beam, and the hologram performs the residual wavefront correction. Alignment tools for the spherical mirror and the hologram itself are encoded in the CGH. Sensitivity to fabrication and alignment errors has been evaluated. Tests to verify the effectiveness of our approach are now being executed.

  13. Improving insight and non-insight problem solving with brief interventions.

    PubMed

    Wen, Ming-Ching; Butler, Laurie T; Koutstaal, Wilma

    2013-02-01

    Developing brief training interventions that benefit different forms of problem solving is challenging. In earlier research, Chrysikou (2006) showed that engaging in a task requiring generation of alternative uses of common objects improved subsequent insight problem solving. These benefits were attributed to a form of implicit transfer of processing involving enhanced construction of impromptu, on-the-spot or 'ad hoc' goal-directed categorizations of the problem elements. Following this, it is predicted that the alternative uses exercise should benefit abilities that govern goal-directed behaviour, such as fluid intelligence and executive functions. Similarly, an indirect intervention - self-affirmation (SA) - that has been shown to enhance cognitive and executive performance after self-regulation challenge and when under stereotype threat, may also increase adaptive goal-directed thinking and likewise should bolster problem-solving performance. In Experiment 1, brief single-session interventions, involving either alternative uses generation or SA, significantly enhanced both subsequent insight and visual-spatial fluid reasoning problem solving. In Experiment 2, we replicated the finding of benefits of both alternative uses generation and SA on subsequent insight problem-solving performance, and demonstrated that the underlying mechanism likely involves improved executive functioning. Even brief cognitive- and social-psychological interventions may substantially bolster different types of problem solving and may exert largely similar facilitatory effects on goal-directed behaviours. © 2012 The British Psychological Society.

  14. Integrated verification and testing system (IVTS) for HAL/S programs

    NASA Technical Reports Server (NTRS)

    Senn, E. H.; Ames, K. R.; Smith, K. A.

    1983-01-01

    The IVTS is a large software system designed to support user-controlled verification analysis and testing activities for programs written in the HAL/S language. The system is composed of a user interface and user command language, analysis tools and an organized data base of host system files. The analysis tools are of four major types: (1) static analysis, (2) symbolic execution, (3) dynamic analysis (testing), and (4) documentation enhancement. The IVTS requires a split HAL/S compiler, divided at the natural separation point between the parser/lexical analyzer phase and the target machine code generator phase. The IVTS uses the internal program form (HALMAT) between these two phases as primary input for the analysis tools. The dynamic analysis component requires some way to 'execute' the object HAL/S program. The execution medium may be an interpretive simulation or an actual host or target machine.

  15. Optimal Run Strategies in Monte Carlo Iterated Fission Source Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Lund, Amanda L.; Siegel, Andrew R.

    2017-06-19

    The method of successive generations used in Monte Carlo simulations of nuclear reactor models is known to suffer from intergenerational correlation between the spatial locations of fission sites. One consequence of the spatial correlation is that the convergence rate of the variance of the mean for a tally becomes worse than O(1/N). In this work, we consider how the true variance can be minimized given a total amount of work available as a function of the number of source particles per generation, the number of active/discarded generations, and the number of independent simulations. We demonstrate through both analysis and simulation that under certain conditions the solution time for highly correlated reactor problems may be significantly reduced either by running an ensemble of multiple independent simulations or simply by increasing the generation size to the extent that it is practical. However, if too many simulations or too large a generation size is used, the large fraction of source particles discarded can result in an increase in variance. We also show that there is a strong incentive to reduce the number of generations discarded through some source convergence acceleration technique. Furthermore, we discuss the efficient execution of large simulations on a parallel computer; we argue that several practical considerations favor using an ensemble of independent simulations over a single simulation with very large generation size.
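
    A small numerical experiment conveys the tradeoff described above. The sketch below uses an AR(1) sequence as a stand-in for a correlated per-generation tally (it is not a reactor simulation) and compares the variance of the mean from one long run against an ensemble of shorter independent runs with the same active-generation budget.

```python
# Toy numerical illustration: an AR(1) sequence stands in for a correlated
# per-generation tally (this is not a reactor simulation). With the same budget
# of active generations, compare one long run against an ensemble of shorter
# independent runs.
import numpy as np

rng = np.random.default_rng(0)

def correlated_run(n_gen, rho=0.9):
    """AR(1) surrogate for a per-generation tally with correlation rho."""
    x = np.empty(n_gen)
    x[0] = rng.normal()
    for i in range(1, n_gen):
        x[i] = rho * x[i - 1] + np.sqrt(1 - rho**2) * rng.normal()
    return x

def var_of_mean(n_gen, n_discard, n_runs, reps=400):
    """Empirical variance of the tally mean for an ensemble of n_runs runs."""
    means = np.array([correlated_run(n_gen)[n_discard:].mean()
                      for _ in range(reps * n_runs)])
    return means.reshape(reps, n_runs).mean(axis=1).var()

print("1 run  x 400 active generations:", var_of_mean(420, 20, 1))
print("4 runs x 100 active generations:", var_of_mean(120, 20, 4))
```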

  16. The pipeline system for Octave and Matlab (PSOM): a lightweight scripting framework and execution engine for scientific workflows.

    PubMed

    Bellec, Pierre; Lavoie-Courchesne, Sébastien; Dickinson, Phil; Lerch, Jason P; Zijdenbos, Alex P; Evans, Alan C

    2012-01-01

    The analysis of neuroimaging databases typically involves a large number of inter-connected steps called a pipeline. The pipeline system for Octave and Matlab (PSOM) is a flexible framework for the implementation of pipelines in the form of Octave or Matlab scripts. PSOM does not introduce new language constructs to specify the steps and structure of the workflow. All steps of analysis are instead described by a regular Matlab data structure, documenting their associated command and options, as well as their input, output, and cleaned-up files. The PSOM execution engine provides a number of automated services: (1) it executes jobs in parallel on a local computing facility as long as the dependencies between jobs allow for it and sufficient resources are available; (2) it generates a comprehensive record of the pipeline stages and the history of execution, which is detailed enough to fully reproduce the analysis; (3) if an analysis is started multiple times, it executes only the parts of the pipeline that need to be reprocessed. PSOM is distributed under an open-source MIT license and can be used without restriction for academic or commercial projects. The package has no external dependencies besides Matlab or Octave, is straightforward to install and supports a variety of operating systems (Linux, Windows, Mac). We ran several benchmark experiments on a public database including 200 subjects, using a pipeline for the preprocessing of functional magnetic resonance images (fMRI). The benchmark results showed that PSOM is a powerful solution for the analysis of large databases using local or distributed computing resources.
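
    PSOM itself is written for Octave/Matlab, but the core idea of jobs described as plain data plus an engine that reruns only stale steps can be sketched in a few lines of Python; the two-step pipeline and file names below are hypothetical.

```python
# Minimal Python sketch of the PSOM idea (PSOM itself targets Octave/Matlab): each
# job is plain data listing its command and input/output files, and the engine
# reruns only jobs whose outputs are missing or stale. File names are hypothetical.
import os
import shutil

def smooth():  shutil.copy("raw.nii", "smooth.nii")
def analyze(): shutil.copy("smooth.nii", "stats.txt")

pipeline = {
    "smooth":  {"command": smooth,  "files_in": ["raw.nii"],    "files_out": ["smooth.nii"]},
    "analyze": {"command": analyze, "files_in": ["smooth.nii"], "files_out": ["stats.txt"]},
}

def stale(job):
    """A job needs (re)processing if an output is missing or older than an input."""
    if not all(os.path.exists(f) for f in job["files_out"]):
        return True
    newest_in = max(os.path.getmtime(f) for f in job["files_in"])
    return any(os.path.getmtime(f) < newest_in for f in job["files_out"])

def run(pipeline):
    done = set()
    while len(done) < len(pipeline):              # assumes an acyclic, satisfiable pipeline
        for name, job in pipeline.items():
            if name in done or not all(os.path.exists(f) for f in job["files_in"]):
                continue
            if stale(job):                        # rerun only what needs reprocessing
                job["command"]()
            done.add(name)
    return done

open("raw.nii", "w").close()                      # stand-in input so the sketch can execute
print(run(pipeline))                              # a second call would skip both jobs
```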

  17. The pipeline system for Octave and Matlab (PSOM): a lightweight scripting framework and execution engine for scientific workflows

    PubMed Central

    Bellec, Pierre; Lavoie-Courchesne, Sébastien; Dickinson, Phil; Lerch, Jason P.; Zijdenbos, Alex P.; Evans, Alan C.

    2012-01-01

    The analysis of neuroimaging databases typically involves a large number of inter-connected steps called a pipeline. The pipeline system for Octave and Matlab (PSOM) is a flexible framework for the implementation of pipelines in the form of Octave or Matlab scripts. PSOM does not introduce new language constructs to specify the steps and structure of the workflow. All steps of analysis are instead described by a regular Matlab data structure, documenting their associated command and options, as well as their input, output, and cleaned-up files. The PSOM execution engine provides a number of automated services: (1) it executes jobs in parallel on a local computing facility as long as the dependencies between jobs allow for it and sufficient resources are available; (2) it generates a comprehensive record of the pipeline stages and the history of execution, which is detailed enough to fully reproduce the analysis; (3) if an analysis is started multiple times, it executes only the parts of the pipeline that need to be reprocessed. PSOM is distributed under an open-source MIT license and can be used without restriction for academic or commercial projects. The package has no external dependencies besides Matlab or Octave, is straightforward to install and supports a variety of operating systems (Linux, Windows, Mac). We ran several benchmark experiments on a public database including 200 subjects, using a pipeline for the preprocessing of functional magnetic resonance images (fMRI). The benchmark results showed that PSOM is a powerful solution for the analysis of large databases using local or distributed computing resources. PMID:22493575

  18. OCSEGen: Open Components and Systems Environment Generator

    NASA Technical Reports Server (NTRS)

    Tkachuk, Oksana

    2014-01-01

    To analyze a large system, one often needs to break it into smaller components. To analyze a component or unit under analysis, one needs to model its context of execution, called the environment, which represents the components with which the unit interacts. Environment generation is a challenging problem, because the environment needs to be general enough to uncover unit errors, yet precise enough to make the analysis tractable. In this paper, we present a tool for automated environment generation for open components and systems. The tool, called OCSEGen, is implemented on top of the Soot framework. We present the tool's current support and discuss its possible future extensions.

  19. Application driven interface generation for EASIE. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Kao, Ya-Chen

    1992-01-01

    The Environment for Application Software Integration and Execution (EASIE) provides a user interface and a set of utility programs which support the rapid integration and execution of analysis programs about a central relational database. EASIE provides users with two basic modes of execution. One of them is a menu-driven execution mode, called Application-Driven Execution (ADE), which provides sufficient guidance to review data, select a menu action item, and execute an application program. The other mode of execution, called Complete Control Execution (CCE), provides an extended executive interface which allows in-depth control of the design process. Currently, the EASIE system is based on alphanumeric techniques only. It is the purpose of this project to extend the flexibility of the EASIE system in the ADE mode by implementing it in a window system. Secondly, a set of utilities will be developed to assist the experienced engineer in the generation of an ADE application.

  20. The PR2D (Place, Route in 2-Dimensions) automatic layout computer program handbook

    NASA Technical Reports Server (NTRS)

    Edge, T. M.

    1978-01-01

    Place, Route in 2-Dimensions is a standard cell automatic layout computer program for generating large scale integrated/metal oxide semiconductor arrays. The program was utilized successfully for a number of years in both government and private sectors but until now was undocumented. The compilation, loading, and execution of the program on a Sigma V CP-V operating system is described.

  1. Symbolic PathFinder: Symbolic Execution of Java Bytecode

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Rungta, Neha

    2010-01-01

    Symbolic Pathfinder (SPF) combines symbolic execution with model checking and constraint solving for automated test case generation and error detection in Java programs with unspecified inputs. In this tool, programs are executed on symbolic inputs representing multiple concrete inputs. Values of variables are represented as constraints generated from the analysis of Java bytecode. The constraints are solved using off-the-shelf solvers to generate test inputs guaranteed to achieve complex coverage criteria. SPF has been used successfully at NASA, in academia, and in industry.
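
    The end product of this kind of analysis, one path condition per explored path plus a concrete input that satisfies it, can be shown with a toy example. The sketch below hand-enumerates the paths of a two-branch function and uses brute force in place of an off-the-shelf solver; it illustrates the idea only, not how SPF analyzes bytecode.

```python
# Toy illustration of the end product of symbolic execution: one path condition per
# path of a small two-branch function, solved by brute force instead of an
# off-the-shelf solver. SPF itself analyzes Java bytecode; this is only a sketch.
from itertools import product

def paths():
    """Hand-enumerated path conditions for: if x > 5: if y == x: error()."""
    gt = lambda x, y: x > 5
    eq = lambda x, y: y == x
    yield "error path", [gt, eq]
    yield "inner safe", [gt, lambda x, y: not eq(x, y)]
    yield "outer safe", [lambda x, y: not gt(x, y)]

def solve(constraints, domain=range(-10, 11)):
    """Brute-force stand-in for the constraint solver: find any satisfying (x, y)."""
    for x, y in product(domain, repeat=2):
        if all(c(x, y) for c in constraints):
            return x, y
    return None

for label, path_condition in paths():
    print(label, "->", solve(path_condition))
```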

  2. Pseudo-Random Number Generation in Children with High-Functioning Autism and Asperger's Disorder: Further Evidence for a Dissociation in Executive Functioning?

    ERIC Educational Resources Information Center

    Rinehart, Nicole J.; Bradshaw, John L.; Moss, Simon A.; Brereton, Avril V.; Tonge, Bruce J.

    2006-01-01

    The repetitive, stereotyped and obsessive behaviours, which are core diagnostic features of autism, are thought to be underpinned by executive dysfunction. This study examined executive impairment in individuals with autism and Asperger's disorder using a verbal equivalent of an established pseudo-random number generating task. Different patterns…

  3. An Extended Proof-Carrying Code Framework for Security Enforcement

    NASA Astrophysics Data System (ADS)

    Pirzadeh, Heidar; Dubé, Danny; Hamou-Lhadj, Abdelwahab

    The rapid growth of the Internet has resulted in increased attention to security to protect users from being victims of security threats. In this paper, we focus on security mechanisms that are based on Proof-Carrying Code (PCC) techniques. In a PCC system, a code producer sends a code along with its safety proof to the consumer. The consumer executes the code only if the proof is valid. Although PCC has been shown to be a useful security framework, it suffers from the sheer size of typical proofs - proofs of even small programs can be considerably large. In this paper, we propose an extended PCC framework (EPCC) in which, instead of the proof, a proof generator for the program in question is transmitted. This framework enables the execution of the proof generator and the recovery of the proof on the consumer's side in a secure manner using a newly created virtual machine called the VEP (Virtual Machine for Extended PCC).

  4. Comparative Study of Neural Network Frameworks for the Next Generation of Adaptive Optics Systems.

    PubMed

    González-Gutiérrez, Carlos; Santos, Jesús Daniel; Martínez-Zarzuela, Mario; Basden, Alistair G; Osborn, James; Díaz-Pernas, Francisco Javier; De Cos Juez, Francisco Javier

    2017-06-02

    Many of the next generation of adaptive optics systems on large and extremely large telescopes require tomographic techniques in order to correct for atmospheric turbulence over a large field of view. Multi-object adaptive optics is one such technique. In this paper, different implementations of a tomographic reconstructor based on a machine learning architecture named "CARMEN" are presented. Basic concepts of adaptive optics are introduced first, with a short explanation of three different control systems used on real telescopes and the sensors utilised. The operation of the reconstructor, along with the three neural network frameworks used, and the developed CUDA code are detailed. Changes to the size of the reconstructor influence the training and execution time of the neural network. The native CUDA code turns out to be the best choice for all the systems, although some of the other frameworks offer good performance under certain circumstances.

  5. Comparative Study of Neural Network Frameworks for the Next Generation of Adaptive Optics Systems

    PubMed Central

    González-Gutiérrez, Carlos; Santos, Jesús Daniel; Martínez-Zarzuela, Mario; Basden, Alistair G.; Osborn, James; Díaz-Pernas, Francisco Javier; De Cos Juez, Francisco Javier

    2017-01-01

    Many of the next generation of adaptive optics systems on large and extremely large telescopes require tomographic techniques in order to correct for atmospheric turbulence over a large field of view. Multi-object adaptive optics is one such technique. In this paper, different implementations of a tomographic reconstructor based on a machine learning architecture named “CARMEN” are presented. Basic concepts of adaptive optics are introduced first, with a short explanation of three different control systems used on real telescopes and the sensors utilised. The operation of the reconstructor, along with the three neural network frameworks used, and the developed CUDA code are detailed. Changes to the size of the reconstructor influence the training and execution time of the neural network. The native CUDA code turns out to be the best choice for all the systems, although some of the other frameworks offer good performance under certain circumstances. PMID:28574426

  6. Executive Dysfunction in OSA Before and After Treatment: A Meta-Analysis

    PubMed Central

    Olaithe, Michelle; Bucks, Romola S.

    2013-01-01

    Study Objectives: Obstructive sleep apnea (OSA) is a frequent and often underdiagnosed condition that is associated with upper airway collapse, oxygen desaturation, and sleep fragmentation leading to cognitive dysfunction. There is meta-analytic evidence that subdomains of attention and memory are affected by OSA. However, a thorough investigation of the impact of OSA on different subdomains of executive function is yet to be conducted. This report investigates the impact of OSA and its treatment, in adult patients, on 5 theorized subdomains of executive function. Design: An extensive literature search was conducted of published and unpublished materials, returning 35 studies that matched selection criteria. Meta-analysis was used to synthesize the results from studies examining the impact of OSA on executive functioning compared to controls (21 studies), and before and after treatment (19 studies); 5 studies met inclusion in both categories. Measurements: Research papers were selected which assessed 5 subdomains of executive function: Shifting, Updating, Inhibition, Generativity, and Fluid Reasoning. Results: All 5 domains of executive function demonstrated medium to very large impairments in OSA independent of age and disease severity. Furthermore, all subdomains of executive function demonstrated small to medium improvements with CPAP treatment. Discussion: Executive function is impaired across all five domains in OSA; these difficulties improved with CPAP treatment. Age and disease severity did not moderate the effects found; however, further studies are needed to explore the extent of primary and secondary effects, and the impact of age and premorbid intellectual ability (cognitive reserve). Citation: Olaithe M; Bucks RS. Executive dysfunction in OSA before and after treatment: a meta-analysis. SLEEP 2013;36(9):1297-1305. PMID:23997362

  7. Third US T-G supplier: now or later

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lincicome, R.A.

    The Utility Power Corporation (UPC) anticipates that a rush of new orders for turbines and generators will enable it to become the third domestic manufacturer of the large units. A joint venture of Allis-Chalmers and the West German firm Kraftwerk Union (KWU), UPC has waited 18 months without receiving a single order. A manufacturing site was acquired and negotiations with electric utility executives are expected to bring in bids. KWU, while disappointed, confirms its long-term goal of serving American utilities. UPC has revised its original schedule in the face of lagging electrical demand and a variety of economic and siting constraints, but sees a growing acceptance of large turbine-generators and for foreign technology. UPC will pursue a single-line of responsibility in its marketing strategy. (DCK)

  8. Efficiency of Executive Function: A Two-Generation Cross-Cultural Comparison of Samples From Hong Kong and the United Kingdom.

    PubMed

    Ellefson, Michelle R; Ng, Florrie Fei-Yin; Wang, Qian; Hughes, Claire

    2017-05-01

    Although Asian preschoolers acquire executive functions (EFs) earlier than their Western counterparts, little is known about whether this advantage persists into later childhood and adulthood. To address this gap, in the current study we gave four computerized EF tasks (providing measures of inhibition, working memory, cognitive flexibility, and planning) to a large sample ( n = 1,427) of 9- to 16-year-olds and their parents. All participants lived in either the United Kingdom or Hong Kong. Our findings highlight the importance of combining developmental and cultural perspectives and show both similarities and contrasts across sites. Specifically, adults' EF performance did not differ between the two sites; age-related changes in executive function for both the children and the parents appeared to be culturally invariant, as did a modest intergenerational correlation. In contrast, school-age children and young adolescents in Hong Kong outperformed their United Kingdom counterparts on all four EF tasks, a difference consistent with previous findings from preschool children.

  9. A web-based data-querying tool based on ontology-driven methodology and flowchart-based model.

    PubMed

    Ping, Xiao-Ou; Chung, Yufang; Tseng, Yi-Ju; Liang, Ja-Der; Yang, Pei-Ming; Huang, Guan-Tarn; Lai, Feipei

    2013-10-08

    Because of the increased adoption rate of electronic medical record (EMR) systems, more health care records have been increasingly accumulating in clinical data repositories. Therefore, querying the data stored in these repositories is crucial for retrieving the knowledge from such large volumes of clinical data. The aim of this study is to develop a Web-based approach for enriching the capabilities of the data-querying system along the three following considerations: (1) the interface design used for query formulation, (2) the representation of query results, and (3) the models used for formulating query criteria. The Guideline Interchange Format version 3.5 (GLIF3.5), an ontology-driven clinical guideline representation language, was used for formulating the query tasks based on the GLIF3.5 flowchart in the Protégé environment. The flowchart-based data-querying model (FBDQM) query execution engine was developed and implemented for executing queries and presenting the results through a visual and graphical interface. To examine a broad variety of patient data, the clinical data generator was implemented to automatically generate the clinical data in the repository, and the generated data, thereby, were employed to evaluate the system. The accuracy and time performance of the system for three medical query tasks relevant to liver cancer were evaluated based on the clinical data generator in the experiments with varying numbers of patients. In this study, a prototype system was developed to test the feasibility of applying a methodology for building a query execution engine using FBDQMs by formulating query tasks using the existing GLIF. The FBDQM-based query execution engine was used to successfully retrieve the clinical data based on the query tasks formatted using the GLIF3.5 in the experiments with varying numbers of patients. The accuracy of the three queries (ie, "degree of liver damage," "degree of liver damage when applying a mutually exclusive setting," and "treatments for liver cancer") was 100% for all four experiments (10 patients, 100 patients, 1000 patients, and 10,000 patients). Among the three measured query phases, (1) structured query language operations, (2) criteria verification, and (3) other, the first two had the longest execution time. The ontology-driven FBDQM-based approach enriched the capabilities of the data-querying system. The adoption of the GLIF3.5 increased the potential for interoperability, shareability, and reusability of the query tasks.

  10. Reproducible Large-Scale Neuroimaging Studies with the OpenMOLE Workflow Management System.

    PubMed

    Passerat-Palmbach, Jonathan; Reuillon, Romain; Leclaire, Mathieu; Makropoulos, Antonios; Robinson, Emma C; Parisot, Sarah; Rueckert, Daniel

    2017-01-01

    OpenMOLE is a scientific workflow engine with a strong emphasis on workload distribution. Workflows are designed using a high level Domain Specific Language (DSL) built on top of Scala. It exposes natural parallelism constructs to easily delegate the workload resulting from a workflow to a wide range of distributed computing environments. OpenMOLE hides the complexity of designing complex experiments thanks to its DSL. Users can embed their own applications and scale their pipelines from a small prototype running on their desktop computer to a large-scale study harnessing distributed computing infrastructures, simply by changing a single line in the pipeline definition. The construction of the pipeline itself is decoupled from the execution context. The high-level DSL abstracts the underlying execution environment, contrary to classic shell-script based pipelines. These two aspects allow pipelines to be shared and studies to be replicated across different computing environments. Workflows can be run as traditional batch pipelines or coupled with OpenMOLE's advanced exploration methods in order to study the behavior of an application, or perform automatic parameter tuning. In this work, we briefly present the strong assets of OpenMOLE and detail recent improvements targeting re-executability of workflows across various Linux platforms. We have tightly coupled OpenMOLE with CARE, a standalone containerization solution that allows re-executing on a Linux host any application that has been packaged on another Linux host previously. The solution is evaluated against a Python-based pipeline involving packages such as scikit-learn as well as binary dependencies. All were packaged and re-executed successfully on various HPC environments, with identical numerical results (here prediction scores) obtained on each environment. Our results show that the pair formed by OpenMOLE and CARE is a reliable solution to generate reproducible results and re-executable pipelines. A demonstration of the flexibility of our solution showcases three neuroimaging pipelines harnessing distributed computing environments as heterogeneous as local clusters or the European Grid Infrastructure (EGI).
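
    The decoupling of the pipeline definition from the execution context can be sketched as follows (a Python analogy under assumed names; OpenMOLE's real DSL is Scala-based and its environments are Grid, cluster, or cloud submitters):

      # Hypothetical stand-ins for OpenMOLE environments and tasks.
      def local_environment(task, data):
          return task(data)                      # run on the desktop prototype

      def cluster_environment(task, data):
          print("submitting", task.__name__, "to a batch cluster")  # placeholder
          return task(data)

      def segment_brain(image):
          return "segmented(" + image + ")"

      def pipeline(environment):
          raw = "subject_001.nii"
          return environment(segment_brain, raw)  # pipeline logic never changes

      print(pipeline(local_environment))
      # Scaling up changes only the single line below, mirroring the DSL's "one line" claim:
      print(pipeline(cluster_environment))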

  11. Reproducible Large-Scale Neuroimaging Studies with the OpenMOLE Workflow Management System

    PubMed Central

    Passerat-Palmbach, Jonathan; Reuillon, Romain; Leclaire, Mathieu; Makropoulos, Antonios; Robinson, Emma C.; Parisot, Sarah; Rueckert, Daniel

    2017-01-01

    OpenMOLE is a scientific workflow engine with a strong emphasis on workload distribution. Workflows are designed using a high level Domain Specific Language (DSL) built on top of Scala. It exposes natural parallelism constructs to easily delegate the workload resulting from a workflow to a wide range of distributed computing environments. OpenMOLE hides the complexity of designing complex experiments thanks to its DSL. Users can embed their own applications and scale their pipelines from a small prototype running on their desktop computer to a large-scale study harnessing distributed computing infrastructures, simply by changing a single line in the pipeline definition. The construction of the pipeline itself is decoupled from the execution context. The high-level DSL abstracts the underlying execution environment, contrary to classic shell-script based pipelines. These two aspects allow pipelines to be shared and studies to be replicated across different computing environments. Workflows can be run as traditional batch pipelines or coupled with OpenMOLE's advanced exploration methods in order to study the behavior of an application, or perform automatic parameter tuning. In this work, we briefly present the strong assets of OpenMOLE and detail recent improvements targeting re-executability of workflows across various Linux platforms. We have tightly coupled OpenMOLE with CARE, a standalone containerization solution that allows re-executing on a Linux host any application that has been packaged on another Linux host previously. The solution is evaluated against a Python-based pipeline involving packages such as scikit-learn as well as binary dependencies. All were packaged and re-executed successfully on various HPC environments, with identical numerical results (here prediction scores) obtained on each environment. Our results show that the pair formed by OpenMOLE and CARE is a reliable solution to generate reproducible results and re-executable pipelines. A demonstration of the flexibility of our solution showcases three neuroimaging pipelines harnessing distributed computing environments as heterogeneous as local clusters or the European Grid Infrastructure (EGI). PMID:28381997

  12. Evaluation of the efficiency and fault density of software generated by code generators

    NASA Technical Reports Server (NTRS)

    Schreur, Barbara

    1993-01-01

    Flight computers and flight software are used for GN&C (guidance, navigation, and control), engine controllers, and avionics during missions. The software development requires the generation of a considerable amount of code. The engineers who generate the code make mistakes, and generating a large body of code with high reliability requires considerable time. Computer-aided software engineering (CASE) tools are available which generate code automatically from inputs supplied through graphical interfaces. These tools are referred to as code generators. In theory, code generators could write highly reliable code quickly and inexpensively. The various code generators offer different levels of reliability checking: some check only the finished product, while others allow checking of individual modules and combined sets of modules as well. Given NASA's requirement for reliability, manually generated in-house code is needed as a point of comparison; furthermore, since automatically generated code is reputed to execute as efficiently as the best manually generated code, in-house verification is warranted.

  13. Intelligent scheduling of execution for customized physical fitness and healthcare system.

    PubMed

    Huang, Chung-Chi; Liu, Hsiao-Man; Huang, Chung-Lin

    2015-01-01

    Physical fitness and health of white-collar workers have been getting worse in recent years, so it is necessary to develop a system that can enhance physical fitness and health. Although an exercise prescription can be generated after diagnosis in a customized physical fitness and healthcare system, general scheduling is hard-pressed to meet individual execution needs. The main purpose of this research is therefore to develop intelligent scheduling of execution for a customized physical fitness and healthcare system. The results of diagnosis and prescription are generated by fuzzy logic inference and are then scheduled and executed by intelligent computing. The schedule of execution is generated using a genetic algorithm, which improves on traditional scheduling of exercise prescriptions for physical fitness and healthcare. Finally, we demonstrate the advantages of the intelligent scheduling of execution for the customized physical fitness and healthcare system.
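
    The scheduling step can be illustrated with a very small genetic algorithm (a hedged sketch with invented parameters, not the authors' implementation): a chromosome assigns exercise minutes to each day, and fitness rewards matching the prescribed weekly load without exceeding daily availability:

      import random

      PRESCRIBED_WEEKLY_MINUTES = 150               # hypothetical prescription
      DAILY_LIMIT = [60, 30, 60, 30, 60, 90, 90]    # hypothetical availability

      def fitness(plan):
          overload = sum(max(0, m - limit) for m, limit in zip(plan, DAILY_LIMIT))
          return -abs(sum(plan) - PRESCRIBED_WEEKLY_MINUTES) - 5 * overload

      def crossover(a, b):
          cut = random.randrange(1, len(a))
          return a[:cut] + b[cut:]

      def mutate(plan):
          plan = plan[:]
          i = random.randrange(len(plan))
          plan[i] = max(0, plan[i] + random.choice((-10, 10)))
          return plan

      population = [[random.randrange(0, 61) for _ in range(7)] for _ in range(30)]
      for _ in range(200):
          population.sort(key=fitness, reverse=True)
          parents = population[:10]
          offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                       for _ in range(20)]
          population = parents + offspring
      print("best weekly plan (minutes/day):", max(population, key=fitness))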

  14. Metacognitive and emotional/motivational executive functions in individuals with autism spectrum disorder and attention deficit hyperactivity disorder: preliminary results.

    PubMed

    Panerai, Simonetta; Tasca, Domenica; Ferri, Raffaele; Catania, Valentina; Genitori D'Arrigo, Valentina; Di Giorgio, Rosa; Zingale, Marinella; Trubia, Grazia; Torrisi, Anna; Elia, Maurizio

    2016-01-01

    Deficits in executive functions (EF) are frequently observed in autism spectrum disorder (ASD) and in attention deficit hyperactivity disorder (ADHD). The aim of this study was to evaluate executive performances of children with ASD and ADHD, and then make between-group comparisons as well as comparisons with a control group. A total of 58 subjects were recruited, 17 with ASD but without intellectual impairment, 18 with ADHD-combined presentation and 23 with typical development, matched on gender, chronological age and intellectual level. They were tested on some EF domains, namely planning, mental flexibility, response inhibition and generativity, which account for both metacognitive and emotional/motivational executive functions. Results showed a large overlap of EF dysfunctions in ASD and ADHD and were not indicative of the presence of two real distinct EF profiles. Nevertheless, in ADHD, a more severe deficit in prepotent response inhibition (emotional/motivational EF) was found. Results are partially consistent with those found in the literature. Further studies with larger samples are needed to determine how ASD and ADHD differ in terms of their strengths and weaknesses across EF domains.

  15. Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, P. T.; Shadid, J. N.; Hu, J. J.

    Here, we explore the current performance and scaling of a fully-implicit stabilized unstructured finite element (FE) variational multiscale (VMS) capability for large-scale simulations of 3D incompressible resistive magnetohydrodynamics (MHD). The large-scale linear systems that are generated by a Newton nonlinear solver approach are iteratively solved by preconditioned Krylov subspace methods. The efficiency of this approach is critically dependent on the scalability and performance of the algebraic multigrid preconditioner. Our study considers the performance of the numerical methods as recently implemented in the second-generation Trilinos implementation that is 64-bit compliant and is not limited by the 32-bit global identifiers of the original Epetra-based Trilinos. The study presents representative results for a Poisson problem on 1.6 million cores of an IBM Blue Gene/Q platform to demonstrate very large-scale parallel execution. Additionally, results for a more challenging steady-state MHD generator and a transient solution of a benchmark MHD turbulence calculation for the full resistive MHD system are also presented. These results are obtained on up to 131,000 cores of a Cray XC40 and one million cores of a BG/Q system.
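
    The core solver pattern, a preconditioned Krylov iteration, can be sketched in a few lines (shown here with a simple Jacobi preconditioner on a 1D Poisson problem in NumPy; the study itself relies on algebraic multigrid preconditioners from Trilinos at vastly larger scale):

      import numpy as np

      def pcg(A, b, M_inv_diag, tol=1e-8, max_iter=500):
          """Conjugate gradients with a diagonal (Jacobi) preconditioner."""
          x = np.zeros_like(b)
          r = b - A @ x
          z = M_inv_diag * r                 # apply the preconditioner
          p = z.copy()
          rz = r @ z
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol:
                  break
              z = M_inv_diag * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      # 1D Poisson test problem: tridiagonal, symmetric positive definite.
      n = 100
      A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      b = np.ones(n)
      x = pcg(A, b, M_inv_diag=1.0 / np.diag(A))
      print("residual norm:", np.linalg.norm(b - A @ x))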

  16. Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD

    DOE PAGES

    Lin, P. T.; Shadid, J. N.; Hu, J. J.; ...

    2017-11-06

    Here, we explore the current performance and scaling of a fully-implicit stabilized unstructured finite element (FE) variational multiscale (VMS) capability for large-scale simulations of 3D incompressible resistive magnetohydrodynamics (MHD). The large-scale linear systems that are generated by a Newton nonlinear solver approach are iteratively solved by preconditioned Krylov subspace methods. The efficiency of this approach is critically dependent on the scalability and performance of the algebraic multigrid preconditioner. Our study considers the performance of the numerical methods as recently implemented in the second-generation Trilinos implementation that is 64-bit compliant and is not limited by the 32-bit global identifiers of the original Epetra-based Trilinos. The study presents representative results for a Poisson problem on 1.6 million cores of an IBM Blue Gene/Q platform to demonstrate very large-scale parallel execution. Additionally, results for a more challenging steady-state MHD generator and a transient solution of a benchmark MHD turbulence calculation for the full resistive MHD system are also presented. These results are obtained on up to 131,000 cores of a Cray XC40 and one million cores of a BG/Q system.

  17. DIALOG: An executive computer program for linking independent programs

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Hague, D. S.; Watson, D. A.

    1973-01-01

    A very large scale computer programming procedure called the DIALOG executive system was developed for the CDC 6000 series computers. The executive computer program, DIALOG, controls the sequence of execution and data management function for a library of independent computer programs. Communication of common information is accomplished by DIALOG through a dynamically constructed and maintained data base of common information. Each computer program maintains its individual identity and is unaware of its contribution to the large scale program. This feature makes any computer program a candidate for use with the DIALOG executive system. The installation and uses of the DIALOG executive system are described.
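
    The linking idea reads naturally as a tiny executive loop (a conceptual Python sketch with invented disciplines, not the CDC 6000 implementation): each independent program keeps its own identity and communicates only through the data base the executive maintains:

      def aerodynamics(db):
          # Reads flight-condition entries, writes a lift estimate.
          db["lift"] = 0.5 * db["rho"] * db["velocity"] ** 2 * db["area"] * db["cl"]

      def structures(db):
          # Unaware of aerodynamics; only sees the shared data base.
          db["wing_mass"] = 0.002 * db["lift"]   # hypothetical sizing rule

      def dialog_executive(program_library, database):
          for program in program_library:        # the executive controls the sequence
              program(database)                  # data management via the shared data base
          return database

      database = {"rho": 1.225, "velocity": 60.0, "area": 20.0, "cl": 0.8}
      print(dialog_executive([aerodynamics, structures], database))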

  18. Time-Elastic Generative Model for Acceleration Time Series in Human Activity Recognition

    PubMed Central

    Munoz-Organero, Mario; Ruiz-Blazquez, Ramona

    2017-01-01

    Body-worn sensors in general and accelerometers in particular have been widely used in order to detect human movements and activities. The execution of each type of movement by each particular individual generates sequences of time series of sensed data from which specific movement related patterns can be assessed. Several machine learning algorithms have been used over windowed segments of sensed data in order to detect such patterns in activity recognition based on intermediate features (either hand-crafted or automatically learned from data). The underlying assumption is that the computed features will capture statistical differences that can properly classify different movements and activities after a training phase based on sensed data. In order to achieve high accuracy and recall rates (and guarantee the generalization of the system to new users), the training data have to contain enough information to characterize all possible ways of executing the activity or movement to be detected. This could imply large amounts of data and a complex and time-consuming training phase, which has been shown to be even more relevant when automatically learning the optimal features to be used. In this paper, we present a novel generative model that is able to generate sequences of time series for characterizing a particular movement based on the time elasticity properties of the sensed data. The model is used to train a stack of auto-encoders in order to learn the particular features able to detect human movements. The results of movement detection using a newly generated database with information on five users performing six different movements are presented. The generalization of results using an existing database is also presented in the paper. The results show that the proposed mechanism is able to obtain acceptable recognition rates (F = 0.77) even in the case of using different people executing a different sequence of movements and using different hardware. PMID:28208736
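
    The time-elasticity idea can be sketched as a simple warping of the recorded trace (an assumed simplification in Python; the paper's actual generative model feeds such variants into a stack of auto-encoders):

      import numpy as np

      def time_elastic_variants(signal, factors=(0.8, 0.9, 1.1, 1.25)):
          """Synthesize variants of one movement by replaying it at different speeds."""
          n = len(signal)
          t = np.arange(n)
          variants = []
          for f in factors:
              # Sample the original trace at scaled time positions, clipped to range,
              # so every variant keeps the window length the classifier expects.
              warped_positions = np.clip(t * f, 0, n - 1)
              variants.append(np.interp(warped_positions, t, signal))
          return variants

      recorded = np.sin(np.linspace(0, 4 * np.pi, 128))   # stand-in accelerometer trace
      augmented = time_elastic_variants(recorded)
      print(len(augmented), "synthetic sequences of length", len(augmented[0]))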

  19. Time-Elastic Generative Model for Acceleration Time Series in Human Activity Recognition.

    PubMed

    Munoz-Organero, Mario; Ruiz-Blazquez, Ramona

    2017-02-08

    Body-worn sensors in general and accelerometers in particular have been widely used in order to detect human movements and activities. The execution of each type of movement by each particular individual generates sequences of time series of sensed data from which specific movement related patterns can be assessed. Several machine learning algorithms have been used over windowed segments of sensed data in order to detect such patterns in activity recognition based on intermediate features (either hand-crafted or automatically learned from data). The underlying assumption is that the computed features will capture statistical differences that can properly classify different movements and activities after a training phase based on sensed data. In order to achieve high accuracy and recall rates (and guarantee the generalization of the system to new users), the training data have to contain enough information to characterize all possible ways of executing the activity or movement to be detected. This could imply large amounts of data and a complex and time-consuming training phase, which has been shown to be even more relevant when automatically learning the optimal features to be used. In this paper, we present a novel generative model that is able to generate sequences of time series for characterizing a particular movement based on the time elasticity properties of the sensed data. The model is used to train a stack of auto-encoders in order to learn the particular features able to detect human movements. The results of movement detection using a newly generated database with information on five users performing six different movements are presented. The generalization of results using an existing database is also presented in the paper. The results show that the proposed mechanism is able to obtain acceptable recognition rates ( F = 0.77) even in the case of using different people executing a different sequence of movements and using different hardware.

  20. A fast sequence assembly method based on compressed data structures.

    PubMed

    Liang, Peifeng; Zhang, Yancong; Lin, Kui; Hu, Jinglu

    2014-01-01

    Assembling a large genome from next-generation sequencing reads requires large computer memory and a long execution time. To reduce these requirements, a memory- and time-efficient assembler is presented by applying an FM-index in JR-Assembler, called FMJ-Assembler, where FM stands for the FMR-index derived from the FM-index and BWT, and J for jumping extension. The FMJ-Assembler uses an expanded FM-index and BWT to compress read data and save memory, and the jumping-extension method makes it faster in CPU time. An extensive comparison of the FMJ-Assembler with current assemblers shows that the FMJ-Assembler achieves a better or comparable overall assembly quality while requiring less memory and less CPU time. These advantages indicate that the FMJ-Assembler will be an efficient assembly method for next-generation sequencing data.
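
    The compressed-index machinery rests on the Burrows-Wheeler transform and backward search, which can be sketched briefly (a quadratic toy in Python; a real assembler builds the index with suffix-array construction that scales to full read sets):

      def bwt(text):
          """Burrows-Wheeler transform via naive rotation sorting."""
          text += "$"                                    # unique end-of-string sentinel
          rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
          return "".join(rotation[-1] for rotation in rotations)

      def backward_search_count(bwt_string, pattern):
          """Count occurrences of `pattern` using FM-index style backward search."""
          first_column = "".join(sorted(bwt_string))
          lo, hi = 0, len(bwt_string)
          for ch in reversed(pattern):
              if ch not in first_column:
                  return 0
              c = first_column.index(ch)                 # number of symbols smaller than ch
              lo = c + bwt_string[:lo].count(ch)
              hi = c + bwt_string[:hi].count(ch)
              if lo >= hi:
                  return 0
          return hi - lo

      reference = "ACGTACGTGACG"
      index = bwt(reference)
      print(backward_search_count(index, "ACG"))         # counts the 3 occurrences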

  1. An Empirically Keyed Scale for Measuring Managerial Attitudes toward Women Executives.

    ERIC Educational Resources Information Center

    Dubno, Peter; And Others

    1979-01-01

    A scale (Managerial Attitudes toward Women Executives Scale -- MATWES) provides reliability and validity measures regarding managerial attitudes toward women executives. It employs a projective test for item generation and uses a panel of women executives as Q-sorters to select items. The Scale and its value in minimizing researcher bias in its…

  2. OpenSoC Fabric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-08-21

    Recent advancements in technology scaling have shown a trend towards greater integration with large-scale chips containing thousands of processors connected to memories and other I/O devices using non-trivial network topologies. Software simulation proves insufficient to study the tradeoffs in such complex systems due to slow execution time, whereas hardware RTL development is too time-consuming. We present OpenSoC Fabric, an on-chip network generation infrastructure which aims to provide a parameterizable and powerful on-chip network generator for evaluating future high performance computing architectures based on SoC technology. OpenSoC Fabric leverages a new hardware DSL, Chisel, which contains powerful abstractions provided by its base language, Scala, and generates both software (C++) and hardware (Verilog) models from a single code base. The OpenSoC Fabric infrastructure is modeled after existing state-of-the-art simulators, offers large and powerful collections of configuration options, and follows object-oriented design and functional programming to make functionality extension as easy as possible.

  3. DIALOG: An executive computer program for linking independent programs

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Hague, D. S.; Watson, D. A.

    1973-01-01

    A very large scale computer programming procedure called the DIALOG Executive System has been developed for the Univac 1100 series computers. The executive computer program, DIALOG, controls the sequence of execution and data management function for a library of independent computer programs. Communication of common information is accomplished by DIALOG through a dynamically constructed and maintained data base of common information. The unique feature of the DIALOG Executive System is the manner in which computer programs are linked. Each program maintains its individual identity and as such is unaware of its contribution to the large scale program. This feature makes any computer program a candidate for use with the DIALOG Executive System. The installation and use of the DIALOG Executive System are described at Johnson Space Center.

  4. Large-Eddy Simulation of Internal Flow through Human Vocal Folds

    NASA Astrophysics Data System (ADS)

    Lasota, Martin; Šidlof, Petr

    2018-06-01

    The phonatory process occurs when air is expelled from the lungs through the glottis and the pressure drop causes flow-induced oscillations of the vocal folds. The flow fields created in phonation are highly unsteady, and coherent vortex structures are generated. For accuracy it is essential to compute on a humanlike computational domain with an appropriate mathematical model. The work deals with numerical simulation of air flow within the space between the plicae vocales and plicae vestibulares. In addition to the dynamic width of the rima glottidis, where the sound is generated, the lateral ventriculus laryngis and sacculus laryngis are included in the computational domain as well. The paper presents results from OpenFOAM obtained with large-eddy simulation using second-order finite volume discretization of the incompressible Navier-Stokes equations. Large-eddy simulations with different subgrid-scale models are executed on a structured mesh. Only subgrid-scale models that represent turbulence via a turbulent viscosity and the Boussinesq approximation are used in the subglottal and supraglottal areas of the larynx.

  5. Information Management System Supporting a Multiple Property Survey Program with Legacy Radioactive Contamination.

    PubMed

    Stager, Ron; Chambers, Douglas; Wiatzka, Gerd; Dupre, Monica; Callough, Micah; Benson, John; Santiago, Erwin; van Veen, Walter

    2017-04-01

    The Port Hope Area Initiative is a project mandated and funded by the Government of Canada to remediate properties with legacy low-level radioactive waste contamination in the Town of Port Hope, Ontario. The management and use of large amounts of data from surveys of some 4800 properties is a significant task critical to the success of the project. A large amount of information is generated through the surveys, including scheduling individual field visits to the properties, capture of field data, laboratory sample tracking, QA/QC, property report generation and project management reporting. Web-mapping tools were used to track and display temporal progress of various tasks and facilitated consideration of spatial associations of contamination levels. The IM system facilitated the management and integrity of the large amounts of information collected, evaluation of spatial associations, automated report reproduction and consistent application and traceable execution for this project.

  6. Questionnaire-based assessment of executive functioning: Psychometrics.

    PubMed

    Castellanos, Irina; Kronenberger, William G; Pisoni, David B

    2018-01-01

    The psychometric properties of the Learning, Executive, and Attention Functioning (LEAF) scale were investigated in an outpatient clinical pediatric sample. As a part of clinical testing, the LEAF scale, which broadly measures neuropsychological abilities related to executive functioning and learning, was administered to parents of 118 children and adolescents referred for psychological testing at a pediatric psychology clinic; 85 teachers also completed LEAF scales to assess reliability across different raters and settings. Scores on neuropsychological tests of executive functioning and academic achievement were abstracted from charts. Psychometric analyses of the LEAF scale demonstrated satisfactory internal consistency, parent-teacher inter-rater reliability in the small to large effect size range, and test-retest reliability in the large effect size range, similar to values for other executive functioning checklists. Correlations between corresponding subscales on the LEAF and other behavior checklists were large, while most correlations with neuropsychological tests of executive functioning and achievement were significant but in the small to medium range. Results support the utility of the LEAF as a reliable and valid questionnaire-based assessment of delays and disturbances in executive functioning and learning. Applications and advantages of the LEAF and other questionnaire measures of executive functioning in clinical neuropsychology settings are discussed.

  7. Rule-Guided Executive Control of Response Inhibition: Functional Topography of the Inferior Frontal Cortex

    PubMed Central

    Cai, Weidong; Leung, Hoi-Chung

    2011-01-01

    Background The human inferior frontal cortex (IFC) is a large heterogeneous structure with distinct cytoarchitectonic subdivisions and fiber connections. It has been found involved in a wide range of executive control processes from target detection, rule retrieval to response control. Since these processes are often being studied separately, the functional organization of executive control processes within the IFC remains unclear. Methodology/Principal Findings We conducted an fMRI study to examine the activities of the subdivisions of IFC during the presentation of a task cue (rule retrieval) and during the performance of a stop-signal task (requiring response generation and inhibition) in comparison to a not-stop task (requiring response generation but not inhibition). We utilized a mixed event-related and block design to separate brain activity in correspondence to transient control processes from rule-related and sustained control processes. We found differentiation in control processes within the IFC. Our findings reveal that the bilateral ventral-posterior IFC/anterior insula are more active on both successful and unsuccessful stop trials relative to not-stop trials, suggesting their potential role in the early stage of stopping such as triggering the stop process. Direct countermanding seems to be outside of the IFC. In contrast, the dorsal-posterior IFC/inferior frontal junction (IFJ) showed transient activity in correspondence to the infrequent presentation of the stop signal in both tasks and the left anterior IFC showed differential activity in response to the task cues. The IFC subdivisions also exhibited similar but distinct patterns of functional connectivity during response control. Conclusions/Significance Our findings suggest that executive control processes are distributed across the IFC and that the different subdivisions of IFC may support different control operations through parallel cortico-cortical and cortico-striatal circuits. PMID:21673969

  8. Model Checking Abstract PLEXIL Programs with SMART

    NASA Technical Reports Server (NTRS)

    Siminiceanu, Radu I.

    2007-01-01

    We describe a method to automatically generate discrete-state models of abstract Plan Execution Interchange Language (PLEXIL) programs that can be analyzed using model checking tools. Starting from a high-level description of a PLEXIL program or a family of programs with common characteristics, the generator lays the framework that models the principles of program execution. The concrete parts of the program are not automatically generated, but require the modeler to introduce them by hand. As a case study, we generate models to verify properties of the PLEXIL macro constructs that are introduced as shorthand notation. After an exhaustive analysis, we conclude that the macro definitions obey the intended semantics and behave as expected, but contingently on a few specific requirements on the timing semantics of micro-steps in the concrete executive implementation.
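
    The exhaustive exploration a model checker performs can be caricatured with a plain breadth-first search over a hand-written transition relation (this is only an analogy; SMART uses decision-diagram techniques, and the real models encode PLEXIL micro-step semantics):

      from collections import deque

      def transitions(state):
          """Hypothetical two-node plan: each node steps WAITING -> EXECUTING -> FINISHED."""
          order = ["WAITING", "EXECUTING", "FINISHED"]
          for i, s in enumerate(state):
              if s != "FINISHED":
                  successor = list(state)
                  successor[i] = order[order.index(s) + 1]
                  yield tuple(successor)

      def check(initial, safe):
          """Visit every reachable state; report the first one violating `safe`."""
          seen, frontier = {initial}, deque([initial])
          while frontier:
              state = frontier.popleft()
              if not safe(state):
                  return False, state
              for successor in transitions(state):
                  if successor not in seen:
                      seen.add(successor)
                      frontier.append(successor)
          return True, None

      # Property: the two nodes never execute at the same time (here it fails,
      # and exhaustive search returns the offending interleaving as a witness).
      holds, witness = check(("WAITING", "WAITING"),
                             lambda s: s.count("EXECUTING") <= 1)
      print("property holds:", holds, "| counterexample:", witness)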

  9. GASPRNG: GPU accelerated scalable parallel random number generator library

    NASA Astrophysics Data System (ADS)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to be able to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications. Catalogue identifier: AEOI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: UTK license. No. of lines in distributed program, including test data, etc.: 167900 No. of bytes in distributed program, including test data, etc.: 1422058 Distribution format: tar.gz Programming language: C and CUDA. Computer: Any PC or workstation with NVIDIA GPU (Tested on Fermi GTX480, Tesla C1060, Tesla M2070). Operating system: Linux with CUDA version 4.0 or later. Should also run on MacOS, Windows, or UNIX. Has the code been vectorized or parallelized?: Yes. Parallelized using MPI directives. RAM: 512 MB˜ 732 MB (main memory on host CPU, depending on the data type of random numbers.) / 512 MB (GPU global memory) Classification: 4.13, 6.5. Nature of problem: Many computational science applications are able to consume large numbers of random numbers. For example, Monte Carlo simulations are able to consume limitless random numbers for the computation as long as resources for the computing are supported. Moreover, parallel computational science applications require independent streams of random numbers to attain statistically significant results. The SPRNG library provides this capability, but at a significant computational cost. The GASPRNG library presented here accelerates the generators of independent streams of random numbers using graphical processing units (GPUs). Solution method: Multiple copies of random number generators in GPUs allow a computational science application to consume large numbers of random numbers from independent, parallel streams. GASPRNG is a random number generators library to allow a computational science application to employ multiple copies of random number generators to boost performance. Users can interface GASPRNG with software code executing on microprocessors and/or GPUs. 
Running time: The tests provided take a few minutes to run.
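
    The usage model of many independent, reproducible streams can be approximated on the CPU with NumPy's seed-spawning facility (an analogy for the SPRNG-style interface only, not the CUDA implementation):

      import numpy as np

      def independent_streams(root_seed, n_tasks):
          """One statistically independent generator per task (thread, rank, or GPU block)."""
          children = np.random.SeedSequence(root_seed).spawn(n_tasks)
          return [np.random.default_rng(child) for child in children]

      streams = independent_streams(root_seed=2013, n_tasks=4)
      # Each task draws from its own non-overlapping stream; reruns are reproducible.
      estimates = [rng.random(100_000).mean() for rng in streams]
      print(estimates)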

  10. A Web-Based Data-Querying Tool Based on Ontology-Driven Methodology and Flowchart-Based Model

    PubMed Central

    Ping, Xiao-Ou; Chung, Yufang; Liang, Ja-Der; Yang, Pei-Ming; Huang, Guan-Tarn; Lai, Feipei

    2013-01-01

    Background Because of the increased adoption rate of electronic medical record (EMR) systems, more health care records have been increasingly accumulating in clinical data repositories. Therefore, querying the data stored in these repositories is crucial for retrieving the knowledge from such large volumes of clinical data. Objective The aim of this study is to develop a Web-based approach for enriching the capabilities of the data-querying system along the three following considerations: (1) the interface design used for query formulation, (2) the representation of query results, and (3) the models used for formulating query criteria. Methods The Guideline Interchange Format version 3.5 (GLIF3.5), an ontology-driven clinical guideline representation language, was used for formulating the query tasks based on the GLIF3.5 flowchart in the Protégé environment. The flowchart-based data-querying model (FBDQM) query execution engine was developed and implemented for executing queries and presenting the results through a visual and graphical interface. To examine a broad variety of patient data, the clinical data generator was implemented to automatically generate the clinical data in the repository, and the generated data, thereby, were employed to evaluate the system. The accuracy and time performance of the system for three medical query tasks relevant to liver cancer were evaluated based on the clinical data generator in the experiments with varying numbers of patients. Results In this study, a prototype system was developed to test the feasibility of applying a methodology for building a query execution engine using FBDQMs by formulating query tasks using the existing GLIF. The FBDQM-based query execution engine was used to successfully retrieve the clinical data based on the query tasks formatted using the GLIF3.5 in the experiments with varying numbers of patients. The accuracy of the three queries (ie, “degree of liver damage,” “degree of liver damage when applying a mutually exclusive setting,” and “treatments for liver cancer”) was 100% for all four experiments (10 patients, 100 patients, 1000 patients, and 10,000 patients). Among the three measured query phases, (1) structured query language operations, (2) criteria verification, and (3) other, the first two had the longest execution time. Conclusions The ontology-driven FBDQM-based approach enriched the capabilities of the data-querying system. The adoption of the GLIF3.5 increased the potential for interoperability, shareability, and reusability of the query tasks. PMID:25600078

  11. Shared prefetching to reduce execution skew in multi-threaded systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eichenberger, Alexandre E; Gunnels, John A

    Mechanisms are provided for optimizing code to perform prefetching of data into a shared memory of a computing device that is shared by a plurality of threads that execute on the computing device. A memory stream of a portion of code that is shared by the plurality of threads is identified. A set of prefetch instructions is distributed across the plurality of threads. Prefetch instructions are inserted into the instruction sequences of the plurality of threads such that each instruction sequence has a separate sub-portion of the set of prefetch instructions, thereby generating optimized code. Executable code is generated based on the optimized code and stored in a storage device. The executable code, when executed, performs the prefetches associated with the distributed set of prefetch instructions in a shared manner across the plurality of threads.

  12. Generating and executing programs for a floating point single instruction multiple data instruction set architecture

    DOEpatents

    Gschwind, Michael K

    2013-04-16

    Mechanisms for generating and executing programs for a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA) are provided. A computer program product comprising a computer recordable medium having a computer readable program recorded thereon is provided. The computer readable program, when executed on a computing device, causes the computing device to receive one or more instructions and execute the one or more instructions using logic in an execution unit of the computing device. The logic implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA), based on data stored in a vector register file of the computing device. The vector register file is configured to store both scalar and floating point values as vectors having a plurality of vector elements.

  13. A bidirectional relationship between physical activity and executive function in older adults

    PubMed Central

    Daly, Michael; McMinn, David; Allan, Julia L.

    2015-01-01

    Physically active lifestyles contribute to better executive function. However, it is unclear whether high levels of executive function lead people to be more active. This study uses a large sample and multi-wave data to identify whether a reciprocal association exists between physical activity and executive function. Participants were 4555 older adults tracked across four waves of the English Longitudinal Study of Aging. In each wave executive function was assessed using a verbal fluency test and a letter cancelation task and participants reported their physical activity levels. Fixed effects regressions showed that changes in executive function corresponded with changes in physical activity. In longitudinal multilevel models low levels of physical activity led to subsequent declines in executive function. Importantly, poor executive function predicted reductions in physical activity over time. This association was found to be over 50% larger in magnitude than the contribution of physical activity to changes in executive function. This is the first study to identify evidence for a robust bidirectional link between executive function and physical activity in a large sample of older adults tracked over time. PMID:25628552

  14. New data model with better functionality for VLab

    NASA Astrophysics Data System (ADS)

    da Silveira, P. R.; Wentzcovitch, R. M.; Karki, B. B.

    2009-12-01

    The VLab infrastructure and architecture were further developed to allow for several new features. First, workflows for first-principles calculations of thermodynamic properties and static elasticity, programmed in Java as Web Services, can now be executed by multiple users. Second, jobs generated by these workflows can now be executed in batch on multiple servers. A simple internal scheduler was implemented to handle hundreds of execution packages generated by multiple users and avoid overloading the servers. Third, a new data model was implemented to guarantee the integrity of a project (workflow execution) in case of failure. The latter can happen in an execution package or in a workflow phase. By recording all executed steps of a project, its execution can be resumed after dynamic alteration of parameters through the VLab Portal. Fourth, batch jobs can also be monitored through the portal. Better and faster interaction with servers is now achieved using Ajax technology. Finally, plots are now created on the VLab server using Gnuplot 4.2.2. Research supported by NSF grant ATM 0428774 (VLab). VLab is hosted by the Minnesota Supercomputing Institute.

  15. A Discussion of the Discrete Fourier Transform Execution on a Typical Desktop PC

    NASA Technical Reports Server (NTRS)

    White, Michael J.

    2006-01-01

    This paper will discuss and compare the execution times of three examples of the Discrete Fourier Transform (DFT). The first two examples demonstrate the direct implementation of the algorithm. In the first example, the Fourier coefficients are generated at the execution of the DFT. In the second example, the coefficients are generated prior to execution and indexed at execution. The last example demonstrates the Cooley-Tukey algorithm, better known as the Fast Fourier Transform. All examples were written in C and executed on a PC using a Pentium 4 running at 1.7 GHz. As a function of N, the total complex data size, the direct implementation of the DFT executes, as expected, at order N^2, and the FFT executes at order N log2 N. At N = 16K there is an increase in processing time beyond what is expected. This is not caused by the implementation but is a consequence of the effect that machine architecture and memory hierarchy have on execution. This paper also includes a brief overview of digital signal processing, along with a discussion of contemporary work on discrete Fourier processing.
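
    The three variants compared in the paper can be reproduced in miniature (a NumPy sketch rather than the original C, so absolute times differ, but the N^2 versus N log2 N behavior is the same):

      import time
      import numpy as np

      def dft_direct(x):
          """Coefficients generated at execution time: O(N^2)."""
          n = len(x)
          k = np.arange(n)
          return np.array([np.sum(x * np.exp(-2j * np.pi * k * m / n)) for m in range(n)])

      def dft_precomputed(x):
          """Coefficients generated once and indexed at execution: still O(N^2)."""
          n = len(x)
          m, k = np.meshgrid(np.arange(n), np.arange(n))
          coefficients = np.exp(-2j * np.pi * m * k / n)
          return coefficients @ x

      x = np.random.rand(1024)
      for name, transform in (("direct", dft_direct),
                              ("precomputed", dft_precomputed),
                              ("FFT", np.fft.fft)):
          start = time.perf_counter()
          transform(x)
          print(name, "took", round(time.perf_counter() - start, 4), "s")
      print("max |direct - FFT|:", np.max(np.abs(dft_direct(x) - np.fft.fft(x))))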

  16. Automatic Testcase Generation for Flight Software

    NASA Technical Reports Server (NTRS)

    Bushnell, David Henry; Pasareanu, Corina; Mackey, Ryan M.

    2008-01-01

    The TacSat3 project is applying Integrated Systems Health Management (ISHM) technologies to an Air Force spacecraft for operational evaluation in space. The experiment will demonstrate the effectiveness and cost of ISHM and vehicle systems management (VSM) technologies through onboard operation for extended periods. We present two approaches to automatic testcase generation for ISHM: 1) A blackbox approach that views the system as a blackbox, and uses a grammar-based specification of the system's inputs to automatically generate *all* inputs that satisfy the specifications (up to prespecified limits); these inputs are then used to exercise the system. 2) A whitebox approach that performs analysis and testcase generation directly on a representation of the internal behaviour of the system under test. The enabling technologies for both these approaches are model checking and symbolic execution, as implemented in the Ames' Java PathFinder (JPF) tool suite. Model checking is an automated technique for software verification. Unlike simulation and testing which check only some of the system executions and therefore may miss errors, model checking exhaustively explores all possible executions. Symbolic execution evaluates programs with symbolic rather than concrete values and represents variable values as symbolic expressions. We are applying the blackbox approach to generating input scripts for the Spacecraft Command Language (SCL) from Interface and Control Systems. SCL is an embedded interpreter for controlling spacecraft systems. TacSat3 will be using SCL as the controller for its ISHM systems. We translated the SCL grammar into a program that outputs scripts conforming to the grammars. Running JPF on this program generates all legal input scripts up to a prespecified size. Script generation can also be targeted to specific parts of the grammar of interest to the developers. These scripts are then fed to the SCL Executive. ICS's in-house coverage tools will be run to measure code coverage. Because the scripts exercise all parts of the grammar, we expect them to provide high code coverage. This blackbox approach is suitable for systems for which we do not have access to the source code. We are applying whitebox test generation to the Spacecraft Health INference Engine (SHINE) that is part of the ISHM system. In TacSat3, SHINE will execute an on-board knowledge base for fault detection and diagnosis. SHINE converts its knowledge base into optimized C code which runs onboard TacSat3. SHINE can translate its rules into an intermediate representation (Java) suitable for analysis with JPF. JPF will analyze SHINE's Java output using symbolic execution, producing testcases that can provide either complete or directed coverage of the code. Automatically generated test suites can provide full code coverage and be quickly regenerated when code changes. Because our tools analyze executable code, they fully cover the delivered code, not just models of the code. This approach also provides a way to generate tests that exercise specific sections of code under specific preconditions. This capability gives us more focused testing of specific sections of code.
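
    The blackbox approach can be sketched with a toy grammar expanded exhaustively up to a size bound (the grammar below is invented for illustration; the real work drives JPF over the SCL grammar and feeds the resulting scripts to the SCL Executive):

      import itertools

      GRAMMAR = {                            # hypothetical fragment of a command grammar
          "<script>": [["<cmd>"], ["<cmd>", ";", "<script>"]],
          "<cmd>":    [["SET", "<name>", "<value>"], ["GET", "<name>"]],
          "<name>":   [["mode"], ["rate"]],
          "<value>":  [["0"], ["1"]],
      }

      def expand(symbol, depth):
          """All token sequences derivable from `symbol` within `depth` expansion steps."""
          if symbol not in GRAMMAR:
              return [[symbol]]                # terminal symbol
          if depth == 0:
              return []
          sentences = []
          for production in GRAMMAR[symbol]:
              parts = [expand(s, depth - 1) for s in production]
              if all(parts):
                  for combo in itertools.product(*parts):
                      sentences.append([token for part in combo for token in part])
          return sentences

      scripts = [" ".join(tokens) for tokens in expand("<script>", depth=4)]
      print(len(scripts), "generated scripts, e.g.:", scripts[0])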

  17. Language Implications for Advertising in International Markets: A Model for Message Content and Message Execution.

    ERIC Educational Resources Information Center

    Beard, John; Yaprak, Attila

    A content analysis model for assessing advertising themes and messages generated primarily for United States markets to overcome barriers in the cultural environment of international markets was developed and tested. The model is based on three primary categories for generating, evaluating, and executing advertisements: rational, emotional, and…

  18. Conversion of the agent-oriented domain-specific language ALAS into JavaScript

    NASA Astrophysics Data System (ADS)

    Sredojević, Dejan; Vidaković, Milan; Okanović, Dušan; Mitrović, Dejan; Ivanović, Mirjana

    2016-06-01

    This paper shows the generation of JavaScript code from code written in the agent-oriented domain-specific language ALAS. ALAS is an agent-oriented domain-specific language for writing software agents that are executed within the XJAF middleware. Since the agents can be executed on various platforms, they must be converted into a language of the target platform. We also try to utilize existing tools and technologies to make the whole conversion process as simple, fast, and efficient as possible. We use the Xtext framework, which is compatible with Java, to implement the ALAS infrastructure: the editor and the code generator. Since Xtext supports Java, generation of Java code from ALAS code is straightforward. To generate JavaScript code that will be executed within the target JavaScript XJAF implementation, the Google Web Toolkit (GWT) is used.

  19. A modular telerobotic task execution system

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.; Tso, Kam S.; Hayati, Samad; Lee, Thomas S.

    1990-01-01

    A telerobot task execution system is proposed to provide a general parametrizable task execution capability. The system includes communication with the calling system, e.g., a task planning system, and single- and dual-arm sensor-based task execution with monitoring and reflexing. A specific task is described by specifying the parameters to various available task execution modules including trajectory generation, compliance control, teleoperation, monitoring, and sensor fusion. Reflex action is achieved by finding the corresponding reflex action in a reflex table when an execution event has been detected with a monitor.
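
    The monitoring-and-reflex mechanism can be sketched as a table lookup driven by monitor events (names and thresholds below are invented; the actual system also parametrizes trajectory generation, compliance control, and sensor fusion modules):

      REFLEX_TABLE = {                        # maps a detected event to a reflex action
          "excess_force": "halt_and_retract",
          "joint_limit":  "freeze_arm",
          "comm_timeout": "hold_position",
      }

      def monitor(sensors):
          """Return the first execution event detected, or None."""
          if sensors["force"] > 50.0:
              return "excess_force"
          if sensors["joint_angle"] > 2.8:
              return "joint_limit"
          return None

      def execute_task(trajectory, sensor_stream):
          for setpoint, sensors in zip(trajectory, sensor_stream):
              event = monitor(sensors)
              if event is not None:
                  return REFLEX_TABLE[event]   # reflex action preempts the task
              # ...command the arm toward `setpoint` here...
          return "task_complete"

      trajectory = [0.1, 0.2, 0.3]
      sensor_stream = [{"force": 10.0, "joint_angle": 1.0},
                       {"force": 65.0, "joint_angle": 1.1},
                       {"force": 12.0, "joint_angle": 1.2}]
      print(execute_task(trajectory, sensor_stream))   # -> halt_and_retract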

  20. Survey of Command Execution Systems for NASA Spacecraft and Robots

    NASA Technical Reports Server (NTRS)

    Verma, Vandi; Jonsson, Ari; Simmons, Reid; Estlin, Tara; Levinson, Rich

    2005-01-01

    NASA spacecraft and robots operate at long distances from Earth. Command sequences generated manually, or by automated planners on Earth, must eventually be executed autonomously onboard the spacecraft or robot. Software systems that execute commands onboard are known variously as execution systems, virtual machines, or sequence engines. Every robotic system requires some sort of execution system, but the level of autonomy and type of control they are designed for varies greatly. This paper presents a survey of execution systems with a focus on systems relevant to NASA missions.

  1. Renewable Electricity Futures Study Executive Summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mai, Trieu; Sandor, Debra; Wiser, Ryan

    2012-12-01

    The Renewable Electricity Futures Study (RE Futures) provides an analysis of the grid integration opportunities, challenges, and implications of high levels of renewable electricity generation for the U.S. electric system. The study is not a market or policy assessment. Rather, RE Futures examines renewable energy resources and many technical issues related to the operability of the U.S. electricity grid, and provides initial answers to important questions about the integration of high penetrations of renewable electricity technologies from a national perspective. RE Futures results indicate that a future U.S. electricity system that is largely powered by renewable sources is possible and that further work is warranted to investigate this clean generation pathway.

  2. Pegasus Workflow Management System: Helping Applications From Earth and Space

    NASA Astrophysics Data System (ADS)

    Mehta, G.; Deelman, E.; Vahi, K.; Silva, F.

    2010-12-01

    Pegasus WMS is a Workflow Management System that can manage large-scale scientific workflows across Grid, local and Cloud resources simultaneously. Pegasus WMS provides a means for representing the workflow of an application in an abstract XML form, agnostic of the resources available to run it and the location of data and executables. It then compiles these workflows into concrete plans by querying catalogs and farming computations across local and distributed computing resources, as well as emerging commercial and community cloud environments in an easy and reliable manner. Pegasus WMS optimizes the execution as well as data movement by leveraging existing Grid and cloud technologies via a flexible pluggable interface and provides advanced features like reusing existing data, automatic cleanup of generated data, and recursive workflows with deferred planning. It also captures all the provenance of the workflow from the planning stage to the execution of the generated data, helping scientists to accurately measure performance metrics of their workflow as well as data reproducibility issues. Pegasus WMS was initially developed as part of the GriPhyN project to support large-scale high-energy physics and astrophysics experiments. Direct funding from the NSF enabled support for a wide variety of applications from diverse domains including earthquake simulation, bacterial RNA studies, helioseismology and ocean modeling. Earthquake Simulation: Pegasus WMS was recently used in a large scale production run in 2009 by the Southern California Earthquake Centre to run 192 million loosely coupled tasks and about 2000 tightly coupled MPI style tasks on National Cyber infrastructure for generating a probabilistic seismic hazard map of the Southern California region. SCEC ran 223 workflows over a period of eight weeks, using on average 4,420 cores, with a peak of 14,540 cores. A total of 192 million files were produced totaling about 165TB out of which 11TB of data was saved. Astrophysics: The Laser Interferometer Gravitational-Wave Observatory (LIGO) uses Pegasus WMS to search for binary inspiral gravitational waves. A month of LIGO data requires many thousands of jobs, running for days on hundreds of CPUs on the LIGO Data Grid (LDG) and Open Science Grid (OSG). Ocean Temperature Forecast: Researchers at the Jet Propulsion Laboratory are exploring Pegasus WMS to run ocean forecast ensembles of the California coastal region. These models produce a number of daily forecasts for water temperature, salinity, and other measures. Helioseismology: The Solar Dynamics Observatory (SDO) is NASA's most important solar physics mission of this coming decade. Pegasus WMS is being used to analyze the data from SDO, which will be predominantly used to learn about solar magnetic activity and to probe the internal structure and dynamics of the Sun with helioseismology. Bacterial RNA studies: SIPHT is an application in bacterial genomics, which predicts sRNA (small non-coding RNAs)-encoding genes in bacteria. This project currently provides a web-based interface using Pegasus WMS at the backend to facilitate large-scale execution of the workflows on varied resources and provide better notifications of task/workflow completion.
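
    A conceptual sketch of the abstract-to-concrete planning step described above (illustrative only, not the Pegasus API; the catalogs, logical names, and site label are hypothetical):

        # Catalogs map logical names to concrete executables and physical data.
        transformation_catalog = {"align": "/opt/sw/bin/align"}
        replica_catalog = {"reads.fq": "gsiftp://storage/reads.fq"}

        abstract_dag = [
            {"id": "job1", "transform": "align", "inputs": ["reads.fq"], "outputs": ["aln.bam"]},
        ]

        def plan(dag, site="cluster-A"):
            # Compile the resource-agnostic workflow into site-bound, executable jobs.
            concrete = []
            for node in dag:
                concrete.append({
                    "id": node["id"],
                    "site": site,
                    "executable": transformation_catalog[node["transform"]],
                    "inputs": [replica_catalog.get(f, f) for f in node["inputs"]],
                    "outputs": node["outputs"],
                })
            return concrete

        for job in plan(abstract_dag):
            print(job)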

  3. Machine Learning Based Online Performance Prediction for Runtime Parallelization and Task Scheduling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, J; Ma, X; Singh, K

    2008-10-09

    With the emerging many-core paradigm, parallel programming must extend beyond its traditional realm of scientific applications. Converting existing sequential applications as well as developing next-generation software requires assistance from hardware, compilers and runtime systems to exploit parallelism transparently within applications. These systems must decompose applications into tasks that can be executed in parallel and then schedule those tasks to minimize load imbalance. However, many systems lack a priori knowledge about the execution time of all tasks to perform effective load balancing with low scheduling overhead. In this paper, we approach this fundamental problem using machine learning techniques, first generating performance models for all tasks and then applying those models to perform automatic performance prediction across program executions. We also extend an existing scheduling algorithm to use generated task cost estimates for online task partitioning and scheduling. We implement the above techniques in the pR framework, which transparently parallelizes scripts in the popular R language, and evaluate their performance and overhead with both a real-world application and a large number of synthetic representative test scripts. Our experimental results show that our proposed approach significantly improves task partitioning and scheduling, with maximum improvements of 21.8%, 40.3% and 22.1% and average improvements of 15.9%, 16.9% and 4.2% for LMM (a real R application) and synthetic test cases with independent and dependent tasks, respectively.
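
    A toy sketch of the general idea of cost-model-driven scheduling (not the pR framework or its algorithms; the linear cost model, task "sizes", and worker count below are hypothetical):

        import heapq

        def fit_linear_model(features, costs):
            # Least-squares fit of cost ~ a*feature + b from past executions.
            n = len(features)
            mean_x, mean_y = sum(features) / n, sum(costs) / n
            var = sum((x - mean_x) ** 2 for x in features)
            a = sum((x - mean_x) * (y - mean_y) for x, y in zip(features, costs)) / var
            return a, mean_y - a * mean_x

        def schedule(tasks, predict, workers=4):
            # Longest-predicted-cost-first onto the currently least-loaded worker.
            heap = [(0.0, w) for w in range(workers)]
            heapq.heapify(heap)
            assignment = {}
            for task in sorted(tasks, key=predict, reverse=True):
                load, w = heapq.heappop(heap)
                assignment[task] = w
                heapq.heappush(heap, (load + predict(task), w))
            return assignment

        a, b = fit_linear_model([10, 20, 40], [1.1, 2.0, 4.2])   # past task sizes and runtimes
        print(schedule([15, 35, 5, 25], predict=lambda size: a * size + b))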

  4. Artemis: Integrating Scientific Data on the Grid (Preprint)

    DTIC Science & Technology

    2004-07-01

    Theseus execution engine [Barish and Knoblock 03] to efficiently execute the generated datalog program. The Theseus execution engine has a wide...variety of operations to query databases, web sources, and web services. Theseus also contains a wide variety of relational operations, such as...selection, union, or projection. Furthermore, Theseus optimizes the execution of an integration plan by querying several data sources in parallel and

  5. Synthesizing Dynamic Programming Algorithms from Linear Temporal Logic Formulae

    NASA Technical Reports Server (NTRS)

    Rosu, Grigore; Havelund, Klaus

    2001-01-01

    The problem of testing a linear temporal logic (LTL) formula on a finite execution trace of events, generated by an executing program, occurs naturally in runtime analysis of software. We present an algorithm which takes an LTL formula and generates an efficient dynamic programming algorithm. The generated algorithm tests whether the LTL formula is satisfied by a finite trace of events given as input. The generated algorithm runs in time linear in the length of the trace, with a constant factor that depends on the size of the LTL formula; the memory it needs is constant in the trace length, again depending only on the size of the formula.
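
    A small sketch of the dynamic-programming idea on finite traces (not the generated code from the paper): the formula is evaluated by one backwards pass over the trace, keeping only the subformula values for the following position, so memory stays constant in the trace length. The tuple encoding and finite-trace conventions below are assumptions.

        def subformulas(f):
            # Post-order traversal: children before parents.
            for child in f[1:]:
                if isinstance(child, tuple):
                    yield from subformulas(child)
            yield f

        def holds(f, state, cur, nxt):
            # Truth of f at the current position, given child values at this
            # position (cur) and all subformula values at the next position (nxt).
            op = f[0]
            if op == "prop":       return f[1] in state
            if op == "not":        return not cur[f[1]]
            if op == "and":        return cur[f[1]] and cur[f[2]]
            if op == "next":       return nxt.get(f[1], False)
            if op == "always":     return cur[f[1]] and nxt.get(f, True)
            if op == "eventually": return cur[f[1]] or nxt.get(f, False)
            if op == "until":      return cur[f[2]] or (cur[f[1]] and nxt.get(f, False))
            raise ValueError(op)

        def check(formula, trace):
            nxt = {}
            for state in reversed(trace):
                cur = {}
                for sub in subformulas(formula):     # children are filled in first
                    cur[sub] = holds(sub, state, cur, nxt)
                nxt = cur
            return nxt[formula]

        # G(request -> F grant), written with derived operators expanded.
        f = ("always", ("not", ("and", ("prop", "request"),
                                ("not", ("eventually", ("prop", "grant"))))))
        print(check(f, [{"request"}, {}, {"grant"}]))   # True: the request is eventually granted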

  6. Environment Modeling Using Runtime Values for JPF-Android

    NASA Technical Reports Server (NTRS)

    van der Merwe, Heila; Tkachuk, Oksana; Nel, Seal; van der Merwe, Brink; Visser, Willem

    2015-01-01

    Software applications are developed to be executed in a specific environment. This environment includes external native libraries to add functionality to the application and drivers to fire the application execution. For testing and verification, the environment of an application is simplified and abstracted using models or stubs. Empty stubs, returning default values, are simple to generate automatically, but they do not perform well when the application expects specific return values. Symbolic execution is used to find input parameters for drivers and return values for library stubs, but it struggles to detect the values of complex objects. In this work-in-progress paper, we explore an approach to generate drivers and stubs based on values collected during runtime instead of using default values. Entry-points and methods that need to be modeled are instrumented to log their parameters and return values. The instrumented applications are then executed using a driver and instrumented libraries. The values collected during runtime are used to generate driver and stub values on-the-fly that improve coverage during verification by enabling the execution of code that previously crashed or was missed. We are implementing this approach to improve the environment model of JPF-Android, our model checking and analysis tool for Android applications.
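
    A minimal record-and-replay sketch of the idea described above (not the JPF-Android implementation; the instrumented method and its observed value are hypothetical):

        import functools, json

        RECORDED = {}   # method name -> list of return values observed at runtime

        def record(func):
            # Instrumentation: log every return value of the wrapped method.
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                value = func(*args, **kwargs)
                RECORDED.setdefault(func.__name__, []).append(value)
                return value
            return wrapper

        @record
        def get_battery_level():        # stand-in for a native library call
            return 87                   # value the environment happens to return

        def make_stub(name):
            # Build a stub that replays the logged values instead of defaults.
            values = RECORDED.get(name, [None])
            state = {"i": 0}
            def stub(*args, **kwargs):
                v = values[state["i"] % len(values)]
                state["i"] += 1
                return v
            return stub

        get_battery_level()                           # concrete run populates the log
        battery_stub = make_stub("get_battery_level")
        print(battery_stub())                         # 87, replayed during verification
        print(json.dumps(RECORDED))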

  7. Feedback-Driven Dynamic Invariant Discovery

    NASA Technical Reports Server (NTRS)

    Zhang, Lingming; Yang, Guowei; Rungta, Neha S.; Person, Suzette; Khurshid, Sarfraz

    2014-01-01

    Program invariants can help software developers identify program properties that must be preserved as the software evolves; however, formulating correct invariants can be challenging. In this work, we introduce iDiscovery, a technique that leverages symbolic execution to improve the quality of dynamically discovered invariants computed by Daikon. Candidate invariants generated by Daikon are synthesized into assertions and instrumented onto the program. The instrumented code is executed symbolically to generate new test cases that are fed back to Daikon to help further refine the set of candidate invariants. This feedback loop is executed until a fix-point is reached. To mitigate the cost of symbolic execution, we present optimizations to prune the symbolic state space and to reduce the complexity of the generated path conditions. We also leverage recent advances in constraint solution reuse techniques to avoid computing results for the same constraints across iterations. Experimental results show that iDiscovery converges to a set of higher quality invariants compared to the initial set of candidate invariants in a small number of iterations.
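
    An outline of the feedback loop, with placeholder components (the real system uses Daikon and symbolic execution; here both are passed in as callables and the toy stand-ins are purely illustrative, so only the fix-point structure is meaningful):

        def discover_invariants(run_daikon, symbolically_execute, initial_tests):
            tests = set(initial_tests)
            invariants = set()
            while True:
                candidates = run_daikon(tests)                 # dynamic invariant detection
                new_tests = symbolically_execute(candidates)   # tests exercising the assertions
                if candidates == invariants and new_tests <= tests:
                    return invariants                          # fix-point reached
                invariants = candidates
                tests |= new_tests

        # Toy stand-ins: "Daikon" reports one invariant per five tests it has seen,
        # and each round of "symbolic execution" contributes one test per invariant.
        fake_daikon = lambda tests: {f"inv_{len(tests) // 5}"}
        fake_symexe = lambda invs: {f"test_for_{i}" for i in invs}
        print(discover_invariants(fake_daikon, fake_symexe, {"t0"}))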

  8. Interactive design and analysis of future large spacecraft concepts

    NASA Technical Reports Server (NTRS)

    Garrett, L. B.

    1981-01-01

    An interactive computer aided design program used to perform systems level design and analysis of large spacecraft concepts is presented. Emphasis is on rapid design, analysis of integrated spacecraft, and automatic spacecraft modeling for lattice structures. Capabilities and performance of multidiscipline applications modules, the executive and data management software, and graphics display features are reviewed. A single user at an interactive terminal can create, design, analyze, and conduct parametric studies of Earth-orbiting spacecraft with relative ease. Data generated in the design, analysis, and performance evaluation of an Earth-orbiting large diameter antenna satellite are used to illustrate current capabilities. Computer run time statistics for the individual modules quantify the speed at which modeling, analysis, and design evaluation of integrated spacecraft concepts are accomplished in a user interactive computing environment.

  9. Scheduling optimization of design stream line for production research and development projects

    NASA Astrophysics Data System (ADS)

    Liu, Qinming; Geng, Xiuli; Dong, Ming; Lv, Wenyuan; Ye, Chunming

    2017-05-01

    In a development project, efficient design stream line scheduling is difficult and important owing to large design imprecision and the differences in the skills and skill levels of employees. The relative skill levels of employees are denoted as fuzzy numbers. Multiple execution modes are generated by scheduling different employees for design tasks. An optimization model of a design stream line scheduling problem is proposed with the constraints of multiple executive modes, multi-skilled employees and precedence. The model considers the parallel design of multiple projects, different skills of employees, flexible multi-skilled employees and resource constraints. The objective function is to minimize the duration and tardiness of the project. Moreover, a two-dimensional particle swarm algorithm is used to find the optimal solution. To illustrate the validity of the proposed method, a case is examined in this article, and the results support the feasibility and effectiveness of the proposed model and algorithm.

  10. Planning for execution monitoring on a planetary rover

    NASA Technical Reports Server (NTRS)

    Gat, Erann; Firby, R. James; Miller, David P.

    1990-01-01

    A planetary rover will be traversing largely unknown and often unknowable terrain. In addition to geometric obstacles such as cliffs, rocks, and holes, it may also have to deal with non-geometric hazards such as soft soil and surface breakthroughs, which often cannot be detected until the rover is in imminent danger. Therefore, the rover must monitor its progress throughout a traverse, making sure to stay on course and to detect and act on any previously unseen hazards. Its onboard planning system must decide what sensors to monitor, what landmarks to take position readings from, and what actions to take if something should go wrong. The planning system being developed for the Pathfinder Planetary Rover to perform these execution monitoring tasks is discussed. This system includes a network of planners to perform path planning, expectation generation, path analysis, sensor and reaction selection, and resource allocation.

  11. DALiuGE: A graph execution framework for harnessing the astronomical data deluge

    NASA Astrophysics Data System (ADS)

    Wu, C.; Tobar, R.; Vinsen, K.; Wicenec, A.; Pallot, D.; Lao, B.; Wang, R.; An, T.; Boulton, M.; Cooper, I.; Dodson, R.; Dolensky, M.; Mei, Y.; Wang, F.

    2017-07-01

    The Data Activated Liu Graph Engine - DALiuGE - is an execution framework for processing large astronomical datasets at a scale required by the Square Kilometre Array Phase 1 (SKA1). It includes an interface for expressing complex data reduction pipelines consisting of both datasets and algorithmic components and an implementation run-time to execute such pipelines on distributed resources. By mapping the logical view of a pipeline to its physical realisation, DALiuGE separates the concerns of multiple stakeholders, allowing them to collectively optimise large-scale data processing solutions in a coherent manner. The execution in DALiuGE is data-activated, where each individual data item autonomously triggers the processing on itself. Such decentralisation also makes the execution framework very scalable and flexible, supporting pipeline sizes ranging from less than ten tasks running on a laptop to tens of millions of concurrent tasks on the second fastest supercomputer in the world. DALiuGE has been used in production for reducing interferometry datasets from the Karl G. Jansky Very Large Array and the Mingantu Ultrawide Spectral Radioheliograph, and is being developed as the execution framework prototype for the Science Data Processor (SDP) consortium of the Square Kilometre Array (SKA) telescope. This paper presents a technical overview of DALiuGE and discusses case studies from the CHILES and MUSER projects that use DALiuGE to execute production pipelines. In a companion paper, we provide in-depth analysis of DALiuGE's scalability to very large numbers of tasks on two supercomputing facilities.
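
    A minimal sketch of data-activated execution in the spirit described above (illustrative only, not DALiuGE's runtime; the task names and functions are hypothetical): writing a data item triggers every task whose inputs have all become available, so control lives in the data rather than in a central scheduler.

        class DataDrivenGraph:
            def __init__(self):
                self.data = {}
                self.tasks = []          # (name, inputs, output, fn)

            def add_task(self, name, inputs, output, fn):
                self.tasks.append((name, inputs, output, fn))

            def write(self, key, value):
                # Storing a data item activates any task that is now fully fed.
                self.data[key] = value
                for name, inputs, output, fn in self.tasks:
                    if all(i in self.data for i in inputs) and output not in self.data:
                        print(f"data item '{key}' activated task '{name}'")
                        self.write(output, fn(*[self.data[i] for i in inputs]))

        g = DataDrivenGraph()
        g.add_task("calibrate", ["visibilities"], "calibrated", lambda v: v * 2)
        g.add_task("image", ["calibrated"], "dirty_image", lambda c: c + 1)
        g.write("visibilities", 10)     # cascades through the whole pipeline
        print(g.data)                   # {'visibilities': 10, 'calibrated': 20, 'dirty_image': 21}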

  12. Scaling Optimization of the SIESTA MHD Code

    NASA Astrophysics Data System (ADS)

    Seal, Sudip; Hirshman, Steven; Perumalla, Kalyan

    2013-10-01

    SIESTA is a parallel three-dimensional plasma equilibrium code capable of resolving magnetic islands at high spatial resolutions for toroidal plasmas. Originally designed to exploit small-scale parallelism, SIESTA has now been scaled to execute efficiently over several thousands of processors P. This scaling improvement was accomplished with minimal intrusion to the execution flow of the original version. First, the efficiency of the iterative solutions was improved by integrating the parallel tridiagonal block solver code BCYCLIC. Krylov-space generation in GMRES was then accelerated using a customized parallel matrix-vector multiplication algorithm. Novel parallel Hessian generation algorithms were integrated and memory access latencies were dramatically reduced through loop nest optimizations and data layout rearrangement. These optimizations sped up equilibria calculations by factors of 30-50. It is possible to compute solutions with granularity N/P near unity on extremely fine radial meshes (N > 1024 points). Grid separation in SIESTA, which manifests itself primarily in the resonant components of the pressure far from rational surfaces, is strongly suppressed by finer meshes. Large problem sizes of up to 300 K simultaneous non-linear coupled equations have been solved on the NERSC supercomputers. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.

  13. The Contributions of Working Memory and Executive Functioning to Problem Representation and Solution Generation in Algebraic Word Problems

    ERIC Educational Resources Information Center

    Lee, Kerry; Ng, Ee Lynn; Ng, Swee Fong

    2009-01-01

    Solving algebraic word problems involves multiple cognitive phases. The authors used a multitask approach to examine the extent to which working memory and executive functioning are associated with generating problem models and producing solutions. They tested 255 11-year-olds on working memory (Counting Recall, Letter Memory, and Keep Track),…

  14. Malware detection and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Ken; Lloyd, Levi; Crussell, Jonathan

    Embodiments of the invention describe systems and methods for malicious software detection and analysis. A binary executable comprising obfuscated malware on a host device may be received, and incident data indicating a time when the binary executable was received and identifying processes operating on the host device may be recorded. The binary executable is analyzed via a scalable plurality of execution environments, including one or more non-virtual execution environments and one or more virtual execution environments, to generate runtime data and deobfuscation data attributable to the binary executable. At least some of the runtime data and deobfuscation data attributable to the binary executable is stored in a shared database, while at least some of the incident data is stored in a private, non-shared database.

  15. The Action Execution Process Implemented in Different Cognitive Architectures: A Review

    NASA Astrophysics Data System (ADS)

    Dong, Daqi; Franklin, Stan

    2014-12-01

    An agent achieves its goals by interacting with its environment, cyclically choosing and executing suitable actions. An action execution process is a reasonable and critical part of an entire cognitive architecture, because the process of generating executable motor commands is not only driven by low-level environmental information, but is also initiated and affected by the agent's high-level mental processes. This review focuses on cognitive models of action, or more specifically, of the action execution process, as implemented in a set of popular cognitive architectures. We examine the representations and procedures inside the action execution process, as well as the cooperation between action execution and other high-level cognitive modules. We finally conclude with some general observations regarding the nature of action execution.

  16. "Opening an emotional dimension in me": changes in emotional reactivity and emotion regulation in a case of executive impairment after left fronto-parietal damage.

    PubMed

    Salas, Christian E; Radovic, Darinka; Yuen, Kenneth S L; Yeates, Giles N; Castro, O; Turnbull, Oliver H

    2014-01-01

    Dysexecutive impairment is a common problem after brain injury, particularly after damage to the lateral surface of the frontal lobes. There is a large literature describing the cognitive deficits associated with executive impairment after dorsolateral damage; however, little is known about its impact on emotional functioning. This case study describes changes in a 72-year-old man (Professor F) who became markedly dysexecutive after a left fronto-parietal stroke. Professor F's case is remarkable in that, despite exhibiting typical executive impairments, abstraction and working memory capacities were spared. Such preservation of insight-related capacities allowed him to offer a detailed account of his emotional changes. Quantitative and qualitative tools were used to explore changes in several well-known emotional processes. The results suggest that Professor F's two main emotional changes were in the domain of emotional reactivity (increased experience of both positive and negative emotions) and emotion regulation (down-regulation of sadness). Professor F related both changes to difficulties in his thinking process, especially a difficulty generating and manipulating thoughts during moments of negative arousal. These results are discussed in relation to the literature on executive function and emotion regulation. The relevance of these findings for neuropsychological rehabilitation and for the debate on the neural basis of emotional processes is addressed.

  17. A very simple, re-executable neuroimaging publication

    PubMed Central

    Ghosh, Satrajit S.; Poline, Jean-Baptiste; Keator, David B.; Halchenko, Yaroslav O.; Thomas, Adam G.; Kessler, Daniel A.; Kennedy, David N.

    2017-01-01

    Reproducible research is a key element of the scientific process. Re-executability of neuroimaging workflows that lead to the conclusions arrived at in the literature has not yet been sufficiently addressed and adopted by the neuroimaging community. In this paper, we document a set of procedures, which include supplemental additions to a manuscript, that unambiguously define the data, workflow, execution environment and results of a neuroimaging analysis, in order to generate a verifiable re-executable publication. Re-executability provides a starting point for examination of the generalizability and reproducibility of a given finding. PMID:28781753

  18. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses

    PubMed Central

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-01-01

    Due to the upcoming data deluge of genome data, the need for storing and processing large-scale genome data, easy access to biomedical analyses tools, efficient data sharing and retrieval has presented significant challenges. The variability in data volume results in variable computing and storage requirements, therefore biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analyses workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analyses tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via HTCondor scheduler), and the support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as performance evaluation are presented to validate the feasibility of the proposed approach. PMID:24462600

  19. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses.

    PubMed

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-06-01

    Due to the upcoming data deluge of genome data, the need for storing and processing large-scale genome data, easy access to biomedical analyses tools, efficient data sharing and retrieval has presented significant challenges. The variability in data volume results in variable computing and storage requirements, therefore biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analyses workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analyses tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via HTCondor scheduler), and the support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as performance evaluation are presented to validate the feasibility of the proposed approach. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Enhancing Cross-functional Collaboration and Effective Problem Solving Through an Innovation Challenge for Point-of-Care Providers.

    PubMed

    Bakallbashi, Eni; Vyas, Anjali; Vaswani, Nikita; Rosales, David; Russell, David; Dowding, Dawn; Bernstein, Michael; Abdelaal, Hany; Hawkey, Regina

    2015-01-01

    An internal employee challenge competition is a way to promote staff engagement and generate innovative business solutions. This Spotlight on Leadership focuses on the approach that a large not-for-profit healthcare organization, the Visiting Nurse Service of New York, took in designing and executing an innovation challenge. The challenge leveraged internal staff expertise and promoted wide participation. This model is one that can be replicated by organizations as leaders work to engage employees at the point of service in organization-wide problem solving.

  1. High-throughput bioinformatics with the Cyrille2 pipeline system

    PubMed Central

    Fiers, Mark WEJ; van der Burgt, Ate; Datema, Erwin; de Groot, Joost CW; van Ham, Roeland CHJ

    2008-01-01

    Background Modern omics research involves the application of high-throughput technologies that generate vast volumes of data. These data need to be pre-processed, analyzed and integrated with existing knowledge through the use of diverse sets of software tools, models and databases. The analyses are often interdependent and chained together to form complex workflows or pipelines. Given the volume of the data used and the multitude of computational resources available, specialized pipeline software is required to make high-throughput analysis of large-scale omics datasets feasible. Results We have developed a generic pipeline system called Cyrille2. The system is modular in design and consists of three functionally distinct parts: 1) a web-based graphical user interface (GUI) that enables a pipeline operator to manage the system; 2) the Scheduler, which forms the functional core of the system and which tracks what data enters the system and determines what jobs must be scheduled for execution; and 3) the Executor, which searches for scheduled jobs and executes these on a compute cluster. Conclusion The Cyrille2 system is an extensible, modular system, implementing the stated requirements. Cyrille2 enables easy creation and execution of high-throughput, flexible bioinformatics pipelines. PMID:18269742
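
    A simplified sketch of the scheduler/executor split described above (illustrative only, not the Cyrille2 implementation; the pipeline mapping and tool names are hypothetical):

        from collections import deque

        pipeline = {"raw_reads": "quality_filter", "filtered": "assemble"}  # data type -> tool
        job_queue = deque()

        def scheduler(new_data):
            # Decide which jobs must run for each data item that entered the system.
            for item, data_type in new_data:
                tool = pipeline.get(data_type)
                if tool:
                    job_queue.append((tool, item))

        def executor(run_tool):
            # Pick up scheduled jobs and execute them (e.g. on a compute cluster).
            while job_queue:
                tool, item = job_queue.popleft()
                print(f"running {tool} on {item}")
                run_tool(tool, item)

        scheduler([("sample_1.fastq", "raw_reads")])
        executor(lambda tool, item: None)      # stand-in for cluster submission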

  2. A characterization of workflow management systems for extreme-scale applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia

    The automation of the execution of computational tasks is at the heart of improving scientific productivity. Over the last few years, scientific workflows have been established as an important abstraction that captures the data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today’s computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.

  3. A characterization of workflow management systems for extreme-scale applications

    DOE PAGES

    Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia; ...

    2017-02-16

    The automation of the execution of computational tasks is at the heart of improving scientific productivity. Over the last few years, scientific workflows have been established as an important abstraction that captures the data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today’s computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.

  4. Template Interfaces for Agile Parallel Data-Intensive Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramakrishnan, Lavanya; Gunter, Daniel; Pastorello, Gilerto Z.

    Tigres provides a programming library to compose and execute large-scale data-intensive scientific workflows from desktops to supercomputers. DOE User Facilities and large science collaborations are increasingly generating large enough data sets that it is no longer practical to download them to a desktop to operate on them. They are instead stored at centralized compute and storage resources such as high performance computing (HPC) centers. Analysis of this data requires an ability to run on these facilities, but with current technologies, scaling an analysis to an HPC center and to a large data set is difficult even for experts. Tigres is addressing the challenge of enabling collaborative analysis of DOE Science data through a new concept of reusable "templates" that enable scientists to easily compose, run and manage collaborative computational tasks. These templates define common computation patterns used in analyzing a data set.

  5. Altered caudate connectivity is associated with executive dysfunction after traumatic brain injury

    PubMed Central

    De Simoni, Sara; Jenkins, Peter O; Bourke, Niall J; Fleminger, Jessica J; Jolly, Amy E; Patel, Maneesh C; Leech, Robert; Sharp, David J

    2018-01-01

    Traumatic brain injury often produces executive dysfunction. This characteristic cognitive impairment often causes long-term problems with behaviour and personality. Frontal lobe injuries are associated with executive dysfunction, but it is unclear how these injuries relate to corticostriatal interactions that are known to play an important role in behavioural control. We hypothesized that executive dysfunction after traumatic brain injury would be associated with abnormal corticostriatal interactions, a question that has not previously been investigated. We used structural and functional MRI measures of connectivity to investigate this. Corticostriatal functional connectivity in healthy individuals was initially defined using a data-driven approach. A constrained independent component analysis approach was applied to a dataset of 100 healthy adults from the Human Connectome Project. Diffusion tractography was also performed to generate white matter tracts. The output of this analysis was used to compare corticostriatal functional connectivity and structural integrity between groups of 42 patients with traumatic brain injury and 21 age-matched controls. Subdivisions of the caudate and putamen had distinct patterns of functional connectivity. Traumatic brain injury patients showed disruption to functional connectivity between the caudate and a distributed set of cortical regions, including the anterior cingulate cortex. Cognitive impairments in the patients were mainly seen in processing speed and executive function, as well as increased levels of apathy and fatigue. Abnormalities of caudate functional connectivity correlated with these cognitive impairments, with reductions in right caudate connectivity associated with increased executive dysfunction, information processing speed and memory impairment. Structural connectivity, measured using diffusion tensor imaging between the caudate and anterior cingulate cortex, was impaired and this also correlated with measures of executive dysfunction. We show for the first time that altered subcortical connectivity is associated with large-scale network disruption in traumatic brain injury and that this disruption is related to the cognitive impairments seen in these patients. PMID:29186356

  6. [Working memory and executive control: inhibitory processes in updating and random generation tasks].

    PubMed

    Macizo, Pedro; Bajo, Teresa; Soriano, Maria Felipa

    2006-02-01

    Working Memory (WM) span predicts subjects' performance in control executive tasks and, in addition, it has been related to the capacity to inhibit irrelevant information. In this paper we investigate the role of WM span in two executive tasks focusing our attention on inhibitory components of both tasks. High and low span participants recalled targets words rejecting irrelevant items at the same time (Experiment 1) and they generated random numbers (Experiment 2). Results showed a clear relation between WM span and performance in both tasks. In addition, analyses of intrusion errors (Experiment 1) and stereotyped responses (Experiment 2) indicated that high span individuals were able to efficiently use the inhibitory component implied in both tasks. The pattern of data provides support to the relation between WM span and control executive tasks through an inhibitory mechanism.

  7. ForTrilinos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Katherine J; Johnson, Seth R; Prokopenko, Andrey V

    'ForTrilinos' is related to The Trilinos Project, which contains a large and growing collection of solver capabilities that can utilize next-generation platforms, in particular scalable multicore, manycore, accelerator and heterogeneous systems. Trilinos is primarily written in C++, including its user interfaces. While C++ is advantageous for gaining access to the latest programming environments, it limits Trilinos usage via Fortran. Several ad hoc translation interfaces exist to enable Fortran usage of Trilinos, but none of these interfaces is general-purpose or written for reusable and sustainable external use. 'ForTrilinos' provides a seamless pathway for large and complex Fortran-based codes to access Trilinos without C/C++ interface code. This access includes Fortran versions of Kokkos abstractions for code execution and data management.

  8. The neural component-process architecture of endogenously generated emotion

    PubMed Central

    Kanske, Philipp; Singer, Tania

    2017-01-01

    Despite the ubiquity of endogenous emotions and their role in both resilience and pathology, the processes supporting their generation are largely unknown. We propose a neural component process model of endogenous generation of emotion (EGE) and test it in two functional magnetic resonance imaging (fMRI) experiments (N = 32/293) where participants generated and regulated positive and negative emotions based on internal representations, using self-chosen generation methods. EGE activated nodes of the salience (SN), default mode (DMN) and frontoparietal control (FPCN) networks. Component processes implemented by these networks were established by investigating their functional associations, activation dynamics and integration. SN activation correlated with subjective affect, with midbrain nodes exclusively distinguishing between positive and negative affect intensity, showing dynamics consistent with the generation of core affect. Dorsomedial DMN, together with the ventral anterior insula, formed a pathway supporting multiple generation methods, with activation dynamics suggesting it is involved in the generation of elaborated experiential representations. SN and DMN both coupled to the left frontal FPCN, which in turn was associated with both subjective affect and representation formation, consistent with FPCN supporting the executive coordination of the generation process. These results provide a foundation for research into endogenous emotion in normal, pathological and optimal function. PMID:27522089

  9. Knowledge-based reasoning in the Paladin tactical decision generation system

    NASA Technical Reports Server (NTRS)

    Chappell, Alan R.

    1993-01-01

    A real-time tactical decision generation system for air combat engagements, Paladin, has been developed. A pilot's job in air combat includes tasks that are largely symbolic. These symbolic tasks are generally performed through the application of experience and training (i.e. knowledge) gathered over years of flying a fighter aircraft. Two such tasks, situation assessment and throttle control, are identified and broken out in Paladin to be handled by specialized knowledge based systems. Knowledge pertaining to these tasks is encoded into rule-bases to provide the foundation for decisions. Paladin uses a custom built inference engine and a partitioned rule-base structure to give these symbolic results in real-time. This paper provides an overview of knowledge-based reasoning systems as a subset of rule-based systems. The knowledge used by Paladin in generating results as well as the system design for real-time execution is discussed.
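
    A tiny forward-chaining sketch of partitioned, rule-based reasoning of the kind described above (illustrative only, not Paladin's inference engine; the rule contents and fact names are hypothetical):

        RULES = {
            "situation_assessment": [
                ({"bandit_closing", "range_short"}, "threat_high"),
                ({"threat_high", "energy_low"}, "defensive_posture"),
            ],
            "throttle_control": [
                ({"defensive_posture"}, "throttle_max"),
            ],
        }

        def fire(partition, facts):
            # Forward chaining within one rule-base partition: keep firing rules
            # until no rule can add a new fact.
            facts = set(facts)
            changed = True
            while changed:
                changed = False
                for conditions, conclusion in RULES[partition]:
                    if conditions <= facts and conclusion not in facts:
                        facts.add(conclusion)
                        changed = True
            return facts

        facts = {"bandit_closing", "range_short", "energy_low"}
        facts = fire("situation_assessment", facts)
        facts = fire("throttle_control", facts)
        print(facts)    # includes 'threat_high', 'defensive_posture', 'throttle_max'

    Partitioning the rule-base per task keeps each forward-chaining pass small, which is one way such systems bound inference time for real-time execution.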

  10. Binary Disassembly Block Coverage by Symbolic Execution vs. Recursive Descent

    DTIC Science & Technology

    2012-03-01

    explores the effectiveness of symbolic execution on packed or obfuscated samples of the same binaries to generate a model-based evaluation of success...inner workings of UPX (Universal Packer for eXecutables), a common packing tool, on a Windows binary.

  11. Acute hypoglycemia impairs executive cognitive function in adults with and without type 1 diabetes.

    PubMed

    Graveling, Alex J; Deary, Ian J; Frier, Brian M

    2013-10-01

    Acute hypoglycemia impairs cognitive function in several domains. Executive cognitive function governs organization of thoughts, prioritization of tasks, and time management. This study examined the effect of acute hypoglycemia on executive function in adults with and without diabetes. Thirty-two adults with and without type 1 diabetes with no vascular complications or impaired awareness of hypoglycemia were studied. Two hyperinsulinemic glucose clamps were performed at least 2 weeks apart in a single-blind, counterbalanced order, maintaining blood glucose at 4.5 mmol/L (euglycemia) or 2.5 mmol/L (hypoglycemia). Executive functions were assessed with a validated test suite (Delis-Kaplan Executive Function). A general linear model (repeated-measures ANOVA) was used. Glycemic condition (euglycemia or hypoglycemia) was the within-participant factor. Between-participant factors were order of session (euglycemia-hypoglycemia or hypoglycemia-euglycemia), test battery used, and diabetes status (with or without diabetes). Compared with euglycemia, executive functions (with one exception) were significantly impaired during hypoglycemia; lower test scores were recorded with more time required for completion. Large Cohen d values (>0.8) suggest that hypoglycemia induces decrements in aspects of executive function with large effect sizes. In some tests, the performance of participants with diabetes was more impaired than those without diabetes. Executive cognitive function, which is necessary to carry out many everyday activities, is impaired during hypoglycemia in adults with and without type 1 diabetes. This important aspect of cognition has not received previous systematic study with respect to hypoglycemia. The effect size is large in terms of both accuracy and speed.

  12. Executive roundtable on coal-fired generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2009-09-15

    Power Engineering magazine invited six industry executives from the coal-fired sector to discuss issues affecting current and future prospects of coal-fired generation. The executives are Tim Curran, head of Alstom Power for the USA and Senior Vice President and General Manager of Boilers North America; Ray Kowalik, President and General Manager of Burns and McDonnell Energy Group; Jeff Holmstead, head of Environmental Strategies for the Bracewell Giuliani law firm; Jim Mackey, Vice President, Fluor Power Group's Solid Fuel business line; Tom Shelby, President Kiewit Power Inc., and David Wilks, President of Energy Supply for Excel Energy Group. Steve Blankinship, the magazine's Associate Editor, was the moderator. 6 photos.

  13. Regression Verification Using Impact Summaries

    NASA Technical Reports Server (NTRS)

    Backes, John; Person, Suzette J.; Rungta, Neha; Thachuk, Oksana

    2013-01-01

    Regression verification techniques are used to prove equivalence of syntactically similar programs. Checking equivalence of large programs, however, can be computationally expensive. Existing regression verification techniques rely on abstraction and decomposition techniques to reduce the computational effort of checking equivalence of the entire program. These techniques are sound but not complete. In this work, we propose a novel approach to improve scalability of regression verification by classifying the program behaviors generated during symbolic execution as either impacted or unimpacted. Our technique uses a combination of static analysis and symbolic execution to generate summaries of impacted program behaviors. The impact summaries are then checked for equivalence using an off-the-shelf decision procedure. We prove that our approach is both sound and complete for sequential programs, with respect to the depth bound of symbolic execution. Our evaluation on a set of sequential C artifacts shows that reducing the size of the summaries can help reduce the cost of software equivalence checking. Various reduction, abstraction, and compositional techniques have been developed to help scale software verification techniques to industrial-sized systems. Although such techniques have greatly increased the size and complexity of systems that can be checked, analysis of large software systems remains costly. Regression analysis techniques, e.g., regression testing [16], regression model checking [22], and regression verification [19], restrict the scope of the analysis by leveraging the differences between program versions. These techniques are based on the idea that if code is checked early in development, then subsequent versions can be checked against a prior (checked) version, leveraging the results of the previous analysis to reduce analysis cost of the current version. Regression verification addresses the problem of proving equivalence of closely related program versions [19]. These techniques compare two programs with a large degree of syntactic similarity to prove that portions of one program version are equivalent to the other. Regression verification can be used for guaranteeing backward compatibility, and for showing behavioral equivalence in programs with syntactic differences, e.g., when a program is refactored to improve its performance, maintainability, or readability. Existing regression verification techniques leverage similarities between program versions by using abstraction and decomposition techniques to improve scalability of the analysis [10, 12, 19]. The abstractions and decomposition in these techniques, e.g., summaries of unchanged code [12] or semantically equivalent methods [19], compute an over-approximation of the program behaviors. The equivalence checking results of these techniques are sound but not complete: they may characterize programs as not functionally equivalent when, in fact, they are equivalent. In this work we describe a novel approach that leverages the impact of the differences between two programs for scaling regression verification. We partition program behaviors of each version into (a) behaviors impacted by the changes and (b) behaviors not impacted (unimpacted) by the changes. Only the impacted program behaviors are used during equivalence checking. We then prove that checking equivalence of the impacted program behaviors is equivalent to checking equivalence of all program behaviors for a given depth bound.
In this work we use symbolic execution to generate the program behaviors and leverage control- and data-dependence information to facilitate the partitioning of program behaviors. The impacted program behaviors are termed impact summaries. The dependence analyses that facilitate the generation of the impact summaries, we believe, could be used in conjunction with other abstraction and decomposition based approaches [10, 12] as a complementary reduction technique. An evaluation of our regression verification technique shows that our approach is capable of leveraging similarities between program versions to reduce the size of the queries and the time required to check for logical equivalence. The main contributions of this work are: - A regression verification technique to generate impact summaries that can be checked for functional equivalence using an off-the-shelf decision procedure. - A proof that our approach is sound and complete with respect to the depth bound of symbolic execution. - An implementation of our technique using the LLVM compiler infrastructure, the klee Symbolic Virtual Machine [4], and a variety of Satisfiability Modulo Theory (SMT) solvers, e.g., STP [7] and Z3 [6]. - An empirical evaluation on a set of C artifacts which shows that the use of impact summaries can reduce the cost of regression verification.
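
A conceptual sketch of impact-summary checking (not the klee/LLVM toolchain from the paper): a stand-in for symbolic execution yields (path condition, output) pairs tagged as impacted when they execute a changed statement, and only the impacted summaries of the two versions are compared. The example programs are hypothetical, and equivalence is checked here by brute force over a small input domain rather than with an SMT solver.

    def summarize(version):
        # Stand-in for symbolic execution: (path condition, output, impacted?).
        if version == "old":
            return [(lambda x: x >= 0, lambda x: x * 2, False),
                    (lambda x: x < 0,  lambda x: -x,    True)]
        return     [(lambda x: x >= 0, lambda x: x * 2, False),
                    (lambda x: x < 0,  lambda x: abs(x), True)]   # refactored branch

    def impacted_outputs(behaviors, x):
        return [out(x) for cond, out, impacted in behaviors if impacted and cond(x)]

    def equivalent(old, new, domain):
        # Compare only the impacted summaries; unimpacted behaviors are skipped.
        return all(impacted_outputs(old, x) == impacted_outputs(new, x) for x in domain)

    old, new = summarize("old"), summarize("new")
    print(equivalent(old, new, domain=range(-5, 6)))   # True: the change preserves behavior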

  14. Making sense of executive sensemaking. A phenomenological case study with methodological criticism.

    PubMed

    Parry, Jonathan

    2003-01-01

    This paper attempts to answer the research question, "how do senior executives in my organisation make sense of their professional life?" Having reviewed the sensemaking literature, in particular that of the pre-eminent author in this field, Karl E. Weick, I adopt a phenomenological, interpretivist orientation which relies on an idiographic, inductive generation of theory. I situate myself, both as researcher and chief executive of the organisation studied, in the narrative of sensemaking. Using semi-structured interviews and a combination of grounded theory and template analysis to generate categories, seven themes of sensemaking are tentatively produced which are then compared with Weick's characteristics. The methodological approach is then reflected on and criticised, and alternative methodologies are briefly considered. The conclusion reached is that the themes generated by the research may have relevance for sensemaking processes, but that the production of formal theory through social research is problematic.

  15. Creative constraints: Brain activity and network dynamics underlying semantic interference during idea production.

    PubMed

    Beaty, Roger E; Christensen, Alexander P; Benedek, Mathias; Silvia, Paul J; Schacter, Daniel L

    2017-03-01

    Functional neuroimaging research has recently revealed brain network interactions during performance on creative thinking tasks-particularly among regions of the default and executive control networks-but the cognitive mechanisms related to these interactions remain poorly understood. Here we test the hypothesis that the executive control network can interact with the default network to inhibit salient conceptual knowledge (i.e., pre-potent responses) elicited from memory during creative idea production. Participants studied common noun-verb pairs and were given a cued-recall test with corrective feedback to strengthen the paired association in memory. They then completed a verb generation task that presented either a previously studied noun (high-constraint) or an unstudied noun (low-constraint), and were asked to "think creatively" while searching for a novel verb to relate to the presented noun. Latent Semantic Analysis of verbal responses showed decreased semantic distance values in the high-constraint (i.e., interference) condition, which corresponded to increased neural activity within regions of the default (posterior cingulate cortex and bilateral angular gyri), salience (right anterior insula), and executive control (left dorsolateral prefrontal cortex) networks. Independent component analysis of intrinsic functional connectivity networks extended this finding by revealing differential interactions among these large-scale networks across the task conditions. The results suggest that interactions between the default and executive control networks underlie response inhibition during constrained idea production, providing insight into specific neurocognitive mechanisms supporting creative cognition. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. More effective wet turboexpander for the nuclotron helium refrigerators

    NASA Astrophysics Data System (ADS)

    Agapov, N. N.; Batin, V. I.; Davydov, A. B.; Khodzhibagian, H. G.; Kovalenko, A. D.; Perestoronin, G. A.; Sergeev, I. I.; Stulov, V. L.; Udut, V. N.

    2002-05-01

    In order to raise the efficiency of cryogenic refrigerators and liquefiers, it is very important to replace the JT (Joule-Thomson) process, which involves large losses of exergy, with the improved process of adiabatic expansion. This paper presents test results of the second-generation wet turboexpander for the Nuclotron helium refrigerators. A rotor is fixed vertically by a combination of gas and hydrostatic oil bearings. The turbines are capable of operating at a speed of 300,000 revolutions per minute. The power generated by the turbine goes into friction in the oil bearings. The design of the new wet turboexpander was carried out in view of the specific conditions that arise from operation at liquid helium temperature. The application of this new expansion machine increases the efficiency of the Nuclotron helium refrigerators by 25%.

  17. MoNET: media over net gateway processor for next-generation network

    NASA Astrophysics Data System (ADS)

    Elabd, Hammam; Sundar, Rangarajan; Dedes, John

    2001-12-01

    The MoNET (Media over Net) SX000 product family is designed using a scalable voice, video and packet-processing platform to address applications with channel densities from a few voice channels to four OC3 per card. This platform is developed for bridging the public circuit-switched network to the next-generation packet telephony and data network. The platform consists of a DSP farm, RISC processors and interface modules. The DSP farm is required to execute voice compression, image compression and line echo cancellation algorithms for a large number of voice, video, fax, modem or data channels. RISC CPUs are used for performing various packetizations based on RTP, UDP/IP and ATM encapsulations. In addition, the RISC CPUs also participate in DSP farm load management and communication with the host and other MoP devices. The MoNET S1000 communications device is designed for voice processing and for bridging TDM to ATM and IP packet networks. The S1000 consists of a DSP farm based on the Carmel DSP core and a 32-bit RISC CPU, along with Ethernet, Utopia, PCI, and TDM interfaces. In this paper, we will describe the VoIP infrastructure, the building blocks of the S500, S1000 and S3000 devices, the algorithms executed on these devices and the associated channel densities, the detailed DSP architecture, memory architecture, data flow and scheduling.

  18. The standard-based open workflow system in GeoBrain (Invited)

    NASA Astrophysics Data System (ADS)

    Di, L.; Yu, G.; Zhao, P.; Deng, M.

    2013-12-01

    GeoBrain is an Earth science Web-service system developed and operated by the Center for Spatial Information Science and Systems, George Mason University. In GeoBrain, a standard-based open workflow system has been implemented to accommodate the automated processing of geospatial data through a set of complex geo-processing functions for advanced product generation. GeoBrain models complex geoprocessing at two levels: the conceptual and the concrete. At the conceptual level, the workflows exist in the form of data and service types defined by ontologies. The workflows at the conceptual level are called geoprocessing models and are cataloged in GeoBrain as virtual product types. A conceptual workflow is instantiated into a concrete, executable workflow when a user requests a product that matches a virtual product type. Both conceptual and concrete workflows are encoded in the Business Process Execution Language (BPEL). A BPEL workflow engine, called BPELPower, has been implemented to execute the workflows for product generation. A provenance capturing service has been implemented to generate ISO 19115-compliant complete product provenance metadata before and after the workflow execution. The generation of provenance metadata before the workflow execution allows users to examine the usability of the final product before the lengthy and expensive execution takes place. The three modes of workflow execution defined in ISO 19119, namely transparent, translucent, and opaque, are available in GeoBrain. A geoprocessing modeling portal has been developed to allow domain experts to develop geoprocessing models at the type level with the support of both data and service/processing ontologies. The geoprocessing models capture the knowledge of the domain experts and become the operational offerings of the products after a proper peer review of the models is conducted. Automated workflow composition has been demonstrated successfully based on ontologies and artificial intelligence technology. The GeoBrain workflow system has been used in multiple Earth science applications, including the monitoring of global agricultural drought, the assessment of flood damage, the derivation of national crop condition and progress information, and the detection of nuclear proliferation facilities and events.

  19. Comparing memory-efficient genome assemblers on stand-alone and cloud infrastructures.

    PubMed

    Kleftogiannis, Dimitrios; Kalnis, Panos; Bajic, Vladimir B

    2013-01-01

    A fundamental problem in bioinformatics is genome assembly. Next-generation sequencing (NGS) technologies produce large volumes of fragmented genome reads, which require large amounts of memory to assemble the complete genome efficiently. With recent improvements in DNA sequencing technologies, it is expected that the memory footprint required for the assembly process will increase dramatically and will emerge as a limiting factor in processing widely available NGS-generated reads. In this report, we compare current memory-efficient techniques for genome assembly with respect to quality, memory consumption and execution time. Our experiments prove that it is possible to generate draft assemblies of reasonable quality on conventional multi-purpose computers with very limited available memory by choosing suitable assembly methods. Our study reveals the minimum memory requirements for different assembly programs even when data volume exceeds memory capacity by orders of magnitude. By combining existing methodologies, we propose two general assembly strategies that can improve short-read assembly approaches and result in reduction of the memory footprint. Finally, we discuss the possibility of utilizing cloud infrastructures for genome assembly and we comment on some findings regarding suitable computational resources for assembly.
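
    As a minimal illustrative sketch (not part of the cited study), the following Python fragment shows one way to record the execution time and peak child-process memory of an assembler run; the assembler command and its options are hypothetical placeholders.

      # Sketch: profile wall-clock time and peak resident memory of an assembler
      # launched as a child process (Unix-only; resource is not available on Windows).
      import resource
      import subprocess
      import time

      def run_and_profile(cmd):
          start = time.time()
          subprocess.run(cmd, check=True)
          elapsed = time.time() - start
          # ru_maxrss covers waited-for children; kilobytes on Linux, bytes on macOS.
          peak = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
          return elapsed, peak

      # Hypothetical assembler invocation; substitute the real command and reads.
      elapsed, peak = run_and_profile(["assembler", "--reads", "reads.fastq", "-k", "31"])
      print(f"time: {elapsed:.1f} s, peak RSS of children: {peak}")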

  20. Composable Framework Support for Software-FMEA Through Model Execution

    NASA Astrophysics Data System (ADS)

    Kocsis, Imre; Patricia, Andras; Brancati, Francesco; Rossi, Francesco

    2016-08-01

    Performing Failure Modes and Effects Analysis (FMEA) during software architecture design is becoming a basic requirement in an increasing number of domains; however, due to the lack of standardized early design phase model execution, classic SW-FMEA approaches carry significant risks and are human effort-intensive even in processes that use Model-Driven Engineering. Recently, modelling languages with standardized executable semantics have emerged. Building on earlier results, this paper describes framework support for generating executable error propagation models from such models during software architecture design. The approach carries the promise of increased precision, decreased risk and more automated execution for SW-FMEA during dependability-critical system development.

  1. Flexible, secure agent development framework

    DOEpatents

    Goldsmith, Steven Y. [Rochester, MN]

    2009-04-07

    While an agent generator is generating an intelligent agent, it can also evaluate the data processing platform on which it is executing, in order to assess a risk factor associated with operation of the agent generator on the data processing platform. The agent generator can retrieve from a location external to the data processing platform an open site that is configurable by the user, and load the open site into an agent substrate, thereby creating a development agent with code development capabilities. While an intelligent agent is executing a functional program on a data processing platform, it can also evaluate the data processing platform to assess a risk factor associated with performing the data processing function on the data processing platform.

  2. The Development of Executive Functions and Early Mathematics: A Dynamic Relationship

    ERIC Educational Resources Information Center

    Van der Ven, Sanne H. G.; Kroesbergen, Evelyn H.; Boom, Jan; Leseman, Paul P. M.

    2012-01-01

    Background: The relationship between executive functions and mathematical skills has been studied extensively, but results are inconclusive, and how this relationship evolves longitudinally is largely unknown. Aim: The aim was to investigate the factor structure of executive functions in inhibition, shifting, and updating; the longitudinal…

  3. Executive Function Predicts Artificial Language Learning in Children and Adults

    ERIC Educational Resources Information Center

    Kapa, Leah Lynn

    2013-01-01

    Prior research has established an executive function advantage among bilinguals as compared to monolingual peers. These non-linguistic cognitive advantages are largely assumed to result from the experience of managing two linguistic systems. However, the possibility remains that the relationship between bilingualism and executive function is…

  4. Automatic Overset Grid Generation with Heuristic Feedback Control

    NASA Technical Reports Server (NTRS)

    Robinson, Peter I.

    2001-01-01

    An advancing front grid generation system for structured Overset grids is presented which automatically modifies Overset structured surface grids and control lines until user-specified grid qualities are achieved. The system is demonstrated on two examples: the first refines a space shuttle fuselage control line until global truncation error is achieved; the second advances, from control lines, the space shuttle orbiter fuselage top and fuselage side surface grids until proper overlap is achieved. Surface grids are generated in minutes for complex geometries. The system is implemented as a heuristic feedback control (HFC) expert system which iteratively modifies the input specifications for Overset control line and surface grids. It is developed as an extension of modern control theory, production rules systems and subsumption architectures. The methodology provides benefits over the full knowledge lifecycle of an expert system for knowledge acquisition, knowledge representation, and knowledge execution. The vector/matrix framework of modern control theory systematically acquires and represents expert system knowledge. Missing matrix elements imply missing expert knowledge. The execution of the expert system knowledge is performed through symbolic execution of the matrix algebra equations of modern control theory. The dot product operation of matrix algebra is generalized for heuristic symbolic terms. Constant time execution is guaranteed.

  5. A path-level exact parallelization strategy for sequential simulation

    NASA Astrophysics Data System (ADS)

    Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.

    2018-01-01

    Sequential Simulation is a well known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelize SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation for non-conflicting nodes. A key advantage of the proposed parallelization method is to generate identical realizations as with the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedup results in the best scenarios using 16 threads of execution in a single machine.
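
    The two-stage idea can be sketched in a few lines of Python: nodes whose search neighbourhoods do not overlap are grouped together and simulated concurrently, while a per-node random seed keeps the realization identical to a sequential run. This is only an illustration of the strategy on a toy 1-D grid, not the GSLIB implementation; the neighbourhood radius and conflict rule are simplifications.

      import random
      from concurrent.futures import ThreadPoolExecutor

      NEIGHBOURHOOD = 2          # hypothetical 1-D search radius
      path = list(range(20))     # simulation path over a toy 1-D grid
      values = {}

      def conflicts(node, group):
          return any(abs(node - other) <= NEIGHBOURHOOD for other in group)

      def next_group(remaining):
          """Stage 1: greedily pick a maximal set of mutually non-conflicting nodes."""
          group = []
          for node in remaining:
              if not conflicts(node, group):
                  group.append(node)
          return group

      def simulate(node):
          # Per-node RNG seed, so the drawn value is independent of execution order.
          values[node] = random.Random(node).gauss(0.0, 1.0)

      remaining = path[:]
      with ThreadPoolExecutor(max_workers=4) as pool:
          while remaining:
              group = next_group(remaining)
              list(pool.map(simulate, group))   # Stage 2: non-conflicting nodes in parallel
              remaining = [n for n in remaining if n not in group]

      print(values)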

  6. Automated Performance Prediction of Message-Passing Parallel Programs

    NASA Technical Reports Server (NTRS)

    Block, Robert J.; Sarukkai, Sekhar; Mehra, Pankaj; Woodrow, Thomas S. (Technical Monitor)

    1995-01-01

    The increasing use of massively parallel supercomputers to solve large-scale scientific problems has generated a need for tools that can predict scalability trends of applications written for these machines. Much work has been done to create simple models that represent important characteristics of parallel programs, such as latency, network contention, and communication volume. But many of these methods still require substantial manual effort to represent an application in the model's format. The NIK toolkit described in this paper is the result of an on-going effort to automate the formation of analytic expressions of program execution time, with a minimum of programmer assistance. In this paper we demonstrate the feasibility of our approach, by extending previous work to detect and model communication patterns automatically, with and without overlapped computations. The predictions derived from these models agree, within reasonable limits, with execution times of programs measured on the Intel iPSC/860 and Paragon. Further, we demonstrate the use of MK in selecting optimal computational grain size and studying various scalability metrics.
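
    A toy version of such an analytic expression, shown below as a hedged sketch rather than the toolkit's actual model, combines per-process computation with a latency/bandwidth term for communication; all machine and problem parameters are hypothetical.

      # Sketch: predicted time = computation + messages * (latency + size/bandwidth).
      def predicted_time(flops, flop_rate, n_msgs, msg_bytes, latency_s, bandwidth_bps):
          compute = flops / flop_rate
          communicate = n_msgs * (latency_s + msg_bytes / bandwidth_bps)
          return compute + communicate

      # Hypothetical scalability study over increasing process counts.
      for procs in (4, 16, 64):
          t = predicted_time(flops=1e9 / procs, flop_rate=50e6,
                             n_msgs=2 * procs, msg_bytes=8192,
                             latency_s=80e-6, bandwidth_bps=100e6)
          print(procs, "processes ->", round(t, 3), "s")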

  7. Solar technical assistance provided to Forest City military communities in Hawaii for incorporation of 20-30 MW of solar energy generation to power family housing for US Navy personnel.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dominick, Jeff; Merrigan, Tim; Boudra, Will

    2010-06-01

    In May 2007, Forest City Military Communities won a US Department of Energy Solar America Showcase Award. As part of this award, executives and staff from Forest City Military Communities worked side-by-side with a DOE technical assistance team to overcome technical obstacles encountered by this large-scale real estate developer and manager. This paper describes the solar technical assistance that was provided and the key solar experiences acquired by Forest City Military Communities over an 18 month period.

  8. Effect of olympic weight category on performance in the roundhouse kick to the head in taekwondo.

    PubMed

    Estevan, Isaac; Falco, Coral; Alvarez, Octavio; Molina-García, Javier

    2012-03-01

    In taekwondo, kick performance is generally measured using impact force and time. This study aimed to analyse performance in the roundhouse kick to the head according to execution distance between and within Olympic weight categories. The participants were 36 male athletes divided into three categories: featherweight (n = 10), welterweight (n = 15) and heavyweight (n = 11). Our results show that taekwondo athletes in all weight categories generate a similar relative impact force. However, the results indicate that weight has a large impact on kick performance, particularly in relation to total response time.

  9. Effect of Olympic Weight Category on Performance in the Roundhouse Kick to the Head in Taekwondo

    PubMed Central

    Estevan, Isaac; Falco, Coral; Álvarez, Octavio; Molina-García, Javier

    2012-01-01

    In taekwondo, kick performance is generally measured using impact force and time. This study aimed to analyse performance in the roundhouse kick to the head according to execution distance between and within Olympic weight categories. The participants were 36 male athletes divided into three categories: featherweight (n = 10), welterweight (n = 15) and heavyweight (n = 11). Our results show that taekwondo athletes in all weight categories generate a similar relative impact force. However, the results indicate that weight has a large impact on kick performance, particularly in relation to total response time. PMID:23486074

  10. RNAi Screening in Spodoptera frugiperda.

    PubMed

    Ghosh, Subhanita; Singh, Gatikrushna; Sachdev, Bindiya; Kumar, Ajit; Malhotra, Pawan; Mukherjee, Sunil K; Bhatnagar, Raj K

    2016-01-01

    RNA interference is a potent and precise reverse genetic approach to carry out large-scale functional genomic studies in a given organism. During the past decade, RNAi has also emerged as an important investigative tool to understand the process of viral pathogenesis. Our laboratory has successfully generated a transgenic reporter and RNAi sensor line of Spodoptera frugiperda (Sf21) cells and developed a reversal-of-silencing assay via siRNA- or shRNA-guided screening to investigate RNAi factors or viral pathogenic factors with extraordinary fidelity. Here we describe empirical approaches and conceptual understanding to execute successful RNAi screening in the Spodoptera frugiperda (Sf21) cell line.

  11. CEOS Theory: A Comprehensive Approach to Understanding Hard to Maintain Behaviour Change.

    PubMed

    Borland, Ron

    2017-03-01

    This paper provides a brief introduction to CEOS theory, a comprehensive theory for understanding hard to maintain behaviour change. The name CEOS is an acronym for Context, Executive, and Operational Systems theory. Behaviour is theorised to be the result of the moment by moment interaction between internal needs (operational processes) in relation to environmental conditions, and for humans this is augmented by goal-directed, executive action which can transcend immediate contingencies. All behaviour is generated by operational processes. Goal-directed behaviours only triumph over contingency-generated competing behaviours when operational processes have been sufficiently activated to support them. Affective force can be generated around executive system (ES) goals from such things as memories of direct experience, vicarious experience, and emotionally charged communications mediated through stories the person generates. This paper makes some refinements and elaborations of the theory, particularly around the role of feelings, and of the importance of stories and scripts for facilitating executive action. It also sketches out how it reconceptualises a range of issues relevant to behaviour change. CEOS provides a framework for understanding the limitations of both informational and environmental approaches to behaviour change, the need for self-regulatory strategies and for taking into account more basic aspects of human functioning. © 2016 The Authors. Applied Psychology: Health and Well-Being published by John Wiley & Sons Ltd on behalf of International Association of Applied Psychology.

  12. OVERSMART Reporting Tool for Flow Computations Over Large Grid Systems

    NASA Technical Reports Server (NTRS)

    Kao, David L.; Chan, William M.

    2012-01-01

    Structured grid solvers such as NASA's OVERFLOW compressible Navier-Stokes flow solver can generate large data files that contain convergence histories for flow equation residuals, turbulence model equation residuals, component forces and moments, and component relative motion dynamics variables. Most of today's large-scale problems can extend to hundreds of grids, and over 100 million grid points. However, due to the lack of efficient tools, only a small fraction of information contained in these files is analyzed. OVERSMART (OVERFLOW Solution Monitoring And Reporting Tool) provides a comprehensive report of solution convergence of flow computations over large, complex grid systems. It produces a one-page executive summary of the behavior of flow equation residuals, turbulence model equation residuals, and component forces and moments. Under the automatic option, a matrix of commonly viewed plots such as residual histograms, composite residuals, sub-iteration bar graphs, and component forces and moments is automatically generated. Specific plots required by the user can also be prescribed via a command file or a graphical user interface. Output is directed to the user's computer screen and/or to an HTML file for archival purposes. The current implementation has been targeted for the OVERFLOW flow solver, which is used to obtain a flow solution on structured overset grids. The OVERSMART framework allows easy extension to other flow solvers.
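
    As a rough illustration of the kind of post-processing such a tool performs (this is not OVERSMART code), the sketch below summarizes per-grid residual convergence from a history file; the three-column file format assumed here is hypothetical.

      import math
      from collections import defaultdict

      # Assumed (hypothetical) format: "iteration grid_id residual" per line.
      history = defaultdict(list)
      with open("resid.dat") as fh:              # hypothetical file name
          for line in fh:
              iteration, grid, resid = line.split()
              history[grid].append(float(resid))

      print(f"{'grid':>6} {'first':>12} {'last':>12} {'drop (orders)':>14}")
      for grid, r in sorted(history.items()):
          drop = math.log10(r[0] / r[-1]) if r[-1] > 0 else float("inf")
          print(f"{grid:>6} {r[0]:12.3e} {r[-1]:12.3e} {drop:14.2f}")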

  13. Modeling the Hydrologic Effects of Large-Scale Green Infrastructure Projects with GIS

    NASA Astrophysics Data System (ADS)

    Bado, R. A.; Fekete, B. M.; Khanbilvardi, R.

    2015-12-01

    Impervious surfaces in urban areas generate excess runoff, which in turn causes flooding, combined sewer overflows, and degradation of adjacent surface waters. Municipal environmental protection agencies have shown a growing interest in mitigating these effects with 'green' infrastructure practices that partially restore the perviousness and water holding capacity of urban centers. Assessment of the performance of current and future green infrastructure projects is hindered by the lack of adequate hydrological modeling tools; conventional techniques fail to account for the complex flow pathways of urban environments, and detailed analyses are difficult to prepare for the very large domains in which green infrastructure projects are implemented. Currently, no standard toolset exists that can rapidly and conveniently predict runoff, consequent inundations, and sewer overflows at a city-wide scale. We demonstrate how streamlined modeling techniques can be used with open-source GIS software to efficiently model runoff in large urban catchments. Hydraulic parameters and flow paths through city blocks, roadways, and sewer drains are automatically generated from GIS layers, and ultimately urban flow simulations can be executed for a variety of rainfall conditions. With this methodology, users can understand the implications of large-scale land use changes and green/gray storm water retention systems on hydraulic loading, peak flow rates, and runoff volumes.
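
    A deliberately simplified stand-in for this kind of GIS-driven calculation (not the authors' model) is shown below: a land-cover raster is converted to runoff volume by applying per-class runoff coefficients to a uniform rainfall depth. Cell size, class codes, and coefficients are hypothetical.

      import numpy as np

      CELL_AREA_M2 = 100.0                     # hypothetical 10 m x 10 m cells
      RUNOFF_COEFF = {0: 0.95,                 # impervious (roads, roofs)
                      1: 0.35,                 # pervious urban / lawn
                      2: 0.10}                 # green-infrastructure cell

      landcover = np.random.randint(0, 3, size=(50, 50))   # stand-in for a GIS layer
      rainfall_m = 0.025                                   # 25 mm storm

      coeff = np.vectorize(RUNOFF_COEFF.get)(landcover)
      runoff_m3 = float((coeff * rainfall_m * CELL_AREA_M2).sum())
      print(f"estimated runoff for the storm: {runoff_m3:.0f} m^3")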

  14. A Generalized-Compliant-Motion Primitive

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.

    1993-01-01

    A computer program that bridges the gap between planning and execution of compliant robotic motions has been developed and installed in the control system of a telerobot. Called the "generalized-compliant-motion primitive," it is one of several task-execution-primitive computer programs; it receives commands from higher-level task-planning programs and executes them by generating the required trajectories and applying appropriate control laws. The program comprises four parts corresponding to nominal motion, compliant motion, ending motion, and monitoring, and is written in the C language.

  15. Identifying Executable Plans

    NASA Technical Reports Server (NTRS)

    Bedrax-Weiss, Tania; Jonsson, Ari K.; Frank, Jeremy D.; McGann, Conor

    2003-01-01

    Generating plans for execution imposes a different set of requirements on the planning process than those imposed by planning alone. In highly unpredictable execution environments, a fully-grounded plan may become inconsistent frequently when the world fails to behave as expected. Intelligent execution permits making decisions when the most up-to-date information is available, ensuring fewer failures. Planning should acknowledge the capabilities of the execution system, both to ensure robust execution in the face of uncertainty and to relieve the planner of the burden of making premature commitments. We present Plan Identification Functions (PIFs), which formalize what it means for a plan to be executable, and are used in conjunction with a complete model of system behavior to halt the planning process when an executable plan is found. We describe the implementation of plan identification functions for a temporal, constraint-based planner. This particular implementation allows the description of many different plan identification functions. Depending on the characteristics of the execution environment, the best plan to hand to the execution system will contain more or less commitment and information.

  16. Coding for parallel execution of hardware-in-the-loop millimeter-wave scene generation models on multicore SIMD processor architectures

    NASA Astrophysics Data System (ADS)

    Olson, Richard F.

    2013-05-01

    Rendering of point scatterer based radar scenes for millimeter wave (mmW) seeker tests in real-time hardware-in-the-loop (HWIL) scene generation requires efficient algorithms and vector-friendly computer architectures for complex signal synthesis. New processor technology from Intel implements an extended 256-bit vector SIMD instruction set (AVX, AVX2) in a multi-core CPU design providing peak execution rates of hundreds of GigaFLOPS (GFLOPS) on one chip. Real-world mmW scene generation code can approach peak SIMD execution rates only after careful algorithm and source code design. An effective software design will maintain high computing intensity emphasizing register-to-register SIMD arithmetic operations over data movement between CPU caches or off-chip memories. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) applied two basic parallel coding methods to assess new 256-bit SIMD multi-core architectures for mmW scene generation in HWIL. These include use of POSIX threads built on vector library functions and more portable, high-level parallel code based on compiler technology (e.g. OpenMP pragmas and SIMD autovectorization). Since CPU technology is rapidly advancing toward high processor core counts and TeraFLOPS peak SIMD execution rates, it is imperative that coding methods be identified which produce efficient and maintainable parallel code. This paper describes the algorithms used in point scatterer target model rendering, the parallelization of those algorithms, and the execution performance achieved on an AVX multi-core machine using the two basic parallel coding methods. The paper concludes with estimates for scale-up performance on upcoming multi-core technology.

  17. 3 CFR 13617 - Executive Order 13617 of June 25, 2012. Blocking Property of the Government of the Russian...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Extracted From Nuclear Weapons 13617 Order 13617 Presidential Documents Executive Orders Executive Order... to the Disposition of Highly Enriched Uranium Extracted From Nuclear Weapons By the authority vested... accumulation of a large volume of weapons-usable fissile material in the territory of the Russian Federation...

  18. Context-sensitive trace inlining for Java.

    PubMed

    Häubl, Christian; Wimmer, Christian; Mössenböck, Hanspeter

    2013-12-01

    Method inlining is one of the most important optimizations in method-based just-in-time (JIT) compilers. It widens the compilation scope and therefore allows optimizing multiple methods as a whole, which increases the performance. However, if method inlining is used too frequently, the compilation time increases and too much machine code is generated. This has negative effects on the performance. Trace-based JIT compilers only compile frequently executed paths, so-called traces, instead of whole methods. This may result in faster compilation, less generated machine code, and better optimized machine code. In the previous work, we implemented a trace recording infrastructure and a trace-based compiler for [Formula: see text], by modifying the Java HotSpot VM. Based on this work, we evaluate the effect of trace inlining on the performance and the amount of generated machine code. Trace inlining has several major advantages when compared to method inlining. First, trace inlining is more selective than method inlining, because only frequently executed paths are inlined. Second, the recorded traces may capture information about virtual calls, which simplify inlining. A third advantage is that trace information is context sensitive so that different method parts can be inlined depending on the specific call site. These advantages allow more aggressive inlining while the amount of generated machine code is still reasonable. We evaluate several inlining heuristics on the benchmark suites DaCapo 9.12 Bach, SPECjbb2005, and SPECjvm2008 and show that our trace-based compiler achieves an up to 51% higher peak performance than the method-based Java HotSpot client compiler. Furthermore, we show that the large compilation scope of our trace-based compiler has a positive effect on other compiler optimizations such as constant folding or null check elimination.

  19. A large-grain mapping approach for multiprocessor systems through data flow model. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Kim, Hwa-Soo

    1991-01-01

    A large-grain-level method is presented for mapping numerically oriented applications onto multiprocessor systems. The method is based on the large-grain data flow representation of the input application and it assumes a general interconnection topology of the multiprocessor system. The large-grain data flow model was used because such a representation best exhibits the inherent parallelism in many important applications; e.g., CFD models based on partial differential equations can be represented in large-grain data flow format very effectively. A generalized interconnection topology of the multiprocessor architecture is considered, including such architectural issues as interprocessor communication cost, with the aim of identifying the 'best match' between the application and the multiprocessor structure. The objective is to minimize the total execution time of the input algorithm running on the target system. The mapping strategy consists of the following: (1) large-grain data flow graph generation from the input application using compilation techniques; (2) data flow graph partitioning into basic computation blocks; and (3) physical mapping onto the target multiprocessor using a priority allocation scheme for the computation blocks.
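
    Stage (3), the priority allocation of computation blocks, can be illustrated with a short greedy sketch (not the thesis implementation): blocks are taken in priority order and placed on the processor that can finish them earliest, charging a fixed interprocessor communication cost whenever a predecessor block was mapped elsewhere. The graph, costs, and priority order below are hypothetical.

      COMM_COST = 2.0                               # hypothetical transfer cost

      # block -> (computation cost, list of predecessor blocks)
      graph = {"A": (4, []), "B": (3, ["A"]), "C": (2, ["A"]), "D": (5, ["B", "C"])}
      priority = ["A", "B", "C", "D"]               # e.g. a critical-path order

      proc_ready = [0.0, 0.0]                       # two processors
      placement, finish = {}, {}

      for block in priority:
          cost, preds = graph[block]
          best = None
          for p, ready in enumerate(proc_ready):
              data_ready = max([finish[q] + (0 if placement[q] == p else COMM_COST)
                                for q in preds], default=0.0)
              end = max(ready, data_ready) + cost
              if best is None or end < best[1]:
                  best = (p, end)
          p, end = best
          placement[block], finish[block] = p, end
          proc_ready[p] = end

      print(placement, finish)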

  20. Dissociation in undergraduate students: disruptions in executive functioning.

    PubMed

    Giesbrecht, Timo; Merckelbach, Harald; Geraerts, Elke; Smeets, Ellen

    2004-08-01

    The concept of dissociation refers to disruptions in attentional control. Attentional control is an executive function. Few studies have addressed the link between dissociation and executive functioning. Our study investigated this relationship in a sample of undergraduate students (N = 185) who completed the Dissociative Experiences Scale and the Random Number Generation Task. We found that minor disruptions in executive functioning were related to a subclass of dissociative experiences, notably dissociative amnesia and the Dissociative Experiences Scale Taxon. However, the two other subscales of the Dissociative Experiences Scale, measuring depersonalization and absorption, were unrelated to executive functioning. Our findings suggest that a failure to inhibit previous responses might contribute to the pathological memory manifestations of dissociation.

  1. Designing an Easy-to-use Executive Conference Room Control System

    NASA Astrophysics Data System (ADS)

    Back, Maribeth; Golovchinsky, Gene; Qvarfordt, Pernilla; van Melle, William; Boreczky, John; Dunnigan, Tony; Carter, Scott

    The Usable Smart Environment project (USE) aims at designing easy-to-use, highly functional, next-generation conference rooms. Our first design prototype focuses on creating a “no wizards” room for an American executive; that is, a room the executive could walk into and use by himself, without help from a technologist. A key idea in the USE framework is that customization is one of the best ways to create a smooth user experience. As the system needs to fit both with the personal leadership style of the executive and the corporation’s meeting culture, we began the design process by exploring the work flow in and around meetings attended by the executive.

  2. Enabling large-scale next-generation sequence assembly with Blacklight

    PubMed Central

    Couger, M. Brian; Pipes, Lenore; Squina, Fabio; Prade, Rolf; Siepel, Adam; Palermo, Robert; Katze, Michael G.; Mason, Christopher E.; Blood, Philip D.

    2014-01-01

    A variety of extremely challenging biological sequence analyses were conducted on the XSEDE large shared memory resource Blacklight, using current bioinformatics tools and encompassing a wide range of scientific applications. These include genomic sequence assembly, very large metagenomic sequence assembly, transcriptome assembly, and sequencing error correction. The data sets used in these analyses included uncategorized fungal species, reference microbial data, very large soil and human gut microbiome sequence data, and primate transcriptomes, composed of both short-read and long-read sequence data. A new parallel command execution program was developed on the Blacklight resource to handle some of these analyses. These results, initially reported at XSEDE13 and expanded here, represent significant advances for their respective scientific communities. The breadth and depth of the results achieved demonstrate the ease of use, versatility, and unique capabilities of the Blacklight XSEDE resource for scientific analysis of genomic and transcriptomic sequence data, and the power of these resources, together with XSEDE support, in meeting the most challenging scientific problems. PMID:25294974

  3. Plan Execution Interchange Language (PLEXIL)

    NASA Technical Reports Server (NTRS)

    Estlin, Tara; Jonsson, Ari; Pasareanu, Corina; Simmons, Reid; Tso, Kam; Verma, Vandi

    2006-01-01

    Plan execution is a cornerstone of spacecraft operations, irrespective of whether the plans to be executed are generated on board the spacecraft or on the ground. Plan execution frameworks vary greatly, due to both different capabilities of the execution systems, and relations to associated decision-making frameworks. The latter dependency has made the reuse of execution and planning frameworks more difficult, and has all but precluded information sharing between different execution and decision-making systems. As a step in the direction of addressing some of these issues, a general plan execution language, called the Plan Execution Interchange Language (PLEXIL), is being developed. PLEXIL is capable of expressing concepts used by many high-level automated planners and hence provides an interface to multiple planners. PLEXIL includes a domain description that specifies command types, expansions, constraints, etc., as well as feedback to the higher-level decision-making capabilities. This document describes the grammar and semantics of PLEXIL. It includes a graphical depiction of this grammar and illustrative rover scenarios. It also outlines ongoing work on implementing a universal execution system, based on PLEXIL, using state-of-the-art rover functional interfaces and planners as test cases.

  4. Helicopter In-Flight Monitoring System Second Generation (HIMS II).

    DTIC Science & Technology

    1983-08-01

    acquisition cycle. B. Computer Chassis CPU (DEC LSI-II/2) -- Executes instructions contained in the memory. 32K memory (DEC MSVII-DD) --Contains program...when the operator executes command #2, 3, or 5 (display data). New cartridges can be inserted as required for truly unlimited, continuous data...is called bootstrapping. The software, which is stored on a tape cartridge, is loaded into memory by execution of a small program stored in read-only

  5. Robust prediction of individual creative ability from brain functional connectivity.

    PubMed

    Beaty, Roger E; Kenett, Yoed N; Christensen, Alexander P; Rosenberg, Monica D; Benedek, Mathias; Chen, Qunlin; Fink, Andreas; Qiu, Jiang; Kwapil, Thomas R; Kane, Michael J; Silvia, Paul J

    2018-01-30

    People's ability to think creatively is a primary means of technological and cultural progress, yet the neural architecture of the highly creative brain remains largely undefined. Here, we employed a recently developed method in functional brain imaging analysis-connectome-based predictive modeling-to identify a brain network associated with high-creative ability, using functional magnetic resonance imaging (fMRI) data acquired from 163 participants engaged in a classic divergent thinking task. At the behavioral level, we found a strong correlation between creative thinking ability and self-reported creative behavior and accomplishment in the arts and sciences ( r = 0.54). At the neural level, we found a pattern of functional brain connectivity related to high-creative thinking ability consisting of frontal and parietal regions within default, salience, and executive brain systems. In a leave-one-out cross-validation analysis, we show that this neural model can reliably predict the creative quality of ideas generated by novel participants within the sample. Furthermore, in a series of external validation analyses using data from two independent task fMRI samples and a large task-free resting-state fMRI sample, we demonstrate robust prediction of individual creative thinking ability from the same pattern of brain connectivity. The findings thus reveal a whole-brain network associated with high-creative ability comprised of cortical hubs within default, salience, and executive systems-intrinsic functional networks that tend to work in opposition-suggesting that highly creative people are characterized by the ability to simultaneously engage these large-scale brain networks.
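
    A bare-bones version of the leave-one-out prediction step (not the authors' published pipeline) can be sketched with scikit-learn; the feature matrix and behavioural scores below are synthetic stand-ins for connectivity edges and creativity ratings.

      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import LeaveOneOut

      rng = np.random.default_rng(0)
      X = rng.normal(size=(40, 200))      # stand-in connectivity features (edges)
      y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=40)   # synthetic score

      pred = np.empty_like(y)
      for train, test in LeaveOneOut().split(X):
          model = Ridge(alpha=1.0).fit(X[train], y[train])
          pred[test] = model.predict(X[test])

      print("leave-one-out prediction r =", round(float(np.corrcoef(pred, y)[0, 1]), 2))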

  6. Integration and Exposure of Large Scale Computational Resources Across the Earth System Grid Federation (ESGF)

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Maxwell, T. P.; Doutriaux, C.; Williams, D. N.; Chaudhary, A.; Ames, S.

    2015-12-01

    As the size of remote sensing observations and model output data grows, the volume of the data has become overwhelming, even to many scientific experts. As societies are forced to better understand, mitigate, and adapt to climate changes, the combination of Earth observation data and global climate model projects is crucial to not only scientists but to policy makers, downstream applications, and even the public. Scientific progress on understanding climate is critically dependent on the availability of a reliable infrastructure that promotes data access, management, and provenance. The Earth System Grid Federation (ESGF) has created such an environment for the Intergovernmental Panel on Climate Change (IPCC). ESGF provides a federated global cyber infrastructure for data access and management of model outputs generated for the IPCC Assessment Reports (AR). The current generation of the ESGF federated grid allows consumers of the data to find and download data with limited capabilities for server-side processing. Since the amount of data for future AR is expected to grow dramatically, ESGF is working on integrating server-side analytics throughout the federation. The ESGF Compute Working Team (CWT) has created a Web Processing Service (WPS) Application Programming Interface (API) to enable access to scalable computational resources. The API is the exposure point to high performance computing resources across the federation. Specifically, the API allows users to execute simple operations, such as maximum, minimum, average, and anomalies, on ESGF data without having to download the data. These operations are executed at the ESGF data node site with access to large amounts of parallel computing capabilities. This presentation will highlight the WPS API and its capabilities, provide implementation details, and discuss future developments.
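
    The flavour of such a server-side request is sketched below with the requests library; the endpoint URL, operation identifier, and parameter names are hypothetical illustrations, not the actual ESGF CWT API.

      import requests

      endpoint = "https://esgf-node.example.org/wps"    # hypothetical endpoint
      params = {
          "service": "WPS",
          "request": "Execute",
          "identifier": "average",                      # hypothetical operation name
          "datainputs": "variable=tas;domain=global;dataset=esgf://example/dataset",
      }
      response = requests.get(endpoint, params=params, timeout=60)
      print(response.status_code)
      print(response.text[:500])       # the WPS response document (XML)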

  7. The Relationship of Self-reported Executive Functioning to Suicide Ideation and Attempts: Findings from a Large U.S.-based Online Sample.

    PubMed

    Saffer, Boaz Y; Klonsky, E David

    2017-01-01

    An increasing number of studies demonstrate that individuals with a history of suicidality exhibit impaired executive functioning abilities. The current study examines whether these differences are linked to suicidal thoughts or suicidal acts-a crucial distinction given that most people who think about suicide will not act on their thoughts. A large online sample of U.S. participants with a history of suicide ideation (n = 197), suicide attempts (n = 166), and no suicidality (n = 180) completed self-report measures assessing executive functioning, suicide ideation and attempts; in addition, depression, self-efficacy, and history of drug abuse and brain injury were assessed as potential covariates. Individuals with recent suicide attempts reported significantly worse executive functioning than ideators. This difference was not accounted for by depression, self-efficacy, history of drug abuse or brain injury. Self-reported executive functioning may represent an important short-term risk factor for suicide attempts.

  8. The Very Large Array Data Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.; Masters, Joseph S.; Chandler, Claire J.; Davis, Lindsey E.; Kern, Jeffrey S.; Ott, Juergen; Schinzel, Frank K.; Medlin, Drew; Muders, Dirk; Williams, Stewart; Geers, Vincent C.; Momjian, Emmanuel; Butler, Bryan J.; Nakazato, Takeshi; Sugimoto, Kanako

    2018-01-01

    We present the VLA Pipeline, software that is part of the larger pipeline processing framework used for the Karl G. Jansky Very Large Array (VLA) and the Atacama Large Millimeter/sub-millimeter Array (ALMA) for both interferometric and single dish observations. Through a collection of base code jointly used by the VLA and ALMA, the pipeline builds a hierarchy of classes to execute individual atomic pipeline tasks within the Common Astronomy Software Applications (CASA) package. Each pipeline task contains heuristics designed by the team to actively decide the best processing path and execution parameters for calibration and imaging. The pipeline code is developed and written in Python and uses a "context" structure for tracking the heuristic decisions and processing results. The pipeline "weblog" acts as the user interface in verifying the quality assurance of each calibration and imaging stage. The majority of VLA scheduling blocks above 1 GHz are now processed with the standard continuum recipe of the pipeline and offer a calibrated measurement set as a basic data product to observatory users. In addition, the pipeline is used for processing data from the VLA Sky Survey (VLASS), a seven year community-driven endeavor started in September 2017 to survey the entire sky down to a declination of -40 degrees at S-band (2-4 GHz). This 5500 hour next-generation large radio survey will explore the time and spectral domains, relying on pipeline processing to generate calibrated measurement sets, polarimetry, and imaging data products that are available to the astronomical community with no proprietary period. Here we present an overview of the pipeline design philosophy, heuristics, and calibration and imaging results produced by the pipeline. Future development will include the testing of spectral line recipes, low signal-to-noise heuristics, and serving as a testing platform for science-ready data products. The pipeline is developed as part of the CASA software package by an international consortium of scientists and software developers based at the National Radio Astronomy Observatory (NRAO), the European Southern Observatory (ESO), and the National Astronomical Observatory of Japan (NAOJ).

  9. An SQL query generator for CLIPS

    NASA Technical Reports Server (NTRS)

    Snyder, James; Chirica, Laurian

    1990-01-01

    As expert systems become more widely used, their access to large amounts of external information becomes increasingly important. This information exists in several forms such as statistical, tabular data, knowledge gained by experts, and large databases of information maintained by companies. Because many expert systems, including CLIPS, do not provide access to this external information, much of the usefulness of expert systems is left untapped. The scope of this paper is to describe a database extension for the CLIPS expert system shell. The current industry standard database language is SQL. Due to SQL standardization, large amounts of information stored on various computers, potentially at different locations, will be more easily accessible. Expert systems should be able to directly access these existing databases rather than requiring information to be re-entered into the expert system environment. The ORACLE relational database management system (RDBMS) was used to provide a database connection within the CLIPS environment. To facilitate relational database access, a query generation system was developed as a CLIPS user function. The queries are entered in a CLIPS-like syntax and are passed to the query generator, which constructs an SQL query and submits it to the ORACLE RDBMS for execution. The query results are asserted as CLIPS facts. The query generator was developed primarily for use within the ICADS project (Intelligent Computer Aided Design System) currently being developed by the CAD Research Unit at the California Polytechnic State University (Cal Poly). In ICADS, there are several parallel or distributed expert systems accessing a common knowledge base of facts. Each expert system has a narrow domain of interest and therefore needs only certain portions of the information. The query generator provides a common method of accessing this information and allows the expert system to specify what data is needed without specifying how to retrieve it.
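
    The translation step can be illustrated with a small Python sketch (not the ICADS query generator itself); the CLIPS-like query form, table, and field names below are invented for illustration, and each result row would then be asserted back into CLIPS as a fact.

      # Sketch: translate a CLIPS-like query, given as a nested tuple, into SQL.
      def to_sql(query):
          # query = ("select", [columns], "from", table, [(field, op, value), ...])
          _, columns, _, table, conditions = query
          sql = f"SELECT {', '.join(columns)} FROM {table}"
          if conditions:
              preds = [f"{f} {op} {v!r}" if isinstance(v, str) else f"{f} {op} {v}"
                       for f, op, v in conditions]
              sql += " WHERE " + " AND ".join(preds)
          return sql

      q = ("select", ["room_id", "area"], "from", "rooms",
           [("floor", "=", 2), ("use", "=", "office")])
      print(to_sql(q))
      # -> SELECT room_id, area FROM rooms WHERE floor = 2 AND use = 'office'
      # Each row of the result set could then be asserted as a CLIPS fact, e.g.
      # (room (id 101) (area 23.5))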

  10. Identification of a Competency Model for the Recruitment, Retention & Development of Intermodal Transportation

    DOT National Transportation Integrated Search

    2009-09-30

    The objective of this research project will be to develop a detailed and valid model of the managerial and executive competencies that are needed to recruit, assess, train, and develop the next generation of executives and managers that will guide the int...

  11. Defense Horizons. Privatizing While Transforming. July 2007, Number 57

    DTIC Science & Technology

    2007-07-01

    accountability and separation of powers . Regarding the first, the more privatization is used, the greater the distance between both executive and...far end of that issue is a separation of powers question related to executive accountability to Congress. Just as the war on terror is generating

  12. Intraindividual Differences in Executive Functions during Childhood: The Role of Emotions

    ERIC Educational Resources Information Center

    Pnevmatikos, Dimitris; Trikkaliotis, Ioannis

    2013-01-01

    Intraindividual differences in executive functions (EFs) have been rarely investigated. In this study, we addressed the question of whether the emotional fluctuations that schoolchildren experience in their classroom settings could generate substantial intraindividual differences in their EFs and, more specifically, in the fundamental unifying…

  13. Failure detection in high-performance clusters and computers using chaotic map computations

    DOEpatents

    Rao, Nageswara S.

    2015-09-01

    A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10.sup.18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
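
    The underlying idea can be illustrated with a short sketch (this is not the patented implementation): every node iterates the same chaotic logistic map from a shared seed, and because chaotic maps amplify tiny perturbations, any corrupted computation, memory word, or transmission quickly produces a trajectory that no longer matches the reference.

      # Sketch: detect a faulty node by comparing its chaotic-map trajectory with a
      # reference trajectory computed from the same seed (assumes identical
      # floating-point behaviour on healthy nodes).
      def logistic_trajectory(x0, steps, r=3.99):
          traj, x = [], x0
          for _ in range(steps):
              x = r * x * (1.0 - x)
              traj.append(x)
          return traj

      reference = logistic_trajectory(0.123456789, 1000)

      def check_node(node_trajectory):
          return "ok" if node_trajectory == reference else "FAULT"

      healthy = logistic_trajectory(0.123456789, 1000)
      faulty = logistic_trajectory(0.123456789 + 1e-15, 1000)  # tiny corruption diverges
      print(check_node(healthy), check_node(faulty))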

  14. Portable Just-in-Time Specialization of Dynamically Typed Scripting Languages

    NASA Astrophysics Data System (ADS)

    Williams, Kevin; McCandless, Jason; Gregg, David

    In this paper, we present a portable approach to JIT compilation for dynamically typed scripting languages. At runtime we generate ANSI C code and use the system's native C compiler to compile this code. The C compiler runs on a separate thread to the interpreter allowing program execution to continue during JIT compilation. Dynamic languages have variables which may change type at any point in execution. Our interpreter profiles variable types at both whole method and partial method granularity. When a frequently executed region of code is discovered, the compilation thread generates a specialized version of the region based on the profiled types. In this paper, we evaluate the level of instruction specialization achieved by our profiling scheme as well as the overall performance of our JIT.
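
    The general recipe, generate C for a hot region, hand it to the system C compiler on a background thread, and keep interpreting until the native code is ready, can be sketched in Python as follows; this is an illustration of the approach, not the system described in the paper, and the function and file names are hypothetical.

      import ctypes
      import os
      import subprocess
      import tempfile
      import threading

      C_SOURCE = """
      double hot_loop(double *a, int n) {   /* specialized for double inputs */
          double s = 0.0;
          for (int i = 0; i < n; i++) s += a[i] * a[i];
          return s;
      }
      """

      compiled = {}

      def compile_async():
          workdir = tempfile.mkdtemp()
          src = os.path.join(workdir, "hot.c")
          lib = os.path.join(workdir, "hot.so")
          with open(src, "w") as fh:
              fh.write(C_SOURCE)
          subprocess.run(["cc", "-O2", "-shared", "-fPIC", "-o", lib, src], check=True)
          compiled["hot"] = ctypes.CDLL(lib)   # becomes visible to the interpreter loop

      threading.Thread(target=compile_async).start()   # interpretation continues meanwhile
      # ... the interpreter keeps executing the generic version until "hot" appears in
      # `compiled`, then switches to the native entry point compiled["hot"].hot_loop.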

  15. GeNN: a code generation framework for accelerated brain simulations

    NASA Astrophysics Data System (ADS)

    Yavuz, Esin; Turner, James; Nowotny, Thomas

    2016-01-01

    Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance based Hodgkin-Huxley neurons but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/.

  16. GeNN: a code generation framework for accelerated brain simulations.

    PubMed

    Yavuz, Esin; Turner, James; Nowotny, Thomas

    2016-01-07

    Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance based Hodgkin-Huxley neurons but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/.

  17. GeNN: a code generation framework for accelerated brain simulations

    PubMed Central

    Yavuz, Esin; Turner, James; Nowotny, Thomas

    2016-01-01

    Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance based Hodgkin-Huxley neurons but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/. PMID:26740369

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, M.T.

    This article is a compilation of views on the changing power generation equipment market from executives of ASEA-Brown Boveri, General Electric Power Generation, the Siemens Power Generation Group, and the Westinghouse Electric Corporation Power Generation unit. The topics of the article include the changing market, the home market, the turnkey supplier, and back to baseload.

  19. 76 FR 54143 - Airworthiness Directives; Turbomeca Arriel 1B Turboshaft Engines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-31

    ... & Propeller Directorate, 12 New England Executive Park, Burlington, MA 01803; phone: 781-238-7772; fax: 781... rotor; and (2) The free rotation of the gas generator rotor; and (3) No grinding noise during the... Engineer, Engine Certification Office, FAA, Engine & Propeller Directorate, 12 New England Executive Park...

  20. The Political Communication of Strategic Nuclear Policy.

    ERIC Educational Resources Information Center

    Camden, Carl; Martin, Janet

    A study of the different perceptual frameworks of the major parties involved in strategic nuclear policy was conducted by examining the interaction between the Executive Branch, Congress, and the informed public. Public political communication data were gathered from public documents generated by Congress and the Executive branch, and by examining…

  1. INITIATE: An Intelligent Adaptive Alert Environment.

    PubMed

    Jafarpour, Borna; Abidi, Samina Raza; Ahmad, Ahmad Marwan; Abidi, Syed Sibte Raza

    2015-01-01

    Exposure to a large volume of alerts generated by medical Alert Generating Systems (AGS), such as drug-drug interaction software or clinical decision support systems, overwhelms users and causes alert fatigue. Effects of alert fatigue include ignoring crucial alerts and longer response times. A common approach to avoiding alert fatigue is to devise mechanisms in AGS to stop them from generating alerts that are deemed irrelevant. In this paper, we present a novel framework called INITIATE: an INtellIgent adapTIve AlerT Environment, which avoids alert fatigue by managing alerts generated by one or more AGS. We have identified and categorized the lifecycles of different alerts and have developed alert management logic according to these lifecycles. Our framework incorporates an ontology that represents the alert management strategy and an alert management engine that executes this strategy. Our alert management framework offers the following features: (1) adaptability based on users' feedback; (2) personalization and aggregation of messages; and (3) connection to Electronic Medical Records by implementing an HL7 Clinical Document Architecture parser.

  2. Automated Test Case Generation for an Autopilot Requirement Prototype

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Rungta, Neha; Feary, Michael

    2011-01-01

    Designing safety-critical automation with robust human interaction is a difficult task that is susceptible to a number of known Human-Automation Interaction (HAI) vulnerabilities. It is therefore essential to develop automated tools that provide support in both the design and the rapid evaluation of such automation. The Automation Design and Evaluation Prototyping Toolset (ADEPT) enables the rapid development of an executable specification for automation behavior and user interaction. ADEPT supports a number of analysis capabilities, thus enabling the detection of HAI vulnerabilities early in the design process, when modifications are less costly. In this paper, we advocate the introduction of a new capability to model-based prototyping tools such as ADEPT. The new capability is based on symbolic execution that allows us to automatically generate quality test suites based on the system design. Symbolic execution is used to generate both user input and test oracles: user input drives the testing of the system implementation, and test oracles ensure that the system behaves as designed. We present early results in the context of a component in the Autopilot system modeled in ADEPT, and discuss the challenges of test case generation in the HAI domain.
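
    The input-generation step can be illustrated with a small constraint-solving sketch (not ADEPT itself) using the Z3 solver, which is the kind of engine symbolic execution relies on; the variables, path condition, and oracle text below are hypothetical.

      from z3 import And, Int, Solver, sat    # pip install z3-solver

      altitude = Int("altitude")              # symbolic user inputs
      target = Int("target")

      # Path condition for a hypothetical "capture the target altitude" branch.
      path_condition = And(altitude > 0, target > 0,
                           altitude - target < 100, altitude - target > -100)

      s = Solver()
      s.add(path_condition)
      if s.check() == sat:
          m = s.model()
          test_input = {"altitude": m[altitude].as_long(), "target": m[target].as_long()}
          oracle = "autopilot enters altitude-capture mode"   # expected behaviour
          print(test_input, "->", oracle)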

  3. Racial and Socioeconomic Gaps in Executive Function Skills in Early Elementary School: Nationally Representative Evidence From the ECLS-K:2011

    ERIC Educational Resources Information Center

    Little, Michael

    2017-01-01

    This brief leverages the first ever nationally representative data set with a direct assessment of elementary school-aged children's executive function skills to examine racial and socioeconomic gaps in performance. The analysis reveals large gaps in measures of working memory and cognitive flexibility, the two components of executive function…

  4. Using AI Planning Techniques to Automatically Generate Image Processing Procedures: A Preliminary Report

    NASA Technical Reports Server (NTRS)

    Chien, S.

    1994-01-01

    This paper describes work on the Multimission VICAR Planner (MVP) system to automatically construct executable image processing procedures for custom image processing requests for the JPL Multimission Image Processing Lab (MIPL). This paper focuses on two issues. First, large search spaces caused by complex plans required the use of hand encoded control information. In order to address this in a manner similar to that used by human experts, MVP uses a decomposition-based planner to implement hierarchical/skeletal planning at the higher level and then uses a classical operator based planner to solve subproblems in contexts defined by the high-level decomposition.

  5. Development of a High-Fidelity Simulation Environment for Shadow-Mode Assessments of Air Traffic Concepts

    NASA Technical Reports Server (NTRS)

    Lee, Alan G.; Robinson, John E.; Lai, Chok Fung

    2017-01-01

    This paper will describe the purpose, architecture, and implementation of a gate-to-gate, high-fidelity air traffic simulation environment called the Shadow Mode Assessment using Realistic Technologies for the National Airspace System (SMART-NAS) Test Bed. The overarching purpose of the SMART-NAS Test Bed (SNTB) is to conduct high-fidelity, real-time, human-in-the-loop and automation-in-the-loop simulations of current and proposed future air traffic concepts for the Next Generation Air Transportation System of the United States, called NextGen. SNTB is intended to enable simulations that are currently impractical or impossible for three major areas of NextGen research and development: Concepts across multiple operational domains such as the gate-to-gate trajectory-based operations concept; Concepts related to revolutionary operations such as the seamless and widespread integration of large and small Unmanned Aerial System (UAS) vehicles throughout U.S. airspace; Real-time system-wide safety assurance technologies to allow safe, increasingly autonomous aviation operations. SNTB is primarily accessed through a web browser. A set of secure support services is provided to simplify all aspects of real-time, human-in-the-loop and automation-in-the-loop simulations from design (i.e., prior to execution) through analysis (i.e., after execution). These services include simulation architecture and asset configuration; scenario generation; command, control and monitoring; and analysis support.

  6. Retooling the nurse executive for 21st century practice: decision support systems.

    PubMed

    Fralic, M F; Denby, C B

    2000-01-01

    Health care financing and care delivery systems are changing at almost warp speed. This requires new responses and new capabilities from contemporary nurse executives and calls for new approaches to the preparation of the next generation of nursing leaders. The premise of this article is that, in these highly unstable environments, the nurse executive faces the need to make high-impact decisions in relatively short time frames. A standardized process for objective decision making becomes essential. This article describes that process.

  7. Coverability graphs for a class of synchronously executed unbounded Petri net

    NASA Technical Reports Server (NTRS)

    Stotts, P. David; Pratt, Terrence W.

    1990-01-01

    After detailing a variant of the concurrent-execution rule for firing of maximal subsets, in which the simultaneous firing of conflicting transitions is prohibited, an algorithm is constructed for generating the coverability graph of a net executed under this synchronous firing rule. The omega insertion criteria in the algorithm are shown to be valid for any net on which the algorithm terminates. It is accordingly shown that the set of nets on which the algorithm terminates includes the 'conflict-free' class.
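    For readers unfamiliar with coverability graphs, the sketch below shows the core mechanism of omega insertion in the style of the classical Karp-Miller construction. For simplicity it fires single transitions under the usual interleaving rule rather than the synchronous maximal-subset rule analysed in the paper, and the two-place example net is invented.

```python
"""Minimal coverability-graph construction with omega insertion (Karp-Miller
style). Standard interleaved firing is used here, not the paper's synchronous
maximal-subset rule; the example net is made up."""
OMEGA = float("inf")   # stands in for the unbounded-marking symbol

# Transitions as (consume, produce) vectors over two places p0, p1.
TRANSITIONS = {"t1": ((1, 0), (1, 1)),   # keeps p0, pumps tokens into p1
               "t2": ((0, 1), (0, 0))}   # drains p1

def enabled(marking, consume):
    return all(m >= c for m, c in zip(marking, consume))

def fire(marking, consume, produce):
    return tuple(m - c + p for m, c, p in zip(marking, consume, produce))

def accelerate(marking, ancestors):
    """Insert omega where the new marking strictly covers an ancestor marking."""
    m = list(marking)
    for anc in ancestors:
        if all(a <= b for a, b in zip(anc, m)) and tuple(m) != anc:
            m = [OMEGA if b > a else b for a, b in zip(anc, m)]
    return tuple(m)

def coverability_graph(initial):
    nodes, edges = {initial}, []
    stack = [(initial, [initial])]          # node plus its ancestor path
    while stack:
        marking, path = stack.pop()
        for name, (consume, produce) in TRANSITIONS.items():
            if enabled(marking, consume):
                succ = accelerate(fire(marking, consume, produce), path)
                edges.append((marking, name, succ))
                if succ not in nodes:
                    nodes.add(succ)
                    stack.append((succ, path + [succ]))
    return nodes, edges

nodes, edges = coverability_graph((1, 0))
print(nodes)   # place p1 becomes unbounded: a marking (1, inf) appears
```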

  8. Memoized Symbolic Execution

    NASA Technical Reports Server (NTRS)

    Yang, Guowei; Pasareanu, Corina S.; Khurshid, Sarfraz

    2012-01-01

    This paper introduces memoized symbolic execution (Memoise), a novel approach for more efficient application of forward symbolic execution, which is a well-studied technique for systematic exploration of program behaviors based on bounded execution paths. Our key insight is that application of symbolic execution often requires several successive runs of the technique on largely similar underlying problems, e.g., running it once to check a program to find a bug, fixing the bug, and running it again to check the modified program. Memoise introduces a trie-based data structure that stores the key elements of a run of symbolic execution. Maintenance of the trie during successive runs allows re-use of previously computed results of symbolic execution without the need for re-computing them as is traditionally done. Experiments using our prototype embodiment of Memoise show the benefits it holds in various standard scenarios of using symbolic execution, e.g., with iterative deepening of exploration depth, to perform regression analysis, or to enhance coverage.
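    The trie idea can be sketched in a few lines: each explored path is stored as a sequence of branch outcomes, so a later run only needs to invoke the constraint solver beyond the longest already-explored prefix. This is a toy illustration of the caching idea only, not the real Memoise/Symbolic PathFinder implementation; the constraints and branch outcomes are invented.

```python
"""Toy sketch of the memoization idea: store each explored path of a (mock)
symbolic execution as a branch-decision sequence in a trie so a later run can
skip prefixes whose path conditions are already known."""

class TrieNode:
    def __init__(self):
        self.children = {}      # branch outcome (True/False) -> TrieNode
        self.constraint = None  # path condition recorded for this prefix

class PathTrie:
    def __init__(self):
        self.root = TrieNode()

    def record(self, decisions, constraints):
        """Store one explored path: branch outcomes plus the path condition
        observed after each decision."""
        node = self.root
        for outcome, constraint in zip(decisions, constraints):
            node = node.children.setdefault(outcome, TrieNode())
            node.constraint = constraint

    def known_prefix(self, decisions):
        """Longest already-explored prefix of a candidate path; the solver
        only needs to be called beyond this point."""
        node, depth, cached = self.root, 0, None
        for outcome in decisions:
            if outcome not in node.children:
                break
            node = node.children[outcome]
            cached = node.constraint
            depth += 1
        return depth, cached

# First run explores a path; a later run reuses the stored prefix.
trie = PathTrie()
trie.record([True, False], ["x > 0", "x > 0 and y <= 3"])
depth, cached = trie.known_prefix([True, False, True])
print(depth, cached)   # 2 "x > 0 and y <= 3" -> only the new branch needs solving
```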

  9. An Execution Service for Grid Computing

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Hu, Chaumin

    2004-01-01

    This paper describes the design and implementation of the IPG Execution Service that reliably executes complex jobs on a computational grid. Our Execution Service is part of the IPG service architecture whose goal is to support location-independent computing. In such an environment, once a user ports an application to one or more hardware/software platforms, the user can describe this environment to the grid, the grid can locate instances of this platform, configure the platform as required for the application, and then execute the application. Our Execution Service runs jobs that set up such environments for applications and executes them. These jobs consist of a set of tasks for executing applications and managing data. The tasks have user-defined starting conditions that allow users to specify complex dependencies, including tasks to execute when tasks fail, a frequent occurrence in a large distributed system, or are cancelled. The execution task provided by our service also configures the application environment exactly as specified by the user and captures the exit code of the application, features that many grid execution services do not support due to difficulties interfacing to local scheduling systems.
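    The notion of a starting condition can be sketched as follows: each task declares which outcome of its predecessor triggers it, so a cleanup task can be made to run only when the application task fails. The task names and functions are invented for illustration; this is not the IPG implementation.

```python
"""Toy sketch of starting-condition-driven task execution, loosely in the
spirit of the abstract: each task declares which predecessor outcome
(succeeded or failed) triggers it."""
SUCCEEDED, FAILED = "succeeded", "failed"

def stage_data():
    print("staging input data")

def run_application():
    raise RuntimeError("simulated application crash")

def clean_up_after_failure():
    print("cleaning scratch space after failure")

# (task name, callable, predecessor, predecessor state that triggers it)
TASKS = [
    ("stage",   stage_data,             None,    None),
    ("run",     run_application,        "stage", SUCCEEDED),
    ("cleanup", clean_up_after_failure, "run",   FAILED),
]

def execute(tasks):
    states = {}
    for name, func, pred, wanted in tasks:
        if pred is not None and states.get(pred) != wanted:
            states[name] = "skipped"
            continue
        try:
            func()
            states[name] = SUCCEEDED
        except Exception as exc:          # capture the failure and keep going
            print(f"{name} failed: {exc}")
            states[name] = FAILED
    return states

print(execute(TASKS))   # cleanup runs only because 'run' failed
```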

  10. 77 FR 71410 - Combined Notice of Filings #1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-30

    ...: 5 p.m. ET 12/10/12. Docket Numbers: ER13-353-001. Applicants: Alcoa Power Generating Inc...-001. Applicants: Alcoa Power Generating Inc. Description: Executed APGI-TVA Interconnection Agreement...

  11. Does Mind Wandering Reflect Executive Function or Executive Failure? Comment on Smallwood and Schooler (2006) and Watkins (2008)

    PubMed Central

    McVay, Jennifer C.; Kane, Michael J.

    2010-01-01

    In this Comment, we contrast different conceptions of mind wandering that were presented in two recent theoretical reviews: Smallwood and Schooler (2006) and Watkins (2008). We also introduce a new perspective on the role of executive control in mind wandering by integrating empirical evidence presented in Smallwood and Schooler (2006) with two theoretical frameworks: Watkins’s (2008) elaborated control theory and Klinger’s (1971; 2009) current concerns theory. In contrast to the Smallwood-Schooler claim that mind-wandering recruits executive resources, we argue that mind wandering represents a failure of executive control and that it is dually determined by the presence of automatically generated thoughts in response to environmental and mental cues and the ability of the executive-control system to deal with this interference. We present empirical support for this view from experimental, neuroimaging, and individual-differences research. PMID:20192557

  12. Long Read Alignment with Parallel MapReduce Cloud Platform

    PubMed Central

    Al-Absi, Ahmed Abdulhakim; Kang, Dae-Ki

    2015-01-01

    Genomic sequence alignment is an important technique to decode genome sequences in bioinformatics. Next-Generation Sequencing technologies produce genomic data of longer reads. Cloud platforms are adopted to address the problems arising from storage and analysis of large genomic data. Existing gene sequencing tools for cloud platforms predominantly consider short read gene sequences and adopt the Hadoop MapReduce framework for computation. However, serial execution of map and reduce phases is a problem in such systems. Therefore, in this paper, we introduce Burrows-Wheeler Aligner's Smith-Waterman Alignment on Parallel MapReduce (BWASW-PMR) cloud platform for long sequence alignment. The proposed cloud platform adopts the widely accepted and accurate BWA-SW algorithm for long sequence alignment. A custom MapReduce platform is developed to overcome the drawbacks of the Hadoop framework. A parallel execution strategy of the MapReduce phases and optimization of the Smith-Waterman algorithm are considered. Performance evaluation results exhibit an average speed-up of 6.7 for BWASW-PMR compared with the state-of-the-art Bwasw-Cloud. An average reduction of 30% in the map phase makespan is reported across all experiments comparing BWASW-PMR with Bwasw-Cloud. Optimization of Smith-Waterman reduces the execution time by 91.8%. The experimental study proves the efficiency of BWASW-PMR for aligning long genomic sequences on cloud platforms. PMID:26839887
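    The two ingredients named in the abstract, a Smith-Waterman local-alignment kernel and a parallel "map" over reads, can be illustrated with a minimal sketch. It uses Python multiprocessing rather than the paper's custom MapReduce engine, and the reference, reads, and match/mismatch/gap scores are arbitrary.

```python
"""Minimal Smith-Waterman local-alignment score plus a parallel map over
reads. Illustrative only; not the BWASW-PMR implementation."""
from multiprocessing import Pool

MATCH, MISMATCH, GAP = 2, -1, -2

def sw_score(query, target):
    """Best local alignment score (score only, no traceback)."""
    cols = len(target) + 1
    prev, best = [0] * cols, 0
    for i in range(1, len(query) + 1):
        curr = [0] * cols
        for j in range(1, cols):
            diag = prev[j - 1] + (MATCH if query[i - 1] == target[j - 1] else MISMATCH)
            curr[j] = max(0, diag, prev[j] + GAP, curr[j - 1] + GAP)
            best = max(best, curr[j])
        prev = curr
    return best

REFERENCE = "ACGTACGTTAGCCGATACGT"

def map_read(read):
    return read, sw_score(read, REFERENCE)

if __name__ == "__main__":
    reads = ["ACGTTAGC", "GATTACA", "CCGATACG"]
    with Pool(processes=2) as pool:           # "map" phase over reads
        scored = pool.map(map_read, reads)
    print(max(scored, key=lambda kv: kv[1]))  # a trivial "reduce"
```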

  13. Long Read Alignment with Parallel MapReduce Cloud Platform.

    PubMed

    Al-Absi, Ahmed Abdulhakim; Kang, Dae-Ki

    2015-01-01

    Genomic sequence alignment is an important technique to decode genome sequences in bioinformatics. Next-Generation Sequencing technologies produce genomic data of longer reads. Cloud platforms are adopted to address the problems arising from storage and analysis of large genomic data. Existing gene sequencing tools for cloud platforms predominantly consider short read gene sequences and adopt the Hadoop MapReduce framework for computation. However, serial execution of map and reduce phases is a problem in such systems. Therefore, in this paper, we introduce Burrows-Wheeler Aligner's Smith-Waterman Alignment on Parallel MapReduce (BWASW-PMR) cloud platform for long sequence alignment. The proposed cloud platform adopts the widely accepted and accurate BWA-SW algorithm for long sequence alignment. A custom MapReduce platform is developed to overcome the drawbacks of the Hadoop framework. A parallel execution strategy of the MapReduce phases and optimization of the Smith-Waterman algorithm are considered. Performance evaluation results exhibit an average speed-up of 6.7 for BWASW-PMR compared with the state-of-the-art Bwasw-Cloud. An average reduction of 30% in the map phase makespan is reported across all experiments comparing BWASW-PMR with Bwasw-Cloud. Optimization of Smith-Waterman reduces the execution time by 91.8%. The experimental study proves the efficiency of BWASW-PMR for aligning long genomic sequences on cloud platforms.

  14. Analysis of area-time efficiency for an integrated focal plane architecture

    NASA Astrophysics Data System (ADS)

    Robinson, William H.; Wills, D. Scott

    2003-05-01

    Monolithic integration of photodetectors, analog-to-digital converters, digital processing, and data storage can improve the performance and efficiency of next-generation portable image products. Our approach combines these components into a single processing element, which is tiled to form a SIMD focal plane processor array with the capability to execute early image applications such as median filtering (noise removal), convolution (smoothing), and inside edge detection (segmentation). Digitizing and processing a pixel at the detection site presents new design challenges, including the allocation of silicon resources. This research investigates the area-time (AT²) efficiency by adjusting the number of Pixels-per-Processing Element (PPE). Area calculations are based upon hardware implementations of components scaled for 250 nm or 120 nm technology. The total execution time is calculated from the sequential execution of each application on a generic focal plane architectural simulator. For a Quad-CIF system resolution (176×144), results show that 1 PPE provides the optimal area-time efficiency (5.7 μs²·mm² for 250 nm, 1.7 μs²·mm² for 120 nm) but requires a large silicon chip (2072 mm² for 250 nm, 614 mm² for 120 nm). Increasing the PPE to 4 or 16 can reduce silicon area by 48% and 60% respectively (120 nm technology) while maintaining performance within real-time constraints.

  15. Validity of the Cambridge Cognitive Examination-Revised new Executive Function Scores in the diagnosis of dementia: some early findings.

    PubMed

    Heinik, Jeremia; Solomesh, Isaac

    2007-03-01

    The Cambridge Cognitive Examination-Revised introduces 2 new executive items (Ideational Fluency and Visual Reasoning), which separately or combined with 2 executive items in the former version (word list generation and similarities) might constitute an Executive Function Score (EFS). The authors studied the validity of these new EFSs in 51 demented (dementia of the Alzheimer's type, vascular dementia) and nondemented individuals (depressives and normals). The new EFSs were found valid to accurately differentiate between demented and nondemented subjects; however, they were considerably less so when specific diagnoses were considered. Correlations between the variously combined executive scores and the cognitive scales and subscales studied were prevalently low to moderate, and ranged from high and significant to low and nonsignificant when the 4 executive items were correlated to each other. The ability of the executive scores to discriminate demented from nondemented individuals was lower compared with the Cambridge Cognitive Examination-Revised scores. EFS was found internally consistent.

  16. Television and children's executive function.

    PubMed

    Lillard, Angeline S; Li, Hui; Boguszewski, Katie

    2015-01-01

    Children spend a lot of time watching television on its many platforms: directly, online, and via videos and DVDs. Many researchers are concerned that some types of television content appear to negatively influence children's executive function. Because (1) executive function predicts key developmental outcomes, (2) executive function appears to be influenced by some television content, and (3) American children watch large quantities of television (including the content of concern), the issues discussed here comprise a crucial public health issue. Further research is needed to reveal exactly what television content is implicated, what underlies television's effect on executive function, how long the effect lasts, and who is affected. © 2015 Elsevier Inc. All rights reserved.

  17. An expert system executive for automated assembly of large space truss structures

    NASA Technical Reports Server (NTRS)

    Allen, Cheryl L.

    1993-01-01

    Langley Research Center developed a unique test bed for investigating the practical problems associated with the assembly of large space truss structures using robotic manipulators. The test bed is the result of an interdisciplinary effort that encompasses the full spectrum of assembly problems - from the design of mechanisms to the development of software. The automated structures assembly test bed and its operation are described, the expert system executive and its development are detailed, and the planned system evolution is discussed. Emphasis is on the expert system implementation of the program executive. The executive program must direct and reliably perform complex assembly tasks with the flexibility to recover from realistic system errors. The employment of an expert system permits information that pertains to the operation of the system to be encapsulated concisely within a knowledge base. This consolidation substantially reduced code, increased flexibility, eased software upgrades, and realized a savings in software maintenance costs.

  18. Accelerating next generation sequencing data analysis with system level optimizations.

    PubMed

    Kathiresan, Nagarajan; Temanni, Ramzi; Almabrazi, Hakeem; Syed, Najeeb; Jithesh, Puthen V; Al-Ali, Rashid

    2017-08-22

    Next generation sequencing (NGS) data analysis is highly compute intensive. In-memory computing, vectorization, bulk data transfer, and CPU frequency scaling are some of the hardware features of modern computing architectures. To get the best execution time and utilize these hardware features, it is necessary to tune the system level parameters before running the application. We studied the GATK HaplotypeCaller, which is part of common NGS workflows and consumes more than 43% of the total execution time. Multiple GATK 3.x versions were benchmarked and the execution time of HaplotypeCaller was optimized by various system level parameters which included: (i) tuning the parallel garbage collection and kernel shared memory to simulate in-memory computing, (ii) architecture-specific tuning in the PairHMM library for vectorization, (iii) including Java 1.8 features through GATK source code compilation and building a runtime environment for parallel sorting and bulk data transfer, and (iv) switching the CPU frequency setting from the default 'on-demand' mode to 'performance' mode to accelerate the Java multi-threads. As a result, the HaplotypeCaller execution time was reduced by 82.66% in GATK 3.3 and 42.61% in GATK 3.7. Overall, the execution time of the NGS pipeline was reduced to 70.60% and 34.14% for GATK 3.3 and GATK 3.7 respectively.
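    As a rough illustration of the kind of JVM-level tuning referred to above, the sketch below assembles a GATK 3.x HaplotypeCaller command line with parallel garbage collection options. The specific flag values, file paths, and the jar location are placeholders, not the settings used in the study.

```python
"""Sketch of launching GATK 3.x HaplotypeCaller with illustrative JVM tuning
options (parallel GC, larger heap). Paths and values are placeholders."""
import subprocess

def haplotype_caller_cmd(ref, bam, out_vcf, gc_threads=8, heap_gb=32):
    return [
        "java",
        f"-Xmx{heap_gb}g",                       # larger heap (illustrative)
        "-XX:+UseParallelGC",                    # parallel garbage collection
        f"-XX:ParallelGCThreads={gc_threads}",
        "-jar", "GenomeAnalysisTK.jar",          # GATK 3.x jar (path assumed)
        "-T", "HaplotypeCaller",
        "-R", ref, "-I", bam, "-o", out_vcf,
    ]

cmd = haplotype_caller_cmd("ref.fasta", "sample.bam", "sample.vcf")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)   # uncomment on a system with GATK 3.x installed
```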

  19. Performance and Architecture Lab Modeling Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-06-19

    Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into subproblems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior. The model -- an executable program -- is a hierarchical composition of annotation functions, synthesized functions, statistics for runtime values, and performance measurements.
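    The general idea of tying a named model term to a block of code and checking it against measurements can be sketched with a simple decorator. This is only a toy analogy in Python; Palm's real annotation language, compiler, and generated models are not reproduced here, and the cost expression below is invented.

```python
"""Toy illustration of linking a model term to a code block: a decorator
names the term, evaluates an analytic cost expression, and records the
measured time so prediction and measurement can be compared."""
import time

MODEL = {}   # term name -> (predicted seconds, measured seconds)

def model_term(name, predict):
    """Annotate a code block with an analytic cost expression predict(n)."""
    def wrap(func):
        def inner(n, *args, **kwargs):
            start = time.perf_counter()
            result = func(n, *args, **kwargs)
            MODEL[name] = (predict(n), time.perf_counter() - start)
            return result
        return inner
    return wrap

@model_term("triangular_sum", predict=lambda n: 3e-8 * n * n)  # assumed c*n^2 model
def triangular_sum(n):
    return sum(i * j for i in range(n) for j in range(n))

triangular_sum(1000)
for term, (pred, meas) in MODEL.items():
    print(f"{term}: predicted {pred:.4f}s, measured {meas:.4f}s")
```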

  20. The Entrepreneur at the Helm

    ERIC Educational Resources Information Center

    Fain, Paul

    2008-01-01

    College trustees are hiring more leaders from outside the academy, and it is the self-made executives who generate the buzz. The reason, experts say, is that presidents with corporate experience know the vagaries of the marketplace, as well as the language of lawmakers, corporate executives, and donors. Too many college chiefs, says one source,…

  1. Resolving Task Rule Incongruence during Task Switching by Competitor Rule Suppression

    ERIC Educational Resources Information Center

    Meiran, Nachshon; Hsieh, Shulan; Dimov, Eduard

    2010-01-01

    Task switching requires maintaining readiness to execute any task of a given set of tasks. However, when tasks switch, the readiness to execute the now-irrelevant task generates interference, as seen in the task rule incongruence effect. Overcoming such interference requires fine-tuned inhibition that impairs task readiness only minimally. In an…

  2. Executing Quality: A Grounded Theory of Child Care Quality Improvement Engagement Process in Pennsylvania

    ERIC Educational Resources Information Center

    Critchosin, Heather

    2014-01-01

    Executing Quality describes the perceived process experienced by participants while engaging in Keystone Standards, Training, Assistance, Resources, and Support (Keystone STARS) quality rating improvement system (QRIS). The purpose of this qualitative inquiry was to understand the process of Keystone STARS engagement in order to generate a…

  3. Staghorn: An Automated Large-Scale Distributed System Analysis Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gabert, Kasimir; Burns, Ian; Elliott, Steven

    2016-09-01

    Conducting experiments on large-scale distributed computing systems is becoming significantly easier with the assistance of emulation. Researchers can now create a model of a distributed computing environment and then generate a virtual, laboratory copy of the entire system composed of potentially thousands of virtual machines, switches, and software. The use of real software, running at clock rate in full virtual machines, allows experiments to produce meaningful results without necessitating a full understanding of all model components. However, the ability to inspect and modify elements within these models is bound by the limitation that such modifications must compete with the model, either running in or alongside it. This inhibits entire classes of analyses from being conducted upon these models. We developed a mechanism to snapshot an entire emulation-based model as it is running. This allows us to "freeze time" and subsequently fork execution, replay execution, modify arbitrary parts of the model, or deeply explore the model. This snapshot includes capturing packets in transit and other input/output state along with the running virtual machines. We were able to build this system in Linux using Open vSwitch and Kernel Virtual Machines on top of Sandia's emulation platform Firewheel. This primitive opens the door to numerous subsequent analyses on models, including state space exploration, debugging distributed systems, performance optimizations, improved training environments, and improved experiment repeatability.

  4. A parallel computational model for GATE simulations.

    PubMed

    Rannou, F R; Vega-Acevedo, N; El Bitar, Z

    2013-12-01

    GATE/Geant4 Monte Carlo simulations are computationally demanding applications, requiring thousands of processor hours to produce realistic results. The classical strategy of distributing the simulation of individual events does not apply efficiently for Positron Emission Tomography (PET) experiments, because it requires a centralized coincidence processing and large communication overheads. We propose a parallel computational model for GATE that handles event generation and coincidence processing in a simple and efficient way by decentralizing event generation and processing but maintaining a centralized event and time coordinator. The model is implemented with the inclusion of a new set of factory classes that can run the same executable in sequential or parallel mode. A Mann-Whitney test shows that the output produced by this parallel model in terms of number of tallies is equivalent (but not equal) to its sequential counterpart. Computational performance evaluation shows that the software is scalable and well balanced. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  5. HEALPix: A Framework for High-Resolution Discretization and Fast Analysis of Data Distributed on the Sphere

    NASA Technical Reports Server (NTRS)

    Gorski, K. M.; Hivon, Eric; Banday, A. J.; Wandelt, Benjamin D.; Hansen, Frode K.; Reinecke, Martin; Bartelmann, Matthias

    2005-01-01

    HEALPix, the Hierarchical Equal Area isoLatitude Pixelization, is a versatile structure for the pixelization of data on the sphere. An associated library of computational algorithms and visualization software supports fast scientific applications executable directly on discretized spherical maps generated from very large volumes of astronomical data. Originally developed to address the data processing and analysis needs of the present generation of cosmic microwave background experiments (e.g., BOOMERANG, WMAP), HEALPix can be expanded to meet many of the profound challenges that will arise in confrontation with the observational output of future missions and experiments, including, e.g., Planck, Herschel, SAFIR, and the Beyond Einstein inflation probe. In this paper we consider the requirements and implementation constraints on a framework that simultaneously enables an efficient discretization with associated hierarchical indexation and fast analysis/synthesis of functions defined on the sphere. We demonstrate how these are explicitly satisfied by HEALPix.
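    A small example of the equal-area discretization, using the healpy Python binding of the HEALPix library (assuming healpy and numpy are installed); the resolution parameter and sample points below are arbitrary, not from the paper.

```python
"""Small example of HEALPix discretization using healpy (assumed installed)."""
import numpy as np
import healpy as hp

nside = 64                                  # resolution parameter (a power of 2)
npix = hp.nside2npix(nside)                 # 12 * nside**2 equal-area pixels
print(npix, hp.nside2pixarea(nside))        # pixel count and pixel area (steradians)

# Bin some points on the sphere (colatitude theta, longitude phi in radians).
theta = np.random.uniform(0.0, np.pi, 1000)
phi = np.random.uniform(0.0, 2.0 * np.pi, 1000)
pix = hp.ang2pix(nside, theta, phi)         # iso-latitude RING indexing by default

# A crude map: counts per pixel, ready for further analysis or visualization.
counts = np.bincount(pix, minlength=npix).astype(float)
print(counts.sum())                          # all 1000 samples landed in pixels
```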

  6. Optimizing Tensor Contraction Expressions for Hybrid CPU-GPU Execution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Wenjing; Krishnamoorthy, Sriram; Villa, Oreste

    2013-03-01

    Tensor contractions are generalized multidimensional matrix multiplication operations that widely occur in quantum chemistry. Efficient execution of tensor contractions on Graphics Processing Units (GPUs) requires several challenges to be addressed, including index permutation and small dimension-sizes reducing thread block utilization. Moreover, to apply the same optimizations to various expressions, we need a code generation tool. In this paper, we present our approach to automatically generate CUDA code to execute tensor contractions on GPUs, including management of data movement between CPU and GPU. To evaluate our tool, GPU-enabled code is generated for the most expensive contractions in CCSD(T), a key coupled cluster method, and incorporated into NWChem, a popular computational chemistry suite. For this method, we demonstrate speedup over a factor of 8.4 using one GPU (instead of one core per node) and over 2.6 when utilizing the entire system using hybrid CPU+GPU solution with 2 GPUs and 5 cores (instead of 7 cores per node). Finally, we analyze the implementation behavior on future GPU systems.
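    To make the notion of a tensor contraction and the accompanying index-permutation issue concrete, the sketch below shows an invented four-index contraction in NumPy (not the generated CUDA code, and not a CCSD(T) term). The explicit transpose/reshape formulation mirrors the data-layout step that GPU code generators must also handle.

```python
"""Illustration of a tensor contraction: C[i,j,a,b] = sum_k A[i,k,a] * B[k,j,b].
Dimension sizes are arbitrary; this is NumPy, not the paper's generated CUDA."""
import numpy as np

i, j, k, a, b = 8, 8, 16, 4, 4
A = np.random.rand(i, k, a)
B = np.random.rand(k, j, b)

# Direct contraction over the shared index k.
C = np.einsum("ika,kjb->ijab", A, B)

# Equivalent formulation: permute indices so the contraction becomes a plain
# matrix multiply over flattened index groups, (i*a x k) @ (k x j*b).
A_perm = A.transpose(0, 2, 1).reshape(i * a, k)
B_perm = B.reshape(k, j * b)
C2 = (A_perm @ B_perm).reshape(i, a, j, b).transpose(0, 2, 1, 3)

print(np.allclose(C, C2))   # True: same contraction, different data layout
```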

  7. RABIX: AN OPEN-SOURCE WORKFLOW EXECUTOR SUPPORTING RECOMPUTABILITY AND INTEROPERABILITY OF WORKFLOW DESCRIPTIONS

    PubMed Central

    Ivkovic, Sinisa; Simonovic, Janko; Tijanic, Nebojsa; Davis-Dusenbery, Brandi; Kural, Deniz

    2016-01-01

    As biomedical data has become increasingly easy to generate in large quantities, the methods used to analyze it have proliferated rapidly. Reproducible and reusable methods are required to learn from large volumes of data reliably. To address this issue, numerous groups have developed workflow specifications or execution engines, which provide a framework with which to perform a sequence of analyses. One such specification is the Common Workflow Language, an emerging standard which provides a robust and flexible framework for describing data analysis tools and workflows. In addition, reproducibility can be furthered by executors or workflow engines which interpret the specification and enable additional features, such as error logging, file organization, optimizations to computation and job scheduling, and allow for easy computing on large volumes of data. To this end, we have developed the Rabix Executor, an open-source workflow engine for the purposes of improving reproducibility through reusability and interoperability of workflow descriptions. PMID:27896971

  8. RABIX: AN OPEN-SOURCE WORKFLOW EXECUTOR SUPPORTING RECOMPUTABILITY AND INTEROPERABILITY OF WORKFLOW DESCRIPTIONS.

    PubMed

    Kaushik, Gaurav; Ivkovic, Sinisa; Simonovic, Janko; Tijanic, Nebojsa; Davis-Dusenbery, Brandi; Kural, Deniz

    2017-01-01

    As biomedical data has become increasingly easy to generate in large quantities, the methods used to analyze it have proliferated rapidly. Reproducible and reusable methods are required to learn from large volumes of data reliably. To address this issue, numerous groups have developed workflow specifications or execution engines, which provide a framework with which to perform a sequence of analyses. One such specification is the Common Workflow Language, an emerging standard which provides a robust and flexible framework for describing data analysis tools and workflows. In addition, reproducibility can be furthered by executors or workflow engines which interpret the specification and enable additional features, such as error logging, file organization, optimizations to computation and job scheduling, and allow for easy computing on large volumes of data. To this end, we have developed the Rabix Executor, an open-source workflow engine for the purposes of improving reproducibility through reusability and interoperability of workflow descriptions.

  9. Semantics-based distributed I/O with the ParaMEDIC framework.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balaji, P.; Feng, W.; Lin, H.

    2008-01-01

    Many large-scale applications simultaneously rely on multiple resources for efficient execution. For example, such applications may require both large compute and storage resources; however, very few supercomputing centers can provide large quantities of both. Thus, data generated at the compute site oftentimes has to be moved to a remote storage site for either storage or visualization and analysis. Clearly, this is not an efficient model, especially when the two sites are distributed over a wide-area network. Thus, we present a framework called 'ParaMEDIC: Parallel Metadata Environment for Distributed I/O and Computing' which uses application-specific semantic information to convert the generated data to orders-of-magnitude smaller metadata at the compute site, transfer the metadata to the storage site, and re-process the metadata at the storage site to regenerate the output. Specifically, ParaMEDIC trades a small amount of additional computation (in the form of data post-processing) for a potentially significant reduction in data that needs to be transferred in distributed environments.

  10. Cloudgene: A graphical execution platform for MapReduce programs on private and public clouds

    PubMed Central

    2012-01-01

    Background The MapReduce framework enables scalable processing and analysis of large datasets by distributing the computational load on connected computer nodes, referred to as a cluster. In Bioinformatics, MapReduce has already been adopted in various case scenarios such as mapping next generation sequencing data to a reference genome, finding SNPs from short read data or matching strings in genotype files. Nevertheless, tasks like installing and maintaining MapReduce on a cluster system, importing data into its distributed file system or executing MapReduce programs require advanced knowledge in computer science and could thus prevent scientists from using currently available and useful software solutions. Results Here we present Cloudgene, a freely available platform to improve the usability of MapReduce programs in Bioinformatics by providing a graphical user interface for the execution, the import and export of data and the reproducibility of workflows on in-house (private clouds) and rented clusters (public clouds). The aim of Cloudgene is to build a standardized graphical execution environment for currently available and future MapReduce programs, which can all be integrated by using its plug-in interface. Since Cloudgene can be executed on private clusters, sensitive datasets can be kept in house at all times and data transfer times are therefore minimized. Conclusions Our results show that MapReduce programs can be integrated into Cloudgene with little effort and without adding any computational overhead to existing programs. This platform gives developers the opportunity to focus on the actual implementation task and provides scientists with a platform that aims to hide the complexity of MapReduce. In addition to MapReduce programs, Cloudgene can also be used to launch predefined systems (e.g. Cloud BioLinux, RStudio) in public clouds. Currently, five different bioinformatic programs using MapReduce and two systems are integrated and have been successfully deployed. Cloudgene is freely available at http://cloudgene.uibk.ac.at. PMID:22888776

  11. Pragmatic clinical trials embedded in healthcare systems: generalizable lessons from the NIH Collaboratory.

    PubMed

    Weinfurt, Kevin P; Hernandez, Adrian F; Coronado, Gloria D; DeBar, Lynn L; Dember, Laura M; Green, Beverly B; Heagerty, Patrick J; Huang, Susan S; James, Kathryn T; Jarvik, Jeffrey G; Larson, Eric B; Mor, Vincent; Platt, Richard; Rosenthal, Gary E; Septimus, Edward J; Simon, Gregory E; Staman, Karen L; Sugarman, Jeremy; Vazquez, Miguel; Zatzick, Douglas; Curtis, Lesley H

    2017-09-18

    The clinical research enterprise is not producing the evidence decision makers arguably need in a timely and cost-effective manner; research currently involves the use of labor-intensive parallel systems that are separate from clinical care. The emergence of pragmatic clinical trials (PCTs) poses a possible solution: these large-scale trials are embedded within routine clinical care and often involve cluster randomization of hospitals, clinics, primary care providers, etc. Interventions can be implemented by health system personnel through usual communication channels and quality improvement infrastructure, and data collected as part of routine clinical care. However, experience with these trials is nascent and best practices regarding design, operational, analytic, and reporting methodologies are undeveloped. To strengthen the national capacity to implement cost-effective, large-scale PCTs, the Common Fund of the National Institutes of Health created the Health Care Systems Research Collaboratory (Collaboratory) to support the design, execution, and dissemination of a series of demonstration projects using a pragmatic research design. In this article, we will describe the Collaboratory, highlight some of the challenges encountered and solutions developed thus far, and discuss remaining barriers and opportunities for large-scale evidence generation using PCTs. A planning phase is critical, and even with careful planning, new challenges arise during execution; comparisons between arms can be complicated by unanticipated changes. Early and ongoing engagement with both health care system leaders and front-line clinicians is critical for success. There is also marked uncertainty when applying existing ethical and regulatory frameworks to PCTs, and using existing electronic health records for data capture adds complexity.

  12. Route Generation for a Synthetic Character (BOT) Using a Partial or Incomplete Knowledge Route Generation Algorithm in UT2004 Virtual Environment

    NASA Technical Reports Server (NTRS)

    Hanold, Gregg T.; Hanold, David T.

    2010-01-01

    This paper presents a new Route Generation Algorithm that accurately and realistically represents human route planning and navigation for Military Operations in Urban Terrain (MOUT). The accuracy of this algorithm in representing human behavior is measured using the Unreal Tournament™ 2004 (UT2004) Game Engine to provide the simulation environment in which the differences between the routes taken by the human player and those of a Synthetic Agent (BOT) executing the A-star algorithm and the new Route Generation Algorithm can be compared. The new Route Generation Algorithm computes the BOT route based on partial or incomplete knowledge received from the UT2004 game engine during game play. To allow BOT navigation to occur continuously throughout the game play with incomplete knowledge of the terrain, a spatial network model of the UT2004 MOUT terrain is captured and stored in an Oracle 11g Spatial Data Object (SDO). The SDO allows a partial data query to be executed to generate continuous route updates based on the terrain knowledge, and stored dynamic BOT, Player and environmental parameters returned by the query. The partial data query permits the dynamic adjustment of the planned routes by the Route Generation Algorithm based on the current state of the environment during a simulation. The dynamic nature of this algorithm more accurately allows the BOT to mimic the routes taken by the human executing under the same conditions, thereby improving the realism of the BOT in a MOUT simulation environment.
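    The generic pattern behind partial-knowledge route generation, planning on the terrain known so far and replanning when a new obstacle is observed, can be sketched with a baseline A* planner on a grid. This is only the baseline comparison technique named in the abstract (A-star) in a toy setting; the map, obstacles, and grid size are invented and nothing here touches UT2004 or Oracle.

```python
"""Generic sketch: plan with A* on the currently known grid, walk the route,
and replan when a previously unknown obstacle is observed."""
import heapq

def astar(start, goal, blocked, size=10):
    """Plain A* on a 4-connected grid with a Manhattan heuristic."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier, came, cost = [(h(start), start)], {start: None}, {start: 0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []
            while cur:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nxt[0] < size and 0 <= nxt[1] < size) or nxt in blocked:
                continue
            new_cost = cost[cur] + 1
            if nxt not in cost or new_cost < cost[nxt]:
                cost[nxt], came[nxt] = new_cost, cur
                heapq.heappush(frontier, (new_cost + h(nxt), nxt))
    return None

true_obstacles = {(5, y) for y in range(0, 9)}       # a wall the agent cannot see yet
known, pos, goal = set(), (0, 0), (9, 0)
while pos != goal:
    path = astar(pos, goal, known)
    for step in path[1:]:
        if step in true_obstacles:                   # obstacle observed: replan
            known.add(step)
            break
        pos = step
print("reached", pos, "having discovered", len(known), "obstacle cells")
```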

  13. Exploiting Semantic Web Technologies to Develop OWL-Based Clinical Practice Guideline Execution Engines.

    PubMed

    Jafarpour, Borna; Abidi, Samina Raza; Abidi, Syed Sibte Raza

    2016-01-01

    Computerizing paper-based CPG and then executing them can provide evidence-informed decision support to physicians at the point of care. Semantic web technologies especially web ontology language (OWL) ontologies have been profusely used to represent computerized CPG. Using semantic web reasoning capabilities to execute OWL-based computerized CPG unties them from a specific custom-built CPG execution engine and increases their shareability as any OWL reasoner and triple store can be utilized for CPG execution. However, existing semantic web reasoning-based CPG execution engines suffer from lack of ability to execute CPG with high levels of expressivity, high cognitive load of computerization of paper-based CPG and updating their computerized versions. In order to address these limitations, we have developed three CPG execution engines based on OWL 1 DL, OWL 2 DL and OWL 2 DL + semantic web rule language (SWRL). OWL 1 DL serves as the base execution engine capable of executing a wide range of CPG constructs, however for executing highly complex CPG the OWL 2 DL and OWL 2 DL + SWRL offer additional executional capabilities. We evaluated the technical performance and medical correctness of our execution engines using a range of CPG. Technical evaluations show the efficiency of our CPG execution engines in terms of CPU time and validity of the generated recommendation in comparison to existing CPG execution engines. Medical evaluations by domain experts show the validity of the CPG-mediated therapy plans in terms of relevance, safety, and ordering for a wide range of patient scenarios.

  14. Closing the Guantanamo Detection Center: Legal Issues

    DTIC Science & Technology

    2009-04-14

    issues likely to arise as a result of executive and legislative action to close the Guantanamo detention facility. It discusses legal issues related to...detention or other wartime actions taken by the Executive. The Bush Administration initially believed that Guantanamo was largely beyond the...C. Henning. This report provides an overview of major legal issues that are likely to arise as a result of executive and legislative action to

  15. General Temporal Knowledge for Planning and Data Mining

    NASA Technical Reports Server (NTRS)

    Morris, Robert; Khatib, Lina

    2001-01-01

    We consider the architecture of systems that combine temporal planning and plan execution and introduce a layer of temporal reasoning that potentially improves both the communication between humans and such systems, and the performance of the temporal planner itself. In particular, this additional layer simultaneously supports more flexibility in specifying and maintaining temporal constraints on plans within an uncertain and changing execution environment, and the ability to understand and trace the progress of plan execution. It is shown how a representation based on a single set of abstractions of temporal information can be used to characterize the reasoning underlying plan generation and execution interpretation. The complexity of such reasoning is discussed.
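    Flexible temporal constraints of the sort discussed above are commonly encoded as a Simple Temporal Network and checked for consistency with an all-pairs shortest-path computation. The sketch below shows that standard technique; the events and bounds are invented for illustration and are not taken from the paper.

```python
"""Standard Simple Temporal Network (STN) consistency check via Floyd-Warshall."""

INF = float("inf")

def stn_consistent(num_events, constraints):
    """constraints: (i, j, lo, hi) meaning lo <= t_j - t_i <= hi.
    Returns the minimal distance matrix, or None if inconsistent."""
    d = [[0 if i == j else INF for j in range(num_events)] for i in range(num_events)]
    for i, j, lo, hi in constraints:
        d[i][j] = min(d[i][j], hi)    # t_j - t_i <= hi
        d[j][i] = min(d[j][i], -lo)   # t_i - t_j <= -lo
    for k in range(num_events):       # Floyd-Warshall shortest paths
        for i in range(num_events):
            for j in range(num_events):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    if any(d[i][i] < 0 for i in range(num_events)):
        return None                   # negative cycle: the constraints conflict
    return d

# Events: 0 = plan start, 1 = start of an activity, 2 = end of the activity.
constraints = [
    (0, 1, 5, 10),   # activity must start 5-10 time units after plan start
    (1, 2, 3, 4),    # activity lasts 3-4 units
    (0, 2, 0, 12),   # everything must finish within 12 units
]
d = stn_consistent(3, constraints)
print("consistent" if d else "inconsistent", d and d[0][2])  # tightest finish bound
```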

  16. Framework for Integrating Science Data Processing Algorithms Into Process Control Systems

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris A.; Crichton, Daniel J.; Chang, Albert Y.; Foster, Brian M.; Freeborn, Dana J.; Woollard, David M.; Ramirez, Paul M.

    2011-01-01

    A software framework called PCS Task Wrapper is responsible for standardizing the setup, process initiation, execution, and file management tasks surrounding the execution of science data algorithms, which are referred to by NASA as Product Generation Executives (PGEs). PGEs codify a scientific algorithm, some step in the overall scientific process involved in a mission science workflow. The PCS Task Wrapper provides a stable operating environment to the underlying PGE during its execution lifecycle. If the PGE requires a file, or metadata regarding the file, the PCS Task Wrapper is responsible for delivering that information to the PGE in a manner that meets its requirements. If the PGE requires knowledge of upstream or downstream PGEs in a sequence of executions, that information is also made available. Finally, if information regarding disk space, or node information such as CPU availability, etc., is required, the PCS Task Wrapper provides this information to the underlying PGE. After this information is collected, the PGE is executed, and its output Product file and Metadata generation is managed via the PCS Task Wrapper framework. The innovation is responsible for marshalling output Products and Metadata back to a PCS File Management component for use in downstream data processing and pedigree. In support of this, the PCS Task Wrapper leverages the PCS Crawler Framework to ingest (during pipeline processing) the output Product files and Metadata produced by the PGE. The architectural components of the PCS Task Wrapper framework include PGE Task Instance, PGE Config File Builder, Config File Property Adder, Science PGE Config File Writer, and PCS Met file Writer. This innovative framework is really the unifying bridge between the execution of a step in the overall processing pipeline, and the available PCS component services as well as the information that they collectively manage.

  17. Program Instrumentation and Trace Analysis

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Goldberg, Allen; Filman, Robert; Rosu, Grigore; Koga, Dennis (Technical Monitor)

    2002-01-01

    Several attempts have been made recently to apply techniques such as model checking and theorem proving to the analysis of programs. This reflects a current trend toward analyzing real software systems instead of just their designs. This includes our own effort to develop a model checker for Java, the Java PathFinder 1, one of the very first of its kind in 1998. However, model checking cannot handle very large programs without some kind of abstraction of the program. This paper describes a complementary scalable technique to handle such large programs. Our interest here is in the observation part of the equation: How much information can be extracted about a program from observing a single execution trace? It is our intention to develop a technology that can be applied automatically to large, full-size applications, with minimal modification to the code. We present a tool, Java PathExplorer (JPaX), for exploring execution traces of Java programs. The tool prioritizes scalability over completeness, and is directed towards detecting errors in programs, not proving correctness. One core element in JPaX is an instrumentation package that allows Java bytecode files to be instrumented to log various events when executed. The instrumentation is driven by a user-provided script that specifies what information to log. Examples of instructions that such a script can contain are: 'report name and arguments of all called methods defined in class C, together with a timestamp'; 'report all updates to all variables'; and 'report all acquisitions and releases of locks'. In more complex instructions one can specify that certain expressions should be evaluated and even that certain code should be executed under various conditions. The instrumentation package can hence be seen as implementing Aspect Oriented Programming for Java, in the sense that one can add functionality to a Java program without explicitly changing the code of the original program; rather, one writes an aspect and compiles it into the original program using the instrumentation. Another core element of JPaX is an observation package that supports the analysis of the generated event stream. Two kinds of analysis are currently supported. In temporal analysis the execution trace is evaluated against formulae written in temporal logic. We have implemented a temporal logic evaluator on finite traces using the Maude rewriting system from SRI International, USA. Temporal logic is defined in Maude by giving its syntax as a signature and its semantics as rewrite equations. The resulting semantics is extremely efficient and can handle event streams of hundreds of millions of events in a few minutes. Furthermore, the implementation is very succinct. The second form of event stream analysis supported is error pattern analysis, where an execution trace is analyzed using various error detection algorithms that can identify error-prone programming practices that may potentially lead to errors in some different executions. Two such algorithms focusing on concurrency errors have been implemented in JPaX, one for deadlocks and the other for data races. It is important to note that a deadlock or data race does not need to occur in order for its potential to be detected with these algorithms. This is what makes them very scalable in practice. The data race algorithm implemented is the Eraser algorithm from Compaq, adapted to Java.
    The tool is currently being applied to a spacecraft-control code base by the developers of that software in order to evaluate its applicability.
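    The lockset idea behind the Eraser algorithm mentioned above can be shown in a stripped-down form over a recorded event trace: a variable is flagged when no single lock has protected all of its accesses. Real Eraser refines this with per-variable state machines for initialization and read-sharing, which are omitted here, and the trace below is invented.

```python
"""Stripped-down lockset (Eraser-style) check over a recorded event trace."""

def lockset_check(trace):
    held = {}        # thread -> set of locks currently held
    candidates = {}  # variable -> locks that have protected every access so far
    races = set()
    for thread, op, target in trace:
        locks = held.setdefault(thread, set())
        if op == "acquire":
            locks.add(target)
        elif op == "release":
            locks.discard(target)
        elif op == "access":
            if target not in candidates:
                candidates[target] = set(locks)
            else:
                candidates[target] &= locks        # intersect with held locks
            if not candidates[target]:
                races.add(target)                  # no common lock protects it
    return races

trace = [
    ("T1", "acquire", "L"), ("T1", "access", "x"), ("T1", "release", "L"),
    ("T2", "acquire", "L"), ("T2", "access", "x"), ("T2", "release", "L"),
    ("T1", "access", "y"),                          # y touched with no lock held
    ("T2", "acquire", "L"), ("T2", "access", "y"), ("T2", "release", "L"),
]
print(lockset_check(trace))   # {'y'}: x is consistently protected by L, y is not
```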

  18. Family matters: Intergenerational and interpersonal processes of executive function and attentive behavior

    PubMed Central

    Deater-Deckard, Kirby

    2014-01-01

    Individual differences in self-regulation include executive function (EF) components that serve self-regulation of attentive behavior by modulating reactive responses to the environment. These factors “run in families”. The purpose of this review is to summarize a program of research that addresses familial inter-generational transmission and inter-personal processes in development. Self-regulation of attentive behavior involves inter-related aspects of executive function (EF) including attention, inhibitory control, and working memory. Individual differences in EF skills develop in systematic ways over childhood, resulting in moderately stable differences between people by early adolescence. Through complex gene-environment transactions, EF is transmitted across generations within parent-child relationships that provide powerful socialization and experiential contexts in which EF and related attentive behavior are forged and practiced. Families matter as parents regulate home environments and themselves as best they can while also supporting cognitive self-regulation of attentive behavior in their children. PMID:25197171

  19. Resource allocation and supervisory control architecture for intelligent behavior generation

    NASA Astrophysics Data System (ADS)

    Shah, Hitesh K.; Bahl, Vikas; Moore, Kevin L.; Flann, Nicholas S.; Martin, Jason

    2003-09-01

    In earlier research the Center for Self-Organizing and Intelligent Systems (CSOIS) at Utah State University (USU) was funded by the US Army Tank-Automotive and Armaments Command's (TACOM) Intelligent Mobility Program to develop and demonstrate enhanced mobility concepts for unmanned ground vehicles (UGVs). As part of our research, we presented the use of a grammar-based approach to enabling intelligent behaviors in autonomous robotic vehicles. With the growth of the number of available resources on the robot, the variety of the generated behaviors and the need for parallel execution of multiple behaviors to achieve reaction also grew. As continuation of our past efforts, in this paper, we discuss the parallel execution of behaviors and the management of utilized resources. In our approach, available resources are wrapped with a layer (termed services) that synchronizes and serializes access to the underlying resources. The controlling agents (called behavior generating agents) generate behaviors to be executed via these services. The agents are prioritized and then, based on their priority and the availability of requested services, the Control Supervisor decides on a winner for the grant of access to services. Though the architecture is applicable to a variety of autonomous vehicles, we discuss its application on T4, a mid-sized autonomous vehicle developed for security applications.

  20. Planning, Execution, and Assessment of Effects-Based Operations (EBO)

    DTIC Science & Technology

    2006-05-01

    time of execution that would maximize the likelihood of achieving a desired effect. GMU has developed a methodology named ECAD-EA (Effective Courses of Action Determination - Evolutionary Algorithm) for determining effective courses of action in Effects-Based Operations (EBO).

  1. KEEL for Mission Planning

    DTIC Science & Technology

    2016-10-06

    KEEL® Technology in support of Mission Planning and Execution delivering Adaptive...Executing, and Auditing). This paper focuses on the decision-making component (#2) with the use of Knowledge Enhanced Electronic Logic (KEEL) Technology... • Eliminate "coding errors" (auto-generated code) • 100% explainable and auditable

  2. Stereotyped Behaviour in Children with Autism and Intellectual Disability: An Examination of the Executive Dysfunction Hypothesis

    ERIC Educational Resources Information Center

    Sayers, N.; Oliver, C.; Ruddick, L.; Wallis, B.

    2011-01-01

    Background: Increasing attention has been paid to the executive dysfunction hypothesis argued to underpin stereotyped behaviour in autism. The aim of this study is to investigate one component of this model, that stereotyped behaviours are related to impaired generativity and compromised behavioural inhibition, by examining whether episodes of…

  3. Theory for long memory in supply and demand

    NASA Astrophysics Data System (ADS)

    Lillo, Fabrizio; Mike, Szabolcs; Farmer, J. Doyne

    2005-06-01

    Recent empirical studies have demonstrated long-memory in the signs of orders to buy or sell in financial markets [J.-P. Bouchaud, Y. Gefen, M. Potters, and M. Wyart, Quant. Finance 4, 176 (2004); F. Lillo and J. D. Farmer, Dyn. Syst. Appl. 8, 3 (2004)]. We show how this can be caused by delays in market clearing. Under the common practice of order splitting, large orders are broken up into pieces and executed incrementally. If the size of such large orders is power-law distributed, this gives rise to power-law decaying autocorrelations in the signs of executed orders. More specifically, we show that if the cumulative distribution of large orders of volume v is proportional to v^(-α) and the size of executed orders is constant, the autocorrelation of order signs as a function of the lag τ is asymptotically proportional to τ^(-(α-1)). This is a long-memory process when α < 2. With a few caveats, this gives a good match to the data. A version of the model also shows long-memory fluctuations in order execution rates, which may be relevant for explaining the long memory of price diffusion rates.
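    The mechanism described above is easy to check numerically: draw hidden parent orders with power-law (Pareto) sizes, execute them one piece at a time interleaved across traders, and measure the autocorrelation of the executed-order signs. The arrival rate, tail exponent, and other parameters below are arbitrary; this is a crude illustration, not the paper's model in full detail.

```python
"""Quick numerical illustration of order splitting producing slowly decaying
sign autocorrelation. Parameters are arbitrary."""
import random

def simulate_signs(n_steps, alpha=1.5, seed=0):
    rng = random.Random(seed)
    signs, queue = [], []          # queue of [sign, remaining size] parent orders
    while len(signs) < n_steps:
        # occasionally a new hidden order arrives with a Pareto-distributed size
        if not queue or rng.random() < 0.05:
            size = int(rng.paretovariate(alpha)) + 1
            queue.append([rng.choice((-1, 1)), size])
        trader = rng.randrange(len(queue))          # interleave executions
        sign, _ = queue[trader]
        signs.append(sign)
        queue[trader][1] -= 1
        if queue[trader][1] == 0:
            queue.pop(trader)
    return signs

def autocorr(xs, lag):
    n, mean = len(xs) - lag, sum(xs) / len(xs)
    num = sum((xs[i] - mean) * (xs[i + lag] - mean) for i in range(n))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

signs = simulate_signs(50_000)
for lag in (1, 10, 100, 1000):
    print(lag, round(autocorr(signs, lag), 3))   # correlation decays slowly with lag
```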

  4. Theory for long memory in supply and demand.

    PubMed

    Lillo, Fabrizio; Mike, Szabolcs; Farmer, J Doyne

    2005-06-01

    Recent empirical studies have demonstrated long-memory in the signs of orders to buy or sell in financial markets [J.-P. Bouchaud, Y. Gefen, M. Potters, and M. Wyart, Quant. Finance 4, 176 (2004); F. Lillo and J. D. Farmer, Dyn. Syst. Appl. 8, 3 (2004)]. We show how this can be caused by delays in market clearing. Under the common practice of order splitting, large orders are broken up into pieces and executed incrementally. If the size of such large orders is power-law distributed, this gives rise to power-law decaying autocorrelations in the signs of executed orders. More specifically, we show that if the cumulative distribution of large orders of volume v is proportional to v^(-α) and the size of executed orders is constant, the autocorrelation of order signs as a function of the lag τ is asymptotically proportional to τ^(-(α-1)). This is a long-memory process when α < 2. With a few caveats, this gives a good match to the data. A version of the model also shows long-memory fluctuations in order execution rates, which may be relevant for explaining the long memory of price diffusion rates.

  5. Discrete Event Modeling and Massively Parallel Execution of Epidemic Outbreak Phenomena

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Seal, Sudip K

    2011-01-01

    In complex phenomena such as epidemiological outbreaks, the intensity of inherent feedback effects and the significant role of transients in the dynamics make simulation the only effective method for proactive, reactive or post-facto analysis. The spatial scale, runtime speed, and behavioral detail needed in detailed simulations of epidemic outbreaks make it necessary to use large-scale parallel processing. Here, an optimistic parallel execution of a new discrete event formulation of a reaction-diffusion simulation model of epidemic propagation is presented to dramatically increase the fidelity and speed with which epidemiological simulations can be performed. Rollback support needed during optimistic parallel execution is achieved by combining reverse computation with a small amount of incremental state saving. Parallel speedup of over 5,500 and other runtime performance metrics of the system are observed with weak-scaling execution on a small (8,192-core) Blue Gene/P system, while scalability with a weak-scaling speedup of over 10,000 is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes exceeding several hundreds of millions of individuals in the largest cases are successfully exercised to verify model scalability.
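    A discrete event formulation of epidemic propagation, in contrast to fixed-time-step simulation, advances the model from one scheduled infection or recovery event to the next. The sequential toy below illustrates only that event-queue formulation for a well-mixed SIR population; the paper's reaction-diffusion model and its optimistic parallel execution with rollback are far beyond this sketch, and all rates and population sizes are arbitrary.

```python
"""Sequential toy discrete-event SIR simulation using an event queue."""
import heapq, random

def des_sir(n=1000, beta=0.3, gamma=0.1, seed=1):
    rng = random.Random(seed)
    state = ["S"] * n
    state[0] = "I"                                         # patient zero
    events = [(rng.expovariate(gamma), "recover", 0)]      # (time, kind, person)
    heapq.heappush(events, (rng.expovariate(beta), "infect", 0))
    t_end, infected = 0.0, 1
    while events and infected > 0:
        t, kind, person = heapq.heappop(events)
        t_end = t
        if kind == "recover" and state[person] == "I":
            state[person] = "R"
            infected -= 1
        elif kind == "infect" and state[person] == "I":
            target = rng.randrange(n)                      # well-mixed random contact
            if state[target] == "S":
                state[target] = "I"
                infected += 1
                heapq.heappush(events, (t + rng.expovariate(gamma), "recover", target))
                heapq.heappush(events, (t + rng.expovariate(beta), "infect", target))
            # the infectious person keeps making contacts while infectious
            heapq.heappush(events, (t + rng.expovariate(beta), "infect", person))
    return t_end, state.count("R")

print(des_sir())   # (final event time, total individuals ever infected)
```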

  6. Executive Information Systems for Providing Next Generation Strategic Information: An Evaluation of EIS (Executive Information System) Software and Recommended Applicability within the FAA Computing Environment

    DTIC Science & Technology

    1989-01-01

    ...him in advance by analysts and developers -- an electronic version of the Performance Indicators report. Ease of Use. pcEXPRESS has an automatic link...overcome within the required timeframe. These advanced features of the EXPRESS system allow the fastest possible response to changing executive information

  7. Default Mode and Executive Networks Areas: Association with the Serial Order in Divergent Thinking

    PubMed Central

    Heinonen, Jarmo; Numminen, Jussi; Hlushchuk, Yevhen; Antell, Henrik; Taatila, Vesa; Suomala, Jyrki

    2016-01-01

    Scientific findings have suggested a two-fold structure of the cognitive process. By using the heuristic thinking mode, people automatically process information that tends to be invariant across days, whereas by using the explicit thinking mode people explicitly process information that tends to be variant compared to typical previously learned information patterns. Previous studies on creativity found an association between creativity and the brain regions in the prefrontal cortex, the anterior cingulate cortex, the default mode network and the executive network. However, which neural networks contribute to the explicit mode of thinking during idea generation remains an open question. We employed an fMRI paradigm to examine which brain regions were activated when participants (n = 16) mentally generated alternative uses for everyday objects. Most previous creativity studies required participants to verbalize responses during idea generation, whereas in this study participants produced mental alternatives without verbalizing. This study found activation in the left anterior insula when contrasting idea generation and object identification. This finding suggests that the insula (part of the brain’s salience network) plays a role in facilitating both the central executive and default mode networks to activate idea generation. We also closely investigated the effect of the serial order of the idea being generated on brain responses: The amplitude of fMRI responses correlated positively with the serial order of the idea being generated in the anterior cingulate cortex, which is part of the central executive network. Positive correlation with the serial order was also observed in the regions typically assigned to the default mode network: the precuneus/cuneus, inferior parietal lobule and posterior cingulate cortex. These networks support the explicit mode of thinking and help the individual to convert conventional mental models to new ones. The serial order correlated negatively with the BOLD responses in the posterior presupplementary motor area, left premotor cortex, right cerebellum and left inferior frontal gyrus. This finding might imply that idea generation without a verbal processing demand reflects a reduced need for new object identification during idea generation events. The results of the study are consistent with recent creativity studies, which emphasize that the creativity process involves working memory capacity to spontaneously shift between different kinds of thinking modes according to the context. PMID:27627760

  8. IAC-1.5 - INTEGRATED ANALYSIS CAPABILITY

    NASA Technical Reports Server (NTRS)

    Vos, R. G.

    1994-01-01

    The objective of the Integrated Analysis Capability (IAC) system is to provide a highly effective, interactive analysis tool for the integrated design of large structures. IAC was developed to interface programs from the fields of structures, thermodynamics, controls, and system dynamics with an executive system and a database to yield a highly efficient multi-disciplinary system. Special attention is given to user requirements such as data handling and on-line assistance with operational features, and the ability to add new modules of the user's choice at a future date. IAC contains an executive system, a database, general utilities, interfaces to various engineering programs, and a framework for building interfaces to other programs. IAC has shown itself to be effective in automating data transfer among analysis programs. The IAC system architecture is modular in design. 1) The executive module contains an input command processor, an extensive data management system, and driver code to execute the application modules. 2) Technical modules provide standalone computational capability as well as support for various solution paths or coupled analyses. 3) Graphics and model generation modules are supplied for building and viewing models. 4) Interface modules provide for the required data flow between IAC and other modules. 5) User modules can be arbitrary executable programs or JCL procedures with no pre-defined relationship to IAC. 6) Special purpose modules are included, such as MIMIC (Model Integration via Mesh Interpolation Coefficients), which transforms field values from one model to another; LINK, which simplifies incorporation of user specific modules into IAC modules; and DATAPAC, the National Bureau of Standards statistical analysis package. The IAC database contains structured files which provide a common basis for communication between modules and the executive system, and can contain unstructured files such as NASTRAN checkpoint files, DISCOS plot files, object code, etc. The user can define groups of data and relations between them. A full data manipulation and query system operates with the database. The current interface modules comprise five groups: 1) Structural analysis - IAC contains a NASTRAN interface for standalone analysis or certain structural/control/thermal combinations. IAC provides enhanced structural capabilities for normal modes and static deformation analysis via special DMAP sequences. 2) Thermal analysis - IAC supports finite element and finite difference techniques for steady state or transient analysis. There are interfaces for the NASTRAN thermal analyzer, SINDA/SINFLO, and TRASYS II. 3) System dynamics - A DISCOS interface allows full use of this simulation program for either nonlinear time domain analysis or linear frequency domain analysis. 4) Control analysis - Interfaces for the ORACLS, SAMSAN, NBOD2, and INCA programs allow a wide range of control system analyses and synthesis techniques. 5) Graphics - The graphics packages PLOT and MOSAIC are included in IAC. PLOT generates vector displays of tabular data in the form of curves, charts, correlation tables, etc., while MOSAIC generates color raster displays of either tabular of array type data. Either DI3000 or PLOT-10 graphics software is required for full graphics capability. IAC is available by license for a period of 10 years to approved licensees. The licensed program product includes one complete set of supporting documentation. 
Additional copies of the documentation may be purchased separately. IAC is written in FORTRAN 77 and has been implemented on a DEC VAX series computer operating under VMS. IAC can be executed by multiple concurrent users in batch or interactive mode. The basic central memory requirement is approximately 750KB. IAC includes the executive system, graphics modules, a database, general utilities, and the interfaces to all analysis and controls programs described above. Source code is provided for the control programs ORACLS, SAMSAN, NBOD2, and DISCOS. The following programs are also available from COSMIC a

  9. IAC-1.5 - INTEGRATED ANALYSIS CAPABILITY

    NASA Technical Reports Server (NTRS)

    Vos, R. G.

    1994-01-01

    The objective of the Integrated Analysis Capability (IAC) system is to provide a highly effective, interactive analysis tool for the integrated design of large structures. IAC was developed to interface programs from the fields of structures, thermodynamics, controls, and system dynamics with an executive system and a database to yield a highly efficient multi-disciplinary system. Special attention is given to user requirements such as data handling and on-line assistance with operational features, and the ability to add new modules of the user's choice at a future date. IAC contains an executive system, a database, general utilities, interfaces to various engineering programs, and a framework for building interfaces to other programs. IAC has shown itself to be effective in automating data transfer among analysis programs. The IAC system architecture is modular in design. 1) The executive module contains an input command processor, an extensive data management system, and driver code to execute the application modules. 2) Technical modules provide standalone computational capability as well as support for various solution paths or coupled analyses. 3) Graphics and model generation modules are supplied for building and viewing models. 4) Interface modules provide for the required data flow between IAC and other modules. 5) User modules can be arbitrary executable programs or JCL procedures with no pre-defined relationship to IAC. 6) Special purpose modules are included, such as MIMIC (Model Integration via Mesh Interpolation Coefficients), which transforms field values from one model to another; LINK, which simplifies incorporation of user specific modules into IAC modules; and DATAPAC, the National Bureau of Standards statistical analysis package. The IAC database contains structured files which provide a common basis for communication between modules and the executive system, and can contain unstructured files such as NASTRAN checkpoint files, DISCOS plot files, object code, etc. The user can define groups of data and relations between them. A full data manipulation and query system operates with the database. The current interface modules comprise five groups: 1) Structural analysis - IAC contains a NASTRAN interface for standalone analysis or certain structural/control/thermal combinations. IAC provides enhanced structural capabilities for normal modes and static deformation analysis via special DMAP sequences. 2) Thermal analysis - IAC supports finite element and finite difference techniques for steady state or transient analysis. There are interfaces for the NASTRAN thermal analyzer, SINDA/SINFLO, and TRASYS II. 3) System dynamics - A DISCOS interface allows full use of this simulation program for either nonlinear time domain analysis or linear frequency domain analysis. 4) Control analysis - Interfaces for the ORACLS, SAMSAN, NBOD2, and INCA programs allow a wide range of control system analyses and synthesis techniques. 5) Graphics - The graphics packages PLOT and MOSAIC are included in IAC. PLOT generates vector displays of tabular data in the form of curves, charts, correlation tables, etc., while MOSAIC generates color raster displays of either tabular or array type data. Either DI3000 or PLOT-10 graphics software is required for full graphics capability. IAC is available by license for a period of 10 years to approved licensees. The licensed program product includes one complete set of supporting documentation. 
Additional copies of the documentation may be purchased separately. IAC is written in FORTRAN 77 and has been implemented on a DEC VAX series computer operating under VMS. IAC can be executed by multiple concurrent users in batch or interactive mode. The basic central memory requirement is approximately 750KB. IAC includes the executive system, graphics modules, a database, general utilities, and the interfaces to all analysis and controls programs described above. Source code is provided for the control programs ORACLS, SAMSAN, NBOD2, and DISCOS. The following programs are also available from COSMIC as separate packages: NASTRAN, SINDA/SINFLO, TRASYS II, DISCOS, ORACLS, SAMSAN, NBOD2, and INCA. IAC was developed in 1985.

  10. A new parallel-vector finite element analysis software on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Qin, Jiangning; Nguyen, Duc T.

    1993-01-01

    A new parallel-vector finite element analysis software package MPFEA (Massively Parallel-vector Finite Element Analysis) is developed for large-scale structural analysis on massively parallel computers with distributed-memory. MPFEA is designed for parallel generation and assembly of the global finite element stiffness matrices as well as parallel solution of the simultaneous linear equations, since these are often the major time-consuming parts of a finite element analysis. Block-skyline storage scheme along with vector-unrolling techniques are used to enhance the vector performance. Communications among processors are carried out concurrently with arithmetic operations to reduce the total execution time. Numerical results on the Intel iPSC/860 computers (such as the Intel Gamma with 128 processors and the Intel Touchstone Delta with 512 processors) are presented, including an aircraft structure and some very large truss structures, to demonstrate the efficiency and accuracy of MPFEA.
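
    The abstract's key performance idea, overlapping interprocessor communication with arithmetic, can be sketched with non-blocking MPI calls. The following is a minimal illustration assuming the mpi4py bindings and an MPI launcher; the buffer sizes, tags, and stand-in computations are invented and are not part of MPFEA.

```python
# Hypothetical sketch (not MPFEA itself): overlap interface exchange with local
# work using non-blocking MPI, the general idea behind hiding communication
# behind arithmetic on distributed-memory machines. Requires mpi4py and an MPI
# launcher, e.g. `mpiexec -n 4 python overlap_demo.py`.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 1000
interior = np.random.rand(n_local)          # rows owned exclusively by this rank
boundary = np.random.rand(8)                # rows shared with the neighboring rank
recv_buf = np.empty_like(boundary)

left, right = (rank - 1) % size, (rank + 1) % size

# Post non-blocking sends/receives for the shared (interface) degrees of freedom.
reqs = [comm.Isend(boundary, dest=right, tag=0),
        comm.Irecv(recv_buf, source=left, tag=0)]

# While messages are in flight, do the purely local part of the assembly/solve.
local_result = np.cumsum(interior)          # stand-in for local factorization work

# Only now wait for the interface data and finish the coupled part.
MPI.Request.Waitall(reqs)
coupled = boundary + recv_buf               # stand-in for interface assembly

if rank == 0:
    print("local part:", local_result[-1], "interface part:", coupled.sum())
```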

  11. Third Congress on Information System Science and Technology

    DTIC Science & Technology

    1968-04-01

    versions of the same compiler. The "fast compile-slow execute" and the "slow compile-fast execute" gimmick is the greatest hoax ever perpetrated on the... fast such natural language analysis and translation can be accomplished. If the fairly superficial syntactic analysis of a sentence which is...two kinds of computer: a fast computer with large immediate access and bulk memory for rear echelon and large installation employment, and a

  12. Executive summary: Mod-1 wind turbine generator analysis and design report

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Activities leading to the detail design of a wind turbine generator having a nominal rating of 1.8 megawatts are reported. Topics covered include (1) system description; (2) structural dynamics; (3) stability analysis; (4) mechanical subassemblies design; (5) power generation subsystem; and (6) control and instrumentation subsystem.

  13. Trip generation rates, peaking characteristics, and vehicle mix characteristics of special West Virginia generators : executive summary.

    DOT National Transportation Integrated Search

    2000-02-01

    For a number of land uses, published trip rates were not appropriate for application in West Virginia. There are a number of so-called special generators, which are either unique to West Virginia (i.e., regional jails) or have assumed increased impor...

  14. Design of an intelligent information system for in-flight emergency assistance

    NASA Technical Reports Server (NTRS)

    Feyock, Stefan; Karamouzis, Stamos

    1991-01-01

    The present research has as its goal the development of AI tools to help flight crews cope with in-flight malfunctions. The relevant tasks in such situations include diagnosis, prognosis, and recovery plan generation. Investigation of the information requirements of these tasks has shown that the determination of paths figures largely: what components or systems are connected to what others, how are they connected, whether connections satisfying certain criteria exist, and a number of related queries. The formulation of such queries frequently requires capabilities of the second-order predicate calculus. An information system is described that features second-order logic capabilities, and is oriented toward efficient formulation and execution of such queries.

  15. How minimal executive feedback influences creative idea generation

    PubMed Central

    Camarda, Anaëlle; Agogué, Marine; Houdé, Olivier; Weil, Benoît; Le Masson, Pascal

    2017-01-01

    The fixation effect is known as one of the most dominant of the cognitive biases against creativity and limits individuals’ creative capacities in contexts of idea generation. Numerous techniques and tools have been established to help overcome these cognitive biases in various disciplines ranging from neuroscience to design sciences. Several works in the developmental cognitive sciences have discussed the importance of inhibitory control and have argued that individuals must first inhibit the spontaneous ideas that come to their mind so that they can generate creative solutions to problems. In line with the above discussions, in the present study, we performed an experiment on one hundred undergraduates from the Faculty of Psychology at Paris Descartes University, in which we investigated a minimal executive feedback-based learning process that helps individuals inhibit intuitive paths to solutions and then gradually drive their ideation paths toward creativity. Our results provide new insights into novel forms of creative leadership for idea generation. PMID:28662154

  16. XSECT: A computer code for generating fuselage cross sections - user's manual

    NASA Technical Reports Server (NTRS)

    Ames, K. R.

    1982-01-01

    A computer code, XSECT, has been developed to generate fuselage cross sections from a given area distribution and wing definition. The cross sections are generated to match the wing definition while conforming to the area requirement. An iterative procedure is used to generate each cross section. Fuselage area balancing may be included in this procedure if desired. The code is intended as an aid for engineers who must first design a wing under certain aerodynamic constraints and then design a fuselage for the wing such that the constraints remain satisfied. This report contains the information necessary for accessing and executing the code, which is written in FORTRAN to execute on the Cyber 170 series computers (NOS operating system) and produces graphical output for a Tektronix 4014 CRT. The LRC graphics software is used in combination with the interface between this software and the PLOT 10 software.
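
    The iterative procedure described above can be illustrated with a simplified sketch: scale the free portion of a candidate cross section until its enclosed area matches the prescribed value, while points pinned to the wing definition stay fixed. The function names, the bisection scheme, and the example section below are illustrative assumptions, not the XSECT algorithm.

```python
# Illustrative sketch (not the XSECT algorithm itself): iteratively scale the
# free part of a fuselage cross section so its enclosed area matches a target
# value, while points pinned to the wing definition stay fixed.
import numpy as np

def polygon_area(y, z):
    """Shoelace formula for a closed cross-section outline."""
    return 0.5 * abs(np.dot(y, np.roll(z, -1)) - np.dot(z, np.roll(y, -1)))

def fit_section(y, z, fixed, target_area, tol=1e-6, max_iter=100):
    """Scale non-fixed points radially about the centroid until the area matches."""
    y, z = y.astype(float), z.astype(float)
    cy, cz = y.mean(), z.mean()
    lo, hi = 0.1, 10.0                       # bracketing scale factors
    for _ in range(max_iter):
        s = 0.5 * (lo + hi)
        ys = np.where(fixed, y, cy + s * (y - cy))
        zs = np.where(fixed, z, cz + s * (z - cz))
        area = polygon_area(ys, zs)
        if abs(area - target_area) < tol:
            break
        lo, hi = (s, hi) if area < target_area else (lo, s)
    return ys, zs

# A coarse circular section with two points held on the wing (fixed=True).
theta = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)
y0, z0 = np.cos(theta), np.sin(theta)
fixed = np.zeros_like(theta, dtype=bool)
fixed[[0, 18]] = True                        # wing intersection points
ys, zs = fit_section(y0, z0, fixed, target_area=4.2)
print("achieved area:", polygon_area(ys, zs))
```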

  17. Strategy combination during execution of memory strategies in young and older adults.

    PubMed

    Hinault, Thomas; Lemaire, Patrick; Touron, Dayna

    2017-05-01

    The present study investigated whether people can combine two memory strategies to encode pairs of words more efficiently than with a single strategy, and age-related differences in such strategy combination. Young and older adults were asked to encode pairs of words (e.g., satellite-tunnel). For each item, participants were told to use either the interactive-imagery strategy (e.g., mentally visualising the two words and making them interact), the sentence-generation strategy (i.e., generate a sentence linking the two words), or with strategy combination (i.e., generating a sentence while mentally visualising it). Participants obtained better recall performance on items encoded with strategy combination than on items encoded with interactive-imagery or sentence-generation strategies. Moreover, we found age-related decline in such strategy combination. These findings have important implications to further our understanding of execution of memory strategies, and suggest that strategy combination occurs in a variety of cognitive domains.

  18. How minimal executive feedback influences creative idea generation.

    PubMed

    Ezzat, Hicham; Camarda, Anaëlle; Cassotti, Mathieu; Agogué, Marine; Houdé, Olivier; Weil, Benoît; Le Masson, Pascal

    2017-01-01

    The fixation effect is known as one of the most dominant of the cognitive biases against creativity and limits individuals' creative capacities in contexts of idea generation. Numerous techniques and tools have been established to help overcome these cognitive biases in various disciplines ranging from neuroscience to design sciences. Several works in the developmental cognitive sciences have discussed the importance of inhibitory control and have argued that individuals must first inhibit the spontaneous ideas that come to their mind so that they can generate creative solutions to problems. In line with the above discussions, in the present study, we performed an experiment on one hundred undergraduates from the Faculty of Psychology at Paris Descartes University, in which we investigated a minimal executive feedback-based learning process that helps individuals inhibit intuitive paths to solutions and then gradually drive their ideation paths toward creativity. Our results provide new insights into novel forms of creative leadership for idea generation.

  19. A translator writing system for microcomputer high-level languages and assemblers

    NASA Technical Reports Server (NTRS)

    Collins, W. R.; Knight, J. C.; Noonan, R. E.

    1980-01-01

    In order to implement high level languages whenever possible, a translator writing system of advanced design was developed. It is intended for routine production use by many programmers working on different projects. As well as a fairly conventional parser generator, it includes a system for the rapid generation of table driven code generators. The parser generator was developed from a prototype version. The translator writing system includes various tools for the management of the source text of a compiler under construction. In addition, it supplies various default source code sections so that its output is always compilable and executable. The system thereby encourages iterative enhancement as a development methodology by ensuring an executable program from the earliest stages of a compiler development project. The translator writing system includes PASCAL/48 compiler, three assemblers, and two compilers for a subset of HAL/S.

  20. Age differences in high frequency phasic heart rate variability and performance response to increased executive function load in three executive function tasks

    PubMed Central

    Byrd, Dana L.; Reuther, Erin T.; McNamara, Joseph P. H.; DeLucca, Teri L.; Berg, William K.

    2015-01-01

    The current study examines similarity or disparity of a frontally mediated physiological response of mental effort among multiple executive functioning tasks between children and adults. Task performance and phasic heart rate variability (HRV) were recorded in children (6 to 10 years old) and adults in an examination of age differences in executive functioning skills during periods of increased demand. Executive load levels were varied by increasing the difficulty levels of three executive functioning tasks: inhibition (IN), working memory (WM), and planning/problem solving (PL). Behavioral performance decreased in all tasks with increased executive demand in both children and adults. Adults’ phasic high frequency HRV was suppressed during the management of increased IN and WM load. Children’s phasic HRV was suppressed during the management of moderate WM load. HRV was not suppressed during either children’s or adults’ increasing load during the PL task. High frequency phasic HRV may be most sensitive to executive function tasks that have a time-response pressure, and simply requiring performance on a self-paced task requiring frontal lobe activation may not be enough to generate HRV responsivity to increasing demand. PMID:25798113

  1. Verbal and Non-verbal Fluency in Adults with Developmental Dyslexia: Phonological Processing or Executive Control Problems?

    PubMed

    Smith-Spark, James H; Henry, Lucy A; Messer, David J; Zięcik, Adam P

    2017-08-01

    The executive function of fluency describes the ability to generate items according to specific rules. Production of words beginning with a certain letter (phonemic fluency) is impaired in dyslexia, while generation of words belonging to a certain semantic category (semantic fluency) is typically unimpaired. However, in dyslexia, verbal fluency has generally been studied only in terms of overall words produced. Furthermore, performance of adults with dyslexia on non-verbal design fluency tasks has not been explored but would indicate whether deficits could be explained by executive control, rather than phonological processing, difficulties. Phonemic, semantic and design fluency tasks were presented to adults with dyslexia and without dyslexia, using fine-grained performance measures and controlling for IQ. Hierarchical regressions indicated that dyslexia predicted lower phonemic fluency, but not semantic or design fluency. At the fine-grained level, dyslexia predicted a smaller number of switches between subcategories on phonemic fluency, while dyslexia did not predict the size of phonemically related clusters of items. Overall, the results suggested that phonological processing problems were at the root of dyslexia-related fluency deficits; however, executive control difficulties could not be completely ruled out as an alternative explanation. Developments in research methodology, equating executive demands across fluency tasks, may resolve this issue. Copyright © 2017 John Wiley & Sons, Ltd.

  2. A Survey of New Trends in Symbolic Execution for Software Testing and Analysis

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Visser, Willem

    2009-01-01

    Symbolic execution is a well-known program analysis technique which represents values of program inputs with symbolic values instead of concrete (initialized) data and executes the program by manipulating program expressions involving the symbolic values. Symbolic execution was proposed over three decades ago, but recently it has found renewed interest in the research community, due in part to the progress in decision procedures, availability of powerful computers and new algorithmic developments. We provide a survey of some of the new research trends in symbolic execution, with particular emphasis on applications to test generation and program analysis. We first describe an approach that handles complex programming constructs such as input data structures, arrays, as well as multi-threading. We follow with a discussion of abstraction techniques that can be used to limit the (possibly infinite) number of symbolic configurations that need to be analyzed for the symbolic execution of looping programs. Furthermore, we describe recent hybrid techniques that combine concrete and symbolic execution to overcome some of the inherent limitations of symbolic execution, such as handling native code or availability of decision procedures for the application domain. Finally, we give a short survey of interesting new applications, such as predictive testing, invariant inference, program repair, analysis of parallel numerical programs and differential symbolic execution.
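
    The core mechanism surveyed here, symbolic inputs, per-path constraints, and a decision procedure that turns feasible path conditions into concrete test inputs, can be shown in a few lines. The sketch below assumes the z3-solver Python bindings and an invented two-branch example function; it is only a toy illustration of the technique, not code from the survey.

```python
# Minimal illustration of the core idea: enumerate the path conditions of a tiny
# program over symbolic inputs and ask a solver for concrete inputs that
# exercise each path. Requires the `z3-solver` package; the example function is
# invented for illustration.
from z3 import Int, Solver, And, Not, sat

def paths_of_example():
    """Path conditions for: if x > 10: (if y < x: A else: B) else: C"""
    x, y = Int("x"), Int("y")
    return {
        "A": And(x > 10, y < x),
        "B": And(x > 10, Not(y < x)),
        "C": Not(x > 10),
    }, (x, y)

def generate_tests():
    conditions, (x, y) = paths_of_example()
    tests = {}
    for label, pc in conditions.items():
        s = Solver()
        s.add(pc)                       # path condition for this branch outcome
        if s.check() == sat:            # feasible path -> extract a concrete test
            m = s.model()
            def val(v):
                return m.eval(v, model_completion=True).as_long()
            tests[label] = (val(x), val(y))
    return tests

print(generate_tests())   # one concrete (x, y) pair per feasible path
```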

  3. The Broad Superintendents Academy, 2007

    ERIC Educational Resources Information Center

    Broad Foundation, 2007

    2007-01-01

    The Broad Superintendents Academy is an executive training program that identifies and prepares prominent leaders--executives with experience successfully leading large organizations and a passion for public service--then places them in urban school districts to dramatically improve the quality of education for America's students. This brochure…

  4. The Impact of Religiously Affiliated Universities and Courses in Ethics and Religious Studies on Students' Attitude toward Business Ethics

    ERIC Educational Resources Information Center

    Comegys, Charles

    2010-01-01

    Unfortunate unethical events continue in the business arena, and now more than ever these judgmental shortcomings focus attention on the ethics of business executives. Thus, colleges and universities must continue to address business ethics as they prepare and train the next generation of executives. Educational institutions should be…

  5. Web Program for Development of GUIs for Cluster Computers

    NASA Technical Reports Server (NTRS)

    Czikmantory, Akos; Cwik, Thomas; Klimeck, Gerhard; Hua, Hook; Oyafuso, Fabiano; Vinyard, Edward

    2003-01-01

    WIGLAF (a Web Interface Generator and Legacy Application Facade) is a computer program that provides a Web-based, distributed, graphical-user-interface (GUI) framework that can be adapted to any of a broad range of application programs, written in any programming language, that are executed remotely on any cluster computer system. WIGLAF enables the rapid development of a GUI for controlling and monitoring a specific application program running on the cluster and for transferring data to and from the application program. The only prerequisite for the execution of WIGLAF is a Web-browser program on a user's personal computer connected with the cluster via the Internet. WIGLAF has a client/server architecture: The server component is executed on the cluster system, where it controls the application program and serves data to the client component. The client component is an applet that runs in the Web browser. WIGLAF utilizes the Extensible Markup Language to hold all data associated with the application software, Java to enable platform-independent execution on the cluster system and the display of a GUI generator through the browser, and the Java Remote Method Invocation software package to provide simple, effective client/server networking.

  6. Automated procedure execution for space vehicle autonomous control

    NASA Technical Reports Server (NTRS)

    Broten, Thomas A.; Brown, David A.

    1990-01-01

    Increased operational autonomy and reduced operating costs have become critical design objectives in next-generation NASA and DoD space programs. The objective is to develop a semi-automated system for intelligent spacecraft operations support. The Spacecraft Operations and Anomaly Resolution System (SOARS) is presented as a standardized, model-based architecture for performing High-Level Tasking, Status Monitoring and automated Procedure Execution Control for a variety of spacecraft. The particular focus is on the Procedure Execution Control module. A hierarchical procedure network is proposed as the fundamental means for specifying and representing arbitrary operational procedures. A separate procedure interpreter controls automatic execution of the procedure, taking into account the current status of the spacecraft as maintained in an object-oriented spacecraft model.
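
    A hierarchical procedure network with an interpreter that consults the current spacecraft state can be sketched as a small tree walk. The classes, the example procedure, and the toy state dictionary below are hypothetical and simply illustrate the general idea, not the SOARS design.

```python
# Toy sketch (names and structure are hypothetical, not the SOARS design):
# a procedure is a tree whose leaves are executable steps and whose internal
# nodes group sub-procedures; the interpreter walks the tree and checks the
# current spacecraft state before each step.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Step:
    name: str
    action: Callable[[dict], None]
    precondition: Callable[[dict], bool] = lambda state: True
    children: List["Step"] = field(default_factory=list)

def execute(node: Step, state: dict) -> bool:
    """Depth-first execution; abort the branch if a precondition fails."""
    if not node.precondition(state):
        print(f"skip {node.name}: precondition not met")
        return False
    node.action(state)
    return all(execute(child, state) for child in node.children)

# Example: a two-level "safe mode entry" procedure over a toy state dictionary.
state = {"battery_v": 27.5, "wheels_on": True}
procedure = Step(
    "enter_safe_mode", lambda s: print("begin safe mode"),
    children=[
        Step("spin_down_wheels",
             lambda s: s.update(wheels_on=False),
             precondition=lambda s: s["wheels_on"]),
        Step("load_shed",
             lambda s: print("shedding loads"),
             precondition=lambda s: s["battery_v"] < 28.0),
    ],
)
print("procedure completed:", execute(procedure, state))
```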

  7. On the Information Content of Program Traces

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Hood, Robert; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    Program traces are used for analysis of program performance, memory utilization, and communications as well as for program debugging. The trace contains records of execution events generated by monitoring units inserted into the program. The trace size limits the resolution of execution events and restricts the user's ability to analyze the program execution. We present a study of the information content of program traces and develop a coding scheme which reduces the trace size to the limit given by the trace entropy. We apply the coding to the traces of AIMS-instrumented programs executed on the IBM SP2 and the SGI Power Challenge and compare it with other coding methods. Our technique shows that the size of the trace can be reduced by more than a factor of 5.
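
    The information-theoretic argument can be illustrated directly: the empirical entropy of the event stream bounds how compactly a trace can be coded, and a fixed-width record format may be several times larger than that bound. The event names and distribution below are made up for illustration.

```python
# Small sketch of the underlying information-theoretic argument: the empirical
# entropy of the event stream bounds how far a trace can be compressed, and a
# fixed-width record format can be far from that bound. Event names are made up.
import math
from collections import Counter

rare = [f"evt{i}" for i in range(15)]
trace = ["compute"] * 1350 + rare * 10           # skewed toy event stream

counts = Counter(trace)
n = len(trace)
entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())  # bits/event

fixed_bits = math.ceil(math.log2(len(counts)))   # naive fixed-width code per event
print(f"events: {n}, distinct: {len(counts)}")
print(f"entropy: {entropy:.3f} bits/event vs fixed-width: {fixed_bits} bits/event")
print(f"best-case size ratio: {fixed_bits / entropy:.2f}x")
```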

  8. Symbolic Execution Enhanced System Testing

    NASA Technical Reports Server (NTRS)

    Davies, Misty D.; Pasareanu, Corina S.; Raman, Vishwanath

    2012-01-01

    We describe a testing technique that uses information computed by symbolic execution of a program unit to guide the generation of inputs to the system containing the unit, in such a way that the unit's, and hence the system's, coverage is increased. The symbolic execution computes unit constraints at run-time, along program paths obtained by system simulations. We use machine learning techniques (treatment learning and function fitting) to approximate the system input constraints that will lead to the satisfaction of the unit constraints. Execution of system input predictions either uncovers new code regions in the unit under analysis or provides information that can be used to improve the approximation. We have implemented the technique and we have demonstrated its effectiveness on several examples, including one from the aerospace domain.

  9. Quasi-automatic 3D finite element model generation for individual single-rooted teeth and periodontal ligament.

    PubMed

    Clement, R; Schneider, J; Brambs, H-J; Wunderlich, A; Geiger, M; Sander, F G

    2004-02-01

    The paper demonstrates how to generate an individual 3D volume model of a human single-rooted tooth using an automatic workflow. It can be implemented into finite element simulation. In several computational steps, computed tomography data of patients are used to obtain the global coordinates of the tooth's surface. First, the large number of geometric data is processed with several self-developed algorithms for a significant reduction. The most important task is to keep geometrical information of the real tooth. The second main part includes the creation of the volume model for tooth and periodontal ligament (PDL). This is realized with a continuous free form surface of the tooth based on the remaining points. Generating such irregular objects for numerical use in biomechanical research normally requires enormous manual effort and time. The finite element mesh of the tooth, consisting of hexahedral elements, is composed of different materials: dentin, PDL and surrounding alveolar bone. It is capable of simulating tooth movement in a finite element analysis and may give valuable information for a clinical approach without the restrictions of tetrahedral elements. The mesh generator of FE software ANSYS executed the mesh process for hexahedral elements successfully.

  10. New bounding and decomposition approaches for MILP investment problems: Multi-area transmission and generation planning under policy constraints

    DOE PAGES

    Munoz, F. D.; Hobbs, B. F.; Watson, J. -P.

    2016-02-01

    A novel two-phase bounding and decomposition approach to compute optimal and near-optimal solutions to large-scale mixed-integer investment planning problems is proposed; it considers a large number of operating subproblems, each of which is a convex optimization. Our motivating application is the planning of power transmission and generation in which policy constraints are designed to incentivize high amounts of intermittent generation in electric power systems. The bounding phase exploits Jensen’s inequality to define a lower bound, which we extend to stochastic programs that use expected-value constraints to enforce policy objectives. The decomposition phase, in which the bounds are tightened, improves upon the standard Benders’ algorithm by accelerating the convergence of the bounds. The lower bound is tightened by using a Jensen’s inequality-based approach to introduce an auxiliary lower bound into the Benders master problem. Upper bounds for both phases are computed using a sub-sampling approach executed on a parallel computer system. Numerical results show that only the bounding phase is necessary if loose optimality gaps are acceptable, but the decomposition phase is required to attain tighter optimality gaps. Moreover, use of both phases performs better, in terms of convergence speed, than attempting to solve the problem using just the bounding phase or regular Benders decomposition separately.
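
    The Jensen's-inequality bound used in the bounding phase can be illustrated on a one-variable capacity-planning toy rather than the paper's MILP: because the recourse cost is convex in the uncertain demand, solving the problem with the expected demand yields a lower bound on the optimal expected cost. The costs, demand distribution, and brute-force search below are illustrative assumptions.

```python
# Toy numerical illustration (not the paper's MILP) of the Jensen's-inequality
# bound used in the bounding phase: because the recourse cost is convex in the
# uncertain demand, solving the problem with the *expected* demand gives a lower
# bound on the true expected cost over all scenarios.
import numpy as np

rng = np.random.default_rng(0)
demand = rng.uniform(50.0, 150.0, size=1000)   # demand scenarios (e.g. peak load)
build_cost, shortfall_cost = 1.0, 3.0          # $/MW built, $/MW of unmet demand

def expected_cost(capacity, d):
    """First-stage build cost plus expected second-stage shortfall penalty."""
    return build_cost * capacity + shortfall_cost * np.maximum(d - capacity, 0).mean()

grid = np.linspace(0.0, 200.0, 2001)           # brute-force over the single decision

true_opt = min(expected_cost(x, demand) for x in grid)          # stochastic problem
jensen_lb = min(expected_cost(x, demand.mean()) for x in grid)  # expected-value problem

print(f"optimal expected cost : {true_opt:.2f}")
print(f"Jensen lower bound    : {jensen_lb:.2f} (always <= the value above)")
```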

  11. New bounding and decomposition approaches for MILP investment problems: Multi-area transmission and generation planning under policy constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munoz, F. D.; Hobbs, B. F.; Watson, J. -P.

    A novel two-phase bounding and decomposition approach to compute optimal and near-optimal solutions to large-scale mixed-integer investment planning problems is proposed; it considers a large number of operating subproblems, each of which is a convex optimization. Our motivating application is the planning of power transmission and generation in which policy constraints are designed to incentivize high amounts of intermittent generation in electric power systems. The bounding phase exploits Jensen’s inequality to define a lower bound, which we extend to stochastic programs that use expected-value constraints to enforce policy objectives. The decomposition phase, in which the bounds are tightened, improves upon the standard Benders’ algorithm by accelerating the convergence of the bounds. The lower bound is tightened by using a Jensen’s inequality-based approach to introduce an auxiliary lower bound into the Benders master problem. Upper bounds for both phases are computed using a sub-sampling approach executed on a parallel computer system. Numerical results show that only the bounding phase is necessary if loose optimality gaps are acceptable, but the decomposition phase is required to attain tighter optimality gaps. Moreover, use of both phases performs better, in terms of convergence speed, than attempting to solve the problem using just the bounding phase or regular Benders decomposition separately.

  12. Central executive involvement in children's spatial memory.

    PubMed

    Ang, Su Yin; Lee, Kerry

    2008-11-01

    Previous research with adults found that spatial short-term and working memory tasks impose similar demands on executive resources. We administered spatial short-term and working memory tasks to 8- and 11-year-olds in three separate experiments. In Experiments 1 and 2 an executive suppression task (random number generation) was found to impair performances on a short-term memory task (Corsi blocks), a working memory task (letter rotation), and a spatial visualisation task (paper folding). In Experiment 3 an articulatory suppression task only impaired performance on the working memory task. These results suggest that short-term and working memory performances are dependent on executive resources. The degree to which the short-term memory task was dependent on executive resources was expected to be related to the amount of experience children have had with such tasks. Yet we found no significant age-related suppression effects. This was attributed to differences in employment of cognitive strategies by the older children.

  13. [An approach to the executive functions in autism spectrum disorder].

    PubMed

    Martos-Pérez, Juan; Paula-Pérez, Isabel

    2011-03-01

    The psychological hypothesis of executive dysfunction plays a crucial role in explaining the behavioural phenotype of persons with autism spectrum disorders (ASD), along with other hypotheses such as the deficit in the theory of mind or the weak central coherence hypothesis. Yet, none of these hypotheses are mutually exclusive and behaviours that have their origins in one of these three hypotheses are also shaped and upheld by other processes and factors. This article reviews the behavioural manifestation and current state of research on the executive functions in persons with ASD. It also examines its impact on planning, mental flexibility and cognitive skills, generativity, response inhibition, mentalist skills and sense of activity. Although executive dysfunction has become more significant as a hypothesis explaining persons with ASD, there remain some important difficulties in need of further, more detailed research. Moreover, very few intervention programmes have been proved to be effective in minimising the effects of executive dysfunction in autism.

  14. A quantum physical design flow using ILP and graph drawing

    NASA Astrophysics Data System (ADS)

    Yazdani, Maryam; Saheb Zamani, Morteza; Sedighi, Mehdi

    2013-10-01

    Implementing large-scale quantum circuits is one of the challenges of quantum computing. One of the central challenges of accurately modeling the architecture of these circuits is to schedule a quantum application and generate the layout while taking into account the cost of communications and classical resources as well as the maximum exploitable parallelism. In this paper, we present and evaluate a design flow for arbitrary quantum circuits in ion trap technology. Our design flow consists of two parts. First, a scheduler takes a description of a circuit and finds the best order for the execution of its quantum gates using integer linear programming regarding the classical resources (qubits) and instruction dependencies. Then a layout generator receives the schedule produced by the scheduler and generates a layout for this circuit using a graph-drawing algorithm. Our experimental results show that the proposed flow decreases the average latency of quantum circuits by about 11 % for a set of attempted benchmarks and by about 9 % for another set of benchmarks compared with the best in literature.
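
    As a rough stand-in for the ILP scheduler, the dependency-respecting scheduling step can be sketched with a greedy as-soon-as-possible list schedule under a cap on concurrent gates. The circuit and the capacity limit below are invented, and the greedy rule is a plain simplification of the integer-linear-programming formulation used in the paper.

```python
# Toy substitute for the paper's ILP scheduler: an ASAP list schedule that
# respects gate dependencies (shared qubits) and a cap on how many gates may run
# concurrently. The circuit and the capacity limit are invented for illustration.
from collections import defaultdict

# gate -> qubits it touches, in program order
circuit = [
    ("h1", ["q0"]), ("h2", ["q1"]),
    ("cx1", ["q0", "q1"]), ("h3", ["q2"]),
    ("cx2", ["q1", "q2"]), ("m1", ["q0"]), ("m2", ["q2"]),
]
max_parallel = 2                          # limit on simultaneously executing gates

def asap_schedule(gates, cap):
    last_use = {}                         # qubit -> timestep of its last gate
    load = defaultdict(int)               # timestep -> gates already placed
    schedule = {}
    for name, qubits in gates:
        t = max((last_use.get(q, -1) for q in qubits), default=-1) + 1
        while load[t] >= cap:             # push later if the timestep is full
            t += 1
        schedule[name] = t
        load[t] += 1
        for q in qubits:
            last_use[q] = t
    return schedule

sched = asap_schedule(circuit, max_parallel)
for name, t in sorted(sched.items(), key=lambda kv: kv[1]):
    print(f"t={t}: {name}")
print("latency (timesteps):", max(sched.values()) + 1)
```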

  15. Arcade: A Web-Java Based Framework for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Chen, Zhikai; Maly, Kurt; Mehrotra, Piyush; Zubair, Mohammad; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    Distributed heterogeneous environments are being increasingly used to execute a variety of large size simulations and computational problems. We are developing Arcade, a web-based environment to design, execute, monitor, and control distributed applications. These targeted applications consist of independent heterogeneous modules which can be executed on a distributed heterogeneous environment. In this paper we describe the overall design of the system and discuss the prototype implementation of the core functionalities required to support such a framework.

  16. Bio-Docklets: virtualization containers for single-step execution of NGS pipelines.

    PubMed

    Kim, Baekdoo; Ali, Thahmina; Lijeron, Carlos; Afgan, Enis; Krampis, Konstantinos

    2017-08-01

    Processing of next-generation sequencing (NGS) data requires significant technical skills, involving installation, configuration, and execution of bioinformatics data pipelines, in addition to specialized postanalysis visualization and data mining software. In order to address some of these challenges, developers have leveraged virtualization containers toward seamless deployment of preconfigured bioinformatics software and pipelines on any computational platform. We present an approach for abstracting the complex data operations of multistep, bioinformatics pipelines for NGS data analysis. As examples, we have deployed 2 pipelines for RNA sequencing and chromatin immunoprecipitation sequencing, preconfigured within Docker virtualization containers we call Bio-Docklets. Each Bio-Docklet exposes a single data input and output endpoint and, from a user perspective, running the pipelines is as simple as running a single bioinformatics tool. This is achieved using a "meta-script" that automatically starts the Bio-Docklets and controls the pipeline execution through the BioBlend software library and the Galaxy Application Programming Interface. The pipeline output is postprocessed by integration with the Visual Omics Explorer framework, providing interactive data visualizations that users can access through a web browser. Our goal is to enable easy access to NGS data analysis pipelines for nonbioinformatics experts on any computing environment, whether a laboratory workstation, university computer cluster, or a cloud service provider. Beyond end users, the Bio-Docklets also enable developers to programmatically deploy and run a large number of pipeline instances for concurrent analysis of multiple datasets. © The Authors 2017. Published by Oxford University Press.
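
    The single input/output endpoint idea behind the meta-script can be sketched as a thin wrapper that launches a preconfigured container with one input mount and one output mount. The image name, mount points, and command-line usage below are hypothetical, and the real Bio-Docklets drive Galaxy through the BioBlend API rather than this simplified flow.

```python
# Hedged sketch of the single-endpoint idea behind a meta-script: launch a
# preconfigured pipeline container, hand it one input directory, and collect one
# output directory. Image name, mount points, and paths are hypothetical; the
# actual Bio-Docklets control Galaxy through the BioBlend API instead.
import subprocess
import sys
from pathlib import Path

def run_pipeline(image: str, input_dir: Path, output_dir: Path) -> int:
    output_dir.mkdir(parents=True, exist_ok=True)
    cmd = [
        "docker", "run", "--rm",
        "-v", f"{input_dir.resolve()}:/data/input:ro",    # single input endpoint
        "-v", f"{output_dir.resolve()}:/data/output",     # single output endpoint
        image,
    ]
    print("launching:", " ".join(cmd))
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    # Usage: python run_pipeline.py reads/ results/
    code = run_pipeline("example/rnaseq-pipeline:latest",  # hypothetical image
                        Path(sys.argv[1]), Path(sys.argv[2]))
    sys.exit(code)
```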

  17. Bio-Docklets: virtualization containers for single-step execution of NGS pipelines

    PubMed Central

    Kim, Baekdoo; Ali, Thahmina; Lijeron, Carlos; Afgan, Enis

    2017-01-01

    Processing of next-generation sequencing (NGS) data requires significant technical skills, involving installation, configuration, and execution of bioinformatics data pipelines, in addition to specialized postanalysis visualization and data mining software. In order to address some of these challenges, developers have leveraged virtualization containers toward seamless deployment of preconfigured bioinformatics software and pipelines on any computational platform. We present an approach for abstracting the complex data operations of multistep, bioinformatics pipelines for NGS data analysis. As examples, we have deployed 2 pipelines for RNA sequencing and chromatin immunoprecipitation sequencing, preconfigured within Docker virtualization containers we call Bio-Docklets. Each Bio-Docklet exposes a single data input and output endpoint and, from a user perspective, running the pipelines is as simple as running a single bioinformatics tool. This is achieved using a “meta-script” that automatically starts the Bio-Docklets and controls the pipeline execution through the BioBlend software library and the Galaxy Application Programming Interface. The pipeline output is postprocessed by integration with the Visual Omics Explorer framework, providing interactive data visualizations that users can access through a web browser. Our goal is to enable easy access to NGS data analysis pipelines for nonbioinformatics experts on any computing environment, whether a laboratory workstation, university computer cluster, or a cloud service provider. Beyond end users, the Bio-Docklets also enable developers to programmatically deploy and run a large number of pipeline instances for concurrent analysis of multiple datasets. PMID:28854616

  18. A Program Management Framework for Facilities Managers

    ERIC Educational Resources Information Center

    King, Dan

    2012-01-01

    The challenge faced by senior facility leaders is not how to execute a single project, but rather, how to successfully execute a large program consisting of hundreds of projects. Senior facilities officers at universities, school districts, hospitals, airports, and other organizations with extensive facility inventories, typically manage project…

  19. Executive Functions in Children with Communications Impairments, in Relation to Autistic Symptomatology. I: Generativity

    ERIC Educational Resources Information Center

    Bishop, Dorothy V. M.; Norbury, Courtenay Frazier

    2005-01-01

    Previous research has found that people with autism generate few novel responses in ideational fluency tasks, and it has been suggested this deficit is a specific correlate of stereotyped/repetitive behavior. We assessed generativity in children with pragmatic language impairment (PLI) who showed communicative abnormalities resembling those seen…

  20. Adaptive runtime for a multiprocessing API

    DOEpatents

    Antao, Samuel F.; Bertolli, Carlo; Eichenberger, Alexandre E.; O'Brien, John K.

    2016-11-15

    A computer-implemented method includes selecting a runtime for executing a program. The runtime includes a first combination of feature implementations, where each feature implementation implements a feature of an application programming interface (API). Execution of the program is monitored, and the execution uses the runtime. Monitor data is generated based on the monitoring. A second combination of feature implementations are selected, by a computer processor, where the selection is based at least in part on the monitor data. The runtime is modified by activating the second combination of feature implementations to replace the first combination of feature implementations.
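
    The claimed mechanism, monitor an execution that uses one combination of feature implementations and then activate a different combination based on the monitor data, can be sketched in a few lines. The class, feature names, and switching policy below are invented for illustration and are not taken from the patent.

```python
# Toy sketch of the patent's general mechanism (names invented): a runtime holds
# one combination of feature implementations, collects monitor data while the
# program runs, and swaps in a different combination when the data suggests it.
import time
from typing import Callable, Dict

# Two interchangeable implementations of the same API feature ("parallel_for").
def serial_for(work, items):
    for it in items:
        work(it)

def chunked_for(work, items):
    for i in range(0, len(items), 64):       # stand-in for a threaded variant
        for it in items[i:i + 64]:
            work(it)

class AdaptiveRuntime:
    def __init__(self):
        self.features: Dict[str, Callable] = {"parallel_for": serial_for}
        self.monitor: Dict[str, float] = {}

    def call(self, feature: str, *args):
        start = time.perf_counter()
        self.features[feature](*args)
        self.monitor[feature] = time.perf_counter() - start   # monitor data

    def adapt(self):
        # Policy: if the last parallel_for took "too long", switch implementations.
        if self.monitor.get("parallel_for", 0.0) > 0.01:
            self.features["parallel_for"] = chunked_for
            print("runtime adapted: parallel_for -> chunked_for")

rt = AdaptiveRuntime()
items = list(range(200_000))
rt.call("parallel_for", lambda x: x * x, items)   # monitored execution
rt.adapt()                                        # pick a new combination
rt.call("parallel_for", lambda x: x * x, items)   # runs the swapped-in variant
```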

  1. Adaptive runtime for a multiprocessing API

    DOEpatents

    Antao, Samuel F.; Bertolli, Carlo; Eichenberger, Alexandre E.; O'Brien, John K.

    2016-10-11

    A computer-implemented method includes selecting a runtime for executing a program. The runtime includes a first combination of feature implementations, where each feature implementation implements a feature of an application programming interface (API). Execution of the program is monitored, and the execution uses the runtime. Monitor data is generated based on the monitoring. A second combination of feature implementations are selected, by a computer processor, where the selection is based at least in part on the monitor data. The runtime is modified by activating the second combination of feature implementations to replace the first combination of feature implementations.

  2. Reporting to Improve Reproducibility and Facilitate Validity Assessment for Healthcare Database Studies V1.0.

    PubMed

    Wang, Shirley V; Schneeweiss, Sebastian; Berger, Marc L; Brown, Jeffrey; de Vries, Frank; Douglas, Ian; Gagne, Joshua J; Gini, Rosa; Klungel, Olaf; Mullins, C Daniel; Nguyen, Michael D; Rassen, Jeremy A; Smeeth, Liam; Sturkenboom, Miriam

    2017-09-01

    Defining a study population and creating an analytic dataset from longitudinal healthcare databases involves many decisions. Our objective was to catalogue scientific decisions underpinning study execution that should be reported to facilitate replication and enable assessment of validity of studies conducted in large healthcare databases. We reviewed key investigator decisions required to operate a sample of macros and software tools designed to create and analyze analytic cohorts from longitudinal streams of healthcare data. A panel of academic, regulatory, and industry experts in healthcare database analytics discussed and added to this list. Evidence generated from large healthcare encounter and reimbursement databases is increasingly being sought by decision-makers. Varied terminology is used around the world for the same concepts. Agreeing on terminology and which parameters from a large catalogue are the most essential to report for replicable research would improve transparency and facilitate assessment of validity. At a minimum, reporting for a database study should provide clarity regarding operational definitions for key temporal anchors and their relation to each other when creating the analytic dataset, accompanied by an attrition table and a design diagram. A substantial improvement in reproducibility, rigor and confidence in real world evidence generated from healthcare databases could be achieved with greater transparency about operational study parameters used to create analytic datasets from longitudinal healthcare databases. © 2017 The Authors. Pharmacoepidemiology & Drug Safety Published by John Wiley & Sons Ltd.

  3. Exact-Differential Large-Scale Traffic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanai, Masatoshi; Suzumura, Toyotaro; Theodoropoulos, Georgios

    2015-01-01

    Analyzing large-scale traffic by simulation requires repeated execution with various patterns of scenarios or parameters. Such repeated execution introduces considerable redundancy because the change from one scenario to the next is usually minor, for example, blocking only one road or changing the speed limit of several roads. In this paper, we propose a new redundancy reduction technique, called exact-differential simulation, which simulates only the changed scenarios in later executions while producing exactly the same results as a whole simulation. The paper consists of two main efforts: (i) a key idea and algorithm of the exact-differential simulation, and (ii) a method to build large-scale traffic simulation on top of the exact-differential simulation. In experiments on a Tokyo traffic simulation, the exact-differential simulation achieves a 7.26x elapsed-time improvement on average, and a 2.26x improvement even in the worst case, over whole simulation.

  4. Next Generation * Natural Gas (NG)2 Information Requirements--Executive Summary

    EIA Publications

    2000-01-01

    The Energy Information Administration (EIA) has initiated the Next Generation * Natural Gas (NG)2 project to design and implement a new and comprehensive information program for natural gas to meet customer requirements in the post-2000 time frame.

  5. Lunar mission safety and rescue: Executive summary

    NASA Technical Reports Server (NTRS)

    1971-01-01

    An executive summary is presented of the escape/rescue and the hazards analyses for manned missions and operations in the 1980 time frame. The method of approach, basic data generated, and significant results are outlined, and highlights of the two analyses are given. Areas in which research or technical development efforts could improve mission safety, and specific suggestions for additional effort studies on safety analyses are listed.

  6. The Socioeconomic Benefits Generated by Pima Community College. Executive Summary [and] Volume 1: Main Report.

    ERIC Educational Resources Information Center

    Christophersen, Kjell A.; Robison, M. Henry

    This paper examines the ways in which the State of Arizona and the local economy benefit from the presence of the Pima Community College (PCC) District. After the Executive Summary, Volume 1, the Main Report, discusses findings from the study. The Pima Community College District paid $68.2 million in direct faculty and staff wages and salaries in…

  7. The mediating role of metacognition in the relationship between executive function and self-regulated learning.

    PubMed

    Follmer, D Jake; Sperling, Rayne A

    2016-12-01

    Researchers have demonstrated significant relations among executive function, metacognition, and self-regulated learning. However, prior research emphasized the use of indirect measures of executive function and did not evaluate how specific executive functions are related to participants' self-regulated learning. The primary goals of the current study were to examine and test the relations among executive function, metacognition, and self-regulated learning as well as to examine how self-regulated learning is informed by executive function. The sample comprised 117 undergraduate students attending a large, Mid-Atlantic research university in the United States. Participants were individually administered direct and indirect measures of executive function, metacognition, and self-regulated learning. A mediation model specifying the relations among the regulatory constructs was proposed. In multiple linear regression analyses, executive function predicted metacognition and self-regulated learning. Direct measures of inhibition and shifting accounted for a significant amount of the variance in metacognition and self-regulated learning beyond an indirect measure of executive functioning. Separate mediation analyses indicated that metacognition mediated the relationship between executive functioning and self-regulated learning as well as between specific executive functions and self-regulated learning. The findings of this study are supported by previous research documenting the relations between executive function and self-regulated learning, and extend prior research by examining the manner in which executive function and self-regulated learning are linked. The findings provide initial support for executive functions as key processes, mediated by metacognition, that predict self-regulated learning. Implications for the contribution of executive functions to self-regulated learning are discussed. © 2016 The British Psychological Society.
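
    The statistical model being tested, a simple mediation in which the indirect effect is the product of the a and b paths with a bootstrap interval, can be sketched on synthetic data. The effect sizes and sample below are invented; this is not the authors' analysis code.

```python
# Sketch of the mediation model tested in studies like this one (executive
# function -> metacognition -> self-regulated learning) on synthetic data, with
# a bootstrap interval for the indirect effect a*b. Not the authors' code.
import numpy as np

rng = np.random.default_rng(1)
n = 117
ef = rng.normal(size=n)                          # executive function (predictor)
meta = 0.5 * ef + rng.normal(scale=0.8, size=n)  # metacognition (mediator)
srl = 0.4 * meta + 0.1 * ef + rng.normal(scale=0.8, size=n)  # self-regulated learning

def slope(y, *predictors):
    """OLS coefficient of the first predictor, controlling for the rest."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

a = slope(meta, ef)            # path a: EF -> metacognition
b = slope(srl, meta, ef)       # path b: metacognition -> SRL, controlling for EF
indirect = a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(slope(meta[idx], ef[idx]) * slope(srl[idx], meta[idx], ef[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {indirect:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```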

  8. 3D full-field quantification of cell-induced large deformations in fibrillar biomaterials by combining non-rigid image registration with label-free second harmonic generation.

    PubMed

    Jorge-Peñas, Alvaro; Bové, Hannelore; Sanen, Kathleen; Vaeyens, Marie-Mo; Steuwe, Christian; Roeffaers, Maarten; Ameloot, Marcel; Van Oosterwyck, Hans

    2017-08-01

    To advance our current understanding of cell-matrix mechanics and its importance for biomaterials development, advanced three-dimensional (3D) measurement techniques are necessary. Cell-induced deformations of the surrounding matrix are commonly derived from the displacement of embedded fiducial markers, as part of traction force microscopy (TFM) procedures. However, these fluorescent markers may alter the mechanical properties of the matrix or can be taken up by the embedded cells, and therefore influence cellular behavior and fate. In addition, the currently developed methods for calculating cell-induced deformations are generally limited to relatively small deformations, with displacement magnitudes and strains typically of the order of a few microns and less than 10% respectively. Yet, large, complex deformation fields can be expected from cells exerting tractions in fibrillar biomaterials, like collagen. To circumvent these hurdles, we present a technique for the 3D full-field quantification of large cell-generated deformations in collagen, without the need of fiducial markers. We applied non-rigid, Free Form Deformation (FFD)-based image registration to compute full-field displacements induced by MRC-5 human lung fibroblasts in a collagen type I hydrogel by solely relying on second harmonic generation (SHG) from the collagen fibrils. By executing comparative experiments, we show that comparable displacement fields can be derived from both fibrils and fluorescent beads. SHG-based fibril imaging can circumvent all described disadvantages of using fiducial markers. This approach allows measuring 3D full-field deformations under large displacement (of the order of 10 μm) and strain regimes (up to 40%). As such, it holds great promise for the study of large cell-induced deformations as an inherent component of cell-biomaterial interactions and cell-mediated biomaterial remodeling. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Experiments with Test Case Generation and Runtime Analysis

    NASA Technical Reports Server (NTRS)

    Artho, Cyrille; Drusinsky, Doron; Goldberg, Allen; Havelund, Klaus; Lowry, Mike; Pasareanu, Corina; Rosu, Grigore; Visser, Willem; Koga, Dennis (Technical Monitor)

    2003-01-01

    Software testing is typically an ad hoc process where human testers manually write many test inputs and expected test results, perhaps automating their execution in a regression suite. This process is cumbersome and costly. This paper reports preliminary results on an approach to further automate this process. The approach consists of combining automated test case generation based on systematically exploring the program's input domain, with runtime analysis, where execution traces are monitored and verified against temporal logic specifications, or analyzed using advanced algorithms for detecting concurrency errors such as data races and deadlocks. The approach suggests generating specifications dynamically per input instance rather than statically once-and-for-all. The paper describes experiments with variants of this approach in the context of two examples, a planetary rover controller and a spacecraft fault protection system.
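
    The runtime-analysis half of the approach can be sketched as a monitor that checks each execution trace of an instrumented program against a simple temporal property, here that every acquired resource is released and never acquired twice. The toy program, the generated command sequences, and the property are invented for illustration.

```python
# Sketch of the runtime-analysis half of the approach: drive a (toy) program
# with generated inputs and check each emitted trace against a simple temporal
# property: "every acquired resource is released and never acquired twice".
# The program, events, and property are invented for illustration.
import random

def toy_controller(commands):
    """Instrumented toy program: returns the trace of events it executed."""
    trace = []
    held = False
    for cmd in commands:
        if cmd == "acquire":
            trace.append(("acquire", "motor"))   # note: no guard, bug on purpose
            held = True
        elif cmd == "release" and held:
            trace.append(("release", "motor"))
            held = False
    return trace

def check_property(trace):
    """Monitor: no double acquire; everything acquired is eventually released."""
    held = set()
    for event, res in trace:
        if event == "acquire":
            if res in held:
                return False, f"double acquire of {res}"
            held.add(res)
        elif event == "release":
            held.discard(res)
    return (not held), (f"never released: {held}" if held else "ok")

random.seed(0)
for i in range(5):                                 # simple generated test inputs
    cmds = random.choices(["acquire", "release", "noop"], k=6)
    ok, msg = check_property(toy_controller(cmds))
    print(f"test {i}: {cmds} -> {'PASS' if ok else 'FAIL'} ({msg})")
```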

  10. Engine structures modeling software system: Computer code. User's manual

    NASA Technical Reports Server (NTRS)

    1992-01-01

    ESMOSS is a specialized software system for the construction of geometric descriptive and discrete analytical models of engine parts, components and substructures which can be transferred to finite element analysis programs such as NASTRAN. The software architecture of ESMOSS is designed in modular form with a central executive module through which the user controls and directs the development of the analytical model. Modules consist of a geometric shape generator, a library of discretization procedures, interfacing modules to join both geometric and discrete models, a deck generator to produce input for NASTRAN and a 'recipe' processor which generates geometric models from parametric definitions. ESMOSS can be executed both in interactive and batch modes. Interactive mode is considered to be the default mode and that mode will be assumed in the discussion in this document unless stated otherwise.

  11. A performance comparison of the IBM RS/6000 and the Astronautics ZS-1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, W.M.; Abraham, S.G.; Davidson, E.S.

    1991-01-01

    Concurrent uniprocessor architectures, of which vector and superscalar are two examples, are designed to capitalize on fine-grain parallelism. The authors have developed a performance evaluation method for comparing and improving these architectures, and in this article they present the methodology and a detailed case study of two machines. The runtime of many programs is dominated by time spent in loop constructs - for example, Fortran Do-loops. Loops generally comprise two logical processes: The access process generates addresses for memory operations while the execute process operates on floating-point data. Memory access patterns typically can be generated independently of the data in the execute process. This independence allows the access process to slip ahead, thereby hiding memory latency. The IBM 360/91 was designed in 1967 to achieve slip dynamically, at runtime. One CPU unit executes integer operations while another handles floating-point operations. Other machines, including the VAX 9000 and the IBM RS/6000, use a similar approach.
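
    The access/execute decoupling described here can be sketched with two threads and a bounded queue: the access process generates addresses and fetches operands ahead of time, and the execute process consumes them, so fetch latency overlaps with arithmetic. Python threads and sleep calls below merely stand in for the hardware units and memory latency.

```python
# Toy sketch of access/execute decoupling: an "access process" generates
# addresses and fetches operands ahead of the "execute process", which consumes
# them from a queue. Python threads only stand in for the two hardware units;
# the sleep calls model memory latency.
import threading
import queue
import time

memory = list(range(100))          # fake memory
operand_q = queue.Queue(maxsize=8) # slip window between the two processes

def access_process(indices):
    for i in indices:              # address generation is independent of the data
        time.sleep(0.001)          # modeled memory latency
        operand_q.put(memory[i])
    operand_q.put(None)            # end-of-stream marker

def execute_process(results):
    total = 0.0
    while (x := operand_q.get()) is not None:
        total += x * 1.5           # floating-point work overlaps later fetches
    results.append(total)

indices = list(range(0, 100, 2))
results = []
t_access = threading.Thread(target=access_process, args=(indices,))
t_exec = threading.Thread(target=execute_process, args=(results,))
t_access.start(); t_exec.start()
t_access.join(); t_exec.join()
print("sum of processed operands:", results[0])
```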

  12. Development of Integrated Modular Avionics Application Based on Simulink and XtratuM

    NASA Astrophysics Data System (ADS)

    Fons-Albert, Borja; Usach-Molina, Hector; Vila-Carbo, Joan; Crespo-Lorente, Alfons

    2013-08-01

    This paper presents an integral approach for designing avionics applications that meets the requirements for software development and execution of this application domain. Software design follows the Model-Based design process and is performed in Simulink. This approach allows easy and quick testbench development and helps satisfy DO-178B requirements through the use of proper tools. The software execution platform is based on XtratuM, a minimal bare-metal hypervisor designed in our research group. XtratuM provides support for IMA-SP (Integrated Modular Avionics for Space) architectures. This approach allows the code generated from a Simulink model to be executed on top of Lithos as a XtratuM partition. Lithos is an ARINC-653-compliant RTOS for XtratuM. The paper concentrates on how to smoothly port Simulink designs to XtratuM, solving problems like application partitioning, automatic code generation, real-time tasking, interfacing, and others. This process is illustrated with an autopilot design test using a flight simulator.

  13. Commentary: Mentoring the mentor: executive coaching for clinical departmental executive officers.

    PubMed

    Geist, Lois J; Cohen, Michael B

    2010-01-01

    Departmental executive officers (DEOs), department chairs, and department heads in medical schools are often hired on the basis of their accomplishments in research as well as their skills in administration, management, and leadership. These individuals are also expected to be expert in multiple areas, including negotiation, finance and budgeting, mentoring, and personnel management. At the same time, they are expected to maintain and perhaps even enhance their personal academic standing for the purposes of raising the level of departmental and institutional prestige and for recruiting the next generation of physicians and scientists. In the corporate world, employers understand the importance of training new leaders in requisite skill enhancement that will lead to success in their new positions. These individuals are often provided with extensive executive training to develop the necessary competencies to make them successful leaders. Among the tools employed for this purpose are the use of personal coaches or executive training courses. The authors propose that the use of executive coaching in academic medicine may be of benefit for new DEOs. Experience using an executive coach suggests that this was a valuable growth experience for new leaders in the institution.

  14. From Modelling to Execution of Enterprise Integration Scenarios: The GENIUS Tool

    NASA Astrophysics Data System (ADS)

    Scheibler, Thorsten; Leymann, Frank

    One of the predominant problems IT companies face today is Enterprise Application Integration (EAI). Most of the infrastructures built to tackle integration issues are proprietary because no standards exist for how to model, develop, and actually execute integration scenarios. EAI patterns are gaining importance as a way for non-technical business users to ease and harmonize the development of EAI scenarios. These patterns describe recurring EAI challenges and propose possible solutions in an abstract way; one can therefore use them to describe enterprise architectures in a technology-neutral manner. However, patterns are documentation only, used by developers and systems architects to decide how to implement an integration scenario manually. Thus, patterns were never intended to stand directly for artefacts that can be executed immediately. This paper presents a tool supporting a method by which EAI patterns can be used to generate executable artefacts for various target platforms automatically, using a model-driven development approach, hence turning patterns into something executable. To this end, we introduce a continuous tool chain that begins at the design phase and ends with executing an integration solution in a completely automatic manner. For evaluation purposes we introduce a scenario demonstrating how the tool is used to model and actually execute an integration scenario.
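
    The step from an abstract pattern to an executable artefact can be pictured with a small, hypothetical generator: a technology-neutral "content-based router" pattern instance is rendered into a runnable routing function for one target platform. The pattern schema, field names, channels, and generated code below are invented for illustration and are not the GENIUS tool chain itself.

        # Hypothetical EAI pattern instance: a content-based router described technology-neutrally.
        router_pattern = {
            "pattern": "ContentBasedRouter",
            "input_channel": "orders.in",
            "routes": [                      # (condition on the message, output channel)
                ("msg['amount'] > 10000", "orders.large"),
                ("msg['amount'] <= 10000", "orders.standard"),
            ],
        }

        def generate_router(pattern):
            """Render the abstract pattern into executable Python source for one target platform."""
            lines = [f"def route(msg):  # generated from {pattern['pattern']} on {pattern['input_channel']}"]
            for cond, channel in pattern["routes"]:
                lines.append(f"    if {cond}:")
                lines.append(f"        return {channel!r}")
            lines.append("    return 'orders.deadletter'")
            return "\n".join(lines)

        source = generate_router(router_pattern)
        namespace = {}
        exec(source, namespace)              # turn the generated artefact into something executable
        print(source)
        print(namespace["route"]({"amount": 25000}))   # -> 'orders.large'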

  15. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Chase Qishi; Zhu, Michelle Mengxia

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, hence significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project aims to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific workflows with the convenience of a few mouse clicks while hiding the implementation and technical details from end users. Particularly, we will consider two types of applications with distinct performance requirements: data-centric and service-centric applications. For data-centric applications, the main workflow task involves large-volume data generation, catalog, storage, and movement typically from supercomputers or experimental facilities to a team of geographically distributed users; while for service-centric applications, the main focus of workflow is on data archiving, preprocessing, filtering, synthesis, visualization, and other application-specific analysis. We will conduct a comprehensive comparison of existing workflow systems and choose the best suited one with open-source code, a flexible system structure, and a large user base as the starting point for our development. Based on the chosen system, we will develop and integrate new components including a black box design of computing modules, performance monitoring and prediction, and workflow optimization and reconfiguration, which are missing from existing workflow systems. A modular design for separating specification, execution, and monitoring aspects will be adopted to establish a common generic infrastructure suited for a wide spectrum of science applications.
We will further design and develop efficient workflow mapping and scheduling algorithms to optimize the workflow performance in terms of minimum end-to-end delay, maximum frame rate, and highest reliability. We will develop and demonstrate the SWAMP system in a local environment, the grid network, and the 100 Gbps Advanced Network Initiative (ANI) testbed. The demonstration will target scientific applications in climate modeling and high energy physics and the functions to be demonstrated include workflow deployment, execution, steering, and reconfiguration. Throughout the project period, we will work closely with the science communities in the fields of climate modeling and high energy physics including Spallation Neutron Source (SNS) and Large Hadron Collider (LHC) projects to mature the system for production use.
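
    As a rough illustration of the workflow-mapping problem described above, the sketch below maps a small DAG workflow onto a few compute resources with a greedy earliest-finish-time heuristic. The module costs, transfer delay, slowdown factors, and resource names are made-up numbers for illustration; this is not the SWAMP scheduler itself.

        # Toy workflow DAG: module -> (dependencies, compute cost in seconds).
        workflow = {
            "acquire":    ([],                       20),
            "filter":     (["acquire"],              35),
            "simulate":   (["acquire"],              60),
            "synthesize": (["filter", "simulate"],   25),
            "visualize":  (["synthesize"],           15),
        }
        resources = {"cluster_a": 1.0, "cluster_b": 1.1}   # relative slowdown factors (assumed)
        TRANSFER = 5.0                                      # flat inter-resource transfer delay (assumed)

        def schedule(workflow, resources):
            """Greedy earliest-finish-time mapping of workflow modules to resources."""
            finish, placement, ready_at = {}, {}, {r: 0.0 for r in resources}
            remaining = dict(workflow)
            while remaining:
                # Pick any module whose dependencies are already scheduled (simple topological order).
                name = next(m for m, (deps, _) in remaining.items() if all(d in finish for d in deps))
                deps, cost = remaining.pop(name)
                best = None
                for res, slowdown in resources.items():
                    data_ready = max([finish[d] + (0 if placement[d] == res else TRANSFER) for d in deps],
                                     default=0.0)
                    start = max(data_ready, ready_at[res])
                    end = start + cost * slowdown
                    if best is None or end < best[0]:
                        best = (end, res)
                finish[name], placement[name] = best[0], best[1]
                ready_at[best[1]] = best[0]
            return placement, max(finish.values())

        placement, makespan = schedule(workflow, resources)
        print(placement)
        print("end-to-end delay:", makespan)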

  16. Motivation, emotion regulation, and the latent structure of psychopathology: An integrative and convergent historical perspective.

    PubMed

    Beauchaine, Theodore P; Zisner, Aimee

    2017-09-01

    Motivational models of psychopathology have long been advanced by psychophysiologists, and have provided key insights into neurobiological mechanisms of a wide range of psychiatric disorders. These accounts emphasize individual differences in activity and reactivity of bottom-up, subcortical neural systems of approach and avoidance in affecting behavior. Largely independent literatures emphasize the roles of top-down, cortical deficits in emotion regulation and executive function in conferring vulnerability to psychopathology. To date, however, few models effectively integrate functions performed by bottom-up emotion generation systems with those performed by top-down emotion regulation systems in accounting for alternative expressions of psychopathology. In this article, we present such a model, and describe how it accommodates the well-replicated bifactor structure of psychopathology. We describe how excessive approach motivation maps directly into externalizing liability, how excessive passive avoidance motivation maps directly into internalizing liability, and how emotion dysregulation and executive function map onto general liability. This approach is consistent with the Research Domain Criteria initiative, which assumes that a limited number of brain systems interact to confer vulnerability to many if not most forms of psychopathology. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. A new DoD initiative: the Computational Research and Engineering Acquisition Tools and Environments (CREATE) program

    NASA Astrophysics Data System (ADS)

    Arevalo, S.; Atwood, C.; Bell, P.; Blacker, T. D.; Dey, S.; Fisher, D.; Fisher, D. A.; Genalis, P.; Gorski, J.; Harris, A.; Hill, K.; Hurwitz, M.; Kendall, R. P.; Meakin, R. L.; Morton, S.; Moyer, E. T.; Post, D. E.; Strawn, R.; Veldhuizen, D. v.; Votta, L. G.; Wynn, S.; Zelinski, G.

    2008-07-01

    In FY2008, the U.S. Department of Defense (DoD) initiated the Computational Research and Engineering Acquisition Tools and Environments (CREATE) program, a 360M program with a two-year planning phase and a ten-year execution phase. CREATE will develop and deploy three computational engineering tool sets for DoD acquisition programs to use to design aircraft, ships and radio-frequency antennas. The planning and execution of CREATE are based on the 'lessons learned' from case studies of large-scale computational science and engineering projects. The case studies stress the importance of a stable, close-knit development team; a focus on customer needs and requirements; verification and validation; flexible and agile planning, management, and development processes; risk management; realistic schedules and resource levels; balanced short- and long-term goals and deliverables; and stable, long-term support by the program sponsor. Since it began in FY2008, the CREATE program has built a team and project structure, developed requirements and begun validating them, identified candidate products, established initial connections with the acquisition programs, begun detailed project planning and development, and generated the initial collaboration infrastructure necessary for success by its multi-institutional, multidisciplinary teams.

  18. Autoplan: A self-processing network model for an extended blocks world planning environment

    NASA Technical Reports Server (NTRS)

    Dautrechy, C. Lynne; Reggia, James A.; Mcfadden, Frank

    1990-01-01

    Self-processing network models (neural/connectionist models, marker passing/message passing networks, etc.) are currently undergoing intense investigation for a variety of information processing applications. These models are potentially very powerful in that they support a large amount of explicit parallel processing, and they cleanly integrate high level and low level information processing. However they are currently limited by a lack of understanding of how to apply them effectively in many application areas. The formulation of self-processing network methods for dynamic, reactive planning is studied. The long-term goal is to formulate robust, computationally effective information processing methods for the distributed control of semiautonomous exploration systems, e.g., the Mars Rover. The current research effort is focusing on hierarchical plan generation, execution and revision through local operations in an extended blocks world environment. This scenario involves many challenging features that would be encountered in a real planning and control environment: multiple simultaneous goals, parallel as well as sequential action execution, action sequencing determined not only by goals and their interactions but also by limited resources (e.g., three tasks, two acting agents), need to interpret unanticipated events and react appropriately through replanning, etc.

  19. Meta-analysis of neuropsychological measures of executive functioning in children and adolescents with high-functioning autism spectrum disorder.

    PubMed

    Lai, Chun Lun Eric; Lau, Zoe; Lui, Simon S Y; Lok, Eugenia; Tam, Venus; Chan, Quinney; Cheng, Koi Man; Lam, Siu Man; Cheung, Eric F C

    2017-05-01

    Existing literature on the profile of executive dysfunction in autism spectrum disorder showed inconsistent results. Age, comorbid attention-deficit/hyperactivity disorder (ADHD) and cognitive abilities appeared to play a role in confounding the picture. Previous meta-analyses have focused on a few components of executive functions. This meta-analysis attempted to delineate the profile of deficit in several components of executive functioning in children and adolescents with high-functioning autism spectrum disorder (HFASD). Ninety-eight English published case-control studies comparing children and adolescents with HFASD with typically developing controls using well-known neuropsychological measures to assess executive functions were included. Results showed that children and adolescents with HFASD were moderately impaired in verbal working memory (g = 0.67), spatial working memory (g = 0.58), flexibility (g = 0.59), planning (g = 0.62), and generativity (g = 0.60) except for inhibition (g = 0.41). Subgroup analysis showed that impairments were still significant for flexibility (g = 0.57-0.61), generativity (g = 0.52-0.68), and working memory (g = 0.49-0.56) in a sample of autism spectrum disorder (ASD) subjects without comorbid ADHD or when the cognitive abilities of the ASD group and the control group were comparable. This meta-analysis confirmed the presence of executive dysfunction in children and adolescents with HFASD. These deficits are not solely accounted for by the effect of comorbid ADHD and the general cognitive abilities. Our results support the executive dysfunction hypothesis and contribute to the clinical understanding and possible development of interventions to alleviate these deficits in children and adolescents with HFASD. Autism Res 2017, 10: 911-939. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.
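
    For readers unfamiliar with the effect sizes quoted above, Hedges' g is the standardized mean difference between the HFASD and control groups with a small-sample bias correction; the formulas below are the standard textbook definitions, not anything specific to this meta-analysis.

        g = J \cdot \frac{\bar{x}_{\mathrm{HFASD}} - \bar{x}_{\mathrm{control}}}{s_p}, \qquad
        s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}, \qquad
        J \approx 1 - \frac{3}{4(n_1 + n_2) - 9}

    By the usual convention (roughly 0.2 small, 0.5 medium, 0.8 large), the g values of about 0.4-0.7 reported here are read as moderate impairments.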

  20. Ultrafast Magnetization of a Dense Molecular Gas with an Optical Centrifuge.

    PubMed

    Milner, A A; Korobenko, A; Milner, V

    2017-06-16

    Strong laser-induced magnetization of oxygen gas at room temperature and atmospheric pressure is achieved experimentally on the subnanosecond time scale. The method is based on controlling the electronic spin of paramagnetic molecules by means of manipulating their rotation with an optical centrifuge. Spin-rotational coupling results in a high degree of spin polarization on the order of one Bohr magneton per centrifuged molecule. Owing to the nonresonant interaction with the laser pulses, the demonstrated technique is applicable to a broad class of paramagnetic rotors. Executed in a high-density gas, it may offer an efficient way of generating macroscopic magnetic fields remotely (as shown in this work) and producing a large amount of spin-polarized electrons.

  1. Ultrafast Magnetization of a Dense Molecular Gas with an Optical Centrifuge

    NASA Astrophysics Data System (ADS)

    Milner, A. A.; Korobenko, A.; Milner, V.

    2017-06-01

    Strong laser-induced magnetization of oxygen gas at room temperature and atmospheric pressure is achieved experimentally on the subnanosecond time scale. The method is based on controlling the electronic spin of paramagnetic molecules by means of manipulating their rotation with an optical centrifuge. Spin-rotational coupling results in a high degree of spin polarization on the order of one Bohr magneton per centrifuged molecule. Owing to the nonresonant interaction with the laser pulses, the demonstrated technique is applicable to a broad class of paramagnetic rotors. Executed in a high-density gas, it may offer an efficient way of generating macroscopic magnetic fields remotely (as shown in this work) and producing a large amount of spin-polarized electrons.

  2. Wasatch: An architecture-proof multiphysics development environment using a Domain Specific Language and graph theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saad, Tony; Sutherland, James C.

    To address the coding and software challenges of modern hybrid architectures, we propose an approach to multiphysics code development for high-performance computing. This approach is based on using a Domain Specific Language (DSL) in tandem with a directed acyclic graph (DAG) representation of the problem to be solved that allows runtime algorithm generation. When coupled with a large-scale parallel framework, the result is a portable development framework capable of executing on hybrid platforms and handling the challenges of multiphysics applications. In addition, we share our experience developing a code in such an environment – an effort that spans an interdisciplinary team of engineers and computer scientists.
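
    The core idea, expressing the computation as a DAG of expressions whose evaluation order is derived at runtime rather than hand-coded, can be sketched independently of Wasatch or any particular DSL. The field names and the trivial operations below are placeholders, and the sketch assumes Python 3.9+ for graphlib.

        from graphlib import TopologicalSorter   # standard library, Python 3.9+

        # Each expression declares the fields it depends on; the execution order is not hand-coded.
        expressions = {
            "density":  (set(),                    lambda f: 1.2),
            "velocity": (set(),                    lambda f: 3.0),
            "momentum": ({"density", "velocity"},  lambda f: f["density"] * f["velocity"]),
            "kinetic":  ({"momentum", "velocity"}, lambda f: 0.5 * f["momentum"] * f["velocity"]),
        }

        def execute(expressions):
            """Derive a valid evaluation order from the dependency graph and run it."""
            graph = {name: deps for name, (deps, _) in expressions.items()}
            fields = {}
            for name in TopologicalSorter(graph).static_order():
                deps, func = expressions[name]
                fields[name] = func(fields)   # on a hybrid platform, each node could instead launch a kernel
            return fields

        print(execute(expressions))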

  3. Creative Cognition and Brain Network Dynamics

    PubMed Central

    Beaty, Roger E.; Benedek, Mathias; Silvia, Paul J.; Schacter, Daniel L.

    2015-01-01

    Creative thinking is central to the arts, sciences, and everyday life. How does the brain produce creative thought? A series of recently published papers has begun to provide insight into this question, reporting a strikingly similar pattern of brain activity and connectivity across a range of creative tasks and domains, from divergent thinking to poetry composition to musical improvisation. This research suggests that creative thought involves dynamic interactions of large-scale brain systems, with the most compelling finding being that the default and executive control networks, which can show an antagonistic relationship, actually cooperate during creative cognition and artistic performance. These findings have implications for understanding how brain networks interact to support complex cognitive processes, particularly those involving goal-directed, self-generated thought. PMID:26553223

  4. Wasatch: An architecture-proof multiphysics development environment using a Domain Specific Language and graph theory

    DOE PAGES

    Saad, Tony; Sutherland, James C.

    2016-05-04

    To address the coding and software challenges of modern hybrid architectures, we propose an approach to multiphysics code development for high-performance computing. This approach is based on using a Domain Specific Language (DSL) in tandem with a directed acyclic graph (DAG) representation of the problem to be solved that allows runtime algorithm generation. When coupled with a large-scale parallel framework, the result is a portable development framework capable of executing on hybrid platforms and handling the challenges of multiphysics applications. In addition, we share our experience developing a code in such an environment – an effort that spans an interdisciplinary team of engineers and computer scientists.

  5. Navajo-Hopi Land Commission Renewable Energy Development Project (NREP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas Benally, Deputy Director,

    2012-05-15

    The Navajo Hopi Land Commission Office (NHLCO), a Navajo Nation executive branch agency, has conducted activities in capacity-building, institution-building, outreach, and management to initiate the development of large-scale renewable energy generating projects - 100 megawatts (MW) or larger - on land in Northwestern New Mexico in the first year of a multi-year program. The Navajo Hopi Land Commission Renewable Energy Development Project (NREP) is a one-year program that will develop and market a strategic business plan; form multi-agency and public-private project partnerships; compile site-specific solar, wind, and infrastructure data; and develop and use project communication and marketing tools to support outreach efforts targeting the public, vendors, investors, and government audiences.

  6. Are "High Potential" Executives Capable of Building Learning-Oriented Organisations? Reflections on the French Case

    ERIC Educational Resources Information Center

    Belet, Daniel

    2007-01-01

    Purpose: The author's interest in learning organisation development leads him to examine large French companies' practices regarding "high potential" executive policies, and to question their selection and development processes and their capability to develop learning-oriented organisations. The author also tries to explain why most…

  7. 75 FR 42801 - Self-Regulatory Organizations; International Securities Exchange, LLC; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-22

    ... Organizations; International Securities Exchange, LLC; Notice of Filing and Immediate Effectiveness of Proposed... at or under the threshold are charged the constituent's prescribed execution fee. This waiver applies... members to execute large-sized FX options orders on the Exchange in a manner that is cost effective. The...

  8. MOD-5A wind turbine generator program design report: Volume 1: Executive Summary

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The design, development and analysis of the 7.3 MW MOD-5A wind turbine generator covering work performed between July 1980 and June 1984 is discussed. The report is divided into four volumes: Volume 1 summarizes the entire MOD-5A program, Volume 2 discusses the conceptual and preliminary design phases, Volume 3 describes the final design of the MOD-5A, and Volume 4 contains the drawings and specifications developed for the final design. Volume 1, the Executive Summary, summarizes all phases of the MOD-5A program. The performance and cost of energy generated by the MOD-5A are presented. Each subsystem - the rotor, drivetrain, nacelle, tower and foundation, power generation, and control and instrumentation subsystems - is described briefly. The early phases of the MOD-5A program, during which the design was analyzed and optimized, and new technologies and materials were developed, are discussed. Manufacturing, quality assurance, and safety plans are presented. The volume concludes with an index of volumes 2 and 3.

  9. The Socioeconomic Benefits Generated by 16 Community Colleges and Technical Institutes in Alberta. Executive Summary [and] Volume 1: Main Report.

    ERIC Educational Resources Information Center

    Christophersen, Kjell A.; Robison, M. Henry

    This document contains an executive summary and main report that examine the ways in which the Alberta, Canada, economy benefits from the presence of the 16 community and technical colleges in the province. The colleges served an unduplicated headcount of 241,992 students in fiscal year 2001. The Alberta community colleges employed 8,374 full-time…

  10. The Socioeconomic Benefits Generated by 15 Community College Districts in Mississippi. Volume 1: Main Report [and] Volume 2: Detailed Results [and] Executive Summary.

    ERIC Educational Resources Information Center

    Christophersen, Kjell A.; Robison, M. Henry

    This document contains an executive summary, main report, and detailed results by entry level of education, gender and ethnicity. The ways in which the State of Mississippi economy benefits from the presence of the 15 community college districts in the state are examined. The Mississippi community colleges employed 4,940 full- and part-time…

  11. The Socioeconomic Benefits Generated by 16 Community Colleges in Maryland. Executive Summary [and] Volume 1: Main Report [and] Volume 2: Detailed Results.

    ERIC Educational Resources Information Center

    Christophersen, Kjell A.; Robison, M. Henry

    This document contains an executive summary, main report, and detailed results by entry level of education, gender and ethnicity. The report examines the ways in which the State of Maryland economy benefits from the presence of the 16 community college districts in the state. Volume 1 is the Main Report, and Volume 2 includes detailed results. The…

  12. SCNS: a graphical tool for reconstructing executable regulatory networks from single-cell genomic data.

    PubMed

    Woodhouse, Steven; Piterman, Nir; Wintersteiger, Christoph M; Göttgens, Berthold; Fisher, Jasmin

    2018-05-25

    Reconstruction of executable mechanistic models from single-cell gene expression data represents a powerful approach to understanding developmental and disease processes. New ambitious efforts like the Human Cell Atlas will soon lead to an explosion of data with potential for uncovering and understanding the regulatory networks which underlie the behaviour of all human cells. In order to take advantage of this data, however, there is a need for general-purpose, user-friendly and efficient computational tools that can be readily used by biologists who do not have specialist computer science knowledge. The Single Cell Network Synthesis toolkit (SCNS) is a general-purpose computational tool for the reconstruction and analysis of executable models from single-cell gene expression data. Through a graphical user interface, SCNS takes single-cell qPCR or RNA-sequencing data taken across a time course, and searches for logical rules that drive transitions from early cell states towards late cell states. Because the resulting reconstructed models are executable, they can be used to make predictions about the effect of specific gene perturbations on the generation of specific lineages. SCNS should be of broad interest to the growing number of researchers working in single-cell genomics and will help further facilitate the generation of valuable mechanistic insights into developmental, homeostatic and disease processes.
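
    The kind of search described above can be caricatured in a few lines: enumerate small Boolean update rules for each gene and keep those consistent with the observed transitions between consecutive time points. The two-regulator rule space, the gene names, and the toy transition data are invented for illustration; the real tool searches a much richer rule space with dedicated solvers.

        from itertools import combinations, product

        genes = ["gata1", "fli1", "tal1"]
        # Toy observed transitions (state at time t -> state at time t+1), as tuples of 0/1 per gene.
        transitions = [((1, 0, 1), (1, 1, 1)),
                       ((1, 1, 0), (0, 1, 0)),
                       ((0, 1, 1), (0, 1, 1))]

        OPS = {"AND": lambda a, b: a and b, "OR": lambda a, b: a or b,
               "ANDNOT": lambda a, b: a and not b}

        def consistent_rules(target):
            """Return two-regulator Boolean rules for `target` that reproduce every observed transition."""
            t = genes.index(target)
            rules = []
            for (g1, g2), op in product(combinations(range(len(genes)), 2), OPS):
                if all(int(OPS[op](pre[g1], pre[g2])) == post[t] for pre, post in transitions):
                    rules.append(f"{target}' = {genes[g1]} {op} {genes[g2]}")
            return rules

        # Genes with no consistent rule in this restricted space simply return an empty list.
        for g in genes:
            print(g, consistent_rules(g))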

  13. The default network and self-generated thought: component processes, dynamic control, and clinical relevance

    PubMed Central

    Andrews-Hanna, Jessica R.; Smallwood, Jonathan; Spreng, R. Nathan

    2014-01-01

    Though only a decade has elapsed since the default network was first emphasized as being a large-scale brain system, recent years have brought great insight into the network’s adaptive functions. A growing theme highlights the default network as playing a key role in internally-directed—or self-generated—thought. Here, we synthesize recent findings from cognitive science, neuroscience, and clinical psychology to focus attention on two emerging topics as current and future directions surrounding the default network. First, we present evidence that self-generated thought is a multi-faceted construct whose component processes are supported by different subsystems within the network. Second, we highlight the dynamic nature of the default network, emphasizing its interaction with executive control systems when regulating aspects of internal thought. We conclude by discussing clinical implications of disruptions to the integrity of the network, and consider disorders when thought content becomes polarized or network interactions become disrupted or imbalanced. PMID:24502540

  14. Summary of NASA-Lewis Research Center solar heating and cooling and wind energy programs

    NASA Technical Reports Server (NTRS)

    Vernon, R. W.

    1975-01-01

    NASA is planning to construct and operate a solar heating and cooling system in conjunction with a new office building being constructed at Langley Research Center. The technology support for this project will be provided by a solar energy program underway at NASA's Lewis Research Center. The solar program at Lewis includes: testing of solar collectors with a solar simulator, outdoor testing of collectors, property measurements of selective and nonselective coatings for solar collectors, and a solar model-systems test loop. NASA-Lewis has been assisting the National Science Foundation and now the Energy Research and Development Administration in planning and executing a national wind energy program. The areas of the wind energy program that are being conducted by Lewis include: design and operation of a 100 kW experimental wind generator, industry-designed and user-operated wind generators in the range of 50 to 3000 kW, and supporting research and technology for large wind energy systems. An overview of these activities is provided.

  15. PyBoolNet: a python package for the generation, analysis and visualization of boolean networks.

    PubMed

    Klarner, Hannes; Streck, Adam; Siebert, Heike

    2017-03-01

    The goal of this project is to provide a simple interface to working with Boolean networks. Emphasis is put on easy access to a large number of common tasks including the generation and manipulation of networks, attractor and basin computation, model checking and trap space computation, execution of established graph algorithms as well as graph drawing and layouts. PyBoolNet is a Python package for working with Boolean networks that supports simple access to model checking via NuSMV, standard graph algorithms via NetworkX and visualization via dot. In addition, state-of-the-art attractor computation exploiting Potassco ASP is implemented. The package is function-based and uses only native Python and NetworkX data types. https://github.com/hklarner/PyBoolNet. hannes.klarner@fu-berlin.de. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
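
    To convey the flavour of the tasks listed above without guessing PyBoolNet's actual API, the plain-Python toy below simulates a small Boolean network under synchronous updates and detects the attractor it falls into; the three-node network and function names are illustrative only.

        # Three-node toy Boolean network with synchronous updates (not PyBoolNet's own interface).
        update = {
            "x": lambda s: s["y"],
            "y": lambda s: s["x"] and not s["z"],
            "z": lambda s: not s["z"],
        }

        def attractor_from(state, update):
            """Iterate the synchronous update until a state repeats; return the attractor cycle."""
            seen, trajectory = {}, []
            while tuple(state.items()) not in seen:
                seen[tuple(state.items())] = len(trajectory)
                trajectory.append(dict(state))
                state = {g: bool(f(state)) for g, f in update.items()}
            start = seen[tuple(state.items())]
            return trajectory[start:]            # the repeating part of the trajectory

        cycle = attractor_from({"x": True, "y": False, "z": False}, update)
        print("attractor of length", len(cycle), ":", cycle)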

  16. A mechanism for efficient debugging of parallel programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, B.P.; Choi, J.D.

    1988-01-01

    This paper addresses the design and implementation of an integrated debugging system for parallel programs running on shared memory multi-processors (SMMP). The authors describe the use of flowback analysis to provide information on causal relationships between events in a program's execution without re-executing the program for debugging. The authors introduce a mechanism called incremental tracing that, by using semantic analyses of the debugged program, makes the flowback analysis practical with only a small amount of trace generated during execution. They extend flowback analysis to apply to parallel programs and describe a method to detect race conditions in the interactions of the co-operating processes.

  17. The chief information officer--capturing healthcare's rare bird.

    PubMed

    Krinsky, M L

    1986-08-01

    While we occasionally conducted MIS executive searches during the 1970s, the recent pace has quickened substantially. Healthcare corporations need the MIS executive or CIO to keep the organization technologically and managerially current. Downsizing of acute-care facilities, expansion of outpatient services and creation of new programs have put a premium on current, computer-generated data. Skilled managers must rely on an efficient, flexible data processing department to evaluate options and make decisions about corporate strategy and program development. A presentable, articulate, personable MIS executive is a key ingredient in a successful management team. The position will continue to grow in importance and prominence in the fast-changing healthcare delivery industry.

  18. Reversible Parallel Discrete-Event Execution of Large-scale Epidemic Outbreak Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Seal, Sudip K

    2010-01-01

    The spatial scale, runtime speed and behavioral detail of epidemic outbreak simulations together require the use of large-scale parallel processing. In this paper, an optimistic parallel discrete event execution of a reaction-diffusion simulation model of epidemic outbreaks is presented, with an implementation over the µsik simulator. Rollback support is achieved with the development of a novel reversible model that combines reverse computation with a small amount of incremental state saving. Parallel speedup and other runtime performance metrics of the simulation are tested on a small (8,192-core) Blue Gene/P system, while scalability is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes (up to several hundred million individuals in the largest case) are exercised.
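
    The rollback mechanism mentioned above pairs each event with an inverse. A much-simplified schematic of the idea, ignoring the parallel simulator entirely, is shown below; the "infection" event, the state fields, and the single saved value are invented for illustration.

        # Minimal reverse-computation sketch for one reaction-diffusion-style event.
        state = {"susceptible": 1000, "infected": 10, "rng_calls": 0}

        def infect_forward(state, new_cases):
            """Forward event: apply the state change and return the tiny record needed to undo it."""
            saved = {"new_cases": new_cases}       # incremental state saving for the non-derivable part
            state["susceptible"] -= new_cases      # arithmetic updates are perfectly reversible
            state["infected"] += new_cases
            state["rng_calls"] += 1
            return saved

        def infect_reverse(state, saved):
            """Reverse event: undo the forward handler exactly, using the saved record."""
            state["rng_calls"] -= 1
            state["infected"] -= saved["new_cases"]
            state["susceptible"] += saved["new_cases"]

        undo = infect_forward(state, 7)
        assert state["infected"] == 17
        infect_reverse(state, undo)                # rollback after a causality violation
        assert state == {"susceptible": 1000, "infected": 10, "rng_calls": 0}
        print("rollback restored the pre-event state")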

  19. Using Multivariate Base Rates to Interpret Low Scores on an Abbreviated Battery of the Delis-Kaplan Executive Function System.

    PubMed

    Karr, Justin E; Garcia-Barrera, Mauricio A; Holdnack, James A; Iverson, Grant L

    2017-05-01

    Executive function consists of multiple cognitive processes that operate as an interactive system to produce volitional goal-oriented behavior, governed in large part by frontal microstructural and physiological networks. Identification of deficits in executive function in those with neurological or psychiatric conditions can be difficult because the normal variation in executive function test scores, in healthy adults when multiple tests are used, is largely unknown. This study addresses that gap in the literature by examining the prevalence of low scores on a brief battery of executive function tests. The sample consisted of 1,050 healthy individuals (ages 16-89) from the standardization sample for the Delis-Kaplan Executive Function System (D-KEFS). Seven individual test scores from the Trail Making Test, Color-Word Interference Test, and Verbal Fluency Test were analyzed. Low test scores, as defined by commonly used clinical cut-offs (i.e., ≤25th, 16th, 9th, 5th, and 2nd percentiles), occurred commonly among the adult portion of the D-KEFS normative sample (e.g., 62.8% of the sample had one or more scores ≤16th percentile, 36.1% had one or more scores ≤5th percentile), and the prevalence of low scores increased with lower intelligence and fewer years of education. The multivariate base rates (BR) in this article allow clinicians to understand the normal frequency of low scores in the general population. By use of these BRs, clinicians and researchers can improve the accuracy with which they identify executive dysfunction in clinical groups, such as those with traumatic brain injury or neurodegenerative diseases. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
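
    A back-of-the-envelope calculation, assuming independence between scores (an assumption the correlated D-KEFS measures do not satisfy), shows why low scores are so common when a battery is interpreted test by test:

        P(\text{at least one of } k \text{ scores} \le 16\text{th percentile}) = 1 - (1 - 0.16)^k,
        \qquad k = 7:\ 1 - 0.84^{7} \approx 0.70

    The independence assumption overestimates the observed 62.8% precisely because the scores are positively correlated, which is why empirically derived multivariate base rates such as those reported here are needed rather than a simple binomial calculation.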

  20. EEG signatures of arm isometric exertions in preparation, planning and execution.

    PubMed

    Nasseroleslami, Bahman; Lakany, Heba; Conway, Bernard A

    2014-04-15

    The electroencephalographic (EEG) activity patterns in humans during motor behaviour provide insight into normal motor control processes and are useful for diagnostic and rehabilitation applications. While the patterns preceding brisk voluntary movements, and especially movement execution, are well described, there are few EEG studies that address the cortical activation patterns seen in isometric exertions and their planning. In this paper, we report on time and time-frequency EEG signatures in experiments in normal subjects (n=8), using multichannel EEG during motor preparation, planning and execution of directional centre-out arm isometric exertions performed at the wrist in the horizontal plane, in response to instruction-delay visual cues. Our observations suggest that isometric force exertions are accompanied by transient and sustained event-related potentials (ERP) and event-related (de-)synchronisations (ERD/ERS), comparable to those of a movement task. Furthermore, the ERPs and ERD/ERS are also observed during preparation and planning of the isometric task. Comparison of ear-lobe-referenced and surface Laplacian ERPs indicates the contribution of superficial sources in supplementary and pre-motor (FC(z)), parietal (CP(z)) and primary motor cortical areas (C₁ and FC₁) to ERPs (primarily negative peaks in frontal and positive peaks in parietal areas), but contribution of deep sources to sustained time-domain potentials (negativity in planning and positivity in execution). Transient and sustained ERD patterns in μ and β frequency bands of ear-lobe-referenced and surface Laplacian EEG indicate the contribution of both superficial and deep sources to ERD/ERS. As no physical displacement happens during the task, we can infer that the underlying mechanisms of motor-related ERPs and ERD/ERS patterns do not depend only on changes in limb coordinates or muscle-length-dependent ascending sensory information, and are primarily generated by motor preparation, direction-dependent planning and execution of isometric motor tasks. The results contribute to our understanding of the functions of different brain regions during voluntary motor tasks, and their activity signatures in EEG can shed light on the relationships between large-scale recordings such as EEG and other recordings such as single unit activity and fMRI in this context. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Performance Metrics for Monitoring Parallel Program Executions

    NASA Technical Reports Server (NTRS)

    Sarukkai, Sekkar R.; Gotwais, Jacob K.; Yan, Jerry; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    Existing tools for debugging the performance of parallel programs either provide graphical representations of program execution or profiles of program executions. However, for performance debugging tools to be useful, such information has to be augmented with information that highlights the cause of poor program performance. Identifying the cause of poor performance requires not only determining the significance of various performance problems for the execution time of the program, but also considering the effect of interprocessor communication of individual source-level data structures. In this paper, we present a suite of normalized indices which provide a convenient mechanism for focusing on a region of code with poor performance and highlight the cause of the problem in terms of processors, procedures and data structure interactions. All the indices are generated from trace files augmented with data structure information. Further, we show with the help of examples from the NAS benchmark suite that the indices help in detecting potential causes of poor performance, based on augmented execution traces obtained by monitoring the program.
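
    One plausible form of such a normalized index, sketched below with made-up trace records, attributes communication time to (procedure, data structure) pairs and normalizes by total execution time. The record layout, field names, and numbers are assumptions for illustration, not the actual trace format used in the paper.

        from collections import defaultdict

        # Hypothetical trace records: (processor, procedure, data_structure, comm_seconds).
        trace = [
            (0, "exchange_halo", "pressure", 0.42),
            (1, "exchange_halo", "pressure", 0.40),
            (0, "reduce_norm",   "residual", 0.05),
            (1, "reduce_norm",   "residual", 0.07),
        ]
        total_execution_time = 6.0   # seconds, measured separately (assumed)

        def communication_indices(trace, total_time):
            """Normalized index per (procedure, data structure): share of runtime spent communicating it."""
            comm = defaultdict(float)
            for _, proc, ds, seconds in trace:
                comm[(proc, ds)] += seconds
            return {key: secs / total_time for key, secs in sorted(
                comm.items(), key=lambda kv: kv[1], reverse=True)}

        for (proc, ds), index in communication_indices(trace, total_execution_time).items():
            print(f"{proc:>14} / {ds:<10} index = {index:.2f}")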

  2. Generalized Symbolic Execution for Model Checking and Testing

    NASA Technical Reports Server (NTRS)

    Khurshid, Sarfraz; Pasareanu, Corina; Visser, Willem; Kofmeyer, David (Technical Monitor)

    2003-01-01

    Modern software systems, which often are concurrent and manipulate complex data structures, must be extremely reliable. We present a novel framework based on symbolic execution for the automated checking of such systems. We provide a two-fold generalization of traditional symbolic execution based approaches: one, we define a program instrumentation, which enables standard model checkers to perform symbolic execution; two, we give a novel symbolic execution algorithm that handles dynamically allocated structures (e.g., lists and trees), method preconditions (e.g., acyclicity of lists), data (e.g., integers and strings) and concurrency. The program instrumentation enables a model checker to automatically explore program heap configurations (using a systematic treatment of aliasing) and manipulate logical formulae on program data values (using a decision procedure). We illustrate two applications of our framework: checking correctness of multi-threaded programs that take inputs from unbounded domains with complex structure and generation of non-isomorphic test inputs that satisfy a testing criterion. Our implementation for Java uses the Java PathFinder model checker.

  3. View from the top: CEO perspectives on executive development and succession planning practices in healthcare organizations.

    PubMed

    Groves, Kevin S

    2006-01-01

    Many healthcare professionals question whether the industry's hospitals and multi-site systems are implementing the necessary executive development and succession planning systems to ensure that high potential managers are prepared and aptly selected to assume key executive roles. Survey data, case studies, and cross-industry comparisons suggest that healthcare organizations may face a leadership crisis as the current generation of chief executive officers (CEOs) nears retirement while traditional means of developing the leadership pipeline, including middle-management positions and graduate programs requiring formal residencies, continue to dissipate. Given the daunting challenges that accompany the healthcare industry's quest to identify, develop, and retain leadership talent, this article provides best practice findings from a qualitative study of 13 healthcare organizations with a record of exemplary executive development and succession planning practices. CEOs from six single-site hospitals, six healthcare systems, and one medical group were interviewed to identify industry best practices so that healthcare practitioners and educators may utilize the findings to enhance the industry's leadership capacity.

  4. Executive function and health-related quality of life in pediatric epilepsy.

    PubMed

    Schraegle, William A; Titus, Jeffrey B

    2016-09-01

    Children and adolescents with epilepsy often show higher rates of executive functioning deficits and are at an increased risk of diminished health-related quality of life (HRQOL). The purpose of the current study was to determine the extent to which executive dysfunction predicts HRQOL in youth with epilepsy. Data included parental ratings on the Behavior Rating Inventory of Executive Function (BRIEF) and the Quality of Life in Childhood Epilepsy (QOLCE) questionnaire for 130 children and adolescents with epilepsy (mean age=11years, 6months; SD=3years, 6months). Our results identified executive dysfunction in nearly half of the sample (49%). Moderate-to-large correlations were identified between the BRIEF and the QOLCE subscales of well-being, cognition, and behavior. The working memory subscale on the BRIEF emerged as the sole significant predictor of HRQOL. These results underscore the significant role of executive function in pediatric epilepsy. Proactive screening for executive dysfunction to identify those at risk of poor HRQOL is merited, and these results bring to question the potential role of behavioral interventions to improve HRQOL in pediatric epilepsy by specifically treating and/or accommodating for executive deficits. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Random Number Generation in Autism.

    ERIC Educational Resources Information Center

    Williams, Mark A.; Moss, Simon A.; Bradshaw, John L.; Rinehart, Nicole J.

    2002-01-01

    This study explored the ability of 14 individuals with autism to generate a unique series of digits. Individuals with autism were more likely to repeat previous digits than comparison individuals, suggesting they may exhibit a shortfall in response inhibition. Results support the executive dysfunction theory of autism. (Contains references.)…

  6. A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing

    NASA Technical Reports Server (NTRS)

    Takaki, Mitsuo; Cavalcanti, Diego; Gheyi, Rohit; Iyoda, Juliano; dAmorim, Marcelo; Prudencio, Ricardo

    2009-01-01

    The complexity of constraints is a major obstacle for constraint-based software verification. Automatic constraint solvers are fundamentally incomplete: input constraints often build on some undecidable theory or some theory the solver does not support. This paper proposes and evaluates several randomized solvers to address this issue. We compare the effectiveness of a symbolic solver (CVC3), a random solver, three hybrid solvers (i.e., a mix of random and symbolic), and two heuristic search solvers. We evaluate the solvers on two benchmarks: one consisting of manually generated constraints and another generated with a concolic execution of 8 subjects. In addition to fully decidable constraints, the benchmarks include constraints with non-linear integer arithmetic, integer modulo and division, bitwise arithmetic, and floating-point arithmetic. As expected, symbolic solving (in particular, CVC3) subsumes the other solvers for the concolic execution of subjects that only generate decidable constraints. For the remaining subjects, the solvers are complementary.
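
    The contrast between the solver classes compared above can be made concrete with a toy random solver: sample candidate assignments and return the first one satisfying every constraint, something a purely symbolic solver may fail to do when the theory mixes non-linear integer and floating-point arithmetic. The constraints, bounds, and retry budget below are illustrative choices, not the benchmark constraints from the paper.

        import math
        import random

        # A path condition a symbolic solver might struggle with: non-linear integer + floating point.
        constraints = [
            lambda x, y: x * x + 3 * y > 100,
            lambda x, y: x % 7 == 2,
            lambda x, y: math.sin(y / 10.0) > 0.5,
        ]

        def random_solve(constraints, tries=100_000, lo=-1_000, hi=1_000, seed=42):
            """Pure random solver: sample integer assignments until every constraint holds."""
            rng = random.Random(seed)
            for _ in range(tries):
                x, y = rng.randint(lo, hi), rng.randint(lo, hi)
                if all(c(x, y) for c in constraints):
                    return x, y
            return None   # a hybrid solver would now hand the residue to a symbolic backend

        print(random_solve(constraints))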

  7. Dynamic Average-Value Modeling of Doubly-Fed Induction Generator Wind Energy Conversion Systems

    NASA Astrophysics Data System (ADS)

    Shahab, Azin

    In a Doubly-fed Induction Generator (DFIG) wind energy conversion system, the rotor of a wound rotor induction generator is connected to the grid via a partial scale ac/ac power electronic converter which controls the rotor frequency and speed. In this research, detailed models of the DFIG wind energy conversion system with Sinusoidal Pulse-Width Modulation (SPWM) scheme and Optimal Pulse-Width Modulation (OPWM) scheme for the power electronic converter are developed in detail in PSCAD/EMTDC. As the computer simulation using the detailed models tends to be computationally extensive, time consuming and even sometimes not practical in terms of speed, two modified approaches (switching-function modeling and average-value modeling) are proposed to reduce the simulation execution time. The results demonstrate that the two proposed approaches reduce the simulation execution time while the simulation results remain close to those obtained using the detailed model simulation.

  8. DeepQA: improving the estimation of single protein model quality with deep belief networks.

    PubMed

    Cao, Renzhi; Bhattacharya, Debswapna; Hou, Jie; Cheng, Jianlin

    2016-12-05

    Protein quality assessment (QA) useful for ranking and selecting protein models has long been viewed as one of the major challenges for protein tertiary structure prediction. Especially, estimating the quality of a single protein model, which is important for selecting a few good models out of a large model pool consisting of mostly low-quality models, is still a largely unsolved problem. We introduce a novel single-model quality assessment method DeepQA based on deep belief network that utilizes a number of selected features describing the quality of a model from different perspectives, such as energy, physio-chemical characteristics, and structural information. The deep belief network is trained on several large datasets consisting of models from the Critical Assessment of Protein Structure Prediction (CASP) experiments, several publicly available datasets, and models generated by our in-house ab initio method. Our experiments demonstrate that deep belief network has better performance compared to Support Vector Machines and Neural Networks on the protein model quality assessment problem, and our method DeepQA achieves the state-of-the-art performance on CASP11 dataset. It also outperformed two well-established methods in selecting good outlier models from a large set of models of mostly low quality generated by ab initio modeling methods. DeepQA is a useful deep learning tool for protein single model quality assessment and protein structure prediction. The source code, executable, document and training/test datasets of DeepQA for Linux is freely available to non-commercial users at http://cactus.rnet.missouri.edu/DeepQA/ .

  9. Prevalence of executive dysfunction in cocaine, heroin and alcohol users enrolled in therapeutic communities.

    PubMed

    Fernández-Serrano, María José; Pérez-García, Miguel; Perales, José C; Verdejo-García, Antonio

    2010-01-10

    Many studies have observed relevant executive alterations in polysubstance users but no data have been generated in terms of prevalence of these alterations. Studies of the prevalence of neuropsychological impairment can be useful in the design and implementations of interventional programs for substance abusers. The present study was conducted to estimate the prevalence of neuropsychological impairment in different components of executive functions in polysubstance users enrolled in therapeutic communities. Moreover, we estimated the effect size of the differences in the executive performance between polysubstance users and non substance users in order to know which neuropsychological tasks can be useful to detect alterations in the executive functions. Study results showed a high prevalence of executive function impairment in polysubstance users. Working memory was the component with the highest impairment proportion, followed by fluency, shifting, planning, multi-tasking and interference. Comparisons between user groups showed very similar executive impairment prevalence for all the analyzed executive components. The best discriminating task between users and controls was Arithmetic (Wechsler Adult Intelligence Scale, WAIS-III). Moreover FAS and Ruff Figural Fluency Test was discriminating for fluency, Category Test for shifting, Stroop Colour-Word Interference Test for interference, Zoo Map (Behavioural Assessment of the Dysexecutive Syndrome, BADS) for planning and Six Elements (BADS) for multi-tasking. The existence of significant prevalence of executive impairment in polysubstance users reveals the need to redirect the actuation policies in the field of drug-dependency towards the creation of treatments addressed at the executive deficits of the participants, which in turn would facilitate the individuals' compliance and final rehabilitation.

  10. Enabling Graph Appliance for Genome Assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Rina; Graves, Jeffrey A; Lee, Sangkeun

    2015-01-01

    In recent years, there has been a huge growth in the amount of genomic data available as reads generated from various genome sequencers. The number of reads generated can be huge, ranging from hundreds to billions of nucleotides, each varying in size. Assembling such large amounts of data is one of the challenging computational problems for both biomedical and data scientists. Most of the genome assemblers developed have used de Bruijn graph techniques. A de Bruijn graph represents a collection of read sequences by billions of vertices and edges, which require large amounts of memory and computational power to store and process. This is the major drawback to de Bruijn graph assembly. Massively parallel, multi-threaded, shared memory systems can be leveraged to overcome some of these issues. The objective of our research is to investigate the feasibility and scalability issues of de Bruijn graph assembly on Cray's Urika-GD system; Urika-GD is a high performance graph appliance with a large shared memory and massively multithreaded custom processor designed for executing SPARQL queries over large-scale RDF data sets. However, to the best of our knowledge, there is no research on representing a de Bruijn graph as an RDF graph or finding Eulerian paths in RDF graphs using SPARQL for potential genome discovery. In this paper, we address the issues involved in representing de Bruijn graphs as RDF graphs and propose an iterative querying approach for finding Eulerian paths in large RDF graphs. We evaluate the performance of our implementation on real-world Ebola genome datasets and illustrate how genome assembly can be accomplished with Urika-GD using iterative SPARQL queries.
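
    Independently of the RDF/SPARQL representation, the underlying assembly idea is easy to state in code: build a de Bruijn graph from the k-mers of the reads and walk an Eulerian path through it. The reads and k below are toy values; the paper's approach expresses the same walk as iterative SPARQL queries over an RDF graph, whereas the Python here only shows the graph-walk idea.

        from collections import defaultdict

        def de_bruijn(reads, k):
            """Build a de Bruijn graph: nodes are (k-1)-mers, edges are the k-mers of the reads."""
            graph = defaultdict(list)
            for read in reads:
                for i in range(len(read) - k + 1):
                    kmer = read[i:i + k]
                    graph[kmer[:-1]].append(kmer[1:])
            return graph

        def eulerian_path(graph):
            """Hierholzer's algorithm; assumes an Eulerian path exists (true for this clean toy input)."""
            out_deg = {n: len(vs) for n, vs in graph.items()}
            in_deg = defaultdict(int)
            for vs in graph.values():
                for v in vs:
                    in_deg[v] += 1
            start = next((n for n in graph if out_deg[n] - in_deg[n] == 1), next(iter(graph)))
            stack, path = [start], []
            edges = {n: list(vs) for n, vs in graph.items()}
            while stack:
                node = stack[-1]
                if edges.get(node):
                    stack.append(edges[node].pop())
                else:
                    path.append(stack.pop())
            return path[::-1]

        reads = ["ACGTTA", "TTACGG", "CGGAT"]
        nodes = eulerian_path(de_bruijn(reads, k=4))
        print(nodes[0] + "".join(n[-1] for n in nodes[1:]))   # spells the assembled sequence ACGTTACGGAT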

  11. Automated Construction of Node Software Using Attributes in a Ubiquitous Sensor Network Environment

    PubMed Central

    Lee, Woojin; Kim, Juil; Kang, JangMook

    2010-01-01

    In sensor networks, nodes must often operate in a demanding environment facing restrictions such as restricted computing resources, unreliable wireless communication and power shortages. Such factors make the development of ubiquitous sensor network (USN) applications challenging. To help developers construct a large amount of node software for sensor network applications easily and rapidly, this paper proposes an approach to the automated construction of node software for USN applications using attributes. In the proposed technique, application construction proceeds by first developing a model for the sensor network and then designing node software by setting the values of the predefined attributes. After that, the sensor network model and the design of node software are verified. The final source codes of the node software are automatically generated from the sensor network model. We illustrate the efficiency of the proposed technique by using a gas/light monitoring application through a case study of a Gas and Light Monitoring System based on the Nano-Qplus operating system. We evaluate the technique using a quantitative metric—the memory size of execution code for node software. Using the proposed approach, developers are able to easily construct sensor network applications and rapidly generate a large number of node softwares at a time in a ubiquitous sensor network environment. PMID:22163678

  12. Automated construction of node software using attributes in a ubiquitous sensor network environment.

    PubMed

    Lee, Woojin; Kim, Juil; Kang, JangMook

    2010-01-01

    In sensor networks, nodes must often operate in a demanding environment facing restrictions such as restricted computing resources, unreliable wireless communication and power shortages. Such factors make the development of ubiquitous sensor network (USN) applications challenging. To help developers construct a large amount of node software for sensor network applications easily and rapidly, this paper proposes an approach to the automated construction of node software for USN applications using attributes. In the proposed technique, application construction proceeds by first developing a model for the sensor network and then designing node software by setting the values of the predefined attributes. After that, the sensor network model and the design of node software are verified. The final source codes of the node software are automatically generated from the sensor network model. We illustrate the efficiency of the proposed technique by using a gas/light monitoring application through a case study of a Gas and Light Monitoring System based on the Nano-Qplus operating system. We evaluate the technique using a quantitative metric-the memory size of execution code for node software. Using the proposed approach, developers are able to easily construct sensor network applications and rapidly generate a large number of node softwares at a time in a ubiquitous sensor network environment.

  13. Foot force direction control during a pedaling task in individuals post-stroke

    PubMed Central

    2014-01-01

    Background Appropriate magnitude and directional control of foot-forces is required for successful execution of locomotor tasks. Earlier evidence suggested, following stroke, there is a potential impairment in foot-force control capabilities both during stationary force generation and locomotion. The purpose of this study was to investigate the foot-pedal surface interaction force components, in non-neurologically-impaired and stroke-impaired individuals, in order to determine how fore/aft shear-directed foot/pedal forces are controlled. Methods Sixteen individuals with chronic post-stroke hemiplegia and 10 age-similar non-neurologically-impaired controls performed a foot placement maintenance task under a stationary and a pedaling condition, achieving a target normal pedal force. Electromyography and force profiles were recorded. We expected generation of unduly large magnitude shear pedal forces and reduced participation of multiple muscles that can contribute forces in appropriate directions in individuals post-stroke. Results We found lower force output, inconsistent modulation of muscle activity and reduced ability to change foot force direction in the paretic limbs, but we did not observe unduly large magnitude shear pedal surface forces by the paretic limbs as we hypothesized. Conclusion These findings suggested the preservation of foot-force control capabilities post-stroke under minimal upright postural control requirements. Further research must be conducted to determine whether inappropriate shear force generation will be revealed under non-seated, postural demanding conditions, where subjects have to actively control for upright body suspension. PMID:24739234

  14. Partners for Learning, Not Funding

    ERIC Educational Resources Information Center

    Alba, Guy D.

    2012-01-01

    During the author's first years as a teacher, he took a part-time job to make ends meet. As an exercise instructor in a large insurance company's corporate fitness center, he worked with a wide range of employees, including the top executives. Some of those executives asked how their company could help the school. Instead of asking for a monetary…

  15. Understanding and Mitigating Protests of Department of Defense Acquisition Contracts

    DTIC Science & Technology

    2010-08-01

    of delivery time that can lock out a rejected offeror from a market . Sixth, more complex contracts, like services versus products , generate more...The engineers, attorneys, or head of a business unit need to explain to the team that spent time working on a bid why the company lost. Executives...agency executives have to explain to their team, who also spent time working on the source solicitation, evaluation, and selection, why the company

  16. The Socioeconomic Benefits Generated by 14 Community College Districts in Oklahoma. Executive Summary [and] Volume 1: Main Report [and] Volume 2: Detailed Results.

    ERIC Educational Resources Information Center

    Christophersen, Kjell A.; Robison, M. Henry

    This document contains an executive summary, a main report, and detailed results by entry level of education, gender, and ethnicity. The parts of this document examine the ways in which the economy of the State of Oklahoma benefits from the presence of the 14 community college districts in the state. The colleges serve an unduplicated headcount of 106,201…

  17. Impacts of a prekindergarten program on children's mathematics, language, literacy, executive function, and emotional skills.

    PubMed

    Weiland, Christina; Yoshikawa, Hirokazu

    2013-01-01

    Publicly funded prekindergarten programs have achieved small-to-large impacts on children's cognitive outcomes. The current study examined the impact of a prekindergarten program that implemented a coaching system and consistent literacy, language, and mathematics curricula on these and other nontargeted, essential components of school readiness, such as executive functioning. Participants included 2,018 four and five-year-old children. Findings indicated that the program had moderate-to-large impacts on children's language, literacy, numeracy and mathematics skills, and small impacts on children's executive functioning and a measure of emotion recognition. Some impacts were considerably larger for some subgroups. For urban public school districts, results inform important programmatic decisions. For policy makers, results confirm that prekindergarten programs can improve educationally vital outcomes for children in meaningful, important ways. © 2013 The Authors. Child Development © 2013 Society for Research in Child Development, Inc.

  18. Automated Generation and Assessment of Autonomous Systems Test Cases

    NASA Technical Reports Server (NTRS)

    Barltrop, Kevin J.; Friberg, Kenneth H.; Horvath, Gregory A.

    2008-01-01

    This slide presentation reviews issues concerning verification and validation testing of autonomous spacecraft, which routinely culminates in the exploration of anomalous or faulted mission-like scenarios, using the work involved during the Dawn mission's tests as examples. Prioritizing which scenarios to develop usually comes down to focusing on the most vulnerable areas and ensuring the best return on investment of test time. Rules-of-thumb strategies often come into play, such as injecting applicable anomalies prior to, during, and after system state changes, or creating cases that ensure good safety-net algorithm coverage. Although experience and judgment in test selection can lead to high levels of confidence about the majority of a system's autonomy, it is likely that important test cases are overlooked. One method to fill in potential test coverage gaps is to automatically generate and execute test cases using algorithms that ensure desirable properties about the coverage, for example, generating cases for all possible fault monitors and across all state change boundaries. Of course, the scope of coverage is determined by the test environment capabilities, where a faster-than-real-time, high-fidelity, software-only simulation would allow the broadest coverage. Even real-time systems that can be replicated and run in parallel, and that have reliable set-up and operations features, provide an excellent resource for automated testing. Making detailed predictions for the outcome of such tests can be difficult, and when algorithmic means are employed to produce hundreds or even thousands of cases, generating predictions individually is impractical, and generating predictions with tools requires executable models of the design and environment that themselves require a complete test program. Therefore, evaluating the results of a large number of mission scenario tests poses special challenges. A good approach to address this problem is to automatically score the results based on a range of metrics. Although the specific means of scoring depends highly on the application, the use of formal scoring metrics has high value in identifying and prioritizing anomalies and in presenting an overall picture of the state of the test program. In this paper we present a case study based on automatic generation and assessment of faulted test runs for the Dawn mission, and discuss its role in optimizing the allocation of resources for completing the test program.
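
    As a rough illustration of the idea of generating cases for all fault monitors across all state-change boundaries and then scoring the outcomes, the Python sketch below enumerates the cross product of hypothetical monitors, transitions, and injection phases and applies toy scoring metrics. It is not the Dawn test tooling; every name and threshold is invented.

```python
# Sketch: exhaustive fault-injection case generation plus metric-based scoring.
from itertools import product

FAULT_MONITORS = ["thruster_stuck", "star_tracker_dropout", "battery_undervolt"]
STATE_CHANGES  = [("cruise", "thrust"), ("thrust", "safe"), ("safe", "cruise")]
INJECT_PHASES  = ["before", "during", "after"]

def generate_cases():
    """One case per (monitor, state change, injection phase) combination."""
    for monitor, (src, dst), phase in product(FAULT_MONITORS, STATE_CHANGES, INJECT_PHASES):
        yield {"monitor": monitor, "transition": (src, dst), "phase": phase}

def score(result):
    """Toy metrics: reward detection, penalize late detection and unsafe final states."""
    s = 0.0
    s += 1.0 if result["detected"] else 0.0
    s += max(0.0, 1.0 - result["detection_delay_s"] / 60.0)
    s += 1.0 if result["final_state"] == "safe" else -2.0
    return s

if __name__ == "__main__":
    # Stand-in for executing a case on a simulator and collecting telemetry.
    fake_result = {"detected": True, "detection_delay_s": 12.0, "final_state": "safe"}
    cases = list(generate_cases())
    print(len(cases), "cases generated; example score:", round(score(fake_result), 2))
```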

  19. Random Item Generation Is Affected by Age

    ERIC Educational Resources Information Center

    Multani, Namita; Rudzicz, Frank; Wong, Wing Yiu Stephanie; Namasivayam, Aravind Kumar; van Lieshout, Pascal

    2016-01-01

    Purpose: Random item generation (RIG) involves central executive functioning. Measuring aspects of random sequences can therefore provide a simple method to complement other tools for cognitive assessment. We examine the extent to which RIG relates to specific measures of cognitive function, and whether those measures can be estimated using RIG…

  20. Learning a Foreign Language: A New Path to Enhancement of Cognitive Functions.

    PubMed

    Shoghi Javan, Sara; Ghonsooly, Behzad

    2018-02-01

    The complicated cognitive processes involved in natural (primary) bilingualism lead to significant cognitive development. Executive functions as a fundamental component of human cognition are deemed to be affected by language learning. To date, a large number of studies have investigated how natural (primary) bilingualism influences executive functions; however, the way acquired (secondary) bilingualism manipulates executive functions is poorly understood. To fill this gap, controlling for age, gender, IQ, and socio-economic status, the researchers compared 60 advanced learners of English as a foreign language (EFL) to 60 beginners on measures of executive functions involving Stroop, Wisconsin Card Sorting Task (WCST) and Wechsler's digit span tasks. The results suggested that mastering English as a foreign language causes considerable enhancement in two components of executive functions, namely cognitive flexibility and working memory. However, no significant difference was observed in inhibitory control between the advanced EFL learners and beginners.

  1. Model-based Executive Control through Reactive Planning for Autonomous Rovers

    NASA Technical Reports Server (NTRS)

    Finzi, Alberto; Ingrand, Felix; Muscettola, Nicola

    2004-01-01

    This paper reports on the design and implementation of a real-time executive for a mobile rover that uses a model-based, declarative approach. The control system is based on the Intelligent Distributed Execution Architecture (IDEA), an approach to planning and execution that provides a unified representational and computational framework for an autonomous agent. The basic hypothesis of IDEA is that a large control system can be structured as a collection of interacting agents, each with the same fundamental structure. We show that planning and real-time response are compatible if the executive minimizes the size of the planning problem. We detail the implementation of this approach on an exploration rover (Gromit an RWI ATRV Junior at NASA Ames) presenting different IDEA controllers of the same domain and comparing them with more classical approaches. We demonstrate that the approach is scalable to complex coordination of functional modules needed for autonomous navigation and exploration.

  2. Exact and Approximate Probabilistic Symbolic Execution

    NASA Technical Reports Server (NTRS)

    Luckow, Kasper; Pasareanu, Corina S.; Dwyer, Matthew B.; Filieri, Antonio; Visser, Willem

    2014-01-01

    Probabilistic software analysis seeks to quantify the likelihood of reaching a target event under uncertain environments. Recent approaches compute probabilities of execution paths using symbolic execution, but do not support nondeterminism. Nondeterminism arises naturally when no suitable probabilistic model can capture a program behavior, e.g., for multithreading or distributed systems. In this work, we propose a technique, based on symbolic execution, to synthesize schedulers that resolve nondeterminism to maximize the probability of reaching a target event. To scale to large systems, we also introduce approximate algorithms to search for good schedulers, speeding up established random sampling and reinforcement learning results through the quantification of path probabilities based on symbolic execution. We implemented the techniques in Symbolic PathFinder and evaluated them on nondeterministic Java programs. We show that our algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for Markov Decision Processes.
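
    A toy way to see the scheduler-synthesis objective is to enumerate resolutions of the nondeterministic choices and keep the one that maximizes the probability of reaching the target, with path probabilities supplied by some prior analysis (here simply hard-coded). The Python sketch below mirrors only that optimization shape; it is not Symbolic PathFinder, and the tiny model is invented.

```python
# Sketch: pick the scheduler (resolution of nondeterminism) maximizing target probability.
from itertools import product

# Hypothetical model: at each of two nondeterministic points the scheduler picks action
# "a" or "b", and each pair of choices reaches the target with some probability, e.g. as
# quantified by probabilistic symbolic execution over the program's path conditions.
TARGET_PROB = {
    ("a", "a"): 0.10, ("a", "b"): 0.35,
    ("b", "a"): 0.55, ("b", "b"): 0.20,
}

def best_scheduler():
    schedulers = product("ab", repeat=2)          # all deterministic memoryless schedulers
    return max(schedulers, key=lambda s: TARGET_PROB[s])

if __name__ == "__main__":
    s = best_scheduler()
    print("best scheduler:", s, "-> P(target) =", TARGET_PROB[s])
```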

  3. Neuropsychological function and memory suppression in conversion disorder.

    PubMed

    Brown, Laura B; Nicholson, Timothy R; Aybek, Selma; Kanaan, Richard A; David, Anthony S

    2014-09-01

    Conversion disorder (CD) is a condition where neurological symptoms, such as weakness or sensory disturbance, are unexplained by neurological disease and are presumed to be of psychological origin. Contemporary theories of the disorder generally propose dysfunctional frontal control of the motor or sensory systems. Classical (Freudian) psychodynamic theory holds that the memory of stressful life events is repressed. Little is known about the frontal (executive) function of these patients, or indeed their general neuropsychological profile, and psychodynamic theories have been largely untested. This study aimed to investigate neuropsychological functioning in patients with CD, focusing on executive and memory function. A directed forgetting task (DFT) using words with variable emotional valence was also used to investigate memory suppression. Twenty-one patients and 36 healthy controls completed a battery of neuropsychological tests; patients had deficits in executive function and auditory-verbal (but not autobiographical) memory. The executive deficits were largely driven by differences in IQ, anxiety and mood between the groups. A subgroup of 11 patients and 28 controls completed the DFT, and whilst patients recalled fewer words overall than controls, there were no significant effects of directed forgetting or valence. This study provides some limited support for deficits in executive and, to a lesser degree, memory function in patients with CD, but did not find evidence of altered memory suppression to support the psychodynamic theory of repression. © 2013 The British Psychological Society.

  4. Use of a remote clinical decision support service for a multicenter trial to implement prediction rules for children with minor blunt head trauma.

    PubMed

    Goldberg, Howard S; Paterno, Marilyn D; Grundmeier, Robert W; Rocha, Beatriz H; Hoffman, Jeffrey M; Tham, Eric; Swietlik, Marguerite; Schaeffer, Molly H; Pabbathi, Deepika; Deakyne, Sara J; Kuppermann, Nathan; Dayan, Peter S

    2016-03-01

    To evaluate the architecture, integration requirements, and execution characteristics of a remote clinical decision support (CDS) service used in a multicenter clinical trial. The trial tested the efficacy of implementing brain injury prediction rules for children with minor blunt head trauma. We integrated the Epic(®) electronic health record (EHR) with the Enterprise Clinical Rules Service (ECRS), a web-based CDS service, at two emergency departments. Patterns of CDS review included either a delayed, near-real-time review, where the physician viewed CDS recommendations generated by the nursing assessment, or a real-time review, where the physician viewed recommendations generated by their own documentation. A backstopping, vendor-based CDS triggered with zero delay when no recommendation was available in the EHR from the web-service. We assessed the execution characteristics of the integrated system and the source of the generated recommendations viewed by physicians. The ECRS mean execution time was 0.74 ±0.72 s. Overall execution time was substantially different at the two sites, with mean total transaction times of 19.67 and 3.99 s. Of 1930 analyzed transactions from the two sites, 60% (310/521) of all physician documentation-initiated recommendations and 99% (1390/1409) of all nurse documentation-initiated recommendations originated from the remote web service. The remote CDS system was the source of recommendations in more than half of the real-time cases and virtually all the near-real-time cases. Comparisons are limited by allowable variation in user workflow and resolution of the EHR clock. With maturation and adoption of standards for CDS services, remote CDS shows promise to decrease time-to-trial for multicenter evaluations of candidate decision support interventions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  5. Effect of confinements: Bending in Paramecium

    NASA Astrophysics Data System (ADS)

    Eddins, Aja; Yang, Sung; Spoon, Corrie; Jung, Sunghwan

    2012-02-01

    Paramecium is a unicellular eukaryote which, by coordinated beating of its cilia, generates metachronal waves that cause it to execute a helical trajectory. We investigate the swimming parameters of the organism in rectangular PDMS channels and try to quantify its behavior. Surprisingly, a swimming Paramecium in channels of certain widths executes a bend of its flexible body (and changes its direction of swimming) by generating forces with the cilia. Considering a simple model of a beam constrained between two walls, we predict the bent shapes of the organism and the forces it exerts on the walls. Finally, we try to explain how bending (by sensing) can occur in channels by conducting experiments in a thin film of fluid and drawing an analogy to the swimming behavior observed in different cases.

  6. Autonomously generating operations sequences for a Mars Rover using AI-based planning

    NASA Technical Reports Server (NTRS)

    Sherwood, Rob; Mishkin, Andrew; Estlin, Tara; Chien, Steve; Backes, Paul; Cooper, Brian; Maxwell, Scott; Rabideau, Gregg

    2001-01-01

    This paper discusses a proof-of-concept prototype for ground-based automatic generation of validated rover command sequences from highlevel science and engineering activities. This prototype is based on ASPEN, the Automated Scheduling and Planning Environment. This Artificial Intelligence (AI) based planning and scheduling system will automatically generate a command sequence that will execute within resource constraints and satisfy flight rules.

  7. Semantic processing of EHR data for clinical research.

    PubMed

    Sun, Hong; Depraetere, Kristof; De Roo, Jos; Mels, Giovanni; De Vloed, Boris; Twagirumukiza, Marc; Colaert, Dirk

    2015-12-01

    There is a growing need to semantically process and integrate clinical data from different sources for clinical research. This paper presents an approach to integrate EHRs from heterogeneous resources and generate integrated data in different data formats or semantics to support various clinical research applications. The proposed approach builds semantic data virtualization layers on top of data sources, which generate data in the requested semantics or formats on demand. This approach avoids upfront dumping to and synchronizing of the data with various representations. Data from different EHR systems are first mapped to RDF data with source semantics, and then converted to representations with harmonized domain semantics where domain ontologies and terminologies are used to improve reusability. It is also possible to further convert data to application semantics and store the converted results in clinical research databases, e.g. i2b2, OMOP, to support different clinical research settings. Semantic conversions between different representations are explicitly expressed using N3 rules and executed by an N3 Reasoner (EYE), which can also generate proofs of the conversion processes. The solution presented in this paper has been applied to real-world applications that process large scale EHR data. Copyright © 2015 Elsevier Inc. All rights reserved.
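
    To make the step of lifting source records to RDF and then restating them with harmonized domain semantics concrete, here is a small rdflib sketch; the namespaces, predicates, and mapping rule are invented for illustration and do not reflect the paper's actual ontologies or its N3/EYE rule base.

```python
# Sketch: lift a source EHR row to RDF with source semantics, then restate it with a
# harmonized domain vocabulary. Real systems would drive this conversion with N3 rules
# and a reasoner; here the "rule" is a plain Python loop over matching triples.
from rdflib import Graph, Namespace, Literal, URIRef

SRC = Namespace("http://example.org/source-ehr#")   # invented source namespace
DOM = Namespace("http://example.org/domain#")       # invented domain namespace

row = {"patient_id": "p42", "sys_bp": 142}          # one record from a source EHR table

g = Graph()
patient = URIRef(SRC[row["patient_id"]])
g.add((patient, SRC.systolicBloodPressure, Literal(row["sys_bp"])))

# Harmonization step: restate the source-specific predicate in domain semantics.
for s, _, o in g.triples((None, SRC.systolicBloodPressure, None)):
    g.add((s, DOM.hasSystolicBP_mmHg, o))

print(g.serialize(format="turtle"))
```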

  8. Data preprocessing for determining outer/inner parallelization in the nested loop problem using OpenMP

    NASA Astrophysics Data System (ADS)

    Handhika, T.; Bustamam, A.; Ernastuti, Kerami, D.

    2017-07-01

    Multi-thread programming using OpenMP on a shared-memory architecture with hyperthreading technology allows a resource to be accessed by multiple processors simultaneously, and each processor can execute more than one thread over a given period of time. However, the achievable speedup depends on the processor's ability to execute only a limited number of threads, which matters especially for sequential algorithms containing a nested loop in which the number of outer loop iterations is greater than the maximum number of threads the processor can execute. The thread distribution technique found previously can only be applied by high-level programmers. This paper derives a parallelization procedure for low-level programmers dealing with 2-level nested loop problems in which the maximum number of threads that can be executed by a processor is smaller than the number of outer loop iterations. Data preprocessing related to the number of outer and inner loop iterations, the computational time required to execute each iteration, and the maximum number of threads that can be executed by a processor is used as a strategy to determine which parallel region will produce the optimal speedup.
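
    The preprocessing decision can be thought of as estimating the parallel execution time of each candidate region and parallelizing whichever loop level gives the better prediction. The Python sketch below captures that decision rule under simplifying assumptions (uniform iteration cost, no thread-management overhead); it is an illustration, not the procedure derived in the paper.

```python
# Sketch: decide whether to parallelize the outer or the inner loop of a 2-level nest,
# given iteration counts, per-iteration cost, and the processor's thread limit.
import math

def parallel_time(iterations, cost_per_iter, max_threads, serial_factor):
    """Predicted wall time when this loop level is parallelized.

    serial_factor is the length of the remaining sequential loop surrounding the region.
    Assumes uniform iteration cost and negligible thread-management overhead.
    """
    rounds = math.ceil(iterations / max_threads)     # rounds of thread execution needed
    return serial_factor * rounds * cost_per_iter

def choose_parallel_level(n_outer, n_inner, cost_per_iter, max_threads):
    t_outer = parallel_time(n_outer, n_inner * cost_per_iter, max_threads, 1)
    t_inner = parallel_time(n_inner, cost_per_iter, max_threads, n_outer)
    return ("outer" if t_outer <= t_inner else "inner"), t_outer, t_inner

if __name__ == "__main__":
    # e.g. 100 outer iterations, 8 inner iterations, 8 hardware threads, 1 ms per iteration
    level, t_out, t_in = choose_parallel_level(100, 8, 1e-3, 8)
    print(f"parallelize the {level} loop (outer: {t_out:.3f}s, inner: {t_in:.3f}s)")
```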

  9. Software for Automation of Real-Time Agents, Version 2

    NASA Technical Reports Server (NTRS)

    Fisher, Forest; Estlin, Tara; Gaines, Daniel; Schaffer, Steve; Chouinard, Caroline; Engelhardt, Barbara; Wilklow, Colette; Mutz, Darren; Knight, Russell; Rabideau, Gregg; hide

    2005-01-01

    Version 2 of Closed Loop Execution and Recovery (CLEaR) has been developed. CLEaR is an artificial intelligence computer program for use in planning and execution of actions of autonomous agents, including, for example, Deep Space Network (DSN) antenna ground stations, robotic exploratory ground vehicles (rovers), robotic aircraft (UAVs), and robotic spacecraft. CLEaR automates the generation and execution of command sequences, monitoring the sequence execution, and modifying the command sequence in response to execution deviations and failures as well as new goals for the agent to achieve. The development of CLEaR has focused on the unification of planning and execution to increase the ability of the autonomous agent to perform under tight resource and time constraints coupled with uncertainty in how much of resources and time will be required to perform a task. This unification is realized by extending the traditional three-tier robotic control architecture by increasing the interaction between the software components that perform deliberation and reactive functions. The increase in interaction reduces the need to replan, enables earlier detection of the need to replan, and enables replanning to occur before an agent enters a state of failure.

  10. Execution time supports for adaptive scientific algorithms on distributed memory machines

    NASA Technical Reports Server (NTRS)

    Berryman, Harry; Saltz, Joel; Scroggs, Jeffrey

    1990-01-01

    Optimizations are considered that are required for efficient execution of code segments that consist of loops over distributed data structures. The PARTI (Parallel Automated Runtime Toolkit at ICASE) execution time primitives are designed to carry out these optimizations and can be used to implement a wide range of scientific algorithms on distributed memory machines. These primitives allow the user to control array mappings in a way that gives an appearance of shared memory. Computations can be based on a global index set. Primitives are used to carry out gather and scatter operations on distributed arrays. Communication patterns are derived at runtime, and the appropriate send and receive messages are automatically generated.
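
    The inspector/executor pattern behind such runtime primitives can be sketched in plain Python: an inspector examines the global indices a loop will reference and derives, per owning processor, which elements must be fetched, and an executor then performs the gather. The sketch below is a single-process illustration of that bookkeeping, not the PARTI library itself.

```python
# Sketch of the inspector/executor idea behind runtime gather primitives.
# Block distribution of a global array over P "processors" (simulated in one process).
P, N = 4, 16
BLOCK = N // P
local = {p: [10 * p + i for i in range(BLOCK)] for p in range(P)}   # data owned by each proc

def owner(gidx):
    return gidx // BLOCK

def inspector(global_indices):
    """Derive the communication schedule: which indices to request from which owner."""
    schedule = {}
    for g in global_indices:
        schedule.setdefault(owner(g), []).append(g)
    return schedule

def executor(schedule):
    """Perform the gather described by the schedule (stands in for send/receive messages)."""
    fetched = {}
    for p, idxs in schedule.items():
        for g in idxs:
            fetched[g] = local[p][g % BLOCK]
    return fetched

if __name__ == "__main__":
    loop_indices = [0, 5, 6, 11, 15]          # global index set referenced by a loop
    sched = inspector(loop_indices)
    print("schedule:", sched)
    print("gathered:", executor(sched))
```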

  11. Execution time support for scientific programs on distributed memory machines

    NASA Technical Reports Server (NTRS)

    Berryman, Harry; Saltz, Joel; Scroggs, Jeffrey

    1990-01-01

    Optimizations are considered that are required for efficient execution of code segments that consist of loops over distributed data structures. The PARTI (Parallel Automated Runtime Toolkit at ICASE) execution time primitives are designed to carry out these optimizations and can be used to implement a wide range of scientific algorithms on distributed memory machines. These primitives allow the user to control array mappings in a way that gives an appearance of shared memory. Computations can be based on a global index set. Primitives are used to carry out gather and scatter operations on distributed arrays. Communication patterns are derived at runtime, and the appropriate send and receive messages are automatically generated.

  12. Evaluation of a Pair-Wise Conflict Detection and Resolution Algorithm in a Multiple Aircraft Scenario

    NASA Technical Reports Server (NTRS)

    Carreno, Victor A.

    2002-01-01

    The KB3D algorithm is a pairwise conflict detection and resolution (CD&R) algorithm. It detects and generates trajectory vectoring for an aircraft which has been predicted to be in an airspace minima violation within a given look-ahead time. It has been proven, using mechanized theorem proving techniques, that for a pair of aircraft, KB3D produces at least one vectoring solution and that all solutions produced are correct. Although solutions produced by the algorithm are mathematically correct, they might not be physically executable by an aircraft or might not solve multiple aircraft conflicts. This paper describes a simple solution selection method which assesses all solutions generated by KB3D and determines the solution to be executed. The solution selection method and KB3D are evaluated using a simulation in which N aircraft fly in a free-flight environment and each aircraft in the simulation uses KB3D to maintain separation. Specifically, the solution selection method filters KB3D solutions which are procedurally undesirable or not physically executable and uses a predetermined criterion for selection.
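
    The selection layer described here amounts to filtering the maneuvers KB3D returns by executability constraints and then ranking the survivors by a fixed criterion. The Python sketch below shows one plausible shape for such a filter-then-rank step; the constraint values and the smallest-heading-change criterion are illustrative assumptions, not the paper's actual rules.

```python
# Sketch: filter candidate resolution maneuvers, then select by a predetermined criterion.
MAX_BANK_DEG = 30.0          # assumed executability limits (illustrative only)
MAX_SPEED_CHANGE_KT = 40.0

def executable(m):
    return abs(m["bank_deg"]) <= MAX_BANK_DEG and abs(m["dspeed_kt"]) <= MAX_SPEED_CHANGE_KT

def select(maneuvers):
    feasible = [m for m in maneuvers if executable(m)]
    if not feasible:
        return None                                   # no acceptable maneuver: replan
    return min(feasible, key=lambda m: abs(m["dheading_deg"]))   # smallest heading change

if __name__ == "__main__":
    candidates = [
        {"dheading_deg": 25.0, "bank_deg": 45.0, "dspeed_kt": 0.0},   # too aggressive a turn
        {"dheading_deg": 12.0, "bank_deg": 20.0, "dspeed_kt": 0.0},
        {"dheading_deg": -8.0, "bank_deg": 15.0, "dspeed_kt": 50.0},  # too large a speed change
    ]
    print("selected maneuver:", select(candidates))
```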

  13. Post-game analysis: An initial experiment for heuristic-based resource management in concurrent systems

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.

    1987-01-01

    In concurrent systems, a major responsibility of the resource management system is to decide how the application program is to be mapped onto the multi-processor. Instead of using abstract program and machine models, a generate-and-test framework known as 'post-game analysis' that is based on data gathered during program execution is proposed. Each iteration consists of (1) (a simulation of) an execution of the program; (2) analysis of the data gathered; and (3) the proposal of a new mapping that would have a smaller execution time. These heuristics are applied to predict execution time changes in response to small perturbations applied to the current mapping. An initial experiment was carried out using simple strategies on 'pipeline-like' applications. The results obtained from four simple strategies demonstrated that for this kind of application, even simple strategies can produce acceptable speed-up with a small number of iterations.
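
    The generate-and-test loop can be summarized as: run (or simulate) the current mapping, use the gathered data to propose a perturbed mapping predicted to be faster, and repeat. Below is a compact Python sketch of that loop with a fabricated cost model standing in for the simulated execution; it illustrates the shape of the framework rather than the paper's actual heuristics.

```python
# Sketch of a post-game-analysis style generate-and-test mapping loop.
import random

TASKS = list(range(8))          # task ids of a pipeline-like application
PROCS = 3

def simulate(mapping):
    """Stand-in for one (simulated) execution: makespan under a toy cost model."""
    load = [0.0] * PROCS
    for task, proc in mapping.items():
        load[proc] += 1.0 + 0.1 * task          # fabricated per-task cost
    return max(load)

def perturb(mapping):
    """Propose a new mapping by moving one task (here guided by pure chance)."""
    new = dict(mapping)
    new[random.choice(TASKS)] = random.randrange(PROCS)
    return new

def post_game_analysis(iterations=50, seed=1):
    random.seed(seed)
    mapping = {t: t % PROCS for t in TASKS}
    best_time = simulate(mapping)
    for _ in range(iterations):
        candidate = perturb(mapping)
        t = simulate(candidate)                  # "play the game", then analyse the result
        if t < best_time:                        # keep only mappings predicted to be faster
            mapping, best_time = candidate, t
    return mapping, best_time

if __name__ == "__main__":
    mapping, makespan = post_game_analysis()
    print("final mapping:", mapping, "predicted makespan:", round(makespan, 2))
```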

  14. Intelligent sensor and controller framework for the power grid

    DOEpatents

    Akyol, Bora A.; Haack, Jereme Nathan; Craig, Jr., Philip Allen; Tews, Cody William; Kulkarni, Anand V.; Carpenter, Brandon J.; Maiden, Wendy M.; Ciraci, Selim

    2015-07-28

    Disclosed below are representative embodiments of methods, apparatus, and systems for monitoring and using data in an electric power grid. For example, one disclosed embodiment comprises a sensor for measuring an electrical characteristic of a power line, electrical generator, or electrical device; a network interface; a processor; and one or more computer-readable storage media storing computer-executable instructions. In this embodiment, the computer-executable instructions include instructions for implementing an authorization and authentication module for validating a software agent received at the network interface; instructions for implementing one or more agent execution environments for executing agent code that is included with the software agent and that causes data from the sensor to be collected; and instructions for implementing an agent packaging and instantiation module for storing the collected data in a data container of the software agent and for transmitting the software agent, along with the stored data, to a next destination.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barr, Jonathan L.; Tuffner, Francis K.; Hadley, Mark D.

    This document contains the Integrated Assessment Plan (IAP) for the Phase 2 Operational Demonstration (OD) of the Smart Power Infrastructure Demonstration for Energy Reliability (SPIDERS) Joint Capability Technology Demonstration (JCTD) project. SPIDERS will be conducted over a three year period with Phase 2 being conducted at Fort Carson, Colorado. This document includes the Operational Demonstration Execution Plan (ODEP) and the Operational Assessment Execution Plan (OAEP), as approved by the Operational Manager (OM) and the Integrated Management Team (IMT). The ODEP describes the process by which the OD is conducted and the OAEP describes the process by which the data collected from the OD is processed. The execution of the OD, in accordance with the ODEP and the subsequent execution of the OAEP, will generate the necessary data for the Quick Look Report (QLR) and the Utility Assessment Report (UAR). These reports will assess the ability of the SPIDERS JCTD to meet the four critical requirements listed in the Implementation Directive (ID).

  16. Intelligent sensor and controller framework for the power grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akyol, Bora A.; Haack, Jereme Nathan; Craig, Jr., Philip Allen

    Disclosed below are representative embodiments of methods, apparatus, and systems for monitoring and using data in an electric power grid. For example, one disclosed embodiment comprises a sensor for measuring an electrical characteristic of a power line, electrical generator, or electrical device; a network interface; a processor; and one or more computer-readable storage media storing computer-executable instructions. In this embodiment, the computer-executable instructions include instructions for implementing an authorization and authentication module for validating a software agent received at the network interface; instructions for implementing one or more agent execution environments for executing agent code that is included with the software agent and that causes data from the sensor to be collected; and instructions for implementing an agent packaging and instantiation module for storing the collected data in a data container of the software agent and for transmitting the software agent, along with the stored data, to a next destination.

  17. Autonomous Attitude Determination System (AADS). Volume 1: System description

    NASA Technical Reports Server (NTRS)

    Saralkar, K.; Frenkel, Y.; Klitsch, G.; Liu, K. S.; Lefferts, E.; Tasaki, K.; Snow, F.; Garrahan, J.

    1982-01-01

    Information necessary to understand the Autonomous Attitude Determination System (AADS) is presented. Topics include AADS requirements, program structure, algorithms, and system generation and execution.

  18. Executive Search Firms' Consideration of Person-Organization Fit in College and University Presidential Searches

    ERIC Educational Resources Information Center

    Turpin, James Christopher

    2013-01-01

    Largely what is known about P-O Fit stems from research conducted in business organizations. Surprisingly with such an important position as a college or university president, P-O Fit has not been empirically studied in the presidential selection process, much less from the perspective of the executive search firms that conduct these searches.…

  19. Demographic and Familial Predictors of Early Executive Function Development: Contribution of a Person-Centered Perspective

    ERIC Educational Resources Information Center

    Rhoades, Brittany L.; Greenberg, Mark T.; Lanza, Stephanie T.; Blair, Clancy

    2011-01-01

    Executive function (EF) skills are integral components of young children's growing competence, but little is known about the role of early family context and experiences in their development. We examined how demographic and familial risks during infancy predicted EF competence at 36 months of age in a large, predominantly low-income sample of…

  20. A Case Study in Design Thinking Applied Through Aviation Mission Support Tactical Advancements for the Next Generation (TANG)

    DTIC Science & Technology

    2017-12-01

    This is an examination of the research, execution, and follow-on developments supporting the Design Thinking event explored through Case Study ... research, execution, and follow-on developments supporting the Design Thinking event explored through case study methods. Additionally, the lenses of... total there have been two Naval Postgraduate School (NPS) case study theses on U.S. Navy innovation events as well as other works examining the

  1. Dynamically programmable cache

    NASA Astrophysics Data System (ADS)

    Nakkar, Mouna; Harding, John A.; Schwartz, David A.; Franzon, Paul D.; Conte, Thomas

    1998-10-01

    Reconfigurable machines have recently been used as co-processors to accelerate the execution of certain algorithms or program subroutines. The problems with the above approach include high reconfiguration time and limited partial reconfiguration. By far the most critical problems are: (1) the small on-chip memory, which results in slower execution time, and (2) small FPGA areas that cannot implement large subroutines. Dynamically Programmable Cache (DPC) is a novel architecture for embedded processors which offers solutions to the above problems. To solve memory access problems, DPC processors merge reconfigurable arrays with the data cache at various cache levels to create multi-level reconfigurable machines. As a result, DPC machines have both higher data accessibility and higher FPGA memory bandwidth. To solve the limited FPGA resource problem, DPC processors implement the multi-context switching (virtualization) concept. Virtualization allows the implementation of large subroutines with fewer FPGA cells. Additionally, DPC processors can parallelize the execution of several operations, resulting in faster execution time. In this paper, the speedup improvement for DPC machines is shown to be 5X faster than an Altera FLEX10K FPGA chip and 2X faster than a Sun Ultra1 SPARC station for two different algorithms (convolution and motion estimation).

  2. Evidence for selective executive function deficits in ecstasy/polydrug users.

    PubMed

    Fisk, J E; Montgomery, C

    2009-01-01

    Previous research has suggested that the separate aspects of executive functioning are differentially affected by ecstasy use. Although the inhibition process appears to be unaffected by ecstasy use, it is unclear whether this is true of heavy users under conditions of high demand. Tasks loading on the updating process have been shown to be adversely affected by ecstasy use. However, it remains unclear whether the deficits observed reflect the executive aspects of the tasks or whether they are domain general in nature affecting both verbal and visuo-spatial updating. Fourteen heavy ecstasy users (mean total lifetime use 1000 tablets), 39 light ecstasy users (mean total lifetime use 150 tablets) and 28 non-users were tested on tasks loading on the inhibition executive process (random letter generation) and the updating component process (letter updating, visuo-spatial updating and computation span). Heavy users were not impaired in random letter generation even under conditions designed to be more demanding. Ecstasy-related deficits were observed on all updating measures and were statistically significant for two of the three measures. Following controls for various aspects of cannabis use, statistically significant ecstasy-related deficits were obtained on all three updating measures. It was concluded that the inhibition process is unaffected by ecstasy use even among heavy users. By way of contrast, the updating process appears to be impaired in ecstasy users with the deficit apparently domain general in nature.

  3. Ethical issues in purchasing: a field study of Midwest hospitals.

    PubMed

    Tomaszewski, K; Motwani, J

    1995-01-01

    A large sum of money is spent annually by salespeople on gifts and favors for purchasing executives. The provision of gifts and favors to buyers remains a common practice despite the fact that it often leads to ethical conflicts for purchasing executives, sales managers, and salespeople. This paper investigates the perceptions of 51 purchasing executives of midwest hospitals regarding their behavior towards certain buying practices, the favors offered by vendors, favors actually accepted, as well as purchasers' discomfort and repayment levels regarding indebtedness. Based on the data analysis, this paper provides conclusions and directions for future research.

  4. Understanding Slat Noise Sources

    NASA Technical Reports Server (NTRS)

    Khorrami, Medhi R.

    2003-01-01

    Model-scale aeroacoustic tests of large civil transports point to the leading-edge slat as a dominant high-lift noise source in the low- to mid-frequencies during aircraft approach and landing. Using generic multi-element high-lift models, complementary experimental and numerical tests were carefully planned and executed at NASA in order to isolate slat noise sources and the underlying noise generation mechanisms. In this paper, a brief overview of the supporting computational effort undertaken at NASA Langley Research Center is provided. Both tonal and broadband aspects of slat noise are discussed. Recent gains in predicting a slat's far-field acoustic noise, current shortcomings of numerical simulations, and other remaining open issues are presented. Finally, an example of the ever-expanding role of computational simulations in noise reduction studies is also given.

  5. Creative Cognition and Brain Network Dynamics.

    PubMed

    Beaty, Roger E; Benedek, Mathias; Silvia, Paul J; Schacter, Daniel L

    2016-02-01

    Creative thinking is central to the arts, sciences, and everyday life. How does the brain produce creative thought? A series of recently published papers has begun to provide insight into this question, reporting a strikingly similar pattern of brain activity and connectivity across a range of creative tasks and domains, from divergent thinking to poetry composition to musical improvisation. This research suggests that creative thought involves dynamic interactions of large-scale brain systems, with the most compelling finding being that the default and executive control networks, which can show an antagonistic relation, tend to cooperate during creative cognition and artistic performance. These findings have implications for understanding how brain networks interact to support complex cognitive processes, particularly those involving goal-directed, self-generated thought. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Hang Bae

    Reliability testing was performed for the software of the Shutdown System (SDS) Computers for Wolsong Nuclear Power Plant Units 2, 3, and 4. Test profiles were applied to the SDS Computers and the outputs were compared with the predicted results generated by the oracle. Test software was written to execute the tests automatically. Random test profiles were generated using an analysis code. 11 refs., 1 fig.
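
    The test harness described here reduces to generating a random input profile, running it through the software under test, and comparing the output with an independently computed oracle prediction. The Python sketch below shows that loop with trivial stand-ins for the trip logic and the oracle; it is purely illustrative and unrelated to the actual Wolsong SDS software.

```python
# Sketch: random-profile reliability testing against an oracle.
import random

def software_under_test(pressure, temperature):
    """Stand-in for the shutdown logic being tested (illustrative thresholds)."""
    return pressure > 15.0 or temperature > 310.0        # True means "trip"

def oracle(pressure, temperature):
    """Independent prediction of the correct output for the same profile."""
    return pressure > 15.0 or temperature > 310.0

def run_campaign(n_profiles=10_000, seed=7):
    random.seed(seed)
    failures = 0
    for _ in range(n_profiles):
        p = random.uniform(10.0, 20.0)                   # random test profile
        t = random.uniform(290.0, 330.0)
        if software_under_test(p, t) != oracle(p, t):    # disagreement counts as a failure
            failures += 1
    return failures

if __name__ == "__main__":
    print("disagreements with oracle:", run_campaign())
```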

  7. SNAP: A computer program for generating symbolic network functions

    NASA Technical Reports Server (NTRS)

    Lin, P. M.; Alderson, G. E.

    1970-01-01

    The computer program SNAP (symbolic network analysis program) generates symbolic network functions for networks containing R, L, and C type elements and all four types of controlled sources. The program is efficient with respect to program storage and execution time. A discussion of the basic algorithms is presented, together with user's and programmer's guides.
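
    As a small taste of what generating a symbolic network function means, the SymPy snippet below derives the voltage transfer function of a series R-C divider symbolically; it is a generic illustration of symbolic circuit analysis, not SNAP's algorithm or output format.

```python
# Sketch: symbolic transfer function of a series R-C low-pass divider with SymPy.
import sympy as sp

R, C, s = sp.symbols("R C s", positive=True)

Zc = 1 / (s * C)                       # capacitor impedance
H = sp.simplify(Zc / (R + Zc))         # Vout/Vin by the impedance-divider rule

print(H)                               # -> 1/(C*R*s + 1)
```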

  8. Generation X and Objectionable Advertising: A Q-Sort of Senior Advertising Students' Attitudes toward Objectionable Advertising.

    ERIC Educational Resources Information Center

    Yssel, Johan C.; And Others

    A study investigated what a group of 29 senior advertising students, part of "Generation X," at a midwestern university found "objectionable" in 35 selected contemporary magazine advertising executions. Using a Q-sort, students ranked the advertisements and completed a personal interview. The majority of the advertisements that…

  9. Febrile Seizures and Behavioural and Cognitive Outcomes in Preschool Children: The Generation R Study

    ERIC Educational Resources Information Center

    Visser, Annemarie M.; Jaddoe, Vincent W. V.; Ghassabian, Akhgar; Schenk, Jacqueline J.; Verhulst, Frank C.; Hofman, Albert; Tiemeier, Henning; Moll, Henriette A.; Arts, Willem Frans M.

    2012-01-01

    Aim: General developmental outcome is known to be good in school-aged children who experienced febrile seizures. We examined cognitive and behavioural outcomes in preschool children with febrile seizures, including language and executive functioning outcomes. Method: This work was performed in the Generation R Study, a population-based cohort…

  10. Generation of novel motor sequences: the neural correlates of musical improvisation.

    PubMed

    Berkowitz, Aaron L; Ansari, Daniel

    2008-06-01

    While some motor behavior is instinctive and stereotyped or learned and re-executed, much action is a spontaneous response to a novel set of environmental conditions. The neural correlates of both pre-learned and cued motor sequences have been previously studied, but novel motor behavior has thus far not been examined through brain imaging. In this paper, we report a study of musical improvisation in trained pianists with functional magnetic resonance imaging (fMRI), using improvisation as a case study of novel action generation. We demonstrate that both rhythmic (temporal) and melodic (ordinal) motor sequence creation modulate activity in a network of brain regions comprised of the dorsal premotor cortex, the rostral cingulate zone of the anterior cingulate cortex, and the inferior frontal gyrus. These findings are consistent with a role for the dorsal premotor cortex in movement coordination, the rostral cingulate zone in voluntary selection, and the inferior frontal gyrus in sequence generation. Thus, the invention of novel motor sequences in musical improvisation recruits a network of brain regions coordinated to generate possible sequences, select among them, and execute the decided-upon sequence.

  11. Scalable asynchronous execution of cellular automata

    NASA Astrophysics Data System (ADS)

    Folino, Gianluigi; Giordano, Andrea; Mastroianni, Carlo

    2016-10-01

    The performance and scalability of cellular automata, when executed on parallel/distributed machines, are limited by the necessity of synchronizing all the nodes at each time step, i.e., a node can execute only after the execution of the previous step at all the other nodes. However, these synchronization requirements can be relaxed: a node can execute one step after synchronizing only with the adjacent nodes. In this fashion, different nodes can execute different time steps. This can be notably advantageous in many novel and increasingly popular applications of cellular automata, such as smart city applications, simulation of natural phenomena, etc., in which the execution times can be different and variable, due to the heterogeneity of machines and/or data and/or executed functions. Indeed, a longer execution time at a node does not slow down the execution at all the other nodes but only at the neighboring nodes. This is particularly advantageous when the nodes that act as bottlenecks vary during the application execution. The goal of the paper is to analyze the benefits that can be achieved with the described asynchronous implementation of cellular automata, when compared to the classical all-to-all synchronization pattern. The performance and scalability have been evaluated through a Petri net model, as this model is very useful to represent the synchronization barrier among nodes. We examined the usual case in which the territory is partitioned into a number of regions, and the computation associated with a region is assigned to a computing node. We considered both the cases of mono-dimensional and two-dimensional partitioning. The results show that the advantage obtained through the asynchronous execution, when compared to the all-to-all synchronous approach, is notable, and it can be as large as 90% in terms of speedup.
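
    The relaxed synchronization rule, namely that a region may compute step t+1 as soon as it and its neighbors have finished step t, can be expressed very compactly. The Python sketch below simulates that rule for a small ring of regions with heterogeneous per-step costs; it is a conceptual illustration, not the Petri net model used in the paper.

```python
# Sketch: neighbor-only synchronization for a ring of cellular-automaton regions.
# Region i may start step t+1 once regions i-1, i, and i+1 have all completed step t.
N_REGIONS, N_STEPS = 4, 3
cost = [1.0, 1.0, 3.0, 1.0]                 # heterogeneous per-step execution times

finish = [[None] * (N_STEPS + 1) for _ in range(N_REGIONS)]
for i in range(N_REGIONS):
    finish[i][0] = 0.0                      # every region has "finished" step 0 at time 0

for t in range(1, N_STEPS + 1):
    for i in range(N_REGIONS):
        left, right = (i - 1) % N_REGIONS, (i + 1) % N_REGIONS
        # ready when this region and both neighbors have finished the previous step
        ready = max(finish[i][t - 1], finish[left][t - 1], finish[right][t - 1])
        finish[i][t] = ready + cost[i]

print("completion times per region:", [finish[i][N_STEPS] for i in range(N_REGIONS)])
# The slow region's delay propagates only step by step through its neighbors,
# instead of stalling every region at every step as an all-to-all barrier would.
```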

  12. The relationships of 'ecstasy' (MDMA) and cannabis use to impaired executive inhibition and access to semantic long-term memory.

    PubMed

    Murphy, Philip N; Erwin, Philip G; Maciver, Linda; Fisk, John E; Larkin, Derek; Wareing, Michelle; Montgomery, Catharine; Hilton, Joanne; Tames, Frank J; Bradley, Belinda; Yanulevitch, Kate; Ralley, Richard

    2011-10-01

    This study aimed to examine the relationship between the consumption of ecstasy (3,4-methylenedioxymethamphetamine (MDMA)) and cannabis, and performance on the random letter generation task which generates dependent variables drawing upon executive inhibition and access to semantic long-term memory (LTM). The participant group was a between-participant independent variable with users of both ecstasy and cannabis (E/C group, n = 15), users of cannabis but not ecstasy (CA group, n = 13) and controls with no exposure to these drugs (CO group, n = 12). Dependent variables measured violations of randomness: number of repeat sequences, number of alphabetical sequences (both drawing upon inhibition) and redundancy (drawing upon access to semantic LTM). E/C participants showed significantly higher redundancy than CO participants but did not differ from CA participants. There were no significant effects for the other dependent variables. A regression model comprising intelligence measures and estimates of ecstasy and cannabis consumption predicted redundancy scores, but only cannabis consumption contributed significantly to this prediction. Impaired access to semantic LTM may be related to cannabis consumption, although the involvement of ecstasy and other stimulant drugs cannot be excluded here. Executive inhibitory functioning, as measured by the random letter generation task, is unrelated to ecstasy and cannabis consumption. Copyright © 2011 John Wiley & Sons, Ltd.

  13. Intraindividual differences in executive functions during childhood: the role of emotions.

    PubMed

    Pnevmatikos, Dimitris; Trikkaliotis, Ioannis

    2013-06-01

    Intraindividual differences in executive functions (EFs) have rarely been investigated. In this study, we addressed the question of whether the emotional fluctuations that schoolchildren experience in their classroom settings could generate substantial intraindividual differences in their EFs and, more specifically, in the fundamental unifying component of EFs, their inhibition function. We designed an experimental study with ecological validity within the school setting, involving schoolchildren of three age groups (8-, 10-, and 12-year-olds), and executed three experiments. In Experiment 1, using a between-participants design, we isolated a classroom episode that, compared with the other episodes, generated significant differences in inhibitory function in a consequent Go/NoGo task. This was an episode that induced frustration after the experience of anxiety due to the uncertainty. Experiment 2, using a within-participants design, confirmed both the induced emotions from the episode and the intraindividual variability in schoolchildren's inhibition accuracy in the consequent Go/NoGo task. Experiment 3, again using a within-participants design, examined whether the same episode could generate intraindividual differences in a more demanding inhibition task, namely the anti-saccade task. The experiment confirmed the previous evidence; the episode generated high variability that in some age groups accounted for more than 1.5 standard deviations from the interindividual variability between the schoolchildren of the same age. Results showed that, regardless of their sex and the developmental progression in their inhibition with age, the variability induced within participants from the experienced frustration was very high compared with the interindividual variability of the same age group. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Processor register error correction management

    DOEpatents

    Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.

    2016-12-27

    Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.
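
    One way to picture the claimed flow is that the compiler (or a post-pass) ranks logical registers by how much downstream work consumes them, records the most sensitive ones in an error-correction table, and the processor then duplicates exactly those. The Python sketch below mimics only the table-building half on a toy instruction list; it is a loose illustration of the idea, not the patented implementation.

```python
# Sketch: build an "error-correction table" of sensitive logical registers by counting
# how often each register's value is consumed downstream in a toy instruction stream.
from collections import Counter

# Toy instruction stream: (destination register, [source registers])
instructions = [
    ("r1", ["r0"]),
    ("r2", ["r1", "r0"]),
    ("r3", ["r1", "r2"]),
    ("r4", ["r1"]),
]

def build_error_correction_table(instrs, top_k=1):
    uses = Counter(src for _, srcs in instrs for src in srcs)
    sensitive = [reg for reg, _ in uses.most_common(top_k)]
    # The table maps each sensitive logical register to a shadow (duplicate) register.
    return {reg: f"{reg}_shadow" for reg in sensitive}

if __name__ == "__main__":
    table = build_error_correction_table(instructions, top_k=2)
    print("error-correction table:", table)    # e.g. {'r1': 'r1_shadow', 'r0': 'r0_shadow'}
```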

  15. MAGI: many-component galaxy initializer

    NASA Astrophysics Data System (ADS)

    Miki, Yohei; Umemura, Masayuki

    2018-04-01

    Providing initial conditions is an essential procedure for numerical simulations of galaxies. The initial conditions for idealized individual galaxies in N-body simulations should resemble observed galaxies and be dynamically stable for time-scales much longer than their characteristic dynamical times. However, generating a galaxy model ab initio as a system in dynamical equilibrium is a difficult task, since a galaxy contains several components, including a bulge, disc, and halo. Moreover, it is desirable that the initial-condition generator be fast and easy to use. We have now developed an initial-condition generator for galactic N-body simulations that satisfies these requirements. The developed generator adopts a distribution-function-based method, and it supports various kinds of density models, including custom-tabulated inputs and the presence of more than one disc. We tested the dynamical stability of systems generated by our code, representing early- and late-type galaxies, with N = 2,097,152 and 8,388,608 particles, respectively, and we found that the model galaxies maintain their initial distributions for at least 1 Gyr. The execution times required to generate the two models were 8.5 and 221.7 seconds, respectively, which is negligible compared to typical execution times for N-body simulations. The code is provided as open-source software and is publicly and freely available at https://bitbucket.org/ymiki/magi.

  16. The RiverFish Approach to Business Process Modeling: Linking Business Steps to Control-Flow Patterns

    NASA Astrophysics Data System (ADS)

    Zuliane, Devanir; Oikawa, Marcio K.; Malkowski, Simon; Alcazar, José Perez; Ferreira, João Eduardo

    Despite the recent advances in the area of Business Process Management (BPM), today’s business processes have largely been implemented without clearly defined conceptual modeling. This results in growing difficulties for identification, maintenance, and reuse of rules, processes, and control-flow patterns. To mitigate these problems in future implementations, we propose a new approach to business process modeling using conceptual schemas, which represent hierarchies of concepts for rules and processes shared among collaborating information systems. This methodology bridges the gap between conceptual model description and identification of actual control-flow patterns for workflow implementation. We identify modeling guidelines that are characterized by clear phase separation, step-by-step execution, and process building through diagrams and tables. The separation of business process modeling in seven mutually exclusive phases clearly delimits information technology from business expertise. The sequential execution of these phases leads to the step-by-step creation of complex control-flow graphs. The process model is refined through intuitive table and diagram generation in each phase. Not only does the rigorous application of our modeling framework minimize the impact of rule and process changes, but it also facilitates the identification and maintenance of control-flow patterns in BPM-based information system architectures.

  17. On program restructuring, scheduling, and communication for parallel processor systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polychronopoulos, Constantine D.

    1986-08-01

    This dissertation discusses several software and hardware aspects of program execution on large-scale, high-performance parallel processor systems. The issues covered are program restructuring, partitioning, scheduling and interprocessor communication, synchronization, and hardware design issues of specialized units. All this work was performed focusing on a single goal: to maximize program speedup, or equivalently, to minimize parallel execution time. Parafrase, a Fortran restructuring compiler, was used to transform programs into a parallel form and conduct experiments. Two new program restructuring techniques are presented, loop coalescing and subscript blocking. Compile-time and run-time scheduling schemes are covered extensively. Depending on the program construct, these algorithms generate optimal or near-optimal schedules. For the case of arbitrarily nested hybrid loops, two optimal scheduling algorithms for dynamic and static scheduling are presented. Simulation results are given for a new dynamic scheduling algorithm. The performance of this algorithm is compared to that of self-scheduling. Techniques for program partitioning and minimization of interprocessor communication for idealized program models and for real Fortran programs are also discussed. The close relationship between scheduling, interprocessor communication, and synchronization becomes apparent at several points in this work. Finally, the impact of various types of overhead on program speedup and experimental results are presented.
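
    Of the restructuring techniques mentioned, loop coalescing is the easiest to show in miniature: a doubly nested loop is flattened into a single loop whose index is decomposed back into the original pair, giving the scheduler one large iteration space to distribute. The Python sketch below demonstrates only the transformation; it is not drawn from Parafrase.

```python
# Sketch: loop coalescing, flattening an N x M nest into one loop of N*M iterations.
N, M = 3, 4

def body(i, j, out):
    out.append((i, j))               # stands in for the original loop body

# Original doubly nested loop.
nested = []
for i in range(N):
    for j in range(M):
        body(i, j, nested)

# Coalesced loop: a single index k, with (i, j) recovered as (k // M, k % M).
# A scheduler can now hand out chunks of the single k range to processors.
coalesced = []
for k in range(N * M):
    body(k // M, k % M, coalesced)

assert nested == coalesced
print("coalesced", N * M, "iterations; iteration orders match:", nested == coalesced)
```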

  18. Hyperactivity in boys with attention-deficit/hyperactivity disorder (ADHD): The role of executive and non-executive functions.

    PubMed

    Hudec, Kristen L; Alderson, R Matt; Patros, Connor H G; Lea, Sarah E; Tarle, Stephanie J; Kasper, Lisa J

    2015-01-01

    Motor activity of boys (age 8-12 years) with (n=19) and without (n=18) ADHD was objectively measured with actigraphy across experimental conditions that varied with regard to demands on executive functions. Activity exhibited during two n-back (1-back, 2-back) working memory tasks was compared to activity during a choice-reaction time (CRT) task that placed relatively fewer demands on executive processes and during a simple reaction time (SRT) task that required mostly automatic processing with minimal executive demands. Results indicated that children in the ADHD group exhibited greater activity compared to children in the non-ADHD group. Further, both groups exhibited the greatest activity during conditions with high working memory demands, followed by the reaction time and control task conditions, respectively. The findings indicate that large-magnitude increases in motor activity are predominantly associated with increased demands on working memory, though demands on non-executive processes are sufficient to elicit small to moderate increases in motor activity as well. Published by Elsevier Ltd.

  19. Information Superiority generated through proper application of Geoinformatics

    NASA Astrophysics Data System (ADS)

    Teichmann, F.

    2012-04-01

    Information management, and especially geoscience information delivery, is a very delicate task. If it is carried out successfully, geoscientific data will provide the main foundation of information superiority. However, improper implementation of geodata generation, assimilation, distribution, or storage will not only waste valuable resources like manpower or money, but could also give rise to crucial deficiencies in knowledge and might lead to potentially extremely harmful disasters or wrong decisions. Comprehensive Approach, Effect Based Operations, and Network Enabled Capabilities are the current buzz terms in the security regime. However, they also apply to various interdisciplinary tasks like catastrophe relief missions, civil task operations, or even day-to-day business operations where geoscience data is used. Based on experience in the application of geoscience data for defence purposes, the following procedure, or tool box, for generating geodata should lead to the desired information superiority: 1. Understand and analyse the mission, the task, and the environment for which the geodata is needed. 2. Carry out an Information Exchange Requirement between the user or customer and the geodata provider. 3. Implement current interoperability standards and a coherent metadata structure. 4. Execute innovative data generation, data provision, data assimilation, and data storage. 5. Apply a cost-effective and reasonable data life cycle. 6. Implement IT security by focusing on the three pillar concepts of Integrity, Availability, and Confidentiality of the critical data. 7. Draft and execute a service level agreement or a memorandum of understanding between the involved parties. 8. Execute a Continuous Improvement Cycle. These ideas from the IT world should be transferred into the geoscience community and applied in a wide set of scenarios. A standardized approach to how geodata is generated, provided, handled, distributed, and stored can reduce costs, strengthen the ties between service customer and geodata provider, and improve the contribution geoscience can make to achieving information superiority for decision makers.

  20. Integrating planning and reactive control

    NASA Technical Reports Server (NTRS)

    Wilkins, David E.; Myers, Karen L.

    1994-01-01

    Our research is developing persistent agents that can achieve complex tasks in dynamic and uncertain environments. We refer to such agents as taskable, reactive agents. An agent of this type requires a number of capabilities. The ability to execute complex tasks necessitates the use of strategic plans for accomplishing tasks; hence, the agent must be able to synthesize new plans at run time. The dynamic nature of the environment requires that the agent be able to deal with unpredictable changes in its world. As such, agents must be able to react to unanticipated events by taking appropriate actions in a timely manner, while continuing activities that support current goals. The unpredictability of the world could lead to failure of plans generated for individual tasks. Agents must have the ability to recover from failures by adapting their activities to the new situation, or replanning if the world changes sufficiently. Finally, the agent should be able to perform in the face of uncertainty. The Cypress system, described here, provides a framework for creating taskable, reactive agents. Several features distinguish our approach: (1) the generation and execution of complex plans with parallel actions; (2) the integration of goal-driven and event driven activities during execution; (3) the use of evidential reasoning for dealing with uncertainty; and (4) the use of replanning to handle run-time execution problems. Our model for a taskable, reactive agent has two main intelligent components, an executor and a planner. The two components share a library of possible actions that the system can take. The library encompasses a full range of action representations, including plans, planning operators, and executable procedures such as predefined standard operating procedures (SOP's). These three classes of actions span multiple levels of abstraction.

  1. Integrating planning and reactive control

    NASA Astrophysics Data System (ADS)

    Wilkins, David E.; Myers, Karen L.

    1994-10-01

    Our research is developing persistent agents that can achieve complex tasks in dynamic and uncertain environments. We refer to such agents as taskable, reactive agents. An agent of this type requires a number of capabilities. The ability to execute complex tasks necessitates the use of strategic plans for accomplishing tasks; hence, the agent must be able to synthesize new plans at run time. The dynamic nature of the environment requires that the agent be able to deal with unpredictable changes in its world. As such, agents must be able to react to unanticipated events by taking appropriate actions in a timely manner, while continuing activities that support current goals. The unpredictability of the world could lead to failure of plans generated for individual tasks. Agents must have the ability to recover from failures by adapting their activities to the new situation, or replanning if the world changes sufficiently. Finally, the agent should be able to perform in the face of uncertainty. The Cypress system, described here, provides a framework for creating taskable, reactive agents. Several features distinguish our approach: (1) the generation and execution of complex plans with parallel actions; (2) the integration of goal-driven and event driven activities during execution; (3) the use of evidential reasoning for dealing with uncertainty; and (4) the use of replanning to handle run-time execution problems. Our model for a taskable, reactive agent has two main intelligent components, an executor and a planner. The two components share a library of possible actions that the system can take. The library encompasses a full range of action representations, including plans, planning operators, and executable procedures such as predefined standard operating procedures (SOP's). These three classes of actions span multiple levels of abstraction.

  2. Individual differences in control of language interference in late bilinguals are mainly related to general executive abilities

    PubMed Central

    2010-01-01

    Background Recent research based on comparisons between bilinguals and monolinguals postulates that bilingualism enhances cognitive control functions, because the parallel activation of languages necessitates control of interference. In a novel approach we investigated two groups of bilinguals, distinguished by their susceptibility to cross-language interference, asking whether bilinguals with strong language control abilities ("non-switchers") have an advantage in executive functions (inhibition of irrelevant information, problem solving, planning efficiency, generative fluency and self-monitoring) compared to those bilinguals showing weaker language control abilities ("switchers"). Methods 29 late bilinguals (21 women) were evaluated using various cognitive control neuropsychological tests [e.g., Tower of Hanoi, Ruff Figural Fluency Task, Divided Attention, Go/noGo] tapping executive functions as well as four subtests of the Wechsler Adult Intelligence Scale. The analysis involved t-tests (two independent samples). Non-switchers (n = 16) were distinguished from switchers (n = 13) by their performance observed in a bilingual picture-naming task. Results The non-switcher group demonstrated a better performance on the Tower of Hanoi and Ruff Figural Fluency task, faster reaction time in a Go/noGo and Divided Attention task, and produced significantly fewer errors in the Tower of Hanoi, Go/noGo, and Divided Attention tasks when compared to the switchers. Non-switchers performed significantly better on two verbal subtests of the Wechsler Adult Intelligence Scale (Information and Similarity), but not on the Performance subtests (Picture Completion, Block Design). Conclusions The present results suggest that bilinguals with stronger language control have indeed a cognitive advantage in the administered tests involving executive functions, in particular inhibition, self-monitoring, problem solving, and generative fluency, and in two of the intelligence tests. What remains unclear is the direction of the relationship between executive functions and language control abilities. PMID:20180956

  3. The new road to the top.

    PubMed

    Cappelli, Peter; Hamori, Monika

    2005-01-01

    By comparing the top executives of 1980's Fortune 100 companies with the top brass of firms in the 2001 list, the authors have quantified a transformation that until now has been largely anecdotal. A dramatic shift in executive careers, and in executives themselves, has occurred over the past two decades. Today's Fortune 100 executives are younger, more of them are female, and fewer were educated at elite institutions. They're also making their way to the top more quickly. They're taking fewer jobs along the way, and they increasingly move from one company to the next as their careers unfold. In their wide-ranging analysis, the authors offer a number of insights. For one thing, it has become clear that there are huge advantages to working in a growing firm. For another, the firms that have been big for a long time still provide the most extensive training and development. They also offer relatively long promotion ladders--hence the common wisdom that these "academy companies" are great to have been from. While women were disproportionately scarce among the most senior ranks of executives in 2001, those who arrived got there faster and at a younger age than their male colleagues. Perhaps the career hurdles that women face had blocked all but the most highly qualified female managers, who then proceeded to rise quickly. In the future, a record of good P&L performance may become even more critical to getting hired and advancing in the largest companies. As a result, we may see a reversal of the usual flow of talent, which has been from the academy companies to smaller firms. It may be increasingly common for executives to develop records of performance in small companies, or even as entrepreneurs, and then seek positions in large corporations.

  4. Fracture Mechanics Analysis of Single and Double Rows of Fastener Holes Loaded in Bearing

    DTIC Science & Technology

    1976-04-01

    the following subprograms for execution: 1. ASRL FEABL-2 subroutines ASMLTV, ASMSUB, BCON, FACT, ORK, QBACK, SETUP, SIMULQ, STACON, and XTRACT. 2. IBM ...based on program code generated by IBM FORTRAN-G1 and FORTRAN-H compilers, with demonstration runs made on an IBM 370/168 computer. Programs SROW and...DROW are supplied ready to execute on systems with IBM-standard FORTRAN unit numbers for the card reader (unit 5) and line printer (unit 6). The

  5. Creating and Implementing an Offshore Graduate Program: A Case Study of Leadership and Development of the Global Executive MBA Program

    ERIC Educational Resources Information Center

    Herrera, Marisa L.

    2013-01-01

    This study applies the literature on leadership framing to the globalization of higher education to understand the development of the Global Executive MBA program at a large university. The purpose of the study was to provide administrators, educators and university leaders an understanding as to how to respond to globalization and, secondly, to…

  6. Scalable Cloning on Large-Scale GPU Platforms with Application to Time-Stepped Simulations on Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B.; Perumalla, Kalyan S.

    Cloning is a technique to efficiently simulate a tree of multiple what-if scenarios that are unraveled during the course of a base simulation. However, cloned execution is highly challenging to realize on large, distributed memory computing platforms, due to the dynamic nature of the computational load across clones, and due to the complex dependencies spanning the clone tree. In this paper, we present the conceptual simulation framework, algorithmic foundations, and runtime interface of CloneX, a new system we designed for scalable simulation cloning. It efficiently and dynamically creates whole logical copies of a dynamic tree of simulations across a large parallel system without full physical duplication of computation and memory. The performance of a prototype implementation executed on up to 1,024 graphical processing units of a supercomputing system has been evaluated with three benchmarks—heat diffusion, forest fire, and disease propagation models—delivering a speed up of over two orders of magnitude compared to replicated runs. Finally, the results demonstrate a significantly faster and scalable way to execute many what-if scenario ensembles of large simulations via cloning using the CloneX interface.
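
    The cloning idea can be illustrated with a much-simplified, single-process sketch. CloneX itself targets distributed GPU platforms; the Python code below is only an assumed toy model of a clone tree with copy-on-write state, not the CloneX interface.

      # Toy sketch of simulation cloning with copy-on-write state sharing.
      # Each clone stores only the state entries it has modified; reads fall
      # back to its parent, so whole what-if branches avoid full duplication.

      class Clone:
          def __init__(self, parent=None):
              self.parent = parent
              self.local = {}            # only the state this clone has changed

          def read(self, key):
              node = self
              while node is not None:
                  if key in node.local:
                      return node.local[key]
                  node = node.parent
              raise KeyError(key)

          def write(self, key, value):
              self.local[key] = value    # copy-on-write: never touch the parent

          def spawn(self):
              return Clone(parent=self)  # unravel a new what-if branch

      base = Clone()
      base.write("temperature", 20.0)

      hot = base.spawn()                 # what-if scenario: heat source added
      hot.write("temperature", 35.0)

      print(base.read("temperature"))    # 20.0 -- base run is unaffected
      print(hot.read("temperature"))     # 35.0 -- clone sees its own override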

  7. Validating the simulation of large-scale parallel applications using statistical characteristics

    DOE PAGES

    Zhang, Deli; Wilke, Jeremiah; Hendry, Gilbert; ...

    2016-03-01

    Simulation is a widely adopted method to analyze and predict the performance of large-scale parallel applications. Validating the hardware model is highly important for complex simulations with a large number of parameters. Common practice involves calculating the percent error between the projected and the real execution time of a benchmark program. However, in a high-dimensional parameter space, this coarse-grained approach often suffers from parameter insensitivity, which may not be known a priori. Moreover, the traditional approach cannot be applied to the validation of software models, such as application skeletons used in online simulations. In this work, we present a methodology and a toolset for validating both hardware and software models by quantitatively comparing fine-grained statistical characteristics obtained from execution traces. Although statistical information has been used in tasks like performance optimization, this is the first attempt to apply it to simulation validation. Lastly, our experimental results show that the proposed evaluation approach offers significant improvement in fidelity when compared to evaluation using total execution time, and the proposed metrics serve as reliable criteria that progress toward automating the simulation tuning process.
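
    A minimal sketch of the underlying idea is to compare a fine-grained statistic of two execution traces instead of only total execution time. The Kolmogorov-Smirnov distance used below is an illustrative choice on made-up data, not necessarily the metric of the paper.

      # Illustrative sketch: validate a simulated trace against a real trace by
      # comparing the distribution of per-event times (here via a simple
      # Kolmogorov-Smirnov distance) rather than only total execution time.

      def ks_distance(sample_a, sample_b):
          """Empirical KS distance between two samples of event durations."""
          a, b = sorted(sample_a), sorted(sample_b)
          values = sorted(set(a) | set(b))
          cdf = lambda s, x: sum(v <= x for v in s) / len(s)
          return max(abs(cdf(a, x) - cdf(b, x)) for x in values)

      real_trace = [1.02, 0.98, 1.10, 1.05, 0.95, 1.00, 1.07]   # measured (ms)
      sim_trace  = [1.00, 1.01, 1.08, 1.04, 0.97, 1.02, 1.06]   # simulated (ms)

      print(f"total-time error: "
            f"{abs(sum(sim_trace) - sum(real_trace)) / sum(real_trace):.3%}")
      print(f"KS distance of per-event times: {ks_distance(real_trace, sim_trace):.3f}")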

  8. Visual analysis of inter-process communication for large-scale parallel computing.

    PubMed

    Muelder, Chris; Gygi, Francois; Ma, Kwan-Liu

    2009-01-01

    In serial computation, program profiling is often helpful for optimization of key sections of code. When moving to parallel computation, not only does the code execution need to be considered but also communication between the different processes which can induce delays that are detrimental to performance. As the number of processes increases, so does the impact of the communication delays on performance. For large-scale parallel applications, it is critical to understand how the communication impacts performance in order to make the code more efficient. There are several tools available for visualizing program execution and communications on parallel systems. These tools generally provide either views which statistically summarize the entire program execution or process-centric views. However, process-centric visualizations do not scale well as the number of processes gets very large. In particular, the most common representation of parallel processes is a Gantt chart with a row for each process. As the number of processes increases, these charts can become difficult to work with and can even exceed screen resolution. We propose a new visualization approach that affords more scalability and then demonstrate it on systems running with up to 16,384 processes.

  9. Scalable Cloning on Large-Scale GPU Platforms with Application to Time-Stepped Simulations on Grids

    DOE PAGES

    Yoginath, Srikanth B.; Perumalla, Kalyan S.

    2018-01-31

    Cloning is a technique to efficiently simulate a tree of multiple what-if scenarios that are unraveled during the course of a base simulation. However, cloned execution is highly challenging to realize on large, distributed memory computing platforms, due to the dynamic nature of the computational load across clones, and due to the complex dependencies spanning the clone tree. In this paper, we present the conceptual simulation framework, algorithmic foundations, and runtime interface of CloneX, a new system we designed for scalable simulation cloning. It efficiently and dynamically creates whole logical copies of a dynamic tree of simulations across a large parallel system without full physical duplication of computation and memory. The performance of a prototype implementation executed on up to 1,024 graphical processing units of a supercomputing system has been evaluated with three benchmarks—heat diffusion, forest fire, and disease propagation models—delivering a speed up of over two orders of magnitude compared to replicated runs. Finally, the results demonstrate a significantly faster and scalable way to execute many what-if scenario ensembles of large simulations via cloning using the CloneX interface.

  10. Aeromechanical stability analysis of COPTER

    NASA Technical Reports Server (NTRS)

    Yin, Sheng K.; Yen, Jing G.

    1988-01-01

    A plan was formed for developing a comprehensive, second-generation system with analytical capabilities for predicting performance, loads and vibration, handling qualities, aeromechanical stability, and acoustics. This second-generation system, named COPTER (COmprehensive Program for Theoretical Evaluation of Rotorcraft), is designed for operational efficiency, user friendliness, coding readability, maintainability, transportability, modularity, and expandability for future growth. The system is divided into an executive, a data deck validator, and a technology complex. At present, a simple executive, the data deck validator, and the aeromechanical stability module of the technology complex have been implemented. The system is described briefly, the implementation of the technology module is discussed, and correlation data are presented. The correlation includes hingeless-rotor isolated stability, hingeless-rotor ground-resonance stability, and air-resonance stability of an advanced bearingless rotor in forward flight.

  11. Compiling global name-space programs for distributed execution

    NASA Technical Reports Server (NTRS)

    Koelbel, Charles; Mehrotra, Piyush

    1990-01-01

    Distributed memory machines do not provide hardware support for a global address space. Thus programmers are forced to partition the data across the memories of the architecture and use explicit message passing to communicate data between processors. The compiler support required to allow programmers to express their algorithms using a global name-space is examined. A general method is presented for the analysis of a high-level source program and its translation into a set of independently executing tasks communicating via messages. If the compiler has enough information, this translation can be carried out at compile-time. Otherwise, run-time code is generated to implement the required data movement. The analysis required in both situations is described and the performance of the generated code on the Intel iPSC/2 is presented.
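
    The translation described here, from a global name-space program to message-passing tasks over partitioned data, can be sketched for a one-dimensional stencil. The block partitioning and explicit halo exchange below are a generic Python illustration of the idea, not the compiler's actual output.

      # Generic sketch of the compile-time translation idea: a global array is
      # block-partitioned across tasks, and accesses to non-local elements are
      # turned into explicit messages (here, a halo exchange between two tasks).

      def step(local, left_halo, right_halo):
          """One Jacobi-style update on a task's local block plus received halos."""
          padded = [left_halo] + local + [right_halo]
          return [(padded[i - 1] + padded[i + 1]) / 2.0 for i in range(1, len(padded) - 1)]

      global_array = [float(i) for i in range(8)]
      block0, block1 = global_array[:4], global_array[4:]   # owner-computes partition

      # "Messages": each task sends its boundary element to its neighbour.
      msg_0_to_1 = block0[-1]
      msg_1_to_0 = block1[0]

      block0 = step(block0, left_halo=block0[0], right_halo=msg_1_to_0)
      block1 = step(block1, left_halo=msg_0_to_1, right_halo=block1[-1])

      print(block0 + block1)   # distributed result of one global update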

  12. Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder

    NASA Technical Reports Server (NTRS)

    Staats, Matt

    2009-01-01

    We present work on a prototype tool based on the JavaPathfinder (JPF) model checker for automatically generating tests satisfying the MC/DC code coverage criterion. Using the Eclipse IDE, developers and testers can quickly instrument Java source code with JPF annotations covering all MC/DC coverage obligations, and JPF can then be used to automatically generate tests that satisfy these obligations. The prototype extension to JPF enables various tasks useful in automatic test generation to be performed, such as test suite reduction and execution of generated tests.
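
    As a rough illustration of what an MC/DC obligation is, independent of JPF and of Java, the following hypothetical Python sketch enumerates, for a small boolean decision, the pairs of test inputs in which flipping exactly one condition flips the decision outcome.

      # Hypothetical sketch of MC/DC coverage obligations for a small decision.
      # For each condition we look for a pair of inputs that differ only in that
      # condition and produce different decision outcomes.

      from itertools import product

      def decision(a, b, c):
          return (a and b) or c          # the decision under test

      conditions = ["a", "b", "c"]
      assignments = list(product([False, True], repeat=len(conditions)))

      for i, name in enumerate(conditions):
          pairs = []
          for values in assignments:
              flipped = list(values)
              flipped[i] = not flipped[i]
              if decision(*values) != decision(*flipped):
                  pairs.append((values, tuple(flipped)))
          print(f"condition {name}: {len(pairs) // 2} independence pair(s), "
                f"e.g. {pairs[0] if pairs else 'none'}")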

  13. IpexT: Integrated Planning and Execution for Military Satellite Tele-Communications

    NASA Technical Reports Server (NTRS)

    Plaunt, Christian; Rajan, Kanna

    2004-01-01

    The next generation of military communications satellites may be designed as a fast packet-switched constellation of spacecraft able to withstand substantial bandwidth capacity fluctuation in the face of dynamic resource utilization and rapid environmental changes including jamming of communication frequencies and unstable weather phenomena. We are in the process of designing an integrated scheduling and execution tool which will aid in the analysis of the design parameters needed for building such a distributed system for nominal and battlefield communications. This paper discusses the design of such a system based on a temporal constraint posting planner/scheduler and a smart executive which can cope with a dynamic environment to make better use of bandwidth than the current circuit-switched approach.

  14. Associations among false belief understanding, counterfactual reasoning, and executive function.

    PubMed

    Guajardo, Nicole R; Parker, Jessica; Turley-Ames, Kandi

    2009-09-01

    The primary purposes of the present study were to clarify previous work on the association between counterfactual thinking and false belief performance to determine (1) whether these two variables are related and (2) if so, whether executive function skills mediate the relationship. A total of 92 3-, 4-, and 5-year-olds completed false belief, counterfactual, working memory, representational flexibility, and language measures. Counterfactual reasoning accounted for limited unique variance in false belief. Both working memory and representational flexibility partially mediated the relationship between counterfactual and false belief. Children, like adults, also generated various types of counterfactual statements to differing degrees. Results demonstrated the importance of language and executive function for both counterfactual and false belief. Implications are discussed.

  15. Global investigation of protein-protein interactions in yeast Saccharomyces cerevisiae using re-occurring short polypeptide sequences.

    PubMed

    Pitre, S; North, C; Alamgir, M; Jessulat, M; Chan, A; Luo, X; Green, J R; Dumontier, M; Dehne, F; Golshani, A

    2008-08-01

    Protein-protein interaction (PPI) maps provide insight into cellular biology and have received considerable attention in the post-genomic era. While large-scale experimental approaches have generated large collections of experimentally determined PPIs, technical limitations preclude certain PPIs from detection. Recently, we demonstrated that yeast PPIs can be computationally predicted using re-occurring short polypeptide sequences between known interacting protein pairs. However, the computational requirements and low specificity made this method unsuitable for large-scale investigations. Here, we report an improved approach, which exhibits a specificity of approximately 99.95% and executes 16,000 times faster. Importantly, we report the first all-to-all sequence-based computational screen of PPIs in yeast, Saccharomyces cerevisiae in which we identify 29,589 high confidence interactions of approximately 2 x 10(7) possible pairs. Of these, 14,438 PPIs have not been previously reported and may represent novel interactions. In particular, these results reveal a richer set of membrane protein interactions, not readily amenable to experimental investigations. From the novel PPIs, a novel putative protein complex comprised largely of membrane proteins was revealed. In addition, two novel gene functions were predicted and experimentally confirmed to affect the efficiency of non-homologous end-joining, providing further support for the usefulness of the identified PPIs in biological investigations.
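
    A much-simplified sketch of the underlying signal scores a candidate protein pair by short subsequences (k-mers) that re-occur among known interacting pairs. The window length, scoring rule and toy sequences in the Python code below are illustrative assumptions, not the published method.

      # Illustrative sketch: score a candidate protein pair by counting short
      # polypeptide windows (k-mers) that also co-occur in known interacting
      # pairs. Window length, toy sequences and the scoring rule are assumptions.

      def kmers(seq, k=3):
          return {seq[i:i + k] for i in range(len(seq) - k + 1)}

      known_interactions = [
          ("MKLVINGKTLKG", "GDSLTAPVGQQM"),
          ("MKWVTFISLLLL", "GDAHKSEVAHRF"),
      ]

      # Collect k-mer pairs observed across known interacting protein pairs.
      cooccurring = set()
      for p, q in known_interactions:
          cooccurring |= {(a, b) for a in kmers(p) for b in kmers(q)}

      def interaction_score(seq_a, seq_b):
          """Fraction of k-mer pairs of (seq_a, seq_b) seen in known interactions."""
          pairs = {(a, b) for a in kmers(seq_a) for b in kmers(seq_b)}
          return len(pairs & cooccurring) / len(pairs)

      print(interaction_score("MKLVINGQQACD", "GDSLTAPWWWWW"))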

  16. CLAST: CUDA implemented large-scale alignment search tool.

    PubMed

    Yano, Masahiro; Mori, Hiroshi; Akiyama, Yutaka; Yamada, Takuji; Kurokawa, Ken

    2014-12-11

    Metagenomics is a powerful methodology to study microbial communities, but it is highly dependent on nucleotide sequence similarity searching against sequence databases. Metagenomic analyses with next-generation sequencing technologies produce enormous numbers of reads from microbial communities, and many reads are derived from microbes whose genomes have not yet been sequenced, limiting the usefulness of existing sequence similarity search tools. Therefore, there is a clear need for a sequence similarity search tool that can rapidly detect weak similarity in large datasets. We developed a tool, which we named CLAST (CUDA implemented large-scale alignment search tool), that enables analyses of millions of reads and thousands of reference genome sequences, and runs on NVIDIA Fermi architecture graphics processing units. CLAST has four main advantages over existing alignment tools. First, CLAST was capable of identifying sequence similarities ~80.8 times faster than BLAST and 9.6 times faster than BLAT. Second, CLAST executes global alignment as the default (local alignment is also an option), enabling CLAST to assign reads to taxonomic and functional groups based on evolutionarily distant nucleotide sequences with high accuracy. Third, CLAST does not need a preprocessed sequence database like Burrows-Wheeler Transform-based tools, and this enables CLAST to incorporate large, frequently updated sequence databases. Fourth, CLAST requires <2 GB of main memory, making it possible to run CLAST on a standard desktop computer or server node. CLAST achieved very high speed (similar to the Burrows-Wheeler Transform-based Bowtie 2 for long reads) and sensitivity (equal to BLAST, BLAT, and FR-HIT) without the need for extensive database preprocessing or a specialized computing platform. Our results demonstrate that CLAST has the potential to be one of the most powerful and realistic approaches to analyze the massive amount of sequence data from next-generation sequencing technologies.

  17. Autonomy Architectures for a Constellation of Spacecraft

    NASA Technical Reports Server (NTRS)

    Barrett, Anthony

    2000-01-01

    Until the past few years, missions typically involved fairly large expensive spacecraft. Such missions have primarily favored using older proven technologies over more recently developed ones, and humans controlled spacecraft by manually generating detailed command sequences with low-level tools and then transmitting the sequences for subsequent execution on a spacecraft controller. This approach toward controlling a spacecraft has worked spectacularly on previous missions, but it has limitations deriving from communications restrictions - scheduling time to communicate with a particular spacecraft involves competing with other projects due to the limited number of deep space network antennae. This implies that a spacecraft can spend a long time just waiting whenever a command sequence fails. This is one reason why the New Millennium program has an objective to migrate parts of mission control tasks onboard a spacecraft to reduce wait time by making spacecraft more robust. The migrated software is called a "remote agent" and has 4 components: a mission manager to generate the high level goals, a planner/scheduler to turn goals into activities while reasoning about future expected situations, an executive/diagnostics engine to initiate and maintain activities while interpreting sensed events by reasoning about past and present situations, and a conventional real-time subsystem to interface with the spacecraft to implement an activity's primitive actions. In addition to needing remote planning and execution for isolated spacecraft, a trend toward multiple-spacecraft missions points to the need for remote distributed planning and execution. The past few years have seen missions with growing numbers of probes. Pathfinder has its rover (Sojourner), Cassini has its lander (Huygens), and the New Millenium Deep Space 3 (DS3) proposal involves a constellation of 3 spacecraft for interferometric mapping. This trend is expected to continue to progressively larger fleets. For example, one mission proposed to succeed DS3 would have 18 spacecraft flying in formation in order to detect earth-sized planets orbiting other stars. A proposed magnetospheric constellation would involve 5 to 500 spacecraft in Earth orbit to measure global phenomena within the magnetosphere. This work describes and compares three autonomy architectures for a system that continuously plans to control a fleet of spacecraft using collective mission goals instead of goals or command sequences for each spacecraft. A fleet of self-commanding spacecraft would autonomously coordinate itself to satisfy high level science and engineering goals in a changing partially-understood environment making feasible the operation of tens or even a hundred spacecraft (such as for interferometry or plasma physics missions). The easiest way to adapt autonomous spacecraft research to controlling constellations involves treating the constellation as a single spacecraft. Here one spacecraft directly controls the others as if they were connected. The controlling "master" spacecraft performs all autonomy reasoning, and the slaves only have real-time subsystems to execute the master's commands and transmit local telemetry/observations. The executive/diagnostics module starts actions and the master's real-time subsystem controls the action either locally or remotely through a slave. 
While the master/slave approach benefits from conceptual simplicity, it relies on an assumption that the master spacecraft's executive can continuously monitor the slaves' real-time subsystems, and this relies on high-bandwidth highly-reliable communications. Since unintended results occur fairly rarely, one way to relax the bandwidth requirements involves only monitoring unexpected events in spacecraft. Unfortunately, this disables the ability to monitor for unexpected events between spacecraft and leads to a host of coordination problems among the slaves. Also, failures in the communications system can result in losing slaves. The other two architectures improve robustness while reducing communications by progressively distributing more of the other three remote agent components across the constellation. In a teamwork architecture, all spacecraft have executives and real-time subsystems - only the leader has the planner/scheduler and mission manager. Finally, distributing all remote agent components leads to a peer-to-peer approach toward constellation control.

  18. Collecting, Managing, and Visualizing Data during Planetary Surface Exploration

    NASA Astrophysics Data System (ADS)

    Young, K. E.; Graff, T. G.; Bleacher, J. E.; Whelley, P.; Garry, W. B.; Rogers, A. D.; Glotch, T. D.; Coan, D.; Reagan, M.; Evans, C. A.; Garrison, D. H.

    2017-12-01

    While the Apollo lunar surface missions were highly successful in collecting valuable samples to help us understand the history and evolution of the Moon, technological advancements since 1969 point us toward a new generation of planetary surface exploration characterized by large volumes of data being collected and used to inform traverse execution in real time. Specifically, the advent of field-portable technologies means that future planetary explorers will have vast quantities of in situ geochemical and geophysical data that can be used to inform sample collection and curation as well as strategic and tactical decision making that will impact mission planning in real time. The RIS4E SSERVI (Remote, In Situ and Synchrotron Studies for Science and Exploration; Solar System Exploration Research Virtual Institute) team has been working for several years to deploy a variety of in situ instrumentation in relevant analog environments. RIS4E seeks both to determine ideal instrumentation suites for planetary surface exploration as well as to develop a framework for EVA (extravehicular activity) mission planning that incorporates this new generation of technology. Results from the last several field campaigns will be discussed, as will recommendations for how to rapidly mine in situ datasets for tactical and strategic planning. Initial thoughts about autonomy in mining field data will also be presented. The NASA Extreme Environments Mission Operations (NEEMO) missions focus on a combination of Science, Science Operations, and Technology objectives in a planetary analog environment. Recently, the increase of high-fidelity marine science objectives during NEEMO EVAs has led to the ability to evaluate how real-time data collection and visualization can influence tactical and strategic planning for traverse execution and mission planning. Results of the last few NEEMO missions will be discussed in the context of data visualization strategies for real-time operations.

  19. Your alliances are too stable.

    PubMed

    Ernst, David; Bamford, James

    2005-06-01

    A 2004 McKinsey survey of more than 30 companies reveals that at least 70% of them have major alliances that are underperforming and in need of restructuring. Moreover, JVs that broaden or otherwise adjust their scope have a 79% success rate, versus 33% for ventures that remain essentially unchanged. Yet most firms don't routinely evaluate the need to overhaul their alliances or intervene to correct performance problems. That means corporations are missing huge opportunities: By revamping just one large alliance, a company can generate 100 million dollars to 300 million dollars in extra income a year. Here's how to unlock more value from alliances: (1) Launch the process. Don't wait until your venture is in the middle of a crisis; regularly scan your major alliances to determine which need restructuring. Once you've targeted one, designate a restructuring team and find a senior sponsor to push the process along. Then delineate the scope of the team's work. (2) Diagnose performance. Evaluate the venture on the following performance dimensions: ownership and financials, strategy, operations, governance, and organization and talent. Identify the root causes of the venture's problems, not just the symptoms, and estimate how much each problem is costing the company. (3) Generate restructuring options. Based on the diagnosis, decide whether to fix, grow, or exit the alliance. Assuming the answer is fix or grow, determine whether fundamental or incremental changes are needed, using the five performance dimensions above as a framework. Then assemble three or four packages of restructuring options, test them with shareholders, and gain parents' approval. (4) Execute the changes. Embark on a widespread and consistent communication effort, building support among executives in the JV and the parent companies. So that the process stays on track, assign accountability to certain groups or individuals.

  20. It's All About the Data: Workflow Systems and Weather

    NASA Astrophysics Data System (ADS)

    Plale, B.

    2009-05-01

    Digital data is fueling new advances in the computational sciences, particularly geospatial research as environmental sensing grows more practical through reduced technology costs, broader network coverage, and better instruments. e-Science research (i.e., cyberinfrastructure research) has responded to data intensive computing with tools, systems, and frameworks that support computationally oriented activities such as modeling, analysis, and data mining. Workflow systems support execution of sequences of tasks on behalf of a scientist. These systems, such as Taverna, Apache ODE, and Kepler, when built as part of a larger cyberinfrastructure framework, give the scientist tools to construct task graphs of execution sequences, often through a visual interface for connecting task boxes together with arcs representing control flow or data flow. Unlike business processing workflows, scientific workflows expose a high degree of detail and control during configuration and execution. Data-driven science imposes unique needs on workflow frameworks. Our research is focused on two issues. The first is the support for workflow-driven analysis over all kinds of data sets, including real time streaming data and locally owned and hosted data. The second is the essential role metadata/provenance collection plays in data driven science, for discovery, determining quality, for science reproducibility, and for long-term preservation. The research has been conducted over the last 6 years in the context of cyberinfrastructure for mesoscale weather research carried out as part of the Linked Environments for Atmospheric Discovery (LEAD) project. LEAD has pioneered new approaches for integrating complex weather data, assimilation, modeling, mining, and cyberinfrastructure systems. Workflow systems have the potential to generate huge volumes of data. Without some form of automated metadata capture, either metadata description becomes largely a manual task that is difficult if not impossible under high-volume conditions, or the searchability and manageability of the resulting data products is disappointingly low. The provenance of a data product is a record of its lineage, or trace of the execution history that resulted in the product. The provenance of a forecast model result, e.g., captures information about the executable version of the model, configuration parameters, input data products, execution environment, and owner. Provenance enables data to be properly attributed and captures critical parameters about the model run so the quality of the result can be ascertained. Proper provenance is essential to providing reproducible scientific computing results. Workflow languages used in science discovery are complete programming languages, and in theory can support any logic expressible by a programming language. The execution environments supporting the workflow engines, on the other hand, are subject to constraints on physical resources, and hence in practice the workflow task graphs used in science utilize relatively few of the cataloged workflow patterns. It is important to note that these workflows are executed on demand, and are executed once. Into this context is introduced the need for science discovery that is responsive to real time information. 
If we can use simple programming models and abstractions to make scientific discovery involving real-time data accessible to specialists who share and utilize data across scientific domains, we bring science one step closer to solving the largest of human problems.
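
    The kind of per-task provenance record described above can be made concrete with a small, hypothetical Python sketch; the field names are illustrative only, and LEAD's actual provenance schema is not reproduced here.

      # Hypothetical sketch of automated provenance capture around a workflow
      # task: record inputs, configuration, code version, environment and
      # timing so the derived data product can be attributed and reproduced.

      import platform
      import time

      def run_with_provenance(task_name, func, inputs, parameters, code_version):
          start = time.time()
          outputs = func(**inputs, **parameters)
          return outputs, {
              "task": task_name,
              "code_version": code_version,
              "inputs": sorted(inputs),            # names of upstream products
              "parameters": parameters,            # model configuration used
              "host": platform.node(),
              "started": start,
              "duration_s": round(time.time() - start, 3),
          }

      def toy_forecast(observations, grid_spacing_km):
          return [o + 0.1 * grid_spacing_km for o in observations]

      result, provenance = run_with_provenance(
          task_name="toy_forecast",
          func=toy_forecast,
          inputs={"observations": [280.1, 281.4, 279.8]},
          parameters={"grid_spacing_km": 4},
          code_version="v1.2.0",
      )
      print(provenance)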

  1. Individual Differences In The Executive Control Of Attention, Memory, And Thought, And Their Associations With Schizotypy

    PubMed Central

    Kane, Michael J.; Meier, Matt E.; Smeekens, Bridget A.; Gross, Georgina M.; Chun, Charlotte A.; Silvia, Paul J.; Kwapil, Thomas R.

    2016-01-01

    A large correlational study took a latent-variable approach to the generality of executive control by testing the individual-differences structure of executive-attention capabilities and assessing their prediction of schizotypy, a multidimensional construct (with negative, positive, disorganized, and paranoid factors) conveying risk for schizophrenia. Although schizophrenia is convincingly linked to executive deficits, the schizotypy literature is equivocal. Subjects completed tasks of working memory capacity (WMC), attention restraint (inhibiting prepotent responses), and attention constraint (focusing visual attention amid distractors), the latter two in an effort to fractionate the “inhibition” construct. We also assessed mind-wandering propensity (via in-task thought probes) and coefficient of variation in response times (RT CoV) from several tasks as more novel indices of executive attention. WMC, attention restraint, attention constraint, mind wandering, and RT CoV were correlated but separable constructs, indicating some distinctions among “attention control” abilities; WMC correlated more strongly with attentional restraint than constraint, and mind wandering correlated more strongly with attentional restraint, attentional constraint, and RT CoV than with WMC. Across structural models, no executive construct predicted negative schizotypy and only mind wandering and RT CoV consistently (but modestly) predicted positive, disorganized, and paranoid schizotypy; stalwart executive constructs in the schizophrenia literature — WMC and attention restraint — showed little to no predictive power, beyond restraint’s prediction of paranoia. Either executive deficits are consequences rather than risk factors for schizophrenia, or executive failures barely precede or precipitate diagnosable schizophrenia symptoms. PMID:27454042

  2. The roles of associative and executive processes in creative cognition.

    PubMed

    Beaty, Roger E; Silvia, Paul J; Nusbaum, Emily C; Jauk, Emanuel; Benedek, Mathias

    2014-10-01

    How does the mind produce creative ideas? Past research has pointed to important roles of both executive and associative processes in creative cognition. But such work has largely focused on the influence of one ability or the other-executive or associative-so the extent to which both abilities may jointly affect creative thought remains unclear. Using multivariate structural equation modeling, we conducted two studies to determine the relative influences of executive and associative processes in domain-general creative cognition (i.e., divergent thinking). Participants completed a series of verbal fluency tasks, and their responses were analyzed by means of latent semantic analysis (LSA) and scored for semantic distance as a measure of associative ability. Participants also completed several measures of executive function-including broad retrieval ability (Gr) and fluid intelligence (Gf). Across both studies, we found substantial effects of both associative and executive abilities: As the average semantic distance between verbal fluency responses and cues increased, so did the creative quality of divergent-thinking responses (Study 1 and Study 2). Moreover, the creative quality of divergent-thinking responses was predicted by the executive variables-Gr (Study 1) and Gf (Study 2). Importantly, the effects of semantic distance and the executive function variables remained robust in the same structural equation model predicting divergent thinking, suggesting unique contributions of both constructs. The present research extends recent applications of LSA in creativity research and provides support for the notion that both associative and executive processes underlie the production of novel ideas.

  3. A PET Study of Word Generation in Huntington's Disease: Effects of Lexical Competition and Verb/Noun Category

    ERIC Educational Resources Information Center

    Lepron, Evelyne; Peran, Patrice; Cardebat, Dominique; Demonet, Jean-Francois

    2009-01-01

    Huntington's disease (HD) patients show language production deficits that have been conceptualized as a consequence of executive disorders, e.g. selection deficit between candidate words or switching between word categories. More recently, a deficit of word generation specific to verbs has been reported, which might relate to impaired action…

  4. Scalable and High-Throughput Execution of Clinical Quality Measures from Electronic Health Records using MapReduce and the JBoss® Drools Engine

    PubMed Central

    Peterson, Kevin J.; Pathak, Jyotishman

    2014-01-01

    Automated execution of electronic Clinical Quality Measures (eCQMs) from electronic health records (EHRs) on large patient populations remains a significant challenge, and the testability, interoperability, and scalability of measure execution are critical. The High Throughput Phenotyping (HTP; http://phenotypeportal.org) project aligns with these goals by using the standards-based HL7 Health Quality Measures Format (HQMF) and Quality Data Model (QDM) for measure specification, as well as Common Terminology Services 2 (CTS2) for semantic interpretation. The HQMF/QDM representation is automatically transformed into a JBoss® Drools workflow, enabling horizontal scalability via clustering and MapReduce algorithms. Using Project Cypress, automated verification metrics can then be produced. Our results show linear scalability for nine executed 2014 Center for Medicare and Medicaid Services (CMS) eCQMs for eligible professionals and hospitals for >1,000,000 patients, and verified execution correctness of 96.4% based on Project Cypress test data of 58 eCQMs. PMID:25954459

  5. Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment.

    PubMed

    Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel

    2016-08-30

    Distributed Computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of the core components, MapReduce facilitates allocating, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurately estimating the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data for each task are examined and reported in a detailed analysis. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs.
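
    The two-phase idea, fitting separate linear trends to the early and late portions of a task's progress and extrapolating the later phase to completion, can be sketched as follows. The breakpoint choice and toy data are illustrative assumptions, not the paper's exact TPR formulation.

      # Illustrative sketch of a two-phase regression (TPR) style estimate of a
      # task's finishing time: fit one linear trend to the early progress samples
      # and another to the recent ones, then extrapolate the second phase to 100%.

      def fit_line(xs, ys):
          n = len(xs)
          mx, my = sum(xs) / n, sum(ys) / n
          slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
                  sum((x - mx) ** 2 for x in xs)
          return slope, my - slope * mx

      # (progress fraction, elapsed seconds) samples for a running task
      samples = [(0.1, 12), (0.2, 25), (0.3, 39), (0.4, 70), (0.5, 105), (0.6, 142)]
      breakpoint_idx = 3                       # phase 1: setup, phase 2: steady state

      phase2 = samples[breakpoint_idx:]
      slope, intercept = fit_line([p for p, _ in phase2], [t for _, t in phase2])

      estimated_total = slope * 1.0 + intercept   # extrapolate phase 2 to 100%
      print(f"estimated finishing time: {estimated_total:.1f} s")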

  6. Less-structured time in children's daily lives predicts self-directed executive functioning.

    PubMed

    Barker, Jane E; Semenov, Andrei D; Michaelson, Laura; Provan, Lindsay S; Snyder, Hannah R; Munakata, Yuko

    2014-01-01

    Executive functions (EFs) in childhood predict important life outcomes. Thus, there is great interest in attempts to improve EFs early in life. Many interventions are led by trained adults, including structured training activities in the lab, and less-structured activities implemented in schools. Such programs have yielded gains in children's externally-driven executive functioning, where they are instructed on what goal-directed actions to carry out and when. However, it is less clear how children's experiences relate to their development of self-directed executive functioning, where they must determine on their own what goal-directed actions to carry out and when. We hypothesized that time spent in less-structured activities would give children opportunities to practice self-directed executive functioning, and lead to benefits. To investigate this possibility, we collected information from parents about their 6-7 year-old children's daily, annual, and typical schedules. We categorized children's activities as "structured" or "less-structured" based on categorization schemes from prior studies on child leisure time use. We assessed children's self-directed executive functioning using a well-established verbal fluency task, in which children generate members of a category and can decide on their own when to switch from one subcategory to another. The more time that children spent in less-structured activities, the better their self-directed executive functioning. The opposite was true of structured activities, which predicted poorer self-directed executive functioning. These relationships were robust (holding across increasingly strict classifications of structured and less-structured time) and specific (time use did not predict externally-driven executive functioning). We discuss implications, caveats, and ways in which potential interpretations can be distinguished in future work, to advance an understanding of this fundamental aspect of growing up.

  7. Less-structured time in children's daily lives predicts self-directed executive functioning

    PubMed Central

    Barker, Jane E.; Semenov, Andrei D.; Michaelson, Laura; Provan, Lindsay S.; Snyder, Hannah R.; Munakata, Yuko

    2014-01-01

    Executive functions (EFs) in childhood predict important life outcomes. Thus, there is great interest in attempts to improve EFs early in life. Many interventions are led by trained adults, including structured training activities in the lab, and less-structured activities implemented in schools. Such programs have yielded gains in children's externally-driven executive functioning, where they are instructed on what goal-directed actions to carry out and when. However, it is less clear how children's experiences relate to their development of self-directed executive functioning, where they must determine on their own what goal-directed actions to carry out and when. We hypothesized that time spent in less-structured activities would give children opportunities to practice self-directed executive functioning, and lead to benefits. To investigate this possibility, we collected information from parents about their 6–7 year-old children's daily, annual, and typical schedules. We categorized children's activities as “structured” or “less-structured” based on categorization schemes from prior studies on child leisure time use. We assessed children's self-directed executive functioning using a well-established verbal fluency task, in which children generate members of a category and can decide on their own when to switch from one subcategory to another. The more time that children spent in less-structured activities, the better their self-directed executive functioning. The opposite was true of structured activities, which predicted poorer self-directed executive functioning. These relationships were robust (holding across increasingly strict classifications of structured and less-structured time) and specific (time use did not predict externally-driven executive functioning). We discuss implications, caveats, and ways in which potential interpretations can be distinguished in future work, to advance an understanding of this fundamental aspect of growing up. PMID:25071617

  8. KNIME4NGS: a comprehensive toolbox for next generation sequencing analysis.

    PubMed

    Hastreiter, Maximilian; Jeske, Tim; Hoser, Jonathan; Kluge, Michael; Ahomaa, Kaarin; Friedl, Marie-Sophie; Kopetzky, Sebastian J; Quell, Jan-Dominik; Mewes, H Werner; Küffner, Robert

    2017-05-15

    Analysis of Next Generation Sequencing (NGS) data requires the processing of large datasets by chaining various tools with complex input and output formats. In order to automate data analysis, we propose to standardize NGS tasks into modular workflows. This simplifies reliable handling and processing of NGS data, and corresponding solutions become substantially more reproducible and easier to maintain. Here, we present a documented, linux-based, toolbox of 42 processing modules that are combined to construct workflows facilitating a variety of tasks such as DNAseq and RNAseq analysis. We also describe important technical extensions. The high throughput executor (HTE) helps to increase the reliability and to reduce manual interventions when processing complex datasets. We also provide a dedicated binary manager that assists users in obtaining the modules' executables and keeping them up to date. As basis for this actively developed toolbox we use the workflow management software KNIME. See http://ibisngs.github.io/knime4ngs for nodes and user manual (GPLv3 license). robert.kueffner@helmholtz-muenchen.de. Supplementary data are available at Bioinformatics online.

  9. A comparison of directed search target detection versus in-scene target detection in Worldview-2 datasets

    NASA Astrophysics Data System (ADS)

    Grossman, S.

    2015-05-01

    Since the events of September 11, 2001, the intelligence focus has moved from large order-of-battle targets to small targets of opportunity. Additionally, the business community has discovered the use of remotely sensed data to anticipate demand and derive data on their competition. This requires the finer spectral and spatial fidelity now available to recognize those targets. This work hypothesizes that directed searches using calibrated data perform at least as well as in-scene, manually intensive target detection searches. It uses calibrated Worldview-2 multispectral images with NEF-generated signatures and standard detection algorithms to compare bespoke directed search capabilities against ENVI™ in-scene search capabilities. Multiple execution runs are performed at increasing thresholds to generate detection rates. These rates are plotted and statistically analyzed. While individual head-to-head comparison results vary, 88% of the directed searches performed at least as well as in-scene searches, with 50% clearly outperforming in-scene methods. The results strongly support the premise that directed searches perform at least as well as comparable in-scene searches.
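
    The evaluation procedure described, running detections at increasing thresholds and recording detection rates, amounts to tracing out a simple rate-versus-threshold curve. The Python sketch below uses made-up match scores, not the NEF or ENVI data.

      # Minimal sketch of sweeping a detection threshold over match scores and
      # recording detection rates, as in the head-to-head comparison described.
      # Scores are made up; real runs would come from the spectral matcher.

      target_scores     = [0.91, 0.84, 0.78, 0.73, 0.66]   # pixels containing targets
      background_scores = [0.52, 0.47, 0.61, 0.39, 0.44, 0.58, 0.35]

      for threshold in [0.5, 0.6, 0.7, 0.8, 0.9]:
          detections   = sum(s >= threshold for s in target_scores)
          false_alarms = sum(s >= threshold for s in background_scores)
          rate = detections / len(target_scores)
          print(f"threshold {threshold:.1f}: detection rate {rate:.2f}, "
                f"false alarms {false_alarms}")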

  10. WEST-3 wind turbine simulator development

    NASA Technical Reports Server (NTRS)

    Hoffman, J. A.; Sridhar, S.

    1985-01-01

    The software developed for WEST-3, a new, all-digital, fully programmable wind turbine simulator, is described. The process of wind turbine simulation on WEST-3 is described in detail. The major steps are the processing of the mathematical models, the preparation of the constant data, and the use of system-software-generated executable code for running on WEST-3. The mechanics of reformulation, normalization, and scaling of the mathematical models are discussed in detail, in particular the significance of reformulation, which leads to accurate simulations. Descriptions of the preprocessor computer programs which are used to prepare the constant data needed in the simulation are given. These programs, in addition to scaling and normalizing all the constants, relieve the user from having to generate a large number of constants used in the simulation. Also given are brief descriptions of the components of the WEST-3 system software: Translator, Assembler, Linker, and Loader. Also included are details of the aeroelastic rotor analysis, which is the core of a wind turbine simulation model; an analysis of the gimbal subsystem; and listings of the variables, constants, and equations used in the simulation.

  11. Computational scalability of large size image dissemination

    NASA Astrophysics Data System (ADS)

    Kooper, Rob; Bajcsy, Peter

    2011-01-01

    We have investigated the computational scalability of image pyramid building needed for dissemination of very large image data. The sources of large images include high resolution microscopes and telescopes, remote sensing and airborne imaging, and high resolution scanners. The term 'large' is understood from a user perspective, meaning either larger than the display size or larger than the memory/disk available to hold the image data. The application drivers for our work are digitization projects such as the Lincoln Papers project (each image scan is about 100-150 MB, or about 5000x8000 pixels, with around 200,000 scans in total) and the UIUC library scanning project for historical maps from the 17th and 18th centuries (a smaller number of images, but larger ones). The goal of our work is to understand the computational scalability of web-based dissemination using image pyramids for these large image scans, as well as the preservation aspects of the data. We report our computational benchmarks for (a) building image pyramids to be disseminated using the Microsoft Seadragon library, (b) a computation execution approach using hyper-threading to generate image pyramids and to utilize the underlying hardware, and (c) an image pyramid preservation approach using various hard drive configurations of Redundant Array of Independent Disks (RAID) drives for input/output operations. The benchmarks are obtained with a map (334.61 MB, JPEG format, 17591x15014 pixels). The discussion combines the speed and preservation objectives.
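
    A small sketch of the image-pyramid construction step that is benchmarked here is repeated 2x2 averaging until the tile fits a display. The pure-Python downsampling below is only a stand-in for a Seadragon-compatible tiling pipeline, which would also cut and write tiles at each level.

      # Simplified sketch of building an image pyramid by repeated 2x2 averaging.
      # A real pipeline (e.g., one feeding the Seadragon viewer) would also tile
      # each level and write it to disk; this stand-in shows only the level chain.

      def halve(image):
          """Downsample a 2D list of pixel values by averaging 2x2 blocks."""
          h, w = len(image), len(image[0])
          return [[(image[y][x] + image[y][x + 1] +
                    image[y + 1][x] + image[y + 1][x + 1]) / 4.0
                   for x in range(0, w - 1, 2)]
                  for y in range(0, h - 1, 2)]

      def build_pyramid(image, min_size=1):
          levels = [image]
          while len(levels[-1]) > min_size and len(levels[-1][0]) > min_size:
              levels.append(halve(levels[-1]))
          return levels

      base = [[(x + y) % 256 for x in range(16)] for y in range(16)]
      pyramid = build_pyramid(base)
      print([f"{len(l)}x{len(l[0])}" for l in pyramid])   # 16x16, 8x8, 4x4, 2x2, 1x1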

  12. Real-Life Impact of Executive Function Impairments in Adults Who Were Born Very Preterm.

    PubMed

    Kroll, Jasmin; Karolis, Vyacheslav; Brittain, Philip J; Tseng, Chieh-En Jane; Froudist-Walsh, Sean; Murray, Robin M; Nosarti, Chiara

    2017-05-01

    Children and adolescents who were born very preterm (≤32 weeks' gestation) are vulnerable to experiencing cognitive problems, including in executive function. However, it remains to be established whether cognitive deficits are evident in adulthood and whether these exert a significant effect on an individual's real-life achievement. Using a cross-sectional design, we tested a range of neurocognitive abilities, with a focus on executive function, in a sample of 122 very preterm individuals and 89 term-born controls born between 1979 and 1984. Associations between executive function and a range of achievement measures, indicative of a successful transition to adulthood, were examined. Very preterm adults performed worse compared to controls on measures of intellectual ability and executive function with moderate to large effect sizes. They also demonstrated significantly lower achievement levels in terms of years spent in education, employment status, and on a measure of functioning in work and social domains. Results of regression analysis indicated a stronger positive association between executive function and real-life achievement in the very preterm group compared to controls. Very preterm born adults demonstrate executive function impairments compared to full-term controls, and these are associated with lower achievement in several real-life domains. (JINS, 2017, 23, 381-389).

  13. Executive functioning of complicated-mild to moderate traumatic brain injury patients with frontal contusions.

    PubMed

    Ghawami, Heshmatollah; Sadeghi, Sadegh; Raghibi, Mahvash; Rahimi-Movaghar, Vafa

    2017-01-01

    Executive dysfunctions are among the most prevalent neurobehavioral sequelae of traumatic brain injuries (TBIs). Using culturally validated tests from the Delis-Kaplan Executive Function System (D-KEFS: Trail Making, Verbal Fluency, Design Fluency, Sorting, Twenty Questions, and Tower) and the Behavioural Assessment of the Dysexecutive Syndrome (BADS: Rule Shift Cards, Key Search, and Modified Six Elements), the current study was the first to examine executive functioning in a group of Iranian TBI patients with focal frontal contusions. Compared with a demographically matched normative sample, the frontal contusion patients showed substantial impairments, with very large effect sizes (p ≤ .003, 1.56 < d < 3.12), on all the executive measures. Controlling for respective lower-level/fundamental conditions, the differences on the highest-level executive (cognitive switching) conditions were still significant. The frontal patients also committed more errors. Patients with lateral prefrontal (LPFC) contusions were qualitatively worst. For example, only the LPFC patients committed perseverative repetition errors. Altogether, our results support the notion that the frontal lobes, specifically the lateral prefrontal regions, play a critical role in cognitive executive functioning, over and above the contributions of respective lower-level cognitive abilities. The results provide clinical evidence for validity of the cross-culturally adapted versions of the tests.

  14. Regional frontal gray matter volume associated with executive function capacity as a risk factor for vehicle crashes in normal aging adults.

    PubMed

    Sakai, Hiroyuki; Takahara, Miwa; Honjo, Naomi F; Doi, Shun'ichi; Sadato, Norihiro; Uchiyama, Yuji

    2012-01-01

    Although low executive functioning is a risk factor for vehicle crashes among elderly drivers, the neural basis of individual differences in this cognitive ability remains largely unknown. Here we aimed to examine regional frontal gray matter volume associated with executive functioning in normal aging individuals, using voxel-based morphometry (VBM). To this end, 39 community-dwelling elderly volunteers who drove a car on a daily basis participated in structural magnetic resonance imaging, and completed two questionnaires concerning executive functioning and risky driving tendencies in daily living. Consequently, we found that participants with low executive function capacity were prone to risky driving. Furthermore, VBM analysis revealed that lower executive function capacity was associated with smaller gray matter volume in the supplementary motor area (SMA). Thus, the current data suggest that SMA volume is a reliable predictor of individual differences in executive function capacity as a risk factor for vehicle crashes among elderly persons. The implication of our results is that regional frontal gray matter volume might underlie the variation in driving tendencies among elderly drivers. Therefore, detailed driving behavior assessments might be able to detect early neurodegenerative changes in the frontal lobe in normal aging adults.

  15. Double Dissociation in the Anatomy of Socioemotional Disinhibition and Executive Functioning in Dementia

    PubMed Central

    Krueger, Casey E.; Laluz, Victor; Rosen, Howard J.; Neuhaus, John M.; Miller, Bruce L.; Kramer, Joel H.

    2010-01-01

    Objective To determine if socioemotional disinhibition and executive dysfunction are related to dissociable patterns of brain atrophy in neurodegenerative disease. Previous studies have indicated that behavioral and cognitive dysfunction in neurodegenerative disease are linked to atrophy in different parts of the frontal lobe, but these prior studies did not establish that these relationships were specific, which would best be demonstrated by a double dissociation. Method Subjects included 157 patients with neurodegenerative disease. A semi-automated parcellation program (Freesurfer) was used to generate regional cortical volumes from structural MRI scans. Regions of interest (ROIs) included anterior cingulate cortex (ACC), orbitofrontal cortex (OFC), middle frontal gyrus (MFG) and inferior frontal gyrus (IFG). Socioemotional disinhibition was measured using the Neuropsychiatric Inventory. Principal component analysis including three tasks of executive function (EF; verbal fluency, Stroop Interference, modified Trails) was used to generate a single factor score to represent EF. Results Partial correlations between ROIs, disinhibition, and EF were computed after controlling for total intracranial volume, MMSE, diagnosis, age, and education. Brain regions significantly correlated with disinhibition (ACC, OFC, IFG, and temporal lobes) and EF (MFG) were entered into separate hierarchical regressions to determine which brain regions predicted disinhibition and EF. OFC was the only brain region to significantly predict disinhibition and MFG significantly predicted executive functioning performance. A multivariate general linear model demonstrated a significant interaction between ROIs and cognitive-behavioral functions. Conclusions These results support a specific association between orbitofrontal areas and behavioral management as compared to dorsolateral areas and EF. PMID:21381829

  16. Future Extragalactic Surveys

    NASA Astrophysics Data System (ADS)

    Blain, Andrew

    2007-12-01

    The technology for mega-pixel mm/submm-wave cameras is being developed, and 10,000-pixel cameras are close to being deployed. These parameters correspond to degree-sized fields, and challenge the optical performance of current telescopes. Next-generation cameras will enable a survey of a large fraction of the sky, to detect active and star-forming dust-enshrouded galaxies. However, to avoid being limited by confusion, and finding only 'monsters', it is necessary to push to large telescopes and short wavelengths. The CCAT project will enable the necessary performance to survey the sky to detect ultraluminous galaxies at z>2, each of which can then be imaged in detail with ALMA. The combination of image quality, collecting area and field-of-view will also enable CCAT to probe much deeper, to detect all the sources in legacy fields from the Spitzer and Herschel Space Telescopes. Unlike ALMA, CCAT will still be limited to detecting 'normal' galaxies at z ~ 3-5; however, by generating huge catalogs, CCAT will enable a dramatic increase in ALMA's efficiency, and almost completely remove the need for ALMA to conduct its own imaging survey. I will discuss the nature of galaxy surveys that will be enabled by CCAT, the issues of prioritizing and executing follow-up imaging spectroscopy with ALMA, and the links with the forthcoming NASA WISE mission, and future space-based far-infrared missions.

  17. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    NASA Technical Reports Server (NTRS)

    Maly, K.

    1998-01-01

    Monitoring is an essential process to observe and improve the reliability and the performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during their execution or interaction with external objects (e.g., users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and can be distributed across various locations in the application environment, which complicates the management decision process and makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system, a large-scale distributed system for collaborative distance learning. The filtering mechanism is an intrinsic component of the monitoring architecture that reduces the volume of event traffic in the system and thereby the intrusiveness of the monitoring process. We are developing an event filtering architecture to efficiently process the large volume of event traffic generated by LSD systems (such as distributed interactive applications). This filtering architecture is used to monitor a collaborative distance-learning application to obtain debugging and feedback information. Our architecture supports the dynamic (re)configuration and optimization of event filters in large-scale distributed systems. Our work contributes by (1) surveying and evaluating existing event filtering mechanisms for monitoring LSD systems and (2) devising an integrated, scalable, high-performance event filtering architecture that spans several key application domains, presenting techniques to improve functionality, performance and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems. We survey existing event filtering mechanisms and explain key characteristics of each technique. In addition, we discuss the limitations of existing event filtering mechanisms and outline how our architecture will improve key aspects of event filtering.
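    A minimal Python sketch of the subscription-plus-filtering idea described above: events are forwarded only to subscribers whose predicate matches, which is the traffic-reduction step the record emphasizes. The EventFilter class, the predicate/handler interface, and the sample events are hypothetical illustrations, not the architecture's actual API.

```python
from typing import Any, Callable, Dict

Event = Dict[str, Any]

class EventFilter:
    """Forward each event only to subscribers whose predicate matches it,
    reducing the monitoring traffic that reaches end-point tools."""

    def __init__(self) -> None:
        self._subscribers = []  # list of (predicate, handler) pairs

    def subscribe(self, predicate: Callable[[Event], bool],
                  handler: Callable[[Event], None]) -> None:
        self._subscribers.append((predicate, handler))

    def publish(self, event: Event) -> None:
        for predicate, handler in self._subscribers:
            if predicate(event):
                handler(event)

monitor = EventFilter()
monitor.subscribe(lambda e: e.get("severity") == "error",
                  lambda e: print("debugging tool notified:", e))
monitor.publish({"component": "scheduler", "severity": "error", "msg": "deadline missed"})
monitor.publish({"component": "scheduler", "severity": "info", "msg": "tick"})  # filtered out
```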

  18. Video game practice optimizes executive control skills in dual-task and task switching situations.

    PubMed

    Strobach, Tilo; Frensch, Peter A; Schubert, Torsten

    2012-05-01

    We examined the relation between action video game practice and the optimization of executive control skills that are needed to coordinate two different tasks. As action video games are similar to real-life situations and complex in nature, and include numerous concurrent actions, they may generate an ideal environment for practicing these skills (Green & Bavelier, 2008). In two types of experimental paradigms, dual-task and task switching, we obtained performance advantages for experienced video gamers compared to non-gamers in situations in which two different tasks were processed simultaneously or sequentially. This advantage was absent in single-task situations. These findings indicate optimized executive control skills in video gamers. Similar findings in non-gamers after 15 h of action video game practice, when compared to non-gamers with practice on a puzzle game, clarified the causal relation between video game practice and the optimization of executive control skills. Copyright © 2012 Elsevier B.V. All rights reserved.

  19. Effects of Mild Cognitive Impairment on the Event-Related Brain Potential Components Elicited in Executive Control Tasks.

    PubMed

    Zurrón, Montserrat; Lindín, Mónica; Cespón, Jesús; Cid-Fernández, Susana; Galdo-Álvarez, Santiago; Ramos-Goicoa, Marta; Díaz, Fernando

    2018-01-01

    We summarize here the findings of several studies in which we analyzed the event-related brain potentials (ERPs) elicited in participants with mild cognitive impairment (MCI) and in healthy controls during performance of executive tasks. The objective of these studies was to investigate the neural functioning associated with executive processes in MCI. With this aim, we recorded the brain electrical activity generated in response to stimuli in three executive control tasks (Stroop, Simon, and Go/NoGo) adapted for use with the ERP technique. We found that the latencies of the ERP components associated with the evaluation and categorization of the stimuli were longer in participants with amnestic MCI than in the paired controls, particularly those with multiple-domain amnestic MCI, and that the allocation of neural resources for attending to the stimuli was weaker in participants with amnestic MCI. The MCI participants also showed deficient functioning of the response selection and preparation processes demanded by each task.

  1. Computerized Cognitive Rehabilitation of Attention and Executive Function in Acquired Brain Injury: A Systematic Review.

    PubMed

    Bogdanova, Yelena; Yee, Megan K; Ho, Vivian T; Cicerone, Keith D

    Comprehensive review of the use of computerized treatment as a rehabilitation tool for attention and executive function in adults (aged 18 years or older) who have suffered an acquired brain injury. Systematic review of empirical research. Two reviewers independently assessed articles using the methodological quality criteria of Cicerone et al. Data extracted included sample size, diagnosis, intervention information, treatment schedule, assessment methods, and outcome measures. A literature review (PubMed, EMBASE, Ovid, Cochrane, PsycINFO, CINAHL) generated a total of 4931 publications. Twenty-eight studies using computerized cognitive interventions targeting attention and executive functions were included in this review. In 23 studies, significant improvements in attention and executive function subsequent to training were reported; in the remaining 5, promising trends were observed. Preliminary evidence suggests improvements in cognitive function following computerized rehabilitation for acquired brain injury populations including traumatic brain injury and stroke. Further studies are needed to address methodological issues (eg, small sample size, inadequate control groups) and to inform development of guidelines and standardized protocols.

  2. Combining qualitative and quantitative spatial and temporal information in a hierarchical structure: Approximate reasoning for plan execution monitoring

    NASA Technical Reports Server (NTRS)

    Hoebel, Louis J.

    1993-01-01

    The problem of plan generation (PG) and the problem of plan execution monitoring (PEM), including updating, queries, and resource-bounded replanning, have different reasoning and representation requirements. PEM requires the integration of qualitative and quantitative information. PEM is the receiving of data about the world in which a plan or agent is executing. The problem is to quickly determine the relevance of the data, the consistency of the data with respect to the expected effects, and if execution should continue. Only spatial and temporal aspects of the plan are addressed for relevance in this work. Current temporal reasoning systems are deficient in computational aspects or expressiveness. This work presents a hybrid qualitative and quantitative system that is fully expressive in its assertion language while offering certain computational efficiencies. In order to proceed, methods incorporating approximate reasoning using hierarchies, notions of locality, constraint expansion, and absolute parameters need be used and are shown to be useful for the anytime nature of PEM.

  3. International Environmental Evaluation for the Helical Screw Expander Generator Unit Projects in Cesano, Italy and Broadlands, New Zealand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Webb, J.W.; Mezga, L.J.; Reed, A.W.

    1981-08-01

    The objectives of the Helical Screw Expander (HSE) Generator Program are (1) to accelerate the development of geothermal resources by introducing this advanced conversion technology, (2) to provide operating experience to prospective users of the equipment, and (3) to collect data on the performance and reliability of the equipment under various geothermal resource conditions. The participants hope to achieve these goals by testing a small-scale, transportable HSE generator at existing geothermal test facilities that produce fluids of different salinity, temperature and pressure conditions. This Environmental Evaluation has been prepared, using available information, to analyze the environmental consequences of testing the HSE generator. Its purpose is to support a decision on the need for a complete environmental review of the HSE program under the terms of Executive Order 12114, "Environmental Effects Abroad of Major Federal Actions". This Executive Order requires review of projects which involve the release of potentially toxic effluents that are strictly regulated in the United States, or which may have significant environmental effects on the global commons, on natural or ecological resources of international significance, or on the environment of non-participating countries. The final guidelines implementing the provisions of the Executive Order for DOE have been published. This evaluation deals with testing to be conducted at Cesano, Italy by the designated contractor of the Italian government, the Ente Nazionale per l'Energia Elettrica (ENEL), and at Broadlands, New Zealand by the Ministry of Works and Development of New Zealand. Testing at Cerro Prieto, Mexico has already been completed by the Comision Federal de Electricidad and is not evaluated in this report.

  4. Geospatial Applications on Different Parallel and Distributed Systems in enviroGRIDS Project

    NASA Astrophysics Data System (ADS)

    Rodila, D.; Bacu, V.; Gorgan, D.

    2012-04-01

    The execution of Earth Science applications and services on parallel and distributed systems has become a necessity, especially due to the large amounts of Geospatial data these applications require and the large geographical areas they cover. The parallelization of these applications addresses important performance issues and can range from task parallelism to data parallelism. Parallel and distributed architectures such as Grid, Cloud, Multicore, etc. seem to offer the necessary functionalities to solve important problems in the Earth Science domain: storing, distribution, management, processing and security of Geospatial data, execution of complex processing through task and data parallelism, etc. A main goal of the FP7-funded project enviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) [1] is the development of a Spatial Data Infrastructure targeting this catchment region, but also the development of standardized and specialized tools for storing, analyzing, processing and visualizing the Geospatial data concerning this area. To achieve these objectives, the enviroGRIDS project deals with the execution of different Earth Science applications, such as hydrological models, Geospatial Web services standardized by the Open Geospatial Consortium (OGC) and others, on parallel and distributed architectures to maximize the obtained performance. This presentation analyzes the integration and execution of Geospatial applications on different parallel and distributed architectures and the possibility of choosing among these architectures, based on application characteristics and user requirements, through a specialized component. Versions of the proposed platform have been used in the enviroGRIDS project on different use cases, such as the execution of Geospatial Web services both on Web and Grid infrastructures [2] and the execution of SWAT hydrological models both on Grid and Multicore architectures [3]. The current focus is to integrate the Cloud infrastructure into the proposed platform; Cloud computing is still a paradigm with critical problems to be solved despite great efforts and investments. Cloud computing comes as a new way of delivering resources while using a large set of old as well as new technologies and tools for providing the necessary functionalities. The main challenges in Cloud computing, most of them also identified in the Open Cloud Manifesto 2009, concern resource management and monitoring, data and application interoperability and portability, security, scalability, software licensing, etc. We propose a platform able to execute different Geospatial applications on different parallel and distributed architectures such as Grid, Cloud, Multicore, etc., with the possibility of choosing among these architectures based on application characteristics and complexity, user requirements, required performance, cost, etc. Execution redirection to a selected architecture is realized through a specialized component and offers a flexible way of achieving the best performance under the existing restrictions.
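    The "specialized component that redirects execution" could, very roughly, look like the following Python sketch, in which a dispatcher scores Grid, Cloud and Multicore back-ends against application characteristics. The scoring rules, field names, and the SWAT example values are made-up illustrations, not part of the enviroGRIDS platform.

```python
def select_architecture(app):
    """Score each back-end against simple application characteristics and pick the best."""
    candidates = {
        "multicore": app["cpu_bound"] * 2 + (app["data_gb"] < 10),
        "grid":      app["task_parallel"] * 2 + (app["data_gb"] >= 10),
        "cloud":     app["elastic_demand"] * 2 + app["budget_ok"],
    }
    return max(candidates, key=candidates.get)

# Hypothetical profile of a large SWAT hydrological model run.
swat_model = {"cpu_bound": 1, "task_parallel": 1, "data_gb": 50,
              "elastic_demand": 0, "budget_ok": 1}
print(select_architecture(swat_model))  # -> "grid" under these illustrative weights
```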

  5. Micro-motors: A motile bacteria based system for liposome cargo transport.

    PubMed

    Dogra, Navneet; Izadi, Hadi; Vanderlick, T Kyle

    2016-07-05

    Biological micro-motors (microorganisms) have potential applications in energy utilization and nanotechnology. However, harnessing the power generated by such motors to execute desired work is extremely difficult. Here, we employ the power of motile bacteria to transport small, large, and giant unilamellar vesicles (SUVs, LUVs, and GUVs). Furthermore, we demonstrate bacteria-bilayer interactions by probing glycolipids inside the model membrane scaffold. Fluorescence Resonance Energy Transfer (FRET) spectroscopic and microscopic methods were utilized for understanding these interactions. We found that motile bacteria could successfully propel SUVs and LUVs with velocities of 28 μm s⁻¹ and 13 μm s⁻¹, respectively. GUVs, however, displayed Brownian motion and could not be propelled by attached bacteria. Bacterial velocity decreased with larger cargo loads, which agrees with our calculations of loaded bacteria swimming at low Reynolds number.

  6. Applying knowledge-anchored hypothesis discovery methods to advance clinical and translational research: the OAMiner project

    PubMed Central

    Jackson, Rebecca D; Best, Thomas M; Borlawsky, Tara B; Lai, Albert M; James, Stephen; Gurcan, Metin N

    2012-01-01

    The conduct of clinical and translational research regularly involves the use of a variety of heterogeneous and large-scale data resources. Scalable methods for the integrative analysis of such resources, particularly when attempting to leverage computable domain knowledge in order to generate actionable hypotheses in a high-throughput manner, remain an open area of research. In this report, we describe both a generalizable design pattern for such integrative knowledge-anchored hypothesis discovery operations and our experience in applying that design pattern in the experimental context of a set of driving research questions related to the publicly available Osteoarthritis Initiative data repository. We believe that this ‘test bed’ project and the lessons learned during its execution are both generalizable and representative of common clinical and translational research paradigms. PMID:22647689

  7. Data Publications Correlate with Citation Impact.

    PubMed

    Leitner, Florian; Bielza, Concha; Hill, Sean L; Larrañaga, Pedro

    2016-01-01

    Neuroscience and molecular biology have been generating large datasets over the past years that are reshaping how research is being conducted. In their wake, open data sharing has been singled out as a major challenge for the future of research. We conducted a comparative study of citations of data publications in both fields, showing that the average publication tagged with a data-related term by the NCBI MeSH (Medical Subject Headings) curators achieves a significantly larger citation impact than the average in either field. We introduce a new metric, the data article citation index (DAC-index), to identify the most prolific authors among those data-related publications. The study is fully reproducible from an executable Rmd (R Markdown) script together with all the citation datasets. We hope these results can encourage authors to more openly publish their data.

  8. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony

    1990-01-01

    The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  9. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.

    1990-01-01

    Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  10. Improve Performance of Data Warehouse by Query Cache

    NASA Astrophysics Data System (ADS)

    Gour, Vishal; Sarangdevot, S. S.; Sharma, Anand; Choudhary, Vinod

    2010-11-01

    The primary goal of a data warehouse is to free the information locked up in the operational database so that decision makers and business analysts can run queries, analyses, and planning regardless of data changes in the operational database. Because the number of queries is large, there is in certain cases a reasonable probability that the same query is submitted by one or more users at different times. Each time a query is executed, all of the warehouse data is analyzed to generate the result of that query. In this paper we study how a query cache improves data warehouse performance and examine the common problems faced by data warehouse administrators in minimizing response time and improving overall query efficiency, particularly when the warehouse is updated at regular intervals.
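    A minimal Python sketch of the repeated-query scenario described above: identical queries are answered from a cache, and the cache is invalidated on each periodic warehouse load. The run_query callable and the whitespace/case normalization are assumptions made for illustration, not part of the paper.

```python
import hashlib

class QueryCache:
    """Answer repeated identical queries from memory instead of re-scanning the warehouse."""

    def __init__(self, run_query):
        self._run_query = run_query  # callable that executes SQL against the warehouse
        self._cache = {}

    def execute(self, sql):
        key = hashlib.sha256(" ".join(sql.split()).lower().encode()).hexdigest()
        if key not in self._cache:           # first submission: analyze the warehouse data
            self._cache[key] = self._run_query(sql)
        return self._cache[key]              # later submissions: serve the cached result

    def invalidate(self):
        # Call after each periodic warehouse refresh so stale results are never served.
        self._cache.clear()

warehouse = QueryCache(run_query=lambda sql: f"result of <{sql}>")
print(warehouse.execute("SELECT region, SUM(sales) FROM facts GROUP BY region"))
print(warehouse.execute("select region, sum(sales) from facts group by region"))  # cache hit
```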

  11. Optimized scalar promotion with load and splat SIMD instructions

    DOEpatents

    Eichenberger, Alexander E; Gschwind, Michael K; Gunnels, John A

    2013-10-29

    Mechanisms for optimizing scalar code executed on a single instruction multiple data (SIMD) engine are provided. Placement of vector operation-splat operations may be determined based on an identification of scalar and SIMD operations in an original code representation. The original code representation may be modified to insert the vector operation-splat operations based on the determined placement of vector operation-splat operations to generate a first modified code representation. Placement of separate splat operations may be determined based on identification of scalar and SIMD operations in the first modified code representation. The first modified code representation may be modified to insert or delete separate splat operations based on the determined placement of the separate splat operations to generate a second modified code representation. SIMD code may be output based on the second modified code representation for execution by the SIMD engine.

  12. Optimized scalar promotion with load and splat SIMD instructions

    DOEpatents

    Eichenberger, Alexandre E [Chappaqua, NY; Gschwind, Michael K [Chappaqua, NY; Gunnels, John A [Yorktown Heights, NY

    2012-08-28

    Mechanisms for optimizing scalar code executed on a single instruction multiple data (SIMD) engine are provided. Placement of vector operation-splat operations may be determined based on an identification of scalar and SIMD operations in an original code representation. The original code representation may be modified to insert the vector operation-splat operations based on the determined placement of vector operation-splat operations to generate a first modified code representation. Placement of separate splat operations may be determined based on identification of scalar and SIMD operations in the first modified code representation. The first modified code representation may be modified to insert or delete separate splat operations based on the determined placement of the separate splat operations to generate a second modified code representation. SIMD code may be output based on the second modified code representation for execution by the SIMD engine.
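    As a rough illustration of the load-and-splat idea (not the patented mechanism), the sketch below replicates a scalar across a vector once and then uses only lane-wise operations; NumPy broadcasting stands in for SIMD lanes here.

```python
import numpy as np

def saxpy_scalar(a, x, y):
    # Scalar reference loop: one multiply-add per element.
    out = np.empty_like(x)
    for i in range(len(x)):
        out[i] = a * x[i] + y[i]
    return out

def saxpy_splat(a, x, y):
    # "Splat" the scalar a into a full-width vector once, then operate lane-wise only.
    a_vec = np.full(x.shape, a, dtype=x.dtype)  # load-and-splat
    return a_vec * x + y                        # pure vector (SIMD-style) operations

x = np.arange(8, dtype=np.float32)
y = np.ones(8, dtype=np.float32)
assert np.allclose(saxpy_scalar(2.0, x, y), saxpy_splat(2.0, x, y))
```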

  13. Understanding metaphors and idioms: a single-case neuropsychological study in a person with Down syndrome.

    PubMed

    Papagno, C; Vallar, G

    2001-05-01

    The ability of subject F.F., diagnosed with Down syndrome, to appreciate nonliteral (interpreting metaphors and idioms) and literal (vocabulary knowledge, including highly specific and unusual items) aspects of language was investigated. F.F. was impaired in understanding both metaphors and idioms, while her phonological, syntactic and lexical-semantic skills were largely preserved. By contrast, some aspects of F.F.'s executive functions and many visuospatial abilities were defective. The suggestion is made that the interpretation of metaphors and idioms is largely independent of that of literal language, preserved in F.F., and that some executive aspects of working memory and visuospatial and imagery processes may play a role.

  14. Large Deployable Reflector (LDR) system concept and technology definition study. Volume 1: Executive summary, analyses and trades, and system concepts

    NASA Technical Reports Server (NTRS)

    Agnew, Donald L.; Jones, Peter A.

    1989-01-01

    A study was conducted to define reasonable and representative large deployable reflector (LDR) system concepts for the purpose of defining a technology development program aimed at providing the requisite technological capability necessary to start LDR development by the end of 1991. This volume includes the executive summary for the total study, a report of thirteen system analysis and trades tasks (optical configuration, aperture size, reflector material, segmented mirror, optical subsystem, thermal, pointing and control, transportation to orbit, structures, contamination control, orbital parameters, orbital environment, and spacecraft functions), and descriptions of three selected LDR system concepts. Supporting information is contained in appendices.

  15. Contributions from associative and explicit sequence knowledge to the execution of discrete keying sequences.

    PubMed

    Verwey, Willem B

    2015-05-01

    Research has provided many indications that highly practiced 6-key sequences are carried out in a chunking mode in which key-specific stimuli past the first are largely ignored. When in such sequences a deviating stimulus occasionally occurs at an unpredictable location, participants fall back to responding to individual stimuli (Verwey & Abrahamse, 2012). The observation that in such a situation execution still benefits from prior practice has been attributed to the possibility to operate in an associative mode. To better understand the contribution to the execution of keying sequences of motor chunks, associative sequence knowledge and also of explicit sequence knowledge, the present study tested three alternative accounts for the earlier finding of an execution rate increase at the end of 6-key sequences performed in the associative mode. The results provide evidence that the earlier observed execution rate increase can be attributed to the use of explicit sequence knowledge. In the present experiment this benefit was limited to sequences that are executed at the moderately fast rates of the associative mode, and occurred at both the earlier and final elements of the sequences. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Expert-novice differences in cognitive and execution skills during tennis competition.

    PubMed

    Del Villar, Fernando; García González, Luis; Iglesias, Damián; Perla Moreno, M; Cervelló, Eduardo M

    2007-04-01

    This study deals with decision and execution behavior of tennis players during competition. The study is based on the expert-novice paradigm and aims to identify differences between both groups in the decision-making and execution variables in serve and shot actions in tennis. Six expert players (elite Spanish tennis players) and six novice players (grade school tennis players) took part in this study. The observation protocol defined by McPherson and Thomas in 1989, which includes control, decision-making and execution variables, was applied to each player's performance in a real match situation. The analysis revealed significant differences between experts and novices in the decision-making and execution variables: experts displayed a greater ability to make appropriate decisions, selecting the most tactical responses to put pressure on the opponent. Expert tennis players were also able to carry out forceful executions against their opponent with greater efficiency, making the opponent's response considerably more difficult. These findings are in accordance with those of McPherson and colleagues.

  17. Accelerating the Gillespie Exact Stochastic Simulation Algorithm using hybrid parallel execution on graphics processing units.

    PubMed

    Komarov, Ivan; D'Souza, Roshan M

    2012-01-01

    The Gillespie Stochastic Simulation Algorithm (GSSA) and its variants are cornerstone techniques to simulate reaction kinetics in situations where the concentration of the reactant is too low to allow deterministic techniques such as differential equations. The inherent limitations of the GSSA include the time required for executing a single run and the need for multiple runs for parameter sweep exercises due to the stochastic nature of the simulation. Even very efficient variants of GSSA are prohibitively expensive to compute and perform parameter sweeps. Here we present a novel variant of the exact GSSA that is amenable to acceleration by using graphics processing units (GPUs). We parallelize the execution of a single realization across threads in a warp (fine-grained parallelism). A warp is a collection of threads that are executed synchronously on a single multi-processor. Warps executing in parallel on different multi-processors (coarse-grained parallelism) simultaneously generate multiple trajectories. Novel data-structures and algorithms reduce memory traffic, which is the bottleneck in computing the GSSA. Our benchmarks show an 8×-120× performance gain over various state-of-the-art serial algorithms when simulating different types of models.
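    For reference, a minimal serial sketch of the Gillespie direct method for a single reversible reaction A <-> B; the warp-level GPU parallelization and the specialized data structures described in the record are not reproduced here, and the rate constants are arbitrary.

```python
import random

def gillespie(a0=100, b0=0, k_f=1.0, k_r=0.5, t_end=10.0, seed=0):
    """One exact SSA trajectory for A <-> B (direct method)."""
    rng = random.Random(seed)
    t, a, b = 0.0, a0, b0
    trajectory = [(t, a, b)]
    while t < t_end:
        prop = [k_f * a, k_r * b]              # propensities of A->B and B->A
        total = sum(prop)
        if total == 0.0:                       # no reaction can fire
            break
        t += rng.expovariate(total)            # exponentially distributed waiting time
        if rng.random() * total < prop[0]:     # pick which reaction fires
            a, b = a - 1, b + 1
        else:
            a, b = a + 1, b - 1
        trajectory.append((t, a, b))
    return trajectory

final_t, final_a, final_b = gillespie()[-1]
print(f"t={final_t:.2f}  A={final_a}  B={final_b}")
```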

  18. Software Testbed for Developing and Evaluating Integrated Autonomous Systems

    DTIC Science & Technology

    2015-03-01

    EUROPA planning system for plan generation. The adaptive controller executes the new plan, using augmented, hierarchical finite state machines to...using the Internet Communications Engine (ICE), an object-oriented toolkit for building distributed applications. ...ANML model is translated into the New Domain Definition Language (NDDL) and sent to NASA's EUROPA planning system for plan generation. The adaptive

  19. Maneuver Automation Software

    NASA Technical Reports Server (NTRS)

    Uffelman, Hal; Goodson, Troy; Pellegrin, Michael; Stavert, Lynn; Burk, Thomas; Beach, David; Signorelli, Joel; Jones, Jeremy; Hahn, Yungsun; Attiyah, Ahlam; hide

    2009-01-01

    The Maneuver Automation Software (MAS) automates the process of generating commands for maneuvers to keep the spacecraft of the Cassini-Huygens mission on a predetermined prime mission trajectory. Before MAS became available, a team of approximately 10 members had to work about two weeks to design, test, and implement each maneuver in a process that involved running many maneuver-related application programs and then serially handing off data products to other parts of the team. MAS enables a three-member team to design, test, and implement a maneuver in about one-half hour after Navigation has processed tracking data. MAS accepts more than 60 parameters and 22 files as input directly from users. MAS consists of Practical Extraction and Reporting Language (PERL) scripts that link, sequence, and execute the maneuver-related application programs: "Pushing a single button" on a graphical user interface causes MAS to run navigation programs that design a maneuver; programs that create sequences of commands to execute the maneuver on the spacecraft; and a program that generates predictions about maneuver performance and generates reports and other files that enable users to quickly review and verify the maneuver design. MAS can also generate presentation materials, initiate electronic command request forms, and archive all data products for future reference.

  20. Combustion driven ammonia generation strategies for passive ammonia SCR system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toner, Joel G.; Narayanaswamy, Kushal; Szekely, Jr., Gerald A.

    A method for controlling ammonia generation in an exhaust gas feedstream output from an internal combustion engine equipped with an exhaust aftertreatment system including a first aftertreatment device includes executing an ammonia generation cycle to generate ammonia on the first aftertreatment device. A desired air-fuel ratio output from the engine and entering the exhaust aftertreatment system conducive for generating ammonia on the first aftertreatment device is determined. Operation of a selected combination of a plurality of cylinders of the engine is selectively altered to achieve the desired air-fuel ratio entering the exhaust aftertreatment system.

  1. A user-oriented synthetic workload generator

    NASA Technical Reports Server (NTRS)

    Kao, Wei-Lun

    1991-01-01

    A user oriented synthetic workload generator that simulates users' file access behavior based on real workload characterization is described. The model for this workload generator is user oriented and job specific, represents file I/O operations at the system call level, allows general distributions for the usage measures, and assumes independence in the file I/O operation stream. The workload generator consists of three parts which handle specification of distributions, creation of an initial file system, and selection and execution of file I/O operations. Experiments on SUN NFS are shown to demonstrate the usage of the workload generator.
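    The three-part structure mentioned above might be sketched as follows in Python; the specific distributions, file counts, and read/write mix are hypothetical placeholders, not the parameters of the original generator.

```python
import os
import random
import tempfile

def make_workload(n_files=5, n_ops=20, read_prob=0.7, seed=1):
    rng = random.Random(seed)                      # (1) usage-measure distributions
    root = tempfile.mkdtemp(prefix="synth_fs_")
    files = []
    for i in range(n_files):                       # (2) create an initial file system
        path = os.path.join(root, f"file_{i}.dat")
        with open(path, "wb") as f:
            f.write(os.urandom(rng.randint(256, 4096)))
        files.append(path)
    for _ in range(n_ops):                         # (3) select and execute file I/O operations
        path = rng.choice(files)
        if rng.random() < read_prob:
            with open(path, "rb") as f:
                f.read()
        else:
            with open(path, "ab") as f:
                f.write(os.urandom(rng.randint(64, 512)))
    return root

print("workload generated under", make_workload())
```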

  2. Sound representation in higher language areas during language generation

    PubMed Central

    Magrassi, Lorenzo; Aromataris, Giuseppe; Cabrini, Alessandro; Annovazzi-Lodi, Valerio; Moro, Andrea

    2015-01-01

    How language is encoded by neural activity in the higher-level language areas of humans is still largely unknown. We investigated whether the electrophysiological activity of Broca’s area correlates with the sound of the utterances produced. During speech perception, the electric cortical activity of the auditory areas correlates with the sound envelope of the utterances. In our experiment, we compared the electrocorticogram recorded during awake neurosurgical operations in Broca’s area and in the dominant temporal lobe with the sound envelope of single words versus sentences read aloud or mentally by the patients. Our results indicate that the electrocorticogram correlates with the sound envelope of the utterances, starting before any sound is produced and even in the absence of speech, when the patient is reading mentally. No correlations were found when the electrocorticogram was recorded in the superior parietal gyrus, an area not directly involved in language generation, or in Broca’s area when the participants were executing a repetitive motor task, which did not include any linguistic content, with their dominant hand. The distribution of suprathreshold correlations across frequencies of cortical activities varied whether the sound envelope derived from words or sentences. Our results suggest the activity of language areas is organized by sound when language is generated before any utterance is produced or heard. PMID:25624479

  3. Software Aids Visualization of Computed Unsteady Flow

    NASA Technical Reports Server (NTRS)

    Kao, David; Kenwright, David

    2003-01-01

    Unsteady Flow Analysis Toolkit (UFAT) is a computer program that synthesizes motions of time-dependent flows represented by very large sets of data generated in computational fluid dynamics simulations. Prior to the development of UFAT, it was necessary to rely on static, single-snapshot depictions of time-dependent flows generated by flow-visualization software designed for steady flows. Whereas it typically takes weeks to analyze the results of a large-scale unsteady-flow simulation by use of steady-flow visualization software, the analysis time is reduced to hours when UFAT is used. UFAT can be used to generate graphical objects of flow-visualization results using multi-block curvilinear grids in the format of a previously developed NASA data-visualization program, PLOT3D. These graphical objects can be rendered using FAST, another popular flow-visualization program developed at NASA. Flow-visualization techniques that can be exploited by use of UFAT include time-dependent tracking of particles, detection of vortex cores, extraction of stream ribbons and surfaces, and tetrahedral decomposition for optimal particle tracking. Unique computational features of UFAT include capabilities for automatic (batch) processing, restart, memory mapping, and parallel processing. These capabilities significantly reduce analysis time and storage requirements, relative to those of prior flow-visualization software. UFAT can be executed on a variety of supercomputers.

  4. Individual differences in the executive control of attention, memory, and thought, and their associations with schizotypy.

    PubMed

    Kane, Michael J; Meier, Matt E; Smeekens, Bridget A; Gross, Georgina M; Chun, Charlotte A; Silvia, Paul J; Kwapil, Thomas R

    2016-08-01

    A large correlational study took a latent-variable approach to the generality of executive control by testing the individual-differences structure of executive-attention capabilities and assessing their prediction of schizotypy, a multidimensional construct (with negative, positive, disorganized, and paranoid factors) conveying risk for schizophrenia. Although schizophrenia is convincingly linked to executive deficits, the schizotypy literature is equivocal. Subjects completed tasks of working memory capacity (WMC), attention restraint (inhibiting prepotent responses), and attention constraint (focusing visual attention amid distractors), the latter 2 in an effort to fractionate the "inhibition" construct. We also assessed mind-wandering propensity (via in-task thought probes) and coefficient of variation in response times (RT CoV) from several tasks as more novel indices of executive attention. WMC, attention restraint, attention constraint, mind wandering, and RT CoV were correlated but separable constructs, indicating some distinctions among "attention control" abilities; WMC correlated more strongly with attentional restraint than constraint, and mind wandering correlated more strongly with attentional restraint, attentional constraint, and RT CoV than with WMC. Across structural models, no executive construct predicted negative schizotypy and only mind wandering and RT CoV consistently (but modestly) predicted positive, disorganized, and paranoid schizotypy; stalwart executive constructs in the schizophrenia literature-WMC and attention restraint-showed little to no predictive power, beyond restraint's prediction of paranoia. Either executive deficits are consequences rather than risk factors for schizophrenia, or executive failures barely precede or precipitate diagnosable schizophrenia symptoms. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  5. Lazy evaluation of FP programs: A data-flow approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Y.H.; Gaudiot, J.L.

    1988-12-31

    This paper presents a lazy evaluation system for Backus' list-based functional language FP in a data-driven environment. A superset language of FP, called DFP (Demand-driven FP), is introduced. FP eager programs are transformed into DFP lazy programs which contain the notion of demands. The data-driven execution of DFP programs has the same effect as lazy evaluation. DFP lazy programs have the property of always evaluating a sufficient and necessary result. The infinite sequence generator is used to demonstrate the eager-to-lazy program transformation and the execution of the lazy programs.
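    The demand-driven effect can be illustrated with a Python generator: elements of a conceptually infinite sequence are computed only when the consumer demands them. This is an analogy for the lazy behaviour the DFP transformation achieves, not the DFP system itself.

```python
from itertools import islice

def naturals(start=0):
    n = start
    while True:          # conceptually infinite; nothing is computed until demanded
        yield n
        n += 1

def squares():
    for n in naturals():
        yield n * n

# Demanding the first five elements forces only five computations.
print(list(islice(squares(), 5)))   # [0, 1, 4, 9, 16]
```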

  6. Generation of Non-Homogeneous Poisson Processes by Thinning: Programming Considerations and Comparision with Competing Algorithms.

    DTIC Science & Technology

    1978-12-01

    Poisson processes. The method is valid for Poisson processes with any given intensity function. The basic thinning algorithm is modified to exploit several refinements which reduce computer execution time by approximately one-third. The basic and modified thinning programs are compared with the Poisson decomposition and gap-statistics algorithm, which is easily implemented for Poisson processes with intensity functions of the form exp(a0 + a1*t + a2*t^2). The thinning programs are competitive in both execution
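    A minimal sketch of the basic thinning (acceptance-rejection) method for a non-homogeneous Poisson process, using an intensity of the quadratic-exponential form mentioned above; the coefficient values are arbitrary, and the intensity is assumed increasing so its value at the right endpoint bounds it on the interval.

```python
import math
import random

def thinning(rate, rate_max, t_end, seed=0):
    """Event times of an NHPP with intensity rate(t) <= rate_max on [0, t_end]."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate_max)          # candidate from a homogeneous process
        if t > t_end:
            return events
        if rng.random() <= rate(t) / rate_max:  # accept with probability rate(t)/rate_max
            events.append(t)

a0, a1, a2, T = 0.0, 0.2, 0.01, 10.0
lam = lambda t: math.exp(a0 + a1 * t + a2 * t ** 2)
print(len(thinning(lam, lam(T), T)), "events generated")  # lam is increasing, so lam(T) bounds it
```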

  7. Model-Driven Engineering of Machine Executable Code

    NASA Astrophysics Data System (ADS)

    Eichberg, Michael; Monperrus, Martin; Kloppenburg, Sven; Mezini, Mira

    Implementing static analyses of machine-level executable code is labor intensive and complex. We show how to leverage model-driven engineering to facilitate the design and implementation of programs that perform static analyses. Further, we report on important lessons learned on the benefits and drawbacks of using the following technologies: using the Scala programming language as the target of code generation, using XML Schema to express a metamodel, and using XSLT to implement (a) transformations and (b) a lint-like tool. Finally, we report on the use of Prolog for writing model transformations.

  8. Application of neural networks to group technology

    NASA Astrophysics Data System (ADS)

    Caudell, Thomas P.; Smith, Scott D. G.; Johnson, G. C.; Wunsch, Donald C., II

    1991-08-01

    Adaptive resonance theory (ART) neural networks are being developed for application to the industrial engineering problem of group technology--the reuse of engineering designs. Two- and three-dimensional representations of engineering designs are input to ART-1 neural networks to produce groups or families of similar parts. These representations, in their basic form, amount to bit maps of the part, and can become very large when the part is represented in high resolution. This paper describes an enhancement to an algorithmic form of ART-1 that allows it to operate directly on compressed input representations and to generate compressed memory templates. The performance of this compressed algorithm is compared to that of the regular algorithm on real engineering designs, and a significant savings in memory storage as well as a speed-up in execution is observed. In addition, a 'neural database' system under development is described. This system demonstrates the feasibility of training an ART-1 network to first cluster designs into families, and then to recall the family when presented with a similar design. This application is of large practical value to industry, making it possible to avoid duplication of design efforts.

  9. MLP: A Parallel Programming Alternative to MPI for New Shared Memory Parallel Systems

    NASA Technical Reports Server (NTRS)

    Taft, James R.

    1999-01-01

    Recent developments at the NASA AMES Research Center's NAS Division have demonstrated that the new generation of NUMA based Symmetric Multi-Processing systems (SMPs), such as the Silicon Graphics Origin 2000, can successfully execute legacy vector oriented CFD production codes at sustained rates far exceeding processing rates possible on dedicated 16 CPU Cray C90 systems. This high level of performance is achieved via shared memory based Multi-Level Parallelism (MLP). This programming approach, developed at NAS and outlined below, is distinct from the message passing paradigm of MPI. It offers parallelism at both the fine and coarse grained level, with communication latencies that are approximately 50-100 times lower than typical MPI implementations on the same platform. Such latency reductions offer the promise of performance scaling to very large CPU counts. The method draws on, but is also distinct from, the newly defined OpenMP specification, which uses compiler directives to support a limited subset of multi-level parallel operations. The NAS MLP method is general, and applicable to a large class of NASA CFD codes.

  10. GPU Accelerated Clustering for Arbitrary Shapes in Geoscience Data

    NASA Astrophysics Data System (ADS)

    Pankratius, V.; Gowanlock, M.; Rude, C. M.; Li, J. D.

    2016-12-01

    Clustering algorithms have become a vital component in intelligent systems for geoscience that helps scientists discover and track phenomena of various kinds. Here, we outline advances in Density-Based Spatial Clustering of Applications with Noise (DBSCAN) which detects clusters of arbitrary shape that are common in geospatial data. In particular, we propose a hybrid CPU-GPU implementation of DBSCAN and highlight new optimization approaches on the GPU that allows clustering detection in parallel while optimizing data transport during CPU-GPU interactions. We employ an efficient batching scheme between the host and GPU such that limited GPU memory is not prohibitive when processing large and/or dense datasets. To minimize data transfer overhead, we estimate the total workload size and employ an execution that generates optimized batches that will not overflow the GPU buffer. This work is demonstrated on space weather Total Electron Content (TEC) datasets containing over 5 million measurements from instruments worldwide, and allows scientists to spot spatially coherent phenomena with ease. Our approach is up to 30 times faster than a sequential implementation and therefore accelerates discoveries in large datasets. We acknowledge support from NSF ACI-1442997.
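    As a CPU-only point of reference (the hybrid CPU-GPU batching described in the record is not reproduced here), scikit-learn's DBSCAN can be run on synthetic arbitrarily shaped clusters as below; the eps and min_samples values are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

# Two interleaved half-moons: arbitrary-shaped clusters that centroid methods miss.
points, _ = make_moons(n_samples=2000, noise=0.05, random_state=0)
labels = DBSCAN(eps=0.1, min_samples=10).fit_predict(points)

n_clusters = len(set(labels)) - (1 if -1 in labels else 0)  # -1 marks noise points
print(f"{n_clusters} clusters found, {np.sum(labels == -1)} noise points")
```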

  11. Developing a validation for environmental sustainability

    NASA Astrophysics Data System (ADS)

    Adewale, Bamgbade Jibril; Mohammed, Kamaruddeen Ahmed; Nawi, Mohd Nasrun Mohd; Aziz, Zulkifli

    2016-08-01

    One agenda for addressing environmental protection in construction is to reduce impacts and make construction activities more sustainable. This consideration has generated considerable research interest within the construction industry, especially given construction's damaging effects on the ecosystem, such as various forms of environmental pollution, resource depletion and biodiversity loss on a global scale. Using the Partial Least Squares-Structural Equation Modeling technique, this study validates an environmental sustainability (ES) construct in the context of large construction firms in Malaysia. A cross-sectional survey was carried out in which data were collected from large Malaysian construction firms using a structured questionnaire. The results revealed that business innovativeness and new technology are important in determining the environmental sustainability of Malaysian construction firms. The study also established adequate internal consistency reliability, convergent validity and discriminant validity for each of its constructs. Based on these results, the indicators for the organisational innovativeness dimensions (business innovativeness and new technology) appear useful for measuring these constructs when studying construction firms' tendency to adopt environmental sustainability in their project execution.

  12. SAMSA2: a standalone metatranscriptome analysis pipeline.

    PubMed

    Westreich, Samuel T; Treiber, Michelle L; Mills, David A; Korf, Ian; Lemay, Danielle G

    2018-05-21

    Complex microbial communities are an area of growing interest in biology. Metatranscriptomics allows researchers to quantify microbial gene expression in an environmental sample via high-throughput sequencing. Metatranscriptomic experiments are computationally intensive because the experiments generate a large volume of sequence data and each sequence must be compared with reference sequences from thousands of organisms. SAMSA2 is an upgrade to the original Simple Annotation of Metatranscriptomes by Sequence Analysis (SAMSA) pipeline that has been redesigned for standalone use on a supercomputing cluster. SAMSA2 is faster due to the use of the DIAMOND aligner, and more flexible and reproducible because it uses local databases. SAMSA2 is available with detailed documentation, and example input and output files along with examples of master scripts for full pipeline execution. SAMSA2 is a rapid and efficient metatranscriptome pipeline for analyzing large RNA-seq datasets in a supercomputing cluster environment. SAMSA2 provides simplified output that can be examined directly or used for further analyses, and its reference databases may be upgraded, altered or customized to fit the needs of any experiment.

  13. Executive Semantic Processing Is Underpinned by a Large-scale Neural Network: Revealing the Contribution of Left Prefrontal, Posterior Temporal, and Parietal Cortex to Controlled Retrieval and Selection Using TMS

    ERIC Educational Resources Information Center

    Whitney, Carin; Kirk, Marie; O'Sullivan, Jamie; Ralph, Matthew A. Lambon; Jefferies, Elizabeth

    2012-01-01

    To understand the meanings of words and objects, we need to have knowledge about these items themselves plus executive mechanisms that compute and manipulate semantic information in a task-appropriate way. The neural basis for semantic control remains controversial. Neuroimaging studies have focused on the role of the left inferior frontal gyrus…

  14. The Modeling, Simulation and Comparison of Interconnection Networks for Parallel Processing.

    DTIC Science & Technology

    1987-12-01

    performs better at a lower hardware cost than do the single stage cube and mesh networks. As a result, the designer of a parallel processing system is...attempted, and in most cases succeeded, in designing and implementing faster, more powerful systems. Due to design innovations and technological advances...largely to the computational complexity of the algorithms executed. In the von Neumann machine, instructions must be executed in a sequential manner. Design

  15. Systems definition study for shuttle demonstration flights of large space structures. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The development of large space structure technology is discussed, with emphasis on space fabricated structures which are automatically manufactured in space from sheet-strip materials and assembled on-orbit. Definition of a flight demonstration involving an Automated Beam Builder and the building and assembling of large structures is presented.

  16. 77 FR 28533 - Special Conditions: Boeing, Model 737-800; Large Non-Structural Glass in the Passenger Compartment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-15

    ...-0499; Notice No. 25-12-01-SC] Special Conditions: Boeing, Model 737-800; Large Non-Structural Glass in... associated with the installation of large non-structural glass items in the cabin area of an executive... standards that the Administrator considers necessary to establish a level of safety equivalent to that...

  17. 77 FR 40255 - Special Conditions: Boeing, Model 737-800; Large Non-Structural Glass in the Passenger Compartment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-09

    ...-0499; Special Conditions No. 25-466-SC] Special Conditions: Boeing, Model 737-800; Large Non-Structural... with the installation of large non-structural glass items in the cabin area of an executive interior... Administrator considers necessary to establish a level of safety equivalent to that established by the existing...

  18. A rule-based software test data generator

    NASA Technical Reports Server (NTRS)

    Deason, William H.; Brown, David B.; Chang, Kai-Hsiung; Cross, James H., II

    1991-01-01

    Rule-based software test data generation is proposed as an alternative to either path/predicate analysis or random data generation. A prototype rule-based test data generator for Ada programs is constructed and compared to a random test data generator. Four Ada procedures are used in the comparison. Approximately 2000 rule-based test cases and 100,000 randomly generated test cases are automatically generated and executed. The success of the two methods is compared using standard coverage metrics. Simple statistical tests showing that even the primitive rule-based test data generation prototype is significantly better than random data generation are performed. This result demonstrates that rule-based test data generation is feasible and shows great promise in assisting test engineers, especially when the rule base is developed further.
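
    The contrast between the two approaches can be pictured with a small sketch: a rule-based generator derives a handful of targeted inputs from boundary-value rules on a parameter's range, while a random generator draws many unguided inputs from the same range. The rules shown are hypothetical, and the sketch is written in Python rather than Ada purely for illustration.

      # Toy contrast of rule-based vs. random test data generation for an integer parameter.
      import random

      def rule_based_cases(lo: int, hi: int) -> list[int]:
          """Derive test inputs from boundary-value rules on a parameter's range."""
          return sorted({lo, lo + 1, (lo + hi) // 2, hi - 1, hi})

      def random_cases(lo: int, hi: int, n: int) -> list[int]:
          """Draw n inputs uniformly at random from the same range."""
          return [random.randint(lo, hi) for _ in range(n)]

      if __name__ == "__main__":
          print("rule-based:", rule_based_cases(0, 100))   # few, targeted cases
          print("random    :", random_cases(0, 100, 10))   # many, unguided cases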

  19. Impact of the Cognitive-Functional (Cog-Fun) Intervention on Executive Functions and Participation Among Children With Attention Deficit Hyperactivity Disorder: A Randomized Controlled Trial.

    PubMed

    Hahn-Markowitz, Jeri; Berger, Itai; Manor, Iris; Maeir, Adina

    We examined the effect of the Cognitive-Functional (Cog-Fun) occupational therapy intervention on executive functions and participation among children with attention deficit hyperactivity disorder (ADHD). We used a randomized, controlled study with a crossover design. One hundred and seven children age 7-10 yr diagnosed with ADHD were allocated to treatment or wait-list control group. The control group received treatment after a 3-mo wait. Outcome measures included the Behavior Rating Inventory of Executive Function (BRIEF) and the Canadian Occupational Performance Measure (COPM). Significant improvements were found on both the BRIEF and COPM after intervention with large treatment effects. Before crossover, significant Time × Group interactions were found on the BRIEF. This study supports the effectiveness of the Cog-Fun intervention in improving executive functions and participation among children with ADHD. Copyright © 2017 by the American Occupational Therapy Association, Inc.

  20. Central Executive Dysfunction and Deferred Prefrontal Processing in Veterans with Gulf War Illness.

    PubMed

    Hubbard, Nicholas A; Hutchison, Joanna L; Motes, Michael A; Shokri-Kojori, Ehsan; Bennett, Ilana J; Brigante, Ryan M; Haley, Robert W; Rypma, Bart

    2014-05-01

    Gulf War Illness is associated with toxic exposure to cholinergic disruptive chemicals. The cholinergic system has been shown to mediate the central executive of working memory (WM). The current work proposes that impairment of the cholinergic system in Gulf War Illness patients (GWIPs) leads to behavioral and neural deficits of the central executive of WM. A large sample of GWIPs and matched controls (MCs) underwent functional magnetic resonance imaging during a varied-load working memory task. Compared to MCs, GWIPs showed a greater decline in performance as WM-demand increased. Functional imaging suggested that GWIPs evinced separate processing strategies, deferring prefrontal cortex activity from encoding to retrieval for high demand conditions. Greater activity during high-demand encoding predicted greater WM performance. Behavioral data suggest that WM executive strategies are impaired in GWIPs. Functional data further support this hypothesis and suggest that GWIPs utilize less effective strategies during high-demand WM.

  1. Central Executive Dysfunction and Deferred Prefrontal Processing in Veterans with Gulf War Illness

    PubMed Central

    Hubbard, Nicholas A.; Hutchison, Joanna L.; Motes, Michael A.; Shokri-Kojori, Ehsan; Bennett, Ilana J.; Brigante, Ryan M.; Haley, Robert W.; Rypma, Bart

    2015-01-01

    Gulf War Illness is associated with toxic exposure to cholinergic disruptive chemicals. The cholinergic system has been shown to mediate the central executive of working memory (WM). The current work proposes that impairment of the cholinergic system in Gulf War Illness patients (GWIPs) leads to behavioral and neural deficits of the central executive of WM. A large sample of GWIPs and matched controls (MCs) underwent functional magnetic resonance imaging during a varied-load working memory task. Compared to MCs, GWIPs showed a greater decline in performance as WM-demand increased. Functional imaging suggested that GWIPs evinced separate processing strategies, deferring prefrontal cortex activity from encoding to retrieval for high demand conditions. Greater activity during high-demand encoding predicted greater WM performance. Behavioral data suggest that WM executive strategies are impaired in GWIPs. Functional data further support this hypothesis and suggest that GWIPs utilize less effective strategies during high-demand WM. PMID:25767746

  2. Optimal execution with price impact under Cumulative Prospect Theory

    NASA Astrophysics Data System (ADS)

    Zhao, Jingdong; Zhu, Hongliang; Li, Xindan

    2018-01-01

    Optimal execution of a stock (or portfolio) has been widely studied in academia and in practice over the past decade, and minimizing transaction costs is a critical point. However, few researchers consider the psychological factors for the traders. What are traders truly concerned with - buying low in the paper accounts or buying lower compared to others? We consider the optimal trading strategies in terms of the price impact and Cumulative Prospect Theory and identify some specific properties. Our analyses indicate that a large proportion of the execution volume is distributed at both ends of the transaction time. But the trader's optimal strategies may not be implemented at the same transaction size and speed in different market environments.

  3. Age-related commonalities and differences in the relationship between executive functions and intelligence: Analysis of the NAB executive functions module and WAIS-IV scores.

    PubMed

    Buczylowska, Dorota; Petermann, Franz

    2017-01-01

    Data from five subtests of the Executive Functions Module of the German Neuropsychological Assessment Battery (NAB) and all ten core subtests of the German Wechsler Adult Intelligence Scale - Fourth Edition (WAIS-IV) were used to examine the relationship between executive functions and intelligence in a comparison of two age groups: individuals aged 18-59 years and individuals aged 60-88 years. The NAB subtests Categories and Word Generation demonstrated a consistent correlation pattern for both age groups. However, the NAB Judgment subtest correlated more strongly with three WAIS-IV indices, the Full Scale IQ (FSIQ), and the General Ability Index (GAI) in the older adult group than in the younger group. Additionally, in the 60-88 age group, the Executive Functions Index (EFI) was more strongly correlated with the Verbal Comprehension Index (VCI) than with the Perceptual Reasoning Index (PRI). Both age groups demonstrated a strong association of the EFI with the FSIQ and the Working Memory Index (WMI). The results imply the potential diagnostic utility of the Judgment subtest and a significant relationship between executive functioning and crystallized intelligence at older ages. Furthermore, it may be concluded that there is a considerable age-independent overlap between the EFI and general intelligence, as well as between the EFI and working memory.

  4. The effects of attentional focus in the preparation and execution of a standing long jump.

    PubMed

    Becker, Kevin A; Fairbrother, Jeffrey T; Couvillion, Kaylee F

    2018-04-03

    Attentional focus research suggests an external focus leads to improved motor performance compared to an internal focus (Wulf in Int Rev Sport Exerc Psychol 6:77-104, 2013), but skilled athletes often report using an internal focus (Fairbrother et al., Front Psychol 7:1028, 2016) and sometimes shifting between different foci in the preparation and execution phases of performance (Bernier et al. in J Appl Sport Psychol 23:326-341, 2011; Bernier et al. in Sport Psychol 30:256-266, 2016). To date, focus shifts have been unexplored in experimental research, thus the purpose of this study was to determine the effect of shifting focus between the preparation and execution phases of a standing long jump. Participants (N = 29) completed two jumps in a control condition (CON), followed by two jumps in four experimental conditions presented in a counterbalanced order. Conditions included using an external focus (EF) and internal focus (IF) in both preparation and execution of the skill, as well as shifting from an internal focus in preparation to an external focus in execution (ITE), and an external focus in preparation to an internal focus in execution (ETI). Jump distance was analyzed with a repeated measures ANOVA. The main effect of condition was significant, p < .001, with EF producing longer jumps than all other conditions (p's < 0.05). ITE also generated farther jumps than IF and CON (p's < 0.05). The superiority of the EF and ITE conditions suggests that the focus employed in execution has the strongest impact on performance. Additionally, if an internal focus must be used in preparation, the performance decrement can be ameliorated by shifting to an external focus during execution.

  5. The influence of cognitive load on spatial search performance.

    PubMed

    Longstaffe, Kate A; Hood, Bruce M; Gilchrist, Iain D

    2014-01-01

    During search, executive function enables individuals to direct attention to potential targets, remember locations visited, and inhibit distracting information. In the present study, we investigated these executive processes in large-scale search. In our tasks, participants searched a room containing an array of illuminated locations embedded in the floor. The participants' task was to press the switches at the illuminated locations on the floor so as to locate a target that changed color when pressed. The perceptual salience of the search locations was manipulated by having some locations flashing and some static. Participants were more likely to search at flashing locations, even when they were explicitly informed that the target was equally likely to be at any location. In large-scale search, attention was captured by the perceptual salience of the flashing lights, leading to a bias to explore these targets. Despite this failure of inhibition, participants were able to restrict returns to previously visited locations, a measure of spatial memory performance. Participants were more able to inhibit exploration to flashing locations when they were not required to remember which locations had previously been visited. A concurrent digit-span memory task further disrupted inhibition during search, as did a concurrent auditory attention task. These experiments extend a load theory of attention to large-scale search, which relies on egocentric representations of space. High cognitive load on working memory leads to increased distractor interference, providing evidence for distinct roles for the executive subprocesses of memory and inhibition during large-scale search.

  6. Do attentional capacities and processing speed mediate the effect of age on executive functioning?

    PubMed

    Gilsoul, Jessica; Simon, Jessica; Hogge, Michaël; Collette, Fabienne

    2018-02-06

    The executive processes are well known to decline with age, and similar data also exists for attentional capacities and processing speed. Therefore, we investigated whether these two last nonexecutive variables would mediate the effect of age on executive functions (inhibition, shifting, updating, and dual-task coordination). We administered a large battery of executive, attentional and processing speed tasks to 104 young and 71 older people, and we performed mediation analyses with variables showing a significant age effect. All executive and processing speed measures showed age-related effects while only the visual scanning task performance (selective attention) was explained by age when controlled for gender and educational level. Regarding mediation analyses, visual scanning partially mediated the age effect on updating while processing speed partially mediated the age effect on shifting, updating and dual-task coordination. In a more exploratory way, inhibition was also found to partially mediate the effect of age on the three other executive functions. Attention did not greatly influence executive functioning in aging while, in agreement with the literature, processing speed seems to be a major mediator of the age effect on these processes. Interestingly, the global pattern of results seems also to indicate an influence of inhibition but further studies are needed to confirm the role of that variable as a mediator and its relative importance by comparison with processing speed.

  7. The development of executive functions and early mathematics: a dynamic relationship.

    PubMed

    Van der Ven, Sanne H G; Kroesbergen, Evelyn H; Boom, Jan; Leseman, Paul P M

    2012-03-01

    The relationship between executive functions and mathematical skills has been studied extensively, but results are inconclusive, and how this relationship evolves longitudinally is largely unknown. The aim was to investigate the factor structure of executive functions in inhibition, shifting, and updating; the longitudinal development of executive functions and mathematics; and the relation between them. A total of 211 children in grade 2 (7-8 years old) from 10 schools in the Netherlands participated. Children were followed in grades 1 and 2 of primary education. Executive functions and mathematics were measured four times. The test battery contained multiple tasks for each executive function: Animal Stroop, Local Global, and Simon task for inhibition; Animal Shifting, Trail Making Test in Colours, and Sorting Task for shifting; and Digit Span Backwards, Odd One Out, and Keep Track for updating. The factor structure of executive functions was assessed and relations with mathematics were investigated using growth modelling. Confirmatory factor analysis (CFA) showed that inhibition and shifting could not be distinguished from each other. Updating was a separate factor, and its development was strongly related to mathematical development while inhibition and shifting did not predict mathematics in the presence of the updating factor. The strong relationship between updating and mathematics suggests that updating skills play a key role in the maths learning process. This makes updating a promising target for future intervention studies. ©2011 The British Psychological Society.

  8. Poorer divided attention in children born very preterm can be explained by difficulty with each component task, not the executive requirement to dual-task.

    PubMed

    Delane, Louise; Campbell, Catherine; Bayliss, Donna M; Reid, Corinne; Stephens, Amelia; French, Noel; Anderson, Mike

    2017-07-01

    Children born very preterm (VP, ≤ 32 weeks) exhibit poor performance on tasks of executive functioning. However, it is largely unknown whether this reflects the cumulative impact of non-executive deficits or a separable impairment in executive-level abilities. A dual-task paradigm was used in the current study to differentiate the executive processes involved in performing two simple attention tasks simultaneously. The executive-level contribution to performance was indexed by the within-subject cost incurred to single-task performance under dual-task conditions, termed dual-task cost. The participants included 77 VP children (mean age: 7.17 years) and 74 peer controls (mean age: 7.16 years) who completed Sky Search (selective attention), Score (sustained attention) and Sky Search DT (divided attention) from the Test of Everyday Attention for Children. The divided-attention task requires the simultaneous performance of the selective- and sustained-attention tasks. The VP group exhibited poorer performance on the selective- and divided-attention tasks, and showed a strong trend toward poorer performance on the sustained-attention task. However, there were no significant group differences in dual-task cost. These results suggest a cumulative impact of vulnerable lower-level cognitive processes on dual-tasking or divided attention in VP children, and fail to support the hypothesis that VP children show a separable impairment in executive-level abilities.

  9. IGGy: An interactive environment for surface grid generation

    NASA Technical Reports Server (NTRS)

    Prewitt, Nathan C.

    1992-01-01

    A graphically interactive derivative of the EAGLE boundary code is presented. This code allows the user to interactively build and execute commands and immediately see the results. Strong ties with a batch oriented script language are maintained. A generalized treatment of grid definition parameters allows a more generic definition of the grid generation process and allows the generation of command scripts which can be applied to topologically similar configurations. The use of the graphical user interface is outlined and example applications are presented.

  10. A Modular Repository-based Infrastructure for Simulation Model Storage and Execution Support in the Context of In Silico Oncology and In Silico Medicine.

    PubMed

    Christodoulou, Nikolaos A; Tousert, Nikolaos E; Georgiadi, Eleni Ch; Argyri, Katerina D; Misichroni, Fay D; Stamatakos, Georgios S

    2016-01-01

    The plethora of available disease prediction models and the ongoing process of their application into clinical practice - following their clinical validation - have created new needs regarding their efficient handling and exploitation. Consolidation of software implementations, descriptive information, and supportive tools in a single place, offering persistent storage as well as proper management of execution results, is a priority, especially with respect to the needs of large healthcare providers. At the same time, modelers should be able to access these storage facilities under special rights, in order to upgrade and maintain their work. In addition, the end users should be provided with all the necessary interfaces for model execution and effortless result retrieval. We therefore propose a software infrastructure, based on a tool, model and data repository that handles the storage of models and pertinent execution-related data, along with functionalities for execution management, communication with third-party applications, user-friendly interfaces to access and use the infrastructure with minimal effort and basic security features.

  11. Different effects of executive and visuospatial working memory on visual consciousness.

    PubMed

    De Loof, Esther; Poppe, Louise; Cleeremans, Axel; Gevers, Wim; Van Opstal, Filip

    2015-11-01

    Consciousness and working memory are two widely studied cognitive phenomena. Although they have been closely tied on a theoretical and neural level, empirical work that investigates their relation is largely lacking. In this study, the relationship between visual consciousness and different working memory components is investigated by using a dual-task paradigm. More specifically, while participants were performing a visual detection task to measure their visual awareness threshold, they had to concurrently perform either an executive or visuospatial working memory task. We hypothesized that visual consciousness would be hindered depending on the type and the size of the load in working memory. Results showed that maintaining visuospatial content in working memory hinders visual awareness, irrespective of the amount of information maintained. By contrast, the detection threshold was progressively affected under increasing executive load. Interestingly, increasing executive load had a generic effect on detection speed, calling into question whether its obstructing effect is specific to the visual awareness threshold. Together, these results indicate that visual consciousness depends differently on executive and visuospatial working memory.

  12. A Modular Repository-based Infrastructure for Simulation Model Storage and Execution Support in the Context of In Silico Oncology and In Silico Medicine

    PubMed Central

    Christodoulou, Nikolaos A.; Tousert, Nikolaos E.; Georgiadi, Eleni Ch.; Argyri, Katerina D.; Misichroni, Fay D.; Stamatakos, Georgios S.

    2016-01-01

    The plethora of available disease prediction models and the ongoing process of their application into clinical practice – following their clinical validation – have created new needs regarding their efficient handling and exploitation. Consolidation of software implementations, descriptive information, and supportive tools in a single place, offering persistent storage as well as proper management of execution results, is a priority, especially with respect to the needs of large healthcare providers. At the same time, modelers should be able to access these storage facilities under special rights, in order to upgrade and maintain their work. In addition, the end users should be provided with all the necessary interfaces for model execution and effortless result retrieval. We therefore propose a software infrastructure, based on a tool, model and data repository that handles the storage of models and pertinent execution-related data, along with functionalities for execution management, communication with third-party applications, user-friendly interfaces to access and use the infrastructure with minimal effort and basic security features. PMID:27812280

  13. RBioCloud: A Light-Weight Framework for Bioconductor and R-based Jobs on the Cloud.

    PubMed

    Varghese, Blesson; Patel, Ishan; Barker, Adam

    2015-01-01

    Large-scale ad hoc analytics of genomic data is commonly performed using the R programming language, supported by over 700 software packages provided by Bioconductor. More recently, analytical jobs are benefitting from on-demand computing and storage, their scalability and their low maintenance cost, all of which are offered by the cloud. While biologists and bioinformaticists can take an analytical job and execute it on their personal workstations, it remains challenging to seamlessly execute the job on the cloud infrastructure without extensive knowledge of the cloud dashboard. This paper explores how analytical jobs can be executed on the cloud with minimal effort, and how both the resources and the data required by a job can be managed. An open-source light-weight framework for executing R-scripts using Bioconductor packages, referred to as 'RBioCloud', is designed and developed. RBioCloud offers a set of simple command-line tools for managing the cloud resources, the data and the execution of the job. Three biological test cases validate the feasibility of RBioCloud. The framework is available from http://www.rbiocloud.com.

  14. Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment

    PubMed Central

    Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel

    2016-01-01

    Distributed computing has developed tremendously since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., the Internet of Things, Cyber-Physical Systems, and Big Data Analytics. Hadoop has become a data convergence platform for sensor networks. As one of the core components, MapReduce facilitates allocating, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurately estimating the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data for each task were collected and analyzed in a detailed report. According to the results, the prediction accuracy of concurrent tasks’ execution time can be improved, in particular for some regular jobs. PMID:27589753
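
    The abstract does not give the exact formulation of the TPR method, so the following is only a hedged sketch of the general idea: fit separate linear models to the early and late phases of a task's progress-versus-time samples, then extrapolate the late-phase model to 100% progress to estimate the finishing time. The split point and the sample values are assumptions made for illustration.

      # Hedged sketch of two-phase regression for task finishing-time estimation.
      import numpy as np

      def predict_finish_time(progress: np.ndarray, elapsed: np.ndarray, split: float = 0.5) -> float:
          """Estimate total execution time from (progress, elapsed) samples of a running task."""
          late = progress >= split
          if late.sum() >= 2:                       # enough samples to fit the late phase
              slope, intercept = np.polyfit(progress[late], elapsed[late], 1)
          else:                                     # fall back to a single-phase fit
              slope, intercept = np.polyfit(progress, elapsed, 1)
          return slope * 1.0 + intercept            # predicted elapsed time at 100% progress

      if __name__ == "__main__":
          p = np.array([0.1, 0.2, 0.3, 0.5, 0.6, 0.7, 0.8])
          t = np.array([12.0, 22.0, 31.0, 55.0, 68.0, 81.0, 94.0])  # seconds
          print(f"estimated finish: {predict_finish_time(p, t):.1f} s")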

  15. Discrete memory impairments in largely pure chronic users of MDMA.

    PubMed

    Wunderli, Michael D; Vonmoos, Matthias; Fürst, Marina; Schädelin, Katrin; Kraemer, Thomas; Baumgartner, Markus R; Seifritz, Erich; Quednow, Boris B

    2017-10-01

    Chronic use of 3,4-methylenedioxymethamphetamine (MDMA, "ecstasy") has repeatedly been associated with deficits in working memory, declarative memory, and executive functions. However, previous findings regarding working memory and executive function remain inconclusive, as most studies did not adequately control for concomitant stimulant use, which is known to affect these functions. Therefore, we compared the cognitive performance of 26 stimulant-free and largely pure (primary) MDMA users, 25 stimulant-using polydrug MDMA users, and 56 MDMA/stimulant-naïve controls by applying a comprehensive neuropsychological test battery. Neuropsychological tests were grouped into four cognitive domains. Recent drug use was objectively quantified by 6-month hair analyses on 17 substances and metabolites. Considerably lower mean hair concentrations of stimulants (amphetamine, methamphetamine, methylphenidate, cocaine), opioids (morphine, methadone, codeine), and hallucinogens (ketamine, 2C-B) were detected in primary compared to polydrug users, while both user groups did not differ in their MDMA hair concentration. Cohen's d effect sizes for both comparisons, i.e., primary MDMA users vs. controls and polydrug MDMA users vs. controls, were highest for declarative memory (d_primary = 0.90, d_polydrug = 1.21), followed by working memory (d_primary = 0.52, d_polydrug = 0.96), executive functions (d_primary = 0.46, d_polydrug = 0.86), and attention (d_primary = 0.23, d_polydrug = 0.70). Thus, primary MDMA users showed strong and relatively discrete declarative memory impairments, whereas MDMA polydrug users displayed broad and unspecific cognitive impairments. Consequently, even largely pure chronic MDMA use is associated with decreased performance in declarative memory, while additional deficits in working memory and executive functions displayed by polydrug MDMA users are likely driven by stimulant co-use. Copyright © 2017 Elsevier B.V. and ECNP. All rights reserved.

  16. Elected medical staff leaders: who needs 'em?

    PubMed

    Thompson, R E

    1994-03-01

    Authority, influence, and power are not synonyms. In working with elected medical staff leaders, a physician executive who chooses to exert authority may soon find him- or herself relatively powerless. But one who chooses to downplay authority, to influence through persuasion, and to coach leaders to lead effectively soon generates support for his or her ideas. The need to coax, cajole, explain, persuade, and "seek input" frustrates many leaders in all kinds of organizations. It would be much easier just to order people about. It's so tempting to think: "Who needs 'em? I'm the 'chief physician.' I know what needs to be done. Let's weigh anchor, take her out, and do what it takes to sail those rough, uncharted seas." If you really enjoy sailing a large ship in rough seas without a crew, go right ahead. Or if you think it makes sense to run an organization with only an executive staff and no knowledgeable middle managers, by all means let clinician leaders know that, now that you're aboard, they're just window-dressing. If you can make this approach work, well and good. Your life will be much less complicated, each day will have far fewer frustrations, and progress toward established goals will be much faster. However, given the reality of traditionally thinking physicians, it would be best to keep an up-dated resume in the locked lower left-hand drawer of your desk.

  17. Constructing a Foundational Platform Driven by Japan's K Supercomputer for Next-Generation Drug Design.

    PubMed

    Brown, J B; Nakatsui, Masahiko; Okuno, Yasushi

    2014-12-01

    The cost of pharmaceutical R&D has risen enormously, both worldwide and in Japan. However, Japan faces a particularly difficult situation in that its population is aging rapidly, and the cost of pharmaceutical R&D affects not only the industry but the entire medical system as well. To attempt to reduce costs, the newly launched K supercomputer is available for big data drug discovery and structural simulation-based drug discovery. We have implemented both primary (direct) and secondary (infrastructure, data processing) methods for the two types of drug discovery, custom tailored to maximally use the 88 128 compute nodes/CPUs of K, and evaluated the implementations. We present two types of results. In the first, we executed the virtual screening of nearly 19 billion compound-protein interactions, and calculated the accuracy of predictions against publicly available experimental data. In the second investigation, we implemented a very computationally intensive binding free energy algorithm, and found that our computed binding free energies were considerably accurate when validated against another type of publicly available experimental data. The common feature of both result types is the scale at which computations were executed. The frameworks presented in this article provide perspectives and applications that, while tuned to the computing resources available in Japan, are equally applicable to any equivalent large-scale infrastructure provided elsewhere. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. CLIPS: A tool for corn disease diagnostic system and an aid to neural network for automated knowledge acquisition

    NASA Technical Reports Server (NTRS)

    Wu, Cathy; Taylor, Pam; Whitson, George; Smith, Cathy

    1990-01-01

    This paper describes the building of a corn disease diagnostic expert system using CLIPS, and the development of a neural expert system using the fact representation method of CLIPS for automated knowledge acquisition. The CLIPS corn expert system diagnoses 21 diseases from 52 symptoms and signs with certainty factors. CLIPS has several unique features. It allows the facts in rules to be broken down into object-attribute-value (OAV) triples, allows rule-grouping, and fires rules based on pattern-matching. These features, combined with the chained inference engine, result in a natural user query system and speedy execution. In order to develop a method for automated knowledge acquisition, an Artificial Neural Expert System (ANES) is developed by a direct mapping from the CLIPS system. The ANES corn expert system uses the same OAV triples as the CLIPS system for its facts. The LHS and RHS facts of the CLIPS rules are mapped into the input and output layers of the ANES, respectively; and the inference engine of the rules is embedded in the hidden layer. The fact representation by OAV triples gives a natural grouping of the rules. These features allow the ANES system to automate rule-generation, and make it efficient to execute and easy to expand for a large and complex domain.
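
    The OAV fact representation can be illustrated with a small Python sketch of forward-chaining rule matching: a rule fires when every object-attribute-value triple on its left-hand side has been asserted, and its right-hand side asserts a new fact. The facts, rule, and diagnosis below are hypothetical and are not taken from the 21-disease knowledge base described above.

      # Illustrative OAV-triple facts and a single forward-chaining rule.
      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Fact:
          obj: str
          attr: str
          value: str

      def matches(rule_lhs: list[Fact], facts: set[Fact]) -> bool:
          """A rule fires when every OAV triple on its left-hand side is asserted."""
          return all(f in facts for f in rule_lhs)

      facts = {
          Fact("leaf", "lesion-shape", "elliptical"),
          Fact("leaf", "lesion-color", "gray-green"),
      }
      rule_lhs = [Fact("leaf", "lesion-shape", "elliptical"),
                  Fact("leaf", "lesion-color", "gray-green")]

      if matches(rule_lhs, facts):
          facts.add(Fact("diagnosis", "disease", "gray-leaf-spot"))  # rule RHS asserts a new fact
      print(facts)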

  19. Competitive Debate as Competency-Based Learning: Civic Engagement and Next-Generation Assessment in the Era of the Common Core Learning Standards

    ERIC Educational Resources Information Center

    McIntosh, Jonathan; Milam, Myra

    2016-01-01

    As the adoption and execution of the Common Core State Standards (CCSS) have steadily increased, the debate community is presented with an opportunity to be more forward thinking and sustainable through the translation to curriculum planning and next-generation assessment as a movement towards Performance-Based Assessments. This paper focuses on…

  20. Computer Software Configuration Item-Specific Flight Software Image Transfer Script Generator

    NASA Technical Reports Server (NTRS)

    Bolen, Kenny; Greenlaw, Ronald

    2010-01-01

    A K-shell UNIX script enables the International Space Station (ISS) Flight Control Team (FCT) operators in NASA's Mission Control Center (MCC) in Houston to transfer an entire or partial computer software configuration item (CSCI) from a flight software compact disk (CD) to the onboard Portable Computer System (PCS). The tool is designed to read the content stored on a flight software CD and generate individual CSCI transfer scripts that are capable of transferring the flight software content in a given subdirectory on the CD to the scratch directory on the PCS. The flight control team can then transfer the flight software from the PCS scratch directory to the Electronically Erasable Programmable Read Only Memory (EEPROM) of an ISS Multiplexer/Demultiplexer (MDM) via the Indirect File Transfer capability. The individual CSCI scripts and the CSCI Specific Flight Software Image Transfer Script Generator (CFITSG), when executed a second time, will remove all components from their original execution. The tool will identify errors in the transfer process and create logs of the transferred software for the purposes of configuration management.
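
    The script-generation idea can be sketched as follows: walk the subdirectories of a flight software CD image and emit one transfer script per CSCI that copies that subdirectory's files to the PCS scratch directory. The paths, the scratch-directory location, and the generated script contents are hypothetical; the operational tool is a K-shell script with ISS-specific transfer and rollback logic.

      # Hedged sketch: generate one transfer script per CSCI subdirectory on a CD image.
      from pathlib import Path

      def generate_transfer_scripts(cd_root: str, scratch_dir: str = "/pcs/scratch") -> None:
          root = Path(cd_root)
          for csci_dir in sorted(p for p in root.iterdir() if p.is_dir()):
              script = root / f"transfer_{csci_dir.name}.sh"
              lines = ["#!/bin/sh", f"mkdir -p {scratch_dir}/{csci_dir.name}"]
              lines += [f"cp {f} {scratch_dir}/{csci_dir.name}/" for f in sorted(csci_dir.glob("*"))]
              script.write_text("\n".join(lines) + "\n")
              print(f"wrote {script}")

      if __name__ == "__main__":
          generate_transfer_scripts("/media/flight_sw_cd")   # hypothetical CD mount point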

  1. AIMES Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katz, Daniel S; Jha, Shantenu; Weissman, Jon

    2017-01-31

    This is the final technical report for the AIMES project. Many important advances in science and engineering are due to large-scale distributed computing. Notwithstanding this reliance, we are still learning how to design and deploy large-scale production Distributed Computing Infrastructures (DCI). This is evidenced by missing design principles for DCI, and an absence of generally acceptable and usable distributed computing abstractions. The AIMES project was conceived against this backdrop, following on the heels of a comprehensive survey of scientific distributed applications. AIMES laid the foundations to address the tripartite challenge of dynamic resource management, integrating information, and portable and interoperable distributed applications. Four abstractions were defined and implemented: skeleton, resource bundle, pilot, and execution strategy. The four abstractions were implemented into software modules and then aggregated into the AIMES middleware. This middleware successfully integrates information across the application layer (skeletons) and resource layer (Bundles), derives a suitable execution strategy for the given skeleton and enacts its execution by means of pilots on one or more resources, depending on the application requirements, and resource availabilities and capabilities.
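
    The four abstractions can be pictured as plain data structures plus a derivation step, as in the hedged sketch below; the fields and the greedy placement rule are illustrative assumptions, not the AIMES middleware's actual API.

      # Hedged sketch of skeleton / resource bundle / pilot / execution strategy.
      from dataclasses import dataclass, field

      @dataclass
      class Skeleton:            # application layer: what the workload looks like
          tasks: int
          core_hours_per_task: float

      @dataclass
      class ResourceBundle:      # resource layer: what is available to run on
          name: str
          free_cores: int

      @dataclass
      class Pilot:               # placeholder job that later executes application tasks
          bundle: ResourceBundle
          cores: int

      @dataclass
      class ExecutionStrategy:   # derived plan binding the skeleton to pilots
          pilots: list[Pilot] = field(default_factory=list)

      def derive_strategy(skel: Skeleton, bundles: list[ResourceBundle]) -> ExecutionStrategy:
          """Greedily place pilots on the resources with the most free cores."""
          strategy = ExecutionStrategy()
          for b in sorted(bundles, key=lambda b: b.free_cores, reverse=True):
              cores = min(b.free_cores, skel.tasks)
              if cores > 0:
                  strategy.pilots.append(Pilot(b, cores))
          return strategy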

  2. AIMES Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weissman, Jon; Katz, Dan; Jha, Shantenu

    2017-01-31

    This is the final technical report for the AIMES project. Many important advances in science and engineering are due to large scale distributed computing. Notwithstanding this reliance, we are still learning how to design and deploy large-scale production Distributed Computing Infrastructures (DCI). This is evidenced by missing design principles for DCI, and an absence of generally acceptable and usable distributed computing abstractions. The AIMES project was conceived against this backdrop, following on the heels of a comprehensive survey of scientific distributed applications. AIMES laid the foundations to address the tripartite challenge of dynamic resource management, integrating information, and portable and interoperable distributed applications. Four abstractions were defined and implemented: skeleton, resource bundle, pilot, and execution strategy. The four abstractions were implemented into software modules and then aggregated into the AIMES middleware. This middleware successfully integrates information across the application layer (skeletons) and resource layer (Bundles), derives a suitable execution strategy for the given skeleton and enacts its execution by means of pilots on one or more resources, depending on the application requirements, and resource availabilities and capabilities.

  3. Maternal IL-6 during pregnancy can be estimated from newborn brain connectivity and predicts future working memory in offspring.

    PubMed

    Rudolph, Marc D; Graham, Alice M; Feczko, Eric; Miranda-Dominguez, Oscar; Rasmussen, Jerod M; Nardos, Rahel; Entringer, Sonja; Wadhwa, Pathik D; Buss, Claudia; Fair, Damien A

    2018-05-01

    Several lines of evidence support the link between maternal inflammation during pregnancy and increased likelihood of neurodevelopmental and psychiatric disorders in offspring. This longitudinal study seeks to advance understanding regarding implications of systemic maternal inflammation during pregnancy, indexed by plasma interleukin-6 (IL-6) concentrations, for large-scale brain system development and emerging executive function skills in offspring. We assessed maternal IL-6 during pregnancy, functional magnetic resonance imaging acquired in neonates, and working memory (an important component of executive function) at 2 years of age. Functional connectivity within and between multiple neonatal brain networks can be modeled to estimate maternal IL-6 concentrations during pregnancy. Brain regions heavily weighted in these models overlap substantially with those supporting working memory in a large meta-analysis. Maternal IL-6 also directly accounts for a portion of the variance of working memory at 2 years of age. Findings highlight the association of maternal inflammation during pregnancy with the developing functional architecture of the brain and emerging executive function.

  4. Control networks and hubs.

    PubMed

    Gratton, Caterina; Sun, Haoxin; Petersen, Steven E

    2018-03-01

    Executive control functions are associated with frontal, parietal, cingulate, and insular brain regions that interact through distributed large-scale networks. Here, we discuss how fMRI functional connectivity can shed light on the organization of control networks and how they interact with other parts of the brain. In the first section of our review, we present convergent evidence from fMRI functional connectivity, activation, and lesion studies that there are multiple dissociable control networks in the brain with distinct functional properties. In the second section, we discuss how graph theoretical concepts can help illuminate the mechanisms by which control networks interact with other brain regions to carry out goal-directed functions, focusing on the role of specialized hub regions for mediating cross-network interactions. Again, we use a combination of functional connectivity, lesion, and task activation studies to bolster this claim. We conclude that a large-scale network perspective provides important neurobiological constraints on the neural underpinnings of executive control, which will guide future basic and translational research into executive function and its disruption in disease. © 2017 Society for Psychophysiological Research.

  5. 75 FR 79328 - Technical Corrections to the Standards Applicable to Generators of Hazardous Waste; Alternative...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-20

    ..., 541710 Research and Development in the Physical, Engineering, and Life Sciences. 54172, 541720 Research and Development in the Social Sciences and Humanities. III. Statutory and Executive Order Reviews For...

  6. Tdp studies and tests for C. A. Energia Electrica de Venezuela (enelven) at planta ramon laguna, units RL-17 and RL-10. Volume 1. Executive summary, RL-17 test report, and gas conversion proposals. Export trade information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1991-03-28

    The study, conducted by Babcock and Wilcox, was funded by the U.S. Trade and Development agency on behalf of Enelven. In order to maximize generated power output and minimize operating costs at Planta Ramon Laguna, tests were done to evaluate the condition of equipment at the plant. In order to identify any damage and determine the operating output of each unit, assessments were done of the furnaces, boilers, generators and boiler feed pumps being used in the plant. The report presents the results of these tests. This is the first of three volumes and it is divided into the following sections: (1) Executive Summary; (2) Hydrogen Damage Assessment; (3) RL-17 Gas Conversion Proposal; (4) RL-10 and RL-11 Gas Conversion Proposals.

  7. Life begins at 40

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neal, L.

    1995-12-31

    A new rule makes nuclear power plant license renewal a viable option. A small group of corporate executives will soon face one of the toughest decisions of their careers-a decision that will affect 17 million American homes. Forty-five commercial nuclear power plants will reach the end of their operating licenses early in the next century. They represent billions of dollars in capital investment, and the companies that own them must decide whether to keep them on the grid or scrap them. But before a company decides whether to pull the plug on a big generating plant, it will have to do some homework. Company executives will have to roll up their sleeves and dig deep into projections of electricity demand, assessments of generating options and cold, hard economics. At the same time, they must keep wary eyes on the political landscape, scanning ahead for roadblocks and quicksand.

  8. Insertion of operation-and-indicate instructions for optimized SIMD code

    DOEpatents

    Eichenberger, Alexander E; Gara, Alan; Gschwind, Michael K

    2013-06-04

    Mechanisms are provided for inserting indicated instructions for tracking and indicating exceptions in the execution of vectorized code. A portion of first code is received for compilation. The portion of first code is analyzed to identify non-speculative instructions performing designated non-speculative operations in the first code that are candidates for replacement by replacement operation-and-indicate instructions that perform the designated non-speculative operations and further perform an indication operation for indicating any exception conditions corresponding to special exception values present in vector register inputs to the replacement operation-and-indicate instructions. The replacement is performed and second code is generated based on the replacement of the at least one non-speculative instruction. The data processing system executing the compiled code is configured to store special exception values in vector output registers, in response to a speculative instruction generating an exception condition, without initiating exception handling.
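
    A toy sketch of the replacement pass described above: scan a list of intermediate-representation instructions and swap designated non-speculative vector operations for operation-and-indicate variants. The opcode names and instruction encoding are hypothetical; a real implementation operates on compiler IR rather than Python dictionaries.

      # Toy compiler pass: rewrite eligible non-speculative vector ops to indicating variants.
      INDICATING_VARIANTS = {"vadd": "vadd_indicate", "vmul": "vmul_indicate"}

      def insert_indicate_ops(instructions: list[dict]) -> list[dict]:
          out = []
          for ins in instructions:
              if ins["op"] in INDICATING_VARIANTS and not ins.get("speculative", False):
                  ins = {**ins, "op": INDICATING_VARIANTS[ins["op"]]}  # replacement instruction
              out.append(ins)
          return out

      code = [{"op": "vload", "dst": "v1"},
              {"op": "vadd", "dst": "v2", "src": ["v1", "v1"]},
              {"op": "vmul", "dst": "v3", "src": ["v2", "v1"], "speculative": True}]
      print(insert_indicate_ops(code))  # only the non-speculative vadd is rewritten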

  9. Performing aggressive code optimization with an ability to rollback changes made by the aggressive optimizations

    DOEpatents

    Gschwind, Michael K

    2013-07-23

    Mechanisms for aggressively optimizing computer code are provided. With these mechanisms, a compiler determines an optimization to apply to a portion of source code and determines if the optimization as applied to the portion of source code will result in unsafe optimized code that introduces a new source of exceptions being generated by the optimized code. In response to a determination that the optimization is an unsafe optimization, the compiler generates an aggressively compiled code version, in which the unsafe optimization is applied, and a conservatively compiled code version in which the unsafe optimization is not applied. The compiler stores both versions and provides them for execution. Mechanisms are provided for switching between these versions during execution in the event of a failure of the aggressively compiled code version. Moreover, predictive mechanisms are provided for predicting whether such a failure is likely.
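
    The execution-time behavior can be illustrated with a minimal sketch of the two-version pattern: run the aggressively optimized variant and roll back to the conservative variant if it raises an exception introduced by the unsafe optimization. The function pair and the exception type are hypothetical stand-ins for compiler-generated code versions.

      # Minimal sketch of aggressive/conservative dual-version execution with rollback.
      def run_with_rollback(aggressive, conservative, *args):
          """Prefer the fast variant; fall back to the safe one on failure."""
          try:
              return aggressive(*args)
          except ArithmeticError:          # e.g. an exception the unsafe code can newly raise
              return conservative(*args)

      def fast_mean(xs):                   # "unsafe": divides without guarding against empty input
          return sum(xs) / len(xs)

      def safe_mean(xs):                   # conservative version handles the edge case
          return sum(xs) / len(xs) if xs else 0.0

      print(run_with_rollback(fast_mean, safe_mean, []))   # falls back, prints 0.0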

  10. Resolving task rule incongruence during task switching by competitor rule suppression.

    PubMed

    Meiran, Nachshon; Hsieh, Shulan; Dimov, Eduard

    2010-07-01

    Task switching requires maintaining readiness to execute any task of a given set of tasks. However, when tasks switch, the readiness to execute the now-irrelevant task generates interference, as seen in the task rule incongruence effect. Overcoming such interference requires fine-tuned inhibition that impairs task readiness only minimally. In an experiment involving 2 object classification tasks and 2 location classification tasks, the authors show that irrelevant task rules that generate response conflicts are inhibited. This competitor rule suppression (CRS) is seen in response slowing in subsequent trials, when the competing rules become relevant. CRS is shown to operate on specific rules without affecting similar rules. CRS and backward inhibition, which is another inhibitory phenomenon, produced additive effects on reaction time, suggesting their mutual independence. Implications for current formal theories of task switching as well as for conflict monitoring theories are discussed. (c) 2010 APA, all rights reserved

  11. Automatic Earth observation data service based on reusable geo-processing workflow

    NASA Astrophysics Data System (ADS)

    Chen, Nengcheng; Di, Liping; Gong, Jianya; Yu, Genong; Min, Min

    2008-12-01

    A common Sensor Web data service framework for Geo-Processing Workflow (GPW) is presented as part of the NASA Sensor Web project. This framework consists of a data service node, a data processing node, a data presentation node, a Catalogue Service node and BPEL engine. An abstract model designer is used to design the top level GPW model, model instantiation service is used to generate the concrete BPEL, and the BPEL execution engine is adopted. The framework is used to generate several kinds of data: raw data from live sensors, coverage or feature data, geospatial products, or sensor maps. A scenario for an EO-1 Sensor Web data service for fire classification is used to test the feasibility of the proposed framework. The execution time and influences of the service framework are evaluated. The experiments show that this framework can improve the quality of services for sensor data retrieval and processing.
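
    A minimal sketch of the geo-processing workflow idea follows: chain retrieval, processing, and presentation steps over a shared context, with the plain orchestration loop standing in for the BPEL engine. The service endpoints, the fire-classification step, and the placeholder URL are hypothetical.

      # Sketch: a Geo-Processing Workflow as a chain of steps over a shared context.
      from typing import Callable

      Step = Callable[[dict], dict]

      def retrieve(ctx: dict) -> dict:
          ctx["raw"] = f"coverage from {ctx['sensor']}"          # e.g. an EO-1 scene
          return ctx

      def classify_fire(ctx: dict) -> dict:
          ctx["product"] = f"fire map derived from {ctx['raw']}"
          return ctx

      def present(ctx: dict) -> dict:
          ctx["map_url"] = "http://example.org/wms?layer=fire"   # placeholder presentation node
          return ctx

      def run_workflow(steps: list[Step], ctx: dict) -> dict:
          for step in steps:        # the BPEL engine plays this orchestration role in the framework
              ctx = step(ctx)
          return ctx

      print(run_workflow([retrieve, classify_fire, present], {"sensor": "EO-1"}))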

  12. Parallel file system with metadata distributed across partitioned key-value store

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron

    2017-09-19

    Improved techniques are provided for storing metadata associated with a plurality of sub-files associated with a single shared file in a parallel file system. The shared file is generated by a plurality of applications executing on a plurality of compute nodes. A compute node implements a Parallel Log Structured File System (PLFS) library to store at least one portion of the shared file generated by an application executing on the compute node and metadata for the at least one portion of the shared file on one or more object storage servers. The compute node is also configured to implement a partitioned data store for storing a partition of the metadata for the shared file, wherein the partitioned data store communicates with partitioned data stores on other compute nodes using a message passing interface. The partitioned data store can be implemented, for example, using Multidimensional Data Hashing Indexing Middleware (MDHIM).
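
    A hedged sketch of the partitioning idea: hash each sub-file metadata key to choose a partition, so that metadata for a single shared file is spread across compute nodes. The key layout and record fields are illustrative and are not the PLFS/MDHIM on-disk format; a real deployment would exchange entries over a message passing interface rather than keep them in local dictionaries.

      # Sketch: partition shared-file metadata records by hashing the metadata key.
      from hashlib import sha1

      class PartitionedMetadataStore:
          def __init__(self, num_partitions: int):
              self.partitions = [dict() for _ in range(num_partitions)]

          def _partition(self, key: str) -> dict:
              idx = int(sha1(key.encode()).hexdigest(), 16) % len(self.partitions)
              return self.partitions[idx]

          def put(self, shared_file: str, writer_rank: int, offset: int, length: int) -> None:
              key = f"{shared_file}:{writer_rank}:{offset}"
              self._partition(key)[key] = {"offset": offset, "length": length}

          def get(self, shared_file: str, writer_rank: int, offset: int):
              key = f"{shared_file}:{writer_rank}:{offset}"
              return self._partition(key).get(key)

      store = PartitionedMetadataStore(num_partitions=4)
      store.put("checkpoint.out", writer_rank=17, offset=4096, length=1024)
      print(store.get("checkpoint.out", 17, 4096))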

  13. Functional Magnetic Resonance Imaging of Working Memory and Executive Dysfunction in Systemic Lupus Erythematosus and Antiphospholipid Antibody-Positive Patients.

    PubMed

    Kozora, E; Uluğ, A M; Erkan, D; Vo, A; Filley, C M; Ramon, G; Burleson, A; Zimmerman, R; Lockshin, M D

    2016-11-01

    Standardized cognitive tests and functional magnetic resonance imaging (fMRI) studies of systemic lupus erythematosus (SLE) patients demonstrate deficits in working memory and executive function. These neurobehavioral abnormalities are not well studied in antiphospholipid syndrome, which may occur independently of or together with SLE. This study compares an fMRI paradigm involving motor skills, working memory, and executive function in SLE patients without antiphospholipid antibody (aPL) (the SLE group), aPL-positive non-SLE patients (the aPL-positive group), and controls. Brain MRI, fMRI, and standardized cognitive assessment results were obtained from 20 SLE, 20 aPL-positive, and 10 healthy female subjects with no history of neuropsychiatric abnormality. Analysis of fMRI data showed no differences in performance across groups on bilateral motor tasks. When analysis of variance was used, significant group differences were found in 2 executive function tasks (word generation and word rhyming) and in a working memory task (N-Back). Patients positive for aPL demonstrated higher activation in bilateral frontal, temporal, and parietal cortices compared to controls during working memory and executive function tasks. SLE patients also demonstrated bilateral frontal and temporal activation during working memory and executive function tasks. Compared to controls, both aPL-positive and SLE patients had elevated cortical activation, primarily in the frontal lobes, during tasks involving working memory and executive function. These findings are consistent with cortical overactivation as a compensatory mechanism for early white matter neuropathology in these disorders. © 2016, American College of Rheumatology.

  14. Tell me twice: A multi-study analysis of the functional connectivity between the cerebrum and cerebellum after repeated trait information.

    PubMed

    Van Overwalle, Frank; Heleven, Elien; Ma, Ning; Mariën, Peter

    2017-01-01

    This multi-study analysis (6 fMRI studies; 142 participants) explores the functional activation and connectivity of the cerebellum with the cerebrum during repeated behavioral information uptake informing about personality traits of different persons. The results suggest that trait repetition recruits activity in areas belonging to the mentalizing and executive control networks in the cerebrum, and the executive control areas in the cerebellum. Cerebral activation was observed in the executive control network including the posterior medial frontal cortex (pmFC), the bilateral prefrontal cortex (PFC) and bilateral inferior parietal cortex (IPC), in the mentalizing network including the bilateral middle temporal cortex (MTC) extending to the right superior temporal cortex (STC), as well as in the visual network including the left cuneus (Cun) and the left inferior occipital cortex. Moreover, cerebellar activation was found bilaterally in lobules VI and VII belonging to the executive control network. Importantly, significant patterns of functional connectivity were found linking these cerebellar executive areas with cerebral executive areas in the medial pmFC, the left PFC and the left IPC, and mentalizing areas in the left MTC. In addition, connectivity was found between the cerebral areas in the left hemisphere involved in the executive and mentalizing networks, as well as with their homolog areas in the right hemisphere. The discussion centers on the role of these cerebello-cerebral connections in matching internal predictions generated by the cerebellum with external information from the cerebrum, presumably involving the sequencing of behaviors. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. An evaluation of very large airplanes and alternative fuels: executive summary. Interim report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mikolowsky, W.T.

    1976-12-01

    Candidate applications for very large airplanes include strategic airlifters, tankers, missile launchers, tactical battle platforms, maritime air cruisers, and C3 platforms. This report summarizes AD-A040 532, which explored the military utility of very large airplanes (over 1 million pounds gross weight) and examined several alternative fuels that could be used by such airplanes.

  16. Autonomous mission management for UAVs using soar intelligent agents

    NASA Astrophysics Data System (ADS)

    Gunetti, Paolo; Thompson, Haydn; Dodd, Tony

    2013-05-01

    State-of-the-art unmanned aerial vehicles (UAVs) are typically able to autonomously execute a pre-planned mission. However, UAVs usually fly in a very dynamic environment which requires dynamic changes to the flight plan; this mission management activity is usually tasked to human supervision. Within this article, a software system that autonomously accomplishes the mission management task for a UAV will be proposed. The system is based on a set of theoretical concepts which allow the description of a flight plan and implemented using a combination of Soar intelligent agents and traditional control techniques. The system is capable of automatically generating and then executing an entire flight plan after being assigned a set of objectives. This article thoroughly describes all system components and then presents the results of tests that were executed using a realistic simulation environment.

  17. Functional MRI evidence for the decline of word retrieval and generation during normal aging.

    PubMed

    Baciu, M; Boudiaf, N; Cousin, E; Perrone-Bertolotti, M; Pichat, C; Fournet, N; Chainay, H; Lamalle, L; Krainik, A

    2016-02-01

    This fMRI study aimed to explore the effect of normal aging on word retrieval and generation. The question addressed is whether lexical production decline is determined by a direct mechanism, which concerns the language operations or is rather indirectly induced by a decline of executive functions. Indeed, the main hypothesis was that normal aging does not induce loss of lexical knowledge, but there is only a general slowdown in retrieval mechanisms involved in lexical processing, due to possible decline of the executive functions. We used three tasks (verbal fluency, object naming, and semantic categorization). Two groups of participants were tested (Young, Y and Aged, A), without cognitive and psychiatric impairment and showing similar levels of vocabulary. Neuropsychological testing revealed that older participants had lower executive function scores, longer processing speeds, and tended to have lower verbal fluency scores. Additionally, older participants showed higher scores for verbal automatisms and overlearned information. In terms of behavioral data, older participants performed as accurate as younger adults, but they were significantly slower for the semantic categorization and were less fluent for verbal fluency task. Functional MRI analyses suggested that older adults did not simply activate fewer brain regions involved in word production, but they actually showed an atypical pattern of activation. Significant correlations between the BOLD (Blood Oxygen Level Dependent) signal of aging-related (A > Y) regions and cognitive scores suggested that this atypical pattern of the activation may reveal several compensatory mechanisms (a) to overcome the slowdown in retrieval, due to the decline of executive functions and processing speed and (b) to inhibit verbal automatic processes. The BOLD signal measured in some other aging-dependent regions did not correlate with the behavioral and neuropsychological scores, and the overactivation of these uncorrelated regions would simply reveal dedifferentiation that occurs with aging. Altogether, our results suggest that normal aging is associated with a more difficult access to lexico-semantic operations and representations by a slowdown in executive functions, without any conceptual loss.

  18. Impairment of cognitive functioning during Sunitinib or Sorafenib treatment in cancer patients: a cross sectional study

    PubMed Central

    2014-01-01

    Background Impairment of cognitive functioning has been reported in several studies in patients treated with chemotherapy. So far, no studies have been published on the effects of the vascular endothelial growth factor receptor (VEGFR) inhibitors on cognitive functioning. We investigated the objective and subjective cognitive function of patients during treatment with VEGFR tyrosine kinase inhibitors (VEGFR TKI). Methods Three groups of participants, matched on age, sex and education, were enrolled; 1. metastatic renal cell cancer (mRCC) or GIST patients treated with sunitinib or sorafenib (VEGFR TKI patients n = 30); 2. patients with mRCC not receiving systemic treatment (patient controls n = 20); 3. healthy controls (n = 30). Sixteen neuropsychological tests examining the main cognitive domains (intelligence, memory, attention and concentration, executive functions and abstract reasoning) were administered by a neuropsychologist. Four questionnaires were used to assess subjective cognitive complaints, mood, fatigue and psychological wellbeing. Results No significant differences in mean age, sex distribution, education level or IQ were found between the three groups. Both patient groups performed significantly worse on the cognitive domains Learning & Memory and Executive Functions (Response Generation and Problem Solving) compared to healthy controls. However, only the VEGFR TKI patients showed impairments on the Executive subdomain Response Generation. Effect sizes of cognitive dysfunction in patients using VEGFR TKI were larger on the domains Learning & Memory and Executive Functions, compared to patient controls. Both patient groups performed the same as the healthy controls on the Attention & Concentration domain. Longer duration of treatment on VEGFR TKI was associated with a worse score on Working Memory tasks. Conclusions Our data suggest that treatment with VEGFR TKI has a negative impact on cognitive functioning, specifically on Learning & Memory, and Executive Functioning. We propose that patients who are treated with VEGFR TKI are monitored and informed for possible signs or symptoms associated with cognitive impairment. Trial registration ClinicalTrials.gov Identifier: NCT01246843. PMID:24661373

  19. Executive Functioning Heterogeneity in Pediatric ADHD.

    PubMed

    Kofler, Michael J; Irwin, Lauren N; Soto, Elia F; Groves, Nicole B; Harmon, Sherelle L; Sarver, Dustin E

    2018-04-28

    Neurocognitive heterogeneity is increasingly recognized as a valid phenomenon in ADHD, with most estimates suggesting that executive dysfunction is present in only about 33%-50% of these children. However, recent critiques question the veracity of these estimates because our understanding of executive functioning in ADHD is based, in large part, on data from single tasks developed to detect gross neurological impairment rather than the specific executive processes hypothesized to underlie the ADHD phenotype. The current study is the first to comprehensively assess heterogeneity in all three primary executive functions in ADHD using a criterion battery that includes multiple tests per construct (working memory, inhibitory control, set shifting). Children ages 8-13 (M = 10.37, SD = 1.39) with and without ADHD (N = 136; 64 girls; 62% Caucasian/Non-Hispanic) completed a counterbalanced series of executive function tests. Accounting for task unreliability, results indicated significantly improved sensitivity and specificity relative to prior estimates, with 89% of children with ADHD demonstrating objectively-defined impairment on at least one executive function (62% impaired working memory, 27% impaired inhibitory control, 38% impaired set shifting; 54% impaired on one executive function, 35% impaired on two or all three executive functions). Children with working memory deficits showed higher parent- and teacher-reported ADHD inattentive and hyperactive/impulsive symptoms (BF10 = 5.23 × 10^4), and were slightly younger (BF10 = 11.35) than children without working memory deficits. Children with vs. without set shifting or inhibitory control deficits did not differ on ADHD symptoms, age, gender, IQ, SES, or medication status. Taken together, these findings confirm that ADHD is characterized by neurocognitive heterogeneity, while suggesting that contemporary, cognitively-informed criteria may provide improved precision for identifying a smaller number of neuropsychologically-impaired subtypes than previously described.

  20. Quality control system preparation for photogrammetric and laser scanning missions of the Spanish national plan of aerial orthophotography (PNOA). (Polish Title: Opracowanie systemu kontroli jakości realizacji nalotów fotogrametrycznych i skaningowych dla hiszpańskiego narodowego planu ortofotomapy lotniczej (PNOA))

    NASA Astrophysics Data System (ADS)

    Rzonca, A.

    2013-12-01

    The paper presents the state of the art of quality control of photogrammetric and laser scanning data captured by airborne sensors. The described subject is very important for photogrammetric and LiDAR project execution, because the data quality a priori determines the final product quality. On the other hand, a precise and effective quality control process allows the missions to be executed without a wide margin of safety, especially in the case of mountain-area projects. As an introduction, the author presents the theoretical background of quality control, based on his own experience, instructions and technical documentation. He describes several variants of organizational solutions. Basically, there are two main approaches: quality control of the captured data, and control of the discrepancies between the flight plan and the results of its execution. Both of them can use control tests and analysis of the data. The test is an automatic algorithm that checks the data and generates a control report. The analysis is a less complicated process, based on a manual check of documentation, data and metadata. An example of a quality control system for a large-area project is presented. The project is realized periodically for the territory of all of Spain and is named the National Plan of Aerial Orthophotography (Plan Nacional de Ortofotografía Aérea, PNOA). The system of internal control guarantees results soon after the flight and informs the flight team of the company. It allows all errors to be corrected shortly after the flight, and it can stop the transfer of data to another team or company for further data processing. The described system of data quality control contains geometric and radiometric control of photogrammetric data and geometric control of LiDAR data. It checks all the specified parameters and generates reports, which are very helpful in the case of errors or low-quality data. The paper draws on the author's experience in the field of data quality control and presents conclusions and suggestions on organizational and technical aspects, together with a short definition of the necessary control software.

  1. Towards Scalable Deep Learning via I/O Analysis and Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pumma, Sarunya; Si, Min; Feng, Wu-Chun

    Deep learning systems have been growing in prominence as a way to automatically characterize objects, trends, and anomalies. Given the importance of deep learning systems, researchers have been investigating techniques to optimize such systems. An area of particular interest has been using large supercomputing systems to quickly generate effective deep learning networks: a phase often referred to as “training” of the deep learning neural network. As we scale existing deep learning frameworks—such as Caffe—on these large supercomputing systems, we notice that the parallelism can help improve the computation tremendously, leaving data I/O as the major bottleneck limiting the overall system scalability. In this paper, we first present a detailed analysis of the performance bottlenecks of Caffe on large supercomputing systems. Our analysis shows that the I/O subsystem of Caffe—LMDB—relies on memory-mapped I/O to access its database, which can be highly inefficient on large-scale systems because of its interaction with the process scheduling system and the network-based parallel filesystem. Based on this analysis, we then present LMDBIO, our optimized I/O plugin for Caffe that takes into account the data access pattern of Caffe in order to vastly improve I/O performance. Our experimental results show that LMDBIO can improve the overall execution time of Caffe by nearly 20-fold in some cases.
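
    To make the memory-mapping issue concrete, the sketch below contrasts per-record access through a memory map with explicit batched sequential reads over a toy fixed-size record file. It is only an illustration of the access-pattern idea discussed above, not the LMDBIO plugin itself; the file name, record size, and batch size are invented, and the batched reader assumes the requested records are contiguous.

```python
# Illustrative sketch only: contrasts per-record access through a memory map
# with explicit batched sequential reads. This is NOT the LMDBIO plugin; the
# record layout, file name, and batch size are hypothetical assumptions.
import mmap
import os
import struct

RECORD_SIZE = 4096           # assumed fixed-size records
DB_FILE = "toy_records.bin"  # hypothetical file standing in for a database


def make_toy_db(num_records):
    """Write fixed-size dummy records so the two readers below have data."""
    with open(DB_FILE, "wb") as f:
        for i in range(num_records):
            f.write(struct.pack("<I", i) + b"\0" * (RECORD_SIZE - 4))


def read_mmap_per_record(indices):
    """Touch one record at a time through a memory map (page-fault driven I/O)."""
    with open(DB_FILE, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            for i in indices:
                yield mm[i * RECORD_SIZE:(i + 1) * RECORD_SIZE]


def read_batched_sequential(indices, batch=256):
    """Issue large explicit reads; assumes indices in each batch are contiguous."""
    with open(DB_FILE, "rb") as f:
        for start in range(0, len(indices), batch):
            chunk = indices[start:start + batch]
            f.seek(chunk[0] * RECORD_SIZE)
            blob = f.read(len(chunk) * RECORD_SIZE)
            for k in range(len(chunk)):
                yield blob[k * RECORD_SIZE:(k + 1) * RECORD_SIZE]


if __name__ == "__main__":
    make_toy_db(1024)
    idx = list(range(1024))
    assert list(read_mmap_per_record(idx)) == list(read_batched_sequential(idx))
    os.remove(DB_FILE)
```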

  2. The Role of Ontologies in Schema-based Program Synthesis

    NASA Technical Reports Server (NTRS)

    Bures, Tomas; Denney, Ewen; Fischer, Bernd; Nistor, Eugen C.

    2004-01-01

    Program synthesis is the process of automatically deriving executable code from (non-executable) high-level specifications. It is more flexible and powerful than conventional code generation techniques that simply translate algorithmic specifications into lower-level code or only create code skeletons from structural specifications (such as UML class diagrams). Key to building a successful synthesis system is specializing to an appropriate application domain. The AUTOBAYES and AUTOFILTER systems, under development at NASA Ames, operate in the two domains of data analysis and state estimation, respectively. The central concept of both systems is the schema, a representation of reusable computational knowledge. This can take various forms, including high-level algorithm templates, code optimizations, datatype refinements, or architectural information. A schema also contains applicability conditions that are used to determine when it can be applied safely. These conditions can refer to the initial specification, to intermediate results, or to elements of the partially-instantiated code. Schema-based synthesis uses AI technology to recursively apply schemas to gradually refine a specification into executable code. This process proceeds in two main phases. A front-end gradually transforms the problem specification into a program represented in an abstract intermediate code. A backend then compiles this further down into a concrete target programming language of choice. A core engine applies schemas on the initial problem specification, then uses the output of those schemas as the input for other schemas, until the full implementation is generated. Since there might be different schemas that implement different solutions to the same problem, this process can generate an entire solution tree. AUTOBAYES and AUTOFILTER have reached the level of maturity where they enable users to solve interesting application problems, e.g., the analysis of Hubble Space Telescope images. They are large (in total around 100kLoC Prolog), knowledge-intensive systems that employ complex symbolic reasoning to generate a wide range of non-trivial programs for complex application domains. Their schemas can have complex interactions, which make it hard to change them in isolation or even understand what an existing schema actually does. Adding more capabilities by increasing the number of schemas will only worsen this situation, ultimately leading to the entropy death of the synthesis system. The root cause of this problem is that the domain knowledge is scattered throughout the entire system and only represented implicitly in the schema implementations. In our current work, we are addressing this problem by making explicit the knowledge from different parts of the synthesis system. Here, we discuss how Gruber's definition of an ontology as an explicit specification of a conceptualization matches our efforts in identifying and explicating the domain-specific concepts. We outline the roles ontologies play in schema-based synthesis and argue that they address different audiences and serve different purposes. Their first role is descriptive: they serve as explicit documentation, and help to understand the internal structure of the system. Their second role is prescriptive: they provide the formal basis against which the other parts of the system (e.g., schemas) can be checked.
Their final role is referential: ontologies also provide semantically meaningful "hooks" which allow schemas and tools to access the internal state of the program derivation process (e.g., fragments of the generated code) in domain-specific rather than language-specific terms, and thus to modify it in a controlled fashion. For discussion purposes we use AUTOLINEAR, a small synthesis system we are currently experimenting with, which can generate code for solving a system of linear equations, Az = b.
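
    As a concrete (if drastically simplified) picture of schema-based refinement, the Python toy below pairs each schema with an applicability condition and a refinement step and applies schemas recursively until only code fragments remain. It is not the AUTOBAYES/AUTOFILTER machinery, which is written in Prolog; every specification form and schema in the sketch is invented.

```python
# Toy sketch of schema-based refinement, not the AUTOBAYES/AUTOFILTER machinery
# (which is written in Prolog). Each "schema" pairs an applicability condition
# with a refinement step; synthesis applies schemas recursively until only
# concrete code fragments remain. All specification forms are invented.
from dataclasses import dataclass
from typing import Callable, List, Union

Spec = Union[str, dict]   # a str is concrete code; a dict is an open sub-spec
Code = List[str]


@dataclass
class Schema:
    name: str
    applies: Callable[[dict], bool]       # applicability condition
    refine: Callable[[dict], List[Spec]]  # rewrite a spec into sub-specs/code


SCHEMAS = [
    Schema("solve-linear-system",
           lambda s: s.get("task") == "solve" and s.get("form") == "A*x=b",
           lambda s: [f"x = gauss_solve({s['A']}, {s['b']})"]),
    Schema("mean-estimate",
           lambda s: s.get("task") == "estimate" and s.get("model") == "gaussian-mean",
           lambda s: [{"task": "sum", "data": s["data"]},
                      f"mu = total / len({s['data']})"]),
    Schema("sum-loop",
           lambda s: s.get("task") == "sum",
           lambda s: [f"total = sum({s['data']})"]),
]


def synthesize(spec: Spec) -> Code:
    """Recursively apply the first applicable schema to every open sub-spec."""
    if isinstance(spec, str):
        return [spec]                      # already concrete code
    for schema in SCHEMAS:
        if schema.applies(spec):
            code: Code = []
            for part in schema.refine(spec):
                code.extend(synthesize(part))
            return code
    raise ValueError(f"no schema applies to {spec!r}")


if __name__ == "__main__":
    print("\n".join(synthesize(
        {"task": "estimate", "model": "gaussian-mean", "data": "samples"})))
```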

  3. Measuring the construct of executive control in schizophrenia: defining and validating translational animal paradigms for discovery research.

    PubMed

    Gilmour, Gary; Arguello, Alexander; Bari, Andrea; Brown, Verity J; Carter, Cameron; Floresco, Stan B; Jentsch, David J; Tait, David S; Young, Jared W; Robbins, Trevor W

    2013-11-01

    Executive control is an aspect of cognitive function known to be impaired in schizophrenia. Previous meetings of the Cognitive Neuroscience Treatment Research to Improve Cognition in Schizophrenia (CNTRICS) group have more precisely defined executive control in terms of two constructs: "rule generation and selection", and "dynamic adjustments of control". Next, human cognitive tasks that may effectively measure performance with regard to these constructs were identified to be developed into practical and reliable measures for use in treatment development. The aim of this round of CNTRICS meetings was to define animal paradigms that have sufficient promise to warrant further investigation for their utility in measuring these constructs. Accordingly, "reversal learning" and the "attentional set-shifting task" were nominated to assess the construct of rule generation and selection, and the "stop signal task" for the construct of dynamic adjustments of control. These tasks are described in more detail here, with a particular focus on their utility for drug discovery efforts. Presently, each assay has strengths and weaknesses with regard to this point and increased emphasis on improving practical aspects of testing, understanding predictive validity, and defining biomarkers of performance represent important objectives in attaining confidence in translational validity here. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.

  4. A Systematic Approach to Simulating Metabolism in Computational Toxicology. I. The Times Heuristic Modeling Framework

    EPA Science Inventory

    This paper presents a new system for automated 2D-3D migration of chemicals in large databases with conformer multiplication. The main advantages of this system are its straightforward performance, reasonable execution time, simplicity, and applicability to building large 3D che...

  5. Structural performance analysis and redesign

    NASA Technical Reports Server (NTRS)

    Whetstone, W. D.

    1978-01-01

    Program performs stress, buckling, and vibrational analysis of large, linear, finite-element systems in excess of 50,000 degrees of freedom. Cost, execution time, and storage requirements are kept reasonable through use of sparse matrix solution techniques and other computational and data management procedures designed for problems of very large size.

  6. Educating Executive Function

    PubMed Central

    Blair, Clancy

    2016-01-01

    Executive functions are thinking skills that assist with reasoning, planning, problem solving, and managing one’s life. The brain areas that underlie these skills are interconnected with and influenced by activity in many different brain areas, some of which are associated with emotion and stress. One consequence of the stress-specific connections is that executive functions, which help us to organize our thinking, tend to be disrupted when stimulation is too high and we are stressed out, or too low when we are bored and lethargic. Given their central role in reasoning and also in managing stress and emotion, scientists have conducted studies, primarily with adults, to determine whether executive functions can be improved by training. By and large, results have shown that they can be, in part through computer-based videogame-like activities. Evidence of wider, more general benefits from such computer-based training, however, is mixed. Accordingly, scientists have reasoned that training will have wider benefits if it is implemented early, with very young children as the neural circuitry of executive functions is developing, and that it will be most effective if embedded in children’s everyday activities. Evidence produced by this research, however, is also mixed. In sum, much remains to be learned about executive function training. Without question, however, continued research on this important topic will yield valuable information about cognitive development. PMID:27906522

  7. Self-reports of executive dysfunction in current ecstasy/polydrug Users.

    PubMed

    Hadjiefthyvoulou, Florentia; Fisk, John E; Montgomery, Catharine; Bridges, Nikola

    2012-09-01

    Ecstasy/polydrug users have exhibited deficits in executive functioning in laboratory tests. We sought to extend these findings by investigating the extent to which ecstasy/polydrug users manifest executive deficits in everyday life. Forty-two current ecstasy/polydrug users, 18 previous (abstinent for at least 6 months) ecstasy/polydrug users, and 50 non-users of ecstasy (including both non-users of any illicit drug and some cannabis-only users) completed the self-report Behavior Rating Inventory of Executive Function-Adult Version (BRIEF-A) measure. Current ecstasy/polydrug users performed significantly worse than previous users and non-users on subscales measuring inhibition, self-monitoring, initiating action, working memory, planning, monitoring ongoing task performance, and organizational ability. Previous ecstasy/polydrug users did not differ significantly from non-users. In regression analyses, although the current frequency of ecstasy use accounted for statistically significant unique variance on 3 of the 9 BRIEF-A subscales, daily cigarette consumption was the main predictor in 6 of the subscales. Current ecstasy/polydrug users report more executive dysfunction than do previous users and non-users. This finding appears to relate to some aspect of ongoing ecstasy use and seems largely unrelated to the use of other illicit drugs. An unexpected finding was the association of current nicotine consumption with executive dysfunction.

  8. Model Based Analysis and Test Generation for Flight Software

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep

    2009-01-01

    We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines MathWorks and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.
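
    The following sketch illustrates the general idea of deriving test cases from a behavioral model by enumerating input sequences over a small transition system and recording the state path each sequence drives. It is a generic illustration, not the intermediate representation or tooling of the framework described above; the model, guards, and candidate inputs are invented.

```python
# Generic illustration of model-based test generation: enumerate input
# sequences over a small transition-system model and record the state path
# each sequence drives. The model, guards, and inputs are invented.
from itertools import product

# A toy mode-switch model: guarded transitions on an integer input.
TRANSITIONS = [
    ("idle",   lambda x: x > 0,   "active"),
    ("idle",   lambda x: x <= 0,  "idle"),
    ("active", lambda x: x > 10,  "alarm"),
    ("active", lambda x: x <= 10, "active"),
]

# Representative inputs chosen to exercise each guard (boundary-style values).
CANDIDATE_INPUTS = [-1, 0, 1, 10, 11]


def step(state, x):
    for src, guard, dst in TRANSITIONS:
        if src == state and guard(x):
            return dst
    return state  # states with no outgoing transition (e.g. "alarm") absorb


def generate_tests(depth=2):
    """Return (input sequence, state path) pairs covering all input combinations."""
    tests = []
    for seq in product(CANDIDATE_INPUTS, repeat=depth):
        state, path = "idle", ["idle"]
        for x in seq:
            state = step(state, x)
            path.append(state)
        tests.append((seq, path))
    return tests


if __name__ == "__main__":
    for inputs, path in generate_tests():
        print(inputs, "->", " / ".join(path))
```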

  9. Sharpening the intangibles edge.

    PubMed

    Lev, Baruch

    2004-06-01

    Intangible assets--patents and know-how, brands, a skilled workforce, strong customer relationships, software, unique processes and organizational designs, and the like--generate most of a company's growth and shareholder value. Yet extensive research indicates that investors systematically misprice the shares of intangibles-intensive enterprises. Clearly, overpricing wastes capital. But underpricing raises the cost of capital, hamstringing executives in their efforts to take advantage of further growth opportunities. How do you break this vicious cycle? By generating better information about your investments in intangibles, and by disclosing at least some of that data to the capital markets. Getting at that information is easier said than done, however. There are no markets generating visible prices for intellectual capital, brands, or human capital to assist investors in correctly valuing intangibles-intensive companies. And current accounting practices lump funds spent on intangibles with general expenses, so that investors and executives don't even know how much is being invested in them, let alone what a return on those investments might be. At the very least, companies should break out the amounts spent on intangibles and disclose them to the markets. More fundamentally, executives should start thinking of intangibles not as costs but as assets, so that they are recognized as investments whose returns are identified and monitored. The proposals laid down in this article are only a beginning, the author stresses. Corporations and accounting bodies should make systematic efforts to develop information that can reliably reflect the unique attributes of intangible assets. The current serious misallocations of resources should be incentive enough for businesses to join--and even lead--such developments.

  10. Random Number Generation and Executive Functions in Parkinson's Disease: An Event-Related Brain Potential Study.

    PubMed

    Münte, Thomas F; Joppich, Gregor; Däuper, Jan; Schrader, Christoph; Dengler, Reinhard; Heldmann, Marcus

    2015-01-01

    The generation of random sequences is considered to tax executive functions and has previously been reported to be impaired in Parkinson's disease (PD). Our aim was to assess the neurophysiological markers of random number generation in PD. Event-related potentials (ERP) were recorded in 12 PD patients and 12 age-matched normal controls (NC) while they either engaged in random number generation (RNG) by pressing the number keys on a computer keyboard in a random sequence or in ordered number generation (ONG), which required key presses in the canonical order. Key presses were paced by an external auditory stimulus at a rate of 1 tone every 1800 ms. As a secondary task, subjects had to monitor the tone sequence for a particular target tone, to which the number "0" key had to be pressed. This target tone occurred randomly and infrequently, thus creating a secondary oddball task. Behaviorally, PD patients showed an increased tendency to count in steps of one as well as a tendency towards repetition avoidance. Electrophysiologically, the amplitude of the P3 component of the ERP to the target tone of the secondary task was reduced during RNG in PD but not in NC. The behavioral findings indicate less random behavior in PD, while the ERP findings suggest that this impairment comes about because attentional resources are depleted in PD.
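
    The behavioral tendencies mentioned above (counting in steps of one, repetition avoidance) can be quantified from a key-press sequence with simple adjacency statistics. The sketch below is only an illustration of that kind of scoring; it does not reproduce the indices actually used in the study.

```python
# Illustrative scoring of a random-number-generation sequence. The exact
# indices used in the study are not reproduced here; the two measures below
# are simple stand-ins for counting-in-steps-of-one and repetition avoidance.
import random


def counting_in_ones_rate(seq):
    """Fraction of adjacent pairs that ascend or descend by exactly 1."""
    pairs = list(zip(seq, seq[1:]))
    return sum(abs(b - a) == 1 for a, b in pairs) / len(pairs)


def repetition_rate(seq):
    """Fraction of adjacent pairs that repeat the same digit (chance = 1/10)."""
    pairs = list(zip(seq, seq[1:]))
    return sum(a == b for a, b in pairs) / len(pairs)


if __name__ == "__main__":
    random.seed(0)
    truly_random = [random.randrange(10) for _ in range(200)]
    counting = [i % 10 for i in range(200)]   # pure counting, as in the ONG condition

    for name, seq in [("random", truly_random), ("counting", counting)]:
        print(f"{name:9s} counting-in-ones={counting_in_ones_rate(seq):.2f}  "
              f"repetitions={repetition_rate(seq):.2f}")
```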

  11. ZIP2DL: An Elastic-Plastic, Large-Rotation Finite-Element Stress Analysis and Crack-Growth Simulation Program

    NASA Technical Reports Server (NTRS)

    Deng, Xiaomin; Newman, James C., Jr.

    1997-01-01

    ZIP2DL is a two-dimensional, elastic-plastic finite element program for stress analysis and crack growth simulations, developed for the NASA Langley Research Center. It has many of the salient features of the ZIP2D program. For example, ZIP2DL contains five material models (linearly elastic, elastic-perfectly plastic, power-law hardening, linear hardening, and multi-linear hardening models), and it can simulate mixed-mode crack growth for prescribed crack growth paths under plane stress, plane strain, and mixed states of stress. Further, as an extension of ZIP2D, it also includes a number of new capabilities. The large-deformation kinematics in ZIP2DL allow it to handle elastic problems with large strains and large rotations, and elastic-plastic problems with small strains and large rotations. Loading conditions in terms of surface traction, concentrated load, and nodal displacement can be applied with a default linear time dependence, or they can be programmed according to a user-defined time dependence through a user subroutine. The restart capability of ZIP2DL makes it possible to stop the execution of the program at any time, analyze the results and/or modify execution options, and then resume and continue the execution of the program. This report includes three sections: a theoretical manual section, a user manual section, and an example manual section. In the theoretical section, the mathematics behind the various aspects of the program are concisely outlined. In the user manual section, a line-by-line explanation of the input data is given. In the example manual section, three types of examples are presented to demonstrate the accuracy and illustrate the use of this program.

  12. Role of optimization in the human dynamics of task execution

    NASA Astrophysics Data System (ADS)

    Cajueiro, Daniel O.; Maldonado, Wilfredo L.

    2008-03-01

    In order to explain the empirical evidence that the dynamics of human activity may not be well modeled by Poisson processes, a model based on queuing processes was built in the literature [A. L. Barabasi, Nature (London) 435, 207 (2005)]. The main assumption behind that model is that people execute their tasks based on a protocol that first executes the high priority item. In this context, the purpose of this paper is to analyze the validity of that hypothesis assuming that people are rational agents that make their decisions in order to minimize the cost of keeping nonexecuted tasks on the list. Therefore, we build and analytically solve a dynamic programming model with two priority types of tasks and show that the validity of this hypothesis depends strongly on the structure of the instantaneous costs that a person has to face if a given task is kept on the list for more than one period. Moreover, one interesting finding is that in one of the situations the protocol used to execute the tasks generates complex one-dimensional dynamics.
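
    The highest-priority-first protocol referenced above can be sketched as a short simulation: tasks with random priorities sit in a fixed-length queue, the highest-priority task is executed with probability p (otherwise a random one), and waiting times are tallied. Queue length, p, and run length below are arbitrary illustration choices.

```python
# Sketch of the highest-priority-first queuing protocol discussed above
# (a Barabasi-style model): tasks carry random priorities, the highest-priority
# task is executed with probability p, otherwise a random one is. The queue
# length, p, and run length are arbitrary illustration choices.
import random
from collections import Counter


def simulate_waiting_times(steps=100_000, queue_len=2, p=0.9, seed=1):
    random.seed(seed)
    queue = [(random.random(), 0) for _ in range(queue_len)]  # (priority, arrival)
    waits = Counter()
    for t in range(1, steps + 1):
        if random.random() < p:
            i = max(range(queue_len), key=lambda k: queue[k][0])  # highest priority
        else:
            i = random.randrange(queue_len)                       # random pick
        _, arrived = queue[i]
        waits[t - arrived] += 1
        queue[i] = (random.random(), t)   # a fresh task replaces the executed one
    return waits


if __name__ == "__main__":
    waits = simulate_waiting_times()
    total = sum(waits.values())
    for w in (1, 2, 5, 10, 50, 100):
        print(f"P(wait = {w}) ~ {waits.get(w, 0) / total:.4f}")
```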

  13. Particle Number Dependence of the N-body Simulations of Moon Formation

    NASA Astrophysics Data System (ADS)

    Sasaki, Takanori; Hosono, Natsuki

    2018-04-01

    The formation of the Moon from the circumterrestrial disk has been investigated by using N-body simulations with the number N of particles limited from 10^4 to 10^5. We develop an N-body simulation code on multiple Pezy-SC processors and deploy the Framework for Developing Particle Simulators to deal with a large number of particles. We execute several high- and extra-high-resolution N-body simulations of lunar accretion from a circumterrestrial disk of debris generated by a giant impact on Earth. The number of particles is up to 10^7, in which 1 particle corresponds to a 10 km sized satellitesimal. We find that the spiral structures inside the Roche limit radius differ between low-resolution simulations (N ≤ 10^5) and high-resolution simulations (N ≥ 10^6). Owing to this difference, the angular momentum fluxes, which determine the accretion timescale of the Moon, also depend on the numerical resolution.

  14. Influence of task switching costs on colony homeostasis

    NASA Astrophysics Data System (ADS)

    Jeanson, Raphaël; Lachaud, Jean-Paul

    2015-06-01

    In social insects, division of labour allows colonies to optimise the allocation of workers across all available tasks to satisfy colony requirements. The maintenance of stable conditions within colonies (homeostasis) requires that some individuals move inside the nest to monitor colony needs and execute unattended tasks. We developed a simple theoretical model to explore how worker mobility inside the nest and task switching costs influence the maintenance of stable levels of task-associated stimuli. Our results indicate that worker mobility in large colonies generates important task switching costs and is detrimental to colony homeostasis. Our study suggests that the balance between benefits and costs associated with the mobility of workers patrolling inside the nest depends on colony size. We propose that several species of ants with diverse life-history traits should be appropriate to test the prediction that the proportion of mobile workers should vary during colony ontogeny.

  15. Exploration Planetary Surface Structural Systems: Design Requirements and Compliance

    NASA Technical Reports Server (NTRS)

    Dorsey, John T.

    2011-01-01

    The Lunar Surface Systems Project developed system concepts that would be necessary to establish and maintain a permanent human presence on the Lunar surface. A variety of specific system implementations were generated as a part of the scenarios, some level of system definition was completed, and masses estimated for each system. Because the architecture studies generally spawned a large number of system concepts and the studies were executed in a short amount of time, the resulting system definitions had very low design fidelity. This paper describes the development sequence required to field a particular structural system: 1) Define Requirements, 2) Develop the Design and 3) Demonstrate Compliance of the Design to all Requirements. This paper also outlines and describes in detail the information and data that are required to establish structural design requirements and outlines the information that would comprise a planetary surface system Structures Requirements document.

  16. Deriving Tools from Real-Time Runs: A New CCMC Support for SEC and AFWA

    NASA Technical Reports Server (NTRS)

    Hesse, Michael; Rastatter, Lutz; MacNeice, Peter; Kuznetsova, Masha

    2007-01-01

    The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aiming at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides to researchers the use of space science models, even if they are not model owners themselves. In particular, the CCMC provides to the research community the execution of "runs-on-request" for specific events of interest to space science researchers. Through this activity and the concurrent development of advanced visualization tools, CCMC provides, to the general science community, unprecedented access to a large number of state-of-the-art research models. CCMC houses models that cover the entire domain from the Sun to the Earth. In this presentation, we will provide an overview of CCMC modeling services that are available to support activities at the Space Environment Center, or at the Air Force Weather Agency.

  17. Processing of the WLCG monitoring data using NoSQL

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Beche, A.; Belov, S.; Dzhunov, I.; Kadochnikov, I.; Karavakis, E.; Saiz, P.; Schovancova, J.; Tuckett, D.

    2014-06-01

    The Worldwide LHC Computing Grid (WLCG) today includes more than 150 computing centres where more than 2 million jobs are being executed daily and petabytes of data are transferred between sites. Monitoring the computing activities of the LHC experiments, over such a huge heterogeneous infrastructure, is extremely demanding in terms of computation, performance and reliability. Furthermore, the generated monitoring flow is constantly increasing, which represents another challenge for the monitoring systems. While existing solutions are traditionally based on Oracle for data storage and processing, recent developments evaluate NoSQL for processing large-scale monitoring datasets. NoSQL databases are getting increasingly popular for processing datasets at the terabyte and petabyte scale using commodity hardware. In this contribution, the integration of NoSQL data processing in the Experiment Dashboard framework is described along with first experiences of using this technology for monitoring the LHC computing activities.

  18. Poised Regeneration of Zebrafish Melanocytes Involves Direct Differentiation and Concurrent Replenishment of Tissue-Resident Progenitor Cells.

    PubMed

    Iyengar, Sharanya; Kasheta, Melissa; Ceol, Craig J

    2015-06-22

    Efficient regeneration following injury is critical for maintaining tissue function and enabling organismal survival. Cells reconstituting damaged tissue are often generated from resident stem or progenitor cells or from cells that have dedifferentiated and become proliferative. While lineage-tracing studies have defined cellular sources of regeneration in many tissues, the process by which these cells execute the regenerative process is largely obscure. Here, we have identified tissue-resident progenitor cells that mediate regeneration of zebrafish stripe melanocytes and defined how these cells reconstitute pigmentation. Nearly all regeneration melanocytes arise through direct differentiation of progenitor cells. Wnt signaling is activated prior to differentiation, and inhibition of Wnt signaling impairs regeneration. Additional progenitors divide symmetrically to sustain the pool of progenitor cells. Combining direct differentiation with symmetric progenitor divisions may serve as a means to rapidly repair injured tissue while preserving the capacity to regenerate. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. ProFound: Source Extraction and Application to Modern Survey Data

    NASA Astrophysics Data System (ADS)

    Robotham, A. S. G.

    2018-04-01

    ProFound detects sources in noisy images, generates segmentation maps identifying the pixels belonging to each source, and measures statistics like flux, size, and ellipticity. These outputs are key inputs for ProFit (ascl:1612.004), our galaxy profiling package; used in unison, the two packages semi-automatically profile large samples of galaxies. The key novel feature introduced in ProFound is that all photometry is executed on dilated segmentation maps that fully contain the identifiable flux, rather than using more traditional circular or ellipse-based photometry. Also, to be less sensitive to pathological segmentation issues, the de-blending is made across saddle points in flux. ProFound offers good initial parameter estimation for ProFit, as well as segmentation maps that follow the sometimes complex geometry of resolved sources whilst capturing nearly all of the flux. A number of bulge-disc decomposition projects are already making use of the ProFound and ProFit pipeline.
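
    The idea of measuring flux on a dilated segmentation map, rather than in fixed apertures, can be illustrated with a small numpy/scipy toy: threshold above a robust noise estimate, label connected pixels, dilate each segment, and sum the background-subtracted flux inside it. ProFound itself is an R package and its algorithm is considerably more sophisticated; the threshold and dilation settings below are arbitrary, and overlaps between dilated segments are ignored.

```python
# Conceptual numpy/scipy toy of photometry on a dilated segmentation map.
# This is not ProFound's algorithm (ProFound is an R package); the threshold
# and dilation settings are arbitrary, and overlapping dilated segments are
# simply ignored here.
import numpy as np
from scipy import ndimage


def segment_and_measure(image, nsigma=5.0, dilate_iter=3):
    """Threshold above a robust noise estimate, label sources, dilate each
    segment, and sum the background-subtracted flux inside it."""
    sky = np.median(image)
    noise = 1.4826 * np.median(np.abs(image - sky))   # robust sigma estimate
    labels, nsrc = ndimage.label(image > sky + nsigma * noise)

    results = []
    for src in range(1, nsrc + 1):
        seg = ndimage.binary_dilation(labels == src, iterations=dilate_iter)
        results.append({"id": src,
                        "npix": int(seg.sum()),
                        "flux": float((image[seg] - sky).sum())})
    return results


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.normal(0.0, 1.0, (128, 128))
    yy, xx = np.mgrid[:128, :128]
    img += 50.0 * np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / (2 * 3.0 ** 2))
    print(segment_and_measure(img))
```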

  20. Diffusion of massive particles around an Abelian-Higgs string

    NASA Astrophysics Data System (ADS)

    Saha, Abhisek; Sanyal, Soma

    2018-03-01

    We study the diffusion of massive particles in the spacetime of an Abelian-Higgs string. The particles in the early universe plasma execute Brownian motion. This motion of the particles is modeled as a two-dimensional random walk in the plane of the Abelian-Higgs string. The particles move randomly in the spacetime of the string according to their geodesic equations. We observe that for certain values of their energy and angular momentum, an overdensity of particles is observed close to the string. We find that the string parameters determine the distribution of the particles. We make an estimate of the density fluctuation generated around the string as a function of the deficit angle. Though the thickness of the string is small, the length is large, and the overdensity close to the string may have cosmological consequences in the early universe.

  1. Autonomous navigation and control of a Mars rover

    NASA Technical Reports Server (NTRS)

    Miller, D. P.; Atkinson, D. J.; Wilcox, B. H.; Mishkin, A. H.

    1990-01-01

    A Mars rover will need to be able to navigate autonomously for kilometers at a time. This paper outlines the sensing, perception, planning, and execution-monitoring systems that are currently being designed for the rover. The sensing is based around stereo vision. The interpretation of the images uses a registration of the depth map with a global height map provided by an orbiting spacecraft. Safe, low-energy paths are then planned through the map, and expectations of what the rover's articulation sensors should sense are generated. These expectations are then used to ensure that the planned path is being executed correctly.

  2. Planning and Execution for an Autonomous Aerobot

    NASA Technical Reports Server (NTRS)

    Gaines, Daniel M.; Estlin, Tara A.; Schaffer, Steven R.; Chouinard, Caroline M.

    2010-01-01

    The Aerial Onboard Autonomous Science Investigation System (AerOASIS) provides autonomous planning and execution capabilities for aerial vehicles. The system is capable of generating high-quality operations plans that integrate observation requests from ground planning teams, as well as opportunistic science events detected onboard the vehicle, while respecting mission and resource constraints. AerOASIS allows an airborne planetary exploration vehicle to summarize and prioritize the most scientifically relevant data; identify and select high-value science sites for additional investigation; and dynamically plan, schedule, and monitor the various science activities being performed, even during extended communications blackout periods with Earth.

  3. Generation of control sequences for a pilot-disassembly system

    NASA Astrophysics Data System (ADS)

    Seliger, Guenther; Kim, Hyung-Ju; Keil, Thomas

    2002-02-01

    Closing the product and material cycles has emerged as a paradigm for industry in the 21st century. Disassembly plays a key role in a life cycle economy since it enables the recovery of resources. A partly automated disassembly system should adapt to a large variety of products and different degrees of devaluation. Also, the amounts of products to be disassembled can vary strongly. To cope with these demands, an approach to generate on-line disassembly control sequences is presented. In order to react to these demands, the technological feasibility is considered within a procedure for the generation of disassembly control sequences. Procedures are designed to find available and technologically feasible disassembly processes. The control system is formed by modularised and parameterised control units at the cell level within the entire control architecture. In the first development stage, product and process analyses were executed on the sample product, a washing machine. Furthermore, a generalized disassembly process was defined. Afterwards, these processes were structured into primary and secondary functions. In the second stage, the disassembly control at the technological level was investigated. Factors were the availability of the disassembly tools and the technological feasibility of the disassembly processes within the disassembly system. Alternative technical disassembly processes are determined as a result of the availability of the tools and the technological feasibility of the processes. The fourth phase was the concept for the generation of the disassembly control sequences. The approach will be proved in a prototypical disassembly system.

  4. S-MMICs: Sub-mm-Wave Transistors and Integrated Circuits

    DTIC Science & Technology

    2008-09-01

    Research Lab BAA DAAD19-03-R-0017, research area 2.35: RF devices (Dr. Alfred Hung). Submitted by: Mark Rodwell, Department of Electrical and Computer ... [Front-matter and table-of-contents residue omitted; section titles include Motivation/Application, Technology Status, Transistor Scaling Laws, 256 nm Generation, HBT Power Amplifier Development, Dry-Etched Emitter Technology (256 nm Generation), Scaled Epitaxy, and Conclusions.] Executive Summary: Transistor and power amplifier IC technology was

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agrawal, Rakesh

    This project sought and successfully answered two big challenges facing the creation of low-energy, cost-effective, zeotropic multi-component distillation processes: first, identification of an efficient search space that includes all the useful distillation configurations and no undesired configurations; second, development of an algorithm to search the space efficiently and generate an array of low-energy options for industrial multi-component mixtures. Such mixtures are found in large-scale chemical and petroleum plants. Commercialization of our results was addressed by building a user interface allowing practical application of our methods for industrial problems by anyone with basic knowledge of distillation for a given problem. We also provided our algorithm to a major U.S. Chemical Company for use by the practitioners. The successful execution of this program has provided methods and algorithms at the disposal of process engineers to readily generate low-energy solutions for a large class of multicomponent distillation problems in a typical chemical and petrochemical plant. In a petrochemical complex, the distillation trains within crude oil processing, hydrotreating units containing alkylation, isomerization, reformer, LPG (liquefied petroleum gas) and NGL (natural gas liquids) processing units can benefit from our results. Effluents from naphtha crackers and ethane-propane crackers typically contain mixtures of methane, ethylene, ethane, propylene, propane, butane and heavier hydrocarbons. We have shown that our systematic search method with a more complete search space, along with the optimization algorithm, has a potential to yield low-energy distillation configurations for all such applications with energy savings up to 50%.
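
    A much-simplified picture of a distillation configuration search space is the set of sharp-split sequences for an ordered mixture, which can be enumerated recursively as below. The project's search space is richer than this toy, and its algorithm searches that space efficiently rather than exhaustively; the sketch only illustrates how such a space can be generated.

```python
# Toy enumeration of sharp-split distillation sequences for an ordered
# multicomponent feed. The search space addressed by the project above is far
# richer, and its algorithm searches rather than exhaustively enumerates; this
# sketch only shows how a configuration space can be generated recursively.
from functools import lru_cache


@lru_cache(maxsize=None)
def sequences(components):
    """Return all sharp-split separation trees for an ordered mixture (tuple)."""
    if len(components) == 1:
        return [components[0]]            # a single tree: the pure product
    trees = []
    for cut in range(1, len(components)):
        for top in sequences(components[:cut]):
            for bottom in sequences(components[cut:]):
                trees.append(("split", top, bottom))
    return trees


if __name__ == "__main__":
    feed = ("A", "B", "C", "D")           # stand-in for e.g. light-hydrocarbon cuts
    trees = sequences(feed)
    print(f"{len(trees)} sharp-split sequences for a {len(feed)}-component feed")
    for t in trees:
        print(t)
```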

  6. Influence of COMT genotype and affective distractors on the processing of self-generated thought.

    PubMed

    Kilford, Emma J; Dumontheil, Iroise; Wood, Nicholas W; Blakemore, Sarah-Jayne

    2015-06-01

    The catechol-O-methyltransferase (COMT) enzyme is a major determinant of prefrontal dopamine levels. The Val(158)Met polymorphism affects COMT enzymatic activity and has been associated with variation in executive function and affective processing. This study investigated the effect of COMT genotype on the flexible modulation of the balance between processing self-generated and processing stimulus-oriented information, in the presence or absence of affective distractors. Analyses included 124 healthy adult participants, who were also assessed on standard working memory (WM) tasks. Relative to Val carriers, Met homozygotes made fewer errors when selecting and manipulating self-generated thoughts. This effect was partly accounted for by an association between COMT genotype and visuospatial WM performance. We also observed a complex interaction between the influence of affective distractors, COMT genotype and sex on task accuracy: male, but not female, participants showed a sensitivity to the affective distractors that was dependent on COMT genotype. This was not accounted for by WM performance. This study provides novel evidence of the role of dopaminergic genetic variation on the ability to select and manipulate self-generated thoughts. The results also suggest sexually dimorphic effects of COMT genotype on the influence of affective distractors on executive function. © The Author (2014). Published by Oxford University Press.

  7. Integration of health management and support systems is key to achieving cost reduction and operational concept goals of the 2nd generation reusable launch vehicle

    NASA Astrophysics Data System (ADS)

    Koon, Phillip L.; Greene, Scott

    2002-07-01

    Our aerospace customers are demanding that we drastically reduce the cost of operating and supporting our products. Our space customer in particular is looking for the next generation of reusable launch vehicle systems to support more aircraft-like operation. Achieving this goal requires more than an evolution in materials, processes and systems; what is required is a paradigm shift in the design of the launch vehicles and of the processing systems that support them. This paper describes the Automated Informed Maintenance System (AIM) we are developing for NASA's Space Launch Initiative (SLI) Second Generation Reusable Launch Vehicle (RLV). Our system includes an Integrated Health Management (IHM) system for the launch vehicles and ground support systems, which features model-based diagnostics and prognostics. Health Management data is used by our AIM decision support and process aids to automatically plan maintenance, generate work orders and schedule maintenance activities, along with the resources required to execute these processes. Our system will automate the ground processing for a spaceport handling multiple RLVs executing multiple missions. To accomplish this task we are applying the latest web-based distributed computing technologies and application development techniques.

  8. Is semantic verbal fluency impairment explained by executive function deficits in schizophrenia?

    PubMed

    Berberian, Arthur A; Moraes, Giovanna V; Gadelha, Ary; Brietzke, Elisa; Fonseca, Ana O; Scarpato, Bruno S; Vicente, Marcella O; Seabra, Alessandra G; Bressan, Rodrigo A; Lacerda, Acioly L

    2016-04-19

    We investigated whether verbal fluency impairment in schizophrenia reflects executive function deficits or results from a degraded semantic store or inefficient search and retrieval strategies. Two groups were compared: 141 individuals with schizophrenia (SZ) and 119 healthy age- and education-matched controls. Both groups performed semantic and phonetic verbal fluency tasks. Performance was evaluated using three scores, based on 1) the number of words generated; 2) the number of clustered/related words; and 3) a switching score. A fourth performance score, based on the number of clusters, was also measured. SZ individuals produced fewer words than controls. After controlling for the total number of words produced, a difference was observed between the groups in the number of cluster-related words generated in the semantic task. In both groups, the number of words generated in the semantic task was higher than that generated in the phonemic task, although a significant group vs. fluency type interaction showed that subjects with schizophrenia had a disproportionate semantic fluency impairment. Working memory was positively associated with increased production of words within clusters and inversely correlated with switching. Semantic fluency impairment may be attributed to an inability (resulting from reduced cognitive control) to distinguish target signal from competing noise and to maintain cues for the production of memory probes.

  9. Alerting, orienting or executive attention networks: differential patterns of pupil dilations

    PubMed Central

    Geva, Ronny; Zivan, Michal; Warsha, Aviv; Olchik, Dov

    2013-01-01

    Attention capacities, alerting responses, orienting to sensory stimulation, and executive monitoring of performance are considered independent yet interrelated systems. These operations play integral roles in regulating the behavior of diverse species along the evolutionary ladder. Each of the primary attention constructs—alerting, orienting, and executive monitoring—involves salient autonomic correlates as evidenced by changes in reactive pupil dilation (PD), heart rate, and skin conductance. Recent technological advances that use remote high-resolution recording may allow the discernment of temporo-spatial attributes of autonomic responses that characterize the alerting, orienting, and executive monitoring networks during free viewing, irrespective of voluntary performance. This may deepen the understanding of the roles of autonomic regulation in these mental operations and may deepen our understanding of behavioral changes in verbal as well as in non-verbal species. The aim of this study was to explore differences between psychosensory PD responses in alerting, orienting, and executive conflict monitoring tasks to generate estimates of concurrent locus coeruleus (LC) noradrenergic input trajectories in healthy human adults using the attention networks test (ANT). The analysis revealed a construct-specific pattern of pupil responses: alerting is characterized by an early component (Pa), its acceleration enables covert orienting, and executive control is evidenced by a prominent late component (Pe). PD characteristics seem to be task-sensitive, allowing exploration of mental operations irrespective of conscious voluntary responses. These data may facilitate development of studies designed to assess mental operations in diverse species using autonomic responses. PMID:24133422

  10. Elevated triglycerides are associated with decreased executive function among adolescents with bipolar disorder.

    PubMed

    Naiberg, M R; Newton, D F; Collins, J E; Dickstein, D P; Bowie, C R; Goldstein, B I

    2016-09-01

    Cardiovascular risk factors that comprise metabolic syndrome (MetS) have been linked with cognition in adults with bipolar disorder (BD). This study examines the association between MetS components and executive function in adolescents with BD. A total of 34 adolescents with BD and 35 healthy control (HC) adolescents were enrolled. MetS components included triglycerides, high-density lipoprotein, glucose, waist circumference, and systolic and diastolic blood pressure. Executive functioning was measured using the intra-extra-dimensional (IED) set-shifting task from the Cambridge Neuropsychological Tests Automated Battery. Adolescents with BD were more likely to have ≥1 MetS components (64.7%) as compared to HC participants (22.9%, χ(2) = 12.29, P = <0.001). Adolescents with BD also had poorer IED task performance compared to HC adolescents (composite Z-score: 0.21 ± 0.52 vs. 0.49 ± 0.51, P = 0.011). Within the BD group, IED composite Z-scores were correlated with diastolic blood pressure and triglyceride levels (ρ = -0.358, P = 0.041 and ρ = -0.396, P = 0.020 respectively). The association of triglycerides with executive function remained significant after controlling for age, IQ, and current use of second-generation antipsychotics. Elevated triglycerides are associated with poorer executive function among adolescents with BD. Studies of behavioural and pharmacological interventions targeting MetS components for the purpose of improving executive function among adolescents with BD are warranted. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  11. Perceptual flexibility is coupled with reduced executive inhibition in students of the visual arts.

    PubMed

    Chamberlain, Rebecca; Swinnen, Lena; Heeren, Sarah; Wagemans, Johan

    2018-05-01

    Artists often report that seeing familiar stimuli in novel and interesting ways plays a role in visual art creation. However, the attentional mechanisms which underpin this ability have yet to be fully investigated. More specifically, it is unclear whether the ability to reinterpret visual stimuli in novel and interesting ways is facilitated by endogenously generated switches of attention, and whether it is linked in turn to executive functions such as inhibition and response switching. To address this issue, the current study explored ambiguous figure reversal and executive function in a sample of undergraduate students studying arts and non-art subjects (N = 141). Art students showed more frequent perceptual reversals in an ambiguous figure task, both when viewing the stimulus passively and when eliciting perceptual reversals voluntarily, but showed no difference from non-art students when asked to actively maintain specific percepts. In addition, art students were worse than non-art students at inhibiting distracting flankers in an executive inhibition task. The findings suggest that art students can elicit endogenous shifts of attention more easily than non-art students but that this faculty is not directly associated with enhanced executive function. It is proposed that the signature of artistic skill may be increased perceptual flexibility accompanied by reduced cognitive inhibition; however, future research will be necessary to determine which particular subskills in the visual arts are linked to aspects of perception and executive function. © 2017 The British Psychological Society.

  12. Habitual instigation and habitual execution: Definition, measurement, and effects on behaviour frequency.

    PubMed

    Gardner, Benjamin; Phillips, L Alison; Judah, Gaby

    2016-09-01

    'Habit' is a process whereby situational cues generate behaviour automatically, via activation of learned cue-behaviour associations. This article presents a conceptual and empirical rationale for distinguishing between two manifestations of habit in health behaviour, triggering selection and initiation of an action ('habitual instigation'), or automating progression through subactions required to complete action ('habitual execution'). We propose that habitual instigation accounts for habit-action relationships, and is the manifestation captured by the Self-Report Habit Index (SRHI), the dominant measure in health psychology. Conceptual analysis and prospective survey. Student participants (N = 229) completed measures of intentions, the original, non-specific SRHI, an instigation-specific SRHI variant, an execution-specific variant, and, 1 week later, behaviour, in three health domains (flossing, snacking, and breakfast consumption). Effects of habitual instigation and execution on behaviour were modelled using regression analyses, with simple slope analysis to test habit-intention interactions. Relationships between instigation, execution, and non-specific SRHI variants were assessed via correlations and factor analyses. The instigation-SRHI was uniformly more predictive of behaviour frequency than the execution-SRHI and corresponded more closely with the original SRHI in correlation and factor analyses. Further, experimental work is needed to separate the impact of the two habit manifestations more rigorously. Nonetheless, findings qualify calls for habit-based interventions by suggesting that behaviour maintenance may be better served by habitual instigation and that disrupting habitual behaviour may depend on overriding habits of instigation. Greater precision of measurement may help to minimize confusion between habitual instigation and execution. Statement of contribution What is already known on this subject? Habit is often used to understand, explain, and change health behaviour. Making behaviour habitual has been proposed as a means of maintaining behaviour change. Concerns have been raised about the extent to which health behaviour can be habitual. What does this study add? A conceptual and empirical rationale for discerning habitually instigated and habitually executed behaviour. Results show habit-behaviour effects are mostly attributable to habitual instigation, not execution. The most common habit measure, the Self-Report Habit Index, measures habitual instigation, not execution. © 2016 The British Psychological Society.

  13. Vestibulospinal control of reflex and voluntary head movement

    NASA Technical Reports Server (NTRS)

    Boyle, R.; Peterson, B. W. (Principal Investigator)

    2001-01-01

    Secondary canal-related vestibulospinal neurons respond to an externally applied movement of the head in the form of a firing rate modulation that encodes the angular velocity of the movement, and reflects in large part the input "head velocity in space" signal carried by the semicircular canal afferents. In addition to the head velocity signal, the vestibulospinal neurons can carry a more processed signal that includes eye position or eye velocity, or both (see Boyle on ref. list). To understand the control signals used by the central vestibular pathways in the generation of reflex head stabilization, such as the vestibulocollic reflex (VCR), and the maintenance of head posture, it is essential to record directly from identified vestibulospinal neurons projecting to the cervical spinal segments in the alert animal. The present report discusses two key features of the primate vestibulospinal system. First, the termination morphology of vestibulospinal axons in the cervical segments of the spinal cord is described to lay the structural basis of vestibulospinal control of head/neck posture and movement. And second, the head movement signal content carried by the same class of secondary vestibulospinal neurons during the actual execution of the VCR and during self-generated, or active, rapid head movements is presented.

  14. Software for Managing Parametric Studies

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; DeVivo, Adrian

    2003-01-01

    The Information Power Grid Virtual Laboratory (ILab) is a Practical Extraction and Reporting Language (PERL) graphical-user-interface computer program that generates shell scripts to facilitate parametric studies performed on the Grid. (The Grid denotes a worldwide network of supercomputers used for scientific and engineering computations involving data sets too large to fit on desktop computers.) Heretofore, parametric studies on the Grid have been impeded by the need to create control-language scripts and edit input data files: painstaking tasks that are necessary for managing multiple jobs on multiple computers. ILab reflects an object-oriented approach to the automation of these tasks: all data and operations are organized into packages in order to accelerate development and debugging. A container or document object in ILab, called an experiment, contains all the information (data and file paths) necessary to define a complex series of repeated, sequenced, and/or branching processes. For convenience and to enable reuse, this object is serialized to and from disk storage. At run time, the current ILab experiment is used to generate required input files and shell scripts, create directories, copy data files, and then both initiate and monitor the execution of all computational processes.
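
    The pattern described above (an experiment object that expands parameter combinations, writes per-case input files and shell scripts, and is serialized for reuse) is sketched below in Python. ILab itself is a PERL application, so none of the class, field, or file names here are its actual interface; the template, parameters, and command line are invented.

```python
# Sketch of parametric-study bookkeeping in the spirit described above. ILab
# itself is a PERL graphical application; this Python toy only illustrates an
# "experiment" object that expands parameter combinations, writes per-case
# directories, input files, and shell scripts, and is serialized for reuse.
# All file names, fields, and the command line are invented.
import itertools
import json
import pathlib


class Experiment:
    def __init__(self, name, template, parameters, command):
        self.name = name              # experiment label
        self.template = template      # input-file template with {placeholders}
        self.parameters = parameters  # dict: parameter name -> list of values
        self.command = command        # command to run in each case directory

    def cases(self):
        keys = sorted(self.parameters)
        for values in itertools.product(*(self.parameters[k] for k in keys)):
            yield dict(zip(keys, values))

    def generate(self, root="runs"):
        for i, case in enumerate(self.cases()):
            d = pathlib.Path(root) / f"{self.name}_{i:04d}"
            d.mkdir(parents=True, exist_ok=True)
            (d / "input.dat").write_text(self.template.format(**case))
            (d / "run.sh").write_text(f"#!/bin/sh\ncd {d}\n{self.command}\n")

    def save(self, path):
        pathlib.Path(path).write_text(json.dumps(self.__dict__, indent=2))


if __name__ == "__main__":
    exp = Experiment(name="wing",
                     template="mach {mach}\nalpha {alpha}\n",
                     parameters={"mach": [0.6, 0.8], "alpha": [0.0, 2.0, 4.0]},
                     command="./solver input.dat > output.log")
    exp.generate()
    exp.save("wing_experiment.json")
```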

  15. Spatial and temporal modulation of joint stiffness during multijoint movement.

    PubMed

    Mah, C D

    2001-02-01

    Joint stiffness measurements during small transient perturbations have suggested that stiffness during movement is different from that observed during posture. These observations are problematic for theories like the classical equilibrium point hypothesis, which suggest that desired trajectories during movement are enforced by joint stiffness. We measured arm impedances during large, slow perturbations to obtain detailed information about the spatial and temporal modulation of stiffness and viscosity during movement. While our measurements of stiffness magnitudes during movement generally agreed with the results of measurements using fast perturbations, they revealed that joint stiffness undergoes stereotyped changes in magnitude and aspect ratio which depend on the direction of movement and show a strong dependence on joint angles. Movement simulations using measured parameters show that the measured modulation of impedance acts as an energy conserving force field to constrain movement. This mechanism allows for a computationally simplified account of the execution of multijoint movement. While our measurements do not rule out a role for afferent feedback in force generation, the observed stereotyped restoring forces can allow a dramatic relaxation of the accuracy requirements for forces generated by other control mechanisms, such as inverse dynamical models.

  16. The Stroop color-word test: influence of age, sex, and education; and normative data for a large sample across the adult age range.

    PubMed

    Van der Elst, Wim; Van Boxtel, Martin P J; Van Breukelen, Gerard J P; Jolles, Jelle

    2006-03-01

    The Stroop Color-Word Test was administered to 1,856 cognitively screened, healthy Dutch-speaking participants aged 24 to 81 years. The effects of age, gender, and education on Stroop test performance were investigated to adequately stratify the normative data. The results showed that especially the speed-dependent Stroop scores (time to complete a subtest), rather than the accuracy measures (the errors made per Stroop subtask), were profoundly affected by the demographic variables. In addition to the main effects of the demographic variables, an Age × Low Level of Education interaction was found for the Error III and the Stroop Interference scores. This suggests that executive function, as measured by the Stroop test, declines with age and that the decline is more pronounced in people with a low level of education. This is consistent with the reserve hypothesis of brain aging (i.e., that education generates reserve capacity against the damaging effects of aging on brain functions). Normative Stroop data were established using both a regression-based and traditional approach, and the appropriateness of both methods for generating normative data is discussed.
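
    The regression-based approach to normative data mentioned above can be sketched as follows: fit a regression of test score on demographic predictors in the normative sample, then express an individual's observed score as a standardized residual relative to the demographically expected score. The sketch below uses fabricated toy data and placeholder coefficients, not the published norms.

```python
import numpy as np

def fit_norms(scores, age, education, sex):
    """Ordinary least-squares fit of score on demographics in the normative sample."""
    X = np.column_stack([np.ones_like(age), age, education, sex])
    coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
    residual_sd = np.std(scores - X @ coef, ddof=X.shape[1])
    return coef, residual_sd

def demographically_corrected_z(score, age, education, sex, coef, residual_sd):
    """Compare an observed score with the score expected for that person's demographics."""
    expected = coef @ np.array([1.0, age, education, sex])
    return (score - expected) / residual_sd

# Toy normative sample (synthetic values, for illustration only)
rng = np.random.default_rng(0)
n = 200
age = rng.uniform(24, 81, n)
education = rng.integers(1, 4, n).astype(float)
sex = rng.integers(0, 2, n).astype(float)
scores = 40 + 0.5 * age - 3.0 * education + rng.normal(0, 8, n)

coef, sd = fit_norms(scores, age, education, sex)
z = demographically_corrected_z(75.0, age=70, education=1, sex=0,
                                coef=coef, residual_sd=sd)
```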

  17. Exploratory Climate Data Visualization and Analysis Using DV3D and UVCDAT

    NASA Technical Reports Server (NTRS)

    Maxwell, Thomas

    2012-01-01

    Earth system scientists are being inundated by an explosion of data generated by ever-increasing resolution in both global models and remote sensors. Advanced tools for accessing, analyzing, and visualizing very large and complex climate data are required to maintain rapid progress in Earth system research. To meet this need, NASA, in collaboration with the Ultra-scale Visualization Climate Data Analysis Tools (UVCDAT) consortium, is developing exploratory climate data analysis and visualization tools which provide data analysis capabilities for the Earth System Grid (ESG). This paper describes DV3D, a UVCDAT package that enables exploratory analysis of climate simulation and observation datasets. DV3D provides user-friendly interfaces for visualization and analysis of climate data at a level appropriate for scientists. It features workflow interfaces, interactive 4D data exploration, hyperwall and stereo visualization, automated provenance generation, and parallel task execution. DV3D's integration with CDAT's climate data management system (CDMS) and other climate data analysis tools provides a wide range of high performance climate data analysis operations. DV3D expands the scientists' toolbox by incorporating a suite of rich new exploratory visualization and analysis methods for addressing the complexity of climate datasets.
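
    One way to picture the automated provenance generation and parallel task execution described above (this is not DV3D's actual API, only a generic sketch) is a small wrapper that records each analysis step's parameters while farming the steps out to a process pool. The function and variable names below are hypothetical.

```python
import json
import time
from concurrent.futures import ProcessPoolExecutor

def analysis_step(params):
    """Stand-in for a climate-data operation (e.g. a time average over one variable)."""
    time.sleep(0.1)  # pretend to do work
    return {"variable": params["variable"], "result": "ok"}

def run_with_provenance(task_params, workers=4):
    """Execute tasks in parallel and keep a provenance record of what was run."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(analysis_step, task_params))
    provenance = [{"step": analysis_step.__name__,
                   "parameters": params,
                   "timestamp": time.time()}
                  for params in task_params]
    return results, provenance

if __name__ == "__main__":
    tasks = [{"variable": v} for v in ("tas", "pr", "psl")]
    results, provenance = run_with_provenance(tasks)
    print(json.dumps(provenance, indent=2))
```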

  18. Echo™ User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harvey, Dustin Yewell

    Echo™ is a MATLAB-based software package designed for robust and scalable analysis of complex data workflows. An alternative to tedious, error-prone conventional processes, Echo is based on three transformative principles for data analysis: self-describing data, name-based indexing, and dynamic resource allocation. The software takes an object-oriented approach to data analysis, intimately connecting measurement data with associated metadata. Echo operations in an analysis workflow automatically track and merge metadata and computation parameters to provide a complete history of the process used to generate final results, while automated figure and report generation tools eliminate the potential to mislabel those results. History reporting and visualization methods provide straightforward auditability of analysis processes. Furthermore, name-based indexing on metadata greatly improves code readability for analyst collaboration and reduces opportunities for errors to occur. Echo efficiently manages large data sets using a framework that seamlessly allocates resources such that only the necessary computations to produce a given result are executed. Echo provides a versatile and extensible framework, allowing advanced users to add their own tools and data classes tailored to their own specific needs. Applying these transformative principles and powerful features, Echo greatly improves analyst efficiency and quality of results in many application areas.
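
    The three principles can be illustrated with a small Python sketch (Echo itself is MATLAB-based, and none of the names below belong to its actual interface): data carries its metadata and history, channels are selected by name rather than by column position, and a result is only computed when it is actually requested.

```python
import numpy as np

class Dataset:
    """Self-describing data: measurements stored alongside their metadata and history."""
    def __init__(self, channels, metadata):
        self.channels = channels    # dict of channel name -> numpy array
        self.metadata = dict(metadata)
        self.history = []           # provenance of every operation applied

    def select(self, *names):
        """Name-based indexing: pick channels by name, never by column position."""
        subset = Dataset({n: self.channels[n] for n in names}, self.metadata)
        subset.history = self.history + [("select", names)]
        return subset

    def rms(self):
        """Deferred computation: return a thunk so work happens only when needed."""
        def compute():
            self.history.append(("rms", tuple(self.channels)))
            return {n: float(np.sqrt(np.mean(x ** 2))) for n, x in self.channels.items()}
        return compute

data = Dataset({"accel_x": np.random.randn(1000),
                "accel_y": np.random.randn(1000)},
               {"units": "g", "sample_rate_hz": 1024})
pending = data.select("accel_x").rms()   # nothing computed yet
result = pending()                        # resources are used only here
```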

  19. Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms.

    PubMed

    Stromatias, Evangelos; Neil, Daniel; Pfeiffer, Michael; Galluppi, Francesco; Furber, Steve B; Liu, Shih-Chii

    2015-01-01

    Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs) are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The on-going work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time.
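
    A minimal sketch of the weight-precision issue discussed above (not the authors' training procedure) simply rounds trained weights to a fixed number of bits and measures how a layer's output drifts; the random "trained" weights, layer shape, and bit widths are arbitrary examples.

```python
import numpy as np

def quantize(weights, bits):
    """Uniformly quantize weights to the given bit precision over their observed range."""
    levels = 2 ** bits - 1
    w_min, w_max = weights.min(), weights.max()
    step = (w_max - w_min) / levels
    return np.round((weights - w_min) / step) * step + w_min

def layer_output(x, w):
    """Single sigmoid layer as a stand-in for one DBN layer."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

rng = np.random.default_rng(1)
w_full = rng.normal(0.0, 0.5, size=(784, 500))  # placeholder for full-precision trained weights
x = rng.random((10, 784))

for bits in (8, 4, 2):
    w_q = quantize(w_full, bits)
    drift = np.mean(np.abs(layer_output(x, w_q) - layer_output(x, w_full)))
    print(f"{bits}-bit weights: mean output change {drift:.4f}")
```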

  20. Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms

    PubMed Central

    Stromatias, Evangelos; Neil, Daniel; Pfeiffer, Michael; Galluppi, Francesco; Furber, Steve B.; Liu, Shih-Chii

    2015-01-01

    Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs) are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The on-going work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time. PMID:26217169
