Sample records for automatically writing code

  1. The Contributions of Vocabulary and Letter Writing Automaticity to Word Reading and Spelling for Kindergartners

    ERIC Educational Resources Information Center

    Kim, Young-Suk; Al Otaiba, Stephanie; Puranik, Cynthia; Folsom, Jessica Sidler; Greulich, Luana

    2014-01-01

    In the present study we examined the relation between alphabet knowledge fluency (letter names and sounds) and letter writing automaticity, and unique relations of letter writing automaticity and semantic knowledge (i.e., vocabulary) to word reading and spelling over and above code-related skills such as phonological awareness and alphabet…

  2. Automatic programming of simulation models

    NASA Technical Reports Server (NTRS)

    Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.

    1990-01-01

    The concepts of software engineering were used to improve the simulation modeling environment. Emphasis was placed on the application of an element of rapid prototyping, or automatic programming, to assist the modeler in defining the problem specification. Then, once the problem specification has been defined, an automatic code generator is used to write the simulation code. Two domains were selected for evaluating the concepts of software engineering for discrete event simulation: the manufacturing domain and a spacecraft countdown network sequence. The specific tasks were to: (1) define the software requirements for a graphical user interface to the Automatic Manufacturing Programming System (AMPS); (2) develop a graphical user interface for AMPS; and (3) compare the AMPS graphical interface with the AMPS interactive user interface.

  3. Evaluation of the efficiency and fault density of software generated by code generators

    NASA Technical Reports Server (NTRS)

    Schreur, Barbara

    1993-01-01

    Flight computers and flight software are used for GN&C (guidance, navigation, and control), engine controllers, and avionics during missions. The software development requires the generation of a considerable amount of code. The engineers who generate the code make mistakes, and generating a large body of code with high reliability requires considerable time. Computer-aided software engineering (CASE) tools are available which generate code automatically from inputs supplied through graphical interfaces. These tools are referred to as code generators. In theory, code generators could write highly reliable code quickly and inexpensively. The various code generators offer different levels of reliability checking: some check only the finished product, while others allow checking of individual modules and combined sets of modules as well. Considering NASA's requirement for reliability, an in-house comparison with manually generated code is needed. Furthermore, automatically generated code is reputed to be as efficient as the best manually generated code when executed; in-house verification of this claim is warranted.

  4. A plug-in to Eclipse for VHDL source codes: functionalities

    NASA Astrophysics Data System (ADS)

    Niton, B.; Poźniak, K. T.; Romaniuk, R. S.

    The paper presents an original application, written by the authors, which supports the writing and editing of source code in the VHDL language. It is a step towards fully automatic, augmented code writing for photonic and electronic systems, including systems based on FPGA and/or DSP processors. An implementation based on VEditor, a free-license program, is described; the work presented in this paper thus supplements and extends this free software. The introduction briefly characterizes the tools available on the market that aid the design of electronic systems in VHDL, with particular attention to plug-ins for the Eclipse environment and the Emacs program. Detailed properties of the plug-in are presented, such as the programming extension concept and the results of the formatter, re-factorizer, code hider, and other new additions to the VEditor program.

  5. Students' performance in phonological awareness, rapid naming, reading, and writing.

    PubMed

    Capellini, Simone Aparecida; Lanza, Simone Cristina

    2010-01-01

    Background: phonological awareness, rapid naming, reading, and writing in students with learning difficulties of a municipal public school. Aim: to characterize and compare the performance of students from public schools with and without learning difficulties in phonological awareness, rapid naming, reading, and writing. Method: participants were 60 students from the 2nd to the 4th grades of municipal public schools, divided into 6 groups of 10 students each: 3 groups of students without learning difficulties and 3 groups of students with learning difficulties. Phonological awareness, rapid automatized naming, oral reading, and writing under dictation were assessed. Results: students without learning difficulties performed better. Students with learning difficulties showed higher time/speed ratios in rapid naming tasks and, consequently, lower production in activities of phonological awareness, reading, and writing, when compared with students without learning difficulties. Conclusion: students with learning difficulties presented deficits in the relationship between naming and automatization skills, and among lexical access, visual discrimination, stimulus frequency, and competition in using less time for code naming, i.e., the time necessary for the phoneme-grapheme conversion process required in an alphabetic reading and writing system like Portuguese.

  6. Bayesian Methods and Confidence Intervals for Automatic Target Recognition of SAR Canonical Shapes

    DTIC Science & Technology

    2014-03-27

    and DirectX [22]. The CUDA platform was developed by the NVIDIA Corporation to allow programmers access to the computational capabilities of the...were used for the intense repetitive computations. Developing CUDA software requires writing code for specialized compilers provided by NVIDIA and

  7. HEPMath 1.4: A mathematica package for semi-automatic computations in high energy physics

    NASA Astrophysics Data System (ADS)

    Wiebusch, Martin

    2015-10-01

    This article introduces the Mathematica package HEPMath which provides a number of utilities and algorithms for High Energy Physics computations in Mathematica. Its functionality is similar to packages like FormCalc or FeynCalc, but it takes a more complete and extensible approach to implementing common High Energy Physics notations in the Mathematica language, in particular those related to tensors and index contractions. It also provides a more flexible method for the generation of numerical code which is based on new features for C code generation in Mathematica. In particular it can automatically generate Python extension modules which make the compiled functions callable from Python, thus eliminating the need to write any code in a low-level language like C or Fortran. It also contains seamless interfaces to LHAPDF, FeynArts, and LoopTools.
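
    As a generic illustration of the underlying idea, calling a compiled function from Python without hand-writing wrapper code, here is a ctypes sketch against the C math library (this is not HEPMath's generated-bindings mechanism, only the concept it automates):

    ```python
    import ctypes
    import ctypes.util

    # Locate and load the platform's C math library (the name varies by OS;
    # find_library may return None on some platforms, e.g. Windows).
    libm = ctypes.CDLL(ctypes.util.find_library("m"))

    # Declare the C signature so arguments and results convert correctly.
    libm.cos.argtypes = [ctypes.c_double]
    libm.cos.restype = ctypes.c_double

    print(libm.cos(0.0))  # 1.0: a compiled C function invoked from Python
    ```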

  8. Data-Driven Hint Generation in Vast Solution Spaces: A Self-Improving Python Programming Tutor

    ERIC Educational Resources Information Center

    Rivers, Kelly; Koedinger, Kenneth R.

    2017-01-01

    To provide personalized help to students who are working on code-writing problems, we introduce a data-driven tutoring system, ITAP (Intelligent Teaching Assistant for Programming). ITAP uses state abstraction, path construction, and state reification to automatically generate personalized hints for students, even when given states that have not…

  9. From Verified Models to Verifiable Code

    NASA Technical Reports Server (NTRS)

    Lensink, Leonard; Munoz, Cesar A.; Goodloe, Alwyn E.

    2009-01-01

    Declarative specifications of digital systems often contain parts that can be automatically translated into executable code. Automated code generation may reduce or eliminate the kinds of errors typically introduced through manual code writing. For this approach to be effective, the generated code should be reasonably efficient and, more importantly, verifiable. This paper presents a prototype code generator for the Prototype Verification System (PVS) that translates a subset of PVS functional specifications into an intermediate language and subsequently to multiple target programming languages. Several case studies are presented to illustrate the tool's functionality. The generated code can be analyzed by software verification tools such as verification condition generators, static analyzers, and software model-checkers to increase the confidence that the generated code is correct.

  10. Automatic programming of simulation models

    NASA Technical Reports Server (NTRS)

    Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.

    1988-01-01

    The objective of automatic programming is to improve the overall environment for describing the program. This improved environment is realized by a reduction in the amount of detail that the programmer needs to know and is exposed to. Furthermore, this improved environment is achieved by a specification language that is more natural to the user's problem domain and to the user's way of thinking about and looking at the problem. The goal of this research is to apply the concepts of automatic programming (AP) to modeling discrete event simulation systems. Specific emphasis is on the design and development of simulation tools to assist the modeler in defining or constructing a model of the system and then automatically writing the corresponding simulation code in the target simulation language, GPSS/PC. A related goal is to evaluate the feasibility of various languages for constructing automatic programming simulation tools.

  11. Building Automatic Grading Tools for Basic of Programming Lab in an Academic Institution

    NASA Astrophysics Data System (ADS)

    Harimurti, Rina; Iwan Nurhidayat, Andi; Asmunin

    2018-04-01

    The skill of computer programming is a core competency that must be mastered by students majoring in computer science. The best way to improve this skill is through the practice of writing many programs to solve various problems, from simple to complex. It takes hard work and a long time to check and evaluate the results of student labs one by one, especially when the number of students is large. Based on these constraints, we propose Automatic Grading Tools (AGT), an application that can evaluate and deeply check source code in C and C++. The application architecture consists of students, a web-based application, compilers, and the operating system. AGT implements the MVC architecture using open-source software such as the Laravel framework version 5.4, PostgreSQL 9.6, Bootstrap 3.3.7, and the jQuery library. AGT has also been tested on real problems by submitting source code in C/C++ and then compiling it. The test results show that the AGT application runs well.
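
    A minimal sketch of the compile-and-check step such a grader performs (a hypothetical helper, not the AGT codebase; assumes gcc/g++ are installed on the host):

    ```python
    import os
    import subprocess
    import tempfile

    def grade_submission(source_path: str, expected_output: str,
                         stdin_data: str = "") -> bool:
        """Compile a C/C++ submission and compare its output with the key."""
        exe = os.path.join(tempfile.mkdtemp(), "a.out")
        compiler = "g++" if source_path.endswith((".cpp", ".cc")) else "gcc"
        # Step 1: compile; a non-zero return code means a compile error.
        build = subprocess.run([compiler, source_path, "-o", exe],
                               capture_output=True, text=True)
        if build.returncode != 0:
            return False
        # Step 2: run with a timeout and compare against the expected output.
        try:
            run = subprocess.run([exe], input=stdin_data, capture_output=True,
                                 text=True, timeout=5)
        except subprocess.TimeoutExpired:
            return False  # runaway submission
        return run.stdout.strip() == expected_output.strip()
    ```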

  12. SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws

    NASA Technical Reports Server (NTRS)

    Cooke, Daniel; Rushton, Nelson

    2013-01-01

    With the introduction of new parallel architectures like the cell and multicore chips from IBM, Intel, AMD, and ARM, as well as the petascale processing available for high-end computing, a larger number of programmers will need to write parallel codes. Adding the parallel control structure to the sequence, selection, and iterative control constructs increases the complexity of code development, which often results in increased development costs and decreased reliability. SequenceL is a high-level programming language, that is, a programming language that is closer to a human's way of thinking than to a machine's. Historically, high-level languages have resulted in decreased development costs and increased reliability, at the expense of performance. In recent applications at JSC and in industry, SequenceL has demonstrated the usual advantages of high-level programming in terms of low cost and high reliability. SequenceL programs, however, have run at speeds typically comparable with, and in many cases faster than, their counterparts written in C and C++ when run on single-core processors. Moreover, SequenceL is able to generate parallel executables automatically for multicore hardware, gaining parallel speedups without any extra effort from the programmer beyond what is required to write the sequential/single-core code. A SequenceL-to-C++ translator has been developed that automatically renders readable multithreaded C++ from a combination of a SequenceL program and sample data input. The SequenceL language is based on two fundamental computational laws, Consume-Simplify-Produce (CSP) and Normalize-Transpose (NT), which enable it to automate the creation of parallel algorithms from high-level code that has no annotations of parallelism whatsoever. In our anecdotal experience, SequenceL development has been in every case less costly than development of the same algorithm in sequential (that is, single-core, single process) C or C++, and an order of magnitude less costly than development of comparable parallel code. Moreover, SequenceL not only automatically parallelizes the code, but since it is based on CSP-NT, it is provably race free, thus eliminating the largest quality challenge the parallelized software developer faces.

  13. A Comparison of Automatic Parallelization Tools/Compilers on the SGI Origin 2000 Using the NAS Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Frumkin, Michael; Hribar, Michelle; Jin, Hao-Qiang; Waheed, Abdul; Yan, Jerry

    1998-01-01

    Porting applications to new high-performance parallel and distributed computing platforms is a challenging task. Since writing parallel code by hand is extremely time consuming and costly, porting codes would ideally be automated by using parallelization tools and compilers. In this paper, we compare the performance of the hand-written NAS Parallel Benchmarks against three parallel versions generated with the help of tools and compilers: 1) CAPTools, an interactive computer-aided parallelization tool that generates message passing code; 2) the Portland Group's HPF compiler; and 3) compiler directives with the native FORTRAN77 compiler on the SGI Origin2000.

  14. Using suggestion to model different types of automatic writing.

    PubMed

    Walsh, E; Mehta, M A; Oakley, D A; Guilmette, D N; Gabay, A; Halligan, P W; Deeley, Q

    2014-05-01

    Our sense of self includes awareness of our thoughts and movements, and our control over them. This feeling can be altered or lost in neuropsychiatric disorders as well as in phenomena such as "automatic writing" whereby writing is attributed to an external source. Here, we employed suggestion in highly hypnotically suggestible participants to model various experiences of automatic writing during a sentence completion task. Results showed that the induction of hypnosis, without additional suggestion, was associated with a small but significant reduction of control, ownership, and awareness for writing. Targeted suggestions produced a double dissociation between thought and movement components of writing, for both feelings of control and ownership, and additionally, reduced awareness of writing. Overall, suggestion produced selective alterations in the control, ownership, and awareness of thought and motor components of writing, thus enabling key aspects of automatic writing, observed across different clinical and cultural settings, to be modelled.

  15. AMPS/PC - AUTOMATIC MANUFACTURING PROGRAMMING SYSTEM

    NASA Technical Reports Server (NTRS)

    Schroer, B. J.

    1994-01-01

    The AMPS/PC system is a simulation tool designed to aid the user in defining the specifications of a manufacturing environment and then automatically writing code in the target simulation language, GPSS/PC. The domain of problems that AMPS/PC can simulate is manufacturing assembly lines with subassembly lines and manufacturing cells. The user defines the problem domain by responding to questions from the interface program. Based on the responses, the interface program creates an internal problem specification file. This file includes the manufacturing process network flow and the attributes for all stations, cells, and stock points. AMPS then uses the problem specification file as input for the automatic code generator program to produce a simulation program in the target language, GPSS. The output of the generator program is the source code of the corresponding GPSS/PC simulation program. The system runs entirely on an IBM PC running PC DOS Version 2.0 or higher and is written in Turbo Pascal Version 4, requiring 640K memory and one 360K disk drive. To execute the GPSS program, the PC must have the GPSS/PC System Version 2.0 from Minuteman Software resident. The AMPS/PC program was developed in 1988.
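
    The specification-file-to-code idea can be sketched in a few lines (the specification format and helper are hypothetical, and the GPSS block names are standard GPSS; this is not the AMPS generator itself):

    ```python
    # Hypothetical problem specification: one workstation fed by arrivals.
    spec = {"arrival_mean": 5, "service_mean": 4, "transactions": 100}

    def generate_gpss(spec: dict) -> str:
        """Emit a tiny single-server GPSS model from a specification dict."""
        return "\n".join([
            f"        GENERATE  {spec['arrival_mean']}   ; create arriving jobs",
            "        QUEUE     LINE        ; enter the waiting line",
            "        SEIZE     STATION     ; capture the workstation",
            "        DEPART    LINE        ; leave the waiting line",
            f"        ADVANCE   {spec['service_mean']}   ; processing time",
            "        RELEASE   STATION     ; free the workstation",
            "        TERMINATE 1           ; job leaves the system",
            f"        START     {spec['transactions']}",
        ])

    print(generate_gpss(spec))  # the generated simulation source code
    ```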

  16. Handwriting Automaticity and Writing Instruction in Australian Kindergarten: An Exploratory Study

    ERIC Educational Resources Information Center

    Malpique, Anabela Abreu; Pino-Pasternak, Deborah; Valcan, Debora

    2017-01-01

    Accumulating evidence indicates handwriting automaticity is related to the development of effective writing skills. The present study examined the levels of handwriting automaticity of Australian children at the end of kindergarten and the amount and type of writing instruction they experienced before entering first grade. The current study…

  17. Studies in Historical Replication in Psychology IV: An Inquiry into the Psychological Research and Life of Gertrude Stein

    NASA Astrophysics Data System (ADS)

    Sirrine, Nicole K.; McCarthy, Shauna K.

    2008-05-01

    Gertrude Stein (1874-1946) is well known as an early twentieth century writer, but less well known is her involvement in automatic writing research. Critics of Stein's literary works suggest that her research had a significant influence on her poetry and fiction, though Stein denied any influence. A partial replication of Stein's 1896 study was conducted with the goal of addressing three historical questions: (1) What contributed to Stein's involvement in automatic writing research? (2) To what extent did Stein believe that she experienced automatic writing? (3) To what extent did her automatic writing research influence her later literary works?

  18. How Do Movements to Produce Letters Become Automatic during Writing Acquisition? Investigating the Development of Motor Anticipation

    ERIC Educational Resources Information Center

    Kandel, Sonia; Perret, Cyril

    2015-01-01

    Learning how to write involves the automation of grapho-motor skills. One of the factors that determine automaticity is "motor anticipation." This is the ability to write a letter while processing information on how to produce the following letters. It is essential for writing fast and smoothly. We investigated how motor anticipation…

  19. Experimental research on showing automatic disappearance pen handwriting based on spectral imaging technology

    NASA Astrophysics Data System (ADS)

    Su, Yi; Xu, Lei; Liu, Ningning; Huang, Wei; Xu, Xiaojing

    2016-10-01

    Purpose: to find an efficient, non-destructive examination method for revealing words that have disappeared after being written with an automatic disappearing-ink pen. Method: an imaging spectrometer was used to reveal the latent disappeared words on the paper surface, exploiting the different reflection and absorption properties of the various substances in different spectral bands. Results: disappeared words, whether written with different disappearing-ink pens on the same paper or with the same pen on different papers, could be revealed clearly using the spectral imaging examination method. Conclusion: spectral imaging technology can reveal words that have disappeared after being written with an automatic disappearing-ink pen.

  20. Analyzing Language in Suicide Notes and Legacy Tokens.

    PubMed

    Egnoto, Michael J; Griffin, Darrin J

    2016-03-01

    Identifying precursors that will aid in the discovery of individuals who may harm themselves or others has long been a focus of scholarly research. This work set out to determine if it is possible to use the legacy tokens of active shooters and notes left from individuals who completed suicide to uncover signals that foreshadow their behavior. A total of 25 suicide notes and 21 legacy tokens were compared with a sample of over 20,000 student writings for a preliminary computer-assisted text analysis to determine what differences can be coded with existing computer software to better identify students who may commit self-harm or harm to others. The results support that text analysis techniques with the Linguistic Inquiry and Word Count (LIWC) tool are effective for identifying suicidal or homicidal writings as distinct from each other and from a variety of student writings in an automated fashion. Findings indicate support for automated identification of writings that were associated with harm to self, harm to others, and various other student writing products. This work begins to uncover the viability of larger-scale, low-cost methods of automatic detection for individuals suffering from harmful ideation.

  21. Automatic Scaffolding and Measurement of Concept Mapping for EFL Students to Write Summaries

    ERIC Educational Resources Information Center

    Yang, Yu-Fen

    2015-01-01

    An incorrect concept map may obstruct a student's comprehension when writing summaries if they are unable to grasp key concepts when reading texts. The purpose of this study was to investigate the effects of automatic scaffolding and measurement of three-layer concept maps on improving university students' writing summaries. The automatic…

  22. Is Handwriting Performance Affected by the Writing Surface? Comparing Preschoolers', Second Graders', and Adults' Writing Performance on a Tablet vs. Paper.

    PubMed

    Gerth, Sabrina; Klassert, Annegret; Dolk, Thomas; Fliesser, Michael; Fischer, Martin H; Nottbusch, Guido; Festman, Julia

    2016-01-01

    Due to their multifunctionality, tablets offer tremendous advantages for research on handwriting dynamics or for interactive use of learning apps in schools. Further, the widespread use of tablet computers has had a great impact on handwriting in the current generation. But, is it advisable to teach how to write and to assess handwriting in pre- and primary schoolchildren on tablets rather than on paper? Since handwriting is not automatized before the age of 10 years, children's handwriting movements require graphomotor and visual feedback as well as permanent control of movement execution during handwriting. Modifications in writing conditions, for instance the smoother writing surface of a tablet, might influence handwriting performance in general, and in particular that of non-automatized beginning writers. In order to investigate how handwriting performance is affected by a difference in friction of the writing surface, we recruited three groups with varying levels of handwriting automaticity: 25 preschoolers, 27 second graders, and 25 adults. We administered three tasks measuring graphomotor abilities, visuomotor abilities, and handwriting performance (only second graders and adults). We evaluated two aspects of handwriting performance: handwriting quality, with a visual score, and handwriting dynamics, using online handwriting measures [e.g., writing duration, writing velocity, strokes, and number of inversions in velocity (NIV)]. In particular, NIVs, which describe the number of velocity peaks during handwriting, are directly related to the level of handwriting automaticity. In general, we found differences between writing on paper compared to the tablet. These differences were partly task-dependent. The comparison between tablet and paper revealed a faster writing velocity for all groups and all tasks on the tablet, which indicates that all participants, even the experienced writers, were influenced by the lower friction of the tablet surface. Our results for the group comparison show advancing levels in handwriting automaticity from preschoolers to second graders to adults, which confirms that our method depicts handwriting performance in groups with varying degrees of handwriting automaticity. We conclude that the smoother tablet surface requires additional control of handwriting movements and therefore might present an additional challenge for learners of handwriting.

  23. Writing executable assertions to test flight software

    NASA Technical Reports Server (NTRS)

    Mahmood, A.; Andrews, D. M.; Mccluskey, E. J.

    1984-01-01

    An executable assertion is a logical statement about the variables or a block of code. If there is no error during execution, the assertion statement evaluates to true. Executable assertions can be used for dynamic testing of software: they can be employed for validation during the design phase, and for exception handling and error detection during the operation phase. The present investigation is concerned with the problem of writing executable assertions, taking into account the use of assertions for testing flight software. The digital flight control system and the flight control software are discussed. The considered system provides autopilot and flight director modes of operation for automatic and manual control of the aircraft during all phases of flight. Attention is given to techniques for writing and using assertions to test flight software, an experimental setup to test flight software, and language features to support efficient use of assertions.
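
    As a minimal illustration of the concept, with plain Python assert statements standing in for the paper's flight-software assertions (the routine and its bounds are hypothetical):

    ```python
    def update_altitude(altitude_m: float, climb_rate_ms: float,
                        dt_s: float) -> float:
        """Integrate altitude one step, guarded by executable assertions."""
        # Precondition assertions: evaluate to true when the state is error-free.
        assert dt_s > 0, "time step must be positive"
        assert -500.0 <= climb_rate_ms <= 500.0, "climb rate out of range"
        new_altitude = altitude_m + climb_rate_ms * dt_s
        # Postcondition assertion: detects computation errors at run time.
        assert abs(new_altitude - altitude_m) <= 500.0 * dt_s, \
            "altitude jump too large"
        return new_altitude

    print(update_altitude(1000.0, 5.0, 0.1))  # 1000.5
    ```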

  24. Early development of language by hand: composing, reading, listening, and speaking connections; three letter-writing modes; and fast mapping in spelling.

    PubMed

    Berninger, Virginia W; Abbott, Robert D; Jones, Janine; Wolf, Beverly J; Gould, Laura; Anderson-Youngstrom, Marci; Shimada, Shirley; Apel, Kenn

    2006-01-01

    The first findings from a 5-year, overlapping-cohorts longitudinal study of typical language development are reported for (a) the interrelationships among Language by Ear (listening), Mouth (speaking), Eye (reading), and Hand (writing) in Cohort 1 in 1st and 3rd grade and Cohort 2 in 3rd and 5th grade; (b) the interrelationships among three modes of Language by Hand (writing manuscript letters with pen and keyboard and cursive letters with pen) in each cohort at the same grade levels as (a); and (c) the ability of the 1st graders in Cohort 1 and the 3rd graders in Cohort 2 to apply fast mapping in learning to spell pseudowords. Results showed that individual differences in Listening Comprehension, Oral Expression, Reading Comprehension, and Written Expression are stable developmentally, but each functional language system is only moderately correlated with the others. Likewise, manuscript writing, cursive writing, and keyboarding are only moderately correlated, and each has a different set of unique neuropsychological predictors depending on outcome measure and grade level. Results support the use of the following neuropsychological measures in assessing handwriting modes: orthographic coding, rapid automatic naming, finger succession (grapho-motor planning for sequential finger movements), inhibition, inhibition/switching, and phoneme skills (which may facilitate transfer of abstract letter identities across letter formats and modes of production). Both 1st and 3rd graders showed evidence of fast mapping of novel spoken word forms onto written word forms over 3 brief sessions (2 of which involved teaching) embedded in the assessment battery, and this fast mapping explained unique variance in their spelling achievement over and beyond their orthographic and phonological coding abilities and correlated significantly with current and next-year spelling achievement.

  25. Kindergarten Predictors of Third Grade Writing

    PubMed Central

    Kim, Young-Suk; Al Otaiba, Stephanie; Wanzek, Jeanne

    2015-01-01

    The primary goal of the present study was to examine the relations of kindergarten transcription, oral language, word reading, and attention skills to writing skills in third grade. Children (N = 157) were assessed on their letter writing automaticity, spelling, oral language, word reading, and attention in kindergarten. Then, they were assessed on writing in third grade using three writing tasks: one narrative and two expository prompts. Children's written compositions were evaluated in terms of writing quality (the extent to which ideas were developed and presented in an organized manner). Structural equation modeling showed that kindergarten oral language and lexical literacy skills (i.e., word reading and spelling) independently predicted third grade narrative writing quality, and kindergarten literacy skill uniquely predicted third grade expository writing quality. In contrast, attention and letter writing automaticity were not directly related to writing quality in either the narrative or expository genre. These results are discussed in light of theoretical and practical implications. PMID:25642118

  26. Towards an understanding of dimensions, predictors, and gender gap in written composition

    PubMed Central

    Kim, Young-Suk; Al Otaiba, Stephanie; Wanzek, Jeanne; Gatlin, Brandy

    2014-01-01

    We had three aims in the present study: (1) to examine the dimensionality of various evaluative approaches to scoring writing samples (e.g., quality, productivity, and curriculum-based measurement [CBM] scoring), (2) to investigate unique language and cognitive predictors of the identified dimensions, and (3) to examine the gender gap in the identified dimensions of writing. These questions were addressed using data from second and third grade students (N = 494). Data were analyzed using confirmatory factor analysis and multilevel modeling. Results showed that writing quality, productivity, and CBM scoring were dissociable constructs, but that writing quality and CBM scoring were highly related (r = .82). Language and cognitive predictors differed among the writing outcomes. Boys had lower writing scores than girls even after accounting for language, reading, attention, spelling, handwriting automaticity, and rapid automatized naming. Results are discussed in light of writing evaluation and a developmental model of writing. PMID:25937667

  27. Thread concept for automatic task parallelization in image analysis

    NASA Astrophysics Data System (ADS)

    Lueckenhaus, Maximilian; Eckstein, Wolfgang

    1998-09-01

    Parallel processing of image analysis tasks is an essential method to speed up image processing and helps to exploit the full capacity of distributed systems. However, writing parallel code is a difficult and time-consuming process and often leads to an architecture-dependent program that has to be re-implemented when the hardware changes. It is therefore highly desirable to do the parallelization automatically. For this we have developed a special kind of thread concept for image analysis tasks. Threads derived from one subtask may share objects and run in the same context but may follow different threads of execution and work on different data in parallel. In this paper we describe the basics of our thread concept and show how it can be used as the basis of automatic task parallelization to speed up image processing. We further illustrate the design and implementation of an agent-based system that uses image analysis threads for generating and processing parallel programs while taking into account the available hardware. Tests made with our system prototype show that the thread concept combined with the agent paradigm is suitable for speeding up image processing by automatic parallelization of image analysis tasks.
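
    A generic sketch of the task-parallel decomposition pattern, using only Python's standard library (illustrative of the idea of running one subtask over different data; it does not reproduce the paper's agent-based system):

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def analyze_tile(tile):
        """Stand-in for one image-analysis subtask applied to one region."""
        return sum(tile)  # placeholder for filtering or feature extraction

    # Threads derived from one subtask share context but process different data.
    tiles = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(analyze_tile, tiles))
    print(results)  # [6, 15, 24]
    ```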

  28. Automatic Detection of Preposition Errors in Learner Writing

    ERIC Educational Resources Information Center

    De Felice, Rachele; Pulman, Stephen

    2009-01-01

    In this article, we present an approach to the automatic correction of preposition errors in L2 English. Our system, based on a maximum entropy classifier, achieves average precision of 42% and recall of 35% on this task. The discussion of results obtained on correct and incorrect data aims to establish what characteristics of L2 writing prove…

  29. Shaping electromagnetic waves using software-automatically-designed metasurfaces.

    PubMed

    Zhang, Qian; Wan, Xiang; Liu, Shuo; Yuan Yin, Jia; Zhang, Lei; Jun Cui, Tie

    2017-06-15

    We present a fully digital procedure for designing reflective coding metasurfaces to shape reflected electromagnetic waves. The design procedure is completely automatic, controlled by a personal computer. In detail, the macro coding units of the metasurface are automatically divided into several types (e.g., two types for 1-bit coding, four types for 2-bit coding, etc.), and each type of macro coding unit is formed by a discretely random arrangement of micro coding units. By combining an optimization algorithm and commercial electromagnetic software, the digital patterns of the macro coding units are optimized to possess constant phase differences for the reflected waves. The apertures of the designed reflective metasurfaces are formed by arranging the macro coding units in a certain coding sequence. To verify the performance experimentally, a coding metasurface was fabricated by automatically designing two digital 1-bit unit cells, which were arranged in an array to constitute a periodic coding metasurface generating the required four-beam radiation in specific directions. Two complicated functional metasurfaces with circularly and elliptically shaped radiation beams were realized by automatically designing 4-bit macro coding units, showing the excellent performance of automatic design by software. The proposed method provides a smart tool for realizing various functional devices and systems automatically.

  30. Theorization and an Empirical Investigation of the Component-Based and Developmental Text Writing Fluency Construct

    ERIC Educational Resources Information Center

    Kim, Young-Suk Grace; Gatlin, Brandy; Al Otaiba, Stephanie; Wanzek, Jeanne

    2018-01-01

    We discuss a component-based, developmental view of text writing fluency, which we tested using data from children in Grades 2 and 3. "Text writing fluency" was defined as efficiency and automaticity in writing connected texts, which acts as a mediator between text generation (oral language), transcription skills, and writing quality. We…

  31. Want to Improve Undergraduate Thesis Writing? Engage Students and Their Faculty Readers in Scientific Peer Review

    ERIC Educational Resources Information Center

    Reynolds, Julie A.; Thompson, Robert J., Jr.

    2011-01-01

    One of the best opportunities that undergraduates have to learn to write like a scientist is to write a thesis after participating in faculty-mentored undergraduate research. But developing writing skills doesn't happen automatically, and there are significant challenges associated with offering writing courses and with individualized mentoring.…

  32. Write to Read: Investigating the Reading-Writing Relationship of Code-Level Early Literacy Skills

    ERIC Educational Resources Information Center

    Jones, Cindy D.; Reutzel, D. Ray

    2015-01-01

    The purpose of this study was to examine whether the code-related features used in current methods of writing instruction in kindergarten classrooms transfer to reading outcomes for kindergarten students. We randomly assigned kindergarten students to 3 instructional groups: a writing workshop group, an interactive writing group, and a control group.…

  33. A Developmental Writing Scale. Research Report. ETS RR-08-19

    ERIC Educational Resources Information Center

    Attali, Yigal; Powers, Don

    2008-01-01

    This report describes the development of grade norms for timed-writing performance in two modes of writing: persuasive and descriptive. These norms are based on objective and automatically computed measures of writing quality in grammar, usage, mechanics, style, vocabulary, organization, and development. These measures are also used in the…

  34. Development of an Automatic Differentiation Version of the FPX Rotor Code

    NASA Technical Reports Server (NTRS)

    Hu, Hong

    1996-01-01

    The ADIFOR 2.0 automatic differentiator is applied to the FPX rotor code along with the grid generator GRGN3. FPX is an eXtended Full-Potential CFD code for rotor calculations. An automatic differentiation version of the code is obtained, which provides both non-geometry and geometry sensitivity derivatives. The sensitivity derivatives obtained via automatic differentiation are presented and compared with divided-difference-generated derivatives. The study shows that the automatic differentiation method gives accurate derivative values in an efficient manner.
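
    For orientation, the core idea of forward-mode automatic differentiation can be shown with dual numbers in a few lines of Python (ADIFOR itself differentiates Fortran source by code transformation; this sketch shows only the underlying principle):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Dual:
        """A value together with its derivative, propagated through arithmetic."""
        val: float
        der: float

        def __add__(self, other):
            # Sum rule: (u + v)' = u' + v'
            return Dual(self.val + other.val, self.der + other.der)

        def __mul__(self, other):
            # Product rule: (uv)' = u'v + uv'
            return Dual(self.val * other.val,
                        self.der * other.val + self.val * other.der)

    # d/dx of f(x) = x * (x + 3) at x = 2 is 2x + 3 = 7.
    x = Dual(2.0, 1.0)      # seed derivative dx/dx = 1
    three = Dual(3.0, 0.0)  # constants carry zero derivative
    f = x * (x + three)
    print(f.val, f.der)     # 10.0 7.0
    ```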

  35. How Color Coding Formulaic Writing Enhances Organization: A Qualitative Approach for Measuring Student Affect

    ERIC Educational Resources Information Center

    Geigle, Bryce A.

    2014-01-01

    The aim of this thesis is to investigate and present the status of student synthesis with color-coded formula writing for grade levels six through twelve, and to make recommendations for educators to teach writing structure through a color-coded formula system in order to increase classroom engagement and lower students' affect. The thesis first…

  17. "Coded and Uncoded Error Feedback: Effects on Error Frequencies in Adult Colombian EFL Learners' Writing"

    ERIC Educational Resources Information Center

    Sampson, Andrew

    2012-01-01

    This paper reports on a small-scale study into the effects of uncoded correction (writing the correct forms above each error) and coded annotations (writing symbols that encourage learners to self-correct) on Colombian university-level EFL learners' written work. The study finds that while both coded annotations and uncoded correction appear to…

  37. Instrumentino: An Open-Source Software for Scientific Instruments.

    PubMed

    Koenka, Israel Joel; Sáiz, Jorge; Hauser, Peter C

    2015-01-01

    Scientists often need to build dedicated computer-controlled experimental systems. For this purpose, it is becoming common to employ open-source microcontroller platforms, such as the Arduino. These boards and associated integrated software development environments provide affordable yet powerful solutions for the implementation of hardware control of transducers and acquisition of signals from detectors and sensors. It is, however, a challenge to write programs that allow interactive use of such arrangements from a personal computer. This task is particularly complex if some of the included hardware components are connected directly to the computer and not via the microcontroller. A graphical user interface framework, Instrumentino, was therefore developed to allow the creation of control programs for complex systems with minimal programming effort. By writing a single code file, a powerful custom user interface is generated, which enables the automatic running of elaborate operation sequences and observation of acquired experimental data in real time. The framework, which is written in Python, allows extension by users, and is made available as an open source project.
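
    A generic sketch of the kind of PC-to-microcontroller control loop such frameworks wrap (using the pyserial package; the port name and command protocol are assumptions for illustration, and this is not Instrumentino's API):

    ```python
    import serial  # pyserial package

    # Port name and command protocol are hypothetical; on Windows the port
    # might be "COM3", and the firmware defines its own command set.
    with serial.Serial("/dev/ttyACM0", baudrate=9600, timeout=2) as port:
        port.write(b"SET_VOLTAGE 2.5\n")         # send a command to the board
        reply = port.readline().decode().strip()  # read one line of response
        print("controller replied:", reply)
    ```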

  38. Premotor activations in response to visually presented single letters depend on the hand used to write: a study on left-handers.

    PubMed

    Longcamp, Marieke; Anton, Jean-Luc; Roth, Muriel; Velay, Jean-Luc

    2005-01-01

    In a previous fMRI study on right-handers (Rhrs), we reported that part of the left ventral premotor cortex (BA6) was activated when alphabetical characters were passively observed and that the same region was also involved in handwriting [Longcamp, M., Anton, J. L., Roth, M., & Velay, J. L. (2003). Visual presentation of single letters activates a premotor area involved in writing. NeuroImage, 19, 1492-1500]. We therefore suggested that letter-viewing may induce automatic involvement of handwriting movements. In the present study, in order to confirm this hypothesis, we carried out a similar fMRI experiment on a group of left-handed subjects (Lhrs). We reasoned that if the above assumption was correct, visual perception of letters by Lhrs might automatically activate cortical motor areas coding for left-handed writing movements, i.e., areas located in the right hemisphere. The visual stimuli used here were either single letters, single pseudoletters, or a control stimulus. The subjects were asked to watch these stimuli attentively, and no response was required. The results showed that a ventral premotor cortical area (BA6) in the right hemisphere was specifically activated when Lhrs looked at letters and not at pseudoletters. This right area was symmetrically located with respect to the left one activated under the same circumstances in Rhrs. This finding supports the hypothesis that visual perception of written language evokes covert motor processes. In addition, a bilateral area, also located in the premotor cortex (BA6), but more ventrally and medially, was found to be activated in response to both letters and pseudoletters. This premotor region, which was not activated correspondingly in Rhrs, might be involved in the processing of graphic stimuli, whatever their degree of familiarity.

  39. Examining Alphabet Writing Fluency in Kindergarten: Exploring the Issue of Time on Task

    ERIC Educational Resources Information Center

    Puranik, Cynthia S.; Patchan, Melissa M.; Sears, Mary M.; McMaster, Kristen L.

    2017-01-01

    Curriculum-based measures (CBMs) are necessary for educators to quickly assess student skill levels and monitor progress. This study examined the use of the alphabet writing fluency task, a CBM of writing, to assess handwriting fluency--that is, how well children access, retrieve, and write letter forms automatically. In the current study, the…

  40. Other People's Students: Elaborated Codes and Dialect in Basic Writing

    ERIC Educational Resources Information Center

    Evans, Jason Cory

    2012-01-01

    English teachers, especially those in the field of basic writing, have long debated how to teach writing to students whose home language differs from the perceived norm. This thesis intervenes in that stalemated debate by re-examining "elaborated codes" and by arguing for a type of correctness in writing that includes being correct…

  41. PYTHON for Variable Star Astronomy (Abstract)

    NASA Astrophysics Data System (ADS)

    Craig, M.

    2018-06-01

    (Abstract only) Open source PYTHON packages that are useful for data reduction, photometry, and other tasks relevant to variable star astronomy have been developed over the last three to four years as part of the Astropy project. Using this software, it is relatively straightforward to reduce images, automatically detect sources, and match them to catalogs. Over the last year browser-based tools for performing some of those tasks have been developed that minimize or eliminate the need to write any of your own code. After providing an overview of the current state of the software, an application that calculates transformation coefficients on a frame-by-frame basis by matching stars in an image to the APASS catalog will be described.
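
    A brief sketch of the kind of automatic source detection the talk refers to, using Astropy and the affiliated photutils package (the file name is hypothetical, and the import path assumes a recent photutils release):

    ```python
    from astropy.io import fits
    from astropy.stats import sigma_clipped_stats
    from photutils.detection import DAOStarFinder

    # Load a reduced image (hypothetical file name).
    data = fits.getdata("variable_field.fits")

    # Estimate the sky background robustly, then detect stars above 5 sigma.
    mean, median, std = sigma_clipped_stats(data, sigma=3.0)
    finder = DAOStarFinder(fwhm=3.0, threshold=5.0 * std)
    sources = finder(data - median)

    # The result is a table of detected sources ready for catalog matching.
    print(sources["xcentroid", "ycentroid", "flux"])
    ```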

  42. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boman, Erik G.

    This LDRD project was a campus exec fellowship to fund (in part) Donald Nguyen's PhD research at UT-Austin. His work has focused on parallel programming models and scheduling irregular algorithms on shared-memory systems using the Galois framework. Galois provides a simple but powerful way for users and applications to automatically obtain good parallel performance using certain supported data containers. The naïve user can write serial code, while advanced users can optimize performance through advanced features, such as specifying the scheduling policy. Galois was used to parallelize two sparse matrix reordering schemes: RCM and Sloan. Such reordering is important in high-performance computing to obtain better data locality and thus reduce run times.

  43. The Impact of Promoting Transcription on Early Text Production: Effects on Bursts and Pauses, Levels of Written Language, and Writing Performance

    ERIC Educational Resources Information Center

    Alves, Rui A.; Limpo, Teresa; Fidalgo, Raquel; Carvalhais, Lénia; Pereira, Luísa Álvares; Castro, São Luís

    2016-01-01

    Writing development seems heavily dependent upon the automatization of transcription. This study aimed to further investigate the link between transcription and writing by examining the effects of promoting handwriting and spelling skills on a comprehensive set of writing measures (viz., bursts and pauses, levels of written language, and writing…

  44. Classroom Tech

    ERIC Educational Resources Information Center

    Instructor, 2006

    2006-01-01

    This article features the latest classroom technologies namely the FLY Pentop, WriteToLearn, and a new iris scan identification system. The FLY Pentop is a computerized pen from Leapster that "magically" understands what kids write and draw on special FLY paper. WriteToLearn is an automatic grading software from Pearson Knowledge Technologies and…

  45. Knowledge base methodology: Methodology for first Engineering Script Language (ESL) knowledge base

    NASA Technical Reports Server (NTRS)

    Peeris, Kumar; Izygon, Michel E.

    1992-01-01

    The primary goal of reusing software components is that software can be developed faster, cheaper, and with higher quality. However, reuse is not automatic and cannot just happen; it has to be carefully engineered. For example, a component needs to be easily understandable in order to be reused, and it also has to be malleable enough to fit into different applications. In fact, the software development process is deeply affected when reuse is applied. During component development, a serious effort has to be directed toward making components as reusable as possible. This implies defining reuse coding style guidelines and applying them to any new component being created as well as to any old component being modified. These guidelines should point out favorable reuse features and may apply to naming conventions, module size and cohesion, internal documentation, etc. During application development, effort shifts from writing new code toward finding, and eventually modifying, existing pieces of code, then assembling them together. We see here that reuse is not free, and therefore has to be carefully managed.

  46. MetaJC++: A flexible and automatic program transformation technique using meta framework

    NASA Astrophysics Data System (ADS)

    Beevi, Nadera S.; Reghu, M.; Chitraprasad, D.; Vinodchandra, S. S.

    2014-09-01

    A compiler is a tool to translate abstract code containing natural language terms into machine code. Meta compilers are available that compile more than one language. We have developed a meta framework that intends to combine two dissimilar programming languages, namely C++ and Java, to provide a flexible object-oriented programming platform for the user. Suitable constructs from both languages have been combined, thereby forming a new and stronger meta-language. The framework is developed using the compiler-writing tools Flex and Yacc to design the front end of the compiler. The lexer and parser have been developed to accommodate the complete keyword set and syntax of both languages. Two intermediate representations are used in the translation of the source program to machine code. An abstract syntax tree is used as a high-level intermediate representation that preserves the hierarchical properties of the source program. A new machine-independent, stack-based byte-code has also been devised to act as a low-level intermediate representation. The byte-code is organised into an output class file that can be used to produce an interpreted output. The results, especially in providing C++ concepts in Java, give insight into the potentially strong features of the resultant meta-language.
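
    The two-level intermediate representation idea can be sketched briefly: a toy expression AST lowered to a stack-based byte-code and then interpreted (in Python, for illustration; this is not the MetaJC++ implementation):

    ```python
    # High-level IR: a tree that preserves the source's hierarchical structure.
    ast = ("add", ("num", 2), ("mul", ("num", 3), ("num", 4)))

    def lower(node, code):
        """Lower the AST to a flat, machine-independent stack byte-code."""
        op = node[0]
        if op == "num":
            code.append(("PUSH", node[1]))
        else:
            lower(node[1], code)             # left operand first
            lower(node[2], code)             # then right operand
            code.append(("ADD",) if op == "add" else ("MUL",))
        return code

    def run(code):
        """Interpret the byte-code on an operand stack."""
        stack = []
        for instr in code:
            if instr[0] == "PUSH":
                stack.append(instr[1])
            else:
                b, a = stack.pop(), stack.pop()
                stack.append(a + b if instr[0] == "ADD" else a * b)
        return stack.pop()

    print(run(lower(ast, [])))  # 2 + 3 * 4 = 14
    ```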

  47. Cryptology

    ERIC Educational Resources Information Center

    Tech Directions, 2011

    2011-01-01

    Cryptology, or cryptography, is the study of writing and deciphering hidden messages in codes, ciphers, and writings. It is almost as old as writing itself. Ciphers are messages in which letters are rearranged or substituted for other letters or numbers. Codes are messages in which letters are replaced by letter groups, syllables, or sentences.…

  48. Using Writing Process and Product Features to Assess Writing Quality and Explore How Those Features Relate to Other Literacy Tasks. Research Report. ETS RR-14-03

    ERIC Educational Resources Information Center

    Deane, Paul

    2014-01-01

    This paper explores automated methods for measuring features of student writing and determining their relationship to writing quality and other features of literacy, such as reading test scores. In particular, it uses the "e-rater"™ automatic essay scoring system to measure "product" features (measurable traits of the final…

  49. Validity of Scores for a Developmental Writing Scale Based on Automated Scoring

    ERIC Educational Resources Information Center

    Attali, Yigal; Powers, Donald

    2009-01-01

    A developmental writing scale for timed essay-writing performance was created on the basis of automatically computed indicators of writing fluency, word choice, and conventions of standard written English. In a large-scale data collection effort that involved a national sample of more than 12,000 students from 4th, 6th, 8th, 10th, and 12th grade,…

  50. Coding for Language Complexity: The Interplay among Methodological Commitments, Tools, and Workflow in Writing Research

    ERIC Educational Resources Information Center

    Geisler, Cheryl

    2018-01-01

    Coding, the analytic task of assigning codes to nonnumeric data, is foundational to writing research. A rich discussion of methodological pluralism has established the foundational importance of systematicity in the task of coding, but less attention has been paid to the equally important commitment to language complexity. Addressing the interplay…

  51. Mentor Texts and the Coding of Academic Writing Structures: A Functional Approach

    ERIC Educational Resources Information Center

    Escobar Alméciga, Wilder Yesid; Evans, Reid

    2014-01-01

    The purpose of the present pedagogical experience was to address the English language writing needs of university-level students pursuing a degree in bilingual education with an emphasis in the teaching of English. Using mentor texts and coding academic writing structures, an instructional design was developed to directly address the shortcomings…

  52. Automatic Certification of Kalman Filters for Reliable Code Generation

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Fischer, Bernd; Schumann, Johann; Richardson, Julian

    2005-01-01

    AUTOFILTER is a tool for automatically deriving Kalman filter code from high-level declarative specifications of state estimation problems. It can generate code with a range of algorithmic characteristics and for several target platforms. The tool has been designed with reliability of the generated code in mind and is able to automatically certify that the code it generates is free from various error classes. Since documentation is an important part of software assurance, AUTOFILTER can also automatically generate various human-readable documents, containing both design and safety related information. We discuss how these features address software assurance standards such as DO-178B.
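
    For orientation, here is a scalar Kalman filter step of the kind such generators emit from a state-estimation specification (standard textbook equations; this is not AUTOFILTER's actual output, and the noise variances are illustrative):

    ```python
    def kalman_step(x_est, p_est, z, q=1e-4, r=0.1):
        """One predict/update cycle for a scalar random-walk state."""
        # Predict: state model x_k = x_{k-1} plus process noise (variance q).
        x_pred = x_est
        p_pred = p_est + q
        # Update: blend the prediction with measurement z (noise variance r).
        k_gain = p_pred / (p_pred + r)          # Kalman gain
        x_new = x_pred + k_gain * (z - x_pred)  # corrected state estimate
        p_new = (1.0 - k_gain) * p_pred         # corrected error covariance
        return x_new, p_new

    x, p = 0.0, 1.0
    for z in [1.1, 0.9, 1.05, 1.0]:             # noisy measurements of a constant
        x, p = kalman_step(x, p, z)
    print(round(x, 3))                          # estimate converges near 1.0
    ```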

  53. The Sentence Fairy: A Natural-Language Generation System to Support Children's Essay Writing

    ERIC Educational Resources Information Center

    Harbusch, Karin; Itsova, Gergana; Koch, Ulrich; Kuhner, Christine

    2008-01-01

    We built an NLP system implementing a "virtual writing conference" for elementary-school children, with German as the target language. Currently, state-of-the-art computer support for writing tasks is restricted to multiple-choice questions or quizzes because automatic parsing of the often ambiguous and fragmentary texts produced by pupils…

  54. Automatic Coding of Short Text Responses via Clustering in Educational Assessment

    ERIC Educational Resources Information Center

    Zehner, Fabian; Sälzer, Christine; Goldhammer, Frank

    2016-01-01

    Automatic coding of short text responses opens new doors in assessment. We implemented and integrated baseline methods of natural language processing and statistical modelling by means of software components that are available under open licenses. The accuracy of automatic text coding is demonstrated by using data collected in the "Programme…

  55. Proceduracy: Computer Code Writing in the Continuum of Literacy

    ERIC Educational Resources Information Center

    Vee, Annette

    2010-01-01

    This dissertation looks at computer programming through the lens of literacy studies, building from the concept of code as a written text with expressive and rhetorical power. I focus on the intersecting technological and social factors of computer code writing as a literacy--a practice I call "proceduracy". Like literacy, proceduracy is a human…

  17. The effects of automatic spelling correction software on understanding and comprehension in compensated dyslexia: improved recall following dictation.

    PubMed

    Hiscox, Lucy; Leonavičiūtė, Erika; Humby, Trevor

    2014-08-01

    Dyslexia is associated with difficulties in language-specific skills such as spelling, writing and reading; the difficulty in acquiring literacy skills is not a result of low intelligence or the absence of learning opportunity, but these issues will persist throughout life and could affect long-term education. Writing is a complex process involving many different functions, integrated by the working memory system; people with dyslexia have a working memory deficit, which means that concentration on writing quality may be detrimental to understanding. We confirm impaired working memory in a sample of university students with (compensated) dyslexia, and using a within-subject design with three test conditions, we show that these participants demonstrated better understanding of a piece of text if they had used automatic spelling correction software during a dictation/transcription task. We hypothesize that the use of the autocorrecting software reduced demand on working memory, by allowing word writing to be more automatic, thus enabling better processing and understanding of the content of the transcriptions and improved recall. Long-term and regular use of autocorrecting assistive software should be beneficial for people with and without dyslexia and may improve confidence, written work, academic achievement and self-esteem, which are all affected in dyslexia. Copyright © 2014 John Wiley & Sons, Ltd.

  18. 76 FR 68488 - Extension of the Designation of Honduras for Temporary Protected Status and Automatic Extension...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-04

    ... 2008 Economist Intelligence Unit report, transportation infrastructure was ``patchy but improving... your A-number printed on it); and c. Write the automatic extension date in the second space. (2) For...

  19. Read-Write-Codes: An Erasure Resilient Encoding System for Flexible Reading and Writing in Storage Networks

    NASA Astrophysics Data System (ADS)

    Mense, Mario; Schindelhauer, Christian

    We introduce the Read-Write-Coding-System (RWC) - a very flexible class of linear block codes that generate efficient and flexible erasure codes for storage networks. In particular, given a message x of k symbols and a codeword y of n symbols, an RW code defines additional parameters k ≤ r, w ≤ n that offer enhanced possibilities to adjust the fault-tolerance capability of the code. More precisely, an RWC provides linear (n, k, d)-codes that have (a) minimum distance d = n - r + 1 between any two codewords, and (b) for each codeword there exists a codeword for every other message within distance at most w. Furthermore, depending on the values of r, w and the code alphabet, different block codes such as parity codes (e.g. RAID 4/5) or Reed-Solomon (RS) codes (if r = k and thus w = n) can be generated. In storage networks in which I/O accesses are very costly and redundancy is crucial, this flexibility has considerable advantages, as r and w can be optimally adapted to read- or write-intensive applications; only w symbols must be updated if the message x changes completely, which differs from other codes that always need to rewrite y completely when x changes. In this paper, we first state a tight lower bound and basic conditions for all RW codes. Furthermore, we introduce special RW codes in which all mentioned parameters are adjustable even online, that is, RW codes that are adaptive to changing demands. Finally, we point out some useful properties regarding the safety and security of the stored data.
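
    The RW construction itself is beyond a short sketch, but the parity-code corner case mentioned above (RAID 4/5-style) is easy to illustrate. A minimal Python sketch, assuming XOR parity and single-erasure recovery:

        import functools

        def xor_blocks(a, b):
            return bytes(x ^ y for x, y in zip(a, b))

        def encode(blocks):
            """RAID-4-style parity: n = k + 1 codeword blocks from k message blocks."""
            parity = functools.reduce(xor_blocks, blocks)
            return blocks + [parity]

        def recover(codeword, lost):
            """Rebuild the single erased block at index `lost` by XOR of survivors."""
            survivors = [b for i, b in enumerate(codeword) if i != lost]
            return functools.reduce(xor_blocks, survivors)

        cw = encode([b"abcd", b"efgh", b"ijkl"])
        assert recover(cw, 1) == b"efgh"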

  20. Relearning of Writing Skills in Parkinson's Disease After Intensive Amplitude Training.

    PubMed

    Nackaerts, Evelien; Heremans, Elke; Vervoort, Griet; Smits-Engelsman, Bouwien C M; Swinnen, Stephan P; Vandenberghe, Wim; Bergmans, Bruno; Nieuwboer, Alice

    2016-08-01

    Micrographia occurs in approximately 60% of people with Parkinson's disease (PD). Although handwriting is an important task in daily life, it is not clear whether relearning and consolidation (i.e., the solid storage in motor memory) of this skill is possible in PD. The objective was to conduct the first controlled study into the effects of intensive motor learning to improve micrographia in PD. In this placebo-controlled study, 38 right-handed people with PD were randomized into 2 groups, receiving 1 of 2 equally time-intensive training programs (30 min/day, 5 days/week for 6 weeks). The experimental group (n = 18) performed amplitude training focused on improving writing size. The placebo group (n = 20) received stretch and relaxation exercises. Participants' writing skills were assessed using a touch-sensitive writing tablet and a pen-and-paper test, pre- and posttraining, and after a 6-week retention period. The primary outcome was change in amplitude during several tests of consolidation: (1) transfer, using trained and untrained sequences performed with and without target zones; and (2) automatization, using single- and dual-task sequences. The group receiving amplitude training significantly improved in amplitude and variability of amplitude on the transfer and automatization tasks. Effect sizes varied between 7% and 17%, and these benefits were maintained after the 6-week retention period. Moreover, there was transfer to daily life writing. These results show automatization, transfer, and retention of increased writing size (diminished micrographia) after intensive amplitude training, indicating that consolidation of motor learning is possible in PD. © 2016 International Parkinson and Movement Disorder Society.

  1. Computing Accurate Grammatical Feedback in a Virtual Writing Conference for German-Speaking Elementary-School Children: An Approach Based on Natural Language Generation

    ERIC Educational Resources Information Center

    Harbusch, Karin; Itsova, Gergana; Koch, Ulrich; Kuhner, Christine

    2009-01-01

    We built a natural language processing (NLP) system implementing a "virtual writing conference" for elementary-school children, with German as the target language. Currently, state-of-the-art computer support for writing tasks is restricted to multiple-choice questions or quizzes because automatic parsing of the often ambiguous and fragmentary…

  2. MEMOPS: data modelling and automatic code generation.

    PubMed

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D

    2010-03-25

    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology.
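
    As a toy analogue of this kind of model-driven generation (not the Memops API), one can generate a Python class with validity-checked attributes directly from a metadata description; all names below are hypothetical:

        def make_class(name, fields):
            """Generate a class with type-checked attributes from a metadata
            description: `fields` maps attribute names to required types."""
            def __init__(self, **kwargs):
                for attr, typ in fields.items():
                    value = kwargs[attr]
                    if not isinstance(value, typ):
                        raise TypeError(f"{name}.{attr} must be {typ.__name__}")
                    setattr(self, attr, value)
            return type(name, (object,), {"__init__": __init__})

        # Hypothetical NMR-flavoured model element:
        Peak = make_class("Peak", {"position": float, "height": float})
        p = Peak(position=7.2, height=1.5e6)  # input parsing + validity checking for free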

  3. Performance Evaluation of LDPC Coding and Iterative Decoding System in BPM R/W Channel Affected by Head Field Gradient, Media SFD and Demagnetization Field

    NASA Astrophysics Data System (ADS)

    Nakamura, Yasuaki; Okamoto, Yoshihiro; Osawa, Hisashi; Aoi, Hajime; Muraoka, Hiroaki

    We evaluate the write-margin performance of the low-density parity-check (LDPC) coding and iterative decoding system in the bit-patterned media (BPM) R/W channel affected by the write-head field gradient, the media switching field distribution (SFD), the demagnetization field from adjacent islands, and the island position deviation. It is clarified that the LDPC coding and iterative decoding system in an R/W channel using BPM at 3 Tbit/inch2 has a write-margin of about 20%.

  4. Automatic Residential/Commercial Classification of Parcels with Solar Panel Detections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morton, April M; Omitaomu, Olufemi A; Kotikot, Susan

    This software provides a computational method to automatically detect solar panels on rooftops to aid policy and financial assessment of solar distributed generation. The code automatically classifies parcels containing solar panels in the U.S. as residential or commercial. The code allows the user to specify an input dataset containing parcels and detected solar panels, and then uses information about the parcels and solar panels to automatically classify the rooftops as residential or commercial using machine learning techniques. The zip file containing the code includes sample input and output datasets for the Boston and DC areas.
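
    The record does not name the learning algorithm; a hedged sketch of such residential/commercial classification, assuming hypothetical parcel features and scikit-learn's random forest, could look like:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # Hypothetical features per parcel: [parcel_area_m2, building_area_m2, n_panels]
        X_train = np.array([[600, 150, 1], [800, 200, 2],
                            [5000, 2000, 40], [7000, 2500, 55]])
        y_train = ["residential", "residential", "commercial", "commercial"]

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X_train, y_train)
        print(clf.predict([[750, 180, 2]]))  # -> ['residential']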

  5. C2M: Configurable Chemical Middleware

    PubMed Central

    Roosendaal, Hans E.; Geurts, Peter A. T. M.

    2001-01-01

    One of the vexing problems that besets concurrent use of multiple, heterogeneous resources is format multiplicity. C2M aims to equip scientists with a wrapper generator on their desktop. The wrapper generator can build wrappers, or converters that can convert data from or into different formats, from a high-level description of the formats. The language in which such a high-level description is expressed is easy enough for scientists to be able to write format descriptions at minimal cost. In C2M, wrappers and documentation for human reading are automatically obtained from the same user-supplied specifications. Initial experiments demonstrate that the idea can, indeed, lead to the advent of user-governed wrapper generators. Future research will consolidate the code and extend the approach to a realistic variety of formats. PMID:18628869
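
    As a toy analogue of generating a wrapper from a high-level format description (C2M's description language is not reproduced here), a converter can be built from a declarative field specification; the names and the format are hypothetical:

        # Declarative format description: each field is (name, converter).
        # A "wrapper" (line parser) is generated from the description.
        XYZ_FORMAT = [("symbol", str), ("x", float), ("y", float), ("z", float)]

        def make_reader(spec):
            def read_line(line):
                tokens = line.split()
                return {name: conv(tok) for (name, conv), tok in zip(spec, tokens)}
            return read_line

        read_xyz = make_reader(XYZ_FORMAT)
        print(read_xyz("C 0.0 1.4 -0.2"))
        # -> {'symbol': 'C', 'x': 0.0, 'y': 1.4, 'z': -0.2}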

  6. An examination of writing pauses in the handwriting of children with developmental coordination disorder.

    PubMed

    Prunty, Mellissa M; Barnett, Anna L; Wilmut, Kate; Plumb, Mandy S

    2014-11-01

    Difficulties with handwriting are reported as one of the main reasons for the referral of children with Developmental Coordination Disorder (DCD) to healthcare professionals. In a recent study we found that children with DCD produced less text than their typically developing (TD) peers and paused for 60% of a free-writing task. However, little is known about the nature of the pausing; whether they are long pauses possibly due to higher level processes of text generation or fatigue, or shorter pauses related to the movements between letters. This gap in the knowledge-base creates barriers to understanding the handwriting difficulties in children with DCD. The aim of this study was to characterise the pauses observed in the handwriting of English children with and without DCD. Twenty-eight 8-14 year-old children with a diagnosis of DCD participated in the study, with 28 TD age and gender matched controls. Participants completed the 10 min free-writing task from the Detailed Assessment of Speed of Handwriting (DASH) on a digitising writing tablet. The total overall percentage of pausing during the task was categorised into four pause time-frames, each derived from the literature on writing (250 ms to 2 s; 2-4 s; 4-10 s and >10 s). In addition, the location of the pauses was coded (within word/between word) to examine where the breakdown in the writing process occurred. The results indicated that the main group difference was driven by more pauses above 10 s in the DCD group. In addition, the DCD group paused more within words compared to TD peers, indicating a lack of automaticity in their handwriting. These findings may support the provision of additional time for children with DCD in written examinations. More importantly, they emphasise the need for intervention in children with DCD to promote the acquisition of efficient handwriting skill. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  7. Guided Writing Lessons: Second-Grade Students' Development of Strategic Behavior

    ERIC Educational Resources Information Center

    Gibson, Sharan A.

    2008-01-01

    This study describes intra-individual change in strategic behavior of five second-grade students during three months of guided writing instruction for informational text. Data sources included sequential coding of writing behavior from videotaped writing events and analytic assessment of writing products. Students' development of self-scaffolding…

  8. Second Language Writing Classification System Based on Word-Alignment Distribution

    ERIC Educational Resources Information Center

    Kotani, Katsunori; Yoshimi, Takehiko

    2010-01-01

    The present paper introduces an automatic classification system for assisting second language (L2) writing evaluation. This system, which classifies sentences written by L2 learners as either native speaker-like or learner-like sentences, is constructed by machine learning algorithms using word-alignment distributions as classification features…

  9. Automatic Summary Assessment for Intelligent Tutoring Systems

    ERIC Educational Resources Information Center

    He, Yulan; Hui, Siu Cheung; Quan, Tho Thanh

    2009-01-01

    Summary writing is an important part of many English Language Examinations. As grading students' summary writings is a very time-consuming task, computer-assisted assessment will help teachers carry out the grading more effectively. Several techniques such as latent semantic analysis (LSA), n-gram co-occurrence and BLEU have been proposed to…

  10. A high-speed linear algebra library with automatic parallelism

    NASA Technical Reports Server (NTRS)

    Boucher, Michael L.

    1994-01-01

    Parallel or distributed processing is key to getting the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely small even though there are numerous computationally demanding programs that would significantly benefit from the application of parallel processing. This paper describes DSSLIB, a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.

  11. Automatic Fitting of Spiking Neuron Models to Electrophysiological Recordings

    PubMed Central

    Rossant, Cyrille; Goodman, Dan F. M.; Platkiewicz, Jonathan; Brette, Romain

    2010-01-01

    Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models. PMID:20224819
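
    The authors' library builds on Brian and GPUs; a minimal CPU-only sketch of the underlying idea (all parameter values hypothetical) simulates a leaky integrate-and-fire model over a small parameter grid and keeps the parameters whose spike train best matches the target:

        import numpy as np

        def lif_spikes(I, dt, tau, R, v_thresh, v_reset=0.0):
            """Simulate a leaky integrate-and-fire neuron; return spike indices."""
            v, spikes = 0.0, []
            for t, i_t in enumerate(I):
                v += dt / tau * (-v + R * i_t)
                if v >= v_thresh:
                    spikes.append(t)
                    v = v_reset
            return np.array(spikes)

        def coincidence(a, b, window=5):
            """Crude match score: fraction of spikes in `a` near a spike in `b`."""
            if len(a) == 0:
                return 0.0
            return np.mean([np.any(np.abs(b - s) <= window) for s in a])

        np.random.seed(0)
        I = np.random.rand(2000)                      # stand-in injected current
        target = lif_spikes(I, 0.1, 10.0, 4.0, 1.0)   # pretend this is the recording
        best = max(((tau, R) for tau in (5.0, 10.0, 20.0) for R in (1.0, 2.0, 4.0)),
                   key=lambda p: coincidence(lif_spikes(I, 0.1, *p, 1.0), target))
        print(best)  # grid search in place of the paper's GPU-parallel optimization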

  12. Channel modeling, signal processing and coding for perpendicular magnetic recording

    NASA Astrophysics Data System (ADS)

    Wu, Zheng

    With the increasing areal density in magnetic recording systems, perpendicular recording has replaced longitudinal recording to overcome the superparamagnetic limit. Studies on perpendicular recording channels including aspects of channel modeling, signal processing and coding techniques are presented in this dissertation. To optimize a high density perpendicular magnetic recording system, one needs to know the tradeoffs between various components of the system including the read/write transducers, the magnetic medium, and the read channel. We extend the work by Chaichanavong on the parameter optimization for systems via design curves. Different signal processing and coding techniques are studied. Information-theoretic tools are utilized to determine the acceptable region for the channel parameters when optimal detection and linear coding techniques are used. Our results show that a considerable gain can be achieved by the optimal detection and coding techniques. The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) is one of them. The signal distortion induced by NLTS can be reduced by write precompensation during data recording. We numerically evaluate the effect of NLTS on the read-back signal and examine the effectiveness of several write precompensation schemes in combating NLTS in a channel characterized by both transition jitter noise and additive white Gaussian electronics noise. We also present an analytical method to estimate the bit-error-rate and use it to help determine the optimal write precompensation values in multi-level precompensation schemes. We propose a mean-adjusted pattern-dependent noise predictive (PDNP) detection algorithm for use on the channel with NLTS. We show that this detector can offer significant improvements in bit-error-rate (BER) compared to conventional Viterbi and PDNP detectors. Moreover, the system performance can be further improved by combining the new detector with a simple write precompensation scheme. Soft-decision decoding for algebraic codes can improve performance for magnetic recording systems. In this dissertation, we propose two soft-decision decoding methods for tensor-product parity codes. We also present a list decoding algorithm for generalized error locating codes.
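
    Write precompensation, one of the techniques studied, is easy to caricature: a transition written shortly after another is pulled early by NLTS, and precompensation deliberately delays it. A toy sketch with an illustrative shift model (not the dissertation's channel model):

        # Toy illustration of nonlinear transition shift (NLTS) and write
        # precompensation; the shift model and all numbers are illustrative only.
        def transition_times(bits, T=1.0, nlts=0.15, precomp=0.0):
            """Return written transition times; a transition immediately following
            another is shifted early by `nlts`; precompensation delays it."""
            times, prev, level = [], None, 0
            for k, b in enumerate(bits):
                if b != level:                       # a magnetic transition is written
                    t = k * T
                    if prev is not None and t - prev <= T:
                        t += -nlts + precomp         # NLTS pulls early; precomp pushes back
                    times.append(t)
                    prev, level = t, b
            return times

        bits = [0, 1, 0, 1, 1, 0]
        print(transition_times(bits))                # distorted positions
        print(transition_times(bits, precomp=0.15))  # precompensated ~ nominal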

  13. Evidence-Based Reading and Writing Assessment for Dyslexia in Adolescents and Young Adults

    PubMed Central

    Nielsen, Kathleen; Abbott, Robert; Griffin, Whitney; Lott, Joe; Raskind, Wendy; Berninger, Virginia W.

    2016-01-01

    The same working memory and reading and writing achievement phenotypes (behavioral markers of genetic variants) validated in prior research with younger children and older adults in a multi-generational family genetics study of dyslexia were used to study 81 adolescents and young adults (ages 16 to 25) from that study. Dyslexia is defined by word reading and spelling skills below both the population mean and the individual's ability to use oral language to express thinking. These working memory predictor measures were given and used to predict reading and writing achievement: Coding (storing and processing) heard and spoken words (phonological coding), read and written words (orthographic coding), base words and affixes (morphological coding), and accumulating words over time (syntax coding); Cross-Code Integration (phonological loop for linking phonological name and orthographic letter codes, and orthographic loop for linking orthographic letter codes and finger sequencing codes); and Supervisory Attention (focused and switching attention and self-monitoring during written word finding). Multiple regressions showed that most predictors explained individual differences in at least one reading or writing outcome, but which predictors explained unique variance beyond shared variance depended on the outcome. ANOVAs confirmed that research-supported criteria for dyslexia validated for younger children and their parents could be used to diagnose which adolescents and young adults did (n=31) or did not (n=50) meet research criteria for dyslexia. Findings are discussed in reference to the heterogeneity of phenotypes (behavioral markers of genetic variables) and their application to assessment for accommodations and ongoing instruction for adolescents and young adults with dyslexia. PMID:26855554

  14. Graphonomics, Automaticity and Handwriting Assessment

    ERIC Educational Resources Information Center

    Tucha, Oliver; Tucha, Lara; Lange, Klaus W.

    2008-01-01

    A recent review of handwriting research in "Literacy" concluded that current curricula of handwriting education focus too much on writing style and neatness and neglect the aspect of handwriting automaticity. This conclusion is supported by evidence in the field of graphonomic research, where a range of experiments have been used to investigate…

  15. Writing with Parents in Response to Picture Book Read Alouds

    ERIC Educational Resources Information Center

    DeFauw, Danielle L.

    2017-01-01

    High-quality writing instruction needs to permeate elementary students' in- and outside-of-school experiences. The aim of this research was to explore how teaching writing to parents may support home-school literacy connections. This qualitative case study explored parents' experiences in interactive writing sessions. The descriptive coding and…

  16. On Writing and Handwriting

    ERIC Educational Resources Information Center

    Kucera, Miloš

    2010-01-01

    Writing is often considered secondary to the spoken language, as it is only coded sound-by-sound. But other scholars have demonstrated that writing is similar to "arithmetic": a cognitive structuring, a shift to the meta-level ("for the eye"). "Handwriting" (referred to here as the cursive writing in the sense of…

  17. How to Write: A Barely Annotated Bibliography. Research Report.

    ERIC Educational Resources Information Center

    Miller, Lance A.

    The references in this bibliography tend toward practical or "how to" strategies for writing. The 718 references are listed alphabetically in the bibliography section, with each citation followed by a code denoting its topical categories: (1) general "how to write," (2) "how to write" business letters, (3) stylistics,…

  18. New Developments in Modeling MHD Systems on High Performance Computing Architectures

    NASA Astrophysics Data System (ADS)

    Germaschewski, K.; Raeder, J.; Larson, D. J.; Bhattacharjee, A.

    2009-04-01

    Modeling the wide range of time and length scales present even in fluid models of plasmas like MHD and X-MHD (Extended MHD including two-fluid effects like the Hall term, electron inertia, and the electron pressure gradient) is challenging even on state-of-the-art supercomputers. In recent years, HPC capacity has continued to grow exponentially, but at the expense of making computer systems more and more difficult to program for maximum performance. In this paper, we present a new approach to managing the complexity caused by the need to write efficient codes: separating the numerical description of the problem, in our case a discretized right-hand side (r.h.s.), from the actual implementation of efficiently evaluating it. The r.h.s. is described in a quasi-symbolic form, while the translation into efficient and parallelized code is left to an automatic code generator. We implemented this approach for OpenGGCM (Open General Geospace Circulation Model), a model of the Earth's magnetosphere, which was accelerated by a factor of three on regular x86 architecture and a factor of 25 on the Cell BE architecture (commonly known for its deployment in Sony's PlayStation 3).
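
    The same separation can be sketched in miniature with SymPy: state the discretized r.h.s. quasi-symbolically and let the library generate a fast vectorized implementation. The stencil below is an illustrative 1-D advection term, not OpenGGCM's r.h.s.:

        import numpy as np
        import sympy as sp

        # Quasi-symbolic description of a 1-D advection r.h.s.: du/dt = -c du/dx,
        # discretized with a centered difference (illustrative only).
        u_l, u_r, c, dx = sp.symbols("u_l u_r c dx")
        rhs_expr = -c * (u_r - u_l) / (2 * dx)

        # "Code generation": translate the symbolic stencil into a NumPy function.
        rhs = sp.lambdify((u_l, u_r, c, dx), rhs_expr, "numpy")

        u = np.sin(np.linspace(0, 2 * np.pi, 64))
        dudt = rhs(np.roll(u, 1), np.roll(u, -1), 1.0, 0.1)  # whole-array stencil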

  19. qtcm 0.1.2: A Python Implementation of the Neelin-Zeng Quasi-Equilibrium Tropical Circulation model

    NASA Astrophysics Data System (ADS)

    Lin, J. W.-B.

    2008-10-01

    Historically, climate models have been developed incrementally and in compiled languages like Fortran. While the use of legacy compiled languages results in fast, time-tested code, the resulting model is limited in its modularity and cannot take advantage of functionality available with modern computer languages. Here we describe an effort at using the open-source, object-oriented language Python to create more flexible climate models: the package qtcm, a Python implementation of the intermediate-level Neelin-Zeng Quasi-Equilibrium Tropical Circulation model (QTCM1) of the atmosphere. The qtcm package retains the core numerics of QTCM1, written in Fortran to optimize model performance, but uses Python structures and utilities to wrap the QTCM1 Fortran routines and manage model execution. The resulting "mixed language" modeling package allows order and choice of subroutine execution to be altered at run time, and model analysis and visualization to be integrated interactively with model execution at run time. This flexibility facilitates more complex scientific analysis using less complex code than would be possible using traditional languages alone, and provides tools to transform the traditional "formulate hypothesis → write and test code → run model → analyze results" sequence into a feedback loop that can be executed automatically by the computer.
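
    A generic sketch of this mixed-language pattern (not qtcm's actual build system) uses NumPy's f2py to wrap a Fortran kernel for use from Python; the module and subroutine names are hypothetical:

        # ! saxpy.f90 -- Fortran does the numerics:
        # subroutine saxpy(a, x, y, n)
        #   integer :: n
        #   real(8), intent(in) :: a, x(n)
        #   real(8), intent(inout) :: y(n)
        #   y = a * x + y
        # end subroutine
        #
        # Build the extension module once:  python -m f2py -c saxpy.f90 -m kernels
        import numpy as np
        import kernels  # the f2py-generated module (assumes the build step above)

        x = np.arange(4, dtype=np.float64)
        y = np.ones(4)
        kernels.saxpy(2.0, x, y)  # the array length is inferred by the wrapper
        print(y)                  # Python manages execution: [1. 3. 5. 7.]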

  20. qtcm 0.1.2: a Python implementation of the Neelin-Zeng Quasi-Equilibrium Tropical Circulation Model

    NASA Astrophysics Data System (ADS)

    Lin, J. W.-B.

    2009-02-01

    Historically, climate models have been developed incrementally and in compiled languages like Fortran. While the use of legacy compiled languages results in fast, time-tested code, the resulting model is limited in its modularity and cannot take advantage of functionality available with modern computer languages. Here we describe an effort at using the open-source, object-oriented language Python to create more flexible climate models: the package qtcm, a Python implementation of the intermediate-level Neelin-Zeng Quasi-Equilibrium Tropical Circulation model (QTCM1) of the atmosphere. The qtcm package retains the core numerics of QTCM1, written in Fortran to optimize model performance, but uses Python structures and utilities to wrap the QTCM1 Fortran routines and manage model execution. The resulting "mixed language" modeling package allows order and choice of subroutine execution to be altered at run time, and model analysis and visualization to be integrated interactively with model execution at run time. This flexibility facilitates more complex scientific analysis using less complex code than would be possible using traditional languages alone, and provides tools to transform the traditional "formulate hypothesis → write and test code → run model → analyze results" sequence into a feedback loop that can be executed automatically by the computer.

  1. A Python Implementation of an Intermediate-Level Tropical Circulation Model and Implications for How Modeling Science is Done

    NASA Astrophysics Data System (ADS)

    Lin, J. W. B.

    2015-12-01

    Historically, climate models have been developed incrementally and in compiled languages like Fortran. While the use of legacy compiled languages results in fast, time-tested code, the resulting model is limited in its modularity and cannot take advantage of functionality available with modern computer languages. Here we describe an effort at using the open-source, object-oriented language Python to create more flexible climate models: the package qtcm, a Python implementation of the intermediate-level Neelin-Zeng Quasi-Equilibrium Tropical Circulation model (QTCM1) of the atmosphere. The qtcm package retains the core numerics of QTCM1, written in Fortran to optimize model performance, but uses Python structures and utilities to wrap the QTCM1 Fortran routines and manage model execution. The resulting "mixed language" modeling package allows order and choice of subroutine execution to be altered at run time, and model analysis and visualization to be integrated interactively with model execution at run time. This flexibility facilitates more complex scientific analysis using less complex code than would be possible using traditional languages alone, and provides tools to transform the traditional "formulate hypothesis → write and test code → run model → analyze results" sequence into a feedback loop that can be executed automatically by the computer.

  2. Neo: an object model for handling electrophysiology data in multiple formats

    PubMed Central

    Garcia, Samuel; Guarino, Domenico; Jaillet, Florent; Jennings, Todd; Pröpper, Robert; Rautenberg, Philipp L.; Rodgers, Chris C.; Sobolev, Andrey; Wachtler, Thomas; Yger, Pierre; Davison, Andrew P.

    2014-01-01

    Neuroscientists use many different software tools to acquire, analyze and visualize electrophysiological signals. However, incompatible data models and file formats make it difficult to exchange data between these tools. This reduces scientific productivity, renders potentially useful analysis methods inaccessible and impedes collaboration between labs. A common representation of the core data would improve interoperability and facilitate data-sharing. To that end, we propose here a language-independent object model, named “Neo,” suitable for representing data acquired from electroencephalographic, intracellular, or extracellular recordings, or generated from simulations. As a concrete instantiation of this object model we have developed an open source implementation in the Python programming language. In addition to representing electrophysiology data in memory for the purposes of analysis and visualization, the Python implementation provides a set of input/output (IO) modules for reading/writing the data from/to a variety of commonly used file formats. Support is included for formats produced by most of the major manufacturers of electrophysiology recording equipment and also for more generic formats such as MATLAB. Data representation and data analysis are conceptually separate: it is easier to write robust analysis code if it is focused on analysis and relies on an underlying package to handle data representation. For that reason, and also to be as lightweight as possible, the Neo object model and the associated Python package are deliberately limited to representation of data, with no functions for data analysis or visualization. Software for neurophysiology data analysis and visualization built on top of Neo automatically gains the benefits of interoperability, easier data sharing and automatic format conversion; there is already a burgeoning ecosystem of such tools. We intend that Neo should become the standard basis for Python tools in neurophysiology. PMID:24600386
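
    A minimal sketch of the Neo object model in use (exact API details vary across Neo versions, and the IO call at the end assumes a hypothetical file path):

        import numpy as np
        import quantities as pq
        import neo

        # Build the core Neo containers: a Block holds Segments,
        # and a Segment holds the recorded signals.
        block = neo.Block(name="session-1")
        segment = neo.Segment(name="trial-1")
        block.segments.append(segment)

        # A 1-second, 1 kHz membrane-potential trace as an AnalogSignal.
        sig = neo.AnalogSignal(np.random.randn(1000, 1) * pq.mV,
                               sampling_rate=1 * pq.kHz)
        segment.analogsignals.append(sig)

        # Any Neo IO class writes the same in-memory objects to its format, e.g.:
        # neo.NixIO("session.nix").write_block(block)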

  3. Neo: an object model for handling electrophysiology data in multiple formats.

    PubMed

    Garcia, Samuel; Guarino, Domenico; Jaillet, Florent; Jennings, Todd; Pröpper, Robert; Rautenberg, Philipp L; Rodgers, Chris C; Sobolev, Andrey; Wachtler, Thomas; Yger, Pierre; Davison, Andrew P

    2014-01-01

    Neuroscientists use many different software tools to acquire, analyze and visualize electrophysiological signals. However, incompatible data models and file formats make it difficult to exchange data between these tools. This reduces scientific productivity, renders potentially useful analysis methods inaccessible and impedes collaboration between labs. A common representation of the core data would improve interoperability and facilitate data-sharing. To that end, we propose here a language-independent object model, named "Neo," suitable for representing data acquired from electroencephalographic, intracellular, or extracellular recordings, or generated from simulations. As a concrete instantiation of this object model we have developed an open source implementation in the Python programming language. In addition to representing electrophysiology data in memory for the purposes of analysis and visualization, the Python implementation provides a set of input/output (IO) modules for reading/writing the data from/to a variety of commonly used file formats. Support is included for formats produced by most of the major manufacturers of electrophysiology recording equipment and also for more generic formats such as MATLAB. Data representation and data analysis are conceptually separate: it is easier to write robust analysis code if it is focused on analysis and relies on an underlying package to handle data representation. For that reason, and also to be as lightweight as possible, the Neo object model and the associated Python package are deliberately limited to representation of data, with no functions for data analysis or visualization. Software for neurophysiology data analysis and visualization built on top of Neo automatically gains the benefits of interoperability, easier data sharing and automatic format conversion; there is already a burgeoning ecosystem of such tools. We intend that Neo should become the standard basis for Python tools in neurophysiology.

  4. Writing analytic element programs in Python.

    PubMed

    Bakker, Mark; Kelson, Victor A

    2009-01-01

    The analytic element method is a mesh-free approach for modeling ground water flow at both the local and the regional scale. With the advent of the Python object-oriented programming language, it has become relatively easy to write analytic element programs. In this article, an introduction is given of the basic principles of the analytic element method and of the Python programming language. A simple, yet flexible, object-oriented design is presented for analytic element codes using multiple inheritance. New types of analytic elements may be added without the need for any changes in the existing part of the code. The presented code may be used to model flow to wells (with either a specified discharge or drawdown) and streams (with a specified head). The code may be extended by any hydrogeologist with a healthy appetite for writing computer code to solve more complicated ground water flow problems. Copyright © 2009 The Author(s). Journal Compilation © 2009 National Ground Water Association.
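
    A condensed sketch of such a design (not the authors' code): each element contributes a complex potential, and the model superposes the contributions; a well and uniform flow are classic examples:

        import numpy as np

        class Element:
            def omega(self, z):
                """Complex potential contribution at point z."""
                raise NotImplementedError

        class Well(Element):
            def __init__(self, zw, Q):
                self.zw, self.Q = zw, Q       # well location (complex) and discharge
            def omega(self, z):
                return self.Q / (2 * np.pi) * np.log(z - self.zw)

        class UniformFlow(Element):
            def __init__(self, Q0):
                self.Q0 = Q0                  # uniform flow in the x-direction
            def omega(self, z):
                return -self.Q0 * z

        class Model:
            def __init__(self, elements):
                self.elements = elements
            def potential(self, x, y):
                """Superpose all element contributions (real part)."""
                z = x + 1j * y
                return sum(e.omega(z) for e in self.elements).real

        m = Model([UniformFlow(0.01), Well(100 + 50j, 200.0)])
        print(m.potential(80.0, 40.0))

    New element types are added by subclassing Element, mirroring the article's point that the existing code needs no changes.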

  5. Support for Debugging Automatically Parallelized Programs

    NASA Technical Reports Server (NTRS)

    Hood, Robert; Jost, Gabriele

    2001-01-01

    This viewgraph presentation provides information on tool support available for the automatic parallelization of computer programs. CAPTools, a support tool developed at the University of Greenwich, transforms, with user guidance, existing sequential Fortran code into parallel message-passing code. Comparison routines are then run for debugging purposes, in essence ensuring that the code transformation was accurate.

  6. Automatic Coding of Dialogue Acts in Collaboration Protocols

    ERIC Educational Resources Information Center

    Erkens, Gijsbert; Janssen, Jeroen

    2008-01-01

    Although protocol analysis can be an important tool for researchers to investigate the process of collaboration and communication, the use of this method of analysis can be time-consuming. Hence, an automatic procedure for coding dialogue acts was developed. This procedure helps to determine the communicative function of messages in online…

  7. Automatic Generation and Ranking of Questions for Critical Review

    ERIC Educational Resources Information Center

    Liu, Ming; Calvo, Rafael A.; Rus, Vasile

    2014-01-01

    Critical review skill is one important aspect of academic writing. Generic trigger questions have been widely used to support this activity. When students have a concrete topic in mind, trigger questions are less effective if they are too general. This article presents a learning-to-rank based system which automatically generates specific trigger…

  8. Automated Essay Feedback Generation and Its Impact on Revision

    ERIC Educational Resources Information Center

    Liu, Ming; Li, Yi; Xu, Weiwei; Liu, Li

    2017-01-01

    Writing an essay is a very important skill for students to master, but a difficult task for them to overcome. It is particularly true for English as Second Language (ESL) students in China. It would be very useful if students could receive timely and effective feedback about their writing. Automatic essay feedback generation is a challenging task,…

  9. Down the Rabbit Hole: Challenges and Methodological Recommendations in Researching Writing-Related Student Dispositions

    ERIC Educational Resources Information Center

    Driscoll, Dana Lynn; Gorzelsky, Gwen; Wells, Jennifer; Hayes, Carol; Jones, Ed; Salchak, Steve

    2017-01-01

    Researching writing-related dispositions is of critical concern for understanding writing transfer and writing development. However, as a field we need better tools and methods for identifying, tracking, and analyzing dispositions. This article describes a failed attempt to code for five key dispositions (attribution, self-efficacy, persistence,…

  10. ASA24 enables multiple automatically coded self-administered 24-hour recalls and food records

    Cancer.gov

    A freely available web-based tool for epidemiologic, interventional, behavioral, or clinical research from NCI that enables multiple automatically coded self-administered 24-hour recalls and food records.

  11. Ways with Data: Understanding Coding as Writing

    ERIC Educational Resources Information Center

    Lindgren, Chris

    2017-01-01

    In this dissertation, I report findings from an exploratory case-study about Ray, a web developer, who works on a data-driven news team that finds and tells compelling stories with large sets of data. I implicate this case of Ray's coding on a data team in a writing studies epistemology, which is guided by the following question: "What might…

  12. Coding conventions and principles for a National Land-Change Modeling Framework

    USGS Publications Warehouse

    Donato, David I.

    2017-07-14

    This report establishes specific rules for writing computer source code for use with the National Land-Change Modeling Framework (NLCMF). These specific rules consist of conventions and principles for writing code primarily in the C and C++ programming languages. Collectively, these coding conventions and coding principles create an NLCMF programming style. In addition to detailed naming conventions, this report provides general coding conventions and principles intended to facilitate the development of high-performance software implemented with code that is extensible, flexible, and interoperable. Conventions for developing modular code are explained in general terms and also enabled and demonstrated through the appended templates for C++ base source-code and header files. The NLCMF limited-extern approach to module structure, code inclusion, and cross-module access to data is explained in the text and then illustrated through the module templates. Advice on the use of global variables is provided.

  13. Using CASE tools to write engineering specifications

    NASA Astrophysics Data System (ADS)

    Henry, James E.; Howard, Robert W.; Iveland, Scott T.

    1993-08-01

    There are always a wide variety of obstacles to writing and maintaining engineering documentation. To combat these problems, documentation generation can be linked to the process of engineering development. The same graphics and communication tools used for structured system analysis and design (SSA/SSD) also form the basis for the documentation. The goal is to build a living document, such that as an engineering design changes, the documentation will `automatically' revise. `Automatic' is qualified by the need to maintain textual descriptions associated with the SSA/SSD graphics, and the need to generate new documents. This paper describes a methodology and a computer aided system engineering toolset that enables a relatively seamless transition into document generation for the development engineering team.

  14. Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder

    NASA Technical Reports Server (NTRS)

    Staats, Matt

    2009-01-01

    We present work on a prototype tool based on the JavaPathfinder (JPF) model checker for automatically generating tests satisfying the MC/DC code coverage criterion. Using the Eclipse IDE, developers and testers can quickly instrument Java source code with JPF annotations covering all MC/DC coverage obligations, and JPF can then be used to automatically generate tests that satisfy these obligations. The prototype extension to JPF enables various tasks useful in automatic test generation to be performed, such as test suite reduction and execution of generated tests.
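
    JPF itself is Java-based; as a language-agnostic toy illustration of what an MC/DC obligation is (not JPF's algorithm), the following Python snippet finds, for each condition in a decision, a pair of inputs that differ only in that condition and flip the decision outcome:

        from itertools import product

        def mcdc_pairs(decision, n):
            """For each condition i, find an input pair differing only in
            condition i that flips the decision (the MC/DC 'independent
            effect' obligation)."""
            pairs = {}
            for i in range(n):
                for v in product([False, True], repeat=n):
                    w = list(v)
                    w[i] = not w[i]
                    w = tuple(w)
                    if decision(*v) != decision(*w):
                        pairs[i] = (v, w)
                        break
            return pairs

        # Toy decision with three conditions: (a and b) or c
        print(mcdc_pairs(lambda a, b, c: (a and b) or c, 3))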

  15. PFMCal : Photonic force microscopy calibration extended for its application in high-frequency microrheology

    NASA Astrophysics Data System (ADS)

    Butykai, A.; Domínguez-García, P.; Mor, F. M.; Gaál, R.; Forró, L.; Jeney, S.

    2017-11-01

    The present document is an update of the previously published MatLab code for the calibration of optical tweezers in the high-resolution detection of the Brownian motion of non-spherical probes [1]. In this instance, an alternative version of the original code, based on the same physical theory [2] but focused on automating the calibration of measurements with spherical probes, is outlined. The new code is useful for high-frequency microrheology studies, where the probe radius is known but the viscosity of the surrounding fluid may not be. This extended calibration methodology is automatic, without the need for a user interface. A code for calibration by means of thermal noise analysis [3] is also included; this method can be applied to viscoelastic fluids if the trap stiffness has previously been estimated [4]. The new code can be executed in MatLab and in GNU Octave.
    Program Files doi: http://dx.doi.org/10.17632/s59f3gz729.1
    Licensing provisions: GPLv3
    Programming language: MatLab 2016a (MathWorks Inc.) and GNU Octave 4.0
    Operating system: Linux and Windows.
    Supplementary material: A new document, README.pdf, includes basic running instructions for the new code.
    Journal reference of previous version: Computer Physics Communications, 196 (2015) 599
    Does the new version supersede the previous version?: No. It adds alternative but compatible code while providing similar calibration factors.
    Nature of problem: The original code uses a MatLab-provided user interface, which is not available in GNU Octave and cannot be used outside proprietary software such as MatLab. Moreover, calibration with spherical probes needs an automatic method when large amounts of different data are calibrated for microrheology.
    Solution method: The new code can be executed in the latest version of MatLab and in GNU Octave, a free and open-source alternative to MatLab. This code implements an automatic calibration process that requires only writing the input data in the main script. Additionally, we include a calibration method based on thermal noise statistics, which can be used with viscoelastic fluids if the trap stiffness has previously been estimated.
    Reasons for the new version: This version extends the functionality of PFMCal for the particular case of spherical probes and unknown fluid viscosities. The extended code is automatic, works on different operating systems, and is compatible with GNU Octave.
    Summary of revisions: The original MatLab program from the previous version, executed by PFMCal.m, is unchanged. We have added two additional main scripts, PFMCal_auto.m and PFMCal_histo.m, which implement an automatic calibration process and calibration through Boltzmann statistics, respectively. Calibration of spherical beads with this code is described in the README.pdf file provided in the new code submission. We obtain different calibration factors, β (given in μm/V), according to [2], related to two statistical quantities: the mean-squared displacement (MSD), βMSD, and the velocity autocorrelation function (VAF), βVAF. With that methodology, the trap stiffness, k, and the zero-shear viscosity of the fluid, η, can be calculated if the particle's radius, a, is known beforehand. For comparison, the extended code includes the method of calibration using the corner frequency of the power-spectral density (PSD) [5], providing a calibration factor βPSD. Furthermore, with a prior estimate of the trap stiffness and the known particle radius, thermal noise statistics yield calibration factors according to the quadratic form of the optical potential, βE, and the Gaussian distribution of the bead's positions, βσ2. This method has been demonstrated to be applicable to the calibration of optical tweezers with non-Newtonian viscoelastic polymeric liquids [4]. An example of the results of this calibration process is summarized in Table 1: using the data provided in the new code submission, for water and acetone, all calibration factors are computed with the original PFMCal.m and with the new non-GUI code PFMCal_auto.m and PFMCal_histo.m. PFMCal_auto.m returns η, k, βMSD, βVAF and βPSD, while PFMCal_histo.m provides βσ2 and βE. Table 1 shows that we obtain the expected viscosity of the two fluids at this temperature and that the different methods agree well on trap stiffnesses and calibration factors.
    Additional comments: The original code, PFMCal.m, runs under MatLab using the Statistics Toolbox. The extended code, PFMCal_auto.m and PFMCal_histo.m, can be executed without modification in MatLab or GNU Octave. The code has been tested on Linux and Windows operating systems.

  16. How to differentiate collective variables in free energy codes: Computer-algebra code generation and automatic differentiation

    NASA Astrophysics Data System (ADS)

    Giorgino, Toni

    2018-07-01

    The proper choice of collective variables (CVs) is central to biased-sampling free energy reconstruction methods in molecular dynamics simulations. The PLUMED 2 library, for instance, provides several sophisticated CV choices, implemented in a C++ framework; however, developing new CVs is still time consuming due to the need to provide code for the analytical derivatives of all functions with respect to atomic coordinates. We present two solutions to this problem, namely (a) symbolic differentiation and code generation, and (b) automatic code differentiation, in both cases leveraging open-source libraries (SymPy and Stan Math, respectively). The two approaches are demonstrated and discussed in detail implementing a realistic example CV, the local radius of curvature of a polymer. Users may use the code as a template to streamline the implementation of their own CVs using high-level constructs and automatic gradient computation.
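
    A minimal version of approach (a), for a much simpler CV than the paper's radius of curvature, is the distance between two atoms, using SymPy's differentiation and C code printer:

        import sympy as sp

        # A simple collective variable: the distance between two atoms.
        x1, y1, z1, x2, y2, z2 = sp.symbols("x1 y1 z1 x2 y2 z2")
        cv = sp.sqrt((x2 - x1)**2 + (y2 - y1)**2 + (z2 - z1)**2)

        # Analytical derivatives with respect to each atomic coordinate...
        coords = (x1, y1, z1, x2, y2, z2)
        grads = [sp.simplify(sp.diff(cv, q)) for q in coords]

        # ...and C code generation, ready to paste into a CV implementation.
        for q, g in zip(coords, grads):
            print(f"d_{q} = {sp.ccode(g)};")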

  17. Incorporating Learning Characteristics into Automatic Essay Scoring Models: What Individual Differences and Linguistic Features Tell Us about Writing Quality

    ERIC Educational Resources Information Center

    Crossley, Scott A.; Allen, Laura K.; Snow, Erica L.; McNamara, Danielle S.

    2016-01-01

    This study investigates a novel approach to automatically assessing essay quality that combines natural language processing approaches that assess text features with approaches that assess individual differences in writers such as demographic information, standardized test scores, and survey results. The results demonstrate that combining text…

  18. Collaborating with the Disability Rights Community: Co-Writing a Code of Ethics as a Vehicle for Ethics Education

    ERIC Educational Resources Information Center

    Tarvydas, Vilia; Hartley, Michael; Jang, Yoo Jin; Johnston, Sara; Moore-Grant, Nykeisha; Walker, Quiteya; O'Hanlon, Chris; Whalen, James

    2012-01-01

    An ethics project is described that challenged students to collaborate with disability rights authorities to co-write a code of ethics for a Center of Independent Living. Experiential and reflective assignments analyzed how the construction of knowledge and language is never value-neutral, and people with disabilities need to have a voice in…

  19. A review of automatic patient identification options for public health care centers with restricted budgets.

    PubMed

    García-Betances, Rebeca I; Huerta, Mónica K

    2012-01-01

    A comparative review is presented of available technologies suitable for automatic reading of patient identification bracelet tags. Existing technologies' backgrounds, characteristics, advantages and disadvantages are described in relation to their possible use by public health care centers with budgetary limitations. A comparative assessment is presented of suitable automatic identification systems based on graphic codes, both one- (1D) and two-dimensional (2D), printed on labels, as well as those based on radio frequency identification (RFID) tags. The analysis looks at the tradeoffs of these technologies to provide guidance to hospital administrators looking to deploy patient identification technology. The results suggest that affordable automatic patient identification systems can be easily and inexpensively implemented using 2D codes printed on low-cost bracelet labels, which can then be read and automatically decoded by ordinary mobile smart phones. Because of mobile smart phones' present versatility and ubiquity, the implementation and operation of 2D code, and especially Quick Response® (QR) Code, technology emerges as a very attractive alternative for automating patient identification processes in low-budget situations.
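
    The paper evaluates technologies rather than prescribing code, but the suggested deployment is cheap to sketch, e.g. with the open-source qrcode Python package (an assumption on our part, with a hypothetical identifier):

        import qrcode  # open-source package: pip install qrcode[pil]

        # Encode a (hypothetical) patient identifier as a QR code image
        # for a low-cost printed bracelet label.
        img = qrcode.make("PATIENT:0042;WARD:3B")
        img.save("bracelet_label.png")  # print, then scan with any smartphone reader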

  20. A Review of Automatic Patient Identification Options for Public Health Care Centers with Restricted Budgets

    PubMed Central

    García-Betances, Rebeca I.; Huerta, Mónica K.

    2012-01-01

    A comparative review is presented of available technologies suitable for automatic reading of patient identification bracelet tags. Existing technologies' backgrounds, characteristics, advantages and disadvantages are described in relation to their possible use by public health care centers with budgetary limitations. A comparative assessment is presented of suitable automatic identification systems based on graphic codes, both one- (1D) and two-dimensional (2D), printed on labels, as well as those based on radio frequency identification (RFID) tags. The analysis looks at the tradeoffs of these technologies to provide guidance to hospital administrators looking to deploy patient identification technology. The results suggest that affordable automatic patient identification systems can be easily and inexpensively implemented using 2D codes printed on low-cost bracelet labels, which can then be read and automatically decoded by ordinary mobile smart phones. Because of mobile smart phones' present versatility and ubiquity, the implementation and operation of 2D code, and especially Quick Response® (QR) Code, technology emerges as a very attractive alternative for automating patient identification processes in low-budget situations. PMID:23569629

  1. An Automatic Collocation Writing Assistant for Taiwanese EFL Learners: A Case of Corpus-Based NLP Technology

    ERIC Educational Resources Information Center

    Chang, Yu-Chia; Chang, Jason S.; Chen, Hao-Jan; Liou, Hsien-Chin

    2008-01-01

    Previous work in the literature reveals that EFL learners were deficient in collocations, which are a hallmark of near-native fluency in learners' writing. Among different types of collocations, the verb-noun (V-N) type was found to be particularly difficult to master, and learners' first language was also found to heavily influence their collocation…

  2. Automatic mathematical modeling for space application

    NASA Technical Reports Server (NTRS)

    Wang, Caroline K.

    1987-01-01

    A methodology for automatic mathematical modeling is described. The major objective is to create a very friendly environment for engineers to design, maintain and verify their model and also automatically convert the mathematical model into FORTRAN code for conventional computation. A demonstration program was designed for modeling the Space Shuttle Main Engine simulation mathematical model called Propulsion System Automatic Modeling (PSAM). PSAM provides a very friendly and well organized environment for engineers to build a knowledge base for base equations and general information. PSAM contains an initial set of component process elements for the Space Shuttle Main Engine simulation and a questionnaire that allows the engineer to answer a set of questions to specify a particular model. PSAM is then able to automatically generate the model and the FORTRAN code. A future goal is to download the FORTRAN code to the VAX/VMS system for conventional computation.
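
    SymPy's Fortran printer offers a loose modern analogue of this symbolic-model-to-FORTRAN step (the equation below is a stand-in, not the SSME model; exact printed output depends on the SymPy version):

        import sympy as sp

        # Stand-in model equation: thrust from mass flow, exhaust velocity,
        # and pressure terms.
        mdot, ve, pe, pa, Ae = sp.symbols("mdot ve pe pa Ae")
        thrust = mdot * ve + (pe - pa) * Ae

        # Automatic conversion of the symbolic model into Fortran code:
        print(sp.fcode(thrust, assign_to="F", source_format="free"))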

  3. Psychiatric/ psychological forensic report writing.

    PubMed

    Young, Gerald

    Approaches to forensic report writing in psychiatry, psychology, and related mental health disciplines have moved from an organization, content, and stylistic framework to considering ethical and other codes, evidentiary standards, and practice considerations. The first part of the article surveys different approaches to forensic report writing, including that of forensic mental health assessment and psychiatric ethics. The second part deals especially with psychological ethical approaches. The American Psychological Association's Ethical Principles and Code of Conduct (2002) provide one set of principles on which to base forensic report writing. The U.S. Federal Rules of Evidence (2014) and related state rules provide another basis. The American Psychological Association's Specialty Guidelines for Forensic Psychology (2013) provide a third source. Some work has expanded the principles in ethics codes; and, in the third part of this article, these additions are applied to forensic report writing. Other work that could help with the question of forensic report writing concerns the 4 Ds in psychological injury assessments (e.g., conduct oneself with Dignity, avoid the adversary Divide, get the needed reliable Data, Determine interpretations and conclusions judiciously). One overarching ethical principle that is especially applicable in forensic report writing is to be comprehensive, scientific, and impartial. As applied to forensic report writing, the overall principle that applies is that the work process and product should reflect integrity in its ethics, law, and science. Four principles that derive from this meta-principle concern: Competency and Communication; Procedure and Protection; Dignity and Distance; and Data Collection and Determination. The standards or rules associated with each of these principles are reviewed. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  4. Effective Beginning Handwriting Instruction: Multi-modal, Consistent Format for 2 Years, and Linked to Spelling and Composing.

    PubMed

    Wolf, Beverly; Abbott, Robert D; Berninger, Virginia W

    2017-02-01

In Study 1, the treatment group (N = 33 first graders, M = 6 years 10 months, 16 girls) received Slingerland multi-modal (auditory, visual, tactile, motor through hand, and motor through mouth) manuscript (unjoined) handwriting instruction embedded in systematic spelling, reading, and composing lessons; and the control group (N = 16 first graders, M = 7 years 1 month, 7 girls) received manuscript handwriting instruction not systematically related to the other literacy activities. ANOVA showed both groups improved on automatic alphabet writing from memory; but ANCOVA with the automatic alphabet writing task as covariate showed that the treatment group improved significantly more than the control group from the second to ninth month of first grade on dictated spelling and recognition of word-specific spellings among phonological foils. In Study 2 new groups received either a second year of manuscript (N = 29, M = 7 years 8 months, 16 girls) or introduction to cursive (joined) instruction in second grade (N = 24, M = 8 years 0 months, 11 girls) embedded in the Slingerland literacy program. ANCOVA with automatic alphabet writing as covariate showed that those who received a second year of manuscript handwriting instruction improved more on sustained handwriting over 30, 60, and 90 seconds than those who had had only one year of manuscript instruction; both groups improved in spelling and composing from the second to ninth month of second grade. Results are discussed in reference to mastering one handwriting format before introducing another format at a higher grade level and always embedding handwriting instruction in writing and reading instruction aimed at all levels of language.

  5. Effective Beginning Handwriting Instruction: Multi-modal, Consistent Format for 2 Years, and Linked to Spelling and Composing

    PubMed Central

    Wolf, Beverly; Abbott, Robert D.; Berninger, Virginia W.

    2016-01-01

    In Study 1, the treatment group (N = 33 first graders, M = 6 years 10 months, 16 girls) received Slingerland multi-modal (auditory, visual, tactile, motor through hand, and motor through mouth) manuscript (unjoined) handwriting instruction embedded in systematic spelling, reading, and composing lessons; and the control group (N =16 first graders, M = 7 years 1 month, 7 girls) received manuscript handwriting instruction not systematically related to the other literacy activities. ANOVA showed both groups improved on automatic alphabet writing from memory; but ANCOVA with the automatic alphabet writing task as covariate showed that the treatment group improved significantly more than control group from the second to ninth month of first grade on dictated spelling and recognition of word-specific spellings among phonological foils. In Study 2 new groups received either a second year of manuscript (N = 29, M = 7 years 8 months, 16 girls) or introduction to cursive (joined) instruction in second grade (N = 24, M = 8 years 0 months, 11 girls) embedded in the Slingerland literacy program. ANCOVA with automatic alphabet writing as covariate showed that those who received a second year of manuscript handwriting instruction improved more on sustained handwriting over 30, 60, and 90 seconds than those who had had only one year of manuscript instruction; both groups improved in spelling and composing from the second to ninth month of second grade. Results are discussed in reference to mastering one handwriting format before introducing another format at a higher grade level and always embedding handwriting instruction in writing and reading instruction aimed at all levels of language. PMID:28190930

  6. Demonstration of Automatically-Generated Adjoint Code for Use in Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Green, Lawrence; Carle, Alan; Fagan, Mike

    1999-01-01

Gradient-based optimization requires accurate derivatives of the objective function and constraints. These gradients may have previously been obtained by manual differentiation of analysis codes, symbolic manipulators, finite-difference approximations, or existing automatic differentiation (AD) tools such as ADIFOR (Automatic Differentiation in FORTRAN). Each of these methods has certain deficiencies, particularly when applied to complex, coupled analyses with many design variables. Recently, a new AD tool called ADJIFOR (Automatic Adjoint Generation in FORTRAN), based upon ADIFOR, was developed and demonstrated. Whereas ADIFOR implements forward-mode (direct) differentiation throughout an analysis program to obtain exact derivatives via the chain rule of calculus, ADJIFOR implements the reverse-mode counterpart of the chain rule to obtain exact adjoint form derivatives from FORTRAN code. Automatically-generated adjoint versions of the widely-used CFL3D computational fluid dynamics (CFD) code and an algebraic wing grid generation code were obtained with just a few hours' processing time using the ADJIFOR tool. The codes were verified for accuracy and were shown to compute the exact gradient of the wing lift-to-drag ratio, with respect to any number of shape parameters, in about the time required for 7 to 20 function evaluations. The codes have now been executed on various computers with typical memory and disk space for problems with up to 129 x 65 x 33 grid points, and for hundreds to thousands of independent variables. These adjoint codes are now used in a gradient-based aerodynamic shape optimization problem for a swept, tapered wing. For each design iteration, the optimization package constructs an approximate, linear optimization problem, based upon the current objective function, constraints, and gradient values. The optimizer subroutines are called within a design loop employing the approximate linear problem until an optimum shape is found, the design loop limit is reached, or no further design improvement is possible due to active design variable bounds and/or constraints. The resulting shape parameters are then used by the grid generation code to define a new wing surface and computational grid. The lift-to-drag ratio and its gradient are computed for the new design by the automatically-generated adjoint codes. Several optimization iterations may be required to find an optimum wing shape. Results from two sample cases will be discussed. The reader should note that this work primarily represents a demonstration of use of automatically-generated adjoint code within an aerodynamic shape optimization. As such, little significance is placed upon the actual optimization results, relative to the method for obtaining the results.
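
    The forward/reverse distinction the abstract draws can be seen in a few lines. Below is a small, self-contained sketch (Python; the function is an arbitrary toy, not CFL3D): forward mode needs one sweep per input, while reverse mode delivers the whole gradient in a single backward sweep, which is why adjoint codes price a full gradient at only a few function evaluations.

        import math

        # Forward mode: propagate (value, derivative) pairs through the chain rule.
        def f_forward(x, y, dx, dy):
            # f(x, y) = x*y + sin(x); derivative direction seeded by (dx, dy)
            v = x * y + math.sin(x)
            dv = dx * y + x * dy + math.cos(x) * dx
            return v, dv

        # Reverse mode: one forward sweep, then accumulate adjoints backwards.
        def f_reverse(x, y):
            v = x * y + math.sin(x)
            bar_v = 1.0                          # seed adjoint of the output
            bar_x = bar_v * (y + math.cos(x))
            bar_y = bar_v * x
            return v, (bar_x, bar_y)

        # Forward mode: one pass per input; reverse mode: whole gradient at once.
        _, d_dx = f_forward(1.2, 3.4, 1.0, 0.0)
        _, d_dy = f_forward(1.2, 3.4, 0.0, 1.0)
        _, grad = f_reverse(1.2, 3.4)
        assert abs(grad[0] - d_dx) < 1e-12 and abs(grad[1] - d_dy) < 1e-12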

  7. Automatic Implementation of TTEthernet-Based Time-Triggered Avionics Applications

    NASA Astrophysics Data System (ADS)

Gorcitz, Raul Adrian; Carle, Thomas; Lesens, David; Monchaux, David; Potop-Butucaru, Dumitru; Sorel, Yves

    2015-09-01

    The design of safety-critical embedded systems such as those used in avionics still involves largely manual phases. But in avionics the definition of standard interfaces embodied in standards such as ARINC 653 or TTEthernet should allow the definition of fully automatic code generation flows that reduce the costs while improving the quality of the generated code, much like compilers have done when replacing manual assembly coding. In this paper, we briefly present such a fully automatic implementation tool, called Lopht, for ARINC653-based time-triggered systems, and then explain how it is currently extended to include support for TTEthernet networks.

  8. Java Library for Input and Output of Image Data and Metadata

    NASA Technical Reports Server (NTRS)

    Deen, Robert; Levoe, Steven

    2003-01-01

A Java-language library supports input and output (I/O) of image data and metadata (label data) in the format of the Video Image Communication and Retrieval (VICAR) image-processing software and in several similar formats, including a subset of the Planetary Data System (PDS) image file format. The library does the following: It provides a low-level direct-access layer, enabling an application subprogram to read and write specific image files, lines, or pixels, and to manipulate metadata directly. Two coding/decoding subprograms ("codecs" for short) based on the Java Advanced Imaging (JAI) software provide access to VICAR and PDS images in a file-format-independent manner. The VICAR and PDS codecs enable any program that conforms to the specification of the JAI codec to use VICAR or PDS images automatically, without specific knowledge of the VICAR or PDS format. The library also includes Image I/O plug-in subprograms for VICAR and PDS formats. Application programs that conform to the Image I/O specification of Java version 1.4 can utilize any image format for which such a plug-in subprogram exists, without specific knowledge of the format itself. Like the aforementioned codecs, the VICAR and PDS Image I/O plug-in subprograms support reading and writing of metadata.
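
    The format-independent dispatch such codecs provide can be pictured with a small registry sketch (Python here for brevity; the probe strings and reader stubs are illustrative stand-ins, not the library's actual API): each codec recognizes its own header, so callers never have to name the format.

        # Sketch of format-independent image reading via registered codecs,
        # analogous in spirit to the JAI / Image I/O plug-in mechanism.
        CODECS = []

        def register(probe, reader):
            CODECS.append((probe, reader))

        def read_image(path):
            with open(path, "rb") as fh:
                header = fh.read(16)
            for probe, reader in CODECS:
                if probe(header):        # each codec recognizes its own magic bytes
                    return reader(path)
            raise ValueError("no codec recognizes this format")

        # Registration makes format selection automatic (probe strings illustrative).
        register(lambda h: h.startswith(b"LBLSIZE="), lambda p: ("VICAR", p))
        register(lambda h: h.startswith(b"PDS_VERSION"), lambda p: ("PDS", p))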

  9. Software Writing Skills for Your Research - Lessons Learned from Workshops in the Geosciences

    NASA Astrophysics Data System (ADS)

    Hammitzsch, Martin

    2016-04-01

Findings presented in scientific papers are based on data and software. Once in a while they come along with the data - but not commonly with the software. However, the software used to obtain findings plays a crucial role in scientific work. Nevertheless, software is rarely seen as publishable. Thus researchers may not be able to reproduce the findings without the software, which conflicts with the principle of reproducibility in science. For both the writing of publishable software and the reproducibility issue, the quality of software is of utmost importance. For many programming scientists the treatment of source code, e.g. code design, version control, documentation, and testing, is associated with additional work that is not covered in the primary research task. This includes the adoption of processes following the software development life cycle. However, the adoption of software engineering rules and best practices has to be recognized and accepted as part of the scientific performance. Most scientists have little incentive to improve code and do not publish code because software engineering habits are rarely practised by researchers or students. Software engineering skills are not passed on to followers as paper-writing skills are. Thus it is often felt that the software or code produced is not publishable. The quality of software and its source code has a decisive influence on the quality of research results and their traceability. So establishing best practices from software engineering to serve scientific needs is crucial for the success of scientific software. Even though scientists use existing software and code, e.g. from open source software repositories, only a few contribute their code back into the repositories. Writing and opening code for Open Science means that subsequent users are able to run the code, e.g. through the provision of sufficient documentation, sample data sets, tests and comments, which in turn can be proven by adequate and qualified reviews. This assumes that scientists learn to write and release code and software as they learn to write and publish papers. With this in mind, software could be valued and assessed as a contribution to science. But this requires the relevant skills, which can then be passed to colleagues and followers. Therefore, the GFZ German Research Centre for Geosciences ran three workshops in 2015 to address the passing of software writing skills to young scientists, the next generation of researchers in the Earth, planetary and space sciences. Experiences in running these workshops and the lessons learned are summarized in this presentation. The workshops received support and funding from Software Carpentry, a volunteer organization whose goal is to make scientists more productive, and their work more reliable, by teaching them basic computing skills, and from FOSTER (Facilitate Open Science Training for European Research), a two-year, EU-funded (FP7) project whose goal is to produce a Europe-wide training programme that helps incorporate Open Access approaches into existing research methodologies and integrate Open Science principles and practice into current research workflows by targeting young researchers and other stakeholders.

  10. Automatic generation of user material subroutines for biomechanical growth analysis.

    PubMed

    Young, Jonathan M; Yao, Jiang; Ramasubramanian, Ashok; Taber, Larry A; Perucchio, Renato

    2010-10-01

    The analysis of the biomechanics of growth and remodeling in soft tissues requires the formulation of specialized pseudoelastic constitutive relations. The nonlinear finite element analysis package ABAQUS allows the user to implement such specialized material responses through the coding of a user material subroutine called UMAT. However, hand coding UMAT subroutines is a challenge even for simple pseudoelastic materials and requires substantial time to debug and test the code. To resolve this issue, we develop an automatic UMAT code generation procedure for pseudoelastic materials using the symbolic mathematics package MATHEMATICA and extend the UMAT generator to include continuum growth. The performance of the automatically coded UMAT is tested by simulating the stress-stretch response of a material defined by a Fung-orthotropic strain energy function, subject to uniaxial stretching, equibiaxial stretching, and simple shear in ABAQUS. The MATHEMATICA UMAT generator is then extended to include continuum growth by adding a growth subroutine to the automatically generated UMAT. The MATHEMATICA UMAT generator correctly derives the variables required in the UMAT code, quickly providing a ready-to-use UMAT. In turn, the UMAT accurately simulates the pseudoelastic response. In order to test the growth UMAT, we simulate the growth-based bending of a bilayered bar with differing fiber directions in a nongrowing passive layer. The anisotropic passive layer, being topologically tied to the growing isotropic layer, causes the bending bar to twist laterally. The results of simulations demonstrate the validity of the automatically coded UMAT, used in both standardized tests of hyperelastic materials and for a biomechanical growth analysis.
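
    The symbolic route is compact enough to sketch. The fragment below (Python with SymPy, standing in for the paper's MATHEMATICA generator; the one-dimensional Fung-type energy is an illustrative stand-in for the paper's orthotropic form) derives the stress and tangent from a strain-energy function and emits Fortran - exactly the derivation a hand-coded UMAT would otherwise require, with DDSDDE being ABAQUS's name for the material Jacobian.

        import sympy as sp

        # Sketch of the symbolic route to a UMAT: derive stress and tangent
        # from a strain-energy function, then emit Fortran.
        E, c, a = sp.symbols("E c a")        # Green strain and material constants
        W = c * (sp.exp(a * E**2) - 1)       # 1-D Fung-type strain energy (stand-in)
        S = sp.diff(W, E)                    # stress conjugate to E
        D = sp.diff(S, E)                    # tangent modulus, needed for DDSDDE

        print(sp.fcode(S, assign_to="S", source_format="free"))
        print(sp.fcode(D, assign_to="DDSDDE", source_format="free"))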

  11. Automatic NEPHIS Coding of Descriptive Titles for Permuted Index Generation.

    ERIC Educational Resources Information Center

    Craven, Timothy C.

    1982-01-01

    Describes a system for the automatic coding of most descriptive titles which generates Nested Phrase Indexing System (NEPHIS) input strings of sufficient quality for permuted index production. A series of examples and an 11-item reference list accompany the text. (JL)

  12. De-Coding Writing Assignments.

    ERIC Educational Resources Information Center

    Simon, Linda

    1991-01-01

    Argues that understanding assignments is the first step toward successful college writing. Urges instructors to support students by helping them to decode assignments. Breaks down instructions into individual tasks including (1) writing an essay, (2) examining an issue, (3) reviewing articles and books, and (4) focusing on some texts. Defines each…

  13. Teaching with technology: automatically receiving information from the internet and web.

    PubMed

    Wink, Diane M

    2010-01-01

    In this bimonthly series, the author examines how nurse educators can use the Internet and Web-based computer technologies such as search, communication, and collaborative writing tools, social networking and social bookmarking sites, virtual worlds, and Web-based teaching and learning programs. This article presents information and tools related to automatically receiving information from the Internet and Web.

  14. Using Automatic Code Generation in the Attitude Control Flight Software Engineering Process

    NASA Technical Reports Server (NTRS)

    McComas, David; O'Donnell, James R., Jr.; Andrews, Stephen F.

    1999-01-01

    This paper presents an overview of the attitude control subsystem flight software development process, identifies how the process has changed due to automatic code generation, analyzes each software development phase in detail, and concludes with a summary of our lessons learned.

  15. Writing Strengthens Orthography and Alphabetic-Coding Strengthens Phonology in Learning to Read Chinese

    ERIC Educational Resources Information Center

    Guan, Connie Qun; Liu, Ying; Chan, Derek Ho Leung; Ye, Feifei; Perfetti, Charles A.

    2011-01-01

    Learning to write words may strengthen orthographic representations and thus support word-specific recognition processes. This hypothesis applies especially to Chinese because its writing system encourages character-specific recognition that depends on accurate representation of orthographic form. We report 2 studies that test this hypothesis in…

  16. Proceedings of the Workshop on software tools for distributed intelligent control systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herget, C.J.

    1990-09-01

The Workshop on Software Tools for Distributed Intelligent Control Systems was organized by Lawrence Livermore National Laboratory for the United States Army Headquarters Training and Doctrine Command and the Defense Advanced Research Projects Agency. The goals of the workshop were to identify the current state of the art in tools which support control systems engineering design and implementation, identify research issues associated with writing software tools which would provide a design environment to assist engineers in multidisciplinary control design and implementation, formulate a potential investment strategy to resolve the research issues and develop public domain code which can form the core of more powerful engineering design tools, and recommend test cases to focus the software development process and test associated performance metrics. Recognizing that the development of software tools for distributed intelligent control systems will require a multidisciplinary effort, experts in systems engineering, control systems engineering, and computer science were invited to participate in the workshop. In particular, experts who could address the following topics were selected: operating systems, engineering data representation and manipulation, emerging standards for manufacturing data, mathematical foundations, coupling of symbolic and numerical computation, user interface, system identification, system representation at different levels of abstraction, system specification, system design, verification and validation, automatic code generation, and integration of modular, reusable code.

  17. A comparison of different methods to implement higher order derivatives of density functionals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    van Dam, Hubertus J.J.

Density functional theory is the dominant approach in electronic structure methods today. To calculate properties, higher-order derivatives of the density functionals are required. These derivatives might be implemented manually, by automatic differentiation, or by symbolic algebra programs. Different authors have cited different reasons for using the particular method of their choice. This paper presents work where all three approaches were used, and the strengths and weaknesses of each approach are considered. It is found that all three methods produce code that is sufficiently performant for practical applications, despite the fact that our symbolic-algebra-generated code and our automatic differentiation code still have scope for significant optimization. The automatic differentiation approach is the best option for producing readable and maintainable code.
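
    The three implementation routes the paper compares can be lined up on a toy term (the function exp(-x**2) below is illustrative only, not a density-functional expression): a hand-derived closed form, a symbolically derived and compiled form, and a forward-mode automatic derivative that never forms an expression at all.

        import math
        import sympy as sp

        # 1. Manual: hand-derived closed form of d/dx exp(-x**2).
        def d_manual(v):
            return -2.0 * v * math.exp(-v * v)

        # 2. Symbolic: derive once, then compile to a plain function.
        x = sp.symbols("x")
        d_symbolic = sp.lambdify(x, sp.diff(sp.exp(-x**2), x), "math")

        # 3. Automatic: forward-mode dual arithmetic, no expression ever formed.
        def d_auto(v):
            val, dot = v, 1.0                         # seed dv/dv = 1
            sq, sq_dot = val * val, 2 * val * dot
            e_dot = -math.exp(-sq) * sq_dot           # chain rule, step by step
            return e_dot

        for v in (0.0, 0.7, 1.5):
            assert abs(d_manual(v) - d_symbolic(v)) < 1e-12
            assert abs(d_manual(v) - d_auto(v)) < 1e-12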

  18. Development of Innovative Design Processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Y.S.; Park, C.O.

    2004-07-01

Nuclear design analysis requires time-consuming and error-prone model-input preparation, code runs, output analysis, and quality assurance processes. To reduce human effort and improve design quality and productivity, the Innovative Design Processor (IDP) is being developed. The two basic principles of IDP are document-oriented design and web-based design. In document-oriented design, the designer writes a design document, called an active document, and feeds it to a special program; the final document, with complete analyses, tables, and plots, is produced automatically. Active documents can be written with ordinary HTML editors or created automatically on the web, which is the other framework of IDP. Using a proper mix of server-side and client-side programming under the LAMP (Linux/Apache/MySQL/PHP) environment, the web-based design process is modeled as a design wizard, so that even a novice designer can produce the design document easily. This automation using IDP is now being implemented for all reload designs of Korea Standard Nuclear Power Plant (KSNP) type PWRs. The introduction of this process will allow a large reduction in all KSNP reload design efforts and provide a platform for design and R&D tasks of KNFC. (authors)

  19. Writing fluency and quality in kindergarten and first grade: The role of attention, reading, transcription, and oral language

    PubMed Central

    Kent, Shawn; Wanzek, Jeanne; Petscher, Yaacov; Al Otaiba, Stephanie; Kim, Young-Suk

    2013-01-01

In the present study, we examined the influence of kindergarten component skills on writing outcomes, both concurrently and longitudinally to first grade. Using data from 265 students, we investigated a model of writing development including attention regulation along with students’ reading, spelling, handwriting fluency, and oral language component skills. Results from structural equation modeling demonstrated that a model including attention was better fitting than a model with only language and literacy factors. Attention, a higher-order literacy factor related to reading and spelling proficiency, and automaticity in letter-writing were uniquely and positively related to compositional fluency in kindergarten. Attention and the higher-order literacy factor were predictive of both composition quality and fluency in first grade, while oral language showed unique relations with first-grade writing quality. Implications for writing development and instruction are discussed. PMID:25132722

  20. Phonological Awareness and Rapid Automatized Naming Predicting Early Development in Reading and Spelling: Results from a Cross-Linguistic Longitudinal Study

    ERIC Educational Resources Information Center

    Furnes, Bjarte; Samuelsson, Stefan

    2011-01-01

    In this study, the relationship between latent constructs of phonological awareness (PA) and rapid automatized naming (RAN) was investigated and related to later measures of reading and spelling in children learning to read in different alphabetic writing systems (i.e., Norwegian/Swedish vs. English). 750 U.S./Australian children and 230…

  1. Operational testing of system for automatic sleep analysis

    NASA Technical Reports Server (NTRS)

    Kellaway, P.

    1972-01-01

Tables on the performance, under operational conditions, of an automatic sleep monitoring system are presented. Data were recorded from patients who were undergoing heart and great vessel surgery. This study resulted in cap, electrode, and preamplifier improvements. Children were used to test the sleep analyzer and the medical console write-out units. From these data, an automatic voltage control circuit for the analyzer was developed. Special circuitry for obviating the possibility of incorrect sleep staging due to the presence of movement artifacts was also developed as a result of the study.

  2. Guidelines for developing structured FORTRAN programs

    NASA Technical Reports Server (NTRS)

    Earnest, B. M.

    1984-01-01

    Computer programming and coding standards were compiled to serve as guidelines for the uniform writing of FORTRAN 77 programs at NASA Langley. Software development philosophy, documentation, general coding conventions, and specific FORTRAN coding constraints are discussed.

  3. .

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Zhu, Qing

    2017-07-01

To achieve the simulation of elaborate stroke trajectories in Chinese calligraphy, this paper presents, for the first time, research on writing momentum in the field of non-photorealistic rendering. Through an analysis of brush use in Chinese calligraphy, writing momentum is divided into three parts - the center, the side, and the back of the writing brush - distinguished by the angle of the brush holder. We design an algorithm for dynamically rendering written output based on a brush model. From monitored parameters such as the direction, position, and normalized pressure of the pen, we calculate parameters such as the footprint direction, shape, size, and nib bending after writing. The algorithm can also judge the dynamic writing trend of stroke trajectories, and even generate forecast stroke trajectories automatically. We achieve a more delicate rendering of Chinese calligraphy that enhances the user's results and reproduces the distinctive writing effects that set Chinese calligraphy apart from other general writing, greatly improving the calligraphy simulation, so that even people who lack writing skills can easily produce attractive, expressive characters.

  4. The Utility of Writing Assignments in Undergraduate Bioscience

    ERIC Educational Resources Information Center

    Libarkin, Julie; Ording, Gabriel

    2012-01-01

    We tested the hypothesis that engagement in a few, brief writing assignments in a nonmajors science course can improve student ability to convey critical thought about science. A sample of three papers written by students (n = 30) was coded for presence and accuracy of elements related to scientific writing. Scores for different aspects of…

  5. Spies Like Us: Gamifying the Composition Classroom and Breaking the Academic Code

    ERIC Educational Resources Information Center

    Slentz, Jessica E.; Kondrlik, Kristin E.; Lyons-McFarland, Michelle

    2017-01-01

    English 150: Expository Writing is an undergraduate course in expository and research writing offered by special partnership to both the students of Case Western Reserve University (CWRU) and the students of the Cleveland Institute of Music (CIM). The course provides student writers experience in the critical reading and writing practices required…

  6. Meeting highlights applications of Nek5000 simulation code | Argonne

    Science.gov Websites


  7. Porcupine: A visual pipeline tool for neuroimaging analysis

    PubMed Central

    Snoek, Lukas; Knapen, Tomas

    2018-01-01

The field of neuroimaging is rapidly adopting a more reproducible approach to data acquisition and analysis. Data structures and formats are being standardised and data analyses are getting more automated. However, as data analysis becomes more complicated, researchers often have to write longer analysis scripts, spanning different tools across multiple programming languages. This makes it more difficult to share or recreate code, reducing the reproducibility of the analysis. We present a tool, Porcupine, with which one constructs an analysis visually and which automatically produces the analysis code. The graphical representation improves understanding of the performed analysis, while retaining the flexibility of modifying the produced code manually to custom needs. Not only does Porcupine produce the analysis code, it also creates a shareable environment for running the code in the form of a Docker image. Together, this forms a reproducible way of constructing, visualising and sharing one’s analysis. Currently, Porcupine links to Nipype functionalities, which in turn accesses most standard neuroimaging analysis tools. Our goal is to release researchers from the constraints of specific implementation details, thereby freeing them to think about novel and creative ways to solve a given problem. Porcupine improves the overview researchers have of their processing pipelines, and facilitates both the development and communication of their work. This will reduce the threshold at which less expert users can generate reusable pipelines. With Porcupine, we bridge the gap between a conceptual and an implementational level of analysis and make it easier for researchers to create reproducible and shareable science. We provide a wide range of examples and documentation, as well as installer files for all platforms on our website: https://timvanmourik.github.io/Porcupine. Porcupine is free, open source, and released under the GNU General Public License v3.0. PMID:29746461

  8. GoCxx: a tool to easily leverage C++ legacy code for multicore-friendly Go libraries and frameworks

    NASA Astrophysics Data System (ADS)

    Binet, Sébastien

    2012-12-01

Current HENP libraries and frameworks were written before multicore systems became widely deployed and used. From this environment, a ‘single-thread’ processing model naturally emerged but the implicit assumptions it encouraged are greatly impairing our abilities to scale in a multicore/manycore world. Writing scalable code in C++ for multicore architectures, while doable, is no panacea. Sure, C++11 will improve on the current situation (by standardizing on std::thread, introducing lambda functions and defining a memory model) but it will do so at the price of complicating further an already quite sophisticated language. This level of sophistication has probably already strongly motivated analysis groups to migrate to CPython, hoping for its current limitations with respect to multicore scalability to be either lifted (Global Interpreter Lock removal) or for the advent of a new Python VM better tailored for this kind of environment (PyPy, Jython, …) Could HENP migrate to a language with none of the deficiencies of C++ (build time, deployment, low level tools for concurrency) and with the fast turn-around time, simplicity and ease of coding of Python? This paper will try to make the case for Go - a young open source language with built-in facilities to easily express and expose concurrency - being such a language. We introduce GoCxx, a tool leveraging gcc-xml's output to automate the tedious work of creating Go wrappers for foreign languages, a critical task for any language wishing to leverage legacy and field-tested code. We will conclude with the first results of applying GoCxx to real C++ code.

  9. Software engineering and simulation

    NASA Technical Reports Server (NTRS)

    Zhang, Shou X.; Schroer, Bernard J.; Messimer, Sherri L.; Tseng, Fan T.

    1990-01-01

This paper summarizes the development of several automatic programming systems for discrete event simulation. Emphasis is given to model development, or problem definition, and the model-writing phases of the modeling life cycle.

  10. A Tutorial on Parallel and Concurrent Programming in Haskell

    NASA Astrophysics Data System (ADS)

    Peyton Jones, Simon; Singh, Satnam

This practical tutorial introduces the features available in Haskell for writing parallel and concurrent programs. We first describe how to write semi-explicit parallel programs by using annotations to express opportunities for parallelism and to help control the granularity of parallelism for effective execution on modern operating systems and processors. We then describe the mechanisms provided by Haskell for writing explicitly parallel programs with a focus on the use of software transactional memory to help share information between threads. Finally, we show how nested data parallelism can be used to write deterministically parallel programs, which allows programmers to use rich data types in data-parallel programs that are automatically transformed into flat data-parallel versions for efficient execution on multi-core processors.
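
    The tutorial's mechanisms (par/pseq annotations, software transactional memory, nested data parallelism) are specific to Haskell; as a loose analogue only, the Python sketch below expresses the same core idea of marking independent pure computations for parallel evaluation, here with an explicit process pool rather than runtime sparks.

        from concurrent.futures import ProcessPoolExecutor
        import math

        # Rough analogue: where Haskell's `par` annotates pure expressions the
        # runtime may evaluate in parallel, here the independence is made
        # explicit with a process pool mapping over pure tasks.
        def expensive(n):
            return sum(math.sin(i) for i in range(n))

        def parallel_map(fn, xs):
            with ProcessPoolExecutor() as pool:
                return list(pool.map(fn, xs))

        if __name__ == "__main__":
            print(parallel_map(expensive, [10**5, 2 * 10**5, 3 * 10**5]))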

  11. Writing in the Secondary-Level Disciplines: A Systematic Review of Context, Cognition, and Content

    ERIC Educational Resources Information Center

    Miller, Diane M.; Scott, Chyllis E.; McTigue, Erin M.

    2018-01-01

    Situated within the historical and current state of writing and adolescent literacy research, this systematic literature review screened 3504 articles to determine the prevalent themes in current research on writing tasks in content-area classrooms. Each of the 3504 studies was evaluated and coded using seven methodological quality indicators. The…

  12. Common Characteristics of Writing Interventions for Students with Learning Disabilities: A Synthesis of the Literature

    ERIC Educational Resources Information Center

    Kaldenberg, Erica R.; Ganzeveld, Paula; Hosp, John L.; Rodgers, Derek B.

    2016-01-01

    Twenty-three single-subject studies aimed at improving the writing achievement of students identified as having a learning disability were analyzed meta-analytically. The effect size phi was used to compare the writing strategies. The dependent measures used to assess the efficacy of the interventions were also coded and reviewed. Results suggest…

  13. Secret Writing. Keys to the Mysteries of Reading and Writing.

    ERIC Educational Resources Information Center

    Sears, Peter

    With a central theme of how people create a means to communicate reliably, and based on language-making exercises that touch students' imaginations, this book aims to interest students in language and how language is made. Since students like codes and ciphers, the book begins with secret writing, which is then used to reveal the foundation of…

  14. Occupational self-coding and automatic recording (OSCAR): a novel web-based tool to collect and code lifetime job histories in large population-based studies.

    PubMed

    De Matteis, Sara; Jarvis, Deborah; Young, Heather; Young, Alan; Allen, Naomi; Potts, James; Darnton, Andrew; Rushton, Lesley; Cullinan, Paul

    2017-03-01

    Objectives The standard approach to the assessment of occupational exposures is through the manual collection and coding of job histories. This method is time-consuming and costly and makes it potentially unfeasible to perform high quality analyses on occupational exposures in large population-based studies. Our aim was to develop a novel, efficient web-based tool to collect and code lifetime job histories in the UK Biobank, a population-based cohort of over 500 000 participants. Methods We developed OSCAR (occupations self-coding automatic recording) based on the hierarchical structure of the UK Standard Occupational Classification (SOC) 2000, which allows individuals to collect and automatically code their lifetime job histories via a simple decision-tree model. Participants were asked to find each of their jobs by selecting appropriate job categories until they identified their job title, which was linked to a hidden 4-digit SOC code. For each occupation a job title in free text was also collected to estimate Cohen's kappa (κ) inter-rater agreement between SOC codes assigned by OSCAR and an expert manual coder. Results OSCAR was administered to 324 653 UK Biobank participants with an existing email address between June and September 2015. Complete 4-digit SOC-coded lifetime job histories were collected for 108 784 participants (response rate: 34%). Agreement between the 4-digit SOC codes assigned by OSCAR and the manual coder for a random sample of 400 job titles was moderately good [κ=0.45, 95% confidence interval (95% CI) 0.42-0.49], and improved when broader job categories were considered (κ=0.64, 95% CI 0.61-0.69 at a 1-digit SOC-code level). Conclusions OSCAR is a novel, efficient, and reasonably reliable web-based tool for collecting and automatically coding lifetime job histories in large population-based studies. Further application in other research projects for external validation purposes is warranted.
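
    For reference, the agreement statistic reported above is straightforward to compute. The sketch below (Python; the SOC codes in the example are made up) implements Cohen's kappa and shows how truncating 4-digit codes to broader 1-digit categories tends to raise agreement, as in the study's improvement from 0.45 to 0.64.

        from collections import Counter

        # Cohen's kappa: observed agreement corrected for chance agreement.
        def cohens_kappa(codes_a, codes_b):
            n = len(codes_a)
            p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n
            ca, cb = Counter(codes_a), Counter(codes_b)
            p_e = sum(ca[k] * cb[k] for k in ca) / (n * n)   # chance agreement
            return (p_o - p_e) / (1 - p_e)

        # Hypothetical 4-digit SOC codes from two coders; truncating to the
        # first digit broadens the categories and here yields kappa = 1.0.
        a = ["2114", "3537", "2114", "9233"]
        b = ["2114", "3539", "2132", "9233"]
        print(cohens_kappa(a, b))
        print(cohens_kappa([x[0] for x in a], [x[0] for x in b]))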

  15. Benefits of expressive writing in reducing test anxiety: A randomized controlled trial in Chinese samples.

    PubMed

    Shen, Lujun; Yang, Lei; Zhang, Jing; Zhang, Meng

    2018-01-01

    To explore the effect of expressive writing of positive emotions on test anxiety among senior-high-school students. The Test Anxiety Scale (TAS) was used to assess the anxiety level of 200 senior-high-school students. Seventy-five students with high anxiety were recruited and divided randomly into experimental and control groups. Each day for 30 days, the experimental group engaged in 20 minutes of expressive writing of positive emotions, while the control group was asked to merely write down their daily events. A second test was given after the month-long experiment to analyze whether there had been a reduction in anxiety among the sample. Quantitative data was obtained from TAS scores. The NVivo10.0 software program was used to examine the frequency of particular word categories used in participants' writing manuscripts. Senior-high-school students indicated moderate to high test anxiety. There was a significant difference in post-test results (P < 0.001), with the experimental group scoring obviously lower than the control group. The interaction effect of group and gender in the post-test results was non-significant (P > 0.05). Students' writing manuscripts were mainly encoded on five code categories: cause, anxiety manifestation, positive emotion, insight and evaluation. There was a negative relation between positive emotion, insight codes and test anxiety. There were significant differences in the positive emotion, anxiety manifestation, and insight code categories between the first 10 days' manuscripts and the last 10 days' ones. Long-term expressive writing of positive emotions appears to help reduce test anxiety by using insight and positive emotion words for Chinese students. Efficient and effective intervention programs to ease test anxiety can be designed based on this study.

  16. Benefits of expressive writing in reducing test anxiety: A randomized controlled trial in Chinese samples

    PubMed Central

    Zhang, Jing; Zhang, Meng

    2018-01-01

    Purpose To explore the effect of expressive writing of positive emotions on test anxiety among senior-high-school students. Methods The Test Anxiety Scale (TAS) was used to assess the anxiety level of 200 senior-high-school students. Seventy-five students with high anxiety were recruited and divided randomly into experimental and control groups. Each day for 30 days, the experimental group engaged in 20 minutes of expressive writing of positive emotions, while the control group was asked to merely write down their daily events. A second test was given after the month-long experiment to analyze whether there had been a reduction in anxiety among the sample. Quantitative data was obtained from TAS scores. The NVivo10.0 software program was used to examine the frequency of particular word categories used in participants’ writing manuscripts. Results Senior-high-school students indicated moderate to high test anxiety. There was a significant difference in post-test results (P < 0.001), with the experimental group scoring obviously lower than the control group. The interaction effect of group and gender in the post-test results was non-significant (P > 0.05). Students’ writing manuscripts were mainly encoded on five code categories: cause, anxiety manifestation, positive emotion, insight and evaluation. There was a negative relation between positive emotion, insight codes and test anxiety. There were significant differences in the positive emotion, anxiety manifestation, and insight code categories between the first 10 days’ manuscripts and the last 10 days’ ones. Conclusions Long-term expressive writing of positive emotions appears to help reduce test anxiety by using insight and positive emotion words for Chinese students. Efficient and effective intervention programs to ease test anxiety can be designed based on this study. PMID:29401473

  17. A graphically oriented specification language for automatic code generation. GRASP/Ada: A Graphical Representation of Algorithms, Structure, and Processes for Ada, phase 1

    NASA Technical Reports Server (NTRS)

    Cross, James H., II; Morrison, Kelly I.; May, Charles H., Jr.; Waddel, Kathryn C.

    1989-01-01

    The first phase of a three-phase effort to develop a new graphically oriented specification language which will facilitate the reverse engineering of Ada source code into graphical representations (GRs) as well as the automatic generation of Ada source code is described. A simplified view of the three phases of Graphical Representations for Algorithms, Structure, and Processes for Ada (GRASP/Ada) with respect to three basic classes of GRs is presented. Phase 1 concentrated on the derivation of an algorithmic diagram, the control structure diagram (CSD) (CRO88a) from Ada source code or Ada PDL. Phase 2 includes the generation of architectural and system level diagrams such as structure charts and data flow diagrams and should result in a requirements specification for a graphically oriented language able to support automatic code generation. Phase 3 will concentrate on the development of a prototype to demonstrate the feasibility of this new specification language.

  18. Automatic mathematical modeling for real time simulation program (AI application)

    NASA Technical Reports Server (NTRS)

    Wang, Caroline; Purinton, Steve

    1989-01-01

    A methodology is described for automatic mathematical modeling and generating simulation models. The major objective was to create a user friendly environment for engineers to design, maintain, and verify their models; to automatically convert the mathematical models into conventional code for computation; and finally, to document the model automatically.

  19. A translator writing system for microcomputer high-level languages and assemblers

    NASA Technical Reports Server (NTRS)

    Collins, W. R.; Knight, J. C.; Noonan, R. E.

    1980-01-01

In order to implement high-level languages whenever possible, a translator writing system of advanced design was developed. It is intended for routine production use by many programmers working on different projects. As well as a fairly conventional parser generator, it includes a system for the rapid generation of table-driven code generators. The parser generator was developed from a prototype version. The translator writing system includes various tools for the management of the source text of a compiler under construction. In addition, it supplies various default source code sections so that its output is always compilable and executable. The system thereby encourages iterative enhancement as a development methodology by ensuring an executable program from the earliest stages of a compiler development project. The translator writing system includes a PASCAL/48 compiler, three assemblers, and two compilers for a subset of HAL/S.

  20. Diagnosis - Using automatic test equipment and artificial intelligence expert systems

    NASA Astrophysics Data System (ADS)

    Ramsey, J. E., Jr.

Three expert systems (ATEOPS, ATEFEXPERS, and ATEFATLAS), which were created to direct automatic test equipment (ATE), are reviewed. The purpose of the project was to develop an expert system to troubleshoot the converter-programmer power supply card for the F-15 aircraft and have that expert system direct the automatic test equipment. Each expert system uses a different knowledge base or inference engine, basing the testing on the circuit schematic, the test requirements document, or ATLAS code. Implementing generalized modules allows the expert systems to be used for any unit under test. Converting ATLAS to LISP code allows the expert system to direct any ATE that uses ATLAS. The constraint-propagated frame system allows for the expansion of control by creating the ATLAS code, checking the code for good software engineering techniques, directing the ATE, and changing the test sequence as needed (planning).

  1. Automatic finite element generators

    NASA Technical Reports Server (NTRS)

    Wang, P. S.

    1984-01-01

    The design and implementation of a software system for generating finite elements and related computations are described. Exact symbolic computational techniques are employed to derive strain-displacement matrices and element stiffness matrices. Methods for dealing with the excessive growth of symbolic expressions are discussed. Automatic FORTRAN code generation is described with emphasis on improving the efficiency of the resultant code.
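
    As a minimal instance of the pipeline described, the SymPy sketch below (Python) derives the exact stiffness matrix of a 1-D two-node bar element symbolically and emits FORTRAN assignments for its entries; the element choice is illustrative, not taken from the paper.

        import sympy as sp

        # Symbolic derivation of a 1-D linear bar element stiffness matrix,
        # then FORTRAN emission - the derive-and-generate pipeline in miniature.
        x, L, Emod, A = sp.symbols("x L E A", positive=True)
        N = sp.Matrix([1 - x / L, x / L])        # linear shape functions
        B = N.diff(x)                            # strain-displacement matrix
        K = Emod * A * (B * B.T).integrate((x, 0, L))
        print(K)                                 # (E*A/L) * [[1, -1], [-1, 1]]

        for i in range(2):
            for j in range(2):
                print(sp.fcode(K[i, j], assign_to=f"K{i+1}{j+1}",
                               source_format="free"))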

  2. Writing Quality in Chinese Children: Speed and Fluency Matter

    PubMed Central

    Yan, Cathy Ming Wai; McBride-Chang, Catherine; Wagner, Richard K.; Zhang, Juan; Wong, Anita M. Y.; Shu, Hua

    2015-01-01

    There were two goals of the present study. The first was to create a scoring scheme by which 9-year-old Chinese children’s writing compositions could be rated to form a total score for writing quality. The second was to examine cognitive correlates of writing quality at age 9 from measures administered at ages 6–9. Age 9 writing compositions were scored using a 7-element rubric; following confirmatory factor analyses, 5 of these elements were retained to represent overall writing quality for subsequent analyses. Measures of vocabulary knowledge, Chinese word dictation, phonological awareness, speed of processing, speeded naming, and handwriting fluency at ages 6–9 were all significantly associated with the obtained overall writing quality measure even when the statistical effect of age was removed. With vocabulary knowledge, dictation skill, age, gender, and phonological awareness included in a regression equation, 35% of the variance in age 9 writing quality was explained. With the variables of speed of processing, speeded naming, and handwriting fluency additionally included as a block, 12% additional variance in the equation was explained. In addition to gender, overall unique correlates of writing quality were dictation, speed of processing, and handwriting fluency, underscoring the importance of both general automaticity and specific writing fluency for writing quality development in children. PMID:25750486

  3. Converting Geometry from Creo Parametric to BRL-CAD

    DTIC Science & Technology

    2017-06-28

[Figure-list residue; recoverable captions: Fig. 25 "Disable use of CSG in output"; Fig. 26 "Write surface normals when outputting triangle meshes".] As the code needed to be modernized in any case, some of the simpler requests have also been targeted for implementation, including writing surface normals; these will be saved during conversion if the option is enabled.

  4. Use of biphase-coded pulses for wideband data storage in time-domain optical memories.

    PubMed

    Shen, X A; Kachru, R

    1993-06-10

    We demonstrate that temporally long laser pulses with appropriate phase modulation can replace either temporally brief or frequency-chirped pulses in a time-domain optical memory to store and retrieve information. A 1.65-µs-long write pulse was biphase modulated according to the 13-bit Barker code for storing multiple bits of optical data into a Pr(3+):YAlO(3) crystal, and the stored information was later recalled faithfully by using a read pulse that was identical to the write pulse. Our results further show that the stored data cannot be retrieved faithfully if mismatched write and read pulses are used. This finding opens up the possibility of designing encrypted optical memories for secure data storage.
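
    The reason a Barker-coded write pulse can stand in for a temporally brief one is its autocorrelation: a sharp central peak with sidelobes of magnitude at most 1. The short sketch below (Python) reproduces this well-known property for the 13-bit code.

        # The 13-bit Barker code and its aperiodic autocorrelation: peak of 13
        # at zero lag, sidelobes alternating 0 and 1 everywhere else.
        BARKER13 = [+1, +1, +1, +1, +1, -1, -1, +1, +1, -1, +1, -1, +1]

        def autocorrelation(seq):
            n = len(seq)
            return [sum(seq[i] * seq[i + k] for i in range(n - k))
                    for k in range(n)]

        print(autocorrelation(BARKER13))
        # -> [13, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]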

  5. Formally specifying the logic of an automatic guidance controller

    NASA Technical Reports Server (NTRS)

    Guaspari, David

    1990-01-01

    The following topics are covered in viewgraph form: (1) the Penelope Project; (2) the logic of an experimental automatic guidance control system for a 737; (3) Larch/Ada specification; (4) some failures of informal description; (5) description of mode changes caused by switches; (6) intuitive description of window status (chosen vs. current); (7) design of the code; (8) and specifying the code.

  6. Automatic-repeat-request error control schemes

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.; Miller, M. J.

    1983-01-01

    Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.

  7. A Tool for Parameter-space Explorations

    NASA Astrophysics Data System (ADS)

    Murase, Yohsuke; Uchitane, Takeshi; Ito, Nobuyasu

Software for managing simulation jobs and results, named "OACIS," is presented. It controls a large number of simulation jobs executed on various remote servers, keeps the results in an organized way, and manages the analyses performed on them. The software has a web-browser front end, and users can easily submit various jobs to appropriate remote hosts from a web browser. After these jobs finish, all result files are automatically downloaded from the computational hosts and stored in a traceable way, together with logs of the date, host, and elapsed time of each job. Some visualization functions are also provided so that users can easily grasp an overview of results distributed in a high-dimensional parameter space. Thus, OACIS is especially beneficial for complex simulation models with many parameters, for which extensive parameter searches are required. Using the OACIS API, it is easy to write code that automates parameter selection depending on previous simulation results. A few examples of such automated parameter selection are also demonstrated.
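
    The kind of automation the API enables can be sketched abstractly. In the fragment below (Python), submit and result are hypothetical stand-ins for job submission and result retrieval, not the real OACIS calls; the loop picks the next parameter value from the best result so far.

        # Sketch of automated parameter selection against a job-management
        # service; `submit` and `result` are hypothetical callables.
        def hill_climb(submit, result, start, step=0.5, rounds=5):
            best_p, best_v = start, result(submit(start))
            for _ in range(rounds):
                candidates = [best_p - step, best_p + step]
                scores = [(result(submit(p)), p) for p in candidates]
                v, p = max(scores)
                if v <= best_v:
                    step /= 2            # narrow the search around the best point
                else:
                    best_p, best_v = p, v
            return best_p, best_v

        # Local stand-in: identity "submission", score peaked at p = 2.0.
        print(hill_climb(lambda p: p, lambda p: -(p - 2.0) ** 2, start=0.0))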

  8. A case study for the real-time experimental evaluation of the VIPER microprocessor

    NASA Astrophysics Data System (ADS)

    Carreno, Victor A.; Angellatta, Rob K.

    1991-09-01

    An experiment to evaluate the applicability of the Verifiable Integrated Processor for Enhanced Reliability (VIPER) microprocessor to real time control is described. The VIPER microprocessor was invented by the Royal Signals and Radar Establishment (RSRE), U.K., and is an example of the use of formal mathematical methods for developing electronic digital systems with a high degree of assurance on the system design and implementation correctness. The experiment consisted of selecting a control law, writing the control law algorithm for the VIPER processor, and providing real time, dynamic inputs into the processor and monitoring the outputs. The control law selected and coded for the VIPER processor was the yaw damper function of an automatic landing program for a 737 aircraft. The mechanisms for interfacing the VIPER Single Board Computer to the VAX host are described. Results include run time experiences, performance evaluation, and comparison of VIPER and FORTRAN yaw damper algorithm output for accuracy estimation.

  9. An open source Java web application to build self-contained Web GIS sites

    NASA Astrophysics Data System (ADS)

    Zavala Romero, O.; Ahmed, A.; Chassignet, E.; Zavala-Hidalgo, J.

    2014-12-01

This work describes OWGIS, an open source Java web application that creates Web GIS sites by automatically writing HTML and JavaScript code. OWGIS is configured by XML files that define which layers (geographic datasets) will be displayed on the websites. This project uses several Open Geospatial Consortium standards to request data from typical map servers, such as GeoServer, and is also able to request data from ncWMS servers. The latter allows for the display of 4D data stored in the NetCDF file format (widely used for storing environmental model datasets). Among the features available on sites built with OWGIS are multiple languages, animations, vertical profiles and vertical transects, color palettes, color ranges, and the ability to download data. OWGIS's main users are scientists, such as oceanographers or climate scientists, who store their data in NetCDF files and want to analyze, visualize, share, or compare their data using a website.
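
    Under the hood, the generated JavaScript ultimately issues standard OGC WMS requests. The sketch below (Python, with placeholder endpoint, layer name, and bounding box) assembles such a GetMap URL; note that axis order in the BBOX depends on the CRS and WMS version.

        from urllib.parse import urlencode

        # Assemble an OGC WMS 1.3.0 GetMap request of the kind a generated
        # site sends to a map server such as GeoServer or ncWMS.
        def getmap_url(base, layer, bbox, width=512, height=512):
            params = {
                "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
                "LAYERS": layer, "CRS": "EPSG:4326",
                "BBOX": ",".join(str(v) for v in bbox),
                "WIDTH": width, "HEIGHT": height, "FORMAT": "image/png",
            }
            return base + "?" + urlencode(params)

        # Placeholder endpoint and layer, for illustration only.
        print(getmap_url("https://example.org/geoserver/wms",
                         "ocean:temperature", (18.0, -98.0, 31.0, -80.0)))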

  10. Genetic circuit design automation.

    PubMed

    Nielsen, Alec A K; Der, Bryan S; Shin, Jonghyeon; Vaidyanathan, Prashant; Paralanov, Vanya; Strychalski, Elizabeth A; Ross, David; Densmore, Douglas; Voigt, Christopher A

    2016-04-01

Computation can be performed in living cells by DNA-encoded circuits that process sensory information and control biological functions. Their construction is time-intensive, requiring manual part assembly and balancing of regulator expression. We describe a design environment, Cello, in which a user writes Verilog code that is automatically transformed into a DNA sequence. Algorithms build a circuit diagram, assign and connect gates, and simulate performance. Reliable circuit design requires the insulation of gates from genetic context, so that they function identically when used in different circuits. We used Cello to design 60 circuits for Escherichia coli (880,000 base pairs of DNA), for which each DNA sequence was built as predicted by the software with no additional tuning. Of these, 45 circuits performed correctly in every output state (up to 10 regulators and 55 parts), and across all circuits 92% of the output states functioned as predicted. Design automation simplifies the incorporation of genetic circuits into biotechnology projects that require decision-making, control, sensing, or spatial organization. Copyright © 2016, American Association for the Advancement of Science.

  11. A case study for the real-time experimental evaluation of the VIPER microprocessor

    NASA Technical Reports Server (NTRS)

    Carreno, Victor A.; Angellatta, Rob K.

    1991-01-01

    An experiment to evaluate the applicability of the Verifiable Integrated Processor for Enhanced Reliability (VIPER) microprocessor to real time control is described. The VIPER microprocessor was invented by the Royal Signals and Radar Establishment (RSRE), U.K., and is an example of the use of formal mathematical methods for developing electronic digital systems with a high degree of assurance on the system design and implementation correctness. The experiment consisted of selecting a control law, writing the control law algorithm for the VIPER processor, and providing real time, dynamic inputs into the processor and monitoring the outputs. The control law selected and coded for the VIPER processor was the yaw damper function of an automatic landing program for a 737 aircraft. The mechanisms for interfacing the VIPER Single Board Computer to the VAX host are described. Results include run time experiences, performance evaluation, and comparison of VIPER and FORTRAN yaw damper algorithm output for accuracy estimation.

  12. Technologies for the marking of fishing gear to identify gear components entangled on marine animals and to reduce abandoned, lost or otherwise discarded fishing gear.

    PubMed

    He, Pingguo; Suuronen, Petri

    2018-04-01

    Fishing gears are marked to establish and inform origin, ownership and position. More recently, fishing gears are marked to aid in capacity control, reduce marine litter due to abandoned, lost or otherwise discarded fishing gear (ALDFG) and assist in its recovery, and to combat illegal, unreported and unregulated (IUU) fishing. Traditionally, physical marking, inscription, writing, color, shape, and tags have been used for ownership and capacity purposes. Buoys, lights, flags, and radar reflectors are used for marking of position. More recently, electronic devices have been installed on marker buoys to enable easier relocation of the gear by owner vessels. This paper reviews gear marking technologies with focus on coded wire tags, radio frequency identification tags, Automatic Identification Systems, advanced electronic buoys for pelagic longlines and fish aggregating devices, and re-location technology if the gear becomes lost. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Bidirectional automatic release of reserve for low voltage network made with low capacity PLCs

    NASA Astrophysics Data System (ADS)

    Popa, I.; Popa, G. N.; Diniş, C. M.; Deaconu, S. I.

    2018-01-01

    The article presents the design of a bidirectional automatic release of reserve built on two types of low-capacity programmable logic controllers: the PS-3 from Klöckner-Moeller and the Zelio from Schneider. It analyses the electronic timing circuits that can be used to implement the bidirectional automatic release of reserve: a time-on delay circuit and two types of time-off delay circuit. The paper presents the timing code sequences for the PS-3 PLC, the logical functions for the bidirectional automatic release of reserve, the classical control electrical diagram (with contacts, relays, and time relays), the electronic control diagram (with logic gates and timing circuits), the code (in IL language) written for the PS-3 PLC, and the code (in FBD language) written for the Zelio PLC. A comparative analysis of the two types of PLC is carried out, and the advantages of using PLCs are presented.
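
    Note: as a language-neutral illustration of the timing elements analysed above, the Python sketch below models a time-on delay (TON) element, whose output asserts only after its input has been continuously true for the preset time; it is not the PS-3 IL or Zelio FBD code.

    ```python
    # Python model of a time-on delay (TON) element: an illustration of
    # the timing behaviour analysed in the article, not the PS-3 IL or
    # Zelio FBD code itself.
    class TimeOnDelay:
        def __init__(self, preset_s):
            self.preset_s = preset_s    # required continuous-on time
            self._high_since = None     # timestamp when input went high

        def update(self, input_on, now_s):
            if not input_on:
                self._high_since = None # input dropped: reset the timer
                return False
            if self._high_since is None:
                self._high_since = now_s
            return (now_s - self._high_since) >= self.preset_s

    ton = TimeOnDelay(preset_s=2.0)
    for t in range(5):                  # simulated 1 s scan cycles
        print(t, ton.update(input_on=True, now_s=float(t)))
    # the output becomes True from t = 2 onwards
    ```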

  14. Brain mechanisms for loss of awareness of thought and movement

    PubMed Central

    Oakley, David A.; Halligan, Peter W.; Mehta, Mitul A.; Deeley, Quinton

    2017-01-01

    Loss or reduction of awareness is common in neuropsychiatric disorders and culturally influenced dissociative phenomena but the underlying brain mechanisms are poorly understood. fMRI was combined with suggestions for automatic writing in 18 healthy highly hypnotically suggestible individuals in a within-subjects design to determine whether clinical alterations in awareness of thought and movement can be experimentally modelled and studied independently of illness. Subjective ratings of control, ownership, and awareness of thought and movement, and fMRI data were collected following suggestions for thought insertion and alien control of writing movement, with and without loss of awareness. Subjective ratings confirmed that suggestions were effective. At the neural level, our main findings indicated that loss of awareness for both thought and movement during automatic writing was associated with reduced activation in a predominantly left-sided posterior cortical network including BA 7 (superior parietal lobule and precuneus), and posterior cingulate cortex, involved in self-related processing and awareness of the body in space. Reduced activity in posterior parietal cortices may underlie specific clinical and cultural alterations in awareness of thought and movement. Clinically, these findings may assist development of imaging assessments for loss of awareness of psychological origin, and interventions such as neurofeedback. PMID:28338742

  15. Writing in dyslexia: product and process.

    PubMed

    Morken, Frøydis; Helland, Turid

    2013-08-01

    Research on dyslexia has largely centred on reading. The aim of this study was to assess the writing of 13 children with and 28 without dyslexia at age 11 years. A programme for keystroke logging was used to allow recording of typing activity as the children performed a sentence dictation task. Five sentences were read aloud twice each. The task was to type the sentence as correctly as possible, with no time constraints. The data were analysed from a product (spelling, grammar and semantics) and process (transcription fluency and revisions) perspective, using repeated measures ANOVA and t-tests to investigate group differences. Furthermore, the data were correlated with measures of rapid automatic naming and working memory. Results showed that the group with dyslexia revised their texts as much as the typical group, but they used more time, and the result was poorer. Moreover, rapid automatic naming correlated with transcription fluency, and working memory correlated with the number of semantic errors. This shows that dyslexia is generally not an issue of effort and that cognitive skills that are known to be important for reading also affect writing. Copyright © 2013 John Wiley & Sons, Ltd.

  16. Rubus: A compiler for seamless and extensible parallelism.

    PubMed

    Adnan, Muhammad; Aslam, Faisal; Nawaz, Zubair; Sarwar, Syed Mansoor

    2017-01-01

    Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special-purpose processing unit called the Graphic Processing Unit (GPU), originally designed for 2D/3D games, is now available for general-purpose use in computers and mobile devices. However, traditional programming languages, which were designed to work with machines having single-core CPUs, cannot efficiently utilize the parallelism available on multi-core processors. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of it in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent on code optimizations. This paper proposes a new open-source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without a programmer's expertise in parallel programming. For five different benchmarks, an average speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores, while for a matrix multiplication benchmark an average execution speedup of 84 times has been achieved by Rubus on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program.

  17. Rubus: A compiler for seamless and extensible parallelism

    PubMed Central

    Adnan, Muhammad; Aslam, Faisal; Sarwar, Syed Mansoor

    2017-01-01

    Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special-purpose processing unit called the Graphic Processing Unit (GPU), originally designed for 2D/3D games, is now available for general-purpose use in computers and mobile devices. However, traditional programming languages, which were designed to work with machines having single-core CPUs, cannot efficiently utilize the parallelism available on multi-core processors. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of it in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent on code optimizations. This paper proposes a new open-source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without a programmer's expertise in parallel programming. For five different benchmarks, an average speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores, while for a matrix multiplication benchmark an average execution speedup of 84 times has been achieved by Rubus on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program. PMID:29211758
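
    Note: Rubus itself operates on Java programs and targets the GPU; the Python fragment below is only a concept sketch of what any auto-parallelizing compiler does, namely proving loop iterations independent and then distributing them.

    ```python
    # Concept sketch only (Rubus works on Java programs and targets the
    # GPU): an auto-parallelizing compiler proves loop iterations
    # independent, then distributes them, here mimicked with a pool.
    from multiprocessing import Pool

    def body(i):
        # pure loop body with no cross-iteration dependence: the
        # property the compiler must establish before parallelizing
        return i * i

    if __name__ == "__main__":
        with Pool() as pool:
            total = sum(pool.map(body, range(100_000), chunksize=5_000))
        print(total)
    ```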

  18. Automating the generation of finite element dynamical cores with Firedrake

    NASA Astrophysics Data System (ADS)

    Ham, David; Mitchell, Lawrence; Homolya, Miklós; Luporini, Fabio; Gibson, Thomas; Kelly, Paul; Cotter, Colin; Lange, Michael; Kramer, Stephan; Shipton, Jemma; Yamazaki, Hiroe; Paganini, Alberto; Kärnä, Tuomas

    2017-04-01

    The development of a dynamical core is an increasingly complex software engineering undertaking. As the equations become more complete, the discretisations more sophisticated and the hardware acquires ever more fine-grained parallelism and deeper memory hierarchies, the problem of building, testing and modifying dynamical cores becomes increasingly complex. Here we present Firedrake, a code generation system for the finite element method with specialist features designed to support the creation of geoscientific models. Using Firedrake, the dynamical core developer writes the partial differential equations in weak form in a high level mathematical notation. Appropriate function spaces are chosen and time stepping loops written at the same high level. When the programme is run, Firedrake generates high performance C code for the resulting numerics which are executed in parallel. Models in Firedrake typically take a tiny fraction of the lines of code required by traditional hand-coding techniques. They support more sophisticated numerics than are easily achieved by hand, and the resulting code is frequently higher performance. Critically, debugging, modifying and extending a model written in Firedrake is vastly easier than by traditional methods due to the small, highly mathematical code base. Firedrake supports a wide range of key features for dynamical core creation: A vast range of discretisations, including both continuous and discontinuous spaces and mimetic (C-grid-like) elements which optimally represent force balances in geophysical flows. High aspect ratio layered meshes suitable for ocean and atmosphere domains. Curved elements for high accuracy representations of the sphere. Support for non-finite element operators, such as parametrisations. Access to PETSc, a world-leading library of programmable linear and nonlinear solvers. High performance adjoint models generated automatically by symbolically reasoning about the forward model. This poster will present the key features of the Firedrake system, as well as those of Gusto, an atmospheric dynamical core, and Thetis, a coastal ocean model, both of which are written in Firedrake.
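
    Note: for flavour, the sketch below shows the workflow described above on a standard Helmholtz-type demo problem. It assumes a working Firedrake installation, and API details may vary between versions.

    ```python
    # Minimal Firedrake-style example of the workflow described above
    # (standard Helmholtz demo problem; assumes a working Firedrake
    # installation, and API details may vary between versions).
    from firedrake import *

    mesh = UnitSquareMesh(32, 32)
    V = FunctionSpace(mesh, "CG", 1)

    u = TrialFunction(V)
    v = TestFunction(V)
    f = Function(V)
    x, y = SpatialCoordinate(mesh)
    f.interpolate((1 + 8*pi*pi) * cos(2*pi*x) * cos(2*pi*y))

    # the weak form, written directly in UFL
    a = (dot(grad(u), grad(v)) + u * v) * dx
    L = f * v * dx

    uh = Function(V)
    solve(a == L, uh)   # high-performance parallel C code is generated here
    ```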

  19. Incorporating Code-Based Software in an Introductory Statistics Course

    ERIC Educational Resources Information Center

    Doehler, Kirsten; Taylor, Laura

    2015-01-01

    This article is based on the experiences of two statistics professors who have taught students to write and effectively utilize code-based software in a college-level introductory statistics course. Advantages of using software and code-based software in this context are discussed. Suggestions are made on how to ease students into using code with…

  20. Automated encoding of clinical documents based on natural language processing.

    PubMed

    Friedman, Carol; Shagina, Lyudmila; Lussier, Yves; Hripcsak, George

    2004-01-01

    The aim of this study was to develop a method based on natural language processing (NLP) that automatically maps an entire clinical document to codes with modifiers and to quantitatively evaluate the method. An existing NLP system, MedLEE, was adapted to automatically generate codes. The method involves matching of structured output generated by MedLEE consisting of findings and modifiers to obtain the most specific code. Recall and precision applied to Unified Medical Language System (UMLS) coding were evaluated in two separate studies. Recall was measured using a test set of 150 randomly selected sentences, which were processed using MedLEE. Results were compared with a reference standard determined manually by seven experts. Precision was measured using a second test set of 150 randomly selected sentences from which UMLS codes were automatically generated by the method and then validated by experts. Recall of the system for UMLS coding of all terms was .77 (95% CI .72-.81), and for coding terms that had corresponding UMLS codes recall was .83 (.79-.87). Recall of the system for extracting all terms was .84 (.81-.88). Recall of the experts ranged from .69 to .91 for extracting terms. The precision of the system was .89 (.87-.91), and precision of the experts ranged from .61 to .91. Extraction of relevant clinical information and UMLS coding were accomplished using a method based on NLP. The method appeared to be comparable to or better than six experts. The advantage of the method is that it maps text to codes along with other related information, rendering the coded output suitable for effective retrieval.
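
    Note: the "most specific code" matching step described above can be caricatured in a few lines of Python; the lookup table and codes below are invented placeholders, not MedLEE or UMLS output.

    ```python
    # Caricature of the "most specific code" matching step (invented
    # table; the codes are placeholders, not MedLEE/UMLS output):
    # try finding+modifier first, then fall back to the bare finding.
    CODE_TABLE = {
        ("cough", "chronic"): "C-0002",   # placeholder code
        ("cough", None):      "C-0001",   # placeholder code
    }

    def most_specific_code(finding, modifiers):
        for m in modifiers:
            if (finding, m) in CODE_TABLE:
                return CODE_TABLE[(finding, m)]
        return CODE_TABLE.get((finding, None))

    print(most_specific_code("cough", ["chronic"]))  # -> C-0002
    print(most_specific_code("cough", []))           # -> C-0001
    ```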

  1. Transoptr — A second order beam transport design code with optimization and constraints

    NASA Astrophysics Data System (ADS)

    Heighway, E. A.; Hutcheon, R. M.

    1981-08-01

    This code was written initially to design an achromatic and isochronous reflecting magnet and has been extended to compete in capability (for constrained problems) with TRANSPORT. Its advantage is its flexibility in that the user writes a routine to describe his transport system. The routine allows the definition of general variables from which the system parameters can be derived. Further, the user can write any constraints he requires as algebraic equations relating the parameters. All variables may be used in either a first or second order optimization.
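
    Note: the usage pattern described above (a user-written routine deriving system parameters from general variables, plus user-written algebraic constraints handed to an optimizer) can be sketched in Python with SciPy; the toy beamline model below is invented, and the original code is Fortran.

    ```python
    # Conceptual sketch of the usage pattern in Python/SciPy (toy model,
    # invented numbers; the original code is Fortran): a user routine
    # derives system behaviour from general variables, and user-written
    # algebraic constraints relate the parameters during optimization.
    from scipy.optimize import minimize

    def system(v):
        # user-written routine: derive a figure of merit (a toy "beam
        # size" mismatch) from the general variables
        quad_strength, drift_len = v
        return (1.0 - quad_strength * drift_len) ** 2

    constraints = [
        # user-written algebraic constraint relating the parameters
        {"type": "eq", "fun": lambda v: v[0] + v[1] - 2.0},
    ]

    result = minimize(system, x0=[0.5, 1.5], constraints=constraints)
    print(result.x, result.fun)   # -> roughly [1, 1], 0
    ```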

  2. 14 CFR 1215.108 - Defining user service requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... to NASA Headquarters, Code OX, Space Network Division, Washington, DC 20546. Upon review and... submitted in writing to both NASA Headquarters, Code OX, Space Network Division, and GSFC, Code 501.... Request for services within priority groups shall be negotiated with non-NASA users on a first come, first...

  3. Comparison of blogged and written reflections in two medicine clerkships.

    PubMed

    Fischer, Melissa A; Haley, Heather-Lyn; Saarinen, Carrie L; Chretien, Katherine C

    2011-02-01

    Academic medical centres may adopt new learning technologies with little data on their effectiveness or on how they compare with traditional methodologies. We conducted a comparative study of student reflective writings produced using either an electronic (blog) format or a traditional written (essay) format to assess differences in content, depth of reflection and student preference. Students in internal medicine clerkships at two US medical schools during the 2008-2009 academic year were quasi-randomly assigned to one of two study arms according to which they were asked to either write a traditional reflective essay and subsequently join in faculty-moderated, small-group discussion (n = 45), or post two writings to a faculty-moderated group blog and provide at least one comment on a peer's posts (n = 50). Examples from a pilot block were used to refine coding methods and determine inter-rater reliability. Writings were coded for theme and level of reflection by two blinded authors; these coding processes reached inter-rater reliabilities of 91% and 80%, respectively. Anonymous pre- and post-clerkship surveys assessed student perceptions and preferences. Student writing addressed seven main themes: (i) being humanistic; (ii) professional behaviour; (iii) understanding caregiving relationships; (iv) being a student; (v) clinical learning; (vi) dealing with death and dying, and (vii) the health care system, quality, safety and public health. The distribution of themes was similar across institutions and study arms. The level of reflection did not differ between study arms. Post-clerkship surveys showed that student preferences for blogging or essay writing were predicted by experience, with the majority favouring the method they had used. Our study suggests there is no significant difference in themes addressed or levels of reflection achieved when students complete a similar assignment via online blogging or traditional essay writing. Given this, faculty staff should feel comfortable in utilising the blog format for reflective exercises. Faculty members could consider the option of using either format to address different learning styles of students.

  4. The Utility of Writing Assignments in Undergraduate Bioscience

    PubMed Central

    Libarkin, Julie; Ording, Gabriel

    2012-01-01

    We tested the hypothesis that engagement in a few, brief writing assignments in a nonmajors science course can improve student ability to convey critical thought about science. A sample of three papers written by students (n = 30) was coded for presence and accuracy of elements related to scientific writing. Scores for different aspects of scientific writing were significantly correlated, suggesting that students recognized relationships between components of scientific thought. We found that students' ability to write about science topics and state conclusions based on data improved over the course of three writing assignments, while the abilities to state a hypothesis and draw clear connections between human activities and environmental impacts did not improve. Three writing assignments generated significant change in student ability to write scientifically, although our results suggest that three is an insufficient number to generate complete development of scientific writing skills. PMID:22383616

  5. Empirical analysis of knowledge bases to support structured output in the Arden syntax.

    PubMed

    Jenders, Robert A

    2013-01-01

    Structured output has been suggested for the Arden Syntax to facilitate interoperability. The objective was to tabulate the components of WRITE statements in a corpus of medical logic modules (MLMs) in order to validate the requirement of structured output. WRITE statements were tabulated in 258 MLMs from 2 organizations. In a total of 351 WRITE statements, email destinations (226) predominated, and 39 orders and 40 coded output elements also were tabulated. Free-text strings predominated as the message data. Arden WRITE statements contain considerable potentially structured data now included as free text. A future, normative structured WRITE statement must address a variety of data types and destinations.

  6. Braille Instruction and Writing Equipment: Reference Circular 86-3.

    ERIC Educational Resources Information Center

    Library of Congress, Washington, DC. National Library Service for the Blind and Physically Handicapped.

    This reference circular lists selected braille instructional materials and braille writing equipment and supplies currently available for purchase. A total of eight braille code books, seven instruction manuals for braille transcribing, and 17 instructional manuals for braille reading are listed. Suggestions are presented about braille instruction…

  7. Language, literacy, attentional behaviors, and instructional quality predictors of written composition for first graders

    PubMed Central

    Kim, Young-Suk; Otaiba, Stephanie Al; Sidler, Jessica Folsom; Gruelich, Luana

    2013-01-01

    We had two primary purposes in the present study: (1) to examine unique child-level predictors of written composition which included language skills, literacy skills (e.g., reading and spelling), and attentiveness and (2) to examine whether instructional quality (quality in responsiveness and individualization, and quality in spelling and writing instruction) is uniquely related to written composition for first-grade children (N = 527). Children’s written composition was evaluated on substantive quality (ideas, organization, word choice, and sentence flow) and writing conventions (spelling, mechanics, and handwriting). Results revealed that for the substantive quality of writing, children’s grammatical knowledge, reading comprehension, letter writing automaticity, and attentiveness were uniquely related. Teachers’ responsiveness was also uniquely related to the substantive quality of written composition after accounting for child predictors and other instructional quality variables. For the writing conventions outcome, children’s spelling and attentiveness were uniquely related, but instructional quality was not. These results suggest the importance of paying attention to multiple component skills such as language, literacy, and behavioral factors as well as teachers’ responsiveness for writing development. PMID:24062600

  8. Want to Improve Undergraduate Thesis Writing? Engage Students and Their Faculty Readers in Scientific Peer Review

    PubMed Central

    Reynolds, Julie A.; Thompson, Robert J.

    2011-01-01

    One of the best opportunities that undergraduates have to learn to write like a scientist is to write a thesis after participating in faculty-mentored undergraduate research. But developing writing skills doesn't happen automatically, and there are significant challenges associated with offering writing courses and with individualized mentoring. We present a hybrid model in which students have the structural support of a course plus the personalized benefits of working one-on-one with faculty. To optimize these one-on-one interactions, the course uses BioTAP, the Biology Thesis Assessment Protocol, to structure engagement in scientific peer review. By assessing theses written by students who took this course and comparable students who did not, we found that our approach not only improved student writing but also helped faculty members across the department—not only those teaching the course—to work more effectively and efficiently with student writers. Students who enrolled in this course were more likely to earn highest honors than students who only worked one-on-one with faculty. Further, students in the course scored significantly better on all higher-order writing and critical-thinking skills assessed. PMID:21633069

  9. Persistent left unilateral mirror writing: A neuropsychological case study.

    PubMed

    Angelillo, Valentina G; De Lucia, Natascia; Trojano, Luigi; Grossi, Dario

    2010-09-01

    Mirror writing (MW) is a rare disorder in which a script runs in direction opposite to normal and individual letters are reversed. The disorder generally occurs after left-hemisphere lesions, is transient and is observed on the left hand, whereas usually motor impairments prevent assessment of direction of right handwriting. We describe a left-handed patient with complete left hand mirror writing, still evident 2 years after a hemorrhagic stroke in left nucleo-capsular region. Since the patient could write with his right hand he underwent several writing tasks with either hand, and a thorough assessment to clarify the nature of MW. MW was evident in writing to dictation with left hand only, both in right and left hemispace, but the patient could modify his behavior when a verbal instruction was provided. No mirror errors were found in reading words, in copying geometric figures and in spatial orientation tasks. MW in our patient could be accounted for by a failure in automatic transformation of grapho-motor programs to write with the left hand. A lack of concern (a sort of anosodiaphoria) and a poor cognitive flexibility could contribute to long-term persistence of MW. Copyright © 2010 Elsevier Inc. All rights reserved.

  10. Want to improve undergraduate thesis writing? Engage students and their faculty readers in scientific peer review.

    PubMed

    Reynolds, Julie A; Thompson, Robert J

    2011-01-01

    One of the best opportunities that undergraduates have to learn to write like a scientist is to write a thesis after participating in faculty-mentored undergraduate research. But developing writing skills doesn't happen automatically, and there are significant challenges associated with offering writing courses and with individualized mentoring. We present a hybrid model in which students have the structural support of a course plus the personalized benefits of working one-on-one with faculty. To optimize these one-on-one interactions, the course uses BioTAP, the Biology Thesis Assessment Protocol, to structure engagement in scientific peer review. By assessing theses written by students who took this course and comparable students who did not, we found that our approach not only improved student writing but also helped faculty members across the department--not only those teaching the course--to work more effectively and efficiently with student writers. Students who enrolled in this course were more likely to earn highest honors than students who only worked one-on-one with faculty. Further, students in the course scored significantly better on all higher-order writing and critical-thinking skills assessed.

  11. Relationships between Translation and Transcription Processes during fMRI Connectivity Scanning and Coded Translation and Transcription in Writing Products after Scanning in Children with and without Transcription Disabilities

    PubMed Central

    Wallis, Peter; Richards, Todd; Boord, Peter; Abbott, Robert; Berninger, Virginia

    2018-01-01

    Students with transcription disabilities (dysgraphia/impaired handwriting, n = 13 or dyslexia/impaired word spelling, n = 16) or without transcription disabilities (controls) completed transcription and translation (idea generating, planning, and creating) writing tasks during fMRI connectivity scanning and compositions after scanning, which were coded for transcription and translation variables. Compositions in both groups showed diversity in genre beyond the usual narrative-expository distinction; the groups differed in coded transcription but not translation variables. For the control group, specific transcription or translation tasks during scanning correlated with corresponding coded transcription or translation skills in composition, but connectivity during scanning was not correlated with coded handwriting during composing in the dysgraphia group, and connectivity during translating was not correlated with any coded variable during composing in the dyslexia group. Results are discussed in reference to the trend in neuroscience to use connectivity from relevant seed points while performing tasks and trends in education to recognize the generativity (creativity) of composing at both the genre and syntax levels. PMID:29600113

  12. Translating expert system rules into Ada code with validation and verification

    NASA Technical Reports Server (NTRS)

    Becker, Lee; Duckworth, R. James; Green, Peter; Michalson, Bill; Gosselin, Dave; Nainani, Krishan; Pease, Adam

    1991-01-01

    The purpose of this ongoing research and development program is to develop software tools which enable the rapid development, upgrading, and maintenance of embedded real-time artificial intelligence systems. The goals of this phase of the research were to investigate the feasibility of developing software tools which automatically translate expert system rules into Ada code and to develop methods for performing validation and verification testing of the resultant expert system. A prototype system was demonstrated which automatically translated rules from an Air Force expert system, and prototype tools detected errors in the execution of the resultant system. The method and prototype tools for converting AI representations into Ada code by converting the rules into Ada code modules and then linking them with an Activation Framework based run-time environment to form an executable load module are discussed. This method is based upon the use of Evidence Flow Graphs, which are a data flow representation for intelligent systems. The development of prototype test generation and evaluation software which was used to test the resultant code is discussed. This testing was performed automatically using Monte-Carlo techniques based upon a constraint-based description of the required performance for the system.
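
    Note: the rule-to-code-module idea can be suggested with a toy translator (Python standing in for Ada, with invented rules); each if-then rule becomes a generated function that a run-time framework could schedule.

    ```python
    # Toy of the rule-to-code-module idea (Python standing in for Ada;
    # the rules are invented): each if-then rule is translated into a
    # generated function that a run-time framework could schedule.
    RULES = [
        ("temp > 100", "state['alarm'] = True"),
        ("fuel < 10",  "state['warn'] = True"),
    ]

    def translate(rules):
        src = []
        for i, (cond, action) in enumerate(rules):
            src.append(f"def rule_{i}(state):")
            src.append(f"    if eval({cond!r}, {{}}, state):")
            src.append(f"        exec({action!r}, {{}}, {{'state': state}})")
        return "\n".join(src)

    namespace = {}
    exec(translate(RULES), namespace)

    state = {"temp": 120, "fuel": 50}
    for i in range(len(RULES)):
        namespace[f"rule_{i}"](state)
    print(state)   # the 'alarm' rule fires, the 'warn' rule does not
    ```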

  13. Using Invitational Learning to Address Writing Competence for Middle School Students with Disabilities

    ERIC Educational Resources Information Center

    Ornelles, Cecily; Black, Rhonda S.

    2012-01-01

    This study describes the process of creating an Invitational Learning environment to improve the writing competence of middle school students in two special education classes. Teacher-student interactions were coded according to Purkey and Novak's (1996) Intentionality/Invitation Quadrant with levels corresponding to intentionally disinviting,…

  14. Categorization and Analysis of Explanatory Writing in Mathematics

    ERIC Educational Resources Information Center

    Craig, Tracy S.

    2011-01-01

    The aim of this article is to present a scheme for coding and categorizing students' written explanations of mathematical problem-solving activities. The scheme was used successfully within a study project carried out to determine whether student problem-solving behaviour could be positively affected by writing explanatory strategies to…

  15. A Rather Intelligent Language Teacher.

    ERIC Educational Resources Information Center

    Cerri, Stefano; Breuker, Joost

    1981-01-01

    Characteristics of DART (Didactic Augmented Recursive Transition), an ATN-based system for writing intelligent computer assisted instruction (ICAI) programs that is available on the PLATO system are described. DART allows writing programs in an ATN dialect, compiling them into machine code for the PLATO system, and executing them as if the original…

  16. Longitudinal Relations Between Parental Writing Support and Preschoolers’ Language and Literacy Skills

    PubMed Central

    Bindman, Samantha W.; Hindman, Annemarie H.; Aram, Dorit; Morrison, Frederick J.

    2013-01-01

    Parental writing support was examined over time and in relation to children’s language and literacy skills. Seventy-seven parents and their preschoolers were videotaped writing an invitation together twice during one year. Parental writing support was coded at the level of the letter to document parents’ graphophonemic support (letter–sound correspondence), print support (letter formation), and demand for precision (expectation for correcting writing errors). Parents primarily relied on only a couple of print (i.e., parent writing the letter alone) and graphophonemic (i.e., saying the word as a whole, dictating letters as children write) strategies. Graphophonemic and print support in preschool predicted children’s decoding skills, and graphophonemic support also predicted children’s future phonological awareness. Neither type of support predicted children’s vocabulary scores. Demand for precision occurred infrequently and was unrelated to children’s outcomes. Findings demonstrate the importance of parental writing support for augmenting children’s literacy skills. PMID:25045186

  17. Writing to dictation and handwriting performance among Chinese children with dyslexia: relationships with orthographic knowledge and perceptual-motor skills.

    PubMed

    Cheng-Lai, Alice; Li-Tsang, Cecilia W P; Chan, Alan H L; Lo, Amy G W

    2013-10-01

    The purpose of this study was to investigate the relationships between writing to dictation, handwriting, orthographic, and perceptual-motor skills among Chinese children with dyslexia. A cross-sectional design was used. A total of 45 third graders with dyslexia were assessed. Results of stepwise multiple regression models showed that Chinese character naming was the only predictor associated with word dictation (β=.32); handwriting speed was related to deficits in rapid automatic naming (β=-.36) and saccadic efficiency (β=-.29), and visual-motor integration predicted both the number of characters exceeding the grid (β=-.41) and the variability of character size (β=-.38). The findings provided support for a multi-stage working memory model of writing, explaining the possible underlying mechanism of writing-to-dictation and handwriting difficulties. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Modelling Metamorphism by Abstract Interpretation

    NASA Astrophysics Data System (ADS)

    Dalla Preda, Mila; Giacobazzi, Roberto; Debray, Saumya; Coogan, Kevin; Townsend, Gregg M.

    Metamorphic malware apply semantics-preserving transformations to their own code in order to foil detection systems based on signature matching. In this paper we consider the problem of automatically extracting metamorphic signatures from such malware. We introduce a semantics for self-modifying code, later called phase semantics, and prove its correctness by showing that it is an abstract interpretation of the standard trace semantics. Phase semantics precisely models the metamorphic code behavior by providing a set of traces of programs which correspond to the possible evolutions of the metamorphic code during execution. We show that metamorphic signatures can be automatically extracted by abstract interpretation of the phase semantics, and that regular metamorphism can be modelled as a finite-state automaton abstraction of the phase semantics.

  19. The UPSF code: a metaprogramming-based high-performance automatically parallelized plasma simulation framework

    NASA Astrophysics Data System (ADS)

    Gao, Xiatian; Wang, Xiaogang; Jiang, Binhao

    2017-10-01

    UPSF (Universal Plasma Simulation Framework) is a new plasma simulation code designed for maximum flexibility using cutting-edge techniques supported by the C++17 standard. Through the use of metaprogramming techniques, UPSF provides arbitrary-dimensional data structures and methods to support various kinds of plasma simulation models, such as Vlasov, particle-in-cell (PIC), fluid, and Fokker-Planck models, and their variants and hybrid methods. Through C++ metaprogramming, a single code can be applied to systems of arbitrary dimension with no loss of performance. UPSF can also automatically parallelize the distributed data structure and accelerate matrix and tensor operations via BLAS. A three-dimensional particle-in-cell code has been developed based on UPSF. Two test cases, Landau damping and the Weibel instability for the electrostatic and electromagnetic cases respectively, are presented to show the validation and performance of the UPSF code.

  20. An Accessible User Interface for Geoscience and Programming

    NASA Astrophysics Data System (ADS)

    Sevre, E. O.; Lee, S.

    2012-12-01

    The goal of this research is to develop an interface that will simplify user interaction with software for scientists. The motivating factor of the research is to develop tools that assist scientists with limited motor skills with the efficient generation and use of software tools. Reliance on computers and programming is increasing in the world of geology, and it is increasingly important for geologists and geophysicists to have the computational resources to use advanced software and edit programs for their research. I have developed a prototype of a program to help geophysicists write programs using a simple interface that requires only simple single-mouse-clicks to input code. It is my goal to minimize the amount of typing necessary to create simple programs and scripts to increase accessibility for people with disabilities limiting fine motor skills. This interface can be adapted for various programming and scripting languages. Using this interface will simplify development of code for C/C++, Java, and GMT, and can be expanded to support any other text based programming language. The interface is designed around the concept of maximizing the amount of code that can be written using a minimum number of clicks and typing. The screen is split into two sections: a list of click-commands is on the left hand side, and a text area is on the right hand side. When the user clicks on a command on the left hand side the applicable code is automatically inserted at the insertion point in the text area. Currently in the C/C++ interface, there are commands for common code segments that are often used, such as for loops, comments, print statements, and structured code creation. The primary goal is to provide an interface that will work across many devices for developing code. A simple prototype has been developed for the iPad. Due to the limited number of devices that an iOS application can be used with, the code has been re-written in Java to run on a wider range of devices. Currently, the software works in prototype mode, and our goal is to continue development to create software that can benefit a wide range of people working in the geosciences, making code development practical and accessible for a wider audience of scientists. An interface like this also reduces the potential for errors by reusing known working code.
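
    Note: the click-to-insert interaction described above can be sketched in a few lines of Python/Tkinter (an illustration of the interface concept with an invented snippet list, not the authors' iPad or Java code).

    ```python
    # Sketch of the click-to-insert interface concept (invented snippet
    # list; not the authors' iPad/Java code): command buttons on the
    # left, a text area on the right, one click inserts code at the cursor.
    import tkinter as tk

    SNIPPETS = {
        "for loop": "for (int i = 0; i < n; i++) {\n    \n}\n",
        "print":    'printf("%d\\n", value);\n',
        "comment":  "/*  */\n",
    }

    root = tk.Tk()
    buttons = tk.Frame(root)
    buttons.pack(side="left", fill="y")
    text = tk.Text(root, width=60, height=20)
    text.pack(side="right", fill="both", expand=True)

    for name, code in SNIPPETS.items():
        tk.Button(buttons, text=name,
                  command=lambda c=code: text.insert("insert", c)).pack(fill="x")

    root.mainloop()
    ```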

  1. Status of Metric Conversion: A Survey of U.S. Standards Writing Organizations.

    DTIC Science & Technology

    1982-05-01

    ...to and consistent with metrication of the ASME Boiler and Pressure Vessel Code. The Electrical Apparatus Service Association is a trade association...metrication of TEMA Standards will be compatible to and consistent with metrication of the ASME Boiler and Pressure Vessel Code. TEMA’s metrication...

  2. 78 FR 77773 - Shipping Coordinating Committee; Notice of Committee Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-24

    ... the International Code of safety for ships using gases or low flash-point fuels (IGF Code) Members of..., by fax at (202) 372-8283, or in writing at Commandant (CG-OES-1), U.S. Coast Guard Stop 7509, 2703...

  3. Fully automatic adjoints: a robust and efficient mechanism for generating adjoint ocean models

    NASA Astrophysics Data System (ADS)

    Ham, D. A.; Farrell, P. E.; Funke, S. W.; Rognes, M. E.

    2012-04-01

    The problem of generating and maintaining adjoint models is sufficiently difficult that typically only the most advanced and well-resourced community ocean models achieve it. There are two current technologies which each suffer from their own limitations. Algorithmic differentiation, also called automatic differentiation, is employed by models such as the MITGCM [2] and the Alfred Wegener Institute model FESOM [3]. This technique is very difficult to apply to existing code, and requires a major initial investment to prepare the code for automatic adjoint generation. AD tools may also have difficulty with code employing modern software constructs such as derived data types. An alternative is to formulate the adjoint differential equation and to discretise this separately. This approach, known as the continuous adjoint and employed in ROMS [4], has the disadvantage that two different model code bases must be maintained and manually kept synchronised as the model develops. The discretisation of the continuous adjoint is not automatically consistent with that of the forward model, producing an additional source of error. The alternative presented here is to formulate the flow model in the high level language UFL (Unified Form Language) and to automatically generate the model using the software of the FEniCS project. In this approach it is the high level code specification which is differentiated, a task very similar to the formulation of the continuous adjoint [5]. However since the forward and adjoint models are generated automatically, the difficulty of maintaining them vanishes and the software engineering process is therefore robust. The scheduling and execution of the adjoint model, including the application of an appropriate checkpointing strategy is managed by libadjoint [1]. In contrast to the conventional algorithmic differentiation description of a model as a series of primitive mathematical operations, libadjoint employs a new abstraction of the simulation process as a sequence of discrete equations which are assembled and solved. It is the coupling of the respective abstractions employed by libadjoint and the FEniCS project which produces the adjoint model automatically, without further intervention from the model developer. This presentation will demonstrate this new technology through linear and non-linear shallow water test cases. The exceptionally simple model syntax will be highlighted and the correctness of the resulting adjoint simulations will be demonstrated using rigorous convergence tests.
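
    Note: the user-facing side of this approach looks roughly like the sketch below, written in the dolfin-adjoint/pyadjoint style of API (module and function names vary between versions): the forward model is stated once in UFL, the solve is recorded, and the adjoint-based gradient is a single call.

    ```python
    # Hedged sketch in the dolfin-adjoint/pyadjoint style of API (module
    # and function names vary between versions): the forward model is
    # written once in UFL and the adjoint-based gradient is one call.
    from fenics import *
    from fenics_adjoint import *

    mesh = UnitSquareMesh(16, 16)
    V = FunctionSpace(mesh, "CG", 1)

    m = Function(V)                        # control, e.g. a source term
    u = Function(V)
    v = TestFunction(V)

    F = inner(grad(u), grad(v)) * dx - m * v * dx
    solve(F == 0, u, DirichletBC(V, 0.0, "on_boundary"))

    J = assemble(u * u * dx)               # scalar functional of interest
    dJdm = compute_gradient(J, Control(m)) # the adjoint runs automatically
    ```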

  4. Enhancing Student Writing and Computer Programming with LATEX and MATLAB in Multivariable Calculus

    ERIC Educational Resources Information Center

    Sullivan, Eric; Melvin, Timothy

    2016-01-01

    Written communication and computer programming are foundational components of an undergraduate degree in the mathematical sciences. All lower-division mathematics courses at our institution are paired with computer-based writing, coding, and problem-solving activities. In multivariable calculus we utilize MATLAB and LATEX to have students explore…

  5. Evaluation Checklist for Student Writing in Grades K-3, Ottawa County.

    ERIC Educational Resources Information Center

    Ottawa County Office of Education, OH.

    Developed to assist teachers in Ottawa County, Ohio, in monitoring students' pupil performance objectives (PPOs) in grades K-3, this writing evaluation form is the primary record keeping tool in the Competency Based Education (CBE) Program. The form consists of: (1) the evaluation checklist; (2) the intervention code; and (3) record keeping…

  6. Exogean: a framework for annotating protein-coding genes in eukaryotic genomic DNA

    PubMed Central

    Djebali, Sarah; Delaplace, Franck; Crollius, Hugues Roest

    2006-01-01

    Background: Accurate and automatic gene identification in eukaryotic genomic DNA is more than ever of crucial importance to efficiently exploit the large volume of assembled genome sequences available to the community. Automatic methods have always been considered less reliable than human expertise. This is illustrated in the EGASP project, where reference annotations against which all automatic methods are measured are generated by human annotators and experimentally verified. We hypothesized that replicating the accuracy of human annotators in an automatic method could be achieved by formalizing the rules and decisions that they use, in a mathematical formalism. Results: We have developed Exogean, a flexible framework based on directed acyclic colored multigraphs (DACMs) that can represent biological objects (for example, mRNA, ESTs, protein alignments, exons) and relationships between them. Graphs are analyzed to process the information according to rules that replicate those used by human annotators. Simple individual starting objects given as input to Exogean are thus combined and synthesized into complex objects such as protein coding transcripts. Conclusion: We show here, in the context of the EGASP project, that Exogean is currently the method that best reproduces protein coding gene annotations from human experts, in terms of identifying at least one exact coding sequence per gene. We discuss current limitations of the method and several avenues for improvement. PMID:16925841
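
    Note: the DACM data structure itself is easy to picture; the Python/networkx fragment below illustrates the structure only, with invented objects, and is not Exogean's implementation.

    ```python
    # Illustration of the DACM data structure with networkx (invented
    # objects; not Exogean's implementation): nodes are biological
    # objects and each edge is "colored" by its relationship type.
    import networkx as nx

    dacm = nx.MultiDiGraph()
    dacm.add_node("EST_1", kind="EST")
    dacm.add_node("exon_A", kind="exon")
    dacm.add_node("mRNA_X", kind="mRNA")

    dacm.add_edge("EST_1", "exon_A", color="supports")
    dacm.add_edge("exon_A", "mRNA_X", color="part_of")

    for u, v, data in dacm.edges(data=True):
        print(u, f"--{data['color']}-->", v)
    ```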

  7. Brain dopamine and kinematics of graphomotor functions.

    PubMed

    Lange, Klaus W; Mecklinger, Lara; Walitza, Susanne; Becker, Georg; Gerlach, Manfred; Naumann, Markus; Tucha, Oliver

    2006-10-01

    Three experiments were performed in an attempt to achieve a better understanding of the effect of dopamine on handwriting. In the first experiment, kinematic aspects of handwriting movements were compared between healthy participants and patients with Parkinson's disease (PD) on their usual dopaminergic treatment and following withdrawal of dopaminergic medication. In the second experiment, the writing performance of healthy participants with a hyperechogenicity of the substantia nigra as detected by transcranial sonography (TCS) was compared with the performance of healthy participants with low echogenicity of the substantia nigra. The third experiment examined the effect of central dopamine reduction on kinematic aspects of handwriting movements in healthy adults using acute phenylalanine and tyrosine depletion (APTD). A digitising tablet was used for the assessment of handwriting movements. Participants were asked to perform a simple writing task. Movement time, distance, velocity, acceleration and measures of fluency of handwriting movements were measured. The kinematic analysis of handwriting movements revealed that alterations of central dopaminergic neurotransmission adversely affect movement execution during handwriting. In comparison to the automatic processing of handwriting movements displayed by control participants, participants with an altered dopaminergic neurotransmission shifted from an automatic to a controlled processing of movement execution. Central dopamine appears to be of particular importance with regard to the automatic execution of well-learned movements.

  8. Design and implementation of online automatic judging system

    NASA Astrophysics Data System (ADS)

    Liang, Haohui; Chen, Chaojie; Zhong, Xiuyu; Chen, Yuefeng

    2017-06-01

    To address the low efficiency and poor reliability of manual judging in programming training and competitions, we designed an Online Automatic Judging (OAJ) system. The OAJ system, comprising a sandboxed judging side and a Web side, automatically compiles and runs the submitted code and generates evaluation scores and corresponding reports. To prevent malicious code from damaging the system, the OAJ system runs submissions in a sandbox, ensuring the safety of the system. The OAJ system uses thread pools to achieve parallel testing and adopts database optimization mechanisms, such as horizontal table splitting, to improve system performance and resource utilization. The test results show that the system has high performance, high reliability, high stability and excellent extensibility.
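
    Note: the core judging step can be caricatured as follows (a simplified Python sketch, not the OAJ implementation; a real judge adds OS-level sandboxing and memory limits).

    ```python
    # Simplified sketch of the judging step (not the OAJ implementation;
    # a real judge adds OS-level sandboxing and memory limits): run a
    # submitted program with a time limit and compare its output.
    import subprocess

    def judge(cmd, stdin_data, expected, timeout_s=2):
        try:
            run = subprocess.run(cmd, input=stdin_data, timeout=timeout_s,
                                 capture_output=True, text=True)
        except subprocess.TimeoutExpired:
            return "Time Limit Exceeded"
        if run.returncode != 0:
            return "Runtime Error"
        return "Accepted" if run.stdout.strip() == expected.strip() else "Wrong Answer"

    print(judge(["python3", "-c", "print(int(input()) * 2)"], "21", "42"))
    ```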

  9. Beyond diagnoses: family medicine core themes in student reflective writing.

    PubMed

    Bradner, Melissa K; Crossman, Steven H; Gary, Judy; Vanderbilt, Allison A; VanderWielen, Lynn

    2015-03-01

    We share the results of a qualitative study of third-year medical students' writings produced during their family medicine clerkships in 2005 and 2013 using a reflective writing exercise. For this paper, 50 student writings were randomly selected from the 2005 cohort in addition to 50 student writings completed by the 2013 cohort. Deductive thematic analysis was completed with Atlas.ti software, using the Future of Family Medicine core attributes of family physicians as the a priori coding template. Student writings actively reflect key attributes of family physicians as described by the Future of Family Medicine Report: a deep understanding of the dynamics of the whole person, a generative impact on patients' lives, a talent for humanizing the health care experience, and a natural command of complexity and multidimensional access to care. We discuss how to lead the writing exercise and provide suggestions for facilitating the discussion to bring out these important aspects of family medicine care.

  10. Automatic HDL firmware generation for FPGA-based reconfigurable measurement and control systems with mezzanines in FMC standard

    NASA Astrophysics Data System (ADS)

    Wojenski, Andrzej; Kasprowicz, Grzegorz; Pozniak, Krzysztof T.; Romaniuk, Ryszard

    2013-10-01

    The paper describes a concept of automatic firmware generation for reconfigurable measurement systems which use FPGA devices and measurement cards in the FMC standard. The following topics are described in detail: automatic HDL code generation for FPGA devices, automatic implementation of communication interfaces, HDL drivers for measurement cards, automatic serial connection between multiple measurement backplane boards, automatic building of the memory map (address space), and management of the automatically generated firmware. The presented solutions are required in many advanced measurement systems, such as Beam Position Monitors or GEM detectors. This work is part of a wider project for automatic firmware generation and management of reconfigurable systems. The solutions presented in this paper build on our previous SPIE publication.
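
    Note: automatic HDL code generation of this kind ultimately means emitting HDL text from higher-level descriptions; the toy Python fragment below (with an invented memory map, far simpler than the paper's framework) generates a Verilog address-decode stub from a register map.

    ```python
    # Toy illustration of automatic HDL text generation (invented memory
    # map; the paper's framework is far more elaborate): emit a Verilog
    # address-decode stub from a register map.
    MEMORY_MAP = {"status": 0x00, "control": 0x04, "data": 0x08}

    def emit_registers(memmap):
        lines = ["// auto-generated register decode"]
        for name, addr in memmap.items():
            lines.append(f"wire sel_{name} = (addr == 32'h{addr:08x});")
        return "\n".join(lines)

    print(emit_registers(MEMORY_MAP))
    ```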

  11. An Object-Oriented Approach to Writing Computational Electromagnetics Codes

    NASA Technical Reports Server (NTRS)

    Zimmerman, Martin; Mallasch, Paul G.

    1996-01-01

    Presently, most computer software development in the Computational Electromagnetics (CEM) community employs the structured programming paradigm, particularly using the Fortran language. Other segments of the software community began switching to an Object-Oriented Programming (OOP) paradigm in recent years to help ease design and development of highly complex codes. This paper examines design of a time-domain numerical analysis CEM code using the OOP paradigm, comparing OOP code and structured programming code in terms of software maintenance, portability, flexibility, and speed.
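
    Note: the design question the paper examines can be illustrated with a toy time-domain solver written in the OOP style (a Python sketch, not the paper's code), where field state and update rules are encapsulated together.

    ```python
    # Toy time-domain solver in the OOP style the paper examines (not
    # the paper's code): field state and update rules live in one class.
    import numpy as np

    class Fdtd1D:
        def __init__(self, n_cells, courant=0.5):
            self.ez = np.zeros(n_cells)        # electric field
            self.hy = np.zeros(n_cells - 1)    # magnetic field
            self.c = courant

        def step(self):
            self.hy += self.c * np.diff(self.ez)
            self.ez[1:-1] += self.c * np.diff(self.hy)

    sim = Fdtd1D(200)
    sim.ez[100] = 1.0          # inject an initial pulse
    for _ in range(100):
        sim.step()
    print(float(sim.ez.max()))
    ```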

  12. A system for classifying wood-using industries and recording statistics for automatic data processing.

    Treesearch

    E.W. Fobes; R.W. Rowe

    1968-01-01

    A system for classifying wood-using industries and recording pertinent statistics for automatic data processing is described. Forms and coding instructions for recording data of primary processing plants are included.

  13. Phonological Codes Constrain Output of Orthographic Codes via Sublexical and Lexical Routes in Chinese Written Production

    PubMed Central

    Wang, Cheng; Zhang, Qingfang

    2015-01-01

    To what extent do phonological codes constrain orthographic output in handwritten production? We investigated how phonological codes constrain the selection of orthographic codes via sublexical and lexical routes in Chinese written production. Participants wrote down picture names in a picture-naming task in Experiment 1 or response words in a symbol-word associative writing task in Experiment 2. A sublexical phonological property of picture names (phonetic regularity: regular vs. irregular) in Experiment 1 and a lexical phonological property of response words (homophone density: dense vs. sparse) in Experiment 2, as well as word frequency of the targets in both experiments, were manipulated. A facilitatory effect of word frequency was found in both experiments, in which words with high frequency were produced faster than those with low frequency. More importantly, we observed an inhibitory phonetic regularity effect, in which low-frequency picture names with regular first characters were slower to write than those with irregular ones, and an inhibitory homophone density effect, in which characters with dense homophone density were produced more slowly than those with sparse homophone density. Results suggested that phonological codes constrained handwritten production via lexical and sublexical routes. PMID:25879662

  14. Setting up Conditions for Negotiation in Science

    ERIC Educational Resources Information Center

    Yoon, Sae Yeol; Bennett, William; Mendez, Claudia Aguirre; Hand, Brian

    2010-01-01

    When using an argument based inquiry approach like the Science Writing Heuristic (SWH) approach, argumentation between peers and with a teacher will provide great opportunities for students to experience negotiation of meaning in relation to science content. However, students do not automatically engage in dialogue and argumentation with…

  15. Evidence-Based Diagnosis and Treatment for Specific Learning Disabilities Involving Impairments in Written and/or Oral Language

    ERIC Educational Resources Information Center

    Berninger, Virginia W.; May, Maggie O'Malley

    2011-01-01

    Programmatic, multidisciplinary research provided converging brain, genetic, and developmental support for evidence-based diagnoses of three specific learning disabilities based on hallmark phenotypes (behavioral expression of underlying genotypes) with treatment relevance: dysgraphia (impaired legible automatic letter writing, orthographic…

  16. Relationships between language input and letter output modes in writing notes and summaries for students in grades 4 to 9 with persisting writing disabilities.

    PubMed

    Thompson, Robert; Tanimoto, Steven; Abbott, Robert; Nielsen, Kathleen; Lyman, Ruby Dawn; Geselowitz, Kira; Habermann, Katrien; Mickail, Terry; Raskind, Marshall; Peverly, Stephen; Nagy, William; Berninger, Virginia

    2017-01-01

    This study in programmatic research on technology-supported instruction first identified, through pretesting using evidence-based criteria, students with persisting specific learning disabilities (SLDs) in written language during middle childhood (grades 4-6) and early adolescence (grades 7-9). Participants then completed computerized writing instruction and posttesting. The 12 computer lessons varied output modes (letter production by stylus alternating with hunt and peck keyboarding versus by pencil with grooves alternating with touch typing on keyboard), input (read or heard source material), and task (notes or summaries). Posttesting and coded notes and summaries showed the effectiveness of computerized writing instruction on both writing tasks for multiple modes of language input and letter production output for improving letter production and related writing skills.

  17. Relationships between Language Input and Letter Output Modes in Writing Notes and Summaries for Students in Grades 4 to 9 with Persisting Writing Disabilities

    PubMed Central

    Thompson, Robert; Tanimoto, Steven; Abbott, Robert; Nielsen, Kathleen; Lyman, Ruby Dawn; Geselowitz, Kira; Habermann, Katrien; Mickail, Terry; Raskind, Marshall; Peverly, Stephen; Nagy, William; Berninger, Virginia

    2017-01-01

    This study in programmatic research on technology-supported instruction first identified, through pretesting using evidence-based criteria, students with persisting specific learning disabilities (SLDs) in written language during middle childhood (grades 4-6) and early adolescence (grades 7-9). Participants then completed computerized writing instruction and posttesting. The 12 computer lessons varied output modes (letter production by stylus alternating with hunt and peck keyboarding versus by pencil with grooves alternating with touch typing on keyboard), input (read or heard source material), and task (notes or summaries). Posttesting and coded notes and summaries showed the effectiveness of computerized writing instruction on both writing tasks for multiple modes of language input and letter production output for improving letter production and related writing skills. PMID:27434553

  18. The Fortran-P Translator: Towards Automatic Translation of Fortran 77 Programs for Massively Parallel Processors

    DOE PAGES

    O'keefe, Matthew; Parr, Terence; Edgar, B. Kevin; ...

    1995-01-01

    Massively parallel processors (MPPs) hold the promise of extremely high performance that, if realized, could be used to study problems of unprecedented size and complexity. One of the primary stumbling blocks to this promise has been the lack of tools to translate application codes to MPP form. In this article we show how applications codes written in a subset of Fortran 77, called Fortran-P, can be translated to achieve good performance on several massively parallel machines. This subset can express codes that are self-similar, where the algorithm applied to the global data domain is also applied to each subdomain. We have found many codes that match the Fortran-P programming style and have converted them using our tools. We believe a self-similar coding style will accomplish what a vectorizable style has accomplished for vector machines by allowing the construction of robust, user-friendly, automatic translation systems that increase programmer productivity and generate fast, efficient code for MPPs.

  19. Development of an Optimum Interpolation Analysis Method for the CYBER 205

    NASA Technical Reports Server (NTRS)

    Nestler, M. S.; Woollen, J.; Brin, Y.

    1985-01-01

    A state-of-the-art technique to assimilate the diverse observational database obtained during FGGE, and thus create initial conditions for numerical forecasts is described. The GLA optimum interpolation (OI) analysis method analyzes pressure, winds, and temperature at sea level, mixing ratio at six mandatory pressure levels up to 300 mb, and heights and winds at twelve levels up to 50 mb. Conversion to the CYBER 205 required a major re-write of the Amdahl OI code to take advantage of the CYBER vector processing capabilities. Structured programming methods were used to write the programs and this has resulted in a modular, understandable code. Among the contributors to the increased speed of the CYBER code are a vectorized covariance-calculation routine, an extremely fast matrix equation solver, and an innovative data search and sort technique.

  20. FAMA: An automatic code for stellar parameter and abundance determination

    NASA Astrophysics Data System (ADS)

    Magrini, Laura; Randich, Sofia; Friel, Eileen; Spina, Lorenzo; Jacobson, Heather; Cantat-Gaudin, Tristan; Donati, Paolo; Baglioni, Roberto; Maiorca, Enrico; Bragaglia, Angela; Sordo, Rosanna; Vallenari, Antonella

    2013-10-01

    Context. The large amount of spectra obtained during the epoch of extensive spectroscopic surveys of Galactic stars requires the development of automatic procedures to derive their atmospheric parameters and individual element abundances. Aims: Starting from the widely used code MOOG by C. Sneden, we have developed a new procedure to determine atmospheric parameters and abundances in a fully automatic way. The code FAMA (Fast Automatic MOOG Analysis) is presented, describing its approach to deriving atmospheric stellar parameters and element abundances. The code, freely distributed, is written in Perl and can be used on different platforms. Methods: The aim of FAMA is to render the computation of the atmospheric parameters and abundances of a large number of stars, using measurements of equivalent widths (EWs), as automatic and as independent of any subjective approach as possible. It is based on the simultaneous search for three equilibria: excitation equilibrium, ionization balance, and the relationship between log n(Fe i) and the reduced EWs. FAMA also evaluates the statistical errors on individual element abundances and the errors due to the uncertainties in the stellar parameters. The convergence criteria are not fixed "a priori" but are based on the quality of the spectra. Results: In this paper we present tests performed on solar spectrum EWs that assess the method's dependency on the initial parameters, and we analyze a sample of stars observed in Galactic open and globular clusters. The current version of FAMA is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/558/A38

  1. NGDS Data Archiver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2013-08-01

    This is a Node.js command line utility for scraping XML metadata from CSW and WFS, downloading linkage data from CSW and WFS, pinging hosts and returning status codes, pinging data linkages and returning status codes, writing ping status to CSV files, and uploading data to Amazon S3.
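
    The utility itself is Node.js; purely as a hedged illustration of the ping-and-report steps listed above (pinging data linkages, recording status codes, writing a CSV), here is a small Python sketch. The URLs, filenames, and field names are invented for this sketch.

      import csv
      import urllib.error
      import urllib.request

      def ping(url, timeout=10):
          """Return the HTTP status code for a URL, or the error text."""
          try:
              with urllib.request.urlopen(url, timeout=timeout) as resp:
                  return resp.status
          except urllib.error.HTTPError as err:
              return err.code          # server answered with an error status
          except (urllib.error.URLError, OSError) as err:
              return str(err)          # host unreachable, timeout, etc.

      def write_ping_report(urls, csv_path):
          """Ping each data linkage and write its status to a CSV file."""
          with open(csv_path, "w", newline="") as fh:
              writer = csv.writer(fh)
              writer.writerow(["url", "status"])
              for url in urls:
                  writer.writerow([url, ping(url)])

      write_ping_report(["https://example.org/wfs"], "ping_status.csv")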

  2. Scanners, optical character readers, Cyrillic alphabet and Russian translations

    NASA Technical Reports Server (NTRS)

    Johnson, Gordon G.

    1995-01-01

    The writing of code to capture, in a uniform format, bit maps of words and characters from scanner PICT files is presented. The coding of Dynamic Pattern Matching for the identification of characters, words, and sentences in preparation for translation is discussed.

  3. 24 CFR 983.155 - Completion of housing.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... with local requirements (such as code and zoning requirements); and (ii) An architect's certification that the housing complies with: (A) HUD housing quality standards; (B) State, local, or other building codes; (C) Zoning; (D) The rehabilitation work write-up (for rehabilitated housing) or the work...

  4. 24 CFR 983.155 - Completion of housing.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... with local requirements (such as code and zoning requirements); and (ii) An architect's certification that the housing complies with: (A) HUD housing quality standards; (B) State, local, or other building codes; (C) Zoning; (D) The rehabilitation work write-up (for rehabilitated housing) or the work...

  5. 24 CFR 983.155 - Completion of housing.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... with local requirements (such as code and zoning requirements); and (ii) An architect's certification that the housing complies with: (A) HUD housing quality standards; (B) State, local, or other building codes; (C) Zoning; (D) The rehabilitation work write-up (for rehabilitated housing) or the work...

  6. 24 CFR 983.155 - Completion of housing.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... with local requirements (such as code and zoning requirements); and (ii) An architect's certification that the housing complies with: (A) HUD housing quality standards; (B) State, local, or other building codes; (C) Zoning; (D) The rehabilitation work write-up (for rehabilitated housing) or the work...

  7. 24 CFR 983.155 - Completion of housing.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... with local requirements (such as code and zoning requirements); and (ii) An architect's certification that the housing complies with: (A) HUD housing quality standards; (B) State, local, or other building codes; (C) Zoning; (D) The rehabilitation work write-up (for rehabilitated housing) or the work...

  8. Using a simulation assistant in modeling manufacturing systems

    NASA Technical Reports Server (NTRS)

    Schroer, Bernard J.; Tseng, Fan T.; Zhang, S. X.; Wolfsberger, John W.

    1988-01-01

    Numerous simulation languages exist for modeling discrete event processes, and many have now been ported to microcomputers. Graphics and animation capabilities have been added to many of these languages to help users build models and evaluate simulation results. With all these languages and added features, the user is still faced with learning the simulation language. Furthermore, the time to construct and then validate the simulation model is always greater than originally anticipated. One approach to minimizing the time requirement is to use pre-defined macros that describe various common processes or operations in a system. The development of a simulation assistant for modeling discrete event manufacturing processes is presented. A simulation assistant is defined as an interactive intelligent software tool that assists the modeler in writing a simulation program by translating the modeler's symbolic description of the problem and then automatically generating the corresponding simulation code. The simulation assistant is discussed with emphasis on an overview of the assistant, its elements, and the five manufacturing simulation generators. A typical manufacturing system is modeled using the simulation assistant, and the advantages and disadvantages are discussed.
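
    AMPS itself is not shown in this abstract; the following toy sketch only illustrates the general idea of a simulation assistant, turning a modeler's symbolic description (here a small dict) into generated simulation source code that is then executed. The single-server queue and all names are invented for this sketch.

      import random

      TEMPLATE = """
      def run(n_jobs={n_jobs}, seed=0):
          rng = random.Random(seed)
          clock, busy_until, waits = 0.0, 0.0, []
          for _ in range(n_jobs):
              clock += rng.expovariate(1.0 / {mean_interarrival})
              start = max(clock, busy_until)          # wait if server busy
              waits.append(start - clock)
              busy_until = start + rng.expovariate(1.0 / {mean_service})
          return sum(waits) / len(waits)
      """

      def generate_model(spec):
          """Emit simulation source code from a symbolic description."""
          return TEMPLATE.format(**spec)

      spec = {"n_jobs": 10000, "mean_interarrival": 1.0, "mean_service": 0.8}
      namespace = {"random": random}
      exec(generate_model(spec), namespace)     # the "automatic programming" step
      print("mean wait:", namespace["run"]())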

  9. The Use of Video Feedback in Teaching Process-Approach EFL Writing

    ERIC Educational Resources Information Center

    Özkul, Sertaç; Ortaçtepe, Deniz

    2017-01-01

    This experimental study investigated the use of video feedback as an alternative to feedback with correction codes at an institution where the latter was commonly used for teaching process-approach English as a foreign language (EFL) writing. Over a 5-week period, the control and the experimental groups were provided with feedback based on…

  10. Individual Differences in the Development of Early Writing Skills: Testing the Unique Contribution of Visuo-Spatial Working Memory

    ERIC Educational Resources Information Center

    Bourke, Lorna; Davies, Simon J.; Sumner, Emma; Green, Carolyn

    2014-01-01

    Visually mediated processes, including exposure to print (e.g., reading) as well as orthographic transcription and coding skills, have been found to contribute to individual differences in literacy development. The current study examined the role of visuospatial working memory (WM) in underpinning this relationship and emergent writing. One hundred…

  11. Writer L1/L2 Status and Asynchronous Online Writing Center Feedback: Consultant Response Patterns

    ERIC Educational Resources Information Center

    Weirick, Joshua; Davis, Tracy; Lawson, Daniel

    2017-01-01

    This case study examines the differences in comments offered by asynchronous online writing center consultants to L1 and L2 speakers and examines the potential disconnects in consultant perceptions of their practice. The researchers collected and coded sample papers and interviewed participants to contextualize data from the quantitative portion…

  12. Differences in Writers' Initial Task Representations. Technical Report No. 35.

    ERIC Educational Resources Information Center

    Carey, Linda; And Others

    An exploratory study investigated how writers represent their task to themselves before beginning to write. Using data from verbal protocols, the initial plans of 12 writers (5 experts and 7 student writers) who were working on an expository writing task were examined. The protocols were coded for types of planning. Independent measures of the…

  13. Making the Case for Disciplinarity in Rhetoric, Composition, and Writing Studies: The Visibility Project

    ERIC Educational Resources Information Center

    Phelps, Louise Wetherbee; Ackerman, John M.

    2010-01-01

    In the Visibility Project, professional organizations have worked to gain recognition for the disciplinarity of writing and rhetoric studies through representation of the field in the information codes and databases of higher education. We report success in two important cases: recognition as an "emerging field" in the National Research Council's…

  14. Color-Coded Graphic Organizers for Teaching Writing to Students with Learning Disabilities

    ERIC Educational Resources Information Center

    Ewoldt, Kathy B.; Morgan, Joseph John

    2017-01-01

    A commonly used method for supporting the writing of students with learning disabilities (LD), graphic organizers have been shown to effectively support instruction for students with LD in a variety of content areas (Dexter & Hughes, 2011). Students with LD often struggle with the process of developing their ideas into organized sentences; the…

  15. The Impact of the Development of Verbal Recoding on Children's Early Writing Skills

    ERIC Educational Resources Information Center

    Adams, Anne-Marie; Simmons, Fiona R.; Willis, Catherine S.; Porter, Sarah

    2013-01-01

    Background: The spontaneous recoding of visual stimuli into a phonological code to aid short-term retention has been associated with progress in learning to read (Palmer, 2000b). Aim: This study examined whether there was a comparable association with the development of writing skills. Sample: One hundred eight children (64 males) in the second…

  16. Beginning at the Beginning: The Alphabet's Origins as the Foundation for Interdisciplinary Writing Instruction.

    ERIC Educational Resources Information Center

    Lipman, Joel

    The origins of written language and the study of the alphabet's evolution from pictographic icon or glyph to phonetic, syllabic code are fundamental to the study of writing. Electronically-generated typographies have reawakened interest in letterforms, alphabets, typefaces, and the physical arrangement of words on the page. Fonts, a word that…

  17. Construction of Hierarchical Models for Fluid Dynamics in Earth and Planetary Sciences : DCMODEL project

    NASA Astrophysics Data System (ADS)

    Takahashi, Y. O.; Takehiro, S.; Sugiyama, K.; Odaka, M.; Ishiwatari, M.; Sasaki, Y.; Nishizawa, S.; Ishioka, K.; Nakajima, K.; Hayashi, Y.

    2012-12-01

    Toward the understanding of fluid motions of planetary atmospheres and planetary interiors through multiple numerical experiments with multiple models, we are now proceeding with the ``dcmodel project'', in which a series of hierarchical numerical models of varying complexity is developed and maintained. In the ``dcmodel project'', the numerical models are developed with attention to the following points: 1) a common ``style'' of program codes assuring readability of the software, 2) open source codes of the models to the public, 3) scalability of the models assuring execution on various scales of computational resources, 4) stressing the importance of documentation and presenting a method for writing reference manuals. The lineup of the models and utility programs of the project is as follows: Gtool5, ISPACK/SPML, SPMODEL, Deepconv, Dcpam, and Rdoc-f95. In the following, the features of each component are briefly described. Gtool5 (Ishiwatari et al., 2012) is a Fortran90 library which provides data input/output interfaces and various utilities commonly used in the models of the dcmodel project. The self-descriptive data format netCDF is adopted as the IO format of Gtool5. The interfaces of the gtool5 library reduce the number of operation steps for data IO in the program code of the models compared with the interfaces of the raw netCDF library. Further, by use of the gtool5 library, procedures for data IO and the addition of metadata for post-processing can be easily implemented in the program codes in a consolidated form independent of the size and complexity of the models. ``ISPACK'' is the spectral transformation library and ``SPML (SPMODEL library)'' (Takehiro et al., 2006) is its wrapper library. The most prominent feature of SPML is a series of array-handling functions with systematic function naming rules, which enables us to write code in a form easily deduced from the mathematical expressions of the governing equations. ``SPMODEL'' (Takehiro et al., 2006) is a collection of various sample programs using ``SPML''. These sample programs provide a base kit for simple numerical experiments in geophysical fluid dynamics. For example, SPMODEL includes a 1-dimensional KdV equation model; 2-dimensional barotropic, shallow-water, and Boussinesq models; and 3-dimensional MHD dynamo models in rotating spherical shells. These models are written in a common style in harmony with SPML functions. ``Deepconv'' (Sugiyama et al., 2010) and ``Dcpam'' are a cloud-resolving model and a general circulation model, respectively, intended for application to planetary atmospheres. ``Deepconv'' includes several physical processes appropriate for simulations of the Jupiter and Mars atmospheres, while ``Dcpam'' does so for simulations of Earth, Mars, and Venus-like atmospheres. ``Rdoc-f95'' is an automatic generator of reference manuals for Fortran90/95 programs, an extension of the ruby documentation toolkit ``rdoc''. It analyzes the dependencies of modules, functions, and subroutines across multiple program source files, and it can also list the namelist variables in the programs.
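
    SPML itself is a Fortran90 library; purely to illustrate the coding style the abstract describes (array-valued helper functions that let the program text mirror the governing equations), here is a small numpy analogue. All names and the test equation are chosen for this sketch and are not taken from SPML.

      import numpy as np

      N = 128
      x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
      ik = np.fft.fftfreq(N, d=1.0 / N) * 1j        # spectral wavenumbers (i*k)

      def ddx(u):
          """Spectral x-derivative, named so equations transcribe directly."""
          return np.real(np.fft.ifft(ik * np.fft.fft(u)))

      # du/dt = -u * du/dx (inviscid Burgers) transcribes almost verbatim:
      u = np.sin(x)
      dt = 1e-3
      for _ in range(100):
          u = u - dt * u * ddx(u)
      print(float(u.max()))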

  18. Synthesizing Certified Code

    NASA Technical Reports Server (NTRS)

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach to demonstrate software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.
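
    AUTOBAYES emits formal annotations that are discharged by a theorem prover; as a loose, runnable illustration of the idea of generating code together with its certifying annotations, the sketch below emits runtime-checked assertions (a precondition, a loop invariant, and a postcondition) instead of proof obligations. The generator and its summation example are invented for this sketch.

      def synthesize_summation(n_name="n"):
          """Generate source for summing 0..n-1, together with its annotations."""
          return f"""
      def total({n_name}):
          assert {n_name} >= 0                       # precondition
          s, i = 0, 0
          while i < {n_name}:
              assert s == i * (i - 1) // 2           # loop invariant
              s += i
              i += 1
          assert s == {n_name} * ({n_name} - 1) // 2  # postcondition
          return s
      """

      namespace = {}
      exec(synthesize_summation(), namespace)
      assert namespace["total"](10) == 45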

  19. Methods for Ensuring High Quality of Coding of Cause of Death. The Mortality Register to Follow Southern Urals Populations Exposed to Radiation.

    PubMed

    Startsev, N; Dimov, P; Grosche, B; Tretyakov, F; Schüz, J; Akleyev, A

    2015-01-01

    To follow up populations exposed to several radiation accidents in the Southern Urals, a cause-of-death registry was established at the Urals Center, capturing deaths in the Chelyabinsk, Kurgan, and Sverdlovsk regions since 1950. When registering deaths over such a long time period, quality measures need to be in place to maintain quality and reduce the impact of individual coders as well as quality changes in death certificates. To ensure the uniformity of coding, a method for semi-automatic coding was developed, which is described here. Briefly, the method is based on a dynamic thesaurus, database-supported coding, and parallel coding by two different individuals. A comparison of the proposed method for organizing the coding process with the common procedure of coding showed good agreement, with 70-90% agreement at the end of the coding process for the three-digit ICD-9 rubrics. The semi-automatic method ensures a sufficiently high quality of coding while at the same time providing an opportunity to reduce the labor intensity inherent in the creation of large-volume cause-of-death registries.
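
    As a minimal sketch of the parallel-coding quality check described above (two coders code each certificate independently, and agreement is measured at the three-digit ICD-9 rubric level), assuming a simple dotted ICD-9 string format; the codes below are illustrative only.

      def rubric(icd9_code):
          """Reduce an ICD-9 code such as '410.9' to its three-digit rubric '410'."""
          return icd9_code.split(".")[0][:3]

      def agreement(coder_a, coder_b):
          """Fraction of certificates on which both coders chose the same rubric."""
          assert len(coder_a) == len(coder_b)
          same = sum(rubric(a) == rubric(b) for a, b in zip(coder_a, coder_b))
          return same / len(coder_a)

      coder_a = ["410.9", "162.5", "250.0", "434.1"]
      coder_b = ["410.0", "162.9", "251.0", "434.1"]
      print(f"{agreement(coder_a, coder_b):.0%}")   # 75% at the 3-digit level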

  20. Self-government of complex reading and writing brains informed by cingulo-opercular network for adaptive control and working memory components for language learning.

    PubMed

    Richards, Todd L; Abbott, Robert D; Yagle, Kevin; Peterson, Dan; Raskind, Wendy; Berninger, Virginia W

    2017-01-01

    To understand mental self-government of the developing reading and writing brain, correlations of clustering coefficients on fMRI reading or writing tasks with BASC 2 Adaptivity ratings (time 1 only) or working memory components (time 1 before and time 2 after instruction previously shown to improve achievement and change magnitude of fMRI connectivity) were investigated in 39 students in grades 4 to 9 who varied along a continuum of reading and writing skills. A Philips 3T scanner measured connectivity during six leveled fMRI reading tasks (subword-letters and sounds, word-word-specific spellings or affixed words, syntax comprehension-with and without homonym foils or with and without affix foils, and text comprehension) and three fMRI writing tasks-writing next letter in alphabet, adding missing letter in word spelling, and planning for composing. The Brain Connectivity Toolbox generated clustering coefficients based on the cingulo-opercular (CO) network; after controlling for multiple comparisons and movement, significant fMRI connectivity clustering coefficients for CO were identified in 8 brain regions bilaterally (cingulate gyrus, superior frontal gyrus, middle frontal gyrus, inferior frontal gyrus, superior temporal gyrus, insula, cingulum-cingulate gyrus, and cingulum-hippocampus). BASC2 Parent Ratings for Adaptivity were correlated with CO clustering coefficients on three reading tasks (letter-sound, word affix judgments and sentence comprehension) and one writing task (writing next letter in alphabet). Before instruction, each behavioral working memory measure (phonology, orthography, morphology, and syntax coding, phonological and orthographic loops for integrating internal language and output codes, and supervisory focused and switching attention) correlated significantly with at least one CO clustering coefficient. After instruction, the patterning of correlations changed with new correlations emerging. Results show that the reading and writing brain's mental government, supported by both CO Adaptive Control and multiple working memory components, had changed in response to instruction during middle childhood/early adolescence.

  1. Social priming of hemispatial neglect affects spatial coding: Evidence from the Simon task.

    PubMed

    Arend, Isabel; Aisenberg, Daniela; Henik, Avishai

    2016-10-01

    In the Simon effect (SE), choice reactions are fast if the location of the stimulus and the response correspond when stimulus location is task-irrelevant; therefore, the SE reflects the automatic processing of space. Priming of social concepts was found to affect automatic processing in the Stroop effect. We investigated whether spatial coding measured by the SE can be affected by the observer's mental state. We used two social priming manipulations of impairments: one involving spatial processing - hemispatial neglect (HN) and another involving color perception - achromatopsia (ACHM). In two experiments the SE was reduced in the "neglected" visual field (VF) under the HN, but not under the ACHM manipulation. Our results show that spatial coding is sensitive to spatial representations that are not derived from task-relevant parameters, but from the observer's cognitive state. These findings dispute stimulus-response interference models grounded on the idea of the automaticity of spatial processing.

  2. First- and Second-Order Sensitivity Analysis of a P-Version Finite Element Equation Via Automatic Differentiation

    NASA Technical Reports Server (NTRS)

    Hou, Gene

    1998-01-01

    Sensitivity analysis is a technique for determining derivatives of system responses with respect to design parameters. Among the many methods available for sensitivity analysis, automatic differentiation has been proven, through many applications in fluid dynamics and structural mechanics, to be an accurate and easy method for obtaining derivatives. Nevertheless, the method can be computationally expensive and can require large amounts of memory. This project applies an automatic differentiation tool, ADIFOR, to a p-version finite element code to obtain first- and second-order thermal derivatives. The focus of the study is on the implementation process and the performance of the ADIFOR-enhanced code for sensitivity analysis in terms of memory requirements, computational efficiency, and accuracy.
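
    ADIFOR works by transforming Fortran source code; the dual-number sketch below is only a minimal Python illustration of the forward-mode idea behind automatic differentiation, not ADIFOR's actual mechanism. The function f is invented for this sketch.

      class Dual:
          """A value paired with its derivative; arithmetic propagates both."""
          def __init__(self, val, der=0.0):
              self.val, self.der = val, der
          def __add__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val + o.val, self.der + o.der)
          __radd__ = __add__
          def __mul__(self, o):                       # product rule
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val * o.val,
                          self.der * o.val + self.val * o.der)
          __rmul__ = __mul__

      def f(x):
          return x * x * x + 2 * x    # f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2

      x = Dual(2.0, 1.0)              # seed derivative dx/dx = 1
      y = f(x)
      print(y.val, y.der)             # 12.0, 14.0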

  3. Writing for the Robot: How Employer Search Tools Have Influenced Resume Rhetoric and Ethics

    ERIC Educational Resources Information Center

    Amare, Nicole; Manning, Alan

    2009-01-01

    To date, business communication scholars and textbook writers have encouraged resume rhetoric that accommodates technology, for example, recommending keyword-enhancing techniques to attract the attention of searchbots: customized search engines that allow companies to automatically scan resumes for relevant keywords. However, few scholars have…

  4. Implementation of the Automated Numerical Model Performance Metrics System

    DTIC Science & Technology

    2011-09-26

    As of this writing, the DSRC IBM AIX machines DaVinci and Pascal, and the Cray XT Einstein, all use the PBS batch queuing system … Appendix A (General Automation System): this system provides general purpose tools and a general way to automatically run…

  5. Antonio Gramsci on Surrealism and the Avant-Garde

    ERIC Educational Resources Information Center

    San Juan, E., Jr.

    2003-01-01

    In the spring of 1919, André Breton and Philippe Soupault conducted various experiments in automatic writing. They converted themselves into machines to record the whispers of the unconscious, inspired by Rimbaud's urge for adventure in quest of cosmic knowledge and Lautréamont's conviction of art as a communal enterprise. To destroy bourgeois…

  6. Generating Customized Verifiers for Automatically Generated Code

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Fischer, Bernd

    2008-01-01

    Program verification using Hoare-style techniques requires many logical annotations. We have previously developed a generic annotation inference algorithm that weaves in all annotations required to certify safety properties for automatically generated code. It uses patterns to capture generator- and property-specific code idioms and property-specific meta-program fragments to construct the annotations. The algorithm is customized by specifying the code patterns and integrating them with the meta-program fragments for annotation construction. However, this is difficult since it involves tedious and error-prone low-level term manipulations. Here, we describe an annotation schema compiler that largely automates this customization task using generative techniques. It takes a collection of high-level declarative annotation schemas tailored towards a specific code generator and safety property, and generates all customized analysis functions and glue code required for interfacing with the generic algorithm core, thus effectively creating a customized annotation inference algorithm. The compiler raises the level of abstraction and simplifies schema development and maintenance. It also takes care of some more routine aspects of formulating patterns and schemas, in particular handling of irrelevant program fragments and irrelevant variance in the program structure, which reduces the size, complexity, and number of different patterns and annotation schemas that are required. The improvements described here make it easier and faster to customize the system to a new safety property or a new generator, and we demonstrate this by customizing it to certify frame safety of space flight navigation code that was automatically generated from Simulink models by MathWorks' Real-Time Workshop.

  7. High Frequency Scattering Code in a Distributed Processing Environment

    DTIC Science & Technology

    1991-06-01

    … the use of automated analysis tools is indicated. One tool was developed by Pacific-Sierra Research Corporation and marketed by Intel Corporation … XQ: EXECUTE CODE; EN: END CODE. This input deck differs from that in the manual because the "PP" option is disabled in the modified code.

  8. Evaluating the dimensionality of first grade written composition

    PubMed Central

    Kim, Young-Suk; Al Otaiba, Stephanie; Folsom, Jessica S.; Greulich, Luana; Puranik, Cynthia

    2013-01-01

    Purpose We examined dimensions of written composition using multiple evaluative approaches such as an adapted 6+1 trait scoring, syntactic complexity measures, and productivity measures. We further examined unique relations of oral language and literacy skills to the identified dimensions of written composition. Method A large sample of first grade students (N = 527) was assessed on their language, reading, spelling, letter writing automaticity, and writing in the spring. Data were analyzed using a latent variable approach including confirmatory factor analysis and structural equation modeling. Results The seven traits in the 6+1 trait system were best described as two constructs: substantive quality, and spelling and writing conventions. When the other evaluation procedures such as productivity and syntactic complexity indicators were included, four dimensions emerged: substantive quality, productivity, syntactic complexity, and spelling and writing conventions. Language and literacy predictors were differentially related to each dimension in written composition. Conclusions These four dimensions may be a useful guideline for evaluating developing beginning writers' compositions. PMID:24687472

  9. How Is Language Used to Craft Social Presence in Facebook? A Case Study of an Undergraduate Writing Course

    ERIC Educational Resources Information Center

    Gordon, Jessica

    2016-01-01

    This quantitative content analysis examines the way social presence was created through original posts and comments in a Facebook group for an undergraduate writing course. The author adapted a well-known coding template and examined how course members--one instructor, two undergraduate teaching assistants and twenty-two students--used language…

  10. The ABC's of Chinese: Maternal Mediation of Pinyin for Chinese Children's Early Literacy Skills

    ERIC Educational Resources Information Center

    McBride-Chang, Catherine; Lin, Dan; Liu, Phil D.; Aram, Dorit; Levin, Iris; Cho, Jeung-Ryeul; Shu, Hua; Zhang, Yuping

    2012-01-01

    In the present study, maternal Pinyin mediation and its relations with young Chinese children's word reading and word writing development were explored. At time 1, 43 Mainland Chinese children and their mothers were videotaped on a task in which children were asked to write 12 words in Pinyin (a phonological coding system used in Mainland China as…

  11. Research Ethics in Sign Language Communities

    ERIC Educational Resources Information Center

    Harris, Raychelle; Holmes, Heidi M.; Mertens, Donna M.

    2009-01-01

    Codes of ethics exist for most professional associations whose members do research on, for, or with sign language communities. However, these ethical codes are silent regarding the need to frame research ethics from a cultural standpoint, an issue of particular salience for sign language communities. Scholars who write from the perspective of…

  12. Use Them ... or Lose Them? The Case for and against Using QR Codes

    ERIC Educational Resources Information Center

    Cunningham, Chuck; Dull, Cassie

    2011-01-01

    A quick-response (QR) code is a two-dimensional, black-and-white square barcode and links directly to a URL of one's choice. When the code is scanned with a smartphone, it will automatically redirect the user to the designated URL. QR codes are popping up everywhere--billboards, magazines, posters, shop windows, TVs, computer screens, and more.…

  13. Are written and spoken recall of text equivalent?

    PubMed

    Kellogg, Ronald T

    2007-01-01

    Writing is less practiced than speaking, graphemic codes are activated only in writing, and the retrieved representations of the text must be maintained in working memory longer because handwritten output is slower than speech. These extra demands on working memory could result in less effort being given to retrieval during written compared with spoken text recall. To test this hypothesis, college students read or heard Bartlett's "War of the Ghosts" and then recalled the text in writing or speech. Spoken recall produced more accurately recalled propositions and more major distortions (e.g., inferences) than written recall. The results suggest that writing reduces the retrieval effort given to reconstructing the propositions of a text.

  14. Automatically Preparing Safe SQL Queries

    NASA Astrophysics Data System (ADS)

    Bisht, Prithvi; Sistla, A. Prasad; Venkatakrishnan, V. N.

    We present the first sound program source transformation approach for automatically transforming the code of a legacy web application to employ PREPARE statements in place of unsafe SQL queries. Our approach therefore opens the way for eradicating the SQL injection threat vector from legacy web applications.
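
    The paper's transformation targets legacy web-application code; the sqlite3 sketch below only contrasts the unsafe string-splicing idiom with the prepared-statement idiom that such a transformation produces. The table and data are invented for this sketch.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
      conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

      name = "alice' OR '1'='1"    # hostile input

      # Unsafe: attacker-controlled string is spliced into the query text.
      rows = conn.execute(
          "SELECT secret FROM users WHERE name = '" + name + "'").fetchall()
      print("unsafe query leaks:", rows)       # returns alice's secret

      # Safe: the placeholder keeps the input as data, never as SQL.
      rows = conn.execute(
          "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()
      print("prepared query returns:", rows)   # []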

  15. Automatic mathematical modeling for real time simulation system

    NASA Technical Reports Server (NTRS)

    Wang, Caroline; Purinton, Steve

    1988-01-01

    A methodology for automatic mathematical modeling and generating simulation models is described. The models will be verified by running in a test environment using standard profiles, with the results compared against known results. The major objective is to create a user-friendly environment for engineers to design, maintain, and verify their models and also to automatically convert the mathematical model into conventional code for conventional computation. A demonstration program was designed for modeling the Space Shuttle Main Engine Simulation. It is written in LISP and MACSYMA and runs on a Symbolics 3670 Lisp Machine. The program provides a very friendly and well organized environment for engineers to build a knowledge base for base equations and general information. It contains an initial set of component process elements for the Space Shuttle Main Engine Simulation and a questionnaire that allows the engineer to answer a set of questions to specify a particular model. The system is then able to automatically generate the model and FORTRAN code. The future goal, currently under development, is to download the FORTRAN code to a VAX/VMS system for conventional computation. The SSME mathematical model will be verified in a test environment and the solution compared with the real data profile. The use of artificial intelligence techniques has shown that the simulation modeling process can be simplified.
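
    The described system used LISP and MACSYMA; today the symbolic-model-to-FORTRAN step can be sketched with sympy's Fortran code printer. The orifice-flow expression below is invented for this sketch and is not part of the actual SSME model.

      from sympy import symbols, fcode, sqrt

      mdot, rho, A, dp = symbols("mdot rho A dp", positive=True)
      flow_eq = A * sqrt(2 * rho * dp)       # toy orifice-flow relation

      # Prints a free-form Fortran assignment statement for mdot.
      print(fcode(flow_eq, assign_to=mdot, source_format="free"))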

  16. Enhancement Approach of Object Constraint Language Generation

    NASA Astrophysics Data System (ADS)

    Salemi, Samin; Selamat, Ali

    2018-01-01

    OCL is the most prevalent language for documenting system constraints annotated in UML. Writing OCL specifications is not an easy task due to the complexity of the OCL syntax; an approach that assists developers in writing OCL specifications is therefore needed. Two approaches exist: first, creating OCL specifications with a tool called COPACABANA; second, an MDA-based approach in which another tool, NL2OCLviaSBVR, generates OCL specifications automatically. This study presents a further MDA-based approach, called En2OCL, with a twofold objective: (1) to improve the precision of the existing works, and (2) to present a benchmark of these approaches. The benchmark shows that the accuracies of COPACABANA, NL2OCLviaSBVR, and En2OCL are 69.23, 84.64, and 88.40, respectively.

  17. UIVerify: A Web-Based Tool for Verification and Automatic Generation of User Interfaces

    NASA Technical Reports Server (NTRS)

    Shiffman, Smadar; Degani, Asaf; Heymann, Michael

    2004-01-01

    In this poster, we describe a web-based tool for verification and automatic generation of user interfaces. The verification component of the tool accepts as input a model of a machine and a model of its interface, and checks that the interface is adequate (correct). The generation component of the tool accepts a model of a given machine and the user's task, and then generates a correct and succinct interface. This write-up will demonstrate the usefulness of the tool by verifying the correctness of a user interface to a flight-control system. The poster will include two more examples of using the tool: verification of the interface to an espresso machine, and automatic generation of a succinct interface to a large hypothetical machine.

  18. An Analysis of Information Assurance Relating to the Department of Defense Radio Frequency Identification (RFID) Passive Network

    DTIC Science & Technology

    2005-03-01

    Bar codes speed up consumer shopping, package shipping, and inventory tracking. RFID offers many advantages over bar codes, as the table below shows … (Accenture, 2001, p. 4). Finally, one of the most significant advantages of RFID is the advent of anti-collision. Anti-collision allows an RFID reader to read and/or write to multiple tags at one time, which is not possible for bar codes. Despite the many advantages of RFID over bar codes…

  19. The Design of a Secure File Storage System

    DTIC Science & Technology

    1979-12-01

    (Garbled excerpt of the system's error-handling code for file access via gatekeeper tickets and mailboxes; the recoverable error messages read "file not found; write access to directory not permitted" and "file not found; read access to directory file not permitted".)

  20. Exploring Students' Learning Journals with Web-Based Interactive Report Tool

    ERIC Educational Resources Information Center

    Taniguchi, Yuta; Okubo, Fumiya; Shimada, Atsushi; Konomi, Shin'ichi

    2017-01-01

    Students' journal writings could be useful resources for teachers to grasp their students' understandings and to see their own teaching objectively. However, reading a large number of journals thoroughly is not always realistic for teachers. Although various automatic analysis methods have been proposed to understand learning journals, they do not…

  1. Does Poor Handwriting Conceal Literacy Potential in Primary School Children?

    ERIC Educational Resources Information Center

    McCarney, Debra; Peters, Lynne; Jackson, Sarah; Thomas, Marie; Kirby, Amanda

    2013-01-01

    Handwriting is a complex skill that, despite increasing use of computers, still plays a vital role in education. It is assumed that children will master letter formation at a relatively early stage in their school life, with handwriting fluency developing steadily until automaticity is attained. The capacity theory of writing suggests that as…

  2. How Learners Use Automated Computer-Based Feedback to Produce Revised Drafts of Essays

    ERIC Educational Resources Information Center

    Laing, Jonny; El Ebyary, Khaled; Windeatt, Scott

    2012-01-01

    Our previous results suggest that the use of "Criterion", an automatic writing evaluation (AWE) system, is particularly successful in encouraging learners to produce amended drafts of their essays, and that those amended drafts generally represent an improvement on the original submission. Our analysis of the submitted essays and the…

  3. Team-Based Learning in Honors Science Education: The Benefit of Complex Writing Assignments

    ERIC Educational Resources Information Center

    Wiegant, Fred; Boonstra, Johannes; Peeters, Anton; Scager, Karin

    2012-01-01

    Cooperative learning and team-based learning have been widely recognized as beneficial strategies to improve all levels of education, including higher education. Just forming groups, however, does not automatically lead to better learning and motivation; cooperation flourishes only under appropriate conditions (Fink; Gillies; Parmelee et al.).…

  4. Parenting by Automatic Pilot.

    ERIC Educational Resources Information Center

    O'Callaghan, J. Brien

    This guide on parenting suggests ideas and methods to build self-esteem, courage, decision-making, and loving which are so important to child success and happiness. The introduction notes that this book is written for what seems to be the majority of parents who, despite the availability of much writing and other information on the subject of…

  5. Automating Traceability for Generated Software Artifacts

    NASA Technical Reports Server (NTRS)

    Richardson, Julian; Green, Jeffrey

    2004-01-01

    Program synthesis automatically derives programs from specifications of their behavior. One advantage of program synthesis, as opposed to manual coding, is that there is a direct link between the specification and the derived program. This link is, however, not very fine-grained: it can be best characterized as Program is-derived-from Specification. When the generated program needs to be understood or modified, more fine-grained linking is useful. In this paper, we present a novel technique for automatically deriving traceability relations between parts of a specification and parts of the synthesized program. The technique is very lightweight and works, with varying degrees of success, for any process in which one artifact is automatically derived from another. We illustrate the generality of the technique by applying it to two kinds of automatic generation: synthesis of Kalman Filter programs from specifications using the AutoFilter program synthesis system, and generation of assembly language programs from C source code using the GCC C compiler. We evaluate the effectiveness of the technique in the latter application.
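
    The paper's technique is not spelled out in this abstract; the toy sketch below shows one lightweight way such fine-grained links can be recorded, by logging during generation which specification fragment produced each output line. All names and the generated function are invented for this sketch.

      def generate(spec):
          lines, trace = [], {}
          def emit(text, spec_id):
              trace[len(lines)] = spec_id      # output line -> spec fragment
              lines.append(text)
          emit(f"def {spec['name']}(x):", "sig")
          for i, term in enumerate(spec["terms"]):
              emit(f"    x = x + {term}", f"term[{i}]")
          emit("    return x", "sig")
          return "\n".join(lines), trace

      code, trace = generate({"name": "adjust", "terms": [3, 7]})
      print(code)
      print(trace)   # {0: 'sig', 1: 'term[0]', 2: 'term[1]', 3: 'sig'}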

  6. Funtools: Fits Users Need Tools for Quick, Quantitative Analysis

    NASA Technical Reports Server (NTRS)

    Mandel, Eric; Brederkamp, Joe (Technical Monitor)

    2001-01-01

    The Funtools project arose out of conversations with astronomers about the decline in their software development efforts over the past decade. A stated reason for this decline is that it takes too much effort to master one of the existing FITS libraries simply in order to write a few analysis programs. This problem is exacerbated by the fact that astronomers typically develop new programs only occasionally, and the long interval between coding efforts often necessitates re-learning the FITS interfaces. We therefore set ourselves the goal of developing a minimal buy-in FITS library for researchers who are occasional (but serious) coders. In this case, "minimal buy-in" meant "easy to learn, easy to use, and easy to re-learn next month". Based on conversations with astronomers interested in writing code, we concluded that this goal could be achieved by emphasizing two essential capabilities. The first was the ability to write FITS programs without knowing much about FITS, i.e., without having to deal with the arcane rules for generating a properly formatted FITS file. The second was to support the use of already-familiar C/Unix facilities, especially C structs and Unix stdio. Taken together, these two capabilities would allow researchers to leverage their existing programming expertise while minimizing the need to learn new and complex coding rules.

  7. Introduction of the ASGARD code (Automated Selection and Grouping of events in AIA Regional Data)

    NASA Astrophysics Data System (ADS)

    Bethge, Christian; Winebarger, Amy; Tiwari, Sanjiv K.; Fayock, Brian

    2017-08-01

    We have developed the ASGARD code to automatically detect and group brightenings ("events") in AIA data. The event selection and grouping can be optimized to the respective dataset with a multitude of control parameters. The code was initially written for IRIS data, but has since been optimized for AIA. However, the underlying algorithm is not limited to either and could be used for other data as well.Results from datasets in various AIA channels show that brightenings are reliably detected and that coherent coronal structures can be isolated by using the obtained information about the start, peak, and end times of events. We are presently working on a follow-up algorithm to automatically determine the heating and cooling timescales of coronal structures. This will be done by correlating the information from different AIA channels with different temperature responses. We will present the code and preliminary results.
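
    ASGARD's algorithm and control parameters are not given in this abstract; the sketch below shows only a generic threshold-and-group approach to detecting and grouping brightenings in an image stack, using scipy's connected-component labeling. All thresholds and the synthetic data are invented for this sketch.

      import numpy as np
      from scipy import ndimage

      # Synthetic (time, y, x) image stack standing in for AIA data.
      frames = np.random.default_rng(0).poisson(10.0, size=(20, 64, 64)).astype(float)

      background = frames.mean(axis=0)
      excess = frames - background                 # brightening above the mean
      mask = excess > 3.0 * frames.std(axis=0)     # per-pixel significance cut

      # Group connected bright voxels in (time, y, x) into discrete "events".
      labels, n_events = ndimage.label(mask)
      for event_id in range(1, min(n_events, 5) + 1):
          t, y, x = np.nonzero(labels == event_id)
          print(f"event {event_id}: frames {t.min()}-{t.max()}, {len(t)} voxels")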

  8. Faunus: An object oriented framework for molecular simulation

    PubMed Central

    Lund, Mikael; Trulsson, Martin; Persson, Björn

    2008-01-01

    Background We present a C++ class library for Monte Carlo simulation of molecular systems, including proteins in solution. The design is generic and highly modular, enabling multiple developers to easily implement additional features. The statistical mechanical methods are documented by extensive use of code comments that are subsequently collected to automatically build a web-based manual. Results We show how an object oriented design can be used to create an intuitively appealing coding framework for molecular simulation. This is exemplified in a minimalistic C++ program that can calculate protein protonation states. We further discuss performance issues related to high-level coding abstraction. Conclusion C++ and the Standard Template Library (STL) provide a high-performance platform for generic molecular modeling. Automatic generation of code documentation from inline comments has proven particularly useful in that no separate manual needs to be maintained. PMID:18241331

  9. Automatic vehicle location system

    NASA Technical Reports Server (NTRS)

    Hansen, G. R., Jr. (Inventor)

    1973-01-01

    An automatic vehicle detection system is disclosed, in which each vehicle whose location is to be detected carries active means which interact with passive elements at each location to be identified. The passive elements comprise a plurality of passive loops arranged in a sequence along the travel direction. Each of the loops is tuned to a chosen frequency so that the sequence of the frequencies defines the location code. As the vehicle traverses the sequence of loops, only signals at the frequency of the loop currently being passed over are coupled from the vehicle's transmitter to its receiver. The frequencies of the received signals produce outputs which together represent the code of the traversed location. The coded location is defined by a painted pattern which reflects light to a vehicle-carried detector whose output is used to derive the code defined by the pattern.
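
    As a small sketch of the decoding step implied above, assuming an invented mapping from loop frequencies to code digits; the frequencies and codes are made up for this sketch.

      # Each tuned loop's frequency stands for one digit of the location code.
      FREQ_TO_DIGIT = {101.0: "0", 103.5: "1", 106.0: "2", 108.5: "3"}

      def decode_location(detected_freqs_khz):
          """Translate the ordered loop frequencies into a location code."""
          return "".join(FREQ_TO_DIGIT[f] for f in detected_freqs_khz)

      print(decode_location([103.5, 101.0, 108.5]))   # -> "103"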

  10. ExcelAutomat: a tool for systematic processing of files as applied to quantum chemical calculations

    NASA Astrophysics Data System (ADS)

    Laloo, Jalal Z. A.; Laloo, Nassirah; Rhyman, Lydia; Ramasami, Ponnadurai

    2017-07-01

    The processing of the input and output files of quantum chemical calculations often necessitates a spreadsheet as a key component of the workflow. Spreadsheet packages with a built-in programming language editor can automate the steps involved and thus provide a direct link between processing files and the spreadsheet. This helps to reduce user-interventions as well as the need to switch between different programs to carry out each step. The ExcelAutomat tool is the implementation of this method in Microsoft Excel (MS Excel) using the default Visual Basic for Application (VBA) programming language. The code in ExcelAutomat was adapted to work with the platform-independent open-source LibreOffice Calc, which also supports VBA. ExcelAutomat provides an interface through the spreadsheet to automate repetitive tasks such as merging input files, splitting, parsing and compiling data from output files, and generation of unique filenames. Selected extracted parameters can be retrieved as variables which can be included in custom codes for a tailored approach. ExcelAutomat works with Gaussian files and is adapted for use with other computational packages including the non-commercial GAMESS. ExcelAutomat is available as a downloadable MS Excel workbook or as a LibreOffice workbook.

  11. ExcelAutomat: a tool for systematic processing of files as applied to quantum chemical calculations.

    PubMed

    Laloo, Jalal Z A; Laloo, Nassirah; Rhyman, Lydia; Ramasami, Ponnadurai

    2017-07-01

    The processing of the input and output files of quantum chemical calculations often necessitates a spreadsheet as a key component of the workflow. Spreadsheet packages with a built-in programming language editor can automate the steps involved and thus provide a direct link between processing files and the spreadsheet. This helps to reduce user-interventions as well as the need to switch between different programs to carry out each step. The ExcelAutomat tool is the implementation of this method in Microsoft Excel (MS Excel) using the default Visual Basic for Application (VBA) programming language. The code in ExcelAutomat was adapted to work with the platform-independent open-source LibreOffice Calc, which also supports VBA. ExcelAutomat provides an interface through the spreadsheet to automate repetitive tasks such as merging input files, splitting, parsing and compiling data from output files, and generation of unique filenames. Selected extracted parameters can be retrieved as variables which can be included in custom codes for a tailored approach. ExcelAutomat works with Gaussian files and is adapted for use with other computational packages including the non-commercial GAMESS. ExcelAutomat is available as a downloadable MS Excel workbook or as a LibreOffice workbook.
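
    ExcelAutomat itself is VBA running inside MS Excel or LibreOffice Calc; as a language-neutral illustration of two of the tasks listed above (merging input files and generating unique filenames), here is a hedged Python sketch. All paths and names are illustrative.

      import glob
      import os

      def merge_inputs(pattern, merged_path):
          """Concatenate all files matching `pattern` into one input deck."""
          with open(merged_path, "w") as out:
              for path in sorted(glob.glob(pattern)):
                  with open(path) as fh:
                      out.write(fh.read())
                      out.write("\n")

      def unique_filename(base, ext):
          """Return base.ext, or base_1.ext, base_2.ext, ... if taken."""
          candidate, n = f"{base}.{ext}", 0
          while os.path.exists(candidate):
              n += 1
              candidate = f"{base}_{n}.{ext}"
          return candidate

      # e.g. merge_inputs("runs/*.inp", unique_filename("merged", "inp"))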

  12. The Role of Written Corrective Feedback in Enhancing the Linguistic Accuracy of Iranian Japanese Learners' Writing

    ERIC Educational Resources Information Center

    Shirazi, Masoumeh Ahmadi; Shekarabi, Zeinab

    2014-01-01

    This study is an attempt to investigate the effect of direct and indirect feedback on the writing performance of Iranian learners of Japanese as a foreign language. During one academic semester, three indirect feedback types including underlining, coding and translation were used as well as direct type of feedback in order to see which one makes a…

  13. Literacy and Deaf Students in Taiwan: Issues, Practices and Directions for Future Research--Part II

    ERIC Educational Resources Information Center

    Liu, Hsiu Tan; Andrews, Jean F.; Liu, Chun Jung

    2014-01-01

    In Part I, we underscore the issues surrounding young deaf and hard of hearing (DHH) learners of literacy in Taiwan who use sign to support their learning of Chinese literacy. We also described the linguistic features of Chinese writing and the visual codes used by DHH children. In Part II, we describe the reading and writing practices used with…

  14. Multilingual Practices in Contemporary and Historical Contexts: Interfaces between Code-Switching and Translation

    ERIC Educational Resources Information Center

    Kolehmainen, Leena; Skaffari, Janne

    2016-01-01

    This article serves as an introduction to a collection of four articles on multilingual practices in speech and writing, exploring both contemporary and historical sources. It not only introduces the articles but also discusses the scope and definitions of code-switching, attitudes towards multilingual interaction and, most pertinently, the…

  15. Three Mentor Texts that Support Code-Switching Pedagogies

    ERIC Educational Resources Information Center

    Hill, Dara

    2013-01-01

    This article informs us about the need for facilitating code-switching pedagogies that call for teacher-led scaffolding of students' home languages to negotiate informal and formal contexts for writing and speaking. Varied strategies are guided by three mentor texts the author has conceptualized or enacted in practice and research among middle…

  16. Coding and English Language Teaching

    ERIC Educational Resources Information Center

    Stevens, Vance; Verschoor, Jennifer

    2017-01-01

    According to Dudeney, Hockly, and Pegrum (2013) coding is a deeper skill subsumed under the four main digital literacies of language, connections, information, and (re)design. Coders or programmers are people who write the programmes behind everything we see and do on a computer. Most students spend several hours playing online games, but few know…

  17. Colorful Revision: Color-Coded Comments Connected to Instruction

    ERIC Educational Resources Information Center

    Mack, Nancy

    2013-01-01

    Many teachers have had a favorable response to their experimentation with digital feedback on students' writing. Students much preferred a simpler system of highlighting and commenting in color. After experimentation the author found that this color-coded system was more effective for them and less time-consuming for her. Of course, any system…

  18. Automatic Classification of Medical Text: The Influence of Publication Form

    PubMed Central

    Cole, William G.; Michael, Patricia A.; Stewart, James G.; Blois, Marsden S.

    1988-01-01

    Previous research has shown that within the domain of medical journal abstracts the statistical distribution of words is neither random nor uniform, but is highly characteristic. Many words are used mainly or solely by one medical specialty or when writing about one particular level of description. Due to this regularity of usage, automatic classification within journal abstracts has proved quite successful. The present research asks two further questions. It investigates whether this statistical regularity and automatic classification success can also be achieved in medical textbook chapters. It then goes on to see whether the statistical distribution found in textbooks is sufficiently similar to that found in abstracts to permit accurate classification of abstracts based solely on previous knowledge of textbooks. 14 textbook chapters and 45 MEDLINE abstracts were submitted to an automatic classification program that had been trained only on chapters drawn from a standard textbook series. Statistical analysis of the properties of abstracts vs. chapters revealed important differences in word use. Automatic classification performance was good for chapters, but poor for abstracts.
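
    The abstract does not specify the classifier; the sketch below is a generic word-distribution (naive-Bayes-style) classifier in the same spirit, trained on two invented snippets standing in for textbook chapters.

      import math
      from collections import Counter

      chapters = {
          "cardiology": "heart valve infarction ecg heart murmur",
          "neurology": "brain seizure cortex neuron brain lesion",
      }
      counts = {c: Counter(text.split()) for c, text in chapters.items()}
      vocab = {w for cnt in counts.values() for w in cnt}

      def classify(text):
          """Pick the specialty whose word distribution best fits the text."""
          scores = {}
          for c, cnt in counts.items():
              total = sum(cnt.values())
              scores[c] = sum(
                  math.log((cnt[w] + 1) / (total + len(vocab)))   # Laplace smoothing
                  for w in text.split())
          return max(scores, key=scores.get)

      print(classify("patient ecg shows infarction"))   # cardiology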

  19. The MOLGENIS toolkit: rapid prototyping of biosoftware at the push of a button.

    PubMed

    Swertz, Morris A; Dijkstra, Martijn; Adamusiak, Tomasz; van der Velde, Joeri K; Kanterakis, Alexandros; Roos, Erik T; Lops, Joris; Thorisson, Gudmundur A; Arends, Danny; Byelas, George; Muilu, Juha; Brookes, Anthony J; de Brock, Engbert O; Jansen, Ritsert C; Parkinson, Helen

    2010-12-21

    There is a huge demand on bioinformaticians to provide their biologists with user friendly and scalable software infrastructures to capture, exchange, and exploit the unprecedented amounts of new *omics data. We here present MOLGENIS, a generic, open source, software toolkit to quickly produce the bespoke MOLecular GENetics Information Systems needed. The MOLGENIS toolkit provides bioinformaticians with a simple language to model biological data structures and user interfaces. At the push of a button, MOLGENIS' generator suite automatically translates these models into a feature-rich, ready-to-use web application including database, user interfaces, exchange formats, and scriptable interfaces. Each generator is a template of SQL, JAVA, R, or HTML code that would require much effort to write by hand. This 'model-driven' method ensures reuse of best practices and improves quality because the modeling language and generators are shared between all MOLGENIS applications, so that errors are found quickly and improvements are shared easily by a re-generation. A plug-in mechanism ensures that both the generator suite and generated product can be customized just as much as hand-written software. In recent years we have successfully evaluated the MOLGENIS toolkit for the rapid prototyping of many types of biomedical applications, including next-generation sequencing, GWAS, QTL, proteomics and biobanking. Writing 500 lines of model XML typically replaces 15,000 lines of hand-written programming code, which allows for quick adaptation if the information system is not yet to the biologist's satisfaction. Each application generated with MOLGENIS comes with an optimized database back-end, user interfaces for biologists to manage and exploit their data, programming interfaces for bioinformaticians to script analysis tools in R, Java, SOAP, REST/JSON and RDF, a tab-delimited file format to ease upload and exchange of data, and detailed technical documentation. Existing databases can be quickly enhanced with MOLGENIS generated interfaces using the 'ExtractModel' procedure. The MOLGENIS toolkit provides bioinformaticians with a simple model to quickly generate flexible web platforms for all possible genomic, molecular and phenotypic experiments with a richness of interfaces not provided by other tools. All the software and manuals are available free as LGPLv3 open source at http://www.molgenis.org.
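
    The MOLGENIS generator suite is far richer than this, but the core model-driven step (a declarative model compiled into code) can be sketched; the XML dialect below is invented for illustration and is not the actual MOLGENIS model language.

      import xml.etree.ElementTree as ET

      MODEL = """
      <model>
        <entity name="sample">
          <field name="id" type="int"/>
          <field name="tissue" type="varchar(64)"/>
        </entity>
      </model>
      """

      def generate_sql(model_xml):
          """Generate one CREATE TABLE statement per modeled entity."""
          root = ET.fromstring(MODEL.strip())
          for entity in root.findall("entity"):
              cols = ", ".join(
                  f"{f.get('name')} {f.get('type').upper()}"
                  for f in entity.findall("field"))
              yield f"CREATE TABLE {entity.get('name')} ({cols});"

      print("\n".join(generate_sql(MODEL)))
      # CREATE TABLE sample (id INT, tissue VARCHAR(64));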

  20. Secure web-based invocation of large-scale plasma simulation codes

    NASA Astrophysics Data System (ADS)

    Dimitrov, D. A.; Busby, R.; Exby, J.; Bruhwiler, D. L.; Cary, J. R.

    2004-12-01

    We present our design and initial implementation of a web-based system for running, both in parallel and serial, Particle-In-Cell (PIC) codes for plasma simulations with automatic post processing and generation of visual diagnostics.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dr. George L Mesina

    Our ultimate goal is to create and maintain RELAP5-3D as the best software tool available to analyze nuclear power plants. This begins with writing excellent code and requires thorough testing. This document covers development of RELAP5-3D software, the behavior of the RELAP5-3D program that must be maintained, and code testing. RELAP5-3D must perform in a manner consistent with previous code versions, with backward compatibility for the sake of the users. Thus file operations, code termination, input and output must remain consistent in form and content while adding appropriate new files, input and output as new features are developed. As computer hardware, operating systems, and other software change, RELAP5-3D must adapt and maintain performance. The code must be thoroughly tested to ensure that it continues to perform robustly on the supported platforms. The coding must be written in a consistent manner that makes the program easy to read, to reduce the time and cost of development, maintenance and error resolution. The programming guidelines presented here are intended to institutionalize a consistent way of writing FORTRAN code for the RELAP5-3D computer program that will minimize errors and rework. A common format and organization of program units creates a unifying look and feel to the code. This in turn increases readability and reduces the time required for maintenance, development and debugging. It also aids new programmers in reading and understanding the program. Therefore, when undertaking development of the RELAP5-3D computer program, the programmer must write computer code that follows these guidelines. This set of programming guidelines creates a framework of good programming practices, such as initialization, structured programming, and vector-friendly coding. It sets out formatting rules for lines of code, such as indentation, capitalization, spacing, etc. It creates limits on program units, such as subprograms, functions, and modules. It establishes documentation guidance on internal comments. The guidelines apply to both existing and new subprograms. They are written for both FORTRAN 77 and FORTRAN 95. The guidelines are not so rigorous as to inhibit a programmer’s unique style, but do restrict the variations in acceptable coding to create sufficient commonality that new readers will find the coding in each new subroutine familiar. It is recognized that this is a “living” document and must be updated as languages, compilers, and computer hardware and software evolve.

  2. Social Risk and Depression: Evidence from Manual and Automatic Facial Expression Analysis

    PubMed Central

    Girard, Jeffrey M.; Cohn, Jeffrey F.; Mahoor, Mohammad H.; Mavadati, Seyedmohammad; Rosenwald, Dean P.

    2014-01-01

    Investigated the relationship between change over time in severity of depression symptoms and facial expression. Depressed participants were followed over the course of treatment and video recorded during a series of clinical interviews. Facial expressions were analyzed from the video using both manual and automatic systems. Automatic and manual coding were highly consistent for FACS action units, and showed similar effects for change over time in depression severity. For both systems, when symptom severity was high, participants made more facial expressions associated with contempt, smiled less, and those smiles that occurred were more likely to be accompanied by facial actions associated with contempt. These results are consistent with the “social risk hypothesis” of depression. According to this hypothesis, when symptoms are severe, depressed participants withdraw from other people in order to protect themselves from anticipated rejection, scorn, and social exclusion. As their symptoms fade, participants send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and produced the same pattern of depression effects suggests that automatic facial expression analysis may be ready for use in behavioral and clinical science. PMID:24598859

  3. Research on Automatic Programming

    DTIC Science & Technology

    1975-12-31

    Sequential processes, deadlocks, and semaphore primitives, Ph.D. Thesis, Harvard University, November 1974; Center for Research in Computing...verified. Code generated to effect the synchronization makes use of the ECL control extension facility (Prenner's CI, see [Prenner]). The... semaphore operations [Dijkstra] is being developed. Initial results for this code generator are very encouraging; in many cases generated code is

  4. Support for Debugging Automatically Parallelized Programs

    NASA Technical Reports Server (NTRS)

    Hood, Robert; Jost, Gabriele; Biegel, Bryan (Technical Monitor)

    2001-01-01

    This viewgraph presentation provides information on the technical aspects of debugging computer code that has been automatically converted for use in a parallel computing system. Shared memory parallelization and distributed memory parallelization entail separate and distinct challenges for a debugging program. A prototype system has been developed which integrates various tools for the debugging of automatically parallelized programs including the CAPTools Database which provides variable definition information across subroutines as well as array distribution information.

  5. Motor control of handwriting in the developing brain: A review.

    PubMed

    Palmis, Sarah; Danna, Jeremy; Velay, Jean-Luc; Longcamp, Marieke

    This review focuses on the acquisition of the motor aspects of writing in adults, and in 5- to 12-year-old children without learning disabilities. We first describe the behavioural aspects of adult writing and the dominant models based on the notion of motor programs. We show that handwriting acquisition is characterized by the transition from reactive movements programmed stroke-by-stroke in younger children, to automatic control of the whole trajectory once the motor programs are memorized, at about 10 years old. Then, we describe the neural correlates of adult writing, and the changes that could occur with learning during childhood. The acquisition of a new skill is characterized by the involvement of a network more restricted in space and where neural specificity is increased in key regions. The cerebellum and the left dorsal premotor cortex are of fundamental importance in motor learning, and could be at the core of the acquisition of handwriting.

  6. Multiblock grid generation with automatic zoning

    NASA Technical Reports Server (NTRS)

    Eiseman, Peter R.

    1995-01-01

    An overview will be given for multiblock grid generation with automatic zoning. We shall explore the many advantages and benefits of this exciting technology and will also see how to apply it to a number of interesting cases. The technology is available in the form of a commercial code, GridPro(registered trademark)/az3000. This code takes surface geometry definitions and patterns of points as its primary input and produces high quality grids as its output. Before we embark upon our exploration, we shall first give a brief background of the environment in which this technology fits.

  7. A posteriori error determination and grid adaptation for AMR and ALE computational fluid dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapenta, G. M.

    2002-01-01

    We discuss grid adaptation for application to AMR and ALE codes. Two new contributions are presented. First, a new method to locate the regions where the truncation error is being created due to an insufficient accuracy: the operator recovery error origin (OREO) detector. The OREO detector is automatic, reliable, easy to implement and extremely inexpensive. Second, a new grid motion technique is presented for application to ALE codes. The method is based on the Brackbill-Saltzman approach but it is directly linked to the OREO detector and moves the grid automatically to minimize the error.

  8. An Expert System for the Development of Efficient Parallel Code

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Chun, Robert; Jin, Hao-Qiang; Labarta, Jesus; Gimenez, Judit

    2004-01-01

    We have built the prototype of an expert system to assist the user in the development of efficient parallel code. The system was integrated into the parallel programming environment that is currently being developed at NASA Ames. The expert system interfaces to tools for automatic parallelization and performance analysis. It uses static program structure information and performance data in order to automatically determine causes of poor performance and to make suggestions for improvements. In this paper we give an overview of our programming environment, describe the prototype implementation of our expert system, and demonstrate its usefulness with several case studies.

  9. Automatic code generation in SPARK: Applications of computer algebra and compiler-compilers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nataf, J.M.; Winkelmann, F.

    We show how computer algebra and compiler-compilers are used for automatic code generation in the Simulation Problem Analysis and Research Kernel (SPARK), an object oriented environment for modeling complex physical systems that can be described by differential-algebraic equations. After a brief overview of SPARK, we describe the use of computer algebra in SPARK's symbolic interface, which generates solution code for equations that are entered in symbolic form. We also describe how the Lex/Yacc compiler-compiler is used to achieve important extensions to the SPARK simulation language, including parametrized macro objects and steady-state resetting of a dynamic simulation. The application of these methods to solving the partial differential equations for two-dimensional heat flow is illustrated.

  10. Introduction of the ASGARD Code

    NASA Technical Reports Server (NTRS)

    Bethge, Christian; Winebarger, Amy; Tiwari, Sanjiv; Fayock, Brian

    2017-01-01

    ASGARD stands for 'Automated Selection and Grouping of events in AIA Regional Data'. The code is a refinement of the event detection method in Ugarte-Urra & Warren (2014). It is intended to automatically detect and group brightenings ('events') in the AIA EUV channels, to record event parameters, and to find related events over multiple channels. Ultimately, the goal is to automatically determine heating and cooling timescales in the corona and to significantly increase statistics in this respect. The code is written in IDL and requires the SolarSoft library. It is parallelized and can run with multiple CPUs. Input files are regions of interest (ROIs) in time series of AIA images from the JSOC cutout service (http://jsoc.stanford.edu/ajax/exportdata.html). The ROIs need to be tracked, co-registered, and limited in time (typically 12 hours).

  11. Vaccine Hesitancy in Discussion Forums: Computer-Assisted Argument Mining with Topic Models.

    PubMed

    Skeppstedt, Maria; Kerren, Andreas; Stede, Manfred

    2018-01-01

    Arguments used when vaccination is debated on Internet discussion forums might give us valuable insights into the reasons behind vaccine hesitancy. In this study, we applied automatic topic modelling to a collection of 943 discussion posts in which vaccination was debated, and six distinct discussion topics were detected by the algorithm. When manually coding the posts ranked as most typical for these six topics, a set of semantically coherent arguments was identified for each extracted topic. This indicates that topic modelling is a useful method for automatically identifying vaccine-related discussion topics and for identifying debate posts where these topics are discussed. This functionality could facilitate manual coding of salient arguments, and thereby form an important component in a system for computer-assisted coding of vaccine-related discussions.
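
    A pipeline of the kind described can be assembled from standard components. The sketch below fits a six-topic LDA model with scikit-learn and prints the top words per topic to support manual coding; the toy corpus and all preprocessing choices are placeholder assumptions, since the study does not publish its implementation.

      # Illustrative topic-modelling pipeline: bag-of-words counts followed
      # by latent Dirichlet allocation with six topics, mirroring the six
      # topics reported in the study. The three posts are toy placeholders.
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      posts = ["vaccines cause side effects in children",
               "herd immunity protects vulnerable infants",
               "pharmaceutical companies cannot be trusted"]

      vec = CountVectorizer(stop_words="english").fit(posts)
      X = vec.transform(posts)
      lda = LatentDirichletAllocation(n_components=6, random_state=0).fit(X)

      # Highest-weighted words per topic, the raw material for manual coding.
      terms = vec.get_feature_names_out()
      for k, weights in enumerate(lda.components_):
          print(f"topic {k}:", ", ".join(terms[i] for i in weights.argsort()[::-1][:5]))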

  12. Bypassing Races in Live Applications with Execution Filters

    DTIC Science & Technology

    2010-01-01

    LOOM creates the needed locks and semaphores on demand. The first time a lock or semaphore is referenced by one of the inserted synchronization ...runtime. LOOM provides a flexible and safe language for developers to write execution filters that explicitly synchronize code. It then uses an...first compile their application with LOOM. At runtime, to work around a race, an application developer writes an execution filter that synchronizes the

  13. Professional Content Knowledge of Grades One--Three Teachers in Sweden for Reading and Writing Instruction: Language Structures, Code Concepts, and Spelling Rules

    ERIC Educational Resources Information Center

    Alatalo, Tarja

    2016-01-01

    In this study, Swedish teachers of grades 1-3, with various teacher-training backgrounds, were tested to determine if they have the requisite awareness of language elements and the way these elements are represented in writing. The results were poor, yet the indication was that teachers with a good educational background in literacy and a good…

  14. Method and apparatus for automatically generating airfoil performance tables

    NASA Technical Reports Server (NTRS)

    van Dam, Cornelis P. (Inventor); Mayda, Edward A. (Inventor); Strawn, Roger Clayton (Inventor)

    2006-01-01

    One embodiment of the present invention provides a system that facilitates automatically generating a performance table for an object, wherein the object is subject to fluid flow. The system operates by first receiving a description of the object and testing parameters for the object. The system executes a flow solver using the testing parameters and the description of the object to produce an output. Next, the system determines if the output of the flow solver indicates negative density or pressure. If not, the system analyzes the output to determine if the output is converging. If converging, the system writes the output to the performance table for the object.
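
    The driver logic in this description (run the solver, reject non-physical states, record converged results) can be sketched as a simple loop. All functions below are hypothetical stand-ins, since the patent text does not publish code.

      # Sketch of the patent's driver logic. run_flow_solver and the two
      # checks are hypothetical stand-ins for the actual solver interface.
      def run_flow_solver(geometry, params):
          # A real implementation would invoke the CFD solver here.
          return {"density_min": 1.0, "residual_drop": 1e-6,
                  "cl": 0.5, "cd": 0.01}

      def has_negative_density_or_pressure(out):
          return out["density_min"] <= 0.0

      def is_converging(out):
          return out["residual_drop"] < 1e-4   # residuals fell far enough

      def build_performance_table(geometry, test_matrix):
          table = []
          for params in test_matrix:
              out = run_flow_solver(geometry, params)
              if has_negative_density_or_pressure(out):
                  continue                      # discard non-physical output
              if is_converging(out):
                  table.append((params, (out["cl"], out["cd"])))
          return table

      demo = [{"mach": 0.5, "alpha": a} for a in (0.0, 2.0, 4.0)]
      print(build_performance_table("naca0012", demo))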

  15. The development of a multi-target compiler-writing system for flight software development

    NASA Technical Reports Server (NTRS)

    Feyock, S.; Donegan, M. K.

    1977-01-01

    A wide variety of systems designed to assist the user in the task of writing compilers has been developed. A survey of these systems reveals that none is entirely appropriate to the purposes of the MUST project, which involves the compilation of one or at most a small set of higher-order languages to a wide variety of target machines offering little or no software support. This requirement dictates that any compiler writing system employed must provide maximal support in the areas of semantics specification and code generation, the areas in which existing compiler writing systems as well as theoretical underpinnings are weakest. This paper describes an ongoing research and development effort to create a compiler writing system which will overcome these difficulties, thus providing a software system which makes possible the fast, trouble-free creation of reliable compilers for a wide variety of target computers.

  16. Reforming Federal Student Loan Repayment: A Single, Automatic, Income-Driven System

    ERIC Educational Resources Information Center

    Baum, Sandy; Chingos, Matthew

    2017-01-01

    The federal role in higher education has grown over the past two decades, and now a new administration has the opportunity to strengthen policies that support students and their colleges and universities. To help inform these decisions, the Urban Institute convened a bipartisan group of scholars and policy advisers to write a series of memos…

  17. Web-Based Essay Critiquing System and EFL Students' Writing: A Quantitative and Qualitative Investigation

    ERIC Educational Resources Information Center

    Lee, Cynthia; Wong, Kelvin C. K.; Cheung, William K.; Lee, Fion S. L.

    2009-01-01

    The paper first describes a web-based essay critiquing system developed by the authors using latent semantic analysis (LSA), an automatic text analysis technique, to provide students with immediate feedback on content and organisation for revision whenever there is an internet connection. It reports on its effectiveness in enhancing adult EFL…
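
    The LSA scoring step such a system relies on can be sketched with standard tools: project TF-IDF vectors into a low-rank semantic space and compare an essay with reference texts by cosine similarity. The corpus, dimensionality, and scoring below are illustrative assumptions, not the authors' implementation.

      # Minimal LSA sketch: truncated SVD over TF-IDF vectors, then cosine
      # similarity between a student draft and reference essays. The texts
      # and the two-dimensional semantic space are toy placeholders.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.decomposition import TruncatedSVD
      from sklearn.metrics.pairwise import cosine_similarity

      references = ["model essay about city life and transport",
                    "model essay about education and schooling"]
      draft = ["student draft about schools and teachers"]

      tfidf = TfidfVectorizer().fit(references + draft)
      svd = TruncatedSVD(n_components=2).fit(tfidf.transform(references + draft))

      ref_vecs = svd.transform(tfidf.transform(references))
      draft_vec = svd.transform(tfidf.transform(draft))
      print(cosine_similarity(draft_vec, ref_vecs))   # content-overlap scores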

  18. Iterative Design and Classroom Evaluation of Automated Formative Feedback for Improving Peer Feedback Localization

    ERIC Educational Resources Information Center

    Nguyen, Huy; Xiong, Wenting; Litman, Diane

    2017-01-01

    A peer-review system that automatically evaluates and provides formative feedback on free-text feedback comments of students was iteratively designed and evaluated in college and high-school classrooms. Classroom assignments required students to write paper drafts and submit them to a peer-review system. When student peers later submitted feedback…

  19. Towards the Automatic Generation of Programmed Foreign-Language Instructional Materials.

    ERIC Educational Resources Information Center

    Van Campen, Joseph A.

    The purpose of this report is to describe a set of programs which either perform certain tasks useful in the generation of programed foreign-language instructional material or facilitate the writing of such task-oriented programs by other researchers. The programs described are these: (1) a PDP-10 assembly language program for the selection from a…

  1. The Effects of Single and Dual Coded Multimedia Instructional Methods on Chinese Character Learning

    ERIC Educational Resources Information Center

    Wang, Ling

    2013-01-01

    Learning Chinese characters is a difficult task for adult English native speakers due to the significant differences between the Chinese and English writing system. The visuospatial properties of Chinese characters have inspired the development of instructional methods using both verbal and visual information based on the Dual Coding Theory. This…

  2. Explorations in Policy Enactment: Feminist Thought Experiments with Basil Bernstein's Code Theory

    ERIC Educational Resources Information Center

    Singh, Parlo; Pini, Barbara; Glasswell, Kathryn

    2018-01-01

    This paper builds on feminist elaborations of Bernstein's code theory to engage in a series of thought experiments with interview data produced during a co-inquiry design-based research intervention project. It presents three accounts of thinking/writing with data. Our purpose in presenting three different accounts of interview data is to…

  3. Peregrine System User Basics | High-Performance Computing | NREL

    Science.gov Websites

    [Website excerpt; navigation and code-sample fragments are not fully recoverable.] Users connect to peregrine.hpc.nrel.gov or to one of the login nodes, for example with ssh -Y from a Linux or Mac OS X system. A code example walks through creating a file called hello.F90 containing a minimal Fortran program; user-specific information in commands is indicated by enclosing it in brackets < >.

  4. Numbers can move our hands: a spatial representation effect in digits handwriting.

    PubMed

    Perrone, Gelsomina; de Hevia, Maria Dolores; Bricolo, Emanuela; Girelli, Luisa

    2010-09-01

    The interaction between numbers and action-related processes is currently one of the most investigated topics in numerical cognition. The present study contributes to this line of research by investigating, for the first time, the effects of number on an overlearned complex motor plan that does not require explicit lateralised movements or strict spatial constraints: spontaneous handwriting. In particular, we investigated whether the spatial mapping of numbers interferes with the motor planning involved in writing. To this aim, participants' spontaneous handwriting of single digits (Exp. 1) and letters (Exp. 2) was recorded with a digitising tablet. We show that the writing of numbers is characterised by a spatial dislocation of the digits as a function of their magnitude, i.e., small numbers were written leftwards relative to large numbers. In contrast, the writing of letters showed a null or marginal effect with respect to their dislocation on the writing area. These findings show that the automatic mapping of numbers into space interacts with action planning by modulating specific motor parameters in spontaneous handwriting.

  5. Final report for 'FOSPACK'

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruge, J W; Dean, D

    2000-11-20

    The goal of this subcontract was to modify the FOSPACK code, developed by John Ruge, to call the BoomerAMG solver developed at LLNL through the HYPRE interface. FOSPACK is a package developed for the automatic discretization and solution of First-Order System Least-Squares (FOSLS) formulations of 2D partial differential equations (cf. [3-9]). FOSPACK takes a user-specified mesh (which can be an unstructured combination of triangular and quadrilateral elements) and a specification of the first-order system, and produces the discretizations needed for solution. Generally, all specifications are contained in data files, so no re-compilation is necessary when changing domains, mesh sizes, problems, etc. Much of the work in FOSPACK has gone into an interpreter that allows for simple, intuitive specification of the equations. The interpreter reads the equations, processes them, and stores them as instruction lists needed to apply the operators involved to finite element basis functions, allowing assembly of the discrete system. Quite complex equations may be specified, including variable coefficients, user defined functions, and vector notation. The first-order systems may be nonlinear, with linearizations either performed automatically, or specified in a convenient way by the user. The program also includes global/local refinement capability. FOSLS formulations are very well suited for solution by algebraic multigrid (AMG) (cf. [10-13]). The original version uses a version of algebraic multigrid written by John Ruge in FORTRAN 77, and modified somewhat for use with FOSPACK. BoomerAMG, a version of AMG developed at CASC, has a number of advantages over the FORTRAN version, including dynamic memory allocation and parallel capability. This project was to benefit both FRSC and CASC, giving FOSPACK the advantages of BoomerAMG, while giving CASC a tool for testing FOSLS as a discretization method for problems of interest there. The major parts of this work were implementation and testing of the HYPRE package on our computers, writing a C wrapper/driver for the FOSPACK code, and modifying the wrapper to call BoomerAMG through the HYPRE interface.

  6. Functional Anatomy of Writing with the Dominant Hand

    PubMed Central

    Najee-ullah, Muslimah ‘Ali; Hallett, Mark

    2013-01-01

    While writing performed by any body part is similar in style, indicating a common program, writing with the dominant hand is particularly skilled. We hypothesized that this skill utilizes a special motor network supplementing the motor equivalence areas. Using functional magnetic resonance imaging in 13 normal subjects, we studied nine conditions: writing, zigzagging and tapping, each with the right hand, left hand and right foot. We identified brain regions activated with the right (dominant) hand writing task, exceeding the activation common to right-hand use and the writing program, both identified without right-hand writing itself. Right-hand writing significantly differed from the other tasks. First, we observed stronger activations in the left dorsal prefrontal cortex, left intraparietal sulcus and right cerebellum. Second, the left anterior putamen was required to initiate all the tested tasks, but only showed sustained activation during the right-hand writing condition. Lastly, an exploratory analysis showed clusters in the left ventral premotor cortex and inferior and superior parietal cortices were only significantly active for right-hand writing. The increased activation with right-hand writing cannot be ascribed to increased effort, since this is a well-practiced task much easier to perform than some of the other tasks studied. Because parietal-premotor connections code for particular skills, it would seem that the parietal and premotor regions, together with basal ganglia-sustained activation likely underlie the special skill of handwriting with the dominant hand. PMID:23844132

  7. Functional anatomy of writing with the dominant hand.

    PubMed

    Horovitz, Silvina G; Gallea, Cecile; Najee-Ullah, Muslimah 'ali; Hallett, Mark

    2013-01-01

    While writing performed by any body part is similar in style, indicating a common program, writing with the dominant hand is particularly skilled. We hypothesized that this skill utilizes a special motor network supplementing the motor equivalence areas. Using functional magnetic resonance imaging in 13 normal subjects, we studied nine conditions: writing, zigzagging and tapping, each with the right hand, left hand and right foot. We identified brain regions activated with the right (dominant) hand writing task, exceeding the activation common to right-hand use and the writing program, both identified without right-hand writing itself. Right-hand writing significantly differed from the other tasks. First, we observed stronger activations in the left dorsal prefrontal cortex, left intraparietal sulcus and right cerebellum. Second, the left anterior putamen was required to initiate all the tested tasks, but only showed sustained activation during the right-hand writing condition. Lastly, an exploratory analysis showed clusters in the left ventral premotor cortex and inferior and superior parietal cortices were only significantly active for right-hand writing. The increased activation with right-hand writing cannot be ascribed to increased effort, since this is a well-practiced task much easier to perform than some of the other tasks studied. Because parietal-premotor connections code for particular skills, it would seem that the parietal and premotor regions, together with basal ganglia-sustained activation likely underlie the special skill of handwriting with the dominant hand.

  8. Impaired Retention of Motor Learning of Writing Skills in Patients with Parkinson's Disease with Freezing of Gait.

    PubMed

    Heremans, Elke; Nackaerts, Evelien; Vervoort, Griet; Broeder, Sanne; Swinnen, Stephan P; Nieuwboer, Alice

    2016-01-01

    Patients with Parkinson's disease (PD) and freezing of gait (FOG) suffer from more impaired motor and cognitive functioning than their non-freezing counterparts. This underlies an even higher need for targeted rehabilitation programs in this group. However, so far it is unclear whether FOG affects the ability for consolidation and generalization of motor learning, and thus the efficacy of rehabilitation. The aim of this study was to investigate the hallmarks of motor learning in people with FOG compared to those without, by assessing the effects of an intensive motor learning program to improve handwriting. Thirty-five patients with PD, including 19 without and 16 with FOG, received six weeks of handwriting training consisting of exercises provided on paper and on a touch-sensitive writing tablet. Writing training was based on single- and dual-task writing and was supported by means of visual target zones. To investigate automatization, generalization and retention of learning, writing performance was assessed before and after training, in the presence and absence of cues and dual tasking, and after a six-week retention period. Writing amplitude was measured as the primary outcome measure, and variability of writing and dual-task accuracy as secondary outcomes. Significant learning effects were present on all outcome measures in both groups, both for writing under single- and dual-task conditions. However, the gains in writing amplitude were not retained after a retention period of six weeks without training in the patient group with FOG. Furthermore, patients with FOG were highly dependent on the visual target zones, reflecting reduced generalization of learning in this group. Although short-term learning effects were present in both groups, generalization and retention of motor learning were specifically impaired in patients with PD and FOG. The results of this study underscore the importance of individualized rehabilitation protocols.

  9. A Model of Human Cognitive Behavior in Writing Code for Computer Programs. Volume 1

    DTIC Science & Technology

    1975-05-01

    nearly all programming languages, each line of code actually involves a great many decisions - basic statement types, variable and expression choices...labels, etc. - and any heuristic which evaluates code on the basis of a single decision is not likely to have sufficient power. Only the use of plans...recalculated in the following line because it was needed again. The second reason is that there are some decisions about the structure of a program

  10. TFaNS Tone Fan Noise Design/Prediction System. Volume 1; System Description, CUP3D Technical Documentation and Manual for Code Developers

    NASA Technical Reports Server (NTRS)

    Topol, David A.

    1999-01-01

    TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage, including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFaNS consists of: the codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files; CUP3D, the Fan Noise Coupling Code that reads these files, solves the coupling problem, and outputs the desired noise predictions; and AWAKEN, the CFD/Measured Wake Postprocessor, which reformats CFD wake predictions and/or measured wake data so they can be used by the system. This volume of the report provides technical background for TFaNS, including the organization of the system and CUP3D technical documentation. This document also provides information for code developers who must write Acoustic Property Files in the CUP3D format. This report is divided into three volumes: Volume I: System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume II: User's Manual, TFaNS Vers. 1.4; Volume III: Evaluation of System Codes.

  11. The “2T” ion-electron semi-analytic shock solution for code-comparison with xRAGE: A report for FY16

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferguson, Jim Michael

    2016-10-05

    This report documents an effort to generate the semi-analytic "2T" ion-electron shock solution developed in the paper by Masser, Wohlbier, and Lowrie, and the initial attempts to understand how to use this solution as a code-verification tool for one of LANL's ASC codes, xRAGE. Most of the work so far has gone into generating the semi-analytic solution. Considerable effort will go into understanding how to write the xRAGE input deck that matches the boundary conditions imposed by the solution, and also what physics models must be implemented within the semi-analytic solution itself to match the model assumptions inherent within xRAGE. Therefore, most of this report focuses on deriving the equations for the semi-analytic 1D-planar time-independent "2T" ion-electron shock solution, and is written in a style that is intended to provide clear guidance for anyone writing their own solver.

  12. Artistic production in dyslectic children.

    PubMed

    Cohn, R; Neumann, M A

    1977-01-01

    In the study of children with language problems, particularly in reading and writing, it has been observed that some have an outstanding ability to produce artistic pictures and objects. These productions are perceptive, well organized and generally contain much action. Despite their pictorial skill, these patients may have only a rudimentary use of coded symbolic graphic forms. Others display moderate ability in reading and writing. These patients frequently have the disorganized overactive behavior and the motor clumsiness that is so common in the dyslectic child; some, however, are biologically effective. From this material we entertain the hypothesis that picture (artistic) productions are generated by the sub-dominant cerebral hemisphere, and that this function is quite distinct from the coded graphic operations resident in the dominant hemisphere. If this hypothesis is correct, it would seem socially beneficial to allow these patients to develop their unique artistic ability to its full capacity, and not to overemphasize the correction of the disturbed coded symbol operations in remedial training.

  13. A procedure for automating CFD simulations of an inlet-bleed problem

    NASA Technical Reports Server (NTRS)

    Chyu, Wei J.; Rimlinger, Mark J.; Shih, Tom I.-P.

    1995-01-01

    A procedure was developed to improve the turn-around time for computational fluid dynamics (CFD) simulations of an inlet-bleed problem involving oblique shock-wave/boundary-layer interactions on a flat plate with bleed into a plenum through one or more circular holes. This procedure is embodied in a preprocessor called AUTOMAT. With AUTOMAT, once data for the geometry and flow conditions have been specified (either interactively or via a namelist), it will automatically generate all input files needed to perform a three-dimensional Navier-Stokes simulation of the prescribed inlet-bleed problem by using the PEGASUS and OVERFLOW codes. The input files automatically generated by AUTOMAT include those for the grid system and those for the initial and boundary conditions. The grid systems automatically generated by AUTOMAT are multi-block structured grids of the overlapping type. Results obtained by using AUTOMAT are presented to illustrate its capability.

  14. Leap Frog and Time Step Sub-Cycle Scheme for Coupled Neutronics and Thermal-Hydraulic Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, S.

    2002-07-01

    As the result of advancing TCP/IP-based inter-process communication technology, more and more legacy thermal-hydraulic codes have been coupled with neutronics codes to provide best-estimate capabilities for reactivity-related reactor transient analysis. Most of the coupling schemes are based on closely coupled serial or parallel approaches. Therefore, the execution of the coupled codes usually requires significant CPU time when a complicated system is analyzed. The Leap Frog scheme has been used to reduce the run time. The extent of the decoupling is usually determined based on a trial and error process for a specific analysis. It is the intent of this paper to develop a set of general criteria which can be used to invoke the automatic Leap Frog algorithm. The algorithm will not only provide the run time reduction but also preserve the accuracy. The criteria will also serve as the base of an automatic time step sub-cycle scheme when a sudden reactivity change is introduced and the thermal-hydraulic code is marching with a relatively large time step. (authors)
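
    The idea can be illustrated generically: the two codes exchange data once per macro time step, and the thermal-hydraulic side sub-cycles its step when reactivity is changing quickly. The toy models and the sub-cycle criterion below are hypothetical stand-ins, not the paper's criteria.

      # Generic sketch of leap-frog coupling with automatic time-step
      # sub-cycling. Both "codes" are toy stand-ins, not RELAP-class solvers.
      class Neutronics:
          def __init__(self):
              self.rho = 0.0                       # reactivity
          def reactivity(self):
              return self.rho
          def advance(self, dt, feedback):
              self.rho += feedback * dt            # feedback drives reactivity
              return 1.0 + 10.0 * self.rho         # toy normalized power

      class ThermalHydraulics:
          def __init__(self):
              self.temp = 320.0                    # start off-nominal (K)
          def feedback(self):
              return -1e-4 * (self.temp - 300.0)   # Doppler-like feedback
          def advance(self, dt, power):
              self.temp += (power - 1.0) * dt      # toy energy balance

      def couple(neut, th, t_end, dt_macro, rho_tol=1e-3):
          """Exchange data once per macro step; sub-cycle the T-H side when
          reactivity changed by more than rho_tol over the last macro step."""
          t, rho_prev = 0.0, neut.reactivity()
          while t < t_end:
              rho = neut.reactivity()
              n_sub = max(1, int(abs(rho - rho_prev) / rho_tol))
              power = neut.advance(dt_macro, th.feedback())
              for _ in range(n_sub):
                  th.advance(dt_macro / n_sub, power)
              rho_prev, t = rho, t + dt_macro

      n, th = Neutronics(), ThermalHydraulics()
      couple(n, th, t_end=10.0, dt_macro=0.5)
      print(round(n.reactivity(), 5), round(th.temp, 2))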

  15. Automated Translation of Safety Critical Application Software Specifications into PLC Ladder Logic

    NASA Technical Reports Server (NTRS)

    Leucht, Kurt W.; Semmel, Glenn S.

    2008-01-01

    The numerous benefits of automatic application code generation are widely accepted within the software engineering community. A few of these benefits include raising the abstraction level of application programming, shorter product development time, lower maintenance costs, and increased code quality and consistency. Surprisingly, code generation concepts have not yet found wide acceptance and use in the field of programmable logic controller (PLC) software development. Software engineers at the NASA Kennedy Space Center (KSC) recognized the need for PLC code generation while developing their new ground checkout and launch processing system. They developed a process and a prototype software tool that automatically translates a high-level representation or specification of safety critical application software into ladder logic that executes on a PLC. This process and tool are expected to increase the reliability of the PLC code over that which is written manually, and may even lower life-cycle costs and shorten the development schedule of the new control system at KSC. This paper examines the problem domain and discusses the process and software tool that were prototyped by the KSC software engineers.

  16. Motion-adaptive model-assisted compatible coding with spatiotemporal scalability

    NASA Astrophysics Data System (ADS)

    Lee, JaeBeom; Eleftheriadis, Alexandros

    1997-01-01

    We introduce the concept of motion-adaptive spatio-temporal model-assisted compatible (MA-STMAC) coding, a technique to selectively encode areas of different importance to the human eye, in terms of space and time, in moving images, taking object motion into account. Previous STMAC was proposed based on the fact that human 'eye contact' and 'lip synchronization' are very important in person-to-person communication. Several areas, including the eyes and lips, need different types of quality, since different areas have different perceptual significance to human observers. The approach provides a better rate-distortion tradeoff than conventional image coding techniques based on MPEG-1, MPEG-2, H.261, as well as H.263. STMAC coding is applied on top of an encoder, taking full advantage of its core design. Model motion tracking in our previous STMAC approach was not automatic. The proposed MA-STMAC coding considers the motion of the human face within the STMAC concept using automatic area detection. Experimental results are given using ITU-T H.263, addressing very low bit-rate compression.

  17. 77 FR 66601 - Electronic Tariff Filings; Notice of Change to eTariff Type of Filing Codes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-06

    ... Tariff Filings; Notice of Change to eTariff Type of Filing Codes Take notice that, effective November 18, 2012, the list of available eTariff Type of Filing Codes (TOFC) will be modified to include a new TOFC... Energy's regulations. Tariff records included in such filings will be automatically accepted to be...

  18. Hyperbolic and semi-hyperbolic surface codes for quantum storage

    NASA Astrophysics Data System (ADS)

    Breuckmann, Nikolas P.; Vuillot, Christophe; Campbell, Earl; Krishna, Anirudh; Terhal, Barbara M.

    2017-09-01

    We show how a hyperbolic surface code could be used for overhead-efficient quantum storage. We give numerical evidence for a noise threshold of 1.3% for the {4,5}-hyperbolic surface code in a phenomenological noise model (as compared with 2.9% for the toric code). In this code family, parity checks are of weight 4 and 5, while each qubit participates in four different parity checks. We introduce a family of semi-hyperbolic codes that interpolate between the toric code and the {4,5}-hyperbolic surface code in terms of encoding rate and threshold. We show how these hyperbolic codes outperform the toric code in terms of qubit overhead for a target logical error probability. We show how Dehn twists and lattice code surgery can be used to read and write individual qubits to this quantum storage medium.
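
    For context, the encoding-rate trade-off mentioned here follows from the Euler characteristic of the {r,s} tiling. Assuming the standard convention for this code family (n qubits on edges, k logical qubits set by the surface genus), and stated as background rather than quoted from the abstract:

      % Encoding rate of an {r,s} hyperbolic surface code: V - E + F gives
      % k = 2 + n(1 - 2/r - 2/s), hence a constant asymptotic rate.
      \[
        \frac{k}{n} \;=\; 1 - \frac{2}{r} - \frac{2}{s} + \frac{2}{n},
        \qquad
        \{r,s\} = \{4,5\}:\;\; \frac{k}{n} \longrightarrow \frac{1}{10}
        \;\; (n \to \infty).
      \]

    The {4,5} family thus maintains roughly one logical qubit per ten physical qubits as it grows, whereas the toric code encodes a fixed k = 2 regardless of n, which is the overhead advantage the abstract quantifies.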

  19. Knowledge-based approach to system integration

    NASA Technical Reports Server (NTRS)

    Blokland, W.; Krishnamurthy, C.; Biegl, C.; Sztipanovits, J.

    1988-01-01

    To solve complex problems one can often use the decomposition principle. However, a problem is seldom decomposable into completely independent subproblems. System integration deals with the problem of resolving the interdependencies and the integration of the subsolutions. A natural method of decomposition is the hierarchical one. High-level specifications are broken down into lower-level specifications until they can be transformed into solutions relatively easily. By automating the hierarchical decomposition and solution generation, an integrated system is obtained in which the declaration of high-level specifications is enough to solve the problem. We offer a knowledge-based approach to integrating the development and building of control systems. The process modeling is supported by using graphic editors. The user selects and connects icons that represent subprocesses and might refer to prewritten programs. The graphical editor assists the user in selecting parameters for each subprocess and allows the testing of a specific configuration. Next, from the definitions created by the graphical editor, the actual control program is built. Fault-diagnosis routines are generated automatically as well. Since the user is not required to write program code and knowledge about the process is present in the development system, the user is not required to have expertise in many fields.

  20. New Modular Ultrasonic Signal Processing Building Blocks for Real-Time Data Acquisition and Post Processing

    NASA Astrophysics Data System (ADS)

    Weber, Walter H.; Mair, H. Douglas; Jansen, Dion

    2003-03-01

    A suite of basic signal processors has been developed. These basic building blocks can be cascaded together to form more complex processors without the need for programming. The data structures between each of the processors are handled automatically. This allows a processor built for one purpose to be applied to any type of data such as images, waveform arrays and single values. The processors are part of Winspect Data Acquisition software. The new processors are fast enough to work on A-scan signals live while scanning. Their primary use is to extract features, reduce noise or to calculate material properties. The cascaded processors work equally well on live A-scan displays, live gated data or as a post-processing engine on saved data. Researchers are able to call their own MATLAB or C-code from anywhere within the processor structure. A built-in formula node processor that uses a simple algebraic editor may make external user programs unnecessary. This paper also discusses the problems associated with ad hoc software development and how graphical programming languages can tie up researchers writing software rather than designing experiments.

  1. Equation-oriented specification of neural models for simulations

    PubMed Central

    Stimberg, Marcel; Goodman, Dan F. M.; Benichoux, Victor; Brette, Romain

    2013-01-01

    Simulating biological neuronal networks is a core method of research in computational neuroscience. A full specification of such a network model includes a description of the dynamics and state changes of neurons and synapses, as well as the synaptic connectivity patterns and the initial values of all parameters. A standard approach in neuronal modeling software is to build network models based on a library of pre-defined components and mechanisms; if a model component does not yet exist, it has to be defined in a special-purpose or general low-level language and potentially be compiled and linked with the simulator. Here we propose an alternative approach that allows flexible definition of models by writing textual descriptions based on mathematical notation. We demonstrate that this approach allows the definition of a wide range of models with minimal syntax. Furthermore, such explicit model descriptions allow the generation of executable code for various target languages and devices, since the description is not tied to an implementation. Finally, this approach also has advantages for readability and reproducibility, because the model description is fully explicit, and because it can be automatically parsed and transformed into formatted descriptions. The presented approach has been implemented in the Brian2 simulator. PMID:24550820
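
    A minimal example of this equation-oriented style, using the Brian2 interface the paper describes (standard Brian2 usage; the model and parameter values are arbitrary choices for illustration):

      # A leaky integrate-and-fire group defined by a textual equation
      # rather than a pre-built component; Brian2 generates the simulation
      # code from the string. Parameter values are arbitrary.
      from brian2 import NeuronGroup, SpikeMonitor, run, ms, mV

      tau = 10*ms                 # membrane time constant
      v_rest = -70*mV             # resting potential
      eqs = """
      dv/dt = (v_rest - v + I) / tau : volt
      I : volt
      """
      group = NeuronGroup(10, eqs, threshold="v > -50*mV",
                          reset="v = -70*mV", method="exact")
      group.v = v_rest
      group.I = "i * 2.5*mV"      # graded drive across the ten neurons

      spikes = SpikeMonitor(group)
      run(100*ms)
      print(spikes.count[:])      # only the strongly driven neurons fire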

  2. The effects of dual tasking on handwriting in patients with Parkinson's disease.

    PubMed

    Broeder, S; Nackaerts, E; Nieuwboer, A; Smits-Engelsman, B C M; Swinnen, S P; Heremans, E

    2014-03-28

    Previous studies have shown that patients with Parkinson's disease (PD) experience extensive problems during dual tasking. Up to now, dual-task interference in PD has mainly been investigated in the context of gait research. However, the simultaneous performance of two different tasks is also a prerequisite to efficiently perform many other tasks in daily life, including upper limb tasks. To address this issue, this study investigated the effect of a secondary cognitive task on the performance of handwriting in patients with PD. Eighteen PD patients and 11 age-matched controls performed a writing task involving the production of repetitive loops under single- and dual-task conditions. The secondary task consisted of counting high and low tones during writing. The writing tests were performed with two amplitudes (0.6 and 1.0 cm) using a writing tablet. Results showed that dual-task performance was affected in PD patients versus controls. Dual tasking reduced writing amplitude in PD patients, but not in healthy controls (p=0.046). Patients' writing size was mainly reduced during the small-amplitude condition (small amplitude p=0.017; large amplitude p=0.310). This suggests that the control of writing at small amplitudes requires more compensatory brain-processing resources in PD and is thus less automatic than writing at large amplitudes. In addition, there was a larger dual-task effect on the secondary task in PD patients than controls (p=0.025). The writing tests on the writing tablet proved highly correlated with daily-life writing as measured by the 'Systematic Screening of Handwriting Difficulties' test (SOS-test) and other manual dexterity tasks, particularly during dual-task conditions. Taken together, these results provide additional insights into the motor control of handwriting and the effects of dual tasking during upper limb movements in patients with PD.

  3. Language development in a non-vocal child.

    PubMed

    Rogow, S M

    1994-01-01

    Many children who cannot speak, comprehend both oral and written language. Having knowledge of language is not the same as being able to use language for social transactions. Non-vocal children learn to use augmented and assisted systems, but they experience specific difficulties in initiating and maintaining conversations and making use of the pragmatic functions of language. The purpose of this study was to investigate the semantic and syntactic knowledge of a child with severe multiple disabilities who can read and write and comprehend two languages, but does not initiate conversation. The study demonstrates that high levels of language comprehension and ability to read and write do not automatically transfer to conversational competence or narrative ability.

  4. Prototype Automatic Target Screener.

    DTIC Science & Technology

    1980-05-19

    [Front-matter extraction residue: a list of tables (1: PATS Modules; 2: Vector Read/Write Command Format (SEL4); 3: Read Vector Data Command Format (SEL4); 4: Use Matrix) followed by fragments of the SEL4 vector read/write command-format tables; the bit-field contents are not recoverable.]

  5. Accuracy of automatic syndromic classification of coded emergency department diagnoses in identifying mental health-related presentations for public health surveillance.

    PubMed

    Liljeqvist, Henning T G; Muscatello, David; Sara, Grant; Dinh, Michael; Lawrence, Glenda L

    2014-09-23

    Syndromic surveillance in emergency departments (EDs) may be used to deliver early warnings of increases in disease activity, to provide situational awareness during events of public health significance, to supplement other information on trends in acute disease and injury, and to support the development and monitoring of prevention or response strategies. Changes in mental health-related ED presentations may be relevant to these goals, provided they can be identified accurately and efficiently. This study aimed to measure the accuracy of using diagnostic codes in electronic ED presentation records to identify mental health-related visits. We selected a random sample of 500 records from a total of 1,815,588 ED electronic presentation records from 59 NSW public hospitals during 2010. ED diagnoses were recorded using any of the ICD-9, ICD-10 or SNOMED CT classifications. Three clinicians, blinded to the automatically generated syndromic grouping and each other's classification, reviewed the triage notes and classified each of the 500 visits as mental health-related or not. A "mental health problem presentation" for the purposes of this study was defined as any ED presentation where either a mental disorder or a mental health problem was the reason for the ED visit. The combined clinicians' assessment of the records was used as the reference standard to measure the sensitivity, specificity, and positive and negative predictive values of the automatic classification of coded emergency department diagnoses. Agreement between the reference standard and the automated coded classification was estimated using the Kappa statistic. Agreement between the clinicians' classification and the automated coded classification was substantial (Kappa = 0.73, 95% CI: 0.58-0.87). The automatic syndromic grouping of coded ED diagnoses for mental health-related visits was found to be moderately sensitive (68%, 95% CI: 46%-84%) and highly specific (99%, 95% CI: 98%-99.7%) when compared with the reference standard in identifying mental health-related ED visits. Positive predictive value was 81% (95% CI: 57%-94%) and negative predictive value was 98% (95% CI: 97%-99%). Mental health presentations identified using diagnoses coded with various classifications in electronic ED presentation records offer sufficient accuracy for application in near real-time syndromic surveillance.
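
    The reported accuracy measures can be reproduced from a 2x2 table of reference versus automated classification. The counts below are invented to be roughly consistent with the reported values and are purely illustrative, not the study's data.

      # Sensitivity, specificity, predictive values and Cohen's kappa from a
      # 2x2 table; the 500 labels are fabricated for illustration only.
      from sklearn.metrics import cohen_kappa_score

      ref  = [1]*25 + [0]*475                     # clinician reference standard
      auto = [1]*17 + [0]*8 + [1]*4 + [0]*471     # automated syndromic grouping

      tp = sum(r and a for r, a in zip(ref, auto))
      tn = sum((not r) and (not a) for r, a in zip(ref, auto))
      fp = sum((not r) and a for r, a in zip(ref, auto))
      fn = sum(r and (not a) for r, a in zip(ref, auto))

      print("sensitivity", tp / (tp + fn))          # 0.68
      print("specificity", tn / (tn + fp))          # ~0.99
      print("PPV", tp / (tp + fp))                  # ~0.81
      print("NPV", tn / (tn + fn))                  # ~0.98
      print("kappa", cohen_kappa_score(ref, auto))  # ~0.73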

  6. Flexible Automatic Discretization for Finite Differences: Eliminating the Human Factor

    NASA Astrophysics Data System (ADS)

    Pranger, Casper

    2017-04-01

    In the geophysical numerical modelling community, finite differences are (in part due to their small footprint) a popular spatial discretization method for PDEs in the regular-shaped continuum that is the earth. However, they rapidly become prone to programming mistakes as the physics increases in complexity. To eliminate opportunities for human error, we have designed an automatic discretization algorithm using Wolfram Mathematica, in which the user supplies symbolic PDEs, the number of spatial dimensions, and a choice of symbolic boundary conditions, and the script transforms this information into matrix and right-hand-side rules ready for use in a C++ code that will accept them. The symbolic PDEs are further used to automatically develop and perform manufactured-solution benchmarks, ensuring physical fidelity at all stages while providing pragmatic targets for numerical accuracy. We find that this procedure greatly accelerates code development and provides a great deal of flexibility in one's choice of physics.
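
    The same kind of automation exists in other symbolic tools; as a sketch of the idea in Python/SymPy (not the authors' Mathematica code), a symbolic derivative can be expanded into a finite-difference stencil automatically:

      # Automatic finite-difference discretization of a symbolic PDE term:
      # SymPy derives the three-point stencil for u''(x) with no hand work.
      import sympy as sp

      x, h = sp.symbols("x h")
      u = sp.Function("u")

      stencil = u(x).diff(x, 2).as_finite_difference([x - h, x, x + h])
      print(sp.simplify(stencil))   # -> (u(x - h) - 2*u(x) + u(x + h))/h**2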

  7. Applications of automatic differentiation in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Carle, A.; Bischof, C.; Haigler, Kara J.; Newman, Perry A.

    1994-01-01

    Automatic differentiation (AD) is a powerful computational method that provides for computing exact sensitivity derivatives (SD) from existing computer programs for multidisciplinary design optimization (MDO) or sensitivity analysis. A pre-compiler AD tool for FORTRAN programs called ADIFOR has been developed. The ADIFOR tool has been easily and quickly applied by NASA Langley researchers to assess the feasibility and computational impact of AD in MDO with several different FORTRAN programs. These include a state-of-the-art three-dimensional multigrid Navier-Stokes flow solver for wings or aircraft configurations in transonic turbulent flow. With ADIFOR the user specifies sets of independent and dependent variables within an existing computer code. ADIFOR then traces the dependency path throughout the code, applies the chain rule to formulate derivative expressions, and generates new code to compute the required SD matrix. The resulting codes have been verified to compute exact non-geometric and geometric SD for a variety of cases, in less time than is required to compute the SD matrix using centered divided differences.
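
    The chain-rule mechanics that AD automates can be illustrated with forward-mode dual numbers. This is a generic sketch of the principle only; ADIFOR itself works by transforming FORTRAN source rather than by operator overloading.

      # Forward-mode AD via dual numbers: every value carries its derivative,
      # and arithmetic applies the chain rule exactly (an illustration of the
      # principle, not ADIFOR output).
      class Dual:
          def __init__(self, val, der=0.0):
              self.val, self.der = val, der
          def __add__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val + o.val, self.der + o.der)
          __radd__ = __add__
          def __mul__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val * o.val,
                          self.der * o.val + self.val * o.der)  # product rule
          __rmul__ = __mul__

      def f(x):
          return 3 * x * x + 2 * x + 1      # analytic df/dx = 6x + 2

      x = Dual(2.0, 1.0)                    # seed dx/dx = 1
      y = f(x)
      print(y.val, y.der)                   # 17.0 14.0 (exact, no truncation)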

  8. AutoBayes Program Synthesis System Users Manual

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Jafari, Hamed; Pressburger, Tom; Denney, Ewen; Buntine, Wray; Fischer, Bernd

    2008-01-01

    Program synthesis is the systematic, automatic construction of efficient executable code from high-level declarative specifications. AutoBayes is a fully automatic program synthesis system for the statistical data analysis domain; in particular, it solves parameter estimation problems. It has seen many successful applications at NASA and is currently being used, for example, to analyze simulation results for Orion. The input to AutoBayes is a concise description of a data analysis problem composed of a parameterized statistical model and a goal that is a probability term involving parameters and input data. The output is optimized and fully documented C/C++ code computing the values for those parameters that maximize the probability term. AutoBayes can solve many subproblems symbolically rather than having to rely on numeric approximation algorithms, thus yielding effective, efficient, and compact code. Statistical analysis is faster and more reliable, because effort can be focused on model development and validation rather than manual development of solution algorithms and code.

  9. Verification and Validation in a Rapid Software Development Process

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Easterbrook, Steve M.

    1997-01-01

    The high cost of software production is driving development organizations to adopt more automated design and analysis methods such as rapid prototyping, computer-aided software engineering (CASE) tools, and high-level code generators. Even developers of safety-critical software systems have adopted many of these new methods while striving to achieve high levels of quality and reliability. While these new methods may enhance productivity and quality in many cases, we examine some of the risks involved in the use of new methods in safety-critical contexts. We examine a case study involving the use of a CASE tool that automatically generates code from high-level system designs. We show that while high-level testing on the system structure is highly desirable, significant risks exist in the automatically generated code and in re-validating releases of the generated code after subsequent design changes. We identify these risks and suggest process improvements that retain the advantages of rapid, automated development methods within the quality and reliability contexts of safety-critical projects.

  10. Automatic choroid cells segmentation and counting based on approximate convexity and concavity of chain code in fluorescence microscopic image

    NASA Astrophysics Data System (ADS)

    Lu, Weihua; Chen, Xinjian; Zhu, Weifang; Yang, Lei; Cao, Zhaoyuan; Chen, Haoyu

    2015-03-01

    In this paper, we propose a method based on the Freeman chain code to segment and count rhesus choroid-retinal vascular endothelial cells (RF/6A) automatically in fluorescence microscopy images. The proposed method consists of four main steps. First, a threshold filter and morphological transform were applied to reduce the noise. Second, the boundary information was used to generate the Freeman chain codes. Third, the concave points were found based on the relationship between the difference of the chain code and the curvature. Finally, cell segmentation and counting were completed based on the number of concave points and the area and shape of the cells. The proposed method was tested on 100 fluorescence microscopic cell images; the average true positive rate (TPR) is 98.13% and the average false positive rate (FPR) is 4.47%. The preliminary results showed the feasibility and efficiency of the proposed method.
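    The concavity test on an 8-connected Freeman chain code can be sketched in a few lines of NumPy. The turn-sign convention below (clockwise turns on a counterclockwise boundary mark concavities) is a simplifying assumption; the paper additionally uses curvature, area, and shape features.

    ```python
    import numpy as np

    def concave_points(chain):
        """Flag concave points on a closed, counterclockwise 8-connected
        boundary given as a Freeman chain code (directions 0..7)."""
        chain = np.asarray(chain)
        diff = (np.roll(chain, -1) - chain) % 8   # direction change at each point
        turn = np.where(diff > 4, diff - 8, diff) # map to signed turns in [-3, 4]
        return np.where(turn < 0)[0]              # clockwise turns = concavities

    # toy boundary: mostly left turns with two clockwise notches
    print(concave_points([0, 0, 7, 0, 2, 2, 5, 4, 4, 6, 6]))  # -> [1 6]
    ```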

  11. Automatic Testcase Generation for Flight Software

    NASA Technical Reports Server (NTRS)

    Bushnell, David Henry; Pasareanu, Corina; Mackey, Ryan M.

    2008-01-01

    The TacSat3 project is applying Integrated Systems Health Management (ISHM) technologies to an Air Force spacecraft for operational evaluation in space. The experiment will demonstrate the effectiveness and cost of ISHM and vehicle systems management (VSM) technologies through onboard operation for extended periods. We present two approaches to automatic testcase generation for ISHM: 1) a blackbox approach that views the system as a blackbox and uses a grammar-based specification of the system's inputs to automatically generate *all* inputs that satisfy the specifications (up to prespecified limits); these inputs are then used to exercise the system; 2) a whitebox approach that performs analysis and testcase generation directly on a representation of the internal behaviour of the system under test. The enabling technologies for both approaches are model checking and symbolic execution, as implemented in the Ames Java PathFinder (JPF) tool suite. Model checking is an automated technique for software verification. Unlike simulation and testing, which check only some of the system executions and therefore may miss errors, model checking exhaustively explores all possible executions. Symbolic execution evaluates programs with symbolic rather than concrete values and represents variable values as symbolic expressions. We are applying the blackbox approach to generating input scripts for the Spacecraft Command Language (SCL) from Interface and Control Systems. SCL is an embedded interpreter for controlling spacecraft systems, and TacSat3 will be using SCL as the controller for its ISHM systems. We translated the SCL grammar into a program that outputs scripts conforming to the grammar; running JPF on this program generates all legal input scripts up to a prespecified size. Script generation can also be targeted to specific parts of the grammar of interest to the developers. These scripts are then fed to the SCL Executive, and ICS's in-house coverage tools will be run to measure code coverage. Because the scripts exercise all parts of the grammar, we expect them to provide high code coverage. This blackbox approach is suitable for systems for which we do not have access to the source code. We are applying whitebox test generation to the Spacecraft Health INference Engine (SHINE), part of the ISHM system. In TacSat3, SHINE will execute an on-board knowledge base for fault detection and diagnosis. SHINE converts its knowledge base into optimized C code which runs onboard TacSat3, and can also translate its rules into an intermediate representation (Java) suitable for analysis with JPF. JPF will analyze SHINE's Java output using symbolic execution, producing testcases that can provide either complete or directed coverage of the code. Automatically generated test suites can provide full code coverage and be quickly regenerated when code changes. Because our tools analyze executable code, they fully cover the delivered code, not just models of the code. This approach also provides a way to generate tests that exercise specific sections of code under specific preconditions, giving more focused testing of those sections.
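    The blackbox idea of enumerating every input a grammar admits, up to a bound, can be sketched without JPF. Below is a hedged Python enumerator over a toy command grammar; the real SCL grammar and tooling are far richer, and every production here is invented for illustration.

    ```python
    import itertools

    # Toy grammar in the spirit of grammar-based blackbox generation
    GRAMMAR = {
        "cmd":    [["verb", " ", "device"], ["verb", " ", "device", " ", "arg"]],
        "verb":   [["SET"], ["GET"]],
        "device": [["heater"], ["valve"]],
        "arg":    [["ON"], ["OFF"]],
    }

    def expand(symbol, depth=4):
        """Enumerate every string derivable from `symbol`, up to a depth bound."""
        if symbol not in GRAMMAR:          # terminal symbol
            yield symbol
            return
        if depth == 0:
            return
        for production in GRAMMAR[symbol]:
            parts = [list(expand(s, depth - 1)) for s in production]
            for combo in itertools.product(*parts):
                yield "".join(combo)

    for script in expand("cmd"):           # all 12 legal toy scripts
        print(script)
    ```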

  12. A versatile calibration procedure for portable coded aperture gamma cameras and RGB-D sensors

    NASA Astrophysics Data System (ADS)

    Paradiso, V.; Crivellaro, A.; Amgarou, K.; de Lanaute, N. Blanc; Fua, P.; Liénard, E.

    2018-04-01

    The present paper proposes a versatile procedure for the geometrical calibration of coded aperture gamma cameras and RGB-D depth sensors, using only one radioactive point source and a simple experimental set-up. Calibration data is then used for accurately aligning radiation images retrieved by means of the γ-camera with the respective depth images computed with the RGB-D sensor. The system resulting from such a combination is thus able to retrieve, automatically, the distance of radioactive hotspots by means of pixel-wise mapping between gamma and depth images. This procedure is of great interest for a wide range of applications, from precise automatic estimation of the shape and distance of radioactive objects to Augmented Reality systems. Incidentally, the corresponding results validated the choice of a perspective design model for a coded aperture γ-camera.
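    Once the calibration is known, the pixel-wise mapping reduces to standard pinhole-camera geometry. A hedged NumPy sketch follows; the matrix names and the plain pinhole model are assumptions for illustration, not the paper's exact formulation.

    ```python
    import numpy as np

    def depth_to_gamma_pixel(u, v, depth, K_d, K_g, R, t):
        """Map one depth-sensor pixel (u, v) with range `depth` to gamma-camera
        pixel coordinates. K_d, K_g are 3x3 intrinsics; (R, t) transforms
        points from the depth frame to the gamma frame (from calibration)."""
        p = depth * (np.linalg.inv(K_d) @ np.array([u, v, 1.0]))  # backproject
        q = K_g @ (R @ p + t)                                     # reproject
        return q[:2] / q[2]

    K = np.array([[500., 0, 320], [0, 500., 240], [0, 0, 1]])
    # identical cameras, identity extrinsics: the pixel maps to itself
    print(depth_to_gamma_pixel(320, 240, 2.0, K, K, np.eye(3), np.zeros(3)))
    ```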

  13. Combining Recurrence Analysis and Automatic Movement Extraction from Video Recordings to Study Behavioral Coupling in Face-to-Face Parent-Child Interactions.

    PubMed

    López Pérez, David; Leonardi, Giuseppe; Niedźwiecka, Alicja; Radkowska, Alicja; Rączaszek-Leonardi, Joanna; Tomalski, Przemysław

    2017-01-01

    The analysis of parent-child interactions is crucial for the understanding of early human development. Manual coding of interactions is a time-consuming task, which is a limitation in many projects. This becomes especially demanding if a frame-by-frame categorization of movement needs to be achieved. To overcome this, we present a computational approach for studying movement coupling in natural settings, combining a state-of-the-art automatic tracker, Tracking-Learning-Detection (TLD), with nonlinear time-series analysis, Cross-Recurrence Quantification Analysis (CRQA). We investigated the use of TLD to extract and automatically classify movement of each partner from 21 video recordings of interactions in which 5.5-month-old infants and mothers engaged in free play in laboratory settings. As a proof of concept, we focused on face-to-face episodes where the mother animated an object in front of the infant, in order to measure the coordination between the infants' head movement and the mothers' hand movement. We also tested the feasibility of using such movement data to study behavioral coupling between partners with CRQA. We demonstrate that movement can be extracted automatically from standard-definition video recordings and used in subsequent CRQA to quantify the coupling between the movement of the parent and the infant. Finally, we assess the quality of this coupling using an extension of CRQA called anisotropic CRQA and show asymmetric dynamics between the movement of the parent and the infant. When combined, these methods allow automatic coding and classification of behaviors, making the analysis of movement more efficient than manual coding.
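    The core CRQA quantity can be sketched in a few lines of NumPy. This toy computes the simplest recurrence-rate measure on unembedded 1-D series, with synthetic "head" and "hand" signals standing in for the tracked movements; the lack of embedding and the threshold choice are simplifying assumptions.

    ```python
    import numpy as np

    def cross_recurrence_rate(x, y, eps):
        """Fraction of (i, j) pairs where the two series fall within eps of
        each other: the basic %REC measure of cross-recurrence analysis."""
        R = np.abs(x[:, None] - y[None, :]) < eps
        return R.mean()

    rng = np.random.default_rng(0)
    head = np.cumsum(rng.normal(size=200))           # stand-in for head movement
    hand = np.roll(head, 5) + rng.normal(size=200)   # lagged noisy copy: coupled
    print(cross_recurrence_rate(head, hand, eps=1.0))
    ```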

  15. Exploring the read-write genome: mobile DNA and mammalian adaptation.

    PubMed

    Shapiro, James A

    2017-02-01

    The read-write genome idea predicts that mobile DNA elements will act in evolution to generate adaptive changes in organismal DNA. This prediction was examined in the context of mammalian adaptations involving regulatory non-coding RNAs, viviparous reproduction, early embryonic and stem cell development, the nervous system, and innate immunity. The evidence shows that mobile elements have played specific and sometimes major roles in mammalian adaptive evolution by generating regulatory sites in the DNA and providing interaction motifs in non-coding RNA. Endogenous retroviruses and retrotransposons have been the predominant mobile elements in mammalian adaptive evolution, with the notable exception of bats, where DNA transposons are the major agents of RW genome inscriptions. A few examples of independent but convergent exaptation of mobile DNA elements for similar regulatory rewiring functions are noted.

  16. The roles of engineering notebooks in shaping elementary engineering student discourse and practice

    NASA Astrophysics Data System (ADS)

    Hertel, Jonathan D.; Cunningham, Christine M.; Kelly, Gregory J.

    2017-06-01

    Engineering design challenges offer important opportunities for students to learn science and engineering knowledge and practices. This study examines how students' engineering notebooks across four units of the curriculum Engineering is Elementary (EiE) support student work during design challenges. Through educational ethnography and discourse analysis, transcripts of student talk and action were created and coded around the uses of notebooks in the accomplishment of engineering tasks. Our coding process identified two broad categories of roles of the notebooks: they scaffold student activity and support epistemic practices of engineering. The study showed the importance of prompts to engage students in effective uses of writing, the roles the notebook assumes in the students' small groups, and the ways design challenges motivate children to write and communicate.

  17. Understanding and Writing G & M Code for CNC Machines

    ERIC Educational Resources Information Center

    Loveland, Thomas

    2012-01-01

    In modern CAD and CAM manufacturing companies, engineers design parts for machines and consumable goods. Many of these parts are cut on CNC machines. Whether using a CNC lathe, milling machine, or router, the ideas and designs of engineers must be translated into a machine-readable form called G & M Code that can be used to cut parts to precise…
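    To make the machine-readable form concrete, here is a hedged sketch of the kind of G & M code a CAM step might emit for a simple square contour, generated from Python. The command meanings are standard (G0 rapid move, G1 linear feed, M3/M5 spindle on/off, M30 end), but real toolpaths must also handle tool offsets, cutting depths, and the machine's particular dialect.

    ```python
    def square_path(side_mm, feed_mm_min):
        """Emit minimal G & M code cutting a square; a didactic sketch only."""
        lines = ["G21 ; millimetre units",
                 "G90 ; absolute coordinates",
                 "M3 ; spindle on",
                 "G0 X0 Y0 ; rapid move to origin"]
        for x, y in [(side_mm, 0), (side_mm, side_mm), (0, side_mm), (0, 0)]:
            lines.append(f"G1 X{x} Y{y} F{feed_mm_min} ; linear cut at feed rate")
        lines += ["M5 ; spindle off", "M30 ; end of program"]
        return "\n".join(lines)

    print(square_path(40, 300))
    ```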

  18. "Who Soy Yo?": The Creative Use of "Spanglish" to Express a Hybrid Identity in Chicana/o Heritage Language Learners of Spanish

    ERIC Educational Resources Information Center

    Sanchez-Munoz, Ana

    2013-01-01

    This study explores various linguistic strategies that characterize what is commonly referred to as "Spanglish"; namely, code-switching, code-mixing, borrowings and other language contact phenomena commonly employed by Chicana/o bilinguals. The analysis of linguistic features is based on creative pieces of writing produced by Chicana/o…

  19. Patterns of Revision in Online Writing: A Study of Wikipedia's Featured Articles

    ERIC Educational Resources Information Center

    Jones, John

    2008-01-01

    This study examines the revision histories of 10 Wikipedia articles nominated for the site's Featured Article Class (FAC), its highest quality rating, 5 of which achieved FAC and 5 of which did not. The revisions to each article were coded, and the coding results were combined with a descriptive analysis of two representative articles in order to…

  20. GOAL - A test engineer oriented language. [Ground Operations Aerospace Language for coding automatic test

    NASA Technical Reports Server (NTRS)

    Mitchell, T. R.

    1974-01-01

    The development of a test engineer oriented language has been under way at the Kennedy Space Center for several years. The result of this effort is the Ground Operations Aerospace Language, GOAL, a self-documenting, high-order language suitable for coding automatic test, checkout and launch procedures. GOAL is a highly readable, writable, retainable language that is easily learned by nonprogramming oriented engineers. It is sufficiently powerful for use at all levels of Space Shuttle ground processing, from line replaceable unit checkout to integrated launch day operations. This paper will relate the language development, and describe GOAL and its applications.

  1. Management of natural resources through automatic cartographic inventory

    NASA Technical Reports Server (NTRS)

    Rey, P.; Gourinard, Y.; Cambou, F. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results of the ARNICA program from August 1972 to January 1973: (1) establishment of image-to-object correspondence codes for all types of soil use and forestry in northern Spain; (2) establishment of a transfer procedure between qualitative (remote identification and remote interpretation) and quantitative (numerization, storage, automatic statistical cartography) use of images; (3) organization of microdensitometric data processing and automatic cartography software; and (4) development of a system for measuring reflectance simultaneously with imagery.

  2. Small passenger car transmission test: Mercury Lynx ATX transmission

    NASA Technical Reports Server (NTRS)

    Bujold, M. P.

    1981-01-01

    The testing of a Mercury Lynx automatic transmission is reported. The transmission was tested in accordance with a passenger car automatic transmission test code (SAE J651b), which required drive performance, coast performance, and no-load test conditions. Under these conditions, the transmission attained maximum efficiencies in the mid-ninety percent range for both the drive and coast performance tests. The torque, speed, and efficiency curves are presented, providing the complete performance characteristics of the Mercury Lynx automatic transmission.

  3. How Can Writing Tasks Be Characterized in a Way Serving Pedagogical Goals and Automatic Analysis Needs?

    ERIC Educational Resources Information Center

    Quixal, Martí; Meurers, Detmar

    2016-01-01

    The paper tackles a central question in the field of Intelligent Computer-Assisted Language Learning (ICALL): How can language learning tasks be conceptualized and made explicit in a way that supports the pedagogical goals of current Foreign Language Teaching and Learning and at the same time provides an explicit characterization of the Natural…

  4. Tales of the Expected: The Influence of Students' Expectations on Question Validity and Implications for Writing Exam Questions

    ERIC Educational Resources Information Center

    Crisp, Victoria; Sweiry, Ezekiel; Ahmed, Ayesha; Pollitt, Alastair

    2008-01-01

    Background: Through classroom preparation and exposure to past papers, textbooks and practice tests students develop expectations about examinations: what will be asked, how it will be asked and how they will be judged. Expectations are also involved in the automatic process of understanding questions. Where a question and a student's expectations…

  5. Automated Assessment of Non-Native Learner Essays: Investigating the Role of Linguistic Features

    ERIC Educational Resources Information Center

    Vajjala, Sowmya

    2018-01-01

    Automatic essay scoring (AES) refers to the process of scoring free text responses to given prompts, considering human grader scores as the gold standard. Writing such essays is an essential component of many language and aptitude exams. Hence, AES became an active and established area of research, and there are many proprietary systems used in…

  6. Recurrent Word Combinations in Academic Writing by Native and Non-Native Speakers of English: A Lexical Bundles Approach

    ERIC Educational Resources Information Center

    Adel, Annelie; Erman, Britt

    2012-01-01

    In order for discourse to be considered idiomatic, it needs to exhibit features like fluency and pragmatically appropriate language use. Advances in corpus linguistics make it possible to examine idiomaticity from the perspective of recurrent word combinations. One approach to capture such word combinations is by the automatic retrieval of lexical…

  7. The Use of a Code-generating System for the Derivation of the Equations for Wind Turbine Dynamics

    NASA Astrophysics Data System (ADS)

    Ganander, Hans

    2003-10-01

    For many reasons the size of wind turbines on the rapidly growing wind energy market is increasing. Relations between the aeroelastic properties of these new large turbines change. Modifications of turbine designs and control concepts are also influenced by growing size. All these trends require the development of computer codes for design and certification. Moreover, there is a strong desire for design optimization procedures, which require fast codes. General codes, e.g. finite element codes, normally allow such modifications and improvements of existing wind turbine models relatively easily. However, the calculation times of such codes are unfavourably long, certainly for optimization use. The use of an automatic code generating system is an alternative that addresses both key issues: the code and the design optimization. This technique can be used for rapid generation of codes for particular wind turbine simulation models. These ideas have been followed in the development of new versions of the wind turbine simulation code VIDYN. The equations of the simulation model were derived according to the Lagrange equation using Mathematica®, which was directed to output the results in Fortran code format. In this way the simulation code is automatically adapted to an actual turbine model, in terms of subroutines containing the equations of motion, definitions of parameters and degrees of freedom. Since the start in 1997, these methods, constituting a systematic way of working, have been used to develop specific, efficient calculation codes. The experience with this technique has been very encouraging, inspiring the continued development of new versions of the simulation code as the need has arisen, and the interest in design optimization is growing.
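    The derive-then-generate step can be imitated with SymPy in place of Mathematica. The hedged single-degree-of-freedom sketch below forms Lagrange's equation for a point-mass pendulum and emits Fortran for the resulting acceleration; VIDYN's actual models have many coupled degrees of freedom, so this is illustrative only.

    ```python
    import sympy as sp

    t, m, l, g = sp.symbols('t m l g', positive=True)
    theta = sp.Function('theta')(t)

    # Lagrangian (kinetic minus potential energy) of a pendulum
    L = m * l**2 * theta.diff(t)**2 / 2 + m * g * l * sp.cos(theta)

    # Lagrange's equation: d/dt(dL/dq') - dL/dq = 0, solved for the acceleration
    eom = sp.diff(L.diff(theta.diff(t)), t) - L.diff(theta)
    accel = sp.solve(sp.Eq(eom, 0), theta.diff(t, 2))[0]   # -> -g*sin(theta)/l

    # Emit Fortran for the equation of motion, as VIDYN's workflow does
    print(sp.fcode(accel.subs(theta, sp.Symbol('th')), assign_to='accel'))
    ```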

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haghighat, A.; Sjoden, G.E.; Wagner, J.C.

    In the past 10 yr, the Penn State Transport Theory Group (PSTTG) has concentrated its efforts on developing accurate and efficient particle transport codes to address increasing needs for efficient and accurate simulation of nuclear systems. The PSTTG's efforts have primarily focused on shielding applications that are generally treated using multigroup, multidimensional, discrete ordinates (S_n) deterministic and/or statistical Monte Carlo methods. The difficulty with the existing public codes is that they require significant (impractical) computation time for simulation of complex three-dimensional (3-D) problems. For the S_n codes, the large memory requirements are handled through the use of scratch files (i.e., read-from and write-to-disk) that significantly increase the necessary execution time. Further, the lack of flexible features and/or utilities for preparing input and processing output makes these codes difficult to use. The Monte Carlo method becomes impractical because variance reduction (VR) methods have to be used, and normally determination of the necessary parameters for the VR methods is very difficult and time consuming for a complex 3-D problem. For the deterministic method, the authors have developed the 3-D parallel PENTRAN (Parallel Environment Neutral-particle TRANsport) code system that, in addition to a parallel 3-D S_n solver, includes pre- and postprocessing utilities. PENTRAN provides for full phase-space decomposition, memory partitioning, and parallel input/output to provide the capability of solving large problems in a relatively short time. Besides having a modular parallel structure, PENTRAN has several unique new formulations and features that are necessary for achieving high parallel performance. For the Monte Carlo method, the major difficulty currently facing most users is the selection of an effective VR method and its associated parameters. For complex problems, generally, this process is very time consuming and may be complicated due to the possibility of biasing the results. In an attempt to eliminate this problem, the authors have developed the A3MCNP (automated adjoint accelerated MCNP) code that automatically prepares parameters for source and transport biasing within a weight-window VR approach based on the S_n adjoint function. A3MCNP prepares the necessary input files for performing multigroup, 3-D adjoint S_n calculations using TORT.

  9. Priority coding for control room alarms

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1994-01-01

    Indicating the priority of a spatially fixed, activated alarm tile on an alarm tile array by a shape coding at the tile, and preferably using the same shape coding wherever the same alarm condition is indicated elsewhere in the control room. The status of an alarm tile can change automatically or by operator acknowledgement, but tones and/or flashing cues continue to provide status information to the operator.

  10. Motor automaticity in Parkinson’s disease

    PubMed Central

    Wu, Tao; Hallett, Mark; Chan, Piu

    2017-01-01

    Bradykinesia is the most important feature contributing to motor difficulties in Parkinson’s disease (PD). However, the pathophysiology underlying bradykinesia is not fully understood. One important aspect is that PD patients have difficulty in performing learned motor skills automatically, but this problem has been generally overlooked. Here we review motor automaticity associated motor deficits in PD, such as reduced arm swing, decreased stride length, freezing of gait, micrographia and reduced facial expression. Recent neuroimaging studies have revealed some neural mechanisms underlying impaired motor automaticity in PD, including less efficient neural coding of movement, failure to shift automated motor skills to the sensorimotor striatum, instability of the automatic mode within the striatum, and use of attentional control and/or compensatory efforts to execute movements usually performed automatically in healthy people. PD patients lose previously acquired automatic skills due to their impaired sensorimotor striatum, and have difficulty in acquiring new automatic skills or restoring lost motor skills. More investigations on the pathophysiology of motor automaticity, the effect of L-dopa or surgical treatments on automaticity, and the potential role of using measures of automaticity in early diagnosis of PD would be valuable. PMID:26102020

  11. 48 CFR 25.401 - Exceptions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Disabled; and (5) Other acquisitions not using full and open competition, if authorized by Subpart 6.2 or 6... table: The service (Federal Service Codes from the Federal Procurement Data System Product/Service Code... military services overseas. X X X X (2) (i) Automatic data processing (ADP) telecommunications and...

  12. The SENSEI Generic In Situ Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ayachit, Utkarsh; Whitlock, Brad; Wolf, Matthew

    The SENSEI generic in situ interface is an API that promotes code portability and reusability. From the simulation view, a developer can instrument their code with the SENSEI API and then make use of any number of in situ infrastructures. From the method view, a developer can write an in situ method using the SENSEI API and then expect it to run in any number of in situ infrastructures, or be invoked directly from a simulation code, with little or no modification. This paper presents the design principles underlying the SENSEI generic interface, along with some simplified coding examples.
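    The decoupling such a bridge provides can be sketched with two abstract adaptors. The Python below illustrates the general pattern only; the names are invented and this is not the actual SENSEI C++ API.

    ```python
    from abc import ABC, abstractmethod

    class DataAdaptor(ABC):
        """Simulation-side view of the data (illustrative names)."""
        @abstractmethod
        def get_array(self, name): ...

    class AnalysisAdaptor(ABC):
        """Method-side hook invoked at each simulation step."""
        @abstractmethod
        def execute(self, data: DataAdaptor): ...

    class MeanAnalysis(AnalysisAdaptor):
        def execute(self, data):
            vals = data.get_array("pressure")
            print("mean pressure:", sum(vals) / len(vals))

    class ToySim(DataAdaptor):
        def get_array(self, name):
            return [1.0, 2.0, 3.0]

    # Any simulation exposing DataAdaptor can drive any AnalysisAdaptor,
    # with neither side depending on the other's internals.
    MeanAnalysis().execute(ToySim())
    ```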

  13. How does the interaction between spelling and motor processes build up during writing acquisition?

    PubMed

    Kandel, Sonia; Perret, Cyril

    2015-03-01

    How do we recall a word's spelling? How do we produce the movements to form the letters of a word? Writing involves several processing levels. Surprisingly, researchers have focused either on spelling or on motor production. However, these processes interact and cannot be studied separately. Spelling processes cascade into movement production. For example, in French, producing the letters PAR in the orthographically irregular word PARFUM (perfume) delays motor production with respect to the same letters in the regular word PARDON (pardon). Orthographic regularity refers to the possibility of spelling a word correctly by applying the most frequent sound-letter conversion rules. The present study examined how the interaction between spelling and motor processing builds up during writing acquisition. French 8- to 10-year-old children participated in the experiment; this is the age at which handwriting skills start to become automatic. The children wrote regular and irregular words that could be frequent or infrequent. They wrote on a digitizer so we could collect data on latency, movement duration and fluency. The results revealed that the interaction between spelling and motor processing was present already at age 8 and became more adult-like at ages 9 and 10. Before starting to write, processing irregular words took longer than processing regular words. This processing load spread into movement production: it increased writing duration and rendered the movements more dysfluent. Word frequency affected latencies and cascaded into production; it modulated writing duration but not movement fluency. Writing infrequent words took longer than writing frequent words. The data suggest that orthographic regularity has a stronger impact on writing than word frequency; the two do not cascade to the same extent.

  14. A feeling of flow: exploring junior scientists' experiences with dictation of scientific articles.

    PubMed

    Spanager, Lene; Danielsen, Anne Kjaergaard; Pommergaard, Hans-Christian; Burcharth, Jakob; Rosenberg, Jacob

    2013-08-10

    Science involves publishing results, but many scientists do not master this. We introduced dictation as a method of producing a manuscript draft, together with participation in writing teams and attendance at a writing retreat, to junior scientists in our department. This study aimed to explore the scientists' experiences with this process. Four focus group interviews were conducted, comprising all participating scientists (n = 14). Each interview was transcribed verbatim and coded independently by two interviewers. The coding structure was discussed until consensus, and from this the emergent themes were identified. Participants were 7 PhD students, 5 scholarship students and 2 clinical research nurses. Three main themes were identified: 'Preparing and then letting go' indicated that dictating worked best when properly prepared. 'The big dictation machine' described the benefits of writing teams, in which junior scientists got feedback on both the content and structure of their papers. 'Barriers to and drivers for participation' described flow-like states that participants experienced during dictation. Motivation and a high level of preparation were pivotal to being able to dictate a full article in one day. The descriptions of flow-like states seem analogous to the theoretical model of flow, which is interesting, as flow is usually deemed a state reserved for skilled experts. Our findings suggest that other academic groups might benefit from using the concept, including dictation of manuscripts, to encourage participants' confidence in their writing skills.

  15. Tools for Rapid Understanding of Malware Code

    DTIC Science & Technology

    2015-05-07

    cloaking techniques. We used three malware detectors, covering a wide spectrum of detection technologies, for our experiments: VirusTotal, an online...

  16. Certifying Auto-Generated Flight Code

    NASA Technical Reports Server (NTRS)

    Denney, Ewen

    2008-01-01

    Model-based design and automated code generation are being used increasingly at NASA. Many NASA projects now use MathWorks Simulink and Real-Time Workshop for at least some of their modeling and code development. However, there are substantial obstacles to more widespread adoption of code generators in safety-critical domains. Since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently the generated code still needs to be fully tested and certified. Moreover, the regeneration of code can require complete recertification, which offsets many of the advantages of using a generator. Indeed, manual review of autocode can be more challenging than for hand-written code. Since the direct V&V of code generators is too laborious and complicated due to their complex (and often proprietary) nature, we have developed a generator plug-in to support the certification of the auto-generated code. Specifically, the AutoCert tool supports certification by formally verifying that the generated code is free of different safety violations, by constructing an independently verifiable certificate, and by explaining its analysis in a textual form suitable for code reviews. The generated documentation also contains substantial tracing information, allowing users to trace between model, code, documentation, and V&V artifacts. This enables missions to obtain assurance about the safety and reliability of the code without excessive manual V&V effort and, as a consequence, eases the acceptance of code generators in safety-critical contexts. The generation of explicit certificates and textual reports is particularly well-suited to supporting independent V&V. The primary contribution of this approach is the combination of human-friendly documentation with formal analysis. The key technical idea is to exploit the idiomatic nature of auto-generated code in order to automatically infer logical annotations. The annotation inference algorithm itself is generic, and parametrized with respect to a library of coding patterns that depend on the safety policies and the code generator. The patterns characterize the notions of definitions and uses that are specific to the given safety property. For example, for initialization safety, definitions correspond to variable initializations while uses are statements which read a variable, whereas for array bounds safety, definitions are the array declarations, while uses are statements which access an array variable. The inferred annotations are thus highly dependent on the actual program and the properties being proven. The annotations, themselves, need not be trusted, but are crucial to obtain the automatic formal verification of the safety properties without requiring access to the internals of the code generator. The approach has been applied to both in-house and commercial code generators, but is independent of the particular generator used. It is currently being adapted to flight code generated using MathWorks Real-Time Workshop, an automatic code generator that translates from Simulink/Stateflow models into embedded C code.
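    The policy-specific notion of definitions and uses can be pictured with a toy initialization-safety scan over straight-line assignments. This hedged Python sketch merely flags violations; AutoCert, by contrast, infers logical annotations and formally proves their absence, which this sketch does not attempt.

    ```python
    import re

    def check_init_before_use(lines):
        """Toy 'initialization safety' check: each assignment defines its
        target; any variable read before being defined is reported."""
        defined = set()
        for n, line in enumerate(lines, 1):
            target, expr = line.split("=")
            for var in re.findall(r"[a-z]\w*", expr):   # uses on the right side
                if var not in defined:
                    print(f"line {n}: use of {var!r} before initialization")
            defined.add(target.strip())                 # definition on the left

    check_init_before_use([
        "x = 1",
        "y = x + 1",
        "z = y + w",   # 'w' was never defined: flagged
    ])
    ```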

  17. Semantic Modelling of Digital Forensic Evidence

    NASA Astrophysics Data System (ADS)

    Kahvedžić, Damir; Kechadi, Tahar

    The reporting of digital investigation results is traditionally carried out in prose, and in a large investigation may require successive communication of findings between different parties. Popular forensic suites aid in the reporting process by storing provenance and positional data but do not automatically encode why the evidence is considered important. In this paper we introduce an evidence management methodology to encode the semantic information of evidence. A structured vocabulary of terms, an ontology, is used to model the results in a logical and predefined manner. The descriptions are application independent and automatically organised. The encoded descriptions aim to help the investigation in the tasks of report writing and evidence communication, and can be used in addition to existing evidence management techniques.
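    Encoding why an item matters as ontology triples might look like the rdflib sketch below; the vocabulary and property names are hypothetical stand-ins, not the paper's actual ontology.

    ```python
    from rdflib import Graph, Literal, Namespace, RDF

    EV = Namespace("http://example.org/evidence#")   # hypothetical vocabulary
    g = Graph()

    item = EV["item42"]
    g.add((item, RDF.type, EV.BrowserHistoryEntry))
    g.add((item, EV.foundIn, Literal("/home/suspect/.mozilla/places.sqlite")))
    g.add((item, EV.relevantBecause, Literal("visited the phishing domain")))

    # Serialized descriptions are application independent and can be
    # merged, queried, and reused across reports.
    print(g.serialize(format="turtle"))
    ```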

  18. Evaluation of a computer-based prompting intervention to improve essay writing in undergraduates with cognitive impairment after acquired brain injury.

    PubMed

    Ledbetter, Alexander K; Sohlberg, McKay Moore; Fickas, Stephen F; Horney, Mark A; McIntosh, Kent

    2017-11-06

    This study evaluated a computer-based prompting intervention for improving expository essay writing after acquired brain injury (ABI). Four undergraduate participants aged 18-21 with mild-moderate ABI and impaired fluid cognition at least 6 months post-injury reported difficulty with the writing process after injury. The study employed a non-concurrent multiple probe across participants, in a single-case design. Outcome measures included essay quality scores and number of revisions to writing counted then coded by type using a revision taxonomy. An inter-scorer agreement procedure was completed for quality scores for 50% of essays, with data indicating that agreement exceeded a goal of 85%. Visual analysis of results showed increased essay quality for all participants in intervention phase compared with baseline, maintained 1 week after. Statistical analyses showed statistically significant results for two of the four participants. The authors discuss external cuing for self-monitoring and tapping of existing writing knowledge as possible explanations for improvement. The study provides preliminary evidence that computer-based prompting has potential to improve writing quality for undergraduates with ABI.

  19. Automatic classification of blank substrate defects

    NASA Astrophysics Data System (ADS)

    Boettiger, Tom; Buck, Peter; Paninjath, Sankaranarayanan; Pereira, Mark; Ronald, Rob; Rost, Dan; Samir, Bhamidipati

    2014-10-01

    Mask preparation stages are crucial in mask manufacturing, since the mask later acts as a template for a considerable number of dies on the wafer. Defects on the initial blank substrate, and on subsequent cleaned and coated substrates, can have a profound impact on the usability of the finished mask. This emphasizes the need for early and accurate identification of blank substrate defects and the risk they pose to the patterned reticle. While Automatic Defect Classification (ADC) is a well-developed technology for inspection and analysis of defects on patterned wafers and masks in the semiconductor industry, ADC for mask blanks is still in the early stages of adoption and development. Calibre ADC is a powerful analysis tool for fast, accurate, consistent and automatic classification of defects on mask blanks. Accurate, automated classification of mask blanks leads to better usability of blanks by enabling defect avoidance technologies during mask writing. Detailed information on blank defects can help to select appropriate job-decks to be written on the mask by defect avoidance tools [1][4][5]. Smart algorithms separate critical defects from the potentially large number of non-critical or false defects detected at various stages during mask blank preparation. Mechanisms used by Calibre ADC to identify and characterize defects include defect location and size, signal polarity (dark, bright) in both transmitted and reflected review images, and distinguishing defect signals from background noise in defect images. The Calibre ADC engine then uses a decision tree to translate this information into a defect classification code. Using this automated process improves classification accuracy, repeatability and speed, while avoiding the subjectivity of human judgment inherent in the alternative of manual defect classification by trained personnel [2]. This paper focuses on the results from the evaluation of the Automatic Defect Classification (ADC) product at MP Mask Technology Center (MPMask). The Calibre ADC tool was qualified on production mask blanks against manual classification. The classification accuracy of ADC is greater than 95% for critical defects, with an overall accuracy of 90%. Sensitivity to weak defect signals and locating the defect in the images are challenges we are resolving. The performance of the tool has been demonstrated on multiple mask types and is ready for deployment in the full-volume mask manufacturing production flow. Implementation of Calibre ADC is estimated to reduce the misclassification of critical defects by 60-80%.
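    The decision-tree step can be pictured with a toy classifier over the named features (size, transmitted/reflected polarity, location). Every threshold and class label below is invented for illustration and is not Calibre ADC's actual rule set.

    ```python
    def classify_defect(size_um, polarity_t, polarity_r, on_edge):
        """Toy decision tree mapping blank-defect features to a class code;
        all thresholds and classes are hypothetical."""
        if size_um < 0.1:
            return "non-critical: below print threshold"
        if polarity_t == "dark" and polarity_r == "bright":
            return "critical: particle"
        if polarity_t == "bright":
            return "critical: pinhole in coating"
        if on_edge:
            return "non-critical: edge-bead artifact"
        return "review: ambiguous signal"

    print(classify_defect(0.35, "dark", "bright", False))
    ```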

  20. Using the GeoFEST Faulted Region Simulation System

    NASA Technical Reports Server (NTRS)

    Parker, Jay W.; Lyzenga, Gregory A.; Donnellan, Andrea; Judd, Michele A.; Norton, Charles D.; Baker, Teresa; Tisdale, Edwin R.; Li, Peggy

    2004-01-01

    GeoFEST (the Geophysical Finite Element Simulation Tool) simulates stress evolution, fault slip and plastic/elastic processes in realistic materials, and so is suitable for earthquake cycle studies in regions such as Southern California. Many new capabilities and means of access for GeoFEST are now supported. New abilities include MPI-based cluster parallel computing using automatic PYRAMID/Parmetis-based mesh partitioning, automatic mesh generation for layered media with rectangular faults, and results visualization that is integrated with remote sensing data. The parallel GeoFEST application has been successfully run on over a half-dozen computers, including Intel Xeon clusters, Itanium II and Altix machines, and the Apple G5 cluster. It is not separately optimized for different machines, but relies on good domain partitioning for load-balance and low communication, and careful writing of the parallel diagonally preconditioned conjugate gradient solver to keep communication overhead low. Demonstrated thousand-step solutions for over a million finite elements on 64 processors require under three hours, and scaling tests show high efficiency when using more than (order of) 4000 elements per processor. The source code and documentation for GeoFEST is available at no cost from Open Channel Foundation. In addition GeoFEST may be used through a browser-based portal environment available to approved users. That environment includes semi-automated geometry creation and mesh generation tools, GeoFEST, and RIVA-based visualization tools that include the ability to generate a flyover animation showing deformations and topography. Work is in progress to support simulation of a region with several faults using 16 million elements, using a strain energy metric to adapt the mesh to faithfully represent the solution in a region of widely varying strain.
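    The heart of the solver is a diagonally (Jacobi) preconditioned conjugate gradient iteration. A serial NumPy sketch of that algorithm follows; GeoFEST's version is parallel and operates on assembled finite-element matrices, so this is a simplified stand-in.

    ```python
    import numpy as np

    def pcg(A, b, tol=1e-8, max_iter=500):
        """Jacobi-preconditioned conjugate gradient for symmetric positive
        definite A; the preconditioner is just the inverse of diag(A)."""
        Minv = 1.0 / np.diag(A)
        x = np.zeros_like(b)
        r = b - A @ x                  # initial residual
        z = Minv * r                   # preconditioned residual
        p = z.copy()
        for _ in range(max_iter):
            Ap = A @ p
            alpha = (r @ z) / (p @ Ap)
            x += alpha * p
            r_new = r - alpha * Ap
            if np.linalg.norm(r_new) < tol:
                break
            z_new = Minv * r_new
            beta = (r_new @ z_new) / (r @ z)
            p = z_new + beta * p
            r, z = r_new, z_new
        return x

    A = np.array([[4., 1.], [1., 3.]])
    print(pcg(A, np.array([1., 2.])))   # -> approx [0.0909, 0.6364]
    ```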

  1. Binary translation using peephole translation rules

    DOEpatents

    Bansal, Sorav; Aiken, Alex

    2010-05-04

    An efficient binary translator uses peephole translation rules to directly translate executable code from one instruction set to another. In a preferred embodiment, the translation rules are generated using superoptimization techniques that enable the translator to automatically learn translation rules for translating code from the source to target instruction set architecture.
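    A peephole translator can be sketched as longest-match rewriting over short instruction windows. The rules below are invented x86-flavoured stand-ins written by hand, whereas the patented approach learns such rules automatically via superoptimization.

    ```python
    # Map short source-ISA instruction sequences directly to target sequences.
    RULES = {
        ("push eax", "pop ebx"): ["mov ebx, eax"],
        ("mov eax, 0",):         ["xor eax, eax"],
    }

    def translate(code):
        out, i = [], 0
        while i < len(code):
            for length in (2, 1):                    # try longest window first
                window = tuple(code[i:i + length])
                if window in RULES:
                    out += RULES[window]
                    i += length
                    break
            else:
                out.append(code[i])                  # no rule matched: copy through
                i += 1
        return out

    print(translate(["mov eax, 0", "push eax", "pop ebx"]))
    # -> ['xor eax, eax', 'mov ebx, eax']
    ```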

  2. 75 FR 80677 - The Low-Income Definition

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-23

    ... original regulatory text so it is consistent with the geo-coding software the agency uses to make the low... Union Act (Act) authorizes the NCUA Board (Board) to define ``low-income members'' so that credit unions... process of implementing geo- coding software to make the calculation automatically for credit unions...

  3. Automated apparatus and method of generating native code for a stitching machine

    NASA Technical Reports Server (NTRS)

    Miller, Jeffrey L. (Inventor)

    2000-01-01

    A computer system automatically generates CNC code for a stitching machine. The computer determines the locations of a present stitching point and a next stitching point. If a constraint is not found between the present stitching point and the next stitching point, the computer generates code for making a stitch at the next stitching point. If a constraint is found, the computer generates code for changing a condition (e.g., direction) of the stitching machine's stitching head.
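    The claimed control flow is simple enough to sketch directly. In this hedged Python version the command strings are invented placeholders, not the stitching machine's actual CNC codes.

    ```python
    def generate_stitch_code(points, constraints):
        """Walk consecutive stitching points; if a constraint lies between the
        present and next point, emit a head-condition change first, then the
        stitch command (command names are placeholders)."""
        code = []
        for present, nxt in zip(points, points[1:]):
            if (present, nxt) in constraints:
                code.append(f"TURN HEAD TOWARD {nxt}")
            code.append(f"STITCH AT {nxt}")
        return code

    pts = [(0, 0), (0, 10), (10, 10)]
    print("\n".join(generate_stitch_code(pts, {((0, 10), (10, 10))})))
    ```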

  4. Degrees of systematic thoroughness: A text analysis of student technical science writing

    NASA Astrophysics Data System (ADS)

    Esch, Catherine Julia

    This dissertation investigates student technical science writing and use of evidence. Student writers attended a writing-intensive undergraduate university oceanography course where they were required to write a technical paper drawing from an instructor-designed software program, Our Dynamic Planet. This software includes multiple interactive geological data sets relevant to plate tectonics. Through qualitative text analysis of students' science writing, two research questions frame the study: How are the papers textually structured? Are there distinctions between high- and low-rated papers? General and specific text characteristics within three critical sections of the technical paper are identified and analyzed (Observations, Interpretations, Conclusions). Specific text characteristics consist of typical types of figures displayed in the papers and typical statements within each paper section. Data gathering consisted of collecting 15 student papers, which constitute the population of study. An analytical method was designed to manage and analyze the text characteristics. It has three stages: identifying coding categories, re-formulating the categories, and configuring categories. Three important elements emerged that identified notable distinctions in paper quality: data display and use, narration of complex geological feature relationships, and overall organization of text structure. An inter-rater coding concordance check was conducted and showed high concordance ratios for the coding of each section: Observations = 0.95; Interpretations = 0.93; and Conclusions = 0.87. These categories collectively reveal a larger pattern of general differences across the paper quality levels (high, medium, low). This variation in the quality of papers demonstrates degrees of systematic thoroughness, defined as how systematically each student engages in the tasks of the assignment, and how thoroughly and consistently the student follows through on that systematic commitment. Characterizations of each paper level indicate areas that can be explored to develop an explicit instructional pedagogy to support a greater number of low- and medium-level students. Implications suggest most students require greater explicit instruction: having students pay more attention to detail and demonstrate greater follow-through in order to produce a solid scientific argument in their technical papers.

  5. Writing content predicts benefit from written expressive disclosure: Evidence for repeated exposure and self-affirmation.

    PubMed

    Niles, Andrea N; Byrne Haltom, Kate E; Lieberman, Matthew D; Hur, Christopher; Stanton, Annette L

    2016-01-01

    Expressive disclosure regarding a stressful event improves psychological and physical health, yet predictors of these effects are not well established. The current study assessed exposure, narrative structure, affect word use, self-affirmation and discovery of meaning as predictors of anxiety, depressive and physical symptoms following expressive writing. Participants (N = 50) wrote on four occasions about a stressful event and completed self-report measures before writing and three months later. Essays were coded for stressor exposure (level of detail and whether participants remained on topic), narrative structure, self-affirmation and discovery of meaning. Linguistic Inquiry and Word Count software was used to quantify positive and negative affect word use. Controlling for baseline anxiety, more self-affirmation and detail about the event predicted lower anxiety symptoms, and more negative affect words (very high use) and more discovery of meaning predicted higher anxiety symptoms three months after writing. Findings highlight the importance of self-affirmation and exposure as predictors of benefit from expressive writing.

  6. Standardized mappings--a framework to combine different semantic mappers into a standardized web-API.

    PubMed

    Neuhaus, Philipp; Doods, Justin; Dugas, Martin

    2015-01-01

    Automatic coding of medical terms is an important but highly complicated and laborious task. To compare and evaluate different strategies, a framework with a standardized web interface was created. Two UMLS mapping strategies are compared to demonstrate the interface. The framework is a Java Spring application running on a Tomcat application server. It accepts different parameters and returns results in JSON format. To demonstrate the framework, a list of medical data items was mapped by two different methods: similarity search in a large table of terminology codes versus search in a manually curated repository. These mappings were reviewed by a specialist. The evaluation shows that the framework is flexible (due to standardized interfaces like HTTP and JSON), performant and reliable. The accuracy of automatically assigned codes is limited (up to 40%). Combining different semantic mappers into a standardized web API is feasible. This framework can be easily enhanced due to its modular design.
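    A client for such a standardized mapping web API might look like the sketch below. The endpoint URL and parameter names are hypothetical stand-ins, not the framework's documented interface; only the HTTP-plus-JSON shape matches the description above.

    ```python
    import requests

    def map_term(term, strategy):
        """Query a hypothetical standardized mapping endpoint; `strategy`
        might select, e.g., 'similarity' vs. 'curated' (assumed names)."""
        resp = requests.get(
            "https://terminology.example.org/map",    # hypothetical endpoint
            params={"q": term, "strategy": strategy},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()                            # results arrive as JSON

    print(map_term("myocardial infarction", "similarity"))
    ```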

  7. Model-Driven Engineering: Automatic Code Generation and Beyond

    DTIC Science & Technology

    2015-03-01

    and WebLogic, as well as cloud environments such as Microsoft Azure and Amazon Web Services®. Finally, while the generated code has dependencies on... code generation in the context of the full system lifecycle from development to sustainment. Acquisition programs in government or large commercial... Acquirers are concerned with the full system lifecycle, and they need confidence that the development methods will enable the system to meet the functional...

  8. Unsupervised Extraction of Diagnosis Codes from EMRs Using Knowledge-Based and Extractive Text Summarization Techniques

    PubMed Central

    Kavuluru, Ramakanth; Han, Sifei; Harris, Daniel

    2017-01-01

    Diagnosis codes are extracted from medical records for billing and reimbursement and for secondary uses such as quality control and cohort identification. In the US, these codes come from the standard terminology ICD-9-CM derived from the international classification of diseases (ICD). ICD-9 codes are generally extracted by trained human coders by reading all artifacts available in a patient's medical record following specific coding guidelines. To assist coders in this manual process, this paper proposes an unsupervised ensemble approach to automatically extract ICD-9 diagnosis codes from textual narratives included in electronic medical records (EMRs). Earlier attempts at automatic extraction focused on individual documents such as radiology reports and discharge summaries. Here we use a more realistic dataset and extract ICD-9 codes from EMRs of 1000 inpatient visits at the University of Kentucky Medical Center. Using named entity recognition (NER), graph-based concept-mapping of medical concepts, and extractive text summarization techniques, we achieve an example-based average recall of 0.42 with an average precision of 0.47; compared with a baseline of using only NER, we notice a 12% improvement in recall with the graph-based approach and a 7% improvement in precision using the extractive text summarization approach. Although diagnosis codes are complex concepts often expressed in text with significant long range non-local dependencies, our present work shows the potential of unsupervised methods in extracting a portion of codes. As such, our findings are especially relevant for code extraction tasks where obtaining large amounts of training data is difficult. PMID:28748227
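    The reported metrics are example-based averages over visits; a small sketch makes the computation concrete. The ICD-9 codes below are arbitrary examples, not data from the study.

    ```python
    def example_based_scores(gold, predicted):
        """Example-based average recall and precision for multi-label
        code extraction: per-visit overlap ratios, then averaged."""
        recalls = [len(g & p) / len(g) for g, p in zip(gold, predicted)]
        precisions = [len(g & p) / len(p) if p else 0.0
                      for g, p in zip(gold, predicted)]
        n = len(gold)
        return sum(recalls) / n, sum(precisions) / n

    gold = [{"410.71", "250.00"}, {"486"}]   # true ICD-9 codes per visit
    pred = [{"410.71"}, {"486", "401.9"}]    # extracted codes per visit
    print(example_based_scores(gold, pred))  # -> (0.75, 0.75)
    ```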

  9. Cross-cultural differences in mental representations of time: evidence from an implicit nonlinguistic task.

    PubMed

    Fuhrman, Orly; Boroditsky, Lera

    2010-11-01

    Across cultures people construct spatial representations of time. However, the particular spatial layouts created to represent time may differ across cultures. This paper examines whether people automatically access and use culturally specific spatial representations when reasoning about time. In Experiment 1, we asked Hebrew and English speakers to arrange pictures depicting temporal sequences of natural events, and to point to the hypothesized location of events relative to a reference point. In both tasks, English speakers (who read left to right) arranged temporal sequences to progress from left to right, whereas Hebrew speakers (who read right to left) arranged them from right to left, replicating previous work. In Experiments 2 and 3, we asked the participants to make rapid temporal order judgments about pairs of pictures presented one after the other (i.e., to decide whether the second picture showed a conceptually earlier or later time-point of an event than the first picture). Participants made responses using two adjacent keyboard keys. English speakers were faster to make "earlier" judgments when the "earlier" response needed to be made with the left response key than with the right response key. Hebrew speakers showed exactly the reverse pattern. Asking participants to use a space-time mapping inconsistent with the one suggested by writing direction in their language created interference, suggesting that participants were automatically creating writing-direction consistent spatial representations in the course of their normal temporal reasoning. It appears that people automatically access culturally specific spatial representations when making temporal judgments even in nonlinguistic tasks.

  10. Continuous integration and quality control for scientific software

    NASA Astrophysics Data System (ADS)

    Neidhardt, A.; Ettl, M.; Brisken, W.; Dassing, R.

    2013-08-01

    Modern software has to be stable, portable, fast and reliable. This is becoming more and more important for scientific software as well. But it requires a sophisticated way to inspect, check and evaluate the quality of source code with a suitable, automated infrastructure. A centralized server with a software repository and a version control system is one essential part, used to manage the code base and to control the different development versions. While each project can be compiled separately, the whole code base can also be compiled with one central "Makefile". This is used to create automated, nightly builds. Additionally, all sources are inspected automatically with static code analysis and inspection tools, which check for well-known error situations, memory and resource leaks, performance issues, and style issues. In combination with an automatic documentation generator it is possible to create the developer documentation directly from the code and the inline comments. All reports and generated information are presented as HTML pages on a web server. Because this environment increased the stability and quality of the software of the Geodetic Observatory Wettzell tremendously, it is now also available to scientific communities. One regular user is already the developer group of the DiFX software correlator project.
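    A nightly-build driver in this spirit can be only a few lines. The sketch below assumes a central Makefile under src/ and the cppcheck static analyser being installed (both assumptions), and publishes a bare-bones HTML report for the web server to serve.

    ```python
    #!/usr/bin/env python3
    import subprocess, datetime, pathlib

    log = [f"nightly build {datetime.date.today()}"]
    for cmd in (["make", "-C", "src"],                       # central Makefile build
                ["cppcheck", "--enable=warning,style", "src"]):  # static analysis
        result = subprocess.run(cmd, capture_output=True, text=True)
        log.append(f"$ {' '.join(cmd)} -> exit {result.returncode}")
        log.append(result.stdout + result.stderr)

    # Minimal HTML report, to be served alongside the generated documentation
    pathlib.Path("report.html").write_text("<pre>" + "\n".join(log) + "</pre>")
    ```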

  11. Design of Provider-Provisioned Website Protection Scheme against Malware Distribution

    NASA Astrophysics Data System (ADS)

    Yagi, Takeshi; Tanimoto, Naoto; Hariu, Takeo; Itoh, Mitsutaka

    Vulnerabilities in web applications expose computer networks to security threats, and many websites are used by attackers as hopping sites to attack other websites and user terminals. These incidents prevent service providers from constructing secure networking environments. To protect websites from attacks exploiting vulnerabilities in web applications, service providers use web application firewalls (WAFs). WAFs filter accesses from attackers by using signatures, which are generated based on the exploit codes of previous attacks. However, WAFs cannot filter unknown attacks because the signatures cannot reflect new types of attacks. In service provider environments, the number of exploit codes has recently increased rapidly because of the spread of vulnerable web applications that have been developed through cloud computing. Thus, generating signatures for all exploit codes is difficult. To solve these problems, our proposed scheme detects and filters malware downloads that are sent from websites which have already received exploit codes. In addition, to collect information for detecting malware downloads, web honeypots, which automatically extract the communication records of exploit codes, are used. According to the results of experiments using a prototype, our scheme can filter attacks automatically so that service providers can provide secure and cost-effective network environments.

  12. Should intellectual property be disseminated by "forwarding" rejected letters without permission?

    PubMed

    Gupta, V K

    1996-08-01

    Substantive scientific letter writing is a cost-effective mode of complementing observational and experimental research. The value of such philosophically uncommitted and unsponsored well-balanced scientific activity has been relegated. Critical letter writing entails the abilities to: maintain rational scepticism; refuse to conform in order to explain data; persist in keeping common sense centre-stage; exercise logic to evaluate the biological significance of mathematical figures, including statistics, and the ability to sustain the will to share insights regarding disease mechanisms on an ostensibly lower research platform. During peer review, innovative letter writing may share the occasionally unfortunate fate of innovative research. Rejected scientific letters do not automatically lose copyright. Periodicals with high letter loads will see some valuable contributions wasted, but that is the price for maintaining autonomy in scientific publication. The scientific community is an integrated whole that must respect the rights of authors at all levels. Unauthorised forwarding of rejected letters sets the dangerous precedent of justifying unjust means.

  13. The Impact of Rater Variability on Relationships among Different Effect-Size Indices for Inter-Rater Agreement between Human and Automated Essay Scoring

    ERIC Educational Resources Information Center

    Yun, Jiyeo

    2017-01-01

    Since researchers began investigating automatic scoring systems for writing assessments, they have examined the relationships between human and machine scoring and have proposed evaluation criteria for inter-rater agreement. The main purpose of my study is to investigate the magnitudes of and relationships among indices for inter-rater agreement used…
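
    One widely used index of human-machine agreement on ordinal essay scores is quadratically weighted kappa. A self-contained sketch with toy scores (the data are invented; the formula is the standard one):

```python
"""Quadratic weighted kappa between two raters on an ordinal score scale."""
import numpy as np

def quadratic_weighted_kappa(a, b, n_levels):
    a, b = np.asarray(a), np.asarray(b)
    observed = np.zeros((n_levels, n_levels))
    for i, j in zip(a, b):
        observed[i, j] += 1
    observed /= observed.sum()
    # Expected agreement from the raters' marginal distributions.
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    # Quadratic disagreement weights: larger score gaps are penalized more.
    idx = np.arange(n_levels)
    w = (idx[:, None] - idx[None, :]) ** 2 / (n_levels - 1) ** 2
    return 1 - (w * observed).sum() / (w * expected).sum()

human   = [0, 1, 2, 3, 3, 2, 1, 0, 2, 3]   # toy human scores
machine = [0, 1, 2, 3, 2, 2, 1, 1, 2, 3]   # toy machine scores
print(round(quadratic_weighted_kappa(human, machine, 4), 3))
```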

  14. Automatic Adaptation of Tunable Distributed Applications

    DTIC Science & Technology

    2001-01-01

    size, weight, and battery life, with a single CPU, less memory, smaller hard disk, and lower bandwidth network connectivity. The power of PDAs is...wireless, and bluetooth [32] facilities; thus achieving different rates of data transmission. 1 With the trend of “write once, run everywhere...applications, a single component can execute on multiple processors (or machines) in parallel. These parallel applications, written in a specialized language

  15. Studies in Historical Replication in Psychology IV: An Inquiry into the Psychological Research and Life of Gertrude Stein

    ERIC Educational Resources Information Center

    Sirrine, Nicole K.; McCarthy, Shauna K.

    2008-01-01

    Gertrude Stein (1874-1946) is well known as an early twentieth century writer, but less well known is her involvement in automatic writing research. Critics of Stein's literary works suggest that her research had a significant influence on her poetry and fiction, though Stein denied any influence. A partial replication of Stein's 1896 study was…

  16. Automatic Adaptation to Fast Input Changes in a Time-Invariant Neural Circuit

    PubMed Central

    Bharioke, Arjun; Chklovskii, Dmitri B.

    2015-01-01

    Neurons must faithfully encode signals that can vary over many orders of magnitude despite having only limited dynamic ranges. For a correlated signal, this dynamic range constraint can be relieved by subtracting away components of the signal that can be predicted from the past, a strategy known as predictive coding that relies on learning the input statistics. However, the statistics of natural input signals can also vary over very short time scales, e.g., following saccades across a visual scene. To maintain a reduced transmission cost for signals with rapidly varying statistics, neuronal circuits implementing predictive coding must also rapidly adapt their properties. Experimentally, in different sensory modalities, sensory neurons have shown such adaptations within 100 ms of an input change. Here, we show first that linear neurons connected in a feedback inhibitory circuit can implement predictive coding. We then show that adding a rectification nonlinearity to such a feedback inhibitory circuit allows it to automatically adapt and approximate the performance of an optimal linear predictive coding network over a wide range of inputs, while keeping its underlying temporal and synaptic properties unchanged. We demonstrate that the resulting changes to the linearized temporal filters of this nonlinear network match the fast adaptations observed experimentally in different sensory modalities, in different vertebrate species. Therefore, the nonlinear feedback inhibitory network can provide automatic adaptation to fast-varying signals, maintaining the dynamic range necessary for accurate neuronal transmission of natural inputs. PMID:26247884
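
    The basic predictive coding idea, transmitting only the residual that cannot be predicted from the past, can be shown with a one-step linear predictor on a toy AR(1) signal. This is a generic illustration, not the paper's circuit model:

```python
"""Predictive coding toy: transmit residuals of a learned one-step predictor."""
import numpy as np

rng = np.random.default_rng(0)
# Correlated signal: AR(1) process x[t] = 0.95 x[t-1] + noise.
x = np.zeros(5000)
for t in range(1, x.size):
    x[t] = 0.95 * x[t - 1] + rng.normal(scale=0.1)

# Learn the prediction coefficient from the input statistics (least squares).
a = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])

residual = x[1:] - a * x[:-1]   # what a predictive coder would transmit
print(f"signal std   {x.std():.3f}")
print(f"residual std {residual.std():.3f}")  # much smaller dynamic range
```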

  17. Automatic Activation of Phonological Code during Visual Word Recognition in Children: A Masked Priming Study in Grades 3 and 5

    ERIC Educational Resources Information Center

    Sauval, Karinne; Perre, Laetitia; Casalis, Séverine

    2017-01-01

    The present study aimed to investigate the development of automatic phonological processes involved in visual word recognition during reading acquisition in French. A visual masked priming lexical decision experiment was carried out with third, fifth graders and adult skilled readers. Three different types of partial overlap between the prime and…

  18. Reading Aloud Is Not Automatic: Processing Capacity Is Required to Generate a Phonological Code from Print

    ERIC Educational Resources Information Center

    Reynolds, Michael; Besner, Derek

    2006-01-01

    The present experiments tested the claim that phonological recoding occurs "automatically" by assessing whether it uses central attention in the context of the psychological refractory period paradigm. Task 1 was a tone discrimination task and Task 2 was reading aloud. The joint effects of long-lag word repetition priming and stimulus onset…

  19. FAMA: Fast Automatic MOOG Analysis

    NASA Astrophysics Data System (ADS)

    Magrini, Laura; Randich, Sofia; Friel, Eileen; Spina, Lorenzo; Jacobson, Heather; Cantat-Gaudin, Tristan; Donati, Paolo; Baglioni, Roberto; Maiorca, Enrico; Bragaglia, Angela; Sordo, Rosanna; Vallenari, Antonella

    2014-02-01

    FAMA (Fast Automatic MOOG Analysis), written in Perl, computes the atmospheric parameters and abundances of a large number of stars using measurements of equivalent widths (EWs) automatically and independently of any subjective approach. Based on the widely-used MOOG code, it simultaneously searches for three equilibria: excitation equilibrium, ionization balance, and the relationship between log n(Fe I) and the reduced EWs. FAMA also evaluates the statistical errors on individual element abundances and errors due to the uncertainties in the stellar parameters. Convergence criteria are not fixed "a priori" but instead are based on the quality of the spectra.

  20. [Coding Causes of Death with IRIS Software. Impact in Navarre Mortality Statistic].

    PubMed

    Floristán Floristán, Yugo; Delfrade Osinaga, Josu; Carrillo Prieto, Jesus; Aguirre Perez, Jesus; Moreno-Iribas, Conchi

    2016-08-02

    There are few studies analyzing the changes in mortality statistics that result from using IRIS software, an automatic system for coding multiple causes of death and selecting the underlying cause of death, instead of manual coding. This study evaluated the impact of IRIS on the Navarre mortality statistics. We double-coded 5,060 death certificates of residents of Navarre in 2014, calculated the agreement between the two codings at the level of ICD-10 chapters and of the cause list of the Spanish National Statistics Institute (INE-102), and estimated the change in mortality rates. IRIS automatically coded 90% of death certificates. Agreement at the 4-character level and within the same ICD-10 chapter was 79.1% and 92.0%, respectively, and agreement with the short INE-102 list was 88.3%. Agreement was higher for death certificates of people under 65 years. In comparison with manual coding there was an increase in deaths from endocrine diseases (31%), mental disorders (19%) and diseases of the nervous system (9%), while a decrease in genitourinary system diseases was observed (21%). IRIS agreed with manual coding at the level of ICD-10 chapters for 9 out of 10 deaths, similar to what is observed in other studies. The implementation of IRIS has led to an increase in endocrine diseases, especially diabetes and hyperlipidaemia, and in mental disorders, especially dementias.
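
    The two agreement levels reported (4-character code vs. ICD-10 chapter) amount to comparing code pairs at different granularities. A toy sketch, with an invented minimal chapter lookup and invented sample codes:

```python
"""Toy agreement computation for double-coded causes of death.
The chapter lookup and the sample code pairs are illustrative assumptions."""

def chapter(icd10_code: str) -> str:
    # Hypothetical minimal lookup; the real ICD-10 chapter table is much larger.
    table = {"E": "IV Endocrine", "F": "V Mental", "G": "VI Nervous", "N": "XIV Genitourinary"}
    return table.get(icd10_code[0], "other")

pairs = [("E119", "E119"), ("F019", "F03"), ("G309", "G309"), ("N189", "E119")]  # (manual, IRIS)

same_4char   = sum(m[:4] == a[:4] for m, a in pairs) / len(pairs)
same_chapter = sum(chapter(m) == chapter(a) for m, a in pairs) / len(pairs)
print(f"4-character agreement: {same_4char:.0%}, chapter agreement: {same_chapter:.0%}")
```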

  1. System for loading executable code into volatile memory in a downhole tool

    DOEpatents

    Hall, David R.; Bartholomew, David B.; Johnson, Monte L.

    2007-09-25

    A system for loading an executable code into volatile memory in a downhole tool string component comprises a surface control unit comprising executable code. An integrated downhole network comprises data transmission elements in communication with the surface control unit and the volatile memory. The executable code, stored in the surface control unit, is not permanently stored in the downhole tool string component. In a preferred embodiment of the present invention, the downhole tool string component comprises boot memory. In another embodiment, the executable code is an operating system executable code. Preferably, the volatile memory comprises random access memory (RAM). A method for loading executable code to volatile memory in a downhole tool string component comprises sending the code from the surface control unit to a processor in the downhole tool string component over the network. A central processing unit writes the executable code in the volatile memory.

  2. Software testing

    NASA Astrophysics Data System (ADS)

    Price-Whelan, Adrian M.

    2016-01-01

    Now more than ever, scientific results are dependent on sophisticated software and analysis. Why should we trust code written by others? How do you ensure your own code produces sensible results? How do you make sure it continues to do so as you update, modify, and add functionality? Software testing is an integral part of code validation and writing tests should be a requirement for any software project. I will talk about Python-based tools that make managing and running tests much easier and explore some statistics for projects hosted on GitHub that contain tests.
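
    As a minimal illustration of the Python testing tooling referred to here, a function plus a pytest-style test (the function and file name are invented for the example; pytest discovers `test_*` functions automatically):

```python
"""A tiny testable function plus a pytest test. Run with: pytest test_stats.py"""
import math

def weighted_mean(values, weights):
    """Weighted arithmetic mean; raises on empty or mismatched input."""
    if len(values) != len(weights) or not values:
        raise ValueError("values and weights must be non-empty and of equal length")
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

def test_weighted_mean():
    assert math.isclose(weighted_mean([1.0, 3.0], [1.0, 1.0]), 2.0)
    assert math.isclose(weighted_mean([1.0, 3.0], [3.0, 1.0]), 1.5)
```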

  3. Model-Driven Engineering of Machine Executable Code

    NASA Astrophysics Data System (ADS)

    Eichberg, Michael; Monperrus, Martin; Kloppenburg, Sven; Mezini, Mira

    Implementing static analyses of machine-level executable code is labor intensive and complex. We show how to leverage model-driven engineering to facilitate the design and implementation of programs doing static analyses. Further, we report on important lessons learned on the benefits and drawbacks while using the following technologies: using the Scala programming language as target of code generation, using XML-Schema to express a metamodel, and using XSLT to implement (a) transformations and (b) a lint like tool. Finally, we report on the use of Prolog for writing model transformations.

  4. Progress in The Semantic Analysis of Scientific Code

    NASA Technical Reports Server (NTRS)

    Stewart, Mark

    2000-01-01

    This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, independent expert parsers. These semantic parsers encode domain knowledge and recognize formulae in different disciplines including physics, numerical methods, mathematics, and geometry. The parsers will automatically recognize and document some static, semantic concepts and help locate some program semantic errors. These techniques may apply to a wider range of scientific codes. If so, the techniques could reduce the time, risk, and effort required to develop and modify scientific codes.
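
    The flavor of such semantic declarations can be illustrated with a toy dimensional checker: primitive variables are annotated with physical units, and a product formula is checked for consistency. This is a sketch of the general idea only, not the paper's expert parsers:

```python
"""Toy semantic check: verify units in a simple product formula.
Units are exponent dictionaries, e.g. velocity = {"m": 1, "s": -1}."""
from collections import Counter

declared = {
    "velocity": Counter({"m": 1, "s": -1}),   # semantic declarations for primitives
    "time":     Counter({"s": 1}),
    "distance": Counter({"m": 1}),
}

def units_of_product(*names):
    """Units of a product are the sum of the factors' unit exponents."""
    total = Counter()
    for name in names:
        total.update(declared[name])
    return Counter({u: e for u, e in total.items() if e != 0})  # drop zero exponents

# Check the formula: distance = velocity * time
lhs, rhs = declared["distance"], units_of_product("velocity", "time")
print("dimensionally consistent" if lhs == rhs else f"mismatch: {lhs} vs {rhs}")
```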

  5. Automatic Rock Detection and Mapping from HiRISE Imagery

    NASA Technical Reports Server (NTRS)

    Huertas, Andres; Adams, Douglas S.; Cheng, Yang

    2008-01-01

    This system includes a C-code software program and a set of MATLAB software tools for statistical analysis and rock distribution mapping. The major functions include rock detection and rock detection validation. The rock detection code has been evolved into a production tool that can be used by engineers and geologists with minor training.

  6. 14 CFR 91.215 - ATC transponder and altitude reporting equipment and use.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... interrogations with the code specified by ATC, or a Mode S capability, replying to Mode 3/A interrogations with the code specified by ATC and intermode and Mode S interrogations in accordance with the applicable... equipment having a Mode C capability that automatically replies to Mode C interrogations by transmitting...

  7. 14 CFR 91.215 - ATC transponder and altitude reporting equipment and use.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... interrogations with the code specified by ATC, or a Mode S capability, replying to Mode 3/A interrogations with the code specified by ATC and intermode and Mode S interrogations in accordance with the applicable... equipment having a Mode C capability that automatically replies to Mode C interrogations by transmitting...

  8. 14 CFR 91.215 - ATC transponder and altitude reporting equipment and use.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... interrogations with the code specified by ATC, or a Mode S capability, replying to Mode 3/A interrogations with the code specified by ATC and intermode and Mode S interrogations in accordance with the applicable... equipment having a Mode C capability that automatically replies to Mode C interrogations by transmitting...

  9. 14 CFR 91.215 - ATC transponder and altitude reporting equipment and use.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... interrogations with the code specified by ATC, or a Mode S capability, replying to Mode 3/A interrogations with the code specified by ATC and intermode and Mode S interrogations in accordance with the applicable... equipment having a Mode C capability that automatically replies to Mode C interrogations by transmitting...

  10. 14 CFR 91.215 - ATC transponder and altitude reporting equipment and use.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... interrogations with the code specified by ATC, or a Mode S capability, replying to Mode 3/A interrogations with the code specified by ATC and intermode and Mode S interrogations in accordance with the applicable... equipment having a Mode C capability that automatically replies to Mode C interrogations by transmitting...

  11. Automatic contact in DYNA3D for vehicle crashworthiness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whirley, R.G.; Engelmann, B.E.

    1993-07-15

    This paper presents a new formulation for the automatic definition and treatment of mechanical contact in explicit nonlinear finite element analysis. Automatic contact offers the benefits of significantly reduced model construction time and fewer opportunities for user error, but faces significant challenges in reliability and computational costs. This paper discusses in detail a new four-step automatic contact algorithm. Key aspects of the proposed method include automatic identification of adjacent and opposite surfaces in the global search phase, and the use of a smoothly varying surface normal which allows a consistent treatment of shell intersection and corner contact conditions without ad-hoc rules. The paper concludes with three examples which illustrate the performance of the newly proposed algorithm in the public DYNA3D code.

  12. COMPUTER DATA PROCESSING SYSTEM. PROJECT ROVER, 1962

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narin, F.

    A system was created for processing large volumes of data from Project ROVER tests at the Nevada Test Site. The data are compiled as analog, frequency modulated tape, which is translated in a Packard-Bell Tape-to-Tape converter into a binary coded decimal (BCD) IBM 7090 computer input tape. This input tape, tape A5, is processed on the 7090 by the RDH-D FORTRAN-II code and its 20 FAP and FORTRAN subroutines. Outputs from the 7090 run are tape A3, which is a BCD tape used for listing on the IBM 1401 input-output computer, tape B5, which is a binary tape used as input to a Stromberg-Carlson 40/20 cathode ray tube (CRT) plotter, and tape B6, which is a binary tape used for permanent data storage and input to specialized subcodes. The information on tape B5 commands the 40/20 to write grids, data points, and other information on the face of a CRT; the information on the CRT is photographed on 35 mm film which is subsequently developed; full-size (10" x 10") plots are made from the 35 mm film on a Xerox 1824 printer. The 7090 processes a data channel in approximately 4 seconds plus 4 seconds per plot to be made on the 40/20 for that channel. Up to 4500 data and calibration points on any one channel may be processed in one pass of the RDH-D code. This system has been used to produce more than 100,000 prints on the 1824 printer from more than 10,000 different 40/20 plots. At 00 per minute of 7090 time, it costs 60 to process a typical, 3-plot data channel on the 7090; each print on the 1824 costs between 5 and 10 cents including rental, supplies, and operator time. All automatic computer stops in the codes and subroutines are accompanied by on-line instructions to the operator. Extensive redundancy checking is incorporated in the FAP tape handling subroutines. (auth)

  13. Low latency and persistent data storage

    DOEpatents

    Fitch, Blake G; Franceschini, Michele M; Jagmohan, Ashish; Takken, Todd

    2014-11-04

    Persistent data storage is provided by a computer program product that includes computer program code configured for receiving a low latency store command that includes write data. The write data is written to a first memory device that is implemented by a nonvolatile solid-state memory technology characterized by a first access speed. It is acknowledged that the write data has been successfully written to the first memory device. The write data is written to a second memory device that is implemented by a volatile memory technology. At least a portion of the data in the first memory device is written to a third memory device when a predetermined amount of data has been accumulated in the first memory device. The third memory device is implemented by a nonvolatile solid-state memory technology characterized by a second access speed that is slower than the first access speed.
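
    The claimed write path, persist to the fast nonvolatile device, acknowledge, mirror to volatile memory, then batch-flush to the slower nonvolatile device once enough data has accumulated, can be sketched as follows. The class, tier stand-ins and threshold are illustrative, not the patent's implementation:

```python
"""Sketch of a low-latency tiered write path (in-memory stand-ins for the devices)."""

class TieredStore:
    def __init__(self, flush_threshold=4):
        self.fast_nvm = []        # first device: fast nonvolatile memory
        self.dram = []            # second device: volatile working copy
        self.slow_nvm = []        # third device: slower nonvolatile memory
        self.flush_threshold = flush_threshold

    def low_latency_store(self, data):
        self.fast_nvm.append(data)        # persist first ...
        acknowledged = True               # ... then acknowledge immediately
        self.dram.append(data)            # volatile copy for fast access
        if len(self.fast_nvm) >= self.flush_threshold:
            # Batch-move the accumulated data to the slower nonvolatile tier.
            self.slow_nvm.extend(self.fast_nvm)
            self.fast_nvm.clear()
        return acknowledged

store = TieredStore()
for block in [b"a", b"b", b"c", b"d", b"e"]:
    store.low_latency_store(block)
print(len(store.fast_nvm), len(store.slow_nvm))  # 1 4
```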

  14. Impaired Retention of Motor Learning of Writing Skills in Patients with Parkinson’s Disease with Freezing of Gait

    PubMed Central

    Heremans, Elke; Nackaerts, Evelien; Vervoort, Griet; Broeder, Sanne; Swinnen, Stephan P.; Nieuwboer, Alice

    2016-01-01

    Background: Patients with Parkinson’s disease (PD) and freezing of gait (FOG) suffer from more impaired motor and cognitive functioning than their non-freezing counterparts. This underlies an even higher need for targeted rehabilitation programs in this group. However, so far it is unclear whether FOG affects the ability for consolidation and generalization of motor learning and thus the efficacy of rehabilitation. Objective: To investigate the hallmarks of motor learning in people with FOG compared to those without by comparing the effects of an intensive motor learning program to improve handwriting. Methods: Thirty-five patients with PD, including 19 without and 16 with FOG, received six weeks of handwriting training consisting of exercises provided on paper and on a touch-sensitive writing tablet. Writing training was based on single- and dual-task writing and was supported by means of visual target zones. To investigate automatization, generalization and retention of learning, writing performance was assessed before and after training, in the presence and absence of cues and dual tasking, and after a six-week retention period. Writing amplitude was measured as the primary outcome measure, and variability of writing and dual-task accuracy as secondary outcomes. Results: Significant learning effects were present on all outcome measures in both groups, both for writing under single- and dual-task conditions. However, the gains in writing amplitude were not retained after a retention period of six weeks without training in the patient group with FOG. Furthermore, patients with FOG were highly dependent on the visual target zones, reflecting reduced generalization of learning in this group. Conclusions: Although short-term learning effects were present in both groups, generalization and retention of motor learning were specifically impaired in patients with PD and FOG. The results of this study underscore the importance of individualized rehabilitation protocols. PMID:26862915

  15. The art of writing good research proposals.

    PubMed

    van Ekelenburg, Henk

    2010-01-01

    Whilst scientists are by default motivated by the intellectual challenges linked to their area of interest rather than by the financial component of their work, the reality today is that funding for their work does not come automatically. More and more, governments provide project-related funding rather than multipurpose funding that covers the total annual costs of a research performing entity (such as a university department). So, like it or not, researchers have to present their research ideas and convince funding bodies of the usefulness and importance of their intended research work. Writing the research proposal is not simply typing words and punctuation. It requires succinctly and clearly chronicling the facts, as well as crafting a convincing line of reasoning for funding the project. For the best result, both the logical, verbal left side of the brain and the intuitive, creative right side of the brain need to work as a team. This article covers the process of writing a proposal, from research idea to submission to the funding body. The key to good writing is linking the text into a logical project flow. Therefore, in the early stage of writing an RTD proposal, developing the chain of reasoning and creating a flow chart is recommended to get a clear overview of the entire project and to visualise how the many work packages are connected.

  16. Programming in HAL/S

    NASA Technical Reports Server (NTRS)

    Ryer, M. J.

    1978-01-01

    HAL/S is a computer programming language; it is a representation for algorithms which can be interpreted by either a person or a computer. HAL/S compilers transform blocks of HAL/S code into machine language which can then be directly executed by a computer. When the machine language is executed, the algorithm specified by the HAL/S code (source) is performed. This document describes how to read and write HAL/S source.

  17. Force user's manual: A portable, parallel FORTRAN

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.; Benten, Muhammad S.; Arenstorf, Norbert S.; Ramanan, Aruna V.

    1990-01-01

    The use of Force, a parallel, portable FORTRAN on shared memory parallel computers is described. Force simplifies writing code for parallel computers and, once the parallel code is written, it is easily ported to computers on which Force is installed. Although Force is nearly the same for all computers, specific details are included for the Cray-2, Cray-YMP, Convex 220, Flex/32, Encore, Sequent, Alliant computers on which it is installed.

  18. Automatic energy calibration algorithm for an RBS setup

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silva, Tiago F.; Moro, Marcos V.; Added, Nemitala

    2013-05-06

    This work describes a computer algorithm for the automatic extraction of the energy calibration parameters from a Rutherford Back-Scattering Spectroscopy (RBS) spectrum. Parameters like the electronic gain, electronic offset and detection resolution (FWHM) of an RBS setup are usually determined using a standard sample. In our case, the standard sample comprises a multi-elemental thin film made of a mixture of Ti-Al-Ta that is analyzed at the beginning of each run at a defined beam energy. A computer program has been developed to extract the calibration parameters from the spectrum of the standard sample automatically. The code evaluates the first derivative of the energy spectrum, locates the trailing edges of the Al, Ti and Ta peaks, and fits a first-order polynomial for the energy-channel relation. The detection resolution is determined by fitting the convolution of a pre-calculated theoretical spectrum. To test the code, two years of data have been analyzed and the results compared with the manual calculations done previously, obtaining good agreement.
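
    The described algorithm (first derivative, edge location, linear channel-to-energy fit) is easy to sketch with NumPy on a synthetic spectrum. The edge channels and energies below are placeholders, not the actual Al/Ti/Ta kinematic values:

```python
"""Sketch of automatic energy calibration from a multi-elemental RBS standard."""
import numpy as np

# Synthetic spectrum: three plateaus whose trailing (high-channel) edges
# stand in for the Al, Ti and Ta back-scattering edges.
channels = np.arange(1024)
spectrum = sum(400 * (channels < edge) for edge in (300, 600, 900)).astype(float)
spectrum += np.random.default_rng(1).normal(0, 5, channels.size)

# Locate trailing edges as the strongest negative points of the first derivative.
deriv = np.gradient(spectrum)
edge_channels = []
work = deriv.copy()
for _ in range(3):
    c = int(np.argmin(work))
    edge_channels.append(c)
    work[max(0, c - 20):c + 20] = 0   # suppress neighbourhood, then find the next edge

# Placeholder edge energies (keV) for the three elements, lightest first.
edge_energies = np.array([600.0, 1100.0, 1600.0])
gain, offset = np.polyfit(sorted(edge_channels), edge_energies, 1)
print(f"gain = {gain:.3f} keV/channel, offset = {offset:.1f} keV")
```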

  19. Optimizing a liquid propellant rocket engine with an automated combustor design code (AUTOCOM)

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Reichel, R. H.; Jones, R. T.; Glatt, C. R.

    1972-01-01

    A procedure for automatically designing a liquid propellant rocket engine combustion chamber in an optimal fashion is outlined. The procedure is contained in a digital computer code, AUTOCOM. The code is applied to an existing engine, and design modifications are generated which provide a substantial potential payload improvement over the existing design. Computer time requirements for this payload improvement were small, approximately four minutes in the CDC 6600 computer.

  20. Design of efficient and simple interface testing equipment for opto-electric tracking system

    NASA Astrophysics Data System (ADS)

    Liu, Qiong; Deng, Chao; Tian, Jing; Mao, Yao

    2016-10-01

    Interface testing of an opto-electric tracking system is an important step in assuring system performance: it verifies, at several levels, whether each electronic interface matches its communication protocol as designed. Modern opto-electric tracking systems are increasingly complex and are composed of many functional units. Interface testing is usually carried out between units only after they have been completely manufactured, so it depends heavily on the design and manufacturing progress of each unit, and on the people involved; as a result, it often takes days or weeks and is inefficient. To solve this problem, this paper proposes efficient and simple interface testing equipment for opto-electric tracking systems, consisting of optional interface circuit cards, a processor, and a test program. The hardware cards provide the matched hardware interface(s) and are easily supplied by the hardware engineer. Automatic code generation is employed to adapt to new communication protocols: test items are acquired automatically, the code architecture is constructed automatically, and the code is generated automatically, so that a new, adapted test program is formed quickly. After a few simple steps, a standard, customized interface testing setup with a matching test program and interface(s) is ready for a system awaiting test within minutes. The equipment has been used on many opto-electric tracking systems to test all or part of their interfaces, reducing test time from days to hours and greatly improving test efficiency, with high software quality and stability and without manual coding. Used as a common tool, the interface testing equipment proposed in this paper has changed the traditional interface testing method and achieved much higher efficiency.

  1. Natural Language Interface for Safety Certification of Safety-Critical Software

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Fischer, Bernd

    2011-01-01

    Model-based design and automated code generation are being used increasingly at NASA. The trend is to move beyond simulation and prototyping to actual flight code, particularly in the guidance, navigation, and control domain. However, there are substantial obstacles to more widespread adoption of code generators in such safety-critical domains. Since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently the generated code still needs to be fully tested and certified. The AutoCert generator plug-in supports the certification of automatically generated code by formally verifying that the generated code is free of different safety violations, by constructing an independently verifiable certificate, and by explaining its analysis in a textual form suitable for code reviews.

  2. The integration of lexical, syntactic, and discourse features in bilingual adolescents' writing: an exploratory approach.

    PubMed

    Danzak, Robin L

    2011-10-01

    The purpose of this study was to assess the bilingual writing of adolescent English language learners (ELLs) using quantitative tools. Linguistic measures were applied to the participants' writing at the lexical, syntactic, and discourse levels, with the goal of comparing outcomes at each of these levels across languages (Spanish/English) and genres (expository/narrative). Twenty Spanish-speaking ELLs, ages 11-14 years, each produced 8 expository and narrative autobiographical texts. Texts were coded and scored for lexical sophistication, syntactic complexity, and overall text quality. Scores were analyzed using Friedman's 2-way analysis of variance by ranks (Siegel & Castellan, 1988); resulting ranks were compared across languages and genre topics. The text topic impacted rank differences at all levels. Performance at the three levels was similar across languages, indicating that participants were emerging writers in both Spanish and English. The impact of genre was generally inconsequential at all levels. Similar results across languages implied the potential transfer of writing skills. Overall, students appeared to apply a knowledge-telling strategy to writing rather than strategically planning, composing, and revising their writing. Finally, outcomes highlighted the synergistic relationships among linguistic levels in text composition, indicating a need to address the interaction of vocabulary, morphosyntax, and text-level structures in the instruction and assessment of ELL writing.

  3. filltex: Automatic queries to ADS and INSPIRE databases to fill LaTex bibliography

    NASA Astrophysics Data System (ADS)

    Gerosa, Davide; Vallisneri, Michele

    2017-05-01

    filltex is a simple tool to fill LaTex reference lists with records from the ADS and INSPIRE databases. ADS and INSPIRE are the most common databases used among the theoretical physics and astronomy scientific communities, respectively. filltex automatically looks for all citation labels present in a tex document and, by means of web-scraping, downloads all the required citation records from either of the two databases. filltex significantly speeds up the LaTex scientific writing workflow, as all required actions (compile the tex file, fill the bibliography, compile the bibliography, compile the tex file again) are automated in a single command. We also provide an integration of filltex for the macOS LaTex editor TexShop.
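
    The first step of such a workflow, collecting all citation labels from a .tex source, is straightforward. A regex sketch of that step only, not filltex's actual implementation (the citation keys are invented):

```python
"""Collect citation keys from a LaTeX source, the first step of auto-filling a bibliography."""
import re

tex = r"""
We follow \cite{Einstein1915} and compare with \citep{Hulse1975,Taylor1982}
as well as \citet{Abbott2016}.
"""

# Match \cite and its variants (\citep, \citet, ...) and split multi-key arguments.
keys = set()
for match in re.findall(r"\\cite[a-zA-Z]*\{([^}]+)\}", tex):
    keys.update(k.strip() for k in match.split(","))

print(sorted(keys))  # ['Abbott2016', 'Einstein1915', 'Hulse1975', 'Taylor1982']
```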

  4. The MOLGENIS toolkit: rapid prototyping of biosoftware at the push of a button

    PubMed Central

    2010-01-01

    Background There is a huge demand on bioinformaticians to provide their biologists with user friendly and scalable software infrastructures to capture, exchange, and exploit the unprecedented amounts of new *omics data. We here present MOLGENIS, a generic, open source, software toolkit to quickly produce the bespoke MOLecular GENetics Information Systems needed. Methods The MOLGENIS toolkit provides bioinformaticians with a simple language to model biological data structures and user interfaces. At the push of a button, MOLGENIS’ generator suite automatically translates these models into a feature-rich, ready-to-use web application including database, user interfaces, exchange formats, and scriptable interfaces. Each generator is a template of SQL, JAVA, R, or HTML code that would require much effort to write by hand. This ‘model-driven’ method ensures reuse of best practices and improves quality because the modeling language and generators are shared between all MOLGENIS applications, so that errors are found quickly and improvements are shared easily by a re-generation. A plug-in mechanism ensures that both the generator suite and generated product can be customized just as much as hand-written software. Results In recent years we have successfully evaluated the MOLGENIS toolkit for the rapid prototyping of many types of biomedical applications, including next-generation sequencing, GWAS, QTL, proteomics and biobanking. Writing 500 lines of model XML typically replaces 15,000 lines of hand-written programming code, which allows for quick adaptation if the information system is not yet to the biologist’s satisfaction. Each application generated with MOLGENIS comes with an optimized database back-end, user interfaces for biologists to manage and exploit their data, programming interfaces for bioinformaticians to script analysis tools in R, Java, SOAP, REST/JSON and RDF, a tab-delimited file format to ease upload and exchange of data, and detailed technical documentation. Existing databases can be quickly enhanced with MOLGENIS generated interfaces using the ‘ExtractModel’ procedure. Conclusions The MOLGENIS toolkit provides bioinformaticians with a simple model to quickly generate flexible web platforms for all possible genomic, molecular and phenotypic experiments with a richness of interfaces not provided by other tools. All the software and manuals are available free as LGPLv3 open source at http://www.molgenis.org. PMID:21210979
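
    The model-driven idea, a small declarative model expanded into much more verbose code, can be illustrated with a toy generator that turns an entity description into SQL DDL. This is a sketch of the general technique, not MOLGENIS's actual model language or generator templates:

```python
"""Toy model-driven generation: an entity model expanded into SQL DDL."""

# A tiny declarative model, standing in for MOLGENIS-style model XML.
model = {
    "entity": "Sample",
    "fields": [
        ("id", "INTEGER PRIMARY KEY"),
        ("name", "TEXT NOT NULL"),
        ("collected_on", "DATE"),
    ],
}

def generate_ddl(model):
    """Expand the declarative field list into a CREATE TABLE statement."""
    cols = ",\n  ".join(f"{name} {sqltype}" for name, sqltype in model["fields"])
    return f"CREATE TABLE {model['entity']} (\n  {cols}\n);"

print(generate_ddl(model))
```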

  5. How Does Reading Performance Modulate the Impact of Orthographic Knowledge on Speech Processing? A Comparison of Normal Readers and Dyslexic Adults

    ERIC Educational Resources Information Center

    Pattamadilok, Chotiga; Nelis, Aubéline; Kolinsky, Régine

    2014-01-01

    Studies on proficient readers showed that speech processing is affected by knowledge of the orthographic code. Yet, the automaticity of the orthographic influence depends on task demand. Here, we addressed this automaticity issue in normal and dyslexic adult readers by comparing the orthographic effects obtained in two speech processing tasks that…

  6. DoD Is Not Properly Monitoring the Initiation of Maintenance for Facilities at Kandahar Airfield, Afghanistan (REDACTED)

    DTIC Science & Technology

    2013-09-30

    fire sprinkler system during the initial construction of the RSOI facilities. The construction contract to build the RSOI...International Building Code. Compliant manual and automatic fire alarm and notification systems , portable fire extinguishers, fire sprinkler systems ...automatic fire sprinkler system that was not operational, a fire department connection that was obstructed, and a fire detection system

  7. Identifying Key Features of Effective Active Learning: The Effects of Writing and Peer Discussion

    PubMed Central

    Pangle, Wiline M.; Wyatt, Kevin H.; Powell, Karli N.; Sherwood, Rachel E.

    2014-01-01

    We investigated some of the key features of effective active learning by comparing the outcomes of three different methods of implementing active-learning exercises in a majors introductory biology course. Students completed activities in one of three treatments: discussion, writing, and discussion + writing. Treatments were rotated weekly between three sections taught by three different instructors in a full factorial design. The data set was analyzed by generalized linear mixed-effect models with three independent variables: student aptitude, treatment, and instructor, and three dependent (assessment) variables: change in score on pre- and postactivity clicker questions, and coding scores on in-class writing and exam essays. All independent variables had significant effects on student performance for at least one of the dependent variables. Students with higher aptitude scored higher on all assessments. Student scores were higher on exam essay questions when the activity was implemented with a writing component compared with peer discussion only. There was a significant effect of instructor, with instructors showing different degrees of effectiveness with active-learning techniques. We suggest that individual writing should be implemented as part of active learning whenever possible and that instructors may need training and practice to become effective with active learning. PMID:25185230

  8. SETI-EC: SETI Encryption Code

    NASA Astrophysics Data System (ADS)

    Heller, René

    2018-03-01

    The SETI Encryption code, written in Python, creates a message for use in testing the decryptability of a simulated incoming interstellar message. The code uses images in a portable bit map (PBM) format, then writes the corresponding bits into the message, and finally returns both a PBM image and a text (TXT) file of the entire message. The natural constants (c, G, h) and the wavelength of the message are defined in the first few lines of the code, followed by the reading of the input files and their conversion into 757 strings of 359 bits to give one page. Each header of a page, i.e. the little-endian binary code translation of the tempo-spatial yardstick, is calculated and written on-the-fly for each page.
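
    Reading a plain-format (P1) PBM image and flattening it into a bit string, the kind of step described here, can be sketched as follows. This is a generic plain-PBM reader; the 757 x 359 page layout and the header encoding are not reproduced:

```python
"""Flatten a plain-format (P1) PBM image into a string of message bits."""

pbm = """P1
# toy 5x3 image
5 3
0 1 0 1 0
1 1 1 1 1
0 1 0 1 0
"""

lines = [l for l in pbm.splitlines() if l and not l.startswith("#")]
assert lines[0] == "P1", "plain PBM expected"
width, height = map(int, lines[1].split())
bits = "".join(lines[2:]).replace(" ", "")
assert len(bits) == width * height
print(bits)  # '010101111101010'
```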

  9. Automatic removal of cosmic ray signatures in Deep Impact images

    NASA Astrophysics Data System (ADS)

    Ipatov, S. I.; A'Hearn, M. F.; Klaasen, K. P.

    The results of the recognition of cosmic ray (CR) signatures on single images made during the Deep Impact mission were analyzed for several codes written by several authors. For the automatic removal of CR signatures on many images, we suggest using the code imgclean ( http://pdssbn.astro.umd.edu/volume/didoc_0001/document/calibration_software/dical_v5/) written by E. Deutsch, as the other codes considered do not work properly in automatic mode with a large number of images and do not run to completion for some images; however, other codes can be better for the analysis of certain specific images. Sometimes imgclean detects false CR signatures near the edge of a comet nucleus, and it often does not recognize all pixels of long CR signatures. Our code rmcr is the only code among those considered that allows one to work with raw images. For most visual images made during low solar activity at exposure time t > 4 s, the number of clusters of bright pixels on an image per second per sq. cm of CCD was about 2-4, both for dark and normal sky images. At high solar activity, it sometimes exceeded 10. The ratio of the number of CR signatures consisting of n pixels obtained at high solar activity to that at low solar activity was greater for greater n. The number of clusters detected as CR signatures on a single infrared image is at least several times greater than the actual number of CR signatures; the number of clusters based on the analysis of two successive dark infrared frames is in agreement with the expected number of CR signatures. Some false CR signatures are glitches that include bright pixels repeatedly present on different infrared images. Our interactive code imr allows a user to choose the regions on a considered image where glitches detected by imgclean as CR signatures are ignored. In other regions chosen by the user, the brightness of some pixels is replaced by the local median brightness if the brightness of these pixels is greater by some factor than the median brightness. The interactive code allows one to delete long CR signatures and prevents the removal of false CR signatures near the edge of the nucleus of the comet. The interactive code can be applied to the editing of any digital images. The results obtained can be used for other missions to comets.
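
    The cleaning rule quoted here, replace a pixel by the local median when it exceeds the median by some factor, is a standard operation. A generic sketch with SciPy on synthetic data, not the authors' imr code:

```python
"""Replace outlier pixels (cosmic-ray-like spikes) by the local median."""
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(2)
image = rng.normal(100.0, 5.0, (64, 64))
image[10, 12] = 5000.0   # synthetic cosmic ray hits
image[40, 33] = 3000.0

local_median = median_filter(image, size=5)
factor = 3.0                              # "greater by some factor" threshold
mask = image > factor * local_median
cleaned = np.where(mask, local_median, image)
print(mask.sum(), "pixels replaced")      # 2 pixels replaced
```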

  10. Project MAC Progress Report 11

    DTIC Science & Technology

    1974-12-01

    whether a subroutine would be useful as a part of some larger program , and, if so, how to use it [8]. The programming methodology employed by CALICO...7. Seriff, Marc, How to Write Programs for the CALICO Environment. SYS. 14.04 (unpublished). 8. Reeve, Chris, Marty Draper, D. E. Burmaster, and J...Introduction Automatic Programming Group A. Introduction B. Understanding How a User Might Interact with a Knowledge-Based Application System C

  11. Automatically Detecting Authors’ Native Language

    DTIC Science & Technology

    2011-03-01

    exploring stylistic idiosyncrasies in the author’s writing [15]. Kop- pel used the data from International Corpus of Learner English version 1, which is... stylistic feature sets such as function words, letter n-grams, and er- rors and idiosyncrasies [15]. 1. Function words: 400 specific function words were...language on the choice of written second language words. Proceedings of the Workshop on Cognitive Aspects of Computation Language Acquisition, pp. 9–16

  12. Identifying Speech Acts in E-Mails: Toward Automated Scoring of the "TOEIC"® E-Mail Task. Research Report. ETS RR-12-16

    ERIC Educational Resources Information Center

    De Felice, Rachele; Deane, Paul

    2012-01-01

    This study proposes an approach to automatically score the "TOEIC"® Writing e-mail task. We focus on one component of the scoring rubric, which notes whether the test-takers have used particular speech acts such as requests, orders, or commitments. We developed a computational model for automated speech act identification and tested it…

  13. Code subspaces for LLM geometries

    NASA Astrophysics Data System (ADS)

    Berenstein, David; Miller, Alexandra

    2018-03-01

    We consider effective field theory around classical background geometries with a gauge theory dual, specifically those in the class of LLM geometries. These are dual to half-BPS states of N = 4 SYM. We find that the language of code subspaces is natural for discussing the set of nearby states, which are built by acting with effective fields on these backgrounds. This work extends our previous work by going beyond the strict infinite N limit. We further discuss how one can extract the topology of the state beyond N → ∞ and find that, as before, uncertainty and entanglement entropy calculations provide a useful tool to do so. Finally, we discuss obstructions to writing down a globally defined metric operator. We find that the answer depends on the choice of reference state that one starts with. Therefore, within this setup, there is ambiguity in trying to write an operator that describes the metric globally.

  14. Model-Based Development of Automotive Electronic Climate Control Software

    NASA Astrophysics Data System (ADS)

    Kakade, Rupesh; Murugesan, Mohan; Perugu, Bhupal; Nair, Mohanan

    With increasing complexity of software in today's products, writing and maintaining thousands of lines of code is a tedious task. Instead, an alternative methodology must be employed. Model-based development is one candidate that offers several benefits and allows engineers to focus on the domain of their expertise than writing huge codes. In this paper, we discuss the application of model-based development to the electronic climate control software of vehicles. The back-to-back testing approach is presented that ensures flawless and smooth transition from legacy designs to the model-based development. Simulink report generator to create design documents from the models is presented along with its usage to run the simulation model and capture the results into the test report. Test automation using model-based development tool that support the use of unique set of test cases for several testing levels and the test procedure that is independent of software and hardware platform is also presented.

  15. Teaching Theory Construction With Initial Grounded Theory Tools: A Reflection on Lessons and Learning.

    PubMed

    Charmaz, Kathy

    2015-12-01

    This article addresses criticisms of qualitative research for spawning studies that lack analytic development and theoretical import. It focuses on teaching initial grounded theory tools while interviewing, coding, and writing memos for the purpose of scaling up the analytic level of students' research and advancing theory construction. Adopting these tools can improve teaching qualitative methods at all levels although doctoral education is emphasized here. What teachers cover in qualitative methods courses matters. The pedagogy presented here requires a supportive environment and relies on demonstration, collective participation, measured tasks, progressive analytic complexity, and accountability. Lessons learned from using initial grounded theory tools are exemplified in a doctoral student's coding and memo-writing excerpts that demonstrate progressive analytic development. The conclusion calls for increasing the number and depth of qualitative methods courses and for creating a cadre of expert qualitative methodologists. © The Author(s) 2015.

  16. Some User's Insights Into ADIFOR 2.0D

    NASA Technical Reports Server (NTRS)

    Giesy, Daniel P.

    2002-01-01

    Some insights are given which were gained by one user through experience with the use of the ADIFOR 2.0D software for automatic differentiation of Fortran code. These insights are generally in the area of the user interface with the generated derivative code - particularly the actual form of the interface and the use of derivative objects, including "seed" matrices. Some remarks are given as to how to iterate application of ADIFOR in order to generate second derivative code.
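
    The role of a seed in forward-mode automatic differentiation, the core mechanism behind tools like ADIFOR, can be shown with a minimal dual-number class. This is a generic sketch of the technique, unrelated to ADIFOR's generated Fortran:

```python
"""Minimal forward-mode AD: seed a dual number, propagate derivatives."""

class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u v' + u' v
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)
    __rmul__ = __mul__

def f(x, y):
    return x * x * y + 3 * y

# Seed dx/dx = 1, dy/dx = 0 to obtain the partial derivative with respect to x.
x, y = Dual(2.0, 1.0), Dual(5.0, 0.0)
out = f(x, y)
print(out.value, out.deriv)  # f(2,5) = 35.0, df/dx = 2*x*y = 20.0
```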

  17. Introduction to the Natural Anticipator and the Artificial Anticipator

    NASA Astrophysics Data System (ADS)

    Dubois, Daniel M.

    2010-11-01

    This short communication deals with the introduction of the concept of anticipator, which is one who anticipates, in the framework of computing anticipatory systems. The definition of anticipation deals with the concept of program. Indeed, the word program comes from "pro-gram", meaning "to write before" by anticipation, and means a plan for the programming of a mechanism, or a sequence of coded instructions that can be inserted into a mechanism, or a sequence of coded instructions, such as genes or behavioural responses, that is part of an organism. Any natural or artificial programs are thus related to anticipatory rewriting systems, as shown in this paper. All the cells in the body, and the neurons in the brain, are programmed by the anticipatory genetic code, DNA, in a low-level language with four signs. The programs in computers are also computing anticipatory systems. It will be shown, on the one hand, that the genetic code DNA is a natural anticipator. As demonstrated by Nobel laureate McClintock [8], genomes are programmed. The fundamental program deals with the DNA genetic code. The properties of the DNA consist in self-replication and self-modification. The self-replicating process leads to reproduction of the species, while the self-modifying process leads to new species, or evolution and adaptation in existing ones. The genetic code DNA keeps its instructions in memory in the DNA coding molecule. The genetic code DNA is a rewriting system, from the DNA coding molecule to the DNA template molecule. The DNA template molecule is a rewriting system to the messenger RNA molecule. The information is not destroyed during the execution of the rewriting program. On the other hand, it will be demonstrated that the Turing machine is an artificial anticipator. The Turing machine is a rewriting system. The head reads and writes, modifying the content of the tape. The information is destroyed during the execution of the program. This is an irreversible process. The input data are lost.

  18. Coding hazardous tree failures for a data management system

    Treesearch

    Lee A. Paine

    1978-01-01

    Codes for automatic data processing (ADP) are provided for hazardous tree failure data submitted on Report of Tree Failure forms. Definitions of data items and suggestions for interpreting ambiguously worded reports are also included. The manual is intended to ensure the production of accurate and consistent punched ADP cards which are used in transfer of the data to...

  19. Frequency-Accommodating Manchester Decoder

    NASA Technical Reports Server (NTRS)

    Vasquez, Mario J.

    1988-01-01

    No adjustment necessary to cover a 10:1 frequency range. Decoding circuit converts biphase-level pulse-code modulation to nonreturn-to-zero (NRZ)-level pulse-code modulation plus clock signal. Circuit accommodates input data rate of 50 to 500 kb/s. Tracks gradual changes in rate automatically, eliminating need for extra circuits and manual switching to adjust to different rates.
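
    Biphase-level (Manchester) decoding reduces to reading the mid-bit transition in each bit cell. A sketch assuming two aligned samples per bit cell and the IEEE 802.3 convention (low-to-high = 1); the circuit described above also recovers the clock and tracks the rate, which this toy omits:

```python
"""Decode biphase-level (Manchester) samples to NRZ bits.
Assumes two aligned samples per bit cell and the IEEE 802.3 convention."""

def manchester_decode(samples):
    bits = []
    for first, second in zip(samples[0::2], samples[1::2]):
        if (first, second) == (0, 1):
            bits.append(1)          # low-to-high mid-cell transition
        elif (first, second) == (1, 0):
            bits.append(0)          # high-to-low mid-cell transition
        else:
            raise ValueError("no mid-cell transition: lost bit synchronisation")
    return bits

encoded = [0, 1, 1, 0, 1, 0, 0, 1]    # two half-bit samples per bit
print(manchester_decode(encoded))      # [1, 0, 0, 1]
```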

  20. Automatic detection of white-light flare kernels in SDO/HMI intensitygrams

    NASA Astrophysics Data System (ADS)

    Mravcová, Lucia; Švanda, Michal

    2017-11-01

    Solar flares with a broadband emission in the white-light range of the electromagnetic spectrum belong to the most enigmatic phenomena on the Sun. The origin of the white-light emission is not entirely understood. We aim to systematically study the visible-light emission connected to solar flares in SDO/HMI observations. We developed a code for the automatic detection of kernels of flares with HMI intensity brightenings and studied the properties of the detected candidates. The code was tuned and tested and, with a little effort, could be applied to any suitable data set. By studying a few flare examples, we found indications that the HMI intensity brightening might be an artefact of the simplified procedure used to compute HMI observables.

  1. Handwriting training in Parkinson’s disease: A trade-off between size, speed and fluency

    PubMed Central

    Broeder, Sanne; Pereira, Marcelo P.; Swinnen, Stephan P.; Vandenberghe, Wim; Nieuwboer, Alice; Heremans, Elke

    2017-01-01

    Background: In previous work, we found that intensive amplitude training successfully improved micrographia in Parkinson’s disease (PD). Handwriting abnormalities in PD also express themselves in stroke duration and writing fluency. It is currently unknown whether training changes these dysgraphic features. Objective: To determine the differential effects of amplitude training on various hallmarks of handwriting abnormalities in PD. Methods: We randomized 38 right-handed subjects in early to mid-stage of PD into an experimental group (n = 18), receiving training focused at improving writing size during 30 minutes/day, five days/week for six weeks, and a placebo group (n = 20), receiving stretch and relaxation exercises at equal intensity. Writing skills were assessed using a touch-sensitive tablet pre- and post-training, and after a six-week retention period. Tests encompassed a transfer task, evaluating trained and untrained sequences, and an automatization task, comparing single- and dual-task handwriting. Outcome parameters were stroke duration (s), writing velocity (cm/s) and normalized jerk (i.e. fluency). Results: In contrast to the reported positive effects of training on writing size, the current results showed increases in stroke duration and normalized jerk after amplitude training, which were absent in the placebo group. These increases remained after the six-week retention period. In contrast, velocity remained unchanged throughout the study. Conclusion: While intensive amplitude training is beneficial to improve writing size in PD, it comes at a cost as fluency and stroke duration deteriorated after training. The findings imply that PD patients can redistribute movement priorities after training within a compromised motor system. PMID:29272301

  2. Handwriting training in Parkinson's disease: A trade-off between size, speed and fluency.

    PubMed

    Nackaerts, Evelien; Broeder, Sanne; Pereira, Marcelo P; Swinnen, Stephan P; Vandenberghe, Wim; Nieuwboer, Alice; Heremans, Elke

    2017-01-01

    In previous work, we found that intensive amplitude training successfully improved micrographia in Parkinson's disease (PD). Handwriting abnormalities in PD also express themselves in stroke duration and writing fluency. It is currently unknown whether training changes these dysgraphic features. To determine the differential effects of amplitude training on various hallmarks of handwriting abnormalities in PD. We randomized 38 right-handed subjects in early to mid-stage of PD into an experimental group (n = 18), receiving training focused at improving writing size during 30 minutes/day, five days/week for six weeks, and a placebo group (n = 20), receiving stretch and relaxation exercises at equal intensity. Writing skills were assessed using a touch-sensitive tablet pre- and post-training, and after a six-week retention period. Tests encompassed a transfer task, evaluating trained and untrained sequences, and an automatization task, comparing single- and dual-task handwriting. Outcome parameters were stroke duration (s), writing velocity (cm/s) and normalized jerk (i.e. fluency). In contrast to the reported positive effects of training on writing size, the current results showed increases in stroke duration and normalized jerk after amplitude training, which were absent in the placebo group. These increases remained after the six-week retention period. In contrast, velocity remained unchanged throughout the study. While intensive amplitude training is beneficial to improve writing size in PD, it comes at a cost as fluency and stroke duration deteriorated after training. The findings imply that PD patients can redistribute movement priorities after training within a compromised motor system.

  3. Focus of attention and automaticity in handwriting.

    PubMed

    MacMahon, Clare; Charness, Neil

    2014-04-01

    This study investigated the nature of automaticity in everyday tasks by testing handwriting performance under single and dual-task conditions. Item familiarity and hand dominance were also manipulated to understand both cognitive and motor components of the task. In line with previous literature, performance was superior in an extraneous focus of attention condition compared to two different skill focus conditions. This effect was found only when writing with the dominant hand. In addition, performance was superior for high familiarity compared to low familiarity items. These findings indicate that motor and cognitive familiarity are related to the degree of automaticity of motor skills and can be manipulated to produce different performance outcomes. The findings also imply that the progression of skill acquisition from novel to novice to expert levels can be traced using different dual-task conditions. The separation of motor and cognitive familiarity is a new approach in the handwriting domain, and provides insight into the nature of attentional demands during performance. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Automatically producing tailored web materials for public administration

    NASA Astrophysics Data System (ADS)

    Colineau, Nathalie; Paris, Cécile; Vander Linden, Keith

    2013-06-01

    Public administration organizations commonly produce citizen-focused, informational materials describing public programs and the conditions under which citizens or citizen groups are eligible for these programs. The organizations write these materials for generic audiences because of the excessive human resource costs that would be required to produce personalized materials for everyone. Unfortunately, generic materials tend to be longer and harder to understand than materials tailored for particular citizens. Our work explores the feasibility and effectiveness of automatically producing tailored materials. We have developed an adaptive hypermedia application system that automatically produces tailored informational materials and have evaluated it in a series of studies. The studies demonstrate that: (1) subjects prefer tailored materials over generic materials, even if the tailoring requires answering a set of demographic questions first; (2) tailored materials are more effective at supporting subjects in their task of learning about public programs; and (3) the time required to specify the demographic information on which the tailoring is based does not significantly slow down the subjects in their information seeking task.
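
    The tailoring step, selecting only the program descriptions a citizen is eligible for, can be sketched with predicate-based content selection. All program names and eligibility rules below are invented placeholders, not the system's actual content model:

```python
"""Toy content tailoring: keep only materials matching a citizen profile."""

# Each snippet carries an eligibility predicate (rules invented for illustration).
materials = [
    ("Child benefit overview", lambda p: p["children"] > 0),
    ("Seniors' transport card", lambda p: p["age"] >= 65),
    ("Rent assistance",         lambda p: p["income"] < 30000),
]

def tailor(profile):
    """Return only the titles whose eligibility predicate matches the profile."""
    return [title for title, eligible in materials if eligible(profile)]

print(tailor({"age": 70, "children": 0, "income": 45000}))  # ["Seniors' transport card"]
```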

  5. Automatic Generation of Algorithms for the Statistical Analysis of Planetary Nebulae Images

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd

    2004-01-01

    Analyzing data sets collected in experiments or by observations is a core scientific activity. Typically, experimental and observational data are fraught with uncertainty, and the analysis is based on a statistical model of the conjectured underlying processes. The large data volumes collected by modern instruments make computer support indispensable for this. Consequently, scientists spend significant amounts of their time on the development and refinement of data analysis programs. AutoBayes [GF+02, FS03] is a fully automatic synthesis system for generating statistical data analysis programs. Externally, it looks like a compiler: it takes an abstract problem specification and translates it into executable code. Its input is a concise description of a data analysis problem in the form of a statistical model as shown in Figure 1; its output is optimized and fully documented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Internally, however, it is quite different: AutoBayes derives a customized algorithm implementing the given model using a schema-based process, and then further refines and optimizes the algorithm into code. A schema is a parameterized code template with associated semantic constraints which define and restrict the template's applicability. The schema parameters are instantiated in a problem-specific way during synthesis as AutoBayes checks the constraints against the original model or, recursively, against emerging sub-problems. AutoBayes' schema library contains problem decomposition operators (which are justified by theorems in a formal logic in the domain of Bayesian networks) as well as machine learning algorithms (e.g., EM, k-Means) and numeric optimization methods (e.g., Nelder-Mead simplex, conjugate gradient). AutoBayes augments this schema-based approach by symbolic computation to derive closed-form solutions whenever possible. This is a major advantage over other statistical data analysis systems which use numerical approximations even in cases where closed-form solutions exist. AutoBayes is implemented in Prolog and comprises approximately 75,000 lines of code. In this paper, we take one typical scientific data analysis problem, analyzing planetary nebulae images taken by the Hubble Space Telescope, and show how AutoBayes can be used to automate the implementation of the necessary analysis programs. We initially follow the analysis described by Knuth and Hajian [KH02] and use AutoBayes to derive code for the published models. We show the details of the code derivation process, including the symbolic computations and automatic integration of library procedures, and compare the results of the automatically generated and manually implemented code. We then go beyond the original analysis and use AutoBayes to derive code for a simple image segmentation procedure based on a mixture model which can be used to automate a manual preprocessing step. Finally, we combine the original approach with the simple segmentation, which yields a more detailed analysis. This also demonstrates that AutoBayes makes it easy to combine different aspects of data analysis.

  6. Retrieving definitional content for ontology development.

    PubMed

    Smith, L; Wilbur, W J

    2004-12-01

    Ontology construction requires an understanding of the meaning and usage of its encoded concepts. While definitions found in dictionaries or glossaries may be adequate for many concepts, the actual usage in expert writing could be a better source of information for many others. The goal of this paper is to describe an automated procedure for finding definitional content in expert writing. The approach uses machine learning on phrasal features to learn when sentences in a book contain definitional content, as determined by their similarity to glossary definitions provided in the same book. The end result is not a concise definition of a given concept, but for each sentence, a predicted probability that it contains information relevant to a definition. The approach is evaluated automatically for terms with explicit definitions, and manually for terms with no available definition.
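
    A minimal sketch of this kind of pipeline, assuming bag-of-n-gram features and a scikit-learn logistic regression (the paper's phrasal features and learner may differ):

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # Invented training data: label 1 if the sentence matched a glossary
      # definition in the same book, else 0.
      sentences = ["An ontology is a formal specification of a conceptualization.",
                   "We ran the experiments on a single workstation."]
      labels = [1, 0]

      # Phrasal features approximated here by word n-grams.
      model = make_pipeline(CountVectorizer(ngram_range=(1, 3)),
                            LogisticRegression(max_iter=1000))
      model.fit(sentences, labels)

      # Per-sentence predicted probability of definitional content.
      new = ["A glossary is a list of terms with their meanings."]
      print(model.predict_proba(new)[0][1])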

  7. Tectonic Summaries for Web-served Earthquake Responses, Southeastern North America

    USGS Publications Warehouse

    Wheeler, Russell L.

    2003-01-01

    This report documents the rationale and strategy used to write short summaries of the seismicity and tectonic settings of domains in southeastern North America. The summaries are used in automated responses to notable earthquakes that occur anywhere east of the Rocky Mountains in the United States or Canada. Specifically, the report describes the geologic and tectonic information, data sources, criteria, and reasoning used to determine the content and format of the summaries, for the benefit of geologists or seismologists who may someday need to revise the summaries or write others. These tectonic summaries are designed to be automatically posted on the World Wide Web as soon as an earthquake's epicenter is determined. The summaries are part of a larger collection of summaries that is planned to cover the world.

  8. Seasonal cultivated and fallow cropland mapping using MODIS-based automated cropland classification algorithm

    USGS Publications Warehouse

    Wu, Zhuoting; Thenkabail, Prasad S.; Mueller, Rick; Zakzeski, Audra; Melton, Forrest; Johnson, Lee; Rosevelt, Carolyn; Dwyer, John; Jones, Jeanine; Verdin, James P.

    2014-01-01

    Increasing drought occurrences and growing populations demand accurate, routine, and consistent cultivated and fallow cropland products to enable water and food security analysis. The overarching goal of this research was to develop and test an automated cropland classification algorithm (ACCA) that provides accurate, consistent, and repeatable information on seasonal cultivated as well as seasonal fallow cropland extents and areas, based on Moderate Resolution Imaging Spectroradiometer remote sensing data. The seasonal ACCA development process involves writing a series of iterative decision tree codes to separate cultivated and fallow croplands from noncroplands, aiming to accurately mirror reliable reference data sources. A pixel-by-pixel accuracy assessment, when compared with the U.S. Department of Agriculture (USDA) cropland data, showed, on average, a producer's accuracy of 93% and a user's accuracy of 85% across all months. Further, ACCA-derived cropland maps agreed well with the USDA Farm Service Agency crop acreage-reported data for both cultivated and fallow croplands, with R-square values over 0.7, and with field surveys with an accuracy of ≥95% for cultivated croplands and ≥76% for fallow croplands. Our results demonstrated the ability of ACCA to generate cropland products, such as cultivated and fallow cropland extents and areas, accurately, automatically, and repeatedly throughout the growing season.
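
    The iterative decision-tree rules described here amount to nested threshold tests on per-pixel time series; the fragment below is a toy illustration with invented NDVI features and thresholds, not the published ACCA rules:

      def classify_pixel(ndvi_peak, ndvi_mean):
          # Toy ACCA-style rules for one MODIS pixel; thresholds and
          # features are illustrative placeholders, not the published rules.
          if ndvi_peak < 0.30:              # too little greenness for crops
              return "noncropland"
          if ndvi_mean < 0.25:              # croppable land left idle this season
              return "fallow cropland"
          return "cultivated cropland"      # sustained seasonal greenness

      print(classify_pixel(ndvi_peak=0.62, ndvi_mean=0.41))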

  9. "It's hard to plan your day when you have no money": discouraged workers' occupational possibilities and the need to reconceptualize routine.

    PubMed

    Aldrich, Rebecca M; Dickie, Virginia A

    2013-01-01

    This paper presents daily routine as a justice-related concern for unemployed people, based on an ethnographic study of discouraged workers. Four women and one man who wanted to work but had ceased searching for jobs, and 25 community members whose jobs served the unemployed community, participated in the study. Ethnographic methodology--including participant observation, semi-structured and unstructured interviews, and document reviews--and the Occupational Questionnaire were used to gather data for 10 months in a rural North Carolina town. Data analysis included open and focused coding via the Atlas.ti software as well as participant review of findings and writings. Routines need to be seen as negotiated, resource-driven products of experience rather than automatic structures for daily living. Scholars and practitioners must acknowledge that the presence or absence of routine not only relates to resource use but also influences unemployed people's occupational possibilities. To address unjust expectations about unemployed people's occupational possibilities, scholars must examine the uncertain, negotiated nature of daily routine and its function as a foundation for occupational engagement. Thus, it may be helpful to view routine as both a prerequisite of occupation and a way that existing occupations are organized.

  10. Seasonal cultivated and fallow cropland mapping using MODIS-based automated cropland classification algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Zhuoting; Thenkabail, Prasad S.; Mueller, Rick; Zakzeski, Audra; Melton, Forrest; Johnson, Lee; Rosevelt, Carolyn; Dwyer, John; Jones, Jeanine; Verdin, James P.

    2014-01-01

    Increasing drought occurrences and growing populations demand accurate, routine, and consistent cultivated and fallow cropland products to enable water and food security analysis. The overarching goal of this research was to develop and test an automated cropland classification algorithm (ACCA) that provides accurate, consistent, and repeatable information on seasonal cultivated as well as seasonal fallow cropland extents and areas, based on Moderate Resolution Imaging Spectroradiometer remote sensing data. The seasonal ACCA development process involves writing a series of iterative decision tree codes to separate cultivated and fallow croplands from noncroplands, aiming to accurately mirror reliable reference data sources. A pixel-by-pixel accuracy assessment, when compared with the U.S. Department of Agriculture (USDA) cropland data, showed, on average, a producer's accuracy of 93% and a user's accuracy of 85% across all months. Further, ACCA-derived cropland maps agreed well with the USDA Farm Service Agency crop acreage-reported data for both cultivated and fallow croplands, with R-square values over 0.7, and with field surveys with an accuracy of ≥95% for cultivated croplands and ≥76% for fallow croplands. Our results demonstrated the ability of ACCA to generate cropland products, such as cultivated and fallow cropland extents and areas, accurately, automatically, and repeatedly throughout the growing season.

  11. An Experiment in Scientific Code Semantic Analysis

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.

    1998-01-01

    This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, distributed expert parsers. These semantic parsers are designed to recognize formulae in different disciplines including physical and mathematical formulae and geometrical position in a numerical scheme. The parsers will automatically recognize and document some static, semantic concepts and locate some program semantic errors. Results are shown for a subroutine test case and a collection of combustion code routines. This ability to locate some semantic errors and document semantic concepts in scientific and engineering code should reduce the time, risk, and effort of developing and using these codes.
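
    To make the idea concrete, a toy semantic checker: physical dimensions are attached to primitive variables and propagated through expressions, so that dimensionally inconsistent additions are flagged much as the paper's parsers flag semantic errors. This sketch is illustrative only, not the paper's system:

      # Dimensions as (mass, length, time) exponent tuples.
      VELOCITY, TIME, LENGTH = (0, 1, -1), (0, 0, 1), (0, 1, 0)

      def mul(a, b):                        # dimensions multiply by adding exponents
          return tuple(x + y for x, y in zip(a, b))

      def add(a, b):                        # addition requires equal dimensions
          if a != b:
              raise TypeError(f"semantic error: adding {a} to {b}")
          return a

      assert mul(VELOCITY, TIME) == LENGTH  # distance = velocity * time: consistent
      try:
          add(LENGTH, TIME)                 # distance + time: flagged
      except TypeError as err:
          print(err)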

  12. Maclisp extensions

    NASA Technical Reports Server (NTRS)

    Bawden, A.; Burke, G. S.; Hoffman, C. W.

    1981-01-01

    A common subset of selected facilities available in Maclisp and its derivatives (PDP-10 and Multics Maclisp, Lisp Machine Lisp (Zetalisp), and NIL) is described. The object is to aid in writing code which can run compatibly in more than one of these environments.

  13. NASA Electronic Library System (NELS) optimization

    NASA Technical Reports Server (NTRS)

    Pribyl, William L.

    1993-01-01

    This is a compilation of NELS (NASA Electronic Library System) Optimization progress/problem, interim, and final reports for all phases. The NELS database was examined, particularly with respect to memory, disk contention, and CPU usage, to discover bottlenecks. Methods to increase the speed of NELS code were investigated. The tasks included restructuring the existing code to interact with other components more effectively. Error-reporting code was added to help detect and remove bugs in NELS. Report writing tools were recommended to integrate with the ASV3 system. The Oracle database management system and tools were to be installed on a Sun workstation, intended for demonstration purposes.

  14. Automatic translation of MPI source into a latency-tolerant, data-driven form

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Tan; Cicotti, Pietro; Bylaska, Eric

    Hiding communication behind useful computation is an important performance programming technique but remains an inscrutable programming exercise even for the expert. We present Bamboo, a code transformation framework that can realize communication overlap in applications written in MPI without the need to intrusively modify the source code. We reformulate MPI source into a task dependency graph representation, which partially orders the tasks, enabling the program to execute in a data-driven fashion under the control of an external runtime system. Experimental results demonstrate that Bamboo significantly reduces communication delays while requiring only modest amounts of programmer annotation for a variety of applications and platforms, including those employing co-processors and accelerators. Moreover, Bamboo's performance meets or exceeds that of labor-intensive hand coding. As a result, the translator is more than a means of hiding communication costs automatically; it demonstrates the utility of semantic level optimization against a well-known library.
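
    The data-driven execution model can be pictured as a task dependency graph whose tasks fire as soon as their inputs are ready; the toy runner below illustrates the idea (names and structure are hypothetical and unrelated to Bamboo's actual runtime):

      from collections import defaultdict, deque

      def run_taskgraph(tasks, deps):
          # tasks: name -> callable; deps: name -> prerequisite names.
          # A task fires once its prerequisites complete, the way
          # data-driven runtimes overlap communication with computation.
          remaining = {t: len(deps.get(t, [])) for t in tasks}
          dependents = defaultdict(list)
          for t, ds in deps.items():
              for d in ds:
                  dependents[d].append(t)
          ready = deque(t for t, n in remaining.items() if n == 0)
          while ready:
              t = ready.popleft()
              tasks[t]()                     # a real runtime may run these concurrently
              for nxt in dependents[t]:
                  remaining[nxt] -= 1
                  if remaining[nxt] == 0:
                      ready.append(nxt)

      run_taskgraph(
          {"halo_exchange": lambda: print("exchange ghost cells"),
           "interior": lambda: print("compute interior (overlaps exchange)"),
           "boundary": lambda: print("compute boundary after exchange")},
          {"boundary": ["halo_exchange"]})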

  15. Automatic translation of MPI source into a latency-tolerant, data-driven form

    DOE PAGES

    Nguyen, Tan; Cicotti, Pietro; Bylaska, Eric; ...

    2017-03-06

    Hiding communication behind useful computation is an important performance programming technique but remains an inscrutable programming exercise even for the expert. We present Bamboo, a code transformation framework that can realize communication overlap in applications written in MPI without the need to intrusively modify the source code. We reformulate MPI source into a task dependency graph representation, which partially orders the tasks, enabling the program to execute in a data-driven fashion under the control of an external runtime system. Experimental results demonstrate that Bamboo significantly reduces communication delays while requiring only modest amounts of programmer annotation for a variety of applications and platforms, including those employing co-processors and accelerators. Moreover, Bamboo's performance meets or exceeds that of labor-intensive hand coding. As a result, the translator is more than a means of hiding communication costs automatically; it demonstrates the utility of semantic level optimization against a well-known library.

  16. Composing Data Parallel Code for a SPARQL Graph Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellana, Vito G.; Tumeo, Antonino; Villa, Oreste

    Big data analytics process large amounts of data to extract knowledge from them. Semantic databases are big data applications that adopt the Resource Description Framework (RDF) to structure metadata through a graph-based representation. The graph-based representation provides several benefits, such as the possibility to perform in-memory processing with large amounts of parallelism. SPARQL is a language used to perform queries on RDF-structured data through graph matching. In this paper we present a tool that automatically translates SPARQL queries to parallel graph crawling and graph matching operations. The tool also supports complex SPARQL constructs, which require more than basic graph matching for their implementation. The tool generates parallel code annotated with OpenMP pragmas for x86 shared-memory multiprocessors (SMPs). With respect to commercial database systems such as Virtuoso, our approach reduces memory occupation due to join operations and provides higher performance. We show the scaling of the automatically generated graph-matching code on a 48-core SMP.
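
    At its core, answering a basic SPARQL graph pattern means matching triple patterns against the store and joining variable bindings; the sequential toy matcher below illustrates the operation that the generated OpenMP code parallelizes (data and names are invented):

      # Tiny RDF store and basic-graph-pattern matcher. Variables start with '?'.
      triples = {("alice", "knows", "bob"), ("bob", "knows", "carol"),
                 ("alice", "worksAt", "acme")}

      def match(pattern, bindings):
          # Yield bindings extended by one triple pattern; this loop over the
          # store is what a parallel implementation would distribute.
          for triple in triples:
              b = dict(bindings)
              ok = True
              for p, v in zip(pattern, triple):
                  if p.startswith("?"):
                      if b.setdefault(p, v) != v:
                          ok = False
                          break
                  elif p != v:
                      ok = False
                      break
              if ok:
                  yield b

      # SELECT ?x ?y WHERE { ?x knows ?y . ?y knows carol }
      results = [b2 for b1 in match(("?x", "knows", "?y"), {})
                    for b2 in match(("?y", "knows", "carol"), b1)]
      print(results)   # [{'?x': 'alice', '?y': 'bob'}]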

  17. Effect of normal aging and of Alzheimer's disease on, episodic memory.

    PubMed

    Le Moal, S; Reymann, J M; Thomas, V; Cattenoz, C; Lieury, A; Allain, H

    1997-01-01

    Performances of 12 patients with Alzheimer's disease (AD), 15 healthy elderly subjects and 20 young healthy volunteers were compared on two episodic memory tests. The first, a learning test of semantically related words, enabled an assessment of the effect of semantic relationships on word learning by controlling the encoding and retrieval processes. The second, a dual coding test, assessed the automatic processes operating during the encoding of drawings. The results obtained demonstrated quantitative and qualitative differences between the populations. Manifestations of episodic memory deficit in AD patients were shown not only by lower performance scores than in elderly controls, but also by the lack of any effect of semantic cues and the production of a large number of extra-list intrusions. Automatic processes underlying dual coding appear to be spared in AD, although more time is needed to process information than in young or elderly subjects. These findings confirm former data and emphasize the preservation of certain memory processes (dual coding) in AD which could be used in future therapeutic approaches.

  18. Automatic translation of MPI source into a latency-tolerant, data-driven form

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Tan; Cicotti, Pietro; Bylaska, Eric

    Hiding communication behind useful computation is an important performance programming technique but remains an inscrutable programming exercise even for the expert. We present Bamboo, a code transformation framework that can realize communication overlap in applications written in MPI without the need to intrusively modify the source code. Bamboo reformulates MPI source into the form of a task dependency graph that expresses a partial ordering among tasks, enabling the program to execute in a data-driven fashion under the control of an external runtime system. Experimental results demonstrate that Bamboo significantly reduces communication delays while requiring only modest amounts of programmer annotation for a variety of applications and platforms, including those employing co-processors and accelerators. Moreover, Bamboo's performance meets or exceeds that of labor-intensive hand coding. The translator is more than a means of hiding communication costs automatically; it demonstrates the utility of semantic level optimization against a well-known library.

  19. Analysis of automatic repeat request methods for deep-space downlinks

    NASA Technical Reports Server (NTRS)

    Pollara, F.; Ekroot, L.

    1995-01-01

    Automatic repeat request (ARQ) methods cannot increase the capacity of a memoryless channel. However, they can be used to decrease the complexity of the channel-coding system to achieve essentially error-free transmission and to reduce link margins when the channel characteristics are poorly predictable. This article considers ARQ methods on a power-limited channel (e.g., the deep-space channel), where it is important to minimize the total power needed to transmit the data, as opposed to a bandwidth-limited channel (e.g., terrestrial data links), where the spectral efficiency or the total required transmission time is the most relevant performance measure. In the analysis, we compare the performance of three reference concatenated coded systems used in actual deep-space missions to that obtainable by ARQ methods using the same codes, in terms of required power, time to transmit with a given number of retransmissions, and achievable probability of word error. The ultimate limits of ARQ with an arbitrary number of retransmissions are also derived.
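
    For intuition, under the simplifying assumptions of independent transmission attempts and an ideal feedback channel, the number of ARQ attempts is geometric, which gives a quick back-of-envelope cost estimate (this is not the article's exact system model):

      def arq_expected_cost(p_word_error, word_energy):
          # Idealized ARQ with unlimited retransmissions: i.i.d. attempts
          # and perfect feedback make the attempt count geometric, so the
          # mean number of transmissions is 1 / (1 - p).
          expected_tx = 1.0 / (1.0 - p_word_error)
          return expected_tx, expected_tx * word_energy

      tx, energy = arq_expected_cost(p_word_error=0.1, word_energy=1.0)
      print(f"{tx:.2f} transmissions on average, {energy:.2f} energy units")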

  20. Management of natural resources through automatic cartographic inventory

    NASA Technical Reports Server (NTRS)

    Rey, P. A.; Gourinard, Y.; Cambou, F. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. Significant correspondence codes relating ERTS imagery to ground truth from vegetation and geology maps have been established. The use of color equidensity and color composite methods for selecting zones of equal densitometric value on ERTS imagery was perfected. The primary interest of the temporal color composite method is stressed. A chain of transfer operations from ERTS imagery to the automatic mapping of natural resources was developed.

  1. Combining Open-Source Packages for Planetary Exploration

    NASA Astrophysics Data System (ADS)

    Schmidt, Albrecht; Grieger, Björn; Völk, Stefan

    2015-04-01

    The science planning of the ESA Rosetta mission has presented challenges which were addressed by combining various open-source software packages, such as the SPICE toolkit, the Python language and the Web graphics library three.js. The challenge was to compute certain parameters from a pool of trajectories and (possible) attitudes to describe the behaviour of the spacecraft. To be able to do this declaratively and efficiently, a C library was implemented that makes the SPICE toolkit's geometrical computations accessible from the Python language and processes as much data as possible during one subroutine call. To minimise the lines of code one has to write, special care was taken to ensure that the bindings were idiomatic and thus integrate well into the Python language and ecosystem. When done well, this greatly simplifies the structure of the code and facilitates testing for correctness by automatic test suites and visual inspections. For rapid visualisation and confirmation of correctness of results, the geometries were visualised with the three.js library, a popular Javascript library for displaying three-dimensional graphics in a Web browser. Programmatically, this was achieved by generating data files from SPICE sources that were included into templated HTML and displayed by a browser, thus made easily accessible to interested parties at large. As feedback came in and new ideas were to be explored, the authors benefited greatly from the design of the Python-to-SPICE library, which allowed the expression of algorithms to be concise and easier to communicate. In summary, by combining several well-established open-source tools, we were able to put together a flexible computation and visualisation environment that helped communicate and build confidence in planning ideas.
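
    Readers who want to reproduce a similar Python-to-SPICE workflow can use the community spiceypy bindings as an analogue of the authors' custom C library; the sketch below uses placeholder kernel and body names:

      import spiceypy as spice

      # Placeholder meta-kernel: it would list the leapsecond, clock,
      # and ephemeris kernels for the mission geometry in question.
      spice.furnsh("metakernel.tm")

      # Spacecraft position relative to the comet at one UTC epoch.
      et = spice.str2et("2015-04-01T12:00:00")
      pos, light_time = spice.spkpos("ROSETTA", et, "J2000", "NONE",
                                     "CHURYUMOV-GERASIMENKO")
      print(pos)   # km in the J2000 frame

      spice.kclear()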

  2. Jupyter Notebooks as tools for interactive learning of Concepts in Structural Geology and efficient grading of exercises.

    NASA Astrophysics Data System (ADS)

    Niederau, Jan; Wellmann, Florian; Maersch, Jannik; Urai, Janos

    2017-04-01

    Programming is increasingly recognised as an important skill for geoscientists; however, the hurdle of getting started with programming can be high for students with little or no experience. We present here teaching concepts on the basis of Jupyter notebooks that combine, in an intuitive way, formatted instruction text with code cells in a single environment. This integration allows for an exposure to programming on several levels: from a complete interactive presentation of content, where students require no or very limited programming experience, to highly complex geoscientific computations. We therefore consider these notebooks an ideal medium for presenting computational content to students in the field of geosciences. We show here how we use these notebooks to develop digital documents in Python for undergraduate students, who can then learn about basic concepts in structural geology via self-assessment. Such notebooks cover concepts such as the stress tensor, the strain ellipse, and the Mohr circle. Students can interactively change parameters, e.g. by using sliders, and immediately see the results. They can further experiment and extend the notebook by writing their own code within the notebook. Jupyter notebooks for teaching purposes can be provided ready-to-use via online services; that is, students do not need to install additional software on their devices in order to work with the notebooks. We also use Jupyter notebooks for automatic grading of programming assignments in multiple lectures. An implemented workflow facilitates the generation and distribution of assignments, as well as the final grading. Compared to previous grading methods with a high percentage of repetitive manual grading, the implemented workflow proves to be much more time efficient.
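
    A notebook cell in the spirit described, assuming the ipywidgets package, might expose the resolved stresses on a plane through sliders (a minimal sketch, not the course's actual notebook):

      import math
      from ipywidgets import interact

      def stresses_on_plane(sigma1=100.0, sigma2=40.0, theta_deg=30):
          # Normal and shear stress on a plane whose normal makes an
          # angle theta with sigma1: a point on the Mohr circle.
          t = math.radians(theta_deg)
          mean, dev = (sigma1 + sigma2) / 2, (sigma1 - sigma2) / 2
          print(f"sigma_n = {mean + dev * math.cos(2 * t):.1f}, "
                f"tau = {dev * math.sin(2 * t):.1f}")

      # Sliders appear automatically; students drag theta and watch
      # the resolved stresses move around the Mohr circle.
      interact(stresses_on_plane,
               sigma1=(0.0, 200.0), sigma2=(0.0, 200.0), theta_deg=(0, 90))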

  3. Turbulence modeling for hypersonic flight

    NASA Technical Reports Server (NTRS)

    Bardina, Jorge E.

    1992-01-01

    The objective of the present work is to develop, verify, and incorporate two-equation turbulence models which account for the effect of compressibility at high speeds into a three-dimensional Reynolds-averaged Navier-Stokes code and to provide documented model descriptions and numerical procedures so that they can be implemented into the National Aerospace Plane (NASP) codes. A summary of accomplishments is listed: (1) Four codes have been tested and evaluated against a flat plate boundary layer flow and an external supersonic flow; (2) a code named RANS was chosen because of its speed, accuracy, and versatility; (3) the code was extended from thin boundary layer to full Navier-Stokes; (4) the k-omega two-equation turbulence model has been implemented into the base code; (5) a 24 degree laminar compression corner flow has been simulated and compared to other numerical simulations; and (6) work is in progress on writing up the numerical method of the base code, including the turbulence model.

  4. Conversion of the agent-oriented domain-specific language ALAS into JavaScript

    NASA Astrophysics Data System (ADS)

    Sredojević, Dejan; Vidaković, Milan; Okanović, Dušan; Mitrović, Dejan; Ivanović, Mirjana

    2016-06-01

    This paper presents the generation of JavaScript code from code written in the agent-oriented domain-specific language ALAS. ALAS is an agent-oriented domain-specific language for writing software agents that are executed within the XJAF middleware. Since the agents can be executed on various platforms, they must be converted into a language of the target platform. We also utilize existing tools and technologies to make the whole conversion process as simple, fast, and efficient as possible. We use the Xtext framework, which is compatible with Java, to implement the ALAS infrastructure: the editor and the code generator. Since Xtext supports Java, generation of Java code from ALAS code is straightforward. To generate JavaScript code that will be executed within the target JavaScript XJAF implementation, the Google Web Toolkit (GWT) is used.

  5. Electrooculography-based continuous eye-writing recognition system for efficient assistive communication systems

    PubMed Central

    Shinozaki, Takahiro

    2018-01-01

    Human-computer interface systems whose input is based on eye movements can serve as a means of communication for patients with locked-in syndrome. Eye-writing is one such system; users can input characters by moving their eyes to follow the lines of the strokes corresponding to characters. Although this input method makes it easy for patients to get started because of their familiarity with handwriting, existing eye-writing systems suffer from slow input rates because they require a pause between input characters to simplify the automatic recognition process. In this paper, we propose a continuous eye-writing recognition system that achieves a rapid input rate because it accepts characters eye-written continuously, with no pauses. For recognition purposes, the proposed system first detects eye movements using electrooculography (EOG), and then a hidden Markov model (HMM) is applied to model the EOG signals and recognize the eye-written characters. Additionally, this paper investigates an EOG adaptation that uses a deep neural network (DNN)-based HMM. Experiments with six participants showed an average input speed of 27.9 characters/min using Japanese Katakana as the input target characters. A Katakana character-recognition error rate of only 5.0% was achieved using 13.8 minutes of adaptation data. PMID:29425248
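
    The HMM stage can be prototyped with the hmmlearn package; the sketch below trains one Gaussian HMM per character on synthetic feature sequences and classifies by likelihood, an isolated-character simplification of the paper's continuous decoding:

      import numpy as np
      from hmmlearn import hmm

      rng = np.random.default_rng(0)

      def train_char_model(samples):
          # One Gaussian HMM per character, fit on that character's
          # EOG feature sequences (here: synthetic 2-D stand-ins).
          model = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                                  n_iter=50, random_state=0)
          model.fit(np.concatenate(samples), lengths=[len(s) for s in samples])
          return model

      # Invented feature sequences for three "characters" a, b, c.
      models = {c: train_char_model([rng.normal(i, 0.3, size=(20, 2))
                                     for _ in range(5)])
                for i, c in enumerate("abc")}

      test = rng.normal(1, 0.3, size=(20, 2))   # resembles character 'b'
      print(max(models, key=lambda c: models[c].score(test)))  # expected: b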

  6. Writing Problems in Developmental Dyslexia: Under-Recognized and Under-Treated

    PubMed Central

    Berninger, Virginia W.; Nielsen, Kathleen H.; Abbott, Robert D.; Wijsman, Ellen; Raskind, Wendy

    2008-01-01

    The International Dyslexia Association defines dyslexia as unexpected problems of neurobiological origin in accuracy and rate of oral reading of single real words, single pseudowords, or text or of written spelling. However, prior research has focused more on the reading than the spelling problems of students with dyslexia. A test battery was administered to 122 children who met inclusion criteria for dyslexia and qualified their families for participation in a family genetics study that has been ongoing for over a decade. Their parents completed the same test battery. Although a past structural equation modeling study of typically developing children identified a significant path from handwriting to composition quality, the current structural equation modeling study identified a significant path from spelling to composition for children and their parents with dyslexia. Grapho-motor planning did not contribute uniquely to their composition, showing that writing is not just a motor skill. Students with dyslexia do have a problem in automatic letter writing and naming, which was related to impaired inhibition and verbal fluency, and may explain their spelling problems. Results are discussed in reference to the importance of providing explicit instruction in the phonological, orthographic, and morphological processes of spelling and in composition to students with dyslexia and not only offering accommodation for their writing problems. PMID:18438452

  7. What are they thinking? Automated analysis of student writing about acid-base chemistry in introductory biology.

    PubMed

    Haudek, Kevin C; Prevost, Luanna B; Moscarella, Rosa A; Merrill, John; Urban-Lurain, Mark

    2012-01-01

    Students' writing can provide better insight into their thinking than can multiple-choice questions. However, resource constraints often prevent faculty from using writing assessments in large undergraduate science courses. We investigated the use of computer software to analyze student writing and to uncover student ideas about chemistry in an introductory biology course. Students were asked to predict acid-base behavior of biological functional groups and to explain their answers. Student explanations were rated by two independent raters. Responses were also analyzed using SPSS Text Analysis for Surveys and a custom library of science-related terms and lexical categories relevant to the assessment item. These analyses revealed conceptual connections made by students, student difficulties explaining these topics, and the heterogeneity of student ideas. We validated the lexical analysis by correlating student interviews with the lexical analysis. We used discriminant analysis to create classification functions that identified seven key lexical categories that predict expert scoring (interrater reliability with experts = 0.899). This study suggests that computerized lexical analysis may be useful for automatically categorizing large numbers of student open-ended responses. Lexical analysis provides instructors unique insights into student thinking and a whole-class perspective that are difficult to obtain from multiple-choice questions or reading individual responses.
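
    A rough open-source analogue of the lexical-category pipeline (the category word lists, data, and classifier choice below are invented for illustration; the study used SPSS Text Analysis and discriminant analysis):

      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      CATEGORIES = {                      # hypothetical lexical categories
          "protonation": {"proton", "donate", "accept", "hydrogen"},
          "charge": {"charge", "positive", "negative", "ion"},
      }

      def category_counts(response):
          words = set(response.lower().split())
          return [len(words & terms) for terms in CATEGORIES.values()]

      responses = ["The amine accepts a proton and becomes positive",
                   "The carboxyl group can donate a hydrogen ion",
                   "It just dissolves in water",
                   "Because chemistry says so"]
      expert_scores = [1, 1, 0, 0]        # 1 = correct reasoning, per raters

      clf = LinearDiscriminantAnalysis()
      clf.fit([category_counts(r) for r in responses], expert_scores)
      print(clf.predict([category_counts("the ion donates a hydrogen")]))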

  8. What Are They Thinking? Automated Analysis of Student Writing about Acid–Base Chemistry in Introductory Biology

    PubMed Central

    Haudek, Kevin C.; Prevost, Luanna B.; Moscarella, Rosa A.; Merrill, John; Urban-Lurain, Mark

    2012-01-01

    Students’ writing can provide better insight into their thinking than can multiple-choice questions. However, resource constraints often prevent faculty from using writing assessments in large undergraduate science courses. We investigated the use of computer software to analyze student writing and to uncover student ideas about chemistry in an introductory biology course. Students were asked to predict acid–base behavior of biological functional groups and to explain their answers. Student explanations were rated by two independent raters. Responses were also analyzed using SPSS Text Analysis for Surveys and a custom library of science-related terms and lexical categories relevant to the assessment item. These analyses revealed conceptual connections made by students, student difficulties explaining these topics, and the heterogeneity of student ideas. We validated the lexical analysis by correlating student interviews with the lexical analysis. We used discriminant analysis to create classification functions that identified seven key lexical categories that predict expert scoring (interrater reliability with experts = 0.899). This study suggests that computerized lexical analysis may be useful for automatically categorizing large numbers of student open-ended responses. Lexical analysis provides instructors unique insights into student thinking and a whole-class perspective that are difficult to obtain from multiple-choice questions or reading individual responses. PMID:22949425

  9. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1994-01-01

    The straightforward automatic-differentiation and the hand-differentiated incremental iterative methods are interwoven to produce a hybrid scheme that captures some of the strengths of each strategy. With this compromise, discrete aerodynamic sensitivity derivatives are calculated with the efficient incremental iterative solution algorithm of the original flow code. Moreover, the principal advantage of automatic differentiation is retained (i.e., all complicated source code for the derivative calculations is constructed quickly with accuracy). The basic equations for second-order sensitivity derivatives are presented; four methods are compared. Each scheme requires that large systems are solved first for the first-order derivatives and, in all but one method, for the first-order adjoint variables. Of these latter three schemes, two require no solutions of large systems thereafter. For the other two for which additional systems are solved, the equations and solution procedures are analogous to those for the first order derivatives. From a practical viewpoint, implementation of the second-order methods is feasible only with software tools such as automatic differentiation, because of the extreme complexity and large number of terms. First- and second-order sensitivities are calculated accurately for two airfoil problems, including a turbulent flow example; both geometric-shape and flow-condition design variables are considered. Several methods are tested; results are compared on the basis of accuracy, computational time, and computer memory. For first-order derivatives, the hybrid incremental iterative scheme obtained with automatic differentiation is competitive with the best hand-differentiated method; for six independent variables, it is at least two to four times faster than central finite differences and requires only 60 percent more memory than the original code; the performance is expected to improve further in the future.
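
    Automatic differentiation itself rests on propagating derivatives alongside values via the chain rule; a self-contained forward-mode sketch with dual numbers, far simpler than the production AD tools such work relies on:

      class Dual:
          # Dual number a + b*eps with eps**2 == 0: val carries f(x),
          # der carries f'(x); arithmetic applies the sum/product rules.
          def __init__(self, val, der=0.0):
              self.val, self.der = val, der
          def __add__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val + o.val, self.der + o.der)
          __radd__ = __add__
          def __mul__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val * o.val,
                          self.der * o.val + self.val * o.der)  # product rule
          __rmul__ = __mul__

      def f(x):
          return 3 * x * x + 2 * x + 1     # f'(x) = 6x + 2

      y = f(Dual(2.0, 1.0))                # seed dx/dx = 1
      print(y.val, y.der)                  # 17.0 14.0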

  10. Preparing a collection of radiology examinations for distribution and retrieval.

    PubMed

    Demner-Fushman, Dina; Kohli, Marc D; Rosenman, Marc B; Shooshan, Sonya E; Rodriguez, Laritza; Antani, Sameer; Thoma, George R; McDonald, Clement J

    2016-03-01

    Clinical documents made available for secondary use play an increasingly important role in discovery of clinical knowledge, development of research methods, and education. An important step in facilitating secondary use of clinical document collections is easy access to descriptions and samples that represent the content of the collections. This paper presents an approach to developing a collection of radiology examinations, including both the images and radiologist narrative reports, and making them publicly available in a searchable database. The authors collected 3996 radiology reports from the Indiana Network for Patient Care and 8121 associated images from the hospitals' picture archiving systems. The images and reports were de-identified automatically and then the automatic de-identification was manually verified. The authors coded the key findings of the reports and empirically assessed the benefits of manual coding on retrieval. The automatic de-identification of the narrative was aggressive and achieved 100% precision at the cost of rendering a few findings uninterpretable. Automatic de-identification of images was not quite as successful: images for two of 3996 patients (0.05%) showed protected health information. Manual encoding of findings improved retrieval precision. Stringent de-identification methods can remove all identifiers from text radiology reports. DICOM de-identification of images does not remove all identifying information and needs special attention to images scanned from film. Adding manual coding to the radiologist narrative reports significantly improved relevancy of the retrieved clinical documents. The de-identified Indiana chest X-ray collection is available for searching and downloading from the National Library of Medicine (http://openi.nlm.nih.gov/).

  11. Automatic interpretation and writing report of the adult waking electroencephalogram.

    PubMed

    Shibasaki, Hiroshi; Nakamura, Masatoshi; Sugi, Takenao; Nishida, Shigeto; Nagamine, Takashi; Ikeda, Akio

    2014-06-01

    Automatic interpretation of the EEG has so far been faced with significant difficulties because of the large amount of spatial as well as temporal information contained in the EEG, continuous fluctuation of the background activity depending on changes in the subject's vigilance and attention level, the occurrence of paroxysmal activities such as spikes and spike-and-slow-waves, contamination of the EEG with a variety of artefacts, and the use of different recording electrodes and montages. Therefore, previous attempts at automatic EEG interpretation have focussed only on a specific EEG feature such as paroxysmal abnormalities, delta waves, sleep stages and artefact detection. As a result of a long-standing cooperation between clinical neurophysiologists and system engineers, we report for the first time on a comprehensive, computer-assisted, automatic interpretation of the adult waking EEG. This system analyses the background activity, intermittent abnormalities, artefacts and the level of vigilance and attention of the subject, and automatically presents its report in written form. In addition, it detects paroxysmal abnormalities and evaluates the effects of intermittent photic stimulation and hyperventilation on the EEG. This system of automatic EEG interpretation was formed by adopting the strategy that qualified EEGers employ for systematic visual inspection. This system can be used as a supplementary tool for the EEGer's visual inspection, and for educating EEG trainees and EEG technicians.

  12. Proceedings of the U.S. Army Symposium on Gun Dynamics (5th) Held in Rensselaerville, New York on 23-25 September 1987

    DTIC Science & Technology

    1987-09-01

    have shown that gun barrel heating, and hence thermal expansion, is both axially and circumferentially asymmetric. Circumferential, or cross-barrel...element code, which ended in the selection of ABAQUS. The code will perform static, dynamic, and thermal analysis on a broad range of structures...analysis may be performed by a user supplied FORTRAN subroutine which is automatically linked to the code and supplements the standard ABAQUS

  13. Infrastructure for Rapid Development of Java GUI Programs

    NASA Technical Reports Server (NTRS)

    Jones, Jeremy; Hostetter, Carl F.; Wheeler, Philip

    2006-01-01

    The Java Application Shell (JAS) is a software framework that accelerates the development of Java graphical-user-interface (GUI) application programs by enabling the reuse of common, proven GUI elements, as distinguished from writing custom code for GUI elements. JAS is a software infrastructure upon which Java interactive application programs and graphical user interfaces (GUIs) for those programs can be built as sets of plug-ins. JAS provides an application programming interface that is extensible by application-specific plug-ins that describe and encapsulate both specifications of a GUI and application-specific functionality tied to the specified GUI elements. The desired GUI elements are specified in Extensible Markup Language (XML) descriptions instead of in compiled code. JAS reads and interprets these descriptions, then creates and configures a corresponding GUI from a standard set of generic, reusable GUI elements. These elements are then attached (again, according to the XML descriptions) to application-specific compiled code and scripts. An application program constructed by use of JAS as its core can be extended by writing new plug-ins and replacing existing plug-ins. Thus, JAS solves many problems that Java programmers generally solve anew for each project, thereby reducing development and testing time.

  14. Northwest range-plant symbols adapted to automatic data processing.

    Treesearch

    George A. Garrison; Jon M. Skovlin

    1960-01-01

    Many range technicians, agronomists, foresters, biologists, and botanists of various educational institutions and government agencies in the Northwest have been using a four-letter symbol list or code compiled 12 years ago from records of plants collected by the U.S. Forest Service in Oregon and Washington. This code has served well as a means of entering plant names...

  15. Kinetic modelling of the oxidation of large aliphatic hydrocarbons using an automatic mechanism generation.

    PubMed

    Muharam, Yuswan; Warnatz, Jürgen

    2007-08-21

    A mechanism generator code to automatically generate mechanisms for the oxidation of large hydrocarbons has been successfully modified and considerably expanded in this work. The modification was through (1) improvement of the existing rules such as cyclic-ether reactions and aldehyde reactions, (2) inclusion of some additional rules to the code, such as ketone reactions, hydroperoxy cyclic-ether formations and additional reactions of alkenes, (3) inclusion of small oxygenates, produced by the code but not yet included in the handwritten C(1)-C(4) sub-mechanism, to the handwritten C(1)-C(4) sub-mechanism. In order to evaluate mechanisms generated by the code, simulations of observed results in different experimental environments have been carried out. Experimentally derived and numerically predicted ignition delays of n-heptane-air and n-decane-air mixtures in high-pressure shock tubes in a wide range of temperatures, pressures and equivalence ratios agree very well. Concentration profiles of the main products and intermediates of n-heptane and n-decane oxidation in jet-stirred reactors at a wide range of temperatures and equivalence ratios are generally well reproduced. In addition, the ignition delay times of different normal alkanes were numerically studied.

  16. Profile Interface Generator (PIG)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The Profile Interface Generator (PIG) is a tool for loosely coupling applications and performance tools. It enables applications to write code that looks like standard C and Fortran functions calls, without requiring that applications link to specific implementations of those function calls. Performance tools can register with PIG in order to listen to only the calls that give information they care about. This interface reduces the build and configuration burden on application developers and allows semantic instrumentation to live in production codes without interfering with production runs.
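
    In spirit, the pattern is an event interface with opt-in listeners: application code makes plain function calls that cost little unless a tool has registered. A Python sketch with invented names (the real PIG exposes C and Fortran calls):

      # Registry of performance tools listening for instrumentation events.
      _listeners = {}

      def register_listener(event, callback):
          # A performance tool subscribes only to the events it cares about.
          _listeners.setdefault(event, []).append(callback)

      def emit(event, **payload):
          # Application-side call: effectively a no-op unless a tool registered.
          for cb in _listeners.get(event, []):
              cb(**payload)

      # A tool opts in...
      register_listener("timestep_done", lambda step, dt: print(f"step {step}: {dt}s"))

      # ...while the application code stays tool-agnostic.
      for step in range(3):
          emit("timestep_done", step=step, dt=0.01)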

  17. Writing and applications of fiber Bragg grating arrays

    NASA Astrophysics Data System (ADS)

    LaRochelle, Sophie; Cortes, Pierre-Yves; Fathallah, H.; Rusch, Leslie A.; Jaafar, H. B.

    2000-12-01

    Multiple Bragg gratings are written in a single fibre strand with accurate positioning to achieve predetermined time delays between optical channels. Applications of fibre Bragg grating arrays include encoders/decoders with series of identical gratings for optical code-division multiple access.

  18. Automatic computer procedure for generating exact and analytical kinetic energy operators based on the polyspherical approach: General formulation and removal of singularities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ndong, Mamadou; Lauvergnat, David; Nauts, André

    2013-11-28

    We present new techniques for an automatic computation of the kinetic energy operator in analytical form. These techniques are based on the use of the polyspherical approach and are extended to take into account Cartesian coordinates as well. An automatic procedure is developed where analytical expressions are obtained by symbolic calculations. This procedure is a full generalization of the one presented in Ndong et al. [J. Chem. Phys. 136, 034107 (2012)]. The correctness of the new implementation is analyzed by comparison with results obtained from the TNUM program. We give several illustrations that could be useful for users of the code. In particular, we discuss some cyclic compounds which are important in photochemistry. Among others, we show that choosing a well-adapted parameterization and decomposition into subsystems can allow one to avoid singularities in the kinetic energy operator. We also discuss a relation between polyspherical and Z-matrix coordinates: this comparison could be helpful for building an interface between the new code and a quantum chemistry package.

  19. Specifications and programs for computer software validation

    NASA Technical Reports Server (NTRS)

    Browne, J. C.; Kleir, R.; Davis, T.; Henneman, M.; Haller, A.; Lasseter, G. L.

    1973-01-01

    Three software products developed during the study are reported and include: (1) FORTRAN Automatic Code Evaluation System, (2) the Specification Language System, and (3) the Array Index Validation System.

  20. Writing on wet paper

    NASA Astrophysics Data System (ADS)

    Fridrich, Jessica; Goljan, Miroslav; Lisonek, Petr; Soukal, David

    2005-03-01

    In this paper, we show that the communication channel known as writing in memory with defective cells is a relevant information-theoretical model for a specific case of passive warden steganography when the sender embeds a secret message into a subset C of the cover object X without sharing the selection channel C with the recipient. The set C could be arbitrary, determined by the sender from the cover object using a deterministic, pseudo-random, or a truly random process. We call this steganography "writing on wet paper" and realize it using low-density random linear codes with the encoding step based on the LT process. The importance of writing on wet paper for covert communication is discussed within the context of adaptive steganography and perturbed quantization steganography. Heuristic arguments supported by tests using blind steganalysis indicate that the wet paper steganography provides improved steganographic security for embedding in JPEG images and is less vulnerable to attacks when compared to existing methods with shared selection channels.
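
    The mechanics can be made concrete over GF(2): the sender solves a linear system in the changeable ("dry") positions so that a shared random matrix maps the modified cover to the message, and the recipient simply multiplies. The toy sketch below uses plain Gaussian elimination rather than the paper's LT-process-based codes:

      import numpy as np

      rng = np.random.default_rng(1)

      def gf2_solve(A, b):
          # Gauss-Jordan elimination over GF(2); caller verifies the result.
          A, b = A.copy(), b.copy()
          m, n = A.shape
          pivots, r = [], 0
          for c in range(n):
              hits = np.nonzero(A[r:, c])[0]
              if len(hits) == 0:
                  continue
              A[[r, r + hits[0]]] = A[[r + hits[0], r]]
              b[[r, r + hits[0]]] = b[[r + hits[0], r]]
              for i in range(m):
                  if i != r and A[i, c]:
                      A[i] ^= A[r]
                      b[i] ^= b[r]
              pivots.append(c)
              r += 1
              if r == m:
                  break
          y = np.zeros(n, dtype=np.uint8)
          for i, c in enumerate(pivots):
              y[c] = b[i]
          return y

      n, q = 12, 4                                     # cover bits, message bits
      x = rng.integers(0, 2, n, dtype=np.uint8)        # cover object
      dry = np.sort(rng.choice(n, 6, replace=False))   # changeable positions (sender-only)
      m = rng.integers(0, 2, q, dtype=np.uint8)        # secret message

      # Sender: choose dry bits so that H @ y = m (mod 2) with wet bits untouched.
      while True:
          H = rng.integers(0, 2, (q, n), dtype=np.uint8)   # shared random matrix
          rhs = (m + H @ x + H[:, dry] @ x[dry]) % 2       # fold wet contribution in
          y = x.copy()
          y[dry] = gf2_solve(H[:, dry], rhs.astype(np.uint8))
          if np.array_equal(H @ y % 2, m):                 # redraw H if rank-deficient
              break

      # Recipient knows H but never the dry set: message = H @ y (mod 2).
      print(np.array_equal(H @ y % 2, m))                  # True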

  1. Automatic Data Distribution for CFD Applications on Structured Grids

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Yan, Jerry

    2000-01-01

    Data distribution is an important step in the implementation of any parallel algorithm. The data distribution determines data traffic and utilization of the interconnection network, and affects the overall code efficiency. In recent years a number of data distribution methods have been developed and used in real programs for improving data traffic. We use some of these methods for translating data dependence and affinity relations into data distribution directives. We describe an automatic data alignment and placement tool (ADAPT) which implements these methods and show its results for some CFD codes (NPB and ARC3D). The algorithms for program analysis and derivation of data distribution implemented in ADAPT are efficient three-pass algorithms. Most have linear complexity, with the exception of some graph algorithms having complexity O(n(sup 4)) in the worst case.

  2. Association rule mining on grid monitoring data to detect error sources

    NASA Astrophysics Data System (ADS)

    Maier, Gerhild; Schiffers, Michael; Kranzlmueller, Dieter; Gaidioz, Benjamin

    2010-04-01

    Error handling is a crucial task in an infrastructure as complex as a grid. There are several monitoring tools in place which report failing grid jobs, including exit codes. However, the exit codes do not always denote the actual fault which caused the job failure. Human time and knowledge are required to manually trace errors back to the real fault underlying an error. We perform association rule mining on grid job monitoring data to automatically retrieve knowledge about the grid components' behavior by taking dependencies between grid job characteristics into account. Therewith, problematic grid components are located automatically, and this information, expressed by association rules, is visualized in a web interface. This work achieves a decrease in time for fault recovery and yields an improvement of a grid's reliability.
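
    Association rule mining here reduces to finding itemsets such as {exit code, failure} with sufficient support and confidence; a bare-bones illustration on invented monitoring records:

      from itertools import combinations
      from collections import Counter

      # Invented grid-job monitoring records (site, exit code, outcome).
      jobs = [{"site=A", "exit=137", "failed"},
              {"site=A", "exit=137", "failed"},
              {"site=B", "exit=0", "ok"},
              {"site=A", "exit=0", "ok"}]

      min_support, min_conf = 0.4, 0.9
      counts = Counter()
      for job in jobs:
          for r in (1, 2):
              for itemset in combinations(sorted(job), r):
                  counts[itemset] += 1

      # Report rules "antecedent -> failed" passing both thresholds.
      n = len(jobs)
      for itemset, c in counts.items():
          if "failed" in itemset and len(itemset) == 2 and c / n >= min_support:
              antecedent = tuple(i for i in itemset if i != "failed")
              conf = c / counts[antecedent]
              if conf >= min_conf:
                  print(f"{antecedent[0]} -> failed (conf {conf:.2f})")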

  3. Automatic Data Distribution for CFD Applications on Structured Grids

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Yan, Jerry

    1999-01-01

    Data distribution is an important step in the implementation of any parallel algorithm. The data distribution determines data traffic and utilization of the interconnection network, and affects the overall code efficiency. In recent years a number of data distribution methods have been developed and used in real programs for improving data traffic. We use some of these methods for translating data dependence and affinity relations into data distribution directives. We describe an automatic data alignment and placement tool (ADAPT) which implements these methods and show its results for some CFD codes (NPB and ARC3D). The algorithms for program analysis and derivation of data distribution implemented in ADAPT are efficient three-pass algorithms. Most have linear complexity, with the exception of some graph algorithms having complexity O(n(sup 4)) in the worst case.

  4. An Experiment in Scientific Program Understanding

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.; Owen, Karl (Technical Monitor)

    2000-01-01

    This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, independent expert parsers. These semantic parsers encode domain knowledge and recognize formulae in different disciplines including physics, numerical methods, mathematics, and geometry. The parsers will automatically recognize and document some static, semantic concepts and help locate some program semantic errors. Results are shown for three intensively studied codes and seven blind test cases; all test cases are state of the art scientific codes. These techniques may apply to a wider range of scientific codes. If so, the techniques could reduce the time, risk, and effort required to develop and modify scientific codes.

  5. Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.

    PubMed

    Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk

    2018-07-01

    Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements, because of severely increasing computational complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. The previous JNDs were modeled by adding white Gaussian noise or specific signal patterns into the original images, which were not appropriate in finding JND thresholds due to distortion with energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. Then, the proposed ERJND model is extended to two learning-based just-noticeable-quantization-distortion (JNQD) models as preprocessing that can be applied for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two JNQD models, called LR-JNQD, is based on linear regression and determines the model parameter for JNQD based on extracted handcraft features. The other JNQD model is based on a convolution neural network (CNN), called CNN-JNQD. To the best of our knowledge, our paper is the first approach to automatically adjust JND levels according to quantization step sizes for preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing applied.

  6. Breaking Away

    ERIC Educational Resources Information Center

    Panettieri, Joseph C.

    2007-01-01

    This article discusses open source projects which may free universities from expensive, rigid commercial software. But will the rewards outweigh the potential risks? The Kuali Project involves multiple universities writing and sharing code for their financial and operational systems. Another, the Sakai Project, is a community source platform for…

  7. 7 CFR 400.767 - Requester obligations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ....gov; or (iv) By overnight delivery to the Associate Administrator, Risk Management Agency, United... subpart must: (1) Be submitted: (i) In writing by certified mail, to the Associate Administrator, Risk Management Agency, United States Department of Agriculture, Stop Code 0801, 1400 Independence Avenue, SW...

  8. Professional Growth.

    ERIC Educational Resources Information Center

    Cook, Jimmie

    1996-01-01

    Claims that reading and writing are closely related at all grade levels. Points out that reading aloud; sharing quality children's literature; and incorporating activities such as recitation, singing, and poetry can facilitate the transition from oral to written language codes. Proves that teacher participation in such activities can encourage…

  9. 7 CFR 400.767 - Requester obligations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ....gov; or (iv) By overnight delivery to the Associate Administrator, Risk Management Agency, United... subpart must: (1) Be submitted: (i) In writing by certified mail, to the Associate Administrator, Risk Management Agency, United States Department of Agriculture, Stop Code 0801, 1400 Independence Avenue, SW...

  10. Higher-order automatic differentiation of mathematical functions

    NASA Astrophysics Data System (ADS)

    Charpentier, Isabelle; Dal Cappello, Claude

    2015-04-01

    Functions of mathematical physics such as the Bessel functions, the Chebyshev polynomials, the Gauss hypergeometric function and so forth, have practical applications in many scientific domains. On the one hand, differentiation formulas provided in reference books apply to real or complex variables. These do not account for the chain rule. On the other hand, based on the chain rule, automatic differentiation has become a natural tool in numerical modeling. Nevertheless, automatic differentiation tools do not deal with these numerous mathematical functions. This paper describes formulas and provides codes for the higher-order automatic differentiation of mathematical functions. The first method is based on Faà di Bruno's formula that generalizes the chain rule. The second one makes use of the second order differential equation they satisfy. Both methods are exemplified with the aforementioned functions.
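
    For reference, Faà di Bruno's formula in its standard Bell-polynomial form (a well-known identity, quoted here for convenience) generalizes the chain rule to order n:

      (f \circ g)^{(n)}(x) = \sum_{k=1}^{n} f^{(k)}(g(x)) \, B_{n,k}\left( g'(x), g''(x), \ldots, g^{(n-k+1)}(x) \right)

    where the B_{n,k} are the partial exponential Bell polynomials evaluated at the derivatives of the inner function.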

  11. Automatic Data Traffic Control on DSM Architecture

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Jin, Hao-Qiang; Yan, Jerry; Kwak, Dochan (Technical Monitor)

    2000-01-01

    We study data traffic on distributed shared memory machines and conclude that data placement and grouping improve the performance of scientific codes. We present several methods which users can employ to improve data traffic in their code. We report on the implementation of a tool which detects the code fragments causing data congestion and advises the user on improvements of data routing in these fragments. The capabilities of the tool include deduction of data alignment and affinity from the source code; detection of code constructs having abnormally high cache or TLB misses; and generation of data placement constructs. We demonstrate the capabilities of the tool in experiments with the NAS parallel benchmarks and with a simple computational fluid dynamics application, ARC3D.

  12. Sifting, sorting and saturating data in a grounded theory study of information use by practice nurses: a worked example.

    PubMed

    Hoare, Karen J; Mills, Jane; Francis, Karen

    2012-12-01

    The terminology used to analyse data in a grounded theory study can be confusing. Different grounded theorists use a variety of terms which all have similar meanings. In the following study, we use terms adopted by Charmaz including: initial, focused and axial coding. Initial codes are used to analyse data with an emphasis on identifying gerunds, a verb acting as a noun. If initial codes are relevant to the developing theory, they are grouped with similar codes into categories. Categories become saturated when there are no new codes identified in the data. Axial codes are used to link categories together into a grounded theory process. Memo writing accompanies this data sifting and sorting. The following article explains how one initial code became a category, providing a worked example of the grounded theory method of constant comparative analysis. The interplay between coding and categorization is facilitated by the constant comparative method.

  13. Performance tuning of N-body codes on modern microprocessors: I. Direct integration with a hermite scheme on x86_64 architecture

    NASA Astrophysics Data System (ADS)

    Nitadori, Keigo; Makino, Junichiro; Hut, Piet

    2006-12-01

    The main performance bottleneck of gravitational N-body codes is the force calculation between two particles. We have succeeded in speeding up this pair-wise force calculation by factors between 2 and 10, depending on the code and the processor on which the code is run. These speed-ups were obtained by writing highly fine-tuned code for x86_64 microprocessors. Any existing N-body code, running on these chips, can easily incorporate our assembly code programs. In the current paper, we present an outline of our overall approach, which we illustrate with one specific example: the use of a Hermite scheme for a direct N² integration on a single 2.0 GHz Athlon 64 processor, for which we obtain an effective performance of 4.05 Gflops at double-precision accuracy. In subsequent papers, we will discuss other variations, including combinations with N log N codes, single-precision implementations, and performance on other microprocessors.
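
    For reference, a plain NumPy version of the pair-wise kernel that a Hermite scheme needs (the acceleration plus its time derivative, the jerk) is sketched below. It is an unoptimized illustration of the computation the authors hand-tune in assembly, not their code.

      import numpy as np

      def acc_jerk(pos, vel, mass, eps2=0.0):
          """Pairwise Newtonian acceleration and jerk for a Hermite integrator.

          pos, vel : (N, 3) arrays; mass : (N,) array; eps2 : softening^2.
          a_i = sum_j m_j r_ij / |r_ij|^3
          j_i = sum_j m_j [ v_ij / |r_ij|^3 - 3 (r_ij . v_ij) r_ij / |r_ij|^5 ]
          """
          n = len(mass)
          acc = np.zeros((n, 3))
          jerk = np.zeros((n, 3))
          for i in range(n):
              r = pos - pos[i]             # r_ij = r_j - r_i, shape (N, 3)
              v = vel - vel[i]
              r2 = (r * r).sum(axis=1) + eps2
              r2[i] = 1.0                  # avoid 0/0 on the self term
              inv_r3 = r2 ** -1.5
              inv_r3[i] = 0.0              # no self-interaction
              rv = (r * v).sum(axis=1)
              acc[i] = (mass * inv_r3) @ r
              jerk[i] = (mass * inv_r3) @ v - 3.0 * (mass * rv * inv_r3 / r2) @ r
          return acc, jerk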

  14. X-Antenna: A graphical interface for antenna analysis codes

    NASA Technical Reports Server (NTRS)

    Goldstein, B. L.; Newman, E. H.; Shamansky, H. T.

    1995-01-01

    This report serves as the user's manual for the X-Antenna code. X-Antenna is intended to simplify the analysis of antennas by giving the user graphical interfaces in which to enter all relevant antenna and analysis code data. Essentially, X-Antenna creates a Motif interface to the user's antenna analysis codes. A command-file allows new antennas and codes to be added to the application. The menu system and graphical interface screens are created dynamically to conform to the data in the command-file. Antenna data can be saved and retrieved from disk. X-Antenna checks all antenna and code values to ensure they are of the correct type, writes an output file, and runs the appropriate antenna analysis code. Volumetric pattern data may be viewed in 3D space with an external viewer run directly from the application. Currently, X-Antenna includes analysis codes for thin wire antennas (dipoles, loops, and helices), rectangular microstrip antennas, and thin slot antennas.

  15. Automatic Between-Pulse Analysis of DIII-D Experimental Data Performed Remotely on a Supercomputer at Argonne Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostuk, M.; Uram, T. D.; Evans, T.

    For the first time, an automatically triggered, between-pulse fusion science analysis code was run on-demand at a remotely located supercomputer at the Argonne Leadership Computing Facility (ALCF, Lemont, IL) in support of in-process experiments being performed at DIII-D (San Diego, CA). This represents a new paradigm for combining geographically distant experimental and high performance computing (HPC) facilities to provide enhanced data analysis that is quickly available to researchers. Enhanced analysis improves the understanding of the current pulse, translating into more efficient use of experimental resources and higher-quality science. The analysis code used here, called SURFMN, calculates the magnetic structure of the plasma using Fourier transforms. Increasing the number of Fourier components provides a more accurate determination of the stochastic boundary layer near the plasma edge by better resolving magnetic islands, but requires 26 minutes to complete using local DIII-D resources, putting it well outside the useful time range for between-pulse analysis. These islands relate to confinement and edge localized mode (ELM) suppression, and may be controlled by adjusting coil currents for the next pulse. Argonne has ensured on-demand execution of SURFMN by providing a reserved queue, a specialized service that launches the code after receiving an automatic trigger, and network access from the worker nodes for data transfer. Runs are executed on 252 cores of ALCF's Cooley cluster and the data is available locally at DIII-D within three minutes of triggering. The original SURFMN design limits additional improvements with more cores; however, our work shows a path forward where codes that benefit from thousands of processors can run between pulses.

  17. Crustal Fracturing Field and Presence of Fluid as Revealed by Seismic Anisotropy

    NASA Astrophysics Data System (ADS)

    Pastori, M.; Piccinini, D.; de Gori, P.; Margheriti, L.; Barchi, M. R.; di Bucci, D.

    2010-12-01

    In the last three years, we developed, tested, and improved an automatic analysis code (Anisomat+) to calculate the shear wave splitting parameters: fast polarization direction (φ) and delay time (δt). The code is a set of MATLAB scripts able to retrieve crustal anisotropy parameters from three-component seismic recordings of local earthquakes using the horizontal-component cross-correlation method. The analysis procedure consists of choosing an appropriate frequency range, one that best highlights the signal containing the shear waves, and a time window on the seismogram centered on the S arrival (the window contains at least one cycle of the S wave). The code was compared with two other automatic analysis codes (SPY and SHEBA) and tested on three Italian areas (Val d'Agri, the Tiber Valley, and the area around L'Aquila) along the Apennine mountains. For each region we used the anisotropic parameters resulting from the automatic computation as a tool to determine the fracture field geometries connected with the active stress field. We compared the temporal variations of the anisotropic parameters to the evolution of the vp/vs ratio for the same seismicity. The anisotropic fast directions are used to define the active stress field (EDA model); we find general consistency between fast directions and the main stress indicators (focal mechanisms and borehole break-outs). The magnitude of the delay time is used to define the fracture field intensity; higher values are found in the volume where micro-seismicity occurs. Furthermore, we studied temporal variations of the anisotropic parameters and the vp/vs ratio in order to assess whether fluids play an important role in the earthquake generation process. The close association between variations in the anisotropic and vp/vs parameters and changes in seismicity rate supports the hypothesis that the background seismicity is influenced by fluctuations of pore fluid pressure in the rocks.
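
    A minimal sketch of the horizontal-component cross-correlation method underlying such codes is given below: it grid-searches trial fast azimuths and delays for the pair that maximizes the normalized cross-correlation of the rotated components. It is our illustration, not Anisomat+ itself; the function name and search ranges are assumptions.

      import numpy as np

      def splitting_params(north, east, dt, max_lag_s=0.2):
          """Grid-search (phi, delay) via horizontal cross-correlation.

          north, east : S-wave window samples; dt : sample interval (s).
          Returns the trial fast azimuth (deg) and delay (s) maximizing
          the normalized cross-correlation of the rotated components.
          """
          max_lag = int(max_lag_s / dt)
          best = (-1.0, 0.0, 0.0)                   # (cc, phi, delay)
          for phi_deg in range(180):
              phi = np.radians(phi_deg)
              fast = north * np.cos(phi) + east * np.sin(phi)
              slow = -north * np.sin(phi) + east * np.cos(phi)
              for lag in range(1, max_lag + 1):
                  a, b = fast[:-lag], slow[lag:]    # slow advanced by lag
                  denom = np.sqrt((a * a).sum() * (b * b).sum())
                  if denom == 0.0:
                      continue
                  cc = abs((a * b).sum()) / denom
                  if cc > best[0]:
                      best = (cc, phi_deg, lag * dt)
          return best[1], best[2]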

  18. Encrypting Digital Camera with Automatic Encryption Key Deletion

    NASA Technical Reports Server (NTRS)

    Oakley, Ernest C. (Inventor)

    2007-01-01

    A digital video camera includes an image sensor capable of producing a frame of video data representing the image viewed by the sensor, and an image memory for storing video data, such as previously recorded frame data, in a video frame location of the image memory. A read circuit fetches the previously recorded frame data. An encryption circuit has an encryption key input connected to receive the previously recorded frame data from the read circuit as an encryption key, an un-encrypted data input connected to receive the frame of video data from the image sensor, and an encrypted data output port. A write circuit writes a frame of encrypted video data received from the encrypted data output port of the encryption circuit to the memory, overwriting the video frame location that stored the previously recorded frame data.
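
    The patent does not specify the cipher, but the data flow can be sketched as follows, with a bitwise XOR standing in for the encryption circuit; the function and variable names are hypothetical.

      import numpy as np

      def capture_and_encrypt(sensor_frame, frame_memory, slot):
          """Sketch of the patented data flow (cipher unspecified; XOR stands in).

          1. Read the previously recorded frame from `slot` (the key).
          2. Encrypt the new sensor frame with it.
          3. Overwrite `slot` with the encrypted frame, destroying the key.
          """
          key = frame_memory[slot].copy()                 # read circuit
          encrypted = np.bitwise_xor(sensor_frame, key)   # encryption circuit
          frame_memory[slot] = encrypted                  # write circuit
          return encrypted

      # Toy usage with 8-bit "frames":
      mem = [np.random.randint(0, 256, (4, 4), dtype=np.uint8)]
      frame = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
      capture_and_encrypt(frame, mem, 0)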

  19. Handwriting Development in Spanish Children With and Without Learning Disabilities: A Graphonomic Approach.

    PubMed

    Barrientos, Pablo

    The central purpose of this study was to analyze the dynamics of handwriting movements in real time for Spanish students in early grades with and without learning disabilities. The sample consisted of 120 children from Grades 1 through 3 (primary education), classified into two groups: with learning disabilities and without learning disabilities. The Early Grade Writing Assessment tasks selected for this purpose were writing the alphabet in order from memory, alphabet copying in cursive and manuscript, and allograph selection. The dynamics of these four handwriting tasks were recorded using graphonomic tablets (type Wacom Intuos-4), Intuos Inking pens, and Eye and Pen 2 software. Several events were recorded across four different tasks: velocity, pressure, time invested in pauses, and automaticity. The results demonstrated significant graphonomic variations between groups across grades, depending on the type of task.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adamek, Julian; Daverio, David; Durrer, Ruth

    We present a new N-body code, gevolution, for the evolution of large scale structure in the Universe. Our code is based on a weak field expansion of General Relativity and calculates all six metric degrees of freedom in Poisson gauge. N-body particles are evolved by solving the geodesic equation, which we write in terms of a canonical momentum such that it remains valid also for relativistic particles. We validate the code by considering the Schwarzschild solution and, in the Newtonian limit, by comparing with the Newtonian N-body codes Gadget-2 and RAMSES. We then proceed with a simulation of large scale structure in a Universe with massive neutrinos where we study the gravitational slip induced by the neutrino shear stress. The code can be extended to include different kinds of dark energy or modified gravity models, going beyond the usually adopted quasi-static approximation. Our code is publicly available.

  1. Computational Nuclear Physics and Post Hartree-Fock Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lietz, Justin; Sam, Novario; Hjorth-Jensen, M.

    We present a computational approach to infinite nuclear matter employing Hartree-Fock theory, many-body perturbation theory and coupled cluster theory. These lectures are closely linked with those of chapters 9, 10 and 11 and serve as input for the correlation functions employed in Monte Carlo calculations in chapter 9, the in-medium similarity renormalization group theory of dense fermionic systems of chapter 10 and the Green's function approach in chapter 11. We provide extensive code examples and benchmark calculations, thereby allowing the reader to start writing his or her own codes. We start with an object-oriented serial code and end with discussions on strategies for porting the code to present and planned high-performance computing facilities.

  2. IEP goals for school-age children with speech sound disorders.

    PubMed

    Farquharson, Kelly; Tambyraja, Sherine R; Justice, Laura M; Redle, Erin E

    2014-01-01

    The purpose of the current study was to describe the current state of practice for writing Individualized Education Program (IEP) goals for children with speech sound disorders (SSDs). IEP goals for 146 children receiving services for SSDs within public school systems across two states were coded for their dominant theoretical framework and overall quality. A dichotomous scheme was used for theoretical framework coding: cognitive-linguistic or sensory-motor. Goal quality was determined by examining 7 specific indicators outlined by an empirically tested rating tool. In total, 147 long-term and 490 short-term goals were coded. The results revealed no dominant theoretical framework for long-term goals, whereas short-term goals largely reflected a sensory-motor framework. In terms of quality, the majority of speech production goals were functional and generalizable in nature, but were not able to be easily targeted during common daily tasks or by other members of the IEP team. Short-term goals were consistently rated higher in quality domains when compared to long-term goals. The current state of practice for writing IEP goals for children with SSDs indicates that theoretical framework may be eclectic in nature and likely written to support the individual needs of children with speech sound disorders. Further investigation is warranted to determine the relations between goal quality and child outcomes. (1) Identify two predominant theoretical frameworks and discuss how they apply to IEP goal writing. (2) Discuss quality indicators as they relate to IEP goals for children with speech sound disorders. (3) Discuss the relationship between long-term goals level of quality and related theoretical frameworks. (4) Identify the areas in which business-as-usual IEP goals exhibit strong quality.

  3. VLSI (Very Large Scale Integrated Circuits) Design with the MacPitts Silicon Compiler.

    DTIC Science & Technology

    1985-09-01

    the background. If the algorithm is not fully debugged, then issue instead macpitts basename herald so MacPitts diagnostics and Liszt diagnostics both...command interpreter. Upon compilation, however, the following LISP compiler (Liszt) diagnostic results: Error: Non-number to minus nil where the first...language used in the MacPitts source code. The more instructive solution is to write the Franz LISP code to decide if a jumper wire is needed, and if so, to

  4. User systems guidelines for software projects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abrahamson, L.

    1986-04-01

    This manual presents guidelines for software standards which were developed so that software project-development teams and management involved in approving the software could have a generalized view of all phases in the software production procedure and the steps involved in completing each phase. Guidelines are presented for six phases of software development: project definition, building a user interface, designing software, writing code, testing code, and preparing software documentation. The discussions for each phase include examples illustrating the recommended guidelines. 45 refs. (DWL)

  5. HOPE: Just-in-time Python compiler for astrophysical computations

    NASA Astrophysics Data System (ADS)

    Akeret, Joel; Gamper, Lukas; Amara, Adam; Refregier, Alexandre

    2014-11-01

    HOPE is a specialized Python just-in-time (JIT) compiler designed for numerical astrophysical applications. HOPE focuses on a subset of the language and is able to translate Python code into C++ while performing numerical optimization on mathematical expressions at runtime. To enable the JIT compilation, the user only needs to add a decorator to the function definition. By using HOPE, the user benefits from being able to write common numerical code in Python while getting the performance of a compiled implementation.
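
    A minimal usage sketch follows; the hope.jit decorator name reflects the package's documented interface as we understand it, and the example function is our own.

      import hope  # pip install hope; assumes the decorator is exposed as hope.jit

      @hope.jit
      def pairwise_energy(x, y, n):
          # Plain numerical Python: HOPE translates this function to C++
          # on the first call and caches the compiled module afterwards.
          e = 0.0
          for i in range(n):
              for j in range(i + 1, n):
                  dx = x[i] - x[j]
                  dy = y[i] - y[j]
                  e += 1.0 / (dx * dx + dy * dy)
          return e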

  6. Travelling Wave Concepts for the Modeling and Control of Space Structures

    DTIC Science & Technology

    1988-01-31

    ...at the Jet Propulsion Laboratories, and is writing two further papers for journal publication based on his PhD dissertation. In the winter of 1987

  7. Coding, Constant Comparisons, and Core Categories: A Worked Example for Novice Constructivist Grounded Theorists.

    PubMed

    Giles, Tracey M; de Lacey, Sheryl; Muir-Cochrane, Eimear

    2016-01-01

    Grounded theory method has been described extensively in the literature. Yet, the varying processes portrayed can be confusing for novice grounded theorists. This article provides a worked example of the data analysis phase of a constructivist grounded theory study that examined family presence during resuscitation in acute health care settings. Core grounded theory methods are exemplified, including initial and focused coding, constant comparative analysis, memo writing, theoretical sampling, and theoretical saturation. The article traces the construction of the core category "Conditional Permission" from initial and focused codes, subcategories, and properties, through to its position in the final substantive grounded theory.

  8. JPRS Report, Soviet Union, Military Affairs

    DTIC Science & Technology

    1988-11-18

    instructions to take an active part in ispolkoms’ work to repair and maintain graves of Soviet soldiers and to set up a reliable system to register...He writes that "if ’dedovshchina’ [word derives from "ded," meaning grandfather, and it connotes a system of antiquated attitudes and behavior of...automatic loading system for the main gun and cutting the crew size from four to three. This would make it possible to lower the turret somewhat

  9. Build your own low-cost seismic/bathymetric recorder annotator

    USGS Publications Warehouse

    Robinson, W.

    1994-01-01

    An inexpensive programmable annotator, completely compatible with at least three models of widely used graphic recorders (Raytheon LSR-1811, Raytheon LSR-1807 M, and EDO 550), has been developed to automatically write event marks and print up to sixteen numbers on the paper record. Event mark and character printout intervals, character height, and character position are all selectable with front panel switches. Operation is completely compatible with recorders running in either continuous or start-stop mode. © 1994.

  10. VHSIC Hardware Description Language (VHDL) Benchmark Suite

    DTIC Science & Technology

    1990-10-01

  11. Mesh-matrix analysis method for electromagnetic launchers

    NASA Technical Reports Server (NTRS)

    Elliott, David G.

    1989-01-01

    The mesh-matrix method is a procedure for calculating the current distribution in the conductors of electromagnetic launchers with coil or flat-plate geometry. Once the current distribution is known the launcher performance can be calculated. The method divides the conductors into parallel current paths, or meshes, and finds the current in each mesh by matrix inversion. The author presents procedures for writing equations for the current and voltage relations for a few meshes to serve as a pattern for writing the computer code. An available subroutine package provides routines for field and flux coefficients and equation solution.
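
    A toy frequency-domain version of the mesh-matrix solve is sketched below: build the mesh impedance matrix and solve Z I = V with a linear solver. The actual method computes transient current distributions; all names and coefficients here are made up for illustration.

      import numpy as np

      def mesh_currents(R, L, M, v, omega):
          """Solve Z I = V for the mesh currents of a coil launcher model.

          R : (n,) mesh resistances; L : (n,) self-inductances;
          M : (n, n) mutual-inductance coefficients (zero diagonal);
          v : (n,) complex drive voltages; omega : angular frequency.
          """
          Z = np.diag(R + 1j * omega * L) + 1j * omega * M
          return np.linalg.solve(Z, v)

      # Three-mesh example with made-up coefficients:
      R = np.array([1e-3, 1e-3, 1e-3])
      L = np.array([2e-6, 2e-6, 2e-6])
      M = 5e-7 * (np.ones((3, 3)) - np.eye(3))
      I = mesh_currents(R, L, M, np.array([10.0, 0, 0]), omega=2 * np.pi * 1e3)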

  12. Teaching Tip: Improving Students' Email Communication through an Integrated Writing Assignment in a Third-Year Toxicology Course.

    PubMed

    Kedrowicz, April A; Hammond, Sarah; Dorman, David C

    Client communication is important for success in veterinary practice, with written communication being an important means for veterinarian-client information sharing. Effective communication is adapted to clients' needs and wants, and presents information in a clear, understandable manner while accounting for varying degrees of client health literacy. This teaching tip describes the use of a mock electronic mail assignment as one way to integrate writing into a required veterinary toxicology course. As part of this project, we provide baseline data relating to students' written communication that will guide further development of writing modules in other curricula. Two independent raters analyzed students' writing using a coding scheme designed to assess adherence to the guidelines for effective written health communication. Results showed that the majority of students performed satisfactorily or required some development with respect to recommended guidelines for effective written health communication to facilitate client understanding. These findings suggest that additional instruction and practice should emphasize the importance of incorporating examples, metaphors, analogies, and pictures to create texts that are comprehensible and memorable to clients. Recommendations are provided for effective integration of writing assignments into the veterinary medicine curriculum.

  13. Automatic Calibration of Stereo-Cameras Using Ordinary Chess-Board Patterns

    NASA Astrophysics Data System (ADS)

    Prokos, A.; Kalisperakis, I.; Petsa, E.; Karras, G.

    2012-07-01

    Automation of camera calibration is facilitated by recording coded 2D patterns. Our toolbox for automatic camera calibration using images of simple chess-board patterns is freely available on the Internet. But it is unsuitable for stereo-cameras whose calibration implies recovering camera geometry and their true-to-scale relative orientation. In contrast to all reported methods requiring additional specific coding to establish an object space coordinate system, a toolbox for automatic stereo-camera calibration relying on ordinary chess-board patterns is presented here. First, the camera calibration algorithm is applied to all image pairs of the pattern to extract nodes of known spacing, order them in rows and columns, and estimate two independent camera parameter sets. The actual node correspondences on stereo-pairs remain unknown. Image pairs of a textured 3D scene are exploited for finding the fundamental matrix of the stereo-camera by applying RANSAC to point matches established with the SIFT algorithm. A node is then selected near the centre of the left image; its match on the right image is assumed as the node closest to the corresponding epipolar line. This yields matches for all nodes (since these have already been ordered), which should also satisfy the 2D epipolar geometry. Measures for avoiding mismatching are taken. With automatically estimated initial orientation values, a bundle adjustment is performed constraining all pairs on a common (scaled) relative orientation. Ambiguities regarding the actual exterior orientations of the stereo-camera with respect to the pattern are irrelevant. Results from this automatic method show typical precisions not above 1/4 pixels for 640×480 web cameras.
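
    The two main steps, per-camera calibration from chess-board images and recovery of the stereo pair's fundamental matrix from SIFT matches filtered by RANSAC, can be sketched with OpenCV as below. File names and the pattern size are hypothetical; this is an outline of the stated approach, not the authors' toolbox.

      import cv2
      import numpy as np

      # Step 1: nodes of an ordinary chess-board pattern (9x6 corners assumed)
      pattern = (9, 6)
      objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
      objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

      obj_pts, img_pts = [], []
      for fname in ["left_00.png", "left_01.png"]:   # hypothetical file names
          gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
          ok, corners = cv2.findChessboardCorners(gray, pattern)
          if ok:
              obj_pts.append(objp)
              img_pts.append(corners)
      rms, K, dist, _, _ = cv2.calibrateCamera(
          obj_pts, img_pts, gray.shape[::-1], None, None)

      # Step 2: fundamental matrix of the stereo pair from a textured scene,
      # using SIFT matches filtered by RANSAC
      imgL = cv2.imread("scene_left.png", cv2.IMREAD_GRAYSCALE)
      imgR = cv2.imread("scene_right.png", cv2.IMREAD_GRAYSCALE)
      sift = cv2.SIFT_create()
      kpL, desL = sift.detectAndCompute(imgL, None)
      kpR, desR = sift.detectAndCompute(imgR, None)
      matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(desL, desR, k=2)
      good = [m for m, n in matches if m.distance < 0.75 * n.distance]
      ptsL = np.float32([kpL[m.queryIdx].pt for m in good])
      ptsR = np.float32([kpR[m.trainIdx].pt for m in good])
      F, inliers = cv2.findFundamentalMat(ptsL, ptsR, cv2.FM_RANSAC)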

  14. Plain English for Army Lawyers

    DTIC Science & Technology

    1987-05-01

    Jeremy Bentham called legal language "excrementitious matter" and "literary garbage" and advocated writing clear codes that everyone could...volved in today's movement than ever before. Coke, Jefferson, Bentham -- theirs were voices crying in the wilderness, as were the lesser known

  15. Sharing Teaching Ideas.

    ERIC Educational Resources Information Center

    Mathematics Teacher, 1985

    1985-01-01

    Discusses: (1) use of matrix techniques to write secret codes (includes ready-to-duplicate worksheets); (2) a method of multiplication and division of polynomials in one variable that is not tedious, time-consuming, or dependent on guesswork; and (3) adding and subtracting rational expressions and solving rational equations. (JN)

  16. Interactive Programming Support for Secure Software Development

    ERIC Educational Resources Information Center

    Xie, Jing

    2012-01-01

    Software vulnerabilities originating from insecure code are one of the leading causes of security problems people face today. Unfortunately, many software developers have not been adequately trained in writing secure programs that are resistant from attacks violating program confidentiality, integrity, and availability, a style of programming…

  17. Cost Reporting Elements and Activity Cost Tradeoffs for Defense System Software. Volume I. Study Results.

    DTIC Science & Technology

    1977-05-01

    C3I) programs; (4) simulator/trainer programs; and (5) automatic test equipment software. Each of these five types of software represents a problem...coded in the same source language, say JOVIAL, then source-language statements would be a better measure, since that would automatically compensate...whether done at no (visible) cost or by renegotiation of the contract. Fig. 2.3 illustrates these with solid lines. It is conjectured that the change

  18. The elaboration of motor programs for the automation of letter production.

    PubMed

    Thibon, Laurence Séraphin; Gerber, Silvain; Kandel, Sonia

    2018-01-01

    We investigated how children learn to write letters. Letter writing evolves from stroke-by-stroke to whole-letter programming. Children of ages 6 to 9 (N=98) wrote letters of varying complexity on a digitizer. At ages 6 and 7 movement duration, dysfluency and trajectory increased with stroke number. This indicates that the motor program they activated mainly coded information on stroke production. Stroke number affected the older children's production much less, suggesting that they programmed stroke chunks or the whole letter. The fact that movement duration and dysfluency decreased from ages 6 to 8, and remained stable at ages 8 and 9 suggests that automation of letter writing begins at age 8. Automation seems to require the elaboration of stroke chunks and/or letter-sized motor programs. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. What makes computational open source software libraries successful?

    NASA Astrophysics Data System (ADS)

    Bangerth, Wolfgang; Heister, Timo

    2013-01-01

    Software is the backbone of scientific computing. Yet, while we regularly publish detailed accounts about the results of scientific software, and while there is a general sense of which numerical methods work well, our community is largely unaware of best practices in writing the large-scale, open source scientific software upon which our discipline rests. This is particularly apparent in the commonly held view that writing successful software packages is largely the result of simply ‘being a good programmer’ when in fact there are many other factors involved, for example the social skill of community building. In this paper, we consider what we have found to be the necessary ingredients for successful scientific software projects and, in particular, for software libraries upon which the vast majority of scientific codes are built today. In particular, we discuss the roles of code, documentation, communities, project management and licenses. We also briefly comment on the impact on academic careers of engaging in software projects.

  20. Program Code Generator for Cardiac Electrophysiology Simulation with Automatic PDE Boundary Condition Handling

    PubMed Central

    Punzalan, Florencio Rusty; Kunieda, Yoshitoshi; Amano, Akira

    2015-01-01

    Clinical and experimental studies involving human hearts can have certain limitations. Methods such as computer simulations can be an important alternative or supplemental tool. Physiological simulation at the tissue or organ level typically involves the handling of partial differential equations (PDEs). Boundary conditions and distributed parameters, such as those used in pharmacokinetics simulation, add to the complexity of the PDE solution. These factors can tailor PDE solutions and their corresponding program code to specific problems. Boundary condition and parameter changes in the customized code are usually error-prone and time-consuming. We propose a general approach for handling PDEs and boundary conditions in computational models using a replacement scheme for discretization. This study is an extension of a program generator that we introduced in a previous publication. The program generator can generate code for multi-cell simulations of cardiac electrophysiology. Improvements to the system allow it to handle simultaneous equations in the biological function model as well as implicit PDE numerical schemes. The replacement scheme involves substituting all partial differential terms with numerical solution equations. Once the model and boundary equations are discretized with the numerical solution scheme, instances of the equations are generated to undergo dependency analysis. The result of the dependency analysis is then used to generate the program code. The resulting program code is in the Java or C programming language. To validate the automatic handling of boundary conditions in the program code generator, we generated simulation code using the FHN, Luo-Rudy 1, and Hund-Rudy cell models and ran cell-to-cell coupling and action potential propagation simulations. One of the simulations is based on a published experiment, and its results are compared with the experimental data. We conclude that the proposed program code generator can be used to generate code for physiological simulations and provides a tool for studying cardiac electrophysiology. PMID:26356082
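
    To illustrate what such a replacement scheme produces, the sketch below hand-writes the discretized form of a 1-D FitzHugh-Nagumo cable model: the partial differential term is replaced by its central-difference expression, and the no-flux boundary condition by mirrored ghost nodes. Parameter values are typical textbook choices, not taken from the paper.

      import numpy as np

      def fhn_step(v, w, dt=0.05, dx=0.5, D=1.0, eps=0.08, beta=0.7, gamma=0.8):
          """One explicit step of the 1-D FitzHugh-Nagumo cable model.

          The PDE term D * d2v/dx2 is replaced by its central-difference
          expression; no-flux (Neumann) boundaries are handled by mirroring
          the neighbor node into ghost nodes at each end.
          """
          vm = np.empty(len(v) + 2)
          vm[1:-1] = v
          vm[0], vm[-1] = v[1], v[-2]                  # ghost nodes (zero flux)
          lap = (vm[2:] - 2 * v + vm[:-2]) / dx**2     # replaced PDE term
          v_new = v + dt * (v - v**3 / 3 - w + D * lap)
          w_new = w + dt * eps * (v + beta - gamma * w)
          return v_new, w_new

      # Stimulate the left end and let the action potential propagate:
      v = -1.2 * np.ones(200); w = -0.6 * np.ones(200)
      v[:10] = 1.0
      for _ in range(2000):
          v, w = fhn_step(v, w)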

  1. GSE, data management system programmers/User' manual

    NASA Technical Reports Server (NTRS)

    Schlagheck, R. A.; Dolerhie, B. D., Jr.; Ghiglieri, F. J.

    1974-01-01

    The GSE data management system is a computerized program which provides for a central storage source for key data associated with the mechanical ground support equipment (MGSE). Eight major sort modes can be requested by the user. Attributes that are printed automatically with each sort include the GSE end item number, description, class code, functional code, fluid media, use location, design responsibility, weight, cost, quantity, dimensions, and applicable documents. Multiple subsorts are available for the class code, functional code, fluid media, use location, design responsibility, and applicable document categories. These sorts and how to use them are described. The program and GSE data bank may be easily updated and expanded.

  2. Automatic Processing of Reactive Polymers

    NASA Technical Reports Server (NTRS)

    Roylance, D.

    1985-01-01

    A series of process-modeling computer codes was examined. The codes use finite element techniques to determine the time-dependent process parameters operative during nonisothermal reactive flows such as occur in reaction injection molding or composites fabrication. The use of these analytical codes to perform experimental control functions is examined; since the models can determine the state of all variables everywhere in the system, they can be used in a manner similar to currently available experimental probes. A small but well-instrumented reaction vessel, in which fiber-reinforced plaques are cured under computer control and data acquisition, was used. The finite element codes were also extended to treat this particular process.

  3. Validation of the Operating and Support Cost Model for Avionics Automatic Test Equipment (OSCATE).

    DTIC Science & Technology

    1980-06-01

  4. Strengths and limitations of the NATALI code for aerosol typing from multiwavelength Raman lidar observations

    NASA Astrophysics Data System (ADS)

    Nicolae, Doina; Talianu, Camelia; Vasilescu, Jeni; Nicolae, Victor; Stachlewska, Iwona S.

    2018-04-01

    A Python code was developed to automatically retrieve the aerosol type (and its predominant component in the mixture) from EARLINET's three backscatter and two extinction channels. The typing relies on Artificial Neural Networks which are trained to identify the most probable aerosol type from a set of mean-layer intensive optical parameters. This paper presents the use and limitations of the code with respect to the quality of the input lidar profiles, as well as to the assumptions made in the aerosol model.

  5. A Clustering-Based Approach to Enriching Code Foraging Environment.

    PubMed

    Niu, Nan; Jin, Xiaoyu; Niu, Zhendong; Cheng, Jing-Ru C; Li, Ling; Kataev, Mikhail Yu

    2016-09-01

    Developers often spend valuable time navigating and seeking relevant code during software maintenance. Currently, there is a lack of theoretical foundations to guide tool design and evaluation to best shape the code base for developers. This paper contributes a unified code navigation theory in light of optimal food-foraging principles. We further develop a novel framework for automatically assessing foraging mechanisms in the context of program investigation. We use the framework to examine to what extent the clustering of software entities affects code foraging. Our quantitative analysis of long-lived open-source projects suggests that clustering enriches the software environment and improves foraging efficiency. Our qualitative inquiry reveals concrete insights into real developers' behavior. Our research opens the avenue toward building a new set of ecologically valid code navigation tools.

  6. Source Lines Counter (SLiC) Version 4.0

    NASA Technical Reports Server (NTRS)

    Monson, Erik W.; Smith, Kevin A.; Newport, Brian J.; Gostelow, Roli D.; Hihn, Jairus M.; Kandt, Ronald K.

    2011-01-01

    Source Lines Counter (SLiC) is a software utility designed to measure software source code size using logical source statements and other common measures for 22 of the programming languages commonly used at NASA and in the aerospace industry. Such metrics can be used in a wide variety of applications, from parametric cost estimation to software defect analysis. SLiC has a variety of unique features such as automatic code search, automatic file detection, hierarchical directory totals, and spreadsheet-compatible output. SLiC was written for extensibility; support for a new programming language can be added with minimal effort in a short amount of time. SLiC runs on a variety of platforms including UNIX, Windows, and Mac OS X. Its straightforward command-line interface allows for customization and incorporation into the software build process for tracking development metrics.
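
    As a much-simplified illustration of logical source-line counting, the sketch below skips blank and comment-only lines in a Python file; real counters such as SLiC apply far richer per-language rules (continuations, strings, multiple statements per line, and so on).

      def count_logical_lines(path):
          """Crude logical-line counter for Python source (illustration only).

          Blank lines and comment-only lines are skipped; everything else
          is counted as one logical statement.
          """
          count = 0
          for line in open(path, encoding="utf-8"):
              stripped = line.strip()
              if stripped and not stripped.startswith("#"):
                  count += 1
          return count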

  7. Segmentation, dynamic storage, and variable loading on CDC equipment

    NASA Technical Reports Server (NTRS)

    Tiffany, S. H.

    1980-01-01

    Techniques for varying the segmented load structure of a program and for varying the dynamic storage allocation, depending upon whether a batch type or interactive type run is desired, are explained and demonstrated. All changes are based on a single data input to the program. The techniques involve: code within the program to suppress scratch pad input/output (I/O) for a batch run or translate the in-core data storage area from blank common to the end-of-code+1 address of a particular segment for an interactive run; automatic editing of the segload directives prior to loading, based upon data input to the program, to vary the structure of the load for interactive and batch runs; and automatic editing of the load map to determine the initial addresses for in core data storage for an interactive run.

  8. Automatic Generation of OpenMP Directives and Its Application to Computational Fluid Dynamics Codes

    NASA Technical Reports Server (NTRS)

    Yan, Jerry; Jin, Haoqiang; Frumkin, Michael; Yan, Jerry (Technical Monitor)

    2000-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. As great progress has been made in hardware and software technologies, the performance of parallel programs using compiler directives has improved substantially. The introduction of OpenMP directives, the industry standard for shared-memory programming, has minimized the issue of portability. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate OpenMP-based parallel programs with nominal user assistance. We outline techniques used in the implementation of the tool and discuss the application of this tool to the NAS Parallel Benchmarks and several computational fluid dynamics codes. This work demonstrates the great potential of using the tool to quickly port parallel programs and to achieve good performance that exceeds that of some commercial tools.

  9. [Dispositional mindfulness modulates automatic transference of disgust into moral judgment].

    PubMed

    Sato, Atsushi; Sugiura, Yoshinori

    2014-02-01

    Previous studies showed that incidental feelings of disgust could make moral judgments more severe. In the present study, we investigated whether individual differences in mindfulness modulated automatic transference of disgust into moral judgment. Undergraduates were divided into high- and low-mindfulness groups based on the mean score on each subscale of the Five Facet Mindfulness Questionnaire (FFMQ). Participants were asked to write about a disgusting experience or an emotionally neutral experience, and then to evaluate moral (impersonal vs. high-conflict personal) and non-moral scenarios. The results showed that the disgust induction made moral judgments more severe for the low "acting with awareness" participants, whereas it did not influence the moral judgments of the high "acting with awareness" participants irrespective of type of moral dilemma. The other facets of the FFMQ did not modulate the effect of disgust on moral judgment. These findings suggest that being present prevents automatic transference of disgust into moral judgment even when prepotent emotions elicited by the thought of killing one person to save several others and utilitarian reasoning conflict.

  10. Joint three-dimensional inversion of coupled groundwater flow and heat transfer based on automatic differentiation: sensitivity calculation, verification, and synthetic examples

    NASA Astrophysics Data System (ADS)

    Rath, V.; Wolf, A.; Bücker, H. M.

    2006-10-01

    Inverse methods are useful tools not only for deriving estimates of unknown parameters of the subsurface, but also for appraisal of the models thus obtained. While neither the most general nor the most efficient method, Bayesian inversion based on the calculation of the Jacobian of a given forward model can be used to evaluate many quantities useful in this process. The calculation of the Jacobian, however, is computationally expensive and, if done by divided differences, prone to truncation error. Here, automatic differentiation can be used to produce derivative code by source transformation of an existing forward model. We describe this process for a coupled fluid flow and heat transport finite difference code, which is used in a Bayesian inverse scheme to estimate thermal and hydraulic properties and boundary conditions from measured hydraulic potentials and temperatures. The resulting derivative code was validated by comparison to simple analytical solutions and divided differences. Synthetic examples from different flow regimes demonstrate the use of the inverse scheme and its behaviour in different configurations.
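
    The contrast between divided differences and automatic differentiation can be seen with a minimal forward-mode example: a dual number carries an exact derivative through the computation, while the one-sided difference suffers truncation error. This toy is ours; the paper uses source transformation of the forward code, not operator overloading.

      import math

      class Dual:
          """Minimal forward-mode AD value carrying (value, derivative)."""
          def __init__(self, val, der=0.0):
              self.val, self.der = val, der
          def __mul__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val * o.val,
                          self.der * o.val + self.val * o.der)
          __rmul__ = __mul__
          def sin(self):
              return Dual(math.sin(self.val), math.cos(self.val) * self.der)

      def f(x):        # forward model: f(x) = x * sin(x)
          return (x * x.sin()) if isinstance(x, Dual) else x * math.sin(x)

      x0 = 1.3
      exact = math.sin(x0) + x0 * math.cos(x0)
      ad = f(Dual(x0, 1.0)).der               # exact to machine precision
      fd = (f(x0 + 1e-6) - f(x0)) / 1e-6      # truncation + rounding error
      print(abs(ad - exact), abs(fd - exact))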

  11. Container-code recognition system based on computer vision and deep neural networks

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Li, Tianjian; Jiang, Li; Liang, Xiaoyao

    2018-04-01

    Automatic container-code recognition has become a crucial requirement for the ship transportation industry in recent years. In this paper, an automatic container-code recognition system based on computer vision and deep neural networks is proposed. The system consists of two modules: a detection module and a recognition module. The detection module applies both computer-vision algorithms and neural networks, and combines their outputs into a better detection result that avoids the drawbacks of either method alone. The combined detection results are also collected for online training of the neural networks. The recognition module exploits both character segmentation and end-to-end recognition, and outputs the recognition result that passes verification. When the recognition module generates a false recognition, the result is corrected and collected for online training of the end-to-end recognition sub-module. By combining several algorithms, the system is able to deal with more situations, and the online training mechanism improves the performance of the neural networks at runtime. The proposed system achieves 93% overall recognition accuracy.

  12. Five-way smoking status classification using text hot-spot identification and error-correcting output codes.

    PubMed

    Cohen, Aaron M

    2008-01-01

    We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2b2 task organizers, using micro- and macro-averaged F1 as the primary performance metric. Our best performing system achieved a micro-F1 of 0.9000 on the test collection, equivalent to the best performing system submitted to the i2b2 challenge. Hot-spot identification, zero-vector filtering, classifier weighting, and error correcting output coding contributed additively to increased performance, with hot-spot identification having by far the largest positive effect. High performance on automatic identification of patient smoking status from discharge summaries is achievable with the efficient and straightforward machine learning techniques studied here.
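
    A compact sketch of the error-correcting output codes technique with scikit-learn follows; the snippets and labels are toy stand-ins for the hot-spot passages extracted from discharge summaries, and this is not the authors' pipeline.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.multiclass import OutputCodeClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import LinearSVC

      # Toy stand-ins for hot-spot snippets from discharge summaries
      docs = ["quit smoking two years ago", "denies tobacco use",
              "smokes one pack per day", "former smoker", "never smoked"]
      labels = ["past", "non", "current", "past", "never"]

      # Each class gets a random binary code word; one binary classifier
      # is trained per code bit, and prediction picks the nearest code word.
      clf = make_pipeline(
          TfidfVectorizer(),
          OutputCodeClassifier(LinearSVC(), code_size=2.0, random_state=0),
      )
      clf.fit(docs, labels)
      print(clf.predict(["patient stopped smoking in 2005"]))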

  13. Automatic blocking of nested loops

    NASA Technical Reports Server (NTRS)

    Schreiber, Robert; Dongarra, Jack J.

    1990-01-01

    Blocked algorithms have much better properties of data locality and therefore can be much more efficient than ordinary algorithms when a memory hierarchy is involved. On the other hand, they are very difficult to write and to tune for particular machines. The reorganization is considered of nested loops through the use of known program transformations in order to create blocked algorithms automatically. The program transformations used are strip mining, loop interchange, and a variant of loop skewing in which invertible linear transformations (with integer coordinates) of the loop indices are allowed. Some problems are solved concerning the optimal application of these transformations. It is shown, in a very general setting, how to choose a nearly optimal set of transformed indices. It is then shown, in one particular but rather frequently occurring situation, how to choose an optimal set of block sizes.
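
    The core idea, strip mining the loops so each pair of outer iterations works on a cache-sized tile, can be sketched in a few lines; pure Python is used here for clarity, although the payoff is of course realized in compiled languages.

      def transpose_blocked(a, b, n, bs=64):
          """b = a^T for n x n row-major lists-of-lists, in bs x bs tiles.

          The naive loop streams through `b` with stride n and misses cache
          on nearly every store; tiling keeps the source and destination
          rows of one tile resident while that tile is processed.
          """
          for ii in range(0, n, bs):            # strip-mined outer loops
              for jj in range(0, n, bs):
                  for i in range(ii, min(ii + bs, n)):
                      for j in range(jj, min(jj + bs, n)):
                          b[j][i] = a[i][j]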

  14. Neuroimaging during Trance State: A Contribution to the Study of Dissociation

    PubMed Central

    Peres, Julio Fernando; Moreira-Almeida, Alexander; Caixeta, Leonardo; Leao, Frederico; Newberg, Andrew

    2012-01-01

    Despite increasing interest in pathological and non-pathological dissociation, few researchers have focused on the spiritual experiences involving dissociative states such as mediumship, in which an individual (the medium) claims to be in communication with, or under the control of, the mind of a deceased person. Our preliminary study investigated psychography – in which allegedly “the spirit writes through the medium's hand” – for potential associations with specific alterations in cerebral activity. We examined ten healthy psychographers – five less expert mediums and five with substantial experience, ranging from 15 to 47 years of automatic writing and 2 to 18 psychographies per month – using single photon emission computed tomography to scan activity as subjects were writing, in both dissociative trance and non-trance states. The complexity of the original written content they produced was analyzed for each individual and for the sample as a whole. The experienced psychographers showed lower levels of activity in the left culmen, left hippocampus, left inferior occipital gyrus, left anterior cingulate, right superior temporal gyrus and right precentral gyrus during psychography compared to their normal (non-trance) writing. The average complexity scores for psychographed content were higher than those for control writing, for both the whole sample and for experienced mediums. The fact that subjects produced complex content in a trance dissociative state suggests they were not merely relaxed, and relaxation seems an unlikely explanation for the underactivation of brain areas specifically related to the cognitive processing being carried out. This finding deserves further investigation both in terms of replication and explanatory hypotheses. PMID:23166648

  15. Language style matching in writing: synchrony in essays, correspondence, and poetry.

    PubMed

    Ireland, Molly E; Pennebaker, James W

    2010-09-01

    Each relationship has its own personality. Almost immediately after a social interaction begins, verbal and nonverbal behaviors become synchronized. Even in asocial contexts, individuals tend to produce utterances that match the grammatical structure of sentences they have recently heard or read. Three projects explore language style matching (LSM) in everyday writing tasks and professional writing. LSM is the relative use of 9 function word categories (e.g., articles, personal pronouns) between any 2 texts. In the first project, 2 samples totaling 1,744 college students answered 4 essay questions written in very different styles. Students automatically matched the language style of the target questions. Overall, the LSM metric was internally consistent and reliable across writing tasks. Women, participants of higher socioeconomic status, and students who earned higher test grades matched with targets more than others did. In the second project, 74 participants completed cliffhanger excerpts from popular fiction. Judges' ratings of excerpt-response similarity were related to content matching but not function word matching, as indexed by LSM. Further, participants were not able to intentionally increase style or content matching. In the final project, an archival study tracked the professional writing and personal correspondence of 3 pairs of famous writers across their relationships. Language matching in poetry and letters reflected fluctuations in the relationships of 3 couples: Sigmund Freud and Carl Jung, Elizabeth Barrett and Robert Browning, and Sylvia Plath and Ted Hughes. Implications for using LSM as an implicit marker of social engagement and influence are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved).
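
    The LSM metric itself is simple to compute: per category, similarity is 1 - |p1 - p2| / (p1 + p2 + 0.0001), where p is the percentage of a text's words in that category, averaged over categories. The sketch below uses two tiny illustrative word lists in place of the nine LIWC function-word categories.

      def lsm(text1, text2, categories):
          """Language style matching between two texts over word categories."""
          def rates(text):
              words = text.lower().split()
              return [100.0 * sum(w in cat for w in words) / len(words)
                      for cat in categories]
          r1, r2 = rates(text1), rates(text2)
          return sum(1 - abs(a - b) / (a + b + 1e-4)
                     for a, b in zip(r1, r2)) / len(categories)

      # Two toy categories standing in for the nine function-word classes
      cats = [{"the", "a", "an"}, {"i", "you", "he", "she", "we", "they"}]
      print(lsm("the cat sat on a mat", "a dog ate the bone", cats))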

  16. Automatic morphological classification of galaxy images

    PubMed Central

    Shamir, Lior

    2009-01-01

    We describe an image analysis supervised learning algorithm that can automatically classify galaxy images. The algorithm is first trained using manually classified images of elliptical, spiral, and edge-on galaxies. A large set of image features is extracted from each image, and the most informative features are selected using Fisher scores. Test images can then be classified using a simple weighted nearest neighbor rule such that the Fisher scores are used as the feature weights. Experimental results show that galaxy images from Galaxy Zoo can be classified automatically as spiral, elliptical, or edge-on with an accuracy of ~90% compared to classifications carried out by the author. Full compilable source code of the algorithm is available for free download, and its general-purpose nature makes it suitable for other uses that involve automatic image analysis of celestial objects. PMID:20161594
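
    A minimal sketch of the two ingredients, Fisher scores for feature weighting and a weighted nearest-neighbor rule, is given below; it is our illustration of the stated approach, not the released source code.

      import numpy as np

      def fisher_scores(X, y):
          """Per-feature Fisher score: between-class variance of the class
          means divided by the weighted within-class variance."""
          classes = np.unique(y)
          mu = X.mean(axis=0)
          num = sum((y == c).sum() * (X[y == c].mean(axis=0) - mu) ** 2
                    for c in classes)
          den = sum((y == c).sum() * X[y == c].var(axis=0) for c in classes)
          return num / (den + 1e-12)

      def weighted_nn_predict(X_train, y_train, x, w):
          """1-nearest-neighbor with Fisher scores as feature weights."""
          d = (w * (X_train - x) ** 2).sum(axis=1)
          return y_train[np.argmin(d)]

      # Usage: w = fisher_scores(X_train, y_train), then classify each test
      # feature vector x with weighted_nn_predict(X_train, y_train, x, w).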

  17. COMPREHENSIVE PBPK MODELING APPROACH USING THE EXPOSURE RELATED DOSE ESTIMATING MODEL (ERDEM)

    EPA Science Inventory

    ERDEM, a complex PBPK modeling system, is the result of the implementation of a comprehensive PBPK modeling approach. ERDEM provides a scalable and user-friendly environment that enables researchers to focus on data input values rather than writing program code. It efficiently ...

  18. NeoAnalysis: a Python-based toolbox for quick electrophysiological data processing and analysis.

    PubMed

    Zhang, Bo; Dai, Ji; Zhang, Tao

    2017-11-13

    In a typical electrophysiological experiment, especially one that includes studying animal behavior, the data collected normally contain spikes, local field potentials, behavioral responses and other associated data. In order to obtain informative results, the data must be analyzed simultaneously with the experimental settings. However, most open-source toolboxes currently available for data analysis were developed to handle only a portion of the data and did not take into account the sorting of experimental conditions. Additionally, these toolboxes require that the input data be in a specific format, which can be inconvenient to users. Therefore, the development of a highly integrated toolbox that can process multiple types of data regardless of input data format and perform basic analysis for general electrophysiological experiments would be incredibly useful. Here, we report the development of a Python-based open-source toolbox, referred to as NeoAnalysis, to be used for quick electrophysiological data processing and analysis. The toolbox can import data from different data acquisition systems regardless of their formats and automatically combine different types of data into a single file with a standardized format. In cases where additional spike sorting is needed, NeoAnalysis provides a module to perform efficient offline sorting with a user-friendly interface. Then, NeoAnalysis can perform regular analog signal processing, spike train and local field potential analysis, behavioral response (e.g. saccade) detection and extraction, with several options available for data plotting and statistics. Particularly, it can automatically generate sorted results without requiring users to manually sort data beforehand. In addition, NeoAnalysis can organize all of the relevant data into an informative table on a trial-by-trial basis for data visualization. Finally, NeoAnalysis supports analysis at the population level. With the multitude of general-purpose functions provided by NeoAnalysis, users can easily obtain publication-quality figures without writing complex code. NeoAnalysis is a powerful and valuable toolbox for users conducting electrophysiological experiments.

  19. Use of data description languages in the interchange of data

    NASA Technical Reports Server (NTRS)

    Pignede, M.; Real-Planells, B.; Smith, S. R.

    1994-01-01

    The Consultative Committee for Space Data Systems (CCSDS) is developing Standards for the interchange of information between systems, including those operating under different environments. The objective is to perform the interchange automatically, i.e. in a computer interpretable manner. One aspect of the concept developed by CCSDS is the use of a separate data description to specify the data being transferred. Using the description, data can then be automatically parsed by the receiving computer. With a suitably expressive Data Description Language (DDL), data formats of arbitrary complexity can be handled. The advantages of this approach are: (1) that the description need only be written and distributed once to all users, and (2) new software does not need to be written for each new format, provided generic tools are available to support writing and interpretation of descriptions and the associated data instances. Consequently, the effort of 'hard coding' each new format is avoided and problems of integrating multiple implementations of a given format by different users are avoided. The approach is applicable in any context where computer parsable description of data could enhance efficiency (e.g. within a spacecraft control system, a data delivery system or an archive). The CCSDS have identified several candidate DDL's: EAST (Extended Ada Subset), TSDN (Transfer Syntax Data Notation) and MADEL (Modified ASN.1 as a Data Description Language -- a DDL based on the Abstract Syntax Notation One - ASN.1 - specified in the ISO/IEC 8824). This paper concentrates on ESA's development of MADEL. ESA have also developed a 'proof of concept' prototype of the required support tools, implemented on a PC under MS-DOS, which has successfully demonstrated the feasibility of the approach, including the capability within an application of retrieving and displaying particular data elements, given its MADEL description (i.e. a data description written in MADEL). This paper outlines the work done to date and assesses the applicability of this modified ASN.1 as a DDL. The feasibility of the approach is illustrated with several examples.

  20. Sequential Prediction of Literacy Achievement for Specific Learning Disabilities Contrasting in Impaired Levels of Language in Grades 4 to 9

    PubMed Central

    Sanders, Elizabeth A.; Berninger, Virginia W.; Abbott, Robert D.

    2017-01-01

    Sequential regression was used to evaluate whether language-related working memory components uniquely predict reading and writing achievement beyond cognitive-linguistic translation for students in grades 4–9 (N=103) with specific learning disabilities (SLDs) in subword handwriting (dysgraphia, n=25), word reading and spelling (dyslexia, n=60), or oral and written language (OWL LD, n=18). That is, SLDs are defined on basis of cascading level of language impairment (subword, word, and syntax/text). A 5-block regression model sequentially predicted literacy achievement from cognitive-linguistic translation (Block 1); working memory components for word form coding (Block 2), phonological and orthographic loops (Block 3), and supervisory focused or switching attention (Block4); and SLD groups (Block 5). Results showed that cognitive-linguistic translation explained an average of 27% and 15% of the variance in reading and writing achievement, respectively, but working memory components explained an additional 39% and 27% variance. Orthographic word form coding uniquely predicted nearly every measure, whereas attention switching only uniquely predicted reading. Finally, differences in reading and writing persisted between dyslexia and dysgraphia, with dysgraphia higher, even after controlling for Block 1 to 4 predictors. Differences in literacy achievement between students with dyslexia and OWL LD were largely explained by the Block 1 predictors. Applications to identifying and teaching students with these SLDs are discussed. PMID:28199175
