A Simple, Scalable, Script-based Science Processor
NASA Technical Reports Server (NTRS)
Lynnes, Christopher
2004-01-01
The production of Earth Science data from orbiting spacecraft is an activity that takes place 24 hours a day, 7 days a week. At the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC), this results in as many as 16,000 program executions each day, far too many to be run by human operators. In fact, when the Moderate Resolution Imaging Spectroradiometer (MODIS) was launched aboard the Terra spacecraft in 1999, the automated commercial system for running science processing was able to manage no more than 4,000 executions per day. Consequently, the GES DAAC developed a lightweight system based on the popular Perl scripting language, named the Simple, Scalable, Script-based Science Processor (S4P). S4P automates science processing, allowing operators to focus on the rare problems arising from anomalies in data or algorithms. S4P has been reused in several systems ranging from routine processing of MODIS data to data mining, and is publicly available from NASA.
Göritz, Anja S; Birnbaum, Michael H
2005-11-01
The customizable PHP script Generic HTML Form Processor is intended to assist researchers and students in quickly setting up surveys and experiments that can be administered via the Web. This script relieves researchers of the burden of writing new CGI scripts and building databases for each Web study. Generic HTML Form Processor processes any syntactically correct HTML form input and saves it into a dynamically created open-source database. We describe five modes for usage of the script that allow increasing functionality but require increasing levels of knowledge of PHP and Web servers: the first two modes require no previous knowledge, and the fifth requires PHP programming expertise. Use of Generic HTML Form Processor is free for academic purposes, and its Web address is www.goeritz.net/brmic.
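To make the mechanism concrete: saving arbitrary form fields into a dynamically created table can be sketched in a few lines. The sketch below is my own Python/SQLite illustration of the concept, not the PHP script itself; the table and field names are hypothetical, and real code would sanitize field names before building SQL.

    import sqlite3

    def save_form_submission(form_data, db_path="survey.db", table="responses"):
        """Persist arbitrary name/value form fields, creating columns on demand."""
        con = sqlite3.connect(db_path)
        con.execute(f'CREATE TABLE IF NOT EXISTS "{table}" (id INTEGER PRIMARY KEY)')
        existing = {row[1] for row in con.execute(f'PRAGMA table_info("{table}")')}
        for field in form_data:                      # add a column per new field
            if field not in existing:
                con.execute(f'ALTER TABLE "{table}" ADD COLUMN "{field}" TEXT')
        cols = ", ".join(f'"{f}"' for f in form_data)
        marks = ", ".join("?" for _ in form_data)
        con.execute(f'INSERT INTO "{table}" ({cols}) VALUES ({marks})',
                    list(form_data.values()))
        con.commit()
        con.close()

    # Hypothetical fields as they might arrive from an HTML form POST.
    save_form_submission({"age": "29", "condition": "B", "answer1": "agree"})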
Parallel text rendering by a PostScript interpreter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kritskii, S.P.; Zastavnoi, B.A.
1994-11-01
The most radical method of increasing the performance of devices controlled by PostScript interpreters may be the use of multiprocessor controllers. This paper presents a method for parallelizing the operation of a PostScript interpreter for rendering text. The proposed method is based on decomposition of the outlines of letters into horizontal strips covering equal areas. The subregions thus obtained are distributed to the processors in a network and then filled in by conventional sequential algorithms. A special algorithm has been developed for dividing the outlines of characters into subregions so that each may be colored independently of the others. The algorithm uses special estimates to find a correct partition, so that the corresponding outlines are divided into horizontal strips. A method is presented for finding such estimates. Two different processing approaches are presented. In the first, one of the processors performs the decomposition of the outlines and distributes the strips to the remaining processors, which are responsible for the rendering. In the second approach, the decomposition process is itself distributed among the processors in the network.
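The central step, cutting a glyph into horizontal strips of equal area, can be approximated numerically by integrating the glyph's width over height and interpolating cut lines at equal increments of cumulative area. The following Python sketch illustrates that idea only; it is not the authors' algorithm.

    import numpy as np

    def equal_area_cuts(width_of_y, y_min, y_max, n_strips, samples=1000):
        """Find y-coordinates that split a shape into strips of equal area.

        width_of_y: function giving the shape's total horizontal extent at height y.
        """
        ys = np.linspace(y_min, y_max, samples)
        widths = np.array([width_of_y(y) for y in ys])
        # Trapezoidal cumulative area from the bottom of the shape upward.
        cum_area = np.concatenate(([0.0],
                                   np.cumsum((widths[1:] + widths[:-1]) / 2
                                             * np.diff(ys))))
        targets = np.linspace(0, cum_area[-1], n_strips + 1)
        return np.interp(targets, cum_area, ys)  # strip boundaries in y

    # A triangle with its apex at the top: width shrinks linearly with height.
    cuts = equal_area_cuts(lambda y: 1.0 - y, 0.0, 1.0, n_strips=4)
    print(cuts)  # lower strips are thinner because the shape is wider there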
Simple, Scalable, Script-based, Science Processor for Measurements - Data Mining Edition (S4PM-DME)
NASA Astrophysics Data System (ADS)
Pham, L. B.; Eng, E. K.; Lynnes, C. S.; Berrick, S. W.; Vollmer, B. E.
2005-12-01
The S4PM-DME is the Goddard Earth Sciences Distributed Active Archive Center's (GES DAAC) web-based data mining environment. The S4PM-DME replaces the Near-line Archive Data Mining (NADM) system with a better web environment and a richer set of production rules. S4PM-DME enables registered users to submit and execute custom data mining algorithms. The S4PM-DME system uses the GES DAAC-developed Simple Scalable Script-based Science Processor for Measurements (S4PM) to automate tasks and perform the actual data processing. A web interface allows the user to access the S4PM-DME system. The user first develops a personalized data mining algorithm on his/her home platform and then uploads it to the S4PM-DME system. Algorithms in the C and FORTRAN languages are currently supported. The user-developed algorithm is automatically audited for any potential security problems before it is installed within the S4PM-DME system and made available to the user. Once the algorithm has been installed, the user can promote it to the "operational" environment. From here the user can search and order the data available in the GES DAAC archive for his/her science algorithm. The user can also set up a processing subscription, which will automatically process new data as it becomes available in the GES DAAC archive. The generated mined data products are then made available for FTP pickup. The benefits of using S4PM-DME are 1) reduced time spent downloading GES DAAC data to the user's system, which also offloads heavy network traffic; 2) a lighter processing load on the user's own system; and 3) access to the rich and abundant ocean and atmosphere data from the MODIS and AIRS instruments available from the GES DAAC.
Simple, Scalable, Script-Based Science Processor (S4P)
NASA Technical Reports Server (NTRS)
Lynnes, Christopher; Vollmer, Bruce; Berrick, Stephen; Mack, Robert; Pham, Long; Zhou, Bryan; Wharton, Stephen W. (Technical Monitor)
2001-01-01
The development and deployment of data processing systems to process Earth Observing System (EOS) data has proven to be costly and prone to technical and schedule risk. Integration of science algorithms into a robust operational system has been difficult. The core processing system, based on commercial tools, has demonstrated limitations at the rates needed to produce the several terabytes per day for EOS, primarily due to job management overhead. This has motivated an evolution of the EOS Data and Information System toward a more distributed one incorporating Science Investigator-led Processing Systems (SIPS). As part of this evolution, the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC) has developed a simplified processing system to accommodate the increased load expected with the advent of reprocessing and launch of a second satellite. This system, the Simple, Scalable, Script-based Science Processor (S4P), may also serve as a resource for future SIPS. The current EOSDIS Core System was designed to be general, resulting in a large, complex mix of commercial and custom software. In contrast, many simpler systems, such as the EROS Data Center AVHRR 1KM system, rely on a simple directory structure to drive processing, with directories representing different stages of production. The system passes input data to a directory, and the output data is placed in a "downstream" directory. The GES DAAC's Simple, Scalable, Script-based Science Processor is based on the latter concept, but with modifications to allow varied science algorithms and improve portability. It uses a factory assembly-line paradigm: when work orders arrive at a station, an executable is run, and output work orders are sent to downstream stations. The stations are implemented as UNIX directories, while work orders are simple ASCII files. The core S4P infrastructure consists of a Perl program called stationmaster, which detects newly arrived work orders and forks a job to run the appropriate executable (registered in a configuration file for that station). Although S4P is written in Perl, the executables associated with a station can be any program that can be run from the command line, i.e., non-interactively. An S4P instance is typically monitored using a simple Graphical User Interface. However, the reliance of S4P on UNIX files and directories also allows visibility into the state of stations and jobs using standard operating system commands, permitting remote monitoring and control over low-bandwidth connections. S4P is being used as the foundation for several small- to medium-size systems for data mining, on-demand subsetting, processing of direct broadcast Moderate Resolution Imaging Spectroradiometer (MODIS) data, and Quick-Response MODIS processing. It has also been used to implement a large-scale system to process MODIS Level 1 and Level 2 Standard Products, which will ultimately process close to 2 TB/day.
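The station pattern described above is compact enough to sketch: poll a station directory for new work-order files, run the station's executable on each, and pass output work orders downstream. S4P implements stationmaster in Perl; the Python rendering below is a conceptual illustration only, with hypothetical file-naming conventions, and it runs jobs serially rather than forking.

    import subprocess, time
    from pathlib import Path

    STATION = Path("stations/subset")            # one directory per station
    DOWNSTREAM = Path("stations/distribute")     # next stage of the assembly line
    EXECUTABLE = "./run_subset.sh"               # registered per-station executable

    def poll_station():
        for order in STATION.glob("DO.*.wo"):    # new work orders are ASCII files
            running = order.with_suffix(".running")
            order.rename(running)                # claim the job
            # The station executable consumes the work order and writes outputs.
            result = subprocess.run([EXECUTABLE, str(running)])
            if result.returncode == 0:
                for out in STATION.glob("OUT.*.wo"):
                    out.rename(DOWNSTREAM / out.name)   # send work downstream
                running.unlink()
            else:
                running.rename(order.with_suffix(".failed"))  # flag for operators

    while True:
        poll_station()
        time.sleep(5)   # stationmaster simply re-polls the directory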
Automated Sequence Processor: Something Old, Something New
NASA Technical Reports Server (NTRS)
Streiffert, Barbara; Schrock, Mitchell; Fisher, Forest; Himes, Terry
2012-01-01
High productivity is required for operations teams to meet schedules, and risk must be minimized. Scripting is used to automate processes, and scripts perform essential operations functions. The Automated Sequence Processor (ASP) was a grass-roots task built to automate the command uplink process; a system engineering task for ASP revitalization has been organized. ASP is a set of approximately 200 scripts written in Perl, C Shell, AWK, and other scripting languages. ASP processes, checks, and packages non-interactive commands automatically. Non-interactive commands are guaranteed to be safe and have been checked by hardware or software simulators. ASP verifies that commands are non-interactive, processes them through a command simulator, and then packages them if there are no errors. ASP must be active 24 hours/day, 7 days/week.
NASA Technical Reports Server (NTRS)
Kempler, Steve; Alcott, Gary; Lynnes, Chris; Leptoukh, Greg; Vollmer, Bruce; Berrick, Steve
2008-01-01
NASA Earth Sciences Division (ESD) has made great investments in the development and maintenance of data management systems and information technologies, to maximize the use of NASA generated Earth science data. With information management system infrastructure in place, mature and operational, very small delta costs are required to fully support data archival, processing, and data support services required by the recommended Decadal Study missions. This presentation describes the services and capabilities of the Goddard Space Flight Center (GSFC) Earth Sciences Data and Information Services Center (GES DISC) and the reusability for these future missions. The GES DISC has developed a series of modular, reusable data management components currently in use. They include data archive and distribution (Simple, Scalable, Script-based, Science [S4] Product Archive aka S4PA), data processing (S4 Processor for Measurements aka S4PM), data search (Mirador), data browse, visualization, and analysis (Giovanni), and data mining services. Information management system components are based on atmospheric scientist inputs. Large development and maintenance cost savings can be realized through their reuse in future missions.
ERIC Educational Resources Information Center
Demetriadis, Stavros; Egerter, Tina; Hanisch, Frank; Fischer, Frank
2011-01-01
This study investigates the effectiveness of using peer review in the context of scripted collaboration to foster both domain-specific and domain-general knowledge acquisition in the computer science domain. Using a one-factor design with a script and a control condition, students worked in small groups on a series of computer science problems…
Summary of Documentation for DYNA3D-ParaDyn's Software Quality Assurance Regression Test Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zywicz, Edward
The Software Quality Assurance (SQA) regression test suite for DYNA3D (Zywicz and Lin, 2015) and ParaDyn (DeGroot, et al., 2015) currently contains approximately 600 problems divided into 21 suites, and is a required component of ParaDyn's SQA plan (Ferencz and Oliver, 2013). The regression suite allows developers to ensure that software modifications do not unintentionally alter the code response. The entire regression suite is run prior to permanently incorporating any software modification or addition. When code modifications alter test problem results, the specific cause must be determined and fully understood before the software changes and revised test answers can be incorporated. The regression suite is executed on LLNL platforms using a Python script and an associated data file. The user specifies the DYNA3D or ParaDyn executable, number of processors to use, test problems to run, and other options to the script. The data file details how each problem and its answer extraction scripts are executed. For each problem in the regression suite there exists an input deck, an eight-processor partition file, an answer file, and various extraction scripts. These scripts assemble a temporary answer file in a specific format from the simulation results. The temporary and stored answer files are compared to a specific level of numerical precision, and when differences are detected the test problem is flagged as failed. Presently, numerical results are stored and compared to 16 digits. At this accuracy level, different processor types, compilers, numbers of partitions, etc. impact the results to various degrees. Thus, for consistency purposes the regression suite is run with ParaDyn using 8 processors on machines with a specific processor type (currently the Intel Xeon E5530 processor). For non-parallel regression problems, i.e., the two XFEM problems, DYNA3D is used instead. When environments or platforms change, executables using the current source code and the new resource are created and the regression suite is run. If differences in answers arise, the new answers are retained provided that the differences are inconsequential. This bootstrap approach allows the test suite answers to evolve in a controlled manner with a high level of confidence. Developers also run the entire regression suite with (serial) DYNA3D. While these results normally differ from the stored (parallel) answers, abnormal termination or wildly different values are strong indicators of potential issues.
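The comparison step described above, checking a temporary answer file against a stored one to a fixed number of digits and flagging failures, might look like the following Python sketch. The file format, paths, and digit count are illustrative assumptions; the actual LLNL script is not shown here.

    def answers_match(stored_path, temp_path, sig_digits=16):
        """Compare two whitespace-delimited numeric answer files."""
        with open(stored_path) as f1, open(temp_path) as f2:
            stored = f1.read().split()
            temp = f2.read().split()
        if len(stored) != len(temp):
            return False
        for a, b in zip(stored, temp):
            # Compare values formatted to the required significant digits.
            if f"{float(a):.{sig_digits - 1}e}" != f"{float(b):.{sig_digits - 1}e}":
                return False
        return True

    if not answers_match("ans/impact.ans", "tmp/impact.ans"):
        print("FAILED: impact")   # developer must determine the cause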
Using Selection Pressure as an Asset to Develop Reusable, Adaptable Software Systems
NASA Technical Reports Server (NTRS)
Berrick, Stephen; Lynnes, Christopher
2007-01-01
The Goddard Earth Sciences Data and Information Services Center (GES DISC) at NASA has over the years developed and honed several reusable architectural components for supporting large-scale data centers with a large customer base. These include a processing system (S4PM) and an archive system (S4PA), based upon a workflow engine called the Simple, Scalable, Script-based Science Processor (S4P), and an online data visualization and analysis system (Giovanni). These subsystems are currently reused internally in a variety of combinations to implement customized data management on behalf of instrument science teams and other science investigators. Some of these subsystems (S4P and S4PM) have also been reused by other data centers for operational science processing. Our experience has been that development and utilization of robust, interoperable, and reusable software systems can actually flourish in environments defined by heterogeneous commodity hardware systems, an emphasis on value-added customer service, and a continual goal of achieving higher cost efficiencies. The repeated internal reuse that is fostered by such an environment encourages and even forces changes to the software that make it more reusable and adaptable. Allowing and even encouraging such selective pressures on software development has been a key factor in the success of S4P and S4PM, which are now available to the open source community under the NASA Open Source Agreement.
Using Selection Pressure as an Asset to Develop Reusable, Adaptable Software Systems
NASA Astrophysics Data System (ADS)
Berrick, S. W.; Lynnes, C.
2007-12-01
The Goddard Earth Sciences Data and Information Services Center (GES DISC) at NASA has over the years developed and honed a number of reusable architectural components for supporting large-scale data centers with a large customer base. These include a processing system (S4PM) and an archive system (S4PA) based upon a workflow engine called the Simple, Scalable, Script-based Science Processor (S4P); an online data visualization and analysis system (Giovanni); and the radically simple and fast data search tool, Mirador. These subsystems are currently reused internally in a variety of combinations to implement customized data management on behalf of instrument science teams and other science investigators. Some of these subsystems (S4P and S4PM) have also been reused by other data centers for operational science processing. Our experience has been that development and utilization of robust, interoperable, and reusable software systems can actually flourish in environments defined by heterogeneous commodity hardware systems, the emphasis on value-added customer service, and continual cost reduction pressures. The repeated internal reuse that is fostered by such an environment encourages and even forces changes to the software that make it more reusable and adaptable. Allowing and even encouraging such selective pressures to software development has been a key factor in the success of S4P and S4PM, which are now available to the open source community under the NASA Open Source Agreement.
The Departmental Script as an Ongoing Conversation into the Phronesis of Teaching Science as Inquiry
NASA Astrophysics Data System (ADS)
Melville, Wayne; Campbell, Todd; Fazio, Xavier; Bartley, Anthony
2012-12-01
This article investigates the extent to which a science department script supports the teaching and learning of science as inquiry and how this script is translated into individual teachers' classrooms. This study was completed at one school in Canada which, since 2000, has developed a departmental script supportive of teaching and learning of science as inquiry. Through a mixed-method strategy, multiple data sources were drawn together to inform a cohesive narrative about scripts, science departments, and individual classrooms. Results of the study reveal three important findings: (1) the departmental script is not an artefact, but instead is an ongoing conversation into the episteme, techne and phronesis of science teaching; (2) the consistently reformed teaching practices that were observed lead us to believe that a departmental script has the capacity to enhance the teaching of science as inquiry; and, (3) the existence of a departmental script does not mean that teaching will be `standardized' in the bureaucratic sense of the word. Our findings indicate that a departmental script can be considered to concurrently operate as an epistemic script that is translated consistently across the classes, and a social script that was more open to interpretation within individual teachers' classrooms.
European Science Notes Information Bulletin Reports on Current European/Middle Eastern Science
1988-08-01
problems, and infrastructure and interfacing requirements. Development of Finite Element Software for Transputer-Based Parallel Processors. Introduction: One of the many problems in harnessing the power of a large number of processors on a single problem is whether it will be possible to harness these processors together to work on a common problem. The feasibility study at the UK's Kent University for development of a distributed supercomputer is being…
Using R in Taverna: RShell v1.2
Wassink, Ingo; Rauwerda, Han; Neerincx, Pieter BT; Vet, Paul E van der; Breit, Timo M; Leunissen, Jack AM; Nijholt, Anton
2009-01-01
Background: R is the statistical language commonly used by many life scientists in (omics) data analysis. At the same time, these complex analyses benefit from a workflow approach, such as used by the open source workflow management system Taverna. However, Taverna had limited support for R, because it supported just a few data types and only a single output. Also, there was no support for graphical output and persistent sessions. Altogether this made using R in Taverna impractical. Findings: We have developed an R plugin for Taverna: RShell, which provides R functionality within workflows designed in Taverna. In order to fully support the R language, our RShell plugin directly uses the R interpreter. The RShell plugin consists of a Taverna processor for R scripts and an RShell Session Manager that communicates with the R server. We made the RShell processor highly configurable, allowing the user to define multiple inputs and outputs. Also, various data types are supported, such as strings, numeric data, and images. To limit data transport between multiple RShell processors, the RShell plugin also supports persistent sessions. Here, we describe the architecture of RShell and the new features introduced in version 1.2, i.e.: i) support for R up to and including R version 2.9; ii) support for persistent sessions to limit data transfer; iii) support for vector graphics output through PDF; iv) syntax highlighting of the R code; v) improved usability through fewer port types. Our new RShell processor is backwards compatible with workflows that use older versions of the RShell processor. We demonstrate the value of the RShell processor by a use-case workflow that maps oligonucleotide probes designed with DNA sequence information from Vega onto the Ensembl genome assembly. Conclusion: Our RShell plugin enables Taverna users to employ R scripts within their workflows in a highly configurable way. PMID: 19607662
The Next Generation of Ground Operations Command and Control; Scripting in C# and Visual Basic
NASA Technical Reports Server (NTRS)
Ritter, George; Pedoto, Ramon
2010-01-01
Scripting languages have become a common method for implementing command and control solutions in space ground operations. The Systems Test and Operations Language (STOL), the Huntsville Operations Support Center (HOSC) Scripting Language Processor (SLP), and the Spacecraft Control Language (SCL) offer script-commands that wrap tedious operations tasks into single calls. Since script-commands are interpreted, they also offer a certain amount of hands-on control that is highly valued in space ground operations. Although compiled programs seem to be unsuited for interactive user control and are more complex to develop, Marshall Space Flight Center (MSFC) has developed a product called Enhanced and Redesign Scripting (ERS) that makes use of the graphical and logical richness of a programming language while offering the hands-on feel and ease of control of a scripting language. ERS is currently used by International Space Station (ISS) Payload Operations Integration Center (POIC) cadre team members. ERS integrates spacecraft command mnemonics, telemetry measurements, and command and telemetry control procedures into a standard programming language, while making use of Microsoft's Visual Studio for developing Visual Basic (VB) or C# ground operations procedures. ERS also allows for script-style user control during procedure execution using a robust graphical user input and output feature. The availability of VB and C# programmers, and the richness of the languages and their development environment, has allowed ERS to lower our "script" development time and maintenance costs at the Marshall POIC.
QRTEngine: An easy solution for running online reaction time experiments using Qualtrics.
Barnhoorn, Jonathan S; Haasnoot, Erwin; Bocanegra, Bruno R; van Steenbergen, Henk
2015-12-01
Performing online behavioral research is gaining increased popularity among researchers in psychological and cognitive science. However, the currently available methods for conducting online reaction time experiments are often complicated and typically require advanced technical skills. In this article, we introduce the Qualtrics Reaction Time Engine (QRTEngine), an open-source JavaScript engine that can be embedded in the online survey development environment Qualtrics. The QRTEngine can be used to easily develop browser-based online reaction time experiments with accurate timing within current browser capabilities, and it requires only minimal programming skills. After introducing the QRTEngine, we briefly discuss how to create and distribute a Stroop task. Next, we describe a study in which we investigated the timing accuracy of the engine under different processor loads using external chronometry. Finally, we show that the QRTEngine can be used to reproduce classic behavioral effects in three reaction time paradigms: a Stroop task, an attentional blink task, and a masked-priming task. These findings demonstrate that QRTEngine can be used as a tool for conducting online behavioral research even when this requires accurate stimulus presentation times.
Earth Science Mining Web Services
NASA Astrophysics Data System (ADS)
Pham, L. B.; Lynnes, C. S.; Hegde, M.; Graves, S.; Ramachandran, R.; Maskey, M.; Keiser, K.
2008-12-01
To allow scientists further capabilities in the area of data mining and web services, the Goddard Earth Sciences Data and Information Services Center (GES DISC) and researchers at the University of Alabama in Huntsville (UAH) have developed a system to mine data at the source without the need of network transfers. The system has been constructed by linking together several pre-existing technologies: the Simple Scalable Script-based Science Processor for Measurements (S4PM), a processing engine at the GES DISC; the Algorithm Development and Mining (ADaM) system, a data mining toolkit from UAH that can be configured in a variety of ways to create customized mining processes; ActiveBPEL, a workflow execution engine based on BPEL (Business Process Execution Language); XBaya, a graphical workflow composer; and the EOS Clearinghouse (ECHO). XBaya is used to construct an analysis workflow at UAH using ADaM components, which are also installed remotely at the GES DISC, wrapped as Web Services. The S4PM processing engine searches ECHO for data using space-time criteria, staging them to cache, allowing the ActiveBPEL engine to remotely orchestrate the processing workflow within S4PM. As mining is completed, the output is placed in an FTP holding area for the end user. The goals are to give users control over the data they want to process, while mining data at the data source using the server's resources rather than transferring the full volume over the internet. These diverse technologies have been infused into a functioning, distributed system with only minor changes to the underlying technologies. The key to this infusion is the loosely coupled, Web-Services-based architecture: all of the participating components are accessible (one way or another) through SOAP (Simple Object Access Protocol)-based Web Services.
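Because each participating component is wrapped as a SOAP Web Service, a single workflow step reduces to an ordinary remote call. Here is a minimal Python sketch using the third-party zeep SOAP client; the WSDL URL, service, and operation names below are hypothetical.

    from zeep import Client   # pip install zeep

    # Hypothetical endpoint for an ADaM mining operation deployed at the data center.
    client = Client("http://example.gov/adam/ClassifierService?wsdl")

    # Each wrapped ADaM component becomes an ordinary remote procedure call,
    # which a BPEL engine (or this script) can chain into a workflow.
    result = client.service.RunClassifier(inputGranule="MOD021KM.A2008001.hdf",
                                          algorithm="naive_bayes")
    print(result)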
Earth Science Mining Web Services
NASA Technical Reports Server (NTRS)
Pham, Long; Lynnes, Christopher; Hegde, Mahabaleshwa; Graves, Sara; Ramachandran, Rahul; Maskey, Manil; Keiser, Ken
2008-01-01
To allow scientists further capabilities in the area of data mining and web services, the Goddard Earth Sciences Data and Information Services Center (GES DISC) and researchers at the University of Alabama in Huntsville (UAH) have developed a system to mine data at the source without the need of network transfers. The system has been constructed by linking together several pre-existing technologies: the Simple Scalable Script-based Science Processor for Measurements (S4PM), a processing engine at the GES DISC; the Algorithm Development and Mining (ADaM) system, a data mining toolkit from UAH that can be configured in a variety of ways to create customized mining processes; ActiveBPEL, a workflow execution engine based on BPEL (Business Process Execution Language); XBaya, a graphical workflow composer; and the EOS Clearinghouse (ECHO). XBaya is used to construct an analysis workflow at UAH using ADaM components, which are also installed remotely at the GES DISC, wrapped as Web Services. The S4PM processing engine searches ECHO for data using space-time criteria, staging them to cache, allowing the ActiveBPEL engine to remotely orchestrate the processing workflow within S4PM. As mining is completed, the output is placed in an FTP holding area for the end user. The goals are to give users control over the data they want to process, while mining data at the data source using the server's resources rather than transferring the full volume over the internet. These diverse technologies have been infused into a functioning, distributed system with only minor changes to the underlying technologies. The key to the infusion is the loosely coupled, Web-Services-based architecture: all of the participating components are accessible (one way or another) through SOAP (Simple Object Access Protocol)-based Web Services.
Hennrikus, Eileen F; Skolka, Michael P; Hennrikus, Nicholas
2018-01-01
Medical school curriculum continues to search for methods to develop a conceptual educational framework that promotes the storage, retrieval, transfer, and application of basic science to the human experience. To achieve this goal, we propose a metacognitive approach that integrates basic science with the humanistic and health system aspects of medical education. During the week, via problem-based learning and lectures, first-year medical students were taught the basic science underlying a disease. Each Friday, a patient with the disease spoke to the class. Students then wrote illness scripts, which required them to metacognitively reflect not only on disease pathophysiology, complications, and treatments but also on the humanistic and health system issues revealed during the patient encounter. Evaluation of the intervention was conducted by measuring results on course exams and national board exams and analyzing free responses on the illness scripts and student course feedback. The course exams and National Board of Medical Examiners questions were divided into 3 categories: content covered in lecture, problem-based learning, or patient + illness script. Comparisons were made using Student's t-test. Free responses were inductively analyzed using grounded theory methodology. This curricular intervention was implemented during the first 13-week basic science course of medical school. The main objective of the course, Scientific Principles of Medicine, is to lay the scientific foundation for subsequent organ system courses. A total of 150 students were enrolled each year. We evaluated this intervention over 2 years, totaling 300 students. Students scored significantly higher on illness script content compared to lecture content on the course exams (mean difference = 11.1, P = .006) and national board exams given in December (mean difference = 21.8, P = .0002) and June (mean difference = 12.7, P = .016). Themes extracted from students' free responses included the following: relevance of basic science; humanistic themes of empathy, resilience, and the doctor-patient relationship; and systems themes of cost, barriers to care, and support systems. A metacognitive approach to learning through the use of patient encounters and illness script reflections creates stronger conceptual frameworks for students to integrate, store, retain, and retrieve knowledge.
Design, Development, and Testing of a Network Frequency Selection Service (NFSS)
1994-02-14
…commercial simulation software (Sim++), word processor (FrameMaker), editor (GNU Emacs), software version control (Revision Control System (RCS)), system… of FrameMaker ".mif" files. When viewed using FrameMaker or a PostScript reader, each page of results appears as two columns by four rows of graphics.
NASA Technical Reports Server (NTRS)
Bartram, Peter N.
1989-01-01
The current Life Sciences Laboratory Equipment (LSLE) microcomputer for life sciences experiment data acquisition is now obsolete. Among the weaknesses of the current microcomputer are small memory size, relatively slow analog data sampling rates, and the lack of a bulk data storage device. While life science investigators normally prefer data to be transmitted to Earth as it is taken, this is not always possible. No down-link exists for experiments performed in the Shuttle middeck region. One important aspect of a replacement microcomputer is provision for in-flight storage of experimental data. The Write Once, Read Many (WORM) optical disk was studied because of its high storage density, data integrity, and the availability of a space-qualified unit. In keeping with the goals for a replacement microcomputer based upon commercially available components and standard interfaces, the system studied includes a Small Computer System Interface (SCSI) for interfacing the WORM drive. The system itself is designed around the STD bus, using readily available boards. Configurations examined were: (1) master processor board and slave processor board with the SCSI interface; (2) master processor with SCSI interface; (3) master processor with SCSI and Direct Memory Access (DMA); (4) master processor controlling a separate STD bus SCSI board; and (5) master processor controlling a separate STD bus SCSI board with DMA.
Teacher Scripts in Science Teaching
ERIC Educational Resources Information Center
Monteiro, Rute; Carrillo, Jose; Aguaded, Santiago
2010-01-01
Awareness of teacher scripts is of crucial importance to reflection on practice, and represents one means of widening the scope of classroom performance. The first part of this work provides a full description of three scripts employed by a novice science teacher within the topic of The "Structure of Flowers", and offers a detailed illustration…
Scripted and Unscripted Science Lessons for Children with Autism and Intellectual Disability.
Knight, Victoria F; Collins, Belva; Spriggs, Amy D; Sartini, Emily; MacDonald, Margaret Janey
2018-02-27
Both scripted lessons and unscripted task analyzed lessons have been used effectively to teach science content to students with intellectual disability and autism spectrum disorder. This study evaluated the efficacy, efficiency, and teacher preference of scripted and unscripted task analyzed lesson plans from an elementary science curriculum designed for students with intellectual disability and autism spectrum disorder by evaluating both lesson formats for (a) student outcomes on a science comprehension assessment, (b) sessions to criterion, and (c) average duration of lessons. Findings suggest both lesson types were equally effective, but unscripted task analyzed versions may be more efficient and were preferred by teachers over scripted lessons. Implications, limitations, and suggestions for future research are also discussed.
Jayashree, B; Rajgopal, S; Hoisington, D; Prasanth, V P; Chandra, S
2008-09-24
Structure is a widely used software tool to investigate population genetic structure with multi-locus genotyping data. The software uses an iterative algorithm to group individuals into "K" clusters, representing possibly K genetically distinct subpopulations. The serial implementation of this programme is processor-intensive even with small datasets. We describe an implementation of the programme within a parallel framework. Speedup was achieved by running different replicates and values of K on each node of the cluster. A web-based, user-oriented GUI has been implemented in PHP, through which the user can specify input parameters for the programme. The number of processors to be used can be specified in the background command. A web-based visualization tool, "Visualstruct", written in PHP (with HTML and JavaScript embedded), allows for the graphical display of population clusters output from Structure, where each individual may be visualized as a line segment with K colors defining its possible genomic composition with respect to the K genetic subpopulations. The advantage over available programs is the increased number of individuals that can be visualized. The analyses of real datasets indicate a speedup of up to four when comparing the speed of execution on clusters of eight processors with the speed of execution on one desktop. The software package is freely available to interested users upon request.
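The parallelization strategy, running independent (K, replicate) Structure jobs concurrently, maps naturally onto a process pool. A Python sketch of the idea follows; the actual system dispatches runs across cluster nodes, and the command-line details here are illustrative only.

    import subprocess
    from concurrent.futures import ProcessPoolExecutor
    from itertools import product

    def run_structure(args):
        k, replicate = args
        out = f"results/K{k}_rep{replicate}"
        # Each (K, replicate) pair is an independent serial Structure run.
        subprocess.run(["structure", "-K", str(k), "-o", out], check=True)
        return out

    if __name__ == "__main__":
        jobs = list(product(range(1, 6), range(1, 5)))  # K = 1..5, 4 replicates each
        with ProcessPoolExecutor(max_workers=8) as pool:
            for finished in pool.map(run_structure, jobs):
                print("done:", finished)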
NASA Astrophysics Data System (ADS)
Midland, Susan
Media specialists are increasingly assuming professional development roles as they collaborate with teachers to design instruction that combines content with technology. I am a media specialist in an independent school and collaborated with two science teachers over a three-year period to integrate technology with their instruction. This action study explored integration of a digital narrative project in three eighth-grade earth science units and one ninth-grade physics unit, with each unit serving as a cycle of research. Students produced short digital documentaries that combined still images with an accompanying narration. Students participating in the project wrote scripts based on selected science topics. The completed scripts served as the basis for the narratives. These projects were compared with a more traditional science writing project. Barriers and facilitators for implementation of this type of media project in a science classroom were identified. Lack of adequate access to computers proved to be a significant mechanical barrier. Acquisition of a laptop cart reduced but did not eliminate the technology access issues. The complexity of the project increased implementation time in comparison with traditional alternatives. Evaluation of the completed media projects presented problems. Scores by outside evaluators reflected evaluator unfamiliarity with assessing multimedia projects rather than student performance. Despite several revisions of the assessment rubric, low inter-rater reliability remained a concern even in the last cycle. This suggests that evaluation of media could present issues for teachers who attempt projects of this kind. A writing frame was developed to facilitate production of scripts. This reduced the time required to produce the scripts, but produced writing that was formulaic in the teacher's estimate. A graphic organizer was adopted in the final cycle to address this concern. New insights emerged as the study progressed through its four cycles. At the conclusion of the study, the two teachers and I had a better understanding of barriers that can prevent smooth integration of a technology-based project.
Next Generation Space Telescope Integrated Science Module Data System
NASA Technical Reports Server (NTRS)
Schnurr, Richard G.; Greenhouse, Matthew A.; Jurotich, Matthew M.; Whitley, Raymond; Kalinowski, Keith J.; Love, Bruce W.; Travis, Jeffrey W.; Long, Knox S.
1999-01-01
The data system for the Next Generation Space Telescope (NGST) Integrated Science Module (ISIM) is the primary data interface between the spacecraft, telescope, and science instrument systems. This poster includes block diagrams of the ISIM data system and its components derived during the pre-phase A Yardstick feasibility study. The poster details the hardware and software components used to acquire and process science data for the Yardstick instrument complement, and depicts the baseline external interfaces to science instruments and other systems. This baseline data system is a fully redundant, high performance computing system. Each redundant computer contains three 150 MHz PowerPC processors. All processors execute a commercially available real-time multi-tasking operating system supporting preemptive multi-tasking, file management, and network interfaces. The six processors in the system are networked together. The spacecraft interface baseline is an extension of the network, which links the six processors. The final selection of processor buses, processor chips, network interfaces, and high-speed data interfaces will be made during mid 2002.
Simple, Script-Based Science Processing Archive
NASA Technical Reports Server (NTRS)
Lynnes, Christopher; Hegde, Mahabaleshwara; Barth, C. Wrandle
2007-01-01
The Simple, Scalable, Script-based Science Processing (S4P) Archive (S4PA) is a disk-based archival system for remote sensing data. It is based on the data-driven framework of S4P and is used for data transfer, data preprocessing, metadata generation, data archive, and data distribution. New data are automatically detected by the system. S4P provides services such as data access control, data subscription, metadata publication, data replication, and data recovery. It comprises scripts that control the data flow. The system detects the availability of data on an FTP (file transfer protocol) server, initiates data transfer, preprocesses data if necessary, and archives it on readily available disk drives with FTP and HTTP (Hypertext Transfer Protocol) access, allowing instantaneous data access. There are options for plug-ins for data preprocessing before storage. Publication of metadata to external applications such as the Earth Observing System Clearinghouse (ECHO) is also supported. S4PA includes a graphical user interface for monitoring the system operation and a tool for deploying the system. To ensure reliability, S4PA continuously checks stored data for integrity. Further reliability is provided by tape backups of disks made once a disk partition is full and closed. The system is designed for low maintenance, requiring minimal operator oversight.
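The first link in the data-driven chain is detecting new files on an FTP server. A minimal Python sketch of that detection step is shown below; S4PA itself is implemented as Perl scripts, and the host and directory names here are hypothetical.

    from ftplib import FTP

    seen = set()   # in practice the system would track state on disk

    def new_files(host="ftp.example.gov", directory="/incoming"):
        ftp = FTP(host)
        ftp.login()                      # anonymous login
        ftp.cwd(directory)
        current = set(ftp.nlst())        # list remote file names
        ftp.quit()
        fresh = current - seen
        seen.update(fresh)
        return sorted(fresh)

    for name in new_files():
        print("detected, ready for transfer/preprocessing:", name)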
Object-based media and stream-based computing
NASA Astrophysics Data System (ADS)
Bove, V. Michael, Jr.
1998-03-01
Object-based media refers to the representation of audiovisual information as a collection of objects - the result of scene-analysis algorithms - and a script describing how they are to be rendered for display. Such multimedia presentations can adapt to viewing circumstances as well as to viewer preferences and behavior, and can provide a richer link between content creator and consumer. With faster networks and processors, such ideas become applicable to live interpersonal communications as well, creating a more natural and productive alternative to traditional videoconferencing. In this paper we outline examples of object-based media algorithms and applications developed by my group, and present new hardware architectures and software methods that we have developed to enable meeting the computational requirements of object-based and other advanced media representations. In particular, we describe stream-based processing, which enables automatic run-time parallelization of multidimensional signal processing tasks even given heterogeneous computational resources.
Aryanto, K Y E; Broekema, A; Langenhuysen, R G A; Oudkerk, M; van Ooijen, P M A
2015-05-01
To develop and test a fast and easy rule-based web environment with optional de-identification of imaging data to facilitate data distribution within a hospital environment. A web interface was built using Hypertext Preprocessor (PHP), an open source scripting language for web development, and Java, with SQL Server to handle the database. The system allows for the selection of patient data and for de-identifying them when necessary. Using the services provided by the RSNA Clinical Trial Processor (CTP), the selected images were pushed to the appropriate services using a protocol based on the module created for the associated task. Five pipelines, each performing a different task, were set up in the server. In a 75-month period, more than 2,000,000 images were transferred and de-identified in a proper manner, while 20,000,000 images were moved from one node to another without de-identification. While maintaining a high level of security and stability, the proposed system is easy to set up, it integrates well with our clinical and research practice, and it provides a fast and accurate vendor-neutral process of transferring, de-identifying, and storing DICOM images. Its ability to run different de-identification processes in parallel pipelines is a major advantage in both clinical and research settings.
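De-identification, handled in this system by RSNA CTP pipeline modules, amounts to blanking or removing protected DICOM attributes before distribution. Below is a conceptual Python sketch using the pydicom library; it is not the CTP implementation, and the tag list is abbreviated relative to a real de-identification profile.

    import pydicom

    # Tags commonly cleared during de-identification; a real profile is longer.
    PROTECTED = ["PatientName", "PatientID", "PatientBirthDate", "InstitutionName"]

    def deidentify(in_path, out_path):
        ds = pydicom.dcmread(in_path)
        for tag in PROTECTED:
            if tag in ds:
                setattr(ds, tag, "")        # blank the attribute
        ds.remove_private_tags()            # drop vendor-specific private data
        ds.save_as(out_path)

    deidentify("incoming/img0001.dcm", "anonymized/img0001.dcm")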
MicroShell Minimalist Shell for Xilinx Microprocessors
NASA Technical Reports Server (NTRS)
Werne, Thomas A.
2011-01-01
MicroShell is a lightweight shell environment for engineers and software developers working with embedded microprocessors in Xilinx FPGAs. (MicroShell has also been successfully ported to run on ARM Cortex-M1 microprocessors in Actel ProASIC3 FPGAs, but without project-integration support.) MicroShell decreases the time spent performing initial tests of field-programmable gate array (FPGA) designs, simplifies running customizable one-time-only experiments, and provides a familiar-feeling command-line interface. The program comes with a collection of useful functions and enables the designer to add an unlimited number of custom commands, which are callable from the command line. The commands are parameterizable (using the C-based command-line parameter idiom), so the designer can use one function to exercise hardware with different values. Also, since many hardware peripherals instantiated in FPGAs have reasonably simple register-mapped I/O interfaces, the engineer can edit and view hardware parameter settings at any time without stopping the processor. MicroShell comes with a set of support scripts that interface seamlessly with Xilinx's EDK tool. Adding an instance of MicroShell to a project is as simple as marking a check box in a library configuration dialog box and specifying a software project directory. The support scripts then examine the hardware design, build design-specific functions, conditionally include processor-specific functions, and complete the compilation process. For code-size constrained designs, most of the stock functionality can be excluded from the compiled library. When all of the configurable options are removed from the binary, MicroShell has an unoptimized memory footprint of about 4.8 kB and a size-optimized footprint of about 2.3 kB. Since MicroShell allows unfettered access to all processor-accessible memory locations, it is possible to perform live patching on a running system. This can be useful, for instance, if a bug is discovered in a routine but the system cannot be rebooted: MicroShell allows a skilled operator to directly edit the binary executable in memory. With some forethought, MicroShell code can be located in a different memory location from custom code, permitting the custom functionality to be overwritten at any time without stopping the controlling shell.
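The core MicroShell idea, a table of named, parameterizable commands dispatched from a prompt, can be illustrated independently of the embedded C context. The toy Python sketch below shows only the registration-and-dispatch pattern; the commands and the simulated memory map are hypothetical.

    # A toy command-dispatch loop illustrating the shell idea: named,
    # parameterizable commands registered in a table (hypothetical commands).
    COMMANDS = {}
    MEMORY = {}   # stands in for register-mapped hardware locations

    def command(fn):
        COMMANDS[fn.__name__] = fn
        return fn

    @command
    def peek(addr):
        print(f"value at {addr}: 0x{MEMORY.get(int(addr, 0), 0):08x}")

    @command
    def poke(addr, value):
        MEMORY[int(addr, 0)] = int(value, 0)   # edit a memory word "live"

    while True:
        line = input("ush> ").split()
        if not line:
            continue
        if line[0] == "exit":
            break
        fn = COMMANDS.get(line[0])
        fn(*line[1:]) if fn else print("unknown command")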
Representing Science through Historical Drama: "Lord Kelvin and the Age of the Earth Debate"
ERIC Educational Resources Information Center
Begoray, Deborah L.; Stinner, Arthur
2005-01-01
This paper presents a defense for the use of historical scripted conversations in science. We discuss drama's use of both expository and narrative text forms to expand the language forms available for a variety of learners, the use of scripted conversations as a defensible curriculum design to foster learning in general and science in particular,…
Accurate Arabic Script Language/Dialect Classification
2014-01-01
Army Research Laboratory report ARL-TR-6761, "Accurate Arabic Script Language/Dialect Classification," by Stephen C. Tratz, Computational and Information Sciences Directorate, January 2014. Final report; approved for public release.
Enacting the Common Script: Management Ideas at Finnish Universities of Applied Sciences
ERIC Educational Resources Information Center
Vuori, Johanna
2015-01-01
This article discusses the work of mid-level management at Finnish universities of applied sciences. Based on in-depth interviews with 15 line managers, this study investigates how the standardized management ideas of rational management and employee empowerment are used in the leadership of lecturers at these institutions. The findings indicate…
Westenbroek, Stephen M.; Doherty, John; Walker, John F.; Kelson, Victor A.; Hunt, Randall J.; Cera, Timothy B.
2012-01-01
The TSPROC (Time Series PROCessor) computer software uses a simple scripting language to process and analyze time series. It was developed primarily to assist in the calibration of environmental models. The software is designed to perform calculations on time-series data commonly associated with surface-water models, including calculation of flow volumes, transformation by means of basic arithmetic operations, and generation of seasonal and annual statistics and hydrologic indices. TSPROC can also be used to generate some of the key input files required to perform parameter optimization by means of the PEST (Parameter ESTimation) computer software. Through the use of TSPROC, the objective function for use in the model-calibration process can be focused on specific components of a hydrograph.
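For readers unfamiliar with TSPROC-style operations, here is roughly what its core calculations (arithmetic transforms, flow volumes, seasonal statistics) look like when sketched with the pandas library; the CSV layout and column names are assumptions for illustration, not TSPROC syntax.

    import pandas as pd

    # Assumed input: a daily streamflow series with 'date' and 'flow_cfs' columns.
    ts = pd.read_csv("gage.csv", parse_dates=["date"], index_col="date")

    ts["flow_cms"] = ts["flow_cfs"] * 0.0283168        # basic arithmetic transform
    daily_volume = ts["flow_cms"] * 86400              # m^3 per day
    annual_volume = daily_volume.resample("YS").sum()  # flow volume by calendar year
    seasonal_mean = ts["flow_cms"].groupby(ts.index.month).mean()  # monthly statistic

    print(annual_volume.head())
    print(seasonal_mean)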
Socializing Respect and Knowledge in a Racially Integrated Science Classroom
ERIC Educational Resources Information Center
Solis, Jorge; Kattan, Shlomy; Baquedano-Lopez, Patricia
2009-01-01
In this article we examine the socialization of respect in a racially integrated science classroom in Northern California that employed a character education program called Tribes. We focus on the ways scripts derived from this program are enacted during Community Circle activities and how breaches to these scripts and the norms of respectful…
NASA Astrophysics Data System (ADS)
Alcott, G.; Kempler, S.; Lynnes, C.; Leptoukh, G.; Vollmer, B.; Berrick, S.
2008-12-01
NASA Earth Sciences Division (ESD), and its preceding Earth science organizations, has made great investments in the development and maintenance of data management systems, as well as information technologies, for the purpose of maximizing the use and usefulness of NASA-generated Earth science data. Earth science information systems, evolving with the maturation and implementation of advancing technologies, reside at NASA data centers, known as Distributed Active Archive Centers (DAACs). With information management system infrastructure in place, and system data and user services already developed and operational, only very small delta costs are required to fully support data archival, processing, and data support services required by the recommended Decadal Study missions. This presentation describes the services and capabilities of the Goddard Space Flight Center (GSFC) Earth Sciences Data and Information Services Center (GES DISC), one of NASA's DAACs, and their potential reuse for these future missions. After 14 years of working with instrument teams and the broader science community, GES DISC personnel, with expertise in atmospheric, water cycle, and atmospheric modeling data and information services, as well as Earth science missions, information system engineering, operations, and user services, have developed a series of modular, reusable data management components currently in use in several projects. The knowledge and experience gained at the GES DISC lend themselves to providing science-driven information systems in the areas of aerosols, clouds, and atmospheric chemicals to be measured by recommended Decadal Survey missions. Available reusable capabilities include data archive and distribution (Simple, Scalable, Script-based, Science [S4] Product Archive, aka S4PA), data processing (S4 Processor for Measurements, aka S4PM), data search (Mirador), data browse, visualization, and analysis (Giovanni), and data mining services. In addition, recent enhancements, such as Open Geospatial Consortium (OGC), Inc. interoperability implementations and data fusion prototypes, will be described. As a result of the information management systems developed by NASA's GES DISC, not only are large cost savings realized through system reuse, but maintenance costs are also minimized due to the simplicity of their implementations.
Effective self-regulated science learning through multimedia-enriched skeleton concept maps
NASA Astrophysics Data System (ADS)
Marée, Ton J.; van Bruggen, Jan M.; Jochems, Wim M. G.
2013-04-01
Background: This study combines work on concept mapping with scripted collaborative learning. Purpose: The objective was to examine the effects of self-regulated science learning through scripting students' argumentative interactions during collaborative 'multimedia-enriched skeleton concept mapping' on meaningful science learning and retention. Programme description: Each concept in the enriched skeleton concept map (ESCoM) contained annotated multimedia-rich content (pictures, text, animations or video clips) that elaborated the concept, and an embedded collaboration script to guide students' interactions. Sample: The study was performed in a Biomolecules course on the Bachelor of Applied Science program in the Netherlands. All first-year students (N=93, 31 women, 62 men, aged 17-33 years) took part in this study. Design and methods: The design used a control group who received the regular course and an experimental group working together in dyads on an ESCoM under the guidance of collaboration scripts. In order to investigate meaningful understanding and retention, a retention test was administered a month after the final exam. Results: Analysis of covariance demonstrated a significant experimental effect on the Biomolecules exam scores between the experimental group and the control, and the difference between the groups on the retention test also reached statistical significance. Conclusions: Scripted collaborative multimedia ESCoM mapping resulted in meaningful understanding and retention of the conceptual structure of the domain, the concepts, and their relations. Not only was scripted collaborative multimedia ESCoM mapping more effective than the traditional teaching approach, it was also more efficient in requiring far less teacher guidance.
A Dynamic Finite Element Method for Simulating the Physics of Faults Systems
NASA Astrophysics Data System (ADS)
Saez, E.; Mora, P.; Gross, L.; Weatherley, D.
2004-12-01
We introduce a dynamic Finite Element method using a novel high-level scripting language to describe the physical equations, boundary conditions, and time integration scheme. The library we use is the parallel Finley library: a finite element kernel library designed for solving large-scale problems. It is incorporated as a differential equation solver into a more general library called escript, based on the scripting language Python. This library has been developed to facilitate the rapid development of 3D parallel codes, and is optimised for the Australian Computational Earth Systems Simulator Major National Research Facility (ACcESS MNRF) supercomputer, a 208-processor SGI Altix with a peak performance of 1.1 TFlops. Using the scripting approach we obtain a parallel FE code able to take advantage of the computational efficiency of the Altix 3700. We consider faults as material discontinuities (the displacement, velocity, and acceleration fields are discontinuous at the fault), with elastic behavior. The stress continuity at the fault is achieved naturally through the expression of the fault interactions in the weak formulation. The elasticity problem is solved explicitly in time, using a velocity Verlet scheme. Finally, we specify a suitable frictional constitutive relation and numerical scheme to simulate fault behaviour. Our model is based on previous work on modelling fault friction and multi-fault systems using lattice solid-like models. We adapt the 2D model for simulating the dynamics of parallel fault systems to the Finite Element method. The approach uses a frictional relation along faults that is slip and slip-rate dependent, and the numerical integration approach introduced by Mora and Place in the lattice solid model. In order to illustrate the new Finite Element model, single- and multi-fault simulation examples are presented.
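The explicit time integration used here can be illustrated with a generic velocity-Verlet update for a discrete elastic system. The NumPy sketch below is a toy 1D analogue of explicit elastodynamics, not the escript/Finley implementation; the stiffness and step values are arbitrary.

    import numpy as np

    n, dt, k = 100, 1e-3, 1.0        # nodes, time step, elastic stiffness
    u = np.zeros(n); v = np.zeros(n) # displacement and velocity fields
    u[n // 2] = 1.0                  # initial perturbation

    def accel(u):
        a = np.zeros_like(u)
        a[1:-1] = k * (u[2:] - 2 * u[1:-1] + u[:-2])   # discrete elastic force
        return a                                        # fixed (u = 0) boundaries

    a = accel(u)
    for step in range(1000):         # velocity-Verlet explicit integration
        u += v * dt + 0.5 * a * dt**2
        a_new = accel(u)
        v += 0.5 * (a + a_new) * dt
        a = a_new

    print(u.max(), v.max())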
ERIC Educational Resources Information Center
Lee, Yuan-Hsuan
2018-01-01
Premised on Web 2.0 technology, the current study investigated the effect of facilitating critical thinking using the Collaborative Questioning, Reading, Answering, and Checking (C-QRAC) collaboration script on university students' science reading literacy in flipped learning conditions. Participants were 85 Taiwanese university students recruited…
Spacecube: A Family of Reconfigurable Hybrid On-Board Science Data Processors
NASA Technical Reports Server (NTRS)
Flatley, Thomas P.
2015-01-01
SpaceCube is a family of Field Programmable Gate Array (FPGA) based on-board science data processing systems developed at the NASA Goddard Space Flight Center (GSFC). The goal of the SpaceCube program is to provide 10x to 100x improvements in on-board computing power while lowering relative power consumption and cost. SpaceCube is based on the Xilinx Virtex family of FPGAs, which include processor, FPGA logic, and digital signal processing (DSP) resources. These processing elements are leveraged to produce a hybrid science data processing platform that accelerates the execution of algorithms by distributing computational functions to the most suitable elements. This approach enables the implementation of complex on-board functions that were previously limited to ground based systems, such as on-board product generation, data reduction, calibration, classification, event/feature detection, data mining, and real-time autonomous operations. The system is fully reconfigurable in flight, including data parameters, software, and FPGA logic, through either ground commanding or autonomously in response to detected events/features in the instrument data stream.
NASA Astrophysics Data System (ADS)
Peleg, R.; Baram-Tsabari, A.
2016-10-01
Science museums often introduce plays to liven up exhibits, attract visitors to specific exhibitions, and help visitors to "digest" difficult content. Most previous research has concentrated on viewers' learning outcomes. This study uses performance and spectator analyses from the field of theater studies to explore the link between producers' intended aims, the written script, and the learning outcomes. We also use the conflict of didactics and aesthetics, common to the design of both educational plays and science museum exhibits, as a lens for understanding our data. "Darwin's journey," a play about evolution, was produced by a major science museum in Israel. The producers' objectives were collected through in-depth interviews. A structural analysis was conducted on the script. Viewer (n = 103) and nonviewer (n = 90) data were collected via a questionnaire. The results show strong evidence for the encoding of all of the producers' aims in the script. Explicit and cognitive aims were decoded as intended by the viewers. The evidence was weak for the decoding of implicit and affective aims. While the producers were concerned with the conflict of didactics and aesthetics, this conflict was not apparent in the script. The conflict is discussed within the broader context of science education in informal settings.
NASA Astrophysics Data System (ADS)
Adler, David S.; Workman, William M., III; Chance, Don
2004-09-01
The Science and Mission Scheduling Branch (SMSB) of the Space Telescope Science Institute (STScI) historically operated exclusively under VMS. Due to diminished support for VMS-based platforms at STScI, SMSB recently transitioned to Unix operations. No additional resources were available to the group; the project was SMSB's to design, develop, and implement. Early decisions included the choice of Python as the primary scripting language; adoption of Object-Oriented Design in the development of base utilities; and the development of a Python utility to interact directly with the Sybase database. The project was completed in January 2004 with the implementation of a GUI to generate the Command Loads that are uplinked to HST. The current tool suite consists of 31 utilities and 271 tools comprising over 60,000 lines of code. In this paper, we summarize the decision-making process used to determine the primary scripting language, database interface, and code management library. We also describe the finished product and summarize lessons learned along the way to completing the project.
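The Python-to-Sybase utility mentioned in the abstract is not reproduced here; the following is a minimal DB-API 2.0 style sketch of what such a utility typically looks like (the python-sybase driver follows this interface). The server name, credentials, and query are hypothetical, not STScI's actual tool.

```python
# Minimal DB-API 2.0 style sketch of a database utility; the python-sybase
# driver follows this interface. Server name, credentials, and the query
# are hypothetical, not STScI's actual tool.
import Sybase  # "python-sybase" package; any DB-API driver is analogous

def run_query(server, user, passwd, sql):
    """Open a connection, run one query, and return all rows."""
    conn = Sybase.connect(server, user, passwd)
    try:
        cur = conn.cursor()
        cur.execute(sql)
        return cur.fetchall()
    finally:
        conn.close()

rows = run_query("OPSDB", "observer", "secret",
                 "SELECT visit_id, status FROM visits WHERE cycle = 12")
```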
Advanced Hybrid On-Board Science Data Processor - SpaceCube 2.0
NASA Technical Reports Server (NTRS)
Flatley, Tom
2010-01-01
Topics include an overview of on-board science data processing, software upset mitigation, on-board data reduction, on-board products, the HyspIRI demonstration testbed, the SpaceCube 2.0 block diagram, and a processor comparison.
Dockres: a computer program that analyzes the output of virtual screening of small molecules
2010-01-01
Background This paper describes a computer program named Dockres that is designed to analyze and summarize results of virtual screening of small molecules. The program is supplemented with utilities that support the screening process. Foremost among these utilities are scripts that run the virtual screening of a chemical library on a large number of processors in parallel. Methods Dockres and some of its supporting utilities are written in Fortran-77; other utilities are written as C-shell scripts. They support the parallel execution of the screening. The current implementation of the program handles virtual screening with Autodock-3 and Autodock-4, but can be extended to work with the output of other programs. Results Analysis of virtual screening by Dockres led to both active and selective lead compounds. Conclusions Analysis of virtual screening was facilitated and enhanced by Dockres in both the authors' laboratories as well as laboratories elsewhere. PMID:20205801
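A plausible Python rendering of the parallel-execution scripts described (the real supporting utilities are C-shell) is sketched below: one AutoDock-4 job per ligand parameter file, dispatched across a process pool. The paths and the .dpf naming scheme are assumptions.

```python
# Hypothetical Python rendering of the parallel screening driver (the real
# supporting utilities are C-shell): run one AutoDock-4 job per ligand
# parameter file across a process pool. Paths/naming are assumptions.
import glob
import subprocess
from multiprocessing import Pool

def dock_one(dpf):
    """Run a single AutoDock-4 docking; each .dpf file defines one ligand."""
    dlg = dpf.replace(".dpf", ".dlg")
    subprocess.run(["autodock4", "-p", dpf, "-l", dlg], check=True)
    return dlg

if __name__ == "__main__":
    jobs = sorted(glob.glob("library/*.dpf"))
    with Pool(processes=8) as pool:          # one worker per core, say 8
        for done in pool.imap_unordered(dock_one, jobs):
            print("finished", done)
```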
Environmental Epidemiology Program
Utah Department of Health, Bureau of Epidemiology, Environmental Epidemiology Program (EEP). The Environmental Epidemiology Program strives to improve the health of Utah residents through science-based environmental health policy and by empowering citizens with knowledge about…
Windsor, Richard; Clark, Jeannie; Cleary, Sean; Davis, Amanda; Thorn, Stephanie; Abroms, Lorien; Wedeles, John
2014-01-01
This study evaluated the effectiveness of the Smoking Cessation and Reduction in Pregnancy Treatment (SCRIPT) Program selected by the West Virginia-Right From The Start Project for state-wide dissemination. A process evaluation documented the fidelity of SCRIPT delivery by Designated Care Coordinators (DCC), licensed nurses and social workers who provide home-based case management to Medicaid-eligible clients in all 55 counties. We implemented a quasi-experimental, non-randomized, matched Comparison (C) Group design. The SCRIPT Experimental (E) Group (N = 259) were all clients in 2009-2010 who wanted to quit, provided a screening carbon monoxide (CO), and received a SCRIPT home visit. The (C) Group was derived from all clients in 2006-2007 who had the same CO assessments as (E) Group clients and reported receiving cessation counseling. We stratified the baseline CO of (E) Group clients into 10 strata, and randomly selected the same number of (C) Group clients (N = 259) from each matched stratum to evaluate the effectiveness of the SCRIPT Program. There were no significant baseline differences between the (E) and (C) Groups. A process evaluation documented a significant increase in the fidelity of DCC delivery of SCRIPT Program procedures: from 63 % in 2006 to 74 % in 2010. Significant increases were documented in the (E) Group cessation rate (+9.3 %) and significant reduction rate (+4.5 %), a ≥50 % reduction from a baseline CO. Perinatal health case management staff can deliver the SCRIPT Program, and Medicaid-supported clients can change smoking behavior, even very late in pregnancy. When multiple biases were analyzed, we concluded the SCRIPT Dissemination Project was the most plausible reason for the significant changes in behavior.
Life sciences flight experiments microcomputer
NASA Technical Reports Server (NTRS)
Bartram, Peter N.
1987-01-01
A promising microcomputer configuration for the Spacelab Life Sciences Laboratory Equipment inventory consists of multiple processors. One processor's use is reserved, with additional processors dedicated to real-time input and output operations. A simple form of such a configuration, with a processor board for analog-to-digital conversion and another processor board for digital-to-analog conversion, was studied. The system used digital parallel data lines between the boards, operating independently of the system bus. Good performance of individual components was demonstrated: the analog-to-digital converter operated at over 10,000 samples per second. The combination of the data transfer between boards with the input or output functions on each board slowed performance, with a maximum throughput of 2800 to 2900 analog samples per second. Any of several techniques, such as use of the system bus for data transfer or the addition of direct memory access hardware to the processor boards, should give significantly improved performance.
JPRS Report, Science & Technology, Europe.
1991-04-30
…processor in collaboration with Intel. The processor, christened Touchstone, will be used as the core of a parallel computer with 2,000 processors. One of… …ELECTRONIQUE HEBDO in French 24 Jan 91 pp 14-15 [Article by Claire Remy: "Everything Set for Neural Signal Processors"; first paragraph is ELECTRONIQUE…] …paving the way for neural signal processors in so doing. The principal advantage of this specific circuit over a neuromimetic software program is…
The Transition from VMS to Unix Operations for STScI's Science Planning and Scheduling Team
NASA Astrophysics Data System (ADS)
Adler, D. S.; Taylor, D. K.
The Science Planning and Scheduling Team of the Space Telescope Science Institute currently uses the VMS operating system. SPST began a transition to Unix-based operations in the summer of 1999. The main tasks for SPST to address in the Unix transition are: (1) converting the current SPST operational tools from DCL to Python; (2) converting our database report scripts from SQL; (3) adopting a Unix-based code management system; and (4) training the SPST staff. The goal is to fully transition the team to Unix operations by the end of 2001.
A Course on Reconfigurable Processors
ERIC Educational Resources Information Center
Shoufan, Abdulhadi; Huss, Sorin A.
2010-01-01
Reconfigurable computing is an established field in computer science. Teaching this field to computer science students demands special attention due to limited student experience in electronics and digital system design. This article presents a compact course on reconfigurable processors, which was offered at the Technische Universitat Darmstadt,…
Simulating Synchronous Processors
1988-06-01
MIT/LCS/TM-359, Laboratory for Computer Science, Massachusetts Institute of Technology: "Simulating Synchronous Processors," by Jennifer Lundelius Welch. In this paper we show how a distributed system with synchronous processors and asynchronous message delays can…
A digital beamforming processor for the joint DoD/NASA space based radar mission
NASA Technical Reports Server (NTRS)
Fischman, Mark A.; Le, Charles; Rosen, Paul A.
2004-01-01
The Space Based Radar (SBR) program includes a joint technology demonstration between NASA and the Air Force to design a low-earth orbiting, 2x50 m L-band radar system for both Earth science and intelligence related observations.
Fault-Tolerant, Radiation-Hard DSP
NASA Technical Reports Server (NTRS)
Czajkowski, David
2011-01-01
Commercial digital signal processors (DSPs) for use in high-speed satellite computers are challenged by the damaging effects of space radiation, mainly single event upsets (SEUs) and single event functional interrupts (SEFIs). Innovations have been developed for mitigating the effects of SEUs and SEFIs, enabling the use of very-high-speed commercial DSPs with improved SEU tolerances. Time-triple modular redundancy (TTMR) is a method of applying traditional triple modular redundancy on a single processor, exploiting the VLIW (very long instruction word) class of parallel processors. TTMR improves SEU rates substantially. SEFIs are solved by a SEFI-hardened core circuit, external to the microprocessor. It monitors the health of the processor, and if a SEFI occurs, forces the processor to return to performance through a series of escalating events. TTMR and hardened-core solutions were developed for both DSPs and reconfigurable field-programmable gate arrays (FPGAs). This includes advancement of TTMR algorithms for DSPs and reconfigurable FPGAs, plus a rad-hard, hardened-core integrated circuit that services both the DSP and FPGA. Additionally, a combined DSP and FPGA board architecture was fully developed into a rad-hard engineering product. This technology enables use of commercial off-the-shelf (COTS) DSPs in computers for satellite and other space applications, allowing rapid deployment at a much lower cost. Traditional rad-hard space computers are very expensive and typically have long lead times. These computers are either based on traditional rad-hard processors, which have extremely low computational performance, or triple modular redundant (TMR) FPGA arrays, which suffer from power and complexity issues. Even more frustrating is that the TMR arrays of FPGAs require a fixed, external rad-hard voting element, thereby causing them to lose much of their reconfiguration capability and in some cases suffer significant speed reduction. The benefits of COTS high-performance signal processing include a significant increase in onboard science data processing, enabling orders-of-magnitude reduction in required communication bandwidth for science data return, orders-of-magnitude improvement in onboard mission planning and critical decision making, and the ability to rapidly respond to changing mission environments, thus enabling opportunistic science and orders-of-magnitude reduction in the cost of mission operations through reduction of required staff. Additional benefits of COTS-based, high-performance signal processing include the ability to leverage considerable commercial and academic investments in advanced computing tools, techniques, and infrastructure, and the familiarity of the science and IT community with these computing environments.
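To make the TTMR idea concrete, here is a toy Python model of the voting logic only: the operation runs three times and a majority vote masks a single corrupted result. On a real VLIW DSP the three copies occupy parallel instruction slots; this sketch is purely illustrative.

```python
# Toy model of time-triple modular redundancy (TTMR): execute the same
# operation three times and majority-vote the results, so one corrupted
# execution is outvoted. On a real VLIW DSP the three copies occupy
# parallel instruction slots; this only illustrates the voting principle.
from collections import Counter

def ttmr(op, *args):
    results = [op(*args) for _ in range(3)]       # three redundant runs
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:                                 # all three disagree
        raise RuntimeError("no majority among redundant results")
    return value

# Example: a multiply-accumulate step protected by voting.
print(ttmr(lambda a, b, acc: a * b + acc, 3, 4, 10))  # -> 22
```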
Cloud-Based Tools to Support High-Resolution Modeling (Invited)
NASA Astrophysics Data System (ADS)
Jones, N.; Nelson, J.; Swain, N.; Christensen, S.
2013-12-01
The majority of watershed models developed to support decision-making by water management agencies are simple, lumped-parameter models. Maturity in research codes and advances in the computational power from multi-core processors on desktop machines, commercial cloud-computing resources, and supercomputers with thousands of cores have created new opportunities for employing more accurate, high-resolution distributed models for routine use in decision support. The barriers to using such models on a more routine basis include the massive amounts of spatial data that must be processed for each new scenario and the lack of efficient visualization tools. In this presentation we will review a current NSF-funded project called CI-WATER that is intended to overcome many of these roadblocks associated with high-resolution modeling. We are developing a suite of tools that will make it possible to deploy customized web-based apps for running custom scenarios for high-resolution models with minimal effort. These tools are based on a software stack that includes 52 North, MapServer, PostGIS, HTCondor, CKAN, and Python. This open source stack provides a simple scripting environment for quickly configuring new custom applications for running high-resolution models as geoprocessing workflows. The HTCondor component facilitates simple access to local distributed computers or commercial cloud resources when necessary for stochastic simulations. The CKAN framework provides a powerful suite of tools for hosting such workflows in a web-based environment that includes visualization tools and storage of model simulations in a database for archival, querying, and sharing of model results. Prototype applications including land use change, snow melt, and burned area analysis will be presented. This material is based upon work supported by the National Science Foundation under Grant No. 1135482
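As an illustration of the job-dispatch layer, the sketch below submits a batch of stochastic model runs through HTCondor's classic Python bindings; the executable and argument scheme are hypothetical, not CI-WATER's actual configuration.

```python
# Sketch of dispatching stochastic runs via HTCondor's classic Python
# bindings; the executable and argument scheme are hypothetical.
import htcondor

sub = htcondor.Submit({
    "executable": "run_snowmelt.sh",          # hypothetical model driver
    "arguments": "--realization $(Process)",  # one Monte Carlo draw per job
    "output": "run$(Process).out",
    "error": "run$(Process).err",
})
schedd = htcondor.Schedd()
with schedd.transaction() as txn:             # classic (v1) submission API
    sub.queue(txn, count=100)                 # queue 100 realizations
```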
Scalable architecture for a room temperature solid-state quantum information processor.
Yao, N Y; Jiang, L; Gorshkov, A V; Maurer, P C; Giedke, G; Cirac, J I; Lukin, M D
2012-04-24
The realization of a scalable quantum information processor has emerged over the past decade as one of the central challenges at the interface of fundamental science and engineering. Here we propose and analyse an architecture for a scalable, solid-state quantum information processor capable of operating at room temperature. Our approach is based on recent experimental advances involving nitrogen-vacancy colour centres in diamond. In particular, we demonstrate that the multiple challenges associated with operation at ambient temperature, individual addressing at the nanoscale, strong qubit coupling, robustness against disorder and low decoherence rates can be simultaneously achieved under realistic, experimentally relevant conditions. The architecture uses a novel approach to quantum information transfer and includes a hierarchy of control at successive length scales. Moreover, it alleviates the stringent constraints currently limiting the realization of scalable quantum processors and will provide fundamental insights into the physics of non-equilibrium many-body quantum systems.
Earth Science Curriculum Enrichment Through Matlab!
NASA Astrophysics Data System (ADS)
Salmun, H.; Buonaiuto, F. S.
2016-12-01
The use of Matlab in Earth Science undergraduate courses in the Department of Geography at Hunter College began as a pilot project in Fall 2008 and has evolved and advanced to being a significant component of an Advanced Oceanography course, the selected tool for data analysis in other courses and the main focus of a graduate course for doctoral students at The City University of New York (CUNY) working on research related to geophysical, oceanic and atmospheric dynamics. The primary objectives of these efforts were to enhance the Earth Science curriculum through course specific applications, to increase undergraduate programming and data analysis skills, and to develop a Matlab users network within the Department and the broader Hunter College and CUNY community. Students have had the opportunity to learn Matlab as a stand-alone course, within an independent study group, or as a laboratory component within related STEM classes. All of these instructional efforts incorporated the use of prepackaged Matlab exercises and a research project. Initial exercises were designed to cover basic scripting and data visualization techniques. Students were provided data and a skeleton script to modify and improve upon based on the laboratory instructions. As students' programming skills increased throughout the semester, more advanced scripting, data mining and data analysis were assigned. In order to illustrate the range of applications within the Earth Sciences, laboratory exercises were constructed around topics selected from the disciplines of Geology, Physics, Oceanography, Meteorology and Climatology. In addition, the structure of the research component of the courses included both individual and team projects.
Fault tolerant, radiation hard, high performance digital signal processor
NASA Technical Reports Server (NTRS)
Holmann, Edgar; Linscott, Ivan R.; Maurer, Michael J.; Tyler, G. L.; Libby, Vibeke
1990-01-01
An architecture has been developed for a high-performance VLSI digital signal processor that is highly reliable, fault-tolerant, and radiation-hard. The signal processor, part of a spacecraft receiver designed to support uplink radio science experiments at the outer planets, organizes the connections between redundant arithmetic resources, register files, and memory through a shuffle exchange communication network. The configuration of the network and the state of the processor resources are all under microprogram control, which both maps the resources according to algorithmic needs and reconfigures the processing should a failure occur. In addition, the microprogram is reloadable through the uplink to accommodate changes in the science objectives throughout the course of the mission. The processor will be implemented with silicon compiler tools, and its design will be verified through silicon compilation simulation at all levels from the resources to full functionality. By blending reconfiguration with redundancy the processor implementation is fault-tolerant and reliable, and possesses the long expected lifetime needed for a spacecraft mission to the outer planets.
Writing for Learning in Science: Producing a Video Script on Light.
ERIC Educational Resources Information Center
Lorenzo, Mercedes; Hand, Brian; Prain, Vaughan
2001-01-01
Reports on a task in which students wrote scripts for a silent movie to consolidate their understanding of the subject of light. Considers the broader implications for effective task design, implementation, and review of this kind of writing. (Author/ASK)
NASA Astrophysics Data System (ADS)
Hellman, Leslie G.
This qualitative study uses children's writing to explore the divide between a conception of Science as a humanistic discipline reliant on creativity, ingenuity and out-of-the-box thinking and a persistent public perception of science and scientists as rigid and methodical. Artifacts reviewed were 506 scripts written during 2014 and 2016 by 5th graders participating in an out-of-classroom, mentor-supported, free-choice 10-week arts and literacy initiative. 47% (237) of these scripts were found to contain content relating to Science, Scientists, Science Education and the Nature of Science. These 237 scripts were coded for themes; characteristics of named scientist characters were tracked and analyzed. Findings included NOS understandings being expressed through the representation of Science and Engineering Practices; ingenuity being primarily linked to Engineering tasks; common portrayals of science as magical or scientists as villains; and a persistence of negative stereotypes of scientists, including a lack of gender equity among the named scientist characters. Findings suggest that representations of scientists in popular culture highly influence the portrayals of scientists constructed by the students. Recommendations to teachers include encouraging explicit consideration of big-picture NOS concepts such as ethics during elementary school and encouraging the replacement of documentary or educational shows with more engaging fictional media.
Project Report: Automatic Sequence Processor Software Analysis
NASA Technical Reports Server (NTRS)
Benjamin, Brandon
2011-01-01
The Mission Planning and Sequencing (MPS) element of Multi-Mission Ground System and Services (MGSS) provides space missions with multi-purpose software to plan spacecraft activities, sequence spacecraft commands, and then integrate these products and execute them on spacecraft. Jet Propulsion Laboratory (JPL) is currently flying many missions. The processes for building, integrating, and testing the multi-mission uplink software need to be improved to meet the needs of the missions and the operations teams that command the spacecraft. The Multi-Mission Sequencing Team is responsible for collecting and processing the observations, experiments and engineering activities that are to be performed on a selected spacecraft. The collection of these activities is called a sequence and ultimately a sequence becomes a sequence of spacecraft commands. The operations teams check the sequence to make sure that no constraints are violated. The workflow process involves sending a program start command, which activates the Automatic Sequence Processor (ASP). The ASP is currently a file-based system that is comprised of scripts written in Perl, C-shell and awk. Once this start process is complete, the system checks for errors and aborts if there are any; otherwise the system converts the commands to binary, and then sends the resultant information to be radiated to the spacecraft.
Using a Multicore Processor for Rover Autonomous Science
NASA Technical Reports Server (NTRS)
Bornstein, Benjamin; Estlin, Tara; Clement, Bradley; Springer, Paul
2011-01-01
Multicore processing promises to be a critical component of future spacecraft. It provides immense increases in onboard processing power and provides an environment for directly supporting fault-tolerant computing. This paper discusses using a state-of-the-art multicore processor to efficiently perform image analysis onboard a Mars rover in support of autonomous science activities.
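The rover flight software is not Python, but the tile-parallel pattern the abstract alludes to can be sketched with a process pool; the image, tiling, and "science" metric below are invented for illustration.

```python
# Illustration only (the rover flight software is not Python): tile-parallel
# image analysis with a process pool; image, tiling, and metric are invented.
import numpy as np
from multiprocessing import Pool

def analyze_tile(tile):
    """Toy per-tile science metric, e.g. mean intensity for rock detection."""
    return float(tile.mean())

if __name__ == "__main__":
    image = np.random.rand(1024, 1024)       # stand-in for a camera frame
    tiles = [image[r:r + 256, c:c + 256]     # 4x4 grid of 256x256 tiles
             for r in range(0, 1024, 256) for c in range(0, 1024, 256)]
    with Pool() as pool:                     # defaults to one worker per core
        scores = pool.map(analyze_tile, tiles)
    print("most interesting tile score:", max(scores))
```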
Monitoring Temperature and Fan Speed Using Ganglia and Winbond Chips
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCaffrey, Cattie; /SLAC
2006-09-27
Effective monitoring is essential to keep a large group of machines, like the ones at Stanford Linear Accelerator Center (SLAC), up and running. SLAC currently uses the Ganglia Monitoring System to observe about 2000 machines, analyzing metrics like CPU usage and I/O rate. However, metrics essential to machine hardware health, such as temperature and fan speed, are not being monitored. Many machines have a Winbond w83782d chip which monitors three temperatures, two of which come from dual CPUs, and returns the information when the sensors command is invoked. Ganglia also provides a feature, gmetric, that allows the users to monitor their own metrics and incorporate them into the monitoring system. The programming language Perl is chosen to implement a script that invokes the sensors command, extracts the temperature and fan speed information, and calls gmetric with the appropriate arguments. Two machines were used to test the script; the two CPUs on each machine run at about 65 Celsius, which is well within the operating temperature range (the maximum safe temperature range is 77-82 Celsius for the Pentium III processors being used). Installing the script on all machines with a Winbond w83782d chip allows the SLAC Scientific Computing and Computing Services group (SCCS) to better evaluate current cooling methods.
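A Python analogue of the Perl approach described might look like the sketch below: run sensors, extract numeric readings (temperatures, fan speeds), and publish each through Ganglia's gmetric command-line tool. The parsing regex assumes lm_sensors' usual output format and may need per-chip adjustment.

```python
# Python analogue of the Perl approach described: run `sensors`, pull out
# numeric readings (temperatures, fan speeds), and publish each through
# Ganglia's gmetric CLI. The regex assumes typical lm_sensors output and
# may need per-chip adjustment.
import re
import subprocess

output = subprocess.run(["sensors"], capture_output=True, text=True).stdout
for label, value in re.findall(r"^(\w[\w ]*):\s*\+?([\d.]+)", output, re.M):
    subprocess.run(["gmetric",
                    "--name", label.replace(" ", "_"),
                    "--value", value,
                    "--type", "float",
                    "--units", "unit"])  # e.g. Celsius or RPM per metric
```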
50 CFR 679.94 - Economic data report (EDR) for the Amendment 80 sector.
Code of Federal Regulations, 2010 CFR
2010-10-01
…: NMFS, Alaska Fisheries Science Center, Economic Data Reports, 7600 Sand Point Way NE, F/AKC2, Seattle… Operation codes (NMFS Alaska region / ADF&G): FCP: Catcher/processor, floating catcher processor; FLD: Mothership, floating domestic mothership; IFP: Stationary Floating Processor, inshore floating…
Primary pre-service teachers' skills in planning a guided scientific inquiry
NASA Astrophysics Data System (ADS)
García-Carmona, Antonio; Criado, Ana M.; Cruz-Guzmán, Marta
2017-10-01
A study is presented of the skills that primary pre-service teachers (PPTs) have in completing the planning of a scientific inquiry on the basis of a guiding script. The sample comprised 66 PPTs who constituted a group-class of the subject Science Teaching, taught in the second year of an undergraduate degree in primary education at a Spanish university. The data was acquired from the responses of the PPTs (working in teams) to open-ended questions posed to them in the script concerning the various tasks involved in a scientific inquiry (formulation of hypotheses, design of the experiment, data collection, interpretation of results, drawing conclusions). Data were analyzed within the framework of a descriptive-interpretive qualitative research study with a combination of inter- and intra-rater methods, and the use of low-inference descriptors. The results showed that the PPTs have major shortcomings in planning the complete development of a guided scientific inquiry. The discussion of the results includes a number of implications for rethinking the Science Teaching course so that PPTs can attain a basic level of training in inquiry-based science education.
Development of Web-Based Examination System Using Open Source Programming Model
ERIC Educational Resources Information Center
Abass, Olalere A.; Olajide, Samuel A.; Samuel, Babafemi O.
2017-01-01
The traditional method of assessment (examination) is often characterized by leakage of examination questions and human errors during the marking of scripts and the recording of scores. The technological advancement in the field of computer science has created a need for computer usage in nearly all areas of human life and endeavors, education sector…
The Latent Structure of Secure Base Script Knowledge
ERIC Educational Resources Information Center
Waters, Theodore E. A.; Fraley, R. Chris; Groh, Ashley M.; Steele, Ryan D.; Vaughn, Brian E.; Bost, Kelly K.; Veríssimo, Manuela; Coppola, Gabrielle; Roisman, Glenn I.
2015-01-01
There is increasing evidence that attachment representations abstracted from childhood experiences with primary caregivers are organized as a cognitive script describing secure base use and support (i.e., the "secure base script"). To date, however, the latent structure of secure base script knowledge has gone unexamined--this despite…
Computer Sciences and Data Systems, volume 2
NASA Technical Reports Server (NTRS)
1987-01-01
Topics addressed include: data storage; information network architecture; VHSIC technology; fiber optics; laser applications; distributed processing; spaceborne optical disk controller; massively parallel processors; and advanced digital SAR processors.
Adaptive Load-Balancing Algorithms Using Symmetric Broadcast Networks
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Biswas, Rupak; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
In a distributed-computing environment, it is important to ensure that the processor workloads are adequately balanced. Among numerous load-balancing algorithms, a unique approach due to Das and Prasad defines a symmetric broadcast network (SBN) that provides a robust communication pattern among the processors in a topology-independent manner. In this paper, we propose and analyze three novel SBN-based load-balancing algorithms, and implement them on an SP2. A thorough experimental study with Poisson-distributed synthetic loads demonstrates that these algorithms are very effective in balancing system load while minimizing processor idle time. They also compare favorably with several other existing load-balancing techniques. Additional experiments performed with real data demonstrate that the SBN approach is effective in adaptive computational science and engineering applications where dynamic load balancing is extremely crucial.
Astronomy and Disabled: Implementation of new technologies to communicate science to new audiences
NASA Astrophysics Data System (ADS)
García, Beatriz; Ortiz Gil, Amelia; Proust, Dominique
2015-08-01
Commission 46 proposed in 2012 the creation of an interdisciplinary WG in which astronomers work together with technicians, educators and disability specialists to develop new teaching and learning strategies devoted to generating resources of high impact among disabled populations, which are usually far removed from astronomy. Successful initiatives designed to research the best practices in using new technologies to communicate science to these special audiences include the creation of models and applications, and the implementation of a database of didactic approaches and tools. Among the achievements of this proposal are original developments in: design of electronics, design of original software, scripts and music for planetarium functions, design of models and their associated explanatory scripts, printed material in Braille and 3D, filming associated with sign language, interviews and documentation compilation, and the recent project on the Sign Language Universal Encyclopedic Dictionary, based on the proposal by Proust (2009), which proposes the dissemination of a unique language for the deaf worldwide, associated with astronomical terms. We present, on behalf of the WG, some of the achievements, developments, and successful stories of recent applications of this new approach to science for all, with the new "public of sciences" and new challenges in mind.
Earth Orbiter 1: Wideband Advanced Recorder and Processor (WARP)
NASA Technical Reports Server (NTRS)
Smith, Terry; Kessler, John
1999-01-01
An advanced on-board spacecraft data system component is presented. The component is computer-based and provides science data acquisition, processing, storage, and base-band transmission functions. Specifically, the component is a very high rate solid state recorder, serving as a pathfinder for achieving the data handling requirements of next-generation hyperspectral imaging missions.
Development of a Web-Based Distributed Interactive Simulation (DIS) Environment Using JavaScript
2014-09-01
…scripting that lets users change or interact with web content depending on user input, which is in contrast with server-side scripts such as PHP, Java and… …transfer, DIS usually broadcasts or multicasts its PDUs based on UDP sockets. 3. JavaScript: JavaScript is the scripting language of the web, and all… …IDE) for developing desktop, mobile and web applications with Java, C++, HTML5, JavaScript and more. b. Framework: The DIS implementation of…
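The transport mentioned in the fragment (PDUs broadcast or multicast over UDP) can be sketched in a few lines of Python; the payload below is a placeholder rather than a correctly encoded DIS PDU, and the group and port are conventional examples.

```python
# Minimal sketch of the transport described: a DIS PDU sent as a UDP
# multicast datagram. The payload is a placeholder, not an encoded PDU;
# the group and port are conventional examples.
import socket

GROUP, PORT = "239.1.2.3", 3000
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
sock.sendto(b"placeholder-pdu-bytes", (GROUP, PORT))
```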
NASA Astrophysics Data System (ADS)
Bellerby, Tim
2015-04-01
PM (Parallel Models) is a new parallel programming language specifically designed for writing environmental and geophysical models. The language is intended to enable implementers to concentrate on the science behind the model rather than the details of running on parallel hardware. At the same time PM leaves the programmer in control - all parallelisation is explicit and the parallel structure of any given program may be deduced directly from the code. This paper describes a PM implementation based on the Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) standards, looking at issues involved with translating the PM parallelisation model to MPI/OpenMP protocols and considering performance in terms of the competing factors of finer-grained parallelisation and increased communication overhead. In order to maximise portability, the implementation stays within the MPI 1.3 standard as much as possible, with MPI-2 MPI-IO file handling the only significant exception. Moreover, it does not assume a thread-safe implementation of MPI. PM adopts a two-tier abstract representation of parallel hardware. A PM processor is a conceptual unit capable of efficiently executing a set of language tasks, with a complete parallel system consisting of an abstract N-dimensional array of such processors. PM processors may map to single cores executing tasks using cooperative multi-tasking, to multiple cores or even to separate processing nodes, efficiently sharing tasks using algorithms such as work stealing. While tasks may move between hardware elements within a PM processor, they may not move between processors without specific programmer intervention. Tasks are assigned to processors using a nested parallelism approach, building on ideas from Reyes et al. (2009). The main program owns all available processors. When the program enters a parallel statement then either processors are divided out among the newly generated tasks (number of new tasks < number of processors) or tasks are divided out among the available processors (number of tasks > number of processors). Nested parallel statements may further subdivide the processor set owned by a given task. Tasks or processors are distributed evenly by default, but uneven distributions are possible under programmer control. It is also possible to explicitly enable child tasks to migrate within the processor set owned by their parent task, reducing load unbalancing at the potential cost of increased inter-processor message traffic. PM incorporates some programming structures from the earlier MIST language presented at a previous EGU General Assembly, while adopting a significantly different underlying parallelisation model and type system. PM code is available at www.pm-lang.org under an unrestrictive MIT license. Reference Ruymán Reyes, Antonio J. Dorta, Francisco Almeida, Francisco de Sande, 2009. Automatic Hybrid MPI+OpenMP Code Generation with llc, Recent Advances in Parallel Virtual Machine and Message Passing Interface, Lecture Notes in Computer Science Volume 5759, 185-195
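The default allocation rule described (divide processors among tasks when tasks are few, share processors among tasks when tasks are many) can be illustrated with a toy Python function; this is an illustration of the stated rule only, not PM's implementation.

```python
# Toy illustration of the stated default rule (not PM's implementation):
# fewer tasks than processors -> split the processor set among tasks;
# more tasks than processors -> share processors among tasks round-robin.
def distribute(n_tasks, processors):
    """Return, for each task, the list of processors it runs on."""
    n_proc = len(processors)
    if n_tasks <= n_proc:
        k, r = divmod(n_proc, n_tasks)     # near-even split by default
        slices, i = [], 0
        for t in range(n_tasks):
            step = k + (1 if t < r else 0)
            slices.append(processors[i:i + step])
            i += step
        return slices
    return [[processors[t % n_proc]] for t in range(n_tasks)]

print(distribute(3, list("ABCDEFG")))  # 3 tasks over 7 processors
print(distribute(7, list("ABC")))      # 7 tasks over 3 processors
```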
NASA Astrophysics Data System (ADS)
Pack, Robert T.; Saunders, David; Fullmer, Rees; Budge, Scott
2006-05-01
USU LadarSIM Release 2.0 is a ladar simulator that has the ability to feed high-level mission scripts into a processor that automatically generates scan commands during flight simulations. The scan generation depends on specified flight trajectories and scenes consisting of terrain and targets. The scenes and trajectories can consist of either simulated or actual data. The first modeling step produces an outline of scan footprints in xyz space. Once mission goals have been analyzed and it is determined that the scan footprints are appropriately distributed or placed, specific scans can then be chosen for the generation of complete radiometry-based range images and point clouds. The simulation is capable of quickly modeling ray-trace geometry associated with (1) various focal plane arrays and scanner configurations and (2) various scenes and trajectories associated with particular maneuvers or missions.
Charming Users into Scripting CIAO with Python
NASA Astrophysics Data System (ADS)
Burke, D. J.
2011-07-01
The Science Data Systems group of the Chandra X-ray Center provides a number of scripts and Python modules that extend the capabilities of CIAO. Experience in converting the existing scripts—written in a variety of languages such as bash, csh/tcsh, Perl and S-Lang—to Python, and conversations with users, led to the development of the ciao_contrib.runtool module. This allows users to easily run CIAO tools from Python scripts, and utilizes the metadata provided by the parameter-file system to create an API that provides the flexibility and safety guarantees of the command-line. The module is provided to the user community and is being used within our group to create new scripts.
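Typical use of the module follows the documented pattern: each CIAO tool becomes a callable Python object whose keyword arguments mirror its parameter file. The file names below are made up for illustration.

```python
# Documented usage pattern for ciao_contrib.runtool: each CIAO tool is a
# callable object whose keyword arguments mirror its parameter file.
# The file names here are made up.
from ciao_contrib.runtool import dmcopy

dmcopy.punlearn()                            # reset parameters to defaults
dmcopy(infile="evt2.fits[energy=500:7000]",  # apply a filter on the fly
       outfile="evt2_filtered.fits",
       clobber=True)
```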
ERIC Educational Resources Information Center
Tsompanoudi, Despina; Satratzemi, Maya; Xinogalos, Stelios
2016-01-01
The results presented in this paper contribute to research on two different areas of teaching methods: distributed pair programming (DPP) and computer-supported collaborative learning (CSCL). An evaluation study of a DPP system that supports collaboration scripts was conducted over one semester of a computer science course. Seventy-four students…
Earth-Base: A Free And Open Source, RESTful Earth Sciences Platform
NASA Astrophysics Data System (ADS)
Kishor, P.; Heim, N. A.; Peters, S. E.; McClennen, M.
2012-12-01
This presentation describes the motivation, concept, and architecture behind Earth-Base, a web-based, RESTful data-management, analysis and visualization platform for earth sciences data. Traditionally, web applications have been built by directly accessing data from a database using a scripting language. While such applications are great at bringing results to a wide audience, they are limited in scope to the imagination and capabilities of the application developer. Earth-Base decouples the data store from the web application by introducing an intermediate "data application" tier. The data application's job is to query the data store using self-documented, RESTful URIs, and send the results back formatted as JavaScript Object Notation (JSON). Decoupling the data store from the application allows virtually limitless flexibility in developing applications, both web-based for human consumption and programmatic for machine consumption. It also allows outside developers to use the data in their own applications, potentially creating applications that the original data creator and app developer may not have even thought of. Standardized specifications for URI-based querying and JSON-formatted results make querying and developing applications easy. URI-based querying also allows utilizing distributed datasets easily. Companion mechanisms for querying data snapshots (aka time travel), usage tracking and license management, and verification of semantic equivalence of data are also described. The latter promotes the "What You Expect Is What You Get" (WYEIWYG) principle that can aid in data citation and verification.
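A minimal sketch of the "data application" tier described might look as follows; Flask and SQLite are used here purely for brevity, and the endpoint, table, and database are hypothetical rather than Earth-Base's actual stack.

```python
# Hypothetical sketch of a "data application" tier: a small RESTful
# service that queries a store and returns JSON. Flask/SQLite are used
# for brevity; endpoint, table, and database are not Earth-Base's stack.
import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/v1/samples/<int:site_id>")
def samples(site_id):
    con = sqlite3.connect("earthbase.db")
    rows = con.execute(
        "SELECT depth_m, age_ma FROM samples WHERE site = ?", (site_id,)
    ).fetchall()
    con.close()
    return jsonify([{"depth_m": d, "age_ma": a} for d, a in rows])

if __name__ == "__main__":
    app.run()
```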
Kiesewetter, Jan; Kollar, Ingo; Fernandez, Nicolas; Lubarsky, Stuart; Kiessling, Claudia; Fischer, Martin R; Charlin, Bernard
2016-09-01
Clinical work occurs in a context which is heavily influenced by social interactions. The absence of theoretical frameworks underpinning the design of collaborative learning has become a roadblock for interprofessional education (IPE). This article proposes a script-based framework for the design of IPE. This framework provides suggestions for designing learning environments intended to foster competences we feel are fundamental to successful interprofessional care. The current literature describes two script concepts: "illness scripts" and "internal/external collaboration scripts". Illness scripts are specific knowledge structures that link general disease categories and specific examples of diseases. "Internal collaboration scripts" refer to an individual's knowledge about how to interact with others in a social situation. "External collaboration scripts" are instructional scaffolds designed to help groups collaborate. Instructional research relating to illness scripts and internal collaboration scripts supports (a) putting learners in authentic situations in which they need to engage in clinical reasoning, and (b) scaffolding their interaction with others with "external collaboration scripts". Thus, well-established experiential instructional approaches should be combined with more fine-grained script-based scaffolding approaches. The resulting script-based framework offers instructional designers insights into how students can be supported to develop the necessary skills to master complex interprofessional clinical situations.
Amira: Multi-Dimensional Scientific Visualization for the GeoSciences in the 21st Century
NASA Astrophysics Data System (ADS)
Bartsch, H.; Erlebacher, G.
2003-12-01
amira (www.amiravis.com) is a general purpose framework for 3D scientific visualization that meets the needs of the non-programmer, the script writer, and the advanced programmer alike. Provided modules may be visually assembled in an interactive manner to create complex visual displays. These modules and their associated user interfaces are controlled either through a mouse, or via an interactive scripting mechanism based on Tcl. We provide interactive demonstrations of the various features of Amira and explain how these may be used to enhance the comprehension of datasets in use in the Earth Sciences community. Its features will be illustrated on scalar and vector fields on grid types ranging from Cartesian to fully unstructured. Specialized extension modules developed by some of our collaborators will be illustrated [1]. These include a module to automatically choose values for salient isosurface identification and extraction, and color maps suitable for volume rendering. During the session, we will present several demonstrations of remote networking, processing of very large spatio-temporal datasets, and various other projects that are underway. In particular, we will demonstrate WEB-IS, a java-applet interface to Amira that allows script editing via the web, and selected data analysis [2]. [1] G. Erlebacher, D. A. Yuen, F. Dubuffet, "Case Study: Visualization and Analysis of High Rayleigh Number -- 3D Convection in the Earth's Mantle", Proceedings of Visualization 2002, pp. 529--532. [2] Y. Wang, G. Erlebacher, Z. A. Garbow, D. A. Yuen, "Web-Based Service of a Visualization Package 'amira' for the Geosciences", Visual Geosciences, 2003.
Scalable Molecular Dynamics with NAMD
Phillips, James C.; Braun, Rosemary; Wang, Wei; Gumbart, James; Tajkhorshid, Emad; Villa, Elizabeth; Chipot, Christophe; Skeel, Robert D.; Kalé, Laxmikant; Schulten, Klaus
2008-01-01
NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD scales to hundreds of processors on high-end parallel platforms, as well as tens of processors on low-cost commodity clusters, and also runs on individual desktop and laptop computers. NAMD works with AMBER and CHARMM potential functions, parameters, and file formats. This paper, directed to novices as well as experts, first introduces concepts and methods used in the NAMD program, describing the classical molecular dynamics force field, equations of motion, and integration methods along with the efficient electrostatics evaluation algorithms employed and temperature and pressure controls used. Features for steering the simulation across barriers and for calculating both alchemical and conformational free energy differences are presented. The motivations for and a roadmap to the internal design of NAMD, implemented in C++ and based on Charm++ parallel objects, are outlined. The factors affecting the serial and parallel performance of a simulation are discussed. Next, typical NAMD use is illustrated with representative applications to a small, a medium, and a large biomolecular system, highlighting particular features of NAMD, e.g., the Tcl scripting language. Finally, the paper provides a list of the key features of NAMD and discusses the benefits of combining NAMD with the molecular graphics/sequence analysis software VMD and the grid computing/collaboratory software BioCoRE. NAMD is distributed free of charge with source code at www.ks.uiuc.edu. PMID:16222654
ERIC Educational Resources Information Center
Gijlers, Hannie; Weinberger, Armin; van Dijk, Alieke Mattia; Bollen, Lars; van Joolingen, Wouter
2013-01-01
Creating shared representations can foster knowledge acquisition by elementary school students by promoting active integration and translation of new information. In this study, we investigate to what extent awareness support and scripting facilitate knowledge construction and discourse quality of elementary school students (n = 94) in a…
"The Best App Is the Teacher" Introducing Classroom Scripts in Technology-Enhanced Education
ERIC Educational Resources Information Center
Montrieux, H.; Raes, A.; Schellens, T.
2017-01-01
A quasi-experimental study was set up in secondary education to study the role of teachers while implementing tablet devices in science education. Three different classroom scripts that guided students and teachers' actions during the intervention on two social planes (group and classroom level) are compared. The main goal was to investigate which…
Fault-Tolerant, Real-Time, Multi-Core Computer System
NASA Technical Reports Server (NTRS)
Gostelow, Kim P.
2012-01-01
A document discusses a fault-tolerant, self-aware, low-power, multi-core computer for space missions with thousands of simple cores, achieving speed through concurrency. The proposed machine decides how to achieve concurrency in real time, rather than depending on programmers. The driving features of the system are simple hardware that is modular in the extreme, with no shared memory, and software with significant runtime reorganizing capability. The document describes a mechanism for moving ongoing computations and data that is based on a functional model of execution. Because there is no shared memory, the processor connects to its neighbors through a high-speed data link. Messages are sent to a neighbor switch, which in turn forwards that message on to its neighbor until reaching the intended destination. Except for the neighbor connections, processors are isolated and independent of each other. The processors on the periphery also connect chip-to-chip, thus building up a large processor net. There is no particular topology to the larger net, as a function at each processor allows it to forward a message in the correct direction. Some chip-to-chip connections are not necessarily nearest neighbors, providing short cuts for some of the longer physical distances. The peripheral processors also provide the connections to sensors, actuators, radios, science instruments, and other devices with which the computer system interacts.
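A toy model of the neighbor-link forwarding described (each processor hops a message one link at a time toward its destination on a grid, with no shared memory) is sketched below; real hardware would also exploit the chip-to-chip short cuts and handle faulty links.

```python
# Toy model of neighbor-link forwarding on a 2D processor grid with no
# shared memory: a message hops one link at a time toward its destination
# (greedy X-then-Y routing). Real hardware adds short-cut links and
# fault handling.
def route(src, dst):
    """Yield each hop a message takes from src to dst."""
    x, y = src
    while (x, y) != dst:
        if x != dst[0]:
            x += 1 if dst[0] > x else -1
        else:
            y += 1 if dst[1] > y else -1
        yield (x, y)

print(list(route((0, 0), (2, 3))))  # five hops across the grid
```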
TOGA - A GNSS Reflections Instrument for Remote Sensing Using Beamforming
NASA Technical Reports Server (NTRS)
Esterhuizen, S.; Meehan, T. K.; Robison, D.
2009-01-01
Remotely sensing the Earth's surface using GNSS signals as bi-static radar sources is one of the most challenging applications for radiometric instrument design. As part of NASA's Instrument Incubator Program, our group at JPL has built a prototype instrument, TOGA (Time-shifted, Orthometric, GNSS Array), to address a variety of GNSS science needs. Observing GNSS reflections is a major focus of the design/development effort. The TOGA design features a steerable beam antenna array which can form a high-gain antenna pattern in multiple directions simultaneously. Multiple FPGAs provide flexible digital signal processing logic to process both GPS and Galileo reflections. A Linux OS based science processor serves as experiment scheduler and data post-processor. This paper outlines the TOGA design approach as well as preliminary results of reflection data collected from test flights over the Pacific Ocean. This reflection data demonstrates observation of the GPS L1/L2C/L5 signals.
Mass production of extensive air showers for the Pierre Auger Collaboration using Grid Technology
NASA Astrophysics Data System (ADS)
Lozano Bahilo, Julio; Pierre Auger Collaboration
2012-06-01
When ultra-high energy cosmic rays enter the atmosphere they interact, producing extensive air showers (EAS), which are the objects studied by the Pierre Auger Observatory. The number of particles involved in an EAS at these energies is of the order of billions, and the generation of a single simulated EAS requires many hours of computing time with current processors. In addition, the storage space consumed by the output of one simulated EAS is very high. Therefore we have to make use of Grid resources to be able to generate sufficient quantities of showers for our physics studies in reasonable time periods. We have developed a set of highly automated scripts written in common scripting languages in order to deal with the high number of jobs which we have to submit regularly to the Grid. In spite of the low number of sites supporting our Virtual Organization (VO), we have reached the top spot in CPU consumption among non-LHC (Large Hadron Collider) VOs within EGI (European Grid Infrastructure).
Waters, Theodore E A; Bosmans, Guy; Vandevivere, Eva; Dujardin, Adinda; Waters, Harriet S
2015-08-01
Recent work examining the content and organization of attachment representations suggests that one way in which we represent the attachment relationship is in the form of a cognitive script. This work has largely focused on early childhood or adolescence/adulthood, leaving a large gap in our understanding of script-like attachment representations in the middle childhood period. We present 2 studies and provide 3 critical pieces of evidence regarding the presence of a script-like representation of the attachment relationship in middle childhood. We present evidence that a middle childhood attachment script assessment tapped a stable underlying script using samples drawn from 2 western cultures, the United States (Study 1) and Belgium (Study 2). We also found evidence suggestive of the intergenerational transmission of secure base script knowledge (Study 1) and relations between secure base script knowledge and symptoms of psychopathology in middle childhood (Study 2). The results from this investigation represent an important downward extension of the secure base script construct.
Assessing the Progress of Trapped-Ion Processors Towards Fault-Tolerant Quantum Computation
NASA Astrophysics Data System (ADS)
Bermudez, A.; Xu, X.; Nigmatullin, R.; O'Gorman, J.; Negnevitsky, V.; Schindler, P.; Monz, T.; Poschinger, U. G.; Hempel, C.; Home, J.; Schmidt-Kaler, F.; Biercuk, M.; Blatt, R.; Benjamin, S.; Müller, M.
2017-10-01
A quantitative assessment of the progress of small prototype quantum processors towards fault-tolerant quantum computation is a problem of current interest in experimental and theoretical quantum information science. We introduce a necessary and fair criterion for quantum error correction (QEC), which must be achieved in the development of these quantum processors before their sizes are sufficiently big to consider the well-known QEC threshold. We apply this criterion to benchmark the ongoing effort in implementing QEC with topological color codes using trapped-ion quantum processors and, more importantly, to guide the future hardware developments that will be required in order to demonstrate beneficial QEC with small topological quantum codes. In doing so, we present a thorough description of a realistic trapped-ion toolbox for QEC and a physically motivated error model that goes beyond standard simplifications in the QEC literature. We focus on laser-based quantum gates realized in two-species trapped-ion crystals in high-optical aperture segmented traps. Our large-scale numerical analysis shows that, with the foreseen technological improvements described here, this platform is a very promising candidate for fault-tolerant quantum computation.
JPRS Report, Science & Technology, China, High-Performance Computer Systems
1992-10-28
…microprocessor array. The microprocessor array in the AP85 system is composed of 16 completely identical array element microprocessors. Each array element… …microprocessors and capable of host machine reading and writing. The memory capacity of the array element microprocessors as a whole can be expanded… …transmission functions to carry out data transmission from array element microprocessor to array element microprocessor, from array element…
NASA Technical Reports Server (NTRS)
Steck, Daniel
2009-01-01
This report documents the generation of a preliminary outbound Earth-to-Moon transfer database consisting of four cases calculated twice a day over a 19-year period. The database was needed as a first step so that NASA could rapidly generate Earth-to-Moon trajectories for the Constellation Program using the Mission Assessment Post Processor. The completed database was created by running a flight trajectory and optimization program, called Copernicus, in batch mode with the use of newly created Matlab functions. The database is accurate and has high data resolution. The techniques and scripts developed to generate the trajectory information will also be directly used in generating a comprehensive database.
Advanced crew procedures development techniques
NASA Technical Reports Server (NTRS)
Arbet, J. D.; Benbow, R. L.; Mangiaracina, A. A.; Mcgavern, J. L.; Spangler, M. C.; Tatum, I. C.
1975-01-01
The development of an operational computer program, the Procedures and Performance Program (PPP), which provides a procedures recording and crew/vehicle performance monitoring capability, is reported. The PPP provides real-time CRT displays and post-run hardcopy of procedures, difference procedures, performance, performance evaluation, and training script/training status data. During post-run, the program is designed to support evaluation through the reconstruction of displays to any point in time. A permanent record of the simulation exercise can be obtained via hardcopy output of the display data, and via magnetic tape transfer to the Generalized Documentation Processor (GDP). Reference procedures data may be transferred from the GDP to the PPP.
NASA Astrophysics Data System (ADS)
Larour, Eric; Cheng, Daniel; Perez, Gilberto; Quinn, Justin; Morlighem, Mathieu; Duong, Bao; Nguyen, Lan; Petrie, Kit; Harounian, Silva; Halkides, Daria; Hayes, Wayne
2017-12-01
Earth system models (ESMs) are becoming increasingly complex, requiring extensive knowledge and experience to deploy and use in an efficient manner. They run on high-performance architectures that are significantly different from the everyday environments that scientists use to pre- and post-process results (i.e., MATLAB, Python). This results in models that are hard to use for non-specialists and are increasingly specific in their application. It also makes them relatively inaccessible to the wider science community, not to mention to the general public. Here, we present a new software/model paradigm that attempts to bridge the gap between the science community and the complexity of ESMs by developing a new JavaScript application program interface (API) for the Ice Sheet System Model (ISSM). The aforementioned API allows cryosphere scientists to run ISSM on the client side of a web page within the JavaScript environment. When combined with a web server running ISSM (using a Python API), it enables the serving of ISSM computations in an easy and straightforward way. The deep integration and similarities between all the APIs in ISSM (MATLAB, Python, and now JavaScript) significantly shortens and simplifies the turnaround of state-of-the-art science runs and their use by the larger community. We demonstrate our approach via a new Virtual Earth System Laboratory (VESL) website (http://vesl.jpl.nasa.gov, VESL(2017)).
Spacewire on Earth orbiting scatterometers
NASA Technical Reports Server (NTRS)
Bachmann, Alex; Lang, Minh; Lux, James; Steffke, Richard
2002-01-01
The need for a high speed, reliable, and easy to implement communication link has led to the development of a space flight oriented version of IEEE 1355 called SpaceWire. SpaceWire is based on high-speed (200 Mbps) serial point-to-point links using Low Voltage Differential Signaling (LVDS). SpaceWire has provisions for routing messages between a large network of processors, using wormhole routing for low overhead and latency. Additionally, space qualified hybrids are available which provide the link layer to the user's bus. A test bed of multiple digital signal processor breadboards, demonstrating the ability to meet signal processing requirements for an orbiting scatterometer, has been implemented using three Astrium MCM-DSPs; each breadboard consists of a Multi Chip Module (MCM) that combines a space qualified Digital Signal Processor and peripherals, including IEEE 1355 links. With the addition of appropriate physical layer interfaces and software on the DSP, the SpaceWire link is used to communicate between processors on the test bed, e.g., sending timing references, commands, status, and science data among the processors. Results are presented on development issues surrounding the use of SpaceWire in this environment, from physical layer implementation (cables, connectors, LVDS drivers) to diagnostic tools, driver firmware, and development methodology. The tools, methods, hardware and software challenges, and preliminary performance are investigated and discussed.
Waters, Theodore E. A.; Ruiz, Sarah K.; Roisman, Glenn I.
2016-01-01
Increasing evidence suggests that attachment representations take at least two forms—a secure base script and an autobiographical narrative of childhood caregiving experiences. This study presents data from the first 26 years of the Minnesota Longitudinal Study of Risk and Adaptation (N = 169), examining the developmental origins of secure base script knowledge in a high-risk sample, and testing alternative models of the developmental sequencing of the construction of attachment representations. Results demonstrated that secure base script knowledge was predicted by observations of maternal sensitivity across childhood and adolescence. Further, findings suggest that the construction of a secure base script supports the development of a coherent autobiographical representation of childhood attachment experiences with primary caregivers by early adulthood. PMID:27302650
Vaughn, Brian E.; Waters, Theodore E. A.; Steele, Ryan D.; Roisman, Glenn I.; Bost, Kelly K.; Truitt, Warren; Waters, Harriet S.; Booth-LaForce, Cathryn
2016-01-01
Although attachment theory claims that early attachment representations reflecting the quality of the child’s “lived experiences” are maintained across developmental transitions, evidence that has emerged over the last decade suggests that the association between early relationship quality and adolescents’ attachment representations is fairly modest in magnitude. We used aspects of parenting beyond sensitivity over childhood and adolescence and early security to predict adolescents’ scripted attachment representations. At age 18 years, 673 participants from the NICHD Study of Early Child Care and Youth Development (SECCYD) completed the Attachment Script Assessment (ASA) from which we derived an assessment of secure base script knowledge. Measures of secure base support from childhood through age 15 years (e.g., parental monitoring of child activity, father presence in the home) were selected as predictors and accounted for an additional 8% of the variance in secure base script knowledge scores above and beyond direct observations of sensitivity and early attachment status alone, suggesting that adolescents’ scripted attachment representations reflect multiple domains of parenting. Cognitive and demographic variables also significantly increased predicted variance in secure base script knowledge by 2% each. PMID:27032953
Echelle Data Reduction Cookbook
NASA Astrophysics Data System (ADS)
Clayton, Martin
This document is the first version of the Starlink Echelle Data Reduction Cookbook. It contains scripts and procedures developed by regular or heavy users of the existing software packages. These scripts are generally of two types: templates, which readers may be able to modify to suit their particular needs, and utilities, which carry out a particular common task and can probably be used 'off-the-shelf'. In the nature of this subject, the recipes given are quite strongly tied to the software packages, rather than being science-data led. The major part of this document is divided into two sections dealing with scripts to be used with IRAF and with Starlink software (SUN/1).
Automating tasks in protein structure determination with the clipper python module.
McNicholas, Stuart; Croll, Tristan; Burnley, Tom; Palmer, Colin M; Hoh, Soon Wen; Jenkins, Huw T; Dodson, Eleanor; Cowtan, Kevin; Agirre, Jon
2018-01-01
Scripting programming languages provide the fastest means of prototyping complex functionality. Those with a syntax and grammar resembling human language also greatly enhance the maintainability of the produced source code. Furthermore, the combination of a powerful, machine-independent scripting language with binary libraries tailored for each computer architecture allows programs to break free from the tight boundaries of efficiency traditionally associated with scripts. In the present work, we describe how an efficient C++ crystallographic library such as Clipper can be wrapped, adapted and generalized for use in both crystallographic and electron cryo-microscopy applications, scripted with the Python language. We shall also place an emphasis on best practices in automation, illustrating how this can be achieved with this new Python module. © 2017 The Authors Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.
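The general pattern the paper describes, exposing a compiled high-performance library to a scripting language, can be sketched with ctypes from the Python standard library. The shared-library name and C function signature below are hypothetical stand-ins, not part of the real Clipper API.

```python
import ctypes

# Hypothetical compiled library exposing a C function:
#   double resolution_limit(const double* values, int n);
lib = ctypes.CDLL("./libcrystal.so")  # placeholder shared library
lib.resolution_limit.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_int]
lib.resolution_limit.restype = ctypes.c_double

def resolution_limit(values):
    """Python-friendly wrapper: accepts any sequence of floats."""
    arr = (ctypes.c_double * len(values))(*values)
    return lib.resolution_limit(arr, len(values))
```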
Examining Classroom Interactions Related to Difference in Students' Science Achievement.
ERIC Educational Resources Information Center
Zady, Madelon F.; Portes, Pedro R.; Ochs, V. Dan
2003-01-01
Examines the cognitive supports that underlie achievement in science using a cultural historical framework and the activity setting (AS) construct with five features: personnel, motivation, scripts, task demands, and beliefs. Reports four emergent phenomena--science activities, the building of learning, meaning in lessons, and the conflict over…
Spontaneous Emergence of Legibility in Writing Systems: The Case of Orientation Anisotropy.
Morin, Olivier
2018-03-01
Cultural forms are constrained by cognitive biases, and writing is thought to have evolved to fit basic visual preferences, but little is known about the history and mechanisms of that evolution. Cognitive constraints have been documented for the topology of script features, but not for their orientation. Orientation anisotropy in human vision, as revealed by the oblique effect, suggests that cardinal (vertical and horizontal) orientations, being easier to process, should be overrepresented in letters. As this study of 116 scripts shows, the orientation of strokes inside written characters massively favors cardinal directions, and it is organized in such a way as to make letter recognition easier: Cardinal and oblique strokes tend not to mix, and mirror symmetry is anisotropic, favoring vertical over horizontal symmetry. Phylogenetic analyses and recently invented scripts show that cultural evolution over the last three millennia cannot be the sole cause of these effects. Copyright © 2017 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
SeqBox: RNAseq/ChIPseq reproducible analysis on a consumer game computer.
Beccuti, Marco; Cordero, Francesca; Arigoni, Maddalena; Panero, Riccardo; Amparore, Elvio G; Donatelli, Susanna; Calogero, Raffaele A
2018-03-01
Short-read sequencing technology has been in use for more than a decade. However, the analysis of RNAseq and ChIPseq data is still computationally demanding, and simple access to raw data does not guarantee reproducibility of results between laboratories. To address these two aspects, we developed SeqBox, a cheap, efficient and reproducible RNAseq/ChIPseq hardware/software solution based on the NUC6I7KYK mini-PC (an Intel consumer game computer with a fast processor and a high-performance SSD disk) and the Docker container platform. In SeqBox the analysis of RNAseq and ChIPseq data is supported by a friendly GUI. This gives scientists with or without scripting experience access to fast and reproducible analysis. Docker container images, the docker4seq package and the GUI are available at http://www.bioinformatica.unito.it/reproducibile.bioinformatics.html. beccuti@di.unito.it. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
Besnier, Francois; Glover, Kevin A.
2013-01-01
This software package provides an R-based framework for making use of multi-core computers when running analyses in the population genetics program STRUCTURE. It is especially addressed to those users of STRUCTURE dealing with numerous and repeated data analyses, who could take advantage of an efficient script to automatically distribute STRUCTURE jobs among multiple processors. It also includes additional functions to divide analyses among combinations of populations within a single data set without the need to manually produce multiple projects, as is currently the case in STRUCTURE. The package consists of two main functions, MPI_structure() and parallel_structure(), as well as an example data file. We compared computing-time performance for this example data set on two computer architectures and showed that use of the present functions can result in several-fold improvements in computation time. ParallelStructure is freely available at https://r-forge.r-project.org/projects/parallstructure/. PMID:23923012
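The underlying idea, farming repeated runs of an external analysis program out to multiple processor cores, can be sketched in Python with multiprocessing and subprocess. The analysis_tool binary and its flags are hypothetical placeholders; the real package drives STRUCTURE through its own R functions.

```python
import subprocess
from multiprocessing import Pool

# Hypothetical job list: each entry is a (job_id, parameter_file) pair for an
# external analysis program; "analysis_tool" stands in for the real binary.
JOBS = [(i, f"params_{i}.txt") for i in range(20)]

def run_job(job):
    job_id, param_file = job
    subprocess.run(["analysis_tool", "-i", param_file, "-o", f"out_{job_id}"],
                   check=True)
    return job_id

if __name__ == "__main__":
    with Pool(processes=4) as pool:  # e.g., one worker per available core
        for done in pool.imap_unordered(run_job, JOBS):
            print(f"job {done} finished")
```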
Processor Units Reduce Satellite Construction Costs
NASA Technical Reports Server (NTRS)
2014-01-01
As part of the effort to build the Fast Affordable Science and Technology Satellite (FASTSAT), Marshall Space Flight Center developed a low-cost telemetry unit which is used to facilitate communication between a satellite and its receiving station. Huntsville, Alabama-based Orbital Telemetry Inc. has licensed the NASA technology and is offering to install the cost-cutting units on commercial satellites.
NASA Tech Briefs, October 2008
NASA Technical Reports Server (NTRS)
2008-01-01
Topics covered include: Control Architecture for Robotic Agent Command and Sensing; Algorithm for Wavefront Sensing Using an Extended Scene; CO2 Sensors Based on Nanocrystalline SnO2 Doped with CuO; Improved Airborne System for Sensing Wildfires; VHF Wide-Band, Dual-Polarization Microstrip-Patch Antenna; Onboard Data Processor for Change-Detection Radar Imaging; Using LDPC Code Constraints to Aid Recovery of Symbol Timing; System for Measuring Flexing of a Large Spaceborne Structure; Integrated Formation Optical Communication and Estimation System; Making Superconducting Welds between Superconducting Wires; Method for Thermal Spraying of Coatings Using Resonant-Pulsed Combustion; Coating Reduces Ice Adhesion; Hybrid Multifoil Aerogel Thermal Insulation; SHINE Virtual Machine Model for In-flight Updates of Critical Mission Software; Mars Image Collection Mosaic Builder; Providing Internet Access to High-Resolution Mars Images; Providing Internet Access to High-Resolution Lunar Images; Expressions Module for the Satellite Orbit Analysis Program Virtual Satellite; Small-Body Extensions for the Satellite Orbit Analysis Program (SOAP); Scripting Module for the Satellite Orbit Analysis Program (SOAP); XML-Based SHINE Knowledge Base Interchange Language; Core Technical Capability Laboratory Management System; MRO SOW Daily Script; Tool for Inspecting Alignment of Twinaxial Connectors; An ATP System for Deep-Space Optical Communication; Polar Traverse Rover Instrument; Expert System Control of Plant Growth in an Enclosed Space; Detecting Phycocyanin-Pigmented Microbes in Reflected Light; DMAC and NMP as Electrolyte Additives for Li-Ion Cells; Mass Spectrometer Containing Multiple Fixed Collectors; Waveguide Harmonic Generator for the SIM; Whispering Gallery Mode Resonator with Orthogonally Reconfigurable Filter Function; Stable Calibration of Raman Lidar Water-Vapor Measurements; Bimaterial Thermal Compensators for WGM Resonators; Root Source Analysis/ValuStream[Trade Mark] - A Methodology for Identifying and Managing Risks; Ensemble: an Architecture for Mission-Operations Software; Object Recognition Using Feature-and Color-Based Methods; On-Orbit Multi-Field Wavefront Control with a Kalman Filter; and The Interplanetary Overlay Networking Protocol Accelerator.
PsyScript: a Macintosh application for scripting experiments.
Bates, Timothy C; D'Oliveiro, Lawrence
2003-11-01
PsyScript is a scriptable application allowing users to describe experiments in Apple's compiled high-level object-oriented AppleScript language, while still supporting millisecond or better within-trial event timing (delays can be in milliseconds or refresh-based, and PsyScript can wait on external I/O, such as eye movement fixations). Because AppleScript is object oriented and system-wide, PsyScript experiments support complex branching, code reuse, and integration with other applications. Included AppleScript-based libraries support file handling and stimulus randomization and sampling, as well as more specialized tasks, such as adaptive testing. Advanced features include support for the BBox serial-port button box and a low-cost USB-based digital I/O card for millisecond timing; recording of any number and types of responses within a trial; novel responses, such as graphics tablet drawing; use of the Macintosh sound facilities to provide an accurate voice key and to save voice responses to disk; scriptable image creation; support for flicker-free animation; and gaze-dependent masking. The application is open source, allowing researchers to enhance the feature set and verify internal functions. Both the application and the source are available for free download at www.maccs.mq.edu.au/~tim/psyscript/.
ERIC Educational Resources Information Center
Seiler, Gale; Abraham, Anjali
2009-01-01
Conscientization involves a recursive process of reflection and action toward individual and social transformation. Often this process takes shape through encounters in/with diverse and often conflicting discourses. The study of student and teacher discourses, or scripts and counterscripts, in science classrooms can reveal asymmetrical power…
Huth-Bocks, Alissa C.; Muzik, Maria; Beeghly, Marjorie; Earls, Lauren; Stacks, Ann M.
2015-01-01
There is growing evidence that ‘secure-base scripts’ (Waters & Waters, 2006) are an important part of the cognitive underpinnings of internal working models of attachment. Recent research in middle class samples has shown that secure-base scripts are linked to maternal attachment-oriented behavior and child outcomes. However, little is known about the correlates of secure base scripts in higher-risk samples. Participants in the current study included 115 mothers who were oversampled for childhood maltreatment and their infants. Results revealed that a higher level of secure base scriptedness was significantly related to more positive and less negative maternal parenting in both unstructured free play and structured teaching contexts, and to higher reflective functioning scores on the Parent Development Interview-Revised Short Form (Slade, Aber, Berger, Bresgi, & Kaplan, 2003). Associations with parent-child secure base scripts, specifically, indicate some level of relationship-specificity in attachment scripts. Many, but not all, significant associations remained after controlling for family income and maternal age. Findings suggest that assessing secure base scripts among mothers known to be at risk for parenting difficulties may be important for interventions aimed at altering problematic parental representations and caregiving behavior. PMID:25319230
Internal and External Scripts in Computer-Supported Collaborative Inquiry Learning
ERIC Educational Resources Information Center
Kollar, Ingo; Fischer, Frank; Slotta, James D.
2007-01-01
We investigated how differently structured external scripts interact with learners' internal scripts with respect to individual knowledge acquisition in a Web-based collaborative inquiry learning environment. Ninety students from two secondary schools participated. Two versions of an external collaboration script (high vs. low structured)…
ESDAPT - APT PROGRAMMING EDITOR AND INTERPRETER
NASA Technical Reports Server (NTRS)
Premack, T.
1994-01-01
ESDAPT is a graphical programming environment for developing APT (Automatically Programmed Tool) programs for controlling numerically controlled machine tools. ESDAPT has a graphical user interface that provides the user with an APT syntax-sensitive text editor and windows for displaying geometry and tool paths. APT geometry statements can also be created using menus and screen picks. ESDAPT interprets APT geometry statements and displays the results in its view windows. Tool paths are generated by batching the APT source to an APT processor (COSMIC P-APT recommended). The tool paths are then displayed in the view windows. Hardcopy output of the view windows is in color PostScript format. ESDAPT is written in C, yacc, lex, and XView for use on Sun4 series computers running SunOS. ESDAPT requires 4Mb of disk space, 7Mb of RAM, and MIT's X Window System, Version 11 Release 4, or OpenWindows version 3 for execution. Program documentation in PostScript format and an executable for OpenWindows version 3 are provided on the distribution media. The standard distribution medium for ESDAPT is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. This program was developed in 1992.
HyspIRI On-Board Science Data Processing
NASA Technical Reports Server (NTRS)
Flatley, Tom
2010-01-01
Topics include on-board science data processing, on-board image processing, software upset mitigation, on-board data reduction, on-board "VSWIR" products, the HyspIRI demonstration testbed, and processor comparison.
Multi-Scale Characterization of Orthotropic Microstructures
2008-04-01
D. Valiveti, S. J. Harris, J. Boileau, A domain partitioning based pre-processor for multi-scale modelling of cast aluminium alloys, Modelling and...SUPPLEMENTARY NOTES: Journal article submitted to Modeling and Simulation in Materials Science and Engineering. PAO Case Number: WPAFB 08-3362...element for characterization or simulation to avoid misleading predictions of macroscopic deformation, fracture, or transport behavior. Likewise
Catch the A-Train from the NASA GIBS/Worldview Platform
NASA Astrophysics Data System (ADS)
Schmaltz, J. E.; Alarcon, C.; Baynes, K.; Boller, R. A.; Cechini, M. F.; De Cesare, C.; De Luca, A. P.; Gunnoe, T.; King, B. A.; King, J.; Pressley, N. N.; Roberts, J. T.; Rodriguez, J.; Thompson, C. K.; Wong, M. M.
2016-12-01
The satellites and instruments of the Afternoon Train (A-Train) are providing an unprecedented combination of nearly simultaneous measurements. One of the challenges for researchers and applications users is to sift through these combinations to find particular sets of data that correspond to their interests. Visualization of the data is one way to explore these combinations. NASA's Worldview tool is designed to do just that: to interactively browse full-resolution satellite imagery. Worldview (https://worldview.earthdata.nasa.gov/) is web-based and developed using open libraries and standards (OpenLayers, JavaScript, CSS, HTML) for cross-platform compatibility. It addresses growing user demands for access to full-resolution imagery by providing a responsive, interactive interface with global coverage and no artificial boundaries. In addition to science data imagery, Worldview provides ancillary datasets such as coastlines and borders, socio-economic layers, and satellite orbit tracks. Worldview interacts with the Earthdata Search Client to provide download of the data files associated with the imagery being viewed. The imagery used by Worldview is provided by NASA's Global Imagery Browse Services (GIBS - https://earthdata.nasa.gov/gibs), which provide highly responsive, highly scalable imagery services. Requests are made via the OGC Web Map Tile Service (WMTS) standard. In addition to Worldview, other clients can be developed using a variety of web-based libraries, desktop and mobile app libraries, and GDAL script-based access. GIBS currently includes more than 106 science data sets from seven instruments aboard three of the A-Train satellites, and new data sets are being added as part of the President's Big Earth Data Initiative (BEDI). Efforts are underway to include new imagery types, such as vectors and curtains, in Worldview/GIBS, which will be used to visualize additional A-Train science parameters.
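Since GIBS imagery is requested through the standard OGC WMTS interface, a tile request can be built from the usual WMTS key-value parameters. A minimal Python sketch follows; the endpoint URL and layer/tile identifiers are examples to be checked against the GIBS documentation rather than guaranteed values.

```python
from urllib.parse import urlencode

# Endpoint and layer name are illustrative; consult the GIBS documentation
# for the actual service URLs and layer identifiers.
ENDPOINT = "https://gibs.earthdata.nasa.gov/wmts/epsg4326/best/wmts.cgi"

def gettile_url(layer, time, matrix_set, zoom, row, col, fmt="image/jpeg"):
    """Build a standard OGC WMTS GetTile request in key-value-pair form."""
    params = {
        "SERVICE": "WMTS", "REQUEST": "GetTile", "VERSION": "1.0.0",
        "LAYER": layer, "STYLE": "default", "FORMAT": fmt, "TIME": time,
        "TILEMATRIXSET": matrix_set, "TILEMATRIX": zoom,
        "TILEROW": row, "TILECOL": col,
    }
    return ENDPOINT + "?" + urlencode(params)

print(gettile_url("MODIS_Terra_CorrectedReflectance_TrueColor",
                  "2016-08-01", "250m", 3, 2, 5))
```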
[Medical practice, magic and religion - conjunction and development before and after Reformation].
Thorvardardottir, Olina Kjerulf
2017-12-01
The conjunction between medical practice, religion and magic becomes rather visible when one peers into old scripts and ancient literature. Before the foundation and diffusion of universities on the continent, the European convents and cloisters were the centers of medical knowledge and practice for centuries. Alongside the scholarly development of medical science, grown from the roots of the oldest scholarly medical practice, the practice of folk medicine flourished and thrived all over Europe, not least herbal medicine, which is the original form and foundation of modern pharmacy. This article deals with the conjunction of religion, magic and medical practice in ancient Icelandic sources such as the Old Norse literature, medical scripts from 12th-15th century Iceland, and not least the Icelandic magical scripts (galdrakver) of the 17th century. The last-mentioned documents were used as evidence in several witch trials that led convicted witches to suffer execution at the stake once the wave of European witch persecutions had rushed ashore in 17th century Iceland. These sources indicate a decline of medical knowledge and science in 16th and 17th century Iceland, the medical practice being rather undeveloped at the time, in Iceland as in other parts of Europe, and therefore a rather unclear margin between "the learned and the laymen". While common people and folk healers were convicted as witches to suffer at the stake for possession of magical scripts and healing books, some scholars of the state of Denmark were practicing healing methods comparable to the activities of the former. That comparison raises an inevitable question of where to draw the line between the learned medical man and the magician of 17th century Iceland, that is, between magic and science.
Gender differences in performance of script analysis by older adults.
Helmes, E; Bush, J D; Pike, D L; Drake, D G
2006-12-01
Script analysis as a test of executive functions is presumed sensitive to cognitive changes seen with increasing age. Two studies evaluated whether gender differences exist in performance on scripts for familiar and unfamiliar tasks in groups of cognitively intact older adults. In Study 1, 26 older adults completed male and female stereotypical scripts. Results were not significant, but a tendency was present, with each gender making fewer impossible errors on the gender-typical script. Such an interaction was also noted in Study 2, which contrasted 50 older with 50 younger adults on three scripts, including a script of neutral familiarity. The pattern of significant interactions for errors suggested the need to use scripts that are based upon tasks equally familiar to both genders.
NASA Technical Reports Server (NTRS)
Ullman, Richard; Bane, Bob; Yang, Jingli
2008-01-01
A shell script has been written as a means of automatically making HDF-EOS-formatted data sets available via the World Wide Web. ("HDF-EOS" and variants thereof are defined in the first of the two immediately preceding articles.) The shell script chains together software tools developed by the Data Usability Group at Goddard Space Flight Center to perform the following actions: extract metadata in Object Definition Language (ODL) from an HDF-EOS file; convert the metadata from ODL to Extensible Markup Language (XML); reformat the XML metadata into human-readable Hypertext Markup Language (HTML); publish the HTML metadata and the original HDF-EOS file to a Web server and an Open-source Project for a Network Data Access Protocol (OPeNDAP) server computer; and reformat the XML metadata and submit the resulting file to the EOS Clearinghouse, a Web-based metadata clearinghouse that facilitates searching for, and exchange of, Earth science data.
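The chained pipeline described above maps naturally onto a sequence of subprocess calls. The following Python sketch mirrors that flow; the tool names (extract_odl, odl2xml, xml2html) and the publishing destination are hypothetical placeholders, not the actual Data Usability Group tools.

```python
import subprocess

def publish(hdfeos_file: str) -> None:
    """Chain the pipeline steps described above; tool names are placeholders."""
    odl = hdfeos_file + ".odl"
    xml = hdfeos_file + ".xml"
    html = hdfeos_file + ".html"
    subprocess.run(["extract_odl", hdfeos_file, "-o", odl], check=True)  # 1. extract ODL metadata
    subprocess.run(["odl2xml", odl, "-o", xml], check=True)              # 2. ODL -> XML
    subprocess.run(["xml2html", xml, "-o", html], check=True)            # 3. XML -> readable HTML
    subprocess.run(["scp", html, hdfeos_file,                            # 4. publish both files
                    "webhost:/var/www/data/"], check=True)

publish("MOD021KM.A2004001.hdf")  # hypothetical example file name
```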
The development of videos in culturally grounded drug prevention for rural native Hawaiian youth.
Okamoto, Scott K; Helm, Susana; McClain, Latoya L; Dinson, Ay-Laina
2012-12-01
The purpose of this study was to adapt and validate narrative scripts to be used for the video components of a culturally grounded drug prevention program for rural Native Hawaiian youth. Scripts to be used to film short video vignettes of drug-related problem situations were developed based on a foundation of pre-prevention research funded by the National Institute on Drug Abuse. Seventy-four middle- and high-school-aged youth in 15 focus groups adapted and validated the details of the scripts to make them more realistic. Specifically, youth participants affirmed the situations described in the scripts and suggested changes to details of the scripts to make them more culturally specific. Suggested changes to the scripts also reflected preferred drug resistance strategies described in prior research, and varied based on the type of drug offerer described in each script (i.e., peer/friend, parent, or cousin/sibling). Implications for culturally grounded drug prevention are discussed.
Trial-Based Functional Analysis Informs Treatment for Vocal Scripting.
Rispoli, Mandy; Brodhead, Matthew; Wolfe, Katie; Gregori, Emily
2018-05-01
Research on trial-based functional analysis has primarily focused on socially maintained challenging behaviors. However, procedural modifications may be necessary to clarify ambiguous assessment results. The purposes of this study were to evaluate the utility of iterative modifications to trial-based functional analysis on the identification of putative reinforcement and subsequent treatment for vocal scripting. For all participants, modifications to the trial-based functional analysis identified a primary function of automatic reinforcement. The structure of the trial-based format led to identification of social attention as an abolishing operation for vocal scripting. A noncontingent attention treatment was evaluated using withdrawal designs for each participant. This noncontingent attention treatment resulted in near zero levels of vocal scripting for all participants. Implications for research and practice are presented.
Arabic Script and the Rise of Arabic Calligraphy
ERIC Educational Resources Information Center
Alshahrani, Ali A.
2008-01-01
The aim of this paper is to present a concise, coherent literature review of the Arabic language script system as one of the oldest living Semitic languages in the world. The article first discusses in depth Arabic script as a phonemic, sound-based writing system of twenty-eight letters, written right to left in a cursive script in which letterforms are shaped by their…
Scripted Collaboration and Group-Based Variations in a Higher Education CSCL Context
ERIC Educational Resources Information Center
Hamalainen, Raija; Arvaja, Maarit
2009-01-01
Scripting student activities is one way to make Computer-Supported Collaborative Learning more efficient. This case study examines how scripting guided student group activities and also how different groups interpreted the script; what kinds of roles students adopted and what kinds of differences there were between the groups in terms of their…
Computer-Based Script Training for Aphasia: Emerging Themes from Post-Treatment Interviews
ERIC Educational Resources Information Center
Cherney, Leora R.; Halper, Anita S.; Kaye, Rosalind C.
2011-01-01
This study presents results of post-treatment interviews following computer-based script training for persons with chronic aphasia. Each of the 23 participants received 9 weeks of AphasiaScripts training. Post-treatment interviews were conducted with the person with aphasia and/or a significant other person. The 23 interviews yielded 584 coded…
Groskreutz, Mark P; Peters, Amy; Groskreutz, Nicole C; Higbee, Thomas S
2015-01-01
Children with developmental disabilities may engage in less frequent and more repetitious language than peers with typical development. Scripts have been used to increase communication by teaching one or more specific statements and then fading the scripts. In the current study, preschoolers with developmental disabilities experienced a novel script-frame protocol and learned to make play-related comments about toys. After the script-frame protocol, commenting occurred in the absence of scripts, with untrained play activities, and included untrained comments. © Society for the Experimental Analysis of Behavior.
Invisible Mars: New Visuals for Communicating MAVEN's Story
NASA Astrophysics Data System (ADS)
Shupla, C. B.; Ali, N. A.; Jones, A. P.; Mason, T.; Schneider, N. M.; Brain, D. A.; Blackwell, J.
2016-12-01
Invisible Mars tells the story of Mars' evolving atmosphere, through a script and a series of visuals as a live presentation. Created for Science-On-A-Sphere, the presentation has also been made available to planetariums, and is being expanded to other platforms. The script has been updated to include results from the Mars Atmosphere and Volatile Evolution Mission (MAVEN), and additional visuals have been produced. This poster will share the current Invisible Mars resources available and the plans to further disseminate this presentation.
Offline software for the DAMPE experiment
NASA Astrophysics Data System (ADS)
Wang, Chi; Liu, Dong; Wei, Yifeng; Zhang, Zhiyong; Zhang, Yunlong; Wang, Xiaolian; Xu, Zizong; Huang, Guangshun; Tykhonov, Andrii; Wu, Xin; Zang, Jingjing; Liu, Yang; Jiang, Wei; Wen, Sicheng; Wu, Jian; Chang, Jin
2017-10-01
A software system has been developed for the DArk Matter Particle Explorer (DAMPE) mission, a satellite-based experiment. The DAMPE software is mainly written in C++ and steered using a Python script. This article presents an overview of the DAMPE offline software, including the major architecture design and specific implementation for simulation, calibration and reconstruction. The whole system has been successfully applied to DAMPE data analysis. Some results obtained using the system, from simulation and beam test experiments, are presented. Supported by Chinese 973 Program (2010CB833002), the Strategic Priority Research Program on Space Science of the Chinese Academy of Science (CAS) (XDA04040202-4), the Joint Research Fund in Astronomy under cooperative agreement between the National Natural Science Foundation of China (NSFC) and CAS (U1531126) and 100 Talents Program of the Chinese Academy of Science
NASA Astrophysics Data System (ADS)
Murdin, P.
2000-11-01
Rocket scientist, writer, born in Berlin, Germany. Inspired by reading a work by the space pioneer HERMANN OBERTH, Ley founded the German Society for Space Travel (1927), enrolled WERNHER VON BRAUN, and helped develop the liquid-fuel rocket. He fled to the USA and became a science writer, whose work included science fiction and film scripts....
ORNL Cray X1 evaluation status report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, P.K.; Alexander, R.A.; Apra, E.
2004-05-01
On August 15, 2002, the Department of Energy (DOE) selected the Center for Computational Sciences (CCS) at Oak Ridge National Laboratory (ORNL) to deploy a new scalable vector supercomputer architecture for solving important scientific problems in climate, fusion, biology, nanoscale materials and astrophysics. ''This program is one of the first steps in an initiative designed to provide U.S. scientists with the computational power that is essential to 21st century scientific leadership,'' said Dr. Raymond L. Orbach, director of the department's Office of Science. In FY03, CCS procured a 256-processor Cray X1 to evaluate the processors, memory subsystem, scalability of the architecture, and software environment, and to predict the expected sustained performance on key DOE application codes. The results of the micro-benchmarks and kernel benchmarks show the architecture of the Cray X1 to be exceptionally fast for most operations. The best results are shown on large problems, where it is not possible to fit the entire problem into the cache of the processors. These large problems are exactly the types of problems that are important for the DOE and ultra-scale simulation. Application performance is found to be markedly improved by this architecture: large-scale simulations of high-temperature superconductors run 25 times faster than on an IBM Power4 cluster using the same number of processors; best performance of the Parallel Ocean Program (POP v1.4.3) is 50 percent higher than on Japan's Earth Simulator and 5 times higher than on an IBM Power4 cluster; a fusion application, global GYRO transport, was found to be 16 times faster on the X1 than on an IBM Power3, and the increased performance allowed simulations to fully resolve questions raised by a prior study; the transport kernel in the AGILE-BOLTZTRAN astrophysics code runs 15 times faster than on an IBM Power4 cluster using the same number of processors; and molecular dynamics simulations related to the phenomenon of photon echo run 8 times faster than previously achieved. Even at 256 processors, the Cray X1 system is already outperforming other supercomputers with thousands of processors for a certain class of applications such as climate modeling and some fusion applications. This evaluation is the outcome of a number of meetings with both high-performance computing (HPC) system vendors and application experts over the past 9 months and has received broad-based support from the scientific community and other agencies.
ERIC Educational Resources Information Center
Piwowar, Valentina; Barth, Victoria L.; Ophardt, Diemut; Thiel, Felicitas
2018-01-01
Scripted videos are based on a screenplay and are a viable and widely used tool for learning. Yet, reservations exist due to limited authenticity and high production costs. The present paper comprehensively describes a video production process for scripted videos on the topic of student misbehavior in the classroom. In a three step…
JPRS Report, Science & Technology, USSR: Computers, Control Systems and Machines
1989-03-14
optimizatsii slozhnykh sistem (Coding Theory and Complex System Optimization). Alma-Ata, Nauka Press, 1977, pp. 8-16. 11. Author's certificate number...Interpreter Specifics [O. I. Amvrosova], p. 141. Creation of Modern Computer Systems for Complex Ecological...processor can be designed to decrease degradation upon failure and assure more reliable processor operation, without requiring more complex software or
Embedded Data Processor and Portable Computer Technology testbeds
NASA Technical Reports Server (NTRS)
Alena, Richard; Liu, Yuan-Kwei; Goforth, Andre; Fernquist, Alan R.
1993-01-01
Attention is given to current activities in the Embedded Data Processor and Portable Computer Technology testbed configurations that are part of the Advanced Data Systems Architectures Testbed at the Information Sciences Division at NASA Ames Research Center. The Embedded Data Processor Testbed evaluates advanced microprocessors for potential use in mission and payload applications within the Space Station Freedom Program. The Portable Computer Technology (PCT) Testbed integrates and demonstrates advanced portable computing devices and data system architectures. The PCT Testbed uses both commercial and custom-developed devices to demonstrate the feasibility of functional expansion and networking for portable computers in flight missions.
An innovative on-board processor for lightsats
NASA Technical Reports Server (NTRS)
Henshaw, R. M.; Ballard, B. W.; Hayes, J. R.; Lohr, D. A.
1990-01-01
The Applied Physics Laboratory (APL) has developed a flightworthy custom microprocessor that increases capability and reduces development costs of lightsat science instruments. This device, called the FRISC (FORTH Reduced Instruction Set Computer), directly executes the high-level language called FORTH, which is ideally suited to the multitasking control and data processing environment of a spaceborne instrument processor. The FRISC will be flown as the onboard processor in the Magnetic Field Experiment on the Freja satellite. APL has achieved a significant increase in onboard processing capability with no increase in cost when compared to the magnetometer instrument on Freja's predecessor, the Viking satellite.
Developing Matlab scripts for image analysis and quality assessment
NASA Astrophysics Data System (ADS)
Vaiopoulos, A. D.
2011-11-01
Image processing is a very helpful tool in many fields of modern science that involve the examination and interpretation of digital images. Processed images, however, often need to be correlated with the original image in order to ensure that the resulting image fulfills its purpose. Aside from visual examination, which is mandatory, image quality indices (such as the correlation coefficient, entropy and others) are very useful when deciding which processed image is the most satisfactory. For this reason, a single program (script) was written in the Matlab language which automatically calculates eight indices by utilizing eight respective functions (independent function scripts). The program was tested on both fused hyperspectral (Hyperion-ALI) and multispectral (ALI, Landsat) imagery and proved to be efficient. The indices were found to be in agreement with visual examination and statistical observations.
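Two of the indices named above, the correlation coefficient and entropy, are simple enough to state compactly. The following Python/NumPy sketch reimplements the standard definitions as small independent functions, mirroring the one-function-per-index structure described; it is not the authors' Matlab code.

```python
import numpy as np

def correlation_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two images of equal shape."""
    a, b = a.astype(float).ravel(), b.astype(float).ravel()
    return float(np.corrcoef(a, b)[0, 1])

def entropy(img: np.ndarray, levels: int = 256) -> float:
    """Shannon entropy (bits) of an image's gray-level histogram."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]  # ignore empty bins; 0*log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())
```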
NASA Technical Reports Server (NTRS)
Fischer, James R.; Grosch, Chester; Mcanulty, Michael; Odonnell, John; Storey, Owen
1987-01-01
NASA's Office of Space Science and Applications (OSSA) gave a select group of scientists the opportunity to test and implement their computational algorithms on the Massively Parallel Processor (MPP) located at Goddard Space Flight Center, beginning in late 1985. One year later, the Working Group presented its report, which addressed the following: algorithms, programming languages, architecture, programming environments, the way theory relates to practice, and measured performance. The findings point to a number of demonstrated computational techniques for which the MPP architecture is ideally suited. For example, besides executing much faster on the MPP than on conventional computers, systolic VLSI simulation (where distances are short), lattice simulation, neural network simulation, and image problems were found to be easier to program on the MPP's architecture than on a CYBER 205 or even a VAX. The report also makes technical recommendations covering all aspects of MPP use, and recommendations concerning the future of the MPP and machines based on similar architectures, expansion of the Working Group, and study of the role of future parallel processors for space station, EOS, and the Great Observatories era.
Kiesewetter, Jan; Fischer, Frank; Fischer, Martin R.
2016-01-01
Background Is there evidence for expertise on collaboration and, if so, is there evidence for cross-domain application? Recall of stimuli was used to measure so-called internal collaboration scripts of novices and experts in two studies. Internal collaboration scripts refer to an individual’s knowledge about how to interact with others in a social situation. Method—Study 1 Ten collaboration experts and ten novices of the content domain social science were presented with four pictures of people involved in collaborative activities. The recall texts were coded, distinguishing between superficial and collaboration script information. Results—Study 1 Experts recalled significantly more collaboration script information (M = 25.20; SD = 5.88) than did novices (M = 13.80; SD = 4.47). Differences in superficial information were not found. Study 2 Study 2 tested whether the differences found in Study 1 could be replicated. Furthermore, the cross-domain application of internal collaboration scripts was explored. Method—Study 2 Twenty collaboration experts and 20 novices of the content domain medicine were presented with four pictures and four videos of their content domain and a video and picture of another content domain. All stimuli showed collaborative activities typical for the respective content domains. Results—Study 2 As in Study 1, experts recalled significantly more collaboration script information of their content domain (M = 71.65; SD = 33.23) than did novices (M = 54.25; SD = 15.01). For the novices, no differences were found for the superficial information nor for the retrieval of collaboration script information recalled after the other content domain stimuli. Discussion There is evidence for expertise on collaboration in memory tasks. The results show that experts hold substantially more collaboration script information than did novices. Furthermore, the differences between collaboration novices and collaboration experts occurred only in their own content domain, indicating that internal collaboration scripts are not easily stored and retrieved in memory tasks other than in the own content domain. PMID:26866801
NASA Technical Reports Server (NTRS)
Smith, Paul H.
1988-01-01
The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.
NASA Technical Reports Server (NTRS)
Kempler, Steven; Lynnes, Christopher; Vollmer, Bruce; Alcott, Gary; Berrick, Stephen
2009-01-01
Increasingly sophisticated National Aeronautics and Space Administration (NASA) Earth science missions have driven their associated data and data management systems from providing simple point-to-point archiving and retrieval to performing user-responsive distributed multisensor information extraction. To fully maximize the use of remote-sensor-generated Earth science data, NASA recognized the need for data systems that provide data access and manipulation capabilities responsive to research brought forth by advancing scientific analysis and the need to maximize the use and usability of the data. The decision by NASA to purposely evolve the Earth Observing System Data and Information System (EOSDIS) at the Goddard Space Flight Center (GSFC) Earth Sciences (GES) Data and Information Services Center (DISC) and other information management facilities was timely and appropriate. The GES DISC evolution was focused on replacing the EOSDIS Core System (ECS) by reusing the in-house developed, disk-based Simple, Scalable, Script-based Science Product Archive (S4PA) data management system and migrating data to the disk archives. The transition was completed in December 2007.
SCDU Testbed Automated In-Situ Alignment, Data Acquisition and Analysis
NASA Technical Reports Server (NTRS)
Werne, Thomas A.; Wehmeier, Udo J.; Wu, Janet P.; An, Xin; Goullioud, Renaud; Nemati, Bijan; Shao, Michael; Shen, Tsae-Pyng J.; Wang, Xu; Weilert, Mark A.;
2010-01-01
In the course of fulfilling its mandate, the Spectral Calibration Development Unit (SCDU) testbed for SIM-Lite produces copious amounts of raw data. To effectively spend time attempting to understand the science driving the data, the team devised computerized automations to limit the time spent bringing the testbed to a healthy state and commanding it, and instead focus on analyzing the processed results. We developed a multi-layered scripting language that emphasized the scientific experiments we conducted, which drastically shortened our experiment scripts, improved their readability, and all but eliminated testbed operator errors. In addition to scientific experiment functions, we also developed a set of automated alignments that bring the testbed up to a well-aligned state with little more than the push of a button. These scripts were written in the scripting language, and in Matlab via an interface library, allowing all members of the team to augment the existing scripting language with complex analysis scripts. To keep track of these results, we created an easily parseable state log in which we logged both the state of the testbed and relevant metadata. Finally, we designed a distributed processing system that allowed us to farm lengthy analyses out to a collection of client computers, which reported their results in a central log. Since these logs were parseable, we wrote query scripts that gave us an effortless way to compare results collected under different conditions. This paper serves as a case study, detailing the motivating requirements for the decisions we made and explaining the implementation process.
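A minimal sketch of the parseable state log and query idea, in Python rather than the testbed's own scripting language: each record is one JSON line of state plus metadata, and queries are just filters over the parsed records. The field names here are invented for illustration.

```python
import json
from datetime import datetime, timezone

LOG = "state_log.jsonl"

def log_state(experiment: str, state: dict) -> None:
    """Append one machine-parseable record of testbed state plus metadata."""
    record = {"time": datetime.now(timezone.utc).isoformat(),
              "experiment": experiment, **state}
    with open(LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

def query(predicate) -> list:
    """Return all logged records matching a condition."""
    with open(LOG) as f:
        return [r for r in map(json.loads, f) if predicate(r)]

# Example usage with hypothetical state fields:
log_state("spectral_cal", {"alignment": "ok", "rms_nm": 1.8})
out_of_spec = query(lambda r: r.get("rms_nm", 0) > 1.5)
```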
Theater in Physics Teacher Education
ERIC Educational Resources Information Center
van den Berg, Ed
2009-01-01
Ten years ago I sat down with the first batch of students in our science/math teacher education program in the Philippines, then third-year students, and asked them what they could do for the opening of the new science building. One of them pulled a stack of papers out of his bag and put it in front of me: a complete script for a science play!…
2017-09-04
10 years at 90% depth of discharge; weight: 170 lb/374 kg. PV panels: 12 panels with a 3.36 kW solar array capacity. Generator: 10 kW TQG...lightweight thin-film PV panels (solar modules or "solar blankets"). These solar blankets were...[Figure 92: Temperature and Humidity]...collected by various PV panels, and charging times for BB2590 batteries. 4.5.2 Operational Script. The experimental nano-coated solar panel
Anibamine and its Analogues as Novel Anti-Prostate Cancer Agents
2009-06-01
expression of CCR5 and CCL5 was quantitated by SYBR-based real-time PCR. The U6 gene was used as internal control. cDNA was synthesized using the iScript...Conclusion: The major focus of the research project will be the syntheses of the ligands we designed as chemokine receptor CCR5 antagonists with...provide useful information and insights for both basic research and drug design and hence are widely welcomed by the science community. More specifically
V.A. I Animal Science Technical Information.
ERIC Educational Resources Information Center
Texas A and M Univ., College Station. Vocational Instructional Services.
This packet contains two units of informational materials and transparency masters, with accompanying scripts, for teachers to use in an animal science course in vocational agriculture. Unit A on breeds and selection of livestock and poultry includes 13 topics covering beef cattle, dairy cattle, swine, horses, goats, sheep, and poultry. Unit B on…
Plant Science. IV-A-1 to IV-F-2. Basic V.A.I.
ERIC Educational Resources Information Center
Texas A and M Univ., College Station. Vocational Instructional Services.
This packet contains six units of informational materials and transparency masters, with accompanying scripts, for teachers to use in a plant science course in vocational agriculture. Designed especially for use in Texas, the first unit introduces the course through the following topics: economic importance of major crops, major areas of…
Ditching the Script: Moving beyond "Automatic Thinking" in Introductory Political Science Courses
ERIC Educational Resources Information Center
Glover, Robert W.; Tagliarina, Daniel
2011-01-01
Political science is a challenging field, particularly when it comes to undergraduate teaching. If we are to engage in something more than uncritical ideological instruction, it demands from the student a willingness to approach alien political ideas with intellectual generosity. Yet, students within introductory classes often harbor inherited…
Static and Current Electricity.
ERIC Educational Resources Information Center
Schlenker, Richard M.; Murtha, Kathy T.
This is a copy of the script for the electrical relationships unit in an auto-tutorial physical science course for non-science majors, offered at the University of Maine at Orono. The unit includes 15 simple experiments designed to allow the student to discover various fundamental electrical relationships. The student has the option of reading the…
An automated process for generating archival data files from MATLAB figures
NASA Astrophysics Data System (ADS)
Wallace, G. M.; Greenwald, M.; Stillerman, J.
2016-10-01
A new directive from the White House Office of Science and Technology Policy requires that all publications supported by federal funding agencies (e.g. Department of Energy Office of Science, National Science Foundation) include machine-readable datasets for figures and tables. An automated script was developed at the PSFC to make this process easier for authors using the MATLAB plotting environment to create figures. All relevant data (x, y, z, errorbars) and metadata (line style, color, symbol shape, labels) are contained within the MATLAB .fig file created when saving a figure. The export_fig script extracts data and metadata from a .fig file and exports it into an HDF5 data file with no additional user input required. Support is included for a number of plot types including 2-D and 3-D line, contour, and surface plots, quiver plots, bar graphs, and histograms. This work supported by US Department of Energy cooperative agreement DE-FC02-99ER54512 using the Alcator C-Mod tokamak, a DOE Office of Science user facility.
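A sketch of the target HDF5 layout, using Python and h5py rather than MATLAB: per-line groups hold the plotted arrays as datasets, with styling metadata attached as HDF5 attributes. The group and attribute names are illustrative assumptions, not the actual schema produced by export_fig.

```python
import h5py
import numpy as np

def export_line(filename: str, x, y, label: str, color: str, style: str) -> None:
    """Store one plotted line's data and styling metadata in an HDF5 file."""
    with h5py.File(filename, "w") as f:
        grp = f.create_group("line_0")
        grp.create_dataset("x", data=np.asarray(x))
        grp.create_dataset("y", data=np.asarray(y))
        grp.attrs["label"] = label      # metadata stored as HDF5 attributes
        grp.attrs["color"] = color
        grp.attrs["linestyle"] = style

export_line("figure_data.h5", [0, 1, 2], [0.0, 1.0, 4.0],
            label="example", color="blue", style="-")
```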
KNIME for reproducible cross-domain analysis of life science data.
Fillbrunn, Alexander; Dietz, Christian; Pfeuffer, Julianus; Rahn, René; Landrum, Gregory A; Berthold, Michael R
2017-11-10
Experiments in the life sciences often involve tools from a variety of domains such as mass spectrometry, next-generation sequencing, or image processing. Passing the data between those tools often involves complex scripts for controlling data flow, data transformation, and statistical analysis. Such scripts are not only prone to be platform dependent, they also tend to grow as the experiment progresses and are seldom well documented, a fact that hinders the reproducibility of the experiment. Workflow systems such as KNIME Analytics Platform aim to solve these problems by providing a platform for connecting tools graphically and guaranteeing the same results on different operating systems. As open-source software, KNIME allows scientists and programmers to provide their own extensions to the scientific community. In this review paper we present selected extensions from the life sciences that simplify data exploration, analysis, and visualization and are interoperable due to KNIME's unified data model. Additionally, we name other workflow systems that are commonly used in the life sciences and highlight their similarities and differences to KNIME. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Kiesewetter, Jan; Gluza, Martin; Holzer, Matthias; Saravo, Barbara; Hammitzsch, Laura; Fischer, Martin R
2015-01-01
Collaboration as a key qualification in medical education and everyday routine in clinical care can substantially contribute to improving patient safety. Internal collaboration scripts are conceptualized as organized, yet adaptive, knowledge that can be used in specific situations in professional everyday life. This study examines the level of internalization of collaboration scripts in medicine, where internalization is understood as fast retrieval of script information. The goals of the current study were the assessment of collaborative information, which is part of collaboration scripts, and the development of a methodology for measuring the level of internalization of collaboration scripts in medicine. For the contrastive comparison of internal collaboration scripts, 20 collaborative novices (medical students in their final year) and 20 collaborative experts (physicians with specialist degrees in internal medicine or anesthesiology) were included in the study. Eight typical medical collaborative situations, each shown in a photo or video, were presented to the participants for five seconds each. Afterwards, the participants were asked to describe what they saw in the photo or video. Based on the answers, the amount of information belonging to a collaboration script (script information) was determined, and the time each participant needed to answer was measured. In order to measure the level of internalization, script information per recall time was calculated. As expected, collaborative experts stated significantly more script information than collaborative novices, and they also showed a significantly higher level of internalization. Based on these findings, we conclude that our instrument can discriminate between collaboration novices and experts. It can therefore be used to analyze measures to foster subject-specific competency in medical education.
Types and Characteristics of Fish and Seafood Provisioning Scripts Used by Rural Midlife Adults.
Bostic, Stephanie M; Sobal, Jeffery; Bisogni, Carole A; Monclova, Juliet M
To examine rural New York State consumers' cognitive scripts for fish and seafood provisioning. A cross-sectional design with in-depth, semistructured interviews. Three rural New York State counties. Adults (n = 31) with diverse fish-related experiences were purposefully recruited. Scripts describing fish and seafood acquisition, preparation, and eating out. Interview transcripts were coded for emergent themes using Atlas.ti. Diagrams of scripts for each participant were constructed. Five types of acquisition scripts included quality-oriented, price-oriented, routine, special occasion, and fresh catch. Frequently used preparation scripts included everyday cooking, fast meal, entertaining, and grilling. Scripts for eating out included fish as first choice, Friday outing, convenient meals, special event, and travel meals. Personal values and resources influenced script development. Individuals drew on a repertoire of scripts based on their goals and resources at that time and in that place. Script characteristics of scope, flexibility, and complexity varied widely. Scripts incorporated goals, values, and resources into routine food behaviors. Understanding the characteristics of scripts provided insights about fish provisioning and opportunities to reduce the gap between current intake and dietary guidelines in this rural setting.
Security Primitives for Reconfigurable Hardware-Based Systems
2010-05-01
work, we propose security primitives using ideas centered around the notion of "moats and drawbridges." The primitives encompass four design properties... (a fingerprint reader), the other to control the ethernet IP core—and an AES encryption engine used by both of the processor cores. These cores are all implemented
Analog Processor To Solve Optimization Problems
NASA Technical Reports Server (NTRS)
Duong, Tuan A.; Eberhardt, Silvio P.; Thakoor, Anil P.
1993-01-01
Proposed analog processor solves "traveling-salesman" problem, considered paradigm of global-optimization problems involving routing or allocation of resources. Includes electronic neural network and auxiliary circuitry based partly on concepts described in "Neural-Network Processor Would Allocate Resources" (NPO-17781) and "Neural Network Solves 'Traveling-Salesman' Problem" (NPO-17807). Processor based on highly parallel computing solves problem in significantly less time.
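For context, the neural approach referenced here is in the family of Hopfield-Tank optimization networks. The NumPy sketch below simulates a tiny Hopfield-Tank-style network on a 5-city tour purely to illustrate the energy-minimization idea; it is not a model of the proposed analog circuitry, and the penalty weights and step sizes are assumed values that may need tuning before a valid tour emerges.

import numpy as np

rng = np.random.default_rng(0)
n = 5
cities = rng.random((n, 2))
dist = np.linalg.norm(cities[:, None] - cities[None, :], axis=-1)

A = B = D = 500.0   # row/column/tour-length penalty weights (assumed)
C = 200.0           # global "exactly n active neurons" penalty weight (assumed)
tau, dt, u0 = 1.0, 1e-4, 0.02
u = rng.normal(0.0, 0.02, (n, n))   # u[i, j]: drive for "city i at stop j"

def V(u):
    return 0.5 * (1.0 + np.tanh(u / u0))   # neuron activations in (0, 1)

for _ in range(5000):
    v = V(u)
    row = v.sum(axis=1, keepdims=True) - v   # same city at other stops
    col = v.sum(axis=0, keepdims=True) - v   # other cities at same stop
    glob = v.sum() - n                       # deviation from n active neurons
    neigh = np.roll(v, -1, axis=1) + np.roll(v, 1, axis=1)   # stops j-1, j+1
    dE = A * row + B * col + C * glob + D * (dist @ neigh)   # energy gradient
    u += dt * (-u / tau - dE)                # Euler step of network dynamics

print("tour (city chosen at each stop):", np.argmax(V(u), axis=0))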
Examining classroom interactions related to difference in students' science achievement
NASA Astrophysics Data System (ADS)
Zady, Madelon F.; Portes, Pedro R.; Ochs, V. Dan
2003-01-01
The current study examines the cognitive supports that underlie achievement in science by using a cultural historical framework (L. S. Vygotsky (1934/1986), Thought and Language, MIT Press, Cambridge, MA) and the activity setting (AS) construct (R. G. Tharp & R. Gallimore (1988), Rousing minds to life: Teaching, learning and schooling in social context, Cambridge University Press, Cambridge, MA) with its five features: personnel, motivations, scripts, task demands, and beliefs. Observations were made of the classrooms of seventh-grade science students, 32 of whom had participated in a prior achievement-related parent-child interaction or home study (P. R. Portes, M. F. Zady, & R. M. Dunham (1998), Journal of Genetic Psychology, 159, 163-178). The results of a quantitative analysis of classroom interaction highlighted two features of the AS: personnel and scripts. The qualitative field analysis generated four emergent phenomena related to the features of the AS that appeared to influence student opportunity for conceptual development. The emergent phenomena were science activities, the building of learning, meaning in lessons, and the conflict over control. Lastly, the results of the two-part classroom study were compared to those of the home science AS of high and low achievers. Mismatches in the AS features in the science classroom may constrain the opportunity to learn. Educational implications are discussed.
NASA Technical Reports Server (NTRS)
Seale, R. H.
1979-01-01
The prediction of the SRB and ET impact areas requires six separate processors. The SRB impact prediction processor computes the impact areas and related trajectory data for each SRB element. Output from this processor is stored on a secure file accessible by the SRB impact plot processor, which generates the required plots. Similarly, the ET RTLS impact prediction processor and the ET RTLS impact plot processor generate the ET impact footprints for return-to-launch-site (RTLS) profiles. The ET nominal/AOA/ATO impact prediction processor and the ET nominal/AOA/ATO impact plot processor generate the ET impact footprints for non-RTLS profiles. The SRB and ET impact processors compute the size and shape of the impact footprints by tabular lookup in a stored footprint dispersion data base. The location of each footprint is determined by simulating a reference trajectory and computing the reference impact point location. To ensure consistency among all flight design system (FDS) users, much of the input required by these processors is obtained from the FDS master data base.
PyRosetta: a script-based interface for implementing molecular modeling algorithms using Rosetta.
Chaudhury, Sidhartha; Lyskov, Sergey; Gray, Jeffrey J
2010-03-01
PyRosetta is a stand-alone Python-based implementation of the Rosetta molecular modeling package that allows users to write custom structure prediction and design algorithms using the major Rosetta sampling and scoring functions. PyRosetta contains Python bindings to libraries that define Rosetta functions including those for accessing and manipulating protein structure, calculating energies and running Monte Carlo-based simulations. PyRosetta can be used in two ways: (i) interactively, using iPython and (ii) script-based, using Python scripting. Interactive mode contains a number of help features and is ideal for beginners while script-mode is best suited for algorithm development. PyRosetta has similar computational performance to Rosetta, can be easily scaled up for cluster applications and has been implemented for algorithms demonstrating protein docking, protein folding, loop modeling and design. PyRosetta is a stand-alone package available at http://www.pyrosetta.org under the Rosetta license which is free for academic and non-profit users. A tutorial, user's manual and sample scripts demonstrating usage are also available on the web site.
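As a flavor of the script-based mode, the fragment below sketches a toy score-and-perturb loop of the kind the abstract alludes to. Module and function names follow the modern pyrosetta namespace and may differ from the 2010 release; the input PDB file name is hypothetical, and the greedy accept/reject rule is a simplification of a full Metropolis Monte Carlo scheme.

import random
import pyrosetta

pyrosetta.init()
pose = pyrosetta.pose_from_pdb("input_structure.pdb")  # hypothetical input file
scorefxn = pyrosetta.get_fa_scorefxn()                 # standard full-atom score

best = scorefxn(pose)
for _ in range(100):
    res = random.randint(1, pose.total_residue())
    old_phi = pose.phi(res)
    pose.set_phi(res, old_phi + random.gauss(0.0, 10.0))  # small backbone move
    score = scorefxn(pose)
    if score > best:
        pose.set_phi(res, old_phi)   # greedy: revert moves that raise energy
    else:
        best = score
print("final score:", best)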
Couple decision making and use of cultural scripts in Malawi.
Mbweza, Ellen; Norr, Kathleen F; McElmurry, Beverly
2008-01-01
To examine the decision-making processes of husband and wife dyads in matrilineal and patrilineal marriage traditions of Malawi in the areas of money, food, pregnancy, contraception, and sexual relations. Qualitative grounded theory using simultaneous interviews of 60 husbands and wives (30 couples). Data were analyzed according to the guidelines of simultaneous data collection and analysis. The analysis resulted in development of core categories and categories of decision-making process. Data matrixes were used to identify similarities and differences within couples and across cases. Most couples reported using a mix of final decision-making approaches: husband-dominated, wife-dominated, and shared. Gender-based and non-gender-based cultural scripts provided rationales for their approaches to decision making. Gender-based cultural scripts (husband-dominant and wife-dominant) were used to justify decision-making approaches. Non-gender-based cultural scripts (communicating openly, maintaining harmony, and children's welfare) supported shared decision making. Gender-based cultural scripts were used in decision making more often among couples from the district with a patrilineal marriage tradition and where the husband had less than secondary school education and was not formally employed. Non-gender-based cultural scripts that encourage shared decision making can be used in designing culturally tailored reproductive health interventions for couples. Nurses who work with women and families should be aware of the variations that occur in actual couple decision-making approaches. Shared decision making can be used to encourage the involvement of men in reproductive health programs.
Page, Andrew J.; Keane, Thomas M.; Naughton, Thomas J.
2010-01-01
We present a multi-heuristic evolutionary task allocation algorithm to dynamically map tasks to processors in a heterogeneous distributed system. It utilizes a genetic algorithm, combined with eight common heuristics, in an effort to minimize the total execution time. It operates on batches of unmapped tasks and can preemptively remap tasks to processors. The algorithm has been implemented on a Java distributed system and evaluated with a set of six problems from the areas of bioinformatics, biomedical engineering, computer science and cryptography. Experiments using up to 150 heterogeneous processors show that the algorithm achieves better efficiency than other state-of-the-art heuristic algorithms. PMID:20862190
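To make the approach concrete, here is a toy Python version of the core idea: a genetic algorithm searching over task-to-processor mappings on heterogeneous processors, seeded with a simple heuristic and minimizing the makespan. The representation, operators, and parameters are deliberate simplifications for illustration, not the published algorithm.

import numpy as np

rng = np.random.default_rng(1)
n_tasks, n_procs = 40, 6
cost = rng.uniform(1.0, 10.0, (n_tasks, n_procs))  # runtime of each task on each processor

def makespan(mapping):
    # Finish time is set by the most heavily loaded processor.
    loads = np.zeros(n_procs)
    np.add.at(loads, mapping, cost[np.arange(n_tasks), mapping])
    return loads.max()

pop = [rng.integers(0, n_procs, n_tasks) for _ in range(50)]
pop[0] = cost.argmin(axis=1)   # heuristic seed: each task on its fastest processor

for _ in range(200):
    pop.sort(key=makespan)
    elite = pop[:10]                       # keep the best mappings
    children = []
    while len(children) < 40:
        a, b = rng.choice(10, size=2, replace=False)
        cut = int(rng.integers(1, n_tasks))
        child = np.concatenate([elite[a][:cut], elite[b][cut:]])  # 1-point crossover
        mut = rng.random(n_tasks) < 0.05
        child[mut] = rng.integers(0, n_procs, int(mut.sum()))     # random-reset mutation
        children.append(child)
    pop = elite + children

print("best makespan found:", makespan(min(pop, key=makespan)))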
Novel Technology for Treating Individuals with Aphasia and Concomitant Cognitive Deficits
Cherney, Leora R.; Halper, Anita S.
2009-01-01
Purpose: This article describes three individuals with aphasia and concomitant cognitive deficits who used state-of-the-art computer software for training conversational scripts. Method: Participants were assessed before and after 9 weeks of a computer script training program. For each participant, three individualized scripts were developed, recorded on the software, and practiced sequentially at home. Weekly meetings with the speech-language pathologist occurred to monitor practice and assess progress. Baseline and posttreatment scripts were audiotaped, transcribed, and compared to the target scripts for content, grammatical productivity, and rate of production of script-related words. Interviews were conducted at the conclusion of treatment. Results: There was great variability in improvements across scripts, with two participants improving on two of their three scripts in measures of content, grammatical productivity, and rate of production of script-related words. One participant gained more than 5 points on the Aphasia Quotient of the Western Aphasia Battery. Five positive themes were consistently identified from exit interviews: increased verbal communication, improvements in other modalities and situations, communication changes noticed by others, increased confidence, and satisfaction with the software. Conclusion: Computer-based script training potentially may be an effective intervention for persons with chronic aphasia and concomitant cognitive deficits. PMID:19158062
Collaboration Scripts for Enhancing Metacognitive Self-Regulation and Mathematics Literacy
ERIC Educational Resources Information Center
Chen, Cheng-Huan; Chiu, Chiung-Hui
2016-01-01
This study designed a set of computerized collaboration scripts for multi-touch supported collaborative design-based learning and evaluated its effects on multiple aspects of metacognitive self-regulation in terms of planning and controlling and mathematical literacy achievement at higher and lower levels. The computerized scripts provided a…
GPU accelerated dynamic functional connectivity analysis for functional MRI data.
Akgün, Devrim; Sakoğlu, Ünal; Esquivel, Johnny; Adinoff, Bryon; Mete, Mutlu
2015-07-01
Recent advances in multi-core processors and graphics card based computational technologies have paved the way for an improved and dynamic utilization of parallel computing techniques. Numerous applications have been implemented for the acceleration of computationally-intensive problems in various computational science fields including bioinformatics, in which big data problems are prevalent. In neuroimaging, dynamic functional connectivity (DFC) analysis is a computationally demanding method used to investigate dynamic functional interactions among different brain regions or networks identified with functional magnetic resonance imaging (fMRI) data. In this study, we implemented and analyzed a parallel DFC algorithm based on thread-based and block-based approaches. The thread-based approach was designed to parallelize DFC computations and was implemented in both Open Multi-Processing (OpenMP) and Compute Unified Device Architecture (CUDA) programming platforms. Another approach developed in this study to better utilize the CUDA architecture is the block-based approach, where parallelization involves smaller parts of fMRI time-courses obtained by sliding windows. Experimental results showed that the proposed parallel design solutions enabled by the GPUs significantly reduce the computation time for DFC analysis. The multicore implementation using OpenMP on an 8-core processor provides up to 7.7× speed-up. GPU implementation using CUDA yielded substantial accelerations ranging from 18.5× to 157× speed-up once thread-based and block-based approaches were combined in the analysis. The proposed parallel programming solutions showed that multi-core processor and CUDA-supported GPU implementations accelerate DFC analyses significantly, making them more practical for multi-subject studies.
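The computation being parallelized is, at its core, a sliding-window correlation. The serial NumPy sketch below shows that baseline (array shapes are assumptions, and this is not the authors' OpenMP/CUDA code); each window's correlation matrix is independent of the others, which is exactly what makes the thread- and block-based parallelizations effective.

import numpy as np

def dynamic_fc(ts, win=30, step=1):
    """ts: (timepoints, regions) array of fMRI time-courses.
    Returns (n_windows, regions, regions) correlation matrices."""
    t, _ = ts.shape
    mats = [np.corrcoef(ts[s:s + win].T) for s in range(0, t - win + 1, step)]
    return np.stack(mats)

ts = np.random.default_rng(0).standard_normal((300, 90))  # synthetic data
fc = dynamic_fc(ts)
print(fc.shape)  # (271, 90, 90): one connectivity matrix per window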
Next Generation Security for the 10,240 Processor Columbia System
NASA Technical Reports Server (NTRS)
Hinke, Thomas; Kolano, Paul; Shaw, Derek; Keller, Chris; Tweton, Dave; Welch, Todd; Liu, Wen (Betty)
2005-01-01
This presentation includes a discussion of the Columbia 10,240-processor system located at the NASA Advanced Supercomputing (NAS) division at the NASA Ames Research Center, which supports each of NASA's four missions: science, exploration systems, aeronautics, and space operations. It comprises 20 Silicon Graphics nodes, each consisting of 512 Itanium II processors. A 64-processor Columbia front-end system supports users as they prepare their jobs and then submits them to the PBS system. Columbia nodes and front-end systems use the Linux OS. Prior to SC04, the Columbia system was used to attain a processing speed of 51.87 TeraFlops, which made it number two on the Top 500 list of the world's supercomputers and the world's fastest "operational" supercomputer, since it was fully engaged in supporting NASA users.
Personal Inquiry: Orchestrating Science Investigations within and beyond the Classroom
ERIC Educational Resources Information Center
Sharples, Mike; Scanlon, Eileen; Ainsworth, Shaaron; Anastopoulou, Stamatina; Collins, Trevor; Crook, Charles; Jones, Ann; Kerawalla, Lucinda; Littleton, Karen; Mulholland, Paul; O'Malley, Claire
2015-01-01
A central challenge for science educators is to enable young people to act as scientists by gathering and assessing evidence, conducting experiments, and engaging in informed debate. We report the design of the nQuire toolkit, a system to support scripted personal inquiry learning, and a study of its use with school students ages 11-14. This…
Soil Science. III-A-1 to III-D-4. Basic V.A.I.
ERIC Educational Resources Information Center
Texas A and M Univ., College Station. Vocational Instructional Services.
This packet contains four units of informational materials and transparency masters, with accompanying scripts, for teachers to use in a soil science course in vocational agriculture. Designed especially for use in Texas, the first unit discusses the importance of soils. In the second unit, the nature and properties of soils are discussed,…
Development of Improved Modeling and Analysis Techniques for Dynamics of Shell Structures
1991-07-24
Engineering Sciences and Center for Space Structures and Control, University of Colorado, Campus Box 429, Boulder, Colorado 80309. ...system architecture; third, to implement a decomposition/mapping procedure that matches as far as possible the layout of the processors to the ...element computations. In particular, we address issues that are related to the processor memory size, to the SIMD architecture, and to the fast
2004-07-01
steadily for the past fifteen years, while memory latency and bandwidth have improved much more slowly. For example, Intel processor clock rates have... processor and memory performance) all greatly restrict the ability to achieve high levels of performance for science, engineering, and national... sub-nuclear distances. Guide experiments to identify the transition from quantum chromodynamics to quark-gluon plasma. Accelerator Physics: Accurate
Processor Capacity Reserves for Multimedia Operating Systems
1993-05-01
Stefan Savage, and Hideyuki Tokuda. May 1993. CMU-CS-93-157, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213. Abstract: Multimedia... and provide feedback so that the estimate can be adjusted if necessary. For non-periodic activities that are to be limited by a processor percentage... comments and suggestions: Brian Bershad, Ragunathan Rajkumar, and the members of the ART group and Mach group at CMU.
Automatic script identification from images using cluster-based templates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hochberg, J.; Kerns, L.; Kelly, P.
We have developed a technique for automatically identifying the script used to generate a document that is stored electronically in bit image form. Our approach differs from previous work in that the distinctions among scripts are discovered by an automatic learning procedure, without any hands-on analysis. We first develop a set of representative symbols (templates) for each script in our database (Cyrillic, Roman, etc.). We do this by identifying all textual symbols in a set of training documents, scaling each symbol to a fixed size, clustering similar symbols, pruning minor clusters, and finding each cluster's centroid. To identify a new document's script, we identify and scale a subset of symbols from the document and compare them to the templates for each script. We choose the script whose templates provide the best match. Our current system distinguishes among the Armenian, Burmese, Chinese, Cyrillic, Ethiopic, Greek, Hebrew, Japanese, Korean, Roman, and Thai scripts with over 90% accuracy.
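A compact sketch of this train-and-classify pipeline, using scikit-learn's KMeans on toy feature vectors, is given below. In the real system the inputs are scaled symbol images extracted from scanned documents and minor clusters are pruned; both of those steps are omitted here, and all data and sizes are assumptions.

import numpy as np
from sklearn.cluster import KMeans

def train_templates(symbols, k=20):
    # symbols: (n, features) array of scaled symbol images for one script.
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(symbols)
    return km.cluster_centers_          # one template per cluster

def identify(doc_symbols, templates_by_script):
    scores = {}
    for script, tpl in templates_by_script.items():
        # Distance from every document symbol to every template of this script.
        d = np.linalg.norm(doc_symbols[:, None, :] - tpl[None, :, :], axis=2)
        scores[script] = d.min(axis=1).mean()   # mean best-match distance
    return min(scores, key=scores.get)          # script whose templates fit best

rng = np.random.default_rng(0)
templates = {s: train_templates(rng.random((200, 64)))
             for s in ("Roman", "Cyrillic")}
print(identify(rng.random((50, 64)), templates))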
Near-line Archive Data Mining at the Goddard Distributed Active Archive Center
NASA Astrophysics Data System (ADS)
Pham, L.; Mack, R.; Eng, E.; Lynnes, C.
2002-12-01
NASA's Earth Observing System (EOS) is generating immense volumes of data, in some cases too much to provide to users with data-intensive needs. As an alternative to moving the data to the user and his/her research algorithms, we are providing a means to move the algorithms to the data. The Near-line Archive Data Mining (NADM) system is the Goddard Earth Sciences Distributed Active Archive Center's (GES DAAC) web data mining portal to the EOS Data and Information System (EOSDIS) data pool, a 50-TB online disk cache. The NADM web portal enables registered users to submit and execute data mining algorithm codes on the data in the EOSDIS data pool. A web interface allows the user to access the NADM system. The user first develops personalized data mining code on his/her home platform and then uploads it to the NADM system. The C, FORTRAN and IDL languages are currently supported. The user-developed code is automatically audited for any potential security problems before it is installed within the NADM system and made available to the user. Once the code has been installed, the user is provided a test environment where he/she can test the execution of the software against data sets of the user's choosing. When the user is satisfied with the results, he/she can promote the code to the "operational" environment. From here the user can interactively run his/her code on the data available in the EOSDIS data pool. The user can also set up a processing subscription. The subscription will automatically process new data as it becomes available in the EOSDIS data pool. The generated mined data products are then made available for FTP pickup. The NADM system uses the GES DAAC-developed Simple Scalable Script-based Science Processor (S4P) to automate tasks and perform the actual data processing. Users also have the option of selecting a DAAC-provided data mining algorithm and using it to process the data of their choice.
NASA Technical Reports Server (NTRS)
Treinish, Lloyd A.; Gough, Michael L.; Wildenhain, W. David
1987-01-01
A capability was developed for rapidly producing visual representations of large, complex, multi-dimensional space and earth sciences data sets via the implementation of computer graphics modeling techniques on the Massively Parallel Processor (MPP), employing techniques recently developed for typically non-scientific applications. Such capabilities can provide a new and valuable tool for the understanding of complex scientific data, and a new application of parallel computing via the MPP. A prototype system with such capabilities was developed and integrated into the National Space Science Data Center's (NSSDC) Pilot Climate Data System (PCDS) data-independent environment for computer graphics data display to provide easy access to users. While developing these capabilities, several problems had to be solved independently of the actual use of the MPP, all of which are outlined.
Script identification from images using cluster-based templates
Hochberg, J.G.; Kelly, P.M.; Thomas, T.R.
1998-12-01
A computer-implemented method identifies a script used to create a document. A set of training documents for each script to be identified is scanned into the computer to store a series of exemplary images representing each script. Pixels forming the exemplary images are electronically processed to define a set of textual symbols corresponding to the exemplary images. Each textual symbol is assigned to a cluster of textual symbols that most closely represents the textual symbol. The cluster of textual symbols is processed to form a representative electronic template for each cluster. A document having a script to be identified is scanned into the computer to form one or more document images representing the script to be identified. Pixels forming the document images are electronically processed to define a set of document textual symbols corresponding to the document images. The set of document textual symbols is compared to the electronic templates to identify the script. 17 figs.
Visualization of Octree Adaptive Mesh Refinement (AMR) in Astrophysical Simulations
NASA Astrophysics Data System (ADS)
Labadens, M.; Chapon, D.; Pomaréde, D.; Teyssier, R.
2012-09-01
Computer simulations are important in current cosmological research. These simulations run in parallel on thousands of processors and produce huge amounts of data. Adaptive mesh refinement is used to reduce the computing cost while keeping good numerical accuracy in regions of interest. RAMSES is a cosmological code developed by the Commissariat à l'énergie atomique et aux énergies alternatives (English: Atomic Energy and Alternative Energies Commission) which uses octree adaptive mesh refinement. Compared to grid-based AMR, octree AMR has the advantage of fitting the adaptive resolution of the grid very precisely to the local problem complexity. However, this specific octree data type needs specific software to be visualized, as generic visualization tools work on Cartesian grid data types. This is why the PYMSES software has also been developed by our team. It relies on the Python scripting language to ensure modular and easy access for exploring these specific data. In order to take advantage of the High Performance Computer which runs the RAMSES simulation, it also uses MPI and multiprocessing to run some parallel code. We present our PYMSES software in more detail, with some performance benchmarks. PYMSES currently has two visualization techniques which work directly on the AMR. The first is a splatting technique, and the second is a custom ray tracing technique. Both have their own advantages and drawbacks. We have also compared two parallel programming techniques, the Python multiprocessing library versus the use of an MPI run. The load balancing strategy has to be smartly defined in order to achieve a good speed-up in our computation. Results obtained with this software are illustrated in the context of a massive, 9000-processor parallel simulation of a Milky Way-like galaxy.
NASA Astrophysics Data System (ADS)
Liang, Cunren; Zeng, Qiming; Jia, Jianying; Jiao, Jian; Cui, Xi'ai
2013-02-01
Scanning synthetic aperture radar (ScanSAR) mode is an efficient way to map large-scale geophysical phenomena at low cost. The work presented in this paper is dedicated to ScanSAR interferometric processing and its implementation by making full use of existing standard interferometric synthetic aperture radar (InSAR) software. We first discuss the properties of the ScanSAR signal and its phase-preserved focusing using the full-aperture algorithm in terms of interferometry. Then a complete interferometric processing flow is proposed. The standard ScanSAR product is decoded subswath by subswath with burst gaps padded with zero-pulses, followed by a Doppler centroid frequency estimation for each subswath and a polynomial fit of all of the subswaths for the whole scene. The burst synchronization of the interferometric pair is then calculated, and only the synchronized pulses are kept for further interferometric processing. After the complex conjugate multiplication of the interferometric pair, the residual non-integer pulse repetition interval (PRI) part between adjacent bursts caused by zero padding is compensated by resampling using a sinc kernel. The subswath interferograms are then mosaicked, using a proposed method to remove the subswath discontinuities in the overlap area. From there, the interferometric processing follows the traditional stripmap flow. A processor written in C and Fortran and controlled by Perl scripts was developed to implement these algorithms and the processing flow, based on the JPL/Caltech Repeat Orbit Interferometry PACkage (ROI_PAC). Finally, we use the processor to process ScanSAR data from the Envisat and ALOS satellites and obtain large-scale deformation maps in the radar line-of-sight (LOS) direction.
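The interferogram-formation step at the heart of this flow reduces to a complex conjugate multiplication of the co-registered image pair. The toy NumPy illustration below uses synthetic single-look-complex data (not the ROI_PAC-based processor) to show how the phase difference is recovered:

import numpy as np

rng = np.random.default_rng(0)
shape = (512, 512)
# Synthetic deformation phase ramp, kept small enough to stay unwrapped.
phase = np.cumsum(rng.normal(0.0, 0.01, shape), axis=1)
master = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, shape))  # unit-amplitude SLC
slave = master * np.exp(-1j * phase)                        # shifted by the ramp

ifg = master * np.conj(slave)       # complex conjugate multiplication
recovered = np.angle(ifg)           # wrapped interferometric phase
print("max phase error:", np.abs(recovered - phase).max())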
Boeker, Martin; Andel, Peter; Vach, Werner; Frankenschmidt, Alexander
2013-01-01
When compared with more traditional instructional methods, Game-based e-learning (GbEl) promises a higher motivation of learners by presenting contents in an interactive, rule-based and competitive way. Most recent systematic reviews and meta-analyses of studies on Game-based learning and GbEl in the medical professions have shown limited effects of these instructional methods. To compare the effectiveness on the learning outcome of a Game-based e-learning (GbEl) instruction with a conventional script-based instruction in the teaching of phase contrast microscopy urinalysis under routine training conditions of undergraduate medical students. A randomized controlled trial was conducted with 145 medical students in their third year of training in the Department of Urology at the University Medical Center Freiburg, Germany. 82 subjects were allocated for training with an educational adventure game (GbEl group) and 69 subjects for conventional training with a written script-based approach (script group). Learning outcome was measured with a 34-item single-choice test. Students' attitudes were collected by a questionnaire regarding fun with the training, motivation to continue the training and self-assessment of acquired knowledge. The students in the GbEl group achieved significantly better results in the cognitive knowledge test than the students in the script group: the mean score was 28.6 for the GbEl group and 26.0 for the script group out of a total of 34.0 points, with a Cohen's d effect size of 0.71 (ITT analysis). Attitudes towards the recent learning experience were significantly more positive with GbEl. Students reported having more fun while learning with the game when compared to the script-based approach. Game-based e-learning is more effective than a script-based approach for the training of urinalysis with regard to cognitive learning outcome and has a high positive motivational impact on learning. Game-based e-learning can be used as an effective teaching method for self-instruction.
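As a quick check on the reported effect size: Cohen's d is the difference in group means divided by the pooled standard deviation. The sketch below reproduces d ≈ 0.71 from the reported means and group sizes; the standard deviations are back-solved assumptions, since the abstract does not report them.

import math

def cohens_d(m1, n1, s1, m2, n2, s2):
    # Pooled SD weights each group's variance by its degrees of freedom.
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

# 28.6 vs 26.0 with d = 0.71 implies a pooled SD near 3.66 (assumed here).
print(round(cohens_d(28.6, 82, 3.66, 26.0, 69, 3.66), 2))  # -> 0.71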
Script-like attachment representations in dreams containing current romantic partners.
Selterman, Dylan; Apetroaia, Adela; Waters, Everett
2012-01-01
Recent research has demonstrated parallels between romantic attachment styles and general dream content. The current study examined partner-specific attachment representations alongside dreams that contained significant others. The general prediction was that dreams would follow the "secure base script," and that a general correspondence would emerge between secure attachment cognitions in waking life and in dreams. Sixty-one undergraduate student participants in committed dating relationships of six months' duration or longer completed the Secure Base Script Narrative Assessment at Time 1, and then completed a dream diary for 14 consecutive days. Blind coders scored dreams that contained significant others using the same criteria for secure base content as in laboratory narratives. Results revealed a significant association between relationship-specific attachment security and the degree to which dreams about romantic partners followed the secure base script. The findings illuminate our understanding of mental representations with regard to specific attachment figures. Implications for attachment theory and clinical applications are discussed.
Soft-core processor study for node-based architectures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Houten, Jonathan Roger; Jarosz, Jason P.; Welch, Benjamin James
2008-09-01
Node-based architecture (NBA) designs for future satellite projects hold the promise of decreasing system development time and costs, size, weight, and power and positioning the laboratory to address other emerging mission opportunities quickly. Reconfigurable Field Programmable Gate Array (FPGA) based modules will comprise the core of several of the NBA nodes. Microprocessing capabilities will be necessary with varying degrees of mission-specific performance requirements on these nodes. To enable the flexibility of these reconfigurable nodes, it is advantageous to incorporate the microprocessor into the FPGA itself, either as a hard-core processor built into the FPGA or as a soft-core processor built out of FPGA elements. This document describes the evaluation of three reconfigurable FPGA based processors for use in future NBA systems--two soft cores (MicroBlaze and non-fault-tolerant LEON) and one hard core (PowerPC 405). Two standard performance benchmark applications were developed for each processor. The first, Dhrystone, is a fixed-point operation metric. The second, Whetstone, is a floating-point operation metric. Several trials were run at varying code locations, loop counts, processor speeds, and cache configurations. FPGA resource utilization was recorded for each configuration. Cache configurations impacted the results greatly; for optimal processor efficiency it is necessary to enable caches on the processors. Processor caches carry a penalty; cache error mitigation is necessary when operating in a radiation environment.
A High Efficiency System for Science Instrument Commanding for the Mars Global Surveyor Mission
NASA Technical Reports Server (NTRS)
Brooks, R. N., Jr.
1995-01-01
The Mars Global Surveyor (MGS) mission will return to Mars to recover most of the science lost when the ill-fated Mars Observer spacecraft suffered a catastrophic anomaly in its propulsion system and did not go into orbit. Described in detail are the methods employed by the MGS Sequence Team to accelerate science command processing by using a standard command generation process and standard UNIX control scripts.
Presentation and response timing accuracy in Adobe Flash and HTML5/JavaScript Web experiments.
Reimers, Stian; Stewart, Neil
2015-06-01
Web-based research is becoming ubiquitous in the behavioral sciences, facilitated by convenient, readily available participant pools and relatively straightforward ways of running experiments: most recently, through the development of the HTML5 standard. Although in most studies participants give untimed responses, there is a growing interest in being able to record response times online. Existing data on the accuracy and cross-machine variability of online timing measures are limited, and generally they have compared behavioral data gathered on the Web with similar data gathered in the lab. For this article, we took a more direct approach, examining two ways of running experiments online-Adobe Flash and HTML5 with CSS3 and JavaScript-across 19 different computer systems. We used specialist hardware to measure stimulus display durations and to generate precise response times to visual stimuli in order to assess measurement accuracy, examining effects of duration, browser, and system-to-system variability (such as across different Windows versions), as well as effects of processing power and graphics capability. We found that (a) Flash and JavaScript's presentation and response time measurement accuracy are similar; (b) within-system variability is generally small, even in low-powered machines under high load; (c) the variability of measured response times across systems is somewhat larger; and (d) browser type and system hardware appear to have relatively small effects on measured response times. Modeling of the effects of this technical variability suggests that for most within- and between-subjects experiments, Flash and JavaScript can both be used to accurately detect differences in response times across conditions. Concerns are, however, noted about using some correlational or longitudinal designs online.
ACME Priority Metrics (A-PRIME)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Katherine J; Zender, Charlie; Van Roekel, Luke
A-PRIME is a collection of scripts designed to provide Accelerated Climate Model for Energy (ACME) model developers and analysts with a variety of analyses of the model, as needed to determine whether the model is producing the desired results, depending on the goals of the simulation. The software is based on csh scripts at the top level, enabling scientists to provide the input parameters. Within these scripts, the csh code calls routines that perform the postprocessing of the raw data and create plots for visual assessment.
Food safety considerations for innovative nutrition solutions.
Byrd-Bredbenner, Carol; Cohn, Marjorie Nolan; Farber, Jeffrey M; Harris, Linda J; Roberts, Tanya; Salin, Victoria; Singh, Manpreet; Jaferi, Azra; Sperber, William H
2015-07-01
Failure to secure safe and affordable food to the growing global population leads far too often to disastrous consequences. Among specialists and other individuals, food scientists have a key responsibility to improve and use science-based tools to address risk and advise food handlers and manufacturers with best-practice recommendations. With collaboration from production agriculture, food processors, state and federal agencies, and consumers, it is critical to implement science-based strategies that address food safety and that have been evaluated for effectiveness in controlling and/or eliminating hazards. It is an open question whether future food safety concerns will shift in priority given the imperatives to supply sufficient food. This report brings together leading food safety experts to address these issues with a focus on three areas: economic, social, and policy aspects of food safety; production and postharvest technology for safe food; and innovative public communication for food safety and nutrition.
Earth Sciences Requirements for the Information Sciences Experiment System
NASA Technical Reports Server (NTRS)
Bowker, David E. (Editor); Katzberg, Steve J. (Editor); Wilson, R. Gale (Editor)
1990-01-01
The purpose of the workshop was to further explore and define the earth sciences requirements for the Information Sciences Experiment System (ISES), a proposed onboard data processor with real-time communications capability intended to support the Earth Observing System (Eos). A review of representative Eos instrument types is given and a preliminary set of real-time data needs has been established. An executive summary is included.
NASA Astrophysics Data System (ADS)
Stender, Anita; Brückmann, Maja; Neumann, Knut
2017-08-01
This study investigates the relationship between two different types of pedagogical content knowledge (PCK): topic-specific professional knowledge (TSPK) and practical routines, so-called teaching scripts. Based on the Transformation Model of Lesson Planning, we assume that teaching scripts originate from a transformation of TSPK during lesson planning: When planning lessons, teachers use their TSPK to create lesson plans. The implementation of these lesson plans and teachers' reflection upon them lead to their improvement. Gradually, successful lesson plans are mentally stored as teaching scripts and can easily be retrieved during instruction. This process is affected by teachers' beliefs, motivation and self-regulation. In order to examine the influence of TSPK on teaching scripts as well as the moderating effects of beliefs, motivation and self-regulation, we conducted a cross-sectional study with n = 49 in-service teachers in physics. The TSPK, beliefs, motivation, self-regulation and the quality of teaching scripts of in-service teachers were assessed using an online questionnaire adapted to teaching the force concept and Newton's law in 9th grade instruction. The results provide evidence that TSPK influences the quality of teaching scripts. Motivation and self-regulation moderate this influence.
NASA Astrophysics Data System (ADS)
Dave, Gaurav P.; Sureshkumar, N.; Blessy Trencia Lincy, S. S.
2017-11-01
The current trend in processor manufacturing focuses on multi-core architectures rather than increasing clock speed for performance improvement. Graphics processors have become commodity hardware for providing fast co-processing in computer systems. Developments in IoT, social networking web applications, and big data have created huge demand for data processing activities, and such throughput-intensive applications inherently contain data-level parallelism, which is well suited to SIMD-architecture-based GPUs. This paper reviews the architectural aspects of multi-/many-core processors and graphics processors. Different case studies are taken to compare the performance of throughput computing applications using shared memory programming in OpenMP and CUDA API based programming.
Report of the Defense Science Board Task Force on Military Software
1987-09-01
training commitment from others. (The same thing is true of processor architectures.) 3. DoD should be aggressively looking for opportunities to buy... are unifying principles to be found, whether in quarks or in unified field theories. Einstein repeatedly argued that there must eventually be
Turan, Bulent
2016-01-01
People develop knowledge of interpersonal interaction patterns (e.g., prototypes and schemas), which shape how they process incoming information. One such knowledge structure based on attachment theory was examined: the secure base script (the prototypic sequence of events when an attachment figure comforts a close relationship partner in distress). In two studies (N = 53 and N = 119), participants were shown animated film clips in which geometric figures depicted the secure base script and asked to describe the animations. Both studies found that many people readily recognize the secure base script from these minimal cues, suggesting that this script is not only available in the context of specific relationships (i.e., as relationship-specific knowledge): The generalized (abstract) structure of the script is also readily accessible, which would make it possible to apply it to any relationship (including new relationships). Regression analyses suggested that participants who recognized the script were more likely to (a) include more animation elements when describing the animations, (b) see a common theme in different animations, (c) create better organized stories, and (d) later recall more details of the animations. These findings suggest that access to this knowledge structure helps a person organize and remember relevant incoming information. Furthermore, in both Study 1 and Study 2, individual differences in the ready recognition of the script were associated with individual differences in access to another related knowledge structure: indicators suggesting that a potential relationship partner can be trusted to be supportive and responsive at times of stress. Results of Study 2 also suggest that recognizing the script is associated with those items of an attachment measure that concern giving and receiving support. Thus, these knowledge structures may shape how people process support-relevant information in their everyday lives, potentially affecting relationship outcomes and mental and physical health.
Neural substrates of embodied natural beauty and social endowed beauty: An fMRI study.
Zhang, Wei; He, Xianyou; Lai, Siyan; Wan, Juan; Lai, Shuxian; Zhao, Xueru; Li, Darong
2017-08-02
What are the neural mechanisms underlying beauty based on objective parameters and beauty based on subjective social construction? This study scanned participants with fMRI while they performed aesthetic judgments on concrete pictographs and abstract oracle bone scripts. Behavioral results showed that both pictographs and oracle bone scripts were judged to be more beautiful when they referred to beautiful objects and positive social meanings, respectively. Imaging results revealed that regions associated with perceptual, cognitive, emotional and reward processing were commonly activated in beautiful judgments of both pictographs and oracle bone scripts. Moreover, stronger activations of the orbitofrontal cortex (OFC) and motor-related areas were found in beautiful judgments of pictographs, whereas beautiful judgments of oracle bone scripts were associated with putamen activity, implying that the pictographs elicited a stronger aesthetic experience and an embodied approach response to beauty. In contrast, only visual processing areas were activated in the judgments of ugly pictographs and negative oracle bone scripts. The results provide evidence that the sense of beauty is triggered by two processes: one based on the objective parameters of stimuli (embodied natural beauty) and the other based on subjective social construction (social endowed beauty).
Primary relationship scripts among lower-income, African American young adults.
Eyre, Stephen L; Flythe, Michelle; Hoffman, Valerie; Fraser, Ashley E
2012-06-01
Research on romantic relationships among lower income, African American young adults has mostly focused on problem behaviors, and has infrequently documented nonpathological relationship processes that are widely studied among middle-class college students, their wealthier and largely European American counterparts [Journal of Black Studies 39 (2009) 570]. To identify nonpathological cultural concepts related to heterosexual romantic relationships, we interviewed 144 low to low-mid income, African American young adults aged 19-22 from the San Francisco Bay Area, CA, metropolitan Chicago, IL, and Greater Birmingham, AL. We identified 12 gender-shared scripts related to the romantic relationship in areas of (1) defining the relationship, (2) processes of joining, (3) maintaining balance, and (4) modulating conflict. Understanding romantic relationship scripts is important as successful romantic relationships are associated with improved mental and physical health among lower income individuals as compared with individuals without romantic partners [Social Science & Medicine 52 (2001) 1501].
The computational structural mechanics testbed generic structural-element processor manual
NASA Technical Reports Server (NTRS)
Stanley, Gary M.; Nour-Omid, Shahram
1990-01-01
The usage and development of structural finite element processors based on the CSM Testbed's Generic Element Processor (GEP) template are documented. By convention, such processors have names of the form ESi, where i is an integer. This manual is therefore intended both for Testbed users who wish to invoke ES processors during the course of a structural analysis, and for Testbed developers who wish to construct new element processors (or modify existing ones).
ERIC Educational Resources Information Center
Groskreutz, Mark P.; Peters, Amy; Groskreutz, Nicole C.; Higbee, Thomas S.
2015-01-01
Children with developmental disabilities may engage in less frequent and more repetitious language than peers with typical development. Scripts have been used to increase communication by teaching one or more specific statements and then fading the scripts. In the current study, preschoolers with developmental disabilities experienced a novel…
The Use of Interactive Whiteboards in Teaching Non-Roman Scripts
ERIC Educational Resources Information Center
Tozcu, Anjel
2008-01-01
This study explores the use of the interactive whiteboards in teaching the non-Latin based orthographies of Hindi, Pashto, Dari, Persian (Farsi), and Hebrew. All these languages use non-roman scripts, and except for Hindi, they are cursive. Thus, letters within words are connected and for beginners the script may look quite complicated,…
A video depicting resuscitation did not impact upon patients' decision-making.
Richardson-Royer, Caitlin; Naqvi, Imran; Riffel, Christopher; Harvey, Lawrence; Smith, Domonique; Ayalew, Dagmawe; Motayar, Nasim; Amoateng-Adjepong, Yaw; Manthous, Constantine A
2018-01-01
Previous studies have demonstrated that video of and scripted information about cardiopulmonary resuscitation (CPR) can be deployed during clinician-patient end-of-life discussions. Few studies, however, examine whether video adds to verbal information-sharing. We hypothesized that video augments script-only decision-making. Patients aged >65 years admitted to hospital wards were randomized to receive evidence-based information ("script") vs. script plus video of simulated CPR and intubation. Patients' decisions registered in the hospital record, by hospital discharge were compared for the two groups. Fifty script-only intervention patients averaging 77.7 years were compared to 50 script+video patients with a mean age of 74.7 years. Eleven of 50 (22%) in each group declined CPR; and an additional three (script) vs. four (script+video) refused intubation for respiratory failure. There were no differences in sex, self-reported health trajectory, functional limitations, length of stay, or mortality associated with decisions. The rate at which verbally informed hospitalized elders opted out of resuscitation was not impacted by adding a video depiction of CPR.
Hussen, Sophia A; Bowleg, Lisa; Sangaramoorthy, Thurka; Malebranche, David J
2012-01-01
Black men in the USA experience disproportionately high rates of HIV infection, particularly in the Southeastern part of the country. We conducted 90 qualitative in-depth interviews with Black men living in the state of Georgia and analysed the transcripts using Sexual Script Theory to: (1) characterise the sources and content of sexual scripts that Black men were exposed to during their childhood and adolescence and (2) describe the potential influence of formative scripts on adult HIV sexual risk behaviour. Our analyses highlighted salient sources of cultural scenarios (parents, peers, pornography, sexual education and television), interpersonal scripts (early sex- play, older female partners, experiences of child abuse) and intrapsychic scripts that participants described. Stratification of participant responses based on sexual-risk behaviour revealed that lower- and higher-risk men described exposure to similar scripts during their formative years; however, lower-risk men reported an ability to cognitively process and challenge the validity of risk-promoting scripts that they encountered. Implications for future research are discussed.
Translating research findings into community based theatre: More than a dead man's wife.
Feldman, Susan; Hopgood, Alan; Dickins, Marissa
2013-12-01
Increasingly, qualitative scholars in health and social sciences are turning to innovative strategies as a way of translating research findings into informative, accessible and enjoyable forms for the community. The aim of this article is to describe how the research findings of a doctoral thesis - a narrative study about 58 older women's experiences of widowhood - were translated into a unique and professionally developed script to form the basis for a successful theatrical production that has travelled extensively within Australia. This article reports on the process of collaboration between a researcher, a highly regarded Australian actor/script writer and an ensemble of well-known and experienced professional actors. Together the collaborating partners translated the research data and findings about growing older and 'widowhood' into a high quality theatre production. In particular, we argue in this paper that research-based theatre is an appropriate medium for communicating research findings about important life issues of concern to older people in a safe, affirming and entertaining manner. By outlining the process of translating research findings into theatre we hope to show that there is a real value in this translation approach for both researcher and audience alike.
Status of the JWST Science Instrument Payload
NASA Technical Reports Server (NTRS)
Greenhouse, Matt
2016-01-01
The James Webb Space Telescope (JWST) Integrated Science Instrument Module (ISIM) system consists of five sensors (four of them science instruments): the Mid-Infrared Instrument (MIRI), Near Infrared Imager and Slitless Spectrograph (NIRISS), Fine Guidance Sensor (FGS), Near InfraRed Camera (NIRCam), and Near InfraRed Spectrograph (NIRSpec); and nine instrument support systems: the Optical Metering Structure System, Electrical Harness System, Harness Radiator System, ISIM Electronics Compartment, ISIM Remote Services Unit, Cryogenic Thermal Control System, Command and Data Handling System, Flight Software System, and Operations Scripts System.
Lifeomics leads the age of grand discoveries.
He, Fuchu
2013-03-01
When our knowledge of a field accumulates to a certain level, we are bound to see the rise of one or more great scientists. They will make a series of grand discoveries/breakthroughs and push the discipline into an 'age of grand discoveries'. Mathematics, geography, physics and chemistry have all experienced their ages of grand discoveries; and in life sciences, the age of grand discoveries has appeared countless times since the 16th century. Thanks to the ever-changing development of molecular biology over the past 50 years, contemporary life science is once again approaching its breaking point and the trigger for this is most likely to be 'lifeomics'. At the end of the 20th century, genomics wrote out the 'script of life'; proteomics decoded the script; and RNAomics, glycomics and metabolomics came into bloom. These 'omics', with their unique epistemology and methodology, quickly became the thrust of life sciences, pushing the discipline to new highs. Lifeomics, which encompasses all omics, has taken shape and is now signalling the dawn of a new era, the age of grand discoveries.
Uses of DARPA Materials Sciences Technology in DoD Systems.
1996-05-01
Advances made over the course of the program were communicated to industry through seminars and workshops, individual plant and agency visits, and videotapes. Examples cited include the P3 ISAR radar processor, a digital signal processor for the OH-58D helicopter, and Motorola building a GaAs IC plant for IRIDIUM.
Multicore: Fallout from a Computing Evolution
Yelick, Kathy [Director, NERSC
2017-12-09
July 22, 2008 Berkeley Lab lecture: Parallel computing used to be reserved for big science and engineering projects, but in two years that's all changed. Even laptops and hand-helds use parallel processors. Unfortunately, the software hasn't kept pace. Kathy Yelick, Director of the National Energy Research Scientific Computing Center at Berkeley Lab, describes the resulting chaos and the computing community's efforts to develop exciting applications that take advantage of tens or hundreds of processors on a single chip.
CMSC-130 Introductory Computer Science, Lecture Notes
1993-07-01
Introductory Computer Science lecture notes used in the classroom for teaching CMSC 130, an introductory computer science course based on the Ada programming language. The surviving fragments cover unit testing, the syntax of subunits (to be studied in a subsequent course), and top-down testing of a data processor procedure; the references used in preparing the notes include the Reference Manual for the Ada Programming Language (ANSI/MIL-STD).
A WPS Based Architecture for Climate Data Analytic Services (CDAS) at NASA
NASA Astrophysics Data System (ADS)
Maxwell, T. P.; McInerney, M.; Duffy, D.; Carriere, L.; Potter, G. L.; Doutriaux, C.
2015-12-01
Faced with unprecedented growth in the Big Data domain of climate science, NASA has developed the Climate Data Analytic Services (CDAS) framework. This framework enables scientists to execute trusted and tested analysis operations in a high performance environment close to the massive data stores at NASA. The data is accessed in standard (NetCDF, HDF, etc.) formats in a POSIX file system and processed using trusted climate data analysis tools (ESMF, CDAT, NCO, etc.). The framework is structured as a set of interacting modules allowing maximal flexibility in deployment choices. The current set of module managers includes: Staging Manager: runs the computation locally on the WPS server or remotely using tools such as celery or SLURM. Compute Engine Manager: runs the computation serially or distributed over nodes using a parallelization framework such as celery or spark. Decomposition Manager: manages strategies for distributing the data over nodes. Data Manager: handles the import of domain data from long-term storage and manages the in-memory and disk-based caching architectures. Kernel Manager: a kernel is an encapsulated computational unit which executes a processor's compute task. Each kernel is implemented in Python exploiting existing analysis packages (e.g. CDAT) and is compatible with all CDAS compute engines and decompositions. CDAS services are accessed via a WPS API being developed in collaboration with the ESGF Compute Working Team to support server-side analytics for ESGF. The API can be invoked through direct web service calls, a Python script or application, or a JavaScript-based web application. Client packages in Python or JavaScript contain everything needed to make CDAS requests. The CDAS architecture brings together the tools, data storage, and high-performance computing required for timely analysis of large-scale data sets, where the data resides, to ultimately produce societal benefits. It is currently deployed at NASA in support of the Collaborative REAnalysis Technical Environment (CREATE) project, which centralizes numerous global reanalysis datasets onto a single advanced data analytics platform. This service permits decision makers to investigate climate changes around the globe, inspect model trends, compare multiple reanalysis datasets, and explore variability.
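Since the abstract above describes access to CDAS through a WPS API, a minimal Python sketch of such a call follows. The endpoint URL, process identifier, and datainputs schema are illustrative assumptions, not the documented CDAS interface.

```python
# Hypothetical WPS Execute request against a CDAS-style endpoint.
import json
import requests

CDAS_ENDPOINT = "https://example.nasa.gov/cdas/wps"  # placeholder URL

datainputs = {
    "domain": {"lat": [20, 50], "lon": [-130, -60], "time": ["1980-01", "2010-12"]},
    "variable": {"uri": "collection://reanalysis", "name": "tas"},  # assumed names
    "operation": {"kernel": "time.ave"},  # hypothetical averaging kernel
}

params = {
    "service": "WPS",
    "request": "Execute",
    "identifier": "CDAS.workflow",  # hypothetical process identifier
    "datainputs": json.dumps(datainputs),
}

response = requests.get(CDAS_ENDPOINT, params=params, timeout=300)
response.raise_for_status()
print(response.text)  # WPS replies with an XML status/result document
```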
Running Gaussian16 Software Jobs on the Peregrine System | High-Performance
On this system, parallel setup is taken care of automatically based on settings in the PBS batch script. A node-local scratch filesystem called /dev/shm is set automatically by the example script. The example batch-submission script begins: #!/bin/bash, #PBS -l nodes=2, #PBS -l … (remainder truncated).
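Because the page's example script is truncated above, here is a minimal sketch of what a submission might look like, driven from Python. The resource requests, module name, scratch variable, and input file names are assumptions, not the page's actual example.

```python
# Write a hypothetical PBS batch script for a Gaussian16 job and submit it.
import subprocess

pbs_script = """#!/bin/bash
#PBS -l nodes=2:ppn=24
#PBS -l walltime=04:00:00
#PBS -N g16_job

cd $PBS_O_WORKDIR
module load gaussian          # hypothetical module name
export GAUSS_SCRDIR=/dev/shm  # node-local scratch, as the page describes
g16 < input.com > output.log
"""

with open("g16_job.pbs", "w") as f:
    f.write(pbs_script)

subprocess.run(["qsub", "g16_job.pbs"], check=True)  # submit to the PBS queue
```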
ERIC Educational Resources Information Center
Turchik, Jessica A.; Probst, Danielle R.; Irvin, Clinton R.; Chau, Minna; Gidycz, Christine A.
2009-01-01
Although script theory has been applied to sexual assault (e.g., H. Frith & C. Kitzinger, 2001; A. S. Kahn, V. A. Andreoli Mathie, & C. Torgler, 1994), women's scripts of rape have not been examined in relation to predicting sexual victimization experiences. The purpose of the current study was to examine how elements of women's sexual assault…
Cultural scripts for a good death in Japan and the United States: similarities and differences.
Long, Susan Orpett
2004-03-01
Japan and the United States are both post-industrial societies, characterised by distinct trajectories of dying. Both contain multiple "cultural scripts" of the good death. Seale (Constructing Death: the Sociology of Dying and Bereavement, Cambridge University Press, Cambridge, 1998) has identified at least four "cultural scripts", or ways to die well, that are found in contemporary anglophone countries: modern medicine, revivalism, an anti-revivalist script and a religious script. Although these scripts can also be found in Japan, different historical experiences and religious traditions provide a context in which their content and interpretation sometimes differ from those of the anglophone countries. To understand ordinary people's ideas about dying well and dying poorly, we must recognise not only that post-industrial society offers multiple scripts and varying interpretive frameworks, but also that people actively select from among them in making decisions and explaining their views. Moreover, ideas and metaphors may be based on multiple scripts simultaneously or may offer different interpretations for different social contexts. Based on ethnographic fieldwork in both countries, this paper explores the metaphors that ordinary patients and caregivers draw upon as they use, modify, combine or ignore these cultural scripts of dying. Ideas about choice, time, place and personhood, elements of a good death that were derived inductively from interviews, are described. These Japanese and American data suggest somewhat different concerns and assumptions about human life and the relation of the person to the wider social world, but indicate similar concerns about the process of medicalised dying and the creation of meaning for those involved. While cultural differences do exist, they cannot be explained by reference to 'an American' and 'a Japanese' way to die. Rather, the process of creating and maintaining cultural scripts requires the active participation of ordinary people as they in turn respond to the constraints of post-industrial technology, institutions, demographics and notions of self.
Attachment to Mother and Father at Transition to Middle Childhood.
Di Folco, Simona; Messina, Serena; Zavattini, Giulio Cesare; Psouni, Elia
2017-01-01
The present study investigated concordance between representations of attachment to mother and attachment to father, and convergence between two narrative-based methods addressing these representations in middle childhood: the Manchester Child Attachment Story Task (MCAST) and the Secure Base Script Test (SBST). One hundred and twenty 6-year-old children were assessed by separate administrations of the MCAST for mother and father, respectively, and results showed concordance of representations of attachment to mother and attachment to father at age 6.5 years. 75 children were additionally tested about 12 months later, with the SBST, which assesses scripted knowledge of secure base (and safe haven), not differentiating between mother and father attachment relationships. Concerning attachment to father, dichotomous classifications (MCAST) and a continuous dimension capturing scripted secure base knowledge (MCAST) converged with secure base scriptedness (SBST), yet we could not show the same pattern of convergence concerning attachment to mother. Results suggest some convergence between the two narrative methods of assessment of secure base script but also highlight complications when using the MCAST for measuring attachment to father in middle childhood.
75 FR 52507 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-26
... standards designed to ensure that all catch delivered to the processor is accurately weighed and accounted... NMFS for catcher/processors and motherships is based on the vessel meeting a series of design criteria. Because of the wide variations in factory layout for inshore processors, NMFS requires a performance-based...
Optical Flow in a Smart Sensor Based on Hybrid Analog-Digital Architecture
Guzmán, Pablo; Díaz, Javier; Agís, Rodrigo; Ros, Eduardo
2010-01-01
The purpose of this study is to develop a motion sensor (delivering optical flow estimations) using a platform that includes the sensor itself, focal plane processing resources, and co-processing resources on a general purpose embedded processor. All this is implemented on a single device as a SoC (System-on-a-Chip). Optical flow is the 2-D projection onto the camera plane of the 3-D motion information present in the world scene. This motion representation is widely known and applied in the science community to solve a wide variety of problems. Most applications based on motion estimation must work in real time, so this restriction must be taken into account. In this paper, we show an efficient approach to estimating the motion velocity vectors with an architecture based on a focal plane processor combined on-chip with a 32-bit NIOS II processor. Our approach relies on the simplification of the original optical flow model and its efficient implementation in a platform that combines an analog (focal-plane) and digital (NIOS II) processor. The system is fully functional and is organized in different stages, where the early processing (focal plane) stage mainly pre-processes the input image stream to reduce the computational cost of the post-processing (NIOS II) stage. We present the employed co-design techniques and analyze this novel architecture. We evaluate the system's performance and accuracy with respect to the different approaches described in the literature. We also discuss the advantages of the proposed approach as well as the degree of efficiency which can be obtained from the focal plane processing capabilities of the system. The final outcome is a low cost smart sensor for optical flow computation with real-time performance and reduced power consumption that can be used for very diverse application domains. PMID:22319283
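As a software point of reference for the kind of computation the sensor performs, here is a generic gradient-based (Lucas-Kanade style) optical flow estimate in NumPy; it is not the paper's simplified model, and the window size and test frames are assumptions.

```python
# Generic gradient-based (Lucas-Kanade style) flow estimate at one pixel.
import numpy as np

def lk_flow(frame0, frame1, y, x, win=7):
    """Estimate (vx, vy) at pixel (y, x) from two grayscale frames."""
    h = win // 2
    Iy, Ix = np.gradient(frame0)              # spatial gradients (rows = y)
    It = frame1 - frame0                      # temporal gradient
    ix = Ix[y-h:y+h+1, x-h:x+h+1].ravel()
    iy = Iy[y-h:y+h+1, x-h:x+h+1].ravel()
    it = It[y-h:y+h+1, x-h:x+h+1].ravel()
    A = np.stack([ix, iy], axis=1)
    # Brightness constancy: Ix*vx + Iy*vy + It = 0, solved by least squares
    v, *_ = np.linalg.lstsq(A, -it, rcond=None)
    return v

yy, xx = np.mgrid[0:64, 0:64]
f0 = np.sin(xx / 5.0) + np.cos(yy / 7.0)      # smooth synthetic frame
f1 = np.roll(f0, 1, axis=1)                   # shifted right by one pixel
print(lk_flow(f0, f1, 32, 32))                # approximately (1.0, 0.0)
```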
XML-Based Visual Specification of Multidisciplinary Applications
NASA Technical Reports Server (NTRS)
Al-Theneyan, Ahmed; Jakatdar, Amol; Mehrotra, Piyush; Zubair, Mohammad
2001-01-01
The advancements in the Internet and Web technologies have fueled a growing interest in developing a web-based distributed computing environment. We have designed and developed Arcade, a web-based environment for designing, executing, monitoring, and controlling distributed heterogeneous applications, which is easy to use and access, portable, and provides support through all phases of the application development and execution. A major focus of the environment is the specification of heterogeneous, multidisciplinary applications. In this paper we focus on the visual and script-based specification interface of Arcade. The web/browser-based visual interface is designed to be intuitive to use and can also be used for visual monitoring during execution. The script specification is based on XML to: (1) make it portable across different frameworks, and (2) make the development of our tools easier by using the existing freely available XML parsers and editors. There is a one-to-one correspondence between the visual and script-based interfaces allowing users to go back and forth between the two. To support this we have developed translators that translate a script-based specification to a visual-based specification, and vice-versa. These translators are integrated with our tools and are transparent to users.
Sparks, Lauren A; Trentacosta, Christopher J; Owusu, Erika; McLear, Caitlin; Smith-Darden, Joanne
2018-08-01
Secure attachment relationships have been linked to social competence in at-risk children. In the current study, we examined the role of parent secure base scripts in predicting at-risk kindergarteners' social competence. Parent representations of secure attachment were hypothesized to mediate the relationship between lower family cumulative risk and children's social competence. Participants included 106 kindergarteners and their primary caregivers recruited from three urban charter schools serving low-income families as a part of a longitudinal study. Lower levels of cumulative risk predicted greater secure attachment representations in parents, and scores on the secure base script assessment predicted children's social competence. An indirect relationship between lower cumulative risk and kindergarteners' social competence via parent secure base script scores was also supported. Parent script-based representations of the attachment relationship appear to be an important link between lower levels of cumulative risk and low-income kindergarteners' social competence. Implications of these findings for future interventions are discussed.
Groh, Ashley M; Roisman, Glenn I; Haydon, Katherine C; Bost, Kelly; McElwain, Nancy; Garcia, Leanna; Hester, Colleen
2015-11-01
This study examined the extent to which secure base script knowledge-reflected in the ability to generate narratives in which attachment-relevant events are encountered, a clear need for assistance is communicated, competent help is provided and accepted, and the problem is resolved-is associated with mothers' electrophysiological, subjective, and observed emotional responses to an infant distress vocalization. While listening to an infant crying, mothers (N = 108, M age = 34 years) lower on secure base script knowledge exhibited smaller shifts in relative left (vs. right) frontal EEG activation from rest, reported smaller reductions in feelings of positive emotion from rest, and expressed greater levels of tension. Findings indicate that lower levels of secure base script knowledge are associated with an organization of emotional responding indicative of a less flexible and more emotionally restricted response to infant distress. Discussion focuses on the contribution of mothers' attachment representations to their ability to effectively manage emotional responding to infant distress in a manner expected to support sensitive caregiving.
Design for a Manufacturing Method for Memristor-Based Neuromorphic Computing Processors
2013-03-01
Design for a Manufacturing Method for Memristor-Based Neuromorphic Computing Processors; University of Pittsburgh, March 2013; contract FA8750-11-1-0271. From the surviving abstract text: the project designed memristor-based synapses, implemented a neuromorphic computing system based on the proposed synapse designs, and also evaluated the robustness of the system.
NASA Technical Reports Server (NTRS)
Barlow, Jonathan; Benavides, Jose; Provencher, Chris; Bualat, Maria; Smith, Marion F.; Mora Vargas, Andres
2017-01-01
At the end of 2017, Astrobee will launch three free-flying robots that will navigate the entire US segment of the ISS (International Space Station) and serve as a payload facility. These robots will provide guest science payloads with processor resources, space within the robot for physical attachment, power, communication, propulsion, and human interfaces.
ERIC Educational Resources Information Center
Hyde, Hartley; Spencer, Toby
2010-01-01
Some people became mathematics or science teachers by default. There was once such a limited range of subjects that students who could not write essays did mathematics and science. Computers changed that. Word processor software helped some people overcome huge spelling and grammar hurdles and made it easy to edit and manipulate text. Would-be…
7 CFR 3411.1 - Applicability of regulations.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., Conservation, and Trade Act of 1990 (FACT Act), (7 U.S.C. 450i(b)), for the support of research to further the..., and environmental sciences in the following categories: Single investigators or coinvestigators in the... National Research Council of the National Academy of Sciences; producers, processors, industry; the land...
NASA Technical Reports Server (NTRS)
Davies, A. G.; Chien, S.; Baker, V.; Castano, R.; Cichy, B.; Doggett, T.; Dohm, J. M.; Greeley, R.; Ip, F.; Rabideau, G.
2005-01-01
ASE has successfully demonstrated that a spacecraft can be driven by science analysis and autonomously controlled. ASE is available for flight on other missions. Mission hardware design should consider ASE requirements for available onboard data storage, onboard memory size and processor speed.
Kariuki, C M; van Arendonk, J A M; Kahi, A K; Komen, H
2017-06-01
Dairy cattle industries contribute to food and nutrition security and are a source of income for numerous households in many developing countries. Selective breeding can enhance efficiency in these industries. Developing dairy industries are characterized by diverse production and marketing systems. In this paper, we use a weighted goal aggregating procedure to derive consensus trait preferences for different producer categories and processors. We based the study on the dairy industry in Kenya. The analytic hierarchy process was used to derive individual preferences for milk yield (MY), calving interval (CIN), production lifetime (PLT), mature body weight (MBW), and fat yield (FY). Results show that classical classification of production systems into large-scale and smallholder systems does not capture all differences in trait preferences. These differences became apparent when classification was based on productivity at the individual animal level, with high and low intensity producers and processors as the most important groups. High intensity producers had the highest preferences for PLT and MY, whereas low intensity producers had the highest preference for CIN and PLT; processors preferred MY and FY the most. The highest disagreements between the groups were observed for FY, PLT, and MY. Individual and group preferences were aggregated into consensus preferences using weighted goal programming. Desired gains were obtained as the product of consensus preferences and percentage genetic gains (G%). These were 2.42, 0.22, 2.51, 0.15, and 0.87 for MY, CIN, PLT, MBW, and FY, respectively. Consensus preferences can be used to derive a single compromise breeding objective for situations where the same genetic resources are used in diverse production and marketing circumstances.
A data base processor semantics specification package
NASA Technical Reports Server (NTRS)
Fishwick, P. A.
1983-01-01
A Semantics Specification Package (DBPSSP) for the Intel Data Base Processor (DBP) is defined. DBPSSP serves as a collection of cross assembly tools that allow the analyst to assemble request blocks on the host computer for passage to the DBP. The assembly tools discussed in this report may be effectively used in conjunction with a DBP compatible data communications protocol to form a query processor, precompiler, or file management system for the database processor. The source modules representing the components of DBPSSP are fully commented and included.
Universal Plug-n-Play Sensor Integration for Advanced Navigation
2012-03-22
Surviving front-matter and body fragments describe figures plotting AHRS orientation and angular velocity and the execution of an AHRS script with roscore running on a separate machine; the two-host scenario mirrors the single-host case, with the script running on one of the hosts. Chapter sections cover a component-based system using ROS and autonomous behavior using scripting (with a subsection on udev).
James D. Haywood; Finis Harris
2002-01-01
This presentation on prescribed burning is a cooperative effort of the USDA Forest Service, Southern Research Station and Kisatchie National Forest; Louisiana State University Agricultural Center; and the Joint Fire Science Program. The CD includes three methods of delivery: slides, PowerPoint presentation, and script only.
First Science Verification of the VLA Sky Survey Pilot
NASA Astrophysics Data System (ADS)
Cavanaugh, Amy
2017-01-01
My research involved analyzing test images produced by Steve Myers for the upcoming VLA Sky Survey. This survey will cover the entire sky visible from the VLA site in S band (2-4 GHz). The VLA will be in B configuration for the survey, as it was when the test images were produced, giving a resolution of approximately 2.5 arcseconds. Conducted using On-the-Fly mode, the survey will have a speed of approximately 20 square degrees per hour (including overhead). New Python imaging scripts are being developed and improved to process the VLASS images. My research consisted of comparing a continuum test image over S band (from the new imaging scripts) to two previous images of the same region of the sky (from the CNSS and FIRST surveys), as well as comparing the continuum image to single spectral windows (from the new imaging scripts and of the same sky region). By comparing our continuum test image to images from CNSS and FIRST, we tested On-the-Fly mode and the imaging script used to produce our images. Another goal was to test whether individual spectral windows could be used in combination to calculate spectral indices close to those produced over S band (based only on our continuum image). Our continuum image contained 64 sources as opposed to the 99 sources found in the CNSS image. The CNSS image also had a lower noise level (0.095 mJy/beam compared to 0.119 mJy/beam). Additionally, when our continuum image was compared to the CNSS image, separation showed no dependence on total flux density (in our continuum image). At lower flux densities, sources in our image were brighter than the same ones in the CNSS image. When our continuum image was compared to the FIRST catalog, the spectral index difference showed no dependence on total flux (in our continuum image). In conclusion, the quality of our images did not completely match the quality of the CNSS and FIRST images. More work is needed in developing the new imaging scripts.
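The spectral-index comparison described above reduces to simple arithmetic between flux densities in two spectral windows: assuming S(nu) proportional to nu**alpha, alpha = ln(S2/S1)/ln(nu2/nu1). A small sketch with hypothetical numbers:

```python
# Spectral index from two flux-density measurements (example values invented).
import numpy as np

def spectral_index(s1, nu1, s2, nu2):
    """Spectral index alpha, assuming S(nu) proportional to nu**alpha."""
    return np.log(s2 / s1) / np.log(nu2 / nu1)

# e.g. a source measured at 2.5 mJy (2.5 GHz) and 2.0 mJy (3.5 GHz):
print(spectral_index(2.5e-3, 2.5e9, 2.0e-3, 3.5e9))  # about -0.66
```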
Implementing Legacy-C Algorithms in FPGA Co-Processors for Performance Accelerated Smart Payloads
NASA Technical Reports Server (NTRS)
Pingree, Paula J.; Scharenbroich, Lucas J.; Werne, Thomas A.; Hartzell, Christine
2008-01-01
Accurate, on-board classification of instrument data is used to increase science return by autonomously identifying regions of interest for priority transmission or generating summary products to conserve transmission bandwidth. Due to on-board processing constraints, such classification has been limited to using the simplest functions on a small subset of the full instrument data. FPGA co-processor designs for support vector machine (SVM) classifiers will lead to significant improvement in on-board classification capability and accuracy.
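For concreteness, the kind of SVM decision function one might offload to an FPGA co-processor is just a dot product plus a bias; the sketch below shows it in NumPy alongside a crude fixed-point variant of the sort FPGA datapaths favor. The weights, scale factor, and input are hypothetical.

```python
# Linear SVM decision function, in floating point and in toy fixed point.
import numpy as np

def svm_classify(x, w, b):
    """Return the class label sign(w . x + b)."""
    return 1 if np.dot(w, x) + b >= 0 else -1

def quantize(v, scale=2**12):
    """Crude fixed-point representation (scale factor is an assumption)."""
    return np.round(v * scale).astype(np.int64)

w = np.array([0.8, -1.2, 0.3])   # hypothetical trained weights
b = -0.1
x = np.array([0.5, 0.2, 0.9])
print(svm_classify(x, w, b))     # floating-point result

# Fixed-point: the dot product carries scale**2, so the bias is rescaled.
print(1 if quantize(w) @ quantize(x) + quantize(b) * 2**12 >= 0 else -1)
```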
Multicore: Fallout From a Computing Evolution (LBNL Summer Lecture Series)
Yelick, Kathy [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC)
2018-05-07
Summer Lecture Series 2008: Parallel computing used to be reserved for big science and engineering projects, but in two years that's all changed. Even laptops and hand-helds use parallel processors. Unfortunately, the software hasn't kept pace. Kathy Yelick, Director of the National Energy Research Scientific Computing Center at Berkeley Lab, describes the resulting chaos and the computing community's efforts to develop exciting applications that take advantage of tens or hundreds of processors on a single chip.
Harrigan, Robert L; Yvernault, Benjamin C; Boyd, Brian D; Damon, Stephen M; Gibney, Kyla David; Conrad, Benjamin N; Phillips, Nicholas S; Rogers, Baxter P; Gao, Yurui; Landman, Bennett A
2016-01-01
The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has developed a database built on XNAT housing over a quarter of a million scans. The database provides a framework for (1) rapid prototyping, (2) large scale batch processing of images and (3) scalable project management. The system uses the web-based interfaces of XNAT and REDCap to allow for graphical interaction. A Python middleware layer, the Distributed Automation for XNAT (DAX) package, distributes computation across the Vanderbilt Advanced Computing Center for Research and Education high performance computing center. All software is made available as open source for use in combining portable batch scripting (PBS) grids and XNAT servers.
The HARNESS Workbench: Unified and Adaptive Access to Diverse HPC Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sunderam, Vaidy S.
2012-03-20
The primary goal of the Harness WorkBench (HWB) project is to investigate innovative software environments that will help enhance the overall productivity of applications science on diverse HPC platforms. Two complementary frameworks were designed: one, a virtualized command toolkit for application building, deployment, and execution, that provides a common view across diverse HPC systems, in particular the DOE leadership computing platforms (Cray, IBM, SGI, and clusters); and two, a unified runtime environment that consolidates access to runtime services via an adaptive framework for execution-time and post processing activities. A prototype of the first was developed based on the concept of a 'system-call virtual machine' (SCVM), to enhance portability of the HPC application deployment process across heterogeneous high-end machines. The SCVM approach to portable builds is based on the insertion of toolkit-interpretable directives into original application build scripts. Modifications resulting from these directives preserve the semantics of the original build instruction flow. The execution of the build script is controlled by our toolkit that intercepts build script commands in a manner transparent to the end-user. We have applied this approach to a scientific production code (Gamess-US) on the Cray-XT5 machine. The second facet, termed Unibus, aims to facilitate provisioning and aggregation of multifaceted resources from resource providers' and end-users' perspectives. To achieve that, Unibus proposes a Capability Model and mediators (resource drivers) to virtualize access to diverse resources, and soft and successive conditioning to enable automatic and user-transparent resource provisioning. A proof of concept implementation has demonstrated the viability of this approach on high end machines, grid systems and computing clouds.
Secure communication based on spatiotemporal chaos
NASA Astrophysics Data System (ADS)
Ren, Hai-Peng; Bai, Chao
2015-08-01
In this paper, we propose a novel approach to secure communication based on spatiotemporal chaos. At the transmitter end, the state variables of the coupled map lattice system are divided into two groups: one is used as the key to encrypt the plaintext in the N-shift encryption function, and the other is used to mix with the output of the N-shift function to further confuse the information to transmit. At the receiver end, the receiver lattices are driven by the received signal to synchronize with the transmitter lattices and an inverse procedure of the encoding is conducted to decode the information. Numerical simulation and experiment based on the TI TMS320C6713 Digital Signal Processor (DSP) show the feasibility and the validity of the proposed scheme. Project supported by the National Natural Science Foundation of China (Grant No. 61172070) and the Funds from the Science and Technology Innovation Team of Shaanxi Province, China (Grant No. 2013CKT-04).
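Since the abstract above sketches a coupled-map-lattice keystream feeding an N-shift-style encryption, a toy Python illustration of that flavor of scheme follows. The map, coupling constant, lattice size, and masking rule are assumptions for illustration, not the paper's actual algorithm.

```python
# Toy coupled-map-lattice keystream with repeated (N-shift-style) masking.
import numpy as np

def logistic(x, mu=3.99):
    return mu * x * (1.0 - x)

def cml_step(x, eps=0.3):
    """One update of a diffusively coupled logistic-map lattice (periodic)."""
    f = logistic(x)
    return (1 - eps) * f + (eps / 2.0) * (np.roll(f, 1) + np.roll(f, -1))

def mask(value, keys):
    """Repeatedly fold key samples into the value, mod 1."""
    for k in keys:
        value = (value + k) % 1.0
    return value

def unmask(value, keys):
    for k in reversed(keys):
        value = (value - k) % 1.0
    return value

x = np.linspace(0.1, 0.9, 16)     # shared initial lattice state (the key)
plaintext = 0.4242
for _ in range(100):              # transient iterations before use
    x = cml_step(x)
cipher = mask(plaintext, x[:8])   # one group of lattice sites encrypts
# A receiver with the same initial state reproduces x and inverts the masking:
print(abs(unmask(cipher, x[:8]) - plaintext) < 1e-12)  # True
```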
Landgraf, Steffen; von Treskow, Isabella
2017-01-01
Hardly any subjects enjoy greater – public or private – interest than the art of flirtation and seduction. However, interpersonal approach behavior not only paves the way for sexual interaction and reproduction, but it simultaneously integrates non-sexual psychobiological and cultural standards regarding consensus and social norms. In the present paper, we use script theory, a concept that extends across psychological and cultural science, to assess behavioral options during interpersonal approaches. Specifically, we argue that approaches follow scripted event sequences that entail ambivalence as an essential communicative element. On the one hand, ambivalence may facilitate interpersonal approaches by maintaining and provoking situational uncertainty, so that the outcome of an action – even after several approaches and dates – remains ambiguous. On the other hand, ambivalence may increase the risk for sexual aggression or abuse, depending on the individual’s abilities, the circumstances, and the intentions of the interacting partners. Recognizing latent sequences of sexually aggressive behavior, in terms of their rigid structure and behavioral options, may thus enable individuals to use resources efficiently, avoid danger, and extricate themselves from assault situations. We conclude that interdisciplinary script knowledge about ambivalence as a core component of the seduction script may be helpful for counteracting subtly aggressive intentions and preventing sexual abuse. We discuss this with regard to the nature-nurture debate as well as phylogenetic and ontogenetic aspects of interpersonal approach behavior and its medial implementation. PMID:28119656
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suresh, Niraj; Stephens, Sean A.; Adams, Lexor
Plant roots play a critical role in plant-soil-microbe interactions that occur in the rhizosphere, as well as processes with important implications to climate change and forest management. Quantitative size information on roots in their native environment is invaluable for studying root growth and environmental processes involving the plant. X-ray computed tomography (XCT) has been demonstrated to be an effective tool for in situ root scanning and analysis. Our group at the Environmental Molecular Sciences Laboratory (EMSL) has developed an XCT-based tool to image and quantitatively analyze plant root structures in their native soil environment. XCT data collected on a Prairie dropseed (Sporobolus heterolepis) specimen was used to visualize its root structure. A combination of the open-source software RooTrak and DDV were employed to segment the root from the soil and calculate its isosurface, respectively. Our own computer script named 3DRoot-SV was developed and used to calculate root volume and surface area from a triangular mesh. The process, utilizing a unique combination of tools from imaging to quantitative root analysis, including the 3DRoot-SV computer script, is described.
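The abstract above says 3DRoot-SV computes root volume and surface area from a triangular mesh; the standard formulas for doing so (triangle areas summed for surface area, signed tetrahedron volumes summed via the divergence theorem for volume) can be sketched as follows. This illustrates the underlying math, not the 3DRoot-SV code itself.

```python
# Surface area and enclosed volume of a closed, consistently oriented mesh.
import numpy as np

def mesh_area_volume(vertices, faces):
    """vertices: (n,3) float array; faces: (m,3) int array of vertex indices
    with consistent outward orientation (needed for the signed volume)."""
    v0 = vertices[faces[:, 0]]
    v1 = vertices[faces[:, 1]]
    v2 = vertices[faces[:, 2]]
    cross = np.cross(v1 - v0, v2 - v0)
    area = 0.5 * np.linalg.norm(cross, axis=1).sum()
    # Signed tetrahedron volumes against the origin, summed and made positive
    volume = np.abs(np.einsum("ij,ij->i", v0, np.cross(v1, v2)).sum()) / 6.0
    return area, volume

# Unit cube as a sanity check (12 outward-oriented triangles):
verts = np.array([[0,0,0],[1,0,0],[1,1,0],[0,1,0],
                  [0,0,1],[1,0,1],[1,1,1],[0,1,1]], float)
faces = np.array([[0,2,1],[0,3,2],[4,5,6],[4,6,7],[0,1,5],[0,5,4],
                  [1,2,6],[1,6,5],[2,3,7],[2,7,6],[3,0,4],[3,4,7]])
print(mesh_area_volume(verts, faces))  # expect (6.0, 1.0)
```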
High-Speed On-Board Data Processing for Science Instruments: HOPS
NASA Technical Reports Server (NTRS)
Beyon, Jeffrey
2015-01-01
The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) was funded by the NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program from April 2012 to April 2015. HOPS is an enabler for science missions with extremely high data processing rates. This three-year effort focused in particular on Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) and 3-D Winds. For ASCENDS, HOPS replaces time-domain data processing with frequency-domain processing while making real-time on-board data processing possible. For 3-D Winds, HOPS offers real-time high-resolution wind profiling with a 4,096-point fast Fourier transform (FFT). HOPS is adaptable with quick turn-around time: since HOPS offers reusable user-friendly computational elements, its FPGA IP core can be modified for a shorter development period if the algorithm changes. The FPGA and memory bandwidth of HOPS is 20 GB/sec, while the typical maximum processor-to-SDRAM bandwidth of commercial radiation-tolerant high-end processors is about 130-150 MB/sec. The inter-board communication bandwidth of HOPS is 4 GB/sec, while the effective processor-to-cPCI bandwidth of commercial radiation-tolerant high-end boards is about 50-75 MB/sec. HOPS also offers VHDL cores for the easy and efficient implementation of ASCENDS, 3-D Winds, and other similar algorithms. A general overview of the three-year development of HOPS is the goal of this presentation.
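As a rough illustration of the frequency-domain step mentioned above, the following sketch computes a 4,096-point FFT power spectrum of a synthetic return and reads off its peak frequency; the sample rate and signal are assumptions, not mission parameters.

```python
# 4,096-point FFT power spectrum with a synthetic Doppler-shifted tone.
import numpy as np

fs = 1.0e9                  # 1 GS/s sample rate (assumption)
n = 4096                    # FFT length, matching the figure quoted above
t = np.arange(n) / fs
f_doppler = 25.0e6          # hypothetical Doppler-shifted return frequency
signal = np.cos(2 * np.pi * f_doppler * t) + 0.5 * np.random.randn(n)

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(n, d=1 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin
print(f"peak near {peak / 1e6:.2f} MHz")     # close to 25 MHz
```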
Automatic film processors' quality control test in Greek military hospitals.
Lymberis, C; Efstathopoulos, E P; Manetou, A; Poudridis, G
1993-04-01
The two major military radiology installations (Athens, Greece), using a total of 15 automatic film processors, were assessed using the 21-step-wedge method. The results of quality control in all these processors are presented. The parameters measured under actual working conditions were base and fog, contrast and speed. Base and fog as well as speed displayed large variations, with average values generally higher than acceptable, whilst contrast displayed greater stability. Developer temperature was measured daily during the test and was found to be outside the film manufacturers' recommended limits in nine of the 15 processors. In only one processor did film passing time vary from day to day, and this was due to maloperation. The developer pH test was not part of the daily monitoring, being performed every 5 days for each film processor; pH values were found to be in the range 9-12, and 10 of the 15 processors presented pH values outside the limits specified by the film manufacturers.
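The measurements named above (base and fog, speed, contrast) come from densities read off the 21-step wedge. A hedged sketch of one common way to reduce such readings follows; the exact index definitions vary between QC protocols, and the density values here are hypothetical.

```python
# Toy sensitometric reduction of 21 step-wedge density readings.
# Conventions assumed here: base+fog is the density of the unexposed step,
# the speed step is the one closest to base+fog + 1.0, and the contrast
# index is the density span across two steps either side of the speed step.
densities = [0.18, 0.19, 0.20, 0.22, 0.26, 0.33, 0.45, 0.62, 0.85, 1.12,
             1.41, 1.70, 1.96, 2.18, 2.35, 2.48, 2.57, 2.63, 2.67, 2.70, 2.72]

base_fog = densities[0]
target = base_fog + 1.0
speed_step = min(range(len(densities)), key=lambda i: abs(densities[i] - target))
contrast = densities[speed_step + 2] - densities[speed_step - 2]

print(f"base+fog={base_fog:.2f}, speed step={speed_step + 1}, "
      f"contrast index={contrast:.2f}")
```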
NASA Technical Reports Server (NTRS)
Barnes, George H. (Inventor); Lundstrom, Stephen F. (Inventor); Shafer, Philip E. (Inventor)
1983-01-01
A high speed parallel array data processing architecture fashioned under a computational envelope approach includes a data base memory for secondary storage of programs and data, and a plurality of memory modules interconnected to a plurality of processing modules by a connection network of the Omega gender. Programs and data are fed from the data base memory to the plurality of memory modules, and from there the programs are fed through the connection network to the array of processors (one copy of each program for each processor). Execution of the programs occurs with the processors operating normally quite independently of each other in a multiprocessing fashion. For data-dependent operations and other suitable operations, all processors are instructed to finish one given task or program branch before all are instructed to proceed in parallel processing fashion on the next instruction. Even when functioning in the parallel processing mode, however, the processors are not lock-stepped but execute their own copy of the program individually unless or until another overall processor array synchronization instruction is issued.
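The synchronization discipline described, independent execution punctuated by an all-processors barrier before data-dependent steps, can be illustrated in miniature with Python threads; the worker count and workload are placeholders.

```python
# Independent workers that all reach a barrier before a dependent step.
import threading

N = 4
barrier = threading.Barrier(N)
partial = [0] * N

def worker(rank):
    # Independent phase: each processor executes its own copy of the program.
    partial[rank] = sum(range(rank * 100, (rank + 1) * 100))
    barrier.wait()  # every processor must finish before the dependent phase
    if rank == 0:
        print("total:", sum(partial))

threads = [threading.Thread(target=worker, args=(r,)) for r in range(N)]
for th in threads:
    th.start()
for th in threads:
    th.join()
```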
Simulation for Dynamic Situation Awareness and Prediction III
2010-03-01
Components include an open-source Java library for capturing and sending network packets and Groovy (version 1.6 or newer), an open-source dynamic scripting language for the Java Virtual Machine whose syntax is consistent with Java; Groovy scripts drive the DMOTH Analyzer application. The Editor models relationships between temperature, pressure, wind and relative humidity, includes a precipitation editing algorithm, and can be used to prepare scripted changes.
Software-Reconfigurable Processors for Spacecraft
NASA Technical Reports Server (NTRS)
Farrington, Allen; Gray, Andrew; Bell, Bryan; Stanton, Valerie; Chong, Yong; Peters, Kenneth; Lee, Clement; Srinivasan, Jeffrey
2005-01-01
A report presents an overview of an architecture for a software-reconfigurable network data processor for a spacecraft engaged in scientific exploration. When executed on suitable electronic hardware, the software performs the functions of a physical layer (in effect, acts as a software radio in that it performs modulation, demodulation, pulse-shaping, error-correction coding, and decoding), a data-link layer, a network layer, a transport layer, and application-layer processing of scientific data. The software-reconfigurable network processor is undergoing development to enable rapid prototyping and rapid implementation of communication, navigation, and scientific signal-processing functions; to provide a long-lived communication infrastructure; and to provide greatly improved scientific-instrumentation and scientific-data-processing functions by enabling science-driven in-flight reconfiguration of computing resources devoted to these functions. This development is an extension of terrestrial radio and network developments (e.g., in the cellular-telephone industry) implemented in software running on such hardware as field-programmable gate arrays, digital signal processors, traditional digital circuits, and mixed-signal application-specific integrated circuits (ASICs).
Derivation of an optimal directivity pattern for sweet spot widening in stereo sound reproduction
NASA Astrophysics Data System (ADS)
Ródenas, Josep A.; Aarts, Ronald M.; Janssen, A. J. E. M.
2003-01-01
In this paper the correction of the degradation of the stereophonic illusion due to off-center listening during sound reproduction is investigated. The main idea is that the directivity pattern of a loudspeaker array should have a well-defined shape such that good stereo reproduction is achieved over a large listening area. A mathematical description is therefore given for deriving an optimal directivity pattern that achieves sweet spot widening over a large listening area for stereophonic sound applications. This optimal directivity pattern is based on parametrized time/intensity trading data from psychoacoustic experiments within a wide listening area. The required digital FIR filters are then determined by means of a least-squares optimization method for a given stereo base setup (two pairs of drivers for the loudspeaker arrays and a 2.5-m distance between loudspeakers), which radiates sound over a broad range of listening positions in accordance with the derived optimal pattern. Informal listening tests have shown that the optimal pattern worked as predicted by the theoretical simulations. They also demonstrated correct central sound localization for speech and music at a number of listening positions. This application is referred to as ``Position-Independent (PI) stereo.''
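The final design step above, determining FIR filters by least squares to realize a target response, can be sketched generically as follows; the target response and tap count are toy assumptions, not the paper's psychoacoustically derived pattern.

```python
# Least-squares fit of real FIR taps to a target complex frequency response.
import numpy as np

n_taps = 32
w = np.linspace(0, np.pi, 256)  # evaluation frequencies (rad/sample)
# Toy target: linear-phase lowpass with cutoff pi/2
target = np.exp(-1j * w * (n_taps - 1) / 2) * (w < np.pi / 2)

# Frequency-response matrix: A[m, k] = exp(-j * w_m * k)
A = np.exp(-1j * np.outer(w, np.arange(n_taps)))

# Solve for real-valued taps by stacking real and imaginary parts.
A_real = np.vstack([A.real, A.imag])
d_real = np.concatenate([target.real, target.imag])
h, *_ = np.linalg.lstsq(A_real, d_real, rcond=None)

achieved = A @ h
print("max response error:", np.max(np.abs(achieved - target)))
```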
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-05
... Science Center's Social Sciences Branch seeks to collect data on distribution networks and business... also seeks to collect data on business disruptions due to Hurricane Sandy for those firms. The data collected will improve research and analysis on the economic impacts of potential fishery management actions...
Illness script development in pre-clinical education through case-based clinical reasoning training
Keemink, Yvette; van Dijk, Savannah; ten Cate, Olle
2018-01-01
Objectives To assess illness script richness and maturity in preclinical students after they attended a specifically structured instructional format, i.e., a case-based clinical reasoning (CBCR) course. Methods In a within-subject experimental design, medical students who had finished the CBCR course participated in an illness script experiment. In the first session, richness and maturity of students' illness scripts for diseases discussed during the CBCR course were compared to illness script richness and maturity for similar diseases not included in the course. In the second session, diagnostic performance was tested, to test for differences between CBCR cases and non-CBCR cases. Scores on the CBCR course exam were related to both experimental outcomes. Results Thirty-two medical students participated. Illness script richness for CBCR diseases was almost 20% higher than for non-CBCR diseases, on average 14.47 (SD=3.25) versus 12.14 (SD=2.80), respectively (p<0.001). In addition, students provided more information on Enabling Conditions and less on Fault-related aspects of the disease. Diagnostic performance was better for the diseases discussed in the CBCR course, mean score 1.63 (SD=0.32) versus 1.15 (SD=0.29) for non-CBCR diseases (p<0.001). A significant correlation of exam results with recognition of CBCR cases was found (r=0.571, p<0.001), but not with illness script richness (r=-0.006, p=NS). Conclusions The CBCR course fosters early development of clinical reasoning skills by increasing the illness script richness and diagnostic performance of pre-clinical students. However, these results are disease-specific and therefore we cannot conclude that students develop a more general clinical reasoning ability. PMID:29428911
Illness script development in pre-clinical education through case-based clinical reasoning training.
Keemink, Yvette; Custers, Eugene J F M; van Dijk, Savannah; Ten Cate, Olle
2018-02-09
To assess illness script richness and maturity in preclinical students after they attended a specifically structured instructional format, i.e., a case-based clinical reasoning (CBCR) course. In a within-subject experimental design, medical students who had finished the CBCR course participated in an illness script experiment. In the first session, richness and maturity of students' illness scripts for diseases discussed during the CBCR course were compared to illness script richness and maturity for similar diseases not included in the course. In the second session, diagnostic performance was tested, to test for differences between CBCR cases and non-CBCR cases. Scores on the CBCR course exam were related to both experimental outcomes. Thirty-two medical students participated. Illness script richness for CBCR diseases was almost 20% higher than for non-CBCR diseases, on average 14.47 (SD=3.25) versus 12.14 (SD=2.80), respectively (p<0.001). In addition, students provided more information on Enabling Conditions and less on Fault-related aspects of the disease. Diagnostic performance was better for the diseases discussed in the CBCR course, mean score 1.63 (SD=0.32) versus 1.15 (SD=0.29) for non-CBCR diseases (p<0.001). A significant correlation of exam results with recognition of CBCR cases was found (r=0.571, p<0.001), but not with illness script richness (r=-0.006, p=NS). The CBCR course fosters early development of clinical reasoning skills by increasing the illness script richness and diagnostic performance of pre-clinical students. However, these results are disease-specific and therefore we cannot conclude that students develop a more general clinical reasoning ability.
Onboard Radar Processing Development for Rapid Response Applications
NASA Technical Reports Server (NTRS)
Lou, Yunling; Chien, Steve; Clark, Duane; Doubleday, Josh; Muellerschoen, Ron; Wang, Charles C.
2011-01-01
We are developing onboard processor (OBP) technology to streamline data acquisition on demand and explore the potential of the L-band SAR instrument onboard the proposed DESDynI mission and UAVSAR for rapid response applications. The technology would enable the observation and use of surface change data over rapidly evolving natural hazards, both as an aid to scientific understanding and to provide timely data to agencies responsible for the management and mitigation of natural disasters. We are adapting complex science algorithms for surface water extent to detect flooding, snow/water/ice classification to assist in transportation/shipping forecasts, and repeat-pass change detection to detect disturbances. We are near completion of the development of a custom FPGA board to meet the specific memory and processing needs of L-band SAR processing algorithms, along with high-speed interfaces to reformat and route raw radar data to and from the FPGA processor board. We have also developed a high fidelity Matlab model of the SAR processor that is modularized and parameterized for ease of prototyping various SAR processing algorithms targeted for the FPGA. We will be testing the OBP and rapid response algorithms with UAVSAR data to determine the fidelity of the products.
ERIC Educational Resources Information Center
Steele, Ryan D.; Waters, Theodore E. A.; Bost, Kelly K.; Vaughn, Brian E.; Truitt, Warren; Waters, Harriet S.; Booth-LaForce, Cathryn; Roisman, Glenn I.
2014-01-01
Based on a subsample (N = 673) of the NICHD Study of Early Child Care and Youth Development (SECCYD) cohort, this article reports data from a follow-up assessment at age 18 years on the antecedents of "secure base script knowledge", as reflected in the ability to generate narratives in which attachment-related difficulties are…
Noncoherent parallel optical processor for discrete two-dimensional linear transformations.
Glaser, I
1980-10-01
We describe a parallel optical processor, based on a lenslet array, that provides general linear two-dimensional transformations using noncoherent light. Such a processor could become useful in image- and signal-processing applications in which the throughput requirements cannot be adequately satisfied by state-of-the-art digital processors. Experimental results that illustrate the feasibility of the processor by demonstrating its use in parallel optical computation of the two-dimensional Walsh-Hadamard transformation are presented.
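For reference, the two-dimensional Walsh-Hadamard transform demonstrated optically above can be written digitally in a few lines; the 8x8 size and random test image are arbitrary choices.

```python
# 2-D Walsh-Hadamard transform built from Kronecker products.
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix of order n (a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.kron(H, np.array([[1, 1], [1, -1]]))
    return H

N = 8
H = hadamard(N)                  # symmetric, and H @ H = N * I
image = np.random.rand(N, N)
transformed = H @ image @ H      # separable 2-D transform
recovered = H @ transformed @ H / N**2
print(np.allclose(recovered, image))  # True
```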
Hot Chips and Hot Interconnects for High End Computing Systems
NASA Technical Reports Server (NTRS)
Saini, Subhash
2005-01-01
I will discuss several processors: 1. The Cray proprietary processor used in the Cray X1; 2. The IBM Power 3 and Power 4 used in IBM SP 3 and IBM SP 4 systems; 3. The Intel Itanium and Xeon, used in the SGI Altix systems and clusters respectively; 4. IBM System-on-a-Chip used in IBM BlueGene/L; 5. HP Alpha EV68 processor used in the DOE ASCI Q cluster; 6. SPARC64 V processor, which is used in the Fujitsu PRIMEPOWER HPC2500; 7. An NEC proprietary processor, which is used in NEC SX-6/7; 8. Power 4+ processor, which is used in Hitachi SR11000; 9. An NEC proprietary processor, which is used in the Earth Simulator. The IBM POWER5 and Red Storm computing systems will also be discussed. The architectures of these processors will first be presented, followed by interconnection networks and a description of high-end computer systems based on these processors and networks. The performance of various hardware/programming model combinations will then be compared, based on the latest NAS Parallel Benchmark results (MPI, OpenMP/HPF, and hybrid MPI + OpenMP). The tutorial will conclude with a discussion of general trends in the field of high performance computing (quantum computing, DNA computing, cellular engineering, and neural networks).
FPGA Acceleration of the phylogenetic likelihood function for Bayesian MCMC inference methods.
Zierke, Stephanie; Bakos, Jason D
2010-04-12
Maximum Likelihood (ML)-based phylogenetic inference has become a popular method for estimating the evolutionary relationships among species based on genomic sequence data. This method is used in applications such as RAxML, GARLI, MrBayes, PAML, and PAUP. The Phylogenetic Likelihood Function (PLF) is an important kernel computation for this method. The PLF consists of a loop with no conditional behavior or dependencies between iterations. As such it contains a high potential for exploiting parallelism using micro-architectural techniques. In this paper, we describe a technique for mapping the PLF and supporting logic onto a Field Programmable Gate Array (FPGA)-based co-processor. By leveraging the FPGA's on-chip DSP modules and the high-bandwidth local memory attached to the FPGA, the resultant co-processor can accelerate ML-based methods and outperform state-of-the-art multi-core processors. We use the MrBayes 3 tool as a framework for designing our co-processor. For large datasets, we estimate that our accelerated MrBayes, if run on a current-generation FPGA, achieves a 10x speedup relative to software running on a state-of-the-art server-class microprocessor. The FPGA-based implementation achieves its performance by deeply pipelining the likelihood computations, performing multiple floating-point operations in parallel, and through a natural log approximation that is chosen specifically to leverage a deeply pipelined custom architecture. Heterogeneous computing, which combines general-purpose processors with special-purpose co-processors such as FPGAs and GPUs, is a promising approach for high-performance phylogeny inference as shown by the growing body of literature in this field. FPGAs in particular are well-suited for this task because of their low power consumption as compared to many-core processors and Graphics Processor Units (GPUs).
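A minimal NumPy sketch of the PLF kernel described above, the per-site, per-state partial-likelihood combination at an internal node (Felsenstein pruning), shows why the loop has no cross-iteration dependencies; the state count, site count, and transition matrices are illustrative.

```python
# Felsenstein-pruning step: parent partial likelihoods from two children.
import numpy as np

n_states, n_sites = 4, 1000
rng = np.random.default_rng(1)

# Per-branch transition matrices P[i, j] = Prob(state i -> state j);
# rows sum to one (0.1 * 4 + 0.6 = 1.0).
P_left = np.full((n_states, n_states), 0.1) + 0.6 * np.eye(n_states)
P_right = P_left.copy()

# Children's conditional likelihoods, one column per alignment site
L_left = rng.random((n_states, n_sites))
L_right = rng.random((n_states, n_sites))

# Parent partials: each (state, site) entry is independent of every other,
# which is exactly the property that lets an FPGA pipeline the loop deeply.
L_parent = (P_left @ L_left) * (P_right @ L_right)
print(L_parent.shape)  # (4, 1000)
```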
Set processing in a network environment. [data bases and magnetic disks and tapes
NASA Technical Reports Server (NTRS)
Hardgrave, W. T.
1975-01-01
A combination of a local network, a mass storage system, and an autonomous set processor serving as a data/storage management machine is described. Its characteristics include: content-accessible data bases usable from all connected devices; efficient storage/access of large data bases; simple and direct programming with data manipulation and storage management handled by the set processor; simple data base design and entry from source representation to set processor representation with no predefinition necessary; capability available for user sort/order specification; significant reduction in tape/disk pack storage and mounts; flexible environment that allows upgrading hardware/software configuration without causing major interruptions in service; minimal traffic on data communications network; and improved central memory usage on large processors.
Towards a Generic and Adaptive System-On-Chip Controller for Space Exploration Instrumentation
NASA Technical Reports Server (NTRS)
Iturbe, Xabier; Keymeulen, Didier; Yiu, Patrick; Berisford, Dan; Hand, Kevin; Carlson, Robert; Ozer, Emre
2015-01-01
This paper introduces one of the first efforts conducted at NASA’s Jet Propulsion Laboratory (JPL) to develop a generic System-on-Chip (SoC) platform to control science instruments that are proposed for future NASA missions. The SoC platform is named APEX-SoC, where APEX stands for Advanced Processor for space Exploration, and is based on a hybrid Xilinx Zynq that combines an FPGA and an ARM Cortex-A9 dual-core processor on a single chip. The Zynq implements a generic and customizable on-chip infrastructure that can be reused with a variety of instruments, and it has been coupled with a set of off-chip components that are necessary to deal with the different instruments. We have taken JPL’s Compositional InfraRed Imaging Spectrometer (CIRIS), which is proposed for NASA icy moons missions, as a use-case scenario to demonstrate that the entire data processing, control and interface of an instrument can be implemented on a single device using the on-chip infrastructure described in this paper. We show that the performance results achieved in this preliminary version of the instrumentation controller are sufficient to fulfill the science requirements demanded of the CIRIS instrument in future NASA missions, such as Europa.
Architectures for reasoning in parallel
NASA Technical Reports Server (NTRS)
Hall, Lawrence O.
1989-01-01
The research conducted has dealt with rule-based expert systems and the algorithms that may lead to their effective parallelization. Both the forward and backward chained control paradigms were investigated in the course of this work, as was the best computer architecture for the developed and investigated algorithms. Two experimental vehicles were developed to facilitate this research: Backpac, a parallel backward chained rule-based reasoning system, and Datapac, a parallel forward chained rule-based reasoning system. Both systems have been written in Multilisp, a version of Lisp which contains the parallel construct future. Applying future to a function call causes that call to become a task running in parallel with the spawning task. Additionally, Backpac and Datapac have been run on several disparate parallel processors: an Encore Multimax with 10 processors, the Concert Multiprocessor with 64 processors, and a 32-processor BBN GP1000. Both the Concert and the GP1000 are switch-based machines; the Multimax has all its processors hung off a common bus. All are shared memory machines, but have different schemes for sharing the memory and different locales for the shared memory. The main results of the investigations come from experiments on the 10-processor Encore and the Concert with partitions of 32 or fewer processors. Additionally, experiments have been run with a stripped-down version of EMYCIN.
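Multilisp's future has a close analogue in Python's concurrent.futures, which can illustrate the construct described above; the rule-evaluation stand-in below is hypothetical.

```python
# Python analogue of Multilisp's future: submitting a call returns
# immediately with a future that runs in parallel with the spawning task;
# .result() blocks only when the value is actually needed.
from concurrent.futures import ThreadPoolExecutor

def fire_rule(rule_id):
    # Stand-in for evaluating one rule's conditions in parallel
    return rule_id * rule_id

with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(fire_rule, r) for r in range(16)]
    # The spawning task keeps going; results are claimed when needed:
    print(sum(f.result() for f in futures))
```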
Facilitating NASA Earth Science Data Processing Using Nebula Cloud Computing
NASA Astrophysics Data System (ADS)
Chen, A.; Pham, L.; Kempler, S.; Theobald, M.; Esfandiari, A.; Campino, J.; Vollmer, B.; Lynnes, C.
2011-12-01
Cloud Computing technology has been used to offer high-performance and low-cost computing and storage resources for both scientific problems and business services. Several cloud computing services have been implemented in the commercial arena, e.g. Amazon's EC2 & S3, Microsoft's Azure, and Google App Engine. There are also research and application programs being launched in academia and government to utilize Cloud Computing. NASA launched the Nebula Cloud Computing platform in 2008, an Infrastructure as a Service (IaaS) offering that delivers on-demand distributed virtual computers. Nebula users can receive required computing resources as a fully outsourced service. NASA Goddard Earth Science Data and Information Service Center (GES DISC) migrated several GES DISC applications to the Nebula as a proof of concept, including: a) the Simple, Scalable, Script-based Science Processor for Measurements (S4PM) for processing scientific data; b) the Atmospheric Infrared Sounder (AIRS) data processing workflow for processing AIRS raw data; and c) the GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure (GIOVANNI) for online access to, analysis of, and visualization of Earth science data. This work aims to evaluate the practicability and adaptability of the Nebula. The initial work focused on the AIRS data processing workflow, which consists of a series of algorithms used to process raw AIRS level 0 data and output AIRS level 2 geophysical retrievals. Migrating the entire workflow to the Nebula platform is challenging, but practicable. After installing several supporting libraries and the processing code itself, the workflow is able to process AIRS data in a similar fashion to its current (non-cloud) configuration. We compared the performance of processing 2 days of AIRS level 0 data through level 2 using a Nebula virtual computer and a local Linux computer. The results show that Nebula has significantly better performance than the local machine. Much of the difference was due to newer equipment in the Nebula than in the legacy computer, which is suggestive of a potential economic advantage beyond elastic power, i.e., access to up-to-date hardware versus legacy hardware that must be maintained past its prime to amortize its cost. In addition to a trade study of the advantages and challenges of porting complex processing to the cloud, a tutorial was developed to enable further progress in utilizing the Nebula for Earth science applications and to better understand the potential for Cloud Computing in further data- and computing-intensive Earth science research. In particular, highly bursty computing such as that experienced in the user-demand-driven Giovanni system may become more tractable in a Cloud environment. Our future work will continue to focus on migrating more GES DISC applications and instances, e.g. Giovanni instances, to the Nebula platform and on moving mature migrated applications into operations on the Nebula.
NASA Technical Reports Server (NTRS)
Jacob, Joseph; Katz, Daniel; Prince, Thomas; Berriman, Graham; Good, John; Laity, Anastasia
2006-01-01
The final version (3.0) of the Montage software has been released. To recapitulate from previous NASA Tech Briefs articles about Montage: This software generates custom, science-grade mosaics of astronomical images on demand from input files that comply with the Flexible Image Transport System (FITS) standard and contain image data registered on projections that comply with the World Coordinate System (WCS) standards. This software can be executed on single-processor computers, multi-processor computers, and such networks of geographically dispersed computers as the National Science Foundation's TeraGrid or NASA's Information Power Grid. The primary advantage of running Montage in a grid environment is that computations can be done on a remote supercomputer for efficiency. Multiple computers at different sites can be used for different parts of a computation, a significant advantage in cases of computations for large mosaics that demand more processor time than is available at any one site. Version 3.0 incorporates several improvements over prior versions. The most significant improvement is that this version is accessible to scientists located anywhere, through operational Web services that provide access to data from several large astronomical surveys and construct mosaics on either local workstations or remote computational grids as needed.
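Montage itself is implemented in C, but the FITS and WCS conventions named above are easy to demonstrate. A minimal Python sketch, assuming the third-party astropy library, builds a tiny in-memory FITS image with a simple WCS and converts a pixel coordinate to a sky coordinate, which is the core bookkeeping behind reprojecting tiles onto a common mosaic grid:

    import numpy as np
    from astropy.io import fits
    from astropy.wcs import WCS

    # A tiny in-memory image standing in for one survey tile.
    hdu = fits.PrimaryHDU(data=np.zeros((10, 10)))
    hdu.header["CTYPE1"], hdu.header["CTYPE2"] = "RA---TAN", "DEC--TAN"
    hdu.header["CRVAL1"], hdu.header["CRVAL2"] = 180.0, 0.0      # sky at reference pixel
    hdu.header["CRPIX1"], hdu.header["CRPIX2"] = 5.0, 5.0        # reference pixel
    hdu.header["CDELT1"], hdu.header["CDELT2"] = -0.001, 0.001   # degrees per pixel

    wcs = WCS(hdu.header)
    print(wcs.pixel_to_world(0, 0))  # pixel (0, 0) -> a SkyCoord on the sky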
dREL: a relational expression language for dictionary methods.
Spadaccini, Nick; Castleden, Ian R; du Boulay, Doug; Hall, Sydney R
2012-08-27
The provision of precise metadata is an important but largely underrated challenge for modern science [Nature 2009, 461, 145]. We describe here a dictionary methods language, dREL, that has been designed to enable complex data relationships to be expressed as formulaic scripts in data dictionaries written in DDLm [Spadaccini and Hall, J. Chem. Inf. Model. 2012, doi:10.1021/ci300075z]. dREL describes data relationships in a simple but powerful canonical form that is easy to read and understand and can be executed computationally to evaluate or validate data. The execution of dREL expressions is not a substitute for traditional scientific computation; its purpose is to provide precise data-dependency information to domain-specific definitions and a means for cross-validating data. Some scientific fields apply conventional programming languages to methods scripts, but these tend to inhibit both dictionary development and accessibility. dREL removes the programming barrier and encourages the production of the metadata needed for seamless data archiving and exchange in science.
Robot Task Commander with Extensible Programming Environment
NASA Technical Reports Server (NTRS)
Hart, Stephen W (Inventor); Wightman, Brian J (Inventor); Dinh, Duy Paul (Inventor); Yamokoski, John D. (Inventor); Gooding, Dustin R (Inventor)
2014-01-01
A system for developing distributed robot application-level software includes a robot having an associated control module which controls motion of the robot in response to a commanded task, and a robot task commander (RTC) in networked communication with the control module over a network transport layer (NTL). The RTC includes a script engine(s) and a GUI, with a processor and a centralized library of code blocks constructed from an interpretive computer programming code and having input and output connections. The GUI provides access to a Visual Programming Language (VPL) environment and a text editor. In executing a method, the VPL is opened, a task for the robot is built from the code library blocks, and data is assigned to input and output connections identifying input and output data for each block. A task sequence(s) is sent to the control module(s) over the NTL to command execution of the task.
SHARPEN-systematic hierarchical algorithms for rotamers and proteins on an extended network.
Loksha, Ilya V; Maiolo, James R; Hong, Cheng W; Ng, Albert; Snow, Christopher D
2009-04-30
Algorithms for discrete optimization of proteins play a central role in recent advances in protein structure prediction and design. We wish to improve the resources available for computational biologists to rapidly prototype such algorithms and to easily scale these algorithms to many processors. To that end, we describe the implementation and use of two new open source resources, citing potential benefits over existing software. We discuss CHOMP, a new object-oriented library for macromolecular optimization, and SHARPEN, a framework for scaling CHOMP scripts to many computers. These tools allow users to develop new algorithms for a variety of applications including protein repacking, protein-protein docking, loop rebuilding, or homology model remediation. Particular care was taken to allow modular energy function design; protein conformations may currently be scored using either the OPLSaa molecular mechanical energy function or an all-atom semiempirical energy function employed by Rosetta. (c) 2009 Wiley Periodicals, Inc.
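The modular energy function design mentioned above amounts to putting scoring behind one interface so conformations can be evaluated against interchangeable functions. The sketch below is hypothetical Python, not the actual CHOMP/SHARPEN API; the class and function names are invented:

    from typing import List, Protocol, Tuple

    Conformation = List[Tuple[float, float, float]]  # atom coordinates

    class EnergyFunction(Protocol):
        def score(self, conf: Conformation) -> float: ...

    class ToyLennardJones:
        # Toy pairwise term standing in for a molecular-mechanics function.
        def score(self, conf: Conformation) -> float:
            total = 0.0
            for i in range(len(conf)):
                for j in range(i + 1, len(conf)):
                    r2 = sum((a - b) ** 2 for a, b in zip(conf[i], conf[j]))
                    r6 = max(r2, 1e-6) ** 3
                    total += 1.0 / r6 ** 2 - 2.0 / r6
            return total

    def best_rotamer(rotamers: List[Conformation], ef: EnergyFunction) -> Conformation:
        # Swapping OPLSaa for a Rosetta-style function changes only the
        # object passed in here, which is the point of the modular design.
        return min(rotamers, key=ef.score)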
OpenSesame: an open-source, graphical experiment builder for the social sciences.
Mathôt, Sebastiaan; Schreij, Daniel; Theeuwes, Jan
2012-06-01
In the present article, we introduce OpenSesame, a graphical experiment builder for the social sciences. OpenSesame is free, open-source, and cross-platform. It features a comprehensive and intuitive graphical user interface and supports Python scripting for complex tasks. Additional functionality, such as support for eyetrackers, input devices, and video playback, is available through plug-ins. OpenSesame can be used in combination with existing software for creating experiments.
A case study: The original intentions of the designers of the science content standards
NASA Astrophysics Data System (ADS)
Eucker, Penelope Hudson
This case study research examined the original intentions of the designers of the science content standards in the historical context of educational reforms and legislation. The content standards are the keystone of standards-based education. Originally, national science content standards were part of a cohesive program to increase the occurrence of quality science in grades K-12. Through assessment policies set into motion by state and federal legislation, science curriculum is increasingly fixed and standardized, and scripting of teachers is becoming more common. Unintended outcomes of standards-based education are prevalent in all classrooms. Recording the original intentions of the designers of the science content standards in a historical context is significant for documenting their beliefs and purposes. The shared beliefs of the six scholars included: (a) science had become an overstuffed curriculum in which students learn very few concepts; (b) science teachers require assistance in deciding which concepts are most important for students to learn; (c) standards-based education will most likely endure for a very long time; (d) science is a specific way of knowing, and inquiry must be part of science instruction; (e) few teachers teach to the science content standards. The scholars disagreed about whether the power to decide what to teach had moved from the classroom to the legislators, and about whether standards-based education has preferentially helped some groups of students while diminishing the science education of others. Implications from the findings reveal the tension between a defined science content and the resultant assessment template that further trims the instructional range offered. The findings also foreshadow an increasing trend toward profits for testing companies as state and federal legislation mandates more assessments. Significantly, educational research that clearly demonstrated many pathways to educated students, such as the Eight-Year Study, was suppressed in favor of bipartisan-supported standards-based education. One of the stated goals of standards-based education was equity; with documented corrupted curriculum sometimes devoid of all science, equity remains an elusive goal. This research documents the original intentions of the designers of the science content standards. The story continues to unfold with new state and federal legislation as teachers attempt to teach the mandated content standards.
Considerations for Future Climate Data Stewardship
NASA Astrophysics Data System (ADS)
Halem, M.; Nguyen, P. T.; Chapman, D. R.
2009-12-01
In this talk, we will describe the lessons learned from processing and generating a decade of gridded AIRS and MODIS IR sounding data. We describe the challenges faced in accessing and sharing very large data sets, maintaining data provenance under evolving technologies, obtaining access to legacy calibration data, and the permanent preservation of Earth science data records for on-demand services. These lessons suggest that a new approach to data stewardship will be required for the next decade of hyperspectral instruments combined with cloud-resolving models. It will not be sufficient for stewards of future data centers to just provide the public with access to archived data; our experience indicates that data needs to reside close to computers with ultra-large disc farms and tens of thousands of processors to deliver complex services on demand over very high speed networks, much like the offerings of search engines today. Over the first decade of the 21st century, petabyte data records were acquired from the AIRS instrument on Aqua and the MODIS instruments on Aqua and Terra. NOAA data centers also maintain petabytes of operational IR sounder data collected over the past four decades. The UMBC Multicore Computational Center (MC2) developed a Service Oriented Atmospheric Radiance gridding system (SOAR) to allow users to select IR sounding instruments from multiple archives and choose space-time-spectral periods of Level 1B data to download, grid, visualize, and analyze on demand. Providing this service requires high-bandwidth access to the online disks at Goddard. After 10 years, cost-effective disk storage technology finally caught up with the MODIS data volume, making it possible for Level 1B MODIS data to be available online. However, 10Ge fiber optic networks to access large volumes of data are still not available from GSFC to serve the broader community; data transfer rates are well below 10 MB/s, limiting their usefulness for climate studies. During this decade, processor performance hit a power wall, leading computer vendors to design multicore processor chips. High-performance computer systems obtained petaflop performance by clustering tens of thousands of multicore processor chips. Thus, power consumption and autonomic recovery from processor and disc failures have become major cost and technical considerations for future data archives. To address these new architecture requirements, a transparent parallel programming paradigm, the Hadoop MapReduce cloud computing system, became available as an open software system. In addition, the Hadoop File System manages the distribution of data to these processors and backs up the processing in the event of any processor or disc failure. However, to employ this paradigm, the data needs to be stored on the computer system. We conclude this talk with a climate data preservation approach that addresses the scalability crisis posed by exabyte data requirements for the next decade, based on projections of processor, disc data density, and bandwidth doubling rates.
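The MapReduce idiom referred to above can be shown compactly. The sketch below is plain Python rather than Hadoop, gridding hypothetical (lat, lon, radiance) samples in a map phase and averaging each cell in a reduce phase; on Hadoop the emitted pairs would be sharded across nodes and the file system would replicate the inputs:

    from collections import defaultdict

    # Hypothetical samples standing in for Level 1B radiance records.
    samples = [(38.2, -76.1, 0.71), (38.9, -76.4, 0.65), (38.4, -76.9, 0.80)]

    def map_phase(lat, lon, value):
        # Emit a (grid-cell, value) pair keyed by 1-degree cell.
        return (int(lat), int(lon)), value

    def reduce_phase(pairs):
        cells = defaultdict(list)
        for key, value in pairs:
            cells[key].append(value)
        # Reduce: average all samples that fell into the same cell.
        return {key: sum(vals) / len(vals) for key, vals in cells.items()}

    grid = reduce_phase(map_phase(*s) for s in samples)
    print(grid)  # {(38, -76): 0.72}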
Benchmarking NWP Kernels on Multi- and Many-core Processors
NASA Astrophysics Data System (ADS)
Michalakes, J.; Vachharajani, M.
2008-12-01
Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc. (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.
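Goal (1), characterizing kernels by computational intensity, reduces to comparing floating-point work against memory traffic. A back-of-the-envelope Python sketch for a hypothetical 7-point stencil kernel (the operation and byte counts are assumed, not WRF's):

    n = 512                      # grid points per dimension
    flops_per_point = 13         # assumed: 7 multiplies + 6 adds
    bytes_per_point = 8 * 8      # assumed: 7 reads + 1 write, 8 bytes each

    flops = flops_per_point * n ** 3
    bytes_moved = bytes_per_point * n ** 3
    intensity = flops / bytes_moved  # FLOPs per byte

    # A kernel is memory-bandwidth-bound when this intensity falls below the
    # machine's peak-FLOPs / peak-bandwidth ratio (the roofline ridge point).
    print(f"arithmetic intensity = {intensity:.3f} FLOPs/byte")  # 0.203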
Implementation of kernels on the Maestro processor
NASA Astrophysics Data System (ADS)
Suh, Jinwoo; Kang, D. I. D.; Crago, S. P.
Currently, most microprocessors use multiple cores to increase performance while limiting power usage. Some processors use not just a few cores, but tens of cores or even 100 cores. One such many-core microprocessor is the Maestro processor, which is based on Tilera's TILE64 processor. The Maestro chip is a 49-core, general-purpose, radiation-hardened processor designed for space applications. The Maestro processor, unlike the TILE64, has a floating point unit (FPU) in each core for improved floating point performance. The Maestro processor runs at a 342 MHz clock frequency. On the Maestro processor, we implemented several widely used kernels: matrix multiplication, vector add, FIR filter, and FFT. We measured and analyzed the performance of these kernels. The achieved performance was up to 5.7 GFLOPS, and the speedup compared to a single tile was up to 49 when using all 49 tiles.
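Of the kernels listed, the FIR filter is the simplest to state compactly. A minimal sketch using the third-party NumPy library (a single-core reference, not the Maestro implementation, which distributes work across tiles):

    import numpy as np

    def fir(signal: np.ndarray, taps: np.ndarray) -> np.ndarray:
        # Direct-form FIR filter: y[n] = sum_k taps[k] * x[n - k].
        return np.convolve(signal, taps, mode="valid")

    x = np.random.default_rng(0).standard_normal(1024)
    h = np.array([0.25, 0.5, 0.25])  # simple low-pass taps
    y = fir(x, h)
    print(y.shape)  # (1022,) for 1024 samples and 3 taps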
A Primer on High-Throughput Computing for Genomic Selection
Wu, Xiao-Lin; Beissinger, Timothy M.; Bauck, Stewart; Woodward, Brent; Rosa, Guilherme J. M.; Weigel, Kent A.; Gatti, Natalia de Leon; Gianola, Daniel
2011-01-01
High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl, and R, are also very useful for devising pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massively parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin–Madison, which can be leveraged for genomic selection, in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet the increasing computing demands posed by the unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of marker panels to realized genetic gain). Eventually, HTC may change our view of data analysis as well as decision-making in the post-genomic era of selection programs in animals and plants, or in the study of complex diseases in humans. PMID:22303303
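The batch-processing and pipelining ideas above can be sketched with Python's standard multiprocessing module; the trait names and the model-fitting stub are hypothetical:

    from multiprocessing import Pool

    def train_and_predict(trait):
        # Hypothetical stand-in for fitting one trait's genomic-selection model.
        return trait, sum(i * i for i in range(10 ** 5))

    if __name__ == "__main__":
        traits = ["milk_yield", "fertility", "stature"]
        with Pool(processes=3) as pool:
            # Traits are evaluated concurrently instead of sequentially,
            # which is the throughput gain the paper describes.
            for trait, result in pool.imap_unordered(train_and_predict, traits):
                print(trait, result)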
Earth Orbiter 1 (EO-1): Wideband Advanced Recorder and Processor (WARP)
NASA Technical Reports Server (NTRS)
Smith, Terry; Kessler, John
1999-01-01
An overview of the Earth Orbiter 1 (EO-1) Wideband Advanced Recorder and Processor (WARP) is presented in viewgraph form. The WARP is a spacecraft component that receives, stores, and processes high rate science data and its associated ancillary data from multispectral detectors, hyperspectral detectors, and an atmospheric corrector, and then transmits the data via an X-band or S-band transmitter to the ground station. The WARP project goals are: (1) Pathfinder for next generation LANDSAT mission; (2) Flight prove architectures and technologies; and (3) Identify future technology needs.
Computing NLTE Opacities -- Node Level Parallel Calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holladay, Daniel
Presentation. The goal: to produce a robust library capable of computing reasonably accurate opacities in-line, with the assumption of LTE relaxed (non-LTE). Near term: demonstrate acceleration of non-LTE opacity computation. Far term (if funded): connect to application codes with in-line capability, compute opacities, and study science problems. Use efficient algorithms that expose many levels of parallelism and utilize good memory access patterns for use on advanced architectures. Ensure portability to multiple types of hardware, including multicore processors, manycore processors such as KNL, and GPUs. Make the library easily coupled to radiation hydrodynamics and thermal radiative transfer codes.
Effects of script-based role play in cardiopulmonary resuscitation team training.
Chung, Sung Phil; Cho, Junho; Park, Yoo Seok; Kang, Hyung Goo; Kim, Chan Woong; Song, Keun Jeong; Lim, Hoon; Cho, Gyu Chong
2011-08-01
The purpose of this study is to compare the cardiopulmonary resuscitation (CPR) team dynamics and performance between a conventional simulation training group and a script-based training group. This was a prospective randomised controlled trial of educational intervention for CPR team training. Fourteen teams, each consisting of five members, were recruited. The conventional group (C) received training using a didactic lecture and simulation with debriefing, while the script group (S) received training using a resuscitation script. The team activity was evaluated with checklists both before and after 1 week of training. The videotaped simulated resuscitation events were compared in terms of team dynamics and performance aspects. Both groups showed significantly higher leadership scores after training (C: 58.2 ± 9.2 vs. 67.2 ± 9.5, p=0.007; S: 57.9 ± 8.1 vs. 65.4 ± 12.1, p=0.034). However, there were no significant improvements in performance scores in either group after training. There were no differences in the score improvement after training between the two groups in dynamics (C: 9.1 ± 12.6 vs. S: 7.4 ± 13.7, p=0.715), performance (C: 5.5 ± 11.4 vs. S: 4.7 ± 9.6, p=0.838) and total scores (C: 14.6 ± 20.1 vs. S: 12.2 ± 19.5, p=0.726). Script-based CPR team training resulted in comparable improvements in team dynamics scores compared with conventional simulation training. Resuscitation scripts may be used as an adjunct for CPR team training.
ERIC Educational Resources Information Center
Groh, Ashley M.; Roisman, Glenn I.
2009-01-01
This article examines the extent to which secure base script knowledge--as reflected in an adult's ability to generate narratives in which attachment-related threats are recognized, competent help is provided, and the problem is resolved--is associated with adults' autonomic and subjective emotional responses to infant distress and nondistress…
Pilot interaction with automated airborne decision making systems
NASA Technical Reports Server (NTRS)
Hammer, John M.; Wan, C. Yoon; Vasandani, Vijay
1987-01-01
The current research is focused on the detection of human error and protection from its consequences. A program for monitoring pilot error by comparing pilot actions to a script was described previously. It dealt primarily with routine errors (slips) that occurred during checklist activity, and the model to which operator actions were compared was a script. Current research extends this work along both dimensions: the ORS fault detection aid uses a sophisticated device model rather than a script, and the newer initiative, the model-based and constraint-based warning system, uses an even more sophisticated device model and is intended to prevent all types of error, not just slips or bad decisions.
History of Science and Science Museums
NASA Astrophysics Data System (ADS)
Faria, Cláudia; Guilherme, Elsa; Gaspar, Raquel; Boaventura, Diana
2015-10-01
The activities presented in this paper, which are addressed to elementary school students, focus on the pioneering work of the Portuguese King Carlos I in oceanography and involve the exploration of the exhibits belonging to two different science museums, the Aquarium Vasco da Gama and the Maritime Museum. Students were asked to study fish adaptations to the deep sea through the exploration of a fictional story based on historical data and on the work of the King, which served as a guiding script for all the subsequent tasks. In both museums, students had access to historical collections of organisms, oceanographic biological sampling instruments, fishing gear, and ships. They could also observe the characteristics and adaptations of diverse fish species characteristic of the deep sea. The present study aimed to analyse the impact of these activities on students' scientific knowledge, on their understanding of the nature of science, and on the development of transversal skills. The project was very popular with all students. The results obtained suggest that the activity promoted not only the understanding of scientific concepts but also stimulated the development of knowledge about science itself and the construction of scientific knowledge, stressing the relevance of creating activities informed by the history of science. As a final remark, we suggest that the partnership between elementary schools and museums should be seen as an educational project in which the teacher has to assume a key mediating role between the school and the museums.
Leung, Vitus J [Albuquerque, NM; Phillips, Cynthia A [Albuquerque, NM; Bender, Michael A [East Northport, NY; Bunde, David P [Urbana, IL
2009-07-21
In a multiple processor computing apparatus, directional routing restrictions and a logical channel construct permit fault tolerant, deadlock-free routing. Processor allocation can be performed by creating a linear ordering of the processors based on routing rules used for routing communications between the processors. The linear ordering can assume a loop configuration, and bin-packing is applied to this loop configuration. The interconnection of the processors can be conceptualized as a generally rectangular 3-dimensional grid, and the MC allocation algorithm is applied with respect to the 3-dimensional grid.
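A much-simplified, hypothetical sketch of the idea (not the patented MC algorithm): order the processors consistently with the routing rules, treat the ordering as a loop, and allocate each job a contiguous run of free processors on that loop:

    def allocate(free, job_size, n_procs):
        # Find a contiguous run of free processors on the loop ordering.
        in_free = set(free)
        for start in range(n_procs):
            run = [(start + i) % n_procs for i in range(job_size)]  # wrap = loop
            if all(p in in_free for p in run):
                # Contiguity on the linear/loop order keeps the job's messages
                # consistent with the deadlock-free routing rules.
                return run
        return None

    print(allocate([0, 1, 2, 5, 6, 7], job_size=3, n_procs=8))  # [0, 1, 2]
    print(allocate([6, 7, 0], job_size=3, n_procs=8))           # [6, 7, 0]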
An Efficient Functional Test Generation Method For Processors Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Hudec, Ján; Gramatová, Elena
2015-07-01
The paper presents a new functional test generation method for processor testing based on genetic algorithms and evolutionary strategies. The tests are generated over an instruction set architecture and a processor description; such functional tests belong to software-oriented testing. Quality of the tests is evaluated by the code coverage of the processor description achieved during simulation. The presented test generation method uses VHDL models of processors and the professional simulator ModelSim. Rules, parameters, and fitness functions were defined for the various genetic algorithms used in automatic test generation. Functionality and effectiveness were evaluated using the RISC-type processor DP32.
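In outline, individuals are instruction sequences and fitness is the coverage a sequence achieves in simulation. The Python sketch below is schematic: coverage_of is a hypothetical stand-in for running ModelSim on the VHDL model and reading coverage figures back:

    import random

    ISA = ["ADD", "SUB", "LOAD", "STORE", "JMP", "NOP"]  # toy instruction set

    def coverage_of(program):
        # Hypothetical proxy: real fitness comes from simulator code coverage.
        return len(set(program)) / len(ISA)

    def mutate(program):
        p = list(program)
        p[random.randrange(len(p))] = random.choice(ISA)
        return p

    population = [[random.choice(ISA) for _ in range(8)] for _ in range(20)]
    for _ in range(50):
        population.sort(key=coverage_of, reverse=True)
        survivors = population[:10]                     # selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(10)]   # variation

    print(max(coverage_of(p) for p in population))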
NASA Astrophysics Data System (ADS)
Waldman, Amy Sue
I. Protein structure is not easily predicted from the linear sequence of amino acids. An increased ability to create protein structures would allow researchers to develop new peptide-based therapeutics and materials, and would provide insights into the mechanisms of protein folding. Toward this end, we have designed and synthesized two-stranded antiparallel beta-sheet mimics containing conformationally biased scaffolds and semicarbazide, urea, and hydrazide linker groups that attach peptide chains to the scaffold. The mimics exhibited populations of intramolecularly hydrogen-bonded beta-sheet-like conformers as determined by spectroscopic techniques such as FTIR, 1H NMR, and ROESY studies. During our studies, we determined that a urea-hydrazide beta-strand mimic was able to hydrogen bond tightly to peptides in an antiparallel beta-sheet-like configuration. Several derivatives of the urea-hydrazide beta-strand mimic were synthesized. Preliminary data by electron microscopy indicate that the beta-strand mimics have an effect on the folding of Alzheimer's Abeta peptide. These data suggest that the urea-hydrazide beta-strand mimics and related compounds may be developed into therapeutics that affect the folding of the Abeta peptide into neurotoxic aggregates. II. In recent years, there has been concern about the low level of science literacy and science interest among Americans. A declining interest in science impacts the ability of people to make informed decisions about technology. To increase interest in science among secondary students, we have developed the UCI Chemistry Outreach Program to High Schools. The Program features demonstration shows and discussions about chemistry in everyday life. The development and use of show scripts has enabled large numbers of graduate and undergraduate student volunteers to demonstrate chemistry to more than 12,000 local high school students. Teachers, students, and volunteers have expressed their enjoyment of the UCI Chemistry Outreach Program to High Schools.
Modernizing Earth and Space Science Modeling Workflows in the Big Data Era
NASA Astrophysics Data System (ADS)
Kinter, J. L.; Feigelson, E.; Walker, R. J.; Tino, C.
2017-12-01
Modeling is a major aspect of Earth and space science research. The development of numerical models of the Earth system, planetary systems, or astrophysical systems is essential to linking theory with observations. Optimal use of observations that are quite expensive to obtain and maintain typically requires data assimilation that involves numerical models. In the Earth sciences, models of the physical climate system are typically used for data assimilation, climate projection, and inter-disciplinary research, spanning applications from analysis of multi-sensor data sets to decision-making in climate-sensitive sectors with applications to ecosystems, hazards, and various biogeochemical processes. In space physics, most models are from first principles, require considerable expertise to run, and are frequently modified significantly for each case study. The volume and variety of model output data from modeling Earth and space systems are rapidly increasing and have reached a scale where human interaction with data is prohibitively inefficient. A major barrier to progress is that modeling workflows aren't deemed by practitioners to be a design problem. Existing workflows have been created by a slow accretion of software, typically based on undocumented, inflexible scripts haphazardly modified by a succession of scientists and students not trained in modern software engineering methods. As a result, existing modeling workflows suffer from an inability to onboard new datasets into models, an inability to keep pace with accelerating data production rates, and irreproducibility, among other problems. These factors are creating an untenable situation for those conducting and supporting Earth system and space science. Improving modeling workflows requires investments in hardware, software, and human resources. This paper describes the critical-path issues that must be targeted to accelerate modeling workflows, including script modularization, parallelization, and automation in the near term, and longer-term investments in virtualized environments for improved scalability, tolerance for lossy data compression, novel data-centric memory and storage technologies, and tools for peer reviewing, preserving, and sharing workflows, as well as fundamental statistical and machine learning algorithms.
The growth of the UniTree mass storage system at the NASA Center for Computational Sciences
NASA Technical Reports Server (NTRS)
Tarshish, Adina; Salmon, Ellen
1993-01-01
In October 1992, the NASA Center for Computational Sciences made its Convex-based UniTree system generally available to users. The ensuing months saw the growth of near-online data from nil to nearly three terabytes, a doubling of the number of CPU's on the facility's Cray YMP (the primary data source for UniTree), and the necessity for an aggressive regimen for repacking sparse tapes and hierarchical 'vaulting' of old files to freestanding tape. Connectivity was enhanced as well with the addition of UltraNet HiPPI. This paper describes the increasing demands placed on the storage system's performance and throughput that resulted from the significant augmentation of compute-server processor power and network speed.
Communicating Ocean Acidification
ERIC Educational Resources Information Center
Pope, Aaron; Selna, Elizabeth
2013-01-01
Participation in a study circle through the National Network of Ocean and Climate Change Interpretation (NNOCCI) project enabled staff at the California Academy of Sciences to effectively engage visitors on climate change and ocean acidification topics. Strategic framing tactics were used as staff revised the scripted Coral Reef Dive program,…
Agricultural Science Protects Our Environment.
ERIC Educational Resources Information Center
1967
Included are a 49-frame filmstrip and a script for narrating the presentation. The presentation is aimed at the secondary school level with an emphasis on how agricultural scientists investigate problems in farmland erosion, stream pollution, road building erosion problems, air pollution, farm pollution, pesticides, and insect control by biological…
Toward a Science of Cooperation.
ERIC Educational Resources Information Center
Newbern, Dianna; And Others
Scripted cooperative learning and individual learning of descriptive information were compared in a 2-x-2 factorial design with 104 undergraduates. Influenced by models of individual learning and cognition, differences were assessed in (1) information acquisition and retrieval, (2) the quality and quantity of recalled information, and (3) the…
A Client/Server Architecture for Supporting Science Data Using EPICS Version 4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dalesio, Leo
2015-04-21
The Phase 1 grant that serves as a precursor to this proposal prototyped complex storage techniques for the high-speed structured data that is being produced in accelerator diagnostics and beam line experiments. It demonstrates the technologies that can be used to archive and retrieve complex data structures and provide the performance required by our new accelerators, instrumentation, and detectors. Phase 2 is proposed to develop a high-performance platform for data acquisition and analysis to provide physicists and operators a better understanding of the beam dynamics. This proposal includes developing a platform for reading 109 MHz data at 10 kHz rates through a multicore front-end processor and archiving the data to an archive repository that is then indexed for fast retrieval. The data is then retrieved from this archive and integrated with the scalar data to provide data sets to client applications for analysis, for use in feedback, and to aid in identifying problems with the instrumentation, plant, beam steering, or model. This development is built on EPICS version 4, which is being successfully deployed to implement physics applications. Through prior SBIR grants, EPICS version 4 has a solid communication protocol for middle-layer services (PVAccess), structured data representation and methods for efficient transportation and access (PVData), an operational hierarchical record environment (the Java IOC), and prototypes for standard structured data (Normative Types). This work was further developed through project funding to successfully deploy the first service-based physics application environment, with demonstrated services that provide arbitrary object views, save sets, model, lattice, and unit conversion. Thin-client physics applications have been developed in Python that implement quad centering, orbit display, bump control, and slow orbit feedback. This service-based architecture has provided a very modular and robust environment that enables commissioning teams to rapidly develop and deploy small scripts that build on powerful services. These services are all built on relational database data stores and scalar data. The work proposed herein builds on these previous successes to provide data acquisition of high-speed data for online analysis clients.
Automatic Command Sequence Generation
NASA Technical Reports Server (NTRS)
Fisher, Forest; Gladden, Roy; Khanampompan, Teerapat
2007-01-01
Automatic Sequence Generator (Autogen) Version 3.0 software automatically generates command sequences for the Mars Reconnaissance Orbiter (MRO) and several other JPL spacecraft operated by the multi-mission support team. Autogen uses standard JPL sequencing tools like APGEN, ASP, SEQGEN, and the DOM database to automate the generation of uplink command products, Spacecraft Command Message Format (SCMF) files, and the corresponding ground command products, DSN Keywords Files (DKF). Autogen supports all the major multi-mission mission phases, including the cruise, aerobraking, mapping/science, and relay mission phases. Autogen is a Perl script, which functions within the mission operations UNIX environment. It consists of two parts: a set of model files and the autogen Perl script. Autogen encodes the behaviors of the system into a model and encodes algorithms for context-sensitive customizations of the modeled behaviors. The model includes knowledge of different mission phases and how the resultant command products must differ for these phases. The executable software portion of Autogen automates the setup and use of APGEN for constructing a spacecraft activity sequence file (SASF). The setup includes file retrieval through the DOM (Distributed Object Manager), an object database used to store project files. This step retrieves all the needed input files for generating the command products. Depending on the mission phase, Autogen also uses the ASP (Automated Sequence Processor) and SEQGEN to generate the command product sent to the spacecraft. Autogen also provides the means for customizing sequences through the use of configuration files. By automating the majority of the sequence generation process, Autogen eliminates many sequence generation errors commonly introduced by manually constructing spacecraft command sequences. Through the layering of commands into the sequence by a series of scheduling algorithms, users are able to rapidly and reliably construct the desired uplink command products. With the aid of Autogen, sequences may be produced in a matter of hours instead of weeks, with a significant reduction in the number of people on the sequence team. As a result, the uplink product generation process is significantly streamlined and mission risk is significantly reduced. Autogen is used for operations of MRO, Mars Global Surveyor (MGS), Mars Exploration Rover (MER), and Mars Odyssey, and will be used for operations of Phoenix. Autogen Version 3.0 is the operational version of Autogen, including the MRO adaptation for the cruise mission phase, and was also used for development of the aerobraking and mapping mission phases for MRO.
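Autogen itself is Perl, but the orchestration pattern it embodies (retrieve inputs, run each sequencing tool in order, fail fast on error) is easy to sketch in Python. The command lines below are hypothetical placeholders, not the real interfaces of APGEN, ASP, or SEQGEN:

    import subprocess

    DRY_RUN = True  # set False only where the real sequencing tools exist

    def run(cmd):
        # Fail fast: a non-zero exit from any stage aborts product generation.
        print("running:", " ".join(cmd))
        if not DRY_RUN:
            subprocess.run(cmd, check=True)

    def generate_products(phase):
        # Placeholder argument syntax, invented for illustration.
        run(["apgen", "--phase", phase, "--out", "activity.sasf"])
        if phase in ("aerobraking", "mapping"):
            run(["asp", "activity.sasf"])
            run(["seqgen", "activity.sasf", "--scmf", "uplink.scmf"])

    generate_products("mapping")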
Systems and methods for process and user driven dynamic voltage and frequency scaling
Mallik, Arindam [Evanston, IL; Lin, Bin [Hillsboro, OR; Memik, Gokhan [Evanston, IL; Dinda, Peter [Evanston, IL; Dick, Robert [Evanston, IL
2011-03-22
Certain embodiments of the present invention provide a method for power management including determining at least one of an operating frequency and an operating voltage for a processor and configuring the processor based on the determined operating frequency and/or operating voltage. The operating frequency is determined based at least in part on direct user input. The operating voltage is determined based at least in part on an individual profile for the processor.
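A minimal sketch of the decision logic the claims describe, with made-up operating points; a real implementation would read the per-processor profile and program hardware registers:

    # Hypothetical operating points: (frequency MHz, voltage V), lowest first.
    OPERATING_POINTS = [(600, 0.9), (1200, 1.0), (1800, 1.1), (2400, 1.2)]

    def choose_operating_point(user_slider, profile_bias):
        # user_slider in [0, 1] is the direct user input (battery vs. speed);
        # profile_bias is the individual profile for this processor.
        level = min(max(user_slider + profile_bias, 0.0), 1.0)
        index = min(int(level * len(OPERATING_POINTS)), len(OPERATING_POINTS) - 1)
        return OPERATING_POINTS[index]

    print(choose_operating_point(user_slider=0.8, profile_bias=-0.1))  # (1800, 1.1)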
Finite elements and the method of conjugate gradients on a concurrent processor
NASA Technical Reports Server (NTRS)
Lyzenga, G. A.; Raefsky, A.; Hager, G. H.
1985-01-01
An algorithm for the iterative solution of finite element problems on a concurrent processor is presented. The method of conjugate gradients is used to solve the system of matrix equations, which is distributed among the processors of a MIMD computer according to an element-based spatial decomposition. This algorithm is implemented in a two-dimensional elastostatics program on the Caltech Hypercube concurrent processor. The results of tests on up to 32 processors show nearly linear concurrent speedup, with efficiencies over 90 percent for sufficiently large problems.
Finite elements and the method of conjugate gradients on a concurrent processor
NASA Technical Reports Server (NTRS)
Lyzenga, G. A.; Raefsky, A.; Hager, B. H.
1984-01-01
An algorithm for the iterative solution of finite element problems on a concurrent processor is presented. The method of conjugate gradients is used to solve the system of matrix equations, which is distributed among the processors of a MIMD computer according to an element-based spatial decomposition. This algorithm is implemented in a two-dimensional elastostatics program on the Caltech Hypercube concurrent processor. The results of tests on up to 32 processors show nearly linear concurrent speedup, with efficiencies over 90% for sufficiently large problems.
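For reference, the sequential core of the conjugate gradient iteration that both abstracts parallelize; a minimal sketch using the third-party NumPy library for a symmetric positive-definite system Ax = b (the hypercube version distributes the matrix-vector product across processors):

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        x = np.zeros_like(b)
        r = b - A @ x                  # initial residual
        p = r.copy()                   # initial search direction
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p                 # the step distributed across processors
            alpha = rs / (p @ Ap)      # in the element-based decomposition
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if rs_new < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(conjugate_gradient(A, b))  # ~[0.0909, 0.6364]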
Design of 3D simulation engine for oilfield safety training
NASA Astrophysics Data System (ADS)
Li, Hua-Ming; Kang, Bao-Sheng
2015-03-01
Aiming at the demand for rapid custom development of 3D simulation systems for oilfield safety training, this paper designs and implements a 3D simulation engine based on a script-driven method, a multi-layer structure, pre-defined entity objects, and high-level tools such as a scene editor, script editor, and program loader. A scripting language has been defined to control the system's progress, events, and operating results. A training teacher can use this engine to edit 3D virtual scenes, set the properties of entity objects, define the logic script of a task, and produce a 3D simulation training system without any programming skills. By extending the entity classes, this engine can be quickly applied to other virtual training areas.
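The script-driven pattern can be shown in miniature: a tiny interpreter dispatches commands from a training script to engine handlers, so teachers edit the script rather than the engine. The command names below are invented for illustration:

    # Hypothetical training-script commands, one per line: verb then arguments.
    SCRIPT = """\
    load_scene derrick_yard
    spawn_entity valve_01 leaking
    on_event valve_01_closed show_text Well done
    """

    def load_scene(name):
        print("scene:", name)

    def spawn_entity(eid, state):
        print("entity:", eid, "state:", state)

    def on_event(event, *action):
        print("when", event, "->", " ".join(action))

    HANDLERS = {"load_scene": load_scene, "spawn_entity": spawn_entity,
                "on_event": on_event}

    for line in SCRIPT.splitlines():
        verb, *args = line.split()
        HANDLERS[verb](*args)  # teachers edit the script, not the engine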
ERIC Educational Resources Information Center
Umemura, Tomotaka; Watanabe, Manami; Tazuke, Kohei; Asada-Hirano, Shintaro; Kudo, Shimpei
2018-01-01
The universality of secure base construct, which suggests that one's use of an attachment figure as a secure base from which to explore the environment is an evolutionary outcome, is one of the core ideas of attachment theory. However, this universality idea has been critiqued because exploration is not as valued in Japanese culture as it is in…
NASA Astrophysics Data System (ADS)
Esepkina, N. A.; Lavrov, A. P.; Anan'ev, M. N.; Blagodarnyi, V. S.; Ivanov, S. I.; Mansyrev, M. I.; Molodyakov, S. A.
1995-10-01
Two new types of optoelectronic radio-signal processors were investigated. Charge-coupled device (CCD) photodetectors are used in these processors under continuous scanning conditions, i.e. in a time delay and storage mode. One of these processors is based on a CCD photodetector array with a reference-signal amplitude transparency; the other is an adaptive acousto-optical processor for signals with linear frequency modulation. The processor with the transparency performs multichannel discrete-analogue convolution of an input signal with a corresponding kernel of the transformation determined by the transparency. If the light source is an array of light-emitting diodes of special (stripe) geometry, the optical stages of the processor can be made from optical fibre components and the whole processor then becomes a rigid 'sandwich' (a compact hybrid optoelectronic microcircuit). A report is also given of a study of a prototype processor with optical fibre components for the reception of signals from a system with antenna aperture synthesis, which forms a radio image of the Earth.
Digital Electronics for Nuclear Physics Experiments
NASA Astrophysics Data System (ADS)
Skulski, Wojtek; Hunter, David; Druszkiewicz, Eryk; Khaitan, Dev Ashish; Yin, Jun; Wolfs, Frank; SkuTek Instrumentation Team; Department of Physics; Astronomy, University of Rochester Team
2015-10-01
Future detectors in nuclear physics will use signal sampling as one of the primary techniques of data acquisition. Using the digitized waveforms, the electronics can select events based on pulse shape, total energy, multiplicity, and the hit pattern. The DAQ for the LZ Dark Matter detector, now under development in Rochester, is a good example of the power of digital signal processing. This system, designed around 32-channel, FPGA-based digital signal processors, collects data from more than one thousand channels. The solutions developed for this DAQ can be applied to nuclear physics experiments. Supported by the Department of Energy Office of Science under Grant DE-SC0009543.
Database for LDV Signal Processor Performance Analysis
NASA Technical Reports Server (NTRS)
Baker, Glenn D.; Murphy, R. Jay; Meyers, James F.
1989-01-01
A comparative and quantitative analysis of various laser velocimeter signal processors is difficult because standards for characterizing signal bursts have not been established. This leaves the researcher to select a signal processor based only on manufacturers' claims without the benefit of direct comparison. The present paper proposes the use of a database of digitized signal bursts obtained from a laser velocimeter under various configurations as a method for directly comparing signal processors.
The Use of a Microcomputer Based Array Processor for Real Time Laser Velocimeter Data Processing
NASA Technical Reports Server (NTRS)
Meyers, James F.
1990-01-01
The application of an array processor to laser velocimeter data processing is presented. The hardware is described along with the method of parallel programming required by the array processor. A portion of the data processing program is described in detail. The increase in computational speed of a microcomputer equipped with an array processor is illustrated by comparative testing with a minicomputer.
Consumer acceptance of irradiated food: theory and reality
NASA Astrophysics Data System (ADS)
Bruhn, Christine M.
1998-06-01
For years, most consumers have expressed less concern about food irradiation than about other food processing technologies. Attitude studies have demonstrated that when given science-based information, from 60% to 90% of consumers prefer the advantages irradiation processing provides. When information is accompanied by samples, acceptance may increase to 99%. Information on irradiation should describe product benefits, safety, and wholesomeness, address environmental safety issues, and include endorsements by recognized health authorities. Educational and marketing programs should now be directed toward retailers and processors. Given the opportunity, consumers will buy high quality, safety-enhanced irradiated food.
Architecture, Design, and Development of an HTML/JavaScript Web-Based Group Support System.
ERIC Educational Resources Information Center
Romano, Nicholas C., Jr.; Nunamaker, Jay F., Jr.; Briggs, Robert O.; Vogel, Douglas R.
1998-01-01
Examines the need for virtual workspaces and describes the architecture, design, and development of GroupSystems for the World Wide Web (GSWeb), an HTML/JavaScript Web-based Group Support System (GSS). GSWeb, an application interface similar to a Graphical User Interface (GUI), is currently used by teams around the world and relies on user…
Trumbell, Jill M; Hibel, Leah C; Mercado, Evelyn; Posada, Germán
2018-06-21
The current study examines associations between marital conflict and negative parenting behaviors among fathers and mothers, and the extent to which internal working models (IWMs) of attachment relationships may serve as sources of risk or resilience during family interactions. The sample consisted of 115 families (mothers, fathers, and their 6-month-old infants) who participated in a controlled experiment. Couples were randomly assigned to engage in either a conflict or positive marital discussion, followed by parent-infant freeplay sessions and assessment of parental IWMs of attachment (i.e., secure base script knowledge). While no differences in parenting behaviors emerged between the conflict and positive groups, findings revealed that couple withdrawal during the marital discussion was related to more intrusive and emotionally disengaged parenting for mothers and fathers. Interestingly, secure base script knowledge was inversely related to intrusion and emotional disengagement for fathers, but not for mothers. Furthermore, only among fathers did secure base script knowledge serve to significantly buffer the impact of marital disengagement on negative parenting (emotional disengagement). Findings are discussed using a family systems framework and expand our understanding of families, and family members, at risk. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Parallel processor-based raster graphics system architecture
Littlefield, Richard J.
1990-01-01
An apparatus for generating raster graphics images from the graphics command stream includes a plurality of graphics processors connected in parallel, each adapted to receive any part of the graphics command stream for processing the command stream part into pixel data. The apparatus also includes a frame buffer for mapping the pixel data to pixel locations and an interconnection network for interconnecting the graphics processors to the frame buffer. Through the interconnection network, each graphics processor may access any part of the frame buffer concurrently with another graphics processor accessing any other part of the frame buffer. The plurality of graphics processors can thereby transmit concurrently pixel data to pixel locations in the frame buffer.
MULTI-CORE AND OPTICAL PROCESSOR RELATED APPLICATIONS RESEARCH AT OAK RIDGE NATIONAL LABORATORY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barhen, Jacob; Kerekes, Ryan A; ST Charles, Jesse Lee
2008-01-01
High-speed parallelization of common tasks holds great promise as a low-risk approach to achieving the significant increases in signal processing and computational performance required for next generation innovations in reconfigurable radio systems. Researchers at the Oak Ridge National Laboratory have been working on exploiting the parallelization offered by this emerging technology and applying it to a variety of problems. This paper will highlight recent experience with three different parallel processors applied to signal processing tasks that are directly relevant to signal processing required for SDR/CR waveforms. The first is the EnLight Optical Core Processor, applied to matched filter (MF) correlation processing via fast Fourier transform (FFT) of broadband Doppler-sensitive waveforms (DSW) using active sonar arrays for target tracking. The second is the IBM Cell Broadband Engine, applied to a 2-D discrete Fourier transform (DFT) kernel for image processing and frequency domain processing. The third is the NVIDIA graphical processor, applied to document feature clustering. EnLight Optical Core Processor. Optical processing is inherently capable of high parallelism that can be translated to very high performance, low power dissipation computing. The EnLight 256 is a small form factor signal processing chip (5x5 cm2) with a digital optical core that is being developed by an Israeli startup company. As part of its evaluation of foreign technology, ORNL's Center for Engineering Science Advanced Research (CESAR) had access to precursor EnLight 64 Alpha hardware for a preliminary assessment of capabilities in terms of large Fourier transforms for matched filter banks and on applications related to Doppler-sensitive waveforms. This processor is optimized for array operations, which it performs in fixed-point arithmetic at the rate of 16 TeraOPS at 8-bit precision. This is approximately 1000 times faster than the fastest DSP available today. The optical core performs the matrix-vector multiplications, where the nominal matrix size is 256x256. The system clock is 125 MHz. At each clock cycle, 128K multiply-and-add operations are carried out, which at this clock rate yields a peak performance of 16 TeraOPS (125 MHz x 131,072 operations per cycle). IBM Cell Broadband Engine. The Cell processor is the product of 5 years of sustained, intensive R&D collaboration (involving over $400M investment) between IBM, Sony, and Toshiba. Its architecture comprises one multithreaded 64-bit PowerPC processor element (PPE) with VMX capabilities and two levels of globally coherent cache, and 8 synergistic processor elements (SPEs). Each SPE consists of a processor (SPU) designed for streaming workloads, local memory, and a globally coherent direct memory access (DMA) engine. Computations are performed in 128-bit wide single instruction multiple data streams (SIMD). An integrated high-bandwidth element interconnect bus (EIB) connects the nine processors and their ports to external memory and to system I/O. The Applied Software Engineering Research (ASER) Group at ORNL is applying the Cell to a variety of text and image analysis applications. Research on Cell-equipped PlayStation3 (PS3) consoles has led to the development of a correlation-based image recognition engine that enables a single PS3 to process images at more than 10X the speed of state-of-the-art single-core processors. NVIDIA Graphics Processing Units.
The ASER group is also employing the latest NVIDIA graphical processing units (GPUs) to accelerate the clustering of thousands of text documents using recently developed clustering algorithms such as document flocking and affinity propagation.
[About healing with nature and about love for the patient].
Schipperges, H
1994-03-29
In the light of a nature-centered philosophy and its image of man and universe, the medical science of Paracelsus appears 'enlightened by nature'. Based on this experience, the physician founds his medical art on the 'four pillars' ('Philosophia', 'Astronomia', 'Alchimia', 'Physica'). Acting out of the healing power of nature would be incomplete if 'Physica' were not accompanied by 'Virtus', the ethical component of all medical action. Paracelsus describes this in the image of mercy ('Barmherzigkeit'), in which all medical action finds its final motivation. The text therefore concentrates in particular on what Paracelsus describes as 'love for the patient'.
Acquisition and Maintenance of Scripts in Aphasia: A Comparison of Two Cuing Conditions
Cherney, Leora R.; Kaye, Rosalind C.; van Vuuren, Sarel
2014-01-01
Purpose This study was designed to compare acquisition and maintenance of scripts under two conditions: High Cue, which provided numerous multimodality cues designed to minimize errors, and Low Cue, which provided minimal cues. Methods In a randomized controlled cross-over study, eight individuals with chronic aphasia received intensive computer-based script training under two cuing conditions. Each condition lasted three weeks, with a three-week washout period. Trained and untrained scripts were probed for accuracy and rate at baseline, during treatment, immediately post-treatment, and at three and six weeks post-treatment. Significance testing was conducted on gain scores, and effect sizes were calculated. Results Training resulted in significant gains in script acquisition with maintenance of skills at three and six weeks post-treatment. Differences between cuing conditions were not significant. When severity of aphasia was considered, there also were no significant differences between conditions, although the magnitude of change was greater in the High Cue condition than in the Low Cue condition for those with more severe aphasia. Conclusions Both cuing conditions were effective for acquisition and maintenance of scripts. The High Cue condition may be advantageous for those with more severe aphasia. Findings support the clinical use of script training and the importance of considering aphasia severity. PMID:24686911
Handwritten numeral databases of Indian scripts and multistage recognition of mixed numerals.
Bhattacharya, Ujjwal; Chaudhuri, B B
2009-03-01
This article primarily concerns the problem of isolated handwritten numeral recognition for major Indian scripts. The principal contributions presented here are (a) the pioneering development of two databases of handwritten numerals for the two most popular Indian scripts, (b) a multistage cascaded recognition scheme using wavelet-based multiresolution representations and multilayer perceptron classifiers, and (c) the application of (b) to the recognition of mixed handwritten numerals of three scripts: Devanagari, Bangla, and English. The present databases include, respectively, 22,556 and 23,392 handwritten isolated numeral samples of Devanagari and Bangla collected from real-life situations, and these can be made available free of cost to researchers at other academic institutions. In the proposed scheme, a numeral is subjected to three multilayer perceptron classifiers corresponding to three coarse-to-fine resolution levels in a cascaded manner. If rejection occurs even at the highest resolution, another multilayer perceptron is used as the final attempt to recognize the input numeral by combining the outputs of the three classifiers of the previous stages. This scheme has been extended to the situation where the script of a document is not known a priori or the numerals written on a document belong to different scripts. Handwritten numerals in mixed scripts are frequently found in Indian postal mail and table-form documents.
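The cascade-with-rejection logic can be sketched compactly. The Python fragment below is not the authors' code: the downsampling step, the 0.9 acceptance threshold, and the stub classifiers are all assumptions, and simple averaging stands in for the paper's fourth, combining perceptron:

    import numpy as np

    def classify_numeral(image, stages, threshold=0.9):
        # stages: coarse-to-fine list of (predict_proba, resolution) pairs.
        outputs = []
        for predict_proba, res in stages:
            probs = predict_proba(downsample(image, res))
            outputs.append(probs)
            if probs.max() >= threshold:     # confident enough: accept here
                return int(np.argmax(probs))
        # rejected at every resolution: fuse stage outputs as a final attempt
        return int(np.argmax(np.mean(outputs, axis=0)))

    def downsample(image, res):
        step = image.shape[0] // res
        return image[::step, ::step]

    # toy stand-ins for the trained MLPs (one per resolution)
    def make_stub(confidence):
        def predict_proba(x):
            p = np.full(10, (1 - confidence) / 9)
            p[7] = confidence                # pretend the numeral is a '7'
            return p
        return predict_proba

    stages = [(make_stub(0.5), 8), (make_stub(0.7), 16), (make_stub(0.95), 32)]
    print(classify_numeral(np.zeros((32, 32)), stages))  # accepted at stage 3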
An Idealized, Single Radial Swirler, Lean-Direct-Injection (LDI) Concept Meshing Script
NASA Technical Reports Server (NTRS)
Iannetti, Anthony C.; Thompson, Daniel
2008-01-01
To easily study combustor design parameters using computational fluid dynamics (CFD) codes, a Gridgen Glyph-based macro (based on the Tcl scripting language) dubbed BladeMaker has been developed for the meshing of an idealized, single radial swirler, lean-direct-injection (LDI) combustor. BladeMaker is capable of taking in a number of parameters, such as blade width, blade tilt with respect to the perpendicular, swirler cup radius, and grid densities, and producing a three-dimensional meshed radial swirler with a can-annular (canned) combustor. This complex script produces a data format suitable for, but not specific to, the National Combustion Code (NCC), a state-of-the-art CFD code developed for reacting flow processes.
SQL is Dead; Long-live SQL: Relational Database Technology in Science Contexts
NASA Astrophysics Data System (ADS)
Howe, B.; Halperin, D.
2014-12-01
Relational databases are often perceived as a poor fit in science contexts: rigid schemas, poor support for complex analytics, unpredictable performance, and significant maintenance and tuning requirements --- these idiosyncrasies often make databases unattractive in science contexts characterized by heterogeneous data sources, complex analysis tasks, rapidly changing requirements, and limited IT budgets. In this talk, I'll argue that although the value proposition of typical relational database systems is weak in science, the core ideas that power relational databases have become incredibly prolific in open source science software and are emerging as a universal abstraction for both big data and small data. In addition, I'll talk about two open source systems we are building to "jailbreak" the core technology of relational databases and adapt it for use in science. The first is SQLShare, a Database-as-a-Service system supporting collaborative data analysis and exchange by reducing database use to an Upload-Query-Share workflow with no installation, schema design, or configuration required. The second is Myria, a service that supports much larger-scale data and complex analytics, and supports multiple back-end systems. Finally, I'll describe some of the ways our collaborators in oceanography, astronomy, biology, fisheries science, and more are using these systems to replace script-based workflows for reasons of performance, flexibility, and convenience.
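The Upload-Query-Share pattern is easy to mimic locally. The toy Python/sqlite3 sketch below is only an analogy (SQLShare itself is a hosted web service, and the table and column names here are invented):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    rows = [("stn1", 12.3), ("stn2", 14.1), ("stn1", 11.8)]
    conn.execute("CREATE TABLE casts (station TEXT, temp_c REAL)")    # upload
    conn.executemany("INSERT INTO casts VALUES (?, ?)", rows)
    query = "SELECT station, AVG(temp_c) FROM casts GROUP BY station" # query
    for station, mean_t in conn.execute(query):
        print(station, round(mean_t, 2))                              # share

The point of the talk's argument is that this declarative step, not the surrounding server machinery, is what carries over into science software.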
The effect of written text on comprehension of spoken English as a foreign language.
Diao, Yali; Chandler, Paul; Sweller, John
2007-01-01
Based on cognitive load theory, this study investigated the effect of simultaneous written presentations on comprehension of spoken English as a foreign language. Learners' language comprehension was compared while they used 3 instructional formats: listening with auditory materials only, listening with a full, written script, and listening with simultaneous subtitled text. Listening with the presence of a script and subtitles led to better understanding of the scripted and subtitled passage but poorer performance on a subsequent auditory passage than listening with the auditory materials only. These findings indicated that where the intention was learning to listen, the use of a full script or subtitles had detrimental effects on the construction and automation of listening comprehension schemas.
Texture for script identification.
Busch, Andrew; Boles, Wageeh W; Sridharan, Sridha
2005-11-01
The problem of determining the script and language of a document image has a number of important applications in the field of document analysis, such as indexing and sorting of large collections of such images, or as a precursor to optical character recognition (OCR). In this paper, we investigate the use of texture as a tool for determining the script of a document image, based on the observation that text has a distinct visual texture. An experimental evaluation of a number of commonly used texture features is conducted on a newly created script database, providing a qualitative measure of which features are most appropriate for this task. Strategies for improving classification results in situations with limited training data and multiple font types are also proposed.
Chen, Cory K; Waters, Harriet Salatas; Hartman, Marilyn; Zimmerman, Sheryl; Miklowitz, David J; Waters, Everett
2013-01-01
This study explores links between adults' attachment representations and the task of caring for elderly parents with dementia. Participants were 87 adults serving as primary caregivers of a parent or parent-in-law with dementia. Waters and Waters' (2006) Attachment Script Assessment was adapted to assess script-like attachment representation in the context of caring for their elderly parent. The quality of adult-elderly parent interactions was assessed using the Level of Expressed Emotions Scale (Cole & Kazarian, 1988) and self-report measures of caregivers' perception of caregiving as difficult. Caregivers' secure base script knowledge predicted lower levels of negative expressed emotion. This effect was moderated by the extent to which participants experienced caring for elderly parents as difficult. Attachment representations played a greater role in caregiving when caregiving tasks were perceived as more difficult. These results support the hypothesis that attachment representations influence the quality of care that adults provide their elderly parents. Clinical implications are discussed.
galaxie--CGI scripts for sequence identification through automated phylogenetic analysis.
Nilsson, R Henrik; Larsson, Karl-Henrik; Ursing, Björn M
2004-06-12
The prevalent use of similarity searches such as BLAST to identify sequences and species implicitly assumes that the reference database has extensive sequence sampling. This is often not the case, limiting the reliability of the outcome as a basis for sequence identification. Phylogenetic inference outperforms similarity searches in retrieving correct phylogenies, and consequently sequence identities, and a project was initiated to design a freely available script package for sequence identification through automated Web-based phylogenetic analysis. Three CGI scripts were designed to facilitate qualified sequence identification from a Web interface. Query sequences are aligned to pre-made alignments or to alignments made by ClustalW with entries retrieved from a BLAST search. The subsequent phylogenetic analysis is based on the PHYLIP package for inferring neighbor-joining and parsimony trees. The scripts are highly configurable. A service installation and a version for local use are found at http://andromeda.botany.gu.se/galaxiewelcome.html and http://galaxie.cgb.ki.se
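As a pocket illustration of the neighbor-joining step that galaxie delegates to PHYLIP, the self-contained Python sketch below builds an unrooted topology from a toy distance matrix (labels only, branch lengths omitted; this is not the galaxie code):

    import numpy as np

    def neighbor_joining(D, names):
        D = D.astype(float).copy()
        nodes = list(names)
        while len(nodes) > 2:
            n = len(nodes)
            r = D.sum(axis=1)
            Q = (n - 2) * D - r[:, None] - r[None, :]   # standard Q-matrix
            np.fill_diagonal(Q, np.inf)
            i, j = np.unravel_index(np.argmin(Q), Q.shape)
            new = f"({nodes[i]},{nodes[j]})"            # Newick-style label
            dk = 0.5 * (D[i] + D[j] - D[i, j])          # distances to new node
            keep = [k for k in range(n) if k not in (i, j)]
            D = np.vstack([np.column_stack([D[np.ix_(keep, keep)],
                                            dk[keep, None]]),
                           np.append(dk[keep], 0.0)])
            nodes = [nodes[k] for k in keep] + [new]
        return f"({nodes[0]},{nodes[1]});"

    D = np.array([[0, 5, 9, 9], [5, 0, 10, 10],
                  [9, 10, 0, 8], [9, 10, 8, 0]])
    print(neighbor_joining(D, ["A", "B", "C", "D"]))    # groups A,B and C,D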
A comprehensive test of clinical reasoning for medical students: An olympiad experience in Iran.
Monajemi, Alireza; Arabshahi, Kamran Soltani; Soltani, Akbar; Arbabi, Farshid; Akbari, Roghieh; Custers, Eugene; Hadadgar, Arash; Hadizadeh, Fatemeh; Changiz, Tahereh; Adibi, Peyman
2012-01-01
Although some tests for clinical reasoning assessment are now available, theories of medical expertise have not played a major role in this field. In this paper, illness script theory was chosen as the theoretical framework, and contemporary clinical reasoning tests were put together based on this theoretical model. This paper is a qualitative study performed with an action research approach. This style of research is performed in a context where authorities focus on promoting their organizations' performance, and it is carried out in the form of teamwork called participatory research. Results are presented in four parts: basic concepts, clinical reasoning assessment, test framework, and scoring. We concluded that no single test could thoroughly assess clinical reasoning competency, and therefore a battery of clinical reasoning tests is needed. This battery should cover all three parts of the clinical reasoning process: script activation, selection, and verification. In addition, both analytical and non-analytical reasoning, as well as both diagnostic and management reasoning, should be given even consideration in this battery. This paper explains the process of designing and implementing the battery of clinical reasoning tests in the Olympiad for medical sciences students through action research.
NASA Astrophysics Data System (ADS)
Schechinger, Linda Sue
I. To investigate the delivery of nucleotide-based drugs, we are studying molecular recognition of nucleotide derivatives in environments that are similar to cell membranes. The Nowick group previously discovered that membrane-like tetradecyltrimethylammonium bromide (TTAB) surfactant micelles facilitate molecular recognition of adenosine monophosphate (AMP). The micelles bind nucleotides by means of electrostatic interactions and hydrogen bonding. We observed binding by following 1H NMR chemical shift changes of unique hexylthymine protons upon addition of AMP. Cationic micelles are required for binding: in surfactant-free or sodium dodecylsulfate solutions, no hydrogen bonding is observed. These observations suggest that the cationic surfactant headgroups bind the nucleotide phosphate group, while the intramicellar base binds the nucleotide base. The micellar system was optimized to enhance binding and selectivity for adenosine nucleotides. The selectivity for adenosine and for the number of phosphate groups attached to the adenosine were both investigated. Addition of cytidine, guanosine, or uridine monophosphates results in no significant downfield shifting of the NH resonance. Selectivity for the phosphate is limited, since adenosine mono-, di-, and triphosphates all have similar binding constants. We successfully achieved molecular recognition of adenosine nucleotides in micellar environments; there is a significant difference in the binding interactions between adenosine nucleotides and the three other natural nucleotides. II. The UCI Chemistry Outreach Program (UCICOP) addresses the declining interest of the nation's youth in science. UCICOP brings fun and exciting chemistry experiments to local high schools to remind students that science is fun and has many practical uses. Volunteer students and alumni of UCI perform the demonstrations using scripts and materials provided by UCICOP. The preparation of scripts and materials is done by two coordinators, who organize the program and provide continuity. The success of UCICOP can be measured by the high praise and gratitude expressed by the teachers, students, and volunteers.
A fully reconfigurable photonic integrated signal processor
NASA Astrophysics Data System (ADS)
Liu, Weilin; Li, Ming; Guzzon, Robert S.; Norberg, Erik J.; Parker, John S.; Lu, Mingzhi; Coldren, Larry A.; Yao, Jianping
2016-03-01
Photonic signal processing has been considered a solution to overcome the inherent electronic speed limitations. Over the past few years, an impressive range of photonic integrated signal processors have been proposed, but they usually offer limited reconfigurability, a feature highly needed for the implementation of large-scale general-purpose photonic signal processors. Here, we report and experimentally demonstrate a fully reconfigurable photonic integrated signal processor based on an InP-InGaAsP material system. The proposed photonic signal processor is capable of performing reconfigurable signal processing functions including temporal integration, temporal differentiation and Hilbert transformation. The reconfigurability is achieved by controlling the injection currents to the active components of the signal processor. Our demonstration suggests great potential for chip-scale fully programmable all-optical signal processing.
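The three processing functions have familiar digital counterparts, which may help place them. The numpy/scipy sketch below computes the discrete analogues on a sampled tone; it is unrelated to the photonic implementation, and the sample rate and tone frequency are invented:

    import numpy as np
    from scipy.signal import hilbert

    fs = 1000.0
    t = np.arange(0, 1, 1 / fs)
    x = np.cos(2 * np.pi * 5 * t)

    integral = np.cumsum(x) / fs          # temporal integration (running sum)
    derivative = np.gradient(x, 1 / fs)   # temporal differentiation
    envelope = np.abs(hilbert(x))         # Hilbert transform -> envelope
    print(round(float(envelope[100:900].mean()), 3))   # ~1.0 for a unit tone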
Neurovision processor for designing intelligent sensors
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Knopf, George K.
1992-03-01
A programmable multi-task neuro-vision processor, called the Positive-Negative (PN) neural processor, is proposed as a plausible hardware mechanism for constructing robust multi-task vision sensors. The computational operations performed by the PN neural processor are loosely based on the neural activity fields exhibited by certain nervous tissue layers situated in the brain. The neuro-vision processor can be programmed to generate diverse dynamic behavior that may be used for spatio-temporal stabilization (STS), short-term visual memory (STVM), spatio-temporal filtering (STF) and pulse frequency modulation (PFM). A multi-functional vision sensor that performs a variety of information processing operations on time-varying two-dimensional sensory images can be constructed from a parallel and hierarchical structure of numerous individually programmed PN neural processors.
Programme for Monitoring of the Greenland Ice Sheet - Ice Surface Velocities
NASA Astrophysics Data System (ADS)
Andersen, S. B.; Ahlstrom, A. P.; Boncori, J. M.; Dall, J.
2011-12-01
In 2007, the Danish Ministry of Climate and Energy launched the Programme for Monitoring of the Greenland Ice Sheet (PROMICE) as an ongoing effort to assess changes in the mass budget of the Greenland Ice Sheet. Iceberg calving from the outlet glaciers of the Greenland Ice Sheet, often termed the ice-dynamic mass loss, has been responsible for an important part of the mass loss during the last decade. To quantify this part of the mass loss, we combine airborne surveys yielding ice-sheet thickness along the entire margin with surface velocities derived from satellite synthetic-aperture radar (SAR). In order to derive ice sheet surface velocities from SAR, a processing chain has been developed for GEUS by DTU Space, based on a commercial software package distributed by GAMMA Remote Sensing. The processor, named SUSIE (Scripts and Utilities for SAR Ice-motion Estimation), can use both differential SAR interferometry and offset-tracking techniques to measure the horizontal velocity components, also providing an estimate of the corresponding measurement error. So far, surface velocities have been derived for a number of sites, including Nioghalvfjerdsfjord Glacier, the Kangerlussuaq region, the Nuuk region, Helheim Glacier and Daugaard-Jensen Glacier, using data from ERS-1/ERS-2, ENVISAT ASAR and ALOS PALSAR. Here we present these first results.
NASA Astrophysics Data System (ADS)
Blok, A. S.; Bukhenskii, A. F.; Krupitskii, É. I.; Morozov, S. V.; Pelevin, V. Yu; Sergeenko, T. N.; Yakovlev, V. I.
1995-10-01
An investigation is reported of acousto-optical and fibre-optic Fourier processors for electric signals, based on semiconductor lasers. A description is given of practical acousto-optical processors with an analysis band 120 MHz wide, a resolution of 200 kHz, and dimensions of 7 cm × 8 cm × 18 cm. Fibre-optic Fourier processors are considered: they represent a new class of devices which are promising for the processing of gigahertz signals.
Speer, Stefan; Klein, Andreas; Kober, Lukas; Weiss, Alexander; Yohannes, Indra; Bert, Christoph
2017-08-01
Intensity-modulated radiotherapy (IMRT) techniques are now standard practice. IMRT or volumetric-modulated arc therapy (VMAT) allows treatment of the tumor while simultaneously sparing organs at risk. Nevertheless, treatment plan quality still depends on the physicist's individual skills, experience, and personal preferences. It would therefore be advantageous to automate the planning process. This possibility is offered by the Pinnacle 3 treatment planning system (Philips Healthcare, Hamburg, Germany) via its scripting language or Auto-Planning (AP) module. AP module results were compared to in-house scripts and manually optimized treatment plans for standard head and neck cancer plans. Multiple treatment parameters were scored to judge plan quality (100 points = optimum plan). Patients were initially planned manually by different physicists and re-planned using scripts or AP. Script-based head and neck plans achieved a mean of 67.0 points and were, on average, superior to manually created plans (59.1 points) and AP plans (62.3 points). Moreover, they are characterized by reproducibility and lower standard deviation of treatment parameters. Even less experienced staff are able to create at least a good starting point for further optimization in a short time. However, for particular plans, experienced planners perform even better than scripts or AP. Experienced-user input is needed when setting up scripts or AP templates for the first time. Moreover, some minor drawbacks exist, such as the increase in monitor units (+35.5% for scripted plans). On average, automatically created plans are superior to manually created treatment plans. For particular plans, experienced physicists were able to perform better than scripts or AP; thus, the benefit is greatest when time is short or staff inexperienced.
FPGA-based multiprocessor system for injection molding control.
Muñoz-Barron, Benigno; Morales-Velazquez, Luis; Romero-Troncoso, Rene J; Rodriguez-Donate, Carlos; Trejo-Hernandez, Miguel; Benitez-Rangel, Juan P; Osornio-Rios, Roque A
2012-10-18
The plastic industry is a very important manufacturing sector, and injection molding is a widely used forming method in that industry. The contribution of this work is the development of a strategy to retrofit control of an injection molding machine, based on an embedded microprocessor sensor network on a field programmable gate array (FPGA) device. Six types of embedded processors are included in the system: a smart-sensor processor, a micro fuzzy logic controller, a programmable logic controller, a system manager, an I/O processor and a communication processor. Temperature, pressure and position are controlled by the proposed system, and experimental results show its feasibility and robustness. As validation of the present work, a particular sample was successfully injected.
NASA Technical Reports Server (NTRS)
Rincon, Rafael F.
2008-01-01
The reconfigurable L-Band radar is an ongoing development at NASA/GSFC that exploits the capability inherent in phased array radar systems, combined with a state-of-the-art data acquisition and real-time processor, to enable multi-mode measurement techniques in a single radar architecture. The development leverages the L-Band Imaging Scatterometer, a radar system designed for the development and testing of new radar techniques, and the custom-built DBSAR processor, a highly reconfigurable, high speed data acquisition and processing system. The radar modes currently implemented include scatterometer, synthetic aperture radar, and altimetry; plans to add new modes such as radiometry and bi-static GNSS signals are being formulated. This development is aimed at enhancing radar remote sensing capabilities for airborne and spaceborne applications in support of Earth Science and planetary exploration. This paper describes the design of the radar and processor systems, explains the operational modes, and discusses preliminary measurements and future plans.
Comments on Samal and Henderson: Parallel consistent labeling algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swain, M.J.
Samal and Henderson claim that any parallel algorithm for enforcing arc consistency in the worst case must have Ω(na) sequential steps, where n is the number of nodes and a is the number of labels per node. The authors argue that Samal and Henderson's argument makes assumptions about how processors are used, and they give a counterexample that enforces arc consistency in a constant number of steps using O(n^2 a^2 2^(na)) processors. It is possible that the lower bound holds for a polynomial number of processors; if such a lower bound were to be proven, it would answer an important open question in theoretical computer science concerning the relation between the complexity classes P and NC. The strongest existing lower bound for the arc consistency problem states that it cannot be solved in polylogarithmic time with polynomially many processors unless P = NC.
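For context, the pruning rule whose parallel complexity is at issue is the standard arc-consistency revision. A sequential AC-3 style sketch in Python follows; the variable and constraint encodings are assumptions made for illustration:

    from collections import deque

    def ac3(domains, constraints):
        # domains: var -> set of labels
        # constraints: (x, y) -> predicate(a, b) that must be satisfiable
        queue = deque(constraints)
        while queue:
            x, y = queue.popleft()
            pred = constraints[(x, y)]
            pruned = {a for a in domains[x]
                      if not any(pred(a, b) for b in domains[y])}
            if pruned:
                domains[x] -= pruned
                # the domain of x shrank: revisit every arc pointing at x
                queue.extend((w, z) for (w, z) in constraints if z == x)
        return domains

    doms = {"X": {1, 2, 3}, "Y": {1, 2, 3}}
    cons = {("X", "Y"): lambda a, b: a < b,
            ("Y", "X"): lambda a, b: a > b}
    print(ac3(doms, cons))   # X loses 3, Y loses 1

The counterexample in the paper parallelizes exactly this revision step, trading an exponential processor count for constant depth.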
Early Market Site Identification Data
Levi Kilcher
2016-04-01
This data was compiled for the 'Early Market Opportunity Hot Spot Identification' project. The data and scripts included were used in the 'MHK Energy Site Identification and Ranking Methodology' reports (Part I: Wave, NREL Report #66038; Part II: Tidal, NREL Report #66079). The Python scripts will generate a set of results--based on the Excel data files--some of which were described in the reports. The scripts depend on the 'score_site' package, and the score_site package depends on a number of standard Python libraries (see the score_site install instructions).
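The score_site API itself is not documented here; the pandas sketch below only illustrates the normalize-weight-rank idea behind such site scoring, with hypothetical site names, columns, and weights standing in for the Excel inputs:

    import pandas as pd

    # In the released package the inputs come from the Excel data files;
    # a small inline frame stands in here, and all values are invented.
    sites = pd.DataFrame({"site": ["WA-1", "OR-2", "CA-3"],
                          "resource": [30.0, 25.0, 40.0],
                          "market": [0.18, 0.22, 0.35],
                          "shipping": [12.0, 8.0, 20.0]})
    weights = {"resource": 0.4, "market": 0.4, "shipping": 0.2}
    for col in weights:
        sites[col + "_norm"] = sites[col] / sites[col].max()
    sites["score"] = sum(w * sites[c + "_norm"] for c, w in weights.items())
    print(sites.sort_values("score", ascending=False)[["site", "score"]])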
HMM-based lexicon-driven and lexicon-free word recognition for online handwritten Indic scripts.
Bharath, A; Madhvanath, Sriganesh
2012-04-01
Research on recognizing online handwritten words in Indic scripts is at an early stage compared to Latin and Oriental scripts. In this paper, we address this problem specifically for two major Indic scripts--Devanagari and Tamil. In contrast to previous approaches, the techniques we propose are largely data driven and script independent. We propose two different techniques for word recognition based on Hidden Markov Models (HMM): lexicon driven and lexicon free. The lexicon-driven technique models each word in the lexicon as a sequence of symbol HMMs according to a standard symbol writing order derived from the phonetic representation. The lexicon-free technique uses a novel Bag-of-Symbols representation of the handwritten word that is independent of symbol order and allows rapid pruning of the lexicon. On handwritten Devanagari word samples featuring both standard and nonstandard symbol writing orders, a combination of lexicon-driven and lexicon-free recognizers significantly outperforms either of them used in isolation. In contrast, most Tamil word samples feature the standard symbol order, and the lexicon-driven recognizer outperforms the lexicon-free one as well as their combination. The best recognition accuracies obtained for 20,000-word lexicons are 87.13 percent for Devanagari when the two recognizers are combined, and 91.8 percent for Tamil using the lexicon-driven technique.
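A minimal sketch of lexicon-driven scoring follows. A toy Gaussian scorer stands in for the symbol HMMs (mirroring the score()-returns-log-likelihood convention of libraries such as hmmlearn), and a fixed one-to-one segmentation stands in for the Viterbi alignment a real recognizer would perform:

    import numpy as np

    class GaussianSymbolModel:
        # Stand-in for a symbol HMM: scores a feature segment under a
        # 1-D Gaussian centered on the symbol's mean feature value.
        def __init__(self, mean):
            self.mean = mean
        def score(self, seg):
            return float(-0.5 * np.sum((np.asarray(seg) - self.mean) ** 2))

    def word_log_likelihood(segments, word, models):
        if len(segments) != len(word):
            return -np.inf                  # crude length-based pruning
        return sum(models[s].score(seg) for s, seg in zip(word, segments))

    def recognize(segments, lexicon, models):
        return max(lexicon,
                   key=lambda w: word_log_likelihood(segments, w, models))

    models = {"k": GaussianSymbolModel(0.0), "a": GaussianSymbolModel(1.0),
              "m": GaussianSymbolModel(2.0)}
    segments = [np.full(5, 0.1), np.full(5, 0.9)]   # two pre-segmented strokes
    print(recognize(segments, ["ka", "am", "ma"], models))   # -> "ka"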
Eigensolution of finite element problems in a completely connected parallel architecture
NASA Technical Reports Server (NTRS)
Akl, F.; Morel, M.
1989-01-01
A parallel algorithm is presented for the solution of the generalized eigenproblem in linear elastic finite element analysis. The algorithm is based on a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm is successfully implemented on a tightly coupled MIMD parallel processor. A finite element model is divided into m domains, each of which is assumed to possess n elements. Each domain is then assigned to a processor, or to a logical processor (task) if the number of domains exceeds the number of physical processors. The effects of the number of domains, the number of degrees of freedom located along the global fronts, and the dimension of the subspace on the performance of the algorithm are investigated. For a 64-element rectangular plate, speed-ups of 1.86, 3.13, 3.18, and 3.61 are achieved on two, four, six, and eight processors, respectively.
High-Speed On-Board Data Processing Platform for LIDAR Projects at NASA Langley Research Center
NASA Astrophysics Data System (ADS)
Beyon, J.; Ng, T. K.; Davis, M. J.; Adams, J. K.; Lin, B.
2015-12-01
The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) was funded by the NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program from April 2012 to April 2015. HOPS is an enabler for science missions with extremely high data processing rates. In this three-year effort, Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) and 3-D Winds were of particular interest. For ASCENDS, HOPS replaces time-domain data processing with frequency-domain processing, making real-time on-board data processing possible. For 3-D Winds, HOPS offers real-time high-resolution wind profiling with a 4,096-point fast Fourier transform (FFT). HOPS is adaptable, with a quick turn-around time: since HOPS offers reusable, user-friendly computational elements, its FPGA IP core can be modified over a shorter development period if the algorithm changes. The FPGA and memory bandwidth of HOPS is 20 GB/sec, while the typical maximum processor-to-SDRAM bandwidth of commercial radiation-tolerant high-end processors is about 130-150 MB/sec. The inter-board communication bandwidth of HOPS is 4 GB/sec, while the effective processor-to-cPCI bandwidth of commercial radiation-tolerant high-end boards is about 50-75 MB/sec. HOPS also offers VHDL cores for the easy and efficient implementation of ASCENDS, 3-D Winds, and other similar algorithms. A general overview of the three-year development of HOPS is the goal of this presentation.
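The frequency-domain step for wind profiling amounts to locating a Doppler peak in a 4,096-point spectrum. The numpy sketch below shows the idea with invented sample-rate and shift values, not HOPS parameters:

    import numpy as np

    fs = 100e6                     # assumed sample rate
    n = 4096                       # the 4,096-point transform size
    t = np.arange(n) / fs
    f_doppler = 12.5e6             # assumed Doppler shift
    rng = np.random.default_rng(1)
    sig = np.exp(2j * np.pi * f_doppler * t) + 0.5 * (
        rng.normal(size=n) + 1j * rng.normal(size=n))

    spec = np.abs(np.fft.fft(sig * np.hanning(n))) ** 2   # windowed spectrum
    f_hat = np.fft.fftfreq(n, 1 / fs)[np.argmax(spec)]    # peak -> shift
    print(f"estimated Doppler shift: {f_hat / 1e6:.2f} MHz")

At this transform size the bin spacing is fs/n, roughly 24 kHz here, which is what bounds the raw velocity resolution before any peak interpolation.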
Unit: Petroleum, Inspection Pack, National Trial Print.
ERIC Educational Resources Information Center
Australian Science Education Project, Toorak, Victoria.
This is a National Trial Print of a unit on petroleum developed for the Australian Science Education Project. The package contains the teacher's edition of the written material and a script for a film entitled "The Extraordinary Experience of Nicholas Nodwell" emphasizing the uses of petroleum and petroleum products in daily life and…
Characteristics of Transverse and Longitudinal Waves.
ERIC Educational Resources Information Center
Reister, W. A.
This monograph presents an autoinstructional program in the physical sciences. It is considered useful at the higher, middle and lower high school levels. Three behavioral objectives are listed and a time allotment of 35-40 minutes is suggested. A bibliography is included. A script, incorporating the use of a cassette player and slides, is used by…
ECommerce: Meeting the Needs of Local Business with Cross-Departmental Education.
ERIC Educational Resources Information Center
Sagi, John P.
This document offers a brief introduction to electronic commerce (known as eCommerce) and explains the challenges and frustrations of developing a course around the topic. ECommerce blends elements of computer science (HTML and JavaScript programming, for example) with traditional business functions, such as marketing, salesmanship, finance and…
ELIPS: Toward a Sensor Fusion Processor on a Chip
NASA Technical Reports Server (NTRS)
Daud, Taher; Stoica, Adrian; Tyson, Thomas; Li, Wei-te; Fabunmi, James
1998-01-01
The paper presents the concept and initial tests from the hardware implementation of a low-power, high-speed reconfigurable sensor fusion processor. The Extended Logic Intelligent Processing System (ELIPS) processor is developed to seamlessly combine rule-based systems, fuzzy logic, and neural networks to achieve parallel fusion of sensor data in compact low-power VLSI. The first demonstration of the ELIPS concept targets interceptor functionality; other applications, mainly in robotics and autonomous systems, are considered for the future. The main assumption behind ELIPS is that fuzzy, rule-based, and neural forms of computation can serve as the main primitives of an "intelligent" processor. Thus, in the same way classic processors are designed to optimize the hardware implementation of a set of fundamental operations, ELIPS is developed as an efficient implementation of computational intelligence primitives, and relies on a set of fuzzy set, fuzzy inference and neural modules built in programmable analog hardware. The hardware programmability allows the processor to reconfigure into different machines, taking the most efficient hardware implementation during each phase of information processing. Following software demonstrations on several interceptor data sets, three important ELIPS building blocks (a fuzzy set preprocessor, a rule-based fuzzy system and a neural network) have been fabricated in analog VLSI hardware and demonstrated microsecond processing times.
Kwon, Bomjun J
2012-06-01
This article introduces AUX (AUditory syntaX), a scripting syntax specifically designed to describe auditory signals and processing, to the members of the behavioral research community. The syntax is based on descriptive function names and intuitive operators suitable for researchers and students without substantial training in programming, who wish to generate and examine sound signals using a written script. In this article, the essence of AUX is discussed and practical examples of AUX scripts specifying various signals are illustrated. Additionally, two accompanying Windows-based programs and development libraries are described. AUX Viewer is a program that generates, visualizes, and plays sounds specified in AUX. AUX Viewer can also be used for class demonstrations or presentations. Another program, Psycon, allows a wide range of sound signals to be used as stimuli in common psychophysical testing paradigms, such as the adaptive procedure, the method of constant stimuli, and the method of adjustment. AUX Library is also provided, so that researchers can develop their own programs utilizing AUX. The philosophical basis of AUX is to separate signal generation from the user interface needed for experiments. AUX scripts are portable and reusable; they can be shared by other researchers, regardless of differences in actual AUX-based programs, and reused for future experiments. In short, the use of AUX can be potentially beneficial to all members of the research community, both those with programming backgrounds and those without.
SeqPig: simple and scalable scripting for large sequencing data sets in Hadoop.
Schumacher, André; Pireddu, Luca; Niemenmaa, Matti; Kallio, Aleksi; Korpelainen, Eija; Zanetti, Gianluigi; Heljanko, Keijo
2014-01-01
Hadoop MapReduce-based approaches have become increasingly popular due to their scalability in processing large sequencing datasets. However, as these methods typically require in-depth expertise in Hadoop and Java, they are still out of reach of many bioinformaticians. To solve this problem, we have created SeqPig, a library and a collection of tools to manipulate, analyze and query sequencing datasets in a scalable and simple manner. SeqPig scripts use the Hadoop-based distributed scripting engine Apache Pig, which automatically parallelizes and distributes data processing tasks. We demonstrate SeqPig's scalability over many computing nodes and illustrate its use with example scripts. Available under the open-source MIT license at http://sourceforge.net/projects/seqpig/
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertsch, Adam; Draeger, Erik; Richards, David
2017-01-12
With Sequoia at Lawrence Livermore National Laboratory, researchers explore grand challenge problems and are generating results at scales never before achieved. Sequoia is the first computer to have more than one million processors and is one of the fastest supercomputers in the world.
A high performance linear equation solver on the VPP500 parallel supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakanishi, Makoto; Ina, Hiroshi; Miura, Kenichi
1994-12-31
This paper describes the implementation of two high performance linear equation solvers developed for the Fujitsu VPP500, a distributed memory parallel supercomputer system. The solvers take advantage of the key architectural features of the VPP500--(1) scalability for an arbitrary number of processors up to 222 processors, (2) flexible data transfer among processors provided by a crossbar interconnection network, (3) vector processing capability on each processor, and (4) overlapped computation and transfer. The general linear equation solver based on the blocked LU decomposition method achieves 120.0 GFLOPS performance with 100 processors in the LINPACK Highly Parallel Computing benchmark.
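The blocked LU organization referred to above can be sketched serially in a few lines. The right-looking, no-pivoting numpy version below shows the block structure that such solvers distribute across processors; it is illustrative only, not the Fujitsu code:

    import numpy as np

    def lu_inplace(B):
        # Unblocked LU without pivoting; L (unit diagonal) and U share storage.
        B = B.copy()
        for i in range(B.shape[0] - 1):
            B[i+1:, i] /= B[i, i]
            B[i+1:, i+1:] -= np.outer(B[i+1:, i], B[i, i+1:])
        return B

    def blocked_lu(A, b=64):
        A = A.copy()
        n = A.shape[0]
        for k in range(0, n, b):
            e = min(k + b, n)
            A[k:e, k:e] = lu_inplace(A[k:e, k:e])      # factor diagonal block
            L = np.tril(A[k:e, k:e], -1) + np.eye(e - k)
            U = np.triu(A[k:e, k:e])
            A[k:e, e:] = np.linalg.solve(L, A[k:e, e:])        # block row
            A[e:, k:e] = np.linalg.solve(U.T, A[e:, k:e].T).T  # block column
            A[e:, e:] -= A[e:, k:e] @ A[k:e, e:]       # trailing update (GEMM)
        return A

    A0 = np.random.default_rng(2).normal(size=(8, 8)) + 8 * np.eye(8)
    F = blocked_lu(A0, b=4)
    L = np.tril(F, -1) + np.eye(8)
    U = np.triu(F)
    print("max reconstruction error:", np.abs(L @ U - A0).max())

The trailing update is a matrix-matrix multiply, which is why blocked LU maps so well onto vector units and overlapped communication.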
ERIC Educational Resources Information Center
Hwu, Fenfang
2013-01-01
Using script-based tracking to gain insights into the way students learn or process language information can be traced as far back as to the 1980s. Nevertheless, researchers continue to face challenges in collecting and studying this type of data. The objective of this study is to propose data sharing through data repositories as a way to (a) ease…
ERIC Educational Resources Information Center
Ishimaru, Ann M.; Takahashi, Sola
2017-01-01
Partnerships between teachers and parents from nondominant communities hold promise for reducing race- and class-based educational disparities, but the ways families and teachers work together often fall short of delivering systemic change. Racialized institutional scripts provide "taken-for-granted" norms, expectations, and assumptions…
Supporting Component-Based Courseware Development Using Virtual Apparatus Framework Script.
ERIC Educational Resources Information Center
Ip, Albert; Fritze, Paul
This paper reports on the latest development of the Virtual Apparatus (VA) framework, a contribution to efforts at the University of Melbourne (Australia) to mainstream content and pedagogical functions of curricula. The integration of the educational content and pedagogical functions of learning components using an XML compatible script,…
ERIC Educational Resources Information Center
Pleguezuelos, E. M.; Hornos, E.; Dory, V.; Gagnon, R.; Malagrino, P.; Brailovsky, C. A.; Charlin, B.
2013-01-01
Context: The PRACTICUM Institute has developed large-scale international programs of on-line continuing professional development (CPD) based on self-testing and feedback using the Practicum Script Concordance Test© (PSCT). Aims: To examine the psychometric consequences of pooling the responses of panelists from different countries (composite…
Promoting Critical, Elaborative Discussions through a Collaboration Script and Argument Diagrams
ERIC Educational Resources Information Center
Scheuer, Oliver; McLaren, Bruce M.; Weinberger, Armin; Niebuhr, Sabine
2014-01-01
During the past two decades a variety of approaches to support argumentation learning in computer-based learning environments have been investigated. We present an approach that combines argumentation diagramming and collaboration scripts, two methods successfully used in the past individually. The rationale for combining the methods is to…
Enhancing AFLOW Visualization using Jmol
NASA Astrophysics Data System (ADS)
Lanasa, Jacob; New, Elizabeth; Stefek, Patrik; Honaker, Brigette; Hanson, Robert; Aflow Collaboration
The AFLOW library is a database of theoretical solid-state structures and calculated properties created using high-throughput ab initio calculations. Jmol is a Java-based program capable of visualizing and analyzing complex molecular structures and energy landscapes. In collaboration with the AFLOW consortium, our goal is the enhancement of the AFLOWLIB database through the extension of Jmol's capabilities in the area of materials science. Modifications made to Jmol include the ability to read and visualize AFLOW binary alloy data files; the ability to extract information from these files using Jmol scripting macros, which can be utilized in the creation of interactive web-based convex hull graphs; the capability to identify and classify local atomic environments by symmetry; and the ability to search one or more related crystal structures for atomic environments using a novel extension of inorganic polyhedron-based SMILES strings.
SoS Notebook: An Interactive Multi-Language Data Analysis Environment.
Peng, Bo; Wang, Gao; Ma, Jun; Leong, Man Chong; Wakefield, Chris; Melott, James; Chiu, Yulun; Du, Di; Weinstein, John N
2018-05-22
Complex bioinformatic data analysis workflows involving multiple scripts in different languages can be difficult to consolidate, share, and reproduce. An environment that streamlines the entire process of data collection, analysis, visualization and reporting of such multi-language analyses is currently lacking. We developed Script of Scripts (SoS) Notebook, a web-based notebook environment that allows the use of multiple scripting languages in a single notebook, with data flowing freely within and across languages. SoS Notebook enables researchers to perform sophisticated bioinformatic analyses using the most suitable tools for different parts of the workflow, without the limitations of a particular language or the complications of cross-language communication. SoS Notebook is hosted at http://vatlab.github.io/SoS/ and is distributed under a BSD license. bpeng@mdanderson.org.
A Future Accelerated Cognitive Distributed Hybrid Testbed for Big Data Science Analytics
NASA Astrophysics Data System (ADS)
Halem, M.; Prathapan, S.; Golpayegani, N.; Huang, Y.; Blattner, T.; Dorband, J. E.
2016-12-01
As increased sensor spectral data volumes from current and future Earth Observing satellites are assimilated into high-resolution climate models, intensive cognitive machine learning technologies are needed to data mine, extract and intercompare model outputs. It is clear today that the next generation of computers and storage, beyond petascale cluster architectures, will be data centric: they will manage data movement and process data in place. Future cluster nodes have been announced that integrate multiple CPUs with high-speed links to GPUs and MICs on their backplanes, with massive non-volatile RAM and access to active flash disk storage. Active Ethernet-connected key-value-store disk drives with 10 GbE or faster interfaces are now available through the Kinetic Open Storage Alliance. At the UMBC Center for Hybrid Multicore Productivity Research, a future state-of-the-art Accelerated Cognitive Computer System (ACCS) for Big Data science is being integrated into the current IBM iDataPlex computational system `bluewave'. Based on the next-generation IBM 200 PF Sierra processor, an interim two-node IBM Power S822 testbed is being integrated, with dual Power 8 processors with 10 cores each, 1 TB of RAM, a PCIe-attached K80 GPU, and an FPGA Coherent Accelerator Processor Interface (CAPI) card to 20 TB of flash RAM. This system is to be updated to the Power 8+ with NVLink 1.0 and the Pascal GPU late in 2016. Moreover, the Seagate 96 TB Kinetic disk system with 24 Ethernet-connected active disks is integrated into the ACCS storage system. A Lightweight Virtual File System developed at NASA GSFC is installed on bluewave. Since remote access to publicly available quantum annealing computers is available at several government labs, the ACCS will offer an in-line Restricted Boltzmann Machine optimization capability on the D-Wave 2X quantum annealing processor, reached over the campus high-speed 100 Gb network to Internet2 for large files. As an evaluation test of the cognitive functionality of the architecture, the following studies utilizing all the system components will be presented: (i) a near-real-time climate change study generating CO2 fluxes, (ii) a deep-dive capability into an 8000 x 8000 pixel image pyramid display, and (iii) large dense and sparse eigenvalue decompositions.
Reconfigurable signal processor designs for advanced digital array radar systems
NASA Astrophysics Data System (ADS)
Suarez, Hernan; Zhang, Yan (Rockee); Yu, Xining
2017-05-01
The new challenges originating from Digital Array Radar (DAR) demand a new generation of reconfigurable backend processors in such systems. New FPGA devices can support much higher speeds, more bandwidth and greater processing capability to meet the needs of a digital Line Replaceable Unit (LRU). This study focuses on using the latest Altera and Xilinx devices in an adaptive beamforming processor. Field-reprogrammable RF devices from Analog Devices are used as the analog front-end transceivers. Different from other existing Software-Defined Radio transceivers on the market, this processor is designed for distributed adaptive beamforming in a networked environment. The following aspects of the novel radar processor will be presented: (1) a new system-on-chip architecture based on Altera's devices and an adaptive processing module, especially for adaptive beamforming and pulse compression; (2) successful implementation of generation 2 serial RapidIO data links on FPGA, which supports the VITA-49 radio packet format for large distributed DAR processing; (3) demonstration of the feasibility and capabilities of the processor in a Micro-TCA based, SRIO-switching backplane to support multichannel beamforming in real time; and (4) application of this processor in ongoing radar system development projects, including OU's dual-polarized digital array radar, the planned new cylindrical array radars, and future airborne radars.
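As a reference point for the adaptive beamforming function, the classic MVDR weight computation w = R^{-1}s / (s^H R^{-1}s) can be sketched in numpy. The array geometry, element count, snapshot count, and diagonal loading below are assumptions, not parameters of the processor described above:

    import numpy as np

    def mvdr_weights(R, s):
        # Minimum-variance distortionless-response weights for covariance R
        # and steering vector s; the constraint w^H s = 1 holds by construction.
        Rinv_s = np.linalg.solve(R, s)
        return Rinv_s / (s.conj() @ Rinv_s)

    n_el, theta = 16, np.deg2rad(20)        # 16-element half-wavelength array
    s = np.exp(1j * np.pi * np.arange(n_el) * np.sin(theta))
    snaps = np.random.randn(n_el, 256) + 1j * np.random.randn(n_el, 256)
    R = snaps @ snaps.conj().T / 256 + 1e-3 * np.eye(n_el)  # diagonal loading
    w = mvdr_weights(R, s)
    print("distortionless response:", abs(w.conj() @ s))    # = 1

The dominant cost is the covariance solve, which is exactly the kind of dense linear-algebra kernel that gets mapped onto the FPGA fabric in such designs.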
PixonVision real-time video processor
NASA Astrophysics Data System (ADS)
Puetter, R. C.; Hier, R. G.
2007-09-01
PixonImaging LLC and DigiVision, Inc. have developed a real-time video processor, the PixonVision PV-200, based on the patented Pixon method for image deblurring and denoising, and DigiVision's spatially adaptive contrast enhancement processor, the DV1000. The PV-200 can process NTSC and PAL video in real time with a latency of 1 field (1/60th of a second), remove the effects of aerosol scattering from haze, mist, smoke, and dust, improve spatial resolution by up to 2x, decrease noise by up to 6x, and increase local contrast by up to 8x. A newer version of the processor, the PV-300, is now in prototype form and can handle high definition video. Both the PV-200 and PV-300 are FPGA-based processors, which could be spun into ASICs if desired. Obvious applications of these processors include applications in the DOD (tanks, aircraft, and ships), homeland security, intelligence, surveillance, and law enforcement. If developed into an ASIC, these processors will be suitable for a variety of portable applications, including gun sights, night vision goggles, binoculars, and guided munitions. This paper presents a variety of examples of PV-200 processing, including examples appropriate to border security, battlefield applications, port security, and surveillance from unmanned aerial vehicles.
Spatial Distribution of Star Formation in High Redshift Galaxies
NASA Astrophysics Data System (ADS)
Cunnyngham, Ian; Takamiya, M.; Willmer, C.; Chun, M.; Young, M.
2011-01-01
Integral field unit spectroscopy of galaxies with redshifts between 0.6 and 0.8, taken with Gemini Observatory’s GMOS instrument, was used to investigate the spatial distribution of star-forming regions by measuring the Hβ and [OII]λ3727 emission line fluxes. These galaxies were selected based on the strength of Hβ and [OII]λ3727 as measured from slit LRIS/Keck spectra. The process of calibrating and reducing data into cubes -- possessing two spatial dimensions and one for wavelength -- was automated via a custom batch script using the Gemini IRAF routines. Among these galaxies, only the bluest sources clearly show [OII] in the IFU data, regardless of total galaxy luminosity. The brightest galaxies lack [OII] emission, and it is posited that two different modes of star formation exist among this seemingly homogeneous group of z=0.7 star-forming galaxies. In order to increase the galaxy sample to include redshifts from 0.3 to 0.9, public Gemini IFU data are being sought. Python scripts were written to mine the Gemini Science Archive for candidate observations, cross-reference the targets of these observations with information from the NASA Extragalactic Database, and present the resulting database in a sortable, searchable, cross-linked web interface built with Django to facilitate navigation. By increasing the sample, we expect to characterize these two different modes of star formation, which could be high-redshift counterparts of the U/LIRGs and dwarf starburst galaxies like NGC 1569/NGC 4449. The authors acknowledge funds provided by the National Science Foundation (AST 0909240).
The Daniel K. Inouye College of Pharmacy Scripts
Pezzuto, John M; Ma, Carolyn SJ; Ma, Carolyn
2015-01-01
In partnership with the Hawai‘i Journal of Medicine & Public Health, the Daniel K. Inouye College of Pharmacy (DKICP) is pleased to provide Scripts on a regular basis. In the inaugural “Script,” a brief history of the profession in Hawai‘i was presented up to the founding of the DKICP, Hawai‘i's only academic pharmacy program. In this second part of the inaugural article, we describe some key accomplishments to date. The mission of the College is to educate pharmacy practitioners and leaders to serve as a catalyst for innovations and discoveries in pharmaceutical sciences and practice for promoting health and well-being, and to provide community service, including quality patient care. Examples are given to support the stated goals of the mission. With 341 graduates to date, and a 96% pass rate on the national licensing board exams, the college has played a significant role in improving healthcare in Hawai‘i and throughout the Pacific Region. Additionally, a PhD program with substantial research programs in both pharmacy practice and the pharmaceutical sciences has been launched. Considerable extramural funding has been garnered from organizations such as the National Institutes of Health and Centers for Medicare and Medicaid Services. The economic impact of the College is estimated to be over $50 million each year. With over 200 signed clinical affiliation agreements within the state as well as nationally and internationally, the DKICP has helped to ameliorate the shortage of pharmacists in the state, and has enhanced the profile and practice standard of the pharmacist's role on interprofessional health care teams. PMID:25821655
Displaying R spatial statistics on Google dynamic maps with web applications created by Rwui.
Newton, Richard; Deonarine, Andrew; Wernisch, Lorenz
2012-09-24
The R project includes a large variety of packages designed for spatial statistics. Google dynamic maps provide web based access to global maps and satellite imagery. We describe a method for displaying directly the spatial output from an R script on to a Google dynamic map. This is achieved by creating a Java based web application which runs the R script and then displays the results on the dynamic map. In order to make this method easy to implement by those unfamiliar with programming Java based web applications, we have added the method to the options available in the R Web User Interface (Rwui) application. Rwui is an established web application for creating web applications for running R scripts. A feature of Rwui is that all the code for the web application being created is generated automatically so that someone with no knowledge of web programming can make a fully functional web application for running an R script in a matter of minutes. Rwui can now be used to create web applications that will display the results from an R script on a Google dynamic map. Results may be displayed as discrete markers and/or as continuous overlays. In addition, users of the web application may select regions of interest on the dynamic map with mouse clicks and the coordinates of the region of interest will automatically be made available for use by the R script. This method of displaying R output on dynamic maps is designed to be of use in a number of areas. Firstly it allows statisticians, working in R and developing methods in spatial statistics, to easily visualise the results of applying their methods to real world data. Secondly, it allows researchers who are using R to study health geographics data, to display their results directly onto dynamic maps. Thirdly, by creating a web application for running an R script, a statistician can enable users entirely unfamiliar with R to run R coded statistical analyses of health geographics data. Fourthly, we envisage an educational role for such applications.
Bowleg, Lisa; Burkholder, Gary J; Noar, Seth M; Teti, Michelle; Malebranche, David J; Tschann, Jeanne M
2015-04-01
Sexual scripts are widely shared gender and culture-specific guides for sexual behavior with important implications for HIV prevention. Although several qualitative studies document how sexual scripts may influence sexual risk behaviors, quantitative investigations of sexual scripts in the context of sexual risk are rare. This mixed methods study involved the qualitative development and quantitative testing of the Sexual Scripts Scale (SSS). Study 1 included qualitative semi-structured interviews with 30 Black heterosexual men about sexual experiences with main and casual sex partners to develop the SSS. Study 2 included a quantitative test of the SSS with 526 predominantly low-income Black heterosexual men. A factor analysis of the SSS resulted in a 34-item, seven-factor solution that explained 68% of the variance. The subscales and coefficient alphas were: Romantic Intimacy Scripts (α = .86), Condom Scripts (α = .82), Alcohol Scripts (α = .83), Sexual Initiation Scripts (α = .79), Media Sexual Socialization Scripts (α = .84), Marijuana Scripts (α = .85), and Sexual Experimentation Scripts (α = .84). Among men who reported a main partner (n = 401), higher Alcohol Scripts, Media Sexual Socialization Scripts, and Marijuana Scripts scores, and lower Condom Scripts scores were related to more sexual risk behavior. Among men who reported at least one casual partner (n = 238), higher Romantic Intimacy Scripts, Sexual Initiation Scripts, and Media Sexual Socialization Scripts, and lower Condom Scripts scores were related to higher sexual risk. The SSS may have considerable utility for future research on Black heterosexual men's HIV risk.
Parallel processor for real-time structural control
NASA Astrophysics Data System (ADS)
Tise, Bert L.
1993-07-01
A parallel processor that is optimized for real-time linear control has been developed. This modular system consists of A/D modules, D/A modules, and floating-point processor modules. The scalable processor uses up to 1,000 Motorola DSP96002 floating-point processors for a peak computational rate of 60 GFLOPS. Sampling rates up to 625 kHz are supported by this analog-in to analog-out controller. The high processing rate and parallel architecture make this processor suitable for computing state-space equations and other multiply/accumulate-intensive digital filters. Processor features include 14-bit conversion devices, low input-to-output latency, 240 Mbyte/s synchronous backplane bus, low-skew clock distribution circuit, VME connection to host computer, parallelizing code generator, and look-up tables for actuator linearization. This processor was designed primarily for experiments in structural control. The A/D modules sample sensors mounted on the structure and the floating-point processor modules compute the outputs using the programmed control equations. The outputs are sent through the D/A module to the power amps used to drive the structure's actuators. The host computer is a Sun workstation. An OpenWindows-based control panel is provided to facilitate data transfer to and from the processor, as well as to control the operating mode of the processor. A diagnostic mode is provided to allow stimulation of the structure and acquisition of the structural response via sensor inputs.
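The per-sample workload described above is the state-space recursion x <- Ax + Bu, y = Cx + Du, a chain of multiply/accumulates. The Python sketch below uses toy controller matrices and stand-in I/O functions in place of the A/D and D/A modules; none of the values are taken from the actual system:

    import numpy as np

    rng = np.random.default_rng(0)

    def read_sensors():                  # stand-in for the A/D modules
        return rng.normal(size=(1, 1))

    def write_actuators(y):              # stand-in for the D/A modules
        pass

    # assumed 2-state discrete-time controller matrices
    A = np.array([[0.9, 0.1], [0.0, 0.8]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    D = np.array([[0.0]])

    x = np.zeros((2, 1))
    for _ in range(1000):                # one loop pass per sample period
        u = read_sensors()
        y = C @ x + D @ u                # output equation (multiply/accumulate)
        write_actuators(y)
        x = A @ x + B @ u                # state update

At a 625 kHz sample rate, every multiply/accumulate in this loop must finish within 1.6 microseconds, which is why the work is spread across many DSPs.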
A novel medical image data-based multi-physics simulation platform for computational life sciences.
Neufeld, Esra; Szczerba, Dominik; Chavannes, Nicolas; Kuster, Niels
2013-04-06
Simulating and modelling complex biological systems in computational life sciences requires specialized software tools that can perform medical image data-based modelling, jointly visualize the data and computational results, and handle large, complex, realistic and often noisy anatomical models. The required novel solvers must provide the power to model the physics, biology and physiology of living tissue within the full complexity of the human anatomy (e.g. neuronal activity, perfusion and ultrasound propagation). A multi-physics simulation platform satisfying these requirements has been developed for applications including device development and optimization, safety assessment, basic research, and treatment planning. This simulation platform consists of detailed, parametrized anatomical models, a segmentation and meshing tool, a wide range of solvers and optimizers, a framework for the rapid development of specialized and parallelized finite element method solvers, a Visualization Toolkit-based visualization engine, a Python scripting interface for customized applications, a coupling framework, and more. Core components are cross-platform compatible and use open formats. Several examples of applications are presented: hyperthermia cancer treatment planning, tumour growth modelling, evaluation of the magneto-haemodynamic effect as a biomarker, and physics-based morphing of anatomical models.
FPGA-Based Multiprocessor System for Injection Molding Control
Muñoz-Barron, Benigno; Morales-Velazquez, Luis; Romero-Troncoso, Rene J.; Rodriguez-Donate, Carlos; Trejo-Hernandez, Miguel; Benitez-Rangel, Juan P.; Osornio-Rios, Roque A.
2012-01-01
The plastic industry is a very important manufacturing sector and injection molding is a widely used forming method in that industry. The contribution of this work is the development of a strategy to retrofit control of an injection molding machine, based on an embedded microprocessor sensor network implemented on a field programmable gate array (FPGA) device. Six types of embedded processors are included in the system: a smart-sensor processor, a micro fuzzy logic controller, a programmable logic controller, a system manager, an I/O processor and a communication processor. Temperature, pressure and position are controlled by the proposed system, and experimental results show its feasibility and robustness. As validation of the present work, a particular sample was successfully injected. PMID:23202036
Designing EvoRoom: An Immersive Simulation Environment for Collective Inquiry in Secondary Science
NASA Astrophysics Data System (ADS)
Lui, Michelle Mei Yee
This dissertation investigates the design of complex inquiry for co-located students to work as a knowledge community within a mixed-reality learning environment. It presents the design of an immersive simulation called EvoRoom and corresponding collective inquiry activities that allow students to explore concepts around topics of evolution and biodiversity in a Grade 11 Biology course. EvoRoom is a room-sized simulation of a rainforest, modeled after Borneo in Southeast Asia, where several projected displays are stitched together to form a large, animated simulation on each opposing wall of the room. This serves to create an immersive environment in which students work collaboratively as individuals, in small groups and a collective community to investigate science topics using the simulations as an evidentiary base. Researchers and a secondary science teacher co-designed a multi-week curriculum that prepared students with preliminary ideas and expertise, then provided them with guided activities within EvoRoom, supported by tablet-based software as well as larger visualizations of their collective progress. Designs encompassed the broader curriculum, as well as all EvoRoom materials (e.g., projected displays, student tablet interfaces, collective visualizations) and activity sequences. This thesis describes a series of three designs that were developed and enacted iteratively over two and a half years, presenting key features that enhanced students' experiences within the immersive environment, their interactions with peers, and their inquiry outcomes. Primary research questions are concerned with the nature of effective design for such activities and environments, and the kinds of interactions that are seen at the individual, collaborative and whole-class levels. The findings fall under one of three themes: 1) the physicality of the room, 2) the pedagogical script for student observation and reflection and collaboration, and 3) ways of including collective visualizations in the activity. Discrete findings demonstrate how the above variables, through their design as inquiry components (i.e., activity, room, scripts and scaffolds on devices, collective visualizations), can mediate the students' interactions with one another, with their teacher, and impact the outcomes of their inquiry. A set of design recommendations is drawn from the results of this research to guide future design or research efforts.
Human factors considerations in the evaluation of processor-based signal and train control systems
DOT National Transportation Integrated Search
2007-06-01
In August 2001, the Federal Railroad Administration issued the notice of proposed rulemaking: Standards for Development and Use of Processor-Based Signal and Train Control Systems (49 Code of Federal Regulations Part 236). This proposed rule addres...
A fully programmable 100-spin coherent Ising machine with all-to-all connections
NASA Astrophysics Data System (ADS)
McMahon, Peter; Marandi, Alireza; Haribara, Yoshitaka; Hamerly, Ryan; Langrock, Carsten; Tamate, Shuhei; Inagaki, Takahiro; Takesue, Hiroki; Utsunomiya, Shoko; Aihara, Kazuyuki; Byer, Robert; Fejer, Martin; Mabuchi, Hideo; Yamamoto, Yoshihisa
We present a scalable optical processor with electronic feedback, based on networks of optical parametric oscillators. The design of our machine is inspired by adiabatic quantum computers (AQCs), although it is not an AQC itself. Our prototype machine is able to find exact solutions of, or sample good approximate solutions to, a variety of hard instances of Ising problems with up to 100 spins and 10,000 spin-spin connections. This research was funded by the Impulsing Paradigm Change through Disruptive Technologies (ImPACT) Program of the Council of Science, Technology and Innovation (Cabinet Office, Government of Japan).
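For reference, the problem class the machine targets can be stated in a few lines of classical code: minimize the Ising energy over spin configurations. The simulated-annealing sketch below is a conventional software baseline, not the optical architecture; the couplings, schedule, and sizes are illustrative.

```python
import math
import random

def ising_energy(J, s):
    """E = -sum_{i<j} J[i][j] * s_i * s_j for spins s_i in {-1, +1}."""
    n = len(s)
    return -sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))

def anneal(J, n, sweeps=5000, t0=2.0):
    s = [random.choice([-1, 1]) for _ in range(n)]
    for k in range(sweeps):
        t = max(t0 * (1 - k / sweeps), 1e-3)          # linear cooling schedule
        i = random.randrange(n)
        dE = 2 * s[i] * sum(J[i][j] * s[j] for j in range(n) if j != i)
        if dE <= 0 or random.random() < math.exp(-dE / t):
            s[i] = -s[i]                               # accept the spin flip
    return s, ising_energy(J, s)

n = 20                                                 # toy all-to-all instance
J = [[0 if i == j else random.choice([-1, 1]) for j in range(n)] for i in range(n)]
J = [[J[min(i, j)][max(i, j)] for j in range(n)] for i in range(n)]  # symmetrize
print(anneal(J, n)[1])
```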
NASA Astrophysics Data System (ADS)
Giusi, Giovanni; Liu, Scige J.; Di Giorgio, Anna M.; Galli, Emanuele; Pezzuto, Stefano; Farina, Maria; Spinoglio, Luigi
2014-08-01
SAFARI (SpicA FAR infrared Instrument) is a far-infrared imaging Fourier Transform Spectrometer for the SPICA mission. The Digital Processing Unit (DPU) of the instrument implements the functions of controlling the overall instrument and implementing the science data compression and packing. The DPU design is based on the use of a LEON family processor. In SAFARI, all instrument components are connected to the central DPU via SpaceWire links. On these links, science data, housekeeping, and command flows are in some cases multiplexed; the interface control must therefore be able to cope with variable throughput needs. The effective data transfer workload can be an issue for overall system performance and becomes a critical parameter for the on-board software design, both at the application layer and at lower, more hardware-related, levels. To analyze the system behavior in the presence of the demanding science data flow expected in SAFARI, we carried out a series of performance tests using the standard GR-CPCI-UT699 LEON3-FT Development Board, provided by Aeroflex/Gaisler, connected to the emulator of the SAFARI science data links, in a point-to-point topology. Two different communication protocols have been used in the tests, the ECSS-E-ST-50-52C RMAP protocol and an internally defined one, the SAFARI internal data handling protocol. An incremental approach has been adopted to measure the system performance at different levels of communication protocol complexity. In all cases the performance has been evaluated by measuring the CPU workload and the bus latencies. The tests were executed initially in a custom low-level execution environment and finally using the Real-Time Executive for Multiprocessor Systems (RTEMS), which has been selected as the operating system to be used onboard SAFARI. The preliminary results of the performance analysis confirmed the possibility of using a LEON3 CPU in the SAFARI DPU, but pointed out, in agreement with previous similar studies, the need to carefully design the overall architecture, implementing some of the DPU functionalities on additional processing devices.
Bowleg, Lisa; Burkholder, Gary J.; Noar, Seth M.; Teti, Michelle; Malebranche, David J.; Tschann, Jeanne M.
2014-01-01
Sexual scripts are widely shared gender and culture-specific guides for sexual behavior with important implications for HIV prevention. Although several qualitative studies document how sexual scripts may influence sexual risk behaviors, quantitative investigations of sexual scripts in the context of sexual risk are rare. This mixed methods study involved the qualitative development and quantitative testing of the Sexual Scripts Scale (SSS). Study 1 included qualitative semi-structured interviews with 30 Black heterosexual men about sexual experiences with main and casual sex partners to develop the SSS. Study 2 included a quantitative test of the SSS with 526 predominantly low-income Black heterosexual men. A factor analysis of the SSS resulted in a 34-item, seven-factor solution that explained 68% of the variance. The subscales and coefficient alphas were: Romantic Intimacy Scripts (α = .86), Condom Scripts (α = .82), Alcohol Scripts (α = .83), Sexual Initiation Scripts (α = .79), Media Sexual Socialization Scripts (α = .84), Marijuana Scripts (α = .85), and Sexual Experimentation Scripts (α = .84). Among men who reported a main partner (n = 401), higher Alcohol Scripts, Media Sexual Socialization Scripts, and Marijuana Scripts scores, and lower Condom Scripts scores were related to more sexual risk behavior. Among men who reported at least one casual partner (n = 238), higher Romantic Intimacy Scripts, Sexual Initiation Scripts, and Media Sexual Socialization Scripts, and lower Condom Scripts scores were related to higher sexual risk. The SSS may have considerable utility for future research on Black heterosexual men’s HIV risk. PMID:24311105
Autonomous Telemetry Collection for Single-Processor Small Satellites
NASA Technical Reports Server (NTRS)
Speer, Dave
2003-01-01
For the Space Technology 5 mission, which is being developed under NASA's New Millennium Program, a single spacecraft processor will be required to do on-board real-time computations and operations associated with attitude control, up-link and down-link communications, science data processing, solid-state recorder management, power switching and battery charge management, experiment data collection, health and status data collection, etc. Much of the health and status information is in analog form, and each of the analog signals must be routed to the input of an analog-to-digital converter, converted to digital form, and then stored in memory. If the micro-operations of the analog data collection process are implemented in software, the processor may use up a lot of time either waiting for the analog signal to settle, waiting for the analog-to-digital conversion to complete, or servicing a large number of high frequency interrupts. In order to off-load a very busy processor, the collection and digitization of all analog spacecraft health and status data will be done autonomously by a field-programmable gate array that can configure the analog signal chain, control the analog-to-digital converter, and store the converted data in memory.
Resources for comparing the speed and performance of medical autocoders.
Berman, Jules J
2004-06-15
Concept indexing is a popular method for characterizing medical text, and is one of the most important early steps in many data mining efforts. Concept indexing differs from simple word or phrase indexing because concepts are typically represented by a nomenclature code that binds a medical concept to all equivalent representations. A concept search on the term renal cell carcinoma would be expected to find occurrences of hypernephroma and renal carcinoma (concept equivalents). The purpose of this study is to provide freely available resources to compare speed and performance among different autocoders. These tools consist of: 1) a public domain autocoder written in Perl (a free and open source programming language that installs on any operating system); 2) a nomenclature database derived from the unencumbered subset of the publicly available Unified Medical Language System; 3) a large corpus of autocoded output derived from a publicly available medical text. A simple lexical autocoder was written that parses plain text into a listing of all 1-, 2-, 3-, and 4-word strings contained in the text, assigning a nomenclature code for text strings that match terms in the nomenclature. The nomenclature used is the unencumbered subset of the 2003 Unified Medical Language System (UMLS). The unencumbered subset of UMLS was reduced to exclude homonymous one-word terms and proper names, resulting in a term/code data dictionary containing about a half million medical terms. The Online Mendelian Inheritance in Man (OMIM), a 92+ Megabyte publicly available medical opus, was used as sample medical text for the autocoder. The autocoding Perl script is remarkably short, consisting of just 38 command lines. The 92+ Megabyte OMIM file was completely autocoded in 869 seconds on a 2.4 GHz processor (less than 10 seconds per Megabyte of text). The autocoded output file (9,540,442 bytes) contains 367,963 coded terms from OMIM and is distributed with this manuscript. A public domain Perl script is provided that can parse through plain-text files of any length, matching concepts against an external nomenclature. The script and associated files can be used freely to compare the speed and performance of autocoding software.
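The matching step the paper describes is straightforward to reproduce. Below is a minimal sketch of the sliding 1-to-4-word lookup; the toy nomenclature stands in for the UMLS-derived dictionary, and the codes are placeholders, not real UMLS identifiers.

```python
import re

# Toy term -> code dictionary standing in for the half-million-term
# UMLS-derived nomenclature used in the paper.
NOMENCLATURE = {
    "renal cell carcinoma": "code-1",
    "hypernephroma": "code-1",
    "renal carcinoma": "code-1",
}

def autocode(text, max_words=4):
    """Slide 1..max_words word windows over the text; report dictionary hits."""
    words = re.findall(r"[a-z]+", text.lower())
    hits = []
    for i in range(len(words)):
        for n in range(1, max_words + 1):
            phrase = " ".join(words[i:i + n])
            if phrase in NOMENCLATURE:
                hits.append((phrase, NOMENCLATURE[phrase]))
    return hits

print(autocode("Hypernephroma is an older synonym for renal cell carcinoma."))
```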
Enabling Future Robotic Missions with Multicore Processors
NASA Technical Reports Server (NTRS)
Powell, Wesley A.; Johnson, Michael A.; Wilmot, Jonathan; Some, Raphael; Gostelow, Kim P.; Reeves, Glenn; Doyle, Richard J.
2011-01-01
Recent commercial developments in multicore processors (e.g. Tilera, Clearspeed, HyperX) have provided an option for high performance embedded computing that rivals the performance attainable with FPGA-based reconfigurable computing architectures. Furthermore, these processors offer more straightforward and streamlined application development by allowing the use of conventional programming languages and software tools in lieu of hardware design languages such as VHDL and Verilog. With these advantages, multicore processors can significantly enhance the capabilities of future robotic space missions. This paper will discuss these benefits, along with onboard processing applications where multicore processing can offer advantages over existing or competing approaches. This paper will also discuss the key artchitecural features of current commercial multicore processors. In comparison to the current art, the features and advancements necessary for spaceflight multicore processors will be identified. These include power reduction, radiation hardening, inherent fault tolerance, and support for common spacecraft bus interfaces. Lastly, this paper will explore how multicore processors might evolve with advances in electronics technology and how avionics architectures might evolve once multicore processors are inserted into NASA robotic spacecraft.
European Science Notes Information Bulletin Reports on Current European/ Middle Eastern Science
1989-03-01
Paleo-Oceanography, Marine Geophysics, Marine Environmental Geology, and Petrology of the Oceanic Crust. The specific concerns of each of these... To compute numerically the expected value of an operator, the integration over the fermion fields is performed first, leaving an integral over the gauge fields; the configuration space is streamed through the machine (one space point per processor). In the gauge field theories of elementary particles, this is appropriate for generating gauge field configurations.
Architecture of a Message-Driven Processor,
1987-11-01
Jon Kaplan, Paul Song, Brian Totty, and Scott Wills; Artificial Intelligence Laboratory and Laboratory for Computer Science, Massachusetts Institute of... Dally, Chao, Chien, Hassoun, Horwat, Kaplan, Song, Totty & Wills: Artificial Intelligence Laboratory and Laboratory for Computer Science, MIT... Words are 36 bits long (32 data bits + 4 tag bits) and are used to hold... More processors could be applied to a problem if we could efficiently run programs with a granularity of 5s.
A script concordance test in rheumatology for 5th year medical students
Mathieu, Sylvain; Couderc, Marion; Glace, Baptiste; Tournadre, Anne; Malochet-Guinamand, Sandrine; Pereira, Bruno; Dubost, Jean-Jacques; Soubrier, Martin
2013-12-13
The script concordance test (SCT) is a method for assessing the clinical reasoning of medical students by placing them in a context of uncertainty such as they will encounter in their future daily practice. Script concordance testing is going to be included as part of the computer-based national ranking examination (iNRE). This study was designed to create a script concordance test in rheumatology and administer it to DCEM3 (fifth-year) medical students via the online platform of the Clermont-Ferrand medical school. Our SCT for rheumatology teaching was constructed by a panel of 19 experts in rheumatology (6 hospital-based and 13 community-based). One hundred seventy-nine DCEM3 (fifth-year) medical students were invited to take the test. Scores were computed using the scoring key available on the University of Montreal website. Reliability of the test was estimated by the Cronbach alpha coefficient for internal consistency. The test comprised 60 questions. Among the 26 students who took the test (26/179: 14.5%), 15 completed it in its entirety. The reference panel of rheumatologists obtained a mean score of 76.6 and the 15 students had a mean score of 61.5 (p = 0.001). The Cronbach alpha value was 0.82. An online SCT can be used as an assessment tool for medical students in rheumatology. This study also highlights the active participation of community-based rheumatologists, who accounted for the majority of the 19 experts in the reference panel.
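The reliability statistic reported above, Cronbach's alpha, has a compact closed form: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A sketch on synthetic data follows; the dimensions mirror the 60-question test, but the data itself is random.

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: examinees x items matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
demo = rng.normal(size=(15, 60))    # 15 students, 60 questions (synthetic)
print(round(cronbach_alpha(demo), 3))
```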
Systems and Methods for Automated Vessel Navigation Using Sea State Prediction
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance L. (Inventor); Howard, Andrew B. (Inventor); Reinhart, Rene Felix (Inventor); Aghazarian, Hrand (Inventor); Rankin, Arturo (Inventor)
2017-01-01
Systems and methods for sea state prediction and autonomous navigation in accordance with embodiments of the invention are disclosed. One embodiment of the invention includes a method of predicting a future sea state including generating a sequence of at least two 3D images of a sea surface using at least two image sensors, detecting peaks and troughs in the 3D images using a processor, identifying at least one wavefront in each 3D image based upon the detected peaks and troughs using the processor, characterizing at least one propagating wave based upon the propagation of wavefronts detected in the sequence of 3D images using the processor, and predicting a future sea state using at least one propagating wave characterizing the propagation of wavefronts in the sequence of 3D images using the processor. Another embodiment includes a method of autonomous vessel navigation based upon a predicted sea state and target location.
Systems and Methods for Automated Vessel Navigation Using Sea State Prediction
NASA Technical Reports Server (NTRS)
Aghazarian, Hrand (Inventor); Reinhart, Rene Felix (Inventor); Huntsberger, Terrance L. (Inventor); Rankin, Arturo (Inventor); Howard, Andrew B. (Inventor)
2015-01-01
Systems and methods for sea state prediction and autonomous navigation in accordance with embodiments of the invention are disclosed. One embodiment of the invention includes a method of predicting a future sea state including generating a sequence of at least two 3D images of a sea surface using at least two image sensors, detecting peaks and troughs in the 3D images using a processor, identifying at least one wavefront in each 3D image based upon the detected peaks and troughs using the processor, characterizing at least one propagating wave based upon the propagation of wavefronts detected in the sequence of 3D images using the processor, and predicting a future sea state using at least one propagating wave characterizing the propagation of wavefronts in the sequence of 3D images using the processor. Another embodiment includes a method of autonomous vessel navigation based upon a predicted sea state and target location.
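The peak-and-trough step in the claimed method is essentially local extremum detection on a sea-surface height map. A sketch under that reading is below; the window size, amplitude threshold, and synthetic surface are illustrative assumptions. Wavefront identification would then link these extrema into crest lines tracked across the image sequence.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def peaks_and_troughs(height, win=5, min_amp=0.1):
    """Local maxima/minima of a 2D sea-surface elevation grid (meters)."""
    peaks = (height == maximum_filter(height, size=win)) & (height > min_amp)
    troughs = (height == minimum_filter(height, size=win)) & (height < -min_amp)
    return np.argwhere(peaks), np.argwhere(troughs)

y, x = np.mgrid[0:64, 0:64]
surface = 0.5 * np.sin(x / 6.0)        # synthetic long-crested wave field
p, t = peaks_and_troughs(surface)
print(len(p), "peak cells,", len(t), "trough cells")
```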
ERIC Educational Resources Information Center
Stender, Anita; Brückmann, Maja; Neumann, Knut
2017-01-01
This study investigates the relationship between two different types of pedagogical content knowledge (PCK): the topic-specific professional knowledge (TSPK) and practical routines, so-called teaching scripts. Based on the Transformation Model of Lesson Planning, we assume that teaching scripts originate from a transformation of TSPK during lesson…
Mooney, Barbara Logan; Corrales, L René; Clark, Aurora E
2012-03-30
This work discusses scripts for processing molecular simulations data written using the software package R: A Language and Environment for Statistical Computing. These scripts, named moleculaRnetworks, are intended for the geometric and solvent network analysis of aqueous solutes and can be extended to other H-bonded solvents. New algorithms, several of which are based on graph theory, that interrogate the solvent environment about a solute are presented and described. This includes a novel method for identifying the geometric shape adopted by the solvent in the immediate vicinity of the solute and an exploratory approach for describing H-bonding, both based on the PageRank algorithm of Google search fame. The moleculaRnetworks codes include a preprocessor, which distills simulation trajectories into physicochemical data arrays, and an interactive analysis script that enables statistical, trend, and correlation analysis, and other data mining. The goal of these scripts is to increase access to the wealth of structural and dynamical information that can be obtained from molecular simulations. Copyright © 2012 Wiley Periodicals, Inc.
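PageRank itself reduces to a damped power iteration on the H-bond adjacency structure. The moleculaRnetworks scripts are in R; the Python rendering below and its toy adjacency matrix are illustrative only.

```python
import numpy as np

def pagerank(adj, d=0.85, iters=100):
    """Damped power iteration on a (possibly directed) adjacency matrix."""
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    out[out == 0] = 1.0                    # guard against rank sinks
    M = (adj / out).T                      # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * (M @ r)
    return r

# Toy H-bond network: molecule 0 bonds to 1 and 2, and so on.
hbonds = [[0, 1, 1, 0],
          [1, 0, 0, 1],
          [1, 0, 0, 0],
          [0, 1, 0, 0]]
print(pagerank(hbonds))
```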
Parallel iterative solution for h and p approximations of the shallow water equations
Barragy, E.J.; Walters, R.A.
1998-01-01
A p finite element scheme and parallel iterative solver are introduced for a modified form of the shallow water equations. The governing equations are the three-dimensional shallow water equations. After a harmonic decomposition in time and rearrangement, the resulting equations are a complex Helmholtz problem for surface elevation, and a complex momentum equation for the horizontal velocity. Both equations are nonlinear and the resulting system is solved using Picard iteration combined with a preconditioned biconjugate gradient (PBCG) method for the linearized subproblems. A subdomain-based parallel preconditioner is developed which uses incomplete LU factorization with thresholding (ILUT) methods within subdomains, overlapping ILUT factorizations for subdomain boundaries and under-relaxed iteration for the resulting block system. The method builds on techniques successfully applied to linear elements by introducing ordering and condensation techniques to handle uniform p refinement. The combined methods show good performance for a range of p (element order), h (element size), and N (number of processors). Performance and scalability results are presented for a field scale problem where up to 512 processors are used. © 1998 Elsevier Science Ltd. All rights reserved.
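SciPy packages the same two ingredients, an ILU-with-threshold factorization and a biconjugate-gradient-family Krylov solver, so the solver structure can be sketched compactly. The toy complex tridiagonal system below merely stands in for the discretized Helmholtz operator; no claim is made that it reproduces the paper's subdomain preconditioner.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy complex tridiagonal system standing in for a Helmholtz discretization.
n = 200
A = sp.diags([(4.0 + 0.5j) * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)],
             [0, -1, 1], format="csc")
b = np.ones(n, dtype=complex)

ilu = spla.spilu(A, drop_tol=1e-4)                 # incomplete LU with thresholding
M = spla.LinearOperator(A.shape, ilu.solve, dtype=complex)

x, info = spla.bicgstab(A, b, M=M)                 # preconditioned BiCG-stab
print(info, np.linalg.norm(A @ x - b))             # info == 0 means converged
```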
Predictors of Physical Altercation among Adolescents in Residential Substance Abuse Treatment
Crawley, Rachel D.; Becan, Jennifer Edwards; Knight, Danica Kalling; Joe, George W.; Flynn, Patrick M.
2014-01-01
This study tested the hypothesis that basic social information-processing components represented by family conflict, peer aggression, and pro-aggression cognitive scripts are related to aggression and social problems among adolescents in substance abuse treatment. The sample consisted of 547 adolescents in two community-based residential facilities. Correlation results indicated that more peer aggression is related to more pro-aggression scripts; scripts, peer aggression, and family conflict are associated with social problems; and in-treatment physical altercation involvement is predicted by higher peer aggression. Findings suggest that social information-processing components are valuable for treatment research. PMID:26622072
50 CFR 679.50 - Groundfish Observer Program.
Code of Federal Regulations, 2010 CFR
2010-10-01
... following: (A) Identification of the management, organizational structure, and ownership structure of the.../processors. A catcher/processor will be assigned to a fishery category based on the retained groundfish catch... in Federal waters will be assigned to a fishery category based on the retained groundfish catch...
Emergent Theorisations in Modelling the Teaching of Two Science Teachers
ERIC Educational Resources Information Center
Monteiro, Rute; Carrillo, Jose; Aguaded, Santiago
2008-01-01
The main goal of this study is to understand the teacher's thoughts and action when he/she is immersed in the activity of teaching. To do so, it describes the procedures used to model two teachers' practice with respect to the topic of Plant Diversity. Starting from a consideration of the theoretical constructs of script, routine and…
Creating Engaging Online Learning Material with the JSAV JavaScript Algorithm Visualization Library
ERIC Educational Resources Information Center
Karavirta, Ville; Shaffer, Clifford A.
2016-01-01
Data Structures and Algorithms are a central part of Computer Science. Due to their abstract and dynamic nature, they are a difficult topic to learn for many students. To alleviate these learning difficulties, instructors have turned to algorithm visualizations (AV) and AV systems. Research has shown that especially engaging AVs can have an impact…
Krahé, Barbara; Bieneck, Steffen; Scheinberger-Olwig, Renate
2007-11-01
The characteristic features of adolescents' sexual scripts were explored in 400 tenth and eleventh graders from Berlin, Germany. Participants rated the prototypical elements of three scripts for heterosexual interactions: (1) the prototypical script for the first consensual sexual intercourse with a new partner as pertaining to adolescents in general (general script); (2) the prototypical script for the first consensual sexual intercourse with a new partner as pertaining to themselves personally (individual script); and (3) the script for a nonconsensual sexual intercourse (rape script). Compared with the general script for the age group as a whole, the individual script contained fewer risk elements related to sexual aggression and portrayed more positive consequences of the sexual interaction. Few gender differences were found, and coital experience did not affect sexual scripts. The rape script was found to be close to the "real rape stereotype." The findings are discussed with respect to the role of sexual scripts as guidelines for behavior, particularly in terms of their significance for the prediction of sexual aggression.
Yes! An object-oriented compiler compiler (YOOCC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avotins, J.; Mingins, C.; Schmidt, H.
1995-12-31
Grammar-based processor generation is one of the most widely studied areas in language processor construction. However, there have been very few approaches to date that reconcile object-oriented principles, processor generation, and an object-oriented language. Pertinent here also is that developing a processor using the Eiffel Parse libraries currently requires far too much time to be expended on tasks that can be automated. For these reasons, we have developed YOOCC (Yes! an Object-Oriented Compiler Compiler), which produces a processor framework from a grammar using an enhanced version of the Eiffel Parse libraries, incorporating the ideas hypothesized by Meyer, Grape and Walden, as well as many others. Various essential changes have been made to the Eiffel Parse libraries. Examples are presented to illustrate the development of a processor using YOOCC, and it is concluded that the Eiffel Parse libraries are now not only an intelligent, but also a productive option for processor construction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang Meizhen; Shi Longzhao; Wang Yuxing
2006-08-15
An inherently nonlinear relation between the output current of the tetralateral position sensitive detector (PSD) and the position of the incident light spot has been found theoretically. Based on a single-chip microcomputer and the theoretical relation between output current and position, a new signal processor capable of correcting nonlinearity and reducing position measurement deviation of the tetralateral PSD was developed. A tetralateral PSD (S1200, 13×13 mm², Hamamatsu Photonics K.K.) was measured with the new signal processor, and a linear relation between the output position of the PSD and the incident position of the light spot was obtained. In the 60% range of the 13×13 mm² active area, the position nonlinearity (rms) was 0.15% and the position measurement deviation (rms) was ±20 μm. Compared with a traditional analog signal processor, the new signal processor offers better compatibility, lower cost, and higher precision, and is easier to interface.
NASA Astrophysics Data System (ADS)
Huang, Mei-Zhen; Shi, Long-Zhao; Wang, Yu-Xing; Ni, Yi; Li, Zhen-Qing; Ding, Hai-Feng
2006-08-01
An inherently nonlinear relation between the output current of the tetralateral position sensitive detector (PSD) and the position of the incident light spot has been found theoretically. Based on a single-chip microcomputer and the theoretical relation between output current and position, a new signal processor capable of correcting nonlinearity and reducing position measurement deviation of tetralateral PSD was developed. A tetralateral PSD (S1200, 13×13 mm², Hamamatsu Photonics K.K.) was measured with the new signal processor, and a linear relation between the output position of the PSD and the incident position of the light spot was obtained. In the 60% range of a 13×13 mm² active area, the position nonlinearity (rms) was 0.15% and the position measurement deviation (rms) was ±20 μm. Compared with a traditional analog signal processor, the new signal processor offers better compatibility, lower cost, and higher precision, and is easier to interface.
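The readout chain both abstracts describe is a difference-over-sum position estimate followed by a software correction of the inherent nonlinearity. The sketch below illustrates that structure only; the electrode naming, half-length, and polynomial coefficient are invented for the example and are not the paper's calibration.

```python
L = 6.5  # half of the 13 x 13 mm active area, in mm (illustrative)

def raw_position(i_x1, i_x2, i_y1, i_y2):
    """Standard difference-over-sum estimate from the four electrode currents."""
    x = L * (i_x2 - i_x1) / (i_x2 + i_x1)
    y = L * (i_y2 - i_y1) / (i_y2 + i_y1)
    return x, y

def corrected(x, y, k=0.02):
    """Odd-order polynomial correction of the pincushion-like nonlinearity."""
    return x * (1 + k * (x / L) ** 2), y * (1 + k * (y / L) ** 2)

print(corrected(*raw_position(1.0, 1.2, 0.9, 1.1)))
```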
Real-time phase correlation based integrated system for seizure detection
NASA Astrophysics Data System (ADS)
Romaine, James B.; Delgado-Restituto, Manuel; Leñero-Bardallo, Juan A.; Rodríguez-Vázquez, Ángel
2017-05-01
This paper reports a low-area, low-power, integer-based digital processor for the calculation of phase synchronization between two neural signals. The processor calculates the phase-frequency content of a signal by identifying the specific time periods associated with two consecutive minima. The simplicity of this phase-frequency content identifier allows the digital processor to utilize only basic digital blocks, such as registers, counters, adders and subtractors, without incorporating any complex multiplication and/or division algorithms. In fact, the processor, fabricated in a 0.18 μm CMOS process, only occupies an area of 0.0625 mm² and consumes 12.5 nW from a 1.2 V supply voltage when operated at 128 kHz. These low-area, low-power features make the proposed processor a valuable computing element in closed-loop neural prostheses for the treatment of neural diseases, such as epilepsy, or for extracting functional connectivity maps between different recording sites in the brain.
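The minima-based idea translates directly into integer arithmetic: two consecutive local minima bound one oscillation period, and phase advances linearly between them. A sketch under that reading follows (the signal and resolution are illustrative). The hardware avoids even the remaining division by counting within the period; the sketch trades that economy for clarity.

```python
import math

def local_minima(x):
    return [i for i in range(1, len(x) - 1) if x[i] < x[i - 1] and x[i] <= x[i + 1]]

def phase_at(x, t):
    """Integer phase (degrees) of sample index t from the bracketing minima."""
    m = local_minima(x)
    for a, b in zip(m, m[1:]):
        if a <= t < b:
            return 360 * (t - a) // (b - a)
    return None

sig = [int(100 * math.sin(2 * math.pi * n / 16)) for n in range(64)]
print(phase_at(sig, 24))
```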
Huang, Kuan-Ju; Shih, Wei-Yeh; Chang, Jui Chung; Feng, Chih Wei; Fang, Wai-Chi
2013-01-01
This paper presents a pipeline VLSI design of a fast singular value decomposition (SVD) processor for a real-time electroencephalography (EEG) system based on on-line recursive independent component analysis (ORICA). Since SVD is used frequently in computations of the real-time EEG system, a low-latency and high-accuracy SVD processor is essential. During the EEG system process, the proposed SVD processor aims to solve the diagonal, inverse, and inverse square root matrices of the target matrices in real time. Generally, SVD requires a huge amount of computation in hardware implementation. Therefore, this work proposes a novel design concept for data flow updating to assist the pipeline VLSI implementation. The SVD processor can greatly improve the feasibility of real-time EEG system applications such as brain computer interfaces (BCIs). The proposed architecture is implemented using TSMC 90 nm CMOS technology. The sample rate of the EEG raw data is 128 Hz. The core size of the SVD processor is 580 × 580 μm², and the operating frequency is 20 MHz. It consumes 0.774 mW of power during execution of the 8-channel EEG system.
Cheung, Kit; Schultz, Simon R; Luk, Wayne
2015-01-01
NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.
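One of the supported neuron types, the Izhikevich model, makes a compact software reference point for checking an accelerated implementation. A minimal Euler-integration sketch with the standard regular-spiking parameters is below; the input current and step size are illustrative.

```python
def izhikevich(I=10.0, steps=1000, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich model: v' = 0.04v^2 + 5v + 140 - u + I, u' = a(bv - u)."""
    v, u, spikes = c, b * c, []
    for n in range(steps):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike: record the time, then reset
            spikes.append(n * dt)
            v, u = c, u + d
    return spikes

print(len(izhikevich()), "spikes in 500 ms")
```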
Miniature Fuel Processors for Portable Fuel Cell Power Supplies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holladay, Jamie D.; Jones, Evan O.; Palo, Daniel R.
2003-06-02
Miniature and micro-scale fuel processors are discussed. The enabling technologies for these devices are the novel catalysts and the micro-technology-based designs. The novel catalyst allows for methanol reforming at high gas hourly space velocities of 50,000 hr-1 or higher, while maintaining a carbon monoxide level at 1% or less. The micro-technology-based designs enable the devices to be extremely compact and lightweight. The miniature fuel processors can nominally provide between 25-50 watts equivalent of hydrogen, which is ample for soldier or personal portable power supplies. The integrated processors have a volume less than 50 cm3, a mass less than 150 grams, and thermal efficiencies of up to 83%. With reasonable assumptions on fuel cell efficiencies, anode gas and water management, parasitic power loss, etc., the energy density was estimated at 1700 Whr/kg. The miniature processors have been demonstrated with a carbon monoxide clean-up method and a fuel cell stack. The micro-scale fuel processors have been designed to provide up to 0.3 watt equivalent of power with efficiencies over 20%. They have a volume of less than 0.25 cm3 and a mass of less than 1 gram.
Cheung, Kit; Schultz, Simon R.; Luk, Wayne
2016-01-01
NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation. PMID:26834542
A GPU accelerated PDF transparency engine
NASA Astrophysics Data System (ADS)
Recker, John; Lin, I.-Jong; Tastl, Ingeborg
2011-01-01
As commercial printing presses become faster, cheaper, and more efficient, so too must the Raster Image Processors (RIPs) that prepare data for them to print. Digital press RIPs, however, have been challenged, on the one hand, to meet the ever-increasing print performance of the latest digital presses and, on the other, to process increasingly complex documents with transparent layers and embedded ICC profiles. This paper explores the challenges encountered when implementing a GPU accelerated driver for the open source Ghostscript Adobe PostScript and PDF language interpreter targeted at accelerating PDF transparency for high speed commercial presses. It further describes our solution, including an image memory manager for tiling input and output images and documents, a PDF compatible multiple image layer blending engine, and a GPU accelerated ICC v4 compatible color transformation engine. The result, we believe, is the foundation for a scalable, efficient, distributed RIP system that can meet current and future RIP requirements for a wide range of commercial digital presses.
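At the heart of any transparency engine is layer compositing; the Porter-Duff "over" operator is the baseline case (PDF adds many blend modes on top of it). A minimal sketch with illustrative array shapes:

```python
import numpy as np

def over(src_rgb, src_a, dst_rgb, dst_a):
    """Porter-Duff 'over': composite src onto dst; all arrays in [0, 1]."""
    out_a = src_a + dst_a * (1 - src_a)
    out_rgb = (src_rgb * src_a + dst_rgb * dst_a * (1 - src_a)) / np.maximum(out_a, 1e-6)
    return out_rgb, out_a

h, w = 2, 2
red = np.tile([1.0, 0.0, 0.0], (h, w, 1))      # 50%-opaque red layer
blue = np.tile([0.0, 0.0, 1.0], (h, w, 1))     # opaque blue background
rgb, a = over(red, np.full((h, w, 1), 0.5), blue, np.ones((h, w, 1)))
print(rgb[0, 0], a[0, 0])                      # [0.5 0. 0.5] and alpha 1.0
```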
Complementary filter implementation in the dynamic language Lua
NASA Astrophysics Data System (ADS)
Sadowski, Damian; Sawicki, Aleksander; Lukšys, Donatas; Slanina, Zdenek
2017-08-01
The article presents the complementary filter implementation, used for estimation of the pitch angle, in the Lua scripting language. Inertial sensors, an accelerometer and a gyroscope, were used in the study. Methods of angle estimation using acceleration and angular velocity sensors are presented in the theoretical part of the article. The operating principle of the complementary filter is presented. A Butterworth analogue filter prototype and its digital equivalent have been designed. Practical implementation was performed with the use of a PC and a DISCOVERY evaluation board equipped with an STM32F01 processor, an L3GD20 gyroscope, and an LSM303DLHC accelerometer. Measurement data was transmitted over a UART serial interface, then processed with the use of Lua software and the luaRS232 programming library. Practical implementation was divided into two stages. In the first part, measurement data was recorded and then processed with the help of a complementary filter. In the second step, the coroutines mechanism was used to filter data in real time.
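The filter itself is one line per sample: blend the gyro-integrated angle with the accelerometer tilt angle. A minimal sketch follows; the coefficient, sample rate, and axis convention are illustrative assumptions.

```python
import math

ALPHA, DT = 0.98, 0.01   # filter weight and 100 Hz sample period (illustrative)

def pitch_from_accel(ax, ay, az):
    """Tilt angle in degrees from the measured gravity vector."""
    return math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))

def complementary_step(angle, gyro_dps, ax, ay, az):
    """High-pass the gyro path, low-pass the accelerometer path."""
    return ALPHA * (angle + gyro_dps * DT) + (1 - ALPHA) * pitch_from_accel(ax, ay, az)

angle = 0.0
for _ in range(100):                    # level, static pose: converges to 0 deg
    angle = complementary_step(angle, 0.0, 0.0, 0.0, 1.0)
print(round(angle, 4))
```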
NASA Astrophysics Data System (ADS)
Adler, D. S.
2000-12-01
The Science Planning and Scheduling Team (SPST) of the Space Telescope Science Institute (STScI) has historically operated exclusively under VMS. Due to diminished support for VMS-based platforms at STScI, SPST is in the process of transitioning to Unix operations. In the summer of 1999, SPST selected Python as the primary scripting language for the operational tools and began translation of the VMS DCL code. As of October 2000, SPST has installed a utility library of 16 modules consisting of 8000 lines of code and 80 Python tools consisting of 13000 lines of code. All tasks related to calendar generation have been switched to Unix operations. Current work focuses on translating the tools used to generate the Science Mission Specifications (SMS). The software required to generate the Mission Schedule and Command Loads (PASS), maintained by another team at STScI, will take longer to translate than the rest of the SPST operational code. SPST is planning on creating tools to access PASS from Unix in the short term. We are on schedule to complete the work needed to fully transition SPST to Unix operations (while remotely accessing PASS on VMS) by the fall of 2001.
Technical development of PubMed interact: an improved interface for MEDLINE/PubMed searches.
Muin, Michael; Fontelo, Paul
2006-11-03
The project aims to create an alternative search interface for MEDLINE/PubMed that may provide assistance to the novice user and added convenience to the advanced user. An earlier version of the project was the 'Slider Interface for MEDLINE/PubMed searches' (SLIM) which provided JavaScript slider bars to control search parameters. In this new version, recent developments in Web-based technologies were implemented. These changes may prove to be even more valuable in enhancing user interactivity through client-side manipulation and management of results. PubMed Interact is a Web-based MEDLINE/PubMed search application built with HTML, JavaScript and PHP. It is implemented on a Windows Server 2003 with Apache 2.0.52, PHP 4.4.1 and MySQL 4.1.18. PHP scripts provide the backend engine that connects with E-Utilities and parses XML files. JavaScript manages client-side functionalities and converts Web pages into interactive platforms using dynamic HTML (DHTML), Document Object Model (DOM) tree manipulation and Ajax methods. With PubMed Interact, users can limit searches with JavaScript slider bars, preview result counts, delete citations from the list, display and add related articles and create relevance lists. Many interactive features occur client-side, allowing instant feedback without reloading or refreshing the page and resulting in a more efficient user experience. PubMed Interact is a highly interactive Web-based search application for MEDLINE/PubMed that explores recent trends in Web technologies like DOM tree manipulation and Ajax. It may become a valuable technical development for online medical search applications.
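The E-Utilities endpoint that such a backend talks to is public and can be exercised directly; the query below follows the documented esearch interface, with an illustrative search term. Note that running it performs a live request.

```python
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "db": "pubmed",
    "term": "laser speckle contrast imaging",   # illustrative query
    "retmax": 5,
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
with urllib.request.urlopen(url) as resp:
    print(resp.read()[:300])   # XML containing <Count> and an <IdList> of PMIDs
```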
NASA Astrophysics Data System (ADS)
Arestova, M. L.; Bykovskii, A. Yu
1995-10-01
An architecture is proposed for a specialised optoelectronic multivalued logic processor based on the Allen-Givone algebra. The processor is intended for multiparametric processing of data arriving from a large number of sensors or for tackling spectral analysis tasks. The processor architecture makes it possible to obtain an approximate general estimate of the state of an object being diagnosed on a p-level scale. Optoelectronic systems are proposed for MAXIMUM, MINIMUM, and LITERAL logic gates, based on optical-frequency encoding of logic levels. Corresponding logic gates form a complete set of logic functions in the Allen-Givone algebra.
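The three gate types have simple algebraic readings over logic levels 0..p-1: MAXIMUM and MINIMUM generalize OR and AND, and a LITERAL is a window function that outputs the top level inside an interval and 0 outside. A sketch with p = 4; the level count and window bounds are illustrative.

```python
P = 4  # number of logic levels, 0 .. P-1 (illustrative)

def maximum(x, y):
    return max(x, y)          # multivalued OR

def minimum(x, y):
    return min(x, y)          # multivalued AND

def literal(x, a, b):
    """Window function: full level inside [a, b], zero outside."""
    return P - 1 if a <= x <= b else 0

print(maximum(1, 3), minimum(1, 3), literal(2, 1, 2), literal(3, 1, 2))
```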
The SPAR thermal analyzer: Present and future
NASA Astrophysics Data System (ADS)
Marlowe, M. B.; Whetstone, W. D.; Robinson, J. C.
The SPAR thermal analyzer, a system of finite-element processors for performing steady-state and transient thermal analyses, is described. The processors communicate with each other through the SPAR random access data base. As each processor is executed, all pertinent source data is extracted from the data base and results are stored in the data base. Steady state temperature distributions are determined by a direct solution method for linear problems and a modified Newton-Raphson method for nonlinear problems. An explicit and several implicit methods are available for the solution of transient heat transfer problems. Finite element plotting capability is available for model checkout and verification.
The SPAR thermal analyzer: Present and future
NASA Technical Reports Server (NTRS)
Marlowe, M. B.; Whetstone, W. D.; Robinson, J. C.
1982-01-01
The SPAR thermal analyzer, a system of finite-element processors for performing steady-state and transient thermal analyses, is described. The processors communicate with each other through the SPAR random access data base. As each processor is executed, all pertinent source data is extracted from the data base and results are stored in the data base. Steady state temperature distributions are determined by a direct solution method for linear problems and a modified Newton-Raphson method for nonlinear problems. An explicit and several implicit methods are available for the solution of transient heat transfer problems. Finite element plotting capability is available for model checkout and verification.
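For the nonlinear steady-state case, the Newton-Raphson iteration mentioned above can be illustrated on a one-node energy balance with conduction plus radiation; all coefficients below are invented for the example and are not SPAR data.

```python
# One-node balance: f(T) = k*(T_env - T) + s_eps*(T_env**4 - T**4) + q = 0
k, s_eps, T_env, q = 1.0, 1e-8, 300.0, 500.0   # illustrative coefficients

def f(T):
    return k * (T_env - T) + s_eps * (T_env**4 - T**4) + q

def df(T):
    return -k - 4 * s_eps * T**3

T = T_env                                       # initial guess
for _ in range(50):
    step = f(T) / df(T)
    T -= step
    if abs(step) < 1e-9:                        # converged
        break
print(round(T, 2), "K")
```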
Steele, Ryan D.; Waters, Theodore E. A.; Bost, Kelly K.; Vaughn, Brian E.; Truitt, Warren; Waters, Harriet S.; Booth-LaForce, Cathryn; Roisman, Glenn I.
2015-01-01
Based on a sub-sample (N = 673) of the NICHD Study of Early Child Care and Youth Development (SECCYD) cohort, this paper reports data from a follow-up assessment at age 18 years on the antecedents of secure base script knowledge, as reflected in the ability to generate narratives in which attachment-related difficulties are recognized, competent help is provided, and the problem is resolved. Secure base script knowledge was (a) modestly to moderately correlated with more well established assessments of adult attachment, (b) associated with mother-child attachment in the first three years of life and with observations of maternal and paternal sensitivity from childhood to adolescence, and (c) partially accounted for associations previously documented in the SECCYD cohort between early caregiving experiences and Adult Attachment Interview states of mind (Booth-LaForce & Roisman, 2014) as well as self-reported attachment styles (Fraley, Roisman, Booth-LaForce, Owen, & Holland, 2013). PMID:25264703
NASA Astrophysics Data System (ADS)
Andreeva, J.; Dzhunov, I.; Karavakis, E.; Kokoszkiewicz, L.; Nowotka, M.; Saiz, P.; Tuckett, D.
2012-12-01
Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provides an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Computing Grid. We demonstrate the benefits of the approach for large-scale JavaScript web applications in this context by examining the design of several Experiment Dashboard applications for data processing, data transfer and site status monitoring, and by showing how they have been ported for different virtual organisations and technologies.
Line Segmentation in Handwritten Assamese and Meetei Mayek Script Using Seam Carving Based Algorithm
NASA Astrophysics Data System (ADS)
Kumar, Chandan Jyoti; Kalita, Sanjib Kr.
Line segmentation is a key stage in an Optical Character Recognition (OCR) system. This paper primarily concerns the problem of text line extraction on color and grayscale manuscript pages in two major North-east Indian regional scripts, Assamese and Meetei Mayek. Line segmentation of handwritten text in Assamese and Meetei Mayek scripts is an uphill task, primarily because of the structural features of both scripts and varied writing styles. In this paper, line segmentation of a document image is achieved using the seam carving technique. Researchers have used this approach for content-aware resizing of an image, and many are now applying seam carving to the line segmentation phase of OCR. Although it is a language-independent technique, most experiments have been done on Arabic, Greek, German, and Chinese scripts. Two types of seams are generated: medial seams approximate the orientation of each text line, and separating seams separate one line of text from another. Experiments were performed extensively on various types of documents, and a detailed analysis of the evaluations shows that the algorithm performs well even for documents with multiple scripts. In this paper, we present a comparative study of the accuracy of this method over different types of data.
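Both seam types come down to the classic seam-carving dynamic program: find a minimum-cost path across an energy map in which ink pixels are expensive, so separating seams thread between text lines. Below is a sketch of the column-wise DP (for the horizontal separating seams of line segmentation, apply it to the transposed energy map); the energy definition is an illustrative stand-in.

```python
import numpy as np

def min_vertical_seam(energy):
    """Minimum-energy top-to-bottom 8-connected path; one column per row."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for r in range(1, h):
        for c in range(w):
            lo, hi = max(c - 1, 0), min(c + 2, w)
            cost[r, c] += cost[r - 1, lo:hi].min()     # cheapest predecessor
    seam = [int(cost[-1].argmin())]
    for r in range(h - 2, -1, -1):                     # backtrack
        c = seam[-1]
        lo = max(c - 1, 0)
        seam.append(lo + int(cost[r, lo:min(c + 2, w)].argmin()))
    return seam[::-1]

page = np.random.rand(40, 30)                          # stand-in energy map
print(min_vertical_seam(page)[:5])
```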
Parallel processor for real-time structural control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tise, B.L.
1992-01-01
A parallel processor that is optimized for real-time linear control has been developed. This modular system consists of A/D modules, D/A modules, and floating-point processor modules. The scalable processor uses up to 1,000 Motorola DSP96002 floating-point processors for a peak computational rate of 60 GFLOPS. Sampling rates up to 625 kHz are supported by this analog-in to analog-out controller. The high processing rate and parallel architecture make this processor suitable for computing state-space equations and other multiply/accumulate-intensive digital filters. Processor features include 14-bit conversion devices, low input-output latency, 240 Mbyte/s synchronous backplane bus, low-skew clock distribution circuit, VME connection to host computer, parallelizing code generator, and look-up tables for actuator linearization. This processor was designed primarily for experiments in structural control. The A/D modules sample sensors mounted on the structure and the floating-point processor modules compute the outputs using the programmed control equations. The outputs are sent through the D/A module to the power amps used to drive the structure's actuators. The host computer is a Sun workstation. An OpenWindows-based control panel is provided to facilitate data transfer to and from the processor, as well as to control the operating mode of the processor. A diagnostic mode is provided to allow stimulation of the structure and acquisition of the structural response via sensor inputs.
Processing techniques for software based SAR processors
NASA Technical Reports Server (NTRS)
Leung, K.; Wu, C.
1983-01-01
Software SAR processing techniques defined to treat Shuttle Imaging Radar-B (SIR-B) data are reviewed. The algorithms are devised for data processing procedure selection, SAR correlation function implementation, multiple array processor utilization, corner turning, variable reference length azimuth processing, and range migration handling. The Interim Digital Processor (IDP), originally implemented for handling Seasat SAR data, has been adapted for the SIR-B and offers a resolution of 100 km using a processing procedure based on the fast Fourier transform (FFT) fast-correlation approach. Peculiarities of the Seasat SAR data processing requirements are reviewed, along with modifications introduced for the SIR-B. An Advanced Digital SAR Processor (ADSP) is under development for use with the SIR-B in the 1986 time frame as an upgrade for the IDP, which will be in service in 1984-5.
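The fast-correlation core of such a processor is frequency-domain matched filtering: transform, multiply by the conjugate reference spectrum, inverse transform. A generic sketch follows; the chirp parameters are invented, and this is not the IDP's code.

```python
import numpy as np

def fast_correlate(signal, reference):
    """Linear cross-correlation via FFT (matched filtering)."""
    n = len(signal) + len(reference) - 1
    nfft = 1 << (n - 1).bit_length()          # next power of two
    S = np.fft.fft(signal, nfft)
    R = np.fft.fft(reference, nfft)
    return np.fft.ifft(S * np.conj(R))[:n]

t = np.linspace(0.0, 1.0, 512)
chirp = np.exp(1j * np.pi * 200.0 * t**2)     # linear-FM reference (illustrative)
echo = np.concatenate([np.zeros(100), chirp, np.zeros(100)])
peak = int(np.abs(fast_correlate(echo, chirp)).argmax())
print("compressed target at sample", peak)    # 100
```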
Parallel processing approach to transform-based image coding
NASA Astrophysics Data System (ADS)
Normile, James O.; Wright, Dan; Chu, Ken; Yeh, Chia L.
1991-06-01
This paper describes a flexible parallel processing architecture designed for use in real-time video processing. The system consists of floating-point DSP processors connected to each other via fast serial links; each processor has access to a globally shared memory. A multiple-bus architecture in combination with a dual-ported memory allows communication with a host control processor. The system has been applied to prototyping of video compression and decompression algorithms. The decomposition of transform-based algorithms for decompression into a form suitable for parallel processing is described. A technique for automatic load balancing among the processors is developed and discussed, and results are presented with image statistics and data rates. Finally, techniques for accelerating the system throughput are analyzed and results from the application of one such modification are described.
General optical discrete z transform: design and application.
Ngo, Nam Quoc
2016-12-20
This paper presents a generalization of the discrete z transform algorithm, the general optical discrete z transform (GOD-ZT). It is shown that the GOD-ZT algorithm is a generalization of several important conventional discrete transforms. Based on the GOD-ZT algorithm, a tunable GOD-ZT processor is synthesized using the silica-based finite impulse response transversal filter. To demonstrate the effectiveness of the method, the design and simulation of a tunable optical discrete Fourier transform (ODFT) processor as a special case of the synthesized GOD-ZT processor is presented. It is also shown that the ODFT processor can function as a real-time optical spectrum analyzer. The tunable ODFT has an important potential application as a tunable optical demultiplexer at the receiver end of an optical orthogonal frequency-division multiplexing transmission system.
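The special-case relationship is easy to verify numerically: evaluating the discrete z transform at the N equally spaced unit-circle points z_k = exp(j*2*pi*k/N) reproduces the DFT, which is what the synthesized ODFT processor exploits. A short check:

```python
import numpy as np

def dzt(x, zpoints):
    """Evaluate X(z) = sum_n x[n] * z**(-n) at each requested z."""
    n = np.arange(len(x))
    return np.array([np.sum(x * z ** (-n)) for z in zpoints])

x = np.array([1.0, 2.0, 3.0, 4.0])
zk = np.exp(2j * np.pi * np.arange(4) / 4)     # unit-circle evaluation points
print(np.allclose(dzt(x, zk), np.fft.fft(x)))  # True: the DFT is a special case
```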
System and method for controlling power consumption in a computer system based on user satisfaction
Yang, Lei; Dick, Robert P; Chen, Xi; Memik, Gokhan; Dinda, Peter A; Shy, Alex; Ozisikyilmaz, Berkin; Mallik, Arindam; Choudhary, Alok
2014-04-22
Systems and methods for controlling power consumption in a computer system. For each of a plurality of interactive applications, the method changes a frequency at which a processor of the computer system runs, receives an indication of user satisfaction, determines a relationship between the changed frequency and the user satisfaction of the interactive application, and stores the determined relationship information. The determined relationship can distinguish between different users and different interactive applications. A frequency may be selected from the discrete frequencies at which the processor of the computer system runs based on the determined relationship information for a particular user and a particular interactive application running on the processor of the computer system. The processor may be adapted to run at the selected frequency.
Managing an archive of weather satellite images
NASA Technical Reports Server (NTRS)
Seaman, R. L.
1992-01-01
The author's experiences of building and maintaining an archive of hourly weather satellite pictures at NOAO are described. This archive has proven very popular with visiting and staff astronomers - especially on windy days and cloudy nights. Given access to a source of such pictures, a suite of simple shell and IRAF CL scripts can provide a great deal of robust functionality with little effort. These pictures and associated data products such as surface analysis (radar) maps and National Weather Service forecasts are updated hourly at anonymous ftp sites on the Internet, although your local Atmospheric Sciences Department may prove to be a more reliable source. The raw image formats are unfamiliar to most astronomers, but reading them into IRAF is straightforward. Techniques for performing this format conversion at the host computer level are described which may prove useful for other chores. Pointers are given to sources of data and of software, including a package of example tools. These tools include shell and Perl scripts for downloading pictures, maps, and forecasts, as well as IRAF scripts and host level programs for translating the images into IRAF and GIF formats and for slicing & dicing the resulting images. Hints for displaying the images and for making hardcopies are given.
Portable laser speckle perfusion imaging system based on digital signal processor.
Tang, Xuejun; Feng, Nengyun; Sun, Xiaoli; Li, Pengcheng; Luo, Qingming
2010-12-01
The ability to monitor blood flow in vivo is of major importance in clinical diagnosis and in basic research in the life sciences. As a noninvasive full-field technique without the need for scanning, laser speckle contrast imaging (LSCI) is widely used to study blood flow with high spatial and temporal resolution. Current LSCI systems rely on personal computers for image processing and are correspondingly large, which potentially limits widespread clinical utility. A portable laser speckle contrast imaging system that does not compromise processing efficiency is therefore crucial for clinical diagnosis. However, the processing of laser speckle contrast images is time-consuming due to the heavy computation required for large volumes of high-resolution image data. To address this problem, a portable laser speckle perfusion imaging system based on a digital signal processor (DSP), together with an algorithm suited to the DSP, is described. With the highly integrated DSP and this algorithm, we have markedly reduced the size, weight, and energy consumption of the system while preserving high processing speed. In vivo experiments demonstrate that our portable laser speckle perfusion imaging system can obtain blood flow images at 25 frames per second at a resolution of 640 × 480 pixels. Its portability and light weight make it adaptable to a wide variety of settings, such as the research laboratory, operating room, ambulance, and even a disaster site.
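The per-frame computation behind such a system is the spatial speckle contrast, K = sigma/mean over a small sliding window. A reference sketch follows; the 7 x 7 window follows common LSCI practice and, like the random frame, is an assumption here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(img, win=7):
    """Spatial speckle contrast K = local std / local mean."""
    img = img.astype(float)
    m = uniform_filter(img, win)                # local mean
    m2 = uniform_filter(img * img, win)         # local mean of squares
    sigma = np.sqrt(np.maximum(m2 - m * m, 0))  # local standard deviation
    return sigma / np.maximum(m, 1e-9)

frame = np.random.rand(480, 640) * 255.0        # stand-in for one raw speckle frame
print(round(speckle_contrast(frame).mean(), 3))
```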
Cabrera, Álvaro Cortés; Gil-Redondo, Rubén; Perona, Almudena; Gago, Federico; Morreale, Antonio
2011-09-01
A graphical user interface (GUI) for our previously published virtual screening (VS) and data management platform VSDMIP (Gil-Redondo et al. J Comput Aided Mol Design, 23:171-184, 2009) that has been developed as a plugin for the popular molecular visualization program PyMOL is presented. In addition, a ligand-based VS module (LBVS) has been implemented that complements the already existing structure-based VS (SBVS) module and can be used in those cases where the receptor's 3D structure is not known or for pre-filtering purposes. This updated version of VSDMIP is placed in the context of similar available software and its LBVS and SBVS capabilities are tested here on a reduced set of the Directory of Useful Decoys database. Comparison of results from both approaches confirms the trend found in previous studies that LBVS outperforms SBVS. We also show that by combining LBVS and SBVS, and using a cluster of ~100 modern processors, it is possible to perform complete VS studies of several million molecules in less than a month. As the main processes in VSDMIP are 100% scalable, more powerful processors and larger clusters would notably decrease this time span. The plugin is distributed under an academic license upon request from the authors. © Springer Science+Business Media B.V. 2011
AMS data production facilities at science operations center at CERN
NASA Astrophysics Data System (ADS)
Choutko, V.; Egorov, A.; Eline, A.; Shan, B.
2017-10-01
The Alpha Magnetic Spectrometer (AMS) is a high energy physics experiment on board the International Space Station (ISS). This paper presents the hardware and software facilities of the Science Operations Center (SOC) at CERN. Data Production is built around a production server, a scalable distributed service which links together a set of different programming modules for science data transformation and reconstruction. The server has the capacity to manage 1000 parallel job producers, i.e. up to 32K logical processors. A monitoring and management tool with a production GUI is also described.
A word processor optimized for preparing journal articles and student papers.
Wolach, A H; McHale, M A
2001-11-01
A new Windows-based word processor for preparing journal articles and student papers is described. In addition to standard features found in word processors, the present word processor provides specific help in preparing manuscripts. Clicking on "Reference Help (APA Form)" in the "File" menu provides a detailed help system for entering the references in a journal article. Clicking on "Examples and Explanations of APA Form" provides a help system with examples of the various sections of a review article, journal article that has one experiment, or journal article that has two or more experiments. The word processor can automatically place the manuscript page header and page number at the top of each page using the form required by APA and Psychonomic Society journals. The "APA Form" submenu of the "Help" menu provides detailed information about how the word processor is optimized for preparing articles and papers.
System support software for the Space Ultrareliable Modular Computer (SUMC)
NASA Technical Reports Server (NTRS)
Hill, T. E.; Hintze, G. C.; Hodges, B. C.; Austin, F. A.; Buckles, B. P.; Curran, R. T.; Lackey, J. D.; Payne, R. E.
1974-01-01
The highly transportable programming system designed and implemented to support the development of software for the Space Ultrareliable Modular Computer (SUMC) is described. The SUMC system support software consists of program modules called processors. The initial set of processors consists of the supervisor, the general purpose assembler for SUMC instruction and microcode input, linkage editors, an instruction level simulator, a microcode grid print processor, and user oriented utility programs. A FORTRAN 4 compiler is undergoing development. The design facilitates the addition of new processors with minimum effort and provides the user with quasi-host independence on the ground-based operational software development computer. Additional capability is provided to accommodate variations in the SUMC architecture without consequent major modifications to the initial processors.
Electrical Prototype Power Processor for the 30-cm Mercury electric propulsion engine
NASA Technical Reports Server (NTRS)
Biess, J. J.; Frye, R. J.
1978-01-01
An Electrical Prototype Power Processor has been designed to the latest electrical and performance requirements for a flight-type 30-cm ion engine and includes all the necessary power, command, telemetry and control interfaces for a typical electric propulsion subsystem. The power processor was configured into seven separate mechanical modules that would allow subassembly fabrication, test and integration into a complete power processor unit assembly. The conceptual mechanical packaging of the electrical prototype power processor unit demonstrated the relative location of power, high voltage and control electronic components to minimize electrical interactions and to provide adequate thermal control in a vacuum environment. Thermal control was accomplished with a heat pipe simulator attached to the base of the modules.
Drama as Arts-Based Pedagogy and Research: Media Advertising and Inner-City Youth.
ERIC Educational Resources Information Center
Conrad, Diane
2002-01-01
A media unit for inner city high school students examined the relationship between youth and advertising by using drama as the medium through which learning and research occurred. Data were presented through scripted dramatic scenes. How the interpretation and generation of data were embedded in the process of writing these scripts is explained.…
A Model for Flexibly Editing CSCL Scripts
ERIC Educational Resources Information Center
Sobreira, Pericles; Tchounikine, Pierre
2012-01-01
This article presents a model whose primary concern and design rationale is to offer users (teachers) with basic ICT skills an intuitive, easy, and flexible way of editing scripts. The proposal is based on relating an end-user representation as a table and a machine model as a tree. The table-tree model introduces structural expressiveness and…
ERIC Educational Resources Information Center
Hong, Zeng-Wei; Chen, Yen-Lin; Lan, Chien-Ho
2014-01-01
Animated agents are virtual characters who demonstrate facial expressions, gestures, movements, and speech to facilitate students' engagement in the learning environment. Our research developed a courseware that supports a XML-based markup language and an authoring tool for teachers to script animated pedagogical agents in teaching materials. The…
ERIC Educational Resources Information Center
Ladd, Melissa
2016-01-01
This study strived to determine the effectiveness of the AR phonics program relative to the effectiveness of the scripted phonics program for developing the letter identification, sound verbalization, and blending abilities of kindergarten students considered at-risk based on state assessments. The researcher was interested in pretest and posttest…
ERIC Educational Resources Information Center
Toppel, Kathryn Elizabeth
2013-01-01
The increased focus on the implementation of scientifically research-based instruction as an outcome of No Child Left Behind ("Understanding NCLB," 2007) has resulted in the widespread use of scripted reading curricula (Dewitz, Leahy, Jones, and Sullivan, 2010), which typically represents Eurocentric and middle class forms of discourse,…
49 CFR Appendix B to Part 236 - Risk Assessment Criteria
Code of Federal Regulations, 2011 CFR
2011-10-01
..., exposure scenarios, and consequences that are related as described in this part. For the full risk... subsystem or component in the risk assessment. (f) How are processor-based subsystems/components assessed? (1) An MTTHE value must be calculated for each processor-based subsystem or component, or both...
SSC 254 Screen-Based Word Processors: Production Tests. The Lanier Word Processor.
ERIC Educational Resources Information Center
Moyer, Ruth A.
Designed for use in Trident Technical College's Secretarial Lab, this series of 12 production tests focuses on the use of the Lanier Word Processor for a variety of tasks. In tests 1 and 2, students are required to type and print out letters. Tests 3 through 8 require students to reformat a text; make corrections on a letter; divide and combine…
Utilizing the ISS Mission as a Testbed to Develop Cognitive Communications Systems
NASA Technical Reports Server (NTRS)
Jackson, Dan
2016-01-01
The ISS provides an excellent opportunity for pioneering artificial intelligence software to meet the challenges of real-time communications (comm) link management. This opportunity empowers the ISS Program to forge a testbed for developing cognitive communications systems for the benefit of the ISS mission, manned Low Earth Orbit (LEO) science programs and future planetary exploration programs. In November 1998, the Flight Operations Directorate (FOD) started the ISS Antenna Manager (IAM) project to develop a single processor supporting multiple comm satellite tracking for two different antenna systems. Further, the processor was developed to be highly adaptable as it supported the ISS mission through all assembly stages. The ISS mission mandated communications specialists with complete knowledge of when the ISS was about to lose or gain comm link service. This specialty also mandated cognizance of large sun-tracking solar arrays and thermal management panels in addition to the highly-dynamic satellite service schedules and rise/set tables. This mission requirement makes the ISS the ideal communications management analogue for future LEO space station and long-duration planetary exploration missions. Future missions, with their precision-pointed, dynamic, laser-based comm links, require complete autonomy for managing high-data-rate communications systems. Development of cognitive communications management systems that permit any crew member or payload science specialist, regardless of experience level, to control communications is one of the greater benefits the ISS can offer new space exploration programs. The IAM project met a new mission requirement never previously levied against US space-borne communications systems management: process and display the orientation of large solar arrays and thermal control panels based on real-time joint angle telemetry. However, IAM leaves the actual communications availability assessment to human judgement, which introduces unwanted variability because each specialist has a different core of experience with comm link performance. Because the ISS utilizes two different frequency bands, dynamic structure can be occasionally translucent at one frequency while it can completely interdict service at the other frequency. The impact of articulating structure on the comm link can depend on its orientation at the time it impinges on the link. It can become easy for a human specialist to cross-associate experience at one frequency with experience at the other frequency. Additionally, the specialist's experience is incremental, occurring one nine-hour shift at a time. Only the IAM processor experiences the complete 24x7x365 communications link performance for both communications links, but it has no "learning capability." If the IAM processor could be endowed with a cognitive ability to remember past structure-induced comm link outages, based on its knowledge of the ISS position, attitude, communications gear, array joint angles and tracking accuracy, it could convey such experience to the human operator. It could also use its learned communications link behaviors to accurately convey the availability of future communications sessions. Further, the tool could remember how accurately or inaccurately it predicted availability and correct future predictions based on past performance. The IAM tool could learn frequency-specific impacts due to spacecraft structures and pass that information along as "experience."
Such development would provide a single artificial intelligence processor offering two different experience bases. If it also "knew" the satellite service schedule, it could distinguish structure blockage from schedule or planet blockage and then quickly switch to another satellite. Alternatively, just as a human operator could judge, a cognizant comm system based on the IAM model could "know" that the blockage is not going to last very long and continue tracking a comm satellite, waiting for it to track away from structure. Ultimately, once this capability was fully developed and tested in the Mission Control Center, it could be transferred on-orbit to support development of operations concepts that include more advanced cognitive communications systems. Future applications of this capability are easily foreseen because even more dynamic satellite constellations with more nodes and greater capability are coming. Currently, the ISS fully employs its high-data-rate return link for harvesting payload science. In the coming months, it will double that data rate and is forecast to fully utilize that capability. Already there is talk of an upgrade that quadruples the current data rate allocated to ISS payload science before the end of its mission, and laser comm links have already been tested from the ISS. Every data rate upgrade mandates more complicated and sensitive communications equipment, which implies greater expertise invested in the human operator. Future on-orbit cognizant comm systems will be needed to meet greater performance demands aboard larger, far more complicated spacecraft. In the LEO environment, the old-style one-satellite-per-spacecraft operations concept will give way to a new concept of a single customer spacecraft simultaneously using multiple comm satellites. Much more highly-dynamic manned LEO missions with decades of crew members potentially increase the demand for communications link performance. A cognizant on-board communications system will meet advanced communications demands from future LEO missions and future planetary missions. The ISS has fledgling components of future exploration programs, both LEO and planetary. Further, the Flight Operations Directorate, through the IAM project, has already begun to develop a communications management system that attempts to solve advanced problems ideally represented by dynamic structure impacting scheduled satellite service. With an earnest project to integrate artificial intelligence into the IAM processor, the ISS Program could develop a cognizant communications system that could be adapted and transferred to future on-orbit avionics designs.
Soil CO2 flux from three ecosystems in tropical peatland of Sarawak, Malaysia
NASA Astrophysics Data System (ADS)
Melling, Lulie; Hatano, Ryusuke; Goh, Kah Joo
2005-02-01
Soil CO2 flux was measured monthly over a year from tropical peatland of Sarawak, Malaysia using a closed-chamber technique. The soil CO2 flux ranged from 100 to 533 mg C m⁻² h⁻¹ for the forest ecosystem, 63 to 245 mg C m⁻² h⁻¹ for the sago and 46 to 335 mg C m⁻² h⁻¹ for the oil palm. Based on principal component analysis (PCA), the environmental variables over all sites could be classified into three components, namely, climate, soil moisture and soil bulk density, which accounted for 86% of the seasonal variability. A regression tree approach showed that CO2 flux in each ecosystem was related to different underlying environmental factors. They were relative humidity for forest, soil temperature at 5 cm for sago and water-filled pore space for oil palm. On an annual basis, the soil CO2 flux was highest in the forest ecosystem with an estimated production of 2.1 kg C m⁻² yr⁻¹, followed by oil palm at 1.5 kg C m⁻² yr⁻¹ and sago at 1.1 kg C m⁻² yr⁻¹. The different dominant controlling factors in CO2 flux among the studied ecosystems suggested that land use affected the exchange of CO2 between tropical peatland and the atmosphere.
Event processing in X-IFU detector onboard Athena.
NASA Astrophysics Data System (ADS)
Ceballos, M. T.; Cobos, B.; van der Kuurs, J.; Fraga-Encinas, R.
2015-05-01
The X-ray Observatory ATHENA was proposed in April 2014 as the mission to implement the science theme "The Hot and Energetic Universe" selected by ESA for L2 (the second Large-class mission in ESA's Cosmic Vision science programme). One of the two X-ray detectors designed to be onboard ATHENA is X-IFU, a cryogenic microcalorimeter based on Transition Edge Sensor (TES) technology that will provide spatially resolved high-resolution spectroscopy. X-IFU will be developed by a consortium of European research institutions currently from France (leadership), Italy, The Netherlands, Belgium, UK, Germany and Spain. From Spain, IFCA (CSIC-UC) is involved in the Digital Readout Electronics (DRE) unit of the X-IFU detector, in particular in the Event Processor Subsystem. We at IFCA are in charge of the development and implementation in the DRE unit of the Event Processing algorithms, designed to recognize, from a noisy signal, the intensity pulses generated by the absorption of the X-ray photons, and subsequently extract their main parameters (coordinates, energy, arrival time, grade, etc.). Here we will present the design and performance of the algorithms developed for event recognition (adjusted derivative) and pulse grading/qualification, as well as the progress in the algorithms designed to extract the energy content of the pulses (pulse optimal filtering). IFCA will finally be responsible for the on-board implementation in the (TBD) FPGAs or micro-processors of the DRE unit, where this Event Processing part will take place, to fit into the limited telemetry of the instrument.
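The event-recognition step works on the derivative of the noisy record; as a rough illustration of that family of triggers (not the X-IFU "adjusted derivative" itself), the sketch below flags samples where the first difference exceeds a threshold, with a refractory gap so one pulse is not counted twice. The threshold, refractory length, and injected test pulses are assumptions.

    # Derivative-threshold pulse trigger on a noisy record.
    import numpy as np

    def detect_pulses(signal, threshold, refractory=50):
        deriv = np.diff(signal)
        hits, last = [], -refractory
        for i, d in enumerate(deriv):
            if d > threshold and i - last >= refractory:
                hits.append(i)           # candidate photon-arrival sample
                last = i
        return hits

    rng = np.random.default_rng(0)
    trace = rng.normal(0, 0.1, 2000)
    trace[500:520] += np.linspace(3, 0, 20)    # injected pulse-like rise
    trace[1400:1420] += np.linspace(2, 0, 20)
    print(detect_pulses(trace, threshold=1.0))  # ~ [499, 1399]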
Simulation analysis of a microcomputer-based, low-cost Omega navigation system
NASA Technical Reports Server (NTRS)
Lilley, R. W.; Salter, R. J., Jr.
1976-01-01
The current status of research on a proposed micro-computer-based, low-cost Omega Navigation System (ONS) is described. The design approach emphasizes minimum hardware, maximum software, and the use of a low-cost, commercially-available microcomputer. Currently under investigation is the implementation of a low-cost navigation processor and its interface with an omega sensor to complete the hardware-based ONS. Sensor processor functions are simulated to determine how many of the sensor processor functions can be handled by innovative software. An input data base of live Omega ground and flight test data was created. The Omega sensor and microcomputer interface modules used to collect the data are functionally described. Automatic synchronization to the Omega transmission pattern is described as an example of the algorithms developed using this data base.
Map_plot and bgg_plot: software for integration of geoscience datasets
NASA Astrophysics Data System (ADS)
Gaillot, Philippe; Punongbayan, Jane T.; Rea, Brice
2004-02-01
Since 1985, the Ocean Drilling Program (ODP) has been supporting multidisciplinary research in exploring the structure and history of Earth beneath the oceans. After more than 200 Legs, complementary datasets covering different geological environments, periods and space scales have been obtained and distributed world-wide using the ODP-Janus and Lamont Doherty Earth Observatory-Borehole Research Group (LDEO-BRG) database servers. In Earth Sciences, more than in any other science, the ensemble of these data is characterized by heterogeneous formats and graphical representation modes. In order to fully and quickly assess this information, a set of Unix/Linux and Generic Mapping Tool-based C programs has been designed to convert and integrate datasets acquired during the present ODP and the future Integrated ODP (IODP) Legs. Using ODP Leg 199 datasets, we show examples of the capabilities of the proposed programs. The program map_plot is used to easily display datasets onto 2-D maps. The program bgg_plot (borehole geology and geophysics plot) displays data with respect to depth and/or time. The latter program includes depth shifting, filtering and plotting of core summary information, continuous and discrete-sample core measurements (e.g. physical properties, geochemistry, etc.), in situ continuous logs, magneto- and bio-stratigraphies, specific sedimentological analyses (lithology, grain size, texture, porosity, etc.), as well as core and borehole wall images. Outputs from both programs are initially produced in PostScript format that can be easily converted to Portable Document Format (PDF) or standard image formats (GIF, JPEG, etc.) using widely distributed conversion programs. Based on command line operations and customization of parameter files, these programs can be included in other shell- or database-scripts, automating plotting procedures of data requests. As an open source software, these programs can be customized and interfaced to fulfill any specific plotting need of geoscientists using ODP-like datasets.
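Both programs emit PostScript that is then converted with widely distributed tools; as a minimal automation sketch of that last step, assuming ghostscript's ps2pdf is on the path and that map_plot or bgg_plot has already written .ps files into the working directory (file names are illustrative):

    # Convert every PostScript plot in the working directory to PDF.
    import subprocess
    from pathlib import Path

    for ps_file in Path(".").glob("*.ps"):
        pdf_file = ps_file.with_suffix(".pdf")
        subprocess.run(["ps2pdf", str(ps_file), str(pdf_file)], check=True)
        print(f"converted {ps_file} -> {pdf_file}")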
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barhen, Jacob; Imam, Neena
2007-01-01
Revolutionary computing technologies are defined in terms of technological breakthroughs, which leapfrog over near-term projected advances in conventional hardware and software to produce paradigm shifts in computational science. For underwater threat source localization using information provided by a dynamical sensor network, one of the most promising computational advances builds upon the emergence of digital optical-core devices. In this article, we present initial results of sensor network calculations that focus on the concept of signal wavefront time-difference-of-arrival (TDOA). The corresponding algorithms are implemented on the EnLight processing platform recently introduced by Lenslet Laboratories. This tera-scale digital optical core processor is optimized for array operations, which it performs in a fixed-point-arithmetic architecture. Our results (i) illustrate the ability to reach the required accuracy in the TDOA computation, and (ii) demonstrate that a considerable speed-up can be achieved when using the EnLight 64a prototype processor as compared to a dual Intel Xeon processor.
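The core TDOA computation lends itself to a worked sketch: the delay between two sensors' records is the lag that maximizes their cross-correlation. A plain NumPy stand-in follows; the EnLight's fixed-point array arithmetic is not modeled, and the signal and delay values are invented.

    # Estimate time-difference-of-arrival as the peak of the cross-correlation.
    import numpy as np

    def tdoa_samples(sig_a, sig_b):
        """Delay of sig_b relative to sig_a, in samples (positive = b lags)."""
        corr = np.correlate(sig_b, sig_a, mode="full")
        return int(np.argmax(corr)) - (len(sig_a) - 1)

    rng = np.random.default_rng(1)
    source = rng.normal(size=1024)
    delay = 37
    a = source
    b = np.concatenate([np.zeros(delay), source])[:1024]  # delayed copy
    print(tdoa_samples(a, b))   # -> 37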
Implementing a distributed intranet-based information system.
O'Kane, K C; McColligan, E E; Davis, G A
1996-11-01
The article discusses Internet and intranet technologies and describes how to install an intranet-based information system using the Merle language facility and other readily available components. Merle is a script language designed to support decentralized medical record information retrieval applications on the World Wide Web. The goal of this work is to provide a script language tool to facilitate construction of efficient, fully functional, multipoint medical record information systems that can be accessed anywhere by low-cost Web browsers to search, retrieve, and analyze patient information. The language allows legacy MUMPS applications to function in a Web environment and to make use of the Web graphical, sound, and video presentation services. It also permits downloading of script applets for execution on client browsers, and it can be used in standalone mode with the Unix, Windows 95, Windows NT, and OS/2 operating systems.
SeqPig: simple and scalable scripting for large sequencing data sets in Hadoop
Schumacher, André; Pireddu, Luca; Niemenmaa, Matti; Kallio, Aleksi; Korpelainen, Eija; Zanetti, Gianluigi; Heljanko, Keijo
2014-01-01
Summary: Hadoop MapReduce-based approaches have become increasingly popular due to their scalability in processing large sequencing datasets. However, as these methods typically require in-depth expertise in Hadoop and Java, they are still out of reach of many bioinformaticians. To solve this problem, we have created SeqPig, a library and a collection of tools to manipulate, analyze and query sequencing datasets in a scalable and simple manner. SeqPig scripts use the Hadoop-based distributed scripting engine Apache Pig, which automatically parallelizes and distributes data processing tasks. We demonstrate SeqPig's scalability over many computing nodes and illustrate its use with example scripts. Availability and Implementation: Available under the open source MIT license at http://sourceforge.net/projects/seqpig/ Contact: andre.schumacher@yahoo.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24149054
A microprocessor based high speed packet switch for satellite communications
NASA Technical Reports Server (NTRS)
Arozullah, M.; Crist, S. C.
1980-01-01
The architectures of a single processor, a three processor, and a multiple processor system are described. The hardware circuits, and software routines required for implementing the three and multiple processor designs are presented. A bit-slice microprocessor was designed and microprogrammed. Maximum throughput was calculated for all three designs. Queue theoretic models for these three designs were developed and utilized to obtain analytical expressions for the average waiting times, overall average response times and average queue sizes. From these expressions, graphs were obtained showing the effect on the system performance of a number of design parameters.
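The paper's queueing models are not reproduced in the abstract; as a stand-in under the simplest possible assumption (a single M/M/1 service stage per processor, with invented arrival and service rates), the textbook formulas for waiting time, response time, and queue size look like this:

    # M/M/1 metrics as an illustrative stand-in for the paper's models.
    def mm1_metrics(lam, mu):
        """lam = arrivals/s, mu = services/s (requires lam < mu)."""
        rho = lam / mu                  # utilization
        wq = rho / (mu - lam)           # mean wait in queue (s)
        w = 1.0 / (mu - lam)            # mean response time (s)
        lq = lam * wq                   # mean queue length
        return rho, wq, w, lq

    rho, wq, w, lq = mm1_metrics(lam=800.0, mu=1000.0)   # assumed packet rates
    print(f"util={rho:.2f} wait={wq*1e3:.2f}ms resp={w*1e3:.2f}ms queue={lq:.1f}")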
A comprehensive test of clinical reasoning for medical students: An olympiad experience in Iran
Monajemi, Alireza; Arabshahi, Kamran Soltani; Soltani, Akbar; Arbabi, Farshid; Akbari, Roghieh; Custers, Eugene; Hadadgar, Arash; Hadizadeh, Fatemeh; Changiz, Tahereh; Adibi, Peyman
2012-01-01
Background: Although some tests for clinical reasoning assessment are now available, the theories of medical expertise have not played a major role in this field. In this paper, illness script theory was chosen as a theoretical framework, and contemporary clinical reasoning tests were put together based on this theoretical model. Materials and Methods: This paper is a qualitative study performed with an action research approach. This style of research is performed in a context where authorities focus on promoting their organizations' performance and is carried out in the form of teamwork called participatory research. Results: Results are presented in four parts as basic concepts, clinical reasoning assessment, test framework, and scoring. Conclusion: We concluded that no single test could thoroughly assess clinical reasoning competency, and therefore a battery of clinical reasoning tests is needed. This battery should cover all three parts of the clinical reasoning process: script activation, selection and verification. In addition, not only analytical and non-analytical reasoning but also diagnostic and management reasoning should be evenly taken into consideration in this battery. This paper explains the process of designing and implementing the battery of clinical reasoning in the Olympiad for medical sciences students through an action research. PMID:23555113
The Perils of Ignoring History: Big Tobacco Played Dirty and Millions Died. How Similar Is Big Food?
Brownell, Kelly D; Warner, Kenneth E
2009-01-01
Context: In 1954 the tobacco industry paid to publish the “Frank Statement to Cigarette Smokers” in hundreds of U.S. newspapers. It stated that the public's health was the industry's concern above all others and promised a variety of good-faith changes. What followed were decades of deceit and actions that cost millions of lives. In the hope that the food history will be written differently, this article both highlights important lessons that can be learned from the tobacco experience and recommends actions for the food industry. Methods: A review and analysis of empirical and historical evidence pertaining to tobacco and food industry practices, messages, and strategies to influence public opinion, legislation and regulation, litigation, and the conduct of science. Findings: The tobacco industry had a playbook, a script, that emphasized personal responsibility, paying scientists who delivered research that instilled doubt, criticizing the “junk” science that found harms associated with smoking, making self-regulatory pledges, lobbying with massive resources to stifle government action, introducing “safer” products, and simultaneously manipulating and denying both the addictive nature of their products and their marketing to children. The script of the food industry is both similar to and different from the tobacco industry script. Conclusions: Food is obviously different from tobacco, and the food industry differs from tobacco companies in important ways, but there also are significant similarities in the actions that these industries have taken in response to concern that their products cause harm. Because obesity is now a major global problem, the world cannot afford a repeat of the tobacco history, in which industry talks about the moral high ground but does not occupy it. PMID:19298423
An interactive HTML ocean nowcast GUI based on Perl and JavaScript
NASA Astrophysics Data System (ADS)
Sakalaukus, Peter J.; Fox, Daniel N.; Louise Perkins, A.; Smedstad, Lucy F.
1999-02-01
We describe the use of Hyper Text Markup Language (HTML), JavaScript code, and Perl I/O to create and validate forms in an Internet-based graphical user interface (GUI) for the Naval Research Laboratory (NRL) Ocean models and Assimilation Demonstration System (NOMADS). The resulting nowcast system can be operated from any compatible browser across the Internet, for although the GUI was prepared in a Netscape browser, it used no Netscape extensions. Code available at: http://www.iamg.org/CGEditor/index.htm
Meteorite, a rock from space: A planetarium adventure for children
NASA Astrophysics Data System (ADS)
Rodríguez Hidalgo, I.; Naveros Y Naveiras, R.; González Sánchez, O.
2008-06-01
At the Museum of the Science and the Cosmos (MCC, La Laguna, Tenerife) there is a small planetarium. All the different planetarium shows are carried out entirely by the Museum staff, from the original idea and the script to the final production. In February 2007, Meteorite, a rock from space, a new show, specifically for children, was released. The characters (astronomical bodies) are played by puppets, designed and manufactured for this occasion; the script has been carefully written, and introduces many astronomical concepts in the form of an entertaining tale, which encourages the children to participate by crying, counting, helping the characters - just like a traditional puppet show. The aim of this contribution is to review the different resources (some of them really innovative) used to create this programme, which offers plenty of future possibilities.
SPP: A data base processor data communications protocol
NASA Technical Reports Server (NTRS)
Fishwick, P. A.
1983-01-01
The design and implementation of a data communications protocol for the Intel Data Base Processor (DBP) is defined. The protocol is termed SPP (Service Port Protocol) since it enables data transfer between the host computer and the DBP service port. The protocol implementation is extensible in that it is explicitly layered and the protocol functionality is hierarchically organized. Extensive trace and performance capabilities have been supplied with the protocol software to permit optional efficient monitoring of the data transfer between the host and the Intel data base processor. Machine independence was considered to be an important attribute during the design and implementation of SPP. The protocol source is fully commented and is included in Appendix A of this report.
Primary Pre-Service Teachers' Skills in Planning a Guided Scientific Inquiry
ERIC Educational Resources Information Center
García-Carmona, Antonio; Criado, Ana M.; Cruz-Guzmán, Marta
2017-01-01
A study is presented of the skills that primary pre-service teachers (PPTs) have in completing the planning of a scientific inquiry on the basis of a guiding script. The sample comprised 66 PPTs who constituted a group-class of the subject "Science Teaching," taught in the second year of an undergraduate degree in primary education at a…
Variables, Decisions, and Scripting in Construct
2009-09-01
grounded in sociology and cognitive science which seeks to model the processes and situations by which humans interact and share information... Construct is an embodiment of constructuralism (Carley 1986), a theory which posits that human social structures and cognitive structures co-evolve so that... human cognition reflects human social behavior, and that human social behavior simultaneously influences cognitive processes. Recent work with
NASA Astrophysics Data System (ADS)
Chen, Ming-Chih; Hsiao, Shen-Fu
In this paper, we propose an area-efficient design of an Advanced Encryption Standard (AES) processor by applying a new common-subexpression-elimination (CSE) method to the sub-functions of the various transformations required in AES. The proposed method reduces the area cost of realizing the sub-functions by extracting the common factors in the bit-level XOR/AND-based sum-of-product expressions of these sub-functions using a new CSE algorithm. Cell-based implementation results show that the AES processor with our proposed CSE method achieves significant area improvement compared with previous designs.
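The extraction of common factors from bit-level XOR expressions can be made concrete with a toy greedy pass: repeatedly find the pair of signals shared by the most expressions, give it a name, and substitute it. This is an illustrative simplification, not the paper's CSE algorithm, and the example expressions are invented rather than taken from AES.

    # Greedy common-pair extraction over XOR sum-of-product expressions.
    from itertools import combinations
    from collections import Counter

    def extract_common_pairs(exprs):
        exprs = [set(e) for e in exprs]          # each set is a XOR of signals
        new_sigs, k = {}, 0
        while True:
            pairs = Counter()
            for e in exprs:
                for p in combinations(sorted(e), 2):
                    pairs[p] += 1
            if not pairs or pairs.most_common(1)[0][1] < 2:
                return exprs, new_sigs
            (a, b), _ = pairs.most_common(1)[0]
            name = f"t{k}"; k += 1
            new_sigs[name] = (a, b)              # t = a XOR b, computed once
            for e in exprs:
                if a in e and b in e:
                    e -= {a, b}; e.add(name)

    # Three output bits as XOR sums of input bits (assumed example):
    outs = [{"x0", "x1", "x2"}, {"x0", "x1", "x3"}, {"x1", "x2", "x3"}]
    print(extract_common_pairs(outs))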
Social.Water--Open Source Citizen Science Software for CrowdHydrology
NASA Astrophysics Data System (ADS)
Fienen, M. N.; Lowry, C.
2013-12-01
CrowdHydrology is a crowd-sourced citizen science project in which passersby near streams are encouraged to read a gage and send an SMS (text) message with the water level to a number indicated on a sign. The project was initially started using free services such as Google Voice, Gmail, and Google Maps to acquire and present the data on the internet. Social.Water is open-source software, using Python and JavaScript, that automates the acquisition, categorization, and presentation of the data. Open-source objectives pervade both the project and the software, as the code is hosted at Github, only free scripting codes are used, and any person or organization can install a gage and join the CrowdHydrology network. In the first year, 10 sites were deployed in upstate New York, USA. In the second year, expansion to 44 sites throughout the upper Midwest USA was achieved. Comparison with official USGS and academic measurements has shown low error rates. Citizen participation varies greatly from site to site, so surveys or other social information is sought for insight into why some sites experience higher rates of participation than others.
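Since Social.Water is itself written in Python, a minimal sketch of the acquisition step fits naturally: parse an incoming text into a station ID and a water level, rejecting anything that does not match. The message grammar shown here is an assumption for illustration, not the project's actual format.

    # Parse a crowd-sourced water-level report like "NY1001 4.2".
    import re

    MSG_RE = re.compile(r"^\s*([A-Z]{2}\d{4})\s+(\d+(?:\.\d+)?)\s*$")

    def parse_report(text):
        m = MSG_RE.match(text.upper())
        if not m:
            return None                     # uncategorizable message
        station, level = m.group(1), float(m.group(2))
        return station, level

    print(parse_report("ny1001 4.2"))       # -> ('NY1001', 4.2)
    print(parse_report("hello world"))      # -> None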
Porting Ordinary Applications to Blue Gene/Q Supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy
2015-08-31
Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications over leadership class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques: sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and that are invoked as a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.
Everware toolkit. Supporting reproducible science and challenge-driven education.
NASA Astrophysics Data System (ADS)
Ustyuzhanin, A.; Head, T.; Babuschkin, I.; Tiunov, A.
2017-10-01
Modern science clearly demands a higher level of reproducibility and collaboration. To make research fully reproducible one has to take care of several aspects: research protocol description, data access, environment preservation, workflow pipeline, and analysis script preservation. Version control systems like git help with the workflow and analysis script parts. Virtualization techniques like Docker or Vagrant can help deal with environments. Jupyter notebooks are a powerful platform for conducting research in a collaborative manner. We present project Everware, which seamlessly integrates git repository management systems such as Github or Gitlab, Docker, and Jupyter, helping with (a) sharing the results of real research and (b) boosting education activities. With the help of Everware one can share not only the final artifacts of research but the full depth of the research process. This has been shown to be extremely helpful during the organization of several data analysis hackathons and machine learning schools. Using Everware, participants could start from an existing solution instead of starting from scratch, and could begin contributing immediately. Everware allows its users to make use of their own computational resources to run the workflows they are interested in, which leads to higher scalability of the toolkit.
A Software Engineering Paradigm for Quick-turnaround Earth Science Data Projects
NASA Astrophysics Data System (ADS)
Moore, K.
2016-12-01
As is generally the case with applied sciences professional and educational programs, the participants of such programs can come from a variety of technical backgrounds. In the NASA DEVELOP National Program, the participants constitute an interdisciplinary set of backgrounds, with varying levels of experience with computer programming. DEVELOP makes use of geographically explicit data sets, and it is necessary to use geographic information systems and geospatial image processing environments. As data sets cover longer time spans and include more complex sets of parameters, automation is becoming an increasingly prevalent feature. Though platforms such as ArcGIS, ERDAS Imagine, and ENVI facilitate the batch-processing of geospatial imagery, these environments are naturally constricting to the user in that they limit him or her to the tools that are available. Users must then turn to "homemade" scripting in more traditional programming languages such as Python, JavaScript, or R, to automate workflows. However, in the context of quick-turnaround projects like those in DEVELOP, the programming learning curve may be prohibitively steep. In this work, we consider how to best design a software development paradigm that addresses two major constants: an arbitrarily experienced programmer and quick-turnaround project timelines.
Frijling, Jessie L; van Zuiden, Mirjam; Koch, Saskia B J; Nawijn, Laura; Veltman, Dick J; Olff, Miranda
2016-04-01
Approximately 10% of trauma-exposed individuals go on to develop post-traumatic stress disorder (PTSD). Neural emotion regulation may be etiologically involved in PTSD development. Oxytocin administration early post-trauma may be a promising avenue for PTSD prevention, as intranasal oxytocin has previously been found to affect emotion regulation networks in healthy individuals and psychiatric patients. In a randomized double-blind placebo-controlled between-subjects functional magnetic resonance imaging (fMRI) study, we assessed the effects of a single intranasal oxytocin administration (40 IU) on seed-based amygdala resting-state FC with emotion regulation areas (ventromedial prefrontal cortex (vmPFC), ventrolateral prefrontal cortex (vlPFC)) and salience processing areas (insula, dorsal anterior cingulate cortex (dACC)) in 37 individuals within 11 days post-trauma. Two resting-state scans were acquired; one after neutral and one after trauma script-driven imagery. We found that oxytocin administration reduced amygdala-left vlPFC FC after trauma script-driven imagery, compared with neutral script-driven imagery, whereas in placebo-treated participants enhanced amygdala-left vlPFC FC was observed following trauma script-driven imagery. Irrespective of script condition, oxytocin increased amygdala-insula FC and decreased amygdala-vmPFC FC. These neural effects were accompanied by lower levels of sleepiness and higher flashback intensity in the oxytocin group after the trauma script. Together, our findings show that oxytocin administration may impede emotion regulation network functioning in response to trauma reminders in recently trauma-exposed individuals. Therefore, caution may be warranted in administering oxytocin to prevent PTSD in distressed, recently trauma-exposed individuals.
Active non-volatile memory post-processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kannan, Sudarsun; Milojicic, Dejan S.; Talwar, Vanish
A computing node includes an active Non-Volatile Random Access Memory (NVRAM) component which includes memory and a sub-processor component. The memory is to store data chunks received from a processor core, the data chunks comprising metadata indicating a type of post-processing to be performed on data within the data chunks. The sub-processor component is to perform post-processing of said data chunks based on said metadata.
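A tiny sketch makes the claim concrete: chunks carry metadata naming the post-processing step to apply, and the sub-processor component dispatches on it near the data. All names and operations below are illustrative assumptions, not the patent's embodiment.

    # Metadata-tagged data chunks dispatched by a sub-processor component.
    from dataclasses import dataclass, field

    @dataclass
    class Chunk:
        metadata: str                 # e.g. "checksum", "wordcount"
        data: bytes = field(default=b"")

    POST_OPS = {
        "checksum": lambda d: sum(d) & 0xFFFF,
        "wordcount": lambda d: len(d.split()),
    }

    def subprocessor_handle(chunk):
        """What the sub-processor would do on receipt of a chunk."""
        op = POST_OPS.get(chunk.metadata)
        return op(chunk.data) if op else None

    print(subprocessor_handle(Chunk("checksum", b"abc")))    # -> 294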
The Event Based Language and Its Multiple Processor Implementations.
1980-01-01
10 6.1 "Recursive" Linear Fibonacci ................................................ 105 6.2 The Readers Writers Problem...kinds. Examples of such systems are: C.mmp [Wu-72], Pluribus [He-73], Data Flow [ De -75], the boolean n-cube parallel machine [Su-77], and the MuNet [Wa...concurrency within programs; therefore, we hate concentrated on two types of systems which seem suitable: a processor network, and a data flow processor [ De -77
Onboard processor technology review
NASA Technical Reports Server (NTRS)
Benz, Harry F.
1990-01-01
The general need and requirements for the onboard embedded processors necessary to control and manipulate data in spacecraft systems are discussed. The current known requirements are reviewed from a user perspective, based on current practices in the spacecraft development process. The current capabilities of available processor technologies are then discussed, and these are projected to the generation of spacecraft computers currently under identified, funded development. An appraisal is provided for the current national developmental effort.
A digital retina-like low-level vision processor.
Mertoguno, S; Bourbakis, N G
2003-01-01
This correspondence presents the basic design and the simulation of a low level multilayer vision processor that emulates to some degree the functional behavior of a human retina. This retina-like multilayer processor is the lower part of an autonomous self-organized vision system, called Kydon, that could be used by visually impaired people with a damaged visual cerebral cortex. The Kydon vision system, however, is not presented in this paper. The retina-like processor consists of four major layers, where each of them is an array processor based on hexagonal, autonomous processing elements that perform a certain set of low level vision tasks, such as smoothing and light adaptation, edge detection, segmentation, line recognition and region-graph generation. At each layer, the array processor is a 2D array of k × m hexagonal identical autonomous cells that simultaneously execute certain low level vision tasks. Thus, the hardware design and the simulation at the transistor level of the processing elements (PEs) of the retina-like processor and its simulated functionality with illustrative examples are provided in this paper.
Park, Daejin; Cho, Jeonghun
2014-01-01
A specially designed sensor processor used as the main processor in an IoT (internet-of-things) device for rare-event sensing applications is proposed. The IoT device including the proposed sensor processor performs event-driven sensor data processing based on an accuracy-energy configurable event-quantization at the architectural level. The received sensor signal is converted into a sequence of atomic events, which is extracted by the signal-to-atomic-event generator (AEG). Using an event signal processing unit (EPU) as an accelerator, the extracted atomic events are analyzed to build the final event. Instead of transmitting the sampled raw data via the internet, the proposed method delays communication with a host system until a semantic pattern of the signal is identified as a final event. The proposed processor is implemented on a single chip, which is tightly coupled at the bus connection level with a microcontroller using a 0.18 μm CMOS embedded-flash process. For experimental results, we evaluated the proposed sensor processor by using an IR- (infrared radio-) based signal reflection and sensor signal acquisition system. We successfully demonstrated that the expected power consumption is in the range of 20% to 50% of the baseline when a 10% accuracy error is allowed.
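The accuracy-energy knob here is the quantization step used when turning samples into atomic events: a coarser step yields fewer events and therefore less processing. A rough, entirely illustrative sketch of level-crossing event generation follows; the signal and step sizes are invented.

    # Quantize a sampled waveform into level-crossing atomic events.
    import numpy as np

    def atomic_events(samples, step):
        """Emit (index, new_level) whenever the quantized level changes."""
        levels = np.round(np.asarray(samples) / step).astype(int)
        events = [(0, int(levels[0]))]
        for i in range(1, len(levels)):
            if levels[i] != levels[i - 1]:
                events.append((i, int(levels[i])))
        return events

    t = np.linspace(0, 1, 200)
    sig = np.sin(2 * np.pi * 3 * t)
    print(len(atomic_events(sig, step=0.1)))   # fine-grained: many events
    print(len(atomic_events(sig, step=0.5)))   # coarse: far fewer events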
Low latency messages on distributed memory multiprocessors
NASA Technical Reports Server (NTRS)
Rosing, Matthew; Saltz, Joel
1993-01-01
Many of the issues in developing an efficient interface for communication on distributed memory machines are described and a portable interface is proposed. Although the hardware component of message latency is less than one microsecond on many distributed memory machines, the software latency associated with sending and receiving typed messages is on the order of 50 microseconds. The reason for this imbalance is that the software interface does not match the hardware. By changing the interface to match the hardware more closely, applications with fine grained communication can be put on these machines. Based on several tests that were run on the iPSC/860, an interface that will better match current distributed memory machines is proposed. The model used in the proposed interface consists of a computation processor and a communication processor on each node. Communication between these processors and other nodes in the system is done through a buffered network. Information that is transmitted is either data or procedures to be executed on the remote processor. The dual processor system is better suited for efficiently handling asynchronous communications compared to a single processor system. The ability to send data or procedures is very flexible for minimizing message latency, based on the type of communication being performed. The tests performed and the proposed interface are described.
Technical development of PubMed Interact: an improved interface for MEDLINE/PubMed searches
Muin, Michael; Fontelo, Paul
2006-01-01
Background The project aims to create an alternative search interface for MEDLINE/PubMed that may provide assistance to the novice user and added convenience to the advanced user. An earlier version of the project was the 'Slider Interface for MEDLINE/PubMed searches' (SLIM) which provided JavaScript slider bars to control search parameters. In this new version, recent developments in Web-based technologies were implemented. These changes may prove to be even more valuable in enhancing user interactivity through client-side manipulation and management of results. Results PubMed Interact is a Web-based MEDLINE/PubMed search application built with HTML, JavaScript and PHP. It is implemented on a Windows Server 2003 with Apache 2.0.52, PHP 4.4.1 and MySQL 4.1.18. PHP scripts provide the backend engine that connects with E-Utilities and parses XML files. JavaScript manages client-side functionalities and converts Web pages into interactive platforms using dynamic HTML (DHTML), Document Object Model (DOM) tree manipulation and Ajax methods. With PubMed Interact, users can limit searches with JavaScript slider bars, preview result counts, delete citations from the list, display and add related articles and create relevance lists. Many interactive features occur at client-side, which allow instant feedback without reloading or refreshing the page resulting in a more efficient user experience. Conclusion PubMed Interact is a highly interactive Web-based search application for MEDLINE/PubMed that explores recent trends in Web technologies like DOM tree manipulation and Ajax. It may become a valuable technical development for online medical search applications. PMID:17083729
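The backend pattern described, server-side scripts calling NCBI E-Utilities and parsing the results, can be sketched with stdlib-only Python against the same public ESearch endpoint (error handling is minimal and the query term is arbitrary):

    # Query PubMed through the public E-Utilities ESearch endpoint.
    import json
    import urllib.parse
    import urllib.request

    def esearch(term, retmax=5):
        url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
               + urllib.parse.urlencode(
                   {"db": "pubmed", "term": term,
                    "retmax": retmax, "retmode": "json"}))
        with urllib.request.urlopen(url, timeout=30) as resp:
            body = json.load(resp)
        return body["esearchresult"]["idlist"]   # matching PMIDs

    print(esearch("laser speckle contrast imaging"))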
ERIC Educational Resources Information Center
Gagnon, Robert; Lubarsky, Stuart; Lambert, Carole; Charlin, Bernard
2011-01-01
The Script Concordance Test (SCT) uses a panel-based, aggregate scoring method that aims to capture the variability of responses of experienced practitioners to particular clinical situations. The use of this type of scoring method is a key determinant of the tool's discriminatory power, but deviant answers could potentially diminish the…
Semantic Memory Organization in Young Children: The Script-Based Categorization of Early Words.
ERIC Educational Resources Information Center
Maaka, Margaret J.; Wong, Eddie K.
This study examined whether scripts provide a basis for the categories preschool children use to structure their semantic memories and whether the use of taxonomies to structure memory becomes more common only after children enter elementary school. Subjects were 108 children in three equal groups of 18 boys and 18 girls each of 4-, 5-,…
Bilingual Writing as an Act of Identity: Sign-Making in Multiple Scripts
ERIC Educational Resources Information Center
Kabuto, Bobbie
2010-01-01
This article explores early bilingual script writing as an act of identity. Using multiple theoretical perspectives related to social semiotics and social constructivist perspectives on identity and writing, the research presented in this article is based on a case study of an early biliterate learner of Japanese and English from the ages of 3-7.…
Autonomic correlates of physical and moral disgust.
Ottaviani, Cristina; Mancini, Francesco; Petrocchi, Nicola; Medea, Barbara; Couyoumdjian, Alessandro
2013-07-01
Given that the hypothesis of a common origin of physical and moral disgust has received sparse empirical support, this study aimed to shed light on the subjective and autonomic signatures of these two facets of the same emotional response. Participants (20 men, 20 women) were randomly assigned to physical or moral disgust induction by the use of audio scripts while their electrocardiogram was continuously recorded. Affect ratings were obtained before and after the induction. Time and frequency domain heart rate variability (HRV) measures were obtained. After controlling for disgust sensitivity (DS-R) and obsessive-compulsive (OCI-R) tendencies, both scripts elicited disgust, but whereas the physical script elicited a feeling of dirtiness, the moral script evoked more indignation and contempt. The disgust-induced subjective responses were associated with opposite patterns of autonomic reactivity: enhanced activity of the parasympathetic nervous system without concurrent changes in heart rate (HR) for physical disgust and decreased vagal tone and increased HR and autonomic imbalance for moral disgust. Results suggest that immorality relies on the same biological root as physical disgust only in subjects with obsessive compulsive tendencies. Disgust appears to be a heterogeneous response that varies based on the individuals' contamination-based appraisal. Copyright © 2013 Elsevier B.V. All rights reserved.
Userscripts for the Life Sciences
Willighagen, Egon L; O'Boyle, Noel M; Gopalakrishnan, Harini; Jiao, Dazhi; Guha, Rajarshi; Steinbeck, Christoph; Wild, David J
2007-01-01
Background The web has seen an explosion of chemistry and biology related resources in the last 15 years: thousands of scientific journals, databases, wikis, blogs and resources are available with a wide variety of types of information. There is a huge need to aggregate and organise this information. However, the sheer number of resources makes it unrealistic to link them all in a centralised manner. Instead, search engines to find information in those resources flourish, and formal languages like Resource Description Framework and Web Ontology Language are increasingly used to allow linking of resources. A recent development is the use of userscripts to change the appearance of web pages, by on-the-fly modification of the web content. This opens possibilities to aggregate information and computational results from different web resources into the web page of one of those resources. Results Several userscripts are presented that enrich biology and chemistry related web resources by incorporating or linking to other computational or data sources on the web. The scripts make use of Greasemonkey-like plugins for web browsers and are written in JavaScript. Information from third-party resources are extracted using open Application Programming Interfaces, while common Universal Resource Locator schemes are used to make deep links to related information in that external resource. The userscripts presented here use a variety of techniques and resources, and show the potential of such scripts. Conclusion This paper discusses a number of userscripts that aggregate information from two or more web resources. Examples are shown that enrich web pages with information from other resources, and show how information from web pages can be used to link to, search, and process information in other resources. Due to the nature of userscripts, scientists are able to select those scripts they find useful on a daily basis, as the scripts run directly in their own web browser rather than on the web server. This flexibility allows the scientists to tune the features of web resources to optimise their productivity. PMID:18154664
Photorefractive optical fuzzy-logic processor based on grating degeneracy
NASA Astrophysics Data System (ADS)
Wu, Weishu; Yang, Changxi; Campbell, Scott; Yeh, Pochi
1995-04-01
A novel optical fuzzy-logic processor using light-induced gratings in photorefractive crystals is proposed and demonstrated. By exploiting grating degeneracy, one can easily implement parallel fuzzy-logic functions in disjunctive normal form.
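As a worked example of what a fuzzy-logic function in disjunctive normal form means operationally (min for AND, max for OR, complement for NOT), here is a plain software stand-in for what the optical processor computes in parallel; the expression and membership values are invented.

    # Evaluate a fuzzy-logic function given in disjunctive normal form.
    def fuzzy_dnf(terms, values):
        """terms: list of conjunctions, each a list of (var, negated) pairs."""
        def literal(var, neg):
            return 1.0 - values[var] if neg else values[var]
        return max(min(literal(v, n) for v, n in term) for term in terms)

    # f = (A AND NOT B) OR (B AND C), with fuzzy memberships:
    expr = [[("A", False), ("B", True)], [("B", False), ("C", False)]]
    print(fuzzy_dnf(expr, {"A": 0.9, "B": 0.3, "C": 0.6}))   # -> 0.7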
Design and development of a web-based application for diabetes patient data management.
Deo, S S; Deobagkar, D N; Deobagkar, Deepti D
2005-01-01
A web-based database management system developed for collecting, managing and analysing information of diabetes patients is described here. It is a searchable, client-server, relational database application, developed on the Windows platform using Oracle, Active Server Pages (ASP), Visual Basic Script (VB Script) and Java Script. The software is menu-driven and allows authorized healthcare providers to access, enter, update and analyse patient information. Graphical representation of data can be generated by the system using bar charts and pie charts. An interactive web interface allows users to query the database and generate reports. Alpha- and beta-testing of the system was carried out and the system at present holds records of 500 diabetes patients and is found useful in diagnosis and treatment. In addition to providing patient data on a continuous basis in a simple format, the system is used in population and comparative analysis. It has proved to be of significant advantage to the healthcare provider as compared to the paper-based system.
NASA Technical Reports Server (NTRS)
Brenner, Richard; Lala, Jaynarayan H.; Nagle, Gail A.; Schor, Andrei; Turkovich, John
1994-01-01
This program demonstrated the integration of a number of technologies that can increase the availability and reliability of launch vehicles while lowering costs. Availability is increased with an advanced guidance algorithm that adapts trajectories in real-time. Reliability is increased with fault-tolerant computers and communication protocols. Costs are reduced by automatically generating code and documentation. This program was realized through the cooperative efforts of academia, industry, and government. NASA LaRC coordinated the effort, while Draper performed the integration. Georgia Institute of Technology supplied a weak Hamiltonian finite element method for optimal control problems. Martin Marietta used MATLAB to apply this method to a launch vehicle (FENOC). Draper supplied the fault-tolerant computing and software automation technology. The fault-tolerant technology includes sequential and parallel fault-tolerant processors (FTP & FTPP) and authentication protocols (AP) for communication. Fault-tolerant technology was incrementally incorporated. Development culminated with a heterogeneous network of workstations and fault-tolerant computers using AP. Draper's software automation system, ASTER, was used to specify a static guidance system based on FENOC, navigation, flight control (GN&C), models, and the interface to a mission-control user interface. ASTER generated Ada code for GN&C and C code for models. An algebraic transform engine (ATE) was developed to automatically translate MATLAB scripts into ASTER.
Energy consumption estimation of an OMAP-based Android operating system
NASA Astrophysics Data System (ADS)
González, Gabriel; Juárez, Eduardo; Castro, Juan José; Sanz, César
2011-05-01
System-level energy optimization of battery-powered multimedia embedded systems has recently become a design goal. The poor operational time of multimedia terminals makes computationally demanding applications impractical in real scenarios. For instance, the so-called smart-phones are currently unable to remain in operation longer than several hours. The OMAP3530 processor basically consists of two processing cores, a General Purpose Processor (GPP) and a Digital Signal Processor (DSP). The former, an ARM Cortex-A8 processor, is intended to run a generic Operating System (OS), while the latter, a DSP core based on the C64x+, has an architecture optimized for video processing. The BeagleBoard, a commercial prototyping board based on the OMAP processor, has been used to test the Android Operating System and measure its performance. The board has 128 MB of SDRAM external memory, 256 MB of Flash external memory and several interfaces. Note that the clock frequency of the ARM and DSP OMAP cores is 600 MHz and 430 MHz, respectively. This paper describes the energy consumption estimation of the processes and multimedia applications of an Android v1.6 (Donut) OS on the OMAP3530-based BeagleBoard. In addition, tools to communicate between the two processing cores have been employed. A test-bench to profile the OS resource usage has been developed. As far as the energy estimates are concerned, the OMAP processor energy consumption model provided by the manufacturer has been used. The model is basically divided into two energy components. The former, the baseline core energy, describes the energy consumption that is independent of any chip activity. The latter, the module active energy, describes the energy consumed by the active modules depending on resource usage.
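The two-component model described above lends itself to a compact illustration. The following minimal Python sketch uses hypothetical power coefficients; the manufacturer's actual model and numbers are not reproduced here.

```python
# A minimal sketch of a two-component energy model: baseline (activity-
# independent) power plus per-module active power weighted by usage.
# All coefficients below are illustrative placeholders, not OMAP3530 data.
BASELINE_MW = 120.0                                      # baseline core power (mW)
MODULE_MW = {'arm': 250.0, 'dsp': 180.0, 'sdram': 90.0}  # active-module power (mW)

def energy_mj(duration_s, usage):
    """Energy (mJ) for a run of duration_s seconds, where usage maps
    module name -> fraction of time the module is active (0..1)."""
    active_mw = sum(MODULE_MW[m] * f for m, f in usage.items())
    return (BASELINE_MW + active_mw) * duration_s

# e.g. a 10 s video decode keeping the DSP busy 80% of the time
print(energy_mj(10.0, {'arm': 0.3, 'dsp': 0.8, 'sdram': 0.5}))
```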
NASA Technical Reports Server (NTRS)
Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash
2003-01-01
Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: Classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on the density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and spacefilling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.
RGG: A general GUI Framework for R scripts
Visne, Ilhami; Dilaveroglu, Erkan; Vierlinger, Klemens; Lauss, Martin; Yildiz, Ahmet; Weinhaeusel, Andreas; Noehammer, Christa; Leisch, Friedrich; Kriegner, Albert
2009-01-01
Background R is the leading open source statistics software with a vast number of biostatistical and bioinformatical analysis packages. To exploit the advantages of R, extensive scripting/programming skills are required. Results We have developed a software tool called R GUI Generator (RGG) which enables the easy generation of Graphical User Interfaces (GUIs) for the programming language R by adding a few Extensible Markup Language (XML) tags. RGG consists of an XML-based GUI definition language and a Java-based GUI engine. GUIs are generated at runtime from defined GUI tags that are embedded into the R script. User-GUI input is returned to the R code and replaces the XML-tags. RGG files can be developed using any text editor. The current version of RGG is available as stand-alone software (RGGRunner) and as a plug-in for JGR. Conclusion RGG is a general GUI framework for R that has the potential to introduce R statistics (R packages, built-in functions and scripts) to users with limited programming skills and helps to bridge the gap between R developers and GUI-dependent users. RGG aims to abstract the GUI development from individual GUI toolkits by using an XML-based GUI definition language. Thus RGG can be easily integrated into any software. The RGG project further includes the development of a web-based repository for RGG-GUIs. RGG is an open source project licensed under the Lesser General Public License (LGPL) and can be downloaded freely at PMID:19254356
Simplifying and enhancing the use of PyMOL with horizontal scripts
2016-01-01
Abstract Scripts are used in PyMOL to exert precise control over the appearance of the output and to ease remaking similar images at a later time. We developed horizontal scripts to ease script development. A horizontal script makes a complete scene in PyMOL like a traditional vertical script. The commands in a horizontal script are separated by semicolons. These scripts are edited interactively on the command line with no need for an external text editor. This simpler workflow accelerates script development. In using PyMOL, the illustration of a molecular scene requires an 18-element matrix of viewport settings. The default format spans several lines and is laborious to manually reformat into one line. This default format prevents the fast assembly of horizontal scripts that can reproduce a molecular scene. We solved this problem by writing a function that displays the settings on one line in a compact format suitable for horizontal scripts. We also demonstrate the mapping of aliases to horizontal scripts. Many aliases can be defined in a single script file, which can be useful for applying custom molecular representations to any structure. We also redefined horizontal scripts as Python functions to enable the use of the help function to print documentation about an alias to the command history window. We discuss how these methods of using horizontal scripts both simplify and enhance the use of PyMOL in research and education. PMID:27488983
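To make the idea concrete, here is a minimal sketch of a horizontal script driven from PyMOL's bundled Python interpreter; the structure ID, colors, and alias name are arbitrary examples, not the paper's own scripts.

```python
# Run inside PyMOL, whose Python API exposes commands via pymol.cmd;
# 1ubq (ubiquitin) is just an example structure.
from pymol import cmd

# One complete scene as a single semicolon-separated line, editable in
# place on the command line instead of in an external text editor.
cmd.do("fetch 1ubq; hide everything; show cartoon; color skyblue; bg_color white")

# Map an alias to the same horizontal script, so the scene can be
# re-applied to any loaded structure with a single word.
cmd.alias("nice_cartoon", "hide everything; show cartoon; color skyblue; bg_color white")
```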
A Trade Study of Two Membrane-Aerated Biological Water Processors
NASA Technical Reports Server (NTRS)
Allada, Ram; Lange, Kevin; Vega, Leticia; Roberts, Michael S.; Jackson, Andrew; Anderson, Molly; Pickering, Karen
2011-01-01
Biologically based systems are under evaluation as primary water processors for next generation life support systems due to their low power requirements and their inherent regenerative nature. This paper will summarize the results of two recent studies involving membrane-aerated biological water processors and present results of a trade study comparing the two systems with regard to waste stream composition, nutrient loading and system design. Results of optimal configurations will be presented.
Digital Beamforming Scatterometer
NASA Technical Reports Server (NTRS)
Rincon, Rafael F.; Vega, Manuel; Kman, Luko; Buenfil, Manuel; Geist, Alessandro; Hillard, Larry; Racette, Paul
2009-01-01
This paper discusses scatterometer measurements collected with the multi-mode Digital Beamforming Synthetic Aperture Radar (DBSAR) during the SMAP-VEX 2008 campaign. The 2008 SMAP Validation Experiment was conducted to address a number of specific questions related to the soil moisture retrieval algorithms. SMAP-VEX 2008 consisted of a series of aircraft-based flights conducted on the Eastern Shore of Maryland and Delaware in the fall of 2008. Several other instruments participated in the campaign, including the Passive Active L-Band System (PALS), the Marshall Airborne Polarimetric Imaging Radiometer (MAPIR), and the Global Positioning System Reflectometer (GPSR). This campaign was the first SMAP Validation Experiment. DBSAR is a multimode radar system developed at NASA/Goddard Space Flight Center that combines state-of-the-art radar technologies, on-board processing, and advances in signal processing techniques in order to enable new remote sensing capabilities applicable to Earth science and planetary applications [1]. The instrument can be configured to operate in scatterometer, Synthetic Aperture Radar (SAR), or altimeter mode. The system builds upon the L-band Imaging Scatterometer (LIS) developed as part of the RadSTAR program. The radar is a phased array system designed to fly on the NASA P3 aircraft. The instrument consists of a programmable waveform generator, eight transmit/receive (T/R) channels, a microstrip antenna, and a reconfigurable data acquisition and processor system. Each transmit channel incorporates a digital attenuator and a digital phase shifter that enable amplitude and phase modulation on transmit. The attenuators, phase shifters, and calibration switches are digitally controlled by the radar control card (RCC) on a pulse-by-pulse basis. The antenna is a corporate-fed microstrip patch array centered at 1.26 GHz with a 20 MHz bandwidth. Although only one feed is used in the present configuration, a provision was made for separate corporate feeds for vertical and horizontal polarization. System upgrades to dual polarization are currently under way. The DBSAR processor is a reconfigurable data acquisition and processor system capable of real-time, high-speed data processing. DBSAR uses an FPGA-based architecture to implement digital down-conversion, in-phase and quadrature (I/Q) demodulation, and subsequent radar-specific algorithms. The core of the processor board consists of an analog-to-digital (A/D) section, three Altera Stratix field programmable gate arrays (FPGAs), an ARM microcontroller, several memory devices, and an Ethernet interface. The processor also interfaces with a navigation board consisting of a GPS and a MEMS gyro. The processor has been configured to operate in scatterometer, Synthetic Aperture Radar (SAR), and altimeter modes. All the modes are based on digital beamforming, a digital process that generates the far-field beam patterns at various scan angles from voltages sampled in the antenna array. This technique allows steering the received beam and controlling its beam-width and side-lobe level. Several beamforming techniques can be implemented, each characterized by unique strengths and weaknesses, and each applicable to different measurement scenarios. In scatterometer mode, the radar is capable of generating a wide beam or scanning a narrow beam on transmit, and of steering the received beam during processing while controlling its beamwidth and side-lobe level. Table I lists some important radar characteristics.
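As a generic sketch of the digital beamforming step described above (not DBSAR's actual implementation), delay-and-sum beamforming on a uniform linear array reduces to phase-aligning the sampled element voltages toward the desired scan angle and summing.

```python
import numpy as np

def beamform(snapshots, d_over_lambda, steer_deg):
    """Delay-and-sum digital beamforming for a uniform linear array:
    phase-align each element's voltage samples toward steer_deg, then sum.
    snapshots: complex array of shape (n_elements, n_samples)."""
    n = snapshots.shape[0]
    steer = np.sin(np.radians(steer_deg))
    # Uniform taper; amplitude tapering would lower the side-lobe level.
    weights = np.exp(-2j * np.pi * d_over_lambda * np.arange(n) * steer) / n
    return weights @ snapshots          # beamformed time series

# Toy check: a plane wave arriving from 20 degrees on an 8-element,
# half-wavelength-spaced array.
n_el, arrival = 8, 20.0
elem_phase = 2j * np.pi * 0.5 * np.arange(n_el) * np.sin(np.radians(arrival))
x = np.exp(elem_phase)[:, None] * np.exp(2j * np.pi * 0.01 * np.arange(64))[None, :]
print(abs(beamform(x, 0.5, arrival)).mean())   # ~1.0: beam steered onto the source
print(abs(beamform(x, 0.5, -40.0)).mean())     # small: source sits in a side-lobe
```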
Han, Bing; Ding, Chibiao; Zhong, Lihua; Liu, Jiayin; Qiu, Xiaolan; Hu, Yuxin; Lei, Bin
2018-01-01
The Gaofen-3 (GF-3) data processor was developed as a workstation-based GF-3 synthetic aperture radar (SAR) data processing system. The processor consists of two vital subsystems of the GF-3 ground segment, which are referred to as data ingesting subsystem (DIS) and product generation subsystem (PGS). The primary purpose of DIS is to record and catalogue GF-3 raw data with a transferring format, and PGS is to produce slant range or geocoded imagery from the signal data. This paper presents a brief introduction of the GF-3 data processor, including descriptions of the system architecture, the processing algorithms and its output format. PMID:29534464
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chung, Youngjoo; Kim, Keeman.
1991-01-01
An operating system shell GPDAS (General Purpose Data Acquisition Shell) on MS-DOS-based microcomputers has been developed to provide flexibility in data acquisition and device control for magnet measurements at the Advanced Photon Source. GPDAS is both a command interpreter and an integrated script-based programming environment. It also incorporates the MS-DOS shell to make use of the existing utility programs for file manipulation and data analysis. Features include: alias definition, virtual memory, windows, graphics, data and procedure backup, background operation, script programming language, and script level debugging. Data acquisition system devices can be controlled through IEEE488 board, multifunction I/O board, digital I/O board and Gespac crate via Euro G-64 bus. GPDAS is now being used for diagnostics R&D and accelerator physics studies as well as for magnet measurements. Their hardware configurations will also be discussed. 3 refs., 3 figs.
Digital system for structural dynamics simulation
NASA Technical Reports Server (NTRS)
Krauter, A. I.; Lagace, L. J.; Wojnar, M. K.; Glor, C.
1982-01-01
State-of-the-art digital hardware and software were developed for the simulation of complex structural dynamic interactions, such as those which occur in rotating structures (engine systems). They were incorporated in a system designed to use an array of processors in which the computation for each physical subelement or functional subsystem would be assigned to a single specific processor in the simulator. These node processors are microprogrammed bit-slice microcomputers which function autonomously and can communicate with each other and a central control minicomputer over parallel digital lines. Inter-processor nearest-neighbor communications busses pass the constants which represent physical constraints and boundary conditions. The node processors are connected to the six nearest neighbor node processors to simulate the actual physical interface of real substructures. Computer-generated finite element mesh and force models can be developed with the aid of the central control minicomputer. The control computer also oversees the animation of a graphics display system and disk-based mass storage, along with the individual processing elements.
NASA Astrophysics Data System (ADS)
Rahman, P. A.
2018-05-01
This paper deals with a model of the knapsack optimization problem and a method for solving it based on directed combinatorial search in Boolean space. The author's specialized mathematical model for decomposing the search zone into separate search spheres, and the algorithm for distributing the search spheres across the cores of a multi-core processor, are also discussed. The paper provides an example of decomposing the search zone into several search spheres and distributing them across the cores of a quad-core processor. Finally, a formula offered by the author for estimating the theoretical maximum computational acceleration, achievable by parallelizing the search zone into search spheres over an unlimited number of processor cores, is also given.
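A simplified analog of this decomposition can be sketched in Python: fixing the first K item bits partitions the Boolean search space into 2^K disjoint regions (standing in for the paper's search spheres), which are then distributed across processor cores. The instance data below are arbitrary, and the exhaustive inner search stands in for the paper's directed search.

```python
import itertools
from multiprocessing import Pool

values  = [60, 100, 120, 70, 30]   # illustrative knapsack instance
weights = [10, 20, 30, 15, 5]
CAP, K = 50, 2                     # capacity; fix the first K bits per search sphere

def best_in_sphere(prefix):
    """Exhaustively search one sphere: all completions of a fixed bit prefix."""
    best = 0
    for tail in itertools.product((0, 1), repeat=len(values) - len(prefix)):
        sel = prefix + tail
        w = sum(wi for wi, s in zip(weights, sel) if s)
        if w <= CAP:
            best = max(best, sum(vi for vi, s in zip(values, sel) if s))
    return best

if __name__ == '__main__':
    spheres = list(itertools.product((0, 1), repeat=K))   # 2**K spheres
    with Pool() as pool:                                  # one sphere per worker core
        print(max(pool.map(best_in_sphere, spheres)))     # -> 260 for this instance
```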
A Bayesian sequential processor approach to spectroscopic portal system decisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sale, K; Candy, J; Breitfeller, E
The development of faster, more reliable techniques to detect radioactive contraband in a portal-type scenario is an extremely important problem, especially in this era of constant terrorist threats. Towards this goal, the development of a model-based, Bayesian sequential data processor for the detection problem is discussed. In the sequential processor each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is produced as soon as it is statistically justified by the data rather than waiting for a fixed counting interval before any analysis is performed. In this paper the Bayesian model-based approach, physics and signal processing models, and decision functions are discussed along with the first results of our research.
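The sequential decision logic can be illustrated with a minimal sketch: a sequential probability ratio test on Poisson count data that declares a decision as soon as the accumulated evidence crosses a threshold, rather than after a fixed counting interval. The rates and thresholds below are illustrative assumptions, not the report's model.

```python
import numpy as np

rng = np.random.default_rng(0)
bkg_rate, src_rate = 5.0, 9.0          # assumed counts per interval under each hypothesis
lo, hi = np.log(0.01), np.log(100.0)   # decision thresholds on the log-likelihood ratio

def sequential_detector(counts):
    """Declare 'source' or 'background' as soon as the data justify it."""
    llr = 0.0
    for t, n in enumerate(counts, 1):
        # log-likelihood ratio increment for one Poisson-distributed datum
        llr += n * np.log(src_rate / bkg_rate) - (src_rate - bkg_rate)
        if llr >= hi:
            return 'source', t          # detection, after only t intervals
        if llr <= lo:
            return 'background', t
    return 'undecided', len(counts)

print(sequential_detector(rng.poisson(src_rate, size=200)))
```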
A GaAs vector processor based on parallel RISC microprocessors
NASA Astrophysics Data System (ADS)
Misko, Tim A.; Rasset, Terry L.
A vector processor architecture based on the development of a 32-bit microprocessor using gallium arsenide (GaAs) technology has been developed. The McDonnell Douglas vector processor (MVP) will be fabricated completely from GaAs digital integrated circuits. The MVP architecture includes a vector memory of 1 megabyte, a parallel bus architecture with eight processing elements connected in parallel, and a control processor. The processing elements consist of a reduced instruction set CPU (RISC) with four floating-point coprocessor units and necessary memory interface functions. This architecture has been simulated for several benchmark programs including complex fast Fourier transform (FFT), complex inner product, trigonometric functions, and sort-merge routine. The results of this study indicate that the MVP can process a 1024-point complex FFT at a speed of 112 microsec (389 megaflops) while consuming approximately 618 W of power in a volume of approximately 0.1 ft-cubed.
PCI-based WILDFIRE reconfigurable computing engines
NASA Astrophysics Data System (ADS)
Fross, Bradley K.; Donaldson, Robert L.; Palmer, Douglas J.
1996-10-01
WILDFORCE is the first PCI-based custom reconfigurable computer that is based on the Splash 2 technology transferred from the National Security Agency and the Institute for Defense Analyses, Supercomputing Research Center (SRC). The WILDFORCE architecture has many of the features of the WILDFIRE computer, such as field- programmable gate array (FPGA) based processing elements, linear array and crossbar interconnection, and high- performance memory and I/O subsystems. New features introduced in the PCI-based WILDFIRE systems include memory/processor options that can be added to any processing element. These options include static and dynamic memory, digital signal processors (DSPs), FPGAs, and microprocessors. In addition to memory/processor options, many different application specific connectors can be used to extend the I/O capabilities of the system, including systolic I/O, camera input and video display output. This paper also discusses how this new PCI-based reconfigurable computing engine is used for rapid-prototyping, real-time video processing and other DSP applications.
Understanding Life : The Evolutionary Dynamics of Complexity and Semiosis
NASA Astrophysics Data System (ADS)
Loeckenhoff, Helmut K.
2010-11-01
Post-Renaissance sciences created different cultures. To establish an epistemological base, Physics were separated from the Mental domain. Consciousness was excluded from science. Life Sciences were left in between e.g. LaMettrie's `man-machine' (1748) and 'vitalism' [e.g. Bergson 4]. Causative thinking versus intuitive arguing limited strictly comprehensive concepts. First ethology established a potential shared base for science, proclaiming the `biology paradigm' in the middle of the 20th century. Initially procured by Cybernetics and Systems sciences, `constructivist' models prepared a new view on human perception and thus also of scientific `objectivity' when introducing the `observer'. In sequel Computer sciences triggered the ICT revolution. In turn ICT helped to develop Chaos and Complexity sciences, Non-linear Mathematics and its spin-offs in the formal sciences [Spencer-Brown 49] as e.g. (proto-)logics. Models of life systems, as e.g. Anticipatory Systems, integrated epistemology with mathematics and Anticipatory Computing [Dubois 11, 12, 13, 14] connecting them with Semiotics. Seminal ideas laid in the turn of the 19th to the 20th century [J. v. Uexküll 53] detected the co-action and co-evolvement of environments and life systems. Bio-Semiotics ascribed purpose, intent and meaning as essential qualities of life. The concepts of Systems Biology and Qualitative Research enriched and develop also anthropologies and humanities. Brain research added models of (higher) consciousness. An avant-garde is contemplating a science including consciousness as one additional base. New insights from the extended qualitative approach led to re-conciliation of basic assumptions of scientific inquiry, creating the `epistemological turn'. Paradigmatically, resting on macro- micro- and recently on nano-biology, evolution biology sired fresh scripts of evolution [W. Wieser 60,61]. Its results tie to hypotheses describing the emergence of language, of the human mind and of culture [e.g. R. Logan 34]. The different but related approaches are yet but loosely connected. Recent efforts search for a shared foundation e.g. in a set of Transdisciplinary base models [Loeckenhoff 30, 31]. The domain of pure mental constructions as ideologies/religions and spiritual phenomena will be implied.
Theater in Physics Teacher Education
NASA Astrophysics Data System (ADS)
van den Berg, Ed
2009-09-01
Ten years ago I sat down with the first batch of students in our science/math teacher education program in the Philippines, then third-year students, and asked them what they could do for the opening of the new science building. One of them pulled a stack of papers out of his bag and put it in front of me: a complete script for a science play! This was beyond expectation. The play was practiced several times for groups of high school students visiting the science exhibition that was also organized by the students. During the opening of our building, the play was performed for visiting dignitaries including the Assistant Secretary for Education, Culture, and Sports. It was a great success! The cast got invited to present their production at a number of places and occasions.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-27
... technologies, namely safety-critical processor-based signal or train control systems, including subsystems and... or train control system (including a subsystem or component thereof) that was in service as of June 6... processor-based signal or train control system, subsystem, or component.'' See 49 CFR 236.903. Under Subpart...
Optical chirp z-transform processor with a simplified architecture.
Ngo, Nam Quoc
2014-12-29
Using a simplified chirp z-transform (CZT) algorithm based on the discrete-time convolution method, this paper presents the synthesis of a simplified architecture of a reconfigurable optical chirp z-transform (OCZT) processor based on the silica-based planar lightwave circuit (PLC) technology. In the simplified architecture of the reconfigurable OCZT, the required number of optical components is small and there are no waveguide crossings, which makes fabrication easy. The design of a novel type of optical discrete Fourier transform (ODFT) processor as a special case of the synthesized OCZT is then presented to demonstrate its effectiveness. The designed ODFT can be potentially used as an optical demultiplexer at the receiver of an optical fiber orthogonal frequency division multiplexing (OFDM) transmission system.
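The discrete-time convolution formulation of the CZT (Bluestein's identity, nk = (n^2 + k^2 - (k-n)^2)/2) can be sketched in a few lines of Python; this is the generic algorithm, not the optical architecture itself.

```python
import numpy as np

def czt(x, M, w, A=1.0):
    """Chirp z-transform evaluated at z_k = A * exp(-1j*w*k), k = 0..M-1,
    computed as a discrete-time convolution (Bluestein's identity)."""
    A = complex(A)
    N = len(x)
    n, k = np.arange(N), np.arange(M)
    chirp = lambda t: np.exp(-0.5j * w * t**2)   # W**(t^2/2) with W = exp(-1j*w)
    y = x * A**(-n) * chirp(n)                   # pre-multiplied input sequence
    v = 1.0 / chirp(np.arange(-(N - 1), M))      # convolution kernel W**(-t^2/2)
    return np.convolve(y, v)[N - 1:N - 1 + M] * chirp(k)

# The DFT is the special case w = 2*pi/N, A = 1:
x = np.random.randn(8)
assert np.allclose(czt(x, 8, 2 * np.pi / 8), np.fft.fft(x))
```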
Sexual scripts among young heterosexually active men and women: continuity and change.
Masters, N Tatiana; Casey, Erin; Wells, Elizabeth A; Morrison, Diane M
2013-01-01
Whereas gendered sexual scripts are hegemonic at the cultural level, research suggests they may be less so at dyadic and individual levels. Understanding "disjunctures" between sexual scripts at different levels holds promise for illuminating mechanisms through which sexual scripts can change. Through interviews with 44 heterosexually active men and women aged 18 to 25, the ways young people grappled with culture-level scripts for sexuality and relationships were delineated. Findings suggest that, although most participants' culture-level gender scripts for behavior in sexual relationships were congruent with descriptions of traditional masculine and feminine sexuality, there was heterogeneity in how or whether these scripts were incorporated into individual relationships. Specifically, three styles of working with sexual scripts were found: conforming, in which personal gender scripts for sexual behavior overlapped with traditional scripts; exception-finding, in which interviewees accepted culture-level gender scripts as a reality, but created exceptions to gender rules for themselves; and transforming, in which participants either attempted to remake culture-level gender scripts or interpreted their own nontraditional styles as equally normative. Changing sexual scripts can potentially contribute to decreased gender inequity in the sexual realm and to increased opportunities for sexual satisfaction, safety, and well-being, particularly for women, but for men as well.
Scripts or Components? A Comparative Study of Basic Emotion Knowledge in Roma and Non-Roma Children
ERIC Educational Resources Information Center
Giménez-Dasí, Marta; Quintanilla, Laura; Lucas-Molina, Beatriz
2018-01-01
The basic aspects of emotional comprehension seem to be acquired around the age of 5. However, it is not clear whether children's emotion knowledge is based on facial expression, organized in scripts, or determined by sociocultural context. This study aims to shed some light on these subjects by assessing knowledge of basic emotions in 4- and…
The Development of Videos in Culturally Grounded Drug Prevention for Rural Native Hawaiian Youth
ERIC Educational Resources Information Center
Okamoto, Scott K.; Helm, Susana; McClain, Latoya L.; Dinson, Ay-Laina
2012-01-01
The purpose of this study was to adapt and validate narrative scripts to be used for the video components of a culturally grounded drug prevention program for rural Native Hawaiian youth. Scripts to be used to film short video vignettes of drug-related problem situations were developed based on a foundation of pre-prevention research funded by the…
ERIC Educational Resources Information Center
Hanley, Mary Stone
This paper is an analysis of a project that involved African American middle school students in a drama program that was based on their lives and the stories of their community. Students were trained in performance skills, participated in the development of a script, and then performed the script in local schools. The 10 student participants, 5…
ERIC Educational Resources Information Center
Waters, Theodore E. A.; Bosmans, Guy; Vandevivere, Eva; Dujardin, Adinda; Waters, Harriet S.
2015-01-01
Recent work examining the content and organization of attachment representations suggests that 1 way in which we represent the attachment relationship is in the form of a cognitive script. This work has largely focused on early childhood or adolescence/adulthood, leaving a large gap in our understanding of script-like attachment representations in…
ERIC Educational Resources Information Center
Sng, Cheong Ying; Carter, Mark; Stephenson, Jennifer
2017-01-01
Scripts in written or auditory form have been used to teach conversational skills to individuals with autism spectrum disorder (ASD), but with the proliferation of handheld tablet devices the scope to combine these 2 formats has broadened. The aim of this pilot study was to investigate if a script-based intervention, presented on an iPad…
Exploring JavaScript and ROOT technologies to create Web-based ATLAS analysis and monitoring tools
NASA Astrophysics Data System (ADS)
Sánchez Pineda, A.
2015-12-01
We explore the potential of current web applications to create online interfaces that allow the visualization, interaction and real cut-based physics analysis and monitoring of processes through a web browser. The project consists of the initial development of web-based and cloud computing services to allow students and researchers to perform fast and very useful cut-based analysis on a browser, reading and using real data and official Monte Carlo simulations stored in ATLAS computing facilities. Several tools are considered: ROOT, JavaScript and HTML. Our study case is the current cut-based H → ZZ → llqq analysis of the ATLAS experiment. Preliminary but satisfactory results have been obtained online.
Multitask neurovision processor with extensive feedback and feedforward connections
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Knopf, George K.
1991-11-01
A multi-task neuro-vision processor which performs a variety of information processing operations associated with the early stages of biological vision is presented. The network architecture of this neuro-vision processor, called the positive-negative (PN) neural processor, is loosely based on the neural activity fields exhibited by thalamic and cortical nervous tissue layers. The computational operation performed by the processor arises from the strength of the recurrent feedback among the numerous positive and negative neural computing units. By adjusting the feedback connections it is possible to generate diverse dynamic behavior that may be used for short-term visual memory (STVM), spatio-temporal filtering (STF), and pulse frequency modulation (PFM). The information attributes that are to be processed may be regulated by modifying the feedforward connections from the signal space to the neural processor.
Sparks Will Fly: engineering creative script conflicts
NASA Astrophysics Data System (ADS)
Veale, Tony; Valitutti, Alessandro
2017-10-01
Scripts are often dismissed as the stuff of good movies and bad politics. They codify cultural experience so rigidly that they remove our freedom of choice and become the very antithesis of creativity. Yet, mental scripts have an important role to play in our understanding of creative behaviour, since a deliberate departure from an established script can produce results that are simultaneously novel and familiar, especially when others stick to the conventional script. Indeed, creative opportunities often arise at the overlapping boundaries of two scripts that antagonistically compete to mentally organise the same situation. This work explores the computational integration of competing scripts to generate creative friction in short texts that are surprising but meaningful. Our exploration considers conventional macro-scripts - ordered sequences of actions - and the less obvious micro-scripts that operate at even the lowest levels of language. For the former, we generate plots that squeeze two scripts into a single mini-narrative; for the latter, we generate ironic descriptions that use conflicting scripts to highlight the speaker's pragmatic insincerity. We show experimentally that verbal irony requires both kinds of scripts - macro and micro - to work together to reliably generate creative sparks from a speaker's subversive intent.
Lubarsky, Stuart; Dory, Valérie; Audétat, Marie-Claude; Custers, Eugène; Charlin, Bernard
2015-01-01
Script theory proposes an explanation for how information is stored in and retrieved from the human mind to influence individuals' interpretation of events in the world. Applied to medicine, script theory focuses on knowledge organization as the foundation of clinical reasoning during patient encounters. According to script theory, medical knowledge is bundled into networks called 'illness scripts' that allow physicians to integrate new incoming information with existing knowledge, recognize patterns and irregularities in symptom complexes, identify similarities and differences between disease states, and make predictions about how diseases are likely to unfold. These knowledge networks become updated and refined through experience and learning. The implications of script theory on medical education are profound. Since clinician-teachers cannot simply transfer their customized collections of illness scripts into the minds of learners, they must create opportunities to help learners develop and fine-tune their own sets of scripts. In this essay, we provide a basic sketch of script theory, outline the role that illness scripts play in guiding reasoning during clinical encounters, and propose strategies for aligning teaching practices in the classroom and the clinical setting with the basic principles of script theory.
Understanding and Using the Fermi Science Tools
NASA Astrophysics Data System (ADS)
Asercion, Joseph
2018-01-01
The Fermi Science Support Center (FSSC) provides information, documentation, and tools for the analysis of Fermi science data, including both the Large-Area Telescope (LAT) and the Gamma-ray Burst Monitor (GBM). Source and binary versions of the Fermi Science Tools can be downloaded from the FSSC website, and are supported on multiple platforms. An overview document, the Cicerone, provides details of the Fermi mission, the science instruments and their response functions, the science data preparation and analysis process, and interpretation of the results. Analysis Threads and a reference manual available on the FSSC website provide the user with step-by-step instructions for many different types of data analysis: point source analysis - generating maps, spectra, and light curves, pulsar timing analysis, source identification, and the use of python for scripting customized analysis chains. We present an overview of the structure of the Fermi science tools and documentation, and how to acquire them. We also provide examples of standard analyses, including tips and tricks for improving Fermi science analysis.
2014-01-01
A specially designed sensor processor used as a main processor in an IoT (internet-of-things) device for rare-event sensing applications is proposed. The IoT device including the proposed sensor processor performs event-driven sensor data processing based on an accuracy-energy configurable event quantization at the architectural level. The received sensor signal is converted into a sequence of atomic events, which are extracted by the signal-to-atomic-event generator (AEG). Using an event signal processing unit (EPU) as an accelerator, the extracted atomic events are analyzed to build the final event. Instead of transmitting the sampled raw data via the internet, the proposed method delays the communication with a host system until a semantic pattern of the signal is identified as a final event. The proposed processor is implemented on a single chip, which is tightly coupled at the bus connection level with a microcontroller, using a 0.18 μm CMOS embedded-flash process. For experimental results, we evaluated the proposed sensor processor by using an IR- (infrared radio-) based signal reflection and sensor signal acquisition system. We successfully demonstrated that the expected power consumption is in the range of 20% to 50% of the baseline when a 10% accuracy error is allowed. PMID:25580458
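A minimal software sketch of the event-driven idea (hypothetical thresholds and pattern, not the chip's actual logic): raw samples are first quantized into atomic threshold-crossing events, and communication is deferred until a semantic final event, here a burst of pulses, is recognized.

```python
import numpy as np

def to_atomic_events(signal, threshold, hysteresis=0.1):
    """Convert raw samples into 'rise'/'fall' atomic events (threshold crossings)."""
    events, armed = [], True
    for i, s in enumerate(signal):
        if armed and s > threshold:
            events.append(('rise', i)); armed = False
        elif not armed and s < threshold - hysteresis:
            events.append(('fall', i)); armed = True
    return events

def final_event(events, min_pulses=3, max_span=400):
    """Report a semantic event only once a pattern (a burst of pulses) completes;
    until then, nothing would be transmitted to the host."""
    rises = [i for kind, i in events if kind == 'rise']
    if len(rises) >= min_pulses and rises[-1] - rises[0] <= max_span:
        return ('burst', rises[0], rises[-1])
    return None

sig = np.zeros(500); sig[[50, 120, 190]] = 1.0   # three synthetic pulses
print(final_event(to_atomic_events(sig, 0.5)))   # -> ('burst', 50, 190)
```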
Enabling Remote and Automated Operations at The Red Buttes Observatory
NASA Astrophysics Data System (ADS)
Ellis, Tyler G.; Jang-Condell, Hannah; Kasper, David; Yeigh, Rex R.
2016-01-01
The Red Buttes Observatory (RBO) is a 60 centimeter Cassegrain telescope located ten miles south of Laramie, Wyoming. The size and proximity of the telescope comfortably make the site ideal for remote and automated observations. This task required development of confidence in control systems for the dome, telescope, and camera. Python and WinSCP script routines were created for the management of science images and weather. These scripts control the observatory via the ASCOM standard libraries and allow autonomous operation after initiation. The automation tasks were completed primarily to rejuvenate an aging and underutilized observatory with hopes to contribute to an international exoplanet hunting team with other interests in potentially hazardous asteroid detection. RBO is owned and operated solely by the University of Wyoming. The updates and proprietor status have encouraged the development of an undergraduate astronomical methods course including hands-on experience with a research telescope, a rarity in bachelor programs for astrophysics.
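As an illustration of the ASCOM-based control pattern (a sketch, not RBO's actual scripts): ASCOM drivers are exposed as Windows COM objects, so a Python routine using the pywin32 package can slew a telescope in a few lines. The simulator ProgID and coordinates below are example values.

```python
# Assumes Windows, the ASCOM platform, and the pywin32 package;
# "ASCOM.Simulator.Telescope" is the ProgID of the standard telescope simulator.
import win32com.client

tel = win32com.client.Dispatch("ASCOM.Simulator.Telescope")
tel.Connected = True                      # open the driver connection
if tel.CanSlew:
    tel.Tracking = True
    tel.SlewToCoordinates(5.59, -5.39)    # RA (hours), Dec (degrees): Orion Nebula
print(tel.RightAscension, tel.Declination)
tel.Connected = False                     # release the driver
```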
NASA Astrophysics Data System (ADS)
Hosier, Julie Winchester
Integration of subjects is something elementary teachers must do to ensure required objectives are covered. Science-based Reader's Theatre is one way to weave reading into science. This study examined the roles of frequency, attitudes, and Multiple Intelligence modalities surrounding Electricity Content-Based Reader's Theatre. This study used a quasi-experimental, repeated-measures ANOVA design with time as a factor. A convenience sample of two fifth-grade classrooms participated in the study for eighteen weeks. Five Electricity Achievement Tests were given throughout the study to assess students' growth. A Student Reader's Theatre Attitudinal Survey revealed students' attitudes before and after Electricity Content-Based Reader's Theatre treatment. The Multiple Intelligence Inventory for Kids (Faris, 2007) examined whether Multiple Intelligence modality played a role in achievement on Electricity Test 4, the post-treatment test. Analysis using repeated measures ANOVA and an independent t-test found that students in the experimental group, which practiced its student-created Electricity Content-Based Reader's Theatre skits ten times versus two times for the control group, did significantly better on Electricity Achievement Test 4, t(76) = 3.018, p = 0.003. Dependent t-tests did not find statistically significant differences between students' attitudes about Electricity Content-Based Reader's Theatre before and after treatment. A Kruskal-Wallis test found no statistically significant difference between the various Multiple Intelligence modalities score mean ranks (χ2 = 5.57, df = 2, alpha = .062). Qualitative data do, however, indicate students had strong positive feelings about Electricity Content-Based Reader's Theatre after treatment. Students indicated it to be motivating, confidence-building, and a fun way to learn about science; however, they disliked writing their own scripts. Examining the frequency, attitudes, and Multiple Intelligence modalities led to the conclusion that the role of frequency had the greatest impact on the success of Electricity Content-Based Reader's Theatre. The participating teachers, students, and researcher found integrating science and reading through Electricity Content-Based Reader's Theatre beneficial.
PyMOOSE: Interoperable Scripting in Python for MOOSE
Ray, Subhasis; Bhalla, Upinder S.
2008-01-01
Python is emerging as a common scripting language for simulators. This opens up many possibilities for interoperability in the form of analysis, interfaces, and communications between simulators. We report the integration of Python scripting with the Multi-scale Object Oriented Simulation Environment (MOOSE). MOOSE is a general-purpose simulation system for compartmental neuronal models and for models of signaling pathways based on chemical kinetics. We show how the Python-scripting version of MOOSE, PyMOOSE, combines the power of a compiled simulator with the versatility and ease of use of Python. We illustrate this by using Python numerical libraries to analyze MOOSE output online, and by developing a GUI in Python/Qt for a MOOSE simulation. Finally, we build and run a composite neuronal/signaling model that uses both the NEURON and MOOSE numerical engines, and Python as a bridge between the two. Thus PyMOOSE has a high degree of interoperability with analysis routines, with graphical toolkits, and with other simulators. PMID:19129924
Writers Identification Based on Multiple Windows Features Mining
NASA Astrophysics Data System (ADS)
Fadhil, Murad Saadi; Alkawaz, Mohammed Hazim; Rehman, Amjad; Saba, Tanzila
2016-03-01
Nowadays, writer identification is in high demand for identifying the original writer of a script with high accuracy. One of the main challenges in writer identification is how to extract the discriminative features of different authors' scripts so as to classify them precisely. In this paper, the adaptive division method on offline Latin script has been implemented using several variant window sizes. From fragments of binarized text, a set of features is extracted and classified into clusters in the form of groups or classes. Finally, the approach proposed in this paper has been tested with various parameters in terms of text division and window sizes. It is observed that selection of the right window size yields a well-positioned window division. The proposed approach is tested on the IAM standard dataset (IAM, Institut für Informatik und angewandte Mathematik, University of Bern, Bern, Switzerland), a constraint-free script database. Finally, the achieved results are compared with several techniques reported in the literature.
NASA Astrophysics Data System (ADS)
Kesiman, Made Windu Antara; Valy, Dona; Burie, Jean-Christophe; Paulus, Erick; Sunarya, I. Made Gede; Hadi, Setiawan; Sok, Kim Heng; Ogier, Jean-Marc
2017-01-01
Due to their specific characteristics, palm leaf manuscripts provide new challenges for text line segmentation tasks in document analysis. We investigated the performance of six text line segmentation methods by conducting comparative experimental studies for the collection of palm leaf manuscript images. The image corpus used in this study comes from the sample images of palm leaf manuscripts of three different Southeast Asian scripts: Balinese script from Bali and Sundanese script from West Java, both from Indonesia, and Khmer script from Cambodia. For the experiments, four text line segmentation methods that work on binary images are tested: the adaptive partial projection line segmentation approach, the A* path planning approach, the shredding method, and our proposed energy function for the shredding method. Two other methods that can be directly applied on grayscale images are also investigated: the adaptive local connectivity map method and the seam carving-based method. The evaluation criteria and tool provided by the ICDAR2013 Handwriting Segmentation Contest were used in this experiment.
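Of the methods compared, the projection-profile family is the simplest to sketch: summing ink pixels per row of the binarized page and cutting at empty rows yields candidate line bands. This is a generic sketch; the paper's adaptive partial projection method refines this idea for the difficult layouts of palm leaf manuscripts.

```python
import numpy as np

def segment_lines(binary_img, min_height=3):
    """Split a binarized page (ink = 1) into text-line bands using the
    horizontal projection profile: rows without ink separate lines."""
    ink_rows = binary_img.sum(axis=1) > 0      # does each row contain ink?
    lines, start = [], None
    for y, has_ink in enumerate(ink_rows):
        if has_ink and start is None:
            start = y                          # a new line band begins
        elif not has_ink and start is not None:
            if y - start >= min_height:        # discard specks shorter than min_height
                lines.append((start, y))
            start = None
    if start is not None:
        lines.append((start, len(ink_rows)))
    return lines                               # list of (top_row, bottom_row) bands

# Toy page: two "lines" of ink at rows 10-19 and 40-52
page = np.zeros((60, 80), dtype=np.uint8)
page[10:20, 5:70] = 1
page[40:53, 5:70] = 1
print(segment_lines(page))                     # -> [(10, 20), (40, 53)]
```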
Chen, Yiping; Fu, Shimin; Iversen, Susan D; Smith, Steve M; Matthews, Paul M
2002-10-01
Chinese offers a unique tool for testing the effects of word form on language processing during reading. The processes of letter-mediated grapheme-to-phoneme translation and phonemic assembly (assembled phonology) critical for reading and spelling in any alphabetic orthography are largely absent when reading nonalphabetic Chinese characters. In contrast, script-to-sound translation based on the script as a whole (addressed phonology) is absent when reading the Chinese alphabetic sound symbols known as pinyin, for which the script-to-sound translation is based exclusively on assembled phonology. The present study aims to contrast patterns of brain activity associated with the different cognitive mechanisms needed for reading the two scripts. fMRI was used with a block design involving a phonological and lexical task in which subjects were asked to decide whether visually presented, paired Chinese characters or pinyin "sounded like" a word. Results demonstrate that reading Chinese characters and pinyin activate a common brain network including the inferior frontal, middle, and inferior temporal gyri, the inferior and superior parietal lobules, and the extrastriate areas. However, some regions show relatively greater activation for either pinyin or Chinese reading. Reading pinyin led to a greater activation in the inferior parietal cortex bilaterally, the precuneus, and the anterior middle temporal gyrus. In contrast, activation in the left fusiform gyrus, the bilateral cuneus, the posterior middle temporal, the right inferior frontal gyrus, and the bilateral superior frontal gyrus were greater for nonalphabetic Chinese reading. We conclude that both alphabetic and nonalphabetic scripts activate a common brain network for reading. Overall, there are no differences in terms of hemispheric specialization between alphabetic and nonalphabetic scripts. However, differences in language surface form appear to determine relative activation in other regions. Some of these regions (e.g., the inferior parietal cortex for pinyin and fusiform gyrus for Chinese characters) are candidate regions for specialized processes associated with reading via predominantly assembled (pinyin) or addressed (Chinese character) procedures.
Not letting the perfect be the enemy of the good: steps toward science-ready ALMA images
NASA Astrophysics Data System (ADS)
Kepley, Amanda A.; Donovan Meyer, Jennifer; Brogan, Crystal; Moullet, Arielle; Hibbard, John; Indebetouw, Remy; Mason, Brian
2016-07-01
Historically, radio observatories have placed the onus of calibrating and imaging data on the observer, thus restricting their user base to those already initiated into the mysteries of radio data or those willing to develop these skills. To expand its user base, the Atacama Large Millimeter/submillimeter Array (ALMA) has a high-level directive to calibrate users' data and, ultimately, to deliver scientifically usable images or cubes to principal investigators (PIs). Although an ALMA calibration pipeline is in place, all delivered images continue to be produced for the PI by hand. In this talk, I will describe on-going efforts at the North American ALMA Science Center to produce more uniform imaging products that more closely meet the PI science goals and provide better archival value. As a first step, the NAASC imaging group produced a simple imaging template designed to help scientific staff produce uniform imaging products. This script allowed the NAASC to maximize the productivity of data analysts with relatively little guidance by the scientific staff by providing a step-by-step guide to best practices for ALMA imaging. Finally, I will describe the role of the manually produced images in verifying the imaging pipeline and the on-going development of said pipeline. The development of the imaging template, while technically simple, shows how small steps toward unifying processes and sharing knowledge can lead to large gains for science data products.
Networked Workstations and Parallel Processing Utilizing Functional Languages
1993-03-01
program. This frees the programmer to concentrate on what the program is to do, not how the program is... The traditional 'von Neumann' architecture uses a timer-based (e.g., the program counter), sequentially programmed, single-processor approach to problem...
Fast particles identification in programmable form at level-0 trigger by means of the 3D-Flow system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crosetto, Dario B.
1998-10-30
The 3D-Flow Processor system is a new, technology-independent concept in very fast, real-time system architectures. Based on either an FPGA or an ASIC implementation, it can address, in a fully programmable manner, applications where commercially available processors would fail because of throughput requirements. Possible applications include filtering-algorithms (pattern recognition) from the input of multiple sensors, as well as moving any input validated by these filtering-algorithms to a single output channel. Both operations can easily be implemented on a 3D-Flow system to achieve a real-time processing system with a very short lag time. This system can be built either with off-the-shelf FPGAs or, for higher data rates, with CMOS chips containing 4 to 16 processors each. The basic building block of the system, a 3D-Flow processor, has been successfully designed in VHDL code written in ''Generic HDL'' (mostly made of reusable blocks that are synthesizable in different technologies, or FPGAs), to produce a netlist for a four-processor ASIC featuring 0.35 micron CBA (Cell Base Array) technology at 3.3 Volts, 884 mW power dissipation at 60 MHz and 63.75 mm sq. die size. The same VHDL code has been targeted to three FPGA manufacturers (Altera EPF10K250A, ORCA-Lucent Technologies 0R3T165 and Xilinx XCV1000). A complete set of software tools, the 3D-Flow System Manager, equally applicable to ASIC or FPGA implementations, has been produced to provide full system simulation, application development, real-time monitoring, and run-time fault recovery. Today's technology can accommodate 16 processors per chip in a medium size die, at a cost per processor of less than $5 based on the current silicon die/size technology cost.
Parallel Directionally Split Solver Based on Reformulation of Pipelined Thomas Algorithm
NASA Technical Reports Server (NTRS)
Povitsky, A.
1998-01-01
In this research an efficient parallel algorithm for 3-D directionally split problems is developed. The proposed algorithm is based on a reformulated version of the pipelined Thomas algorithm that starts the backward step computations immediately after the completion of the forward step computations for the first portion of lines. This algorithm has data available for other computational tasks while processors are idle from the Thomas algorithm. The proposed 3-D directionally split solver is based on the static scheduling of processors, where local and non-local, data-dependent and data-independent computations are scheduled while processors are idle. A theoretical model of parallelization efficiency is used to define optimal parameters of the algorithm, to show an asymptotic parallelization penalty, and to obtain an optimal cover of a global domain with subdomains. It is shown by computational experiments and by the theoretical model that the proposed algorithm reduces the parallelization penalty by about a factor of two relative to the basic algorithm for the range of the number of processors (subdomains) considered and the number of grid nodes per subdomain.
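For reference, the serial Thomas algorithm that underlies the pipelined variant is a forward elimination sweep followed by a backward substitution sweep over each line; a minimal NumPy sketch (not the paper's parallel scheduler) is shown below.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system. All arrays have length n: a is the
    sub-diagonal (a[0] ignored), b the main diagonal, c the super-diagonal
    (c[-1] ignored), d the right-hand side. The forward step sweeps down the
    line and the backward step sweeps up -- the two data-dependent passes
    that the pipelined variant overlaps across processors."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Quick check against a dense solve
n = 6
a = np.r_[0.0, -np.ones(n - 1)]
b = 4.0 * np.ones(n)
c = np.r_[-np.ones(n - 1), 0.0]
d = np.arange(1.0, n + 1)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
assert np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d))
```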
A low power biomedical signal processor ASIC based on hardware software codesign.
Nie, Z D; Wang, L; Chen, W G; Zhang, T; Zhang, Y T
2009-01-01
A low-power biomedical digital signal processor ASIC based on a hardware and software codesign methodology is presented in this paper. The codesign methodology was used to achieve higher system performance and design flexibility. The hardware implementation included a low-power 32-bit RISC CPU ARM7TDMI, a low-power AHB-compatible bus, and a scalable digital co-processor optimized for low-power Fast Fourier Transform (FFT) calculations. The co-processor could be scaled for 8-point, 16-point and 32-point FFTs, taking approximately 50, 100 and 150 clock cycles, respectively. The complete design was intensively simulated using the ARM DSM model and was emulated on the ARM Versatile platform before being committed to silicon. The multi-million-gate ASIC was fabricated using SMIC 0.18 microm mixed-signal CMOS 1P6M technology. The die area measures 5,000 microm x 2,350 microm. The power consumption was approximately 3.6 mW at a 1.8 V power supply and 1 MHz clock rate. The power consumption for FFT calculations was less than 1.5% compared with a conventional embedded software-based solution.
Multibus-based parallel processor for simulation
NASA Technical Reports Server (NTRS)
Ogrady, E. P.; Wang, C.-H.
1983-01-01
A Multibus-based parallel processor simulation system is described. The system is intended to serve as a vehicle for gaining hands-on experience, testing system and application software, and evaluating parallel processor performance during development of a larger system based on the horizontal/vertical-bus interprocessor communication mechanism. The prototype system consists of up to seven Intel iSBC 86/12A single-board computers which serve as processing elements, a multiple transmission controller (MTC) designed to support system operation, and an Intel Model 225 Microcomputer Development System which serves as the user interface and input/output processor. All components are interconnected by a Multibus/IEEE 796 bus. An important characteristic of the system is that it provides a mechanism for a processing element to broadcast data to other selected processing elements. This parallel transfer capability is provided through the design of the MTC and a minor modification to the iSBC 86/12A board. The operation of the MTC, the basic hardware-level operation of the system, and pertinent details about the iSBC 86/12A and the Multibus are described.
NASA Astrophysics Data System (ADS)
Rakvic, Ryan N.; Ives, Robert W.; Lira, Javier; Molina, Carlos
2011-01-01
General purpose computer designers have recently begun adding cores to their processors in order to increase performance. For example, Intel has adopted a homogeneous quad-core processor as a base for general purpose computing. PlayStation3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms at a high level. Can modern image-processing algorithms utilize these additional cores? On the other hand, modern advancements in configurable hardware, most notably field-programmable gate arrays (FPGAs) have created an interesting question for general purpose computer designers. Is there a reason to combine FPGAs with multicore processors to create an FPGA multicore hybrid general purpose computer? Iris matching, a repeatedly executed portion of a modern iris-recognition algorithm, is parallelized on an Intel-based homogeneous multicore Xeon system, a heterogeneous multicore Cell system, and an FPGA multicore hybrid system. Surprisingly, the cheaper PS3 slightly outperforms the Intel-based multicore on a core-for-core basis. However, both multicore systems are beaten by the FPGA multicore hybrid system by >50%.
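The iris-matching kernel being parallelized is typically a masked fractional Hamming distance between binary iris codes; since each pairwise comparison is independent, the workload spreads naturally across cores or FPGA logic. The following is a minimal sketch with synthetic codes; the paper's exact algorithm and data layout are not reproduced here.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Masked fractional Hamming distance between two binary iris codes,
    the comparison at the core of the matching stage."""
    valid = mask_a & mask_b                   # bits usable in both templates
    disagree = (code_a ^ code_b) & valid
    return disagree.sum() / valid.sum()

rng = np.random.default_rng(1)
a = rng.integers(0, 2, 2048).astype(bool)     # synthetic 2048-bit iris code
flips = rng.random(2048) < 0.05               # 5% bit noise: a same-eye re-capture
b = a ^ flips
mask = np.ones(2048, dtype=bool)
print(hamming_distance(a, b, mask, mask))     # ~0.05, well below a typical match cutoff
```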
The role of scripts in personal consistency and individual differences.
Demorest, Amy; Popovska, Ana; Dabova, Milena
2012-02-01
This article examines the role of scripts in personal consistency and individual differences. Scripts are personally distinctive rules for understanding emotionally significant experiences. In 2 studies, scripts were identified from autobiographical memories of college students (Ns = 47 and 50) using standard categories of events and emotions to derive event-emotion compounds (e.g., Affiliation-Joy). In Study 1, scripts predicted responses to a reaction-time task 1 month later, such that participants responded more quickly to the event from their script when asked to indicate what emotion would be evoked by a series of events. In Study 2, individual differences in 5 common scripts were found to be systematically related to individual differences in traits of the Five-Factor Model. Distinct patterns of correlation revealed the importance of studying events and emotions in compound units, that is, in script form (e.g., Agreeableness was correlated with the script Affiliation-Joy but not with the scripts Fun-Joy or Affiliation-Love). © 2012 The Authors. Journal of Personality © 2012, Wiley Periodicals, Inc.
Yanaoka, Kaichi
2016-02-01
This research examined the effects of planning and executive functions on young children's (ages 3 to 5 years) strategies in changing scripts. Young children (N = 77) performed a script task (doll task), three executive function tasks (DCCS, red/blue task, and nine-box task), a planning task, and a receptive vocabulary task. In the doll task, young children first enacted a "changing clothes" script, and then faced a situation in which some elements of the script were inappropriate. They needed to enact a script either by substituting other-script items for the inappropriate ones or by switching to the other script in advance. The results showed that shifting, a factor of executive function, had a positive influence on whether young children could substitute for the inappropriate items. In addition, planning was an important factor that helped children switch to the other script in advance. These findings suggest that shifting and planning play different roles in using the two strategies appropriately when young children enact scripts in unexpected situations.
Cheng, Li-Fang; Chen, Tung-Chien; Chen, Liang-Gee
2012-01-01
Most abnormal cardiac events, such as myocardial ischemia, acute myocardial infarction (AMI), and fatal arrhythmia, can be diagnosed through continuous electrocardiogram (ECG) analysis. According to recent clinical research, early detection and alarming of such cardiac events can reduce the delay in reaching the hospital, and the clinical outcomes for these individuals can be greatly improved. It would therefore be helpful to have a long-term ECG monitoring system able to identify abnormal cardiac events and provide real-time warnings to users. The combination of a wireless body area sensor network (BASN) and an on-sensor ECG processor is a possible solution for this application. In this paper, we design and implement a digital signal processor suitable for continuous ECG monitoring and alarming based on the continuous wavelet transform (CWT) through the proposed architecture, using both a programmable RISC processor and application-specific integrated circuits (ASICs) for performance optimization. According to the implementation results, the power consumption of the proposed processor integrated with an ASIC for CWT computation is only 79.4 mW. Compared with the single-RISC processor, a power reduction of about 91.6% is achieved.
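As a rough illustration of the CWT-based analysis such a processor accelerates, the sketch below runs a continuous wavelet transform over a synthetic ECG trace and thresholds the coefficient energy to locate beats. The wavelet, single scale, threshold, and refractory window are all illustrative assumptions, not the paper's design.

```python
# Minimal sketch: CWT-based beat detection on a synthetic ECG trace.
import numpy as np
import pywt
from scipy.signal import find_peaks

fs = 250.0                                            # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
ecg = np.zeros_like(t)
beats = (np.arange(12) * 0.8 * fs + 50).astype(int)   # synthetic R-peak positions
ecg[beats] = 1.0
ecg = np.convolve(ecg, np.hanning(20), mode="same")   # widen spikes into QRS-like bumps

coef, _ = pywt.cwt(ecg, scales=[10], wavelet="mexh")  # one illustrative scale
energy = coef[0] ** 2                                 # CWT coefficient energy
peaks, _ = find_peaks(energy, height=0.5 * energy.max(),
                      distance=int(0.3 * fs))         # assumed refractory period
print(len(peaks), "beats detected")
```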
First Results of an “Artificial Retina” Processor Prototype
Cenci, Riccardo; Bedeschi, Franco; Marino, Pietro; ...
2016-11-15
We report on the performance of a specialized processor capable of reconstructing charged-particle tracks in a realistic LHC silicon tracker detector, at the same speed as the readout and with sub-microsecond latency. The processor is based on an innovative pattern-recognition algorithm, called the "artificial retina algorithm", inspired by the vision system of mammals. A prototype of the processor has been designed, simulated, and implemented on Tel62 boards equipped with high-bandwidth Altera Stratix III FPGA devices. The prototype is the first step towards a real-time track-reconstruction device aimed at processing complex events of high-luminosity LHC experiments at a 40 MHz crossing rate.
Aerospace Applications Conference, Steamboat Springs, CO, Feb. 1-8, 1986, Digest
NASA Astrophysics Data System (ADS)
The present conference considers topics concerning the projected NASA Space Station's systems, digital signal and data processing applications, and space science and microwave applications. Attention is given to Space Station video and audio subsystems design, clock error, jitter, phase error and differential time-of-arrival in satellite communications, automation and robotics in space applications, target insertion into synthetic background scenes, and a novel scheme for the computation of the discrete Fourier transform on a systolic processor. Also discussed are a novel signal parameter measurement system employing digital signal processing, EEPROMS for spacecraft applications, a unique concurrent processor architecture for high speed simulation of dynamic systems, a dual polarization flat plate antenna, Fresnel diffraction, and ultralinear TWTs for high efficiency satellite communications.
ERIC Educational Resources Information Center
Department of Education, Washington, DC.
This booklet, written in Spanish, is intended to be used with a set of slides as part of a presentation to students on "How To Apply for Federal Student Aid" ("Como Solicitar la Asistencia Economica Federal para Estudiantes"). The first part of the book is a script based on the slides. After the script is a guide to hosting a financial aid…
NASA Astrophysics Data System (ADS)
Memon, Shahbaz; Vallot, Dorothée; Zwinger, Thomas; Neukirchen, Helmut
2017-04-01
Scientific communities generate complex simulations through the orchestration of semi-structured analysis pipelines, which involves executing large workflows on multiple, distributed, and heterogeneous computing and data resources. Modeling the ice dynamics of glaciers requires workflows consisting of many non-trivial, computationally expensive processing tasks that are coupled to each other. From this domain we present an e-Science use case: a workflow that requires the execution of a continuum ice-flow model and a discrete-element-based calving model in an iterative manner. Besides the model runs themselves, this workflow also contains data-format conversion tasks that link the ice-flow and calving steps through sequential, nested, and iterative stages. Managing and monitoring all of the processing tasks, including data handling and transfer, therefore becomes complex. From the implementation perspective, this workflow was initially developed as a set of scripts using static data input and output references. As more scripts and modifications were introduced to meet user requirements, debugging and validating results became increasingly cumbersome. To address these problems, we identified the need for a high-level scientific workflow tool through which all of the above processes can be handled in an efficient and usable manner. We decided to use the e-Science middleware UNICORE (Uniform Interface to Computing Resources), which allows seamless and automated access to different heterogeneous and distributed resources and is supported by a scientific workflow engine. Based on this, we developed a high-level scientific workflow model for coupling massively parallel High-Performance Computing (HPC) jobs: a continuum ice sheet model (Elmer/Ice) and a discrete element calving and crevassing model (HiDEM); a minimal sketch of such a coupling loop follows below. In our talk we present how the use of high-level scientific workflow middleware makes the reproducibility of results more convenient and also provides a reusable and portable workflow template that can be deployed across different computing infrastructures. Acknowledgements: This work was kindly supported by NordForsk as part of the Nordic Center of Excellence (NCoE) eSTICC (eScience Tools for Investigating Climate Change at High Northern Latitudes) and the Top-level Research Initiative NCoE SVALI (Stability and Variation of Arctic Land Ice).
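The following is a minimal sketch of the iterative coupling that the workflow automates: run the continuum flow model, convert its output, run the calving model, convert back, and repeat. The command names, file names, and cycle count are hypothetical stand-ins for the actual Elmer/Ice and HiDEM invocations that UNICORE manages.

```python
# Minimal sketch of the flow/calving coupling loop; all commands are hypothetical.
import subprocess

def run(cmd):
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)                # fail fast so errors surface

for cycle in range(5):                             # number of couplings, assumed
    run(["ElmerSolver", f"flow_{cycle}.sif"])      # continuum ice-flow step
    run(["flow2hidem", f"flow_{cycle}.result",     # data-format conversion
         f"geom_{cycle}.dat"])
    run(["HiDEM", f"geom_{cycle}.dat"])            # discrete-element calving step
    run(["hidem2flow", f"calved_{cycle}.dat",      # convert back for next cycle
         f"flow_{cycle + 1}.sif"])
```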
2010-12-01
Development of an E-Prime based computer simulation of an interactive Human Rights Violation negotiation script (DRDC Toronto No. CR 2010-055). This report describes the method of developing an E-Prime computer simulation of an interactive Human Rights Violation (HRV) negotiation at Canadian Forces Base (CFB) Kingston. The computer simulation developed in this project is intended to be used for future research and as a possible training platform.
The 28-entity IGES test file results using ComputerVision CADDS 4X
NASA Technical Reports Server (NTRS)
Kuan, Anchyi; Shah, Saurin; Smith, Kevin
1987-01-01
The investigation was based on the following steps: (1) Read the 28 Entity IGES (Initial Graphics Exchange Specification) Test File into the CAD data base with the IGES post-processor; (2) Make the modifications to the displayed geometries, which should produce the normalized front view and the drawing entity defined display; (3) Produce the drawing entity defined display of the file as it appears in the CAD system after modification to the geometry; (4) Translate the file back to IGES format using the IGES pre-processor; (5) Read the IGES file produced by the pre-processor back into the CAD data base; (6) Produce another drawing entity defined display of the CAD display; and (7) Compare the plots resulting from steps 3 and 6: they should be identical.
FPGA wavelet processor design using language for instruction-set architectures (LISA)
NASA Astrophysics Data System (ADS)
Meyer-Bäse, Uwe; Vera, Alonzo; Rao, Suhasini; Lenk, Karl; Pattichis, Marios
2007-04-01
The design of a microprocessor is a long, tedious, and error-prone task, typically consisting of architecture exploration, software design (assembler, linker, loader, profiler), architecture implementation (RTL generation for FPGA or cell-based ASIC), and verification. The Language for Instruction-Set Architectures (LISA) allows a microprocessor to be modeled not only from the instruction set but also from the architecture description, including pipelining behavior, which provides design- and development-tool consistency across all levels of the design. To explore the capability of the LISA processor design platform, a.k.a. CoWare Processor Designer, we present in this paper three microprocessor designs that implement an 8/8 wavelet transform processor of the type used in today's FBI fingerprint compression scheme. We have designed a 3-stage pipelined 16-bit RISC processor (NanoBlaze). Although RISC μPs are usually considered "fast" processors due to design concepts like constant instruction word size, deep pipelines, and many general-purpose registers, it turns out that DSP operations consume substantial processing time in a RISC processor. In a second step we used design principles from programmable digital signal processors (PDSPs) to improve the throughput of the DWT processor. A multiply-accumulate operation, along with indirect addressing, was the key to achieving higher throughput. A further improvement is possible with today's FPGA technology. Today's FPGAs offer a large number of embedded array multipliers, and it is now feasible to design a "true" vector processor (TVP). A multiplication of two vectors can be done in just one clock cycle with our TVP, and a complete scalar product in two clock cycles. Code profiling and Xilinx FPGA ISE synthesis results are provided that demonstrate the substantial improvement of a TVP over traditional RISC or PDSP designs.
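To see why a multiply-accumulate (MAC) instruction dominates such a workload, consider one analysis filter-bank level of a DWT written with its inner loop spelled out: each output sample is a chain of MAC operations. The 4-tap filters below are illustrative placeholders, not the 8/8 filter pair referenced in the paper.

```python
# Minimal sketch of a DWT analysis step whose inner loop is a MAC chain.
import numpy as np

lo = np.array([0.25, 0.5, 0.5, 0.25])    # assumed low-pass taps (illustrative)
hi = np.array([-0.25, 0.5, -0.5, 0.25])  # assumed high-pass taps (illustrative)

def dwt_level(x, h):
    y = np.zeros(len(x) // 2)
    for n in range(len(y)):              # one output per two input samples
        acc = 0.0
        for k in range(len(h)):          # the MAC chain a PDSP executes natively
            acc += h[k] * x[(2 * n + k) % len(x)]   # periodic boundary extension
        y[n] = acc
    return y

x = np.sin(np.linspace(0, 8 * np.pi, 64))
approx, detail = dwt_level(x, lo), dwt_level(x, hi)
```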
A VME-based software trigger system using UNIX processors
NASA Astrophysics Data System (ADS)
Atmur, Robert; Connor, David F.; Molzon, William
1997-02-01
We have constructed a distributed computing platform with eight processors to assemble and filter data from digitization crates. The filtered data were transported to a tape-writing UNIX computer via ethernet. Each processor ran a UNIX operating system and was installed in its own VME crate. Each VME crate contained dual-port memories which interfaced with the digitizers. Using standard hardware and software (VME and UNIX) allows us to select from a wide variety of non-proprietary products and makes upgrades simpler, if they are necessary.
Awan, Omer Abdulrehman; van Wagenberg, Frans; Daly, Mark; Safdar, Nabile; Nagy, Paul
2011-04-01
Many radiology information systems (RIS) cannot accept a final report from a dictation reporting system before the exam has been completed in the RIS by a technologist. A radiologist can still render a report in a reporting system once images are available, but the RIS and ancillary systems may not receive the results because of the study's uncompleted status. This delay in completing studies caused an alarming number of delayed reports and went undetected by conventional RIS reporting techniques. We developed a Web-based reporting tool to monitor uncompleted exams and automatically page section supervisors when a report was being delayed by its incomplete status in the RIS. Institutional Review Board exemption was obtained. At four imaging centers, a Python script was developed to poll the dictation system every 10 min for exams in five different modalities that were signed by the radiologist but could not be sent to the RIS. This script logged the exams into an existing Web-based tracking tool using PHP and a MySQL database. The script also text-paged the modality supervisor. The script logged the time at which the report was finally sent, and statistics were aggregated onto a separate Web-based reporting tool. Over a 1-year period, the average number of uncompleted exams per month and the time to problem resolution decreased at every imaging center and in almost every imaging modality. Automated feedback provides a vital link in improving technologist performance and patient care without assigning a human resource to manage report queues.
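A minimal sketch of such a polling loop follows. The dictation-system query and the paging gateway are hypothetical stand-ins (the paper does not describe its interfaces); only the 10-minute polling interval comes from the abstract, and logging to the PHP/MySQL tracking tool is left as a comment.

```python
# Minimal sketch: poll for signed-but-unsent reports and page the supervisor.
import time
import smtplib
from email.message import EmailMessage

POLL_SECONDS = 600  # every 10 minutes, per the paper

def signed_but_unsent():
    """Hypothetical query of the dictation system for reports signed by a
    radiologist but blocked by uncompleted RIS status."""
    return []  # stand-in for the real interface

def page_supervisor(exam):
    msg = EmailMessage()
    msg["To"] = "supervisor-pager@example.org"    # assumed email-to-pager gateway
    msg["From"] = "ris-monitor@example.org"
    msg["Subject"] = f"Delayed report: exam {exam['id']}"
    with smtplib.SMTP("localhost") as s:
        s.send_message(msg)

if __name__ == "__main__":
    while True:
        for exam in signed_but_unsent():
            page_supervisor(exam)   # logging to the MySQL tracker would go here
        time.sleep(POLL_SECONDS)
```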
The science of computing - Parallel computation
NASA Technical Reports Server (NTRS)
Denning, P. J.
1985-01-01
Although parallel computation architectures have been known for computers since the 1920s, it was only in the 1970s that microelectronic components technologies advanced to the point where it became feasible to incorporate multiple processors in one machine. Concomitantly, the development of algorithms for parallel processing also lagged due to hardware limitations. The speed of computing with solid-state chips is limited by gate switching delays. The physical limit implies that a 1 Gflop operational speed is the maximum for sequential processors. A computer recently introduced features a 'hypercube' architecture with 128 processors connected in networks at 5, 6 or 7 points per grid, depending on the design choice. Its computing speed rivals that of supercomputers, but at a fraction of the cost. The added speed with less hardware is due to parallel processing, which utilizes algorithms representing different parts of an equation that can be broken into simpler statements and processed simultaneously. Present, highly developed computer languages like FORTRAN, PASCAL, COBOL, etc., rely on sequential instructions. Thus, increased emphasis will now be directed at parallel processing algorithms to exploit the new architectures.
Sen. Cantwell, Maria [D-WA]
2010-12-07
Senate - 12/07/2010 Read twice and referred to the Committee on Commerce, Science, and Transportation.
The Community Multiscale Air Quality (CMAQ) modeling system is a state-of-the-science regional air quality modeling system. The CMAQ modeling system has been primarily developed by the U.S. Environmental Protection Agency, and it has been publicly and freely available for more...
Documentation of 50% water conservation in a single process at a beef abattoir. Meat Science
USDA-ARS?s Scientific Manuscript database
Beef slaughter is water intensive due to stringent food safety requirements. We conducted a study at a commercial beef processor to demonstrate water conservation by modifying the mechanical head wash. We documented the initial nozzle configuration (112 nozzles), water pressure (275 kPa), and flowra...
The Mechanics of CSCL Macro Scripts
ERIC Educational Resources Information Center
Dillenbourg, Pierre; Hong, Fabrice
2008-01-01
Macro scripts structure collaborative learning and foster the emergence of knowledge-productive interactions such as argumentation, explanations and mutual regulation. We propose a pedagogical model for the designing of scripts and illustrate this model using three scripts. In brief, a script disturbs the natural convergence of a team and in doing…
Script Reforms--Are They Necessary?
ERIC Educational Resources Information Center
James, Gregory
Script reform, the modification of an existing writing system, is often confused with script replacement, the substitution of one writing system for another. Turkish underwent the replacement of Arabic script by an adaptation of Roman script under Kemal Ataturk, but a similar replacement in Persian was rejected because of the high rate of existing literacy in…
Evaluation of Algorithms for Compressing Hyperspectral Data
NASA Technical Reports Server (NTRS)
Cook, Sid; Harsanyi, Joseph; Faber, Vance
2003-01-01
With EO-1 Hyperion in orbit, NASA is showing its continued commitment to hyperspectral imaging (HSI). As HSI sensor technology continues to mature, the ever-increasing amounts of sensor data generated will result in a need for more cost-effective communication and data-handling systems. Lockheed Martin, with considerable experience in spacecraft design and developing special-purpose onboard processors, has teamed with Applied Signal & Image Technology (ASIT), which has an extensive heritage in HSI spectral compression, and with Mapping Science (MSI), which contributes JPEG 2000 spatial compression expertise, to develop a real-time and intelligent onboard processing (OBP) system to reduce HSI sensor downlink requirements. Our goal is to reduce the downlink requirement by a factor > 100, while retaining the necessary spectral and spatial fidelity of the sensor data needed to satisfy the many science, military, and intelligence goals of these systems. Our compression algorithms leverage commercial-off-the-shelf (COTS) spectral and spatial exploitation algorithms. We are currently evaluating these compression algorithms using statistical analysis and assessments by NASA scientists. We are also developing special-purpose processors for executing these algorithms onboard a spacecraft.
Page segmentation using script identification vectors: A first look
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hochberg, J.; Cannon, M.; Kelly, P.
1997-07-01
Document images in which different scripts, such as Chinese and Roman, appear on a single page pose a problem for optical character recognition (OCR) systems. This paper explores the use of script identification vectors in the analysis of multilingual document images. A script identification vector is calculated for each connected component in a document. The vector expresses the closest distance between the component and templates developed for each of thirteen scripts, including Arabic, Chinese, Cyrillic, and Roman. The authors calculate the first three principal components within the resulting thirteen-dimensional space for each image. By mapping these components to red, green, and blue, they can visualize the information contained in the script identification vectors. The visualization of several multilingual images suggests that the script identification vectors can be used to segment images into script-specific regions as large as several paragraphs or as small as a few characters. The visualized vectors also reveal distinctions within scripts, such as font in Roman documents, and kanji vs. kana in Japanese. Results are best for documents containing highly dissimilar scripts such as Roman and Japanese. Documents containing similar scripts, such as Roman and Cyrillic, will require further investigation.
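The projection-to-color mapping described here can be sketched directly: project the 13-dimensional script-identification vectors onto their first three principal components and rescale each component into an RGB channel. The sketch below uses random placeholder vectors rather than real per-component measurements.

```python
# Minimal sketch: map 13-D script-identification vectors to RGB via PCA.
import numpy as np

vecs = np.random.rand(500, 13)             # placeholder: one vector per component

centered = vecs - vecs.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
proj = centered @ vt[:3].T                 # first three principal components

# Rescale each principal component to [0, 1] so it can serve as an RGB channel.
rgb = (proj - proj.min(axis=0)) / np.ptp(proj, axis=0)
```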
Ahmed, Shabbir; Papadias, Dionissios D.; Lee, Sheldon H. D.; Ahluwalia, Rajesh K.
2013-01-08
The invention provides a fuel processor comprising a linear flow structure having an upstream portion and a downstream portion; a first catalyst supported at the upstream portion; and a second catalyst supported at the downstream portion, wherein the first catalyst is in fluid communication with the second catalyst. Also provided is a method for reforming fuel, the method comprising contacting the fuel to an oxidation catalyst so as to partially oxidize the fuel and generate heat; warming incoming fuel with the heat while simultaneously warming a reforming catalyst with the heat; and reacting the partially oxidized fuel with steam using the reforming catalyst.
Conversion of the Aeronautics Interactive Workstation
NASA Technical Reports Server (NTRS)
Riveras, Nykkita L.
2004-01-01
This summer I am working in the Educational Programs Office. My task is to convert the Aeronautics Interactive Workstation from a Macintosh (Mac) platform to a Personal Computer (PC) platform. The Aeronautics Interactive Workstation is a workstation in the Aerospace Educational Laboratory (AEL), which is one of the three components of the Science, Engineering, Mathematics, and Aerospace Academy (SEMAA). The AEL is a state-of-the-art, electronically enhanced, computerized classroom that puts cutting-edge technology at the fingertips of participating students. It provides a unique learning experience regarding aerospace technology that features activities equipped with aerospace hardware and software that model real-world challenges. The Aeronautics Interactive Workstation, in particular, offers a variety of activities pertaining to the history of aeronautics. When the Aeronautics Interactive Workstation was first implemented in the AEL it was designed with Macromedia Director 4 for a Mac. Today it is being converted to Macromedia Director MX 2004 for a PC. Macromedia Director is the proven multimedia tool for building rich content and applications for CDs, DVDs, kiosks, and the Internet. It handles the widest variety of media and offers powerful features for building rich content that delivers real results, integrating interactive audio, video, bitmaps, vectors, text, fonts, and more. Macromedia Director currently offers two programming/scripting languages: Lingo, which is Director's own programming/scripting language, and JavaScript. In the workstation, Lingo is used for the programming/scripting since it was the only language available when the workstation was created. Since the workstation was created with an older version of Macromedia Director, it hosted significantly different programming/scripting protocols. In order to successfully accomplish my task, the final product required correction of Xtra and programming/scripting errors. I also had to convert the Mac platform file extensions into compatible file extensions for a PC.
Bohn, Annette; Habermas, Tilmann
2016-01-01
This study examines predictions from two theories on the organisation of autobiographical memory: Cultural Life Script Theory, which conceptualises the organisation of autobiographical memory by cultural schemata, and Transition Theory, which proposes that people organise their memories in relation to personal events that changed the fabric of their daily lives, or in relation to negative collective public transitions, called the Living-in-History effect. Predictions from both theories were tested in forty-eight older Germans from Berlin and Northern Germany. We tested whether the Living-in-History effect exists for both negative (the Second World War) and positive (the Fall of the Berlin Wall) collectively experienced events, and whether cultural life script events serve as a prominent strategy to date personal memories. Results showed a powerful, long-lasting Living-in-History effect for the negative, but not the positive, event. Berlin participants dated 26% of their memories in relation to the Second World War. Supporting Cultural Life Script Theory, life script events were frequently used to date personal memories. This provides evidence that people use a combination of culturally transmitted knowledge and knowledge based on personal experience to navigate through their autobiographical memories, and that experiencing war has a lasting impact on the organisation of autobiographical memories across the life span.
NASA Astrophysics Data System (ADS)
Yang, Mei; Jiao, Fengjun; Li, Shulian; Li, Hengqiang; Chen, Guangwen
2015-08-01
A self-sustained, complete, and miniaturized methanol fuel processor has been developed based on modular integration and microreactor technology. The fuel processor comprises one methanol oxidative reformer, one methanol combustor, and one two-stage CO preferential oxidation unit. A microchannel heat exchanger is employed to recover heat from the hot stream, miniaturize the system size, and thus achieve high energy utilization efficiency. With optimized thermal management and proper control of operating parameters, the fuel processor can start up in 10 min at room temperature without external heating. A self-sustained state is achieved with an H₂ production rate of 0.99 Nm³ h⁻¹ and an extremely low CO content below 25 ppm. This amount of H₂ is sufficient to supply a 1 kWe proton exchange membrane fuel cell. The corresponding thermal efficiency of the whole processor is higher than 86%. The size and weight of the assembled reactors integrated with microchannel heat exchangers are 1.4 L and 5.3 kg, respectively, demonstrating a very compact construction of the fuel processor.
A parallel algorithm for computing the eigenvalues of a symmetric tridiagonal matrix
NASA Technical Reports Server (NTRS)
Swarztrauber, Paul N.
1993-01-01
A parallel algorithm, called polysection, is presented for computing the eigenvalues of a symmetric tridiagonal matrix. The method is based on a quadratic recurrence in which the characteristic polynomial is constructed on a binary tree from polynomials whose degree doubles at each level. Intervals that contain exactly one zero are determined by the zeros of polynomials at the previous level, which ensures that different processors compute different zeros. The signs of the polynomials at the interval endpoints are determined a priori and used to guarantee that all zeros are found. The use of finite-precision arithmetic may result in multiple zeros; however, in this case the intervals coalesce and their number determines exactly the multiplicity of the zero. For an N × N matrix the eigenvalues can be determined in O(log² N) time with N² processors and O(N) time with N processors. The method is compared with a parallel variant of bisection that requires O(N²) time on a single processor, O(N) time with N processors, and O(log N) time with N² processors.
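The interval search that both bisection and polysection rely on rests on a standard primitive: a Sturm-sequence count of how many eigenvalues of the tridiagonal matrix lie below a point x. The sketch below is generic bisection built on that count, not the paper's polysection algorithm; the example matrix is illustrative.

```python
# Minimal sketch: Sturm-count bisection for a symmetric tridiagonal matrix.
import numpy as np

def count_below(d, e, x):
    """Number of eigenvalues of tridiag(diag=d, offdiag=e) less than x."""
    count, q = 0, 1.0
    for i in range(len(d)):
        b2 = e[i - 1] ** 2 if i > 0 else 0.0
        q = d[i] - x - b2 / q           # Sturm-sequence recurrence
        if q == 0.0:
            q = -1e-300                 # tiny perturbation avoids division by zero
        if q < 0:
            count += 1
    return count

def bisect_eigenvalue(d, e, k, lo, hi, tol=1e-12):
    """k-th smallest eigenvalue, assuming it lies in (lo, hi)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_below(d, e, mid) > k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

d = np.array([2.0, 2.0, 2.0, 2.0])      # diag of the 1-D Laplacian, illustrative
e = np.array([-1.0, -1.0, -1.0])
print(bisect_eigenvalue(d, e, 0, 0.0, 4.0))   # smallest eigenvalue, about 0.382
```

Polysection assigns different processors to different intervals known to hold exactly one zero, so each runs a search like this independently.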
Geospace simulations using modern accelerator processor technology
NASA Astrophysics Data System (ADS)
Germaschewski, K.; Raeder, J.; Larson, D. J.
2009-12-01
OpenGGCM (Open Geospace General Circulation Model) is a well-established numerical code simulating the Earth's space environment. The most computing-intensive part is the MHD (magnetohydrodynamics) solver, which models the plasma surrounding Earth and its interaction with Earth's magnetic field and the solar wind flowing in from the sun. Like other global magnetosphere codes, OpenGGCM's realism is currently limited by computational constraints on grid resolution. OpenGGCM has been ported to make use of the added computational power of modern accelerator-based processor architectures, in particular the Cell processor. The Cell architecture is a novel inhomogeneous multicore architecture capable of achieving up to 230 GFlops on a single chip. The University of New Hampshire recently acquired a PowerXCell 8i based computing cluster, and here we report initial performance results of OpenGGCM. Realizing the high theoretical performance of the Cell processor is a programming challenge, though. We implemented the MHD solver using a multi-level parallelization approach: on the coarsest level, the problem is distributed to processors based upon the usual domain decomposition approach. Then, on each processor, the problem is divided into 3D columns, each of which is handled by the memory-limited SPEs (synergistic processing elements) slice by slice. Finally, SIMD instructions are used to fully exploit the SIMD FPUs in each SPE. Memory management needs to be handled explicitly by the code, using DMA to move data from main memory to the per-SPE local store and vice versa. We use a modern technique, automatic code generation, which shields the application programmer from having to deal with all of the implementation details just described, keeping the code much easier to maintain. Our preliminary results indicate excellent performance, a speed-up of a factor of 30 compared to the unoptimized version.
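The column-and-slice decomposition described above can be sketched in plain serial form: one rank's block of the 3D grid is cut into columns (one per SPE in the real code), and each column is updated slice by slice so the working set stays small, mimicking the limited SPE local store. The grid size, column footprint, and update function below are placeholders, not the MHD solver.

```python
# Minimal sketch of the multi-level decomposition: columns, then slices.
import numpy as np

grid = np.random.rand(64, 64, 64)           # this rank's block of the domain
COL = 16                                    # column footprint, assumed

def update_slice(s):
    return 0.5 * s                          # stand-in for the real MHD update

for i0 in range(0, grid.shape[0], COL):     # loop over 3D columns
    for j0 in range(0, grid.shape[1], COL):
        column = grid[i0:i0 + COL, j0:j0 + COL, :]   # a view, updated in place
        for k in range(column.shape[2]):    # slice by slice, like SPE DMA
            column[:, :, k] = update_slice(column[:, :, k])
```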
Trainable multiscript orientation detection
NASA Astrophysics Data System (ADS)
Van Beusekom, Joost; Rangoni, Yves; Breuel, Thomas M.
2010-01-01
Detecting the correct orientation of document images is an important step in large-scale digitization processes, as most subsequent document analysis and optical character recognition methods assume an upright document page. Many methods have been proposed to solve this problem, most of which are based on computing ascender-to-descender ratios. Unfortunately, this cannot be used for scripts that have neither ascenders nor descenders. Therefore, we present a trainable method that uses character similarity to compute the correct orientation. A connected-component-based distance measure is computed to compare the characters of the document image to characters whose orientation is known. The orientation for which the distance is lowest is then detected as the correct orientation. Training is easily achieved by replacing the reference characters with characters of the script to be analyzed. Evaluation of the proposed approach showed an accuracy above 99% for Latin and Japanese script from the public UW-III and UW-II datasets. An accuracy of 98.9% was obtained for Fraktur on a non-public dataset. Comparison of the proposed method to two methods using ascender/descender-ratio-based orientation detection shows a significant improvement.
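A minimal sketch of this orientation vote follows, assuming the paper's connected-component distance measure can be stood in for by a plain bitmap distance on size-normalized components; references would be components extracted from pages whose orientation is known.

```python
# Minimal sketch: pick the rotation whose components best match upright references.
import numpy as np
from scipy import ndimage

def component_bitmaps(page, size=16):
    """Extract connected components and scale each to a size x size bitmap."""
    labels, _ = ndimage.label(page > 0)
    comps = []
    for sl in ndimage.find_objects(labels):
        comp = (page[sl] > 0).astype(float)
        zoom = (size / comp.shape[0], size / comp.shape[1])
        comps.append(ndimage.zoom(comp, zoom, order=0)[:size, :size])
    return comps

def detect_orientation(page, references):
    """Return 0, 90, 180, or 270: the rotation with the lowest mean distance
    between the page's components and their nearest reference character."""
    scores = []
    for rot in range(4):                    # candidate rotations in 90° steps
        comps = component_bitmaps(np.rot90(page, rot))
        dist = sum(min(np.abs(c - r).sum() for r in references) for c in comps)
        scores.append(dist / max(len(comps), 1))
    return scores.index(min(scores)) * 90

# Usage: references = component_bitmaps(upright_training_page)
#        angle = detect_orientation(scanned_page, references)
```

Retraining for a new script, as the abstract notes, amounts to swapping in a new reference set; nothing in the voting loop changes.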