A Simple, Scalable, Script-based Science Processor
NASA Technical Reports Server (NTRS)
Lynnes, Christopher
2004-01-01
The production of Earth Science data from orbiting spacecraft is an activity that takes place 24 hours a day, 7 days a week. At the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC), this results in as many as 16,000 program executions each day, far too many to be run by human operators. In fact, when the Moderate Resolution Imaging Spectroradiometer (MODIS) was launched aboard the Terra spacecraft in 1999, the automated commercial system for running science processing was able to manage no more than 4,000 executions per day. Consequently, the GES DAAC developed a lightweight system based on the popular Perl scripting language, named the Simple, Scalable, Script-based Science Processor (S4P). S4P automates science processing, allowing operators to focus on the rare problems arising from anomalies in data or algorithms. S4P has been reused in several systems, ranging from routine processing of MODIS data to data mining, and is publicly available from NASA.
SeqPig: simple and scalable scripting for large sequencing data sets in Hadoop
Schumacher, André; Pireddu, Luca; Niemenmaa, Matti; Kallio, Aleksi; Korpelainen, Eija; Zanetti, Gianluigi; Heljanko, Keijo
2014-01-01
Summary: Hadoop MapReduce-based approaches have become increasingly popular due to their scalability in processing large sequencing datasets. However, as these methods typically require in-depth expertise in Hadoop and Java, they are still out of reach of many bioinformaticians. To solve this problem, we have created SeqPig, a library and a collection of tools to manipulate, analyze and query sequencing datasets in a scalable and simple manner. SeqPig scripts use the Hadoop-based distributed scripting engine Apache Pig, which automatically parallelizes and distributes data processing tasks. We demonstrate SeqPig's scalability over many computing nodes and illustrate its use with example scripts. Availability and Implementation: Available under the open source MIT license at http://sourceforge.net/projects/seqpig/ Contact: andre.schumacher@yahoo.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24149054
Simple, Script-Based Science Processing Archive
NASA Technical Reports Server (NTRS)
Lynnes, Christopher; Hegde, Mahabaleshwara; Barth, C. Wrandle
2007-01-01
The Simple, Scalable, Script-based Science Processing (S4P) Archive (S4PA) is a disk-based archival system for remote sensing data. It is based on the data-driven framework of S4P and is used for data transfer, data preprocessing, metadata generation, data archive, and data distribution. New data are automatically detected by the system. S4PA provides services such as data access control, data subscription, metadata publication, data replication, and data recovery. It comprises scripts that control the data flow. The system detects the availability of data on an FTP (file transfer protocol) server, initiates data transfer, preprocesses data if necessary, and archives it on readily available disk drives with FTP and HTTP (Hypertext Transfer Protocol) access, allowing instantaneous data access. There are options for plug-ins for data preprocessing before storage. Publication of metadata to external applications such as the Earth Observing System Clearinghouse (ECHO) is also supported. S4PA includes a graphical user interface for monitoring the system operation and a tool for deploying the system. To ensure reliability, S4PA continuously checks stored data for integrity. Further reliability is provided by tape backups of disks, made once a disk partition is full and closed. The system is designed for low maintenance, requiring minimal operator oversight.
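The detect-transfer-archive loop described above can be pictured in a few lines. Below is a minimal Python sketch of that polling idea, assuming a hypothetical FTP host and directory layout and omitting preprocessing, checksums, and metadata publication; the actual S4PA is a Perl-based system with far more bookkeeping.

```python
# Minimal sketch of S4PA-style FTP polling (hypothetical host and paths).
from ftplib import FTP
from pathlib import Path

SEEN = set()  # granules already transferred; a real system would persist this

def poll_once(host="ftp.example.gov", remote_dir="/incoming",
              archive=Path("archive")):
    """Detect new files on the FTP server and stage them to the disk archive."""
    with FTP(host) as ftp:
        ftp.login()                      # anonymous login
        ftp.cwd(remote_dir)
        for name in ftp.nlst():
            if name in SEEN:
                continue
            with open(archive / name, "wb") as fh:
                ftp.retrbinary(f"RETR {name}", fh.write)
            SEEN.add(name)
            # preprocessing plug-ins and metadata publication would hook in here
```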
Simple, Scalable, Script-Based Science Processor (S4P)
NASA Technical Reports Server (NTRS)
Lynnes, Christopher; Vollmer, Bruce; Berrick, Stephen; Mack, Robert; Pham, Long; Zhou, Bryan; Wharton, Stephen W. (Technical Monitor)
2001-01-01
The development and deployment of data processing systems to process Earth Observing System (EOS) data has proven to be costly and prone to technical and schedule risk. Integration of science algorithms into a robust operational system has been difficult. The core processing system, based on commercial tools, has demonstrated limitations at the rates needed to produce the several terabytes per day for EOS, primarily due to job management overhead. This has motivated an evolution in the EOS Data Information System toward a more distributed one incorporating Science Investigator-led Processing Systems (SIPS). As part of this evolution, the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC) has developed a simplified processing system to accommodate the increased load expected with the advent of reprocessing and launch of a second satellite. This system, the Simple, Scalable, Script-based Science Processor (S4P), may also serve as a resource for future SIPS. The current EOSDIS Core System was designed to be general, resulting in a large, complex mix of commercial and custom software. In contrast, many simpler systems, such as the EROS Data Center AVHRR 1KM system, rely on a simple directory structure to drive processing, with directories representing different stages of production. The system passes input data to a directory, and the output data is placed in a "downstream" directory. The GES DAAC's Simple, Scalable, Script-based Science Processor is based on the latter concept, but with modifications to allow varied science algorithms and improve portability. It uses a factory assembly-line paradigm: when work orders arrive at a station, an executable is run, and output work orders are sent to downstream stations. The stations are implemented as UNIX directories, while work orders are simple ASCII files. The core S4P infrastructure consists of a Perl program called stationmaster, which detects newly arrived work orders and forks a job to run the appropriate executable (registered in a configuration file for that station). Although S4P is written in Perl, the executables associated with a station can be any program that can be run from the command line, i.e., non-interactively. An S4P instance is typically monitored using a simple graphical user interface. However, the reliance of S4P on UNIX files and directories also allows visibility into the state of stations and jobs using standard operating system commands, permitting remote monitoring and control over low-bandwidth connections. S4P is being used as the foundation for several small- to medium-size systems for data mining, on-demand subsetting, processing of direct broadcast Moderate Resolution Imaging Spectroradiometer (MODIS) data, and quick-response MODIS processing. It has also been used to implement a large-scale system to process MODIS Level 1 and Level 2 Standard Products, which will ultimately process close to 2 TB/day.
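The station concept is compact enough to sketch. The following Python toy mimics a single station under stated assumptions — the directory names, work-order pattern, and handler executable are hypothetical, and the real stationmaster is a Perl program with forking, logging, and failure handling.

```python
# Conceptual sketch of an S4P-style station; names are hypothetical.
import shutil
import subprocess
import time
from pathlib import Path

STATION = Path("stations/level1_to_level2")   # a station is a UNIX directory
DOWNSTREAM = Path("stations/archive")         # next station in the assembly line
HANDLER = "./run_algorithm.sh"                # any non-interactive executable

def run_station(interval=5.0):
    """Detect newly arrived work orders (plain ASCII files) and run the handler."""
    while True:
        for order in sorted(STATION.glob("DO.*")):
            result = subprocess.run([HANDLER, str(order)])
            if result.returncode == 0:
                # pass the output work order to the downstream station
                shutil.move(str(order), str(DOWNSTREAM / order.name))
        time.sleep(interval)

if __name__ == "__main__":
    run_station()
```

Because the state of every job is just a file in a directory, standard commands like ls suffice to monitor a station remotely, which is the property the abstract highlights.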
Simple, Scalable, Script-based, Science Processor for Measurements - Data Mining Edition (S4PM-DME)
NASA Astrophysics Data System (ADS)
Pham, L. B.; Eng, E. K.; Lynnes, C. S.; Berrick, S. W.; Vollmer, B. E.
2005-12-01
The S4PM-DME is the Goddard Earth Sciences Distributed Active Archive Center's (GES DAAC) web-based data mining environment. The S4PM-DME replaces the Near-line Archive Data Mining (NADM) system with a better web environment and a richer set of production rules. S4PM-DME enables registered users to submit and execute custom data mining algorithms. The S4PM-DME system uses the GES DAAC-developed Simple, Scalable, Script-based Science Processor for Measurements (S4PM) to automate tasks and perform the actual data processing. A web interface allows the user to access the S4PM-DME system. The user first develops a personalized data mining algorithm on his/her home platform and then uploads it to the S4PM-DME system. Algorithms in the C and FORTRAN languages are currently supported. The user-developed algorithm is automatically audited for any potential security problems before it is installed within the S4PM-DME system and made available to the user. Once the algorithm has been installed, the user can promote it to the "operational" environment. From here the user can search and order the data available in the GES DAAC archive for his/her science algorithm. The user can also set up a processing subscription, which will automatically process new data as they become available in the GES DAAC archive. The generated mined data products are then made available for FTP pickup. The benefits of using S4PM-DME are (1) to decrease the time users spend transferring GES DAAC data to their own systems, thus offloading heavy network traffic; (2) to reduce the processing load on their systems; and (3) to take advantage of the rich and abundant ocean and atmosphere data from the MODIS and AIRS instruments available from the GES DAAC.
Harrigan, Robert L; Yvernault, Benjamin C; Boyd, Brian D; Damon, Stephen M; Gibney, Kyla David; Conrad, Benjamin N; Phillips, Nicholas S; Rogers, Baxter P; Gao, Yurui; Landman, Bennett A
2016-01-01
The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has developed a database built on XNAT housing over a quarter of a million scans. The database provides a framework for (1) rapid prototyping, (2) large-scale batch processing of images and (3) scalable project management. The system uses the web-based interfaces of XNAT and REDCap to allow for graphical interaction. A Python middleware layer, the Distributed Automation for XNAT (DAX) package, distributes computation across the Vanderbilt Advanced Computing Center for Research and Education high-performance computing center. All software is made available as open source for use in combining Portable Batch System (PBS) grids and XNAT servers.
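As a rough picture of what such middleware automates, here is a hedged Python sketch that submits one batch job per imaging session to a PBS grid via qsub; the session list, pipeline command, and resource line are hypothetical stand-ins rather than the actual DAX API.

```python
# Hedged sketch of per-session PBS submission (not the real DAX interface).
import subprocess
import textwrap

def submit_pbs_job(session_id: str) -> str:
    """Build a PBS job script for one session and submit it with qsub."""
    script = textwrap.dedent(f"""\
        #!/bin/bash
        #PBS -N proc_{session_id}
        #PBS -l nodes=1:ppn=1,walltime=02:00:00
        run_pipeline --session {session_id}
        """)
    out = subprocess.run(["qsub"], input=script, text=True,
                         capture_output=True, check=True)
    return out.stdout.strip()            # qsub prints the new job id

for session in ["sess001", "sess002"]:   # would come from an XNAT query
    print(submit_pbs_job(session))
```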
NASA Technical Reports Server (NTRS)
Kempler, Steven; Lynnes, Christopher; Vollmer, Bruce; Alcott, Gary; Berrick, Stephen
2009-01-01
Increasingly sophisticated National Aeronautics and Space Administration (NASA) Earth science missions have driven their associated data and data management systems from providing simple point-to-point archiving and retrieval to performing user-responsive distributed multisensor information extraction. To fully maximize the use of remote-sensor-generated Earth science data, NASA recognized the need for data systems that provide data access and manipulation capabilities responsive to research brought forth by advancing scientific analysis, and the need to maximize the use and usability of the data. The decision by NASA to purposely evolve the Earth Observing System Data and Information System (EOSDIS) at the Goddard Space Flight Center (GSFC) Earth Sciences (GES) Data and Information Services Center (DISC) and other information management facilities was timely and appropriate. The GES DISC evolution was focused on replacing the EOSDIS Core System (ECS) by reusing the in-house-developed, disk-based Simple, Scalable, Script-based Science Product Archive (S4PA) data management system and migrating data to the disk archives. The transition was completed in December 2007.
NASA Technical Reports Server (NTRS)
Kempler, Steve; Alcott, Gary; Lynnes, Chris; Leptoukh, Greg; Vollmer, Bruce; Berrick, Steve
2008-01-01
NASA Earth Sciences Division (ESD) has made great investments in the development and maintenance of data management systems and information technologies, to maximize the use of NASA generated Earth science data. With information management system infrastructure in place, mature and operational, very small delta costs are required to fully support data archival, processing, and data support services required by the recommended Decadal Study missions. This presentation describes the services and capabilities of the Goddard Space Flight Center (GSFC) Earth Sciences Data and Information Services Center (GES DISC) and the reusability for these future missions. The GES DISC has developed a series of modular, reusable data management components currently in use. They include data archive and distribution (Simple, Scalable, Script-based, Science [S4] Product Archive aka S4PA), data processing (S4 Processor for Measurements aka S4PM), data search (Mirador), data browse, visualization, and analysis (Giovanni), and data mining services. Information management system components are based on atmospheric scientist inputs. Large development and maintenance cost savings can be realized through their reuse in future missions.
ERIC Educational Resources Information Center
Demetriadis, Stavros; Egerter, Tina; Hanisch, Frank; Fischer, Frank
2011-01-01
This study investigates the effectiveness of using peer review in the context of scripted collaboration to foster both domain-specific and domain-general knowledge acquisition in the computer science domain. Using a one-factor design with a script and a control condition, students worked in small groups on a series of computer science problems…
Catch the A-Train from the NASA GIBS/Worldview Platform
NASA Astrophysics Data System (ADS)
Schmaltz, J. E.; Alarcon, C.; Baynes, K.; Boller, R. A.; Cechini, M. F.; De Cesare, C.; De Luca, A. P.; Gunnoe, T.; King, B. A.; King, J.; Pressley, N. N.; Roberts, J. T.; Rodriguez, J.; Thompson, C. K.; Wong, M. M.
2016-12-01
The satellites and instruments of the Afternoon Constellation (A-Train) are providing an unprecedented combination of nearly simultaneous measurements. One of the challenges for researchers and applications users is to sift through these combinations to find particular sets of data that correspond to their interests. Visualization of the data is one way to explore these combinations. NASA's Worldview tool is designed to do just that: to interactively browse full-resolution satellite imagery. Worldview (https://worldview.earthdata.nasa.gov/) is web-based and developed using open libraries and standards (OpenLayers, JavaScript, CSS, HTML) for cross-platform compatibility. It addresses growing user demands for access to full-resolution imagery by providing a responsive, interactive interface with global coverage and no artificial boundaries. In addition to science data imagery, Worldview provides ancillary datasets such as coastlines and borders, socio-economic layers, and satellite orbit tracks. Worldview interacts with the Earthdata Search Client to provide download of the data files associated with the imagery being viewed. The imagery used by Worldview is provided by NASA's Global Imagery Browse Services (GIBS - https://earthdata.nasa.gov/gibs), which provide highly responsive, highly scalable imagery services. Requests are made via the OGC Web Map Tile Service (WMTS) standard. In addition to Worldview, other clients can be developed using a variety of web-based libraries, desktop and mobile app libraries, and GDAL script-based access. GIBS currently includes more than 106 science data sets from seven instruments aboard three of the A-Train satellites, and new data sets are being added as part of the President's Big Earth Data Initiative (BEDI). Efforts are underway to include new imagery types, such as vectors and curtains, into Worldview/GIBS, which will be used to visualize additional A-Train science parameters.
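Since GIBS tiles are served over the standard WMTS interface, a script can fetch them directly. The Python sketch below follows the REST URL pattern from the GIBS documentation; the layer name, date, and tile indices are illustrative and should be verified against the current docs.

```python
# Fetch one GIBS tile via the WMTS REST pattern (illustrative parameters).
import requests

def fetch_gibs_tile(layer, date, zoom, row, col,
                    matrix_set="250m", ext="jpg"):
    url = ("https://gibs.earthdata.nasa.gov/wmts/epsg4326/best/"
           f"{layer}/default/{date}/{matrix_set}/{zoom}/{row}/{col}.{ext}")
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.content                       # raw image bytes

tile = fetch_gibs_tile("MODIS_Terra_CorrectedReflectance_TrueColor",
                       "2016-08-01", 2, 1, 1)
with open("tile.jpg", "wb") as fh:
    fh.write(tile)
```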
Everware toolkit. Supporting reproducible science and challenge-driven education.
NASA Astrophysics Data System (ADS)
Ustyuzhanin, A.; Head, T.; Babuschkin, I.; Tiunov, A.
2017-10-01
Modern science clearly demands a higher level of reproducibility and collaboration. To make research fully reproducible, one has to take care of several aspects: research protocol description, data access, environment preservation, workflow pipeline, and analysis script preservation. Version control systems like git help with the workflow and analysis script part. Virtualization techniques like Docker or Vagrant can help deal with environments. Jupyter notebooks are a powerful platform for conducting research in a collaborative manner. We present project Everware, which seamlessly integrates git repository management systems such as GitHub or GitLab, Docker and Jupyter, helping to (a) share the results of real research and (b) boost education activities. With the help of Everware, one can share not only the final artifacts of research but all the depth of the research process. This has been shown to be extremely helpful during the organization of several data analysis hackathons and machine learning schools. Using Everware, participants could start from an existing solution instead of starting from scratch, and could begin contributing immediately. Everware allows its users to make use of their own computational resources to run the workflows they are interested in, which leads to higher scalability of the toolkit.
Using Selection Pressure as an Asset to Develop Reusable, Adaptable Software Systems
NASA Astrophysics Data System (ADS)
Berrick, S. W.; Lynnes, C.
2007-12-01
The Goddard Earth Sciences Data and Information Services Center (GES DISC) at NASA has over the years developed and honed a number of reusable architectural components for supporting large-scale data centers with a large customer base. These include a processing system (S4PM) and an archive system (S4PA) based upon a workflow engine called the Simple, Scalable, Script-based Science Processor (S4P); an online data visualization and analysis system (Giovanni); and the radically simple and fast data search tool, Mirador. These subsystems are currently reused internally in a variety of combinations to implement customized data management on behalf of instrument science teams and other science investigators. Some of these subsystems (S4P and S4PM) have also been reused by other data centers for operational science processing. Our experience has been that development and utilization of robust, interoperable, and reusable software systems can actually flourish in environments defined by heterogeneous commodity hardware systems, the emphasis on value-added customer service, and continual cost reduction pressures. The repeated internal reuse that is fostered by such an environment encourages and even forces changes to the software that make it more reusable and adaptable. Allowing and even encouraging such selective pressures to software development has been a key factor in the success of S4P and S4PM, which are now available to the open source community under the NASA Open Source Agreement.
The Departmental Script as an Ongoing Conversation into the Phronesis of Teaching Science as Inquiry
NASA Astrophysics Data System (ADS)
Melville, Wayne; Campbell, Todd; Fazio, Xavier; Bartley, Anthony
2012-12-01
This article investigates the extent to which a science department script supports the teaching and learning of science as inquiry and how this script is translated into individual teachers' classrooms. This study was completed at one school in Canada which, since 2000, has developed a departmental script supportive of teaching and learning of science as inquiry. Through a mixed-method strategy, multiple data sources were drawn together to inform a cohesive narrative about scripts, science departments, and individual classrooms. Results of the study reveal three important findings: (1) the departmental script is not an artefact, but instead is an ongoing conversation into the episteme, techne and phronesis of science teaching; (2) the consistently reformed teaching practices that were observed lead us to believe that a departmental script has the capacity to enhance the teaching of science as inquiry; and, (3) the existence of a departmental script does not mean that teaching will be `standardized' in the bureaucratic sense of the word. Our findings indicate that a departmental script can be considered to concurrently operate as an epistemic script that is translated consistently across the classes, and a social script that was more open to interpretation within individual teachers' classrooms.
Modernizing Earth and Space Science Modeling Workflows in the Big Data Era
NASA Astrophysics Data System (ADS)
Kinter, J. L.; Feigelson, E.; Walker, R. J.; Tino, C.
2017-12-01
Modeling is a major aspect of Earth and space science research. The development of numerical models of the Earth system, planetary systems or astrophysical systems is essential to linking theory with observations. Optimal use of observations that are quite expensive to obtain and maintain typically requires data assimilation that involves numerical models. In the Earth sciences, models of the physical climate system are typically used for data assimilation, climate projection, and inter-disciplinary research, spanning applications from analysis of multi-sensor data sets to decision-making in climate-sensitive sectors with applications to ecosystems, hazards, and various biogeochemical processes. In space physics, most models are from first principles, require considerable expertise to run, and are frequently modified significantly for each case study. The volume and variety of model output data from modeling Earth and space systems are rapidly increasing and have reached a scale where human interaction with data is prohibitively inefficient. A major barrier to progress is that modeling workflows aren't deemed by practitioners to be a design problem. Existing workflows have been created by a slow accretion of software, typically based on undocumented, inflexible scripts haphazardly modified by a succession of scientists and students not trained in modern software engineering methods. As a result, existing modeling workflows suffer from an inability to onboard new datasets into models; an inability to keep pace with accelerating data production rates; and irreproducibility, among other problems. These factors are creating an untenable situation for those conducting and supporting Earth system and space science. Improving modeling workflows requires investments in hardware, software and human resources. This paper describes the critical path issues that must be targeted to accelerate modeling workflows, including script modularization, parallelization, and automation in the near term, and longer-term investments in virtualized environments for improved scalability, tolerance for lossy data compression, novel data-centric memory and storage technologies, and tools for peer reviewing, preserving and sharing workflows, as well as fundamental statistical and machine learning algorithms.
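As a toy illustration of the near-term fixes named here — modularization and parallelization — consider refactoring one step of a monolithic script into a function and mapping it across datasets; the dataset names and the regridding step are hypothetical.

```python
# Toy example: a modularized workflow step, parallelized across datasets.
from concurrent.futures import ProcessPoolExecutor

def regrid(dataset: str) -> str:
    """One well-defined, independently testable workflow step."""
    # ...real work would open the file, regrid, and write a product...
    return f"{dataset}.regridded"

if __name__ == "__main__":
    datasets = ["run01.nc", "run02.nc", "run03.nc"]   # hypothetical model output
    with ProcessPoolExecutor() as pool:
        print(list(pool.map(regrid, datasets)))
```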
Embedding Fonts in MetaPost Output
2016-04-19
by John Hobby, based on Donald Knuth's METAFONT [4], with high-quality PostScript output. An outstanding feature of MetaPost is that, with typeset fonts in … output, the graphics are perfectly scalable to any arbitrary resolution. John Hobby, its author, writes: "[MetaPost] is really a programming language … for generating graphics, especially figures for TeX [5] and troff documents." This quote by Hobby indicates that MetaPost figures are not only …
SPV: a JavaScript Signaling Pathway Visualizer.
Calderone, Alberto; Cesareni, Gianni
2018-03-24
The visualization of molecular interactions annotated in web resources is useful to offer users such information in a clear, intuitive layout. These interactions are frequently represented as binary interactions laid out in free space, where different entities, cellular compartments and interaction types are hardly distinguishable. SPV (Signaling Pathway Visualizer) is a free, open source JavaScript library which offers a series of pre-defined elements, compartments and interaction types meant to facilitate the representation of signaling pathways consisting of causal interactions, without neglecting simple protein-protein interaction networks. Freely available under the Apache version 2 license; Source code: https://github.com/Sinnefa/SPV_Signaling_Pathway_Visualizer_v1.0. Language: JavaScript; Web technology: Scalable Vector Graphics; Libraries: D3.js. sinnefa@gmail.com.
Scalable and expressive medical terminologies.
Mays, E; Weida, R; Dionne, R; Laker, M; White, B; Liang, C; Oles, F J
1996-01-01
The K-Rep system, based on description logic, is used to represent and reason with large and expressive controlled medical terminologies. Expressive concept descriptions incorporate semantically precise definitions composed using logical operators, together with important non-semantic information such as synonyms and codes. Examples are drawn from our experience with K-Rep in modeling the InterMed laboratory terminology and also developing a large clinical terminology now in production use at Kaiser Permanente. System-level scalability of performance is achieved through an object-oriented database system which efficiently maps persistent memory to virtual memory. Equally important is conceptual scalability: the ability to support collaborative development, organization, and visualization of a substantial terminology as it evolves over time. K-Rep addresses this need by logically completing concept definitions and automatically classifying concepts in a taxonomy via subsumption inferences. The K-Rep system includes a general-purpose GUI environment for terminology development and browsing, a custom interface for formulary term maintenance, a C++ application program interface, and a distributed client-server mode which provides lightweight clients with efficient run-time access to K-Rep by means of a scripting language.
Hennrikus, Eileen F; Skolka, Michael P; Hennrikus, Nicholas
2018-01-01
Medical school curriculum continues to search for methods to develop a conceptual educational framework that promotes the storage, retrieval, transfer, and application of basic science to the human experience. To achieve this goal, we propose a metacognitive approach that integrates basic science with the humanistic and health system aspects of medical education. During the week, via problem-based learning and lectures, first-year medical students were taught the basic science underlying a disease. Each Friday, a patient with the disease spoke to the class. Students then wrote illness scripts, which required them to metacognitively reflect not only on disease pathophysiology, complications, and treatments but also on the humanistic and health system issues revealed during the patient encounter. Evaluation of the intervention was conducted by measuring results on course exams and national board exams and analyzing free responses on the illness scripts and student course feedback. The course exams and National Board of Medical Examiners questions were divided into 3 categories: content covered in lecture, problem-based learning, or patient + illness script. Comparisons were made using Student's t-test. Free responses were inductively analyzed using grounded theory methodology. This curricular intervention was implemented during the first 13-week basic science course of medical school. The main objective of the course, Scientific Principles of Medicine, is to lay the scientific foundation for subsequent organ system courses. A total of 150 students were enrolled each year. We evaluated this intervention over 2 years, totaling 300 students. Students scored significantly higher on illness script content compared to lecture content on the course exams (mean difference = 11.1, P = .006) and national board exams given in December (mean difference = 21.8, P = .0002) and June (mean difference = 12.7, P = .016). Themes extracted from students' free responses included the following: relevance of basic science; humanistic themes of empathy, resilience, and the doctor-patient relationship; and systems themes of cost, barriers to care, and support systems. A metacognitive approach to learning through the use of patient encounters and illness script reflections creates stronger conceptual frameworks for students to integrate, store, retain, and retrieve knowledge.
Earth Science Mining Web Services
NASA Astrophysics Data System (ADS)
Pham, L. B.; Lynnes, C. S.; Hegde, M.; Graves, S.; Ramachandran, R.; Maskey, M.; Keiser, K.
2008-12-01
To allow scientists further capabilities in the area of data mining and web services, the Goddard Earth Sciences Data and Information Services Center (GES DISC) and researchers at the University of Alabama in Huntsville (UAH) have developed a system to mine data at the source without the need of network transfers. The system has been constructed by linking together several pre-existing technologies: the Simple Scalable Script-based Science Processor for Measurements (S4PM), a processing engine at the GES DISC; the Algorithm Development and Mining (ADaM) system, a data mining toolkit from UAH that can be configured in a variety of ways to create customized mining processes; ActiveBPEL, a workflow execution engine based on BPEL (Business Process Execution Language); XBaya, a graphical workflow composer; and the EOS Clearinghouse (ECHO). XBaya is used to construct an analysis workflow at UAH using ADaM components, which are also installed remotely at the GES DISC, wrapped as Web Services. The S4PM processing engine searches ECHO for data using space-time criteria and stages the data to cache, allowing the ActiveBPEL engine to remotely orchestrate the processing workflow within S4PM. As mining is completed, the output is placed in an FTP holding area for the end user. The goals are to give users control over the data they want to process, while mining data at the data source using the server's resources rather than transferring the full volume over the internet. These diverse technologies have been infused into a functioning, distributed system with only minor changes to the underlying technologies. The key to this infusion is the loosely coupled, Web-Services-based architecture: all of the participating components are accessible (one way or another) through SOAP (Simple Object Access Protocol)-based Web Services.
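Because every component is exposed as a SOAP service, a thin client can drive the chain with an off-the-shelf SOAP library. The Python sketch below uses the third-party zeep client against a hypothetical WSDL and operation name; the real ADaM/S4PM service contracts are not reproduced here.

```python
# Illustrative SOAP call to a mining service (hypothetical WSDL and operation).
from zeep import Client   # third-party SOAP client: pip install zeep

client = Client("https://example.gov/adam/MiningService?wsdl")  # hypothetical
result = client.service.RunClassifier(dataset="AIRS_L2_subset",
                                      algorithm="kmeans")
print(result)
```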
Teacher Scripts in Science Teaching
ERIC Educational Resources Information Center
Monteiro, Rute; Carrillo, Jose; Aguaded, Santiago
2010-01-01
Awareness of teacher scripts is of crucial importance to reflection on practice, and represents one means of widening the scope of classroom performance. The first part of this work provides a full description of three scripts employed by a novice science teacher within the topic of "The Structure of Flowers", and offers a detailed illustration…
Scripted and Unscripted Science Lessons for Children with Autism and Intellectual Disability.
Knight, Victoria F; Collins, Belva; Spriggs, Amy D; Sartini, Emily; MacDonald, Margaret Janey
2018-02-27
Both scripted lessons and unscripted task analyzed lessons have been used effectively to teach science content to students with intellectual disability and autism spectrum disorder. This study evaluated the efficacy, efficiency, and teacher preference of scripted and unscripted task analyzed lesson plans from an elementary science curriculum designed for students with intellectual disability and autism spectrum disorder by evaluating both lesson formats for (a) student outcomes on a science comprehension assessment, (b) sessions to criterion, and (c) average duration of lessons. Findings suggest both lesson types were equally effective, but unscripted task analyzed versions may be more efficient and were preferred by teachers over scripted lessons. Implications, limitations, and suggestions for future research are also discussed.
2012-11-27
with powerful analysis tools and an informatics approach leveraging best-of-breed NoSQL databases, in order to store, search and retrieve relevant … dictionaries, and JavaScript also has good support. The MongoDB project [15] was chosen as a scalable NoSQL data store for the cheminformatics components
NASA Astrophysics Data System (ADS)
Midland, Susan
Media specialists are increasingly assuming professional development roles as they collaborate with teachers to design instruction that combines content with technology. I am a media specialist in an independent school and collaborated with two science teachers over a three-year period to integrate technology with their instruction. This action study explored the integration of a digital narrative project in three eighth-grade earth science units and one ninth-grade physics unit, with each unit serving as a cycle of research. Students produced short digital documentaries that combined still images with an accompanying narration. Students participating in the project wrote scripts based on selected science topics. The completed scripts served as the basis for the narratives. These projects were compared with a more traditional science writing project. Barriers and facilitators for implementation of this type of media project in a science classroom were identified. Lack of adequate access to computers proved to be a significant mechanical barrier. Acquisition of a laptop cart reduced but did not eliminate the technology access issues. The complexity of the project increased implementation time in comparison with traditional alternatives. Evaluation of the completed media projects presented problems. Scores by outside evaluators reflected evaluator unfamiliarity with assessing multimedia projects rather than student performance. Despite several revisions of the assessment rubric, low inter-rater reliability remained a concern even in the last cycle. This suggests that evaluation of media could present issues for teachers who attempt projects of this kind. A writing frame was developed to facilitate production of scripts. This reduced the time required to produce the scripts, but produced writing that was formulaic in the teacher's estimation. A graphic organizer was adopted in the final cycle to address this concern. New insights emerged as the study progressed through its four cycles. At the conclusion of the study, the two teachers and I had a better understanding of barriers that can prevent smooth integration of a technology-based project.
Deng, Yue; Zenil, Hector; Tegnér, Jesper; Kiani, Narsis A
2017-12-15
The use of ordinary differential equations (ODEs) is one of the most promising approaches to network inference. The success of ODE-based approaches has, however, been limited, due to the difficulty in estimating parameters and by their lack of scalability. Here, we introduce a novel method and pipeline to reverse engineer gene regulatory networks from gene expression time series and perturbation data, based upon an improvement on the calculation scheme of the derivatives and a pre-filtration step to reduce the number of possible links. The method introduces a linear differential equation model with adaptive numerical differentiation that is scalable to extremely large regulatory networks. We demonstrate the ability of this method to outperform current state-of-the-art methods applied to experimental and synthetic data, using test data from the DREAM4 and DREAM5 challenges. Our method displays greater accuracy and scalability. We benchmark the performance of the pipeline with respect to dataset size and levels of noise. We show that the computation time is linear over various network sizes. The Matlab code of the HiDi implementation is available at: www.complexitycalculator.com/HiDiScript.zip. hzenilc@gmail.com or narsis.kiani@ki.se. Supplementary data are available at Bioinformatics online.
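To make the linear-ODE scheme concrete, here is a schematic NumPy sketch of the general approach — numerical derivatives followed by a least-squares fit of dx/dt = Ax. It is a toy under stated assumptions, not the HiDi pipeline, which is written in Matlab and adds an improved derivative scheme and pre-filtration.

```python
# Schematic linear-ODE network inference: fit dx/dt = A x by least squares.
import numpy as np

def infer_network(X, t):
    """X: (timepoints, genes) expression matrix; t: (timepoints,) sample times."""
    dX = np.gradient(X, t, axis=0)            # numerical time derivatives
    # Solve X @ A.T ≈ dX in the least-squares sense.
    coef, *_ = np.linalg.lstsq(X, dX, rcond=None)
    return coef.T                             # A[i, j]: effect of gene j on gene i

t = np.linspace(0.0, 10.0, 50)
X = np.column_stack([np.exp(-0.5 * t), t * np.exp(-0.5 * t)])  # toy 2-gene series
print(infer_network(X, t).round(2))
```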
ERIC Educational Resources Information Center
Capobianco, Brenda M.; DeLisi, Jacqueline; Radloff, Jeffrey
2018-01-01
In an effort to document teachers' enactments of new reform in science teaching, valid and scalable measures of science teaching using engineering design are needed. This study describes the development and testing of an approach for documenting and characterizing elementary science teachers' multiday enactments of engineering design-based science…
A Web-Based Information System for Field Data Management
NASA Astrophysics Data System (ADS)
Weng, Y. H.; Sun, F. S.
2014-12-01
A web-based field data management system has been designed and developed to allow field geologists to store, organize, manage, and share field data online. System requirements were analyzed and clearly defined first, regarding what data are to be stored, who the potential users are, and what system functions are needed in order to deliver the right data in the right way to the right user. A 3-tiered architecture was adopted to create this secure, scalable system, which consists of a web browser at the front end, a database at the back end, and a functional logic server in the middle. Specifically, HTML, CSS, and JavaScript were used to implement the user interface in the front-end tier, the Apache web server runs PHP scripts, and a MySQL server is used for the back-end database. The system accepts various types of field information, including image, audio, video, numeric, and text. It allows users to select data and populate them on either Google Earth or Google Maps for the examination of spatial relations. It also makes the sharing of field data easy by converting them into XML format that is both human-readable and machine-readable, and thus ready for reuse.
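The XML serialization step is simple to picture. The sketch below (in Python, whereas the described system does this server-side in PHP) turns one hypothetical field record into the kind of human- and machine-readable XML the abstract mentions.

```python
# Serialize one field observation to XML (hypothetical record fields).
import xml.etree.ElementTree as ET

record = {"site": "ST-12", "lithology": "basalt", "strike": "045", "dip": "30NW"}

obs = ET.Element("observation")
for key, value in record.items():
    ET.SubElement(obs, key).text = value      # one child element per field

print(ET.tostring(obs, encoding="unicode"))
# <observation><site>ST-12</site><lithology>basalt</lithology>...</observation>
```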
EntrezAJAX: direct web browser access to the Entrez Programming Utilities.
Loman, Nicholas J; Pallen, Mark J
2010-06-21
Web applications for biology and medicine often need to integrate data from Entrez services provided by the National Center for Biotechnology Information. However, direct access to Entrez from a web browser is not possible due to 'same-origin' security restrictions. The use of "Asynchronous JavaScript and XML" (AJAX) to create rich, interactive web applications is now commonplace. The ability to access Entrez via AJAX would be advantageous in the creation of integrated biomedical web resources. We describe EntrezAJAX, which provides access to Entrez eUtils and is able to circumvent same-origin browser restrictions. EntrezAJAX is easily implemented by JavaScript developers and provides the same functionality as Entrez eUtils, as well as enhanced functionality to ease development. We provide easy-to-understand developer examples written in JavaScript to illustrate potential uses of this service. For the purposes of speed, reliability and scalability, EntrezAJAX has been deployed on Google App Engine, a freely available cloud service. The EntrezAJAX webpage is located at http://entrezajax.appspot.com/
An Interactive Web-Based Analysis Framework for Remote Sensing Cloud Computing
NASA Astrophysics Data System (ADS)
Wang, X. Z.; Zhang, H. M.; Zhao, J. H.; Lin, Q. H.; Zhou, Y. C.; Li, J. H.
2015-07-01
Spatiotemporal data, especially remote sensing data, are widely used in ecological, geographical, agricultural, and military research and applications. With the development of remote sensing technology, more and more remote sensing data are accumulated and stored in the cloud. Providing cloud users with an effective way to access and analyse these massive spatiotemporal data in web clients has become an urgent issue. In this paper, we propose a new scalable, interactive and web-based cloud computing solution for massive remote sensing data analysis. We build a spatiotemporal analysis platform to provide the end user with a safe and convenient way to access massive remote sensing data stored in the cloud. The lightweight cloud storage system used to store public data and users' private data is constructed based on an open source distributed file system. In it, massive remote sensing data are stored as public data, while the intermediate and input data are stored as private data. The elastic, scalable, and flexible cloud computing environment is built using Docker, an open-source lightweight cloud computing container technology for the Linux operating system. In the Docker container, open-source software such as IPython, NumPy, GDAL, and GRASS GIS is deployed. Users can write scripts in the IPython Notebook web page through the web browser to process data, and the scripts are submitted to the IPython kernel to be executed. By comparing the performance of remote sensing data analysis tasks executed in Docker containers, KVM virtual machines, and physical machines, we conclude that the cloud computing environment built with Docker makes the greatest use of the host system resources and can handle more concurrent spatial-temporal computing tasks. Docker technology provides a resource isolation mechanism covering IO, CPU, and memory, which offers a security guarantee when processing remote sensing data in the IPython Notebook. Users can write complex data processing code on the web directly, so they can design their own data processing algorithms.
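A user script in such a hosted notebook might look like the following NDVI calculation with GDAL and NumPy; the file path and band ordering are hypothetical.

```python
# NDVI from red/NIR bands, the kind of script run in the hosted notebook.
import numpy as np
from osgeo import gdal

ds = gdal.Open("/data/public/landsat_scene.tif")    # hypothetical cloud path
red = ds.GetRasterBand(3).ReadAsArray().astype("float64")
nir = ds.GetRasterBand(4).ReadAsArray().astype("float64")

ndvi = (nir - red) / np.maximum(nir + red, 1e-9)    # guard against divide-by-zero
print("mean NDVI:", float(ndvi.mean()))
```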
On-demand server-side image processing for web-based DICOM image display
NASA Astrophysics Data System (ADS)
Sakusabe, Takaya; Kimura, Michio; Onogi, Yuzo
2000-04-01
Low-cost image delivery is needed in modern networked hospitals. If a hospital has hundreds of clients, the cost of client systems is a big problem. Naturally, a Web-based system is the most effective solution. But a Web browser alone could not display medical images with certain image processing such as a lookup table transformation. We developed a Web-based medical image display system using a Web browser and on-demand server-side image processing. All images displayed on a Web page are generated from DICOM files on a server and delivered on demand. User interaction on the Web page is handled by a client-side scripting technology such as JavaScript. This combination gives the look-and-feel of an imaging workstation, not only in its functionality but also in its speed. Real-time update of images while tracing mouse motion is achieved in the Web browser without any client-side image processing, which might otherwise be done by client-side plug-in technology such as Java Applets or ActiveX. We tested the performance of the system in three cases: a single client, a small number of clients in a fast network, and a large number of clients in a normal-speed network. The results show that there is very slight communication overhead and that the system is very scalable in the number of clients.
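The server-side step amounts to applying a window/level lookup table and emitting a browser-friendly image. Here is a hedged Python sketch of that idea using pydicom and Pillow (libraries that postdate the paper); the file name and window settings are hypothetical.

```python
# Window/level a DICOM slice server-side and save a browser-friendly PNG.
import numpy as np
import pydicom
from PIL import Image

def dicom_to_png(path, center=40.0, width=400.0, out="slice.png"):
    ds = pydicom.dcmread(path)
    pixels = ds.pixel_array.astype("float64")
    lo, hi = center - width / 2.0, center + width / 2.0
    # Linear lookup-table transform, clipped to the 8-bit display range.
    img8 = np.clip((pixels - lo) / (hi - lo) * 255.0, 0, 255).astype("uint8")
    Image.fromarray(img8).save(out)

dicom_to_png("ct_slice.dcm")                 # hypothetical input file
```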
Representing Science through Historical Drama: "Lord Kelvin and the Age of the Earth Debate"
ERIC Educational Resources Information Center
Begoray, Deborah L.; Stinner, Arthur
2005-01-01
This paper presents a defense for the use of historical scripted conversations in science. We discuss drama's use of both expository and narrative text forms to expand the language forms available for a variety of learners, the use of scripted conversations as a defensible curriculum design to foster learning in general and science in particular,…
Accurate Arabic Script Language/Dialect Classification
2014-01-01
Army Research Laboratory report ARL-TR-6761, "Accurate Arabic Script Language/Dialect Classification," by Stephen C. Tratz, Computational and Information Sciences Directorate, January 2014. Approved for public release.
Enacting the Common Script: Management Ideas at Finnish Universities of Applied Sciences
ERIC Educational Resources Information Center
Vuori, Johanna
2015-01-01
This article discusses the work of mid-level management at Finnish universities of applied sciences. Based on in-depth interviews with 15 line managers, this study investigates how the standardized management ideas of rational management and employee empowerment are used in the leadership of lecturers at these institutions. The findings indicate…
COMP Superscalar, an interoperable programming framework
NASA Astrophysics Data System (ADS)
Badia, Rosa M.; Conejero, Javier; Diaz, Carlos; Ejarque, Jorge; Lezzi, Daniele; Lordan, Francesc; Ramon-Cortes, Cristian; Sirvent, Raul
2015-12-01
COMPSs is a programming framework that aims to facilitate the parallelization of existing applications written in Java, C/C++ and Python scripts. For that purpose, it offers a simple programming model based on sequential development in which the user is mainly responsible for (i) identifying the functions to be executed as asynchronous parallel tasks and (ii) marking them with Java annotations or standard Python decorators. A runtime system is in charge of exploiting the inherent concurrency of the code, automatically detecting and enforcing the data dependencies between tasks and dispatching these tasks to the available resources, which can be nodes in a cluster, clouds or grids. In cloud environments, COMPSs provides scalability and elasticity features allowing the dynamic provision of resources.
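In the Python flavor (PyCOMPSs), the model looks roughly like the sketch below, following the documented @task decorator style; treat the details as a sketch rather than a complete application.

```python
# PyCOMPSs-style tasking: decorate a function, let the runtime parallelize it.
# Such a script is typically launched with the runcompss command.
from pycompss.api.task import task
from pycompss.api.api import compss_wait_on

@task(returns=int)
def square(x):
    return x * x

results = [square(i) for i in range(8)]   # each call spawns an asynchronous task
results = compss_wait_on(results)          # synchronize and collect the values
print(sum(results))
```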
NASA Astrophysics Data System (ADS)
Lescinsky, D. T.; Wyborn, L. A.; Evans, B. J. K.; Allen, C.; Fraser, R.; Rankine, T.
2014-12-01
We present collaborative work on a generic, modular infrastructure for virtual laboratories (VLs, similar to science gateways) that combine online access to data, scientific code, and computing resources as services that support multiple data-intensive scientific computing needs across a wide range of science disciplines. We are leveraging access to 10+ PB of earth science data on Lustre filesystems at Australia's National Computational Infrastructure (NCI) Research Data Storage Infrastructure (RDSI) node, co-located with NCI's 1.2 PFlop Raijin supercomputer and a 3000 CPU core research cloud. The development, maintenance and sustainability of VLs are best accomplished through modularisation and standardisation of interfaces between components. Our approach has been to break up tightly-coupled, specialised application packages into modules, with identified best techniques and algorithms repackaged either as data services or scientific tools that are accessible across domains. The data services can be used to manipulate, visualise and transform multiple data types, whilst the scientific tools can be used in concert with multiple scientific codes. We are currently designing a scalable generic infrastructure that will handle scientific code as modularised services and thereby enable the rapid and easy deployment of new codes or versions of codes. The goal is to build open source libraries/collections of scientific tools, scripts and modelling codes that can be combined in specially designed deployments. Additional services in development include provenance, publication of results, monitoring, workflow tools, etc. The generic VL infrastructure will be hosted at NCI, but can access alternative computing infrastructures (i.e., public/private cloud, HPC). The Virtual Geophysics Laboratory (VGL) was developed as a pilot project to demonstrate the underlying technology. This base is now being redesigned and generalised to develop a Virtual Hazards Impact and Risk Laboratory (VHIRL); any enhancements and new capabilities will be incorporated into the generic VL infrastructure. At the same time, we are scoping seven new VLs and, in the process, identifying other common components to prioritise and focus development.
Monterrosa, Eva C; Frongillo, Edward A; González de Cossío, Teresa; Bonvecchio, Anabelle; Villanueva, Maria Angeles; Thrasher, James F; Rivera, Juan A
2013-06-01
Scalable interventions are needed to improve infant and young child feeding (IYCF). We evaluated whether an IYCF nutrition communication strategy using radio and nurses changed beliefs, attitudes, social norms, intentions, and behaviors related to breastfeeding (BF), dietary diversity, and food consistency. Women with children 6-24 mo were randomly selected from 6 semi-urban, low-income communities in the Mexican state of Morelos (intervention, n = 266) and from 3 comparable communities in Puebla (control, n = 201). Nurses delivered 5 scripted messages a single time each, covering BF, food consistency, flesh-food and vegetable consumption, and feeding again if food was rejected; these same messages aired 7 times each day on 3 radio stations for 21 d. The control communities were not exposed to the scripted messages via nurse or radio. We used a pre-/post-test design to evaluate changes in beliefs, attitudes, norms, and intentions, as well as change in behavior, with 7-d food frequency questions. Mixed models were used to examine intervention-control differences in pre-/post changes. Coverage was 87% for the nurse component and 34% for radio. Beliefs, attitudes, and intention, but not social norms, about IYCF significantly improved in the intervention communities compared with control. Significant pre-/post changes in the intervention communities compared with control were reported for BF frequency (3.7 ± 0.6 times/d), and consumption of vegetables (0.6 ± 0.2 d) and beef (0.2 ± 0.1 d) and thicker consistency of chicken (0.6 ± 0.2 d) and vegetable broths (0.8 ± 0.4 d). This study provides evidence that a targeted communication strategy using a scalable model significantly improves IYCF.
[Application of the life sciences platform based on Oracle to biomedical information].
Zhao, Zhi-Yun; Li, Tai-Huan; Yang, Hong-Qiao
2008-03-01
The life sciences platform based on Oracle database technology is introduced in this paper. By providing powerful data access, integrating a variety of data types, and managing vast quantities of data, the software presents a flexible, safe and scalable management platform for biomedical data processing.
Dynamic alarm response procedures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, J.; Gordon, P.; Fitch, K.
2006-07-01
The Dynamic Alarm Response Procedure (DARP) system provides a robust, Web-based alternative to existing hard-copy alarm response procedures. This paperless system improves performance by eliminating time wasted looking up paper procedures by number, looking up plant process values and equipment and component status at graphical displays or panels, and maintaining the procedures. Because it is a Web-based system, it is platform independent. DARPs can be served from any Web server that supports CGI scripting, such as Apache, IIS, TclHTTPD, and others. DARP pages can be viewed in any Web browser that supports JavaScript and Scalable Vector Graphics (SVG), such as Netscape, Microsoft Internet Explorer, Mozilla Firefox, Opera, and others.
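A DARP-style page boils down to a CGI endpoint returning SVG that reflects live process values. The Python sketch below is illustrative only; the tag names, threshold, and process value are hypothetical, and the paper does not specify the CGI language used.

```python
#!/usr/bin/env python3
# Illustrative CGI handler returning an SVG status graphic (hypothetical values).

def alarm_svg(value: float, setpoint: float = 100.0) -> str:
    color = "red" if value > setpoint else "green"
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="220" height="40">'
            f'<rect width="220" height="40" fill="{color}"/>'
            f'<text x="10" y="25" fill="white">Tank level: {value:.1f}</text>'
            f'</svg>')

# A CGI response is headers, a blank line, then the body.
print("Content-Type: image/svg+xml")
print()
print(alarm_svg(87.5))
```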
NGL Viewer: Web-based molecular graphics for large complexes.
Rose, Alexander S; Bradley, Anthony R; Valasatava, Yana; Duarte, Jose M; Prlic, Andreas; Rose, Peter W
2018-05-29
The interactive visualization of very large macromolecular complexes on the web is becoming a challenging problem as experimental techniques advance at an unprecedented rate and deliver structures of increasing size. We have tackled this problem by developing highly memory-efficient and scalable extensions for the NGL WebGL-based molecular viewer and by using MMTF, a binary and compressed Macromolecular Transmission Format. These enable NGL to download and render molecular complexes with millions of atoms interactively on desktop computers and smartphones alike, making it a tool of choice for web-based molecular visualization in research and education. The source code is freely available under the MIT license at github.com/arose/ngl and distributed on NPM (npmjs.com/package/ngl). MMTF-JavaScript encoders and decoders are available at github.com/rcsb/mmtf-javascript. asr.moin@gmail.com.
Socializing Respect and Knowledge in a Racially Integrated Science Classroom
ERIC Educational Resources Information Center
Solis, Jorge; Kattan, Shlomy; Baquedano-Lopez, Patricia
2009-01-01
In this article we examine the socialization of respect in a racially integrated science classroom in Northern California that employed a character education program called Tribes. We focus on the ways scripts derived from this program are enacted during Community Circle activities and how breaches to these scripts and the norms of respectful…
Effective self-regulated science learning through multimedia-enriched skeleton concept maps
NASA Astrophysics Data System (ADS)
Marée, Ton J.; van Bruggen, Jan M.; Jochems, Wim M. G.
2013-04-01
Background: This study combines work on concept mapping with scripted collaborative learning. Purpose: The objective was to examine the effects of self-regulated science learning through scripting students' argumentative interactions during collaborative 'multimedia-enriched skeleton concept mapping' on meaningful science learning and retention. Programme description: Each concept in the enriched skeleton concept map (ESCoM) contained annotated multimedia-rich content (pictures, text, animations or video clips) that elaborated the concept, and an embedded collaboration script to guide students' interactions. Sample: The study was performed in a Biomolecules course in a Bachelor of Applied Science programme in the Netherlands. All first-year students (N=93, 31 women, 62 men, aged 17-33 years) took part in this study. Design and methods: The design used a control group who received the regular course and an experimental group working together in dyads on an ESCoM under the guidance of collaboration scripts. In order to investigate meaningful understanding and retention, a retention test was administered a month after the final exam. Results: Analysis of covariance demonstrated a significant difference in Biomolecules exam scores between the experimental group and the control group, and the difference between the groups on the retention test also reached statistical significance. Conclusions: Scripted collaborative multimedia ESCoM mapping resulted in meaningful understanding and retention of the conceptual structure of the domain, the concepts, and their relations. Not only was scripted collaborative multimedia ESCoM mapping more effective than the traditional teaching approach, it was also more efficient in requiring far less teacher guidance.
The NOAO NEWFIRM Data Handling System
NASA Astrophysics Data System (ADS)
Zárate, N.; Fitzpatrick, M.
2008-08-01
The NOAO Extremely Wide-Field IR Mosaic (NEWFIRM) is a new 1-2.4 micron IR camera that is now being commissioned for the 4m Mayall telescope at Kitt Peak. The focal plane consists of a 2x2 mosaic of 2048x2048 arrays offering a field-of-view of 27.6' on a side. The use of dual MONSOON array controllers permits very fast readout, and a scripting interface allows for highly efficient observing modes. We describe the Data Handling System (DHS) for the NEWFIRM camera, which is designed to meet the performance requirements of the instrument as well as the observing environment in which it operates. It is responsible for receiving the data stream from the detector and instrument software, rectifying the image geometry, presenting a real-time display of the image to the user, final assembly of a science-grade image with complete headers, as well as triggering automated pipeline and archival functions. The DHS uses an event-based messaging system to control multiple processes on a distributed network of machines. The asynchronous nature of this processing means the DHS operates independently from the camera readout, and the design of the system is inherently scalable to larger focal planes that use a greater number of array controllers. Current status and future plans for the DHS are also discussed.
EntrezAJAX: direct web browser access to the Entrez Programming Utilities
2010-01-01
Web applications for biology and medicine often need to integrate data from Entrez services provided by the National Center for Biotechnology Information. However, direct access to Entrez from a web browser is not possible due to 'same-origin' security restrictions. The use of "Asynchronous JavaScript and XML" (AJAX) to create rich, interactive web applications is now commonplace. The ability to access Entrez via AJAX would be advantageous in the creation of integrated biomedical web resources. We describe EntrezAJAX, which provides access to Entrez eUtils and is able to circumvent same-origin browser restrictions. EntrezAJAX is easily implemented by JavaScript developers and provides identical functionality as Entrez eUtils as well as enhanced functionality to ease development. We provide easy-to-understand developer examples written in JavaScript to illustrate potential uses of this service. For the purposes of speed, reliability and scalability, EntrezAJAX has been deployed on Google App Engine, a freely available cloud service. The EntrezAJAX webpage is located at http://entrezajax.appspot.com/ PMID:20565938
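For orientation, a minimal sketch of the kind of Entrez eUtils query that EntrezAJAX proxies for browser code, here issued directly with the Python standard library; the esearch endpoint and its parameters are NCBI's documented eUtils interface, while the search term is just an example.

# Minimal sketch of an Entrez eUtils esearch query of the kind that
# EntrezAJAX proxies for browsers, issued here with the standard library.
import json
import urllib.parse
import urllib.request

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {
    "db": "pubmed",                 # search PubMed
    "term": "ajax bioinformatics",  # example query term
    "retmode": "json",              # ask for JSON rather than XML
    "retmax": "5",
}
with urllib.request.urlopen(BASE + "?" + urllib.parse.urlencode(params)) as resp:
    result = json.load(resp)

# The hit list lives under esearchresult/idlist in the JSON reply.
print(result["esearchresult"]["idlist"])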
ERIC Educational Resources Information Center
Lee, Yuan-Hsuan
2018-01-01
Premised on Web 2.0 technology, the current study investigated the effect of facilitating critical thinking using the Collaborative Questioning, Reading, Answering, and Checking (C-QRAC) collaboration script on university students' science reading literacy in flipped learning conditions. Participants were 85 Taiwanese university students recruited…
Information Security Considerations for Applications Using Apache Accumulo
2014-09-01
[Extraction fragments from the report's acronym list and body:] …Distributed File System; INSCOM: United States Army Intelligence and Security Command; JPA: Java Persistence API; JSON: JavaScript Object Notation; MAC: Mandatory… …MySQL [13]. BigTable can process 20 petabytes per day [14]. High degree of scalability on commodity hardware. NoSQL databases do not rely on highly… …manipulation in relational databases. NoSQL databases each have a unique programming interface that uses a lower-level procedural language (e.g., Java…
NASA Astrophysics Data System (ADS)
Alcott, G.; Kempler, S.; Lynnes, C.; Leptoukh, G.; Vollmer, B.; Berrick, S.
2008-12-01
NASA's Earth Sciences Division (ESD), together with its preceding Earth science organizations, has made great investments in the development and maintenance of data management systems, as well as information technologies, for the purpose of maximizing the use and usefulness of NASA-generated Earth science data. Earth science information systems, evolving with the maturation and implementation of advancing technologies, reside at NASA data centers, known as Distributed Active Archive Centers (DAACs). With information management system infrastructure in place, and system data and user services already developed and operational, only very small delta costs are required to fully support the data archival, processing, and data support services required by the recommended Decadal Study missions. This presentation describes the services and capabilities of the Goddard Space Flight Center (GSFC) Earth Sciences Data and Information Services Center (GES DISC) (one of NASA's DAACs) and their potential reuse for these future missions. After 14 years working with instrument teams and the broader science community, GES DISC personnel, with expertise in atmospheric, water cycle, and atmospheric modeling data and information services, as well as in Earth science missions, information system engineering, operations, and user services, have developed a series of modular, reusable data management components currently in use in several projects. The knowledge and experience gained at the GES DISC lend themselves to providing science-driven information systems in the areas of aerosols, clouds, and atmospheric chemicals to be measured by recommended Decadal Survey missions. Available reusable capabilities include data archive and distribution (Simple, Scalable, Script-based, Science [S4] Product Archive aka S4PA), data processing (S4 Processor for Measurements aka S4PM), data search (Mirador), data browse, visualization, and analysis (Giovanni), and data mining services. In addition, recent enhancements, such as Open Geospatial Consortium (OGC), Inc. interoperability implementations and data fusion prototypes, will be described. As a result of the information management systems developed by NASA's GES DISC, not only are large cost savings realized through system reuse, but maintenance costs are also minimized due to the simplicity of their implementations.
NASA Astrophysics Data System (ADS)
Peleg, R.; Baram-Tsabari, A.
2016-10-01
Science museums often introduce plays to liven up exhibits, attract visitors to specific exhibitions, and help visitors to "digest" difficult content. Most previous research has concentrated on viewers' learning outcomes. This study uses performance and spectator analyses from the field of theater studies to explore the link between producers' intended aims, the written script, and the learning outcomes. We also use the conflict of didactics and aesthetics, common to the design of both educational plays and science museum exhibits, as a lens for understanding our data. "Darwin's journey," a play about evolution, was produced by a major science museum in Israel. The producers' objectives were collected through in-depth interviews. A structural analysis was conducted on the script. Viewer (n = 103) and nonviewer (n = 90) data were collected via a questionnaire. The results show strong evidence for the encoding of all of the producers' aims in the script. Explicit and cognitive aims were decoded as intended by the viewers. The evidence was weak for the decoding of implicit and affective aims. While the producers were concerned with the conflict of didactics and aesthetics, this conflict was not apparent in the script. The conflict is discussed within the broader context of science education in informal settings.
NASA Astrophysics Data System (ADS)
Adler, David S.; Workman, William M., III; Chance, Don
2004-09-01
The Science and Mission Scheduling Branch (SMSB) of the Space Telescope Science Institute (STScI) historically operated exclusively under VMS. Due to diminished support for VMS-based platforms at STScI, SMSB recently transitioned to Unix operations. No additional resources were available to the group; the project was SMSB's to design, develop, and implement. Early decisions included the choice of Python as the primary scripting language; adoption of Object-Oriented Design in the development of base utilities; and the development of a Python utility to interact directly with the Sybase database. The project was completed in January 2004 with the implementation of a GUI to generate the Command Loads that are uplinked to HST. The current tool suite consists of 31 utilities and 271 tools comprising over 60,000 lines of code. In this paper, we summarize the decision-making process used to determine the primary scripting language, database interface, and code management library. We also describe the finished product and summarize lessons learned along the way to completing the project.
Environmental Epidemiology Program
Utah Department of Health, Bureau of Epidemiology, Environmental Epidemiology Program (EEP). The Environmental Epidemiology Program strives to improve the health of Utah residents through science-based environmental health policy and by empowering citizens with knowledge about…
Windsor, Richard; Clark, Jeannie; Cleary, Sean; Davis, Amanda; Thorn, Stephanie; Abroms, Lorien; Wedeles, John
2014-01-01
This study evaluated the effectiveness of the Smoking Cessation and Reduction in Pregnancy Treatment (SCRIPT) Program selected by the West Virginia-Right From The Start Project for state-wide dissemination. A process evaluation documented the fidelity of SCRIPT delivery by Designated Care Coordinators (DCC), licensed nurses and social workers who provide home-based case management to Medicaid-eligible clients in all 55 counties. We implemented a quasi-experimental, non-randomized, matched Comparison (C) Group design. The SCRIPT Experimental E Group (N = 259) were all clients in 2009-2010 that wanted to quit, provided a screening carbon monoxide (CO), and received a SCRIPT home visit. The (C) Group was derived from all clients in 2006-2007 who had the same CO assessments as E Group clients and reported receiving cessation counseling. We stratified the baseline CO of E Group clients into 10 strata, and randomly selected the same number of (C) Group clients (N = 259) from each matched stratum to evaluate the effectiveness of the SCRIPT Program. There were no significant baseline differences between the E and (C) Groups. The process evaluation documented a significant increase in the fidelity of DCC delivery of SCRIPT Program procedures: from 63 % in 2006 to 74 % in 2010. Significant increases were documented in the E Group cessation rate (+9.3 %) and in the significant-reduction rate (+4.5 %), defined as a ≥50 % reduction from baseline CO. Perinatal health case management staff can deliver the SCRIPT Program, and Medicaid-supported clients can change smoking behavior, even very late in pregnancy. When multiple biases were analyzed, we concluded the SCRIPT Dissemination Project was the most plausible reason for the significant changes in behavior.
Collaborative workbench for cyberinfrastructure to accelerate science algorithm development
NASA Astrophysics Data System (ADS)
Ramachandran, R.; Maskey, M.; Kuo, K.; Lynnes, C.
2013-12-01
There are significant untapped resources for information and knowledge creation within the Earth Science community in the form of data, algorithms, services, analysis workflows or scripts, and the related knowledge about these resources. Despite the huge growth in social networking and collaboration platforms, these resources often reside on an investigator's workstation or laboratory and are rarely shared. A major reason for this is that there are very few scientific collaboration platforms, and those that exist typically require the use of a new set of analysis tools and paradigms to leverage the shared infrastructure. As a result, adoption of these collaborative platforms for science research is inhibited by the high cost to an individual scientist of switching from his or her own familiar environment and set of tools to a new environment and tool set. This presentation will describe an ongoing project developing an Earth Science Collaborative Workbench (CWB). The CWB approach will eliminate this barrier by augmenting a scientist's current research environment and tool set to allow him or her to easily share diverse data and algorithms. The CWB will leverage evolving technologies such as commodity computing and social networking to design an architecture for scalable collaboration that will support the emerging vision of an Earth Science Collaboratory. The CWB is being implemented on the robust and open source Eclipse framework and will be compatible with widely used scientific analysis tools such as IDL. The myScience Catalog built into CWB will capture and track metadata and provenance about data and algorithms for the researchers in a non-intrusive manner with minimal overhead. Seamless interfaces to multiple Cloud services will support sharing algorithms, data, and analysis results, as well as access to storage and computer resources. A Community Catalog will track the use of shared science artifacts and manage collaborations among researchers.
The Transition from VMS to Unix Operations for STScI's Science Planning and Scheduling Team
NASA Astrophysics Data System (ADS)
Adler, D. S.; Taylor, D. K.
The Science Planning and Scheduling Team of the Space Telescope Science Institute currently uses the VMS operating system. SPST began a transition to Unix-based operations in the summer of 1999. The main tasks for SPST to address in the Unix transition are: (1) converting the current SPST operational tools from DCL to Python; (2) converting our database report scripts from SQL; (3) adopting a Unix-based code management system; and (4) training the SPST staff. The goal is to fully transition the team to Unix operations by the end of 2001.
Compressing Test and Evaluation by Using Flow Data for Scalable Network Traffic Analysis
2014-10-01
…test events, quality of service and other key metrics of military systems and networks are evaluated. Network data captured in standard flow formats… …mentioned here. The Ozone Widget Framework (Next Century, n.d.) has proven to be very useful. Also, an extensive, clean, and optimized JavaScript library for visualizing many types of data can be found in D3: Data-Driven Documents (Bostock, 2013). [Section fragment: "Quality of Service from Flow"] Two essential metrics of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez, Cecilia C.
2010-06-15
We present in a unified manner the existing methods for scalable partial quantum process tomography. We focus on two main approaches: the one presented in Bendersky et al. [Phys. Rev. Lett. 100, 190403 (2008)] and the ones described, respectively, in Emerson et al. [Science 317, 1893 (2007)] and Lopez et al. [Phys. Rev. A 79, 042328 (2009)], which can be combined together. The methods share an essential feature: They are based on the idea that the tomography of a quantum map can be efficiently performed by studying certain properties of a twirling of such a map. From this perspective, in this paper we present extensions, improvements, and comparative analyses of the scalable methods for partial quantum process tomography. We also clarify the significance of the extracted information, and we introduce interesting and useful properties of the χ-matrix representation of quantum maps that can be used to establish a clearer path toward achieving full tomography of quantum processes in a scalable way.
NASA Astrophysics Data System (ADS)
Maffioletti, Sergio; Dawes, Nicholas; Bavay, Mathias; Sarni, Sofiane; Lehning, Michael
2013-04-01
The Swiss Experiment platform (SwissEx: http://www.swiss-experiment.ch) provides a distributed storage and processing infrastructure for environmental research experiments. The aim of the second phase project (the Open Support Platform for Environmental Research, OSPER, 2012-2015) is to develop the existing infrastructure to provide scientists with an improved workflow. This improved workflow will include pre-defined, documented and connected processing routines. A large-scale computing and data facility is required to provide reliable and scalable access to data for analysis, and it is desirable that such an infrastructure should be free of traditional data handling methods. Such an infrastructure has been developed using the cloud-based part of the Swiss national infrastructure SMSCG (http://www.smscg.ch) and Academic Cloud. The infrastructure under construction supports two main usage models: 1) Ad-hoc data analysis scripts: These scripts are simple processing scripts, written by the environmental researchers themselves, which can be applied to large data sets via the high power infrastructure. Examples of this type of script are spatial statistical analysis scripts (R-based scripts), mostly computed on raw meteorological and/or soil moisture data. These provide processed output in the form of a grid, a plot, or a kml. 2) Complex models: A more intense data analysis pipeline centered (initially) around the physical process model, Alpine3D, and the MeteoIO plugin; depending on the data set, this may require a tightly coupled infrastructure. SMSCG already supports Alpine3D executions as both regular grid jobs and as virtual software appliances. A dedicated appliance with the Alpine3D-specific libraries has been created and made available through the SMSCG infrastructure. The analysis pipelines are activated and supervised by simple control scripts that, depending on the data fetched from the meteorological stations, launch new instances of the Alpine3D appliance, execute location-based subroutines at each grid point and store the results back into the central repository for post-processing. An optional extension of this infrastructure will be to provide a 'ring buffer'-type database infrastructure, such that model results (e.g. test runs made to check parameter dependency or for development) can be visualised and downloaded after completion without submitting them to a permanent storage infrastructure. Data organization: Data collected from sensors are archived and classified in distributed sites connected with an open-source software middleware, GSN. Publicly available data are available through common web services and via a cloud storage server (based on Swift). Collocation of the data and processing in the cloud would eventually eliminate data transfer requirements. Execution control logic: Execution of the data analysis pipelines (for both the R-based analysis and the Alpine3D simulations) has been implemented using the GC3Pie framework developed by UZH (https://code.google.com/p/gc3pie/). This allows large-scale, fault-tolerant execution of the pipelines to be described in terms of software appliances. GC3Pie also allows supervision of the execution of large campaigns of appliances as a single simulation. This poster will present the fundamental architectural components of the data analysis pipelines together with initial experimental results.
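As a sketch of how such a pipeline step might be described (this is illustrative, not the project's code: the GC3Pie Application class is real, but the Alpine3D command line, file names and resource figures below are assumptions):

# A sketch (not the project's actual code) of wrapping an Alpine3D run
# as a GC3Pie Application, the abstraction GC3Pie uses to describe a
# job it can submit to grid or cloud resources.
import gc3libs

class Alpine3DRun(gc3libs.Application):
    """One Alpine3D simulation over a given meteo input bundle."""
    def __init__(self, meteo_file: str, run_id: str):
        super().__init__(
            arguments=["alpine3d", "--config", "run.ini"],  # executable + args (illustrative)
            inputs=[meteo_file, "run.ini"],   # files staged to the compute node
            outputs=["output/"],              # directory fetched back afterwards
            output_dir=f"results/{run_id}",   # where results land locally
            stdout="alpine3d.log",
            requested_cores=4,
        )

# Applications like this are then submitted and supervised through
# GC3Pie's Engine (or a SessionBasedScript), which handles retries and
# large campaigns of runs as described above.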
Earth Science Curriculum Enrichment Through Matlab!
NASA Astrophysics Data System (ADS)
Salmun, H.; Buonaiuto, F. S.
2016-12-01
The use of Matlab in Earth Science undergraduate courses in the Department of Geography at Hunter College began as a pilot project in Fall 2008 and has evolved and advanced to being a significant component of an Advanced Oceanography course, the selected tool for data analysis in other courses, and the main focus of a graduate course for doctoral students at The City University of New York (CUNY) working on research related to geophysical, oceanic and atmospheric dynamics. The primary objectives of these efforts were to enhance the Earth Science curriculum through course-specific applications, to increase undergraduate programming and data analysis skills, and to develop a Matlab users network within the Department and the broader Hunter College and CUNY community. Students have had the opportunity to learn Matlab as a stand-alone course, within an independent study group, or as a laboratory component within related STEM classes. All of these instructional efforts incorporated the use of prepackaged Matlab exercises and a research project. Initial exercises were designed to cover basic scripting and data visualization techniques. Students were provided data and a skeleton script to modify and improve upon based on the laboratory instructions. As students' programming skills increased throughout the semester, more advanced scripting, data mining and data analysis were assigned. In order to illustrate the range of applications within the Earth Sciences, laboratory exercises were constructed around topics selected from the disciplines of Geology, Physics, Oceanography, Meteorology and Climatology. In addition, the structure of the research component of the courses included both individual and team projects.
Writing for Learning in Science: Producing a Video Script on Light.
ERIC Educational Resources Information Center
Lorenzo, Mercedes; Hand, Brian; Prain, Vaughan
2001-01-01
Reports on a task in which students wrote scripts for a silent movie to consolidate their understanding of the subject of light. Considers the broader implications for effective task design, implementation, and review of this kind of writing. (Author/ASK)
NASA Astrophysics Data System (ADS)
Hellman, Leslie G.
This qualitative study uses children's writing to explore the divide between a conception of Science as a humanistic discipline reliant on creativity, ingenuity and out of the box thinking and a persistent public perception of science and scientists as rigid and methodical. Artifacts reviewed were 506 scripts written during 2014 and 2016 by 5th graders participating in an out-of classroom, mentor supported, free-choice 10-week arts and literacy initiative. 47% (237) of these scripts were found to contain content relating to Science, Scientists, Science Education and the Nature of Science. These 237 scripts were coded for themes; characteristics of named scientist characters were tracked and analyzed. Findings included NOS understandings being expressed by representation of Science and Engineering Practices; Ingenuity being primarily linked to Engineering tasks; common portrayals of science as magical or scientists as villains; and a persistence in negative stereotypes of scientists, including a lack of gender equity amongst the named scientist characters. Findings suggest that representations of scientists in popular culture highly influence the portrayals of scientists constructed by the students. Recommendations to teachers include encouraging explicit consideration of big-picture NOS concepts such as ethics during elementary school and encouraging the replacement of documentary or educational shows with more engaging fictional media.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karthik, Rajasekar
2014-01-01
In this paper, an architecture for building a Scalable And Mobile Environment For High-Performance Computing with spatial capabilities, called SAME4HPC, is described using cutting-edge technologies and standards such as Node.js, HTML5, ECMAScript 6, and PostgreSQL 9.4. Mobile devices are increasingly becoming powerful enough to run high-performance apps. At the same time, there exist a significant number of low-end and older devices that rely heavily on the server or the cloud infrastructure to do the heavy lifting. Our architecture aims to support both of these types of devices to provide high performance and a rich user experience. A cloud infrastructure consisting of OpenStack with Ubuntu, GeoServer, and high-performance JavaScript frameworks is among the key open-source and industry-standard practices adopted in this architecture.
Primary pre-service teachers' skills in planning a guided scientific inquiry
NASA Astrophysics Data System (ADS)
García-Carmona, Antonio; Criado, Ana M.; Cruz-Guzmán, Marta
2017-10-01
A study is presented of the skills that primary pre-service teachers (PPTs) have in completing the planning of a scientific inquiry on the basis of a guiding script. The sample comprised 66 PPTs who constituted a group-class of the subject Science Teaching, taught in the second year of an undergraduate degree in primary education at a Spanish university. The data were acquired from the responses of the PPTs (working in teams) to open-ended questions posed to them in the script concerning the various tasks involved in a scientific inquiry (formulation of hypotheses, design of the experiment, data collection, interpretation of results, drawing conclusions). Data were analyzed within the framework of a descriptive-interpretive qualitative research study with a combination of inter- and intra-rater methods, and the use of low-inference descriptors. The results showed that the PPTs have major shortcomings in planning the complete development of a guided scientific inquiry. The discussion of the results includes a number of implications for rethinking the Science Teaching course so that PPTs can attain a basic level of training in inquiry-based science education.
Development of Web-Based Examination System Using Open Source Programming Model
ERIC Educational Resources Information Center
Abass, Olalere A.; Olajide, Samuel A.; Samuel, Babafemi O.
2017-01-01
The traditional method of assessment (examination) is often characterized by leakages of examination questions and human errors during the marking of scripts and recording of scores. Technological advancement in the field of computer science has necessitated computer usage in virtually all areas of human life and endeavor, the education sector…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, Adam
2007-05-22
MpiGraph consists of an MPI application called mpiGraph written in C to measure message bandwidth and an associated crunch_mpiGraph script written in Perl to process the application output into an HTML report. The mpiGraph application is designed to inspect the health and scalability of a high-performance interconnect while under heavy load. This is useful to detect hardware and software problems in a system, such as slow nodes, links, switches, or contention in switch routing. It is also useful to characterize how interconnect performance changes with different settings or how one interconnect type compares to another.
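mpiGraph itself is the C program described above; purely as an illustration of the measurement idea, a minimal mpi4py sketch that times one-directional bandwidth for every rank pair so slow links or nodes stand out (payload size, repetition count and the send-side-only timing are simplifying assumptions):

# Not mpiGraph itself: a toy mpi4py version of pairwise bandwidth probing.
# Run with e.g. `mpiexec -n 4 python bw_sketch.py`.
import time
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
MSG = np.ones(1 << 20, dtype=np.uint8)   # 1 MiB payload
REPS = 20

for src in range(size):
    for dst in range(size):
        if src == dst:
            continue
        comm.Barrier()                   # keep each pair's test isolated
        if rank == src:
            t0 = time.perf_counter()
            for _ in range(REPS):
                comm.Send(MSG, dest=dst)
            dt = time.perf_counter() - t0
            print(f"{src} -> {dst}: {REPS * MSG.nbytes / dt / 1e6:.1f} MB/s")
        elif rank == dst:
            buf = np.empty_like(MSG)
            for _ in range(REPS):
                comm.Recv(buf, source=src)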
Scalable architecture for a room temperature solid-state quantum information processor.
Yao, N Y; Jiang, L; Gorshkov, A V; Maurer, P C; Giedke, G; Cirac, J I; Lukin, M D
2012-04-24
The realization of a scalable quantum information processor has emerged over the past decade as one of the central challenges at the interface of fundamental science and engineering. Here we propose and analyse an architecture for a scalable, solid-state quantum information processor capable of operating at room temperature. Our approach is based on recent experimental advances involving nitrogen-vacancy colour centres in diamond. In particular, we demonstrate that the multiple challenges associated with operation at ambient temperature, individual addressing at the nanoscale, strong qubit coupling, robustness against disorder and low decoherence rates can be simultaneously achieved under realistic, experimentally relevant conditions. The architecture uses a novel approach to quantum information transfer and includes a hierarchy of control at successive length scales. Moreover, it alleviates the stringent constraints currently limiting the realization of scalable quantum processors and will provide fundamental insights into the physics of non-equilibrium many-body quantum systems.
The Latent Structure of Secure Base Script Knowledge
ERIC Educational Resources Information Center
Waters, Theodore E. A.; Fraley, R. Chris; Groh, Ashley M.; Steele, Ryan D.; Vaughn, Brian E.; Bost, Kelly K.; Veríssimo, Manuela; Coppola, Gabrielle; Roisman, Glenn I.
2015-01-01
There is increasing evidence that attachment representations abstracted from childhood experiences with primary caregivers are organized as a cognitive script describing secure base use and support (i.e., the "secure base script"). To date, however, the latent structure of secure base script knowledge has gone unexamined--this despite…
A hierarchical SVG image abstraction layer for medical imaging
NASA Astrophysics Data System (ADS)
Kim, Edward; Huang, Xiaolei; Tan, Gang; Long, L. Rodney; Antani, Sameer
2010-03-01
As medical imaging rapidly expands, there is an increasing need to structure and organize image data for efficient analysis, storage and retrieval. In response, a large fraction of research in the areas of content-based image retrieval (CBIR) and picture archiving and communication systems (PACS) has focused on structuring information to bridge the "semantic gap", a disparity between machine and human image understanding. An additional consideration in medical images is the organization and integration of clinical diagnostic information. As a step towards bridging the semantic gap, we design and implement a hierarchical image abstraction layer using an XML-based language, Scalable Vector Graphics (SVG). Our method encodes features from the raw image and clinical information into an extensible "layer" that can be stored in an SVG document and efficiently searched. Any feature extracted from the raw image, including color, texture, orientation, size, neighbor information, etc., can be combined in our abstraction with high-level descriptions or classifications. Our representation can also natively characterize an image in a hierarchical tree structure to support multiple levels of segmentation. Furthermore, being a World Wide Web Consortium (W3C) standard, SVG is able to be displayed by most web browsers, interacted with by ECMAScript (standardized scripting language, e.g. JavaScript, JScript), and indexed and retrieved by XML databases and XQuery. Using these open source technologies enables straightforward integration into existing systems. From our results, we show that the flexibility and extensibility of our abstraction facilitates effective storage and retrieval of medical images.
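As a sketch of the general idea (not the authors' actual schema), the Python standard library is enough to emit such an SVG layer; the element and attribute names below are illustrative assumptions:

# Sketch: encode a segmented image region plus annotations as an SVG
# "layer" that XML tools (databases, XQuery) can later search.
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)

svg = ET.Element(f"{{{SVG_NS}}}svg", width="512", height="512")
layer = ET.SubElement(svg, f"{{{SVG_NS}}}g", id="segmentation-layer")

# One segmented region: the polygon stores the boundary, while extra
# attributes carry extracted features and a high-level label
# (all names and values here are invented for illustration).
region = ET.SubElement(
    layer, f"{{{SVG_NS}}}polygon",
    points="100,100 180,90 200,160 120,170",
)
region.set("data-label", "lesion-candidate")
region.set("data-mean-intensity", "0.62")
region.set("data-area-px", "7400")

ET.ElementTree(svg).write("abstraction_layer.svg")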
Astronomy and Disabled: Implementation of new technologies to communicate science to new audiences
NASA Astrophysics Data System (ADS)
García, Beatriz; Ortiz Gil, Amelia; Proust, Dominique
2015-08-01
Commission 46 proposed in 2012 the creation of an interdisciplinary WG in which astronomers work together with technicians, educators and disability specialists to develop new teaching and learning strategies devoted to generating high-impact resources for disabled populations, which are usually distant from astronomy. Successful initiatives designed to research best practices in using new technologies to communicate science to these special audiences include the creation of models and applications, and the implementation of a database of didactic approaches and tools. Among the achievements of this proposal are original developments in: design of electronics, design of original software, scripts and music for planetarium shows, design of models and their associated explanatory scripts, printed material in Braille and 3D, filming associated with sign language, compilation of interviews and documents, and the recent project on the Sign Language Universal Encyclopedic Dictionary, based on the proposal by Proust (2009), which proposes the dissemination of a unique language for the deaf worldwide, associated with astronomical terms. We present, on behalf of the WG, some of the achievements, developments, and success stories of recent applications of this new approach to science for all, with the new "publics of science" in mind, and new challenges.
Development of a Web-Based Distributed Interactive Simulation (DIS) Environment Using JavaScript
2014-09-01
…scripting that lets users change or interact with web content depending on user input, in contrast with server-side scripts such as PHP, Java and… …transfer; DIS usually broadcasts or multicasts its PDUs over UDP sockets. [Section fragment: "3. JavaScript"] JavaScript is the scripting language of the web, and all… …IDE) for developing desktop, mobile and web applications with Java, C++, HTML5, JavaScript and more. [Section fragment: "b. Framework"] The DIS implementation of…
Decision Support Systems for Launch and Range Operations Using Jess
NASA Technical Reports Server (NTRS)
Thirumalainambi, Rajkumar
2007-01-01
The virtual test bed for launch and range operations developed at NASA Ames Research Center consists of various independent expert systems advising on weather effects, toxic gas dispersions and human health risk assessment during space-flight operations. An individual dedicated server supports each expert system, and the master system gathers information from the dedicated servers to support the launch decision-making process. Since the test bed is web-based, reducing network traffic and optimizing the knowledge base are critical to the success of real-time or near real-time operations. Jess, a fast rule engine and powerful scripting environment developed at Sandia National Laboratory, has been adopted to build the expert systems, providing robustness and scalability. Jess also supports XML representation of the knowledge base with forward- and backward-chaining inference mechanisms. Facts added to working memory during run-time operations facilitate analyses of multiple scenarios. The knowledge base can be distributed, with one inference engine performing the inference process. This paper discusses details of the knowledge base and inference engine using Jess for a launch and range virtual test bed.
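Jess itself is a Java rule engine, so the following is only a toy Python illustration of the forward-chaining idea the abstract relies on, with facts added to working memory triggering rules until a fixed point; the rule content loosely echoes the launch-advisory scenario and is entirely invented:

# Toy forward-chaining sketch (not Jess): rules fire on facts in
# working memory and assert new facts until nothing changes.
working_memory = {"wind_speed_kt": 32, "toxic_plume": False}

# Each rule: (condition over working memory, fact it asserts when fired).
RULES = [
    (lambda wm: wm["wind_speed_kt"] > 30, ("weather_violation", True)),
    (lambda wm: wm.get("weather_violation"), ("launch_go", False)),
]

changed = True
while changed:                       # keep firing until a fixed point
    changed = False
    for condition, (fact, value) in RULES:
        if condition(working_memory) and working_memory.get(fact) != value:
            working_memory[fact] = value
            changed = True

print(working_memory)  # {'wind_speed_kt': 32, 'toxic_plume': False,
                       #  'weather_violation': True, 'launch_go': False}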
Charming Users into Scripting CIAO with Python
NASA Astrophysics Data System (ADS)
Burke, D. J.
2011-07-01
The Science Data Systems group of the Chandra X-ray Center provides a number of scripts and Python modules that extend the capabilities of CIAO. Experience in converting the existing scripts—written in a variety of languages such as bash, csh/tcsh, Perl and S-Lang—to Python, and conversations with users, led to the development of the ciao_contrib.runtool module. This allows users to easily run CIAO tools from Python scripts, and utilizes the metadata provided by the parameter-file system to create an API that provides the flexibility and safety guarantees of the command-line. The module is provided to the user community and is being used within our group to create new scripts.
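A short sketch of the calling style the module exposes, based on its public description (requires a CIAO installation; the file names here are illustrative):

# Sketch of using ciao_contrib.runtool: each CIAO tool becomes a Python
# callable whose keyword arguments mirror its parameter file, so the
# command-line defaults and checks carry over to scripts.
from ciao_contrib.runtool import dmcopy

dmcopy.punlearn()                         # reset to parameter-file defaults
dmcopy(infile="evt2.fits[energy=500:7000]",
       outfile="evt2_filtered.fits",
       clobber=True)
print(dmcopy.outfile)                     # parameters remain inspectable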
A General-purpose Framework for Parallel Processing of Large-scale LiDAR Data
NASA Astrophysics Data System (ADS)
Li, Z.; Hodgson, M.; Li, W.
2016-12-01
Light detection and ranging (LiDAR) technologies have proven efficient for quickly obtaining very detailed Earth surface data over large spatial extents. Such data are important for scientific discovery in the Earth and ecological sciences and for natural disaster and environmental applications. However, handling LiDAR data poses grand geoprocessing challenges due to data intensity and computational intensity. Previous studies achieved notable success in parallel processing of LiDAR data to address these challenges. However, these studies either relied on high-performance computers and specialized hardware (GPUs) or focused mostly on finding customized solutions for some specific algorithms. We developed a general-purpose scalable framework coupled with a sophisticated data decomposition and parallelization strategy to efficiently handle big LiDAR data. Specifically, 1) a tile-based spatial index is proposed to manage big LiDAR data in the scalable and fault-tolerant Hadoop distributed file system, 2) two spatial decomposition techniques are developed to enable efficient parallelization of different types of LiDAR processing tasks, and 3) by coupling existing LiDAR processing tools with Hadoop, this framework is able to conduct a variety of LiDAR data processing tasks in parallel in a highly scalable distributed computing environment. The performance and scalability of the framework is evaluated with a series of experiments conducted on a real LiDAR dataset using a proof-of-concept prototype system. The results show that the proposed framework 1) is able to handle massive LiDAR data more efficiently than standalone tools; and 2) provides almost linear scalability in terms of either increased workload (data volume) or increased computing nodes with both spatial decomposition strategies. We believe that the proposed framework provides valuable references on developing a collaborative cyberinfrastructure for processing big earth science data in a highly scalable environment.
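An illustrative sketch (not the authors' code) of the tile-based indexing idea: map each return to a fixed-size tile key so points can be grouped and processed tile by tile; the tile size and the in-memory grouping are assumptions standing in for the map step of a Hadoop job:

# Toy tile-based spatial partitioning of LiDAR returns.
from collections import defaultdict

TILE_SIZE = 100.0  # tile edge length in the data's horizontal units (assumed)

def tile_key(x: float, y: float) -> tuple[int, int]:
    """Integer tile coordinates of the tile containing point (x, y)."""
    return (int(x // TILE_SIZE), int(y // TILE_SIZE))

def partition(points):
    """Group (x, y, z) returns by tile; a real system would emit the
    key in a map step and let the framework do the grouping."""
    tiles = defaultdict(list)
    for x, y, z in points:
        tiles[tile_key(x, y)].append((x, y, z))
    return tiles

print(partition([(12.0, 5.0, 101.3), (150.0, 20.0, 98.7)]))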
ERIC Educational Resources Information Center
Tsompanoudi, Despina; Satratzemi, Maya; Xinogalos, Stelios
2016-01-01
The results presented in this paper contribute to research on two different areas of teaching methods: distributed pair programming (DPP) and computer-supported collaborative learning (CSCL). An evaluation study of a DPP system that supports collaboration scripts was conducted over one semester of a computer science course. Seventy-four students…
Earth-Base: A Free And Open Source, RESTful Earth Sciences Platform
NASA Astrophysics Data System (ADS)
Kishor, P.; Heim, N. A.; Peters, S. E.; McClennen, M.
2012-12-01
This presentation describes the motivation, concept, and architecture behind Earth-Base, a web-based, RESTful data-management, analysis and visualization platform for earth sciences data. Traditionally, web applications have been built by directly accessing data from a database using a scripting language. While such applications are great at bringing results to a wide audience, they are limited in scope to the imagination and capabilities of the application developer. Earth-Base decouples the data store from the web application by introducing an intermediate "data application" tier. The data application's job is to query the data store using self-documented, RESTful URIs, and send the results back formatted as JavaScript Object Notation (JSON). Decoupling the data store from the application allows virtually limitless flexibility in developing applications, both web-based for human consumption and programmatic for machine consumption. It also allows outside developers to use the data in their own applications, potentially creating applications that the original data creator and app developer may not have even thought of. Standardized specifications for URI-based querying and JSON-formatted results make querying and developing applications easy. URI-based querying also allows utilizing distributed datasets easily. Companion mechanisms for querying data snapshots aka time-travel, usage tracking and license management, and verification of semantic equivalence of data are also described. The latter promotes the "What You Expect Is What You Get" (WYEIWYG) principle that can aid in data citation and verification.
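A minimal sketch of the query pattern this describes; the host, path and parameters are hypothetical, invented for illustration, but the GET-a-URI, parse-JSON flow is the point:

# Client side of a RESTful data application: request a URI, get JSON.
import json
import urllib.parse
import urllib.request

BASE = "https://example.org/earthbase/api/v1/occurrences"  # hypothetical
params = {"taxon": "Brachiopoda", "interval": "Permian", "format": "json"}

with urllib.request.urlopen(BASE + "?" + urllib.parse.urlencode(params)) as resp:
    records = json.load(resp)

# Because results are plain JSON over HTTP, the same query serves web
# apps, scripts and third-party tools alike.
for rec in records.get("results", []):
    print(rec.get("id"), rec.get("name"))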
Kiesewetter, Jan; Kollar, Ingo; Fernandez, Nicolas; Lubarsky, Stuart; Kiessling, Claudia; Fischer, Martin R; Charlin, Bernard
2016-09-01
Clinical work occurs in a context which is heavily influenced by social interactions. The absence of theoretical frameworks underpinning the design of collaborative learning has become a roadblock for interprofessional education (IPE). This article proposes a script-based framework for the design of IPE. This framework provides suggestions for designing learning environments intended to foster competences we feel are fundamental to successful interprofessional care. The current literature describes two script concepts: "illness scripts" and "internal/external collaboration scripts". Illness scripts are specific knowledge structures that link general disease categories and specific examples of diseases. "Internal collaboration scripts" refer to an individual's knowledge about how to interact with others in a social situation. "External collaboration scripts" are instructional scaffolds designed to help groups collaborate. Instructional research relating to illness scripts and internal collaboration scripts supports (a) putting learners in authentic situations in which they need to engage in clinical reasoning, and (b) scaffolding their interaction with others with "external collaboration scripts". Thus, well-established experiential instructional approaches should be combined with more fine-grained script-based scaffolding approaches. The resulting script-based framework offers instructional designers insights into how students can be supported to develop the necessary skills to master complex interprofessional clinical situations.
NASA Astrophysics Data System (ADS)
Niepold, F.; Byers, A.
2009-12-01
The scientific complexities of global climate change, with wide-ranging economic and social significance, create an intellectual challenge that mandates greater public understanding of climate change research and the concurrent ability to make informed decisions. The critical need for an engaged, science-literate public has been repeatedly emphasized by multi-disciplinary entities like the Intergovernmental Panel on Climate Change (IPCC), the National Academies (Rising Above the Gathering Storm report), and the interagency group responsible for the recently updated Climate Literacy: The Essential Principles of Climate Science. There is a clear need for an American public that is climate literate and for K-12 teachers confident in teaching relevant science content. A key goal in the creation of a climate-literate society is to enhance teachers' knowledge of global climate change through a national, scalable, and sustainable professional development system, using compelling climate science data and resources to stimulate inquiry-based student interest in science, technology, engineering, and mathematics (STEM). This session will explore innovative e-learning technologies to address the limitations of one-time, face-to-face workshops, thereby adding significant sustainability and scalability. The resources developed will help teachers sift through the vast volume of global climate change information and provide research-based, high-quality science content and pedagogical information to help teachers effectively teach their students about the complex issues surrounding global climate change. The Learning Center is NSTA's e-professional development portal to help the nation's teachers and informal educators learn about the scientific complexities of global climate change through research-based techniques, and it is proven to significantly improve teacher science content knowledge.
ERIC Educational Resources Information Center
Monroy, Carlos; Rangel, Virginia Snodgrass; Whitaker, Reid
2014-01-01
In this paper, we discuss a scalable approach for integrating learning analytics into an online K-12 science curriculum. A description of the curriculum and the underlying pedagogical framework is followed by a discussion of the challenges to be tackled as part of this integration. We include examples of data visualization based on teacher usage…
NASA Astrophysics Data System (ADS)
Jo, Sunhwan; Jiang, Wei
2015-12-01
Replica Exchange with Solute Tempering (REST2) is a powerful sampling-enhancement algorithm for molecular dynamics (MD) in that it needs a significantly smaller number of replicas yet achieves higher sampling efficiency than the standard temperature-exchange algorithm. In this paper, we extend the applicability of REST2 to quantitative biophysical simulations through a robust and generic implementation in the highly scalable MD software NAMD. The rescaling procedure for the force-field parameters controlling the REST2 "hot region" is implemented in NAMD at the source-code level. A user can conveniently select the hot region through VMD and write the selection information into a PDB file. The rescaling keyword/parameter is exposed through the NAMD Tcl script interface, which enables on-the-fly simulation parameter changes. Our implementation of REST2 lives within a communication-enabled Tcl script built on top of Charm++; thus the communication overhead of an exchange attempt is vanishingly small. Such a generic implementation facilitates seamless cooperation between REST2 and other modules of NAMD to provide enhanced sampling for complex biomolecular simulations. Three challenging applications, including a native REST2 simulation of a peptide folding-unfolding transition, free energy perturbation/REST2 for the absolute binding affinity of a protein-ligand complex, and umbrella sampling/REST2 Hamiltonian exchange for free energy landscape calculation, were carried out on an IBM Blue Gene/Q supercomputer to demonstrate the efficacy of REST2 based on the present implementation.
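For orientation, the REST2 rescaling applied to the hot region (following Wang, Friesner and Berne, J. Phys. Chem. B, 2011, on which the algorithm is based; the notation here is ours) is commonly written as:

% REST2 scaled potential for replica m: intra-hot-region terms are
% scaled by beta_m/beta_0, hot-region/environment cross terms by its
% square root, and environment-environment terms not at all.
\[
  E_m(X) = \frac{\beta_m}{\beta_0}\,E_{\mathrm{pp}}(X)
         + \sqrt{\frac{\beta_m}{\beta_0}}\,E_{\mathrm{pw}}(X)
         + E_{\mathrm{ww}}(X)
\]

Here \(E_{\mathrm{pp}}\) is the internal energy of the hot region (the "solute"), \(E_{\mathrm{pw}}\) its interaction with the environment, \(E_{\mathrm{ww}}\) the environment self-energy, and \(\beta_m/\beta_0 < 1\) corresponds to an effective temperature above the simulation temperature. Because only hot-region force-field terms are rescaled, far fewer replicas are needed than in standard temperature exchange.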
A Browser-Based Multi-User Working Environment for Physicists
NASA Astrophysics Data System (ADS)
Erdmann, M.; Fischer, R.; Glaser, C.; Klingebiel, D.; Komm, M.; Müller, G.; Rieger, M.; Steggemann, J.; Urban, M.; Winchen, T.
2014-06-01
Many programs in experimental particle physics do not yet have a graphical interface, or impose strong platform and software requirements. With the most recent development of the VISPA project, we provide graphical interfaces to existing software programs and access to multiple computing clusters through standard web browsers. The scalable client-server system allows analyses to be performed in sizable teams, and relieves the individual physicist of installing and maintaining a software environment. The VISPA graphical interfaces are implemented in HTML, JavaScript and extensions to the Python webserver. The webserver uses SSH and RPC to access user data, code and processes on remote sites. As example applications we present graphical interfaces for steering the reconstruction framework OFFLINE of the Pierre Auger experiment, and the analysis development toolkit PXL. The browser-based VISPA system was field-tested in the biweekly homework of a third-year physics course by more than 100 students. We discuss the system deployment and the evaluation by the students.
Amira: Multi-Dimensional Scientific Visualization for the GeoSciences in the 21st Century
NASA Astrophysics Data System (ADS)
Bartsch, H.; Erlebacher, G.
2003-12-01
amira (www.amiravis.com) is a general purpose framework for 3D scientific visualization that meets the needs of the non-programmer, the script writer, and the advanced programmer alike. Provided modules may be visually assembled in an interactive manner to create complex visual displays. These modules and their associated user interfaces are controlled either through a mouse, or via an interactive scripting mechanism based on Tcl. We provide interactive demonstrations of the various features of Amira and explain how these may be used to enhance the comprehension of datasets in use in the Earth Sciences community. Its features will be illustrated on scalar and vector fields on grid types ranging from Cartesian to fully unstructured. Specialized extension modules developed by some of our collaborators will be illustrated [1]. These include a module to automatically choose values for salient isosurface identification and extraction, and color maps suitable for volume rendering. During the session, we will present several demonstrations of remote networking, processing of very large spatio-temporal datasets, and various other projects that are underway. In particular, we will demonstrate WEB-IS, a java-applet interface to Amira that allows script editing via the web, and selected data analysis [2]. [1] G. Erlebacher, D. A. Yuen, F. Dubuffet, "Case Study: Visualization and Analysis of High Rayleigh Number -- 3D Convection in the Earth's Mantle", Proceedings of Visualization 2002, pp. 529--532. [2] Y. Wang, G. Erlebacher, Z. A. Garbow, D. A. Yuen, "Web-Based Service of a Visualization Package 'amira' for the Geosciences", Visual Geosciences, 2003.
ERIC Educational Resources Information Center
Gijlers, Hannie; Weinberger, Armin; van Dijk, Alieke Mattia; Bollen, Lars; van Joolingen, Wouter
2013-01-01
Creating shared representations can foster knowledge acquisition by elementary school students by promoting active integration and translation of new information. In this study, we investigate to what extent awareness support and scripting facilitate knowledge construction and discourse quality of elementary school students (n = 94) in a…
"The Best App Is the Teacher" Introducing Classroom Scripts in Technology-Enhanced Education
ERIC Educational Resources Information Center
Montrieux, H.; Raes, A.; Schellens, T.
2017-01-01
A quasi-experimental study was set up in secondary education to study the role of teachers while implementing tablet devices in science education. Three different classroom scripts that guided students and teachers' actions during the intervention on two social planes (group and classroom level) are compared. The main goal was to investigate which…
Citizen science provides a reliable and scalable tool to track disease-carrying mosquitoes.
Palmer, John R B; Oltra, Aitana; Collantes, Francisco; Delgado, Juan Antonio; Lucientes, Javier; Delacour, Sarah; Bengoa, Mikel; Eritja, Roger; Bartumeus, Frederic
2017-10-24
Recent outbreaks of Zika, chikungunya and dengue highlight the importance of better understanding the spread of disease-carrying mosquitoes across multiple spatio-temporal scales. Traditional surveillance tools are limited by jurisdictional boundaries and cost constraints. Here we show how a scalable citizen science system can solve this problem by combining citizen scientists' observations with expert validation and correcting for sampling effort. Our system provides accurate early warning information about the Asian tiger mosquito (Aedes albopictus) invasion in Spain, well beyond that available from traditional methods, and vital for public health services. It also provides estimates of tiger mosquito risk comparable to those from traditional methods but more directly related to the human-mosquito encounters that are relevant for epidemiological modelling and scalable enough to cover the entire country. These results illustrate how powerful public participation in science can be and suggest citizen science is positioned to revolutionize mosquito-borne disease surveillance worldwide.
Waters, Theodore E A; Bosmans, Guy; Vandevivere, Eva; Dujardin, Adinda; Waters, Harriet S
2015-08-01
Recent work examining the content and organization of attachment representations suggests that 1 way in which we represent the attachment relationship is in the form of a cognitive script. This work has largely focused on early childhood or adolescence/adulthood, leaving a large gap in our understanding of script-like attachment representations in the middle childhood period. We present 2 studies and provide 3 critical pieces of evidence regarding the presence of a script-like representation of the attachment relationship in middle childhood. We present evidence that a middle childhood attachment script assessment tapped a stable underlying script using samples drawn from 2 western cultures, the United States (Study 1) and Belgium (Study 2). We also found evidence suggestive of the intergenerational transmission of secure base script knowledge (Study 1) and relations between secure base script knowledge and symptoms of psychopathology in middle childhood (Study 2). The results from this investigation represent an important downward extension of the secure base script construct. (c) 2015 APA, all rights reserved.
Towards Scalable Entangled Photon Sources with Self-Assembled InAs /GaAs Quantum Dots
NASA Astrophysics Data System (ADS)
Wang, Jianping; Gong, Ming; Guo, G.-C.; He, Lixin
2015-08-01
The biexciton cascade process in self-assembled quantum dots (QDs) provides an ideal system for realizing deterministic entangled photon-pair sources, which are essential to quantum information science. The entangled photon pairs have recently been generated in experiments after eliminating the fine-structure splitting (FSS) of excitons using a number of different methods. Thus far, however, QD-based sources of entangled photons have not been scalable because the wavelengths of QDs differ from dot to dot. Here, we propose a wavelength-tunable entangled photon emitter mounted on a three-dimensional stressor, in which the FSS and exciton energy can be tuned independently, thereby enabling photon entanglement between dissimilar QDs. We confirm these results via atomistic pseudopotential calculations. This provides a first step towards future realization of scalable entangled photon generators for quantum information applications.
Developing cloud applications using the e-Science Central platform.
Hiden, Hugo; Woodman, Simon; Watson, Paul; Cala, Jacek
2013-01-28
This paper describes the e-Science Central (e-SC) cloud data processing system and its application to a number of e-Science projects. e-SC provides both software as a service (SaaS) and platform as a service for scientific data management, analysis and collaboration. It is a portable system and can be deployed on both private (e.g. Eucalyptus) and public clouds (Amazon AWS and Microsoft Windows Azure). The SaaS application allows scientists to upload data, edit and run workflows and share results in the cloud, using only a Web browser. It is underpinned by a scalable cloud platform consisting of a set of components designed to support the needs of scientists. The platform is exposed to developers so that they can easily upload their own analysis services into the system and make these available to other users. A representational state transfer-based application programming interface (API) is also provided so that external applications can leverage the platform's functionality, making it easier to build scalable, secure cloud-based applications. This paper describes the design of e-SC, its API and its use in three different case studies: spectral data visualization, medical data capture and analysis, and chemical property prediction.
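A hypothetical sketch of driving such a platform through its REST API; the endpoint paths, payload fields and token header below are invented for illustration and are not e-SC's documented interface:

# Hypothetical client of a cloud science platform's REST API: ask the
# server to run an already-uploaded workflow on a dataset.
import json
import urllib.request

BASE = "https://esc.example.org/api"          # hypothetical server
HDRS = {"Authorization": "Bearer <token>",    # placeholder credential
        "Content-Type": "application/json"}

payload = json.dumps({"workflowId": "wf-123", "datasetId": "ds-456"}).encode()
req = urllib.request.Request(BASE + "/workflows/run", data=payload,
                             headers=HDRS, method="POST")
with urllib.request.urlopen(req) as resp:
    invocation = json.load(resp)

print(invocation.get("status"))   # e.g. queued / running / finished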
Developing cloud applications using the e-Science Central platform
Hiden, Hugo; Woodman, Simon; Watson, Paul; Cala, Jacek
2013-01-01
This paper describes the e-Science Central (e-SC) cloud data processing system and its application to a number of e-Science projects. e-SC provides both software as a service (SaaS) and platform as a service for scientific data management, analysis and collaboration. It is a portable system and can be deployed on both private (e.g. Eucalyptus) and public clouds (Amazon AWS and Microsoft Windows Azure). The SaaS application allows scientists to upload data, edit and run workflows and share results in the cloud, using only a Web browser. It is underpinned by a scalable cloud platform consisting of a set of components designed to support the needs of scientists. The platform is exposed to developers so that they can easily upload their own analysis services into the system and make these available to other users. A representational state transfer-based application programming interface (API) is also provided so that external applications can leverage the platform's functionality, making it easier to build scalable, secure cloud-based applications. This paper describes the design of e-SC, its API and its use in three different case studies: spectral data visualization, medical data capture and analysis, and chemical property prediction. PMID:23230161
NASA Astrophysics Data System (ADS)
Larour, Eric; Cheng, Daniel; Perez, Gilberto; Quinn, Justin; Morlighem, Mathieu; Duong, Bao; Nguyen, Lan; Petrie, Kit; Harounian, Silva; Halkides, Daria; Hayes, Wayne
2017-12-01
Earth system models (ESMs) are becoming increasingly complex, requiring extensive knowledge and experience to deploy and use in an efficient manner. They run on high-performance architectures that are significantly different from the everyday environments that scientists use to pre- and post-process results (i.e., MATLAB, Python). This results in models that are hard to use for non-specialists and are increasingly specific in their application. It also makes them relatively inaccessible to the wider science community, not to mention to the general public. Here, we present a new software/model paradigm that attempts to bridge the gap between the science community and the complexity of ESMs by developing a new JavaScript application program interface (API) for the Ice Sheet System Model (ISSM). The aforementioned API allows cryosphere scientists to run ISSM on the client side of a web page within the JavaScript environment. When combined with a web server running ISSM (using a Python API), it enables the serving of ISSM computations in an easy and straightforward way. The deep integration and similarities between all the APIs in ISSM (MATLAB, Python, and now JavaScript) significantly shortens and simplifies the turnaround of state-of-the-art science runs and their use by the larger community. We demonstrate our approach via a new Virtual Earth System Laboratory (VESL) website (http://vesl.jpl.nasa.gov, VESL(2017)).
Testing SLURM open source batch system for a Tier1/Tier2 HEP computing facility
NASA Astrophysics Data System (ADS)
Donvito, Giacinto; Salomoni, Davide; Italiano, Alessandro
2014-06-01
In this work we describe the testing activities carried out to verify whether the SLURM batch system could be used as the production batch system of a typical Tier1/Tier2 HEP computing center. SLURM (Simple Linux Utility for Resource Management) is an open-source batch system developed mainly by Lawrence Livermore National Laboratory, SchedMD, Linux NetworX, Hewlett-Packard, and Groupe Bull. Testing focused both on verifying the functionality of the batch system and on the performance that SLURM is able to offer. We first describe our initial set of requirements. Functionally, we started configuring SLURM so that it replicates all the scheduling policies already used in production in the computing centers involved in the test, i.e. INFN-Bari and the INFN-Tier1 at CNAF, Bologna. Currently, the INFN-Tier1 is using IBM LSF (Load Sharing Facility), while INFN-Bari, an LHC Tier2 for both CMS and ALICE, is using Torque as resource manager and MAUI as scheduler. We show how we configured SLURM to enable several scheduling functionalities such as hierarchical fair share, quality of service, user-based and group-based priority, limits on the number of jobs per user/group/queue, job age scheduling, job size scheduling, and scheduling of consumable resources. We then show how different job types, such as serial, MPI, multi-threaded, whole-node and interactive jobs, can be managed. Tests of ACLs on queues and other resources are then described. We also verified a distinctive SLURM feature, event triggers, which can be used to configure specific actions in response to each possible event in the batch system. We also tested highly available configurations for the master node. This feature is of paramount importance, since a mandatory requirement in our scenarios is a working farm cluster even in case of hardware failure of the server(s) hosting the batch system. Among our requirements is also the ability to handle pre-execution and post-execution scripts, with controlled handling of failures of such scripts. This feature is heavily used, for example, at the INFN-Tier1 to check the health status of a worker node before the execution of each job. Pre- and post-execution scripts are also important to let WNoDeS, the IaaS cloud solution developed at INFN, use SLURM as its resource manager. WNoDeS has already been supporting the LSF and Torque batch systems for some time; in this work we show the work done so that WNoDeS supports SLURM as well. Finally, we present several performance tests carried out to verify SLURM scalability and reliability, detailing scalability tests both in terms of managed nodes and of queued jobs.
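As a concrete illustration of exercising such a configuration, the sketch below submits serial and MPI jobs and inspects fair-share and priority state through SLURM's standard command-line tools from Python. The partition, QoS and command names are invented for illustration; only the SLURM CLI tools and flags themselves (sbatch, sprio, sshare) are real.

```python
# Minimal sketch of job submission against a SLURM configuration like the
# one described above; partition/QoS names and payload commands are made up.
import subprocess

def submit(cmd, *, qos="normal", partition="cms", ntasks=1):
    """Submit a job with sbatch and return its job ID."""
    out = subprocess.run(
        ["sbatch", "--parsable", f"--qos={qos}",
         f"--partition={partition}", f"--ntasks={ntasks}",
         "--wrap", cmd],
        check=True, capture_output=True, text=True).stdout
    return out.strip().split(";")[0]   # --parsable prints "jobid[;cluster]"

serial_id = submit("./analysis.sh")                  # serial job
mpi_id = submit("mpirun ./simulate", ntasks=64)      # MPI job

# Inspect the priority factors and fair-share state the scheduler computed.
subprocess.run(["sprio", "-j", serial_id])
subprocess.run(["sshare", "-a"])
```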
Waters, Theodore E. A.; Ruiz, Sarah K.; Roisman, Glenn I.
2016-01-01
Increasing evidence suggests that attachment representations take at least two forms—a secure base script and an autobiographical narrative of childhood caregiving experiences. This study presents data from the first 26 years of the Minnesota Longitudinal Study of Risk and Adaptation (N = 169), examining the developmental origins of secure base script knowledge in a high-risk sample, and testing alternative models of the developmental sequencing of the construction of attachment representations. Results demonstrated that secure base script knowledge was predicted by observations of maternal sensitivity across childhood and adolescence. Further, findings suggest that the construction of a secure base script supports the development of a coherent autobiographical representation of childhood attachment experiences with primary caregivers by early adulthood. PMID:27302650
Vaughn, Brian E.; Waters, Theodore E. A.; Steele, Ryan D.; Roisman, Glenn I.; Bost, Kelly K.; Truitt, Warren; Waters, Harriet S.; Booth-LaForce, Cathryn
2016-01-01
Although attachment theory claims that early attachment representations reflecting the quality of the child’s “lived experiences” are maintained across developmental transitions, evidence that has emerged over the last decade suggests that the association between early relationship quality and adolescents’ attachment representations is fairly modest in magnitude. We used aspects of parenting beyond sensitivity over childhood and adolescence and early security to predict adolescents’ scripted attachment representations. At age 18 years, 673 participants from the NICHD Study of Early Child Care and Youth Development (SECCYD) completed the Attachment Script Assessment (ASA) from which we derived an assessment of secure base script knowledge. Measures of secure base support from childhood through age 15 years (e.g., parental monitoring of child activity, father presence in the home) were selected as predictors and accounted for an additional 8% of the variance in secure base script knowledge scores above and beyond direct observations of sensitivity and early attachment status alone, suggesting that adolescents’ scripted attachment representations reflect multiple domains of parenting. Cognitive and demographic variables also significantly increased predicted variance in secure base script knowledge by 2% each. PMID:27032953
A Highly Scalable Data Service (HSDS) using Cloud-based Storage Technologies for Earth Science Data
NASA Astrophysics Data System (ADS)
Michaelis, A.; Readey, J.; Votava, P.; Henderson, J.; Willmore, F.
2017-12-01
Cloud-based infrastructure may offer several key benefits of scalability, built-in redundancy, security mechanisms and reduced total cost of ownership as compared with a traditional data center approach. However, most of the tools and legacy software systems developed for online data repositories within the federal government were not developed with a cloud-based infrastructure in mind and do not fully take advantage of commonly available cloud-based technologies. Moreover, services based on object storage are well established and provided by all the leading cloud service providers (Amazon Web Services, Microsoft Azure, Google Cloud, etc.), and they can often provide unmatched "scale-out" capabilities and data availability to a large and growing consumer base at a price point unachievable with in-house solutions. We describe a system that utilizes object storage rather than traditional file-system-based storage to serve Earth science data. The system described is not only cost effective, but shows a performance advantage for running many different analytics tasks in the cloud. To enable compatibility with existing tools and applications, we outline client libraries that are API-compatible with existing libraries for HDF5 and NetCDF4. Performance of the system is demonstrated using cloud services running on Amazon Web Services.
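A minimal sketch of what such h5py-compatible client access can look like, written against the h5py-style interface of the h5pyd package (one realization of the client-library approach the abstract describes); the server endpoint and domain path below are placeholders.

```python
# Sketch of reading object-storage-backed data through an h5py-compatible
# client; the endpoint URL and domain path are placeholders.
import h5pyd  # pip install h5pyd; drop-in h5py-style client for HSDS

f = h5pyd.File("/shared/nasa/merra2/jan2017.h5", "r",
               endpoint="http://hsds.example.org:5101")  # placeholder server
temps = f["/T2M"]            # dataset object, same API as h5py
slab = temps[0, 100:200, :]  # only the chunks covering this slice are fetched
print(slab.mean())
f.close()
```

The point of the API compatibility is visible here: the slicing line is identical to local h5py code, while the actual reads are served from object storage by the remote service.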
Nanomanufacturing-related programs at NSF
NASA Astrophysics Data System (ADS)
Cooper, Khershed P.
2015-08-01
The National Science Foundation is meeting the challenge of transitioning lab-scale nanoscience and technology to commercial-scale through several nanomanufacturing-related research programs. The goal of the core Nanomanufacturing (NM) and the inter-disciplinary Scalable Nanomanufacturing (SNM) programs is to meet the barriers to manufacturability at the nano-scale by developing the fundamental principles for the manufacture of nanomaterials, nanostructures, nanodevices, and engineered nanosystems. These programs address issues such as scalability, reliability, quality, performance, yield, metrics, and cost, among others. The NM and SNM programs seek nano-scale manufacturing ideas that are transformative, that will be widely applicable and that will have far-reaching technological and societal impacts. It is envisioned that the results from these basic research programs will provide the knowledge base for larger programs such as the manufacturing Nanotechnology Science and Engineering Centers (NSECs) and the Nanosystems Engineering Research Centers (NERCs). Besides brief descriptions of these different programs, this paper will include discussions on novel
TriG: Next Generation Scalable Spaceborne GNSS Receiver
NASA Technical Reports Server (NTRS)
Tien, Jeffrey Y.; Okihiro, Brian Bachman; Esterhuizen, Stephan X.; Franklin, Garth W.; Meehan, Thomas K.; Munson, Timothy N.; Robison, David E.; Turbiner, Dmitry; Young, Lawrence E.
2012-01-01
TriG is the next-generation NASA scalable space GNSS science receiver. It will track all GNSS and additional signals (i.e., GPS, GLONASS, Galileo, Compass and DORIS). Its scalable 3U architecture is fully software- and firmware-reconfigurable, enabling optimization to meet specific mission requirements. The TriG GNSS EM is currently undergoing testing and is expected to complete full performance testing later this year.
Echelle Data Reduction Cookbook
NASA Astrophysics Data System (ADS)
Clayton, Martin
This document is the first version of the Starlink Echelle Data Reduction Cookbook. It contains scripts and procedures developed by regular or heavy users of the existing software packages. These scripts are generally of two types: templates, which readers may be able to modify to suit their particular needs, and utilities, which carry out a particular common task and can probably be used 'off-the-shelf'. Given the nature of this subject, the recipes are quite strongly tied to the software packages, rather than being science-data led. The major part of this document is divided into two sections dealing with scripts to be used with IRAF and with Starlink software (SUN/1).
Automating tasks in protein structure determination with the clipper python module.
McNicholas, Stuart; Croll, Tristan; Burnley, Tom; Palmer, Colin M; Hoh, Soon Wen; Jenkins, Huw T; Dodson, Eleanor; Cowtan, Kevin; Agirre, Jon
2018-01-01
Scripting programming languages provide the fastest means of prototyping complex functionality. Those with a syntax and grammar resembling human language also greatly enhance the maintainability of the produced source code. Furthermore, the combination of a powerful, machine-independent scripting language with binary libraries tailored for each computer architecture allows programs to break free from the tight boundaries of efficiency traditionally associated with scripts. In the present work, we describe how an efficient C++ crystallographic library such as Clipper can be wrapped, adapted and generalized for use in both crystallographic and electron cryo-microscopy applications, scripted with the Python language. We shall also place an emphasis on best practices in automation, illustrating how this can be achieved with this new Python module. © 2017 The Authors Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.
Horiguchi, Hiromasa; Yasunaga, Hideo; Hashimoto, Hideki; Ohe, Kazuhiko
2012-12-22
Secondary use of large scale administrative data is increasingly popular in health services and clinical research, where a user-friendly tool for data management is in great demand. MapReduce technology such as Hadoop is a promising tool for this purpose, though its use has been limited by the lack of user-friendly functions for transforming large scale data into wide table format, where each subject is represented by one row, for use in health services and clinical research. Since the original specification of Pig provides very few functions for column field management, we have developed a novel system called GroupFilterFormat to handle the definition of field and data content based on a Pig Latin script. We have also developed, as an open-source project, several user-defined functions to transform the table format using GroupFilterFormat and to deal with processing that considers date conditions. Having prepared dummy discharge summary data for 2.3 million inpatients and medical activity log data for 950 million events, we used the Elastic Compute Cloud environment provided by Amazon Inc. to execute processing speed and scaling benchmarks. In the speed benchmark test, the response time was significantly reduced and a linear relationship was observed between the quantity of data and processing time in both a small and a very large dataset. The scaling benchmark test showed clear scalability. In our system, doubling the number of nodes resulted in a 47% decrease in processing time. Our newly developed system is widely accessible as an open resource. This system is very simple and easy to use for researchers who are accustomed to using declarative command syntax for commercial statistical software and Structured Query Language. Although our system needs further sophistication to allow more flexibility in scripts and to improve efficiency in data processing, it shows promise in facilitating the application of MapReduce technology to efficient data processing with large scale administrative data in health services and clinical research.
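The heart of the transformation GroupFilterFormat supports is a long-to-wide pivot: one row per subject, one column per event type. The pure-Python sketch below shows that reshaping on invented log records; the Pig UDFs described above perform the same operation distributed across Hadoop nodes.

```python
# Toy long-to-wide pivot: activity-log rows become one wide row per patient.
# Record fields are invented for illustration.
from collections import defaultdict

events = [                     # (patient_id, event, value) log entries
    ("p1", "admit", "2011-02-01"),
    ("p1", "surgery", "2011-02-03"),
    ("p2", "admit", "2011-03-10"),
]

wide = defaultdict(dict)       # patient_id -> {event: value}
for pid, event, value in events:
    wide[pid][event] = value   # group by subject, spread events to columns

columns = ["admit", "surgery"]
for pid, row in sorted(wide.items()):
    print(pid, [row.get(c, "") for c in columns])
```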
Optimizing R with SparkR on a commodity cluster for biomedical research.
Sedlmayr, Martin; Würfl, Tobias; Maier, Christian; Häberle, Lothar; Fasching, Peter; Prokosch, Hans-Ulrich; Christoph, Jan
2016-12-01
Medical researchers are challenged today by the enormous amount of data collected in healthcare. Analysis methods such as genome-wide association studies (GWAS) are often computationally intensive and thus require enormous resources to be performed in a reasonable amount of time. While dedicated clusters and public clouds may deliver the desired performance, their use requires upfront financial efforts or anonymous data, which is often not possible for preliminary or occasional tasks. We explored the possibilities of building a private, flexible cluster for processing scripts in R based on commodity, non-dedicated hardware in our department. For this, a GWAS calculation in R on a single desktop computer, a Message Passing Interface (MPI) cluster, and a SparkR cluster were compared with regard to performance, scalability, quality, and simplicity. The original script had a projected runtime of three years on a single desktop computer. Optimizing the script in R already yielded a significant reduction in computing time (2 weeks). By using R-MPI and SparkR, we were able to parallelize the computation and reduce the time to less than three hours (2.6 h) on already available, standard office computers. While MPI is a proven approach in high-performance clusters, it requires rather static, dedicated nodes. SparkR and its Hadoop siblings allow for a dynamic, elastic environment with automated failure handling. SparkR also scales better with the number of nodes in the cluster than MPI due to optimized data communication. R is a popular environment for clinical data analysis. The new SparkR solution offers elastic resources and allows supporting big data analysis using R even on non-dedicated resources with minimal change to the original code. To unleash the full potential, additional efforts should be invested to customize and improve the algorithms, especially with regard to data distribution. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
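The scatter-the-computation pattern the authors used with SparkR can be sketched in PySpark as well; the per-SNP test below is a toy stand-in for the real association statistic, and the cluster master URL is a placeholder.

```python
# PySpark sketch of parallelizing a per-SNP computation across a cluster,
# analogous to the SparkR approach described above. The master URL is a
# placeholder and assoc_test is a toy stand-in for the real statistic.
from pyspark.sql import SparkSession
import random

spark = (SparkSession.builder.master("spark://head-node:7077")
         .appName("gwas-sketch").getOrCreate())
sc = spark.sparkContext

snps = [f"rs{i}" for i in range(100_000)]     # SNP identifiers to test

def assoc_test(snp_id):
    # stand-in for the per-SNP association computation done in R
    random.seed(snp_id)
    return snp_id, random.random()            # (snp, toy p-value)

# Spark splits the SNP list across executors and gathers the results;
# failed partitions are recomputed automatically (the elastic failure
# handling the abstract credits to Spark/Hadoop).
results = sc.parallelize(snps, numSlices=256).map(assoc_test).collect()
print(min(results, key=lambda r: r[1]))
```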
Examining Classroom Interactions Related to Difference in Students' Science Achievement.
ERIC Educational Resources Information Center
Zady, Madelon F.; Portes, Pedro R.; Ochs, V. Dan
2003-01-01
Examines the cognitive supports that underlie achievement in science using a cultural historical framework and the activity setting (AS) construct with five features: personnel, motivation, scripts, task demands, and beliefs. Reports four emergent phenomena--science activities, the building of learning, meaning in lessons, and the conflict over…
Large Scale Flutter Data for Design of Rotating Blades Using Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.
2012-01-01
A procedure to compute flutter boundaries of rotating blades is presented, based on the Navier-Stokes equations and a frequency-domain method compatible with industry practice. The procedure is first validated against a flapping-wing experiment for unsteady loads and against a fixed-wing experiment for the flutter boundary. Large-scale flutter computation is then demonstrated for a rotating blade: with a single job-submission script, a flutter boundary is obtained in 24 hours of wall-clock time on 100 cores, and the computation scales linearly with the number of cores. Tests with 1000 cores produced data for 10 flutter boundaries in 25 hours. Further wall-clock speed-up is possible by performing parallel computations within each case.
Spontaneous Emergence of Legibility in Writing Systems: The Case of Orientation Anisotropy.
Morin, Olivier
2018-03-01
Cultural forms are constrained by cognitive biases, and writing is thought to have evolved to fit basic visual preferences, but little is known about the history and mechanisms of that evolution. Cognitive constraints have been documented for the topology of script features, but not for their orientation. Orientation anisotropy in human vision, as revealed by the oblique effect, suggests that cardinal (vertical and horizontal) orientations, being easier to process, should be overrepresented in letters. As this study of 116 scripts shows, the orientation of strokes inside written characters massively favors cardinal directions, and it is organized in such a way as to make letter recognition easier: Cardinal and oblique strokes tend not to mix, and mirror symmetry is anisotropic, favoring vertical over horizontal symmetry. Phylogenetic analyses and recently invented scripts show that cultural evolution over the last three millennia cannot be the sole cause of these effects. Copyright © 2017 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
NASA Astrophysics Data System (ADS)
Huang, T.; Alarcon, C.; Quach, N. T.
2014-12-01
Capture, curation, and analysis are the typical activities performed at any given Earth science data center. Modern data management systems must be adaptable to heterogeneous science data formats, scalable to meet the mission's quality-of-service requirements, and able to manage the life cycle of any given science data product. Designing a scalable data management system doesn't happen overnight. It takes countless hours of refining, refactoring, retesting, and re-architecting. The Horizon data management and workflow framework, developed at the Jet Propulsion Laboratory, is a portable, scalable, and reusable framework for developing high-performance data management and product generation workflow systems that automate data capture, data curation, and data analysis activities. The Data Management and Archive System (DMAS) of NASA's Physical Oceanography Distributed Active Archive Center (PO.DAAC) is its core data infrastructure, handling the capture and distribution of hundreds of thousands of satellite observations each day, around the clock. DMAS is an application of the Horizon framework. The NASA Global Imagery Browse Services (GIBS) is the solution of NASA's Earth Observing System Data and Information System (EOSDIS) for making high-resolution global imagery available to the science communities. The Imagery Exchange (TIE), another application of the Horizon framework, is a core GIBS subsystem responsible for automating data capture and imagery generation to support the 12 EOSDIS distributed active archive centers and 17 Science Investigator-led Processing Systems (SIPS). This presentation discusses our ongoing effort in refining, refactoring, retesting, and re-architecting the Horizon framework to enable data-intensive science and its applications.
Waters, Theodore E A; Ruiz, Sarah K; Roisman, Glenn I
2017-01-01
Increasing evidence suggests that attachment representations take at least two forms: a secure base script and an autobiographical narrative of childhood caregiving experiences. This study presents data from the first 26 years of the Minnesota Longitudinal Study of Risk and Adaptation (N = 169), examining the developmental origins of secure base script knowledge in a high-risk sample and testing alternative models of the developmental sequencing of the construction of attachment representations. Results demonstrated that secure base script knowledge was predicted by observations of maternal sensitivity across childhood and adolescence. Furthermore, findings suggest that the construction of a secure base script supports the development of a coherent autobiographical representation of childhood attachment experiences with primary caregivers by early adulthood. © 2016 The Authors. Child Development © 2016 Society for Research in Child Development, Inc.
AgShare Open Knowledge: Improving Rural Communities through University Student Action Research
ERIC Educational Resources Information Center
Geith, Christine; Vignare, Karen
2013-01-01
The aim of AgShare is to create a scalable and sustainable collaboration of existing organizations for African publishing, localizing, and sharing of science-based teaching and learning materials that fill critical resource gaps in African MSc agriculture curriculum. Shared innovative practices are emerging through the AgShare projects, not only…
Knowledge Support and Automation for Performance Analysis with PerfExplorer 2.0
Huck, Kevin A.; Malony, Allen D.; Shende, Sameer; ...
2008-01-01
The integration of scalable performance analysis in parallel development tools is difficult. The potential size of data sets and the need to compare results from multiple experiments presents a challenge to manage and process the information. Simply to characterize the performance of parallel applications running on potentially hundreds of thousands of processor cores requires new scalable analysis techniques. Furthermore, many exploratory analysis processes are repeatable and could be automated, but are now implemented as manual procedures. In this paper, we will discuss the current version of PerfExplorer, a performance analysis framework which provides dimension reduction, clustering and correlation analysis of individual trials of large dimensions, and can perform relative performance analysis between multiple application executions. PerfExplorer analysis processes can be captured in the form of Python scripts, automating what would otherwise be time-consuming tasks. We will give examples of large-scale analysis results, and discuss the future development of the framework, including the encoding and processing of expert performance rules, and the increasing use of performance metadata.
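PerfExplorer's scripts drive its own Java-based analysis API; purely as a generic illustration of the dimension-reduction and clustering steps it automates, the sketch below runs the same kind of analysis with scikit-learn on synthetic per-thread profile data (not PerfExplorer's actual API).

```python
# Illustration of dimension reduction + clustering over per-thread
# performance profiles, the style of analysis PerfExplorer scripts capture.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 4096 threads x 200 event counters: two synthetic behavior groups
profiles = np.vstack([rng.normal(0.0, 1.0, (2048, 200)),
                      rng.normal(3.0, 1.0, (2048, 200))])

reduced = PCA(n_components=10).fit_transform(profiles)  # dimension reduction
labels = KMeans(n_clusters=2, n_init=10).fit_predict(reduced)

# Threads in different clusters behave differently; inspect group sizes.
print(np.bincount(labels))
```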
Scalability Issues for Remote Sensing Infrastructure: A Case Study.
Liu, Yang; Picard, Sean; Williamson, Carey
2017-04-29
For the past decade, a team of University of Calgary researchers has operated a large "sensor Web" to collect, analyze, and share scientific data from remote measurement instruments across northern Canada. This sensor Web receives real-time data streams from over a thousand Internet-connected sensors, with a particular emphasis on environmental data (e.g., space weather, auroral phenomena, atmospheric imaging). Through research collaborations, we had the opportunity to evaluate the performance and scalability of their remote sensing infrastructure. This article reports the lessons learned from our study, which considered both data collection and data dissemination aspects of their system. On the data collection front, we used benchmarking techniques to identify and fix a performance bottleneck in the system's memory management for TCP data streams, while also improving system efficiency on multi-core architectures. On the data dissemination front, we used passive and active network traffic measurements to identify and reduce excessive network traffic from the Web robots and JavaScript techniques used for data sharing. While our results are from one specific sensor Web system, the lessons learned may apply to other scientific Web sites with remote sensing infrastructure.
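The memory-management bottleneck described for TCP data streams belongs to a class that is easy to reproduce: accumulating a stream by repeated byte-string concatenation copies the whole buffer on every chunk. The generic sketch below contrasts that pattern with append-then-join; it is illustrative only, not the Calgary system's actual fix.

```python
# Generic demonstration of a stream-accumulation pitfall in Python:
# bytes += chunk is quadratic, appending chunks and joining once is linear.
import time

chunks = [b"x" * 4096] * 5000   # ~20 MB arriving in 4 KB TCP reads

t0 = time.perf_counter()
buf = b""
for c in chunks:
    buf += c                    # copies the growing prefix every iteration
naive = time.perf_counter() - t0

t0 = time.perf_counter()
parts = []
for c in chunks:
    parts.append(c)             # O(1) per chunk
buf2 = b"".join(parts)          # single final copy
fast = time.perf_counter() - t0

print(f"naive concat: {naive:.3f}s  append+join: {fast:.3f}s")
```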
Image-Based Environmental Monitoring Sensor Application Using an Embedded Wireless Sensor Network
Paek, Jeongyeup; Hicks, John; Coe, Sharon; Govindan, Ramesh
2014-01-01
This article discusses the experiences from the development and deployment of two image-based environmental monitoring sensor applications using an embedded wireless sensor network. Our system uses low-power image sensors and the Tenet general purpose sensing system for tiered embedded wireless sensor networks. It leverages Tenet's built-in support for reliable delivery of high rate sensing data, scalability and its flexible scripting language, which enables mote-side image compression and the ease of deployment. Our first deployment of a pitfall trap monitoring application at the James San Jacinto Mountain Reserve provided us with insights and lessons learned into the deployment of and compression schemes for these embedded wireless imaging systems. Our three month-long deployment of a bird nest monitoring application resulted in over 100,000 images collected from a 19-camera node network deployed over an area of 0.05 square miles, despite highly variable environmental conditions. Our biologists found the on-line, near-real-time access to images to be useful for obtaining data on answering their biological questions. PMID:25171121
Image-based environmental monitoring sensor application using an embedded wireless sensor network.
Paek, Jeongyeup; Hicks, John; Coe, Sharon; Govindan, Ramesh
2014-08-28
This article discusses the experiences from the development and deployment of two image-based environmental monitoring sensor applications using an embedded wireless sensor network. Our system uses low-power image sensors and the Tenet general purpose sensing system for tiered embedded wireless sensor networks. It leverages Tenet's built-in support for reliable delivery of high rate sensing data, scalability and its flexible scripting language, which enables mote-side image compression and the ease of deployment. Our first deployment of a pitfall trap monitoring application at the James San Jacinto Mountain Reserve provided us with insights and lessons learned into the deployment of and compression schemes for these embedded wireless imaging systems. Our three month-long deployment of a bird nest monitoring application resulted in over 100,000 images collected from a 19-camera node network deployed over an area of 0.05 square miles, despite highly variable environmental conditions. Our biologists found the on-line, near-real-time access to images to be useful for obtaining data on answering their biological questions.
PsyScript: a Macintosh application for scripting experiments.
Bates, Timothy C; D'Oliveiro, Lawrence
2003-11-01
PsyScript is a scriptable application allowing users to describe experiments in Apple's compiled high-level object-oriented AppleScript language, while still supporting millisecond or better within-trial event timing (delays can be in milliseconds or refresh-based, and PsyScript can wait on external I/O, such as eye movement fixations). Because AppleScript is object oriented and system-wide, PsyScript experiments support complex branching, code reuse, and integration with other applications. Included AppleScript-based libraries support file handling and stimulus randomization and sampling, as well as more specialized tasks, such as adaptive testing. Advanced features include support for the BBox serial port button box, as well as a low-cost USB-based digital I/O card for millisecond timing, recording of any number and types of responses within a trial, novel responses, such as graphics tablet drawing, and use of the Macintosh sound facilities to provide an accurate voice key, saving voice responses to disk, scriptable image creation, support for flicker-free animation, and gaze-dependent masking. The application is open source, allowing researchers to enhance the feature set and verify internal functions. Both the application and the source are available for free download at www.maccs.mq.edu.au/~tim/psyscript/.
ERIC Educational Resources Information Center
Seiler, Gale; Abraham, Anjali
2009-01-01
Conscientization involves a recursive process of reflection and action toward individual and social transformation. Often this process takes shape through encounters in/with diverse and often conflicting discourses. The study of student and teacher discourses, or scripts and counterscripts, in science classrooms can reveal asymmetrical power…
Huth-Bocks, Alissa C.; Muzik, Maria; Beeghly, Marjorie; Earls, Lauren; Stacks, Ann M.
2015-01-01
There is growing evidence that ‘secure-base scripts’ (Waters & Waters, 2006) are an important part of the cognitive underpinnings of internal working models of attachment. Recent research in middle class samples has shown that secure-base scripts are linked to maternal attachment-oriented behavior and child outcomes. However, little is known about the correlates of secure base scripts in higher-risk samples. Participants in the current study included 115 mothers who were oversampled for childhood maltreatment and their infants. Results revealed that a higher level of secure base scriptedness was significantly related to more positive and less negative maternal parenting in both unstructured free play and structured teaching contexts, and to higher reflective functioning scores on the Parent Development Interview-Revised Short Form (Slade, Aber, Berger, Bresgi, & Kaplan, 2003). Associations with parent-child secure base scripts, specifically, indicate some level of relationship-specificity in attachment scripts. Many, but not all, significant associations remained after controlling for family income and maternal age. Findings suggest that assessing secure base scripts among mothers known to be at risk for parenting difficulties may be important for interventions aimed at altering problematic parental representations and caregiving behavior. PMID:25319230
Internal and External Scripts in Computer-Supported Collaborative Inquiry Learning
ERIC Educational Resources Information Center
Kollar, Ingo; Fischer, Frank; Slotta, James D.
2007-01-01
We investigated how differently structured external scripts interact with learners' internal scripts with respect to individual knowledge acquisition in a Web-based collaborative inquiry learning environment. Ninety students from two secondary schools participated. Two versions of an external collaboration script (high vs. low structured)…
Development of a web application for water resources based on open source software
NASA Astrophysics Data System (ADS)
Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri P.
2014-01-01
This article presents the research and development of a prototype web application for water resources using the latest advancements in information and communication technologies (ICT), open source software and web GIS. The web application has three web services for: (1) managing, presenting and storing geospatial data, (2) supporting water resources modeling and (3) water resources optimization. The web application is developed using several programming languages and techniques (PHP, Ajax, JavaScript, Java), libraries (OpenLayers, jQuery) and open source software components (GeoServer, PostgreSQL, PostGIS). The presented web application has several main advantages: it is available at all times, it is accessible from everywhere, it creates a real-time multi-user collaboration platform, its code and components are interoperable and designed to work in a distributed computer environment, it is flexible for adding additional components and services, and it is scalable depending on the workload. The application was successfully tested in a case study with concurrent multi-user access.
NASA Technical Reports Server (NTRS)
Schnase, John L.; Tamkin, Glenn S.; Ripley, W. David III; Stong, Savannah; Gill, Roger; Duffy, Daniel Q.
2012-01-01
Scientific data services are becoming an important part of the NASA Center for Climate Simulation's mission. Our technological response to this expanding role is built around the concept of a Virtual Climate Data Server (vCDS), repetitive provisioning, image-based deployment and distribution, and virtualization-as-a-service. The vCDS is an iRODS-based data server specialized to the needs of a particular data-centric application. We use RPM scripts to build vCDS images in our local computing environment, our local Virtual Machine Environment, NASA's Nebula Cloud Services, and Amazon's Elastic Compute Cloud. Once provisioned into one or more of these virtualized resource classes, vCDSs can use iRODS's federation capabilities to create an integrated ecosystem of managed collections that is scalable and adaptable to changing resource requirements. This approach enables platform- or software-as-a-service deployment of vCDS and allows the NCCS to offer virtualization-as-a-service: a capacity to respond in an agile way to new customer requests for data services.
[Medical practice, magic and religion - conjunction and development before and after Reformation].
Thorvardardottir, Olina Kjerulf
2017-12-01
The conjunction between medical practice, religion and magic becomes rather visible when one peers into old scripts and ancient literature. Before the foundation and diffusion of universities on the continent, the European convents and cloisters were the centers of medical knowledge and practice for centuries. Alongside the scholarly development of medical science, which grew from the roots of the oldest scholarly medical practice, the practice of folk medicine flourished and thrived all over Europe, not least herbal medicine, which is the original form and foundation of modern pharmacy. This article deals with the conjunction of religion, magic and medical practice in ancient Icelandic sources such as the Old Norse literature, medical scripts from 12th-15th century Iceland, and not least the Icelandic magical scripts (galdrakver) of the 17th century. The last-mentioned documents were used as evidence in several witch trials that led convicted witches to be executed at the stake once the wave of European witch persecutions had rushed ashore in 17th century Iceland. These sources indicate a decline of medical knowledge and science in 16th and 17th century Iceland, medical practice being rather undeveloped at the time, in Iceland as in other parts of Europe, and therefore a rather unclear margin between "the learned and the laymen". While common people and folk healers were convicted as witches and sent to the stake for possession of magical scripts and healing books, some scholars of the Danish state were practicing healing methods that deserve comparison with the activities of the former. That comparison raises an inevitable question of where to draw the line between the learned medical man and the magician in 17th century Iceland, that is, between magic and science.
Gender differences in performance of script analysis by older adults.
Helmes, E; Bush, J D; Pike, D L; Drake, D G
2006-12-01
Script analysis as a test of executive functions is presumed to be sensitive to the cognitive changes seen with increasing age. Two studies evaluated whether gender differences exist in performance on scripts for familiar and unfamiliar tasks in groups of cognitively intact older adults. In Study 1, 26 older adults completed male and female stereotypical scripts. Results were not significant, but a tendency was present, with each gender making fewer impossible errors on its gender-typical script. Such an interaction was also noted in Study 2, which contrasted 50 older with 50 younger adults on three scripts, including a script of neutral familiarity. The pattern of significant interactions for errors suggests the need to use scripts based on tasks that are equally familiar to both genders.
JBrowse: a dynamic web platform for genome visualization and analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buels, Robert; Yao, Eric; Diesh, Colin M.
JBrowse is a fast and full-featured genome browser built with JavaScript and HTML5. It is easily embedded into websites or apps but can also be served as a standalone web page. Overall improvements to speed and scalability are accompanied by specific enhancements that support complex interactive queries on large track sets. Analysis functions can readily be added using the plugin framework; most visual aspects of tracks can also be customized, along with clicks, mouseovers, menus, and popup boxes. JBrowse can also be used to browse local annotation files offline and to generate high-resolution figures for publication. JBrowse is a mature web application suitable for genome visualization and analysis.
JBrowse: a dynamic web platform for genome visualization and analysis.
Buels, Robert; Yao, Eric; Diesh, Colin M; Hayes, Richard D; Munoz-Torres, Monica; Helt, Gregg; Goodstein, David M; Elsik, Christine G; Lewis, Suzanna E; Stein, Lincoln; Holmes, Ian H
2016-04-12
JBrowse is a fast and full-featured genome browser built with JavaScript and HTML5. It is easily embedded into websites or apps but can also be served as a standalone web page. Overall improvements to speed and scalability are accompanied by specific enhancements that support complex interactive queries on large track sets. Analysis functions can readily be added using the plugin framework; most visual aspects of tracks can also be customized, along with clicks, mouseovers, menus, and popup boxes. JBrowse can also be used to browse local annotation files offline and to generate high-resolution figures for publication. JBrowse is a mature web application suitable for genome visualization and analysis.
Algorithmic psychometrics and the scalable subject.
Stark, Luke
2018-04-01
Recent public controversies, ranging from the 2014 Facebook 'emotional contagion' study to psychographic data profiling by Cambridge Analytica in the 2016 American presidential election, Brexit referendum and elsewhere, signal watershed moments in which the intersecting trajectories of psychology and computer science have become matters of public concern. The entangled history of these two fields grounds the application of applied psychological techniques to digital technologies, and an investment in applying calculability to human subjectivity. Today, a quantifiable psychological subject position has been translated, via 'big data' sets and algorithmic analysis, into a model subject amenable to classification through digital media platforms. I term this position the 'scalable subject', arguing it has been shaped and made legible by algorithmic psychometrics - a broad set of affordances in digital platforms shaped by psychology and the behavioral sciences. In describing the contours of this 'scalable subject', this paper highlights the urgent need for renewed attention from STS scholars on the psy sciences, and on a computational politics attentive to psychology, emotional expression, and sociality via digital media.
NASA Technical Reports Server (NTRS)
Ullman, Richard; Bane, Bob; Yang, Jingli
2008-01-01
A shell script has been written as a means of automatically making HDF-EOS-formatted data sets available via the World Wide Web. ("HDF-EOS" and variants thereof are defined in the first of the two immediately preceding articles.) The shell script chains together some software tools developed by the Data Usability Group at Goddard Space Flight Center to perform the following actions: extract metadata in Object Definition Language (ODL) from an HDF-EOS file; convert the metadata from ODL to Extensible Markup Language (XML); reformat the XML metadata into human-readable Hypertext Markup Language (HTML); publish the HTML metadata and the original HDF-EOS file to a Web server and an Open-source Project for a Network Data Access Protocol (OPeNDAP) server computer; and reformat the XML metadata and submit the resulting file to the EOS Clearinghouse, which is a Web-based metadata clearinghouse that facilitates searching for, and exchange of, Earth science data.
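The same tool-chaining pattern can be sketched from Python; the tool names below (odl_extract, odl2xml, xml2html) are placeholders for the Data Usability Group utilities, which the abstract does not name individually, and the paths are invented for illustration.

```python
# Sketch of the extract -> convert -> render -> publish chain described
# above, driven from Python. Tool names and paths are placeholders.
import subprocess, shutil

granule = "MOD021KM.A2004123.h5"   # example HDF-EOS file name (invented)

subprocess.run(["odl_extract", granule, "-o", "meta.odl"], check=True)
subprocess.run(["odl2xml", "meta.odl", "-o", "meta.xml"], check=True)
subprocess.run(["xml2html", "meta.xml", "-o", "meta.html"], check=True)

# Publish the data file and rendered metadata where the web and OPeNDAP
# servers can see them; check=True above aborts the chain on any failure.
for f in (granule, "meta.html"):
    shutil.copy(f, "/var/www/data/")
```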
QUADrATiC: scalable gene expression connectivity mapping for repurposing FDA-approved therapeutics.
O'Reilly, Paul G; Wen, Qing; Bankhead, Peter; Dunne, Philip D; McArt, Darragh G; McPherson, Suzanne; Hamilton, Peter W; Mills, Ken I; Zhang, Shu-Dong
2016-05-04
Gene expression connectivity mapping has proven to be a powerful and flexible tool for research. Its application has been shown in a broad range of research topics, most commonly as a means of identifying potential small molecule compounds, which may be further investigated as candidates for repurposing to treat diseases. The public release of voluminous data from the Library of Integrated Cellular Signatures (LINCS) programme further enhanced the utilities and potentials of gene expression connectivity mapping in biomedicine. We describe QUADrATiC (http://go.qub.ac.uk/QUADrATiC), a user-friendly tool for the exploration of gene expression connectivity on the subset of the LINCS data set corresponding to FDA-approved small molecule compounds. It enables the identification of compounds with potential for therapeutic repurposing. The software is designed to cope with a larger volume of data than existing tools, taking advantage of multicore computing architectures to provide a scalable solution that may be installed and operated on a range of computers, from laptops to servers. This scalability is provided by the use of the modern concurrent programming paradigm provided by the Akka framework. The QUADrATiC graphical user interface (GUI) has been developed using advanced JavaScript frameworks, providing novel visualization capabilities for further analysis of connections. There is also a web services interface, allowing integration with other programs or scripts. QUADrATiC has been shown to provide an improvement over existing connectivity map software in terms of scope (based on the LINCS data set), applicability (using FDA-approved compounds), usability and speed. It offers biological researchers the potential to analyze transcriptional data and generate candidate therapeutics for focussed study in the lab. QUADrATiC represents a step change in the process of investigating gene expression connectivity and provides more biologically-relevant results than previous alternative solutions.
phylo-node: A molecular phylogenetic toolkit using Node.js.
O'Halloran, Damien M
2017-01-01
Node.js is an open-source and cross-platform environment that provides a JavaScript codebase for back-end server-side applications. JavaScript has been used to develop very fast and user-friendly front-end tools for bioinformatic and phylogenetic analyses. However, no such toolkits are available using Node.js to conduct comprehensive molecular phylogenetic analysis. To address this problem, I have developed phylo-node, a stable and scalable toolkit built on Node.js that allows the user to perform diverse molecular and phylogenetic tasks. phylo-node can execute the analysis and process the resulting outputs from a suite of software options that provides tools for read processing and genome alignment, sequence retrieval, multiple sequence alignment, primer design, evolutionary modeling, and phylogeny reconstruction. Furthermore, phylo-node enables the user to deploy server-dependent applications, and also provides simple integration and interoperation with other Node modules and languages using Node inheritance patterns, and a customized piping module to support the production of diverse pipelines. phylo-node is open-source and freely available to all users without sign-up or login requirements. All source code and user guidelines are openly available at the GitHub repository: https://github.com/dohalloran/phylo-node.
The development of videos in culturally grounded drug prevention for rural native Hawaiian youth.
Okamoto, Scott K; Helm, Susana; McClain, Latoya L; Dinson, Ay-Laina
2012-12-01
The purpose of this study was to adapt and validate narrative scripts to be used for the video components of a culturally grounded drug prevention program for rural Native Hawaiian youth. Scripts to be used to film short video vignettes of drug-related problem situations were developed based on a foundation of pre-prevention research funded by the National Institute on Drug Abuse. Seventy-four middle- and high-school-aged youth in 15 focus groups adapted and validated the details of the scripts to make them more realistic. Specifically, youth participants affirmed the situations described in the scripts and suggested changes to details of the scripts to make them more culturally specific. Suggested changes to the scripts also reflected preferred drug resistance strategies described in prior research, and varied based on the type of drug offerer described in each script (i.e., peer/friend, parent, or cousin/sibling). Implications for culturally grounded drug prevention are discussed.
Trial-Based Functional Analysis Informs Treatment for Vocal Scripting.
Rispoli, Mandy; Brodhead, Matthew; Wolfe, Katie; Gregori, Emily
2018-05-01
Research on trial-based functional analysis has primarily focused on socially maintained challenging behaviors. However, procedural modifications may be necessary to clarify ambiguous assessment results. The purposes of this study were to evaluate the utility of iterative modifications to trial-based functional analysis on the identification of putative reinforcement and subsequent treatment for vocal scripting. For all participants, modifications to the trial-based functional analysis identified a primary function of automatic reinforcement. The structure of the trial-based format led to identification of social attention as an abolishing operation for vocal scripting. A noncontingent attention treatment was evaluated using withdrawal designs for each participant. This noncontingent attention treatment resulted in near zero levels of vocal scripting for all participants. Implications for research and practice are presented.
Arabic Script and the Rise of Arabic Calligraphy
ERIC Educational Resources Information Center
Alshahrani, Ali A.
2008-01-01
The aim of this paper is to present a concise coherent literature review of the Arabic Language script system as one of the oldest living Semitic languages in the world. The article discusses in depth firstly, Arabic script as a phonemic sound-based writing system of twenty eight, right to left cursive script where letterforms shaped by their…
Scripted Collaboration and Group-Based Variations in a Higher Education CSCL Context
ERIC Educational Resources Information Center
Hamalainen, Raija; Arvaja, Maarit
2009-01-01
Scripting student activities is one way to make Computer-Supported Collaborative Learning more efficient. This case study examines how scripting guided student group activities and also how different groups interpreted the script; what kinds of roles students adopted and what kinds of differences there were between the groups in terms of their…
Computer-Based Script Training for Aphasia: Emerging Themes from Post-Treatment Interviews
ERIC Educational Resources Information Center
Cherney, Leora R.; Halper, Anita S.; Kaye, Rosalind C.
2011-01-01
This study presents results of post-treatment interviews following computer-based script training for persons with chronic aphasia. Each of the 23 participants received 9 weeks of AphasiaScripts training. Post-treatment interviews were conducted with the person with aphasia and/or a significant other person. The 23 interviews yielded 584 coded…
Groskreutz, Mark P; Peters, Amy; Groskreutz, Nicole C; Higbee, Thomas S
2015-01-01
Children with developmental disabilities may engage in less frequent and more repetitious language than peers with typical development. Scripts have been used to increase communication by teaching one or more specific statements and then fading the scripts. In the current study, preschoolers with developmental disabilities experienced a novel script-frame protocol and learned to make play-related comments about toys. After the script-frame protocol, commenting occurred in the absence of scripts, with untrained play activities, and included untrained comments. © Society for the Experimental Analysis of Behavior.
Invisible Mars: New Visuals for Communicating MAVEN's Story
NASA Astrophysics Data System (ADS)
Shupla, C. B.; Ali, N. A.; Jones, A. P.; Mason, T.; Schneider, N. M.; Brain, D. A.; Blackwell, J.
2016-12-01
Invisible Mars tells the story of Mars' evolving atmosphere, through a script and a series of visuals as a live presentation. Created for Science-On-A-Sphere, the presentation has also been made available to planetariums, and is being expanded to other platforms. The script has been updated to include results from the Mars Atmosphere and Volatile Evolution Mission (MAVEN), and additional visuals have been produced. This poster will share the current Invisible Mars resources available and the plans to further disseminate this presentation.
Parallel and Scalable Clustering and Classification for Big Data in Geosciences
NASA Astrophysics Data System (ADS)
Riedel, M.
2015-12-01
Machine learning, data mining, and statistical computing are common techniques used to perform analysis in the earth sciences. This contribution focuses on two concrete and widely used data analytics methods suitable for analysing 'big data' in the context of geoscience use cases: clustering and classification. From the broad class of available clustering methods we focus on the density-based spatial clustering of applications with noise (DBSCAN) algorithm, which enables the identification of outliers or interesting anomalies. A new open source parallel and scalable DBSCAN implementation will be discussed in the light of a scientific use case that detects water mixing events in the Koljoefjords. The second technique we cover is classification, with a focus on the support vector machine (SVM) algorithm, one of the best out-of-the-box classification algorithms. A parallel and scalable SVM implementation will be discussed in the light of a scientific use case in the field of remote sensing with 52 different classes of land cover types.
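Both methods are available off the shelf in scikit-learn, which makes the serial baseline easy to sketch on toy data; the parallel implementations discussed in this contribution target much larger, distributed datasets.

```python
# DBSCAN for outlier detection and an SVM classifier, the two methods
# named above, in their common serial scikit-learn form on toy 2-D data.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.svm import SVC

rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 0.3, (200, 2)),     # dense cluster
                 rng.normal(4, 0.3, (200, 2)),     # second cluster
                 rng.uniform(-2, 6, (20, 2))])     # scattered noise

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(pts)
print("outliers found:", np.sum(labels == -1))     # noise points get -1

# Supervised step: train an SVM on the points DBSCAN assigned to clusters.
X = pts[labels >= 0]
clf = SVC(kernel="rbf").fit(X, labels[labels >= 0])
print(clf.predict([[0.1, 0.0], [3.9, 4.1]]))
```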
Offline software for the DAMPE experiment
NASA Astrophysics Data System (ADS)
Wang, Chi; Liu, Dong; Wei, Yifeng; Zhang, Zhiyong; Zhang, Yunlong; Wang, Xiaolian; Xu, Zizong; Huang, Guangshun; Tykhonov, Andrii; Wu, Xin; Zang, Jingjing; Liu, Yang; Jiang, Wei; Wen, Sicheng; Wu, Jian; Chang, Jin
2017-10-01
A software system has been developed for the DArk Matter Particle Explorer (DAMPE) mission, a satellite-based experiment. The DAMPE software is mainly written in C++ and steered using a Python script. This article presents an overview of the DAMPE offline software, including the major architecture design and specific implementation for simulation, calibration and reconstruction. The whole system has been successfully applied to DAMPE data analysis. Some results obtained using the system, from simulation and beam test experiments, are presented. Supported by Chinese 973 Program (2010CB833002), the Strategic Priority Research Program on Space Science of the Chinese Academy of Science (CAS) (XDA04040202-4), the Joint Research Fund in Astronomy under cooperative agreement between the National Natural Science Foundation of China (NSFC) and CAS (U1531126) and 100 Talents Program of the Chinese Academy of Science
High-Speed On-Board Data Processing for Science Instruments
NASA Technical Reports Server (NTRS)
Beyon, Jeffrey Y.; Ng, Tak-Kwong; Lin, Bing; Hu, Yongxiang; Harrison, Wallace
2014-01-01
The development of a new on-board data processing platform has been in progress at NASA Langley Research Center since April 2012, and an overall review of that work is presented in this paper. The project is called High-Speed On-Board Data Processing for Science Instruments (HOPS) and focuses on a high-speed, scalable data processing platform for three National Research Council Decadal Survey missions: Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS), Aerosol-Cloud-Ecosystems (ACE), and Doppler Aerosol Wind Lidar (DAWN) 3-D Winds. HOPS utilizes advanced general purpose computing with Field Programmable Gate Array (FPGA) based algorithm implementation techniques. The significance of HOPS is that it enables high-speed on-board data processing for current and future science missions with its reconfigurable and scalable data processing platform. A single HOPS processing board is expected to provide approximately 66 times faster data processing for ASCENDS, more than 70% reduction in both power and weight, and about two orders of magnitude cost reduction compared to the state-of-the-art (SOA) on-board data processing system. These benchmark predictions are based on data from when HOPS was originally proposed in August 2011. The details of these improvement measures are also presented. The two facets of HOPS development are identifying the most computationally intensive algorithm segments of each mission and implementing them in an FPGA-based data processing board. A general introduction to both facets is also the purpose of this paper.
NASA Astrophysics Data System (ADS)
Murdin, P.
2000-11-01
Rocket scientist, writer, born in Berlin, Germany. Inspired by reading a work by the space pioneer, HERMANN OBERTH, Ley founded the German Society for Space Travel (1927), enrolled WERNHER VON BRAUN, and helped develop the liquid-fuel rocket. Fled to the USA, and became a science writer, including science fiction and film scripts....
XMDS2: Fast, scalable simulation of coupled stochastic partial differential equations
NASA Astrophysics Data System (ADS)
Dennis, Graham R.; Hope, Joseph J.; Johnsson, Mattias T.
2013-01-01
XMDS2 is a cross-platform, GPL-licensed, open source package for numerically integrating initial value problems that range from a single ordinary differential equation up to systems of coupled stochastic partial differential equations. The equations are described in a high-level XML-based script, and the package generates low-level optionally parallelised C++ code for the efficient solution of those equations. It combines the advantages of high-level simulations, namely fast and low-error development, with the speed, portability and scalability of hand-written code. XMDS2 is a complete redesign of the XMDS package, and features support for a much wider problem space while also producing faster code.
Program summary
Program title: XMDS2
Catalogue identifier: AENK_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENK_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License, version 2
No. of lines in distributed program, including test data, etc.: 872490
No. of bytes in distributed program, including test data, etc.: 45522370
Distribution format: tar.gz
Programming language: Python and C++.
Computer: Any computer with a Unix-like system, a C++ compiler and Python.
Operating system: Any Unix-like system; developed under Mac OS X and GNU/Linux.
RAM: Problem dependent (roughly 50 bytes per grid point)
Classification: 4.3, 6.5.
External routines: The external libraries required are problem-dependent. Uses FFTW3 Fourier transforms (used only for FFT-based spectral methods), dSFMT random number generation (used only for stochastic problems), MPI message-passing interface (used only for distributed problems), HDF5, GNU Scientific Library (used only for Bessel-based spectral methods) and a BLAS implementation (used only for non-FFT-based spectral methods).
Nature of problem: General coupled initial-value stochastic partial differential equations.
Solution method: Spectral method with method-of-lines integration.
Running time: Determined by the size of the problem
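The program summary lists the solution method as a spectral method with method-of-lines integration. The following toy sketch (plain Python, not code generated by XMDS2) illustrates that strategy for the 1-D heat equation: the spatial derivative is evaluated with an FFT, leaving a system of ODEs that classical RK4 advances in time.

```python
# Minimal illustration of a Fourier spectral method with method-of-lines
# time integration for u_t = D * u_xx on a periodic domain. A generic toy,
# not XMDS2 output; all parameter values are arbitrary.
import numpy as np

N, L, D, dt, steps = 128, 2 * np.pi, 0.1, 1e-3, 5000
x = np.linspace(0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi   # angular wavenumbers
u = np.exp(-(x - np.pi) ** 2)                # initial condition

def rhs(u):
    # Spatial derivative evaluated spectrally; time is left continuous
    # (the "method of lines") and advanced with classical RK4 below.
    return D * np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(u)))

for _ in range(steps):                       # RK4 time stepping
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print("integral of u (conserved for the heat equation):", np.trapz(u, x))
```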
ERIC Educational Resources Information Center
Piwowar, Valentina; Barth, Victoria L.; Ophardt, Diemut; Thiel, Felicitas
2018-01-01
Scripted videos are based on a screenplay and are a viable and widely used tool for learning. Yet, reservations exist due to limited authenticity and high production costs. The present paper comprehensively describes a video production process for scripted videos on the topic of student misbehavior in the classroom. In a three step…
Fully implicit adaptive mesh refinement algorithm for reduced MHD
NASA Astrophysics Data System (ADS)
Philip, Bobby; Pernice, Michael; Chacon, Luis
2006-10-01
In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite grid (FAC) algorithms) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002); B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006).
A Framework for Developing Scalable Geodesign Products
2017-09-01
Jean S. Noellsch, Writer/Editor (CTR), Information Science and Knowledge Management Branch, Engineer Research and Development Center, ERDC/CERL MP-17...a design proposal with impact simulations informed by geographic context" (Flaxman 2009, 29). Stephen Ervin, a landscape architect, emphasizes the...communications technologies to foster collaborative, information-based design projects, and that depends upon timely feedback about impacts and implications of
Developing Matlab scripts for image analysis and quality assessment
NASA Astrophysics Data System (ADS)
Vaiopoulos, A. D.
2011-11-01
Image processing is a very helpful tool in many fields of modern science that involve digital image examination and interpretation. Processed images, however, often need to be correlated with the original image in order to ensure that the resulting image fulfills its purpose. Aside from visual examination, which is mandatory, image quality indices (such as the correlation coefficient, entropy and others) are very useful when deciding which processed image is the most satisfactory. For this reason, a single program (script) was written in the Matlab language, which automatically calculates eight indices by utilizing eight respective functions (independent function scripts). The program was tested on both fused hyperspectral (Hyperion-ALI) and multispectral (ALI, Landsat) imagery and proved to be efficient. The indices were found to be in agreement with visual examination and statistical observations.
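A hedged Python analog of two of the eight indices described (the original is a MATLAB script whose function names are not given here); the placeholder imagery is random rather than Hyperion/ALI data.

```python
# Illustrative index calculations; function names and data are invented.
import numpy as np

def correlation_coefficient(original, processed):
    # Pearson correlation between original and processed bands.
    a = original.astype(float).ravel()
    b = processed.astype(float).ravel()
    return np.corrcoef(a, b)[0, 1]

def entropy(image, bins=256):
    # Shannon entropy (bits) of the image histogram.
    hist, _ = np.histogram(image, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

orig = np.random.randint(0, 256, (512, 512))                  # placeholder
proc = np.clip(orig + np.random.normal(0, 5, orig.shape), 0, 255)
print(correlation_coefficient(orig, proc), entropy(proc))
```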
Kiesewetter, Jan; Fischer, Frank; Fischer, Martin R
2016-01-01
Is there evidence for expertise on collaboration and, if so, is there evidence for cross-domain application? Recall of stimuli was used to measure so-called internal collaboration scripts of novices and experts in two studies. Internal collaboration scripts refer to an individual's knowledge about how to interact with others in a social situation. Method (Study 1)—Ten collaboration experts and ten novices of the content domain social science were presented with four pictures of people involved in collaborative activities. The recall texts were coded, distinguishing between superficial and collaboration script information. Results (Study 1)—Experts recalled significantly more collaboration script information (M = 25.20; SD = 5.88) than did novices (M = 13.80; SD = 4.47). Differences in superficial information were not found. Study 2 tested whether the differences found in Study 1 could be replicated. Furthermore, the cross-domain application of internal collaboration scripts was explored. Method (Study 2)—Twenty collaboration experts and 20 novices of the content domain medicine were presented with four pictures and four videos of their content domain and a video and picture of another content domain. All stimuli showed collaborative activities typical for the respective content domains. Results (Study 2)—As in Study 1, experts recalled significantly more collaboration script information of their content domain (M = 71.65; SD = 33.23) than did novices (M = 54.25; SD = 15.01). For the novices, no differences were found for the superficial information or for the retrieval of collaboration script information recalled after the other content domain stimuli. There is evidence for expertise on collaboration in memory tasks. The results show that experts hold substantially more collaboration script information than do novices. Furthermore, the differences between collaboration novices and collaboration experts occurred only in their own content domain, indicating that internal collaboration scripts are not easily stored and retrieved in memory tasks outside one's own content domain.
Kiesewetter, Jan; Fischer, Frank; Fischer, Martin R.
2016-01-01
Background Is there evidence for expertise on collaboration and, if so, is there evidence for cross-domain application? Recall of stimuli was used to measure so-called internal collaboration scripts of novices and experts in two studies. Internal collaboration scripts refer to an individual’s knowledge about how to interact with others in a social situation. Method—Study 1 Ten collaboration experts and ten novices of the content domain social science were presented with four pictures of people involved in collaborative activities. The recall texts were coded, distinguishing between superficial and collaboration script information. Results—Study 1 Experts recalled significantly more collaboration script information (M = 25.20; SD = 5.88) than did novices (M = 13.80; SD = 4.47). Differences in superficial information were not found. Study 2 Study 2 tested whether the differences found in Study 1 could be replicated. Furthermore, the cross-domain application of internal collaboration scripts was explored. Method—Study 2 Twenty collaboration experts and 20 novices of the content domain medicine were presented with four pictures and four videos of their content domain and a video and picture of another content domain. All stimuli showed collaborative activities typical for the respective content domains. Results—Study 2 As in Study 1, experts recalled significantly more collaboration script information of their content domain (M = 71.65; SD = 33.23) than did novices (M = 54.25; SD = 15.01). For the novices, no differences were found for the superficial information or for the retrieval of collaboration script information recalled after the other content domain stimuli. Discussion There is evidence for expertise on collaboration in memory tasks. The results show that experts hold substantially more collaboration script information than do novices. Furthermore, the differences between collaboration novices and collaboration experts occurred only in their own content domain, indicating that internal collaboration scripts are not easily stored and retrieved in memory tasks outside one's own content domain. PMID:26866801
EarthServer: Cross-Disciplinary Earth Science Through Data Cube Analytics
NASA Astrophysics Data System (ADS)
Baumann, P.; Rossi, A. P.
2016-12-01
The unprecedented increase of imagery, in-situ measurements, and simulation data produced by Earth (and Planetary) Science observation missions bears a rich, yet largely unleveraged, potential for gaining insights by integrating such diverse datasets and for transforming scientific questions into actual queries to data, formulated in a standardized way. The intercontinental EarthServer [1] initiative is demonstrating new directions for flexible, scalable Earth Science services based on innovative NoSQL technology. Researchers from Europe, the US and Australia have teamed up to rigorously implement the concept of the datacube. Such a datacube may have spatial and temporal dimensions (such as a satellite image time series) and may unite an unlimited number of scenes. Independently of whatever efficient data structuring a server network may perform internally, users (scientists, planners, decision makers) will always see just a few datacubes they can slice and dice. EarthServer has established client [2] and server technology for such spatio-temporal datacubes. The underlying scalable array engine, rasdaman [3,4], enables direct interaction, including 3-D visualization, common EO data processing, and general analytics. Services rely exclusively on the open OGC "Big Geo Data" standards suite, the Web Coverage Service (WCS). Conversely, EarthServer has shaped and advanced WCS based on the experience gained. The first phase of EarthServer has advanced scalable array database technology into 150+ TB services. Currently, Petabyte datacubes are being built for ad-hoc and cross-disciplinary querying, e.g. using climate, Earth observation and ocean data. We will present the EarthServer approach, its impact on OGC / ISO / INSPIRE standardization, and its platform technology, rasdaman. References: [1] Baumann, et al. (2015) DOI: 10.1080/17538947.2014.1003106 [2] Hogan, P. (2011) NASA World Wind, Proceedings of the 2nd International Conference on Computing for Geospatial Research & Applications, ACM. [3] Baumann, Peter, et al. (2014) In Proc. 10th ICDM, 194-201. [4] Dumitru, A. et al. (2014) In Proc. ACM SIGMOD Workshop on Data Analytics in the Cloud (DanaC'2014), 1-4.
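Since the services rely on the OGC Web Coverage Service, a datacube slice can be requested with a plain WCS 2.0 GetCoverage call. The sketch below uses Python's requests; the endpoint URL and coverage identifier are hypothetical placeholders, while the KVP parameter names follow the WCS 2.0 core standard.

```python
# Sketch of a WCS 2.0 GetCoverage request against a datacube service.
# Endpoint and coverageId are hypothetical; parameter names are standard.
import requests

params = {
    "service": "WCS",
    "version": "2.0.1",
    "request": "GetCoverage",
    "coverageId": "ExampleDatacube",          # hypothetical coverage
    "subset": ["Lat(50,55)", "Long(8,12)"],   # spatial trim of the cube
    "format": "image/tiff",
}
resp = requests.get("https://example.org/rasdaman/ows", params=params)
resp.raise_for_status()
with open("subset.tiff", "wb") as fh:
    fh.write(resp.content)                    # save the returned slice
```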
Scalability Issues for Remote Sensing Infrastructure: A Case Study
Liu, Yang; Picard, Sean; Williamson, Carey
2017-01-01
For the past decade, a team of University of Calgary researchers has operated a large “sensor Web” to collect, analyze, and share scientific data from remote measurement instruments across northern Canada. This sensor Web receives real-time data streams from over a thousand Internet-connected sensors, with a particular emphasis on environmental data (e.g., space weather, auroral phenomena, atmospheric imaging). Through research collaborations, we had the opportunity to evaluate the performance and scalability of their remote sensing infrastructure. This article reports the lessons learned from our study, which considered both data collection and data dissemination aspects of their system. On the data collection front, we used benchmarking techniques to identify and fix a performance bottleneck in the system’s memory management for TCP data streams, while also improving system efficiency on multi-core architectures. On the data dissemination front, we used passive and active network traffic measurements to identify and reduce excessive network traffic from the Web robots and JavaScript techniques used for data sharing. While our results are from one specific sensor Web system, the lessons learned may apply to other scientific Web sites with remote sensing infrastructure. PMID:28468262
SCDU Testbed Automated In-Situ Alignment, Data Acquisition and Analysis
NASA Technical Reports Server (NTRS)
Werne, Thomas A.; Wehmeier, Udo J.; Wu, Janet P.; An, Xin; Goullioud, Renaud; Nemati, Bijan; Shao, Michael; Shen, Tsae-Pyng J.; Wang, Xu; Weilert, Mark A.;
2010-01-01
In the course of fulfilling its mandate, the Spectral Calibration Development Unit (SCDU) testbed for SIM-Lite produces copious amounts of raw data. To spend its time understanding the science driving the data rather than operating the testbed, the team devised computerized automations that limit the effort needed to bring the testbed to a healthy state and command it, freeing the team to focus on analyzing the processed results. We developed a multi-layered scripting language that emphasized the scientific experiments we conducted, which drastically shortened our experiment scripts, improved their readability, and all but eliminated testbed operator errors. In addition to scientific experiment functions, we also developed a set of automated alignments that bring the testbed up to a well-aligned state with little more than the push of a button. These scripts were written in the scripting language, and in Matlab via an interface library, allowing all members of the team to augment the existing scripting language with complex analysis scripts. To keep track of these results, we created an easily-parseable state log in which we logged both the state of the testbed and relevant metadata. Finally, we designed a distributed processing system that allowed us to farm lengthy analyses to a collection of client computers which reported their results in a central log. Since these logs were parseable, we wrote query scripts that gave us an effortless way to compare results collected under different conditions. This paper serves as a case study, detailing the motivating requirements for the decisions we made and explaining the implementation process.
Theater in Physics Teacher Education
ERIC Educational Resources Information Center
van den Berg, Ed
2009-01-01
Ten years ago I sat down with the first batch of students in our science/math teacher education program in the Philippines, then third-year students, and asked them what they could do for the opening of the new science building. One of them pulled a stack of papers out of his bag and put it in front of me: a complete script for a science play!…
2017-09-04
10 years @ 90% depth of discharge; Weight: 170 lb/374 kg. PV panels: 12 panels with a 3.36 kW solar array capacity. Generator: 10 kW TQG...lightweight thin-film PV panels (solar modules or "solar blankets"). These solar blankets were... [Figure 92: Temperature and Humidity Tripod] ...collected by various PV panels, and charging times for BB2590 batteries. 4.5.2 Operational Script: The experimental nano-coated solar panel
Anibamine and its Analogues as Novel Anti-Prostate Cancer Agents
2009-06-01
expression of CCR5 and CCL5 was quantitated by SYBR-based real-time PCR. The U6 gene was used as internal control. cDNA was synthesized using the iScript...15 Conclusion: The major focus of the research project will be the syntheses of the ligands we designed as chemokine receptor CCR5 antagonists with...provide useful information and insights for both basic research and drug design and hence are widely welcomed by the science community. More specifically
V.A. I Animal Science Technical Information.
ERIC Educational Resources Information Center
Texas A and M Univ., College Station. Vocational Instructional Services.
This packet contains two units of informational materials and transparency masters, with accompanying scripts, for teachers to use in an animal science course in vocational agriculture. Unit A on breeds and selection of livestock and poultry includes 13 topics covering beef cattle, dairy cattle, swine, horses, goats, sheep, and poultry. Unit B on…
Plant Science. IV-A-1 to IV-F-2. Basic V.A.I.
ERIC Educational Resources Information Center
Texas A and M Univ., College Station. Vocational Instructional Services.
This packet contains six units of informational materials and transparency masters, with accompanying scripts, for teachers to use in a plant science course in vocational agriculture. Designed especially for use in Texas, the first unit introduces the course through the following topics: economic importance of major crops, major areas of…
Ditching the Script: Moving beyond "Automatic Thinking" in Introductory Political Science Courses
ERIC Educational Resources Information Center
Glover, Robert W.; Tagliarina, Daniel
2011-01-01
Political science is a challenging field, particularly when it comes to undergraduate teaching. If we are to engage in something more than uncritical ideological instruction, it demands from the student a willingness to approach alien political ideas with intellectual generosity. Yet, students within introductory classes often harbor inherited…
Static and Current Electricity.
ERIC Educational Resources Information Center
Schlenker, Richard M.; Murtha, Kathy T.
This is a copy of the script for the electrical relationships unit in an auto-tutorial physical science course for non-science majors, offered at the University of Maine at Orono. The unit includes 15 simple experiments designed to allow the student to discover various fundamental electrical relationships. The student has the option of reading the…
An automated process for generating archival data files from MATLAB figures
NASA Astrophysics Data System (ADS)
Wallace, G. M.; Greenwald, M.; Stillerman, J.
2016-10-01
A new directive from the White House Office of Science and Technology Policy requires that all publications supported by federal funding agencies (e.g. Department of Energy Office of Science, National Science Foundation) include machine-readable datasets for figures and tables. An automated script was developed at the PSFC to make this process easier for authors using the MATLAB plotting environment to create figures. All relevant data (x, y, z, error bars) and metadata (line style, color, symbol shape, labels) are contained within the MATLAB .fig file created when saving a figure. The export_fig script extracts data and metadata from a .fig file and exports it into an HDF5 data file with no additional user input required. Support is included for a number of plot types, including 2-D and 3-D line, contour, and surface plots, quiver plots, bar graphs, and histograms. This work was supported by US Department of Energy cooperative agreement DE-FC02-99ER54512 using the Alcator C-Mod tokamak, a DOE Office of Science user facility.
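As a rough illustration of the export target (not the export_fig code itself, which is a MATLAB script), the following Python/h5py sketch writes one line's data plus plot metadata into an HDF5 file; the group and attribute names are invented.

```python
# Hedged sketch: store plot data and metadata in HDF5, mirroring the
# kind of information kept in a MATLAB .fig file. Names are illustrative.
import numpy as np
import h5py

x = np.linspace(0, 1, 100)
y = np.sin(2 * np.pi * x)

with h5py.File("figure_data.h5", "w") as f:
    grp = f.create_group("line_1")
    grp.create_dataset("x", data=x)
    grp.create_dataset("y", data=y)
    # Metadata stored as HDF5 attributes (line style, color, labels).
    grp.attrs["linestyle"] = "-"
    grp.attrs["color"] = "blue"
    grp.attrs["xlabel"] = "time (s)"
    grp.attrs["ylabel"] = "amplitude"
```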
KNIME for reproducible cross-domain analysis of life science data.
Fillbrunn, Alexander; Dietz, Christian; Pfeuffer, Julianus; Rahn, René; Landrum, Gregory A; Berthold, Michael R
2017-11-10
Experiments in the life sciences often involve tools from a variety of domains such as mass spectrometry, next generation sequencing, or image processing. Passing the data between those tools often involves complex scripts for controlling data flow, data transformation, and statistical analysis. Such scripts are not only prone to be platform dependent, they also tend to grow as the experiment progresses and are seldom well documented, a fact that hinders the reproducibility of the experiment. Workflow systems such as KNIME Analytics Platform aim to solve these problems by providing a platform for connecting tools graphically and guaranteeing the same results on different operating systems. As open-source software, KNIME allows scientists and programmers to provide their own extensions to the scientific community. In this review paper we present selected extensions from the life sciences that simplify data exploration, analysis, and visualization and are interoperable due to KNIME's unified data model. Additionally, we name other workflow systems that are commonly used in the life sciences and highlight their similarities and differences to KNIME. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Kiesewetter, Jan; Gluza, Martin; Holzer, Matthias; Saravo, Barbara; Hammitzsch, Laura; Fischer, Martin R
2015-01-01
Collaboration as a key qualification in medical education and everyday routine in clinical care can substantially contribute to improving patient safety. Internal collaboration scripts are conceptualized as organized - yet adaptive - knowledge that can be used in specific situations in professional everyday life. This study examines the level of internalization of collaboration scripts in medicine, where internalization is understood as fast retrieval of script information. The goals of the current study were to assess collaborative information that is part of collaboration scripts and to develop a methodology for measuring the level of internalization of collaboration scripts in medicine. For the contrastive comparison of internal collaboration scripts, 20 collaborative novices (medical students in their final year) and 20 collaborative experts (physicians with specialist degrees in internal medicine or anesthesiology) were included in the study. Eight typical medical collaborative situations, each shown in a photo or video, were presented to the participants for five seconds each. Afterwards, the participants were asked to describe what they saw in the photo or video. Based on the answers, the amount of information belonging to a collaboration script (script-information) was determined and the time each participant needed to answer was measured. In order to measure the level of internalization, script-information per recall time was calculated. As expected, collaborative experts stated significantly more script-information than collaborative novices. Collaborative experts also showed a significantly higher level of internalization. Based on these findings, we conclude that our instrument can discriminate between collaboration novices and experts. It can therefore be used to analyze measures to foster subject-specific competency in medical education.
Types and Characteristics of Fish and Seafood Provisioning Scripts Used by Rural Midlife Adults.
Bostic, Stephanie M; Sobal, Jeffery; Bisogni, Carole A; Monclova, Juliet M
To examine rural New York State consumers' cognitive scripts for fish and seafood provisioning. A cross-sectional design with in-depth, semistructured interviews. Three rural New York State counties. Adults (n = 31) with diverse fish-related experiences were purposefully recruited. Scripts describing fish and seafood acquisition, preparation, and eating out. Interview transcripts were coded for emergent themes using Atlas.ti. Diagrams of scripts for each participant were constructed. Five types of acquisition scripts included quality-oriented, price-oriented, routine, special occasion, and fresh catch. Frequently used preparation scripts included everyday cooking, fast meal, entertaining, and grilling. Scripts for eating out included fish as first choice, Friday outing, convenient meals, special event, and travel meals. Personal values and resources influenced script development. Individuals drew on a repertoire of scripts based on their goals and resources at that time and in that place. Script characteristics of scope, flexibility, and complexity varied widely. Scripts incorporated goals, values, and resources into routine food behaviors. Understanding the characteristics of scripts provided insights about fish provisioning and opportunities to reduce the gap between current intake and dietary guidelines in this rural setting. Copyright © 2017 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
JBrowse: A dynamic web platform for genome visualization and analysis
Buels, Robert; Yao, Eric; Diesh, Colin M.; ...
2016-04-12
Background: JBrowse is a fast and full-featured genome browser built with JavaScript and HTML5. It is easily embedded into websites or apps but can also be served as a standalone web page. Results: Overall improvements to speed and scalability are accompanied by specific enhancements that support complex interactive queries on large track sets. Analysis functions can readily be added using the plugin framework; most visual aspects of tracks can also be customized, along with clicks, mouseovers, menus, and popup boxes. JBrowse can also be used to browse local annotation files offline and to generate high-resolution figures for publication. Conclusions: JBrowse is a mature web application suitable for genome visualization and analysis.
Leveraging Globus to Support Access and Delivery of Scientific Data
NASA Astrophysics Data System (ADS)
Cram, T.; Schuster, D.; Ji, Z.; Worley, S. J.
2015-12-01
The NCAR Research Data Archive (RDA; http://rda.ucar.edu) contains a large and diverse collection of meteorological and oceanographic observations, operational and reanalysis outputs, and remote sensing datasets to support atmospheric and geoscience research. The RDA contains more than 600 dataset collections which support the varying needs of a diverse user community. The number of RDA users is increasing annually, and the most popular method used to access the RDA data holdings is through web-based protocols, such as wget and cURL based scripts. In 2014, 11,000 unique users downloaded more than 1.1 petabytes of data from the RDA, and customized data products were prepared for more than 45,000 user-driven requests. In order to further support this increase in web download usage, the RDA has implemented the Globus data transfer service (www.globus.org) to provide a GridFTP data transfer option for the user community. The Globus service is broadly scalable, has an easy-to-install client, is sustainably supported, and provides a robust, efficient, and reliable data transfer option for the research community. This presentation will highlight the technical functionality, challenges, and usefulness of the Globus data transfer service for accessing the RDA data holdings.
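A minimal stand-in for the wget/cURL-style download scripts the abstract mentions, written with Python's requests; the base URL and file names are hypothetical placeholders, and the Globus/GridFTP path is not shown here.

```python
# Illustrative scripted download loop; URL pattern and names are invented.
import requests

BASE = "https://rda.ucar.edu/data/exampleDataset/"   # hypothetical path
files = ["file_2014_01.nc", "file_2014_02.nc"]

session = requests.Session()
for name in files:
    with session.get(BASE + name, stream=True) as resp:
        resp.raise_for_status()
        with open(name, "wb") as fh:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                fh.write(chunk)   # stream to disk in 1 MiB chunks
```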
Examining classroom interactions related to difference in students' science achievement
NASA Astrophysics Data System (ADS)
Zady, Madelon F.; Portes, Pedro R.; Ochs, V. Dan
2003-01-01
The current study examines the cognitive supports that underlie achievement in science by using a cultural historical framework (L. S. Vygotsky (1934/1986), Thought and Language, MIT Press, Cambridge, MA.) and the activity setting (AS) construct (R. G. Tharp & R. Gallimore (1988), Rousing minds to life: Teaching, learning and schooling in social context, Cambridge University Press, Cambridge, MA.) with its five features: personnel, motivations, scripts, task demands, and beliefs. Observations were made of the classrooms of seventh-grade science students, 32 of whom had participated in a prior achievement-related parent-child interaction or home study (P. R. Portes, M. F. Zady, & R. M. Dunham (1998), Journal of Genetic Psychology, 159, 163-178). The results of a quantitative analysis of classroom interaction showed two features of the AS: personnel and scripts. The qualitative field analysis generated four emergent phenomena related to the features of the AS that appeared to influence student opportunity for conceptual development. The emergent phenomena were science activities, the building of learning, meaning in lessons, and the conflict over control. Lastly, the results of the two-part classroom study were compared to those of the home science AS of high and low achievers. Mismatches in the AS features in the science classroom may constrain the opportunity to learn. Educational implications are discussed.
PyRosetta: a script-based interface for implementing molecular modeling algorithms using Rosetta.
Chaudhury, Sidhartha; Lyskov, Sergey; Gray, Jeffrey J
2010-03-01
PyRosetta is a stand-alone Python-based implementation of the Rosetta molecular modeling package that allows users to write custom structure prediction and design algorithms using the major Rosetta sampling and scoring functions. PyRosetta contains Python bindings to libraries that define Rosetta functions including those for accessing and manipulating protein structure, calculating energies and running Monte Carlo-based simulations. PyRosetta can be used in two ways: (i) interactively, using iPython and (ii) script-based, using Python scripting. Interactive mode contains a number of help features and is ideal for beginners while script-mode is best suited for algorithm development. PyRosetta has similar computational performance to Rosetta, can be easily scaled up for cluster applications and has been implemented for algorithms demonstrating protein docking, protein folding, loop modeling and design. PyRosetta is a stand-alone package available at http://www.pyrosetta.org under the Rosetta license which is free for academic and non-profit users. A tutorial, user's manual and sample scripts demonstrating usage are also available on the web site.
PyRosetta: a script-based interface for implementing molecular modeling algorithms using Rosetta
Chaudhury, Sidhartha; Lyskov, Sergey; Gray, Jeffrey J.
2010-01-01
Summary: PyRosetta is a stand-alone Python-based implementation of the Rosetta molecular modeling package that allows users to write custom structure prediction and design algorithms using the major Rosetta sampling and scoring functions. PyRosetta contains Python bindings to libraries that define Rosetta functions including those for accessing and manipulating protein structure, calculating energies and running Monte Carlo-based simulations. PyRosetta can be used in two ways: (i) interactively, using iPython and (ii) script-based, using Python scripting. Interactive mode contains a number of help features and is ideal for beginners while script-mode is best suited for algorithm development. PyRosetta has similar computational performance to Rosetta, can be easily scaled up for cluster applications and has been implemented for algorithms demonstrating protein docking, protein folding, loop modeling and design. Availability: PyRosetta is a stand-alone package available at http://www.pyrosetta.org under the Rosetta license which is free for academic and non-profit users. A tutorial, user's manual and sample scripts demonstrating usage are also available on the web site. Contact: pyrosetta@graylab.jhu.edu PMID:20061306
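A minimal interactive-style usage sketch, assuming a recent PyRosetta release and a local PDB file; exact import paths can differ between PyRosetta versions, and the file name is a placeholder.

```python
# Hedged PyRosetta usage sketch; "input.pdb" is a placeholder file.
from pyrosetta import init, pose_from_pdb, get_fa_scorefxn

init()                              # start the Rosetta runtime
pose = pose_from_pdb("input.pdb")   # load a protein structure
scorefxn = get_fa_scorefxn()        # standard full-atom score function
print("total score:", scorefxn(pose))
print("residues:", pose.total_residue())
```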
Couple decision making and use of cultural scripts in Malawi.
Mbweza, Ellen; Norr, Kathleen F; McElmurry, Beverly
2008-01-01
To examine the decision-making processes of husband and wife dyads in matrilineal and patrilineal marriage traditions of Malawi in the areas of money, food, pregnancy, contraception, and sexual relations. Qualitative grounded theory using simultaneous interviews of 60 husbands and wives (30 couples). Data were analyzed according to the guidelines of simultaneous data collection and analysis. The analysis resulted in development of core categories and categories of decision-making process. Data matrixes were used to identify similarities and differences within couples and across cases. Most couples reported using a mix of final decision-making approaches: husband-dominated, wife-dominated, and shared. Gender-based and non-gender-based cultural scripts provided rationales for their approaches to decision making. Gender-based cultural scripts (husband-dominant and wife-dominant) were used to justify decision-making approaches. Non-gender-based cultural scripts (communicating openly, maintaining harmony, and children's welfare) supported shared decision making. Gender-based cultural scripts were used in decision making more often among couples from the district with a patrilineal marriage tradition and where the husband had less than secondary school education and was not formally employed. Non-gender-based cultural scripts to encourage shared decision making can be used in designing culturally tailored reproductive health interventions for couples. Nurses who work with women and families should be aware of the variations that occur in actual couple decision-making approaches. Shared decision making can be used to encourage the involvement of men in reproductive health programs.
TagDigger: user-friendly extraction of read counts from GBS and RAD-seq data.
Clark, Lindsay V; Sacks, Erik J
2016-01-01
In genotyping-by-sequencing (GBS) and restriction site-associated DNA sequencing (RAD-seq), read depth is important for assessing the quality of genotype calls and estimating allele dosage in polyploids. However, existing pipelines for GBS and RAD-seq do not provide read counts in formats that are both accurate and easy to access. Additionally, although existing pipelines allow previously-mined SNPs to be genotyped on new samples, they do not allow the user to manually specify a subset of loci to examine. Pipelines that do not use a reference genome assign arbitrary names to SNPs, making meta-analysis across projects difficult. We created the software TagDigger, which includes three programs for analyzing GBS and RAD-seq data. The first script, tagdigger_interactive.py, rapidly extracts read counts and genotypes from FASTQ files using user-supplied sets of barcodes and tags. Input and output is in CSV format so that it can be opened by spreadsheet software. Tag sequences can also be imported from the Stacks, TASSEL-GBSv2, TASSEL-UNEAK, or pyRAD pipelines, and a separate file can be imported listing the names of markers to retain. A second script, tag_manager.py, consolidates marker names and sequences across multiple projects. A third script, barcode_splitter.py, assists with preparing FASTQ data for deposit in a public archive by splitting FASTQ files by barcode and generating MD5 checksums for the resulting files. TagDigger is open-source and freely available software written in Python 3. It uses a scalable, rapid search algorithm that can process over 100 million FASTQ reads per hour. TagDigger will run on a laptop with any operating system, does not consume hard drive space with intermediate files, and does not require programming skill to use.
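A toy sketch of the barcode-splitting and checksumming ideas behind barcode_splitter.py (not the tool's actual code); the barcodes and file names are invented, and a real splitter would also trim the barcode and tolerate mismatches.

```python
# Route 4-line FASTQ records to per-sample files by barcode prefix,
# then compute an MD5 checksum per output file, as the abstract describes.
import hashlib
from itertools import islice

barcodes = {"ACGT": "sampleA.fastq", "TGCA": "sampleB.fastq"}  # invented
outputs = {bc: open(path, "w") for bc, path in barcodes.items()}

with open("run.fastq") as fq:                     # hypothetical input
    while True:
        record = list(islice(fq, 4))              # one FASTQ record
        if len(record) < 4:
            break
        seq = record[1]
        for bc, out in outputs.items():
            if seq.startswith(bc):
                out.write("".join(record))
                break

for out in outputs.values():
    out.close()

for path in barcodes.values():                    # MD5 per output file
    digest = hashlib.md5(open(path, "rb").read()).hexdigest()
    print(path, digest)
```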
Novel Technology for Treating Individuals with Aphasia and Concomitant Cognitive Deficits
Cherney, Leora R.; Halper, Anita S.
2009-01-01
Purpose This article describes three individuals with aphasia and concomitant cognitive deficits who used state-of-the-art computer software for training conversational scripts. Method Participants were assessed before and after 9 weeks of a computer script training program. For each participant, three individualized scripts were developed, recorded on the software, and practiced sequentially at home. Weekly meetings with the speech-language pathologist occurred to monitor practice and assess progress. Baseline and posttreatment scripts were audiotaped, transcribed, and compared to the target scripts for content, grammatical productivity, and rate of production of script-related words. Interviews were conducted at the conclusion of treatment. Results There was great variability in improvements across scripts, with two participants improving on two of their three scripts in measures of content, grammatical productivity, and rate of production of script-related words. One participant gained more than 5 points on the Aphasia Quotient of the Western Aphasia Battery. Five positive themes were consistently identified from exit interviews: increased verbal communication, improvements in other modalities and situations, communication changes noticed by others, increased confidence, and satisfaction with the software. Conclusion Computer-based script training potentially may be an effective intervention for persons with chronic aphasia and concomitant cognitive deficits. PMID:19158062
Harris, Thomas R; Brophy, Sean P
2005-09-01
Vanderbilt University, Northwestern University, the University of Texas and the Harvard/MIT Health Sciences Technology Program have collaborated since 1999 to develop means to improve bioengineering education. This effort, funded by the National Science Foundation as the VaNTH Engineering Research Center in Bioengineering Educational Technologies, has sought a synthesis of learning science, learning technology, assessment and the domains of bioengineering in order to improve learning by bioengineering students. Research has shown that bioengineering educational materials may be designed to emphasize challenges that engage the student and, when coupled with a learning cycle and appropriate technologies, can lead to improvements in instruction.
Collaboration Scripts for Enhancing Metacognitive Self-Regulation and Mathematics Literacy
ERIC Educational Resources Information Center
Chen, Cheng-Huan; Chiu, Chiung-Hui
2016-01-01
This study designed a set of computerized collaboration scripts for multi-touch supported collaborative design-based learning and evaluated its effects on multiple aspects of metacognitive self-regulation in terms of planning and controlling and mathematical literacy achievement at higher and lower levels. The computerized scripts provided a…
Personal Inquiry: Orchestrating Science Investigations within and beyond the Classroom
ERIC Educational Resources Information Center
Sharples, Mike; Scanlon, Eileen; Ainsworth, Shaaron; Anastopoulou, Stamatina; Collins, Trevor; Crook, Charles; Jones, Ann; Kerawalla, Lucinda; Littleton, Karen; Mulholland, Paul; O'Malley, Claire
2015-01-01
A central challenge for science educators is to enable young people to act as scientists by gathering and assessing evidence, conducting experiments, and engaging in informed debate. We report the design of the nQuire toolkit, a system to support scripted personal inquiry learning, and a study of its use with school students ages 11-14. This…
Soil Science. III-A-1 to III-D-4. Basic V.A.I.
ERIC Educational Resources Information Center
Texas A and M Univ., College Station. Vocational Instructional Services.
This packet contains four units of informational materials and transparency masters, with accompanying scripts, for teachers to use in a soil science course in vocational agriculture. Designed especially for use in Texas, the first unit discusses the importance of soils. In the second unit, the nature and properties of soils are discussed,…
Geospatial considerations for a multiorganizational, landscape-scale program
O'Donnell, Michael S.; Assal, Timothy J.; Anderson, Patrick J.; Bowen, Zachary H.
2013-01-01
Geospatial data play an increasingly important role in natural resources management, conservation, and science-based projects. The management and effective use of spatial data becomes significantly more complex when the efforts involve a myriad of landscape-scale projects combined with a multiorganizational collaboration. There is sparse literature to guide users on this daunting subject; therefore, we present a framework of considerations for working with geospatial data that will provide direction to data stewards, scientists, collaborators, and managers for developing geospatial management plans. The concepts we present apply to a variety of geospatial programs or projects, which we describe as a “scalable framework” of processes for integrating geospatial efforts with management, science, and conservation initiatives. Our framework includes five tenets of geospatial data management: (1) the importance of investing in data management and standardization, (2) the scalability of content/efforts addressed in geospatial management plans, (3) the lifecycle of a geospatial effort, (4) a framework for the integration of geographic information systems (GIS) in a landscape-scale conservation or management program, and (5) the major geospatial considerations prior to data acquisition. We conclude with a discussion of future considerations and challenges.
RBioCloud: A Light-Weight Framework for Bioconductor and R-based Jobs on the Cloud.
Varghese, Blesson; Patel, Ishan; Barker, Adam
2015-01-01
Large-scale ad hoc analytics of genomic data is popular using the R programming language, supported by over 700 software packages provided by Bioconductor. More recently, analytical jobs have been benefitting from on-demand computing and storage, their scalability and their low maintenance cost, all of which are offered by the cloud. While biologists and bioinformaticians can take an analytical job and execute it on their personal workstations, it remains challenging to seamlessly execute the job on the cloud infrastructure without extensive knowledge of the cloud dashboard. This paper explores how analytical jobs can be executed on the cloud with minimum effort, and how both the resources and the data required by a job can be managed. An open-source light-weight framework for executing R scripts using Bioconductor packages, referred to as `RBioCloud', is designed and developed. RBioCloud offers a set of simple command-line tools for managing the cloud resources, the data and the execution of the job. Three biological test cases validate the feasibility of RBioCloud. The framework is available from http://www.rbiocloud.com.
Automatic script identification from images using cluster-based templates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hochberg, J.; Kerns, L.; Kelly, P.
We have developed a technique for automatically identifying the script used to generate a document that is stored electronically in bit image form. Our approach differs from previous work in that the distinctions among scripts are discovered by an automatic learning procedure, without any hands-on analysis. We first develop a set of representative symbols (templates) for each script in our database (Cyrillic, Roman, etc.). We do this by identifying all textual symbols in a set of training documents, scaling each symbol to a fixed size, clustering similar symbols, pruning minor clusters, and finding each cluster's centroid. To identify a new document's script, we identify and scale a subset of symbols from the document and compare them to the templates for each script. We choose the script whose templates provide the best match. Our current system distinguishes among the Armenian, Burmese, Chinese, Cyrillic, Ethiopic, Greek, Hebrew, Japanese, Korean, Roman, and Thai scripts with over 90% accuracy.
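A toy rendering of the template pipeline under stated assumptions: symbols are already segmented, scaled, and flattened to fixed-length vectors; k-means stands in for the clustering step; and all data are random placeholders rather than real script symbols.

```python
# Cluster-based templates, sketched with scikit-learn; data are invented.
import numpy as np
from sklearn.cluster import KMeans

def make_templates(symbols, n_clusters=20):
    # symbols: fixed-size symbol images flattened to vectors.
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(symbols)
    return km.cluster_centers_            # one template per cluster

def score(doc_symbols, templates):
    # Mean distance from each document symbol to its nearest template.
    d = np.linalg.norm(doc_symbols[:, None, :] - templates[None], axis=2)
    return d.min(axis=1).mean()

rng = np.random.default_rng(1)
train = {s: rng.random((200, 64)) for s in ["Cyrillic", "Roman"]}
templates = {s: make_templates(x) for s, x in train.items()}

doc = rng.random((50, 64))                # symbols from a new document
print(min(templates, key=lambda s: score(doc, templates[s])))
```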
An integrated semiconductor device enabling non-optical genome sequencing.
Rothberg, Jonathan M; Hinz, Wolfgang; Rearick, Todd M; Schultz, Jonathan; Mileski, William; Davey, Mel; Leamon, John H; Johnson, Kim; Milgrew, Mark J; Edwards, Matthew; Hoon, Jeremy; Simons, Jan F; Marran, David; Myers, Jason W; Davidson, John F; Branting, Annika; Nobile, John R; Puc, Bernard P; Light, David; Clark, Travis A; Huber, Martin; Branciforte, Jeffrey T; Stoner, Isaac B; Cawley, Simon E; Lyons, Michael; Fu, Yutao; Homer, Nils; Sedova, Marina; Miao, Xin; Reed, Brian; Sabina, Jeffrey; Feierstein, Erika; Schorn, Michelle; Alanjary, Mohammad; Dimalanta, Eileen; Dressman, Devin; Kasinskas, Rachel; Sokolsky, Tanya; Fidanza, Jacqueline A; Namsaraev, Eugeni; McKernan, Kevin J; Williams, Alan; Roth, G Thomas; Bustillo, James
2011-07-20
The seminal importance of DNA sequencing to the life sciences, biotechnology and medicine has driven the search for more scalable and lower-cost solutions. Here we describe a DNA sequencing technology in which scalable, low-cost semiconductor manufacturing techniques are used to make an integrated circuit able to directly perform non-optical DNA sequencing of genomes. Sequence data are obtained by directly sensing the ions produced by template-directed DNA polymerase synthesis using all-natural nucleotides on this massively parallel semiconductor-sensing device or ion chip. The ion chip contains ion-sensitive, field-effect transistor-based sensors in perfect register with 1.2 million wells, which provide confinement and allow parallel, simultaneous detection of independent sequencing reactions. Use of the most widely used technology for constructing integrated circuits, the complementary metal-oxide semiconductor (CMOS) process, allows for low-cost, large-scale production and scaling of the device to higher densities and larger array sizes. We show the performance of the system by sequencing three bacterial genomes, its robustness and scalability by producing ion chips with up to 10 times as many sensors and sequencing a human genome.
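A deliberately naive illustration of reading bases in flow space (not Ion Torrent's production base caller): nucleotides are flowed in a fixed order, and each well's measured ion signal is rounded to a homopolymer incorporation count; the flow order and signal values below are made up.

```python
# Toy flow-space base calling; signals and flow order are placeholders.
flow_order = "TACG"
signals = [1.05, 0.02, 2.10, 0.97, 0.01, 1.02]   # made-up flow signals

read = ""
for i, s in enumerate(signals):
    count = round(s)              # estimated homopolymer length
    read += flow_order[i % len(flow_order)] * count

print(read)   # -> "TCCGA" for the signals above
```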
Script identification from images using cluster-based templates
Hochberg, J.G.; Kelly, P.M.; Thomas, T.R.
1998-12-01
A computer-implemented method identifies a script used to create a document. A set of training documents for each script to be identified is scanned into the computer to store a series of exemplary images representing each script. Pixels forming the exemplary images are electronically processed to define a set of textual symbols corresponding to the exemplary images. Each textual symbol is assigned to a cluster of textual symbols that most closely represents the textual symbol. The cluster of textual symbols is processed to form a representative electronic template for each cluster. A document having a script to be identified is scanned into the computer to form one or more document images representing the script to be identified. Pixels forming the document images are electronically processed to define a set of document textual symbols corresponding to the document images. The set of document textual symbols is compared to the electronic templates to identify the script. 17 figs.
Script identification from images using cluster-based templates
Hochberg, Judith G.; Kelly, Patrick M.; Thomas, Timothy R.
1998-01-01
A computer-implemented method identifies a script used to create a document. A set of training documents for each script to be identified is scanned into the computer to store a series of exemplary images representing each script. Pixels forming the exemplary images are electronically processed to define a set of textual symbols corresponding to the exemplary images. Each textual symbol is assigned to a cluster of textual symbols that most closely represents the textual symbol. The cluster of textual symbols is processed to form a representative electronic template for each cluster. A document having a script to be identified is scanned into the computer to form one or more document images representing the script to be identified. Pixels forming the document images are electronically processed to define a set of document textual symbols corresponding to the document images. The set of document textual symbols is compared to the electronic templates to identify the script.
Boeker, Martin; Andel, Peter; Vach, Werner; Frankenschmidt, Alexander
2013-01-01
When compared with more traditional instructional methods, Game-based e-learning (GbEl) promises higher motivation of learners by presenting contents in an interactive, rule-based and competitive way. Most recent systematic reviews and meta-analyses of studies on Game-based learning and GbEl in the medical professions have shown limited effects of these instructional methods. To compare the effectiveness on the learning outcome of a Game-based e-learning (GbEl) instruction with a conventional script-based instruction in the teaching of phase contrast microscopy urinalysis under routine training conditions of undergraduate medical students, a randomized controlled trial was conducted with 145 medical students in their third year of training in the Department of Urology at the University Medical Center Freiburg, Germany. 82 subjects were allocated for training with an educational adventure game (GbEl group) and 69 subjects for conventional training with a written script-based approach (script group). Learning outcome was measured with a 34-item single-choice test. Students' attitudes were collected by a questionnaire regarding fun with the training, motivation to continue the training and self-assessment of acquired knowledge. The students in the GbEl group achieved significantly better results in the cognitive knowledge test than the students in the script group: the mean score was 28.6 for the GbEl group and 26.0 for the script group out of a total of 34.0 points, with a Cohen's d effect size of 0.71 (ITT analysis). Attitudes towards the recent learning experience were significantly more positive with GbEl. Students reported having more fun while learning with the game than with the script-based approach. Game-based e-learning is more effective than a script-based approach for the training of urinalysis with regard to cognitive learning outcome and has a highly positive motivational impact on learning. Game-based e-learning can be used as an effective teaching method for self-instruction.
Boeker, Martin; Andel, Peter; Vach, Werner; Frankenschmidt, Alexander
2013-01-01
Background When compared with more traditional instructional methods, Game-based e-learning (GbEl) promises higher motivation of learners by presenting contents in an interactive, rule-based and competitive way. Most recent systematic reviews and meta-analyses of studies on Game-based learning and GbEl in the medical professions have shown limited effects of these instructional methods. Objectives To compare the effectiveness on the learning outcome of a Game-based e-learning (GbEl) instruction with a conventional script-based instruction in the teaching of phase contrast microscopy urinalysis under routine training conditions of undergraduate medical students. Methods A randomized controlled trial was conducted with 145 medical students in their third year of training in the Department of Urology at the University Medical Center Freiburg, Germany. 82 subjects were allocated for training with an educational adventure game (GbEl group) and 69 subjects for conventional training with a written script-based approach (script group). Learning outcome was measured with a 34-item single-choice test. Students' attitudes were collected by a questionnaire regarding fun with the training, motivation to continue the training and self-assessment of acquired knowledge. Results The students in the GbEl group achieved significantly better results in the cognitive knowledge test than the students in the script group: the mean score was 28.6 for the GbEl group and 26.0 for the script group out of a total of 34.0 points, with a Cohen's d effect size of 0.71 (ITT analysis). Attitudes towards the recent learning experience were significantly more positive with GbEl. Students reported having more fun while learning with the game than with the script-based approach. Conclusions Game-based e-learning is more effective than a script-based approach for the training of urinalysis with regard to cognitive learning outcome and has a highly positive motivational impact on learning. Game-based e-learning can be used as an effective teaching method for self-instruction. PMID:24349257
Script-like attachment representations in dreams containing current romantic partners.
Selterman, Dylan; Apetroaia, Adela; Waters, Everett
2012-01-01
Recent research has demonstrated parallels between romantic attachment styles and general dream content. The current study examined partner-specific attachment representations alongside dreams that contained significant others. The general prediction was that dreams would follow the "secure base script," and a general correspondence would emerge between secure attachment cognitions in waking life and in dreams. Sixty-one undergraduate student participants in committed dating relationships of six months duration or longer completed the Secure Base Script Narrative Assessment at Time 1, and then completed a dream diary for 14 consecutive days. Blind coders scored dreams that contained significant others using the same criteria for secure base content in laboratory narratives. Results revealed a significant association between relationship-specific attachment security and the degree to which dreams about romantic partners followed the secure base script. The findings illuminate our understanding of mental representations with regards to specific attachment figures. Implications for attachment theory and clinical applications are discussed.
A High Efficiency System for Science Instrument Commanding for the Mars Global Surveyor Mission
NASA Technical Reports Server (NTRS)
Jr., R. N. Brooks
1995-01-01
The Mars Global Surveyor (MGS) mission will return to Mars to recover most of the science lost when the ill-fated Mars Observer spacecraft suffered a catastrophic anomaly in its propulsion system and did not go into orbit. Described in detail are the methods employed by the MGS Sequence Team to accelerate science command processing by using a standard command generation process and standard UNIX control scripts.
Presentation and response timing accuracy in Adobe Flash and HTML5/JavaScript Web experiments.
Reimers, Stian; Stewart, Neil
2015-06-01
Web-based research is becoming ubiquitous in the behavioral sciences, facilitated by convenient, readily available participant pools and relatively straightforward ways of running experiments: most recently, through the development of the HTML5 standard. Although in most studies participants give untimed responses, there is a growing interest in being able to record response times online. Existing data on the accuracy and cross-machine variability of online timing measures are limited, and generally they have compared behavioral data gathered on the Web with similar data gathered in the lab. For this article, we took a more direct approach, examining two ways of running experiments online-Adobe Flash and HTML5 with CSS3 and JavaScript-across 19 different computer systems. We used specialist hardware to measure stimulus display durations and to generate precise response times to visual stimuli in order to assess measurement accuracy, examining effects of duration, browser, and system-to-system variability (such as across different Windows versions), as well as effects of processing power and graphics capability. We found that (a) Flash and JavaScript's presentation and response time measurement accuracy are similar; (b) within-system variability is generally small, even in low-powered machines under high load; (c) the variability of measured response times across systems is somewhat larger; and (d) browser type and system hardware appear to have relatively small effects on measured response times. Modeling of the effects of this technical variability suggests that for most within- and between-subjects experiments, Flash and JavaScript can both be used to accurately detect differences in response times across conditions. Concerns are, however, noted about using some correlational or longitudinal designs online.
ACME Priority Metrics (A-PRIME)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Katherine J; Zender, Charlie; Van Roekel, Luke
A-PRIME is a collection of scripts that provides Accelerated Climate Model for Energy (ACME) model developers and analysts with a variety of analyses needed to determine whether the model is producing the desired results, depending on the goals of the simulation. The top level of the software consists of csh scripts that let scientists supply the input parameters; these scripts in turn call code that postprocesses the raw model output and creates plots for visual assessment.
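The pattern described, a thin driver that takes input parameters and calls postprocessing code to make plots, can be illustrated in miniature. The real A-PRIME drivers are csh; this Python stand-in uses hypothetical parameter names and synthetic data.

    import numpy as np
    import matplotlib
    matplotlib.use("Agg")                                     # headless plotting
    import matplotlib.pyplot as plt

    params = {"run": "test01", "variable": "TS"}              # stand-ins for script inputs
    raw = np.random.default_rng(3).normal(288, 2, (12, 10))   # fake monthly raw output
    monthly_mean = raw.mean(axis=1)                           # the "postprocessing" step

    plt.plot(np.arange(1, 13), monthly_mean)
    plt.xlabel("month")
    plt.ylabel(params["variable"])
    plt.savefig(f"{params['run']}_{params['variable']}.png")  # plot for visual assessment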
NASA Astrophysics Data System (ADS)
Stender, Anita; Brückmann, Maja; Neumann, Knut
2017-08-01
This study investigates the relationship between two different types of pedagogical content knowledge (PCK): the topic-specific professional knowledge (TSPK) and practical routines, so-called teaching scripts. Based on the Transformation Model of Lesson Planning, we assume that teaching scripts originate from a transformation of TSPK during lesson planning: When planning lessons, teachers use their TSPK to create lesson plans. The implementation of these lesson plans and teachers' reflection upon them lead to their improvement. Gradually, successful lesson plans are mentally stored as teaching scripts and can easily be retrieved during instruction. This process is affected by teachers' beliefs, motivation and self-regulation. In order to examine the influence of TSPK on teaching scripts as well as the moderating effects of beliefs, motivation and self-regulation, we conducted a cross-sectional study with n = 49 in-service teachers in physics. The TSPK, beliefs, motivation, self-regulation and the quality of teaching scripts of in-service teachers were assessed using an online questionnaire adapted to teaching the force concept and Newton's law for 9th grade instruction. Based on the measurement of the quality of teaching scripts, the results provide evidence that TSPK influences the quality of teaching scripts. Motivation and self-regulation moderate this influence.
Turan, Bulent
2016-01-01
People develop knowledge of interpersonal interaction patterns (e.g., prototypes and schemas), which shape how they process incoming information. One such knowledge structure based on attachment theory was examined: the secure base script (the prototypic sequence of events when an attachment figure comforts a close relationship partner in distress). In two studies (N = 53 and N = 119), participants were shown animated film clips in which geometric figures depicted the secure base script and asked to describe the animations. Both studies found that many people recognize the secure-base script from these minimal cues quite well, suggesting that this script is not only available in the context of specific relationships (i.e., as relationship-specific knowledge): The generalized (abstract) structure of the script is also readily accessible, which would make it possible to apply it to any relationship (including new relationships). Regression analyses suggested that participants who recognized the script were more likely to (a) include more animation elements when describing the animations, (b) see a common theme in different animations, (c) create better organized stories, and (d) later recall more details of the animations. These findings suggest that access to this knowledge structure helps a person organize and remember relevant incoming information. Furthermore, in both Study 1 and Study 2, individual differences in the ready recognition of the script were associated with individual differences in access to another related knowledge structure: indicators suggesting that a potential relationship partner can be trusted to be supportive and responsive at times of stress. Results of Study 2 also suggest that recognizing the script is associated with those items of an attachment measure that concern giving and receiving support. Thus, these knowledge structures may shape how people process support-relevant information in their everyday lives, potentially affecting relationship outcomes and mental and physical health.
Turan, Bulent
2016-01-01
People develop knowledge of interpersonal interaction patterns (e.g., prototypes and schemas), which shape how they process incoming information. One such knowledge structure based on attachment theory was examined: the secure base script (the prototypic sequence of events when an attachment figure comforts a close relationship partner in distress). In two studies (N = 53 and N = 119), participants were shown animated film clips in which geometric figures depicted the secure base script and asked to describe the animations. Both studies found that many people recognize the secure-base script from these minimal cues quite well, suggesting that this script is not only available in the context of specific relationships (i.e., as relationship-specific knowledge): The generalized (abstract) structure of the script is also readily accessible, which would make it possible to apply it to any relationship (including new relationships). Regression analyses suggested that participants who recognized the script were more likely to (a) include more animation elements when describing the animations, (b) see a common theme in different animations, (c) create better organized stories, and (d) later recall more details of the animations. These findings suggest that access to this knowledge structure helps a person organize and remember relevant incoming information. Furthermore, in both Study 1 and Study 2, individual differences in the ready recognition of the script were associated with individual differences in access to another related knowledge structure: indicators suggesting that a potential relationship partner can be trusted to be supportive and responsive at times of stress. Results of Study 2 also suggest that recognizing the script is associated with those items of an attachment measure that concern giving and receiving support. Thus, these knowledge structures may shape how people process support-relevant information in their everyday lives, potentially affecting relationship outcomes and mental and physical health. PMID:26973562
Toward cost-effective solar energy use.
Lewis, Nathan S
2007-02-09
At present, solar energy conversion technologies face cost and scalability hurdles in the technologies required for a complete energy system. To provide a truly widespread primary energy source, solar energy must be captured, converted, and stored in a cost-effective fashion. New developments in nanotechnology, biotechnology, and the materials and physical sciences may enable step-change approaches to cost-effective, globally scalable systems for solar energy use.
Near-line Archive Data Mining at the Goddard Distributed Active Archive Center
NASA Astrophysics Data System (ADS)
Pham, L.; Mack, R.; Eng, E.; Lynnes, C.
2002-12-01
NASA's Earth Observing System (EOS) is generating immense volumes of data, in some cases too much to provide to users with data-intensive needs. As an alternative to moving the data to the user and his/her research algorithms, we are providing a means to move the algorithms to the data. The Near-line Archive Data Mining (NADM) system is the Goddard Earth Sciences Distributed Active Archive Center's (GES DAAC) web data mining portal to the EOS Data and Information System (EOSDIS) data pool, a 50-TB online disk cache. The NADM web portal enables registered users to submit and execute data mining algorithm codes on the data in the EOSDIS data pool. A web interface allows the user to access the NADM system. Users first develop personalized data mining code on their home platforms and then upload it to the NADM system. The C, FORTRAN and IDL languages are currently supported. The user-developed code is automatically audited for any potential security problems before it is installed within the NADM system and made available to the user. Once the code has been installed, the user is provided a test environment where he/she can test the execution of the software against data sets of his/her choosing. When the user is satisfied with the results, he/she can promote the code to the "operational" environment. From here the user can interactively run the code on the data available in the EOSDIS data pool. The user can also set up a processing subscription, which will automatically process new data as they become available in the EOSDIS data pool. The generated mined data products are then made available for FTP pickup. The NADM system uses the GES DAAC-developed Simple, Scalable, Script-based Science Processor (S4P) to automate tasks and perform the actual data processing. Users also have the option of selecting a DAAC-provided data mining algorithm and using it to process the data of their choice.
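S4P itself is written in Perl and does far more, but the subscription flow described above can be sketched in miniature: poll the data pool for new granules, run the user's mining executable on each, and stage the output for FTP pickup. The directory names and the mine_granule command are placeholders, not actual NADM interfaces.

    import shutil, subprocess, time
    from pathlib import Path

    INCOMING = Path("datapool/incoming")    # hypothetical data pool directory
    PICKUP = Path("ftp/pickup")             # hypothetical FTP staging area
    PICKUP.mkdir(parents=True, exist_ok=True)
    seen = set()

    def poll_once():
        for granule in INCOMING.glob("*.hdf"):
            if granule.name in seen:
                continue
            seen.add(granule.name)
            out = granule.with_suffix(".mined")
            # Placeholder for the user's audited mining executable.
            subprocess.run(["./mine_granule", str(granule), str(out)], check=True)
            shutil.move(str(out), PICKUP / out.name)

    while True:                 # "subscription": new data processed as they arrive
        poll_once()
        time.sleep(60)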
Neural substrates of embodied natural beauty and social endowed beauty: An fMRI study.
Zhang, Wei; He, Xianyou; Lai, Siyan; Wan, Juan; Lai, Shuxian; Zhao, Xueru; Li, Darong
2017-08-02
What are the neural mechanisms underlying beauty based on objective parameters and beauty based on subjective social construction? This study scanned participants with fMRI while they performed aesthetic judgments on concrete pictographs and abstract oracle bone scripts. Behavioral results showed that both pictographs and oracle bone scripts were judged to be more beautiful when they referred to beautiful objects and positive social meanings, respectively. Imaging results revealed that regions associated with perceptual, cognitive, emotional and reward processing were commonly activated in beauty judgments of both pictographs and oracle bone scripts. Moreover, stronger activations of the orbitofrontal cortex (OFC) and motor-related areas were found in beauty judgments of pictographs, whereas beauty judgments of oracle bone scripts were associated with putamen activity, implying that the pictographs elicited a stronger aesthetic experience and an embodied approach response to beauty. In contrast, only visual processing areas were activated in the judgments of ugly pictographs and negative oracle bone scripts. Results provide evidence that the sense of beauty is triggered by two processes: one based on the objective parameters of stimuli (embodied natural beauty) and the other based on subjective social construction (social endowed beauty).
Primary relationship scripts among lower-income, African American young adults.
Eyre, Stephen L; Flythe, Michelle; Hoffman, Valerie; Fraser, Ashley E
2012-06-01
Research on romantic relationships among lower income, African American young adults has mostly focused on problem behaviors, and has infrequently documented nonpathological relationship processes that are widely studied among middle-class college students, their wealthier and largely European American counterparts [Journal of Black Studies 39 (2009) 570]. To identify nonpathological cultural concepts related to heterosexual romantic relationships, we interviewed 144 low to low-mid income, African American young adults aged 19-22 from the San Francisco Bay Area, CA, metropolitan Chicago, IL, and Greater Birmingham, AL. We identified 12 gender-shared scripts related to the romantic relationship in areas of (1) defining the relationship, (2) processes of joining, (3) maintaining balance, and (4) modulating conflict. Understanding romantic relationship scripts is important as successful romantic relationships are associated with improved mental and physical health among lower income individuals as compared with individuals without romantic partners [Social Science & Medicine 52 (2001) 1501]. © FPI, Inc.
Novel technology for treating individuals with aphasia and concomitant cognitive deficits.
Cherney, Leora R; Halper, Anita S
2008-01-01
This article describes three individuals with aphasia and concomitant cognitive deficits who used state-of-the-art computer software for training conversational scripts. Participants were assessed before and after 9 weeks of a computer script training program. For each participant, three individualized scripts were developed, recorded on the software, and practiced sequentially at home. Weekly meetings with the speech-language pathologist occurred to monitor practice and assess progress. Baseline and posttreatment scripts were audiotaped, transcribed, and compared to the target scripts for content, grammatical productivity, and rate of production of script-related words. Interviews were conducted at the conclusion of treatment. There was great variability in improvements across scripts, with two participants improving on two of their three scripts in measures of content, grammatical productivity, and rate of production of script-related words. One participant gained more than 5 points on the Aphasia Quotient of the Western Aphasia Battery. Five positive themes were consistently identified from exit interviews: increased verbal communication, improvements in other modalities and situations, communication changes noticed by others, increased confidence, and satisfaction with the software. Computer-based script training potentially may be an effective intervention for persons with chronic aphasia and concomitant cognitive deficits.
Quality Scalability Aware Watermarking for Visual Content.
Bhowmik, Deepayan; Abhayaratne, Charith
2016-11-01
Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by a new wavelet domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting watermark data from the watermarked images. The algorithm is further extended to incorporate a bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against the quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality scalable content adaptation. Our proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.
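The tree-guided, distortion-robustness-scalable code-stream is well beyond a few lines, but the underlying quantization-based blind embedding can be shown with a plain quantization-index-modulation (QIM) scheme on one wavelet subband. This is a generic illustration rather than the authors' algorithm; the Haar wavelet and the step size DELTA are arbitrary choices.

    import numpy as np
    import pywt

    DELTA = 8.0      # quantization step: larger means more robust, more distortion

    def embed(image, bits):
        LL, (LH, HL, HH) = pywt.dwt2(image.astype(float), "haar")
        flat = HL.ravel()
        for i, b in enumerate(bits):
            # Force the coefficient onto an even (bit 0) or odd (bit 1) multiple of DELTA.
            q = int(np.round(flat[i] / DELTA))
            if q % 2 != b:
                q += 1
            flat[i] = q * DELTA
        return pywt.idwt2((LL, (LH, flat.reshape(HL.shape), HH)), "haar")

    def extract(image, n_bits):
        _, (_, HL, _) = pywt.dwt2(image.astype(float), "haar")
        q = np.round(HL.ravel()[:n_bits] / DELTA)
        return [int(v) % 2 for v in q]

    img = np.random.default_rng(4).uniform(0, 255, (64, 64))
    print(extract(embed(img, [1, 0, 1, 1]), 4))    # round-trips to [1, 0, 1, 1]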
A Disciplined Architectural Approach to Scaling Data Analysis for Massive, Scientific Data
NASA Astrophysics Data System (ADS)
Crichton, D. J.; Braverman, A. J.; Cinquini, L.; Turmon, M.; Lee, H.; Law, E.
2014-12-01
Data collections across remote sensing and ground-based instruments in astronomy, Earth science, and planetary science are outpacing scientists' ability to analyze them. Furthermore, the distribution, structure, and heterogeneity of the measurements themselves pose challenges that limit the scalability of data analysis using traditional approaches. Methods for developing science data processing pipelines, distributing scientific datasets, and performing analysis will require innovative approaches that integrate cyber-infrastructure, algorithms, and data into more systematic approaches that can more efficiently compute and reduce data, particularly distributed data. This requires the integration of computer science, machine learning, statistics and domain expertise to identify scalable architectures for data analysis. The size of data returned from Earth Science observing satellites and the magnitude of data from climate model output are predicted to grow into the tens of petabytes, challenging current data analysis paradigms. The same kind of growth is present in astronomy and planetary science data. One of the major challenges in data science and related disciplines is defining new approaches to scaling systems and analysis in order to increase scientific productivity and yield. Specific needs include: 1) identification of optimized system architectures for analyzing massive, distributed data sets; 2) algorithms for systematic analysis of massive data sets in distributed environments; and 3) the development of software infrastructures that are capable of performing massive, distributed data analysis across a comprehensive data science framework. NASA/JPL has begun an initiative in data science to address these challenges. Our goal is to evaluate how scientific productivity can be improved through optimized architectural topologies that identify how to deploy and manage the access, distribution, computation, and reduction of massive, distributed data, while managing the uncertainties of scientific conclusions derived from such capabilities. This talk will provide an overview of JPL's efforts in developing a comprehensive architectural approach to data science.
ERIC Educational Resources Information Center
Groskreutz, Mark P.; Peters, Amy; Groskreutz, Nicole C.; Higbee, Thomas S.
2015-01-01
Children with developmental disabilities may engage in less frequent and more repetitious language than peers with typical development. Scripts have been used to increase communication by teaching one or more specific statements and then fading the scripts. In the current study, preschoolers with developmental disabilities experienced a novel…
The Use of Interactive Whiteboards in Teaching Non-Roman Scripts
ERIC Educational Resources Information Center
Tozcu, Anjel
2008-01-01
This study explores the use of the interactive whiteboards in teaching the non-Latin based orthographies of Hindi, Pashto, Dari, Persian (Farsi), and Hebrew. All these languages use non-roman scripts, and except for Hindi, they are cursive. Thus, letters within words are connected and for beginners the script may look quite complicated,…
A video depicting resuscitation did not impact upon patients' decision-making.
Richardson-Royer, Caitlin; Naqvi, Imran; Riffel, Christopher; Harvey, Lawrence; Smith, Domonique; Ayalew, Dagmawe; Motayar, Nasim; Amoateng-Adjepong, Yaw; Manthous, Constantine A
2018-01-01
Previous studies have demonstrated that video of and scripted information about cardiopulmonary resuscitation (CPR) can be deployed during clinician-patient end-of-life discussions. Few studies, however, examine whether video adds to verbal information-sharing. We hypothesized that video augments script-only decision-making. Patients aged >65 years admitted to hospital wards were randomized to receive evidence-based information ("script") vs. script plus video of simulated CPR and intubation. Patients' decisions registered in the hospital record, by hospital discharge were compared for the two groups. Fifty script-only intervention patients averaging 77.7 years were compared to 50 script+video patients with a mean age of 74.7 years. Eleven of 50 (22%) in each group declined CPR; and an additional three (script) vs. four (script+video) refused intubation for respiratory failure. There were no differences in sex, self-reported health trajectory, functional limitations, length of stay, or mortality associated with decisions. The rate at which verbally informed hospitalized elders opted out of resuscitation was not impacted by adding a video depiction of CPR.
Hussen, Sophia A; Bowleg, Lisa; Sangaramoorthy, Thurka; Malebranche, David J
2012-01-01
Black men in the USA experience disproportionately high rates of HIV infection, particularly in the Southeastern part of the country. We conducted 90 qualitative in-depth interviews with Black men living in the state of Georgia and analysed the transcripts using Sexual Script Theory to: (1) characterise the sources and content of sexual scripts that Black men were exposed to during their childhood and adolescence and (2) describe the potential influence of formative scripts on adult HIV sexual risk behaviour. Our analyses highlighted salient sources of cultural scenarios (parents, peers, pornography, sexual education and television), interpersonal scripts (early sex- play, older female partners, experiences of child abuse) and intrapsychic scripts that participants described. Stratification of participant responses based on sexual-risk behaviour revealed that lower- and higher-risk men described exposure to similar scripts during their formative years; however, lower-risk men reported an ability to cognitively process and challenge the validity of risk-promoting scripts that they encountered. Implications for future research are discussed.
Translating research findings into community based theatre: More than a dead man's wife.
Feldman, Susan; Hopgood, Alan; Dickins, Marissa
2013-12-01
Increasingly, qualitative scholars in health and social sciences are turning to innovative strategies as a way of translating research findings into informative, accessible and enjoyable forms for the community. The aim of this article is to describe how the research findings of a doctoral thesis - a narrative study about 58 older women's experiences of widowhood - were translated into a unique and professionally developed script to form the basis for a successful theatrical production that has travelled extensively within Australia. This article reports on the process of collaboration between a researcher, a highly regarded Australian actor/script writer and an ensemble of well-known and experienced professional actors. Together the collaborating partners translated the research data and findings about growing older and 'widowhood' into a high quality theatre production. In particular, we argue in this paper that research-based theatre is an appropriate medium for communicating research findings about important life issues of concern to older people in a safe, affirming and entertaining manner. By outlining the process of translating research findings into theatre we hope to show that there is a real value in this translation approach for both researcher and audience alike. © 2013.
Scalable real space pseudopotential density functional codes for materials in the exascale regime
NASA Astrophysics Data System (ADS)
Lena, Charles; Chelikowsky, James; Schofield, Grady; Biller, Ariel; Kronik, Leeor; Saad, Yousef; Deslippe, Jack
Real-space pseudopotential density functional theory has proven to be an efficient method for computing the properties of matter in many different states and geometries, including liquids, wires, slabs, and clusters with and without spin polarization. Fully self-consistent solutions using this approach have been routinely obtained for systems with thousands of atoms. Yet there are many systems of notably larger size where quantum mechanical accuracy is desired, but scalability proves to be a hindrance. Such systems include large biological molecules, complex nanostructures, or mismatched interfaces. We will present an overview of our new massively parallel algorithms, which offer improved scalability in preparation for exascale supercomputing. We will illustrate these algorithms by considering the electronic structure of a Si nanocrystal exceeding 10^4 atoms. Support provided by the SciDAC program, Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences. Grant Numbers DE-SC0008877 (Austin) and DE-FG02-12ER4 (Berkeley).
The NCAR Research Data Archive's Hybrid Approach for Data Discovery and Access
NASA Astrophysics Data System (ADS)
Schuster, D.; Worley, S. J.
2013-12-01
The NCAR Research Data Archive (RDA http://rda.ucar.edu) maintains a variety of data discovery and access capabilities for its 600+ dataset collections to support the varying needs of a diverse user community. In-house developed and standards-based community tools offer services to more than 10,000 users annually. By number of users the largest group is external and accesses the RDA through web-based protocols; the internal NCAR HPC users are fewer in number, but typically access more data volume. This paper will detail the data discovery and access services maintained by the RDA to support both user groups, and show metrics that illustrate how the community is using the services. The distributed search capability enabled by standards-based community tools, such as Geoportal and an OAI-PMH access point that serves multiple metadata standards, provides pathways for external users to initially discover RDA holdings. From here, in-house developed web interfaces leverage primary discovery-level metadata databases that support keyword and faceted searches. Internal NCAR HPC users, or those familiar with the RDA, may go directly to the dataset collection of interest and refine their search based on rich file collection metadata. Multiple levels of metadata have proven to be invaluable for discovery within terabyte-sized archives composed of many atmospheric or oceanic levels, hundreds of parameters, and often numerous grid and time resolutions. Once users find the data they want, their access needs may vary as well. A THREDDS data server running on targeted dataset collections enables remote file access through OPeNDAP and other web-based protocols, primarily for external users. In-house developed tools give all users the capability to submit data subset extraction and format conversion requests through scalable, HPC-based delayed-mode batch processing. Users can monitor their RDA-based data processing progress and receive instructions on how to access the data when it is ready. External users are provided with RDA server-generated scripts to download the resulting request output. Similarly, they can download native dataset collection files or partial files using Wget- or cURL-based scripts supplied by the RDA server. Internal users can access the resulting request output or native dataset collection files directly from centralized file systems.
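On the user side, the server-generated Wget or cURL scripts amount to fetching a list of URLs with retries. A rough Python equivalent is sketched below; the URL is hypothetical, and the authentication that real RDA downloads require is omitted.

    import time
    import urllib.request

    FILES = ["https://rda.ucar.edu/data/ds000.0/file1.nc"]   # hypothetical listing

    def fetch(url, retries=3):
        name = url.rsplit("/", 1)[-1]
        for attempt in range(retries):
            try:
                urllib.request.urlretrieve(url, name)
                return name
            except OSError:
                time.sleep(2 ** attempt)    # back off, then retry
        raise RuntimeError("failed to download " + url)

    for f in FILES:
        print("retrieved", fetch(f))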
Status of the JWST Science Instrument Payload
NASA Technical Reports Server (NTRS)
Greenhouse, Matt
2016-01-01
The James Webb Space Telescope (JWST) Integrated Science Instrument Module (ISIM) system consists of five sensors (4 science): Mid-Infrared Instrument (MIRI), Near Infrared Imager and Slitless Spectrograph (NIRISS), Fine Guidance Sensor (FGS), Near InfraRed Camera (NIRCam), Near InfraRed Spectrograph (NIRSpec); and nine instrument support systems: Optical metering structure system, Electrical Harness System; Harness Radiator System, ISIM Electronics Compartment, ISIM Remote Services Unit, Cryogenic Thermal Control System, Command and Data Handling System, Flight Software System, Operations Scripts System.
A scalable healthcare information system based on a service-oriented architecture.
Yang, Tzu-Hsiang; Sun, Yeali S; Lai, Feipei
2011-06-01
Many existing healthcare information systems are composed of a number of heterogeneous systems and face the important issue of system scalability. This paper first describes the comprehensive healthcare information systems used in National Taiwan University Hospital (NTUH) and then presents a service-oriented architecture (SOA)-based healthcare information system (HIS) based on the service standard HL7. The proposed architecture focuses on system scalability, in terms of both hardware and software. Moreover, we describe how scalability is implemented in rightsizing, service groups, databases, and hardware scalability. Although SOA-based systems sometimes display poor performance, a performance evaluation of our SOA-based HIS showed average response times for the outpatient, inpatient, and emergency HL7Central systems of 0.035, 0.04, and 0.036 s, respectively. The outpatient, inpatient, and emergency WebUI average response times are 0.79, 1.25, and 0.82 s. The scalability of the rightsizing project and our evaluation results provide evidence that the proposed SOA HIS can deliver system scalability and sustainability in a highly demanding healthcare information system.
Lifeomics leads the age of grand discoveries.
He, Fuchu
2013-03-01
When our knowledge of a field accumulates to a certain level, we are bound to see the rise of one or more great scientists. They will make a series of grand discoveries/breakthroughs and push the discipline into an 'age of grand discoveries'. Mathematics, geography, physics and chemistry have all experienced their ages of grand discoveries; and in life sciences, the age of grand discoveries has appeared countless times since the 16th century. Thanks to the ever-changing development of molecular biology over the past 50 years, contemporary life science is once again approaching its breaking point, and the trigger for this is most likely to be 'lifeomics'. At the end of the 20th century, genomics wrote out the 'script of life'; proteomics decoded the script; and RNAomics, glycomics and metabolomics came into bloom. These 'omics', with their unique epistemology and methodology, quickly became the thrust of life sciences, pushing the discipline to a new high. Lifeomics, which encompasses all omics, has taken shape and is now signalling the dawn of a new era, the age of grand discoveries.
Running Gaussian16 Software Jobs on the Peregrine System
Parallel setup is taken care of automatically, based on settings in the PBS script. Scratch space, on a filesystem called /dev/shm, is likewise set automatically by the example script. An example script for batch submission begins:
    #!/bin/bash
    #PBS -l nodes=2
    #PBS -l ...
ERIC Educational Resources Information Center
Turchik, Jessica A.; Probst, Danielle R.; Irvin, Clinton R.; Chau, Minna; Gidycz, Christine A.
2009-01-01
Although script theory has been applied to sexual assault (e.g., H. Frith & C. Kitzinger, 2001; A. S. Kahn, V. A. Andreoli Mathie, & C. Torgler, 1994), women's scripts of rape have not been examined in relation to predicting sexual victimization experiences. The purpose of the current study was to examine how elements of women's sexual assault…
Cultural scripts for a good death in Japan and the United States: similarities and differences.
Long, Susan Orpett
2004-03-01
Japan and the United States are both post-industrial societies, characterised by distinct trajectories of dying. Both contain multiple "cultural scripts" of the good death. Seale (Constructing Death: the Sociology of Dying and Bereavement, Cambridge University Press, Cambridge, 1998) has identified at least four "cultural scripts", or ways to die well, that are found in contemporary anglophone countries: modern medicine, revivalism, an anti-revivalist script and a religious script. Although these scripts can also be found in Japan, different historical experiences and religious traditions provide a context in which their content and interpretation sometimes differ from those of the anglophone countries. To understand ordinary people's ideas about dying well and dying poorly, we must recognise not only that post-industrial society offers multiple scripts and varying interpretive frameworks, but also that people actively select from among them in making decisions and explaining their views. Moreover, ideas and metaphors may be based on multiple scripts simultaneously or may offer different interpretations for different social contexts. Based on ethnographic fieldwork in both countries, this paper explores the metaphors that ordinary patients and caregivers draw upon as they use, modify, combine or ignore these cultural scripts of dying. Ideas about choice, time, place and personhood, elements of a good death that were derived inductively from interviews, are described. These Japanese and American data suggest somewhat different concerns and assumptions about human life and the relation of the person to the wider social world, but indicate similar concerns about the process of medicalised dying and the creation of meaning for those involved. While cultural differences do exist, they cannot be explained by reference to 'an American' and 'a Japanese' way to die. Rather, the process of creating and maintaining cultural scripts requires the active participation of ordinary people as they in turn respond to the constraints of post-industrial technology, institutions, demographics and notions of self.
Attachment to Mother and Father at Transition to Middle Childhood.
Di Folco, Simona; Messina, Serena; Zavattini, Giulio Cesare; Psouni, Elia
2017-01-01
The present study investigated concordance between representations of attachment to mother and attachment to father, and convergence between two narrative-based methods addressing these representations in middle childhood: the Manchester Child Attachment Story Task (MCAST) and the Secure Base Script Test (SBST). One hundred and twenty 6-year-old children were assessed by separate administrations of the MCAST for mother and father, respectively, and results showed concordance of representations of attachment to mother and attachment to father at age 6.5 years. Seventy-five children were additionally tested about 12 months later with the SBST, which assesses scripted knowledge of secure base (and safe haven) without differentiating between mother and father attachment relationships. Concerning attachment to father, dichotomous classifications (MCAST) and a continuous dimension capturing scripted secure base knowledge (MCAST) converged with secure base scriptedness (SBST), yet we could not show the same pattern of convergence concerning attachment to mother. Results suggest some convergence between the two narrative methods of assessing the secure base script but also highlight complications when using the MCAST to measure attachment to father in middle childhood.
XML-Based Visual Specification of Multidisciplinary Applications
NASA Technical Reports Server (NTRS)
Al-Theneyan, Ahmed; Jakatdar, Amol; Mehrotra, Piyush; Zubair, Mohammad
2001-01-01
The advancements in the Internet and Web technologies have fueled a growing interest in developing a web-based distributed computing environment. We have designed and developed Arcade, a web-based environment for designing, executing, monitoring, and controlling distributed heterogeneous applications, which is easy to use and access, portable, and provides support through all phases of the application development and execution. A major focus of the environment is the specification of heterogeneous, multidisciplinary applications. In this paper we focus on the visual and script-based specification interface of Arcade. The web/browser-based visual interface is designed to be intuitive to use and can also be used for visual monitoring during execution. The script specification is based on XML to: (1) make it portable across different frameworks, and (2) make the development of our tools easier by using the existing freely available XML parsers and editors. There is a one-to-one correspondence between the visual and script-based interfaces allowing users to go back and forth between the two. To support this we have developed translators that translate a script-based specification to a visual-based specification, and vice-versa. These translators are integrated with our tools and are transparent to users.
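The one-to-one correspondence between the visual and script-based interfaces implies that the XML specification enumerates modules and their connections in a form a translator can walk. The sketch below parses a made-up specification; the schema is hypothetical, not Arcade's actual format.

    import xml.etree.ElementTree as ET

    SPEC = """
    <application name="demo">   <!-- hypothetical schema, not Arcade's -->
      <module id="flow"   exec="flow_solver"/>
      <module id="struct" exec="struct_solver"/>
      <link from="flow" to="struct"/>
    </application>
    """

    root = ET.fromstring(SPEC)
    for m in root.findall("module"):       # what a visual canvas would render as boxes
        print("launch", m.get("id"), "->", m.get("exec"))
    for link in root.findall("link"):      # and as arrows between them
        print("connect", link.get("from"), "->", link.get("to"))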
Sparks, Lauren A; Trentacosta, Christopher J; Owusu, Erika; McLear, Caitlin; Smith-Darden, Joanne
2018-08-01
Secure attachment relationships have been linked to social competence in at-risk children. In the current study, we examined the role of parent secure base scripts in predicting at-risk kindergarteners' social competence. Parent representations of secure attachment were hypothesized to mediate the relationship between lower family cumulative risk and children's social competence. Participants included 106 kindergarteners and their primary caregivers recruited from three urban charter schools serving low-income families as a part of a longitudinal study. Lower levels of cumulative risk predicted greater secure attachment representations in parents, and scores on the secure base script assessment predicted children's social competence. An indirect relationship between lower cumulative risk and kindergarteners' social competence via parent secure base script scores was also supported. Parent script-based representations of the attachment relationship appear to be an important link between lower levels of cumulative risk and low-income kindergarteners' social competence. Implications of these findings for future interventions are discussed.
Groh, Ashley M; Roisman, Glenn I; Haydon, Katherine C; Bost, Kelly; McElwain, Nancy; Garcia, Leanna; Hester, Colleen
2015-11-01
This study examined the extent to which secure base script knowledge-reflected in the ability to generate narratives in which attachment-relevant events are encountered, a clear need for assistance is communicated, competent help is provided and accepted, and the problem is resolved-is associated with mothers' electrophysiological, subjective, and observed emotional responses to an infant distress vocalization. While listening to an infant crying, mothers (N = 108, M age = 34 years) lower on secure base script knowledge exhibited smaller shifts in relative left (vs. right) frontal EEG activation from rest, reported smaller reductions in feelings of positive emotion from rest, and expressed greater levels of tension. Findings indicate that lower levels of secure base script knowledge are associated with an organization of emotional responding indicative of a less flexible and more emotionally restricted response to infant distress. Discussion focuses on the contribution of mothers' attachment representations to their ability to effectively manage emotional responding to infant distress in a manner expected to support sensitive caregiving.
The iPlant collaborative: cyberinfrastructure for enabling data to discovery for the life sciences
USDA-ARS?s Scientific Manuscript database
The iPlant Collaborative provides life science research communities access to comprehensive, scalable, and cohesive computational infrastructure for data management; identity management; collaboration tools; and cloud, high-performance, high-throughput computing. iPlant provides training, learning m...
Editorial: from plant biotechnology to bio-based products.
Stöger, Eva
2013-10-01
From plant biotechnology to bio-based products - this Special Issue of Biotechnology Journal is dedicated to plant biotechnology and is edited by Prof. Eva Stöger (University of Natural Resources and Life Sciences, Vienna, Austria). The Special Issue covers a wide range of topics in plant biotechnology, including metabolic engineering of biosynthesis pathways in plants; taking advantage of the scalability of the plant system for the production of innovative materials; as well as the regulatory challenges and society acceptance of plant biotechnology. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Pedretti, Alessandro; Mazzolari, Angelica; Vistoli, Giulio
2018-05-21
The manuscript describes WarpEngine, a novel platform implemented within the VEGA ZZ suite of software for performing distributed simulations both in local and wide area networks. Despite being tailored for structure-based virtual screening campaigns, WarpEngine possesses the required flexibility to carry out distributed calculations utilizing various pieces of software, which can be easily encapsulated within this platform without changing their source codes. WarpEngine takes advantage of all cheminformatics features implemented in the VEGA ZZ program as well as of its largely customizable scripting architecture, thus allowing an efficient distribution of various time-demanding simulations. To offer an example of the WarpEngine potentials, the manuscript includes a set of virtual screening campaigns based on the ACE data set of the DUD-E collections using PLANTS as the docking application. Benchmarking analyses revealed a satisfactory linearity of the WarpEngine performances, the speed-up values being roughly equal to the number of utilized cores. Again, the computed scalability values emphasized that a vast majority (i.e., >90%) of the performed simulations benefit from the distributed platform presented here. WarpEngine can be freely downloaded along with the VEGA ZZ program at www.vegazz.net.
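Near-linear speed-up is what one expects when independent docking jobs are farmed out to workers. Below is a minimal single-host analogue using Python's multiprocessing; WarpEngine itself distributes over local and wide area networks, and the command here is an echo placeholder rather than a real PLANTS invocation (the real program is driven by a configuration file).

    from multiprocessing import Pool
    import subprocess

    LIGANDS = [f"lig_{i}.mol2" for i in range(100)]   # hypothetical input files

    def dock(ligand):
        # Placeholder for launching the docking program on one ligand.
        subprocess.run(["echo", "plants", "--ligand", ligand], check=True)
        return ligand

    if __name__ == "__main__":
        with Pool() as pool:                   # one worker per core by default
            for done in pool.imap_unordered(dock, LIGANDS):
                print("finished", done)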
Universal Plug-n-Play Sensor Integration for Advanced Navigation
2012-03-22
[Only table-of-contents fragments are recoverable: figure captions for AHRS orientation (top) and angular velocity (bottom) plots; execution of an AHRS script with roscore running on a separate machine, in single-host and two-host scenarios; and section headings for a component-based system using ROS, autonomous behavior using scripting, and udev.]
James D. Haywood; Finis Harris
2002-01-01
This presentation on prescribed burning is a cooperative effort of the USDA Forest Service, Southern Research Station and Kisatchie National Forest; Louisiana State University Agricultural Center; and the Joint Fire Science Program. The CD includes three methods of delivery: slides, PowerPoint presentation, and script only.
First Science Verification of the VLA Sky Survey Pilot
NASA Astrophysics Data System (ADS)
Cavanaugh, Amy
2017-01-01
My research involved analyzing test images produced by Steve Myers for the upcoming VLA Sky Survey. This survey will cover the entire sky visible from the VLA site in S band (2-4 GHz). The VLA will be in B configuration for the survey, as it was when the test images were produced, giving a resolution of approximately 2.5 arcseconds. Conducted using On-the-Fly mode, the survey will have a speed of approximately 20 deg^2 hr^-1 (including overhead). New Python imaging scripts are being developed and improved to process the VLASS images. My research consisted of comparing a continuum test image over S band (from the new imaging scripts) to two previous images of the same region of the sky (from the CNSS and FIRST surveys), as well as comparing the continuum image to single spectral windows (from the new imaging scripts and of the same sky region). By comparing our continuum test image to images from CNSS and FIRST, we tested On-the-Fly mode and the imaging script used to produce our images. Another goal was to test whether individual spectral windows could be used in combination to calculate spectral indices close to those produced over S band (based only on our continuum image). Our continuum image contained 64 sources, as opposed to the 99 sources found in the CNSS image. The CNSS image also had a lower noise level (0.095 mJy/beam compared to 0.119 mJy/beam). Additionally, when our continuum image was compared to the CNSS image, separation showed no dependence on total flux density (in our continuum image). At lower flux densities, sources in our image were brighter than the same ones in the CNSS image. When our continuum image was compared to the FIRST catalog, the spectral index difference showed no dependence on total flux (in our continuum image). In conclusion, the quality of our images did not completely match the quality of the CNSS and FIRST images. More work is needed in developing the new imaging scripts.
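The spectral-index comparison rests on the standard two-point estimate alpha = ln(S1/S2) / ln(nu1/nu2), where S is flux density and nu is frequency. A small sketch with illustrative numbers (not survey measurements):

    import numpy as np

    def spectral_index(s1, nu1, s2, nu2):
        """Two-point spectral index alpha, defined by S proportional to nu**alpha."""
        return np.log(s1 / s2) / np.log(nu1 / nu2)

    # Illustrative values only: 1.2 mJy at 2.5 GHz, 0.9 mJy at 3.5 GHz.
    print(spectral_index(1.2, 2.5, 0.9, 3.5))   # about -0.85, a typical steep spectrum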
Archival Research Capabilities of the WFIRST Data Set
NASA Astrophysics Data System (ADS)
Szalay, Alexander
WFIRST's unique combination of a large (~0.3 deg^2) field of view and HST-like angular resolution and sensitivity in the near infrared will produce spectacular new insights into the origins of stars, galaxies, and structure in the cosmos. We propose a WFIRST Archive Science Investigation Team (SIT-F) to define an archival, query, and analysis system that will enable scientific discovery in all relevant areas of astrophysics and maximize the overall scientific yield of the mission. Guest investigators (GIs), guest observers (GOs), the WFIRST SITs, WFIRST Science Center(s), and astronomers using data from other surveys will all benefit from the extensive, easy, fast and reliable use of the WFIRST archives. We propose to develop the science requirements for the archive and work to understand its interactions with other elements of the WFIRST mission. To accomplish this, we will conduct case studies to derive performance requirements for the WFIRST archives. These will clarify what is needed for GIs to make important scientific discoveries across a broad range of astrophysics. While other SITs will primarily address the science capabilities of the WFIRST instruments, we will look ahead to the science-enabling capabilities of the WFIRST archives. We will demonstrate how the archive can be optimized to take advantage of the extraordinary science capabilities of the WFIRST instruments as well as major space and ground observatories to maximize the science return of the mission. We will use the "20 queries" methodology, formulated by Jim Gray, to cover the most important science analysis patterns and use these to establish the performance required of the WFIRST archive. The case studies will be centered on studying galaxy evolution as a function of cosmic time, environment and intrinsic properties. The analyses will require massive angular and spatial cross correlations between key galaxy properties to search for new fundamental scaling relations that may only become apparent when exploring a database of 10^8 galaxies with multiband photometry and grism spectroscopy. The case studies will require (i) the creation of a unified WFIRST object catalog consisting of data cross-matched to external catalogs, (ii) an easy-to-access, scalable database, utilizing the latest data discovery and querying techniques, (iii) in situ analyses of large and/or complex data, (iv) identification of links to supporting data and enabling queries spanning WFIRST and other databases, (v) combining simulations with modeling software. To accomplish these objectives, we will prototype a system capable of executing complex user-defined scripts including database access to a shared computational facility with tools for joining WFIRST to other surveys, also enabling comparisons to physical models. Our organizational plan divides the work into several general areas where our team members have specific expertise: (a) apply the 20 queries methodology to derive performance and functionality requirements, (b) develop a practical interactive server-side query system, built on our SDSS experience, (c) apply advanced cross-matching techniques, (d) create mock WFIRST imaging and grism data, (e) develop high-level cross-correlation tools, (f) optimize scripting systems using high-level languages (iPython), (g) perform close integration of cosmological simulations with observational data, (h) apply advanced machine learning techniques.
Our efforts will be coordinated with the WFIRST Science Center (WSC), the other SITs, and the broader community in a manner consistent with direction and review of the Project Office. We will publish our results as milestones are reached, and issue progress reports on a regular basis. We will represent SIT-F at all relevant meetings including meetings of the other SITs (SITs A-E), and participate in "Big Data" conferences to interact with others in the field and learn new techniques that might be applicable to WFIRST.
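As one concrete form the cross-matching in item (c) could take, the sketch below matches two synthetic catalogs by sky position with astropy; the coordinates are random and the 1-arcsecond match radius is an arbitrary choice.

    import numpy as np
    import astropy.units as u
    from astropy.coordinates import SkyCoord

    rng = np.random.default_rng(1)
    cat1 = SkyCoord(ra=rng.uniform(0, 1, 50) * u.deg,
                    dec=rng.uniform(0, 1, 50) * u.deg)
    cat2 = SkyCoord(ra=rng.uniform(0, 1, 200) * u.deg,
                    dec=rng.uniform(0, 1, 200) * u.deg)

    idx, sep2d, _ = cat1.match_to_catalog_sky(cat2)   # nearest neighbor in cat2
    matched = sep2d < 1 * u.arcsec                    # keep pairs within 1 arcsec
    print(int(matched.sum()), "of", len(cat1), "sources matched")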
NASA Astrophysics Data System (ADS)
Kong, Fande; Cai, Xiao-Chuan
2017-07-01
Nonlinear fluid-structure interaction (FSI) problems on unstructured meshes in 3D appear in many applications in science and engineering, such as vibration analysis of aircrafts and patient-specific diagnosis of cardiovascular diseases. In this work, we develop a highly scalable, parallel algorithmic and software framework for FSI problems consisting of a nonlinear fluid system and a nonlinear solid system, that are coupled monolithically. The FSI system is discretized by a stabilized finite element method in space and a fully implicit backward difference scheme in time. To solve the large, sparse system of nonlinear algebraic equations at each time step, we propose an inexact Newton-Krylov method together with a multilevel, smoothed Schwarz preconditioner with isogeometric coarse meshes generated by a geometry preserving coarsening algorithm. Here "geometry" includes the boundary of the computational domain and the wet interface between the fluid and the solid. We show numerically that the proposed algorithm and implementation are highly scalable in terms of the number of linear and nonlinear iterations and the total compute time on a supercomputer with more than 10,000 processor cores for several problems with hundreds of millions of unknowns.
Kong, Fande; Cai, Xiao-Chuan
2017-03-24
Nonlinear fluid-structure interaction (FSI) problems on unstructured meshes in 3D appear in many applications in science and engineering, such as vibration analysis of aircrafts and patient-specific diagnosis of cardiovascular diseases. In this work, we develop a highly scalable, parallel algorithmic and software framework for FSI problems consisting of a nonlinear fluid system and a nonlinear solid system, that are coupled monolithically. The FSI system is discretized by a stabilized finite element method in space and a fully implicit backward difference scheme in time. To solve the large, sparse system of nonlinear algebraic equations at each time step, we propose an inexact Newton-Krylov method together with a multilevel, smoothed Schwarz preconditioner with isogeometric coarse meshes generated by a geometry preserving coarsening algorithm. Here "geometry" includes the boundary of the computational domain and the wet interface between the fluid and the solid. We show numerically that the proposed algorithm and implementation are highly scalable in terms of the number of linear and nonlinear iterations and the total compute time on a supercomputer with more than 10,000 processor cores for several problems with hundreds of millions of unknowns.
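The inexact Newton-Krylov idea, an outer Newton iteration whose linear solves are performed only approximately by a Krylov method, can be demonstrated on a toy problem with SciPy. The residual below, a 1D discrete Laplacian plus a cubic term, merely stands in for the coupled FSI residual; the paper's monolithic solver and multilevel Schwarz preconditioner are far more elaborate.

    import numpy as np
    from scipy.optimize import newton_krylov

    def residual(u):
        # Discrete Laplacian with homogeneous boundaries, plus a cubic reaction term.
        r = -2.0 * u
        r[:-1] += u[1:]
        r[1:] += u[:-1]
        return r + u**3 - 1.0

    u = newton_krylov(residual, np.zeros(50), method="lgmres")
    print("residual norm:", np.linalg.norm(residual(u)))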
The HARNESS Workbench: Unified and Adaptive Access to Diverse HPC Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sunderam, Vaidy S.
2012-03-20
The primary goal of the Harness WorkBench (HWB) project is to investigate innovative software environments that will help enhance the overall productivity of applications science on diverse HPC platforms. Two complementary frameworks were designed: first, a virtualized command toolkit for application building, deployment, and execution that provides a common view across diverse HPC systems, in particular the DOE leadership computing platforms (Cray, IBM, SGI, and clusters); and second, a unified runtime environment that consolidates access to runtime services via an adaptive framework for execution-time and post-processing activities. A prototype of the first was developed based on the concept of a 'system-call virtual machine' (SCVM), to enhance portability of the HPC application deployment process across heterogeneous high-end machines. The SCVM approach to portable builds is based on the insertion of toolkit-interpretable directives into original application build scripts. Modifications resulting from these directives preserve the semantics of the original build instruction flow. The execution of the build script is controlled by our toolkit, which intercepts build script commands in a manner transparent to the end-user. We have applied this approach to a scientific production code (GAMESS-US) on the Cray XT5 machine. The second facet, termed Unibus, aims to facilitate provisioning and aggregation of multifaceted resources from resource providers' and end-users' perspectives. To achieve that, Unibus proposes a Capability Model and mediators (resource drivers) to virtualize access to diverse resources, and soft and successive conditioning to enable automatic and user-transparent resource provisioning. A proof-of-concept implementation has demonstrated the viability of this approach on high-end machines, grid systems and computing clouds.
Landgraf, Steffen; von Treskow, Isabella
2017-01-01
Hardly any subjects enjoy greater – public or private – interest than the art of flirtation and seduction. However, interpersonal approach behavior not only paves the way for sexual interaction and reproduction, but it simultaneously integrates non-sexual psychobiological and cultural standards regarding consensus and social norms. In the present paper, we use script theory, a concept that extends across psychological and cultural science, to assess behavioral options during interpersonal approaches. Specifically, we argue that approaches follow scripted event sequences that entail ambivalence as an essential communicative element. On the one hand, ambivalence may facilitate interpersonal approaches by maintaining and provoking situational uncertainty, so that the outcome of an action – even after several approaches and dates – remains ambiguous. On the other hand, ambivalence may increase the risk of sexual aggression or abuse, depending on the individual's abilities, the circumstances, and the intentions of the interacting partners. Recognizing latent sequences of sexually aggressive behavior, in terms of their rigid structure and behavioral options, may thus enable individuals to use resources efficiently, avoid danger, and extricate themselves from assault situations. We conclude that interdisciplinary script knowledge about ambivalence as a core component of the seduction script may be helpful for counteracting subtly aggressive intentions and preventing sexual abuse. We discuss this with regard to the nature-nurture debate as well as phylogenetic and ontogenetic aspects of interpersonal approach behavior and its portrayal in the media. PMID:28119656
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suresh, Niraj; Stephens, Sean A.; Adams, Lexor
Plant roots play a critical role in plant-soil-microbe interactions that occur in the rhizosphere, as well as in processes with important implications for climate change and forest management. Quantitative size information on roots in their native environment is invaluable for studying root growth and environmental processes involving the plant. X-ray computed tomography (XCT) has been demonstrated to be an effective tool for in situ root scanning and analysis. Our group at the Environmental Molecular Sciences Laboratory (EMSL) has developed an XCT-based tool to image and quantitatively analyze plant root structures in their native soil environment. XCT data collected on a Prairie dropseed (Sporobolus heterolepis) specimen were used to visualize its root structure. A combination of the open-source software packages RooTrak and DDV was employed to segment the root from the soil and calculate its isosurface, respectively. Our own computer script, named 3DRoot-SV, was developed and used to calculate root volume and surface area from a triangular mesh. The process, utilizing a unique combination of tools from imaging to quantitative root analysis, including the 3DRoot-SV computer script, is described.
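The mesh quantities 3DRoot-SV reports follow from standard formulas: surface area is the sum of triangle areas, and volume follows from the divergence theorem as a sum of signed tetrahedra against the origin. A generic sketch (not the 3DRoot-SV code itself), assuming a closed, consistently outward-oriented mesh:

    import numpy as np

    def mesh_area_volume(vertices, faces):
        """vertices: (N, 3) floats; faces: (M, 3) vertex indices."""
        v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
        cross = np.cross(v1 - v0, v2 - v0)
        area = 0.5 * np.linalg.norm(cross, axis=1).sum()
        volume = abs(np.einsum("ij,ij->i", v0, np.cross(v1, v2)).sum()) / 6.0
        return area, volume

    # Check on a unit right tetrahedron: area ~2.366, volume ~0.1667.
    verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
    faces = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
    print(mesh_area_volume(verts, faces))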
Scalable Methods for Eulerian-Lagrangian Simulation Applied to Compressible Multiphase Flows
NASA Astrophysics Data System (ADS)
Zwick, David; Hackl, Jason; Balachandar, S.
2017-11-01
Multiphase flows can be found in countless areas of physics and engineering. Many of these flows can be classified as dispersed two-phase flows, meaning that there are solid particles dispersed in a continuous fluid phase. A common technique for simulating such flow is the Eulerian-Lagrangian method. While useful, this method can suffer from scaling issues on larger problem sizes that are typical of many realistic geometries. Here we present scalable techniques for Eulerian-Lagrangian simulations and apply it to the simulation of a particle bed subjected to expansion waves in a shock tube. The results show that the methods presented here are viable for simulation of larger problems on modern supercomputers. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1315138. This work was supported in part by the U.S. Department of Energy under Contract No. DE-NA0002378.
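A minimal 1D illustration of the Eulerian-Lagrangian coupling: the carrier-phase velocity lives on a grid, is interpolated to particle positions, and each particle relaxes toward the local fluid velocity through linear drag. All parameters are illustrative and unrelated to the shock-tube problem.

    import numpy as np

    nx, dt, tau = 64, 1e-3, 5e-3            # grid points, time step, particle response time
    x_grid = np.linspace(0.0, 1.0, nx)
    u_grid = np.sin(2 * np.pi * x_grid)     # frozen carrier velocity field (illustrative)

    xp = np.random.default_rng(2).uniform(0, 1, 1000)   # particle positions
    up = np.zeros_like(xp)                               # particle velocities

    for _ in range(100):
        u_at_p = np.interp(xp, x_grid, u_grid)   # Eulerian-to-Lagrangian interpolation
        up += dt * (u_at_p - up) / tau           # linear drag toward the fluid velocity
        xp = (xp + dt * up) % 1.0                # advect with periodic wrap
    print("mean particle speed:", float(np.abs(up).mean()))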
Graph Mining Meets the Semantic Web
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Sangkeun; Sukumar, Sreenivas R; Lim, Seung-Hwan
The Resource Description Framework (RDF) and SPARQL Protocol and RDF Query Language (SPARQL) were introduced about a decade ago to enable flexible schema-free data interchange on the Semantic Web. Today, data scientists use the framework as a scalable graph representation for integrating, querying, exploring and analyzing data sets hosted at different sources. With increasing adoption, the need for graph mining capabilities for the Semantic Web has emerged. We address that need through implementation of three popular iterative graph mining algorithms (triangle count, connected component analysis, and PageRank). We implement these algorithms as SPARQL queries, wrapped within Python scripts. We evaluate the performance of our implementation on six real-world data sets and show that graph mining algorithms (those that have a linear-algebra formulation) can indeed be unleashed on data represented as RDF graphs using the SPARQL query interface.
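The abstract describes expressing graph algorithms as SPARQL queries wrapped in Python scripts. As a minimal sketch of that pattern, the snippet below counts directed triangle matches over a toy RDF graph; rdflib as the query engine and the ex:knows predicate are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch: a graph mining primitive (triangle counting) written as a
# SPARQL query and driven from Python. rdflib and ex:knows are assumptions.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:a ex:knows ex:b . ex:b ex:knows ex:c . ex:c ex:knows ex:a .
""", format="turtle")

# Each directed 3-cycle is matched once per rotation, so one triangle
# yields three rows; divide the raw match count accordingly if needed.
TRIANGLES = """
SELECT (COUNT(*) AS ?n) WHERE {
  ?x ex:knows ?y . ?y ex:knows ?z . ?z ex:knows ?x .
}
"""
row = next(iter(g.query(TRIANGLES, initNs={"ex": EX})))
print("directed triangle matches:", int(row.n))  # 3 rotations of one cycle
```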
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harvey, Dustin Yewell
This document is a white paper marketing proposal for Echo™, a data analysis platform designed for efficient, robust, and scalable creation and execution of complex workflows. Echo’s analysis management system refers to the ability to track, understand, and reproduce workflows used for arriving at results and decisions. Echo improves on traditional scripted data analysis in MATLAB, Python, R, and other languages to allow analysts to make better use of their time. Additionally, the Echo platform provides a powerful data management and curation solution allowing analysts to quickly find, access, and consume datasets. After two years of development and a first release in early 2016, Echo is now available for use with many data types in a wide range of application domains. Echo provides tools that allow users to focus on data analysis and decisions with confidence that results are reported accurately.
Simulation for Dynamic Situation Awareness and Prediction III
2010-03-01
source Java™ library for capturing and sending network packets; 4) Groovy – an open-source, Java-based scripting language (version 1.6 or newer). Open...DMOTH Analyzer application. Groovy is an open-source dynamic scripting language for the Java Virtual Machine. It is consistent with Java syntax...between temperature, pressure, wind and relative humidity, and 3) a precipitation editing algorithm. The Editor can be used to prepare scripted changes
Derivation of an optimal directivity pattern for sweet spot widening in stereo sound reproduction
NASA Astrophysics Data System (ADS)
Ródenas, Josep A.; Aarts, Ronald M.; Janssen, A. J. E. M.
2003-01-01
In this paper the correction of the degradation of the stereophonic illusion during sound reproduction due to off-center listening is investigated. The main idea is that the directivity pattern of a loudspeaker array should have a well-defined shape such that a good stereo reproduction is achieved in a large listening area. Therefore, a mathematical description to derive an optimal directivity pattern, denoted opt, that achieves sweet spot widening in a large listening area for stereophonic sound applications is described. This optimal directivity pattern is based on parametrized time/intensity trading data coming from psycho-acoustic experiments within a wide listening area. After the study, the required digital FIR filters are determined by means of a least-squares optimization method for a given stereo base setup (two pairs of drivers for the loudspeaker arrays and 2.5-m distance between loudspeakers), which radiate sound in a broad range of listening positions in accordance with the derived opt pattern. Informal listening tests have shown that the opt pattern worked as predicted by the theoretical simulations. They also demonstrated the correct central sound localization for speech and music for a number of listening positions. This application is referred to as "Position-Independent (PI) stereo."
Illness script development in pre-clinical education through case-based clinical reasoning training
Keemink, Yvette; van Dijk, Savannah; ten Cate, Olle
2018-01-01
Objectives To assess illness script richness and maturity in preclinical students after they attended a specifically structured instructional format, i.e., a case based clinical reasoning (CBCR) course. Methods In a within-subject experimental design, medical students who had finished the CBCR course participated in an illness script experiment. In the first session, richness and maturity of students’ illness scripts for diseases discussed during the CBCR course were compared to illness script richness and maturity for similar diseases not included in the course. In the second session, diagnostic performance was tested, to test for differences between CBCR cases and non-CBCR cases. Scores on the CBCR course exam were related to both experimental outcomes. Results Thirty-two medical students participated. Illness script richness for CBCR diseases was almost 20% higher than for non-CBCR diseases, on average 14.47 (SD=3.25) versus 12.14 (SD=2.80), respectively (p<0.001). In addition, students provided more information on Enabling Conditions and less on Fault-related aspects of the disease. Diagnostic performance was better for the diseases discussed in the CBCR course, mean score 1.63 (SD=0.32) versus 1.15 (SD=0.29) for non-CBCR diseases (p<0.001). A significant correlation of exam results with recognition of CBCR cases was found (r=0.571, p<0.001), but not with illness script richness (r=–0.006, p=NS). Conclusions The CBCR-course fosters early development of clinical reasoning skills by increasing the illness script richness and diagnostic performance of pre-clinical students. However, these results are disease-specific and therefore we cannot conclude that students develop a more general clinical reasoning ability. PMID:29428911
Illness script development in pre-clinical education through case-based clinical reasoning training.
Keemink, Yvette; Custers, Eugene J F M; van Dijk, Savannah; Ten Cate, Olle
2018-02-09
To assess illness script richness and maturity in preclinical students after they attended a specifically structured instructional format, i.e., a case based clinical reasoning (CBCR) course. In a within-subject experimental design, medical students who had finished the CBCR course participated in an illness script experiment. In the first session, richness and maturity of students' illness scripts for diseases discussed during the CBCR course were compared to illness script richness and maturity for similar diseases not included in the course. In the second session, diagnostic performance was tested, to test for differences between CBCR cases and non-CBCR cases. Scores on the CBCR course exam were related to both experimental outcomes. Thirty-two medical students participated. Illness script richness for CBCR diseases was almost 20% higher than for non-CBCR diseases, on average 14.47 (SD=3.25) versus 12.14 (SD=2.80), respectively (p<0.001). In addition, students provided more information on Enabling Conditions and less on Fault-related aspects of the disease. Diagnostic performance was better for the diseases discussed in the CBCR course, mean score 1.63 (SD=0.32) versus 1.15 (SD=0.29) for non-CBCR diseases (p<0.001). A significant correlation of exam results with recognition of CBCR cases was found (r=0.571, p<0.001), but not with illness script richness (r=-0.006, p=NS). The CBCR-course fosters early development of clinical reasoning skills by increasing the illness script richness and diagnostic performance of pre-clinical students. However, these results are disease-specific and therefore we cannot conclude that students develop a more general clinical reasoning ability.
ERIC Educational Resources Information Center
Repenning, Alexander; Webb, David C.; Koh, Kyu Han; Nickerson, Hilarie; Miller, Susan B.; Brand, Catharine; Her Many Horses, Ian; Basawapatna, Ashok; Gluck, Fred; Grover, Ryan; Gutierrez, Kris; Repenning, Nadia
2015-01-01
An educated citizenry that participates in and contributes to science, technology, engineering, and mathematics innovation in the 21st century will require broad literacy and skills in computer science (CS). School systems will need to give increased attention to opportunities for students to engage in computational thinking and ways to promote a…
ERIC Educational Resources Information Center
Steele, Ryan D.; Waters, Theodore E. A.; Bost, Kelly K.; Vaughn, Brian E.; Truitt, Warren; Waters, Harriet S.; Booth-LaForce, Cathryn; Roisman, Glenn I.
2014-01-01
Based on a subsample (N = 673) of the NICHD Study of Early Child Care and Youth Development (SECCYD) cohort, this article reports data from a follow-up assessment at age 18 years on the antecedents of "secure base script knowledge", as reflected in the ability to generate narratives in which attachment-related difficulties are…
Earth science big data at users' fingertips: the EarthServer Science Gateway Mobile
NASA Astrophysics Data System (ADS)
Barbera, Roberto; Bruno, Riccardo; Calanducci, Antonio; Fargetta, Marco; Pappalardo, Marco; Rundo, Francesco
2014-05-01
The EarthServer project (www.earthserver.eu), funded by the European Commission under its Seventh Framework Program, aims at establishing open access and ad-hoc analytics on extreme-size Earth Science data, based on and extending leading-edge Array Database technology. The core idea is to use database query languages as the client/server interface to achieve barrier-free "mix & match" access to multi-source, any-size, multi-dimensional space-time data -- in short: "Big Earth Data Analytics" -- based on the open standards of the Open Geospatial Consortium Web Coverage Processing Service (OGC WCPS) and the W3C XQuery. EarthServer combines both, thereby achieving a tight data/metadata integration. Further, the rasdaman Array Database System (www.rasdaman.com) is extended with further space-time coverage data types. On the server side, highly effective optimizations - such as parallel and distributed query processing - ensure scalability to Exabyte volumes. In this contribution we will report on the EarthServer Science Gateway Mobile, an app for both iOS and Android-based devices that allows users to seamlessly access some of the EarthServer applications using SAML-based federated authentication and fine-grained authorisation mechanisms.
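EarthServer's client/server interface is the OGC WCPS query language. The sketch below shows what such a server-side analytics request can look like from Python; the endpoint URL and coverage name are assumptions for illustration, while the query syntax follows standard WCPS.

```python
# Minimal sketch of a WCPS "Big Earth Data Analytics" request: the server
# evaluates the query and returns only the (scalar) result. The endpoint
# and the AvgLandTemp coverage name are hypothetical examples.
import requests

WCPS_ENDPOINT = "http://example.org/rasdaman/ows"  # hypothetical server

# Average a coverage over a space-time subset, entirely server-side.
query = """
for c in (AvgLandTemp)
return avg(c[Lat(40:45), Long(-10:0), ansi("2014-01")])
"""

resp = requests.get(WCPS_ENDPOINT, params={
    "service": "WCS",
    "version": "2.0.1",
    "request": "ProcessCoverages",
    "query": query,
})
resp.raise_for_status()
print(resp.text)  # scalar result encoded as text
```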
Data Curation and Visualization for MuSIASEM Analysis of the Nexus
NASA Astrophysics Data System (ADS)
Renner, Ansel
2017-04-01
A novel software-based approach to relational analysis applying recent theoretical advancements of the Multi-Scale Integrated Analysis of Societal and Ecosystem Metabolism (MuSIASEM) accounting framework is presented. This research explores and explains underutilized ways software can assist complex system analysis across the stages of data collection, exploration, analysis, and dissemination, in a transparent and collaborative manner. This work is being conducted as part of, and in support of, the four-year European Commission H2020 project Moving Towards Adaptive Governance in Complexity: Informing Nexus Security (MAGIC). In MAGIC, theoretical advancements to MuSIASEM propose a powerful new approach to spatial-temporal WEFC relational analysis in accordance with a structural-functional scaling mechanism appropriate for biophysically relevant complex system analyses. The software is designed primarily with JavaScript using the Angular2 model-view-controller framework and the Data-Driven Documents (D3) library. These design choices clarify and modularize data flow, simplify the research practitioner's work, allow for and assist stakeholder involvement, and advance collaboration at all stages. Data requirements and scalable, robust yet light-weight structuring will first be explained. Following, algorithms to process this data will be explored. Data interfaces and data visualization approaches will lastly be presented and described.
Computational Thinking Patterns
ERIC Educational Resources Information Center
Ioannidou, Andri; Bennett, Vicki; Repenning, Alexander; Koh, Kyu Han; Basawapatna, Ashok
2011-01-01
The iDREAMS project aims to reinvent Computer Science education in K-12 schools, by using game design and computational science for motivating and educating students through an approach we call Scalable Game Design, starting at the middle school level. In this paper we discuss the use of Computational Thinking Patterns as the basis for our…
dREL: a relational expression language for dictionary methods.
Spadaccini, Nick; Castleden, Ian R; du Boulay, Doug; Hall, Sydney R
2012-08-27
The provision of precise metadata is an important but largely underrated challenge for modern science [Nature 2009, 461, 145]. We describe here a dictionary methods language, dREL, that has been designed to enable complex data relationships to be expressed as formulaic scripts in data dictionaries written in DDLm [Spadaccini and Hall, J. Chem. Inf. Model. 2012, doi:10.1021/ci300075z]. dREL describes data relationships in a simple but powerful canonical form that is easy to read and understand and can be executed computationally to evaluate or validate data. The execution of dREL expressions is not a substitute for traditional scientific computation; it is to provide precise data dependency information to domain-specific definitions and a means for cross-validating data. Some scientific fields apply conventional programming languages to methods scripts but these tend to inhibit both dictionary development and accessibility. dREL removes the programming barrier and encourages the production of the metadata needed for seamless data archiving and exchange in science.
Scalable Integrated Region-Based Image Retrieval Using IRM and Statistical Clustering.
ERIC Educational Resources Information Center
Wang, James Z.; Du, Yanping
Statistical clustering is critical in designing scalable image retrieval systems. This paper presents a scalable algorithm for indexing and retrieving images based on region segmentation. The method uses statistical clustering on region features and IRM (Integrated Region Matching), a measure developed to evaluate overall similarity between images…
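The indexing idea described above, clustering region features so retrieval only scores images from nearby clusters, can be sketched generically. The snippet below uses scikit-learn k-means as a stand-in; the features, data, and cluster count are illustrative assumptions, and IRM scoring itself is omitted.

```python
# Minimal sketch of cluster-based indexing for region-based retrieval:
# cluster region feature vectors so a query only needs to search the
# clusters nearest its own regions. All data here are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
region_features = rng.normal(size=(1000, 6))   # e.g. color/texture/shape stats
image_ids = rng.integers(0, 200, size=1000)    # region -> owning image

kmeans = KMeans(n_clusters=32, n_init=10, random_state=0).fit(region_features)

def candidate_images(query_regions):
    """Return images owning regions in the clusters nearest the query regions."""
    labels = kmeans.predict(query_regions)
    mask = np.isin(kmeans.labels_, labels)
    return set(image_ids[mask].tolist())

query = rng.normal(size=(4, 6))                # regions of a query image
print(len(candidate_images(query)), "candidate images to score with IRM")
```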
A Scalable Qubit Architecture Based on Holes in Quantum Dot Molecules
2012-09-26
OpenSesame: an open-source, graphical experiment builder for the social sciences.
Mathôt, Sebastiaan; Schreij, Daniel; Theeuwes, Jan
2012-06-01
In the present article, we introduce OpenSesame, a graphical experiment builder for the social sciences. OpenSesame is free, open-source, and cross-platform. It features a comprehensive and intuitive graphical user interface and supports Python scripting for complex tasks. Additional functionality, such as support for eyetrackers, input devices, and video playback, is available through plug-ins. OpenSesame can be used in combination with existing software for creating experiments.
GASPRNG: GPU accelerated scalable parallel random number generator library
NASA Astrophysics Data System (ADS)
Gao, Shuang; Peterson, Gregory D.
2013-04-01
Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to be able to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications. Catalogue identifier: AEOI_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: UTK license. No. of lines in distributed program, including test data, etc.: 167900 No. of bytes in distributed program, including test data, etc.: 1422058 Distribution format: tar.gz Programming language: C and CUDA. Computer: Any PC or workstation with NVIDIA GPU (Tested on Fermi GTX480, Tesla C1060, Tesla M2070). Operating system: Linux with CUDA version 4.0 or later. Should also run on MacOS, Windows, or UNIX. Has the code been vectorized or parallelized?: Yes. Parallelized using MPI directives. RAM: 512 MB ~ 732 MB (main memory on host CPU, depending on the data type of random numbers) / 512 MB (GPU global memory) Classification: 4.13, 6.5. Nature of problem: Many computational science applications are able to consume large numbers of random numbers. For example, Monte Carlo simulations are able to consume limitless random numbers for the computation as long as resources for the computing are supported. Moreover, parallel computational science applications require independent streams of random numbers to attain statistically significant results. The SPRNG library provides this capability, but at a significant computational cost. The GASPRNG library presented here accelerates the generators of independent streams of random numbers using graphical processing units (GPUs). Solution method: Multiple copies of random number generators in GPUs allow a computational science application to consume large numbers of random numbers from independent, parallel streams. GASPRNG is a random number generators library to allow a computational science application to employ multiple copies of random number generators to boost performance. Users can interface GASPRNG with software code executing on microprocessors and/or GPUs.
Running time: The tests provided take a few minutes to run.
A case study: The original intentions of the designers of the science content standards
NASA Astrophysics Data System (ADS)
Eucker, Penelope Hudson
This case study research examined the original intentions of the designers of the science content standards in the historical context of educational reforms and legislation. The content standards are the keystone of standards-based education. Originally, national science content standards were part of a cohesive program to increase the occurrence of quality science K-12. Through assessment policies set into motion by state and federal legislation, science curriculum is increasingly fixed and standardized. Scripting teachers is becoming more common. Unintended outcomes of standards-based education are prevalent in all classrooms. Recording the original intentions of the designers of the science content standards in a historical context is significant for documenting their beliefs and purposes. The shared beliefs of the six scholars included: (a) science had become an overstuffed curriculum with students learning very few concepts; (b) science teachers required assistance to decide which concepts are most important for students to learn; (c) standards-based education will most likely endure for a very long time; (d) science is a specific way of knowing and inquiry must be part of science instruction; (e) few teachers teach to the science content standards. The scholars disagreed about whether the power to decide what to teach had moved from the classroom to the legislators and whether standards-based education has preferentially helped some groups of students while diminishing the science education of others. Implications from the findings reveal the tension between a defined science content and the resultant assessment template that further trims the instructional range offered. The findings also foreshadow an increasing trend toward profits for testing companies as state and federal legislation increase mandated assessments. Significantly, educational research that clearly demonstrates that many pathways lead to educated students, such as the Eight-Year Study, was suppressed in favor of bipartisan-supported standards-based education. One of the stated goals of standards-based education was equity. With documented corrupted curriculum sometimes devoid of all science, equity remains an elusive goal. This research documents the original intentions of the designers of the science content standards. The story continues to unfold with new state and federal legislation as teachers attempt to teach the mandated content standards.
Scalable cluster administration - Chiba City I approach and lessons learned.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Navarro, J. P.; Evard, R.; Nurmi, D.
2002-07-01
Systems administrators of large clusters often need to perform the same administrative activity hundreds or thousands of times. Often such activities are time-consuming, especially the tasks of installing and maintaining software. By combining network services such as DHCP, TFTP, FTP, HTTP, and NFS with remote hardware control, cluster administrators can automate all administrative tasks. Scalable cluster administration addresses the following challenge: What systems design techniques can cluster builders use to automate cluster administration on very large clusters? We describe the approach used in the Mathematics and Computer Science Division of Argonne National Laboratory on Chiba City I, a 314-node Linux cluster; and we analyze the scalability, flexibility, and reliability benefits and limitations from that approach.
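The core pattern the paper automates, performing one administrative action across hundreds of nodes, can be sketched as a parallel command fan-out. The host names and the use of plain ssh below are illustrative assumptions; Chiba City's actual tooling also coordinated DHCP/TFTP/NFS services and remote hardware control.

```python
# Minimal sketch of the "same action on hundreds of nodes" pattern:
# fan one command out over many hosts in parallel and tally failures.
import subprocess
from concurrent.futures import ThreadPoolExecutor

HOSTS = [f"ccn{i}" for i in range(1, 315)]  # hypothetical 314-node cluster

def run(host, command):
    proc = subprocess.run(
        ["ssh", "-o", "BatchMode=yes", host, command],
        capture_output=True, text=True, timeout=60,
    )
    return host, proc.returncode

with ThreadPoolExecutor(max_workers=32) as pool:
    results = list(pool.map(lambda h: run(h, "uptime"), HOSTS))

failed = [h for h, rc in results if rc != 0]
print(f"{len(HOSTS) - len(failed)} ok, {len(failed)} failed: {failed[:5]}")
```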
Effects of script-based role play in cardiopulmonary resuscitation team training.
Chung, Sung Phil; Cho, Junho; Park, Yoo Seok; Kang, Hyung Goo; Kim, Chan Woong; Song, Keun Jeong; Lim, Hoon; Cho, Gyu Chong
2011-08-01
The purpose of this study is to compare the cardiopulmonary resuscitation (CPR) team dynamics and performance between a conventional simulation training group and a script-based training group. This was a prospective randomised controlled trial of educational intervention for CPR team training. Fourteen teams, each consisting of five members, were recruited. The conventional group (C) received training using a didactic lecture and simulation with debriefing, while the script group (S) received training using a resuscitation script. The team activity was evaluated with checklists both before and after 1 week of training. The videotaped simulated resuscitation events were compared in terms of team dynamics and performance aspects. Both groups showed significantly higher leadership scores after training (C: 58.2 ± 9.2 vs. 67.2 ± 9.5, p=0.007; S: 57.9 ± 8.1 vs. 65.4 ± 12.1, p=0.034). However, there were no significant improvements in performance scores in either group after training. There were no differences in the score improvement after training between the two groups in dynamics (C: 9.1 ± 12.6 vs. S: 7.4 ± 13.7, p=0.715), performance (C: 5.5 ± 11.4 vs. S: 4.7 ± 9.6, p=0.838) and total scores (C: 14.6 ± 20.1 vs. S: 12.2 ± 19.5, p=0.726). Script-based CPR team training resulted in comparable improvements in team dynamics scores compared with conventional simulation training. Resuscitation scripts may be used as an adjunct for CPR team training.
ERIC Educational Resources Information Center
Groh, Ashley M.; Roisman, Glenn I.
2009-01-01
This article examines the extent to which secure base script knowledge--as reflected in an adult's ability to generate narratives in which attachment-related threats are recognized, competent help is provided, and the problem is resolved--is associated with adults' autonomic and subjective emotional responses to infant distress and nondistress…
Pilot interaction with automated airborne decision making systems
NASA Technical Reports Server (NTRS)
Hammer, John M.; Wan, C. Yoon; Vasandani, Vijay
1987-01-01
The current research is focused on detection of human error and protection from its consequences. A program for monitoring pilot error by comparing pilot actions to a script was described. It dealt primarily with routine errors (slips) that occurred during checklist activity. The model to which operator actions were compared was a script. Current research is an extension along these two dimensions. The ORS fault detection aid uses a sophisticated device model rather than a script. The newer initiative, the model-based and constraint-based warning system, uses an even more sophisticated device model and is intended to prevent all types of error, not just slips or bad decisions.
History of Science and Science Museums
NASA Astrophysics Data System (ADS)
Faria, Cláudia; Guilherme, Elsa; Gaspar, Raquel; Boaventura, Diana
2015-10-01
The activities presented in this paper, which are addressed to elementary school students, are focused on the pioneering work of the Portuguese King Carlos I in oceanography and involve the exploration of the exhibits belonging to two different science museums, the Aquarium Vasco da Gama and the Maritime Museum. Students were asked to study fish adaptations to the deep sea through the exploration of a fictional story, based on historical data and on the work of the King, which served as a guiding script for all the subsequent tasks. In both museums, students had access to historical collections of organisms, oceanographic biological sampling instruments, fishing gear, and ships. They could also observe the characteristics and adaptations of diverse fish species characteristic of the deep sea. The present study aimed to analyse the impact of these activities on students' scientific knowledge, on their understanding of the nature of science, and on the development of transversal skills. The project was very popular among the students. The results obtained suggest that the activity promoted not only the understanding of scientific concepts, but also stimulated the development of knowledge about science itself and the construction of scientific knowledge, stressing the relevance of creating activities informed by the history of science. As a final remark we suggest that the partnership between elementary schools and museums should be seen as an educational project, in which the teacher has to assume a key mediating role between the school and the museums.
Scalability of grid- and subbasin-based land surface modeling approaches for hydrologic simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tesfa, Teklu K.; Ruby Leung, L.; Huang, Maoyi
2014-03-27
This paper investigates the relative merits of grid- and subbasin-based land surface modeling approaches for hydrologic simulations, with a focus on their scalability (i.e., abilities to perform consistently across a range of spatial resolutions) in simulating runoff generation. Simulations produced by the grid- and subbasin-based configurations of the Community Land Model (CLM) are compared at four spatial resolutions (0.125°, 0.25°, 0.5° and 1°) over the topographically diverse region of the U.S. Pacific Northwest. Using the 0.125° resolution simulation as the “reference”, statistical skill metrics are calculated and compared across simulations at 0.25°, 0.5° and 1° spatial resolutions of each modeling approach at basin and topographic region levels. Results suggest a significant scalability advantage for the subbasin-based approach compared to the grid-based approach for runoff generation. Basin level annual average relative errors of surface runoff at 0.25°, 0.5°, and 1° compared to 0.125° are 3%, 4%, and 6% for the subbasin-based configuration and 4%, 7%, and 11% for the grid-based configuration, respectively. The scalability advantages of the subbasin-based approach are more pronounced during winter/spring and over mountainous regions. The source of runoff scalability is found to be related to the scalability of major meteorological and land surface parameters of runoff generation. More specifically, the subbasin-based approach is more consistent across spatial scales than the grid-based approach in snowfall/rainfall partitioning, which is related to air temperature and surface elevation. Scalability of a topographic parameter used in the runoff parameterization also contributes to improved scalability of the rain-driven saturated surface runoff component, particularly during winter. Hence this study demonstrates the importance of spatial structure for multi-scale modeling of hydrological processes, with implications to surface heat fluxes in coupled land-atmosphere modeling.
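As a reading aid for the skill metric quoted above, the sketch below computes an annual-average relative error of a coarse-resolution runoff series against a reference run. The exact metric definition and the synthetic data are assumptions for illustration, not the study's code.

```python
# Minimal sketch: annual-average relative error of a coarse-resolution
# runoff field against the 0.125-degree "reference" run. Synthetic data
# stand in for model output.
import numpy as np

rng = np.random.default_rng(1)
runoff_ref = rng.gamma(2.0, 1.0, size=365)                     # daily runoff, reference
runoff_coarse = runoff_ref * rng.normal(1.0, 0.05, size=365)   # coarse-resolution run

def annual_relative_error(sim, ref):
    """|annual mean(sim) - annual mean(ref)| / annual mean(ref)."""
    return abs(sim.mean() - ref.mean()) / ref.mean()

err = annual_relative_error(runoff_coarse, runoff_ref)
print(f"annual average relative error: {err:.1%}")
```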
NASA Astrophysics Data System (ADS)
Waldman, Amy Sue
I. Protein structure is not easily predicted from the linear sequence of amino acids. An increased ability to create protein structures would allow researchers to develop new peptide-based therapeutics and materials, and would provide insights into the mechanisms of protein folding. Toward this end, we have designed and synthesized two-stranded antiparallel beta-sheet mimics containing conformationally biased scaffolds and semicarbazide, urea, and hydrazide linker groups that attach peptide chains to the scaffold. The mimics exhibited populations of intramolecularly hydrogen-bonded beta-sheet-like conformers as determined by spectroscopic techniques such as FTIR, ¹H NMR, and ROESY studies. During our studies, we determined that a urea-hydrazide beta-strand mimic was able to tightly hydrogen bond to peptides in an antiparallel beta-sheet-like configuration. Several derivatives of the urea-hydrazide beta-strand mimic were synthesized. Preliminary data by electron microscopy indicate that the beta-strand mimics have an effect on the folding of Alzheimer's Abeta peptide. These data suggest that the urea-hydrazide beta-strand mimics and related compounds may be developed into therapeutics which affect the folding of the Abeta peptide into neurotoxic aggregates. II. In recent years, there has been concern about the low level of science literacy and science interest among Americans. A declining interest in science impacts the abilities of people to make informed decisions about technology. To increase the interest in science among secondary students, we have developed the UCI Chemistry Outreach Program to High Schools. The Program features demonstration shows and discussions about chemistry in everyday life. The development and use of show scripts has enabled large numbers of graduate and undergraduate student volunteers to demonstrate chemistry to more than 12,000 local high school students. Teachers, students, and volunteers have expressed their enjoyment of the UCI Chemistry Outreach Program to High Schools.
Communicating Ocean Acidification
ERIC Educational Resources Information Center
Pope, Aaron; Selna, Elizabeth
2013-01-01
Participation in a study circle through the National Network of Ocean and Climate Change Interpretation (NNOCCI) project enabled staff at the California Academy of Sciences to effectively engage visitors on climate change and ocean acidification topics. Strategic framing tactics were used as staff revised the scripted Coral Reef Dive program,…
Agricultural Science Protects Our Environment.
ERIC Educational Resources Information Center
1967
Included are a 49-frame filmstrip and a script for narrating the presentation. The presentation is aimed at the secondary school level with an emphasis on how agricultural scientists investigate problems in farmland erosion, stream pollution, road building erosion problems, air pollution, farm pollution, pesticides, and insect control by biological…
Toward a Science of Cooperation.
ERIC Educational Resources Information Center
Newbern, Dianna; And Others
Scripted cooperative learning and individual learning of descriptive information were compared in a 2 x 2 factorial design with 104 undergraduates. Influenced by models of individual learning and cognition, differences were assessed in (1) information acquisition and retrieval, (2) the quality and quantity of recalled information, and (3) the…
Facilitating NASA Earth Science Data Processing Using Nebula Cloud Computing
NASA Astrophysics Data System (ADS)
Chen, A.; Pham, L.; Kempler, S.; Theobald, M.; Esfandiari, A.; Campino, J.; Vollmer, B.; Lynnes, C.
2011-12-01
Cloud Computing technology has been used to offer high-performance and low-cost computing and storage resources for both scientific problems and business services. Several cloud computing services have been implemented in the commercial arena, e.g. Amazon's EC2 & S3, Microsoft's Azure, and Google App Engine. There are also some research and application programs being launched in academia and governments to utilize Cloud Computing. NASA launched the Nebula Cloud Computing platform in 2008, which is an Infrastructure as a Service (IaaS) to deliver on-demand distributed virtual computers. Nebula users can receive required computing resources as a fully outsourced service. The NASA Goddard Earth Science Data and Information Service Center (GES DISC) migrated several of its applications to the Nebula as a proof of concept, including: a) the Simple, Scalable, Script-based Science Processor for Measurements (S4PM) for processing scientific data; b) the Atmospheric Infrared Sounder (AIRS) data process workflow for processing AIRS raw data; and c) the GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure (GIOVANNI) for online access to, analysis, and visualization of Earth science data. This work aims to evaluate the practicability and adaptability of the Nebula. The initial work focused on the AIRS data process workflow to evaluate the Nebula. The AIRS data process workflow consists of a series of algorithms being used to process raw AIRS level 0 data and output AIRS level 2 geophysical retrievals. Migrating the entire workflow to the Nebula platform is challenging, but practicable. After installing several supporting libraries and the processing code itself, the workflow is able to process AIRS data in a similar fashion to its current (non-cloud) configuration. We compared the performance of processing 2 days of AIRS level 0 data through level 2 using a Nebula virtual computer and a local Linux computer. The result shows that Nebula has significantly better performance than the local machine. Much of the difference was due to newer equipment in the Nebula than in the legacy computer, which is suggestive of a potential economic advantage beyond elastic power, i.e., access to up-to-date hardware vs. legacy hardware that must be maintained past its prime to amortize the cost. In addition to a trade study of advantages and challenges of porting complex processing to the cloud, a tutorial was developed to enable further progress in utilizing the Nebula for Earth Science applications and understanding better the potential for Cloud Computing in further data- and computing-intensive Earth Science research. In particular, highly bursty computing such as that experienced in the user-demand-driven Giovanni system may become more tractable in a Cloud environment. Our future work will continue to focus on migrating more GES DISC applications and instances, e.g. Giovanni instances, to the Nebula platform and on bringing mature migrated applications into operation on the Nebula.
Design of 3D simulation engine for oilfield safety training
NASA Astrophysics Data System (ADS)
Li, Hua-Ming; Kang, Bao-Sheng
2015-03-01
Aiming at the demand for rapid custom development of 3D simulation systems for oilfield safety training, this paper designs and implements a 3D simulation engine based on a script-driven method, a multi-layer structure, pre-defined entity objects, and high-level tools such as a scene editor, script editor, and program loader. A scripting language has been defined to control the system's progress, events, and operating results. A training teacher can use this engine to edit 3D virtual scenes, set the properties of entity objects, define the logic script of a task, and produce a 3D simulation training system without any programming skills. Through expanding entity classes, this engine can be quickly applied to other virtual training areas.
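The script-driven design can be illustrated with a toy interpreter in which scenario scripts spawn entities, set properties, and fire events. The command set and entity model below are hypothetical stand-ins, not the engine's actual scripting language (and the engine itself is not written in Python).

```python
# Minimal sketch of a script-driven simulation engine: a small command
# interpreter lets a training teacher drive scene entities and events
# without programming. Commands "spawn", "set", "trigger" are invented.
class Entity:
    def __init__(self, name):
        self.name, self.props = name, {}

class Engine:
    def __init__(self):
        self.entities = {}

    def execute(self, script):
        for line in script.strip().splitlines():
            cmd, *args = line.split()
            if cmd == "spawn":                 # spawn <name>
                self.entities[args[0]] = Entity(args[0])
            elif cmd == "set":                 # set <name> <key> <value>
                self.entities[args[0]].props[args[1]] = args[2]
            elif cmd == "trigger":             # trigger <event>
                print(f"event fired: {args[0]}")

engine = Engine()
engine.execute("""
spawn valve_3
set valve_3 state leaking
trigger gas_alarm
""")
print(engine.entities["valve_3"].props)        # {'state': 'leaking'}
```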
ERIC Educational Resources Information Center
Umemura, Tomotaka; Watanabe, Manami; Tazuke, Kohei; Asada-Hirano, Shintaro; Kudo, Shimpei
2018-01-01
The universality of secure base construct, which suggests that one's use of an attachment figure as a secure base from which to explore the environment is an evolutionary outcome, is one of the core ideas of attachment theory. However, this universality idea has been critiqued because exploration is not as valued in Japanese culture as it is in…
Toward a Data Scalable Solution for Facilitating Discovery of Science Resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weaver, Jesse R.; Castellana, Vito G.; Morari, Alessandro
Science is increasingly motivated by the need to process larger quantities of data. It is facing severe challenges in data collection, management, and processing, so much so that the computational demands of “data scaling” are competing with, and in many fields surpassing, the traditional objective of decreasing processing time. Example domains with large datasets include astronomy, biology, genomics, climate/weather, and material sciences. This paper presents a real-world use case in which we wish to answer queries provided by domain scientists in order to facilitate discovery of relevant science resources. The problem is that the metadata for these science resources is very large and is growing quickly, rapidly increasing the need for a data scaling solution. We propose a system – SGEM – designed for answering graph-based queries over large datasets on cluster architectures, and we report performance results for queries on the current RDESC dataset of nearly 1.4 billion triples, and on the well-known BSBM SPARQL query benchmark.
Integrated generation of complex optical quantum states and their coherent control
NASA Astrophysics Data System (ADS)
Roztocki, Piotr; Kues, Michael; Reimer, Christian; Romero Cortés, Luis; Sciara, Stefania; Wetzel, Benjamin; Zhang, Yanbing; Cino, Alfonso; Chu, Sai T.; Little, Brent E.; Moss, David J.; Caspani, Lucia; Azaña, José; Morandotti, Roberto
2018-01-01
Complex optical quantum states based on entangled photons are essential for investigations of fundamental physics and are the heart of applications in quantum information science. Recently, integrated photonics has become a leading platform for the compact, cost-efficient, and stable generation and processing of optical quantum states. However, on-chip sources are currently limited to basic two-dimensional (qubit) two-photon states, whereas scaling the state complexity requires access to states composed of several (>2) photons and/or exhibiting high photon dimensionality. Here we show that the use of integrated frequency combs (on-chip light sources with a broad spectrum of evenly-spaced frequency modes) based on high-Q nonlinear microring resonators can provide solutions for such scalable complex quantum state sources. In particular, by using spontaneous four-wave mixing within the resonators, we demonstrate the generation of bi- and multi-photon entangled qubit states over a broad comb of channels spanning the S, C, and L telecommunications bands, and control these states coherently to perform quantum interference measurements and state tomography. Furthermore, we demonstrate the on-chip generation of entangled high-dimensional (quDit) states, where the photons are created in a coherent superposition of multiple pure frequency modes. Specifically, we confirm the realization of a quantum system with at least one hundred dimensions. Moreover, using off-the-shelf telecommunications components, we introduce a platform for the coherent manipulation and control of frequency-entangled quDit states. Our results suggest that microcavity-based entangled photon state generation and the coherent control of states using accessible telecommunications infrastructure introduce a powerful and scalable platform for quantum information science.
The TeraShake Computational Platform for Large-Scale Earthquake Simulations
NASA Astrophysics Data System (ADS)
Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas
Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes for larger and larger regions. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM’s BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.
NASA Astrophysics Data System (ADS)
Chen, Chui-Zhen; Xie, Ying-Ming; Liu, Jie; Lee, Patrick A.; Law, K. T.
2018-03-01
Quantum anomalous Hall insulator/superconductor heterostructures have emerged as a competitive platform to realize topological superconductors with chiral Majorana edge states, as shown in recent experiments [He et al., Science 357, 294 (2017), 10.1126/science.aag2792]. However, chiral Majorana modes, being extended, cannot be used for topological quantum computation. In this work, we show that quasi-one-dimensional quantum anomalous Hall structures exhibit a large topological regime (much larger than in the two-dimensional case) which supports localized Majorana zero energy modes. The non-Abelian properties of a cross-shaped quantum anomalous Hall junction are shown explicitly by time-dependent calculations. We believe that the proposed quasi-one-dimensional quantum anomalous Hall structures can be easily fabricated for scalable topological quantum computation.
Architecture, Design, and Development of an HTML/JavaScript Web-Based Group Support System.
ERIC Educational Resources Information Center
Romano, Nicholas C., Jr.; Nunamaker, Jay F., Jr.; Briggs, Robert O.; Vogel, Douglas R.
1998-01-01
Examines the need for virtual workspaces and describes the architecture, design, and development of GroupSystems for the World Wide Web (GSWeb), an HTML/JavaScript Web-based Group Support System (GSS). GSWeb, an application interface similar to a Graphical User Interface (GUI), is currently used by teams around the world and relies on user…
Program Instrumentation and Trace Analysis
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Goldberg, Allen; Filman, Robert; Rosu, Grigore; Koga, Dennis (Technical Monitor)
2002-01-01
Several attempts have been made recently to apply techniques such as model checking and theorem proving to the analysis of programs. This can be seen as part of a current trend to analyze real software systems instead of just their designs. It includes our own effort to develop a model checker for Java, the Java PathFinder 1, one of the very first of its kind in 1998. However, model checking cannot handle very large programs without some kind of abstraction of the program. This paper describes a complementary, scalable technique to handle such large programs. Our interest is turned to the observation part of the equation: how much information can be extracted about a program from observing a single execution trace? It is our intention to develop a technology that can be applied automatically and to large full-size applications, with minimal modification to the code. We present a tool, Java PathExplorer (JPaX), for exploring execution traces of Java programs. The tool prioritizes scalability over completeness, and is directed towards detecting errors in programs, not towards proving correctness. One core element in JPaX is an instrumentation package that allows one to instrument Java bytecode files to log various events when executed. The instrumentation is driven by a user-provided script that specifies what information to log. Examples of instructions that such a script can contain are: 'report name and arguments of all called methods defined in class C, together with a timestamp'; 'report all updates to all variables'; and 'report all acquisitions and releases of locks'. In more complex instructions one can specify that certain expressions should be evaluated and even that certain code should be executed under various conditions. The instrumentation package can hence be seen as implementing aspect-oriented programming for Java, in the sense that one can add functionality to a Java program without explicitly changing the code of the original program; rather, one writes an aspect and compiles it into the original program using the instrumentation. Another core element of JPaX is an observation package that supports the analysis of the generated event stream. Two kinds of analysis are currently supported. In temporal analysis the execution trace is evaluated against formulae written in temporal logic. We have implemented a temporal logic evaluator on finite traces using the Maude rewriting system from SRI International, USA. Temporal logic is defined in Maude by giving its syntax as a signature and its semantics as rewrite equations. The resulting evaluator is extremely efficient and can handle event streams of hundreds of millions of events in a few minutes. Furthermore, the implementation is very succinct. The second form of event stream analysis supported is error pattern analysis, where an execution trace is analyzed using various error detection algorithms that can identify error-prone programming practices that may potentially lead to errors in other executions. Two such algorithms focusing on concurrency errors have been implemented in JPaX, one for deadlocks and the other for data races. It is important to note that a deadlock or data race does not need to occur in order for its potential to be detected with these algorithms. This is what makes them very scalable in practice. The data race algorithm implemented is the Eraser algorithm from Compaq, adapted to Java.
The tool is currently being applied to a code base for controlling a spacecraft by the developers of that software in order to evaluate its applicability.
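The race-detection component named above is based on the Eraser lockset algorithm. A minimal sketch of the lockset idea follows, ignoring Eraser's initialization and read-sharing state machine; the trace events are invented for illustration.

```python
# Minimal sketch of the Eraser lockset idea: for each shared variable,
# intersect the set of locks held at every access; an empty intersection
# flags a potential data race even if no race occurred on this run.
candidate_locks = {}  # shared variable -> candidate lockset (set of lock names)

def on_access(var, locks_held):
    prev = candidate_locks.get(var)
    candidate_locks[var] = set(locks_held) if prev is None else prev & set(locks_held)
    if not candidate_locks[var]:
        print(f"potential race on {var!r}: no lock is held on every access")

# Replay a logged trace of (variable, locks held) events:
on_access("balance", {"L1"})        # thread A holds L1
on_access("balance", {"L1", "L2"})  # thread B also holds L1 -> ok
on_access("balance", {"L2"})        # thread C holds only L2 -> race flagged
```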
Trumbell, Jill M; Hibel, Leah C; Mercado, Evelyn; Posada, Germán
2018-06-21
The current study examines associations between marital conflict and negative parenting behaviors among fathers and mothers, and the extent to which internal working models (IWMs) of attachment relationships may serve as sources of risk or resilience during family interactions. The sample consisted of 115 families (mothers, fathers, and their 6-month-old infants) who participated in a controlled experiment. Couples were randomly assigned to engage in either a conflict or positive marital discussion, followed by parent-infant freeplay sessions and assessment of parental IWMs of attachment (i.e., secure base script knowledge). While no differences in parenting behaviors emerged between the conflict and positive groups, findings revealed that couple withdrawal during the marital discussion was related to more intrusive and emotionally disengaged parenting for mothers and fathers. Interestingly, secure base script knowledge was inversely related to intrusion and emotional disengagement for fathers, but not for mothers. Furthermore, only among fathers did secure base script knowledge serve to significantly buffer the impact of marital disengagement on negative parenting (emotional disengagement). Findings are discussed using a family systems framework and expand our understanding of families, and family members, at risk. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
EAGLE: EAGLE 'Is an' Algorithmic Graph Library for Exploration
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-01-16
The Resource Description Framework (RDF) and SPARQL Protocol and RDF Query Language (SPARQL) were introduced about a decade ago to enable flexible schema-free data interchange on the Semantic Web. Today data scientists use the framework as a scalable graph representation for integrating, querying, exploring and analyzing data sets hosted at different sources. With increasing adoption, the need for graph mining capabilities for the Semantic Web has emerged. Today there are no tools to conduct "graph mining" on RDF standard data sets. We address that need through implementation of popular iterative graph mining algorithms (triangle count, connected component analysis, degree distribution, diversity degree, PageRank, etc.). We implement these algorithms as SPARQL queries, wrapped within Python scripts, and call our software tool EAGLE. In RDF style, EAGLE stands for "EAGLE 'Is an' algorithmic graph library for exploration." EAGLE is like 'MATLAB' for 'Linked Data.'
Data Visualization Using Immersive Virtual Reality Tools
NASA Astrophysics Data System (ADS)
Cioc, Alexandru; Djorgovski, S. G.; Donalek, C.; Lawler, E.; Sauer, F.; Longo, G.
2013-01-01
The growing complexity of scientific data poses serious challenges for effective visualization. Data sets, e.g., catalogs of objects detected in sky surveys, can have a very high dimensionality, ~ 100 - 1000. Visualizing such hyper-dimensional data parameter spaces is essentially impossible, but there are ways of visualizing up to ~ 10 dimensions in a pseudo-3D display. We have been experimenting with the emerging technologies of immersive virtual reality (VR) as a platform for scientific, interactive, collaborative data visualization. Our initial experiments used the virtual world of Second Life, and more recently VR worlds based on its open source code, OpenSimulator. There we can visualize up to ~ 100,000 data points in ~ 7 - 8 dimensions (3 spatial and others encoded as shapes, colors, sizes, etc.), in an immersive virtual space where scientists can interact with their data and with each other. We are now developing a more scalable visualization environment using the popular (practically an emerging standard) Unity 3D Game Engine, coded using C#, JavaScript, and the Unity Scripting Language. This visualization tool can be used through a standard web browser, or a standalone browser of its own. Rather than merely plotting data points, the application creates interactive three-dimensional objects whose shapes, colors, sizes, and XYZ positions encode various dimensions of the parameter space and can be assigned interactively. Multiple users can navigate through this data space simultaneously, either with their own, independent vantage points, or with a shared view. At this stage ~ 100,000 data points can be easily visualized within seconds on a simple laptop. The displayed data points can contain linked information; e.g., upon clicking on a data point, a webpage with additional information can be rendered within the 3D world. A range of functionalities has already been deployed, and more are being added. We expect to make this visualization tool freely available to the academic community within a few months, on an experimental (beta testing) basis.
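The encoding strategy described above, mapping extra data dimensions onto glyph shape, color, size, and transparency, can be sketched as a plain mapping function. The channel assignments below are arbitrary illustrations, not the tool's actual scheme.

```python
# Minimal sketch: project a high-dimensional record onto XYZ position plus
# shape, color, size, and alpha channels, giving ~7 dimensions per glyph.
import numpy as np

SHAPES = ["sphere", "cube", "cone", "torus"]

def encode_point(row):
    """row: 7 normalized values in [0, 1) -> one displayable glyph."""
    return {
        "position": tuple(row[:3]),                       # dims 1-3
        "shape": SHAPES[int(row[3] * len(SHAPES))],       # dim 4
        "color": (row[4], 0.2, 1.0 - row[4]),             # dim 5 -> blue-red
        "size": 0.5 + row[5],                             # dim 6
        "alpha": row[6],                                  # dim 7
    }

rng = np.random.default_rng(7)
glyphs = [encode_point(r) for r in rng.random((100000, 7))]
print(glyphs[0])
```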
Marek, A; Blum, V; Johanni, R; Havu, V; Lang, B; Auckenthaler, T; Heinecke, A; Bungartz, H-J; Lederer, H
2014-05-28
Obtaining the eigenvalues and eigenvectors of large matrices is a key problem in electronic structure theory and many other areas of computational science. The computational effort formally scales as O(N³) with the size of the investigated problem, N (e.g. the electron count in electronic structure theory), and thus often defines the system size limit that practical calculations cannot overcome. In many cases, more than just a small fraction of the possible eigenvalue/eigenvector pairs is needed, so that iterative solution strategies that focus only on a few eigenvalues become ineffective. Likewise, it is not always desirable or practical to circumvent the eigenvalue solution entirely. We here review some current developments regarding dense eigenvalue solvers and then focus on the Eigenvalue soLvers for Petascale Applications (ELPA) library, which facilitates the efficient algebraic solution of symmetric and Hermitian eigenvalue problems for dense matrices that have real-valued and complex-valued matrix entries, respectively, on parallel computer platforms. ELPA addresses standard as well as generalized eigenvalue problems, relying on the well-documented matrix layout of the Scalable Linear Algebra PACKage (ScaLAPACK) library but replacing all actual parallel solution steps with subroutines of its own. For these steps, ELPA significantly outperforms the corresponding ScaLAPACK routines and proprietary libraries that implement the ScaLAPACK interface (e.g. Intel's MKL). The most time-critical step is the reduction of the matrix to tridiagonal form and the corresponding backtransformation of the eigenvectors. ELPA offers both a one-step tridiagonalization (successive Householder transformations) and a two-step transformation that is more efficient especially towards larger matrices and larger numbers of CPU cores. ELPA is based on the MPI standard, with an early hybrid MPI-OpenMP implementation available as well. Scalability beyond 10,000 CPU cores for problem sizes arising in the field of electronic structure theory is demonstrated for current high-performance computer architectures such as Cray or Intel/Infiniband. For a matrix of dimension 260,000, scalability up to 295,000 CPU cores has been shown on BlueGene/P.
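As a serial analogue of the problem ELPA solves at scale, the sketch below solves a dense generalized symmetric eigenproblem A x = λ B x with SciPy; this stands in for the distributed ELPA/ScaLAPACK path and uses none of ELPA's actual API. The matrices are synthetic stand-ins for a Hamiltonian and an overlap matrix.

```python
# Minimal serial analogue of the generalized symmetric eigenproblem
# A x = lambda S x that ELPA solves on parallel machines.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(42)
n = 500
M = rng.normal(size=(n, n))
A = (M + M.T) / 2                      # symmetric "Hamiltonian"
S = M @ M.T + n * np.eye(n)            # symmetric positive-definite "overlap"

evals, evecs = eigh(A, S)              # all eigenpairs, as in dense solvers
print(evals[:5])                       # lowest eigenvalues

# Residual check: ||A x - lambda S x|| should be tiny.
r = A @ evecs[:, 0] - evals[0] * (S @ evecs[:, 0])
print(np.linalg.norm(r))
```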
Using the Generic Mapping Tools From Within the MATLAB, Octave and Julia Computing Environments
NASA Astrophysics Data System (ADS)
Luis, J. M. F.; Wessel, P.
2016-12-01
The Generic Mapping Tools (GMT) is a widely used software infrastructure tool set for analyzing and displaying geoscience data. Its power to analyze and process data and produce publication-quality graphics has made it one of several standard processing toolsets used by a large segment of the Earth and Ocean Sciences. GMT's strengths lie in superior publication-quality vector graphics, geodetic-quality map projections, robust data processing algorithms scalable to enormous data sets, and ability to run under all common operating systems. The GMT tool chest offers over 120 modules sharing a common set of command options, file structures, and documentation. GMT modules are command line tools that accept input and write output, and this design allows users to write scripts in which one module's output becomes another module's input, creating highly customized GMT workflows. With the release of GMT 5, these modules are high-level functions with a C API, potentially allowing users access to high-level GMT capabilities from any programmable environment. Many scientists who use GMT also use other computational tools, such as MATLAB® and its clone Octave. We have built a MATLAB/Octave interface on top of the GMT 5 C API. Thus, MATLAB or Octave now has full access to all GMT modules as well as fundamental input/output of GMT data objects via a MEX function. Internally, the GMT/MATLAB C API defines six high-level composite data objects that handle input and output of data via individual GMT modules. These are data tables, grids, text tables (text/data mixed records), color palette tables, raster images (1-4 color bands), and PostScript. The API is responsible for translating between the six GMT objects and the corresponding native MATLAB objects. References to data arrays are passed if transposing of matrices is not required. The GMT and MATLAB/Octave combination is extremely flexible, letting the user harvest the general numerical and graphical capabilities of both systems, and represents a giant step forward in interoperability between GMT and other software packages. We will present examples of the symbiotic benefits of combining these platforms. Two other extensions are also in the works: a nearly finished Julia wrapper and an embryonic Python module. Publication supported by FCT project UID/GEO/50019/2013 - Instituto D. Luiz
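The module-chaining style the wrappers build on can be shown by shelling out to GMT 5 classic-mode modules from a script, one module's PostScript output feeding the next. The region, projection, and data values below are arbitrary illustrations, and the "gmt" executable is assumed to be on PATH.

```python
# Minimal sketch of scripted GMT module chaining: pscoast opens the plot
# (-K keeps the PostScript open), psxy overlays points (-O continues it).
import subprocess

def gmt(*args, stdin=b""):
    proc = subprocess.run(["gmt", *args], input=stdin,
                          stdout=subprocess.PIPE, check=True)
    return proc.stdout

# Coastline map of the Azores region, then a symbol overlay from stdin.
ps = gmt("pscoast", "-R-32/-24/36/41", "-JM15c", "-Ba2", "-W0.5p", "-K")
pts = b"-28.6 38.5\n-25.7 37.7\n"
ps += gmt("psxy", "-R", "-J", "-Sc0.2c", "-Gred", "-O", stdin=pts)
open("map.ps", "wb").write(ps)
```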
[About healing with nature and about love for the patient].
Schipperges, H
1994-03-29
In the light of a nature-centered philosophy and its image of man and the universe, the medical science of Paracelsus appears 'enlightened by nature'. Based on this experience, the physician grounds his medical art on the 'four pillars' ('Philosophia', 'Astronomia', 'Alchimia', 'Physica'). Acting out of the healing power of nature would be incomplete if 'Physica' were not accompanied by 'Virtus', the ethical component of all medical action. Paracelsus describes this through the image of mercy ('Barmherzigkeit'), in which all medical action finds its ultimate motivation. The text therefore concentrates in particular on what Paracelsus describes as 'love for the patient'.
Acquisition and Maintenance of Scripts in Aphasia: A Comparison of Two Cuing Conditions
Cherney, Leora R.; Kaye, Rosalind C.; van Vuuren, Sarel
2014-01-01
Purpose This study was designed to compare acquisition and maintenance of scripts under two conditions: High Cue which provided numerous multimodality cues designed to minimize errors, and Low Cue which provided minimal cues. Methods In a randomized controlled cross-over study, eight individuals with chronic aphasia received intensive computer-based script training under two cuing conditions. Each condition lasted three weeks, with a three-week washout period. Trained and untrained scripts were probed for accuracy and rate at baseline, during treatment, immediately post-treatment, and at three and six weeks post-treatment. Significance testing was conducted on gain scores and effect sizes were calculated. Results Training resulted in significant gains in script acquisition with maintenance of skills at three and six weeks post-treatment. Differences between cuing conditions were not significant. When severity of aphasia was considered, there also were no significant differences between conditions, although magnitude of change was greater in the High Cue condition versus the Low Cue condition for those with more severe aphasia. Conclusions Both cuing conditions were effective in acquisition and maintenance of scripts. The High Cue condition may be advantageous for those with more severe aphasia. Findings support the clinical use of script training and importance of considering aphasia severity. PMID:24686911
Handwritten numeral databases of Indian scripts and multistage recognition of mixed numerals.
Bhattacharya, Ujjwal; Chaudhuri, B B
2009-03-01
This article primarily concerns the problem of isolated handwritten numeral recognition for major Indian scripts. The principal contributions presented here are (a) the pioneering development of two databases for handwritten numerals of the two most popular Indian scripts, (b) a multistage cascaded recognition scheme using wavelet-based multiresolution representations and multilayer perceptron classifiers, and (c) the application of (b) to the recognition of mixed handwritten numerals of three scripts: Devanagari, Bangla, and English. The present databases include respectively 22,556 and 23,392 handwritten isolated numeral samples of Devanagari and Bangla collected from real-life situations, and these can be made available free of cost to researchers of other academic institutions. In the proposed scheme, a numeral is subjected to three multilayer perceptron classifiers corresponding to three coarse-to-fine resolution levels in a cascaded manner. If rejection occurs even at the highest resolution, another multilayer perceptron is used as the final attempt to recognize the input numeral by combining the outputs of the three classifiers of the previous stages. This scheme has been extended to situations where the script of a document is not known a priori or where the numerals written on a document belong to different scripts. Handwritten numerals in mixed scripts are frequently found in Indian postal mails and table-form documents.
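A minimal sketch of the coarse-to-fine cascade with rejection might look as follows; the three classifiers and the final combiner are assumed objects exposing a `predict_proba` method that returns a 1-D probability vector, standing in for the paper's wavelet-feature multilayer perceptrons.

```python
# Sketch of the cascaded recognition scheme described above: try
# classifiers from coarsest to finest resolution, accept the first
# confident answer, otherwise fall back to a final combining stage.
import numpy as np

def cascade_predict(x, classifiers, thresholds, combiner):
    """classifiers/combiner: assumed models returning 1-D probability
    vectors over the 10 numeral classes via predict_proba."""
    outputs = []
    for clf, thr in zip(classifiers, thresholds):
        probs = clf.predict_proba(x)
        outputs.append(probs)
        if probs.max() >= thr:            # confident: stop early
            return int(np.argmax(probs))
    # Rejected at every resolution: a final classifier combines the
    # three probability vectors, as in the paper's last stage.
    fused = combiner.predict_proba(np.concatenate(outputs))
    return int(np.argmax(fused))
```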
An Idealized, Single Radial Swirler, Lean-Direct-Injection (LDI) Concept Meshing Script
NASA Technical Reports Server (NTRS)
Iannetti, Anthony C.; Thompson, Daniel
2008-01-01
To easily study combustor design parameters using computational fluid dynamics codes (CFD), a Gridgen Glyph-based macro (based on the Tcl scripting language) dubbed BladeMaker has been developed for the meshing of an idealized, single radial swirler, lean-direct-injection (LDI) combustor. BladeMaker is capable of taking in a number of parameters, such as blade width, blade tilt with respect to the perpendicular, swirler cup radius, and grid densities, and producing a three-dimensional meshed radial swirler with a can-annular (canned) combustor. This complex script produces a data format suitable for but not specific to the National Combustion Code (NCC), a state-of-the-art CFD code developed for reacting flow processes.
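As a hedged illustration of the parametric idea only (not the Tcl/Glyph macro itself), a few lines of Python can compute where each swirler blade root sits for a given blade count, cup radius, and tilt before any meshing is done; all names and values here are invented for illustration.

```python
# Toy parametric helper: evenly space blade roots around the swirler cup.
# BladeMaker drives Gridgen via Tcl; this stand-in only shows how design
# parameters (blade count, cup radius, tilt) map to geometry inputs.
import math

def blade_roots(n_blades, cup_radius, tilt_deg):
    """Return (x, y, tilt_rad) for each blade root on the cup rim."""
    roots = []
    for k in range(n_blades):
        theta = 2 * math.pi * k / n_blades
        roots.append((cup_radius * math.cos(theta),
                      cup_radius * math.sin(theta),
                      math.radians(tilt_deg)))
    return roots

for x, y, tilt in blade_roots(n_blades=6, cup_radius=0.02, tilt_deg=45):
    print(f"root at ({x:+.4f}, {y:+.4f}) m, tilt {tilt:.3f} rad")
```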
SQL is Dead; Long-live SQL: Relational Database Technology in Science Contexts
NASA Astrophysics Data System (ADS)
Howe, B.; Halperin, D.
2014-12-01
Relational databases are often perceived as a poor fit in science contexts: rigid schemas, poor support for complex analytics, unpredictable performance, and significant maintenance and tuning requirements often make databases unattractive in settings characterized by heterogeneous data sources, complex analysis tasks, rapidly changing requirements, and limited IT budgets. In this talk, I'll argue that although the value proposition of typical relational database systems is weak in science, the core ideas that power relational databases have become incredibly prolific in open source science software and are emerging as a universal abstraction for both big data and small data. In addition, I'll talk about two open source systems we are building to "jailbreak" the core technology of relational databases and adapt it for use in science. The first is SQLShare, a Database-as-a-Service system supporting collaborative data analysis and exchange by reducing database use to an Upload-Query-Share workflow with no installation, schema design, or configuration required. The second is Myria, a service that supports much larger-scale data and complex analytics, and supports multiple back-end systems. Finally, I'll describe some of the ways our collaborators in oceanography, astronomy, biology, fisheries science, and more are using these systems to replace script-based workflows for reasons of performance, flexibility, and convenience.
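The Upload-Query-Share idea can be sketched locally with pandas and SQLite: the table schema is inferred from the uploaded file, so the user writes no CREATE TABLE before querying. SQLShare itself is a hosted service; `observations.csv` and its columns are placeholders.

```python
# Local stand-in for the Upload-Query-Share workflow: schema inference
# from a CSV, then plain SQL, with no manual schema design.
import pandas as pd
import sqlite3

conn = sqlite3.connect("sqlshare_demo.db")

# "Upload": column names and types come from the file itself.
df = pd.read_csv("observations.csv")          # placeholder data file
df.to_sql("observations", conn, if_exists="replace", index=False)

# "Query": ordinary SQL over the uploaded data.
top = pd.read_sql_query(
    "SELECT station, AVG(temp_c) AS mean_temp "
    "FROM observations GROUP BY station ORDER BY mean_temp DESC LIMIT 5",
    conn,
)
print(top)
```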
NEXUS Scalable and Distributed Next-Generation Avionics Bus for Space Missions
NASA Technical Reports Server (NTRS)
He, Yutao; Shalom, Eddy; Chau, Savio N.; Some, Raphael R.; Bolotin, Gary S.
2011-01-01
A paper discusses NEXUS, a common, next-generation avionics interconnect that is transparently compatible with wired, fiber-optic, and RF physical layers; provides a flexible, scalable, packet-switched topology; is fault-tolerant with sub-microsecond detection/recovery latency; has scalable bandwidth from 1 Kbps to 10 Gbps; has guaranteed real-time determinism with sub-microsecond latency/jitter; has built-in testability; features low power consumption (< 100 mW per Gbps); is lightweight with about a 5,000-logic-gate footprint; and is implemented in a small Bus Interface Unit (BIU) with a reconfigurable back end providing an interface to legacy subsystems. NEXUS enhances a commercial interconnect standard, Serial RapidIO, to meet avionics interconnect requirements without breaking the standard. This unified interconnect technology can be used to meet the performance, power, size, and reliability requirements of all ranges of equipment, sensors, and actuators at chip-to-chip, board-to-board, or box-to-box boundaries. Early results from an in-house modeling activity of Serial RapidIO using VisualSim indicate that the use of a switched, high-performance avionics network will provide a quantum leap in spacecraft onboard science and autonomy capability for science and exploration missions.
The effect of written text on comprehension of spoken English as a foreign language.
Diao, Yali; Chandler, Paul; Sweller, John
2007-01-01
Based on cognitive load theory, this study investigated the effect of simultaneous written presentations on comprehension of spoken English as a foreign language. Learners' language comprehension was compared while they used 3 instructional formats: listening with auditory materials only, listening with a full, written script, and listening with simultaneous subtitled text. Listening with the presence of a script and subtitles led to better understanding of the scripted and subtitled passage but poorer performance on a subsequent auditory passage than listening with the auditory materials only. These findings indicated that where the intention was learning to listen, the use of a full script or subtitles had detrimental effects on the construction and automation of listening comprehension schemas.
Texture for script identification.
Busch, Andrew; Boles, Wageeh W; Sridharan, Sridha
2005-11-01
The problem of determining the script and language of a document image has a number of important applications in the field of document analysis, such as indexing and sorting of large collections of such images, or as a precursor to optical character recognition (OCR). In this paper, we investigate the use of texture as a tool for determining the script of a document image, based on the observation that text has a distinct visual texture. An experimental evaluation of a number of commonly used texture features is conducted on a newly created script database, providing a qualitative measure of which features are most appropriate for this task. Strategies for improving classification results in situations with limited training data and multiple font types are also proposed.
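One common family of such texture features, grey-level co-occurrence statistics, can be sketched with scikit-image as follows; this illustrates the general approach rather than the paper's exact feature set, and assumes scikit-image 0.19+ (the `graycomatrix` spelling).

```python
# Sketch of texture-based script identification: co-occurrence features
# computed on a normalized text block, ready to feed a generic classifier.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(block):
    """block: 2-D uint8 image patch of normalized text."""
    glcm = graycomatrix(
        block,
        distances=[1, 2],
        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
        levels=256, symmetric=True, normed=True,
    )
    feats = [graycoprops(glcm, prop).ravel()
             for prop in ("contrast", "homogeneity", "energy", "correlation")]
    return np.concatenate(feats)   # one fixed-length vector per text block
```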
Chen, Cory K; Waters, Harriet Salatas; Hartman, Marilyn; Zimmerman, Sheryl; Miklowitz, David J; Waters, Everett
2013-01-01
This study explores links between adults' attachment representations and the task of caring for elderly parents with dementia. Participants were 87 adults serving as primary caregivers of a parent or parent-in-law with dementia. Waters and Waters' (2006) Attachment Script Assessment was adapted to assess script-like attachment representation in the context of caring for their elderly parent. The quality of adult-elderly parent interactions was assessed using the Level of Expressed Emotions Scale (Cole & Kazarian, 1988) and self-report measures of caregivers' perception of caregiving as difficult. Caregivers' secure base script knowledge predicted lower levels of negative expressed emotion. This effect was moderated by the extent to which participants experienced caring for elderly parents as difficult. Attachment representations played a greater role in caregiving when caregiving tasks were perceived as more difficult. These results support the hypothesis that attachment representations influence the quality of care that adults provide their elderly parents. Clinical implications are discussed.
galaxie--CGI scripts for sequence identification through automated phylogenetic analysis.
Nilsson, R Henrik; Larsson, Karl-Henrik; Ursing, Björn M
2004-06-12
The prevalent use of similarity searches like BLAST to identify sequences and species implicitly assumes that the reference database is extensively sampled. This is often not the case, limiting the reliability of the outcome as a basis for sequence identification. Phylogenetic inference outperforms similarity searches in retrieving correct phylogenies and, consequently, sequence identities, and a project was initiated to design a freely available script package for sequence identification through automated Web-based phylogenetic analysis. Three CGI scripts were designed to facilitate qualified sequence identification from a Web interface. Query sequences are aligned to pre-made alignments or to alignments made by ClustalW with entries retrieved from a BLAST search. The subsequent phylogenetic analysis is based on the PHYLIP package for inferring neighbor-joining and parsimony trees. The scripts are highly configurable. A service installation and a version for local use are found at http://andromeda.botany.gu.se/galaxiewelcome.html and http://galaxie.cgb.ki.se
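The identification-by-phylogeny step can be sketched with Biopython standing in for the service's ClustalW/PHYLIP back end: compute pairwise distances from an alignment and build a neighbor-joining tree whose labeled neighbors suggest the query's identity. `aligned.fasta` is a placeholder for an alignment already produced upstream.

```python
# Sketch of sequence identification via a neighbor-joining tree.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Assumes the query has already been aligned to reference sequences,
# e.g. by ClustalW; the file name is a placeholder.
alignment = AlignIO.read("aligned.fasta", "fasta")

dm = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(dm)

# The query's nearest labeled neighbors in the tree suggest its identity.
Phylo.draw_ascii(tree)
```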
AFRL Research in Plasma-Assisted Combustion
2013-10-23
[Briefing slide residue; recoverable topics:] Research areas: scramjet propulsion; non-equilibrium flows; diagnostics for scramjet controls; boundary-layer transition; structural sciences for hypersonic vehicles; computational sciences for hypersonic flight. HIFiRE-5 vehicle launched 23 April 2012; focus on hypersonic flight scalability.
ERIC Educational Resources Information Center
Chasteen, Stephanie V.; Wilcox, Bethany; Caballero, Marcos D.; Perkins, Katherine K.; Pollock, Steven J.; Wieman, Carl E.
2015-01-01
In response to the need for a scalable, institutionally supported model of educational change, the Science Education Initiative (SEI) was created as an experiment in transforming course materials and faculty practices at two institutions--University of Colorado Boulder (CU) and University of British Columbia. We find that this departmentally…
Building Scalable Knowledge Graphs for Earth Science
NASA Technical Reports Server (NTRS)
Ramachandran, Rahul; Maskey, Manil; Gatlin, Patrick; Zhang, Jia; Duan, Xiaoyi; Miller, J. J.; Bugbee, Kaylin; Christopher, Sundar; Freitag, Brian
2017-01-01
Knowledge Graphs link key entities in a specific domain with other entities via relationships. From these relationships, researchers can query knowledge graphs for probabilistic recommendations to infer new knowledge. Scientific papers are an untapped resource which knowledge graphs could leverage to accelerate research discovery. Goal: Develop an end-to-end (semi) automated methodology for constructing Knowledge Graphs for Earth Science.
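A minimal sketch of the underlying triple store, using networkx: entities linked by typed relationships and queried for simple recommendations. The entity and relation names below are invented for illustration and are not drawn from the actual Earth Science graph.

```python
# Toy knowledge graph: (subject, relation, object) triples as labeled edges.
import networkx as nx

kg = nx.MultiDiGraph()
triples = [
    ("GPM", "observes", "precipitation"),
    ("TRMM", "observes", "precipitation"),
    ("paper:Smith2016", "uses_dataset", "GPM"),
    ("paper:Smith2016", "studies", "hurricanes"),
]
for subj, rel, obj in triples:
    kg.add_edge(subj, obj, relation=rel)

# "Which datasets observe a given phenomenon?" -- a simple recommendation.
def related_datasets(phenomenon):
    return [s for s, _, d in kg.in_edges(phenomenon, data=True)
            if d["relation"] == "observes"]

print(related_datasets("precipitation"))   # ['GPM', 'TRMM']
```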
A comprehensive test of clinical reasoning for medical students: An olympiad experience in Iran.
Monajemi, Alireza; Arabshahi, Kamran Soltani; Soltani, Akbar; Arbabi, Farshid; Akbari, Roghieh; Custers, Eugene; Hadadgar, Arash; Hadizadeh, Fatemeh; Changiz, Tahereh; Adibi, Peyman
2012-01-01
Although some tests for clinical reasoning assessment are now available, theories of medical expertise have not played a major role in this field. In this paper, illness script theory was chosen as the theoretical framework, and contemporary clinical reasoning tests were assembled based on this theoretical model. This paper is a qualitative study performed with an action research approach. This style of research is performed in a context where authorities focus on promoting their organizations' performance and is carried out as teamwork, called participatory research. Results are presented in four parts: basic concepts, clinical reasoning assessment, test framework, and scoring. We concluded that no single test could thoroughly assess clinical reasoning competency, and that a battery of clinical reasoning tests is therefore needed. This battery should cover all three parts of the clinical reasoning process: script activation, selection, and verification. In addition, both analytical and non-analytical reasoning, as well as both diagnostic and management reasoning, should be given equal consideration in this battery. This paper explains the process of designing and implementing the battery of clinical reasoning tests in the Olympiad for medical sciences students through an action research approach.
NASA Astrophysics Data System (ADS)
Schechinger, Linda Sue
I. To investigate the delivery of nucleotide-based drugs, we are studying molecular recognition of nucleotide derivatives in environments that are similar to cell membranes. The Nowick group previously discovered that membrane-like tetradecyltrimethylammonium bromide (TTAB) surfactant micelles facilitate molecular recognition of adenosine monophosphate (AMP). The micelles bind nucleotides by means of electrostatic interactions and hydrogen bonding. We observed binding by following 1H NMR chemical shift changes of unique hexylthymine protons upon addition of AMP. Cationic micelles are required for binding: in surfactant-free or sodium dodecylsulfate solutions, no hydrogen bonding is observed. These observations suggest that the cationic surfactant headgroups bind the nucleotide phosphate group, while the intramicellar base binds the nucleotide base. The micellar system was optimized to enhance binding and selectivity for adenosine nucleotides. Both the selectivity for adenosine and the effect of the number of phosphate groups attached to the adenosine were investigated. Addition of cytidine, guanosine, or uridine monophosphates results in no significant downfield shifting of the NH resonance. Selectivity for the phosphate is limited, since adenosine mono-, di-, and triphosphates all have similar binding constants. We successfully achieved molecular recognition of adenosine nucleotides in micellar environments; there is a significant difference in the binding interactions between adenosine nucleotides and the three other natural nucleotides. II. The UCI Chemistry Outreach Program (UCICOP) addresses the declining interest of the nation's youth in science. UCICOP brings fun and exciting chemistry experiments to local high schools to remind students that science is fun and has many practical uses. Volunteer students and alumni of UCI perform the demonstrations using scripts and materials provided by UCICOP. The preparation of scripts and materials is done by two coordinators, who organize the program and provide continuity. The success of UCICOP can be measured by the high praise and gratitude expressed by the teachers, students, and volunteers.
HACC: Extreme Scaling and Performance Across Diverse Architectures
NASA Astrophysics Data System (ADS)
Habib, Salman; Morozov, Vitali; Frontiere, Nicholas; Finkel, Hal; Pope, Adrian; Heitmann, Katrin
2013-11-01
Supercomputing is evolving towards hybrid and accelerator-based architectures with millions of cores. The HACC (Hardware/Hybrid Accelerated Cosmology Code) framework exploits this diverse landscape at the largest scales of problem size, obtaining high scalability and sustained performance. Developed to satisfy the science requirements of cosmological surveys, HACC melds particle and grid methods using a novel algorithmic structure that flexibly maps across architectures, including CPU/GPU, multi/many-core, and Blue Gene systems. We demonstrate the success of HACC on two very different machines, the CPU/GPU system Titan and the BG/Q systems Sequoia and Mira, attaining unprecedented levels of scalable performance. We demonstrate strong and weak scaling on Titan, obtaining up to 99.2% parallel efficiency, evolving 1.1 trillion particles. On Sequoia, we reach 13.94 PFlops (69.2% of peak) and 90% parallel efficiency on 1,572,864 cores, with 3.6 trillion particles, the largest cosmological benchmark yet performed. HACC design concepts are applicable to several other supercomputer applications.
Collins, Linda M.; Kugler, Kari C.; Gwadz, Marya Viorst
2015-01-01
To move society toward an AIDS-free generation, behavioral interventions for prevention and treatment of HIV/AIDS must be not only effective, but also cost-effective, efficient, and readily scalable. The purpose of this article is to introduce to the HIV/AIDS research community the multiphase optimization strategy (MOST), a new methodological framework inspired by engineering principles and designed to develop behavioral interventions that have these important characteristics. Many behavioral interventions comprise multiple components. In MOST, randomized experimentation is conducted to assess the individual performance of each intervention component, and whether its presence/absence/setting has an impact on the performance of other components. This information is used to engineer an intervention that meets a specific optimization criterion, defined a priori in terms of effectiveness, cost, cost-effectiveness, and/or scalability. MOST will enable intervention science to develop a coherent knowledge base about what works and does not work. Ultimately this will improve behavioral interventions systematically and incrementally. PMID:26238037
Collins, Linda M; Kugler, Kari C; Gwadz, Marya Viorst
2016-01-01
To move society toward an AIDS-free generation, behavioral interventions for prevention and treatment of HIV/AIDS must be not only effective, but also cost-effective, efficient, and readily scalable. The purpose of this article is to introduce to the HIV/AIDS research community the multiphase optimization strategy (MOST), a new methodological framework inspired by engineering principles and designed to develop behavioral interventions that have these important characteristics. Many behavioral interventions comprise multiple components. In MOST, randomized experimentation is conducted to assess the individual performance of each intervention component, and whether its presence/absence/setting has an impact on the performance of other components. This information is used to engineer an intervention that meets a specific optimization criterion, defined a priori in terms of effectiveness, cost, cost-effectiveness, and/or scalability. MOST will enable intervention science to develop a coherent knowledge base about what works and does not work. Ultimately this will improve behavioral interventions systematically and incrementally.
OWL: A scalable Monte Carlo simulation suite for finite-temperature study of materials
NASA Astrophysics Data System (ADS)
Li, Ying Wai; Yuk, Simuck F.; Cooper, Valentino R.; Eisenbach, Markus; Odbadrakh, Khorgolkhuu
The OWL suite is a simulation package for performing large-scale Monte Carlo simulations. Its object-oriented, modular design enables it to interface with various external packages for energy evaluations. It is therefore applicable to studying the finite-temperature properties of a wide range of systems: from simple classical spin models to materials where the energy is evaluated by ab initio methods. This scheme not only allows for the study of thermodynamic properties based on first-principles statistical mechanics, but also provides a means for massive, multi-level parallelism to fully exploit the capacity of modern heterogeneous computer architectures. We will demonstrate how improved strong and weak scaling is achieved by employing novel, parallel, and scalable Monte Carlo algorithms, as well as the applications of OWL to a few selected frontier materials research problems. This research was supported by the Office of Science of the Department of Energy under contract DE-AC05-00OR22725.
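The pluggable-energy design can be sketched as a Metropolis Monte Carlo driver that accepts the energy evaluator as a callable, so a toy spin model and an ab initio code would sit behind the same one-argument interface; this is a generic sketch, not OWL's actual API.

```python
# Metropolis driver with a pluggable energy evaluator.
import numpy as np

def metropolis(state, energy_fn, propose_fn, beta, n_steps, rng):
    """energy_fn(state) -> float; propose_fn(state, rng) -> new state."""
    e = energy_fn(state)
    for _ in range(n_steps):
        trial = propose_fn(state, rng)
        de = energy_fn(trial) - e
        if de <= 0 or rng.random() < np.exp(-beta * de):
            state, e = trial, e + de
    return state, e

# Toy plug-in: 1-D Ising chain. An ab initio evaluator would expose the
# same one-argument signature and drop into the same loop.
def ising_energy(spins):
    return -float(np.sum(spins * np.roll(spins, 1)))

def flip_one(spins, rng):
    out = spins.copy()
    out[rng.integers(len(out))] *= -1
    return out

rng = np.random.default_rng(1)
spins = rng.choice([-1, 1], size=64)
state, e = metropolis(spins, ising_energy, flip_one,
                      beta=0.5, n_steps=5000, rng=rng)
print("final energy:", e)
```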
High performance data transfer
NASA Astrophysics Data System (ADS)
Cottrell, R.; Fang, C.; Hanushevsky, A.; Kreuger, W.; Yang, W.
2017-10-01
The exponentially increasing need for high-speed data transfer is driven by big data and cloud computing, together with the needs of data-intensive science, High Performance Computing (HPC), defense, the oil and gas industry, etc. We report on the Zettar ZX software, which has been developed since 2013 to meet these growing needs by providing high-performance data transfer and encryption in a scalable, balanced, easy-to-deploy-and-use way while minimizing power and space utilization. In collaboration with several commercial vendors, Proofs of Concept (PoC) consisting of clusters have been put together using off-the-shelf components to test the ZX scalability and its ability to balance services using multiple cores and links. The PoCs are based on SSD flash storage managed by a parallel file system. Each cluster occupies 4 rack units. Using the PoCs, we have achieved almost 200 Gbps memory to memory between clusters over two 100 Gbps links, and 70 Gbps parallel file to parallel file with encryption over a 5000 mile 100 Gbps link.
Speer, Stefan; Klein, Andreas; Kober, Lukas; Weiss, Alexander; Yohannes, Indra; Bert, Christoph
2017-08-01
Intensity-modulated radiotherapy (IMRT) techniques are now standard practice. IMRT or volumetric-modulated arc therapy (VMAT) allow treatment of the tumor while simultaneously sparing organs at risk. Nevertheless, treatment plan quality still depends on the physicist's individual skills, experiences, and personal preferences. It would therefore be advantageous to automate the planning process. This possibility is offered by the Pinnacle 3 treatment planning system (Philips Healthcare, Hamburg, Germany) via its scripting language or Auto-Planning (AP) module. AP module results were compared to in-house scripts and manually optimized treatment plans for standard head and neck cancer plans. Multiple treatment parameters were scored to judge plan quality (100 points = optimum plan). Patients were initially planned manually by different physicists and re-planned using scripts or AP. Script-based head and neck plans achieved a mean of 67.0 points and were, on average, superior to manually created (59.1 points) and AP plans (62.3 points). Moreover, they are characterized by reproducibility and lower standard deviation of treatment parameters. Even less experienced staff are able to create at least a good starting point for further optimization in a short time. However, for particular plans, experienced planners perform even better than scripts or AP. Experienced-user input is needed when setting up scripts or AP templates for the first time. Moreover, some minor drawbacks exist, such as the increase of monitor units (+35.5% for scripted plans). On average, automatically created plans are superior to manually created treatment plans. For particular plans, experienced physicists were able to perform better than scripts or AP; thus, the benefit is greatest when time is short or staff inexperienced.
The Emergence of Agent-Based Technology as an Architectural Component of Serious Games
NASA Technical Reports Server (NTRS)
Phillips, Mark; Scolaro, Jackie; Scolaro, Daniel
2010-01-01
The evolution of games as an alternative to traditional simulations in the military context has been gathering momentum over the past five years, even though the exploration of their use in the serious sense has been ongoing since the mid-nineties. Much of the focus has been on the aesthetics of the visuals provided by the core game engine, as well as on the artistry of talented development teams that produce not only breathtaking artwork but highly immersive game play. Consideration of game technology is now so much a part of the modeling and simulation landscape that it is becoming difficult to distinguish traditional simulation solutions from game-based approaches. But games have yet to provide the much-needed interactive free play that has been the domain of semi-autonomous forces (SAF). The component-based middleware architecture that game engines provide promises a great deal in terms of options for integrating agent solutions to support the development of non-player characters that engage the human player without the deterministic nature of scripted behaviors. However, there are a number of hard-learned lessons from the modeling and simulation side of the equation that game developers have yet to learn, such as correlation of heterogeneous systems, scalability of both terrain and numbers of non-player entities, and the bi-directional nature of simulation-to-game interaction provided by the Distributed Interactive Simulation (DIS) and High Level Architecture (HLA) standards.
Transportation Network Topologies
NASA Technical Reports Server (NTRS)
Holmes, Bruce J.; Scott, John M.
2004-01-01
A discomforting reality has materialized on the transportation scene: our existing air and ground infrastructures will not scale to meet our nation's 21st century demands and expectations for mobility, commerce, safety, and security. The consequence of inaction is diminished quality of life and economic opportunity in the 21st century. Clearly, new thinking is required for transportation that can scale to meet the realities of a networked, knowledge-based economy in which the value of time is a new coin of the realm. This paper proposes a framework, or topology, for thinking about the problem of scalability of the system of networks that comprise the aviation system. This framework highlights the role of integrated communication-navigation-surveillance systems in enabling scalability of future air transportation networks. Scalability, in this vein, is a goal of the recently formed Joint Planning and Development Office for the Next Generation Air Transportation System. New foundations for 21st century thinking about air transportation are underpinned by several technological developments in the traditional aircraft disciplines as well as in communication, navigation, surveillance, and information systems. Complexity science and modern network theory give rise to one of the technological developments of importance. Scale-free (i.e., scalable) networks represent a promising concept space for modeling airspace system architectures, and for assessing network performance in terms of scalability, efficiency, robustness, resilience, and other metrics. The paper offers an air transportation system topology as a framework for transportation system innovation. Successful outcomes of innovation in air transportation could lay the foundations for new paradigms for aircraft and their operating capabilities, air transportation system architectures, and airspace architectures and procedural concepts. The topology proposed considers air transportation as a system of networks, within which strategies for scalability of the topology may be enabled by technologies and policies. In particular, the effects of scalable ICNS concepts are evaluated within this proposed topology. Alternative business models are appearing on the scene as the old centralized hub-and-spoke model reaches the limits of its scalability. These models include growth of point-to-point scheduled air transportation service (e.g., the RJ phenomenon and the 'Southwest Effect'). Another is a new business model for on-demand, widely distributed air mobility in jet taxi services. The new businesses forming around this vision are targeting personal air mobility to virtually any of the thousands of origins and destinations throughout suburban, rural, and remote communities and regions. Such advancement in air mobility has many implications for requirements for airports, airspace, and consumers. These new paradigms could support scalable alternatives for the expansion of future air mobility to more consumers in more places.
Transportation Network Topologies
NASA Technical Reports Server (NTRS)
Holmes, Bruce J.; Scott, John
2004-01-01
A discomforting reality has materialized on the transportation scene: our existing air and ground infrastructures will not scale to meet our nation's 21st century demands and expectations for mobility, commerce, safety, and security. The consequence of inaction is diminished quality of life and economic opportunity in the 21st century. Clearly, new thinking is required for transportation that can scale to meet the realities of a networked, knowledge-based economy in which the value of time is a new coin of the realm. This paper proposes a framework, or topology, for thinking about the problem of scalability of the system of networks that comprise the aviation system. This framework highlights the role of integrated communication-navigation-surveillance systems in enabling scalability of future air transportation networks. Scalability, in this vein, is a goal of the recently formed Joint Planning and Development Office for the Next Generation Air Transportation System. New foundations for 21st century thinking about air transportation are underpinned by several technological developments in the traditional aircraft disciplines as well as in communication, navigation, surveillance, and information systems. Complexity science and modern network theory give rise to one of the technological developments of importance. Scale-free (i.e., scalable) networks represent a promising concept space for modeling airspace system architectures, and for assessing network performance in terms of scalability, efficiency, robustness, resilience, and other metrics. The paper offers an air transportation system topology as a framework for transportation system innovation. Successful outcomes of innovation in air transportation could lay the foundations for new paradigms for aircraft and their operating capabilities, air transportation system architectures, and airspace architectures and procedural concepts. The topology proposed considers air transportation as a system of networks, within which strategies for scalability of the topology may be enabled by technologies and policies. In particular, the effects of scalable ICNS concepts are evaluated within this proposed topology. Alternative business models are appearing on the scene as the old centralized hub-and-spoke model reaches the limits of its scalability. These models include growth of point-to-point scheduled air transportation service (e.g., the RJ phenomenon and the Southwest Effect). Another is a new business model for on-demand, widely distributed air mobility in jet taxi services. The new businesses forming around this vision are targeting personal air mobility to virtually any of the thousands of origins and destinations throughout suburban, rural, and remote communities and regions. Such advancement in air mobility has many implications for requirements for airports, airspace, and consumers. These new paradigms could support scalable alternatives for the expansion of future air mobility to more consumers in more places.
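The scale-free modeling idea can be sketched with networkx: generate a preferential-attachment network and compare its robustness to random failures versus targeted removal of hubs, two of the metrics named above. This is illustrative only and uses no airspace data.

```python
# Scale-free network robustness sketch: random failures vs. hub removal.
import random
import networkx as nx

g = nx.barabasi_albert_graph(n=500, m=2, seed=42)   # preferential attachment

def largest_component_after(graph, nodes_removed):
    h = graph.copy()
    h.remove_nodes_from(nodes_removed)
    return max(len(c) for c in nx.connected_components(h))

hubs = [n for n, _ in sorted(g.degree, key=lambda kv: kv[1], reverse=True)[:25]]
random_nodes = random.Random(7).sample(list(g.nodes), 25)

# Scale-free networks tolerate random failures well but are fragile
# to targeted removal of their highly connected hubs.
print("after random failures:", largest_component_after(g, random_nodes))
print("after hub removal    :", largest_component_after(g, hubs))
```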
Scalability of dark current in silicon PIN photodiode
NASA Astrophysics Data System (ADS)
Feng, Ya-Jie; Li, Chong; Liu, Qiao-Li; Wang, Hua-Qiang; Hu, An-Qi; He, Xiao-Ying; Guo, Xia
2018-04-01
Abstract not available. Project supported by the National Key Research and Development Program of China (Grant No. 2017YFF0104801) and the National Natural Science Foundation of China (Grant Nos. 61335004, 61675046, and 61505003).
Globus Nexus: A Platform-as-a-Service Provider of Research Identity, Profile, and Group Management.
Chard, Kyle; Lidman, Mattias; McCollam, Brendan; Bryan, Josh; Ananthakrishnan, Rachana; Tuecke, Steven; Foster, Ian
2016-03-01
Globus Nexus is a professionally hosted Platform-as-a-Service that provides identity, profile and group management functionality for the research community. Many collaborative e-Science applications need to manage large numbers of user identities, profiles, and groups. However, developing and maintaining such capabilities is often challenging given the complexity of modern security protocols and requirements for scalable, robust, and highly available implementations. By outsourcing this functionality to Globus Nexus, developers can leverage best-practice implementations without incurring development and operations overhead. Users benefit from enhanced capabilities such as identity federation, flexible profile management, and user-oriented group management. In this paper we present Globus Nexus, describe its capabilities and architecture, summarize how several e-Science applications leverage these capabilities, and present results that characterize its scalability, reliability, and availability.
Globus Nexus: A Platform-as-a-Service provider of research identity, profile, and group management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chard, Kyle; Lidman, Mattias; McCollam, Brendan
Globus Nexus is a professionally hosted Platform-as-a-Service that provides identity, profile and group management functionality for the research community. Many collaborative e-Science applications need to manage large numbers of user identities, profiles, and groups. However, developing and maintaining such capabilities is often challenging given the complexity of modern security protocols and requirements for scalable, robust, and highly available implementations. By outsourcing this functionality to Globus Nexus, developers can leverage best-practice implementations without incurring development and operations overhead. Users benefit from enhanced capabilities such as identity federation, flexible profile management, and user-oriented group management. In this paper we present Globus Nexus, describe its capabilities and architecture, summarize how several e-Science applications leverage these capabilities, and present results that characterize its scalability, reliability, and availability.
Globus Nexus: A Platform-as-a-Service Provider of Research Identity, Profile, and Group Management
Lidman, Mattias; McCollam, Brendan; Bryan, Josh; Ananthakrishnan, Rachana; Tuecke, Steven; Foster, Ian
2015-01-01
Globus Nexus is a professionally hosted Platform-as-a-Service that provides identity, profile and group management functionality for the research community. Many collaborative e-Science applications need to manage large numbers of user identities, profiles, and groups. However, developing and maintaining such capabilities is often challenging given the complexity of modern security protocols and requirements for scalable, robust, and highly available implementations. By outsourcing this functionality to Globus Nexus, developers can leverage best-practice implementations without incurring development and operations overhead. Users benefit from enhanced capabilities such as identity federation, flexible profile management, and user-oriented group management. In this paper we present Globus Nexus, describe its capabilities and architecture, summarize how several e-Science applications leverage these capabilities, and present results that characterize its scalability, reliability, and availability. PMID:26688598
NASA Technical Reports Server (NTRS)
Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash
2003-01-01
Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and space-filling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, the 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.
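The space-filling-curve idea behind the scalable I/O can be sketched as Morton (Z-order) encoding: interleaving the bits of 3-D cell coordinates yields a 1-D key under which spatially nearby particles stay nearby on disk. A generic sketch under that assumption, not the papers' actual implementation:

```python
# Morton (Z-order) key: interleave the bits of 3-D cell coordinates so a
# 1-D sort preserves spatial locality for compression and output.
def morton3d(x, y, z, bits=10):
    """Interleave bits of x, y, z (each < 2**bits) into one integer key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key

# Sort particle cell indices by their Morton key before writing to disk.
cells = [(1, 2, 3), (1, 2, 4), (7, 0, 0), (0, 0, 1)]
print(sorted(cells, key=lambda c: morton3d(*c)))
```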
Early Market Site Identification Data
Levi Kilcher
2016-04-01
This data was compiled for the 'Early Market Opportunity Hot Spot Identification' project. The data and scripts included were used in the 'MHK Energy Site Identification and Ranking Methodology' Reports (Part I: Wave, NREL Report #66038; Part II: Tidal, NREL Report #66079). The Python scripts will generate a set of results--based on the Excel data files--some of which were described in the reports. The scripts depend on the 'score_site' package, and the score site package depends on a number of standard Python libraries (see the score_site install instructions).
HMM-based lexicon-driven and lexicon-free word recognition for online handwritten Indic scripts.
Bharath, A; Madhvanath, Sriganesh
2012-04-01
Research for recognizing online handwritten words in Indic scripts is at its early stages when compared to Latin and Oriental scripts. In this paper, we address this problem specifically for two major Indic scripts--Devanagari and Tamil. In contrast to previous approaches, the techniques we propose are largely data driven and script independent. We propose two different techniques for word recognition based on Hidden Markov Models (HMM): lexicon driven and lexicon free. The lexicon-driven technique models each word in the lexicon as a sequence of symbol HMMs according to a standard symbol writing order derived from the phonetic representation. The lexicon-free technique uses a novel Bag-of-Symbols representation of the handwritten word that is independent of symbol order and allows rapid pruning of the lexicon. On handwritten Devanagari word samples featuring both standard and nonstandard symbol writing orders, a combination of lexicon-driven and lexicon-free recognizers significantly outperforms either of them used in isolation. In contrast, most Tamil word samples feature the standard symbol order, and the lexicon-driven recognizer outperforms the lexicon free one as well as their combination. The best recognition accuracies obtained for 20,000 word lexicons are 87.13 percent for Devanagari when the two recognizers are combined, and 91.8 percent for Tamil using the lexicon-driven technique.
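A minimal sketch of the Bag-of-Symbols pruning idea: represent each lexicon word as a multiset of symbols so that candidates can be shortlisted regardless of writing order, then pass the shortlist to the full HMM decoder. The symbol and word names below are invented for illustration.

```python
# Order-independent lexicon pruning via symbol multisets.
from collections import Counter

def bag_distance(recognized, word_symbols):
    """Size of the symmetric difference between two symbol multisets."""
    a, b = Counter(recognized), Counter(word_symbols)
    return sum(((a - b) + (b - a)).values())

lexicon = {"word_1": ["ka", "ma", "la"],      # invented symbol labels
           "word_2": ["ka", "la"],
           "word_3": ["pa", "ta", "ma"]}

recognized = ["la", "ka", "ma"]               # any writing order
candidates = sorted(lexicon, key=lambda w: bag_distance(recognized, lexicon[w]))
print(candidates[:2])   # shortlist handed to the full HMM decoder
```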
Unit: Petroleum, Inspection Pack, National Trial Print.
ERIC Educational Resources Information Center
Australian Science Education Project, Toorak, Victoria.
This is a National Trial Print of a unit on petroleum developed for the Australian Science Education Project. The package contains the teacher's edition of the written material and a script for a film entitled "The Extraordinary Experience of Nicholas Nodwell" emphasizing the uses of petroleum and petroleum products in daily life and…
Characteristics of Transverse and Longitudinal Waves.
ERIC Educational Resources Information Center
Reister, W. A.
This monograph presents an autoinstructional program in the physical sciences. It is considered useful at the higher, middle and lower high school levels. Three behavioral objectives are listed and a time allotment of 35-40 minutes is suggested. A bibliography is included. A script, incorporating the use of a cassette player and slides, is used by…
ECommerce: Meeting the Needs of Local Business with Cross-Departmental Education.
ERIC Educational Resources Information Center
Sagi, John P.
This document offers a brief introduction to electronic commerce (known as eCommerce) and explains the challenges and frustrations of developing a course around the topic. ECommerce blends elements of computer science (HTML and JavaScript programming, for example) with traditional business functions, such as marketing, salesmanship, finance and…
Kwon, Bomjun J
2012-06-01
This article introduces AUX (AUditory syntaX), a scripting syntax specifically designed to describe auditory signals and processing, to the members of the behavioral research community. The syntax is based on descriptive function names and intuitive operators suitable for researchers and students without substantial training in programming who wish to generate and examine sound signals using a written script. In this article, the essence of AUX is discussed and practical examples of AUX scripts specifying various signals are illustrated. Additionally, two accompanying Windows-based programs and development libraries are described. AUX Viewer is a program that generates, visualizes, and plays sounds specified in AUX. AUX Viewer can also be used for class demonstrations or presentations. Another program, Psycon, allows a wide range of sound signals to be used as stimuli in common psychophysical testing paradigms, such as the adaptive procedure, the method of constant stimuli, and the method of adjustment. AUX Library is also provided, so that researchers can develop their own programs utilizing AUX. The philosophical basis of AUX is to separate signal generation from the user interface needed for experiments. AUX scripts are portable and reusable; they can be shared by other researchers, regardless of differences in actual AUX-based programs, and reused for future experiments. In short, the use of AUX can be potentially beneficial to all members of the research community, both those with programming backgrounds and those without.
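AUX scripts themselves are not reproduced here; as a hedged analogy of the same philosophy (a few descriptive lines specifying a stimulus independently of any experiment interface), the following numpy sketch builds a 1 kHz tone with 10 ms raised-cosine ramps.

```python
# Script-specified stimulus, separate from any user interface: a 300 ms,
# 1 kHz tone gated with 10 ms raised-cosine onset/offset ramps.
import numpy as np

fs = 44100                                   # sampling rate (Hz)
t = np.arange(int(0.3 * fs)) / fs            # 300 ms time axis
tone = 0.1 * np.sin(2 * np.pi * 1000 * t)    # 1 kHz, amplitude 0.1

ramp = int(0.01 * fs)                        # 10 ms ramp length
env = np.ones_like(tone)
env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
env[-ramp:] = env[:ramp][::-1]
stimulus = tone * env                        # ready to pass to any player
```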
ERIC Educational Resources Information Center
Hwu, Fenfang
2013-01-01
Using script-based tracking to gain insights into the way students learn or process language information can be traced as far back as to the 1980s. Nevertheless, researchers continue to face challenges in collecting and studying this type of data. The objective of this study is to propose data sharing through data repositories as a way to (a) ease…
ERIC Educational Resources Information Center
Ishimaru, Ann M.; Takahashi, Sola
2017-01-01
Partnerships between teachers and parents from nondominant communities hold promise for reducing race- and class-based educational disparities, but the ways families and teachers work together often fall short of delivering systemic change. Racialized institutional scripts provide "taken-for-granted" norms, expectations, and assumptions…
Supporting Component-Based Courseware Development Using Virtual Apparatus Framework Script.
ERIC Educational Resources Information Center
Ip, Albert; Fritze, Paul
This paper reports on the latest development of the Virtual Apparatus (VA) framework, a contribution to efforts at the University of Melbourne (Australia) to mainstream content and pedagogical functions of curricula. The integration of the educational content and pedagogical functions of learning components using an XML compatible script,…
ERIC Educational Resources Information Center
Pleguezuelos, E. M.; Hornos, E.; Dory, V.; Gagnon, R.; Malagrino, P.; Brailovsky, C. A.; Charlin, B.
2013-01-01
Context: The PRACTICUM Institute has developed large-scale international programs of on-line continuing professional development (CPD) based on self-testing and feedback using the Practicum Script Concordance Test© (PSCT). Aims: To examine the psychometric consequences of pooling the responses of panelists from different countries (composite…
Promoting Critical, Elaborative Discussions through a Collaboration Script and Argument Diagrams
ERIC Educational Resources Information Center
Scheuer, Oliver; McLaren, Bruce M.; Weinberger, Armin; Niebuhr, Sabine
2014-01-01
During the past two decades a variety of approaches to support argumentation learning in computer-based learning environments have been investigated. We present an approach that combines argumentation diagramming and collaboration scripts, two methods successfully used in the past individually. The rationale for combining the methods is to…
Enhancing AFLOW Visualization using Jmol
NASA Astrophysics Data System (ADS)
Lanasa, Jacob; New, Elizabeth; Stefek, Patrik; Honaker, Brigette; Hanson, Robert; Aflow Collaboration
The AFLOW library is a database of theoretical solid-state structures and calculated properties created using high-throughput ab initio calculations. Jmol is a Java-based program capable of visualizing and analyzing complex molecular structures and energy landscapes. In collaboration with the AFLOW consortium, our goal is to enhance the AFLOWLIB database by extending Jmol's capabilities in the area of materials science. Modifications made to Jmol include the ability to read and visualize AFLOW binary alloy data files; the ability to extract information from these files using Jmol scripting macros, for use in creating interactive web-based convex hull graphs; the capability to identify and classify local atomic environments by symmetry; and the ability to search one or more related crystal structures for atomic environments using a novel extension of inorganic polyhedron-based SMILES strings.
SoS Notebook: An Interactive Multi-Language Data Analysis Environment.
Peng, Bo; Wang, Gao; Ma, Jun; Leong, Man Chong; Wakefield, Chris; Melott, James; Chiu, Yulun; Du, Di; Weinstein, John N
2018-05-22
Complex bioinformatic data analysis workflows involving multiple scripts in different languages can be difficult to consolidate, share, and reproduce. An environment that streamlines the entire process of data collection, analysis, visualization, and reporting for such multi-language analyses is currently lacking. We developed Script of Scripts (SoS) Notebook, a web-based notebook environment that allows the use of multiple scripting languages in a single notebook, with data flowing freely within and across languages. SoS Notebook enables researchers to perform sophisticated bioinformatic analyses using the most suitable tools for different parts of the workflow, without the limitations of a particular language or the complications of cross-language communication. SoS Notebook is hosted at http://vatlab.github.io/SoS/ and is distributed under a BSD license. bpeng@mdanderson.org.
Spatial Distribution of Star Formation in High Redshift Galaxies
NASA Astrophysics Data System (ADS)
Cunnyngham, Ian; Takamiya, M.; Willmer, C.; Chun, M.; Young, M.
2011-01-01
Integral field unit spectroscopy taken of galaxies with redshifts between 0.6 and 0.8 utilizing Gemini Observatory’s GMOS instrument were used to investigate the spatial distribution of star-forming regions by measuring the Hβ and [OII]λ3727 emission line fluxes. These galaxies were selected based on the strength of Hβ and [OII]λ3727 as measured from slit LRIS/Keck spectra. The process of calibrating and reducing data into cubes -- possessing two spatial dimensions, and one for wavelength -- was automated via a custom batch script using the Gemini IRAF routines. Among these galaxies only the bluest sources clearly show [OII] in the IFU regardless of total galaxy luminosity. The brightest galaxies lack [OII] emission and it is posited that two different modes of star formation exist among this seemingly homogeneous group of z=0.7 star-forming galaxies. In order to increase the galaxy sample to include redshifts from 0.3 to 0.9, public Gemini IFU data are being sought. Python scripts were written to mine the Gemini Science Archive for candidate observations, cross-reference the target of these observations with information from the NASA Extragalactic Database, and then present the resultant database in sortable, searchable, cross-linked web-interface using Django to facilitate navigation. By increasing the sample, we expect to characterize these two different modes of star formation which could be high-redshift counterparts of the U/LIRGs and dwarf starburst galaxies like NGC 1569/NGC 4449. The authors acknowledge funds provided by the National Science Foundation (AST 0909240).
ERIC Educational Resources Information Center
Dibiase, David; Rademacher, Henry J.
2005-01-01
This article explores issues of scalability and sustainability in distance learning. The authors kept detailed records of time they spent teaching a course in geographic information science via the World Wide Web over a six-month period, during which class sizes averaged 49 students. The authors also surveyed students' satisfaction with the…
The Daniel K. Inouye College of Pharmacy Scripts
Pezzuto, John M; Ma, Carolyn SJ; Ma, Carolyn
2015-01-01
In partnership with the Hawai‘i Journal of Medicine & Public Health, the Daniel K. Inouye College of Pharmacy (DKICP) is pleased to provide Scripts on a regular basis. In the inaugural “Script,” a brief history of the profession in Hawai‘i was presented up to the founding of the DKICP, Hawai‘i's only academic pharmacy program. In this second part of the inaugural article, we describe some key accomplishments to date. The mission of the College is to educate pharmacy practitioners and leaders to serve as a catalyst for innovations and discoveries in pharmaceutical sciences and practice for promoting health and well-being, and to provide community service, including quality patient care. Examples are given to support the stated goals of the mission. With 341 graduates to date and a 96% pass rate on the national licensing board exams, the college has played a significant role in improving healthcare in Hawai‘i and throughout the Pacific Region. Additionally, a PhD program with substantial research programs in both pharmacy practice and the pharmaceutical sciences has been launched. Considerable extramural funding has been garnered from organizations such as the National Institutes of Health and Centers for Medicare and Medicaid Services. The economic impact of the College is estimated to be over $50 million each year. With over 200 signed clinical affiliation agreements within the state as well as nationally and internationally, the DKICP has helped to ameliorate the shortage of pharmacists in the state, and has enhanced the profile and practice standard of the pharmacist's role on interprofessional health care teams. PMID:25821655
Displaying R spatial statistics on Google dynamic maps with web applications created by Rwui
2012-01-01
Background The R project includes a large variety of packages designed for spatial statistics. Google dynamic maps provide web based access to global maps and satellite imagery. We describe a method for displaying directly the spatial output from an R script on to a Google dynamic map. Methods This is achieved by creating a Java based web application which runs the R script and then displays the results on the dynamic map. In order to make this method easy to implement by those unfamiliar with programming Java based web applications, we have added the method to the options available in the R Web User Interface (Rwui) application. Rwui is an established web application for creating web applications for running R scripts. A feature of Rwui is that all the code for the web application being created is generated automatically so that someone with no knowledge of web programming can make a fully functional web application for running an R script in a matter of minutes. Results Rwui can now be used to create web applications that will display the results from an R script on a Google dynamic map. Results may be displayed as discrete markers and/or as continuous overlays. In addition, users of the web application may select regions of interest on the dynamic map with mouse clicks and the coordinates of the region of interest will automatically be made available for use by the R script. Conclusions This method of displaying R output on dynamic maps is designed to be of use in a number of areas. Firstly it allows statisticians, working in R and developing methods in spatial statistics, to easily visualise the results of applying their methods to real world data. Secondly, it allows researchers who are using R to study health geographics data, to display their results directly onto dynamic maps. Thirdly, by creating a web application for running an R script, a statistician can enable users entirely unfamiliar with R to run R coded statistical analyses of health geographics data. Fourthly, we envisage an educational role for such applications. PMID:22998945
Displaying R spatial statistics on Google dynamic maps with web applications created by Rwui.
Newton, Richard; Deonarine, Andrew; Wernisch, Lorenz
2012-09-24
The R project includes a large variety of packages designed for spatial statistics. Google dynamic maps provide web based access to global maps and satellite imagery. We describe a method for displaying directly the spatial output from an R script on to a Google dynamic map. This is achieved by creating a Java based web application which runs the R script and then displays the results on the dynamic map. In order to make this method easy to implement by those unfamiliar with programming Java based web applications, we have added the method to the options available in the R Web User Interface (Rwui) application. Rwui is an established web application for creating web applications for running R scripts. A feature of Rwui is that all the code for the web application being created is generated automatically so that someone with no knowledge of web programming can make a fully functional web application for running an R script in a matter of minutes. Rwui can now be used to create web applications that will display the results from an R script on a Google dynamic map. Results may be displayed as discrete markers and/or as continuous overlays. In addition, users of the web application may select regions of interest on the dynamic map with mouse clicks and the coordinates of the region of interest will automatically be made available for use by the R script. This method of displaying R output on dynamic maps is designed to be of use in a number of areas. Firstly it allows statisticians, working in R and developing methods in spatial statistics, to easily visualise the results of applying their methods to real world data. Secondly, it allows researchers who are using R to study health geographics data, to display their results directly onto dynamic maps. Thirdly, by creating a web application for running an R script, a statistician can enable users entirely unfamiliar with R to run R coded statistical analyses of health geographics data. Fourthly, we envisage an educational role for such applications.
Bowleg, Lisa; Burkholder, Gary J; Noar, Seth M; Teti, Michelle; Malebranche, David J; Tschann, Jeanne M
2015-04-01
Sexual scripts are widely shared gender and culture-specific guides for sexual behavior with important implications for HIV prevention. Although several qualitative studies document how sexual scripts may influence sexual risk behaviors, quantitative investigations of sexual scripts in the context of sexual risk are rare. This mixed methods study involved the qualitative development and quantitative testing of the Sexual Scripts Scale (SSS). Study 1 included qualitative semi-structured interviews with 30 Black heterosexual men about sexual experiences with main and casual sex partners to develop the SSS. Study 2 included a quantitative test of the SSS with 526 predominantly low-income Black heterosexual men. A factor analysis of the SSS resulted in a 34-item, seven-factor solution that explained 68% of the variance. The subscales and coefficient alphas were: Romantic Intimacy Scripts (α = .86), Condom Scripts (α = .82), Alcohol Scripts (α = .83), Sexual Initiation Scripts (α = .79), Media Sexual Socialization Scripts (α = .84), Marijuana Scripts (α = .85), and Sexual Experimentation Scripts (α = .84). Among men who reported a main partner (n = 401), higher Alcohol Scripts, Media Sexual Socialization Scripts, and Marijuana Scripts scores, and lower Condom Scripts scores were related to more sexual risk behavior. Among men who reported at least one casual partner (n = 238), higher Romantic Intimacy Scripts, Sexual Initiation Scripts, and Media Sexual Socialization Scripts, and lower Condom Scripts scores were related to higher sexual risk. The SSS may have considerable utility for future research on Black heterosexual men's HIV risk.
An MPI-based MoSST core dynamics model
NASA Astrophysics Data System (ADS)
Jiang, Weiyuan; Kuang, Weijia
2008-09-01
Distributed systems are among the most cost-effective and expandable platforms for high-end scientific computing. Scalable numerical models are therefore important for effective use of such systems. In this paper, we present an MPI-based numerical core dynamics model for simulation of the geodynamo and planetary dynamos, and for simulation of core-mantle interactions. The model is developed based on MPI libraries. Two algorithms are used for node-to-node communication: a "master-slave" architecture and a "divide-and-conquer" architecture. The former is easy to implement but not scalable in communication. The latter is scalable in both computation and communication. The model's scalability is tested on Linux PC clusters with up to 128 nodes. The model is also benchmarked against a published numerical dynamo model solution.
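A minimal mpi4py sketch of the "master-slave" communication pattern mentioned above; the task list and the placeholder computation are hypothetical stand-ins for the model's actual core-dynamics work units, and at least two MPI ranks are assumed:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
TASK, STOP = 1, 2  # message tags

if rank == 0:                      # master: deal out tasks, gather results
    tasks = list(range(1000))      # assume more tasks than workers
    results = []
    for w in range(1, size):
        comm.send(tasks.pop(), dest=w, tag=TASK)
    while tasks:
        st = MPI.Status()
        results.append(comm.recv(source=MPI.ANY_SOURCE, status=st))
        comm.send(tasks.pop(), dest=st.Get_source(), tag=TASK)
    for w in range(1, size):       # drain final results, then stop workers
        st = MPI.Status()
        results.append(comm.recv(source=MPI.ANY_SOURCE, status=st))
        comm.send(None, dest=st.Get_source(), tag=STOP)
else:                              # worker: compute until told to stop
    while True:
        st = MPI.Status()
        task = comm.recv(source=0, status=st)
        if st.Get_tag() == STOP:
            break
        comm.send(task * task, dest=0)  # placeholder computation
```

The master is a serial bottleneck in communication, which is why the paper's divide-and-conquer architecture scales better.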
Towards scalable quantum communication and computation: Novel approaches and realizations
NASA Astrophysics Data System (ADS)
Jiang, Liang
Quantum information science involves exploration of fundamental laws of quantum mechanics for information processing tasks. This thesis presents several new approaches towards scalable quantum information processing. First, we consider a hybrid approach to scalable quantum computation, based on an optically connected network of few-qubit quantum registers. Specifically, we develop a novel scheme for scalable quantum computation that is robust against various imperfections. To justify that nitrogen-vacancy (NV) color centers in diamond can be a promising realization of the few-qubit quantum register, we show how to isolate a few proximal nuclear spins from the rest of the environment and use them for the quantum register. We also demonstrate experimentally that the nuclear spin coherence is only weakly perturbed under optical illumination, which allows us to implement quantum logical operations that use the nuclear spins to assist the repetitive-readout of the electronic spin. Using this technique, we demonstrate more than two-fold improvement in signal-to-noise ratio. Apart from direct application to enhance the sensitivity of the NV-based nano-magnetometer, this experiment represents an important step towards the realization of robust quantum information processors using electronic and nuclear spin qubits. We then study realizations of quantum repeaters for long distance quantum communication. Specifically, we develop an efficient scheme for quantum repeaters based on atomic ensembles. We use dynamic programming to optimize various quantum repeater protocols. In addition, we propose a new protocol of quantum repeater with encoding, which efficiently uses local resources (about 100 qubits) to identify and correct errors, to achieve fast one-way quantum communication over long distances. Finally, we explore quantum systems with topological order. Such systems can exhibit remarkable phenomena such as quasiparticles with anyonic statistics and have been proposed as candidates for naturally error-free quantum computation. We propose a scheme to unambiguously detect the anyonic statistics in spin lattice realizations using ultra-cold atoms in an optical lattice. We show how to reliably read and write topologically protected quantum memory using an atomic or photonic qubit.
A novel medical image data-based multi-physics simulation platform for computational life sciences.
Neufeld, Esra; Szczerba, Dominik; Chavannes, Nicolas; Kuster, Niels
2013-04-06
Simulating and modelling complex biological systems in computational life sciences requires specialized software tools that can perform medical image data-based modelling, jointly visualize the data and computational results, and handle large, complex, realistic and often noisy anatomical models. The required novel solvers must provide the power to model the physics, biology and physiology of living tissue within the full complexity of the human anatomy (e.g. neuronal activity, perfusion and ultrasound propagation). A multi-physics simulation platform satisfying these requirements has been developed for applications including device development and optimization, safety assessment, basic research, and treatment planning. This simulation platform consists of detailed, parametrized anatomical models, a segmentation and meshing tool, a wide range of solvers and optimizers, a framework for the rapid development of specialized and parallelized finite element method solvers, a Visualization Toolkit (VTK)-based visualization engine, a Python scripting interface for customized applications, a coupling framework, and more. Core components are cross-platform compatible and use open formats. Several examples of applications are presented: hyperthermia cancer treatment planning, tumour growth modelling, evaluating the magneto-haemodynamic effect as a biomarker, and physics-based morphing of anatomical models.
Designing EvoRoom: An Immersive Simulation Environment for Collective Inquiry in Secondary Science
NASA Astrophysics Data System (ADS)
Lui, Michelle Mei Yee
This dissertation investigates the design of complex inquiry for co-located students to work as a knowledge community within a mixed-reality learning environment. It presents the design of an immersive simulation called EvoRoom and corresponding collective inquiry activities that allow students to explore concepts around topics of evolution and biodiversity in a Grade 11 Biology course. EvoRoom is a room-sized simulation of a rainforest, modeled after Borneo in Southeast Asia, where several projected displays are stitched together to form a large, animated simulation on each opposing wall of the room. This serves to create an immersive environment in which students work collaboratively as individuals, in small groups and a collective community to investigate science topics using the simulations as an evidentiary base. Researchers and a secondary science teacher co-designed a multi-week curriculum that prepared students with preliminary ideas and expertise, then provided them with guided activities within EvoRoom, supported by tablet-based software as well as larger visualizations of their collective progress. Designs encompassed the broader curriculum, as well as all EvoRoom materials (e.g., projected displays, student tablet interfaces, collective visualizations) and activity sequences. This thesis describes a series of three designs that were developed and enacted iteratively over two and a half years, presenting key features that enhanced students' experiences within the immersive environment, their interactions with peers, and their inquiry outcomes. Primary research questions are concerned with the nature of effective design for such activities and environments, and the kinds of interactions that are seen at the individual, collaborative and whole-class levels. The findings fall under one of three themes: 1) the physicality of the room, 2) the pedagogical script for student observation and reflection and collaboration, and 3) ways of including collective visualizations in the activity. Discrete findings demonstrate how the above variables, through their design as inquiry components (i.e., activity, room, scripts and scaffolds on devices, collective visualizations), can mediate the students' interactions with one another, with their teacher, and impact the outcomes of their inquiry. A set of design recommendations is drawn from the results of this research to guide future design or research efforts.
Bowleg, Lisa; Burkholder, Gary J.; Noar, Seth M.; Teti, Michelle; Malebranche, David J.; Tschann, Jeanne M.
2014-01-01
Sexual scripts are widely shared gender and culture-specific guides for sexual behavior with important implications for HIV prevention. Although several qualitative studies document how sexual scripts may influence sexual risk behaviors, quantitative investigations of sexual scripts in the context of sexual risk are rare. This mixed methods study involved the qualitative development and quantitative testing of the Sexual Scripts Scale (SSS). Study 1 included qualitative semi-structured interviews with 30 Black heterosexual men about sexual experiences with main and casual sex partners to develop the SSS. Study 2 included a quantitative test of the SSS with 526 predominantly low-income Black heterosexual men. A factor analysis of the SSS resulted in a 34-item, seven-factor solution that explained 68% of the variance. The subscales and coefficient alphas were: Romantic Intimacy Scripts (α = .86), Condom Scripts (α = .82), Alcohol Scripts (α = .83), Sexual Initiation Scripts (α = .79), Media Sexual Socialization Scripts (α = .84), Marijuana Scripts (α = .85), and Sexual Experimentation Scripts (α = .84). Among men who reported a main partner (n = 401), higher Alcohol Scripts, Media Sexual Socialization Scripts, and Marijuana Scripts scores, and lower Condom Scripts scores were related to more sexual risk behavior. Among men who reported at least one casual partner (n = 238), higher Romantic Intimacy Scripts, Sexual Initiation Scripts, and Media Sexual Socialization Scripts, and lower Condom Scripts scores were related to higher sexual risk. The SSS may have considerable utility for future research on Black heterosexual men’s HIV risk. PMID:24311105
SciSpark: Highly Interactive and Scalable Model Evaluation and Climate Metrics
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Palamuttam, R. S.; Mogrovejo, R. M.; Whitehall, K. D.; Mattmann, C. A.; Verma, R.; Waliser, D. E.; Lee, H.
2015-12-01
Remote sensing data and climate model output are multi-dimensional arrays of massive sizes locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We are developing a lightning-fast Big Data technology called SciSpark based on Apache Spark under a NASA AIST grant (PI Mattmann). Spark implements the map-reduce paradigm for parallel computing on a cluster, but emphasizes in-memory computation, "spilling" to disk only as needed, and so outperforms the disk-based Apache Hadoop by 100x in memory and by 10x on disk. SciSpark will enable scalable model evaluation by executing large-scale comparisons of A-Train satellite observations to model grids on a cluster of 10 to 1000 compute nodes. This 2nd-generation capability for NASA's Regional Climate Model Evaluation System (RCMES) will compute simple climate metrics at interactive speeds, and extend to quite sophisticated iterative algorithms such as machine-learning-based clustering of temperature PDFs, and even graph-based algorithms for searching for Mesoscale Convective Complexes. We have implemented a parallel data ingest capability in which the user specifies desired variables (arrays) as several time-sorted lists of URLs (i.e. using OPeNDAP model.nc?varname, or local files). The specified variables are partitioned by time/space and then each Spark node pulls its bundle of arrays into memory to begin a computation pipeline. We also investigated the performance of several N-dimensional array libraries (Scala Breeze, Java jblas and netlib-java, and ND4J). We are currently developing science codes using ND4J and studying memory behavior on the JVM. On the pyspark side, many of our science codes already use the numpy and SciPy ecosystems. The talk will cover: the architecture of SciSpark, the design of the scientific RDD (sRDD) data structure, our efforts to integrate climate science algorithms in Python and Scala, parallel ingest and partitioning of A-Train satellite observations from HDF files and model grids from netCDF files, first parallel runs to compute comparison statistics and PDFs, and first metrics quantifying parallel speedups and memory and disk usage.
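For illustration, a small pyspark sketch in the spirit of SciSpark's in-memory model evaluation; it uses plain RDDs rather than SciSpark's sRDD, and the grid shapes and synthetic data are hypothetical:

```python
import numpy as np
from pyspark import SparkContext

sc = SparkContext(appName="climate-metrics-sketch")

# hypothetical input: one (timestamp, 2-D temperature grid) pair per time step
grids = [(t, np.random.rand(180, 360).astype("f4")) for t in range(365)]

rdd = sc.parallelize(grids, numSlices=32)         # partition by time
means = rdd.mapValues(lambda g: float(g.mean()))  # per-grid metric, in memory
annual_mean = means.values().mean()               # global reduction

print(annual_mean)
sc.stop()
```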
A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system
NASA Astrophysics Data System (ADS)
Toor, S.; Osmani, L.; Eerola, P.; Kraemer, O.; Lindén, T.; Tarkoma, S.; White, J.
2014-06-01
The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.
Using Computing and Data Grids for Large-Scale Science and Engineering
NASA Technical Reports Server (NTRS)
Johnston, William E.
2001-01-01
We use the term "Grid" to refer to a software system that provides uniform and location independent access to geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. These emerging data and computing Grids promise to provide a highly capable and scalable environment for addressing large-scale science problems. We describe the requirements for science Grids, the resulting services and architecture of NASA's Information Power Grid (IPG) and DOE's Science Grid, and some of the scaling issues that have come up in their implementation.
A script concordance test in rheumatology for 5th year medical students.
Mathieu, Sylvain; Couderc, Marion; Glace, Baptiste; Tournadre, Anne; Malochet-Guinamand, Sandrine; Pereira, Bruno; Dubost, Jean-Jacques; Soubrier, Martin
2013-12-13
The script concordance test (SCT) is a method for assessing the clinical reasoning of medical students by placing them in a context of uncertainty such as they will encounter in their future daily practice. Script concordance testing is going to be included as part of the computer-based national ranking examination (iNRE). This study was designed to create a script concordance test in rheumatology and administer it to DCEM3 (fifth year) medical students via the online platform of the Clermont-Ferrand medical school. Our SCT for rheumatology teaching was constructed by a panel of 19 experts in rheumatology (6 hospital-based and 13 community-based). One hundred seventy-nine DCEM3 (fifth year) medical students were invited to take the test. Scores were computed using the scoring key available on the University of Montreal website. Reliability of the test was estimated by the Cronbach alpha coefficient for internal consistency. The test comprised 60 questions. Among the 26 students who took the test (26/179: 14.5%), 15 completed it in its entirety. The reference panel of rheumatologists obtained a mean score of 76.6 and the 15 students had a mean score of 61.5 (p = 0.001). The Cronbach alpha value was 0.82. An online SCT can be used as an assessment tool for medical students in rheumatology. This study also highlights the active participation of community-based rheumatologists, who accounted for the majority of the 19 experts in the reference panel.
2010-01-01
Background: An important focus of genomic science is the discovery and characterization of all functional elements within genomes. In silico methods are used in genome studies to discover putative regulatory genomic elements (called words or motifs). Although a number of methods have been developed for motif discovery, most of them lack the scalability needed to analyze large genomic data sets. Methods: This manuscript presents WordSeeker, an enumerative motif discovery toolkit that utilizes multi-core and distributed computational platforms to enable scalable analysis of genomic data. A controller task coordinates the activities of worker nodes, each of which (1) enumerates a subset of the DNA word space and (2) scores words with a distributed Markov chain model. Results: A comprehensive suite of performance tests was conducted to demonstrate the performance, speedup and efficiency of WordSeeker. The scalability of the toolkit enabled the analysis of the entire genome of Arabidopsis thaliana; the results of the analysis were integrated into The Arabidopsis Gene Regulatory Information Server (AGRIS). A public version of WordSeeker was deployed on the Glenn cluster at the Ohio Supercomputer Center. Conclusion: WordSeeker effectively utilizes concurrent computing platforms to enable the identification of putative functional elements in genomic data sets. This capability facilitates the analysis of the large quantity of sequenced genomic data. PMID:21210985
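A single-node Python sketch of the two worker duties described above, enumerating a word space and scoring words against a Markov chain background; WordSeeker itself distributes both steps, and the word length and model order here are arbitrary:

```python
from collections import Counter
from math import log2

def score_words(seq, k=6, order=2):
    """Log-odds of observed k-mer counts vs. an order-`order` Markov background."""
    n = len(seq)
    kmers = Counter(seq[i:i + k] for i in range(n - k + 1))
    ctx = Counter(seq[i:i + order] for i in range(n - order + 1))
    ext = Counter(seq[i:i + order + 1] for i in range(n - order))

    def markov_prob(w):
        # P(word) = P(prefix) * product of estimated transition probabilities
        p = ctx[w[:order]] / (n - order + 1)
        for i in range(len(w) - order):
            p *= ext[w[i:i + order + 1]] / max(ctx[w[i:i + order]], 1)
        return p

    total = n - k + 1
    return {w: log2(c / max(markov_prob(w) * total, 1e-12))
            for w, c in kmers.items()}

# toy usage: top-scoring (most over-represented) words
print(sorted(score_words("ACGTACGTACGTTTTTTACGT").items(),
             key=lambda kv: -kv[1])[:3])
```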
ERIC Educational Resources Information Center
Stender, Anita; Brückmann, Maja; Neumann, Knut
2017-01-01
This study investigates the relationship between two different types of pedagogical content knowledge (PCK): the topic-specific professional knowledge (TSPK) and practical routines, so-called teaching scripts. Based on the Transformation Model of Lesson Planning, we assume that teaching scripts originate from a transformation of TSPK during lesson…
Mooney, Barbara Logan; Corrales, L René; Clark, Aurora E
2012-03-30
This work discusses scripts for processing molecular simulations data written using the software package R: A Language and Environment for Statistical Computing. These scripts, named moleculaRnetworks, are intended for the geometric and solvent network analysis of aqueous solutes and can be extended to other H-bonded solvents. New algorithms that interrogate the solvent environment around a solute, several of which are based on graph theory, are presented and described. These include a novel method for identifying the geometric shape adopted by the solvent in the immediate vicinity of the solute and an exploratory approach for describing H-bonding, both based on the PageRank algorithm of Google search fame. The moleculaRnetworks codes include a preprocessor, which distills simulation trajectories into physicochemical data arrays, and an interactive analysis script that enables statistical, trend, and correlation analysis, and other data mining. The goal of these scripts is to increase access to the wealth of structural and dynamical information that can be obtained from molecular simulations. Copyright © 2012 Wiley Periodicals, Inc.
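A tiny sketch of the graph-theoretic idea behind these scripts, using Python and networkx in place of R; the hydrogen-bond edge list is hypothetical, whereas in moleculaRnetworks the graph is distilled from simulation trajectories:

```python
import networkx as nx

# hypothetical H-bond network for one trajectory frame:
# nodes are molecule IDs, edges are hydrogen bonds
hbonds = [(0, 5), (0, 7), (5, 12), (7, 12), (12, 3), (3, 9)]
G = nx.Graph(hbonds)

# PageRank as a centrality measure: highly ranked molecules sit at the
# heart of the solvent network around the solute
rank = nx.pagerank(G, alpha=0.85)
print(sorted(rank, key=rank.get, reverse=True)[:3])
```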
NASA Astrophysics Data System (ADS)
Nguyen, T.; Pankratius, V.; Eckman, L.; Seager, S.
2018-04-01
Debris disks around stars other than the Sun have received significant attention in studies of exoplanets, specifically exoplanetary system formation. Since debris disks are major sources of infrared emission, infrared survey data such as the Wide-field Infrared Survey Explorer (WISE) catalog potentially harbor numerous debris disk candidates. However, it is currently challenging to perform disk candidate searches over the 747 million sources in the WISE catalog due to the high probability of false positives caused by interstellar matter, galaxies, and other background artifacts. Crowdsourcing techniques have thus started to harness citizen scientists for debris disk identification, since humans can be easily trained to distinguish between desired artifacts and irrelevant noise. With a limited number of citizen scientists, however, increasing data volumes from large surveys will inevitably lead to analysis bottlenecks. To overcome this scalability problem and push the current limits of automated debris disk candidate identification, we present a novel approach that uses citizen science results as a seed to train machine-learning-based classification. In this paper, we detail a case study with a computer-aided discovery pipeline demonstrating such feasibility based on WISE catalog data and NASA's Disk Detective project. Our approach to debris disk candidate classification was shown to be robust under a wide range of image qualities and features. Our hybrid approach of citizen science with algorithmic scalability can facilitate big data processing for future detections as envisioned in future missions such as the Transiting Exoplanet Survey Satellite (TESS) and the Wide-Field Infrared Survey Telescope (WFIRST).
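A hedged sketch of the seeding idea: citizen-science consensus labels train a conventional classifier that then scores the full catalog. The file names, feature choices, classifier, and the 0.9 threshold are all hypothetical, not the pipeline's actual configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X = np.load("wise_features.npy")          # per-source features (colors, PSF stats, ...)
y = np.load("disk_detective_labels.npy")  # citizen-science consensus: disk / not disk

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

clf.fit(X, y)
X_rest = np.load("wise_unlabeled_features.npy")  # the remaining catalog
candidates = np.where(clf.predict_proba(X_rest)[:, 1] > 0.9)[0]
```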
Myria: Scalable Analytics as a Service
NASA Astrophysics Data System (ADS)
Howe, B.; Halperin, D.; Whitaker, A.
2014-12-01
At the UW eScience Institute, we're working to empower non-experts, especially in the sciences, to write and use data-parallel algorithms. To this end, we are building Myria, a web-based platform for scalable analytics and data-parallel programming. Myria's internal model of computation is the relational algebra extended with iteration, such that every program is inherently data-parallel, just as every query in a database is inherently data-parallel. But unlike databases, iteration is a first-class concept, allowing us to express machine learning tasks, graph traversal tasks, and more. Programs can be expressed in a number of languages and can be executed on a number of execution environments, but we emphasize a particular language called MyriaL that supports both imperative and declarative styles and a particular execution engine called MyriaX that uses an in-memory column-oriented representation and asynchronous iteration. We deliver Myria over the web as a service, providing an editor, performance analysis tools, and catalog browsing features in a single environment. We find that this web-based "delivery vector" is critical in reaching non-experts: they are insulated from the irrelevant technical work associated with installation, configuration, and resource management. The MyriaX backend, one of several execution runtimes we support, is a main-memory, column-oriented, RDBMS-on-the-worker system that supports cyclic data flows as a first-class citizen and has been shown to outperform competitive systems on 100-machine cluster sizes. I will describe the Myria system, give a demo, and present some new results in large-scale oceanographic microbiology.
Predictors of Physical Altercation among Adolescents in Residential Substance Abuse Treatment
Crawley, Rachel D.; Becan, Jennifer Edwards; Knight, Danica Kalling; Joe, George W.; Flynn, Patrick M.
2014-01-01
This study tested the hypothesis that basic social information-processing components represented by family conflict, peer aggression, and pro-aggression cognitive scripts are related to aggression and social problems among adolescents in substance abuse treatment. The sample consisted of 547 adolescents in two community-based residential facilities. Correlation results indicated that more peer aggression is related to more pro-aggression scripts; scripts, peer aggression, and family conflict are associated with social problems; and in-treatment physical altercation involvement is predicted by higher peer aggression. Findings suggest that social information-processing components are valuable for treatment research. PMID:26622072
Using Cloud-based Storage Technologies for Earth Science Data
NASA Astrophysics Data System (ADS)
Michaelis, A.; Readey, J.; Votava, P.
2016-12-01
Cloud-based infrastructure may offer several key benefits of scalability, built-in redundancy and reduced total cost of ownership as compared with a traditional data center approach. However, most of the tools and software systems developed for NASA data repositories were not developed with a cloud-based infrastructure in mind and do not fully take advantage of commonly available cloud-based technologies. Object storage services are provided by all the leading public (Amazon Web Services, Microsoft Azure, Google Cloud, etc.) and private (OpenStack) clouds, and may provide a more cost-effective means of storing large data collections online. We describe a system that utilizes object storage rather than traditional file-system-based storage to vend earth science data. The system described is not only cost effective, but shows superior performance for running many different analytics tasks in the cloud. To enable compatibility with existing tools and applications, we outline client libraries that are API compatible with existing libraries for HDF5 and NetCDF4. Performance of the system is demonstrated using cloud services running on Amazon Web Services.
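A minimal boto3 sketch of reading a byte range from object storage, the access pattern that makes cloud stores attractive for chunked array data; the bucket, key and byte range below are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# read just one chunk of a large array file rather than the whole object
resp = s3.get_object(
    Bucket="earth-science-demo",
    Key="temperature/2016/global_0.25deg.bin",
    Range="bytes=1048576-2097151",  # a single 1 MiB chunk
)
chunk = resp["Body"].read()
```

Client libraries that keep the HDF5/NetCDF4 API while issuing such ranged reads let existing tools run against object storage without modification.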
Emergent Theorisations in Modelling the Teaching of Two Science Teachers
ERIC Educational Resources Information Center
Monteiro, Rute; Carrillo, Jose; Aguaded, Santiago
2008-01-01
The main goal of this study is to understand the teacher's thoughts and action when he/she is immersed in the activity of teaching. To do so, it describes the procedures used to model two teachers' practice with respect to the topic of Plant Diversity. Starting from a consideration of the theoretical constructs of script, routine and…
Creating Engaging Online Learning Material with the JSAV JavaScript Algorithm Visualization Library
ERIC Educational Resources Information Center
Karavirta, Ville; Shaffer, Clifford A.
2016-01-01
Data Structures and Algorithms are a central part of Computer Science. Due to their abstract and dynamic nature, they are a difficult topic to learn for many students. To alleviate these learning difficulties, instructors have turned to algorithm visualizations (AV) and AV systems. Research has shown that especially engaging AVs can have an impact…
Equalizer: a scalable parallel rendering framework.
Eilemann, Stefan; Makhinya, Maxim; Pajarola, Renato
2009-01-01
Continuing improvements in CPU and GPU performance as well as increasing multi-core processor and cluster-based parallelism demand flexible and scalable parallel rendering solutions that can exploit multipipe hardware-accelerated graphics. In fact, to achieve interactive visualization, scalable rendering systems are essential to cope with the rapid growth of data sets. However, parallel rendering systems are non-trivial to develop and often only application-specific implementations have been proposed. The task of developing a scalable parallel rendering framework is even more difficult if it should be generic to support various types of data and visualization applications, and at the same time work efficiently on a cluster with distributed graphics cards. In this paper we introduce a novel system called Equalizer, a toolkit for scalable parallel rendering based on OpenGL which provides an application programming interface (API) to develop scalable graphics applications for a wide range of systems ranging from large distributed visualization clusters and multi-processor multipipe graphics systems to single-processor single-pipe desktop machines. We describe the system architecture and the basic API, discuss its advantages over previous approaches, and present example configurations and usage scenarios as well as scalability results.
Krahé, Barbara; Bieneck, Steffen; Scheinberger-Olwig, Renate
2007-11-01
The characteristic features of adolescents' sexual scripts were explored in 400 tenth and eleventh graders from Berlin, Germany. Participants rated the prototypical elements of three scripts for heterosexual interactions: (1) the prototypical script for the first consensual sexual intercourse with a new partner as pertaining to adolescents in general (general script); (2) the prototypical script for the first consensual sexual intercourse with a new partner as pertaining to themselves personally (individual script); and (3) the script for a nonconsensual sexual intercourse (rape script). Compared with the general script for the age group as a whole, the individual script contained fewer risk elements related to sexual aggression and portrayed more positive consequences of the sexual interaction. Few gender differences were found, and coital experience did not affect sexual scripts. The rape script was found to be close to the "real rape stereotype." The findings are discussed with respect to the role of sexual scripts as guidelines for behavior, particularly in terms of their significance for the prediction of sexual aggression.
NASA Astrophysics Data System (ADS)
Adler, D. S.
2000-12-01
The Science Planning and Scheduling Team (SPST) of the Space Telescope Science Institute (STScI) has historically operated exclusively under VMS. Due to diminished support for VMS-based platforms at STScI, SPST is in the process of transitioning to Unix operations. In the summer of 1999, SPST selected Python as the primary scripting language for the operational tools and began translation of the VMS DCL code. As of October 2000, SPST has installed a utility library of 16 modules consisting of 8000 lines of code and 80 Python tools consisting of 13000 lines of code. All tasks related to calendar generation have been switched to Unix operations. Current work focuses on translating the tools used to generate the Science Mission Specifications (SMS). The software required to generate the Mission Schedule and Command Loads (PASS), maintained by another team at STScI, will take longer to translate than the rest of the SPST operational code. SPST is planning on creating tools to access PASS from Unix in the short term. We are on schedule to complete the work needed to fully transition SPST to Unix operations (while remotely accessing PASS on VMS) by the fall of 2001.
Technical development of PubMed interact: an improved interface for MEDLINE/PubMed searches.
Muin, Michael; Fontelo, Paul
2006-11-03
The project aims to create an alternative search interface for MEDLINE/PubMed that may provide assistance to the novice user and added convenience to the advanced user. An earlier version of the project was the 'Slider Interface for MEDLINE/PubMed searches' (SLIM), which provided JavaScript slider bars to control search parameters. In this new version, recent developments in Web-based technologies were implemented. These changes may prove to be even more valuable in enhancing user interactivity through client-side manipulation and management of results. PubMed Interact is a Web-based MEDLINE/PubMed search application built with HTML, JavaScript and PHP. It is implemented on a Windows Server 2003 with Apache 2.0.52, PHP 4.4.1 and MySQL 4.1.18. PHP scripts provide the backend engine that connects with E-Utilities and parses XML files. JavaScript manages client-side functionalities and converts Web pages into interactive platforms using dynamic HTML (DHTML), Document Object Model (DOM) tree manipulation and Ajax methods. With PubMed Interact, users can limit searches with JavaScript slider bars, preview result counts, delete citations from the list, display and add related articles, and create relevance lists. Many interactive features occur client-side, which allows instant feedback without reloading or refreshing the page, resulting in a more efficient user experience. PubMed Interact is a highly interactive Web-based search application for MEDLINE/PubMed that explores recent trends in Web technologies like DOM tree manipulation and Ajax. It may become a valuable technical development for online medical search applications.
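Server-side, the kind of E-utilities calls the PHP backend makes can be sketched in a few lines of Python against NCBI's public E-utilities endpoints; the query term here is arbitrary:

```python
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

# esearch: get PMIDs matching a query
r = requests.get(f"{EUTILS}/esearch.fcgi", params={
    "db": "pubmed", "term": "script concordance test",
    "retmax": 20, "retmode": "json"})
pmids = r.json()["esearchresult"]["idlist"]

# esummary: fetch citation metadata for those PMIDs
r = requests.get(f"{EUTILS}/esummary.fcgi", params={
    "db": "pubmed", "id": ",".join(pmids), "retmode": "json"})
summaries = r.json()["result"]
print(summaries[pmids[0]]["title"])
```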
Steele, Ryan D.; Waters, Theodore E. A.; Bost, Kelly K.; Vaughn, Brian E.; Truitt, Warren; Waters, Harriet S.; Booth-LaForce, Cathryn; Roisman, Glenn I.
2015-01-01
Based on a sub-sample (N = 673) of the NICHD Study of Early Child Care and Youth Development (SECCYD) cohort, this paper reports data from a follow-up assessment at age 18 years on the antecedents of secure base script knowledge, as reflected in the ability to generate narratives in which attachment-related difficulties are recognized, competent help is provided, and the problem is resolved. Secure base script knowledge was (a) modestly to moderately correlated with more well established assessments of adult attachment, (b) associated with mother-child attachment in the first three years of life and with observations of maternal and paternal sensitivity from childhood to adolescence, and (c) partially accounted for associations previously documented in the SECCYD cohort between early caregiving experiences and Adult Attachment Interview states of mind (Booth-LaForce & Roisman, 2014) as well as self-reported attachment styles (Fraley, Roisman, Booth-LaForce, Owen, & Holland, 2013). PMID:25264703
NASA Astrophysics Data System (ADS)
Andreeva, J.; Dzhunov, I.; Karavakis, E.; Kokoszkiewicz, L.; Nowotka, M.; Saiz, P.; Tuckett, D.
2012-12-01
Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provide an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Computing Grid. We demonstrate the benefits of the approach for large-scale JavaScript web applications in this context by examining the design of several Experiment Dashboard applications for data processing, data transfer and site status monitoring, and by showing how they have been ported for different virtual organisations and technologies.
Line Segmentation in Handwritten Assamese and Meetei Mayek Script Using Seam Carving Based Algorithm
NASA Astrophysics Data System (ADS)
Kumar, Chandan Jyoti; Kalita, Sanjib Kr.
Line segmentation is a key stage in an Optical Character Recognition (OCR) system. This paper primarily concerns the problem of text line extraction on color and grayscale manuscript pages in two major North-east Indian regional scripts, Assamese and Meetei Mayek. Line segmentation of handwritten text in Assamese and Meetei Mayek scripts is an uphill task, primarily because of the structural features of both scripts and varied writing styles. In this paper, line segmentation of a document image is achieved using the seam carving technique. Researchers have used this approach for content-aware resizing of images, and many are now applying seam carving to the line segmentation phase of OCR. Although it is a language-independent technique, most experiments have been done over Arabic, Greek, German and Chinese scripts. Two types of seams are generated: medial seams approximate the orientation of each text line, and separating seams separate one line of text from another. Experiments were performed extensively over various types of documents, and detailed analysis of the evaluations shows that the algorithm performs well even for documents with multiple scripts. In this paper, we present a comparative study of the accuracy of this method over different types of data.
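A compact numpy sketch of the dynamic-programming core of seam carving, computing a top-to-bottom seam over a float energy map; for the separating seams described above, the same routine runs on the transposed map, and the energy function itself is where script-specific tuning would live:

```python
import numpy as np

def min_energy_seam(energy):
    """One column index per row, minimizing cumulative energy (8-connected)."""
    h, w = energy.shape
    cost = energy.astype("f8")  # cumulative-cost table
    for y in range(1, h):
        left = np.r_[np.inf, cost[y - 1, :-1]]   # predecessor at x-1
        right = np.r_[cost[y - 1, 1:], np.inf]   # predecessor at x+1
        cost[y] += np.minimum(np.minimum(left, cost[y - 1]), right)
    # backtrack from the cheapest endpoint on the last row
    seam = [int(np.argmin(cost[-1]))]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(x - 1, 0), min(x + 2, w)
        seam.append(lo + int(np.argmin(cost[y, lo:hi])))
    return seam[::-1]
```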
Nanotechnology: A Policy Primer
2008-05-20
Sargent, John F., Specialist in Science and Technology Policy (Order Code RL34511). Fragment: nanotechnology water desalination and filtration systems may offer affordable, scalable, and portable solutions for universal access to clean water.
ERIC Educational Resources Information Center
Takeuchi, Ken; Murakami, Manabu; Kato, Atsushi; Akiyama, Ryuichi; Honda, Hirotaka; Nozawa, Hajime; Sato, Ki-ichiro
2009-01-01
The Faculty of Industrial Science and Technology at Tokyo University of Science developed a two-campus system to produce well-trained engineers possessing both technical and humanistic traits. In their first year of study, students reside in dormitories in the natural setting of the Oshamambe campus located in Hokkaido, Japan. The education…
Managing an archive of weather satellite images
NASA Technical Reports Server (NTRS)
Seaman, R. L.
1992-01-01
The author's experiences of building and maintaining an archive of hourly weather satellite pictures at NOAO are described. This archive has proven very popular with visiting and staff astronomers, especially on windy days and cloudy nights. Given access to a source of such pictures, a suite of simple shell and IRAF CL scripts can provide a great deal of robust functionality with little effort. These pictures and associated data products such as surface analysis (radar) maps and National Weather Service forecasts are updated hourly at anonymous ftp sites on the Internet, although your local Atmospheric Sciences Department may prove to be a more reliable source. The raw image formats are unfamiliar to most astronomers, but reading them into IRAF is straightforward. Techniques for performing this format conversion at the host computer level are described which may prove useful for other chores. Pointers are given to sources of data and of software, including a package of example tools. These tools include shell and Perl scripts for downloading pictures, maps, and forecasts, as well as IRAF scripts and host-level programs for translating the images into IRAF and GIF formats and for slicing and dicing the resulting images. Hints for displaying the images and for making hardcopies are given.
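The hourly-download chore reduces to a few lines of Python; the source URL below is hypothetical, an existing archive/ directory is assumed, and, as the text notes, a reliable local mirror is preferable:

```python
import shutil
import time
import urllib.request
from datetime import datetime, timezone

SOURCE = "ftp://example.edu/pub/weather/latest_ir.gif"  # hypothetical mirror

while True:
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H00")
    with urllib.request.urlopen(SOURCE) as resp, \
         open(f"archive/goes_{stamp}.gif", "wb") as out:
        shutil.copyfileobj(resp, out)  # save this hour's picture
    time.sleep(3600)                   # once per hour, like the NOAO archive
```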
Joint source-channel coding for motion-compensated DCT-based SNR scalable video.
Kondi, Lisimachos P; Ishtiaq, Faisal; Katsaggelos, Aggelos K
2002-01-01
In this paper, we develop an approach toward joint source-channel coding for motion-compensated DCT-based scalable video coding and transmission. A framework for the optimal selection of the source and channel coding rates over all scalable layers is presented such that the overall distortion is minimized. The algorithm utilizes universal rate distortion characteristics which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. The proposed algorithm allocates the available bit rate between scalable layers and, within each layer, between source and channel coding. We present the results of this rate allocation algorithm for video transmission over a wireless channel using the H.263 Version 2 signal-to-noise ratio (SNR) scalable codec for source coding and rate-compatible punctured convolutional (RCPC) codes for channel coding. We discuss the performance of the algorithm with respect to the channel conditions, coding methodologies, layer rates, and number of layers.
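To make the allocation concrete, a toy sketch of the optimization over layers and over the source/channel split; the operating points and distortion figures are invented, layer distortions are assumed additive, and in the paper the curves come from experimentally measured universal rate-distortion characteristics:

```python
from itertools import product

# per-layer operating points: (source kbps, channel code rate,
# expected distortion under the current channel conditions)
layer_points = [
    [(32, 1/2, 40.0), (48, 2/3, 36.0), (64, 4/5, 39.5)],  # base layer
    [(16, 1/2, 12.0), (32, 2/3, 9.0),  (48, 4/5, 11.5)],  # enhancement layer
]
BUDGET_KBPS = 120  # total channel bit rate

def channel_rate(points):
    # channel rate = source rate / code rate, summed over layers
    return sum(src / code for src, code, _ in points)

best = min(
    (pts for pts in product(*layer_points) if channel_rate(pts) <= BUDGET_KBPS),
    key=lambda pts: sum(d for _, _, d in pts),  # minimize overall distortion
)
print(best)
```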
Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin
2015-10-19
The feasibility of software-defined optical networking (SDON) for practical application depends critically on the scalability of centralized control performance. In this paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for a proof-of-concept demonstration. Efficient RWA algorithms are proposed to achieve high network capacity with reduced computation cost, a significant attribute in a scalable centralized-control SDON. The proposed heuristic RWA algorithms differ in the order in which requests are processed and in the procedures for routing table updates. Combined with a shortest-path-based routing algorithm, a hottest-request-first processing policy that considers demand intensity and end-to-end distance information offers both the highest network throughput and acceptable computation scalability. We further investigate the trade-off between network throughput and computation complexity in the routing table update procedure through a simulation study.
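A hedged networkx sketch of a hottest-request-first RWA of the kind described: greedy shortest paths plus first-fit wavelengths. This illustrates the policy only, not the authors' exact algorithms, which also differ in how routing tables are updated between requests; distinct endpoints are assumed for every request:

```python
import networkx as nx

def hottest_first_rwa(G, requests, n_wavelengths):
    """requests: (src, dst, intensity) triples; returns {(src, dst): (path, wl)}."""
    free = {frozenset(e): set(range(n_wavelengths)) for e in G.edges}
    # "hottest" first: demand intensity, then end-to-end hop distance
    hotness = lambda r: (r[2], nx.shortest_path_length(G, r[0], r[1]))
    assignment = {}
    for src, dst, _ in sorted(requests, key=hotness, reverse=True):
        path = nx.shortest_path(G, src, dst)
        links = [frozenset(l) for l in zip(path, path[1:])]
        avail = set.intersection(*(free[l] for l in links))
        if avail:                  # first fit: lowest wavelength free on all links
            wl = min(avail)
            for l in links:
                free[l].remove(wl)
            assignment[(src, dst)] = (path, wl)
    return assignment
```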
Silica-on-silicon waveguide quantum circuits.
Politi, Alberto; Cryan, Martin J; Rarity, John G; Yu, Siyuan; O'Brien, Jeremy L
2008-05-02
Quantum technologies based on photons will likely require an integrated optics architecture for improved performance, miniaturization, and scalability. We demonstrate high-fidelity silica-on-silicon integrated optical realizations of key quantum photonic circuits, including two-photon quantum interference with a visibility of 94.8 +/- 0.5%; a controlled-NOT gate with an average logical basis fidelity of 94.3 +/- 0.2%; and a path-entangled state of two photons with fidelity of >92%. These results show that it is possible to directly "write" sophisticated photonic quantum circuits onto a silicon chip, which will be of benefit to future quantum technologies based on photons, including information processing, communication, metrology, and lithography, as well as the fundamental science of quantum optics.
Drama as Arts-Based Pedagogy and Research: Media Advertising and Inner-City Youth.
ERIC Educational Resources Information Center
Conrad, Diane
2002-01-01
A media unit for inner city high school students examined the relationship between youth and advertising by using drama as the medium through which learning and research occurred. Data were presented through scripted dramatic scenes. How the interpretation and generation of data were embedded in the process of writing these scripts is explained.…
A Model for Flexibly Editing CSCL Scripts
ERIC Educational Resources Information Center
Sobreira, Pericles; Tchounikine, Pierre
2012-01-01
This article presents a model whose primary concern and design rationale is to offer users (teachers) with basic ICT skills an intuitive, easy, and flexible way of editing scripts. The proposal is based on relating an end-user representation as a table and a machine model as a tree. The table-tree model introduces structural expressiveness and…
ERIC Educational Resources Information Center
Hong, Zeng-Wei; Chen, Yen-Lin; Lan, Chien-Ho
2014-01-01
Animated agents are virtual characters who demonstrate facial expressions, gestures, movements, and speech to facilitate students' engagement in the learning environment. Our research developed courseware that supports an XML-based markup language and an authoring tool for teachers to script animated pedagogical agents in teaching materials. The…
ERIC Educational Resources Information Center
Ladd, Melissa
2016-01-01
This study strived to determine the effectiveness of the AR phonics program relative to the effectiveness of the scripted phonics program for developing the letter identification, sound verbalization, and blending abilities of kindergarten students considered at-risk based on state assessments. The researcher was interested in pretest and posttest…
ERIC Educational Resources Information Center
Toppel, Kathryn Elizabeth
2013-01-01
The increased focus on the implementation of scientifically research-based instruction as an outcome of No Child Left Behind ("Understanding NCLB," 2007) has resulted in the widespread use of scripted reading curricula (Dewitz, Leahy, Jones, and Sullivan, 2010), which typically represents Eurocentric and middle class forms of discourse,…
2017-03-07
Integrating multiple sources of pharmacovigilance evidence has the potential to advance the science of safety signal detection and evaluation. In this regard, there is a need for more research on how to integrate multiple disparate evidence sources while making the evidence computable from a knowledge representation perspective (i.e., semantic enrichment). Existing frameworks suggest promising outcomes for such integration but employ a rather limited number of sources. In particular, none have been specifically designed to support both regulatory and clinical use cases, nor have any been designed to add new resources and use cases through an open architecture. This paper discusses the architecture and functionality of a system called Large-scale Adverse Effects Related to Treatment Evidence Standardization (LAERTES) that aims to address these shortcomings. LAERTES provides a standardized, open, and scalable architecture for linking evidence sources relevant to the association of drugs with health outcomes of interest (HOIs). Standard terminologies are used to represent different entities; for example, drugs and HOIs are represented in RxNorm and the Systematized Nomenclature of Medicine -- Clinical Terms, respectively. At the time of this writing, six evidence sources have been loaded into the LAERTES evidence base and are accessible through a prototype evidence exploration user interface and a set of Web application programming interface services. This system operates within a larger software stack provided by the Observational Health Data Sciences and Informatics clinical research framework, including the relational Common Data Model for observational patient data created by the Observational Medical Outcomes Partnership. Elements of the Linked Data paradigm facilitate the systematic and scalable integration of relevant evidence sources. The prototype LAERTES system provides useful functionality while creating opportunities for further research. Future work will involve improving the method for normalizing drug and HOI concepts across the integrated sources, aggregating evidence at different levels of a hierarchy of HOI concepts, and developing a more advanced user interface for drug-HOI investigations.
Kastner, Monika; Sayal, Radha; Oliver, Doug; Straus, Sharon E; Dolovich, Lisa
2017-08-01
Chronic diseases are a significant public health concern, particularly in older adults. To address the delivery of health care services to optimally meet the needs of older adults with multiple chronic diseases, Health TAPESTRY (Teams Advancing Patient Experience: Strengthening Quality) uses a novel approach that involves patient home visits by trained volunteers to collect and transmit relevant health information using e-health technology to inform appropriate care from an inter-professional healthcare team. Health TAPESTRY was implemented, pilot tested, and evaluated in a randomized controlled trial (analysis underway). Knowledge translation (KT) interventions such as Health TAPESTRY should involve an investigation of their sustainability and scalability determinants to inform further implementation. However, this is seldom considered in research, or considered early enough, so the objectives of this study were to assess the sustainability and scalability potential of Health TAPESTRY from the perspective of the team who developed and pilot-tested it. Our objectives were addressed using a sequential mixed-methods approach involving the administration of a validated sustainability survey developed by the National Health Service (NHS) to all members of the Health TAPESTRY team who were actively involved in the development, implementation and pilot evaluation of the intervention (Phase 1: n = 38). Mean sustainability scores were calculated to identify the best potential for improvement across sustainability factors. Phase 2 was a qualitative study of interviews with purposively selected Health TAPESTRY team members to gain a more in-depth understanding of the factors that influence the sustainability and scalability of Health TAPESTRY. Two independent reviewers coded transcribed interviews and completed a multi-step thematic analysis. Outcomes were participant perceptions of the determinants influencing the sustainability and scalability of Health TAPESTRY. Twenty Health TAPESTRY team members (53% response rate) completed the NHS sustainability survey. The overall mean sustainability score was 64.6 (range 22.8-96.8). Important opportunities for improving sustainability were better staff involvement and training, clinical leadership engagement, and infrastructure for sustainability. Interviews with 25 participants (response rate 60%) showed that factors influencing the sustainability and scalability of Health TAPESTRY emerged across two dimensions: (I) Health TAPESTRY operations (development and implementation activities undertaken by the central team); and (II) the Health TAPESTRY intervention (factors specific to the intervention and its elements). Resource capacity appears to be an important factor to consider for Health TAPESTRY operations, as it was identified across both sustainability and scalability factors; perceived lack of interprofessional team and volunteer resource capacity and the need for stakeholder buy-in are important considerations for the Health TAPESTRY intervention. We used these findings to create actionable recommendations to initiate dialogue among Health TAPESTRY team members to improve the intervention. Our study identified sustainability and scalability determinants of the Health TAPESTRY intervention that can be used to optimize its potential for impact. Next steps will involve using the findings to inform a guide to facilitate the sustainability and scalability of Health TAPESTRY in other jurisdictions considering its adoption. Our findings build on the limited current knowledge of sustainability and advance KT science related to the sustainability and scalability of KT interventions.
Soil CO2 flux from three ecosystems in tropical peatland of Sarawak, Malaysia
NASA Astrophysics Data System (ADS)
Melling, Lulie; Hatano, Ryusuke; Goh, Kah Joo
2005-02-01
Soil CO2 flux was measured monthly over a year from tropical peatland of Sarawak, Malaysia using a closed-chamber technique. The soil CO2 flux ranged from 100 to 533 mg C m⁻² h⁻¹ for the forest ecosystem, 63 to 245 mg C m⁻² h⁻¹ for the sago and 46 to 335 mg C m⁻² h⁻¹ for the oil palm. Based on principal component analysis (PCA), the environmental variables over all sites could be classified into three components, namely, climate, soil moisture and soil bulk density, which accounted for 86% of the seasonal variability. A regression tree approach showed that CO2 flux in each ecosystem was related to different underlying environmental factors. They were relative humidity for forest, soil temperature at 5 cm for sago and water-filled pore space for oil palm. On an annual basis, the soil CO2 flux was highest in the forest ecosystem with an estimated production of 2.1 kg C m⁻² yr⁻¹, followed by oil palm at 1.5 kg C m⁻² yr⁻¹ and sago at 1.1 kg C m⁻² yr⁻¹. The different dominant controlling factors in CO2 flux among the studied ecosystems suggested that land use affected the exchange of CO2 between tropical peatland and the atmosphere.
Nelson, Jenny; Emmott, Christopher J M
2013-08-13
Solar power represents a vast resource which could, in principle, meet the world's needs for clean power generation. Recent growth in the use of photovoltaic (PV) technology has demonstrated the potential of solar power to deliver on a large scale. Whilst the dominant PV technology is based on crystalline silicon, a wide variety of alternative PV materials and device concepts have been explored in an attempt to decrease the cost of the photovoltaic electricity. This article explores the potential for such emerging technologies to deliver cost reductions, scalability of manufacture, rapid carbon mitigation and new science in order to accelerate the uptake of solar power technologies.
A fully programmable 100-spin coherent Ising machine with all-to-all connections
NASA Astrophysics Data System (ADS)
McMahon, Peter; Marandi, Alireza; Haribara, Yoshitaka; Hamerly, Ryan; Langrock, Carsten; Tamate, Shuhei; Inagaki, Takahiro; Takesue, Hiroki; Utsunomiya, Shoko; Aihara, Kazuyuki; Byer, Robert; Fejer, Martin; Mabuchi, Hideo; Yamamoto, Yoshihisa
We present a scalable optical processor with electronic feedback, based on networks of optical parametric oscillators. The design of our machine is inspired by adiabatic quantum computers, although it is not an AQC itself. Our prototype machine is able to find exact solutions of, or sample good approximate solutions to, a variety of hard instances of Ising problems with up to 100 spins and 10,000 spin-spin connections. This research was funded by the Impulsing Paradigm Change through Disruptive Technologies (ImPACT) Program of the Council of Science, Technology and Innovation (Cabinet Office, Government of Japan).
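For reference, the classical baseline such machines compete with is easy to sketch: simulated annealing on a random 100-spin, all-to-all Ising instance. This models the problem class only, not the optical hardware, and the couplings and schedule are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
J = rng.normal(size=(n, n))
J = np.triu(J, 1) + np.triu(J, 1).T  # symmetric couplings, zero diagonal
s = rng.choice([-1, 1], size=n)      # random initial spin configuration

# single-spin-flip Metropolis with a geometric cooling schedule
for T in np.geomspace(5.0, 0.01, 20000):
    i = rng.integers(n)
    dE = 2 * s[i] * (J[i] @ s)       # energy change of flipping spin i
    if dE < 0 or rng.random() < np.exp(-dE / T):
        s[i] = -s[i]

print("Ising energy:", -0.5 * s @ J @ s)
```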
Stabilizing Entanglement via Symmetry-Selective Bath Engineering in Superconducting Qubits.
Kimchi-Schwartz, M E; Martin, L; Flurin, E; Aron, C; Kulkarni, M; Tureci, H E; Siddiqi, I
2016-06-17
Bath engineering, which utilizes coupling to lossy modes in a quantum system to generate nontrivial steady states, is a tantalizing alternative to gate- and measurement-based quantum science. Here, we demonstrate dissipative stabilization of entanglement between two superconducting transmon qubits in a symmetry-selective manner. We utilize the engineered symmetries of the dissipative environment to stabilize a target Bell state; we further demonstrate suppression of the Bell state of opposite symmetry due to parity selection rules. This implementation is resource efficient, achieves a steady-state fidelity F=0.70, and is scalable to multiple qubits.
Map_plot and bgg_plot: software for integration of geoscience datasets
NASA Astrophysics Data System (ADS)
Gaillot, Philippe; Punongbayan, Jane T.; Rea, Brice
2004-02-01
Since 1985, the Ocean Drilling Program (ODP) has been supporting multidisciplinary research in exploring the structure and history of Earth beneath the oceans. After more than 200 Legs, complementary datasets covering different geological environments, periods and space scales have been obtained and distributed world-wide using the ODP-Janus and Lamont Doherty Earth Observatory-Borehole Research Group (LDEO-BRG) database servers. In Earth Sciences, more than in any other science, the ensemble of these data is characterized by heterogeneous formats and graphical representation modes. In order to fully and quickly assess this information, a set of Unix/Linux and Generic Mapping Tool-based C programs has been designed to convert and integrate datasets acquired during the present ODP and the future Integrated ODP (IODP) Legs. Using ODP Leg 199 datasets, we show examples of the capabilities of the proposed programs. The program map_plot is used to easily display datasets onto 2-D maps. The program bgg_plot (borehole geology and geophysics plot) displays data with respect to depth and/or time. The latter program includes depth shifting, filtering and plotting of core summary information, continuous and discrete-sample core measurements (e.g. physical properties, geochemistry, etc.), in situ continuous logs, magneto- and bio-stratigraphies, specific sedimentological analyses (lithology, grain size, texture, porosity, etc.), as well as core and borehole wall images. Outputs from both programs are initially produced in PostScript format that can be easily converted to Portable Document Format (PDF) or standard image formats (GIF, JPEG, etc.) using widely distributed conversion programs. Based on command line operations and customization of parameter files, these programs can be included in other shell- or database-scripts, automating plotting procedures of data requests. As an open source software, these programs can be customized and interfaced to fulfill any specific plotting need of geoscientists using ODP-like datasets.
Implementing a distributed intranet-based information system.
O'Kane, K C; McColligan, E E; Davis, G A
1996-11-01
The article discusses Internet and intranet technologies and describes how to install an intranet-based information system using the Merle language facility and other readily available components. Merle is a script language designed to support decentralized medical record information retrieval applications on the World Wide Web. The goal of this work is to provide a script language tool to facilitate construction of efficient, fully functional, multipoint medical record information systems that can be accessed anywhere by low-cost Web browsers to search, retrieve, and analyze patient information. The language allows legacy MUMPS applications to function in a Web environment and to make use of the Web graphical, sound, and video presentation services. It also permits downloading of script applets for execution on client browsers, and it can be used in standalone mode with the Unix, Windows 95, Windows NT, and OS/2 operating systems.
A comprehensive test of clinical reasoning for medical students: An olympiad experience in Iran
Monajemi, Alireza; Arabshahi, Kamran Soltani; Soltani, Akbar; Arbabi, Farshid; Akbari, Roghieh; Custers, Eugene; Hadadgar, Arash; Hadizadeh, Fatemeh; Changiz, Tahereh; Adibi, Peyman
2012-01-01
Background: Although some tests for clinical reasoning assessment are now available, the theories of medical expertise have not played a major role in this field. In this paper, illness script theory was chosen as a theoretical framework, and contemporary clinical reasoning tests were put together based on this theoretical model. Materials and Methods: This paper is a qualitative study performed with an action research approach. This style of research is performed in a context where authorities focus on promoting their organizations' performance and is carried out in the form of teamwork called participatory research. Results: Results are presented in four parts: basic concepts, clinical reasoning assessment, test framework, and scoring. Conclusion: We concluded that no single test could thoroughly assess clinical reasoning competency, and therefore a battery of clinical reasoning tests is needed. This battery should cover all three parts of the clinical reasoning process: script activation, selection, and verification. In addition, both analytical and non-analytical reasoning, as well as both diagnostic and management reasoning, should be given even consideration in this battery. This paper explains the process of designing and implementing the battery of clinical reasoning tests in the Olympiad for medical sciences students through action research. PMID:23555113
Knowledge-based assistance for science visualization and analysis using large distributed databases
NASA Technical Reports Server (NTRS)
Handley, Thomas H., Jr.; Jacobson, Allan S.; Doyle, Richard J.; Collins, Donald J.
1993-01-01
Within this decade, the growth in complexity of exploratory data analysis and the sheer volume of space data require new and innovative approaches to support science investigators in achieving their research objectives. To date, there have been numerous efforts addressing the individual issues involved in inter-disciplinary, multi-instrument investigations. However, while successful at small scale, these efforts have not proven to be open and scalable. This proposal addresses four areas of significant need: scientific visualization and analysis; science data management; interactions in a distributed, heterogeneous environment; and knowledge-based assistance for these functions. The fundamental innovation embedded within this proposal is the integration of three automation technologies, namely, knowledge-based expert systems, science visualization and science data management. This integration is based on a concept called the DataHub. With the DataHub concept, NASA will be able to apply a more complete solution to all nodes of a distributed system. Both computation nodes and interactive nodes will be able to effectively and efficiently use the data services (access, retrieval, update, etc.) within a distributed, interdisciplinary information system in a uniform and standard way. This will allow the science investigators to concentrate on their scientific endeavors, rather than involve themselves in the intricate technical details of the systems and tools required to accomplish their work. Thus, science investigators need not be programmers. The emphasis will be on the definition and prototyping of system elements with sufficient detail to enable data analysis and interpretation leading to publishable scientific results. In addition, the proposed work includes all the required end-to-end components and interfaces to demonstrate the completed concept.
The Perils of Ignoring History: Big Tobacco Played Dirty and Millions Died. How Similar Is Big Food?
Brownell, Kelly D; Warner, Kenneth E
2009-01-01
Context: In 1954 the tobacco industry paid to publish the “Frank Statement to Cigarette Smokers” in hundreds of U.S. newspapers. It stated that the public's health was the industry's concern above all others and promised a variety of good-faith changes. What followed were decades of deceit and actions that cost millions of lives. In the hope that the food history will be written differently, this article both highlights important lessons that can be learned from the tobacco experience and recommends actions for the food industry. Methods: A review and analysis of empirical and historical evidence pertaining to tobacco and food industry practices, messages, and strategies to influence public opinion, legislation and regulation, litigation, and the conduct of science. Findings: The tobacco industry had a playbook, a script, that emphasized personal responsibility, paying scientists who delivered research that instilled doubt, criticizing the “junk” science that found harms associated with smoking, making self-regulatory pledges, lobbying with massive resources to stifle government action, introducing “safer” products, and simultaneously manipulating and denying both the addictive nature of their products and their marketing to children. The script of the food industry is both similar to and different from the tobacco industry script. Conclusions: Food is obviously different from tobacco, and the food industry differs from tobacco companies in important ways, but there also are significant similarities in the actions that these industries have taken in response to concern that their products cause harm. Because obesity is now a major global problem, the world cannot afford a repeat of the tobacco history, in which industry talks about the moral high ground but does not occupy it. PMID:19298423
An interactive HTML ocean nowcast GUI based on Perl and JavaScript
NASA Astrophysics Data System (ADS)
Sakalaukus, Peter J.; Fox, Daniel N.; Louise Perkins, A.; Smedstad, Lucy F.
1999-02-01
We describe the use of Hyper Text Markup Language (HTML), JavaScript code, and Perl I/O to create and validate forms in an Internet-based graphical user interface (GUI) for the Naval Research Laboratory (NRL) Ocean models and Assimilation Demonstration System (NOMADS). The resulting nowcast system can be operated from any compatible browser across the Internet, for although the GUI was prepared in a Netscape browser, it used no Netscape extensions. Code available at: http://www.iamg.org/CGEditor/index.htm
Meteorite, a rock from space: A planetarium adventure for children
NASA Astrophysics Data System (ADS)
Rodríguez Hidalgo, I.; Naveros Y Naveiras, R.; González Sánchez, O.
2008-06-01
At the Museum of the Science and the Cosmos (MCC, La Laguna, Tenerife) there is a small planetarium. All the different planetarium shows are carried out entirely by the Museum staff, from the original idea and the script to the final production. In February 2007, Meteorite, a rock from space, a new show, specifically for children, was released. The characters (astronomical bodies) are played by puppets, designed and manufactured for this occasion; the script has been carefully written, and introduces many astronomical concepts in the form of an entertaining tale, which encourages the children to participate by crying, counting, helping the characters - just like a traditional puppet show. The aim of this contribution is to review the different resources (some of them really innovative) used to create this programme, which offers plenty of future possibilities.
Primary Pre-Service Teachers' Skills in Planning a Guided Scientific Inquiry
ERIC Educational Resources Information Center
García-Carmona, Antonio; Criado, Ana M.; Cruz-Guzmán, Marta
2017-01-01
A study is presented of the skills that primary pre-service teachers (PPTs) have in completing the planning of a scientific inquiry on the basis of a guiding script. The sample comprised 66 PPTs who constituted a group-class of the subject "Science Teaching," taught in the second year of an undergraduate degree in primary education at a…
Variables, Decisions, and Scripting in Construct
2009-09-01
…grounded in sociology and cognitive science, which seeks to model the processes and situations by which humans interact and share information… Construct is an embodiment of constructuralism (Carley 1986), a theory which posits that human social structures and cognitive structures co-evolve, so that… human cognition reflects human social behavior, and that human social behavior simultaneously influences cognitive processes. Recent work with…
Social.Water--Open Source Citizen Science Software for CrowdHydrology
NASA Astrophysics Data System (ADS)
Fienen, M. N.; Lowry, C.
2013-12-01
CrowdHydrology is a crowd-sourced citizen science project in which passersby near streams are encouraged to read a gage and send an SMS (text) message with the water level to a number indicated on a sign. The project was initially started using free services such as Google Voice, Gmail, and Google Maps to acquire and present the data on the internet. Social.Water is open-source software, using Python and JavaScript, that automates the acquisition, categorization, and presentation of the data. Open-source objectives pervade both the project and the software: the code is hosted at GitHub, only free scripting tools are used, and any person or organization can install a gage and join the CrowdHydrology network. In the first year, 10 sites were deployed in upstate New York, USA. In the second year, expansion to 44 sites throughout the upper Midwest USA was achieved. Comparisons with official USGS and academic measurements have shown low error rates. Citizen participation varies greatly from site to site, so surveys or other social information are sought for insight into why some sites experience higher rates of participation than others.
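The core automation step, categorizing an incoming SMS reading, can be illustrated with a short Python sketch; the message format and station table below are hypothetical stand-ins, not Social.Water's actual code:

    import re

    # Hypothetical station registry: sign ID -> gage datum offset in feet.
    STATIONS = {"NY1001": 0.0, "WI2004": 1.5}

    def parse_reading(sms_text):
        """Extract (station_id, water_level_ft) from a message like 'NY1001 3.72'."""
        match = re.match(r"\s*([A-Z]{2}\d{4})\s+(\d+(?:\.\d+)?)", sms_text.upper())
        if not match:
            return None  # uncategorizable message; flag for manual review
        station, level = match.group(1), float(match.group(2))
        if station not in STATIONS:
            return None
        return station, level + STATIONS[station]

    print(parse_reading("ny1001 3.72"))  # ('NY1001', 3.72)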
Porting Ordinary Applications to Blue Gene/Q Supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy
2015-08-31
Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications over leadership class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques: sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and that are invoked as a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.
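As a generic illustration of the many-task (ensemble) pattern the paper targets, and explicitly not the Swift sub-jobs or main-wrap mechanisms themselves, a plain Python process pool can stand in for the idea of packing many small, independent runs into one allocation (the './app' executable and its argument convention are hypothetical):

    from concurrent.futures import ProcessPoolExecutor
    import subprocess

    def run_case(case_id):
        # Each task invokes an ordinary executable on its own input file
        # ('./app' and the input naming scheme are placeholders).
        result = subprocess.run(["./app", f"input_{case_id}.dat"],
                                capture_output=True, text=True)
        return case_id, result.returncode

    if __name__ == "__main__":
        with ProcessPoolExecutor(max_workers=16) as pool:
            for case_id, rc in pool.map(run_case, range(1000)):
                if rc != 0:
                    print(f"case {case_id} failed with code {rc}")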
A Software Engineering Paradigm for Quick-turnaround Earth Science Data Projects
NASA Astrophysics Data System (ADS)
Moore, K.
2016-12-01
As is generally the case with applied-science professional and educational programs, participants can come from a variety of technical backgrounds. In the NASA DEVELOP National Program, the participants constitute an interdisciplinary set of backgrounds, with varying levels of experience in computer programming. DEVELOP makes use of geographically explicit data sets, and it is necessary to use geographic information systems and geospatial image-processing environments. As data sets cover longer time spans and include more complex sets of parameters, automation is becoming an increasingly prevalent feature. Though platforms such as ArcGIS, ERDAS Imagine, and ENVI facilitate the batch processing of geospatial imagery, these environments naturally constrain users to the tools that are available. Users must then turn to "homemade" scripting in more traditional programming languages such as Python, JavaScript, or R to automate workflows. However, in the context of quick-turnaround projects like those in DEVELOP, the programming learning curve may be prohibitively steep. In this work, we consider how to best design a software development paradigm that addresses two major constraints: an arbitrarily experienced programmer and quick-turnaround project timelines.
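As an example of the kind of "homemade" batch scripting the abstract refers to, the following Python sketch loops GDAL's gdal_translate command over a directory of rasters; the directory layout and file extensions are hypothetical, and gdal_translate must be installed and on the PATH:

    import glob
    import os
    import subprocess

    # Convert every HDF granule in ./granules to GeoTIFF (paths are illustrative).
    for src in glob.glob("granules/*.hdf"):
        dst = os.path.splitext(src)[0] + ".tif"
        subprocess.run(["gdal_translate", "-of", "GTiff", src, dst], check=True)
        print("wrote", dst)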
Frontiers of Engineering: Reports on Leading-Edge Engineering from the 2008 Symposium
2009-07-07
…article, we review recent progress on a highly efficient, scalable approach for the ordered, uniform… (from the chapter "Roll Printing of Crystalline Nanowires") …target delivery of a therapy to a particular physiological system, minimizing systemic side effects. Talks in the session provided an overview of…
Building Scalable Knowledge Graphs for Earth Science
NASA Astrophysics Data System (ADS)
Ramachandran, R.; Maskey, M.; Gatlin, P. N.; Zhang, J.; Duan, X.; Bugbee, K.; Christopher, S. A.; Miller, J. J.
2017-12-01
Estimates indicate that the world's information will grow by 800% in the next five years. In any given field, a single researcher or a team of researchers cannot keep up with this rate of knowledge expansion without the help of cognitive systems. Cognitive computing, defined as the use of information technology to augment human cognition, can help tackle large systemic problems. Knowledge graphs, one of the foundational components of cognitive systems, link key entities in a specific domain with other entities via relationships. Researchers can mine these graphs to make probabilistic recommendations and to infer new knowledge. At this point, however, there is a dearth of tools to generate scalable knowledge graphs from the existing corpus of scientific literature for Earth science research. Our project is currently developing an end-to-end automated methodology for incrementally constructing knowledge graphs for Earth science. Semantic Entity Recognition (SER) is one of the key steps in this methodology. SER for Earth science uses external resources (including metadata catalogs and controlled vocabularies) as references to guide entity extraction and recognition (i.e., labeling) from unstructured text, in order to build a large training set to seed the subsequent auto-learning component in our algorithm. Results from several SER experiments will be presented, along with lessons learned.
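A minimal sketch of the dictionary-guided entity labeling step described above, with a toy vocabulary standing in for the metadata catalogs and controlled vocabularies the project actually uses:

    # Toy controlled vocabulary: surface form -> entity type (illustrative only).
    VOCAB = {
        "sea surface temperature": "PHENOMENON",
        "MODIS": "INSTRUMENT",
        "Terra": "PLATFORM",
    }

    def label_entities(sentence):
        """Return (surface form, type, offset) matches to seed a training set."""
        hits = []
        lowered = sentence.lower()
        for term, etype in VOCAB.items():
            start = lowered.find(term.lower())
            if start != -1:
                hits.append((sentence[start:start + len(term)], etype, start))
        return hits

    print(label_entities("MODIS aboard Terra measures sea surface temperature."))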
Frijling, Jessie L; van Zuiden, Mirjam; Koch, Saskia B J; Nawijn, Laura; Veltman, Dick J; Olff, Miranda
2016-04-01
Approximately 10% of trauma-exposed individuals go on to develop post-traumatic stress disorder (PTSD). Neural emotion regulation may be etiologically involved in PTSD development. Oxytocin administration early post-trauma may be a promising avenue for PTSD prevention, as intranasal oxytocin has previously been found to affect emotion regulation networks in healthy individuals and psychiatric patients. In a randomized double-blind placebo-controlled between-subjects functional magnetic resonance imaging (fMRI) study, we assessed the effects of a single intranasal oxytocin administration (40 IU) on seed-based amygdala resting-state FC with emotion regulation areas (ventromedial prefrontal cortex (vmPFC), ventrolateral prefrontal cortex (vlPFC)) and salience processing areas (insula, dorsal anterior cingulate cortex (dACC)) in 37 individuals within 11 days post-trauma. Two resting-state scans were acquired: one after neutral- and one after trauma-script-driven imagery. We found that oxytocin administration reduced amygdala-left vlPFC FC after trauma-script-driven imagery, compared with neutral-script-driven imagery, whereas in placebo-treated participants enhanced amygdala-left vlPFC FC was observed following trauma-script-driven imagery. Irrespective of script condition, oxytocin increased amygdala-insula FC and decreased amygdala-vmPFC FC. These neural effects were accompanied by lower levels of sleepiness and higher flashback intensity in the oxytocin group after the trauma script. Together, our findings show that oxytocin administration may impede emotion regulation network functioning in response to trauma reminders in recently trauma-exposed individuals. Therefore, caution may be warranted in administering oxytocin to prevent PTSD in distressed, recently trauma-exposed individuals.
Centralized Fabric Management Using Puppet, Git, and GLPI
NASA Astrophysics Data System (ADS)
Smith, Jason A.; De Stefano, John S., Jr.; Fetzko, John; Hollowell, Christopher; Ito, Hironori; Karasawa, Mizuki; Pryor, James; Rao, Tejas; Strecker-Kellogg, William
2012-12-01
Managing the infrastructure of a large and complex data center can be extremely difficult without taking advantage of recent technological advances in administrative automation. Puppet is a seasoned open-source tool that is designed for enterprise class centralized configuration management. At the RHIC and ATLAS Computing Facility (RACF) at Brookhaven National Laboratory, we use Puppet along with Git, GLPI, and some custom scripts as part of our centralized configuration management system. In this paper, we discuss how we use these tools for centralized configuration management of our servers and services, change management requiring authorized approval of production changes, a complete version controlled history of all changes made, separation of production, testing and development systems using puppet environments, semi-automated server inventory using GLPI, and configuration change monitoring and reporting using the Puppet dashboard. We will also discuss scalability and performance results from using these tools on a 2,000+ node cluster and 400+ infrastructure servers with an administrative staff of approximately 25 full-time employees (FTEs).
NASA Astrophysics Data System (ADS)
King, B. A.
2017-12-01
Worldview is a high-traffic web mapping application created using the JavaScript mapping library, OpenLayers. This presentation will primarily focus on three new features: A wrapping component that seamlessly shows satellite imagery over the dateline where most maps either stop or wrap the imagery of the same date. An animation feature that allows users to select date ranges over which they can animate. An A/B comparison feature that gives users the power to compare imagery between dates and layers. In response to an increasingly large codebase caused by ongoing feature requests, Worldview is transitioning to a smaller core codebase comprised of external reusable modules. When creating a module with the intention of having someone else reuse it for a different task, one inherently starts generating code that is easier to read and easier to maintain. This presentation will show demos of these features and cover development techniques used to create them.
A GPU accelerated PDF transparency engine
NASA Astrophysics Data System (ADS)
Recker, John; Lin, I.-Jong; Tastl, Ingeborg
2011-01-01
As commercial printing presses become faster, cheaper and more efficient, so too must the Raster Image Processors (RIPs) that prepare data for them to print. Digital press RIPs, however, have been challenged, on the one hand, to meet the ever-increasing print performance of the latest digital presses and, on the other, to process increasingly complex documents with transparent layers and embedded ICC profiles. This paper explores the challenges encountered when implementing a GPU-accelerated driver for the open source Ghostscript Adobe PostScript and PDF language interpreter, targeted at accelerating PDF transparency for high-speed commercial presses. It further describes our solution, including an image memory manager for tiling input and output images and documents, a PDF-compatible multiple-image-layer blending engine, and a GPU-accelerated ICC v4 compatible color transformation engine. The result, we believe, is the foundation for a scalable, efficient, distributed RIP system that can meet current and future RIP requirements for a wide range of commercial digital presses.
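The layer-blending engine mentioned above presumably implements, among other modes, the standard source-over compositing rule; a NumPy sketch of that rule (an illustration of the general technique, not HP's actual implementation) is:

    import numpy as np

    def source_over(dst, src, src_alpha):
        """Standard 'over' compositing: blend src onto dst with per-pixel alpha.

        dst, src: float arrays of shape (H, W, 3) in [0, 1]
        src_alpha: float array of shape (H, W, 1) in [0, 1]
        """
        return src_alpha * src + (1.0 - src_alpha) * dst

    # Tiny example: a half-transparent red layer over a white background.
    bg = np.ones((2, 2, 3))
    fg = np.zeros((2, 2, 3)); fg[..., 0] = 1.0
    alpha = np.full((2, 2, 1), 0.5)
    print(source_over(bg, fg, alpha)[0, 0])  # [1.0, 0.5, 0.5]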
Durham, Erin-Elizabeth A; Yu, Xiaxia; Harrison, Robert W
2014-12-01
Effective machine learning handles large datasets efficiently. One key feature of handling large data is the use of databases such as MySQL. The freeware fuzzy decision tree induction tool, FDT, is a scalable supervised-classification software tool implementing fuzzy decision trees. It is based on an optimized fuzzy ID3 (FID3) algorithm. FDT 2.0 improves upon FDT 1.0 by bridging the gap between data science and data engineering: it combines a robust decisioning tool with data retention for future decisions, so that the tool does not need to be recalibrated from scratch every time a new decision is required. In this paper we briefly review the analytical capabilities of the freeware FDT tool and its major features and functionalities; examples of large biological datasets from HIV, microRNAs and sRNAs are included. This work shows how to integrate fuzzy decision algorithms with modern database technology. In addition, we show that integrating the fuzzy decision tree induction tool with database storage allows for optimal user satisfaction in today's Data Analytics world.
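To make the fuzzy ID3 idea concrete, here is a small Python sketch of two ingredients a fuzzy decision tree needs, membership functions and fuzzy entropy; this illustrates the general FID3 approach, not the FDT tool's internal code:

    import math

    def triangular(x, a, b, c):
        """Triangular membership function peaking at b (zero outside [a, c])."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def fuzzy_entropy(memberships):
        """Entropy over class membership mass (a common FID3-style measure)."""
        total = sum(memberships.values())
        entropy = 0.0
        for mass in memberships.values():
            if mass > 0:
                p = mass / total
                entropy -= p * math.log2(p)
        return entropy

    # Example: membership mass of examples in two classes at a candidate node.
    print(fuzzy_entropy({"resistant": 3.2, "susceptible": 0.8}))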
Technical development of PubMed Interact: an improved interface for MEDLINE/PubMed searches
Muin, Michael; Fontelo, Paul
2006-01-01
Background The project aims to create an alternative search interface for MEDLINE/PubMed that may provide assistance to the novice user and added convenience to the advanced user. An earlier version of the project was the 'Slider Interface for MEDLINE/PubMed searches' (SLIM), which provided JavaScript slider bars to control search parameters. In this new version, recent developments in Web-based technologies were implemented. These changes may prove to be even more valuable in enhancing user interactivity through client-side manipulation and management of results. Results PubMed Interact is a Web-based MEDLINE/PubMed search application built with HTML, JavaScript and PHP. It is implemented on Windows Server 2003 with Apache 2.0.52, PHP 4.4.1 and MySQL 4.1.18. PHP scripts provide the backend engine that connects with E-Utilities and parses XML files. JavaScript manages client-side functionalities and converts Web pages into interactive platforms using dynamic HTML (DHTML), Document Object Model (DOM) tree manipulation and Ajax methods. With PubMed Interact, users can limit searches with JavaScript slider bars, preview result counts, delete citations from the list, display and add related articles and create relevance lists. Many interactive features occur client-side, which allows instant feedback without reloading or refreshing the page, resulting in a more efficient user experience. Conclusion PubMed Interact is a highly interactive Web-based search application for MEDLINE/PubMed that explores recent trends in Web technologies like DOM tree manipulation and Ajax. It may become a valuable technical development for online medical search applications. PMID:17083729
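The E-Utilities service that PubMed Interact's PHP backend calls is a public NCBI API; a minimal Python query against its ESearch endpoint looks like this (the search term and retmax are example values, and the service also supports the XML output that PubMed Interact parses):

    import json
    import urllib.request

    params = "db=pubmed&term=fuzzy+decision+tree&retmax=5&retmode=json"
    url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
    with urllib.request.urlopen(url) as response:
        result = json.load(response)

    print(result["esearchresult"]["count"])   # total matching citations
    print(result["esearchresult"]["idlist"])  # first five PMIDs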
ERIC Educational Resources Information Center
Gagnon, Robert; Lubarsky, Stuart; Lambert, Carole; Charlin, Bernard
2011-01-01
The Script Concordance Test (SCT) uses a panel-based, aggregate scoring method that aims to capture the variability of responses of experienced practitioners to particular clinical situations. The use of this type of scoring method is a key determinant of the tool's discriminatory power, but deviant answers could potentially diminish the…
Semantic Memory Organization in Young Children: The Script-Based Categorization of Early Words.
ERIC Educational Resources Information Center
Maaka, Margaret J.; Wong, Eddie K.
This study examined whether scripts provide a basis for the categories preschool children use to structure their semantic memories and whether the use of taxonomies to structure memory becomes more common only after children enter elementary school. Subjects were 108 children in three equal groups of 18 boys and 18 girls each, of 4-, 5-,…
Bilingual Writing as an Act of Identity: Sign-Making in Multiple Scripts
ERIC Educational Resources Information Center
Kabuto, Bobbie
2010-01-01
This article explores early bilingual script writing as an act of identity. Using multiple theoretical perspectives related to social semiotics and social constructivist perspectives on identity and writing, the research presented in this article is based on a case study of an early biliterate learner of Japanese and English from the ages of 3-7.…
Autonomic correlates of physical and moral disgust.
Ottaviani, Cristina; Mancini, Francesco; Petrocchi, Nicola; Medea, Barbara; Couyoumdjian, Alessandro
2013-07-01
Given that the hypothesis of a common origin of physical and moral disgust has received sparse empirical support, this study aimed to shed light on the subjective and autonomic signatures of these two facets of the same emotional response. Participants (20 men, 20 women) were randomly assigned to physical or moral disgust induction by the use of audio scripts while their electrocardiogram was continuously recorded. Affect ratings were obtained before and after the induction. Time- and frequency-domain heart rate variability (HRV) measures were obtained. After controlling for disgust sensitivity (DS-R) and obsessive-compulsive (OCI-R) tendencies, both scripts elicited disgust, but whereas the physical script elicited a feeling of dirtiness, the moral script evoked more indignation and contempt. The disgust-induced subjective responses were associated with opposite patterns of autonomic reactivity: enhanced activity of the parasympathetic nervous system without concurrent changes in heart rate (HR) for physical disgust, and decreased vagal tone with increased HR and autonomic imbalance for moral disgust. Results suggest that immorality shares the same biological root as physical disgust only in subjects with obsessive-compulsive tendencies. Disgust appears to be a heterogeneous response that varies based on the individuals' contamination-based appraisal. Copyright © 2013 Elsevier B.V. All rights reserved.
Userscripts for the life sciences.
Willighagen, Egon L; O'Boyle, Noel M; Gopalakrishnan, Harini; Jiao, Dazhi; Guha, Rajarshi; Steinbeck, Christoph; Wild, David J
2007-12-21
The web has seen an explosion of chemistry and biology related resources in the last 15 years: thousands of scientific journals, databases, wikis, blogs and resources are available with a wide variety of types of information. There is a huge need to aggregate and organise this information. However, the sheer number of resources makes it unrealistic to link them all in a centralised manner. Instead, search engines to find information in those resources flourish, and formal languages like Resource Description Framework and Web Ontology Language are increasingly used to allow linking of resources. A recent development is the use of userscripts to change the appearance of web pages, by on-the-fly modification of the web content. This opens possibilities to aggregate information and computational results from different web resources into the web page of one of those resources. Several userscripts are presented that enrich biology and chemistry related web resources by incorporating or linking to other computational or data sources on the web. The scripts make use of Greasemonkey-like plugins for web browsers and are written in JavaScript. Information from third-party resources are extracted using open Application Programming Interfaces, while common Universal Resource Locator schemes are used to make deep links to related information in that external resource. The userscripts presented here use a variety of techniques and resources, and show the potential of such scripts. This paper discusses a number of userscripts that aggregate information from two or more web resources. Examples are shown that enrich web pages with information from other resources, and show how information from web pages can be used to link to, search, and process information in other resources. Due to the nature of userscripts, scientists are able to select those scripts they find useful on a daily basis, as the scripts run directly in their own web browser rather than on the web server. This flexibility allows the scientists to tune the features of web resources to optimise their productivity.
Userscripts for the Life Sciences
Willighagen, Egon L; O'Boyle, Noel M; Gopalakrishnan, Harini; Jiao, Dazhi; Guha, Rajarshi; Steinbeck, Christoph; Wild, David J
2007-01-01
Background The web has seen an explosion of chemistry and biology related resources in the last 15 years: thousands of scientific journals, databases, wikis, blogs and resources are available with a wide variety of types of information. There is a huge need to aggregate and organise this information. However, the sheer number of resources makes it unrealistic to link them all in a centralised manner. Instead, search engines to find information in those resources flourish, and formal languages like Resource Description Framework and Web Ontology Language are increasingly used to allow linking of resources. A recent development is the use of userscripts to change the appearance of web pages, by on-the-fly modification of the web content. This opens possibilities to aggregate information and computational results from different web resources into the web page of one of those resources. Results Several userscripts are presented that enrich biology and chemistry related web resources by incorporating or linking to other computational or data sources on the web. The scripts make use of Greasemonkey-like plugins for web browsers and are written in JavaScript. Information from third-party resources are extracted using open Application Programming Interfaces, while common Universal Resource Locator schemes are used to make deep links to related information in that external resource. The userscripts presented here use a variety of techniques and resources, and show the potential of such scripts. Conclusion This paper discusses a number of userscripts that aggregate information from two or more web resources. Examples are shown that enrich web pages with information from other resources, and show how information from web pages can be used to link to, search, and process information in other resources. Due to the nature of userscripts, scientists are able to select those scripts they find useful on a daily basis, as the scripts run directly in their own web browser rather than on the web server. This flexibility allows the scientists to tune the features of web resources to optimise their productivity. PMID:18154664
Design and development of a web-based application for diabetes patient data management.
Deo, S S; Deobagkar, D N; Deobagkar, Deepti D
2005-01-01
A web-based database management system developed for collecting, managing and analysing information on diabetes patients is described here. It is a searchable, client-server, relational database application, developed on the Windows platform using Oracle, Active Server Pages (ASP), VBScript and JavaScript. The software is menu-driven and allows authorized healthcare providers to access, enter, update and analyse patient information. Graphical representations of data can be generated by the system using bar charts and pie charts. An interactive web interface allows users to query the database and generate reports. Alpha- and beta-testing of the system were carried out, and the system at present holds records of 500 diabetes patients and has been found useful in diagnosis and treatment. In addition to providing patient data on a continuous basis in a simple format, the system is used in population and comparative analyses. It has proved to be of significant advantage to the healthcare provider as compared to the paper-based system.
Teaching Thousands with Cloud-based GIS
NASA Astrophysics Data System (ADS)
Gould, Michael; DiBiase, David; Beale, Linda
2016-04-01
Educators often draw a distinction between "teaching about GIS" and "teaching with GIS." Teaching about GIS involves helping students learn what GIS is, what it does, and how it works. On the other hand, teaching with GIS involves using the technology as a means to achieve education objectives in the sciences, social sciences, professional disciplines like engineering and planning, and even the humanities. The same distinction applies to CyberGIS. Understandably, early efforts to develop CyberGIS curricula and educational resources tend to be concerned primarily with CyberGIS itself. However, if CyberGIS becomes as functional, usable and scalable as it aspires to be, teaching with CyberGIS has the potential to enable large and diverse global audiences to perform spatial analysis using hosted data, mapping and analysis services all running in the cloud. Early examples of teaching tens of thousands of students across the globe with cloud-based GIS include the massive open online courses (MOOCs) offered by Penn State University and others, as well as the series of MOOCs more recently developed and offered by Esri. In each case, ArcGIS Online was used to help students achieve educational objectives in subjects like business, geodesign, geospatial intelligence, and spatial analysis, as well as mapping. Feedback from the more than 100,000 total student participants to date, as well as from the educators and staff who supported these offerings, suggests that online education with cloud-based GIS is scalable to very large audiences. Lessons learned from the course design, development, and delivery of these early examples may be useful in informing the continuing development of CyberGIS education. While MOOCs may have passed the peak of their "hype cycle" in higher education, the phenomenon they revealed persists: namely, a global mass market of educated young adults who turn to free online education to expand their horizons. The ability of CyberGIS to attract and effectively serve this market may be one measure of its success.
New methods for analyzing semantic graph based assessments in science education
NASA Astrophysics Data System (ADS)
Vikaros, Lance Steven
This research investigated how the scoring of semantic graphs (known by many as concept maps) could be improved and automated in order to address issues of inter-rater reliability and scalability. As part of the NSF funded SENSE-IT project to introduce secondary school science students to sensor networks (NSF Grant No. 0833440), semantic graphs illustrating how temperature change affects water ecology were collected from 221 students across 16 schools. The graphing task did not constrain students' use of terms, as is often done with semantic graph based assessment due to coding and scoring concerns. The graphing software used provided real-time feedback to help students learn how to construct graphs, stay on topic and effectively communicate ideas. The collected graphs were scored by human raters using assessment methods expected to boost reliability, which included adaptations of traditional holistic and propositional scoring methods, use of expert raters, topical rubrics, and criterion graphs. High levels of inter-rater reliability were achieved, demonstrating that vocabulary constraints may not be necessary after all. To investigate a new approach to automating the scoring of graphs, thirty-two different graph features characterizing graphs' structure, semantics, configuration and process of construction were then used to predict human raters' scoring of graphs in order to identify feature patterns correlated to raters' evaluations of graphs' topical accuracy and complexity. Results led to the development of a regression model able to predict raters' scoring with 77% accuracy, with 46% accuracy expected when used to score new sets of graphs, as estimated via cross-validation tests. Although such performance is comparable to other graph and essay based scoring systems, cross-context testing of the model and methods used to develop it would be needed before it could be recommended for widespread use. Still, the findings suggest techniques for improving the reliability and scalability of semantic graph based assessments without requiring constraint of how ideas are expressed.
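The predictive-scoring approach described above can be sketched in a few lines with scikit-learn; the feature matrix here is random stand-in data, not the study's actual 32 graph features or rater scores:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(221, 32))  # 221 graphs x 32 structural/semantic features
    # Synthetic rater scores: a linear signal in the first four features plus noise.
    y = X[:, :4].sum(axis=1) + rng.normal(scale=0.5, size=221)

    model = LinearRegression()
    # Cross-validated R^2 estimates how well feature patterns predict human scores.
    print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())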
OSG-GEM: Gene Expression Matrix Construction Using the Open Science Grid.
Poehlman, William L; Rynge, Mats; Branton, Chris; Balamurugan, D; Feltus, Frank A
2016-01-01
High-throughput DNA sequencing technology has revolutionized the study of gene expression while introducing significant computational challenges for biologists. These computational challenges include access to sufficient computer hardware and functional data processing workflows. Both these challenges are addressed with our scalable, open-source Pegasus workflow for processing high-throughput DNA sequence datasets into a gene expression matrix (GEM) using computational resources available to U.S.-based researchers on the Open Science Grid (OSG). We describe the usage of the workflow (OSG-GEM), discuss workflow design, inspect performance data, and assess accuracy in mapping paired-end sequencing reads to a reference genome. A target OSG-GEM user is proficient with the Linux command line and possesses basic bioinformatics experience. The user may run this workflow directly on the OSG or adapt it to novel computing environments.
OSG-GEM: Gene Expression Matrix Construction Using the Open Science Grid
Poehlman, William L.; Rynge, Mats; Branton, Chris; Balamurugan, D.; Feltus, Frank A.
2016-01-01
High-throughput DNA sequencing technology has revolutionized the study of gene expression while introducing significant computational challenges for biologists. These computational challenges include access to sufficient computer hardware and functional data processing workflows. Both these challenges are addressed with our scalable, open-source Pegasus workflow for processing high-throughput DNA sequence datasets into a gene expression matrix (GEM) using computational resources available to U.S.-based researchers on the Open Science Grid (OSG). We describe the usage of the workflow (OSG-GEM), discuss workflow design, inspect performance data, and assess accuracy in mapping paired-end sequencing reads to a reference genome. A target OSG-GEM user is proficient with the Linux command line and possesses basic bioinformatics experience. The user may run this workflow directly on the OSG or adapt it to novel computing environments. PMID:27499617
XPRESS: eXascale PRogramming Environment and System Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brightwell, Ron; Sterling, Thomas; Koniges, Alice
The XPRESS Project is one of four major projects of the DOE Office of Science Advanced Scientific Computing Research X-stack Program, initiated in September 2012. The purpose of XPRESS is to devise an innovative system software stack to enable practical and useful exascale computing around the end of the decade, with near-term contributions to the efficient and scalable operation of trans-petaflops performance systems in the next two to three years, both for DOE mission-critical applications. To this end, XPRESS directly addresses the critical computing challenges of efficiency, scalability, and programmability through introspective methods of dynamic adaptive resource management and task scheduling.
NASA Technical Reports Server (NTRS)
2006-01-01
The Global Change Master Directory (GCMD) has been one of the best known Earth science and global change data discovery online resources throughout its extended operational history. The growing popularity of the system since its introduction on the World Wide Web in 1994 has created an environment where resolving issues of scalability, security, and interoperability have been critical to providing the best available service to the users and partners of the GCMD. Innovative approaches developed at the GCMD in these areas will be presented with a focus on how they relate to current and future GO-ESSP community needs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Barton
2014-06-30
Peta-scale computing environments pose significant challenges for both system and application developers, and addressing them requires more than simply scaling up existing tera-scale solutions. Performance analysis tools play an important role in gaining the needed understanding, but previous monolithic tools with fixed feature sets have not sufficed. Instead, this project worked on the design, implementation, and evaluation of a general, flexible tool infrastructure supporting the construction of performance tools as "pipelines" of high-quality tool building blocks. These tool building blocks provide common performance tool functionality, and are designed for scalability, lightweight data acquisition and analysis, and interoperability. For this project, we built on Open|SpeedShop, a modular and extensible open source performance analysis tool set. The design and implementation of such a general and reusable infrastructure targeted at petascale systems required us to address several challenging research issues. All components needed to be designed for scale, a task made more difficult by the need to provide general modules. The infrastructure needed to support online data aggregation to cope with the large amounts of performance and debugging data. We needed to be able to map any combination of tool components to each target architecture. And we needed to design interoperable tool APIs and workflows that were concrete enough to support the required functionality, yet provide the necessary flexibility to address a wide range of tools. A major result of this project is the ability to use this scalable infrastructure to quickly create tools that match a machine architecture and a performance problem that needs to be understood. Another benefit is the ability for application engineers to use the highly scalable, interoperable version of Open|SpeedShop, which is reassembled from the tool building blocks into a flexible, multi-user set of tools. This set of tools is targeted at Office of Science Leadership Class computer systems and selected Office of Science application codes. We describe the contributions made by the team at the University of Wisconsin. The project built on the efforts in Open|SpeedShop funded by DOE/NNSA and the DOE/NNSA Tri-Lab community, extended Open|SpeedShop to the Office of Science Leadership Class Computing Facilities, and addressed new challenges found on these cutting-edge systems. Work done under this project at Wisconsin can be divided into two categories: new algorithms and techniques for debugging, and foundation infrastructure work on our Dyninst binary analysis and instrumentation toolkits and MRNet scalability infrastructure.
RGG: A general GUI Framework for R scripts
Visne, Ilhami; Dilaveroglu, Erkan; Vierlinger, Klemens; Lauss, Martin; Yildiz, Ahmet; Weinhaeusel, Andreas; Noehammer, Christa; Leisch, Friedrich; Kriegner, Albert
2009-01-01
Background R is the leading open source statistics software with a vast number of biostatistical and bioinformatical analysis packages. To exploit the advantages of R, extensive scripting/programming skills are required. Results We have developed a software tool called R GUI Generator (RGG) which enables the easy generation of Graphical User Interfaces (GUIs) for the programming language R by adding a few Extensible Markup Language (XML) tags. RGG consists of an XML-based GUI definition language and a Java-based GUI engine. GUIs are generated at runtime from defined GUI tags that are embedded into the R script. User GUI input is returned to the R code and replaces the XML tags. RGG files can be developed using any text editor. The current version of RGG is available as stand-alone software (RGGRunner) and as a plug-in for JGR. Conclusion RGG is a general GUI framework for R that has the potential to introduce R statistics (R packages, built-in functions and scripts) to users with limited programming skills and helps to bridge the gap between R developers and GUI-dependent users. RGG aims to abstract GUI development from individual GUI toolkits by using an XML-based GUI definition language. Thus RGG can be easily integrated into any software. The RGG project further includes the development of a web-based repository for RGG GUIs. RGG is an open source project licensed under the Lesser General Public License (LGPL) and can be downloaded freely. PMID:19254356
Simplifying and enhancing the use of PyMOL with horizontal scripts
2016-01-01
Abstract Scripts are used in PyMOL to exert precise control over the appearance of the output and to ease remaking similar images at a later time. We developed horizontal scripts to ease script development. A horizontal script makes a complete scene in PyMOL like a traditional vertical script. The commands in a horizontal script are separated by semicolons. These scripts are edited interactively on the command line with no need for an external text editor. This simpler workflow accelerates script development. In using PyMOL, the illustration of a molecular scene requires an 18-element matrix of viewport settings. The default format spans several lines and is laborious to manually reformat for one line. This default format prevents the fast assembly of horizontal scripts that can reproduce a molecular scene. We solved this problem by writing a function that displays the settings on one line in a compact format suitable for horizontal scripts. We also demonstrate the mapping of aliases to horizontal scripts. Many aliases can be defined in a single script file, which can be useful for applying custom molecular representations to any structure. We also redefined horizontal scripts as Python functions to enable the use of the help function to print documentation about an alias to the command history window. We discuss how these methods of using horizontal scripts both simplify and enhance the use of PyMOL in research and education. PMID:27488983
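Because the paper redefines horizontal scripts as Python functions, a minimal example of that pattern might look like the sketch below; it must run inside PyMOL's bundled Python (so that pymol is importable), and the PDB entry and colors are arbitrary choices, not the paper's examples:

    from pymol import cmd

    def scene1():
        """Fetch a structure and render a simple cartoon scene (illustrative)."""
        cmd.fetch("1ubq")   # ubiquitin, an arbitrary example entry
        cmd.hide("everything")
        cmd.show("cartoon")
        cmd.color("marine")
        cmd.bg_color("white")

    # help(scene1) now prints the docstring, as the paper describes for aliases.
    scene1()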
QR Codes: Outlook for Food Science and Nutrition.
Sanz-Valero, Javier; Álvarez Sabucedo, Luis M; Wanden-Berghe, Carmina; Santos Gago, Juan M
2016-01-01
QR codes open up the possibility of developing simple-to-use, cost-effective, and functional systems based on the optical recognition of inexpensive tags attached to physical objects. These systems, combined with Web platforms, can provide advanced services that are already in broad use in many contexts of everyday life. Because of its philosophy, based on the automatic recognition of messages embedded in simple graphics by means of common devices such as mobile phones, the QR code is very convenient for the average user. Regrettably, its potential has not yet been fully exploited in the domains of food science and nutrition. This paper points out some applications to make the most of this technology for these domains in a straightforward manner. Given its characteristics, we are addressing systems with low barriers to entry and high scalability of deployment; therefore, their launch among professional and end users is quite simple. The paper also provides high-level indications for the evaluation of the technological frame required to implement the identified possibilities of use.
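Generating such a tag is nearly a one-liner with the widely used Python 'qrcode' package (the encoded URL below is a hypothetical placeholder):

    import qrcode

    # Encode a (hypothetical) product-information URL into a QR tag image.
    img = qrcode.make("https://example.org/food/lot/12345")
    img.save("lot12345_qr.png")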
Access to the NCAR Research Data Archive via the Globus Data Transfer Service
NASA Astrophysics Data System (ADS)
Cram, T.; Schuster, D.; Ji, Z.; Worley, S. J.
2014-12-01
The NCAR Research Data Archive (RDA; http://rda.ucar.edu) contains a large and diverse collection of meteorological and oceanographic observations, operational and reanalysis outputs, and remote sensing datasets to support atmospheric and geoscience research. The RDA contains more than 600 dataset collections which support the varying needs of a diverse user community. The number of RDA users is increasing annually, and the most popular method used to access the RDA data holdings is through web-based protocols, such as wget and cURL based scripts. In the year 2013, 10,000 unique users downloaded more than 820 terabytes of data from the RDA, and customized data products were prepared for more than 29,000 user-driven requests. To further support this increase in web download usage, the RDA is implementing the Globus data transfer service (www.globus.org) to provide a GridFTP data transfer option for the user community. The Globus service is broadly scalable, has an easy-to-install client, is sustainably supported, and provides a robust, efficient, and reliable data transfer option for RDA users. This paper highlights the main functionality and usefulness of the Globus data transfer service for accessing the RDA holdings. The Globus data transfer service, developed and supported by the Computation Institute at The University of Chicago and Argonne National Laboratory, uses GridFTP as a fast, secure, and reliable method for transferring data between two endpoints. A Globus user account is required to use this service, and data transfer endpoints are defined on the Globus web interface. In the RDA use cases, the access endpoint is created on the RDA data server at NCAR. The data user defines the receiving endpoint for the transfer, which can be the main file system at a host institution, a personal workstation, or a laptop. Once initiated, the data transfer runs as an unattended background process managed by Globus, and Globus ensures that the transfer is accurately fulfilled. Users can monitor the transfer progress on the Globus web interface and optionally receive an email notification once it is complete. Globus also provides a command-line interface to support scripted transfers, which can be useful when embedded in data processing workflows.
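For scripted transfers like those embedded in processing workflows, the Globus Python SDK exposes the same service programmatically. The sketch below assumes an already-obtained OAuth2 authorizer (omitted here), uses placeholder endpoint UUIDs and paths, and exact call signatures vary somewhat across globus-sdk versions:

    import globus_sdk

    # 'authorizer' must come from a completed OAuth2 flow (not shown);
    # the endpoint UUIDs and file paths below are placeholders.
    tc = globus_sdk.TransferClient(authorizer=authorizer)
    tdata = globus_sdk.TransferData(tc, "RDA-ENDPOINT-UUID", "DEST-ENDPOINT-UUID",
                                    label="RDA dataset pull")
    tdata.add_item("/path/on/rda/file.grib2", "/home/user/file.grib2")
    task = tc.submit_transfer(tdata)
    print("task id:", task["task_id"])  # poll or wait on this task as needed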
AMS data production facilities at science operations center at CERN
NASA Astrophysics Data System (ADS)
Choutko, V.; Egorov, A.; Eline, A.; Shan, B.
2017-10-01
The Alpha Magnetic Spectrometer (AMS) is a high energy physics experiment on board the International Space Station (ISS). This paper presents the hardware and software facilities of the Science Operations Center (SOC) at CERN. Data production is built around a production server, a scalable distributed service that links together a set of different programming modules for science data transformation and reconstruction. The server has the capacity to manage 1000 parallel job producers, i.e., up to 32K logical processors. A monitoring and management tool with a production GUI is also described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chung, Youngjoo; Kim, Keeman.
1991-01-01
An operating system shell, GPDAS (General Purpose Data Acquisition Shell), has been developed on MS-DOS-based microcomputers to provide flexibility in data acquisition and device control for magnet measurements at the Advanced Photon Source. GPDAS is both a command interpreter and an integrated script-based programming environment. It also incorporates the MS-DOS shell to make use of existing utility programs for file manipulation and data analysis. Features include: alias definition, virtual memory, windows, graphics, data and procedure backup, background operation, a script programming language, and script-level debugging. Data acquisition system devices can be controlled through an IEEE-488 board, a multifunction I/O board, a digital I/O board, and a Gespac crate via the Euro G-64 bus. GPDAS is now being used for diagnostics R&D and accelerator physics studies as well as for magnet measurements. The hardware configurations will also be discussed. 3 refs., 3 figs.
NASA Astrophysics Data System (ADS)
Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.
2006-01-01
In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers. Any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding, and product-code Reed-Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed over discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Comparisons with classical scalable coding also show the effectiveness of using hybrid scalable/multiple-description coding for wireless transmission.
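In generic terms (the notation below is illustrative, not the paper's own), the allocation problem is to choose source and channel coding rates for each layer l from the available discrete sets so as to minimize expected end-to-end distortion under a total rate budget:

    \min_{\{R_{s,l},\, R_{c,l}\}} \; \mathbb{E}\!\left[ D\!\left(\{R_{s,l}\}, \{R_{c,l}\}\right) \right]
    \quad \text{subject to} \quad \sum_{l} \left( R_{s,l} + R_{c,l} \right) \le R_{\text{total}},
    \qquad R_{s,l} \in \mathcal{S}, \;\; R_{c,l} \in \mathcal{C}

Because the candidate rate sets are discrete, the search is combinatorial and can be carried out by enumeration over the admissible rate pairs for each layer.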
Understanding Life : The Evolutionary Dynamics of Complexity and Semiosis
NASA Astrophysics Data System (ADS)
Loeckenhoff, Helmut K.
2010-11-01
Post-Renaissance sciences created different cultures. To establish an epistemological base, physics was separated from the mental domain. Consciousness was excluded from science. The life sciences were left in between, e.g., La Mettrie's 'man-machine' (1748) and 'vitalism' [e.g. Bergson 4]. Causative thinking versus intuitive arguing strictly limited comprehensive concepts. Ethology first established a potential shared base for science, proclaiming the 'biology paradigm' in the middle of the 20th century. Initially procured by cybernetics and the systems sciences, 'constructivist' models prepared a new view of human perception, and thus also of scientific 'objectivity', by introducing the 'observer'. Subsequently, computer sciences triggered the ICT revolution. In turn, ICT helped to develop chaos and complexity sciences, non-linear mathematics and its spin-offs in the formal sciences [Spencer-Brown 49], e.g. (proto-)logics. Models of life systems, e.g. anticipatory systems, integrated epistemology with mathematics and anticipatory computing [Dubois 11, 12, 13, 14], connecting them with semiotics. Seminal ideas laid down at the turn of the 19th to the 20th century [J. v. Uexküll 53] detected the co-action and co-evolvement of environments and life systems. Bio-semiotics ascribed purpose, intent and meaning as essential qualities of life. The concepts of systems biology and qualitative research also enriched and developed the anthropologies and humanities. Brain research added models of (higher) consciousness. An avant-garde is contemplating a science that includes consciousness as one additional base. New insights from the extended qualitative approach led to a re-conciliation of the basic assumptions of scientific inquiry, creating the 'epistemological turn'. Paradigmatically, resting on macro-, micro- and recently on nano-biology, evolution biology sired fresh scripts of evolution [W. Wieser 60, 61]. Its results tie to hypotheses describing the emergence of language, of the human mind and of culture [e.g. R. Logan 34]. The different but related approaches are as yet but loosely connected. Recent efforts search for a shared foundation, e.g. in a set of transdisciplinary base models [Loeckenhoff 30, 31]. The domain of pure mental constructions, such as ideologies/religions and spiritual phenomena, will be implied.
Scalable Quantum Networks for Distributed Computing and Sensing
2016-04-01
probabilistic measurement, so we developed quantum memories and guided-wave implementations of same, demonstrating controlled delay of a heralded single... Second, fundamental scalability requires a method to synchronize protocols based on quantum measurements, which are inherently probabilistic. To meet... (AFRL-AFOSR-UK-TR-2016-0007, Scalable Quantum Networks for Distributed Computing and Sensing, Ian Walmsley, The University of Oxford, Final Report, 04/01)
Theater in Physics Teacher Education
NASA Astrophysics Data System (ADS)
van den Berg, Ed
2009-09-01
Ten years ago I sat down with the first batch of students in our science/math teacher education program in the Philippines, then third-year students, and asked them what they could do for the opening of the new science building. One of them pulled a stack of papers out of his bag and put it in front of me: a complete script for a science play! This was beyond expectation. The play was practiced several times for groups of high school students visiting the science exhibition that was also organized by the students. During the opening of our building, the play was performed for visiting dignitaries including the Assistant Secretary for Education, Culture, and Sports. It was a great success! The cast got invited to present their production at a number of places and occasions.
GLAD: a system for developing and deploying large-scale bioinformatics grid.
Teo, Yong-Meng; Wang, Xianbing; Ng, Yew-Kwong
2005-03-01
Grid computing is used to solve large-scale bioinformatics problems involving gigabyte-scale databases by distributing the computation across multiple platforms. Until now, in developing bioinformatics grid applications, it has been extremely tedious to design and implement the component algorithms and parallelization techniques for different classes of problems, and to access remotely located sequence database files of varying formats across the grid. In this study, we propose a grid programming toolkit, GLAD (Grid Life sciences Applications Developer), which facilitates the development and deployment of bioinformatics applications on a grid. GLAD has been developed using ALiCE (Adaptive scaLable Internet-based Computing Engine), a Java-based grid middleware that exploits task-based parallelism. Two benchmark bioinformatics applications, distributed sequence comparison and distributed progressive multiple sequence alignment, have been developed using GLAD.
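GLAD itself targets grid execution via ALiCE, but the task-based decomposition it exploits can be illustrated locally. The sketch below fans pairwise sequence comparisons out as independent tasks using Python's multiprocessing; the sequences and identity score are toy placeholders, not GLAD's API.

```python
# Toy stand-in for GLAD-style task parallelism: each pairwise sequence
# comparison is an independent task that can run on any worker.
from itertools import combinations
from multiprocessing import Pool

SEQUENCES = {
    "seqA": "ACGTACGTGA",
    "seqB": "ACGTTCGTGA",
    "seqC": "TTGTACGAGA",
}

def compare(pair):
    """Toy identity score between two equal-length sequences."""
    (na, a), (nb, b) = pair
    matches = sum(x == y for x, y in zip(a, b))
    return na, nb, matches / min(len(a), len(b))

if __name__ == "__main__":
    tasks = list(combinations(SEQUENCES.items(), 2))
    with Pool() as pool:             # workers consume tasks independently
        for na, nb, score in pool.map(compare, tasks):
            print(f"{na} vs {nb}: identity {score:.2f}")
```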
Findley-Van Nostrand, Danielle; Pollenz, Richard S.
2017-01-01
The persistence of undergraduate students in science, technology, engineering, and mathematics (STEM) disciplines is a national issue based on STEM workforce projections. We implemented a weeklong pre-college engagement STEM Academy (SA) program aimed at addressing several areas related to STEM retention. We validated an instrument that was developed based on existing, validated measures and examined several psychosocial constructs related to STEM (science identity, self-efficacy, sense of belonging to the university and to STEM, career expectancies, and intention to leave STEM majors) before and after the program. We also compared students in the SA program with a matched comparison group of first-year students. Results show that SA students significantly increased in science identity and sense of belonging to STEM and to the university, all predictive of increased STEM retention and a primary aim of the program. Relative to the matched comparison group, SA students began their first semester with higher STEM self-efficacy, sense of belonging, and science identity, positive career expectancies, and lower intention to leave STEM. The SA cohort showed 98% first-year retention and 92% STEM major retention. The SA program serves as a model of a scalable, first-level, cocurricular engagement experience to enhance psychosocial factors that impact undergraduate persistence in STEM. PMID:28572178
Principles of effective USA federal fire management plans
Meyer, Marc D.; Roberts, Susan L.; Wills, Robin; Brooks, Matthew L.; Winford, Eric M.
2015-01-01
Federal fire management plans are essential implementation guides for the management of wildland fire on federal lands. Recent changes in federal fire policy implementation guidance and fire science information suggest the need for substantial changes in federal fire management plans of the United States. Federal land management agencies are also undergoing land management planning efforts that will initiate revision of fire management plans across the country. Using the southern Sierra Nevada as a case study, we briefly describe the underlying framework of fire management plans, assess their consistency with guiding principles based on current science information and federal policy guidance, and provide recommendations for the development of future fire management plans. Based on our review, we recommend that future fire management plans be: (1) consistent and compatible, (2) collaborative, (3) clear and comprehensive, (4) spatially and temporally scalable, (5) informed by the best available science, and (6) flexible and adaptive. In addition, we identify and describe several strategic guides or “tools” that can enhance these core principles and benefit future fire management plans in the following areas: planning and prioritization, science integration, climate change adaptation, partnerships, monitoring, education and communication, and applied fire management. These principles and tools are essential to successfully realize fire management goals and objectives in a rapidly changing world.
Sexual scripts among young heterosexually active men and women: continuity and change.
Masters, N Tatiana; Casey, Erin; Wells, Elizabeth A; Morrison, Diane M
2013-01-01
Whereas gendered sexual scripts are hegemonic at the cultural level, research suggests they may be less so at dyadic and individual levels. Understanding "disjunctures" between sexual scripts at different levels holds promise for illuminating mechanisms through which sexual scripts can change. Through interviews with 44 heterosexually active men and women aged 18 to 25, the ways young people grappled with culture-level scripts for sexuality and relationships were delineated. Findings suggest that, although most participants' culture-level gender scripts for behavior in sexual relationships were congruent with descriptions of traditional masculine and feminine sexuality, there was heterogeneity in how or whether these scripts were incorporated into individual relationships. Specifically, three styles of working with sexual scripts were found: conforming, in which personal gender scripts for sexual behavior overlapped with traditional scripts; exception-finding, in which interviewees accepted culture-level gender scripts as a reality, but created exceptions to gender rules for themselves; and transforming, in which participants either attempted to remake culture-level gender scripts or interpreted their own nontraditional styles as equally normative. Changing sexual scripts can potentially contribute to decreased gender inequity in the sexual realm and to increased opportunities for sexual satisfaction, safety, and well-being, particularly for women, but for men as well.
Recent Advances in Solar Sail Propulsion at NASA
NASA Technical Reports Server (NTRS)
Johnson, Les; Young, Roy M.; Montgomery, Edward E., IV
2006-01-01
Supporting NASA's Science Mission Directorate, the In-Space Propulsion Technology Program is developing solar sail propulsion for use in robotic science and exploration of the solar system. Solar sail propulsion will provide longer on-station operation, increased scientific payload mass fraction, and access to previously inaccessible orbits for multiple potential science missions. Two different 20-meter solar sail systems were produced and successfully completed functional vacuum testing last year in NASA Glenn's Space Power Facility at Plum Brook Station, Ohio. The sails were designed and developed by ATK Space Systems and L'Garde, respectively. These sail systems consist of a central structure with four deployable booms that support the sails. These sail designs are robust enough for deployment in a one-atmosphere, one-gravity environment and are scalable to much larger solar sails, perhaps as much as 150 meters on a side. In addition, computational modeling and analytical simulations have been performed to assess the scalability of the technology to the large sizes (>150 meters) required for first-generation solar sail missions. Life and space environmental effects testing of sail and component materials is also nearly complete. This paper will summarize recent technology advancements in solar sails and their successful ambient and vacuum testing.
Scripts or Components? A Comparative Study of Basic Emotion Knowledge in Roma and Non-Roma Children
ERIC Educational Resources Information Center
Giménez-Dasí, Marta; Quintanilla, Laura; Lucas-Molina, Beatriz
2018-01-01
The basic aspects of emotional comprehension seem to be acquired around the age of 5. However, it is not clear whether children's emotion knowledge is based on facial expression, organized in scripts, or determined by sociocultural context. This study aims to shed some light on these subjects by assessing knowledge of basic emotions in 4- and…
The Development of Videos in Culturally Grounded Drug Prevention for Rural Native Hawaiian Youth
ERIC Educational Resources Information Center
Okamoto, Scott K.; Helm, Susana; McClain, Latoya L.; Dinson, Ay-Laina
2012-01-01
The purpose of this study was to adapt and validate narrative scripts to be used for the video components of a culturally grounded drug prevention program for rural Native Hawaiian youth. Scripts to be used to film short video vignettes of drug-related problem situations were developed based on a foundation of pre-prevention research funded by the…
ERIC Educational Resources Information Center
Hanley, Mary Stone
This paper is an analysis of a project that involved African American middle school students in a drama program that was based on their lives and the stories of their community. Students were trained in performance skills, participated in the development of a script, and then performed the script in local schools. The 10 student participants, 5…
ERIC Educational Resources Information Center
Waters, Theodore E. A.; Bosmans, Guy; Vandevivere, Eva; Dujardin, Adinda; Waters, Harriet S.
2015-01-01
Recent work examining the content and organization of attachment representations suggests that 1 way in which we represent the attachment relationship is in the form of a cognitive script. This work has largely focused on early childhood or adolescence/adulthood, leaving a large gap in our understanding of script-like attachment representations in…
ERIC Educational Resources Information Center
Sng, Cheong Ying; Carter, Mark; Stephenson, Jennifer
2017-01-01
Scripts in written or auditory form have been used to teach conversational skills to individuals with autism spectrum disorder (ASD), but with the proliferation of handheld tablet devices the scope to combine these 2 formats has broadened. The aim of this pilot study was to investigate if a script-based intervention, presented on an iPad…
Exploring JavaScript and ROOT technologies to create Web-based ATLAS analysis and monitoring tools
NASA Astrophysics Data System (ADS)
Sánchez Pineda, A.
2015-12-01
We explore the potential of current web applications to create online interfaces that allow the visualization, interaction, and real cut-based physics analysis and monitoring of processes through a web browser. The project consists of the initial development of web-based and cloud computing services to allow students and researchers to perform fast and very useful cut-based analysis in a browser, reading and using real data and official Monte Carlo simulations stored in ATLAS computing facilities. Several tools are considered: ROOT, JavaScript and HTML. Our study case is the current cut-based H → ZZ → llqq analysis of the ATLAS experiment. Preliminary but satisfactory results have been obtained online.
2011-01-01
Background Ontologies are increasingly used to structure and semantically describe entities of domains, such as genes and proteins in life sciences. Their increasing size and the high frequency of updates resulting in a large set of ontology versions necessitates efficient management and analysis of this data. Results We present GOMMA, a generic infrastructure for managing and analyzing life science ontologies and their evolution. GOMMA utilizes a generic repository to uniformly and efficiently manage ontology versions and different kinds of mappings. Furthermore, it provides components for ontology matching, and determining evolutionary ontology changes. These components are used by analysis tools, such as the Ontology Evolution Explorer (OnEX) and the detection of unstable ontology regions. We introduce the component-based infrastructure and show analysis results for selected components and life science applications. GOMMA is available at http://dbs.uni-leipzig.de/GOMMA. Conclusions GOMMA provides a comprehensive and scalable infrastructure to manage large life science ontologies and analyze their evolution. Key functions include a generic storage of ontology versions and mappings, support for ontology matching and determining ontology changes. The supported features for analyzing ontology changes are helpful to assess their impact on ontology-dependent applications such as for term enrichment. GOMMA complements OnEX by providing functionalities to manage various versions of mappings between two ontologies and allows combining different match approaches. PMID:21914205
Efficient Prediction Structures for H.264 Multi View Coding Using Temporal Scalability
NASA Astrophysics Data System (ADS)
Guruvareddiar, Palanivel; Joseph, Biju K.
2014-03-01
Prediction structures with "disposable view components based" hierarchical coding have been proven to be efficient for H.264 multi-view coding. Though these prediction structures along with the QP cascading schemes provide superior compression efficiency when compared to the traditional IBBP coding scheme, the temporal scalability requirements of the bit stream could not be met to the fullest. On the other hand, a fully scalable bit stream, obtained by "temporal identifier based" hierarchical coding, provides a number of advantages including bit rate adaptation and improved error resilience, but lacks compression efficiency when compared to the former scheme. In this paper, it is proposed to combine the two approaches such that a fully scalable bit stream can be realized with minimal reduction in compression efficiency compared to state-of-the-art "disposable view components based" hierarchical coding. Simulation results show that the proposed method enables full temporal scalability with a maximum BD-PSNR reduction of only 0.34 dB. A novel method is also proposed for identifying the temporal identifier of legacy H.264/AVC base-layer packets. Simulation results also show that this enables a scenario in which the enhancement views can be extracted at a lower frame rate (1/2 or 1/4 of the base view), with an average extraction time per view component of only 0.38 ms.
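For readers unfamiliar with "temporal identifier based" hierarchies, the sketch below assigns dyadic temporal layer ids within a GOP and shows how dropping the highest layers thins the frame rate; this is the standard dyadic pattern, not necessarily the paper's exact scheme.

```python
# Dyadic temporal-identifier assignment for a hierarchical GOP: dropping
# all frames above a chosen temporal id halves the frame rate per level.

def temporal_id(poc, gop_size=8):
    """Temporal layer of a frame from its picture order count (POC)."""
    levels = gop_size.bit_length() - 1         # log2(gop_size)
    offset = poc % gop_size
    if offset == 0:
        return 0                               # key pictures: lowest layer
    trailing_zeros = (offset & -offset).bit_length() - 1
    return levels - trailing_zeros

frames = range(9)
print([temporal_id(p) for p in frames])        # [0, 3, 2, 3, 1, 3, 2, 3, 0]

# A half-frame-rate sub-stream keeps only temporal_id <= levels - 1:
half_rate = [p for p in frames if temporal_id(p) <= 2]
print(half_rate)                               # [0, 2, 4, 6, 8]
```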
High performance geospatial and climate data visualization using GeoJS
NASA Astrophysics Data System (ADS)
Chaudhary, A.; Beezley, J. D.
2015-12-01
GeoJS (https://github.com/OpenGeoscience/geojs) is an open-source library developed to support interactive scientific and geospatial visualization of climate and earth science datasets in a web environment. GeoJS has a convenient application programming interface (API) that enables users to harness the fast performance of the WebGL and Canvas 2D APIs with sophisticated Scalable Vector Graphics (SVG) features in a consistent and convenient manner. We started the project in response to the need for an open-source JavaScript library that can combine traditional geographic information systems (GIS) and scientific visualization on the web. Many libraries, some of which are open source, support mapping or other GIS capabilities, but lack the features required to visualize scientific and other geospatial datasets. For instance, such libraries are not capable of rendering climate plots from NetCDF files, and some libraries are limited in regards to geoinformatics (infovis in a geospatial environment). While libraries such as d3.js are extremely powerful for these kinds of plots, in order to integrate them into other GIS libraries, the construction of geoinformatics visualizations must be completed manually and separately, or the code must somehow be mixed in an unintuitive way. We developed GeoJS with the following motivations: • To create an open-source geovisualization and GIS library that combines scientific visualization with GIS and informatics • To develop an extensible library that can combine data from multiple sources and render them using multiple backends • To build a library that works well with existing scientific visualization tools such as VTK. We have successfully deployed GeoJS-based applications for multiple domains across various projects. The ClimatePipes project funded by the Department of Energy, for example, used GeoJS to visualize NetCDF datasets from climate data archives. Other projects built visualizations using GeoJS for interactively exploring data and analysis regarding 1) the human trafficking domain, 2) New York City taxi drop-offs and pick-ups, and 3) the Ebola outbreak. GeoJS supports advanced visualization features such as picking and selecting, as well as clustering. It also supports 2D contour plots, vector plots, heat maps, and geospatial graphs.
Sparks Will Fly: engineering creative script conflicts
NASA Astrophysics Data System (ADS)
Veale, Tony; Valitutti, Alessandro
2017-10-01
Scripts are often dismissed as the stuff of good movies and bad politics. They codify cultural experience so rigidly that they remove our freedom of choice and become the very antithesis of creativity. Yet, mental scripts have an important role to play in our understanding of creative behaviour, since a deliberate departure from an established script can produce results that are simultaneously novel and familiar, especially when others stick to the conventional script. Indeed, creative opportunities often arise at the overlapping boundaries of two scripts that antagonistically compete to mentally organise the same situation. This work explores the computational integration of competing scripts to generate creative friction in short texts that are surprising but meaningful. Our exploration considers conventional macro-scripts - ordered sequences of actions - and the less obvious micro-scripts that operate at even the lowest levels of language. For the former, we generate plots that squeeze two scripts into a single mini-narrative; for the latter, we generate ironic descriptions that use conflicting scripts to highlight the speaker's pragmatic insincerity. We show experimentally that verbal irony requires both kinds of scripts - macro and micro - to work together to reliably generate creative sparks from a speaker's subversive intent.
Lubarsky, Stuart; Dory, Valérie; Audétat, Marie-Claude; Custers, Eugène; Charlin, Bernard
2015-01-01
Script theory proposes an explanation for how information is stored in and retrieved from the human mind to influence individuals' interpretation of events in the world. Applied to medicine, script theory focuses on knowledge organization as the foundation of clinical reasoning during patient encounters. According to script theory, medical knowledge is bundled into networks called 'illness scripts' that allow physicians to integrate new incoming information with existing knowledge, recognize patterns and irregularities in symptom complexes, identify similarities and differences between disease states, and make predictions about how diseases are likely to unfold. These knowledge networks become updated and refined through experience and learning. The implications of script theory on medical education are profound. Since clinician-teachers cannot simply transfer their customized collections of illness scripts into the minds of learners, they must create opportunities to help learners develop and fine-tune their own sets of scripts. In this essay, we provide a basic sketch of script theory, outline the role that illness scripts play in guiding reasoning during clinical encounters, and propose strategies for aligning teaching practices in the classroom and the clinical setting with the basic principles of script theory.
Understanding and Using the Fermi Science Tools
NASA Astrophysics Data System (ADS)
Asercion, Joseph
2018-01-01
The Fermi Science Support Center (FSSC) provides information, documentation, and tools for the analysis of Fermi science data, including both the Large-Area Telescope (LAT) and the Gamma-ray Burst Monitor (GBM). Source and binary versions of the Fermi Science Tools can be downloaded from the FSSC website, and are supported on multiple platforms. An overview document, the Cicerone, provides details of the Fermi mission, the science instruments and their response functions, the science data preparation and analysis process, and interpretation of the results. Analysis Threads and a reference manual available on the FSSC website provide the user with step-by-step instructions for many different types of data analysis: point source analysis (generating maps, spectra, and light curves), pulsar timing analysis, source identification, and the use of Python for scripting customized analysis chains. We present an overview of the structure of the Fermi science tools and documentation, and how to acquire them. We also provide examples of standard analyses, including tips and tricks for improving Fermi science analysis.
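As a flavor of the Python scripting mentioned above, here is a minimal sketch of the start of a LAT analysis chain using the gt_apps wrappers shipped with the Fermi Science Tools; the file names and parameter values are placeholders, and the FSSC Analysis Threads remain the authoritative reference.

```python
# Sketch of scripting a customized LAT analysis chain with the Fermi
# Science Tools' Python bindings. Values below are placeholders.
from gt_apps import filter, maketime  # wrappers around gtselect/gtmktime

# gtselect: cut the photon event file on region, energy, and zenith angle
filter['infile'] = 'L1234_PH00.fits'            # placeholder event file
filter['outfile'] = 'filtered.fits'
filter['ra'], filter['dec'], filter['rad'] = 193.98, -5.82, 15
filter['emin'], filter['emax'] = 100, 300000    # MeV
filter['zmax'] = 90
filter.run()

# gtmktime: keep only good time intervals from the spacecraft file
maketime['scfile'] = 'L1234_SC00.fits'          # placeholder spacecraft file
maketime['evfile'] = 'filtered.fits'
maketime['outfile'] = 'filtered_gti.fits'
maketime['filter'] = '(DATA_QUAL>0)&&(LAT_CONFIG==1)'
maketime['roicut'] = 'no'
maketime.run()
```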
GOES-R GS Product Generation Infrastructure Operations
NASA Astrophysics Data System (ADS)
Blanton, M.; Gundy, J.
2012-12-01
The GOES-R Ground System (GS) will produce a much larger set of products with higher data density than previous GOES systems. This requires considerably greater compute and memory resources to achieve the necessary latency and availability for these products. Over time, new algorithms may be added and existing ones removed or updated, but the GOES-R GS cannot go down during this time. To meet these GOES-R GS processing needs, the Harris Corporation will implement a Product Generation (PG) infrastructure that is scalable, extensible, modular, and reliable. The core of the PG infrastructure is the Service-Based Architecture (SBA), which includes the Distributed Data Fabric (DDF). The SBA is the middleware that encapsulates and manages the science algorithms that generate products. The SBA is divided into three parts: the Executive, which manages and configures the algorithm as a service; the Dispatcher, which provides data to the algorithm; and the Strategy, which determines when the algorithm can execute with the available data. The SBA is a distributed architecture, with services connected to each other over a compute grid, and is highly scalable. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Algorithms require product data from other algorithms, so scalable and reliable messaging is necessary. The SBA uses the DDF to provide this data communication layer between algorithms. The DDF provides an abstract interface over a distributed and persistent multi-layered storage system (memory-based caching above disk-based storage) and an event system that lets algorithm services know when data are available and retrieve the data they need when they need it. Together, the SBA and the DDF provide a flexible, high-performance architecture that can meet the needs of product processing now and as they grow in the future.
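The Executive/Dispatcher/Strategy split described above can be sketched abstractly. In the toy code below, all class and method names are hypothetical illustrations of the pattern, not Harris Corporation's interfaces.

```python
# Purely illustrative sketch of the SBA roles: an Executive hosts an
# algorithm as a service, a Strategy decides when enough inputs have
# arrived, and a Dispatcher-style caller feeds data in.

class Strategy:
    """Fires when every required input product is available."""
    def __init__(self, required):
        self.required = set(required)
    def ready(self, available):
        return self.required <= set(available)

class Executive:
    """Wraps a science algorithm; runs it when the strategy allows."""
    def __init__(self, name, algorithm, strategy):
        self.name, self.algorithm, self.strategy = name, algorithm, strategy
        self.inputs = {}
    def offer(self, product, data):           # called by the Dispatcher
        self.inputs[product] = data
        if self.strategy.ready(self.inputs):
            return self.algorithm(self.inputs)

# Example: a cloud-mask algorithm needing two upstream products
cloud_mask = Executive(
    "cloud_mask",
    lambda inp: f"mask from {sorted(inp)}",
    Strategy(required=["band02", "band14"]),
)
assert cloud_mask.offer("band02", "...") is None   # still waiting
print(cloud_mask.offer("band14", "..."))           # runs the algorithm
```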
Enabling Remote and Automated Operations at The Red Buttes Observatory
NASA Astrophysics Data System (ADS)
Ellis, Tyler G.; Jang-Condell, Hannah; Kasper, David; Yeigh, Rex R.
2016-01-01
The Red Buttes Observatory (RBO) is a 60 centimeter Cassegrain telescope located ten miles south of Laramie, Wyoming. The size and proximity of the telescope make the site ideal for remote and automated observations. This task required developing confidence in the control systems for the dome, telescope, and camera. Python and WinSCP script routines were created for the management of science images and weather data. These scripts control the observatory via the ASCOM standard libraries and allow autonomous operation after initiation. The automation tasks were completed primarily to rejuvenate an aging and underutilized observatory, with the goal of contributing to an international exoplanet-hunting team and to efforts in potentially hazardous asteroid detection. RBO is owned and operated solely by the University of Wyoming. The updates and proprietor status have encouraged the development of an undergraduate astronomical methods course including hands-on experience with a research telescope, a rarity in bachelor programs for astrophysics.
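As an illustration of ASCOM-style scripted control, the sketch below drives a telescope through the standard ITelescope interface; it assumes Windows with the ASCOM Platform and pywin32 installed, and uses the ASCOM telescope simulator's ProgID rather than RBO's actual driver or scripts.

```python
# Hedged sketch of scripted telescope control through ASCOM, in the
# spirit of the RBO routines; the ProgID below is the ASCOM simulator.
import win32com.client

telescope = win32com.client.Dispatch("ASCOM.Simulator.Telescope")
telescope.Connected = True          # standard ITelescope property
telescope.Tracking = True

# Slew to hypothetical coordinates (RA in hours, Dec in degrees)
telescope.SlewToCoordinates(5.5, 41.2)
print("at target:", telescope.RightAscension, telescope.Declination)

telescope.Park()                    # end of night
telescope.Connected = False
```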
NASA Astrophysics Data System (ADS)
Jing, Changfeng; Liang, Song; Ruan, Yong; Huang, Jie
2008-10-01
During the urbanization process, when facing the complex requirements of city development, ever-growing urban data, the rapid development of planning business and increasing planning complexity, a scalable, extensible urban planning management information system is urgently needed. PM2006 is such a system, designed to deal with these problems. In response to the status and problems in urban planning, the scalability and extensibility of PM2006 are introduced, which can be seen in its business-oriented workflow extensibility, the scalability of its DLL-based architecture, its flexibility with respect to GIS platforms and databases, the scalability of its data updating and maintenance, and so on. It is verified that the PM2006 system has good extensibility and scalability, can meet the requirements of all levels of administrative divisions, and can adapt to ever-growing changes in urban planning business. At the end of this paper, the application of PM2006 in the Urban Planning Bureau of Suzhou city is described.
Scalable Molecular Dynamics with NAMD
Phillips, James C.; Braun, Rosemary; Wang, Wei; Gumbart, James; Tajkhorshid, Emad; Villa, Elizabeth; Chipot, Christophe; Skeel, Robert D.; Kalé, Laxmikant; Schulten, Klaus
2008-01-01
NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD scales to hundreds of processors on high-end parallel platforms, as well as tens of processors on low-cost commodity clusters, and also runs on individual desktop and laptop computers. NAMD works with AMBER and CHARMM potential functions, parameters, and file formats. This paper, directed to novices as well as experts, first introduces concepts and methods used in the NAMD program, describing the classical molecular dynamics force field, equations of motion, and integration methods along with the efficient electrostatics evaluation algorithms employed and temperature and pressure controls used. Features for steering the simulation across barriers and for calculating both alchemical and conformational free energy differences are presented. The motivations for and a roadmap to the internal design of NAMD, implemented in C++ and based on Charm++ parallel objects, are outlined. The factors affecting the serial and parallel performance of a simulation are discussed. Next, typical NAMD use is illustrated with representative applications to a small, a medium, and a large biomolecular system, highlighting particular features of NAMD, e.g., the Tcl scripting language. Finally, the paper provides a list of the key features of NAMD and discusses the benefits of combining NAMD with the molecular graphics/sequence analysis software VMD and the grid computing/collaboratory software BioCoRE. NAMD is distributed free of charge with source code at www.ks.uiuc.edu. PMID:16222654
NASA Astrophysics Data System (ADS)
Hosier, Julie Winchester
Integration of subjects is something elementary teachers must do to ensure required objectives are covered. Science-based Reader's Theatre is one way to weave reading into science. This study examined the roles of frequency, attitudes, and Multiple Intelligence modalities surrounding Electricity Content-Based Reader's Theatre. The study used a quasi-experimental, repeated-measures ANOVA design with time as a factor. A convenience sample of two fifth-grade classrooms participated in the study for eighteen weeks. Five Electricity Achievement Tests were given throughout the study to assess students' growth. A Student Reader's Theatre Attitudinal Survey revealed students' attitudes before and after the Electricity Content-Based Reader's Theatre treatment. The Multiple Intelligence Inventory for Kids (Faris, 2007) examined whether Multiple Intelligence modality played a role in achievement on Electricity Test 4, the post-treatment test. Analysis using repeated-measures ANOVA and an independent t-test found that students in the experimental group, which practiced its student-created Electricity Content-Based Reader's Theatre skits ten times versus two times for the control group, did significantly better on Electricity Achievement Test 4, t(76) = 3.018, p = 0.003. Dependent t-tests did not find statistically significant differences between students' attitudes about Electricity Content-Based Reader's Theatre before and after treatment. A Kruskal-Wallis test found no statistically significant difference between the mean ranks of the various Multiple Intelligence modality scores (χ² = 5.57, df = 2, p = .062). Qualitative data do, however, indicate students had strong positive feelings about Electricity Content-Based Reader's Theatre after treatment. Students indicated it to be motivating, confidence-building, and a fun way to learn about science; however, they disliked writing their own scripts. Examining the frequency, attitudes, and Multiple Intelligence modalities led to the conclusion that frequency had the greatest impact on the success of Electricity Content-Based Reader's Theatre. The participating teachers, students, and researcher found integrating science and reading through Electricity Content-Based Reader's Theatre beneficial.
PyMOOSE: Interoperable Scripting in Python for MOOSE
Ray, Subhasis; Bhalla, Upinder S.
2008-01-01
Python is emerging as a common scripting language for simulators. This opens up many possibilities for interoperability in the form of analysis, interfaces, and communications between simulators. We report the integration of Python scripting with the Multi-scale Object Oriented Simulation Environment (MOOSE). MOOSE is a general-purpose simulation system for compartmental neuronal models and for models of signaling pathways based on chemical kinetics. We show how the Python-scripting version of MOOSE, PyMOOSE, combines the power of a compiled simulator with the versatility and ease of use of Python. We illustrate this by using Python numerical libraries to analyze MOOSE output online, and by developing a GUI in Python/Qt for a MOOSE simulation. Finally, we build and run a composite neuronal/signaling model that uses both the NEURON and MOOSE numerical engines, and Python as a bridge between the two. Thus PyMOOSE has a high degree of interoperability with analysis routines, with graphical toolkits, and with other simulators. PMID:19129924
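A small example of the "analyze MOOSE output with Python numerical libraries" idea: load a table of membrane potential written by a simulation and reduce it with NumPy. The file name and column layout are hypothetical, and no PyMOOSE API calls are reproduced here.

```python
# Sketch of post-processing a simulator's table dump with NumPy.
import numpy as np

data = np.loadtxt("soma_Vm.dat")     # hypothetical dump: time, Vm columns
t, vm = data[:, 0], data[:, 1]

peak = vm.max()
# Count upward crossings of a -20 mV threshold as spikes
spikes = np.sum((vm[1:] >= -0.02) & (vm[:-1] < -0.02))
print(f"peak Vm {peak * 1e3:.1f} mV, {spikes} spikes in {t[-1]:.3f} s")
```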
Writers Identification Based on Multiple Windows Features Mining
NASA Astrophysics Data System (ADS)
Fadhil, Murad Saadi; Alkawaz, Mohammed Hazim; Rehman, Amjad; Saba, Tanzila
2016-03-01
Nowadays, writer identification is in high demand for identifying the original writer of a script with high accuracy. One of the main challenges in writer identification is how to extract the discriminative features of different authors' scripts for precise classification. In this paper, an adaptive division method for offline Latin script has been implemented using several variant window sizes. From fragments of binarized text, a set of features is extracted and classified into clusters in the form of groups or classes. Finally, the proposed approach has been tested with various parameters in terms of text division and window sizes. It is observed that selection of the right window size yields a well-positioned window division. The proposed approach is tested on the IAM standard dataset (IAM, Institut für Informatik und angewandte Mathematik, University of Bern, Bern, Switzerland), which is a constraint-free script database. Finally, the achieved results are compared with several techniques reported in the literature.
NASA Astrophysics Data System (ADS)
Kesiman, Made Windu Antara; Valy, Dona; Burie, Jean-Christophe; Paulus, Erick; Sunarya, I. Made Gede; Hadi, Setiawan; Sok, Kim Heng; Ogier, Jean-Marc
2017-01-01
Due to their specific characteristics, palm leaf manuscripts provide new challenges for text line segmentation tasks in document analysis. We investigated the performance of six text line segmentation methods by conducting comparative experimental studies on a collection of palm leaf manuscript images. The image corpus used in this study comes from sample images of palm leaf manuscripts in three different Southeast Asian scripts: Balinese script from Bali and Sundanese script from West Java, both from Indonesia, and Khmer script from Cambodia. For the experiments, four text line segmentation methods that work on binary images were tested: the adaptive partial projection line segmentation approach, the A* path planning approach, the shredding method, and our proposed energy function for the shredding method. Two other methods that can be applied directly to grayscale images were also investigated: the adaptive local connectivity map method and the seam carving-based method. The evaluation criteria and tool provided by the ICDAR2013 Handwriting Segmentation Contest were used in this experiment.
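The projection-based family of methods tested above can be illustrated with a minimal, non-adaptive horizontal projection profile segmenter; the published methods add the adaptivity, path planning, and energy functions that this toy version omits.

```python
# Minimal projection-profile text line segmentation.
# Input: a binarized page, ink = 1, background = 0.
import numpy as np

def segment_lines(binary_page, min_ink=1):
    """Return (start_row, end_row) spans of candidate text lines."""
    profile = binary_page.sum(axis=1)          # ink pixels per row
    in_line, spans, start = False, [], 0
    for row, ink in enumerate(profile):
        if ink >= min_ink and not in_line:
            in_line, start = True, row         # line begins
        elif ink < min_ink and in_line:
            in_line = False
            spans.append((start, row))         # line ends
    if in_line:
        spans.append((start, len(profile)))
    return spans

page = np.zeros((12, 20), dtype=int)
page[2:4, 3:18] = 1                            # two synthetic text lines
page[7:10, 2:19] = 1
print(segment_lines(page))                     # [(2, 4), (7, 10)]
```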
Magnetoresistive magnetometer for space science applications
NASA Astrophysics Data System (ADS)
Brown, P.; Beek, T.; Carr, C.; O'Brien, H.; Cupido, E.; Oddy, T.; Horbury, T. S.
2012-02-01
Measurement of the in situ dc magnetic field on space science missions is most commonly achieved using instruments based on fluxgate sensors. Fluxgates are robust, reliable and have considerable space heritage; however, their mass and volume are not optimized for deployment on nano or picosats. We describe a new magnetometer design demonstrating science measurement capability featuring significantly lower mass, volume and to a lesser extent power than a typical fluxgate. The instrument employs a sensor based on anisotropic magnetoresistance (AMR) achieving a noise floor of less than 50 pT Hz^-1/2 above 1 Hz on a 5 V bridge bias. The instrument range is scalable up to ±50 000 nT and the three-axis sensor mass and volume are less than 10 g and 10 cm3, respectively. The ability to switch the polarization of the sensor's easy axis and apply magnetic feedback is used to build a driven first harmonic closed loop system featuring improved linearity, gain stability and compensation of the sensor offset. A number of potential geospace applications based on the initial instrument results are discussed, including attitude control systems and scientific measurement of waves and structures in the terrestrial magnetosphere. A flight version of the AMR magnetometer will fly on the TRIO-CINEMA mission due to be launched in 2012.
Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R; Bock, Davi D; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R Clay; Smith, Stephen J; Szalay, Alexander S; Vogelstein, Joshua T; Vogelstein, R Jacob
2013-01-01
We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes (neural connectivity maps of the brain) using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems (reads to parallel disk arrays and writes to solid-state storage) to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization.
Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R.; Bock, Davi D.; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C.; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R. Clay; Smith, Stephen J.; Szalay, Alexander S.; Vogelstein, Joshua T.; Vogelstein, R. Jacob
2013-01-01
We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes (neural connectivity maps of the brain) using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems (reads to parallel disk arrays and writes to solid-state storage) to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization. PMID:24401992
Chen, Yiping; Fu, Shimin; Iversen, Susan D; Smith, Steve M; Matthews, Paul M
2002-10-01
Chinese offers a unique tool for testing the effects of word form on language processing during reading. The processes of letter-mediated grapheme-to-phoneme translation and phonemic assembly (assembled phonology) critical for reading and spelling in any alphabetic orthography are largely absent when reading nonalphabetic Chinese characters. In contrast, script-to-sound translation based on the script as a whole (addressed phonology) is absent when reading the Chinese alphabetic sound symbols known as pinyin, for which the script-to-sound translation is based exclusively on assembled phonology. The present study aims to contrast patterns of brain activity associated with the different cognitive mechanisms needed for reading the two scripts. fMRI was used with a block design involving a phonological and lexical task in which subjects were asked to decide whether visually presented, paired Chinese characters or pinyin "sounded like" a word. Results demonstrate that reading Chinese characters and pinyin activate a common brain network including the inferior frontal, middle, and inferior temporal gyri, the inferior and superior parietal lobules, and the extrastriate areas. However, some regions show relatively greater activation for either pinyin or Chinese reading. Reading pinyin led to a greater activation in the inferior parietal cortex bilaterally, the precuneus, and the anterior middle temporal gyrus. In contrast, activation in the left fusiform gyrus, the bilateral cuneus, the posterior middle temporal, the right inferior frontal gyrus, and the bilateral superior frontal gyrus were greater for nonalphabetic Chinese reading. We conclude that both alphabetic and nonalphabetic scripts activate a common brain network for reading. Overall, there are no differences in terms of hemispheric specialization between alphabetic and nonalphabetic scripts. However, differences in language surface form appear to determine relative activation in other regions. Some of these regions (e.g., the inferior parietal cortex for pinyin and fusiform gyrus for Chinese characters) are candidate regions for specialized processes associated with reading via predominantly assembled (pinyin) or addressed (Chinese character) procedures.
Not letting the perfect be the enemy of the good: steps toward science-ready ALMA images
NASA Astrophysics Data System (ADS)
Kepley, Amanda A.; Donovan Meyer, Jennifer; Brogan, Crystal; Moullet, Arielle; Hibbard, John; Indebetouw, Remy; Mason, Brian
2016-07-01
Historically, radio observatories have placed the onus of calibrating and imaging data on the observer, thus restricting their user base to those already initiated into the mysteries of radio data or those willing to develop these skills. To expand its user base, the Atacama Large Millimeter/submillimeter Array (ALMA) has a high-level directive to calibrate users' data and, ultimately, to deliver scientifically usable images or cubes to principal investigators (PIs). Although an ALMA calibration pipeline is in place, all delivered images continue to be produced for the PI by hand. In this talk, I will describe ongoing efforts at the North American ALMA Science Center to produce more uniform imaging products that more closely meet the PI science goals and provide better archival value. As a first step, the NAASC imaging group produced a simple imaging template designed to help scientific staff produce uniform imaging products. This script allowed the NAASC to maximize the productivity of data analysts with relatively little guidance by the scientific staff, by providing a step-by-step guide to best practices for ALMA imaging. Finally, I will describe the role of the manually produced images in verifying the imaging pipeline and the ongoing development of that pipeline. The development of the imaging template, while technically simple, shows how small steps toward unifying processes and sharing knowledge can lead to large gains for science data products.
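To give a sense of what a uniform imaging template looks like in practice, here is a hedged sketch using CASA's tclean task with placeholder values; the NAASC's actual template script is not reproduced here, and the call assumes it is run inside a CASA session where tclean is available as a built-in task.

```python
# Sketch of a spectral-line imaging step in CASA; all values are
# placeholders to be tuned per project.
tclean(vis='calibrated_target.ms',      # PI's calibrated measurement set
       imagename='target_cube',
       field='NGC0000',                 # placeholder source name
       specmode='cube',                 # spectral line cube
       deconvolver='hogbom',
       imsize=[512, 512],
       cell='0.1arcsec',
       weighting='briggs',
       robust=0.5,
       niter=10000,
       threshold='3mJy',                # stop near the expected noise level
       interactive=False)
```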
Leveraging Citizen Science and Information Technology for Population Physical Activity Promotion.
King, Abby C; Winter, Sandra J; Sheats, Jylana L; Rosas, Lisa G; Buman, Matthew P; Salvo, Deborah; Rodriguez, Nicole M; Seguin, Rebecca A; Moran, Mika; Garber, Randi; Broderick, Bonnie; Zieff, Susan G; Sarmiento, Olga Lucia; Gonzalez, Silvia A; Banchoff, Ann; Dommarco, Juan Rivera
2016-05-15
While technology is a major driver of many of society's comforts, conveniences, and advances, it has been responsible, in a significant way, for engineering regular physical activity and a number of other positive health behaviors out of people's daily lives. A key question concerns how to harness information and communication technologies (ICT) to bring about positive changes in the health promotion field. One such approach involves community-engaged "citizen science," in which local residents leverage the potential of ICT to foster data-driven consensus-building and mobilization efforts that advance physical activity at the individual, social, built environment, and policy levels. The history of citizen science in the research arena is briefly described and an evidence-based method that embeds citizen science in a multi-level, multi-sectoral community-based participatory research framework for physical activity promotion is presented. Several examples of this citizen science-driven community engagement framework for promoting active lifestyles, called "Our Voice", are discussed, including pilot projects from diverse communities in the U.S. as well as internationally. The opportunities and challenges involved in leveraging citizen science activities as part of a broader population approach to promoting regular physical activity are explored. The strategic engagement of citizen scientists from socio-demographically diverse communities across the globe as both assessment as well as change agents provides a promising, potentially low-cost and scalable strategy for creating more active, healthful, and equitable neighborhoods and communities worldwide.
Leveraging Citizen Science and Information Technology for Population Physical Activity Promotion
King, Abby C.; Winter, Sandra J.; Sheats, Jylana L.; Rosas, Lisa G.; Buman, Matthew P.; Salvo, Deborah; Rodriguez, Nicole M.; Seguin, Rebecca A.; Moran, Mika; Garber, Randi; Broderick, Bonnie; Zieff, Susan G.; Sarmiento, Olga Lucia; Gonzalez, Silvia A.; Banchoff, Ann; Dommarco, Juan Rivera
2016-01-01
PURPOSE While technology is a major driver of many of society’s comforts, conveniences, and advances, it has been responsible, in a significant way, for engineering regular physical activity and a number of other positive health behaviors out of people’s daily lives. A key question concerns how to harness information and communication technologies (ICT) to bring about positive changes in the health promotion field. One such approach involves community-engaged “citizen science,” in which local residents leverage the potential of ICT to foster data-driven consensus-building and mobilization efforts that advance physical activity at the individual, social, built environment, and policy levels. METHOD The history of citizen science in the research arena is briefly described and an evidence-based method that embeds citizen science in a multi-level, multi-sectoral community-based participatory research framework for physical activity promotion is presented. RESULTS Several examples of this citizen science-driven community engagement framework for promoting active lifestyles, called “Our Voice”, are discussed, including pilot projects from diverse communities in the U.S. as well as internationally. CONCLUSIONS The opportunities and challenges involved in leveraging citizen science activities as part of a broader population approach to promoting regular physical activity are explored. The strategic engagement of citizen scientists from socio-demographically diverse communities across the globe as both assessment as well as change agents provides a promising, potentially low-cost and scalable strategy for creating more active, healthful, and equitable neighborhoods and communities worldwide. PMID:27525309
NASA Astrophysics Data System (ADS)
Schofield, O.; McDonnell, J. D.; Kohut, J. T.; Glenn, S. M.
2016-02-01
Many regions of the ocean are exhibiting significant change, suggesting the need to develop effective, focused education programs for a range of constituencies (K-12, undergraduate, and the general public). We have been developing a range of educational tools in a multi-pronged strategy built around streaming data delivered through customized web services, focused undergraduate tiger teams, teacher training, and video/documentary film-making. Core to these efforts is engaging the undergraduate community by leveraging the data management tools of the U.S. Integrated Ocean Observing System (IOOS) and the education tools of the U.S. National Science Foundation's (NSF) Ocean Observatories Initiative (OOI). These intuitive, interactive, browser-based tools reduce the barriers to student participation in sea exploration and discovery, allowing them to become "field-going" oceanographers while sitting at their desks. These undergraduate efforts complement efforts to improve educator and student engagement in ocean sciences through exposure to scientists and data. Through professional development and the creation of data tools, we will reduce the logistical costs of bringing ocean science to students in grades 6-16. We are providing opportunities to: 1) build the capacity of scientists to communicate and engage with diverse audiences; 2) create scalable, in-person and virtual opportunities for educators and students to engage with scientists and their research through data visualizations, data activities, educator workshops, webinars, and student research symposia. We are using a blended learning approach to promote partnerships and cross-disciplinary sharing. Finally, we use data and video products to build public support through the development of science documentaries about the science and the people who conduct it. For example, Antarctic Edge is a feature-length, award-winning documentary about climate change that has garnered interest in movie theatres and on streaming services (Netflix, iTunes). These combined efforts provide a range of products that leverage off each other and offer a large suite of tools to bring the ocean to as many people as possible.
The role of scripts in personal consistency and individual differences.
Demorest, Amy; Popovska, Ana; Dabova, Milena
2012-02-01
This article examines the role of scripts in personal consistency and individual differences. Scripts are personally distinctive rules for understanding emotionally significant experiences. In 2 studies, scripts were identified from autobiographical memories of college students (Ns = 47 and 50) using standard categories of events and emotions to derive event-emotion compounds (e.g., Affiliation-Joy). In Study 1, scripts predicted responses to a reaction-time task 1 month later, such that participants responded more quickly to the event from their script when asked to indicate what emotion would be evoked by a series of events. In Study 2, individual differences in 5 common scripts were found to be systematically related to individual differences in traits of the Five-Factor Model. Distinct patterns of correlation revealed the importance of studying events and emotions in compound units, that is, in script form (e.g., Agreeableness was correlated with the script Affiliation-Joy but not with the scripts Fun-Joy or Affiliation-Love). © 2012 The Authors. Journal of Personality © 2012, Wiley Periodicals, Inc.
Yanaoka, Kaichi
2016-02-01
This research examined the effects of planning and executive functions on young children's (ages 3 to 5 years) strategies in changing scripts. Young children (N = 77) performed a script task (doll task), three executive function tasks (DCCS, red/blue task, and nine box task), a planning task, and a receptive vocabulary task. In the doll task, young children first enacted a "changing clothes" script, and then faced a situation in which some elements of the script were inappropriate. They needed to enact a script either by substituting items from the other script for the inappropriate items or by changing to the other script in advance. The results showed that shifting, a component of executive function, had a positive influence on whether young children could substitute for inappropriate items. In addition, planning was an important factor that helped children to change to the other script in advance. These findings suggest that shifting and planning play different roles in using the two strategies appropriately when young children enact scripts in unexpected situations.
Adaptive format conversion for scalable video coding
NASA Astrophysics Data System (ADS)
Wan, Wade K.; Lim, Jae S.
2001-12-01
The enhancement layer in many scalable coding algorithms is composed of residual coding information. There is another type of information that can be transmitted instead of (or in addition to) residual coding. Since the encoder has access to the original sequence, it can utilize adaptive format conversion (AFC) to generate the enhancement layer and transmit the different format conversion methods as enhancement data. This paper investigates the use of adaptive format conversion information as enhancement data in scalable video coding. Experimental results are shown for a wide range of base layer qualities and enhancement bitrates to determine when AFC can improve video scalability. Since the parameters needed for AFC are small compared to residual coding, AFC can provide video scalability at low enhancement layer bitrates that are not possible with residual coding. In addition, AFC can also be used in addition to residual coding to improve video scalability at higher enhancement layer bitrates. Adaptive format conversion has not been studied in detail, but many scalable applications may benefit from it. An example of an application that AFC is well-suited for is the migration path for digital television where AFC can provide immediate video scalability as well as assist future migrations.
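A toy illustration of the AFC idea described above: per coding unit (here, one decimated frame block), the encoder tries each candidate format conversion method against the original and transmits only the index of the best one as enhancement data. The two conversion methods below are simple stand-ins, not the paper's actual set.

```python
# Adaptive format conversion, toy version: pick the upconversion method
# that best matches the original and signal only its index.
import numpy as np

def upconvert(block, method):
    """Two hypothetical ways to fill the missing rows of a block."""
    out = block.copy().astype(float)
    if method == 0:                       # repeat the line above
        out[1::2] = out[0::2]
    else:                                 # average neighbouring lines
        out[1:-1:2] = 0.5 * (out[0:-2:2] + out[2::2])
        out[-1] = out[-2]
    return out

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(8, 8))
decimated = original.copy()
decimated[1::2] = 0                       # base layer lost the odd rows

choices = []
for method in (0, 1):
    err = np.sum((upconvert(decimated, method) - original) ** 2)
    choices.append((err, method))
print("chosen method:", min(choices)[1])  # index sent as enhancement data
```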
Getting More Scientists to Revamp Teaching
ERIC Educational Resources Information Center
Vicens, Quentin; Caspersen, Michael E.
2014-01-01
Despite extensive evidence in favor of student-centered teaching practices over traditional lecturing, most science faculty do not embrace these modes of instruction. Professional development efforts are plentiful, but they can lack in impact or scalability, or both. The factors that determine professional development quality within a research…
Mitchell, Wayne; Breen, Colin; Entzeroth, Michael
2008-03-01
The Experimental Therapeutics Center (ETC) has been established at Biopolis to advance translational research by bridging the gap between discovery science and commercialization. We describe the Electronic Research Habitat at ETC, a comprehensive hardware and software infrastructure designed to effectively manage terabyte data flows and storage, increase back office efficiency, enhance the scientific work experience, and satisfy rigorous regulatory and legal requirements. Our habitat design is secure, scalable and robust, and it strives to embody the core values of the knowledge-based workplace, thus contributing to the strategic goal of building a "knowledge economy" in the context of Singapore's on-going biotechnology initiative.
ERIC Educational Resources Information Center
Department of Education, Washington, DC.
This booklet, written in Spanish, is intended to be used with a set of slides as part of a presentation to students on "How To Apply for Federal Student Aid" ("Como Solicitar la Asistencia Economica Federal para Estudiantes"). The first part of the book is a script based on the slides. After the script is a guide to hosting a financial aid…
NASA Astrophysics Data System (ADS)
Memon, Shahbaz; Vallot, Dorothée; Zwinger, Thomas; Neukirchen, Helmut
2017-04-01
Scientific communities generate complex simulations through the orchestration of semi-structured analysis pipelines, which involves executing large workflows on multiple, distributed and heterogeneous computing and data resources. Modeling the ice dynamics of glaciers requires workflows consisting of many non-trivial, computationally expensive processing tasks which are coupled to each other. From this domain, we present an e-Science use case: a workflow that requires the execution of a continuum ice flow model and a discrete-element-based calving model in an iterative manner. Apart from the execution, this workflow also contains data format conversion tasks that support the ice flow and calving runs by means of transitions through sequential, nested and iterative steps. Thus, the management and monitoring of all the processing tasks, including data management and transfer within the workflow model, becomes complex. From the implementation perspective, this workflow model was initially developed as a set of scripts using static data input and output references. In the course of application usage, as more scripts or modifications were introduced to meet user requirements, debugging and validation of results became more cumbersome. To address these problems, we identified the need for a high-level scientific workflow tool through which all the above-mentioned processes can be achieved in an efficient and usable manner. We decided to make use of the e-Science middleware UNICORE (Uniform Interface to Computing Resources), which allows seamless and automated access to different heterogeneous and distributed resources and is supported by a scientific workflow engine. Based on this, we developed a high-level scientific workflow model for the coupling of massively parallel High-Performance Computing (HPC) jobs: a continuum ice sheet model (Elmer/Ice) and a discrete element calving and crevassing model (HiDEM). In our talk, we present how the use of a high-level scientific workflow middleware makes the reproducibility of results more convenient and also provides a reusable and portable workflow template that can be deployed across different computing infrastructures. Acknowledgements: This work was kindly supported by NordForsk as part of the Nordic Center of Excellence (NCoE) eSTICC (eScience Tools for Investigating Climate Change at High Northern Latitudes) and the Top-level Research Initiative NCoE SVALI (Stability and Variation of Arctic Land Ice).
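Conceptually, the coupled pipeline that the UNICORE workflow automates can be written as the loop below; the solver invocations, conversion helpers, and file names are hypothetical placeholders standing in for the real Elmer/Ice and HiDEM runs and the middleware-managed data transfers.

```python
# Conceptual sketch of the iterative ice-flow/calving coupling.
import subprocess

N_CYCLES = 3  # hypothetical number of coupling iterations

def run(cmd):
    print(">>", " ".join(cmd))
    subprocess.run(cmd, check=True)

for cycle in range(N_CYCLES):
    # 1. continuum ice flow step (Elmer/Ice), producing a new geometry
    run(["ElmerSolver", f"iceflow_{cycle}.sif"])
    # 2. convert continuum output to the discrete-element input format
    run(["python", "elmer_to_hidem.py", f"geometry_{cycle}.dat"])
    # 3. discrete element calving step (HiDEM)
    run(["HiDEM", f"calving_{cycle}.inp"])
    # 4. feed the calved front back into the next ice flow step
    run(["python", "hidem_to_elmer.py", f"front_{cycle}.dat"])
```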
2010-12-01
DRDC Toronto No. CR 2010-055. This report describes the development of an E-Prime based computer simulation of an interactive Human Rights Violation (HRV) negotiation script at Canadian Forces Base (CFB) Kingston. The computer simulation developed in this project is intended to be used for future research and as a possible training platform.
Awan, Omer Abdulrehman; van Wagenberg, Frans; Daly, Mark; Safdar, Nabile; Nagy, Paul
2011-04-01
Many radiology information systems (RIS) cannot accept a final report from a dictation reporting system before the exam has been completed in the RIS by a technologist. A radiologist can still render a report in a reporting system once images are available, but the RIS and ancillary systems may not get the results because of the study's uncompleted status. This delay in completing the study caused an alarming number of delayed reports and went undetected by conventional RIS reporting techniques. We developed a Web-based reporting tool to monitor uncompleted exams and automatically page section supervisors when a report was being delayed by its incomplete status in the RIS. Institutional Review Board exemption was obtained. At four imaging centers, a Python script was developed to poll the dictation system every 10 min for exams in five different modalities that were signed by the radiologist but could not be sent to the RIS. This script logged the exams into an existing Web-based tracking tool using PHP and a MySQL database. The script also text-paged the modality supervisor. The script logged the time at which the report was finally sent, and statistics were aggregated onto a separate Web-based reporting tool. Over a 1-year period, the average number of uncompleted exams per month and time to problem resolution decreased at every imaging center and in almost every imaging modality. Automated feedback provides a vital link in improving technologist performance and patient care without assigning a human resource to manage report queues.
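The polling logic described here reduces to a small daemon loop. A minimal sketch follows; the three helper functions are hypothetical stand-ins, since the paper does not publish its source, and only the 10-minute interval comes from the abstract:

```python
import time

POLL_INTERVAL = 600  # seconds: poll every 10 minutes, as in the paper

def fetch_stuck_reports():
    # Hypothetical stand-in: would query the dictation system for signed
    # reports whose exams are still uncompleted in the RIS.
    return []

def log_to_tracking_tool(report):
    # Hypothetical stand-in: would INSERT the exam into the PHP/MySQL
    # Web-based tracking tool.
    print("logged:", report)

def page_supervisor(report):
    # Hypothetical stand-in: would text-page the modality supervisor.
    print("paged supervisor about:", report)

while True:
    for report in fetch_stuck_reports():
        log_to_tracking_tool(report)
        page_supervisor(report)
    time.sleep(POLL_INTERVAL)
```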
NASA Astrophysics Data System (ADS)
Crichton, Daniel; Mahabal, Ashish; Anton, Kristen; Cinquini, Luca; Colbert, Maureen; Djorgovski, S. George; Kincaid, Heather; Kelly, Sean; Liu, David
2017-05-01
We describe here the Early Detection Research Network (EDRN) for Cancer's knowledge environment. It is an open source platform built by NASA's Jet Propulsion Laboratory with contributions from the California Institute of Technology and the Geisel School of Medicine at Dartmouth. It uses tools like Apache OODT, Plone, and Solr, and borrows heavily from the ontological infrastructure of JPL's Planetary Data System. It has accumulated data on hundreds of thousands of biospecimens and serves over 1300 registered users across the National Cancer Institute (NCI). The scalable computing infrastructure is built so that we can reach out to other agencies, provide homogeneous access, and provide seamless analytics support and bioinformatics tools through community engagement.
The Mechanics of CSCL Macro Scripts
ERIC Educational Resources Information Center
Dillenbourg, Pierre; Hong, Fabrice
2008-01-01
Macro scripts structure collaborative learning and foster the emergence of knowledge-productive interactions such as argumentation, explanations and mutual regulation. We propose a pedagogical model for the designing of scripts and illustrate this model using three scripts. In brief, a script disturbs the natural convergence of a team and in doing…
Script Reforms--Are They Necessary?
ERIC Educational Resources Information Center
James, Gregory
Script reform, the modification of an existing writing system, is often confused with script replacement, the substitution of one writing system by another. Turkish underwent the replacement of Arabic script by an adaptation of Roman script under Kemal Atatürk, but a similar replacement in Persian was rejected because of the high rate of existing literacy in…
Page segmentation using script identification vectors: A first look
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hochberg, J.; Cannon, M.; Kelly, P.
1997-07-01
Document images in which different scripts, such as Chinese and Roman, appear on a single page pose a problem for optical character recognition (OCR) systems. This paper explores the use of script identification vectors in the analysis of multilingual document images. A script identification vector is calculated for each connected component in a document. The vector expresses the closest distance between the component and templates developed for each of thirteen scripts, including Arabic, Chinese, Cyrillic, and Roman. The authors calculate the first three principal components within the resulting thirteen-dimensional space for each image. By mapping these components to red, green, and blue, they can visualize the information contained in the script identification vectors. The visualization of several multilingual images suggests that the script identification vectors can be used to segment images into script-specific regions as large as several paragraphs or as small as a few characters. The visualized vectors also reveal distinctions within scripts, such as font in Roman documents, and kanji vs. kana in Japanese. Results are best for documents containing highly dissimilar scripts such as Roman and Japanese. Documents containing similar scripts, such as Roman and Cyrillic, will require further investigation.
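The visualization step lends itself to a compact illustration: project each component's 13-dimensional script-identification vector onto its first three principal components and rescale those to RGB. A minimal sketch with synthetic data; the random vectors below are stand-ins for the authors' actual template distances:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in: one 13-D script-identification vector per connected
# component (distance to each of thirteen script templates).
rng = np.random.default_rng(0)
vectors = rng.random((500, 13))

# First three principal components of the 13-D space, per the paper.
components = PCA(n_components=3).fit_transform(vectors)

# Min-max scale each component into [0, 1] and treat the triple as (R, G, B).
lo, hi = components.min(axis=0), components.max(axis=0)
rgb = (components - lo) / (hi - lo)

# Each connected component can now be painted with its rgb row to
# visualize script-specific regions of the page.
print(rgb[:5])
```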
Conversion of the Aeronautics Interactive Workstation
NASA Technical Reports Server (NTRS)
Riveras, Nykkita L.
2004-01-01
This summer I am working in the Educational Programs Office. My task is to convert the Aeronautics Interactive Workstation from a Macintosh (Mac) platform to a Personal Computer (PC) platform. The Aeronautics Interactive Workstation is a workstation in the Aerospace Educational Laboratory (AEL), which is one of the three components of the Science, Engineering, Mathematics, and Aerospace Academy (SEMAA). The AEL is a state-of-the-art, electronically enhanced, computerized classroom that puts cutting-edge technology at the fingertips of participating students. It provides a unique learning experience regarding aerospace technology that features activities equipped with aerospace hardware and software that model real-world challenges. The Aeronautics Interactive Workstation, in particular, offers a variety of activities pertaining to the history of aeronautics. When the Aeronautics Interactive Workstation was first implemented into the AEL it was designed with Macromedia Director 4 for a Mac. Today it is being converted to Macromedia Director MX 2004 for a PC. Macromedia Director is the proven multimedia tool for building rich content and applications for CDs, DVDs, kiosks, and the Internet. It handles the widest variety of media and offers powerful features for building rich content that delivers real results, integrating interactive audio, video, bitmaps, vectors, text, fonts, and more. Macromedia Director currently offers two programming/scripting languages: Lingo, which is Director's own programming/scripting language, and JavaScript. In the workstation, Lingo is used in the programming/scripting since it was the only language in use when the workstation was created. Since the workstation was created with an older version of Macromedia Director it hosted significantly different programming/scripting protocols. In order to successfully accomplish my task, the final product required correction of Xtra and programming/scripting errors. I also had to convert the Mac platform file extensions into compatible file extensions for a PC.
Atom-by-atom assembly of defect-free one-dimensional cold atom arrays.
Endres, Manuel; Bernien, Hannes; Keesling, Alexander; Levine, Harry; Anschuetz, Eric R; Krajenbrink, Alexandre; Senko, Crystal; Vuletic, Vladan; Greiner, Markus; Lukin, Mikhail D
2016-11-25
The realization of large-scale fully controllable quantum systems is an exciting frontier in modern physical science. We use atom-by-atom assembly to implement a platform for the deterministic preparation of regular one-dimensional arrays of individually controlled cold atoms. In our approach, a measurement and feedback procedure eliminates the entropy associated with probabilistic trap occupation and results in defect-free arrays of more than 50 atoms in less than 400 milliseconds. The technique is based on fast, real-time control of 100 optical tweezers, which we use to arrange atoms in desired geometric patterns and to maintain these configurations by replacing lost atoms with surplus atoms from a reservoir. This bottom-up approach may enable controlled engineering of scalable many-body systems for quantum information processing, quantum simulations, and precision measurements. Copyright © 2016, American Association for the Advancement of Science.
The GOES-R Product Generation Architecture - Post CDR Update
NASA Astrophysics Data System (ADS)
Dittberner, G.; Kalluri, S.; Weiner, A.
2012-12-01
The GOES-R system will substantially improve the accuracy of information available to users by providing data from significantly enhanced instruments, which will generate an increased number and diversity of products with higher resolution, and much shorter relook times. Considerably greater compute and memory resources are required to achieve the necessary latency and availability for these products. Over time, new and updated algorithms are expected to be added and old ones removed as science advances and new products are developed. The GOES-R GS architecture is being planned to maintain functionality so that when such changes are implemented, operational product generation will continue without interruption. The primary parts of the PG infrastructure are the Service Based Architecture (SBA) and the Data Fabric (DF). SBA is the middleware that encapsulates and manages science algorithms that generate products. It is divided into three parts, the Executive, which manages and configures the algorithm as a service, the Dispatcher, which provides data to the algorithm, and the Strategy, which determines when the algorithm can execute with the available data. SBA is a distributed architecture, with services connected to each other over a compute grid, and is highly scalable. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Algorithms require product data from other algorithms, so a scalable and reliable messaging layer is necessary. The SBA uses the DF to provide this data communication layer between algorithms. The DF provides an abstract interface over a distributed and persistent multi-layered storage system (e.g., memory based caching above disk-based storage) and an event management system that allows event-driven algorithm services to know when instrument data are available and where they reside. Together, the SBA and the DF provide a flexible, high performance architecture that can meet the needs of product processing now and as they grow in the future.
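To make the Executive/Dispatcher/Strategy split described above concrete, here is a minimal sketch of how the three roles might divide responsibilities. All class and method names are illustrative interpretations of the abstract; the actual GOES-R ground segment code is not public:

```python
class Strategy:
    """Decides when an algorithm can execute with the data seen so far."""
    def __init__(self, required_inputs):
        self.required = set(required_inputs)
        self.seen = {}

    def offer(self, name, data):
        self.seen[name] = data
        # Ready once every required input has arrived.
        return self.required <= self.seen.keys()

class Dispatcher:
    """Feeds available data to the algorithm via the strategy."""
    def __init__(self, strategy, algorithm):
        self.strategy, self.algorithm = strategy, algorithm

    def on_data_event(self, name, data):
        if self.strategy.offer(name, data):
            return self.algorithm(self.strategy.seen)

class Executive:
    """Configures and manages an algorithm as a service."""
    def __init__(self, algorithm, required_inputs):
        self.dispatcher = Dispatcher(Strategy(required_inputs), algorithm)

# Example: a hypothetical product algorithm needing two upstream inputs.
def cloud_mask(inputs):
    return f"cloud mask from {sorted(inputs)}"

service = Executive(cloud_mask, ["band_2_radiance", "band_14_radiance"])
service.dispatcher.on_data_event("band_2_radiance", object())
print(service.dispatcher.on_data_event("band_14_radiance", object()))
```

In a real deployment the `on_data_event` calls would be driven by the Data Fabric's event management system rather than invoked directly.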
The GOES-R Product Generation Architecture
NASA Astrophysics Data System (ADS)
Dittberner, G. J.; Kalluri, S.; Hansen, D.; Weiner, A.; Tarpley, A.; Marley, S.
2011-12-01
The GOES-R system will substantially improve users' ability to succeed in their work by providing data with significantly enhanced instruments, higher resolution, much shorter relook times, and an increased number and diversity of products. The Product Generation architecture is designed to provide the computer and memory resources necessary to achieve the necessary latency and availability for these products. Over time, new and updated algorithms are expected to be added and old ones removed as science advances and new products are developed. The GOES-R GS architecture is being planned to maintain functionality so that when such changes are implemented, operational product generation will continue without interruption. The primary parts of the PG infrastructure are the Service Based Architecture (SBA) and the Data Fabric (DF). SBA is the middleware that encapsulates and manages science algorithms that generate products. It is divided into three parts, the Executive, which manages and configures the algorithm as a service, the Dispatcher, which provides data to the algorithm, and the Strategy, which determines when the algorithm can execute with the available data. SBA is a distributed architecture, with services connected to each other over a compute grid, and is highly scalable. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Algorithms require product data from other algorithms, so a scalable and reliable messaging layer is necessary. The SBA uses the DF to provide this data communication layer between algorithms. The DF provides an abstract interface over a distributed and persistent multi-layered storage system (e.g., memory based caching above disk-based storage) and an event management system that allows event-driven algorithm services to know when instrument data are available and where they reside. Together, the SBA and the DF provide a flexible, high performance architecture that can meet the needs of product processing now and as they grow in the future.
Bohn, Annette; Habermas, Tilmann
2016-01-01
This study examines predictions from two theories on the organisation of autobiographical memory: Cultural Life Script Theory, which conceptualises the organisation of autobiographical memory by cultural schemata, and Transition Theory, which proposes that people organise their memories in relation to personal events that changed the fabric of their daily lives, or in relation to negative collective public transitions, called the Living-in-History effect. Predictions from both theories were tested in forty-eight older Germans from Berlin and Northern Germany. We tested whether the Living-in-History effect exists for both negative (the Second World War) and positive (the Fall of the Berlin Wall) collectively experienced events, and whether cultural life script events serve as a prominent strategy to date personal memories. Results showed a powerful, long-lasting Living-in-History effect for the negative, but not the positive, event. Berlin participants dated 26% of their memories in relation to the Second World War. Supporting Cultural Life Script Theory, life script events were frequently used to date personal memories. This provides evidence that people use a combination of culturally transmitted knowledge and knowledge based on personal experience to navigate through their autobiographical memories, and that experiencing war has a lasting impact on the organisation of autobiographical memories across the life span.
Trainable multiscript orientation detection
NASA Astrophysics Data System (ADS)
Van Beusekom, Joost; Rangoni, Yves; Breuel, Thomas M.
2010-01-01
Detecting the correct orientation of document images is an important step in large-scale digitization processes, as most subsequent document analysis and optical character recognition methods assume an upright position of the document page. Many methods have been proposed to solve the problem, most of which are based on computing the ascender-to-descender ratio. Unfortunately, this cannot be used for scripts that have neither ascenders nor descenders. Therefore, we present a trainable method using character similarity to compute the correct orientation. A connected-component-based distance measure is computed to compare the characters of the document image to characters whose orientation is known. The orientation for which the distance is lowest is then detected as the correct orientation. Training is easily achieved by exchanging the reference characters for characters of the script to be analyzed. Evaluation of the proposed approach showed an accuracy above 99% for Latin and Japanese script from the public UW-III and UW-II datasets. An accuracy of 98.9% was obtained for Fraktur on a non-public dataset. Comparison of the proposed method to two methods using ascender/descender ratio based orientation detection shows a significant improvement.
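The core idea, choosing the page rotation whose components best match reference characters of known orientation, can be sketched briefly. The distance below (nearest-neighbor Euclidean distance between crudely size-normalized component images) is a simplification I am assuming in place of the paper's exact connected-component measure:

```python
import numpy as np

def normalize(img, size=16):
    """Crudely resize a binary component image to size x size by index sampling."""
    r = np.linspace(0, img.shape[0] - 1, size).astype(int)
    c = np.linspace(0, img.shape[1] - 1, size).astype(int)
    return img[np.ix_(r, c)].astype(float)

def distance_to_references(component, references):
    """Distance from one component to the closest upright reference character."""
    v = normalize(component).ravel()
    return min(np.linalg.norm(v - normalize(ref).ravel()) for ref in references)

def detect_orientation(components, references):
    """Try all four page rotations; the one with the lowest total distance wins."""
    scores = {}
    for k in range(4):  # 0, 90, 180, 270 degrees
        rotated = [np.rot90(comp, k) for comp in components]
        scores[k * 90] = sum(distance_to_references(c, references) for c in rotated)
    return min(scores, key=scores.get)
```

Training, as the abstract notes, amounts to swapping the `references` set for characters of the script to be analyzed.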
NASA Astrophysics Data System (ADS)
Mazzetti, P.; Nativi, S.; Verlato, M.; Angelini, V.
2009-04-01
In the context of the EU co-funded project CYCLOPS (http://www.cyclops-project.eu) the problem of designing an advanced e-Infrastructure for Civil Protection (CP) applications has been addressed. As a preliminary step, some studies of European CP systems and operational applications were performed in order to define their specific system requirements. At a higher level it was verified that CP applications are usually conceived to map CP business processes involving different levels of processing, including data access, data processing, and output visualization. At their core they usually run one or more Earth Science models for information extraction. The traditional approach based on the development of monolithic applications presents some limitations related to flexibility (e.g. the possibility of running the same models with different input data sources, or different models with the same data sources) and scalability (e.g. launching several runs for different scenarios, or implementing more accurate and computing-demanding models). Flexibility can be addressed by adopting a modular design based on a SOA and standard services and models, such as OWS and ISO for geospatial services. Distributed computing and storage solutions can improve scalability. Based on such considerations an architectural framework has been defined. It consists of a Web Service layer providing advanced services for CP applications (e.g. standard geospatial data sharing and processing services) working on the underlying Grid platform. This framework has been tested through the development of prototypes as proofs-of-concept. These theoretical studies and proofs-of-concept demonstrated that although Grid and geospatial technologies would be able to provide significant benefits to CP applications in terms of scalability and flexibility, current platforms are designed taking into account requirements different from those of CP. In particular, CP applications have strict requirements in terms of: a) real-time capabilities, privileging time-of-response over accuracy; b) security services to support complex data policies and trust relationships; c) interoperability with existing or planned infrastructures (e.g. e-Government, INSPIRE compliant, etc.). These requirements are in fact the main reason why CP applications differ from Earth Science applications. Therefore further research is required to design and implement an advanced e-Infrastructure satisfying those specific requirements. In particular, five themes where further research is required were identified: Grid Infrastructure Enhancement, Advanced Middleware for CP Applications, Security and Data Policies, CP Applications Enablement, and Interoperability. For each theme several research topics were proposed and detailed. They are targeted at solving specific problems for the implementation of an effective operational European e-Infrastructure for CP applications.
Towards Big Earth Data Analytics: The EarthServer Approach
NASA Astrophysics Data System (ADS)
Baumann, Peter
2013-04-01
Big Data in the Earth sciences, the Tera- to Exabyte archives, mostly are made up of coverage data, whereby the term "coverage", according to ISO and OGC, is defined as the digital representation of some space-time varying phenomenon. Common examples include 1-D sensor timeseries, 2-D remote sensing imagery, 3-D x/y/t image timeseries and x/y/z geology data, and 4-D x/y/z/t atmosphere and ocean data. Analytics on such data requires on-demand processing of sometimes significant complexity, such as getting the Fourier transform of satellite images. As network bandwidth limits prohibit transfer of such Big Data, it is indispensable to devise protocols allowing clients to task flexible and fast processing on the server. The EarthServer initiative, funded by EU FP7 eInfrastructures, unites 11 partners from computer and earth sciences to establish Big Earth Data Analytics. One key ingredient is flexibility for users to ask what they want, not impeded and complicated by system internals. The EarthServer answer to this is to use high-level query languages; these have proven tremendously successful on tabular and XML data, and we extend them with a central geo data structure, multi-dimensional arrays. A second key ingredient is scalability. Without any doubt, scalability ultimately can only be achieved through parallelization. In the past, parallelizing code has been done at compile time and usually with manual intervention. The EarthServer approach is to perform a semantic-based dynamic distribution of query fragments based on network optimization and further criteria. The EarthServer platform is built around rasdaman, an Array DBMS enabling efficient storage and retrieval of any-size, any-type multi-dimensional raster data. In the project, rasdaman is being extended with several functionality and scalability features, including: support for irregular grids and general meshes; in-situ retrieval (evaluation of database queries on existing archive structures, avoiding data import and, hence, duplication); and the aforementioned distributed query processing. Additionally, Web clients for multi-dimensional data visualization are being established. Client/server interfaces are strictly based on OGC and W3C standards, in particular the Web Coverage Processing Service (WCPS), which defines a high-level raster query language. We present the EarthServer project with its vision and approaches, relate it to the current state of standardization, and demonstrate it by way of large-scale data centers and their services using rasdaman.
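As an illustration of the query-language approach, a WCPS request can be submitted to a rasdaman endpoint as an ordinary HTTP parameter. The endpoint URL and coverage name below are placeholders, and the query is meant only as an indicative WCPS example (a server-side spatio-temporal average), not a query from the project itself:

```python
import requests

# Placeholder endpoint; a real deployment exposes its own WCPS/WCS service URL.
ENDPOINT = "https://example.org/rasdaman/ows"

# Indicative WCPS query: average a (hypothetical) coverage over a
# spatio-temporal subset, entirely on the server side.
query = """
for $c in (SatelliteImageTimeseries)
return avg($c[ansi("2012-01-01":"2012-12-31"), Lat(50:55), Long(8:12)])
"""

response = requests.get(ENDPOINT, params={"query": query}, timeout=60)
print(response.text)
```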
A new open-source Python-based Space Weather data access, visualization, and analysis toolkit
NASA Astrophysics Data System (ADS)
de Larquier, S.; Ribeiro, A.; Frissell, N. A.; Spaleta, J.; Kunduri, B.; Thomas, E. G.; Ruohoniemi, J.; Baker, J. B.
2013-12-01
Space weather research relies heavily on combining and comparing data from multiple observational platforms. Current frameworks exist to aggregate some of the data sources, most based on file downloads via web or FTP interfaces. Empirical models are mostly Fortran-based and lack interfaces with more useful scripting languages. In an effort to improve data and model access, the SuperDARN community has been developing a Python-based Space Science Data Visualization Toolkit (DaViTpy). At the center of this development was a redesign of how our data (from 30 years of SuperDARN radars) are made available. Several access solutions are now wrapped into one convenient Python interface which probes local directories, a new remote NoSQL database, and an FTP server to retrieve the requested data based on availability. Motivated by the efficiency of this interface and the inherent need for data from multiple instruments, we implemented similar modules for other space science datasets (POES, OMNI, Kp, AE...), and also included fundamental empirical models with Python interfaces to enhance data analysis (IRI, HWM, MSIS...). All these modules and more are gathered in a single convenient toolkit, which is collaboratively developed and distributed using GitHub and continues to grow. While still in its early stages, we expect this toolkit will facilitate multi-instrument space weather research and improve scientific productivity.
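The availability-based retrieval pattern described here (probe local files first, then a remote database, then FTP) reduces to a simple fallback chain. The sketch below is generic and deliberately does not reproduce DaViTpy's actual API; all function names are hypothetical:

```python
def fetch_local(request):
    # Hypothetical: probe local directories; return data or None.
    return None

def fetch_database(request):
    # Hypothetical: query the remote NoSQL database; return data or None.
    return None

def fetch_ftp(request):
    # Hypothetical: download from the FTP server; return data or None.
    return None

def get_data(request):
    """Try each source in order of increasing cost; the first hit wins."""
    for source in (fetch_local, fetch_database, fetch_ftp):
        data = source(request)
        if data is not None:
            return data
    raise LookupError(f"no source could satisfy {request!r}")
```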
Scuba: scalable kernel-based gene prioritization.
Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio
2018-01-25
The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge; however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large-scale predictions are required. Importantly, it is able to deal efficiently both with a large number of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba also integrates a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful for prioritizing candidate genes, particularly when their number is large or when input data are highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba.
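The flavor of kernel-based prioritization can be conveyed in a few lines: combine several gene-gene similarity kernels with weights and rank candidates by their combined similarity to known disease genes. This is a toy illustration with random data and fixed weights, not Scuba's actual margin-distribution optimization:

```python
import numpy as np

rng = np.random.default_rng(1)
n_genes = 200

# Toy stand-ins for kernels from heterogeneous sources (e.g. expression,
# interactions, annotations); x @ x.T guarantees positive semidefiniteness.
def random_kernel():
    x = rng.random((n_genes, 10))
    return x @ x.T

kernels = [random_kernel() for _ in range(3)]
weights = np.array([0.5, 0.3, 0.2])  # fixed here; learned by MKL in Scuba

combined = sum(w * k for w, k in zip(weights, kernels))

# A few known disease genes, mimicking the unbalanced setting the paper targets.
known = [3, 17, 42]

# Score every gene by its average combined-kernel similarity to the known set,
# then rank the candidates (excluding the seed genes themselves).
scores = combined[:, known].mean(axis=1)
ranking = [g for g in np.argsort(-scores) if g not in known]
print(ranking[:10])
```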
ERIC Educational Resources Information Center
Hull, Michael Malvern
2013-01-01
In the 1980's and 1990's, results from flurries of standardized exams (particularly in 4th and 8th grade mathematics and science) reached the attention of ever-growing numbers of Americans with an alarming message: our children are not even close to keeping up with those in China, Japan, and Korea. As a step towards improving American classrooms,…
[Script crossing in scanning electron microscopy].
Oehmichen, M; von Kortzfleisch, D; Hegner, B
1989-01-01
A case of mixed script in which ball-point pen ink was contaminated with typewriting prompted a survey of the literature and a systematic SEM study of mixed script produced with various writing instruments or inks. Mixed scripts produced with the following instruments or inks were investigated: pencil, ink/India ink, ball-point pen, felt-tip pen, copied script and typewriter. This investigation showed SEM to be the method of choice for visualizing overlying scripts produced by different writing instruments or inks.
Formatting scripts with computers and Extended BASIC.
Menning, C B
1984-02-01
A computer program, written in Extended BASIC, is presented which enables scripts for educational media to be quickly written in a nearly unformatted style. From the resulting script file, stored on magnetic tape or disk, the computer program formats the script into either a storyboard, a presentation, or a narrator's script. Script headings and page and paragraph numbers are automatic features of the word processing. Suggestions are given for making personal modifications to the computer program.
Reproducible, Component-based Modeling with TopoFlow, A Spatial Hydrologic Modeling Toolkit
Peckham, Scott D.; Stoica, Maria; Jafarov, Elchin; ...
2017-04-26
Modern geoscientists have online access to an abundance of different data sets and models, but these resources differ from each other in myriad ways and this heterogeneity works against interoperability as well as reproducibility. The purpose of this paper is to illustrate the main issues and some best practices for addressing the challenge of reproducible science in the context of a relatively simple hydrologic modeling study for a small Arctic watershed near Fairbanks, Alaska. This study requires several different types of input data in addition to several, coupled model components. All data sets, model components and processing scripts (e.g. for preparation of data and figures, and for analysis of model output) are fully documented and made available online at persistent URLs. Similarly, all source code for the models and scripts is open-source, version controlled and made available online via GitHub. Each model component has a Basic Model Interface (BMI) to simplify coupling and its own HTML help page that includes a list of all equations and variables used. The set of all model components (TopoFlow) has also been made available as a Python package for easy installation. Three different graphical user interfaces for setting up TopoFlow runs are described, including one that allows model components to run and be coupled as web services.
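The BMI mentioned above standardizes a small set of control and data functions (initialize, update, get_value, set_value, finalize), which is what makes component coupling mechanical. A minimal sketch of a two-component coupling loop against that interface follows; the import path, class names, and config files are hypothetical, while the method names come from the BMI specification:

```python
# Assumes two model components exposing the Basic Model Interface (BMI).
# Import path and class names are hypothetical stand-ins for TopoFlow components.
from topoflow.components import snow_degree_day, evap_priestley_taylor

snow = snow_degree_day.snow_component()
evap = evap_priestley_taylor.evap_component()

snow.initialize("snow.cfg")   # hypothetical config files
evap.initialize("evap.cfg")

while snow.get_current_time() < snow.get_end_time():
    snow.update()
    # Pass the coupled variable across the interface by its standard name.
    swe = snow.get_value("snowpack__liquid-equivalent_depth")
    evap.set_value("snowpack__liquid-equivalent_depth", swe)
    evap.update()

snow.finalize()
evap.finalize()
```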
Reproducible, Component-based Modeling with TopoFlow, A Spatial Hydrologic Modeling Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peckham, Scott D.; Stoica, Maria; Jafarov, Elchin
Modern geoscientists have online access to an abundance of different data sets and models, but these resources differ from each other in myriad ways and this heterogeneity works against interoperability as well as reproducibility. The purpose of this paper is to illustrate the main issues and some best practices for addressing the challenge of reproducible science in the context of a relatively simple hydrologic modeling study for a small Arctic watershed near Fairbanks, Alaska. This study requires several different types of input data in addition to several, coupled model components. All data sets, model components and processing scripts (e.g. for preparation of data and figures, and for analysis of model output) are fully documented and made available online at persistent URLs. Similarly, all source code for the models and scripts is open-source, version controlled and made available online via GitHub. Each model component has a Basic Model Interface (BMI) to simplify coupling and its own HTML help page that includes a list of all equations and variables used. The set of all model components (TopoFlow) has also been made available as a Python package for easy installation. Three different graphical user interfaces for setting up TopoFlow runs are described, including one that allows model components to run and be coupled as web services.
Building Better Planet Populations for EXOSIMS
NASA Astrophysics Data System (ADS)
Garrett, Daniel; Savransky, Dmitry
2018-01-01
The Exoplanet Open-Source Imaging Mission Simulator (EXOSIMS) software package simulates ensembles of space-based direct imaging surveys to provide a variety of science and engineering yield distributions for proposed mission designs. These mission simulations rely heavily on assumed distributions of planetary population parameters including semi-major axis, planetary radius, eccentricity, albedo, and orbital orientation to provide heuristics for target selection and to simulate planetary systems for detection and characterization. The distributions are encoded in PlanetPopulation modules within EXOSIMS which are selected by the user in the input JSON script when a simulation is run. The earliest PlanetPopulation modules written for EXOSIMS are based on planet population models where the planetary parameters are considered to be independent of one another. While independent parameters allow for quick computation of heuristics and sampling for simulated planetary systems, results from planet-finding surveys have shown that many parameters (e.g., semi-major axis/orbital period and planetary radius) are not independent. We present new PlanetPopulation modules for EXOSIMS which are built on models based on planet-finding survey results where semi-major axis and planetary radius are not independent, and provide methods for sampling their joint distribution. These new modules enhance the ability of EXOSIMS to simulate realistic planetary systems and give more realistic science yield distributions.
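Sampling a joint (semi-major axis, planetary radius) distribution rather than two independent marginals can be done by inverse-transform sampling on a discretized joint density. A minimal sketch with a made-up density; the actual EXOSIMS modules evaluate occurrence-rate models fitted to survey results instead:

```python
import numpy as np

rng = np.random.default_rng(2)

# Discretized grids (log-spaced, as is common for a and R).
a_grid = np.geomspace(0.1, 30.0, 100)   # semi-major axis [AU]
R_grid = np.geomspace(0.5, 15.0, 80)    # planetary radius [Earth radii]

# Made-up joint density on the grid; real code would evaluate a fitted
# occurrence-rate model f(a, R) here. Note it need not factor as f(a)g(R).
A, R = np.meshgrid(a_grid, R_grid, indexing="ij")
density = np.exp(-((np.log(A) - np.log(1.0)) ** 2)
                 - ((np.log(R) - 0.2 * np.log(A)) ** 2))

# Inverse-transform sample on the flattened grid, then map back to (a, R).
p = (density / density.sum()).ravel()
idx = rng.choice(p.size, size=10_000, p=p)
i, j = np.unravel_index(idx, density.shape)
a_samples, R_samples = a_grid[i], R_grid[j]
print(a_samples[:5], R_samples[:5])
```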
Measuring Social-Emotional Skills to Advance Science and Practice
ERIC Educational Resources Information Center
McKown, Clark; Russo-Ponsaran, Nicole; Johnson, Jason
2016-01-01
The ability to understand and effectively interact with others is a critical determinant of academic, social, and life success (DiPerna & Elliott, 2002). An area in particular need of scalable, feasible, usable, and scientifically sound assessment tools is social-emotional comprehension, which includes mental processes enlisted to encode,…
NASA Astrophysics Data System (ADS)
Štefanička, Tomáš; Ďuračiová, Renata; Seres, Csaba
2017-12-01
As a complex of buildings, the Faculty of Natural Sciences of the Comenius University in Bratislava tends to be difficult to navigate owing to its size. An indoor navigation application could potentially save a lot of time and frustration. There are currently numerous technologies used in indoor navigation systems. Some of them focus on a high degree of precision and require significant financial investment; others provide only static information about a current location. In this paper we focused on the determination of an approximate location using the inertial measurement systems available on most smartphones, i.e., a gyroscope and an accelerometer. The actual position of the device was calculated using a "walk detection method" based on a delayed lack of motion. We have developed an indoor navigation application that relies solely on open source JavaScript libraries to visualize the interior of the building and calculate the shortest path utilizing Dijkstra's routing algorithm. The application logic is located on the client side, so the software is able to work offline. Our solution represents an accessible low-cost and platform-independent web application that can significantly improve navigation at the Faculty of Natural Sciences. Although our application has been developed for a specific building complex, it could be used in other interiors as well.
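The routing half of the application is classical: Dijkstra's algorithm over a graph of corridors and doorways. Since the paper's JavaScript source is not reproduced here, the sketch below shows the same algorithm in Python over an invented fragment of an indoor graph:

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path over a dict-of-dicts adjacency list with edge weights."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Illustrative fragment: nodes are rooms/corridor junctions, weights are
# walking distances in metres (all values made up).
graph = {
    "entrance": {"corridor_A": 12.0},
    "corridor_A": {"entrance": 12.0, "lab_101": 5.5, "stairs": 20.0},
    "stairs": {"corridor_A": 20.0, "corridor_B": 8.0},
    "corridor_B": {"stairs": 8.0, "lab_215": 6.0},
}

print(dijkstra(graph, "entrance", "lab_215"))
```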
[Preliminary application of scripting in RayStation TPS system].
Zhang, Jianying; Sun, Jing; Wang, Yun
2013-07-01
To discuss the basic application of scripting in the RayStation TPS system. On the RayStation 3.0 platform, the programming methods and the points that should be considered during basic scripting were explored with the help of utility scripts. Typical planning problems in the areas of beam arrangement and plan output were used as examples, using the IronPython language. The necessary properties and functions of the patient object for script writing can be extracted from the RayStation system. With the help of .NET controls, planning functions such as interactive parameter input, treatment planning control and extraction of the plan have been realized by scripts. With the help of the demo scripts, scripts can be developed in RayStation and system performance can be improved.
Improved inter-layer prediction for light field content coding with display scalability
NASA Astrophysics Data System (ADS)
Conti, Caroline; Ducla Soares, Luís.; Nunes, Paulo
2016-09-01
Light field imaging based on microlens arrays, also known as plenoptic, holoscopic and integral imaging, has recently emerged as a feasible and promising technology due to its ability to support functionalities not straightforwardly available in conventional imaging systems, such as post-production refocusing and depth-of-field changing. However, to gradually reach the consumer market and to provide interoperability with current 2D and 3D representations, a display scalable coding solution is essential. In this context, this paper proposes an improved display scalable light field codec comprising a three-layer hierarchical coding architecture (previously proposed by the authors) that provides interoperability with 2D (Base Layer) and 3D stereo and multiview (First Layer) representations, while the Second Layer supports the complete light field content. To further improve the compression performance, novel exemplar-based inter-layer coding tools are proposed here for the Second Layer, namely: (i) an inter-layer reference picture construction relying on an exemplar-based optimization algorithm for texture synthesis, and (ii) a direct prediction mode based on exemplar texture samples from lower layers. Experimental results show that the proposed solution performs better than the tested benchmark solutions, including the authors' previous scalable codec.
NASA Astrophysics Data System (ADS)
Plaza, Antonio; Plaza, Javier; Paz, Abel
2010-10-01
Latest generation remote sensing instruments (called hyperspectral imagers) are now able to generate hundreds of images, corresponding to different wavelength channels, for the same area on the surface of the Earth. In previous work, we have reported that the scalability of parallel processing algorithms dealing with these high-dimensional data volumes is affected by the amount of data to be exchanged through the communication network of the system. However, large messages are common in hyperspectral imaging applications since processing algorithms are pixel-based, and each pixel vector to be exchanged through the communication network is made up of hundreds of spectral values. Thus, decreasing the amount of data to be exchanged could improve the scalability and parallel performance. In this paper, we propose a new framework based on intelligent utilization of wavelet-based data compression techniques for improving the scalability of a standard hyperspectral image processing chain on heterogeneous networks of workstations. This type of parallel platform is quickly becoming a standard in hyperspectral image processing due to the distributed nature of collected hyperspectral data as well as its flexibility and low cost. Our experimental results indicate that adaptive lossy compression can lead to improvements in the scalability of the hyperspectral processing chain without sacrificing analysis accuracy, even at sub-pixel precision levels.
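The idea of lossily compressing spectral pixel vectors before exchanging them can be sketched with PyWavelets: decompose each vector, discard small detail coefficients, and transmit only the sparse remainder. The wavelet choice, threshold, and band count below are illustrative, not the paper's actual settings:

```python
import numpy as np
import pywt

rng = np.random.default_rng(3)
pixel = rng.random(224)  # one hyperspectral pixel vector (224 bands assumed)

# Wavelet decomposition of the spectral signature.
coeffs = pywt.wavedec(pixel, "db4", level=3)

# Lossy step: zero out small detail coefficients (threshold is illustrative).
threshold = 0.1
compressed = [coeffs[0]] + [pywt.threshold(c, threshold, mode="hard")
                            for c in coeffs[1:]]

# Only the nonzero coefficients would be packed into the network message.
kept = sum(int(np.count_nonzero(c)) for c in compressed)
total = sum(c.size for c in coeffs)
print(f"kept {kept}/{total} coefficients")

# Receiver side: reconstruct an approximation of the pixel vector.
recovered = pywt.waverec(compressed, "db4")[: pixel.size]
print("max abs error:", np.abs(recovered - pixel).max())
```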
CADDIS Volume 4. Data Analysis: PECBO Appendix - R Scripts for Non-Parametric Regressions
Scripts for computing nonparametric regression analyses, with an overview of using scripts to infer environmental conditions from biological observations and to statistically estimate species-environment relationships.
Asymmetric bias in perception of facial affect among Roman and Arabic script readers.
Heath, Robin L; Rouhana, Aida; Ghanem, Dana Abi
2005-01-01
The asymmetric chimeric faces test is used frequently as an indicator of right hemisphere involvement in the perception of facial affect, as the test is considered free of linguistic elements. Much of the original research with the asymmetric chimeric faces test was conducted with subjects reading left-to-right Roman script, i.e., English. As readers of right-to-left scripts, such as Arabic, demonstrated a mixed or weak rightward bias in judgements of facial affect, the influence of habitual scanning direction was thought to intersect with laterality. We administered the asymmetric chimeric faces test to 1239 adults who represented a range of script experience, i.e., Roman script readers (English and French), Arabic readers, bidirectional readers of Roman and Arabic scripts, and illiterates. Our findings supported the hypothesis that the bias in facial affect judgement is rooted in laterality, but can be influenced by script direction. Specifically, right-handed readers of Roman script demonstrated the greatest mean leftward score, and mixed-handed Arabic script readers demonstrated the greatest mean rightward score. Biliterates showed a gradual shift in asymmetric perception, as their scores fell between those of Roman and Arabic script readers, basically distributed in the order expected by their handedness and most often used script. Illiterates, whose only directional influence was laterality, showed a slight leftward bias.
Disruptive innovations for designing and diffusing evidence-based interventions.
Rotheram-Borus, Mary Jane; Swendeman, Dallas; Chorpita, Bruce F
2012-09-01
Evidence-based therapeutic and preventive intervention programs (EBIs) have been growing exponentially. Yet EBIs have not been broadly adopted in the United States. In order for our EBI science to significantly reduce disease burden, we need to critically reexamine our scientific conventions and norms. Innovation may be spurred by reexamining the traditional biomedical model for validating, implementing, and diffusing EBI products and science. The model of disruptive innovations suggests that we reengineer EBIs on the basis of their most robust features in order to serve more people in less time and at lower cost. A disruptive innovation provides a simpler and less expensive alternative that meets the essential needs for the majority of consumers and is more accessible, scalable, replicable, and sustainable. Examples of disruptive innovations from other fields include minute clinics embedded in retail chain drug stores, $2 generic eyeglasses, automated teller machines, and telemedicine. Four new research approaches will be required to support disruptive innovations in EBI science: synthesize common elements across EBIs; experiment with new delivery formats (e.g., consumer controlled, self-directed, brief, paraprofessional, coaching, and technology and media strategies); adopt market strategies to promote and diffuse EBI science, knowledge, and products; and adopt continuous quality improvement as a research paradigm for systematically improving EBIs, based on ongoing monitoring data and feedback. EBI science can have more impact if it can better leverage what we know from existing EBIs in order to inspire, engage, inform, and support families and children to adopt and sustain healthy daily routines and lifestyles. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Adamczewski-Musch, Joern; Linev, Sergey
2015-12-01
The new THttpServer class in ROOT implements an HTTP server for arbitrary ROOT applications. It is based on the embeddable Civetweb HTTP server and provides direct access to all objects registered with the server. Object data can be provided in different formats: binary, XML, GIF/PNG, and JSON. A generic user interface for THttpServer has been implemented in HTML/JavaScript based on the JavaScript ROOT development. With any modern web browser one can list, display, and monitor objects available on the server. THttpServer is used in the Go4 framework to provide an HTTP interface to the online analysis.
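Registering an object with THttpServer takes only a few calls. A minimal PyROOT sketch is given below; the port, paths, and object names are arbitrary, and the exact options of the engine string should be checked against the ROOT documentation:

```python
import ROOT

# Start the embedded Civetweb engine on port 8080 (engine string assumed).
server = ROOT.THttpServer("http:8080")

# Any ROOT object registered here becomes browsable from a web client,
# retrievable as binary, XML, JSON, or PNG.
histogram = ROOT.TH1F("gauss", "example histogram", 100, -5, 5)
histogram.FillRandom("gaus", 10000)
server.Register("/demo", histogram)

# Keep the application alive so the server can answer requests.
ROOT.gApplication.Run()
```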
Model-Based GUI Testing Using Uppaal at Novo Nordisk
NASA Astrophysics Data System (ADS)
Hjort, Ulrik H.; Illum, Jacob; Larsen, Kim G.; Petersen, Michael A.; Skou, Arne
This paper details a collaboration between Aalborg University and Novo Nordisk in developing an automatic model-based test generation tool for system testing of the graphical user interface of a medical device on an embedded platform. The tool takes as input a UML Statemachine model and generates a test suite satisfying some testing criterion, such as edge or state coverage, and converts the individual test cases into a scripting language that can be automatically executed against the target. The tool has significantly reduced the time required for test construction and generation, and reduced the number of test scripts while increasing the coverage.
ChemDoodle Web Components: HTML5 toolkit for chemical graphics, interfaces, and informatics.
Burger, Melanie C
2015-01-01
ChemDoodle Web Components (abbreviated CWC, iChemLabs, LLC) is a light-weight (~340 KB) JavaScript/HTML5 toolkit for chemical graphics, structure editing, interfaces, and informatics based on the proprietary ChemDoodle desktop software. The library uses
autokonf - A Configuration Script Generator Implemented in Perl
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reus, J F
This paper discusses configuration scripts in general and the scripting language issues involved. A brief description of GNU autoconf is provided along with a contrasting overview of autokonf, a configuration script generator implemented in Perl, whose macros are implemented in Perl, generating a configuration script in Perl. It is very portable, easily extensible, and readily mastered.
Script-independent text line segmentation in freestyle handwritten documents.
Li, Yi; Zheng, Yefeng; Doermann, David; Jaeger, Stefan
2008-08-01
Text line segmentation in freestyle handwritten documents remains an open document analysis problem. Curvilinear text lines and small gaps between neighboring text lines present a challenge to algorithms developed for machine-printed or hand-printed documents. In this paper, we propose a novel approach based on density estimation and a state-of-the-art image segmentation technique, the level set method. From an input document image, we estimate a probability map, where each element represents the probability that the underlying pixel belongs to a text line. The level set method is then exploited to determine the boundary of neighboring text lines by evolving an initial estimate. Unlike connected component based methods ([1], [2] for example), the proposed algorithm does not use any script-specific knowledge. Extensive quantitative experiments on freestyle handwritten documents with diverse scripts, such as Arabic, Chinese, Korean, and Hindi, demonstrate that our algorithm consistently outperforms previous methods [1]-[3]. Further experiments show the proposed algorithm is robust to scale change, rotation, and noise.
NASA Astrophysics Data System (ADS)
Shi, X.
2015-12-01
As NSF has indicated, "Theory and experimentation have for centuries been regarded as two fundamental pillars of science. It is now widely recognized that computational and data-enabled science forms a critical third pillar." Geocomputation is the third pillar of GIScience and the geosciences. With the exponential growth of geodata, the challenge of scalable and high-performance computing for big data analytics becomes urgent, because many research activities are constrained by software or tools that cannot complete the computation process at all. Heterogeneous geodata integration and analytics obviously magnify the complexity and operational time frame. Many large-scale geospatial problems may not be processable at all if the computer system does not have sufficient memory or computational power. Emerging computer architectures, such as Intel's Many Integrated Core (MIC) Architecture and the Graphics Processing Unit (GPU), and advanced computing technologies provide promising solutions that employ massive parallelism and hardware resources to achieve scalability and high performance for data-intensive computing over large spatiotemporal and social media data. Exploring novel algorithms and deploying the solutions in massively parallel computing environments to achieve scalable data processing and analytics over large-scale, complex, and heterogeneous geodata with consistent quality and high performance has been the central theme of our research team in the Department of Geosciences at the University of Arkansas (UARK). New multi-core architectures combined with application accelerators hold the promise of achieving scalability and high performance by exploiting task- and data-level parallelism that is not supported by conventional computing systems. Such a parallel or distributed computing environment is particularly suitable for large-scale geocomputation over big data, as proven by our prior work, while the potential of such advanced infrastructure remains unexplored in this domain. In this presentation, our prior and ongoing initiatives will be summarized to exemplify how we exploit multicore CPUs, GPUs, and MICs, and clusters of CPUs, GPUs and MICs, to accelerate geocomputation in different applications.
Duo, Jia; Dong, Huijin; DeSilva, Binodh; Zhang, Yan J
2013-07-01
Sample dilution and reagent pipetting are time-consuming steps in ligand-binding assays (LBAs). Traditional automation-assisted LBAs use assay-specific scripts that require labor-intensive script writing and user training. Five major script modules were developed on the Tecan Freedom EVO liquid handling software to facilitate automated sample preparation and LBA procedures: sample dilution, sample minimum required dilution, standard/QC minimum required dilution, standard/QC/sample addition, and reagent addition. The modular design of the automation scripts allows users to assemble an automated assay with minimal script modification. The application of the modular scripts was demonstrated in three LBAs to support discovery biotherapeutic programs. The results demonstrated that the modular scripts provided flexibility in adapting to various LBA formats and significant time savings in script writing and scientist training. Data generated by the automated process were comparable to those generated by the manual process, while bioanalytical productivity was significantly improved using the modular robotic scripts.
Toward a Script Theory of Guidance in Computer-Supported Collaborative Learning
Fischer, Frank; Kollar, Ingo; Stegmann, Karsten; Wecker, Christof
2013-01-01
This article presents an outline of a script theory of guidance for computer-supported collaborative learning (CSCL). With its 4 types of components of internal and external scripts (play, scene, role, and scriptlet) and 7 principles, this theory addresses the question of how CSCL practices are shaped by dynamically reconfigured internal collaboration scripts of the participating learners. Furthermore, it explains how internal collaboration scripts develop through participation in CSCL practices. It emphasizes the importance of active application of subject matter knowledge in CSCL practices, and it prioritizes transactive over nontransactive forms of knowledge application in order to facilitate learning. Further, the theory explains how external collaboration scripts modify CSCL practices and how they influence the development of internal collaboration scripts. The principles specify an optimal scaffolding level for external collaboration scripts and allow for the formulation of hypotheses about the fading of external collaboration scripts. Finally, the article points toward conceptual challenges and future research questions. PMID:23378679
A new image representation for compact and secure communication
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prasad, Lakshman; Skourikhine, A. N.
In many areas of nuclear materials management there is a need for communication, archival, and retrieval of annotated image data between heterogeneous platforms and devices to effectively implement safety, security, and safeguards of nuclear materials. Current image formats such as JPEG are not ideally suited to such scenarios, as they are not scalable to different viewing formats and do not provide a high-level representation of images that facilitates automatic object/change detection or annotation. The new Scalable Vector Graphics (SVG) open standard for representing graphical information, recommended by the World Wide Web Consortium (W3C), is designed to address issues of image scalability, portability, and annotation. However, until now there has been no viable technology to efficiently field images of high visual quality under this standard. Recently, LANL has developed a vectorized image representation that is compatible with the SVG standard and preserves visual quality. This is based on a new geometric framework for characterizing complex features in real-world imagery that incorporates perceptual principles of processing visual information, known from cognitive psychology and vision science, to obtain a polygonal image representation of high fidelity. This representation can take advantage of all textual compression and encryption routines unavailable to other image formats. Moreover, this vectorized image representation can be exploited to facilitate automated object recognition that can reduce the time required for data review. The objects/features of interest in these vectorized images can be annotated via animated graphics to facilitate quick and easy display and comprehension of processed image content.
Judicious use of custom development in an open source component architecture
NASA Astrophysics Data System (ADS)
Bristol, S.; Latysh, N.; Long, D.; Tekell, S.; Allen, J.
2014-12-01
Modern software engineering is not as much programming from scratch as innovative assembly of existing components. Seamlessly integrating disparate components into scalable, performant architecture requires sound engineering craftsmanship and can often result in increased cost efficiency and accelerated capabilities if software teams focus their creativity on the edges of the problem space. ScienceBase is part of the U.S. Geological Survey scientific cyberinfrastructure, providing data and information management, distribution services, and analysis capabilities in a way that strives to follow this pattern. ScienceBase leverages open source NoSQL and relational databases, search indexing technology, spatial service engines, numerous libraries, and one proprietary but necessary software component in its architecture. The primary engineering focus is cohesive component interaction, including construction of a seamless Application Programming Interface (API) across all elements. The API allows researchers and software developers alike to leverage the infrastructure in unique, creative ways. Scaling the ScienceBase architecture and core API with increasing data volume (more databases) and complexity (integrated science problems) is a primary challenge addressed by judicious use of custom development in the component architecture. Other data management and informatics activities in the earth sciences have independently resolved to a similar design of reusing and building upon established technology and are working through similar issues for managing and developing information (e.g., U.S. Geoscience Information Network; NASA's Earth Observing System Clearing House; GSToRE at the University of New Mexico). Recent discussions facilitated through the Earth Science Information Partners are exploring potential avenues to exploit the implicit relationships between similar projects for explicit gains in our ability to more rapidly advance global scientific cyberinfrastructure.
ASDC Advances in the Utilization of Microservices and Hybrid Cloud Environments
NASA Astrophysics Data System (ADS)
Baskin, W. E.; Herbert, A.; Mazaika, A.; Walter, J.
2017-12-01
The Atmospheric Science Data Center (ASDC) is transitioning many of its software tools and applications to standalone microservices deployable in a hybrid cloud, offering benefits such as scalability and efficient environment management. This presentation features several projects the ASDC staff have implemented leveraging the OpenShift Container Application Platform and OpenStack Hybrid Cloud Environment, focusing on key tools and techniques applied to: Earth Science data processing; spatial-temporal metadata generation, validation, repair, and curation; and archived data discovery, visualization, and access.
Creating a Podcast/Vodcast: A How-To Approach
NASA Astrophysics Data System (ADS)
Petersen, C. C.
2011-09-01
Creating podcasts and vodcasts is a wonderful way to share news of science research. Public affairs officers use them to reveal the latest discoveries done by scientists in their institutions. Educators can offer podcast/vodcast creation for students who want a unique way to demonstrate their mastery of science topics. Anyone with a computer and a USB microphone can create a podcast. To do a vodcast, you also need a digital video camera and video editing software. This session focused mainly on creating a podcast - writing the script and recording the soundtrack. Attendees also did a short activity to learn to write effective narrative copy for a podcast/vodcast.
Integrated web system of geospatial data services for climate research
NASA Astrophysics Data System (ADS)
Okladnikov, Igor; Gordov, Evgeny; Titov, Alexander
2016-04-01
Georeferenced datasets are currently actively used for modeling, interpretation and forecasting of climatic and ecosystem changes on different spatial and temporal scales. Due to the inherent heterogeneity of environmental datasets as well as their huge size (up to tens of terabytes for a single dataset), special software supporting studies in the climate and environmental change areas is required. An approach for integrated analysis of georeferenced climatological data sets based on a combination of web and GIS technologies in the framework of the spatial data infrastructure paradigm is presented. Following this approach, a dedicated data-processing web system for integrated analysis of heterogeneous georeferenced climatological and meteorological data is being developed. It is based on Open Geospatial Consortium (OGC) standards and involves many modern solutions such as an object-oriented programming model, modular composition, and JavaScript libraries based on the GeoExt library, the ExtJS framework and OpenLayers software. This work is supported by the Ministry of Education and Science of the Russian Federation, Agreement #14.613.21.0037.
Dynamic online surveys and experiments with the free open-source software dynQuest.
Rademacher, Jens D M; Lippke, Sonia
2007-08-01
With computers and the World Wide Web widely available, collecting data through Web browsers is an attractive method for the social sciences. In this article, conducting PC- and Web-based trials with the software package dynQuest is described. The software manages dynamic questionnaire-based trials over the Internet or on single computers, possibly as randomized controlled trials (RCTs) if two or more groups are involved. The choice of follow-up questions can depend on previous responses, as needed for matched interventions. Data are collected in a simple text-based database that can be imported easily into other programs for postprocessing and statistical analysis. The software consists of platform-independent scripts written in the programming language Perl that use the common gateway interface between Web browser and server for submission of data through HTML forms. Advantages of dynQuest are parsimony, simplicity of use and installation, transparency, and reliability. The program is available as open-source freeware from the authors.
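The core behavior described, follow-up questions chosen from previous responses and answers appended to a plain text store, can be sketched in a few lines. The toy below is in Python rather than the package's Perl, and the question ids, branching rules, and file format are invented for illustration.

```python
# Toy response-dependent branching in the spirit of dynQuest (illustrative).
QUESTIONS = {
    "q1": {"text": "Do you exercise regularly? (yes/no)",
           "next": lambda ans: "q2" if ans == "yes" else "q3"},
    "q2": {"text": "How many sessions per week?", "next": lambda ans: None},
    "q3": {"text": "What keeps you from exercising?", "next": lambda ans: None},
}

def run_survey(answers_log="answers.txt"):
    qid, responses = "q1", {}
    while qid is not None:
        ans = input(QUESTIONS[qid]["text"] + " ").strip().lower()
        responses[qid] = ans
        qid = QUESTIONS[qid]["next"](ans)   # branch on the previous response
    with open(answers_log, "a") as f:       # simple text-based storage
        f.write("\t".join(f"{k}={v}" for k, v in responses.items()) + "\n")

if __name__ == "__main__":
    run_survey()
```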
Embedded DCT and wavelet methods for fine granular scalable video: analysis and comparison
NASA Astrophysics Data System (ADS)
van der Schaar-Mitrea, Mihaela; Chen, Yingwei; Radha, Hayder
2000-04-01
Video transmission over bandwidth-varying networks is becoming increasingly important due to emerging applications such as streaming of video over the Internet. The fundamental obstacle in designing such systems resides in the varying characteristics of the Internet (i.e. bandwidth variations and packet-loss patterns). In MPEG-4, a new SNR scalability scheme, called Fine-Granular-Scalability (FGS), is currently under standardization; it is able to adapt in real time (i.e. at transmission time) to Internet bandwidth variations. The FGS framework consists of a non-scalable motion-predicted base layer and an intra-coded fine-granular scalable enhancement layer. For example, the base layer can be coded using a DCT-based, MPEG-4 compliant, highly efficient video compression scheme. Subsequently, the difference between the original and the decoded base layer is computed, and the resulting FGS residual signal is intra-frame coded with an embedded scalable coder. In order to achieve high coding efficiency when compressing the FGS enhancement layer, it is crucial to analyze the nature and characteristics of residual signals common to the SNR scalability framework (including FGS). In this paper, we present a thorough analysis of SNR residual signals by evaluating their statistical properties, compaction efficiency, and frequency characteristics. The signal analysis revealed that the energy compaction of the DCT and wavelet transforms is limited and that the frequency characteristics of SNR residual signals decay rather slowly. Moreover, the blockiness artifacts of the low bit-rate coded base layer result in artificial high frequencies in the residual signal. Subsequently, a variety of wavelet and embedded DCT coding techniques applicable to the FGS framework are evaluated and their results are interpreted based on the identified signal properties. As expected from the theoretical signal analysis, the rate-distortion performances of the embedded wavelet and DCT-based coders are very similar. However, improved results can be obtained for the wavelet coder by deblocking the base layer prior to the FGS residual computation. Based on the theoretical analysis and our measurements, we conclude that for an optimal complexity versus coding-efficiency trade-off, only a limited wavelet decomposition (e.g. two stages) needs to be performed for the FGS residual signal. Also, it was observed that the good rate-distortion performance of a coding technique for a certain image type (e.g. natural still images) does not necessarily translate into similarly good performance for signals with different visual characteristics and statistical properties.
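A toy numeric sketch of the layering described above: a coarsely quantized stand-in for the base layer, the FGS residual as the difference from the original, and bit-plane (embedded) coding of the residual so the enhancement stream can be truncated at any point. This is illustrative only, not the MPEG-4 coder.

```python
import numpy as np

def encode_base(frame, step=16):
    """Stand-in for the non-scalable base layer: coarse quantization."""
    return (np.round(frame / step) * step).astype(np.int32)

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(8, 8)).astype(np.int32)

base = encode_base(frame)
residual = frame - base            # the FGS residual signal
sign, mag = np.sign(residual), np.abs(residual)

# Embedded coding: magnitude bit-planes from most to least significant.
nplanes = int(mag.max()).bit_length()
planes = [(mag >> b) & 1 for b in range(nplanes - 1, -1, -1)]

# A decoder that received only the first k planes of the enhancement stream:
k = 2
recon_mag = np.zeros_like(mag)
for i, plane in enumerate(planes[:k]):
    recon_mag |= plane << (nplanes - 1 - i)
enhanced = base + sign * recon_mag
print("max error, base only :", np.abs(frame - base).max())
print(f"max error, {k} planes :", np.abs(frame - enhanced).max())
```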
Biometric identification: a holistic perspective
NASA Astrophysics Data System (ADS)
Nadel, Lawrence D.
2007-04-01
Significant advances continue to be made in biometric technology. However, the global war on terrorism and our increasingly electronic society have created the societal need for large-scale, interoperable biometric capabilities that challenge the capabilities of current off-the-shelf technology. At the same time, there are concerns that large-scale implementation of biometrics will infringe our civil liberties and offer increased opportunities for identity theft. This paper looks beyond the basic science and engineering of biometric sensors and fundamental matching algorithms and offers approaches for achieving greater performance and acceptability of applications enabled by currently available biometric technologies. The discussion focuses on three primary biometric system aspects: performance and scalability, interoperability, and cost benefit. Significant improvements in system performance and scalability can be achieved through careful consideration of the following elements: biometric data quality, human factors, operational environment, workflow, multibiometric fusion, and integrated performance modeling. Application interoperability hinges upon some of the factors noted above as well as adherence to interface, data, and performance standards. However, there are times when the price of conforming to such standards can be a decrease in local system performance. The development of biometric performance-based cost benefit models can help determine realistic requirements and acceptable designs.
Wavelet-based scalable L-infinity-oriented compression.
Alecu, Alin; Munteanu, Adrian; Cornelis, Jan P H; Schelkens, Peter
2006-09-01
Among the different classes of coding techniques proposed in the literature, predictive schemes have proven their outstanding performance in near-lossless compression. However, these schemes are incapable of providing embedded L∞-oriented compression, or, at most, provide a very limited number of potential L∞ bit-stream truncation points. We propose a new multidimensional wavelet-based L∞-constrained scalable coding framework that generates a fully embedded L∞-oriented bit stream and that retains the coding performance and all the scalability options of state-of-the-art L2-oriented wavelet codecs. Moreover, our codec instantiation of the proposed framework clearly outperforms JPEG2000 in the L∞ coding sense.
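The notion of L∞-oriented truncation points can be illustrated with a toy successive-approximation coder operating directly on samples (no wavelet transform, so this is not the authors' codec): keeping the top bit-planes bounds the maximum absolute error, and each additional plane halves the bound.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))  # toy 8-bit "image"

def reconstruct(x, planes_kept, total_planes=8):
    """Keep only the top `planes_kept` bit-planes of non-negative ints."""
    drop = total_planes - planes_kept
    base = (x >> drop) << drop
    # mid-point reconstruction of the discarded planes halves the worst case
    return base + ((1 << drop) >> 1) if drop > 0 else base

for kept in range(1, 9):   # each plane is one L-infinity truncation point
    err = np.max(np.abs(img - reconstruct(img, kept)))
    bound = (1 << (8 - kept)) // 2 if kept < 8 else 0
    print(f"planes kept={kept}: L_inf error={err:3d} (bound {bound})")
```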
The Next Generation of Ground Operations Command and Control; Scripting in C Sharp and Visual Basic
NASA Technical Reports Server (NTRS)
Ritter, George; Pedoto, Ramon
2010-01-01
This slide presentation reviews the use of scripting languages in ground operations command and control. It describes the use of scripting languages in a historical context and the advantages and disadvantages of scripts. It then describes the Enhanced and Redesigned Scripting (ERS) language, which was designed to combine the utility of a scripting language with the graphical and IDE richness of a programming language. ERS uses the Microsoft Visual Studio programming environment and offers custom controls that enable an ERS developer to extend the Visual Basic and C sharp language interface with the Payload Operations Integration Center (POIC) telemetry and command system.
Littleton, Heather L; Dodd, Julia C
2016-02-25
Scripts are influential in shaping sexual behaviors. Prior studies have examined the influence of individuals' rape scripts; however, these scripts have not been evaluated among diverse groups. The current study examined the rape scripts of African American (n = 72) and European American (n = 99) college women. Results supported three equally common rape scripts: the "real rape," the "party rape," and the "mismatched intentions" rape. However, there were some differences, with African Americans' narratives more often including active victim resistance and less often containing victim vulnerability themes. Societal and cultural influences on rape scripts are discussed. © The Author(s) 2016.
2012-08-16
This patch represents the essential elements associated with pressurized Earth science research aboard the International Space Station. At the top of the patch, Klingon script spells out the acronym WORF, making reference to the famed Star Trek character of the same name; in doing so it attests to the foresight, honor, integrity, and persistence of all those who made the WORF possible. To the right of the Klingon script is a single four-pointed star in the form of a cross to honor the late Dr. Jack Estes and Dr. Dave Amsbury, the individuals most responsible for seeing to it that an optical-quality Earth science research window was added to the United States laboratory module, Destiny. The "flying eyeball" represents the ability of the ISS to allow scientists and astronauts to make and record continuous observations of natural and manmade processes on the surface of the Earth. The Destiny laboratory is depicted on the right of the patch above the Flag of the United States of America, highlighting the position of the nadir-looking, optical-quality science window in the module. The light emanating from the lighted interior of the module through the window appropriately illuminates the National Ensign for display during both day and night. In the center of the patch, below the flying eyeball, is a graphic representation of the WORF rack. A science instrument is mounted on the WORF payload shelf and is recording data of the Earth's surface through the nadir-looking science window over which the WORF rack is mounted. An astronaut, represented by Mario Runco Jr., a designer, developer, and manager of the WORF, and depicted as Star Trek's Mr. Spock, is to the left of the WORF rack and is shown in his flight suit with his STS-44 mission patch operating an imaging instrument, emphasizing the importance of astronaut participation in achieving the maximum scientific return from orbital research.
Dominant heterosexual sexual scripts in emerging adulthood: conceptualization and measurement.
Sakaluk, John K; Todd, Leah M; Milhausen, Robin; Lachowsky, Nathan J
2014-01-01
Sexual script research burgeoned following Simon and Gagnon's (1969, 1986) groundbreaking work. Empirical measurement of sexual script adherence has been limited, however, as no measures exist that have undergone rigorous development and validation. We conducted three studies to examine current dominant sexual scripts of heterosexual adults and to develop a measure of endorsement of these scripts. In Study 1, we conducted three focus groups of men (n = 19) and four of women (n = 20) to discuss the current scripts governing sexual behavior. Results supported scripts for sex drive, physical and emotional sex, sexual performance, initiation and gatekeeping, and evaluation of sexual others. In Study 2, we used these qualitative findings to develop a measure of script endorsement, the Sexual Script Scale. Factor analysis of data from 721 participants revealed six interrelated factors, demonstrating initial construct validity. In Study 3, confirmatory factor analysis of a separate sample of 289 participants supported the model from Study 2, and evidence of factorial invariance and test-retest reliability was obtained. This article presents the results of these studies, documenting the process of scale development from formative research through confirmatory testing, and suggests future directions for the continued development of sexual scripting theory.
Gold Digger or Video Girl: the salience of an emerging hip-hop sexual script.
Ross, Jasmine N; Coleman, Nicole M
2011-02-01
Concerns have been expressed in common discourse and the scholarly literature about the negative influence of Hip-Hop on its young listeners' ideas about sex and sexuality. Most of the scholarly literature has focused on the impact of this urban, Black media on young African American girls' sexual self-concept and behaviours. In response to this discourse, Stephens and Phillips (2003) proposed a Hip-Hop sexual scripting model that theorises about specific sexual scripts for young African American women. Their model includes eight different sexual scripts, including the Gold Digger script. The present study proposes a ninth, emerging script: the Video Girl. Participants were 18 female African American college students between the ages of 18 and 30 from a large urban public university in the Southwest USA. Using Q-methodology, the present study found support for the existence of a Video Girl script. In addition, the data indicate that this script is distinct from but closely related to Stephens and Phillips' Gold Digger script. These findings support their theory by suggesting that Hip-Hop sexual scripts are salient and hold real meaning for this sample.
Schiff, Rachel
2012-12-01
The present study explored the speed, accuracy, and reading comprehension of vowelized versus unvowelized scripts among 126 native Hebrew-speaking children in second, fourth, and sixth grades. Findings indicated that second graders read and comprehended vowelized scripts significantly more accurately and more quickly than unvowelized scripts, whereas among fourth and sixth graders reading of unvowelized scripts developed to a greater degree than reading of vowelized scripts. An analysis of the mediating effect of children's mastery of vowelized reading speed and accuracy on their mastery of unvowelized reading speed and comprehension revealed that in second grade, reading accuracy of vowelized words mediated the reading speed and comprehension of unvowelized scripts. In fourth grade, accuracy in reading both vowelized and unvowelized words mediated the reading speed and comprehension of unvowelized scripts. By sixth grade, accuracy in reading vowelized words offered no mediating effect on either reading speed or comprehension of unvowelized scripts. The current outcomes thus suggest that young Hebrew readers undergo a scaffolding process, in which vowelization serves as the foundation for building initial reading abilities and is essential for successful and meaningful decoding of unvowelized scripts.
MIA - A free and open source software for gray scale medical image analysis
2013-01-01
Background Gray scale images make up the bulk of data in bio-medical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever improving acquisition devices, spatial and temporal image resolution increases and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high level programming languages or visual programming. These frameworks are also accessible to researchers that have little or no background in software development, because they take care of otherwise complex tasks. Specifically, the management of working memory is taken care of automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled; but the resulting workflow for the prototyping of new algorithms is rather time intensive, and also not appropriate for a researcher with little or no knowledge in software development. Another alternative is using command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only few tools exist that provide this kind of processing interface; they are usually quite task specific and don't provide a clear path when one wants to shape a new command line tool from a prototype shell script. Results The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language. Since the hard disk serves as the temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command line tools makes it easy to extend MIA, usually without the need to touch or recompile existing code. Conclusion In this article, we describe the general design of MIA, a general-purpose framework for gray scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms by using shell scripts that combine small, single-task command line tools is a viable alternative to the use of high level languages, an approach that is especially useful when large data sets need to be processed. PMID:24119305
MIA - A free and open source software for gray scale medical image analysis.
Wollny, Gert; Kellman, Peter; Ledesma-Carbayo, María-Jesus; Skinner, Matthew M; Hublin, Jean-Jaques; Hierl, Thomas
2013-10-11
Gray scale images make up the bulk of data in bio-medical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever improving acquisition devices, spatial and temporal image resolution increases and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high level programming languages or visual programming. These frameworks are also accessible to researchers that have little or no background in software development, because they take care of otherwise complex tasks. Specifically, the management of working memory is taken care of automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled; but the resulting workflow for the prototyping of new algorithms is rather time intensive, and also not appropriate for a researcher with little or no knowledge in software development. Another alternative is using command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only few tools exist that provide this kind of processing interface; they are usually quite task specific and don't provide a clear path when one wants to shape a new command line tool from a prototype shell script. The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language. Since the hard disk serves as the temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command line tools makes it easy to extend MIA, usually without the need to touch or recompile existing code. In this article, we describe the general design of MIA, a general-purpose framework for gray scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms by using shell scripts that combine small, single-task command line tools is a viable alternative to the use of high level languages, an approach that is especially useful when large data sets need to be processed.
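The prototyping style described, single-task command line tools chained through files on disk with string-based filter descriptions, can be driven from any scripting language. The sketch below uses Python's subprocess for orchestration; the mia-* tool names and filter strings follow the paper's general convention but should be treated as illustrative rather than exact invocations.

```python
import pathlib
import subprocess
import tempfile

# Intermediate results live on disk, so memory management is a non-issue.
work = pathlib.Path(tempfile.mkdtemp())

steps = [
    # each step is one single-task tool with a string-described filter
    ["mia-2dimagefilter", "-i", "input.png",
     "-o", str(work / "step1.png"), "median:w=2"],
    ["mia-2dimagefilter", "-i", str(work / "step1.png"),
     "-o", str(work / "step2.png"), "binarize:min=120"],
]

for cmd in steps:
    subprocess.run(cmd, check=True)  # fail fast if a tool reports an error
print("result written to", work / "step2.png")
```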
AstroCloud, a Cyber-Infrastructure for Astronomy Research: Cloud Computing Environments
NASA Astrophysics Data System (ADS)
Li, C.; Wang, J.; Cui, C.; He, B.; Fan, D.; Yang, Y.; Chen, J.; Zhang, H.; Yu, C.; Xiao, J.; Wang, C.; Cao, Z.; Fan, Y.; Hong, Z.; Li, S.; Mi, L.; Wan, W.; Wang, J.; Yin, S.
2015-09-01
AstroCloud is a cyberinfrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Based on CloudStack, an open source platform, we set up the cloud computing environment for the AstroCloud project. It consists of five distributed nodes across the mainland of China. Users can use and analyze data in this cloud computing environment. Based on GlusterFS, we built a scalable cloud storage system. Each user has a private space, which can be shared among different virtual machines and desktop systems. With this environment, astronomers can easily access astronomical data collected by different telescopes and data centers, and data producers can archive their datasets safely.
Promoting Interests in Atmospheric Science at a Liberal Arts Institution
NASA Astrophysics Data System (ADS)
Roussev, S.; Sherengos, P. M.; Limpasuvan, V.; Xue, M.
2007-12-01
Coastal Carolina University (CCU) students in Computer Science participated in a project to set up an operational weather forecast for the local community. The project involved the construction of two computing clusters and the automation of daily forecasting. Funded by NSF-MRI, two high-performance clusters were successfully established to run the University of Oklahoma's Advanced Regional Prediction System (ARPS). Daily weather predictions are made over South Carolina and North Carolina at 3-km horizontal resolution (roughly 1.9 miles) using initial and boundary condition data provided by UNIDATA. At this high resolution, the model is cloud-resolving, thus providing a detailed picture of heavy thunderstorms and precipitation. Forecast results are displayed on CCU's website (https://marc.coastal.edu/HPC) to complement observations from the National Weather Service in Wilmington, N.C. Present efforts include providing forecasts at 1-km resolution (or finer), comparisons with other models such as the Weather Research and Forecasting (WRF) model, and the examination of local phenomena (like waterspouts and tornadoes). Through these activities the students learn about shell scripting, cluster operating systems, and web design. More importantly, students are introduced to Atmospheric Science, the processes involved in making weather forecasts, and the interpretation of their forecasts. Simulations generated by the forecasts will be integrated into the contents of CCU courses such as Fluid Dynamics, Atmospheric Sciences, Atmospheric Physics, and Remote Sensing. Operated jointly by the departments of Applied Physics and Computer Science, the clusters are expected to be used by CCU faculty and students for future research and inquiry-based projects in Computer Science, Applied Physics, and Marine Science.
Data-Driven Design of Intelligent Wireless Networks: An Overview and Tutorial.
Kulin, Merima; Fortuna, Carolina; De Poorter, Eli; Deschrijver, Dirk; Moerman, Ingrid
2016-06-01
Data science or "data-driven research" is a research approach that uses real-life data to gain insight about the behavior of systems. It enables the analysis of small and simple as well as large and more complex systems in order to assess whether they function according to the intended design and as seen in simulation. Data science approaches have been successfully applied to analyze networked interactions in several research areas such as large-scale social networks and advanced business and healthcare processes. Wireless networks can exhibit unpredictable interactions between algorithms from multiple protocol layers, interactions between multiple devices, and hardware specific influences. These interactions can lead to a difference between real-world functioning and design time functioning. Data science methods can help to detect the actual behavior and possibly help to correct it. Data science is increasingly used in wireless research. To support data-driven research in wireless networks, this paper illustrates the step-by-step methodology that has to be applied to extract knowledge from raw data traces. To this end, the paper (i) clarifies when, why and how to use data science in wireless network research; (ii) provides a generic framework for applying data science in wireless networks; (iii) gives an overview of existing research papers that utilized data science approaches in wireless networks; (iv) illustrates the overall knowledge discovery process through an extensive example in which device types are identified based on their traffic patterns; (v) provides the reader the necessary datasets and scripts to go through the tutorial steps themselves.
Data-Driven Design of Intelligent Wireless Networks: An Overview and Tutorial
Kulin, Merima; Fortuna, Carolina; De Poorter, Eli; Deschrijver, Dirk; Moerman, Ingrid
2016-01-01
Data science or “data-driven research” is a research approach that uses real-life data to gain insight about the behavior of systems. It enables the analysis of small and simple as well as large and more complex systems in order to assess whether they function according to the intended design and as seen in simulation. Data science approaches have been successfully applied to analyze networked interactions in several research areas such as large-scale social networks and advanced business and healthcare processes. Wireless networks can exhibit unpredictable interactions between algorithms from multiple protocol layers, interactions between multiple devices, and hardware specific influences. These interactions can lead to a difference between real-world functioning and design time functioning. Data science methods can help to detect the actual behavior and possibly help to correct it. Data science is increasingly used in wireless research. To support data-driven research in wireless networks, this paper illustrates the step-by-step methodology that has to be applied to extract knowledge from raw data traces. To this end, the paper (i) clarifies when, why and how to use data science in wireless network research; (ii) provides a generic framework for applying data science in wireless networks; (iii) gives an overview of existing research papers that utilized data science approaches in wireless networks; (iv) illustrates the overall knowledge discovery process through an extensive example in which device types are identified based on their traffic patterns; (v) provides the reader the necessary datasets and scripts to go through the tutorial steps themselves. PMID:27258286
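As a compact stand-in for the tutorial's closing example, the sketch below fabricates synthetic per-flow features (the paper ships real traces and scripts) and trains an off-the-shelf classifier to identify device types from traffic patterns; the feature choices and class labels are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_device(mean_size, mean_iat, label, n=200):
    """Synthetic flows: mean packet size and mean inter-arrival time."""
    feats = np.column_stack([rng.normal(mean_size, 40, n),
                             rng.exponential(mean_iat, n)])
    return feats, np.full(n, label)

Xa, ya = make_device(120, 0.50, 0)   # e.g. sensor node: small, sparse packets
Xb, yb = make_device(600, 0.05, 1)   # e.g. laptop: medium, frequent packets
Xc, yc = make_device(1300, 0.01, 2)  # e.g. camera: large, streaming packets
X, y = np.vstack([Xa, Xb, Xc]), np.concatenate([ya, yb, yc])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("device-type accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```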
Strasser, Josef; Gruber, Hans
2015-05-01
An important part of the learning process in the professional development of counselors is the integration of declarative knowledge and professional experience. We investigated the extent to which mental health counselors at different levels of expertise (experts, intermediates, novices) differ in the availability of experience-based knowledge structures. Participants were prompted with 20 client problems and had to explain those problems; the explanations were analyzed using think-aloud protocols. The results show that experts' knowledge is organized in script-like structures that integrate declarative knowledge and professional experience and help experts access relevant information about cases. Novices revealed less integrated knowledge structures. It is concluded that knowledge restructuring and illness script formation are crucial parts of the professional learning of counselors.
Quantum propagation and confinement in 1D systems using the transfer-matrix method
NASA Astrophysics Data System (ADS)
Pujol, Olivier; Carles, Robert; Pérez, José-Philippe
2014-05-01
The aim of this article is to provide some Matlab scripts to the teaching community in quantum physics. The scripts are based on the transfer-matrix formalism and offer a very efficient and versatile tool for solving problems of a physical object (electron, proton, neutron, etc.) in a one-dimensional (1D) stationary potential energy landscape. Resonant tunnelling through multiple barriers and confinement in wells of various shapes are analysed in particular. The results are quantitatively discussed for semiconductor heterostructures, harmonic and anharmonic molecular vibrations, and neutrons in a gravity field. Scripts and other examples (hydrogen-like ions and transmission by a smooth variation of potential energy) are available freely at http://www-loa.univ-lille1.fr/~pujol in three languages: English, French and Spanish.
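The transfer-matrix idea is compact enough to restate here: for a piecewise-constant potential, the plane-wave coefficients on either side of each interface are related by matching ψ and ψ′, and the accumulated 2x2 matrix yields the transmission coefficient. The sketch below is a Python/NumPy rendering of that formalism (the authors' published scripts are in Matlab), applied to a single rectangular barrier.

```python
import numpy as np

hbar = 1.0545718e-34    # J*s
m_e = 9.1093837e-31     # kg
eV = 1.602176634e-19    # J

def k_of(E, V, m=m_e):
    """Complex wavevector in a region of constant potential V at energy E."""
    return np.sqrt(2 * m * (E - V) + 0j) / hbar

def transmission(E, boundaries, potentials, m=m_e):
    """Transmission through a piecewise-constant 1D potential.
    boundaries: positions of the N-1 interfaces; potentials: the N region values."""
    ks = [k_of(E, V, m) for V in potentials]
    M = np.eye(2, dtype=complex)
    for x, kL, kR in zip(boundaries, ks[:-1], ks[1:]):
        # match psi and psi' at x: Rmat @ (A_R, B_R) = Lmat @ (A_L, B_L)
        Lmat = np.array([[np.exp(1j*kL*x),      np.exp(-1j*kL*x)],
                         [kL*np.exp(1j*kL*x), -kL*np.exp(-1j*kL*x)]])
        Rmat = np.array([[np.exp(1j*kR*x),      np.exp(-1j*kR*x)],
                         [kR*np.exp(1j*kR*x), -kR*np.exp(-1j*kR*x)]])
        M = np.linalg.solve(Rmat, Lmat) @ M
    t = np.linalg.det(M) / M[1, 1]   # no incoming wave from the right
    return (ks[-1].real / ks[0].real) * abs(t)**2

# electron at 0.1 eV hitting a 0.3 eV high, 2 nm wide rectangular barrier
T = transmission(0.1 * eV, [0.0, 2e-9], [0.0, 0.3 * eV, 0.0])
print(f"T = {T:.3e}")
```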
Khan, Mohd Shoaib; Gupta, Amit Kumar; Kumar, Manoj
2016-01-01
To develop a computational resource for viral epigenomic methylation profiles from diverse diseases. Methylation patterns of Epstein-Barr virus and hepatitis B virus genomic regions are provided as a web platform developed using the open source Linux-Apache-MySQL-PHP (LAMP) bundle together with programming and scripting languages, namely HTML, JavaScript, and Perl. A comprehensive and integrated web resource, ViralEpi v1.0, is developed, providing a well-organized compendium of methylation events and statistical analyses associated with several diseases. Additionally, it also offers a 'Viral EpiGenome Browser' for a user-friendly browsing experience using the JavaScript-based JBrowse. This web resource should be helpful for the research community engaged in studying epigenetic biomarkers for appropriate prognosis and diagnosis of diseases and their various stages.
Yu, Zhengyang; Zheng, Shusen; Chen, Huaiqing; Wang, Jianjun; Xiong, Qingwen; Jing, Wanjun; Zeng, Yu
2006-10-01
This research studies the process of dynamic concision and 3D reconstruction from medical body data using VRML and the JavaScript language, focusing on how to realize the dynamic concision of a 3D medical model built with VRML. The 2D medical digital images are first modified and manipulated with 2D image software. Then, based on these images, a 3D model is built with VRML and JavaScript. After programming in JavaScript to control the 3D model, the function of dynamic concision is realized by Script nodes and sensor nodes in VRML. The 3D reconstruction and concision of internal body organs can be formed at a quality close to that obtained with traditional methods. In this way, with the function of dynamic concision, a VRML browser can offer better windows for human-computer interaction in a real-time environment than before. 3D reconstruction and dynamic concision with VRML can be used to meet the requirements of medical observation of 3D reconstructions and has a promising prospect in the field of medical imaging.