Sample records for simple scalable script-based

  1. SeqPig: simple and scalable scripting for large sequencing data sets in Hadoop.

    PubMed

    Schumacher, André; Pireddu, Luca; Niemenmaa, Matti; Kallio, Aleksi; Korpelainen, Eija; Zanetti, Gianluigi; Heljanko, Keijo

    2014-01-01

    Hadoop MapReduce-based approaches have become increasingly popular due to their scalability in processing large sequencing datasets. However, as these methods typically require in-depth expertise in Hadoop and Java, they are still out of reach of many bioinformaticians. To solve this problem, we have created SeqPig, a library and a collection of tools to manipulate, analyze and query sequencing datasets in a scalable and simple manner. SeqPig scripts use the Hadoop-based distributed scripting engine Apache Pig, which automatically parallelizes and distributes data processing tasks. We demonstrate SeqPig's scalability over many computing nodes and illustrate its use with example scripts. Available under the open source MIT license at http://sourceforge.net/projects/seqpig/

  2. SeqPig: simple and scalable scripting for large sequencing data sets in Hadoop

    PubMed Central

    Schumacher, André; Pireddu, Luca; Niemenmaa, Matti; Kallio, Aleksi; Korpelainen, Eija; Zanetti, Gianluigi; Heljanko, Keijo

    2014-01-01

    Summary: Hadoop MapReduce-based approaches have become increasingly popular due to their scalability in processing large sequencing datasets. However, as these methods typically require in-depth expertise in Hadoop and Java, they are still out of reach of many bioinformaticians. To solve this problem, we have created SeqPig, a library and a collection of tools to manipulate, analyze and query sequencing datasets in a scalable and simple manner. SeqPig scripts use the Hadoop-based distributed scripting engine Apache Pig, which automatically parallelizes and distributes data processing tasks. We demonstrate SeqPig’s scalability over many computing nodes and illustrate its use with example scripts. Availability and Implementation: Available under the open source MIT license at http://sourceforge.net/projects/seqpig/ Contact: andre.schumacher@yahoo.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24149054
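
    Neither record reproduces a SeqPig script. As a rough illustration of the kind of aggregation such a script expresses declaratively and Pig then parallelizes on Hadoop, the Python sketch below counts aligned reads per reference sequence in a SAM file; the input file name is hypothetical and the code is not part of SeqPig.

    ```python
    # Minimal sketch (not SeqPig itself): the kind of per-reference read count
    # that a SeqPig/Pig script would express declaratively and run on Hadoop.
    # "example.sam" is a hypothetical input file in standard SAM format.
    from collections import Counter

    def count_reads_per_reference(sam_path):
        counts = Counter()
        with open(sam_path) as handle:
            for line in handle:
                if line.startswith("@"):        # skip SAM header lines
                    continue
                fields = line.rstrip("\n").split("\t")
                rname = fields[2]               # column 3: reference sequence name
                if rname != "*":                # "*" marks unmapped reads
                    counts[rname] += 1
        return counts

    if __name__ == "__main__":
        for ref, n in count_reads_per_reference("example.sam").items():
            print(ref, n)
    ```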

  3. A Simple, Scalable, Script-based Science Processor

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher

    2004-01-01

    The production of Earth Science data from orbiting spacecraft is an activity that takes place 24 hours a day, 7 days a week. At the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC), this results in as many as 16,000 program executions each day, far too many to be run by human operators. In fact, when the Moderate Resolution Imaging Spectroradiometer (MODIS) was launched aboard the Terra spacecraft in 1999, the automated commercial system for running science processing was able to manage no more than 4,000 executions per day. Consequently, the GES DAAC developed a lightweight system based on the popular Perl scripting language, named the Simple, Scalable, Script-based Science Processor (S4P). S4P automates science processing, allowing operators to focus on the rare problems occurring from anomalies in data or algorithms. S4P has been reused in several systems ranging from routine processing of MODIS data to data mining and is publicly available from NASA.

  4. Simple, Script-Based Science Processing Archive

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Hegde, Mahabaleshwara; Barth, C. Wrandle

    2007-01-01

    The Simple, Scalable, Script-based Science Processing (S4P) Archive (S4PA) is a disk-based archival system for remote sensing data. It is based on the data-driven framework of S4P and is used for data transfer, data preprocessing, metadata generation, data archive, and data distribution. New data are automatically detected by the system. S4P provides services such as data access control, data subscription, metadata publication, data replication, and data recovery. It comprises scripts that control the data flow. The system detects the availability of data on an FTP (file transfer protocol) server, initiates data transfer, preprocesses data if necessary, and archives it on readily available disk drives with FTP and HTTP (Hypertext Transfer Protocol) access, allowing instantaneous data access. There are options for plug-ins for data preprocessing before storage. Publication of metadata to external applications such as the Earth Observing System Clearinghouse (ECHO) is also supported. S4PA includes a graphical user interface for monitoring the system operation and a tool for deploying the system. To ensure reliability, S4P continuously checks stored data for integrity. Further reliability is provided by tape backups of disks made once a disk partition is full and closed. The system is designed for low maintenance, requiring minimal operator oversight.

  5. Simple, Scalable, Script-Based Science Processor (S4P)

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Vollmer, Bruce; Berrick, Stephen; Mack, Robert; Pham, Long; Zhou, Bryan; Wharton, Stephen W. (Technical Monitor)

    2001-01-01

    The development and deployment of data processing systems to process Earth Observing System (EOS) data has proven to be costly and prone to technical and schedule risk. Integration of science algorithms into a robust operational system has been difficult. The core processing system, based on commercial tools, has demonstrated limitations at the rates needed to produce the several terabytes per day for EOS, primarily due to job management overhead. This has motivated an evolution in the EOS Data Information System toward a more distributed one incorporating Science Investigator-led Processing Systems (SIPS). As part of this evolution, the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC) has developed a simplified processing system to accommodate the increased load expected with the advent of reprocessing and launch of a second satellite. This system, the Simple, Scalable, Script-based Science Processor (S4P), may also serve as a resource for future SIPS. The current EOSDIS Core System was designed to be general, resulting in a large, complex mix of commercial and custom software. In contrast, many simpler systems, such as the EROS Data Center AVHRR IKM system, rely on a simple directory structure to drive processing, with directories representing different stages of production. The system passes input data to a directory, and the output data is placed in a "downstream" directory. The GES DAAC's Simple Scalable Script-based Science Processing System is based on the latter concept, but with modifications to allow varied science algorithms and improve portability. It uses a factory assembly-line paradigm: when work orders arrive at a station, an executable is run, and output work orders are sent to downstream stations. The stations are implemented as UNIX directories, while work orders are simple ASCII files. The core S4P infrastructure consists of a Perl program called stationmaster, which detects newly arrived work orders and forks a job to run the appropriate executable (registered in a configuration file for that station). Although S4P is written in Perl, the executables associated with a station can be any program that can be run from the command line, i.e., non-interactively. An S4P instance is typically monitored using a simple Graphical User Interface. However, the reliance of S4P on UNIX files and directories also allows visibility into the state of stations and jobs using standard operating system commands, permitting remote monitor/control over low-bandwidth connections. S4P is being used as the foundation for several small- to medium-size systems for data mining, on-demand subsetting, processing of direct broadcast Moderate Resolution Imaging Spectroradiometer (MODIS) data, and Quick-Response MODIS processing. It has also been used to implement a large-scale system to process MODIS Level 1 and Level 2 Standard Products, which will ultimately process close to 2 TB/day.
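
    The abstract describes S4P's station model: stations are UNIX directories, work orders are plain ASCII files, and a stationmaster process detects new work orders and forks the registered executable. A minimal Python sketch of that directory-polling pattern follows; S4P itself is written in Perl, and the directory and executable names here are hypothetical.

    ```python
    # Minimal sketch of the S4P-style station pattern described above (S4P itself
    # is Perl; the directory names and run_algorithm.sh command are hypothetical).
    import os, shutil, subprocess, time

    STATION_DIR = "station_in"           # work orders arrive here as plain ASCII files
    DOWNSTREAM_DIR = "station_out"       # completed work orders are passed downstream
    EXECUTABLE = ["./run_algorithm.sh"]  # command registered for this station

    def poll_station(cycles=3, interval=5):
        os.makedirs(STATION_DIR, exist_ok=True)
        os.makedirs(DOWNSTREAM_DIR, exist_ok=True)
        for _ in range(cycles):          # a real stationmaster would loop forever
            for name in sorted(os.listdir(STATION_DIR)):
                work_order = os.path.join(STATION_DIR, name)
                # Run the registered executable on the work order (non-interactively).
                result = subprocess.run(EXECUTABLE + [work_order])
                if result.returncode == 0:
                    # Hand the work order to the downstream station; a real system
                    # would instead write new output work orders produced by the job.
                    shutil.move(work_order, os.path.join(DOWNSTREAM_DIR, name))
            time.sleep(interval)

    if __name__ == "__main__":
        poll_station()
    ```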

  6. SPV: a JavaScript Signaling Pathway Visualizer.

    PubMed

    Calderone, Alberto; Cesareni, Gianni

    2018-03-24

    The visualization of molecular interactions annotated in web resources is useful for offering users such information in a clear, intuitive layout. These interactions are frequently represented as binary interactions that are laid out in free space, where different entities, cellular compartments and interaction types are hardly distinguishable. SPV (Signaling Pathway Visualizer) is a free open source JavaScript library which offers a series of pre-defined elements, compartments and interaction types meant to facilitate the representation of signaling pathways consisting of causal interactions without neglecting simple protein-protein interaction networks. Freely available under the Apache version 2 license. Source code: https://github.com/Sinnefa/SPV_Signaling_Pathway_Visualizer_v1.0. Language: JavaScript; Web technology: Scalable Vector Graphics; Libraries: D3.js. sinnefa@gmail.com.

  7. Simple, Scalable, Script-based, Science Processor for Measurements - Data Mining Edition (S4PM-DME)

    NASA Astrophysics Data System (ADS)

    Pham, L. B.; Eng, E. K.; Lynnes, C. S.; Berrick, S. W.; Vollmer, B. E.

    2005-12-01

    The S4PM-DME is the Goddard Earth Sciences Distributed Active Archive Center's (GES DAAC) web-based data mining environment. The S4PM-DME replaces the Near-line Archive Data Mining (NADM) system with a better web environment and a richer set of production rules. S4PM-DME enables registered users to submit and execute custom data mining algorithms. The S4PM-DME system uses the GES DAAC developed Simple Scalable Script-based Science Processor for Measurements (S4PM) to automate tasks and perform the actual data processing. A web interface allows the user to access the S4PM-DME system. The user first develops a personalized data mining algorithm on his/her home platform and then uploads it to the S4PM-DME system. Algorithms in C and FORTRAN languages are currently supported. The user-developed algorithm is automatically audited for any potential security problems before it is installed within the S4PM-DME system and made available to the user. Once the algorithm has been installed, the user can promote the algorithm to the "operational" environment. From here the user can search and order the data available in the GES DAAC archive for his/her science algorithm. The user can also set up a processing subscription. The subscription will automatically process new data as it becomes available in the GES DAAC archive. The generated mined data products are then made available for FTP pickup. The benefits of using S4PM-DME are 1) to decrease the time it typically takes a user to download GES DAAC data to his/her system, thus off-loading heavy network traffic, 2) to free up the load on the user's system, and 3) to take advantage of the rich and abundant ocean and atmosphere data from the MODIS and AIRS instruments available from the GES DAAC.

  8. COMP Superscalar, an interoperable programming framework

    NASA Astrophysics Data System (ADS)

    Badia, Rosa M.; Conejero, Javier; Diaz, Carlos; Ejarque, Jorge; Lezzi, Daniele; Lordan, Francesc; Ramon-Cortes, Cristian; Sirvent, Raul

    2015-12-01

    COMPSs is a programming framework that aims to facilitate the parallelization of existing applications written in Java, C/C++ and Python scripts. For that purpose, it offers a simple programming model based on sequential development in which the user is mainly responsible for (i) identifying the functions to be executed as asynchronous parallel tasks and (ii) marking them with annotations or standard Python decorators. A runtime system is in charge of exploiting the inherent concurrency of the code, automatically detecting and enforcing the data dependencies between tasks and spawning these tasks to the available resources, which can be nodes in a cluster, clouds or grids. In cloud environments, COMPSs provides scalability and elasticity features allowing the dynamic provision of resources.
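
    The COMPSs API is not shown in the abstract. The toy Python sketch below only illustrates the general idea of marking functions as asynchronous tasks and letting a runtime execute them concurrently; the decorator and thread pool here are illustrative stand-ins, not the COMPSs/PyCOMPSs interface.

    ```python
    # Illustration only: a toy "task" decorator plus a thread pool, to show the
    # annotate-and-let-the-runtime-parallelize idea; this is not the COMPSs API.
    from concurrent.futures import ThreadPoolExecutor

    _pool = ThreadPoolExecutor(max_workers=4)

    def task(fn):
        """Mark a function as an asynchronous task; calls return futures."""
        def wrapper(*args, **kwargs):
            return _pool.submit(fn, *args, **kwargs)
        return wrapper

    @task
    def square(x):
        return x * x

    if __name__ == "__main__":
        futures = [square(i) for i in range(8)]   # tasks run concurrently
        print([f.result() for f in futures])      # synchronize on the results
    ```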

  9. Evolution of Information Management at the GSFC Earth Sciences (GES) Data and Information Services Center (DISC): 2006-2007

    NASA Technical Reports Server (NTRS)

    Kempler, Steven; Lynnes, Christopher; Vollmer, Bruce; Alcott, Gary; Berrick, Stephen

    2009-01-01

    Increasingly sophisticated National Aeronautics and Space Administration (NASA) Earth science missions have driven their associated data and data management systems from providing simple point-to-point archiving and retrieval to performing user-responsive distributed multisensor information extraction. To fully maximize the use of remote-sensor-generated Earth science data, NASA recognized the need for data systems that provide data access and manipulation capabilities responsive to research brought forth by advancing scientific analysis and the need to maximize the use and usability of the data. The decision by NASA to purposely evolve the Earth Observing System Data and Information System (EOSDIS) at the Goddard Space Flight Center (GSFC) Earth Sciences (GES) Data and Information Services Center (DISC) and other information management facilities was timely and appropriate. The GES DISC evolution was focused on replacing the EOSDIS Core System (ECS) by reusing the in-house-developed, disk-based Simple, Scalable, Script-based Science Product Archive (S4PA) data management system and migrating data to the disk archives. Transition was completed in December 2007.

  10. Application of large-scale computing infrastructure for diverse environmental research applications using GC3Pie

    NASA Astrophysics Data System (ADS)

    Maffioletti, Sergio; Dawes, Nicholas; Bavay, Mathias; Sarni, Sofiane; Lehning, Michael

    2013-04-01

    The Swiss Experiment platform (SwissEx: http://www.swiss-experiment.ch) provides a distributed storage and processing infrastructure for environmental research experiments. The aim of the second phase project (the Open Support Platform for Environmental Research, OSPER, 2012-2015) is to develop the existing infrastructure to provide scientists with an improved workflow. This improved workflow will include pre-defined, documented and connected processing routines. A large-scale computing and data facility is required to provide reliable and scalable access to data for analysis, and it is desirable that such an infrastructure should be free of traditional data handling methods. Such an infrastructure has been developed using the cloud-based part of the Swiss national infrastructure SMSCG (http://www.smscg.ch) and Academic Cloud. The infrastructure under construction supports two main usage models: 1) Ad-hoc data analysis scripts: These scripts are simple processing scripts, written by the environmental researchers themselves, which can be applied to large data sets via the high power infrastructure. Examples of this type of script are spatial statistical analysis scripts (R-based scripts), mostly computed on raw meteorological and/or soil moisture data. These provide processed output in the form of a grid, a plot, or a kml. 2) Complex models: A more intense data analysis pipeline centered (initially) around the physical process model, Alpine3D, and the MeteoIO plugin; depending on the data set, this may require a tightly coupled infrastructure. SMSCG already supports Alpine3D executions as both regular grid jobs and as virtual software appliances. A dedicated appliance with the Alpine3D specific libraries has been created and made available through the SMSCG infrastructure. The analysis pipelines are activated and supervised by simple control scripts that, depending on the data fetched from the meteorological stations, launch new instances of the Alpine3D appliance, execute location-based subroutines at each grid point and store the results back into the central repository for post-processing. An optional extension of this infrastructure will be to provide a 'ring buffer'-type database infrastructure, such that model results (e.g. test runs made to check parameter dependency or for development) can be visualised and downloaded after completion without submitting them to a permanent storage infrastructure.

    Data organization: Data collected from sensors are archived and classified in distributed sites connected with an open-source software middleware, GSN. Publicly available data are available through common web services and via a cloud storage server (based on Swift). Collocation of the data and processing in the cloud would eventually eliminate data transfer requirements.

    Execution control logic: Execution of the data analysis pipelines (for both the R-based analysis and the Alpine3D simulations) has been implemented using the GC3Pie framework developed by UZH (https://code.google.com/p/gc3pie/). This allows large-scale, fault-tolerant execution of the pipelines to be described in terms of software appliances. GC3Pie also allows supervision of the execution of large campaigns of appliances as a single simulation. This poster will present the fundamental architectural components of the data analysis pipelines together with initial experimental results.

  11. Reusing Information Management Services for Recommended Decadal Study Missions to Facilitate Aerosol and Cloud Studies

    NASA Technical Reports Server (NTRS)

    Kempler, Steve; Alcott, Gary; Lynnes, Chris; Leptoukh, Greg; Vollmer, Bruce; Berrick, Steve

    2008-01-01

    NASA Earth Sciences Division (ESD) has made great investments in the development and maintenance of data management systems and information technologies, to maximize the use of NASA generated Earth science data. With information management system infrastructure in place, mature and operational, very small delta costs are required to fully support data archival, processing, and data support services required by the recommended Decadal Study missions. This presentation describes the services and capabilities of the Goddard Space Flight Center (GSFC) Earth Sciences Data and Information Services Center (GES DISC) and the reusability for these future missions. The GES DISC has developed a series of modular, reusable data management components currently in use. They include data archive and distribution (Simple, Scalable, Script-based, Science [S4] Product Archive aka S4PA), data processing (S4 Processor for Measurements aka S4PM), data search (Mirador), data browse, visualization, and analysis (Giovanni), and data mining services. Information management system components are based on atmospheric scientist inputs. Large development and maintenance cost savings can be realized through their reuse in future missions.

  12. Embedding Fonts in MetaPost Output

    DTIC Science & Technology

    2016-04-19

    by John Hobby) based on Donald Knuth’s METAFONT [4] with high quality PostScript output. An outstanding feature of MetaPost is that typeset fonts in...output, the graphics are perfectly scalable to any arbitrary resolution. John Hobby, its author, writes: “[MetaPost] is really a programming language...for generating graphics, especially figures for TEX [5] and troff documents.” This quote by Hobby indicates that MetaPost figures are not only

  13. phylo-node: A molecular phylogenetic toolkit using Node.js.

    PubMed

    O'Halloran, Damien M

    2017-01-01

    Node.js is an open-source and cross-platform environment that provides a JavaScript codebase for back-end server-side applications. JavaScript has been used to develop very fast and user-friendly front-end tools for bioinformatic and phylogenetic analyses. However, no such toolkits are available using Node.js to conduct comprehensive molecular phylogenetic analysis. To address this problem, I have developed phylo-node, a Node.js-based, stable and scalable toolkit that allows the user to perform diverse molecular and phylogenetic tasks. phylo-node can execute the analysis and process the resulting outputs from a suite of software options that provides tools for read processing and genome alignment, sequence retrieval, multiple sequence alignment, primer design, evolutionary modeling, and phylogeny reconstruction. Furthermore, phylo-node enables the user to deploy server dependent applications, and also provides simple integration and interoperation with other Node modules and languages using Node inheritance patterns, and a customized piping module to support the production of diverse pipelines. phylo-node is open-source and freely available to all users without sign-up or login requirements. All source code and user guidelines are openly available at the GitHub repository: https://github.com/dohalloran/phylo-node.

  14. Using Selection Pressure as an Asset to Develop Reusable, Adaptable Software Systems

    NASA Astrophysics Data System (ADS)

    Berrick, S. W.; Lynnes, C.

    2007-12-01

    The Goddard Earth Sciences Data and Information Services Center (GES DISC) at NASA has over the years developed and honed a number of reusable architectural components for supporting large-scale data centers with a large customer base. These include a processing system (S4PM) and an archive system (S4PA) based upon a workflow engine called the Simple, Scalable, Script-based Science Processor (S4P); an online data visualization and analysis system (Giovanni); and the radically simple and fast data search tool, Mirador. These subsystems are currently reused internally in a variety of combinations to implement customized data management on behalf of instrument science teams and other science investigators. Some of these subsystems (S4P and S4PM) have also been reused by other data centers for operational science processing. Our experience has been that development and utilization of robust, interoperable, and reusable software systems can actually flourish in environments defined by heterogeneous commodity hardware systems, the emphasis on value-added customer service, and continual cost reduction pressures. The repeated internal reuse that is fostered by such an environment encourages and even forces changes to the software that make it more reusable and adaptable. Allowing and even encouraging such selective pressures to software development has been a key factor in the success of S4P and S4PM, which are now available to the open source community under the NASA Open Source Agreement.

  15. RBioCloud: A Light-Weight Framework for Bioconductor and R-based Jobs on the Cloud.

    PubMed

    Varghese, Blesson; Patel, Ishan; Barker, Adam

    2015-01-01

    Large-scale ad hoc analytics of genomic data is popular using the R-programming language supported by over 700 software packages provided by Bioconductor. More recently, analytical jobs are benefitting from on-demand computing and storage, their scalability and their low maintenance cost, all of which are offered by the cloud. While biologists and bioinformaticists can take an analytical job and execute it on their personal workstations, it remains challenging to seamlessly execute the job on the cloud infrastructure without extensive knowledge of the cloud dashboard. This paper explores how analytical jobs can be executed on the cloud with minimum effort, and how both the resources and the data required by a job can be managed. An open-source light-weight framework for executing R-scripts using Bioconductor packages, referred to as 'RBioCloud', is designed and developed. RBioCloud offers a set of simple command-line tools for managing the cloud resources, the data and the execution of the job. Three biological test cases validate the feasibility of RBioCloud. The framework is available from http://www.rbiocloud.com.

  16. A user-friendly tool to transform large scale administrative data into wide table format using a MapReduce program with a Pig Latin based script.

    PubMed

    Horiguchi, Hiromasa; Yasunaga, Hideo; Hashimoto, Hideki; Ohe, Kazuhiko

    2012-12-22

    Secondary use of large scale administrative data is increasingly popular in health services and clinical research, where a user-friendly tool for data management is in great demand. MapReduce technology such as Hadoop is a promising tool for this purpose, though its use has been limited by the lack of user-friendly functions for transforming large scale data into wide table format, where each subject is represented by one row, for use in health services and clinical research. Since the original specification of Pig provides very few functions for column field management, we have developed a novel system called GroupFilterFormat to handle the definition of field and data content based on a Pig Latin script. We have also developed, as an open-source project, several user-defined functions to transform the table format using GroupFilterFormat and to deal with processing that considers date conditions. Having prepared dummy discharge summary data for 2.3 million inpatients and medical activity log data for 950 million events, we used the Elastic Compute Cloud environment provided by Amazon Inc. to execute processing speed and scaling benchmarks. In the speed benchmark test, the response time was significantly reduced and a linear relationship was observed between the quantity of data and processing time in both a small and a very large dataset. The scaling benchmark test showed clear scalability. In our system, doubling the number of nodes resulted in a 47% decrease in processing time. Our newly developed system is widely accessible as an open resource. This system is very simple and easy to use for researchers who are accustomed to using declarative command syntax for commercial statistical software and Structured Query Language. Although our system needs further sophistication to allow more flexibility in scripts and to improve efficiency in data processing, it shows promise in facilitating the application of MapReduce technology to efficient data processing with large scale administrative data in health services and clinical research.
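
    The Pig Latin user-defined functions themselves are not reproduced in the abstract. The short Python sketch below only illustrates the long-to-wide pivot the system performs at scale, turning per-event records into one row per subject; the field names are hypothetical.

    ```python
    # Toy illustration of the long-to-wide pivot the paper performs with Pig Latin
    # UDFs on Hadoop; field names (subject_id, item, value) are hypothetical.
    from collections import defaultdict

    events = [
        ("patient1", "age", 63), ("patient1", "drugA_days", 5),
        ("patient2", "age", 71), ("patient2", "drugB_days", 2),
    ]

    def to_wide(rows):
        wide = defaultdict(dict)
        for subject_id, item, value in rows:
            wide[subject_id][item] = value   # one row per subject, one column per item
        return dict(wide)

    print(to_wide(events))
    # {'patient1': {'age': 63, 'drugA_days': 5}, 'patient2': {'age': 71, 'drugB_days': 2}}
    ```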

  17. Scalable and expressive medical terminologies.

    PubMed

    Mays, E; Weida, R; Dionne, R; Laker, M; White, B; Liang, C; Oles, F J

    1996-01-01

    The K-Rep system, based on description logic, is used to represent and reason with large and expressive controlled medical terminologies. Expressive concept descriptions incorporate semantically precise definitions composed using logical operators, together with important non-semantic information such as synonyms and codes. Examples are drawn from our experience with K-Rep in modeling the InterMed laboratory terminology and also developing a large clinical terminology now in production use at Kaiser-Permanente. System-level scalability of performance is achieved through an object-oriented database system which efficiently maps persistent memory to virtual memory. Equally important is conceptual scalability: the ability to support collaborative development, organization, and visualization of a substantial terminology as it evolves over time. K-Rep addresses this need by logically completing concept definitions and automatically classifying concepts in a taxonomy via subsumption inferences. The K-Rep system includes a general-purpose GUI environment for terminology development and browsing, a custom interface for formulary term maintenance, a C++ application program interface, and a distributed client-server mode which provides lightweight clients with efficient run-time access to K-Rep by means of a scripting language.
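
    As a loose illustration of the subsumption inference mentioned above (far simpler than a description-logic reasoner such as K-Rep), the sketch below treats a concept definition as a set of required properties and says one concept subsumes another when its properties are a subset of the other's; the example concepts are invented.

    ```python
    # Very simplified illustration of subsumption-based classification; a real
    # description-logic system such as K-Rep is far more expressive than this.
    definitions = {
        "Drug": {"is_substance"},
        "Antibiotic": {"is_substance", "kills_bacteria"},
        "Penicillin": {"is_substance", "kills_bacteria", "is_beta_lactam"},
    }

    def subsumes(parent, child):
        # A concept subsumes another if its defining properties are a subset
        # of the other concept's defining properties (it is more general).
        return definitions[parent] <= definitions[child]

    for a in definitions:
        for b in definitions:
            if a != b and subsumes(a, b):
                print(f"{a} subsumes {b}")
    ```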

  18. A modeling paradigm for interdisciplinary water resources modeling: Simple Script Wrappers (SSW)

    NASA Astrophysics Data System (ADS)

    Steward, David R.; Bulatewicz, Tom; Aistrup, Joseph A.; Andresen, Daniel; Bernard, Eric A.; Kulcsar, Laszlo; Peterson, Jeffrey M.; Staggenborg, Scott A.; Welch, Stephen M.

    2014-05-01

    Holistic understanding of a water resources system requires tools capable of model integration. This team has developed an adaptation of the OpenMI (Open Modelling Interface) that allows easy interactions across the data passed between models. Capabilities have been developed to allow programs written in common languages such as MATLAB, Python and Scilab to share their data with other programs and accept other programs' data. We call this interface the Simple Script Wrapper (SSW). An implementation of SSW is shown that integrates groundwater, economic, and agricultural models in the High Plains region of Kansas. Output from these models illustrates the interdisciplinary discovery facilitated through use of SSW implemented models. Reference: Bulatewicz, T., A. Allen, J.M. Peterson, S. Staggenborg, S.M. Welch, and D.R. Steward, The Simple Script Wrapper for OpenMI: Enabling interdisciplinary modeling studies, Environmental Modelling & Software, 39, 283-294, 2013. http://dx.doi.org/10.1016/j.envsoft.2012.07.006 http://code.google.com/p/simple-script-wrapper/

  19. Testing SLURM open source batch system for a Tier1/Tier2 HEP computing facility

    NASA Astrophysics Data System (ADS)

    Donvito, Giacinto; Salomoni, Davide; Italiano, Alessandro

    2014-06-01

    In this work the testing activities that were carried out to verify whether the SLURM batch system could be used as the production batch system of a typical Tier1/Tier2 HEP computing center are shown. SLURM (Simple Linux Utility for Resource Management) is an Open Source batch system developed mainly by the Lawrence Livermore National Laboratory, SchedMD, Linux NetworX, Hewlett-Packard, and Groupe Bull. Testing was focused both on verifying the functionalities of the batch system and the performance that SLURM is able to offer. We first describe our initial set of requirements. Functionally, we started configuring SLURM so that it replicates all the scheduling policies already used in production in the computing centers involved in the test, i.e. INFN-Bari and the INFN-Tier1 at CNAF, Bologna. Currently, the INFN-Tier1 is using IBM LSF (Load Sharing Facility), while INFN-Bari, an LHC Tier2 for both CMS and Alice, is using Torque as resource manager and MAUI as scheduler. We show how we configured SLURM in order to enable several scheduling functionalities such as Hierarchical FairShare, Quality of Service, user-based and group-based priority, limits on the number of jobs per user/group/queue, job age scheduling, job size scheduling, and scheduling of consumable resources. We then show how different job typologies, like serial, MPI, multi-thread, whole-node and interactive jobs can be managed. Tests on the use of ACLs on queues or in general other resources are then described. A peculiar SLURM feature we also verified is triggers on event, useful to configure specific actions on each possible event in the batch system. We also tested highly available configurations for the master node. This feature is of paramount importance since a mandatory requirement in our scenarios is to have a working farm cluster even in case of hardware failure of the server(s) hosting the batch system. Among our requirements there is also the possibility to deal with pre-execution and post-execution scripts, and controlled handling of the failure of such scripts. This feature is heavily used, for example, at the INFN-Tier1 in order to check the health status of a worker node before execution of each job. Pre- and post-execution scripts are also important to let WNoDeS, the IaaS Cloud solution developed at INFN, use SLURM as its resource manager. WNoDeS has already been supporting the LSF and Torque batch systems for some time; in this work we show the work done so that WNoDeS supports SLURM as well. Finally, we show several performance tests that we carried out to verify SLURM scalability and reliability, detailing scalability tests both in terms of managed nodes and of queued jobs.

  20. Earth Science Mining Web Services

    NASA Astrophysics Data System (ADS)

    Pham, L. B.; Lynnes, C. S.; Hegde, M.; Graves, S.; Ramachandran, R.; Maskey, M.; Keiser, K.

    2008-12-01

    To allow scientists further capabilities in the area of data mining and web services, the Goddard Earth Sciences Data and Information Services Center (GES DISC) and researchers at the University of Alabama in Huntsville (UAH) have developed a system to mine data at the source without the need of network transfers. The system has been constructed by linking together several pre-existing technologies: the Simple Scalable Script-based Science Processor for Measurements (S4PM), a processing engine at the GES DISC; the Algorithm Development and Mining (ADaM) system, a data mining toolkit from UAH that can be configured in a variety of ways to create customized mining processes; ActiveBPEL, a workflow execution engine based on BPEL (Business Process Execution Language); XBaya, a graphical workflow composer; and the EOS Clearinghouse (ECHO). XBaya is used to construct an analysis workflow at UAH using ADaM components, which are also installed remotely at the GES DISC, wrapped as Web Services. The S4PM processing engine searches ECHO for data using space-time criteria, staging them to cache, allowing the ActiveBPEL engine to remotely orchestrate the processing workflow within S4PM. As mining is completed, the output is placed in an FTP holding area for the end user. The goals are to give users control over the data they want to process, while mining data at the data source using the server's resources rather than transferring the full volume over the internet. These diverse technologies have been infused into a functioning, distributed system with only minor changes to the underlying technologies. The key to this infusion is the loosely coupled, Web-Services-based architecture: All of the participating components are accessible (one way or another) through (Simple Object Access Protocol) SOAP-based Web Services.

  21. Earth Science Mining Web Services

    NASA Technical Reports Server (NTRS)

    Pham, Long; Lynnes, Christopher; Hegde, Mahabaleshwa; Graves, Sara; Ramachandran, Rahul; Maskey, Manil; Keiser, Ken

    2008-01-01

    To allow scientists further capabilities in the area of data mining and web services, the Goddard Earth Sciences Data and Information Services Center (GES DISC) and researchers at the University of Alabama in Huntsville (UAH) have developed a system to mine data at the source without the need of network transfers. The system has been constructed by linking together several pre-existing technologies: the Simple Scalable Script-based Science Processor for Measurements (S4PM), a processing engine at the GES DISC; the Algorithm Development and Mining (ADaM) system, a data mining toolkit from UAH that can be configured in a variety of ways to create customized mining processes; ActiveBPEL, a workflow execution engine based on BPEL (Business Process Execution Language); XBaya, a graphical workflow composer; and the EOS Clearinghouse (ECHO). XBaya is used to construct an analysis workflow at UAH using ADaM components, which are also installed remotely at the GES DISC, wrapped as Web Services. The S4PM processing engine searches ECHO for data using space-time criteria, staging them to cache, allowing the ActiveBPEL engine to remotely orchestrate the processing workflow within S4PM. As mining is completed, the output is placed in an FTP holding area for the end user. The goals are to give users control over the data they want to process, while mining data at the data source using the server's resources rather than transferring the full volume over the internet. These diverse technologies have been infused into a functioning, distributed system with only minor changes to the underlying technologies. The key to the infusion is the loosely coupled, Web-Services-based architecture: All of the participating components are accessible (one way or another) through (Simple Object Access Protocol) SOAP-based Web Services.

  22. Open, Cross Platform Chemistry Application Unifying Structure Manipulation, External Tools, Databases and Visualization

    DTIC Science & Technology

    2012-11-27

    with powerful analysis tools and an informatics approach leveraging best-of-breed NoSQL databases, in order to store, search and retrieve relevant...dictionaries, and JavaScript also has good support. The MongoDB project[15] was chosen as a scalable NoSQL data store for the cheminformatics components

  23. Scalability problems of simple genetic algorithms.

    PubMed

    Thierens, D

    1999-01-01

    Scalable evolutionary computation has become an intensively studied research topic in recent years. The issue of scalability is predominant in any field of algorithmic design, but it became particularly relevant for the design of competent genetic algorithms once the scalability problems of simple genetic algorithms were understood. Here we present some of the work that has aided in getting a clear insight into the scalability problems of simple genetic algorithms. Particularly, we discuss the important issue of building block mixing. We show how the need for mixing places a boundary in the GA parameter space that, together with the boundary from the schema theorem, delimits the region where the GA converges reliably to the optimum in problems of bounded difficulty. This region shrinks rapidly with increasing problem size unless the building blocks are tightly linked in the problem coding structure. In addition, we look at how straightforward extensions of the simple genetic algorithm, namely elitism, niching, and restricted mating, do not significantly improve its scalability.
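
    For readers unfamiliar with the simple genetic algorithm whose scaling behaviour is analysed above, the sketch below is a bare-bones GA on the one-max problem (tournament selection, one-point crossover, bit-flip mutation). It is a generic illustration only, not the experimental setup of the paper.

    ```python
    # Bare-bones simple GA on the one-max problem, for illustration only.
    import random

    LENGTH, POP, GENS, PMUT = 30, 60, 60, 1.0 / 30

    def fitness(ind):
        return sum(ind)                      # one-max: count of 1-bits

    def tournament(pop):
        a, b = random.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    def crossover(p1, p2):
        cut = random.randint(1, LENGTH - 1)  # one-point crossover
        return p1[:cut] + p2[cut:]

    def mutate(ind):
        return [bit ^ 1 if random.random() < PMUT else bit for bit in ind]

    pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
    for _ in range(GENS):
        pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(POP)]
    print("best fitness:", max(fitness(i) for i in pop), "of", LENGTH)
    ```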

  24. UPIC: Perl scripts to determine the number of SSR markers to run

    USDA-ARS?s Scientific Manuscript database

    We have developed Perl Scripts for the cost-effective planning of fingerprinting and genotyping experiments. The UPIC scripts detect the best combination of polymorphic simple sequence repeat (SSR) markers and provide coefficients of the amount of information obtainable (number of alleles of patter...

  25. Using Selection Pressure as an Asset to Develop Reusable, Adaptable Software Systems

    NASA Technical Reports Server (NTRS)

    Berrick, Stephen; Lynnes, Christopher

    2007-01-01

    The Goddard Earth Sciences Data and Information Services Center (GES DISC) at NASA has over the years developed and honed several reusable architectural components for supporting large-scale data centers with a large customer base. These include a processing system (S4PM) and an archive system (S4PA) based upon a workflow engine called the Simple, Scalable, Script-based Science Processor (S4P), and an online data visualization and analysis system (Giovanni). These subsystems are currently reused internally in a variety of combinations to implement customized data management on behalf of instrument science teams and other science investigators. Some of these subsystems (S4P and S4PM) have also been reused by other data centers for operational science processing. Our experience has been that development and utilization of robust, interoperable and reusable software systems can actually flourish in environments defined by heterogeneous commodity hardware systems, an emphasis on value-added customer service, and the continual goal of achieving higher cost efficiencies. The repeated internal reuse that is fostered by such an environment encourages and even forces changes to the software that make it more reusable and adaptable. Allowing and even encouraging such selective pressures to software development has been a key factor in the success of S4P and S4PM, which are now available to the open source community under the NASA Open Source Agreement.

  26. A simple microviscometric approach based on Brownian motion tracking.

    PubMed

    Hnyluchová, Zuzana; Bjalončíková, Petra; Karas, Pavel; Mravec, Filip; Halasová, Tereza; Pekař, Miloslav; Kubala, Lukáš; Víteček, Jan

    2015-02-01

    Viscosity-an integral property of a liquid-is traditionally determined by mechanical instruments. The most pronounced disadvantage of such an approach is the requirement of a large sample volume, which poses a serious obstacle, particularly in biology and biophysics when working with limited samples. Scaling down the required volume by means of microviscometry based on tracking the Brownian motion of particles can provide a reasonable alternative. In this paper, we report a simple microviscometric approach which can be conducted with common laboratory equipment. The core of this approach consists in a freely available standalone script to process particle trajectory data based on a Newtonian model. In our study, this setup allowed the sample to be scaled down to 10 μl. The utility of the approach was demonstrated using model solutions of glycerine, hyaluronate, and mouse blood plasma. Therefore, this microviscometric approach based on a newly developed freely available script can be suggested for determination of the viscosity of small biological samples (e.g., body fluids).
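
    The authors' script is not reproduced in the abstract, but the underlying physics is standard: for spherical tracers in a Newtonian liquid the two-dimensional mean squared displacement grows as MSD(tau) = 4*D*tau, and the Stokes-Einstein relation gives eta = kB*T / (6*pi*D*r). The sketch below applies those textbook relations to simulated trajectories; the particle radius, temperature and frame time are illustrative values, not the paper's setup.

    ```python
    # Estimate viscosity from 2-D tracer trajectories via MSD and Stokes-Einstein.
    # This is a generic textbook calculation, not the script from the paper.
    import numpy as np

    KB = 1.380649e-23        # Boltzmann constant, J/K

    def viscosity_from_tracks(tracks, dt, radius, temperature):
        """tracks: list of (N, 2) arrays of x, y positions in metres; dt in seconds."""
        lags, msds = [], []
        for lag in range(1, 10):
            disp = [np.sum((tr[lag:] - tr[:-lag]) ** 2, axis=1) for tr in tracks]
            msds.append(np.mean(np.concatenate(disp)))
            lags.append(lag * dt)
        # 2-D Brownian motion: MSD(tau) = 4 * D * tau  ->  D = slope / 4
        slope = np.polyfit(lags, msds, 1)[0]
        D = slope / 4.0
        # Stokes-Einstein: D = kB * T / (6 * pi * eta * r)
        return KB * temperature / (6.0 * np.pi * D * radius)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        D_true, dt = 4.0e-13, 0.033          # m^2/s and frame time in s (illustrative)
        steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(5, 400, 2))
        tracks = [np.cumsum(s, axis=0) for s in steps]
        eta = viscosity_from_tracks(tracks, dt, radius=5.0e-7, temperature=298.0)
        print(f"estimated viscosity: {eta:.3e} Pa*s")
    ```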

  27. HiDi: an efficient reverse engineering schema for large-scale dynamic regulatory network reconstruction using adaptive differentiation.

    PubMed

    Deng, Yue; Zenil, Hector; Tegnér, Jesper; Kiani, Narsis A

    2017-12-15

    The use of ordinary differential equations (ODEs) is one of the most promising approaches to network inference. The success of ODE-based approaches has, however, been limited, due to the difficulty in estimating parameters and by their lack of scalability. Here, we introduce a novel method and pipeline to reverse engineer gene regulatory networks from time-series and perturbation gene expression data, based upon an improved calculation scheme for the derivatives and a pre-filtration step to reduce the number of possible links. The method introduces a linear differential equation model with adaptive numerical differentiation that is scalable to extremely large regulatory networks. We demonstrate the ability of this method to outperform current state-of-the-art methods applied to experimental and synthetic data using test data from the DREAM4 and DREAM5 challenges. Our method displays greater accuracy and scalability. We benchmark the performance of the pipeline with respect to dataset size and levels of noise. We show that the computation time is linear over various network sizes. The Matlab code of the HiDi implementation is available at: www.complexitycalculator.com/HiDiScript.zip. hzenilc@gmail.com or narsis.kiani@ki.se. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
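
    HiDi's adaptive differentiation scheme is not given in the abstract. The sketch below only shows the generic idea behind linear ODE-based inference: approximate dx/dt by finite differences and fit dx/dt ≈ A·x by least squares, with nonzero entries of A suggesting regulatory links. It is not the HiDi algorithm.

    ```python
    # Generic linear-ODE network inference sketch (not the HiDi algorithm):
    # approximate derivatives by forward differences, then solve X @ A.T ~= dX/dt
    # for the interaction matrix A with least squares.
    import numpy as np

    def infer_linear_network(X, dt):
        """X: (timepoints, genes) expression matrix sampled every dt."""
        dXdt = (X[1:] - X[:-1]) / dt          # simple forward differences
        Xmid = X[:-1]
        # Solve Xmid @ A.T ~= dXdt for A (one least-squares problem per gene).
        A_T, *_ = np.linalg.lstsq(Xmid, dXdt, rcond=None)
        return A_T.T                          # A[i, j]: influence of gene j on gene i

    if __name__ == "__main__":
        A_true = np.array([[-1.0, 0.8], [0.0, -0.5]])
        X = [np.array([1.0, 1.0])]
        for _ in range(200):                  # simulate dx/dt = A_true @ x (Euler steps)
            X.append(X[-1] + 0.01 * A_true @ X[-1])
        print(np.round(infer_linear_network(np.array(X), 0.01), 2))
    ```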

  28. Influence of learner knowledge and case complexity on handover accuracy and cognitive load: results from a simulation study.

    PubMed

    Young, John Q; van Dijk, Savannah M; O'Sullivan, Patricia S; Custers, Eugene J; Irby, David M; Ten Cate, Olle

    2016-09-01

    The handover represents a high-risk event in which errors are common and lead to patient harm. A better understanding of the cognitive mechanisms of handover errors is essential to improving handover education and practice. This paper reports on an experiment conducted to study the effects of learner knowledge, case complexity (i.e. cases with or without a clear diagnosis) and their interaction on handover accuracy and cognitive load. Participants were 52 Dutch medical students in Years 2 and 6. The experiment employed a repeated-measures design with two explanatory variables: case complexity (simple or complex) as the within-subject variable, and learner knowledge (as indicated by illness script maturity) as the between-subject covariate. The dependent variables were handover accuracy and cognitive load. Each participant performed a total of four simulated handovers involving two simple cases and two complex cases. Higher illness script maturity predicted increased handover accuracy (p < 0.001) and lower cognitive load (p = 0.007). Case complexity did not independently affect either outcome. For handover accuracy, there was no interaction between case complexity and illness script maturity. For cognitive load, there was an interaction effect between illness script maturity and case complexity, indicating that more mature illness scripts reduced cognitive load less in complex cases than in simple cases. Students with more mature illness scripts performed more accurate handovers and experienced lower cognitive load. For cognitive load, these effects were more pronounced in simple than complex cases. If replicated, these findings suggest that handover curricula and protocols should provide support that varies according to the knowledge of the trainee. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  29. Design and development of a web-based application for diabetes patient data management.

    PubMed

    Deo, S S; Deobagkar, D N; Deobagkar, Deepti D

    2005-01-01

    A web-based database management system developed for collecting, managing and analysing information of diabetes patients is described here. It is a searchable, client-server, relational database application, developed on the Windows platform using Oracle, Active Server Pages (ASP), Visual Basic Script (VB Script) and Java Script. The software is menu-driven and allows authorized healthcare providers to access, enter, update and analyse patient information. Graphical representation of data can be generated by the system using bar charts and pie charts. An interactive web interface allows users to query the database and generate reports. Alpha- and beta-testing of the system was carried out and the system at present holds records of 500 diabetes patients and is found useful in diagnosis and treatment. In addition to providing patient data on a continuous basis in a simple format, the system is used in population and comparative analysis. It has proved to be of significant advantage to the healthcare provider as compared to the paper-based system.

  30. A Web-Based Information System for Field Data Management

    NASA Astrophysics Data System (ADS)

    Weng, Y. H.; Sun, F. S.

    2014-12-01

    A web-based field data management system has been designed and developed to allow field geologists to store, organize, manage, and share field data online. System requirements were analyzed and clearly defined first regarding what data are to be stored, who the potential users are, and what system functions are needed in order to deliver the right data in the right way to the right user. A 3-tiered architecture was adopted to create this secure, scalable system, which consists of a web browser at the front end, a database at the back end, and a functional logic server in the middle. Specifically, HTML, CSS, and JavaScript were used to implement the user interface in the front-end tier, the Apache web server runs PHP scripts, and a MySQL server is used for the back-end database. The system accepts various types of field information, including image, audio, video, numeric, and text. It allows users to select data and populate them on either Google Earth or Google Maps for the examination of the spatial relations. It also makes the sharing of field data easy by converting them into XML format that is both human-readable and machine-readable, and thus ready for reuse.

  31. Simple proteomics data analysis in the object-oriented PowerShell.

    PubMed

    Mohammed, Yassene; Palmblad, Magnus

    2013-01-01

    Scripting languages such as Perl and Python are appreciated for solving simple, everyday tasks in bioinformatics. A more recent, object-oriented command shell and scripting language, Windows PowerShell, has many attractive features: an object-oriented interactive command line, fluent navigation and manipulation of XML files, ability to consume Web services from the command line, consistent syntax and grammar, rich regular expressions, and advanced output formatting. The key difference between classical command shells and scripting languages, such as bash, and object-oriented ones, such as PowerShell, is that in the latter the result of a command is a structured object with inherited properties and methods rather than a simple stream of characters. Conveniently, PowerShell is included in all new releases of Microsoft Windows and therefore already installed on most computers in classrooms and teaching labs. In this chapter we demonstrate how PowerShell in particular allows easy interaction with mass spectrometry data in XML formats, connection to Web services for tools such as BLAST, and presentation of results as formatted text or graphics. These features make PowerShell much more than "yet another scripting language."

  32. EntrezAJAX: direct web browser access to the Entrez Programming Utilities.

    PubMed

    Loman, Nicholas J; Pallen, Mark J

    2010-06-21

    Web applications for biology and medicine often need to integrate data from Entrez services provided by the National Center for Biotechnology Information. However, direct access to Entrez from a web browser is not possible due to 'same-origin' security restrictions. The use of "Asynchronous JavaScript and XML" (AJAX) to create rich, interactive web applications is now commonplace. The ability to access Entrez via AJAX would be advantageous in the creation of integrated biomedical web resources. We describe EntrezAJAX, which provides access to Entrez eUtils and is able to circumvent same-origin browser restrictions. EntrezAJAX is easily implemented by JavaScript developers and provides functionality identical to Entrez eUtils as well as enhanced functionality to ease development. We provide easy-to-understand developer examples written in JavaScript to illustrate potential uses of this service. For the purposes of speed, reliability and scalability, EntrezAJAX has been deployed on Google App Engine, a freely available cloud service. The EntrezAJAX webpage is located at http://entrezajax.appspot.com/
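
    EntrezAJAX itself is a Google App Engine proxy and is not reproduced here. As a plain illustration of the kind of Entrez eUtils request such a proxy forwards on behalf of browser code (a server-side call is not subject to the same-origin restriction), the sketch below queries the public esearch endpoint from Python; the search term is arbitrary.

    ```python
    # Server-side call to NCBI Entrez eUtils esearch: the kind of request that
    # EntrezAJAX proxies for browser code blocked by same-origin restrictions.
    import json
    import urllib.parse
    import urllib.request

    def entrez_esearch(db, term, retmax=5):
        params = urllib.parse.urlencode(
            {"db": db, "term": term, "retmax": retmax, "retmode": "json"})
        url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
        with urllib.request.urlopen(url) as response:
            return json.load(response)

    if __name__ == "__main__":
        result = entrez_esearch("pubmed", "SeqPig Hadoop")   # arbitrary example query
        print(result["esearchresult"]["idlist"])
    ```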

  33. An Interactive Web-Based Analysis Framework for Remote Sensing Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wang, X. Z.; Zhang, H. M.; Zhao, J. H.; Lin, Q. H.; Zhou, Y. C.; Li, J. H.

    2015-07-01

    Spatiotemporal data, especially remote sensing data, are widely used in ecological, geographical, agriculture, and military research and applications. With the development of remote sensing technology, more and more remote sensing data are accumulated and stored in the cloud. An effective way for cloud users to access and analyse these massive spatiotemporal data in web clients has become an urgent issue. In this paper, we propose a new scalable, interactive and web-based cloud computing solution for massive remote sensing data analysis. We build a spatiotemporal analysis platform to provide the end-user with a safe and convenient way to access massive remote sensing data stored in the cloud. The lightweight cloud storage system used to store public data and users' private data is constructed based on an open source distributed file system. In it, massive remote sensing data are stored as public data, while the intermediate and input data are stored as private data. The elastic, scalable, and flexible cloud computing environment is built using Docker, an open-source lightweight container technology for the Linux operating system. In the Docker container, open-source software such as IPython, NumPy, GDAL, and GRASS GIS is deployed. Users can write scripts in the IPython Notebook web page through the web browser to process data, and the scripts will be submitted to the IPython kernel to be executed. By comparing the performance of remote sensing data analysis tasks executed in Docker containers, KVM virtual machines and physical machines respectively, we conclude that the cloud computing environment built by Docker makes the greatest use of the host system resources and can handle more concurrent spatial-temporal computing tasks. Docker technology provides resource isolation mechanisms for IO, CPU, and memory, which offers security guarantees when processing remote sensing data in the IPython Notebook. Users can write complex data processing code on the web directly, so they can design their own data processing algorithms.
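
    As a hedged example of the kind of notebook cell such a platform runs with the GDAL and NumPy stack mentioned above, the sketch below computes NDVI from two bands of a raster; the file name and band indices are assumptions for illustration.

    ```python
    # Example of the kind of notebook cell the platform runs: NDVI from a raster
    # using the GDAL Python bindings. The file name and band indices are
    # assumptions for illustration only.
    import numpy as np
    from osgeo import gdal

    dataset = gdal.Open("scene.tif")                  # hypothetical input raster
    red = dataset.GetRasterBand(3).ReadAsArray().astype("float64")
    nir = dataset.GetRasterBand(4).ReadAsArray().astype("float64")

    ndvi = (nir - red) / np.maximum(nir + red, 1e-9)  # avoid division by zero
    print("mean NDVI:", float(ndvi.mean()))
    ```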

  34. On-demand server-side image processing for web-based DICOM image display

    NASA Astrophysics Data System (ADS)

    Sakusabe, Takaya; Kimura, Michio; Onogi, Yuzo

    2000-04-01

    Low cost image delivery is needed in modern networked hospitals. If a hospital has hundreds of clients, the cost of client systems is a big problem. Naturally, a Web-based system is the most effective solution. But a Web browser could not display medical images with certain image processing such as a lookup table transformation. We developed a Web-based medical image display system using a Web browser and on-demand server-side image processing. All images displayed on a Web page are generated from DICOM files on a server, delivered on-demand. User interaction on the Web page is handled by a client-side scripting technology such as JavaScript. This combination provides the look-and-feel of an imaging workstation, not only in its functionality but also in its speed. Real-time update of images while tracing mouse motion is achieved in the Web browser without any client-side image processing, which might otherwise require client-side plug-in technology such as Java Applets or ActiveX. We tested the performance of the system in three cases: a single client, a small number of clients in a fast network, and a large number of clients in a normal speed network. The results show that there is very slight communication overhead and that the system is very scalable in the number of clients.
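
    The paper's server-side processing code is not given in the abstract. The sketch below shows a generic window/level lookup-table transformation of a DICOM image; the use of the pydicom library and the window parameters are illustrative assumptions, not the original implementation.

    ```python
    # Generic server-side window/level (lookup table) rendering of a DICOM image,
    # as a stand-in for the kind of processing the paper describes; pydicom and
    # the default window parameters here are illustrative assumptions.
    import numpy as np
    import pydicom

    def apply_window_level(path, center=40.0, width=400.0):
        pixels = pydicom.dcmread(path).pixel_array.astype("float64")
        low, high = center - width / 2.0, center + width / 2.0
        scaled = (np.clip(pixels, low, high) - low) / (high - low) * 255.0
        return scaled.astype("uint8")    # 8-bit greyscale ready for web delivery
    ```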

  35. Simple sequence repeat marker loci discovery using SSR primer.

    PubMed

    Robinson, Andrew J; Love, Christopher G; Batley, Jacqueline; Barker, Gary; Edwards, David

    2004-06-12

    Simple sequence repeats (SSRs) have become important molecular markers for a broad range of applications, such as genome mapping and characterization, phenotype mapping, marker assisted selection of crop plants and a range of molecular ecology and diversity studies. With the increase in the availability of DNA sequence information, an automated process to identify and design PCR primers for amplification of SSR loci would be a useful tool in plant breeding programs. We report an application that integrates SPUTNIK, an SSR repeat finder, with Primer3, a PCR primer design program, into one pipeline tool, SSR Primer. On submission of multiple FASTA formatted sequences, the script screens each sequence for SSRs using SPUTNIK. The results are parsed to Primer3 for locus-specific primer design. The script makes use of a Web-based interface, enabling remote use. This program has been written in PERL and is freely available for non-commercial users by request from the authors. The Web-based version may be accessed at http://hornbill.cspp.latrobe.edu.au/
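
    SSR Primer couples SPUTNIK with Primer3 in Perl; that pipeline is not reproduced here. The Python sketch below only illustrates the first step, detecting simple sequence repeats with a back-referencing regular expression; the motif-length and copy-number thresholds are arbitrary.

    ```python
    # Toy SSR detector using a back-referencing regular expression; SSR Primer
    # itself uses SPUTNIK + Primer3, not this code. Thresholds are arbitrary.
    import re

    SSR_PATTERN = re.compile(r"(([ACGT]{1,6}?)\2{3,})")   # motif repeated >= 4 times

    def find_ssrs(sequence):
        hits = []
        for match in SSR_PATTERN.finditer(sequence.upper()):
            run, motif = match.group(1), match.group(2)
            hits.append((match.start(), motif, len(run) // len(motif)))
        return hits   # list of (position, motif, copy number)

    print(find_ssrs("GGCATATATATATGGCTTTTTCAA"))
    # [(3, 'AT', 5), (16, 'T', 5)]
    ```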

  36. Detecting very low allele fraction variants using targeted DNA sequencing and a novel molecular barcode-aware variant caller.

    PubMed

    Xu, Chang; Nezami Ranjbar, Mohammad R; Wu, Zhong; DiCarlo, John; Wang, Yexun

    2017-01-03

    Detection of DNA mutations at very low allele fractions with high accuracy will significantly improve the effectiveness of precision medicine for cancer patients. To achieve this goal through next generation sequencing, researchers need a detection method that 1) captures rare mutation-containing DNA fragments efficiently in the mix of abundant wild-type DNA; 2) sequences the DNA library extensively to deep coverage; and 3) distinguishes low level true variants from amplification and sequencing errors with high accuracy. Targeted enrichment using PCR primers provides researchers with a convenient way to achieve deep sequencing for a small, yet most relevant region using benchtop sequencers. Molecular barcoding (or indexing) provides a unique solution for reducing sequencing artifacts analytically. Although different molecular barcoding schemes have been reported in recent literature, most variant calling has been done on limited targets, using simple custom scripts. The analytical performance of barcode-aware variant calling can be significantly improved by incorporating advanced statistical models. We present here a highly efficient, simple and scalable enrichment protocol that integrates molecular barcodes in multiplex PCR amplification. In addition, we developed smCounter, an open source, generic, barcode-aware variant caller based on a Bayesian probabilistic model. smCounter was optimized and benchmarked on two independent read sets with SNVs and indels at 5 and 1% allele fractions. Variants were called with very good sensitivity and specificity within coding regions. We demonstrated that we can accurately detect somatic mutations with allele fractions as low as 1% in coding regions using our enrichment protocol and variant caller.
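
    smCounter's Bayesian model is not reproduced in the abstract. The sketch below only illustrates the core barcode-aware idea: reads sharing a molecular barcode derive from one original DNA molecule, so a per-position consensus within each barcode family suppresses random amplification and sequencing errors.

    ```python
    # Illustration of the barcode-aware idea behind callers like smCounter:
    # reads sharing a molecular barcode come from one original DNA molecule, so a
    # per-position consensus within each barcode family suppresses random errors.
    # (smCounter itself adds a Bayesian model on top; this is not its code.)
    from collections import Counter, defaultdict

    def consensus_by_barcode(reads):
        """reads: iterable of (barcode, sequence) pairs with equal-length sequences."""
        families = defaultdict(list)
        for barcode, seq in reads:
            families[barcode].append(seq)
        consensus = {}
        for barcode, seqs in families.items():
            consensus[barcode] = "".join(
                Counter(bases).most_common(1)[0][0] for bases in zip(*seqs))
        return consensus

    reads = [("AACGT", "ACGTA"), ("AACGT", "ACGTA"), ("AACGT", "ACGAA"),  # one error
             ("GGTCA", "ACGTT")]
    print(consensus_by_barcode(reads))   # {'AACGT': 'ACGTA', 'GGTCA': 'ACGTT'}
    ```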

  17. An OpenMI Implementation of a Water Resources System using Simple Script Wrappers

    NASA Astrophysics Data System (ADS)

    Steward, D. R.; Aistrup, J. A.; Kulcsar, L.; Peterson, J. M.; Welch, S. M.; Andresen, D.; Bernard, E. A.; Staggenborg, S. A.; Bulatewicz, T.

    2013-12-01

    This team has developed an adaptation of the Open Modelling Interface (OpenMI) that utilizes Simple Script Wrappers. Code is made OpenMI compliant by organizing it within three modules that initialize, perform time steps, and finalize results. A configuration file is prepared that specifies the variables a model expects to receive as input and those it will make available as output. An example is presented for groundwater, economic, and agricultural production models in the High Plains Aquifer region of Kansas. Our models use the programming environments in Scilab and Matlab, along with legacy Fortran code, and our Simple Script Wrappers can also use Python. These models are collectively run within this interdisciplinary framework from initial conditions into the future. We show that by applying constraints to one model, their impact on the rest of the water resources system can be assessed.
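
    The three-module structure described above can be pictured with a minimal Python wrapper like the one below (the function names, configuration keys and toy groundwater update are illustrative assumptions, not the project's actual interface).

        state = {}

        def initialize(config):
            """Read the inputs named in the configuration file and set up model state."""
            state["head"] = config["initial_head"]
            state["pumping"] = config["pumping_rate"]

        def perform_time_step(inputs):
            """Advance one step; 'inputs' carries exchange values received from the
            other OpenMI-linked models (e.g., irrigation demand from the economic model)."""
            state["head"] -= 0.001 * (state["pumping"] + inputs.get("extra_demand", 0.0))
            return {"water_table": state["head"]}   # exposed as an OpenMI output

        def finalize():
            """Write results and release resources at the end of the simulation."""
            return {"final_head": state["head"]}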

  18. Bioinspired superhydrophobic surfaces, fabricated through simple and scalable roll-to-roll processing

    PubMed Central

    Park, Sung-Hoon; Lee, Sangeui; Moreira, David; Bandaru, Prabhakar R.; Han, InTaek; Yun, Dong-Jin

    2015-01-01

    A simple, scalable, non-lithographic technique for fabricating durable superhydrophobic (SH) surfaces, based on the fingering instabilities associated with non-Newtonian flow and shear tearing, has been developed. The high viscosity of the nanotube/elastomer paste has been exploited for the fabrication. The fabricated SH surfaces had the appearance of bristled shark skin and were robust with respect to mechanical forces. While flow instability is generally regarded as detrimental to roll-coating processes for fabricating uniform films, here we deliberately exploit this effect to create the SH surface. Along with their durability and self-cleaning capabilities, we have demonstrated drag-reduction effects of the fabricated films through dynamic flow measurements. PMID:26490133

  19. Bioinspired superhydrophobic surfaces, fabricated through simple and scalable roll-to-roll processing.

    PubMed

    Park, Sung-Hoon; Lee, Sangeui; Moreira, David; Bandaru, Prabhakar R; Han, InTaek; Yun, Dong-Jin

    2015-10-22

    A simple, scalable, non-lithographic technique for fabricating durable superhydrophobic (SH) surfaces, based on the fingering instabilities associated with non-Newtonian flow and shear tearing, has been developed. The high viscosity of the nanotube/elastomer paste has been exploited for the fabrication. The fabricated SH surfaces had the appearance of bristled shark skin and were robust with respect to mechanical forces. While flow instability is generally regarded as detrimental to roll-coating processes for fabricating uniform films, here we deliberately exploit this effect to create the SH surface. Along with their durability and self-cleaning capabilities, we have demonstrated drag-reduction effects of the fabricated films through dynamic flow measurements.

  20. Scripted messages delivered by nurses and radio changed beliefs, attitudes, intentions, and behaviors regarding infant and young child feeding in Mexico.

    PubMed

    Monterrosa, Eva C; Frongillo, Edward A; González de Cossío, Teresa; Bonvecchio, Anabelle; Villanueva, Maria Angeles; Thrasher, James F; Rivera, Juan A

    2013-06-01

    Scalable interventions are needed to improve infant and young child feeding (IYCF). We evaluated whether an IYCF nutrition communication strategy using radio and nurses changed beliefs, attitudes, social norms, intentions, and behaviors related to breastfeeding (BF), dietary diversity, and food consistency. Women with children 6-24 mo old were randomly selected from 6 semi-urban, low-income communities in the Mexican state of Morelos (intervention, n = 266) and from 3 comparable communities in Puebla (control, n = 201). Nurses delivered 5 scripted messages a single time each: BF, food consistency, flesh-food consumption, vegetable consumption, and feeding again if food was rejected; the same messages aired 7 times each day on 3 radio stations for 21 d. The control communities were not exposed to the scripted messages via nurses or radio. We used a pre-/post-test design to evaluate changes in beliefs, attitudes, norms, and intentions as well as change in behavior with 7-d food frequency questions. Mixed models were used to examine intervention-control differences in pre-/post changes. Coverage was 87% for the nurse component and 34% for radio. Beliefs, attitudes, and intention, but not social norms, about IYCF significantly improved in the intervention communities compared with control. Significant pre-/post changes in the intervention communities compared with control were reported for BF frequency (3.7 ± 0.6 times/d), and consumption of vegetables (0.6 ± 0.2 d) and beef (0.2 ± 0.1 d) and thicker consistency of chicken (0.6 ± 0.2 d) and vegetable broths (0.8 ± 0.4 d). This study provides evidence that a targeted communication strategy using a scalable model significantly improves IYCF.

  1. Dynamic alarm response procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, J.; Gordon, P.; Fitch, K.

    2006-07-01

    The Dynamic Alarm Response Procedure (DARP) system provides a robust, Web-based alternative to existing hard-copy alarm response procedures. This paperless system improves performance by eliminating the time wasted looking up paper procedures by number, looking up plant process values and equipment and component status on graphical displays or panels, and maintaining the procedures. Because it is a Web-based system, it is platform independent. DARPs can be served from any Web server that supports CGI scripting, such as Apache(R), IIS(R), TclHTTPD, and others. DARP pages can be viewed in any Web browser that supports JavaScript and Scalable Vector Graphics (SVG), such as Netscape(R), Microsoft Internet Explorer(R), Mozilla Firefox(R), Opera(R), and others. (authors)

  2. NGL Viewer: Web-based molecular graphics for large complexes.

    PubMed

    Rose, Alexander S; Bradley, Anthony R; Valasatava, Yana; Duarte, Jose M; Prlic, Andreas; Rose, Peter W

    2018-05-29

    The interactive visualization of very large macromolecular complexes on the web is becoming a challenging problem as experimental techniques advance at an unprecedented rate and deliver structures of increasing size. We have tackled this problem by developing highly memory-efficient and scalable extensions for the NGL WebGL-based molecular viewer and by using MMTF, a binary and compressed Macromolecular Transmission Format. These enable NGL to download and render molecular complexes with millions of atoms interactively on desktop computers and smartphones alike, making it a tool of choice for web-based molecular visualization in research and education. The source code is freely available under the MIT license at github.com/arose/ngl and distributed on NPM (npmjs.com/package/ngl). MMTF-JavaScript encoders and decoders are available at github.com/rcsb/mmtf-javascript. asr.moin@gmail.com.

  3. Data Visualization Using Immersive Virtual Reality Tools

    NASA Astrophysics Data System (ADS)

    Cioc, Alexandru; Djorgovski, S. G.; Donalek, C.; Lawler, E.; Sauer, F.; Longo, G.

    2013-01-01

    The growing complexity of scientific data poses serious challenges for effective visualization. Data sets, e.g., catalogs of objects detected in sky surveys, can have a very high dimensionality, ~ 100 - 1000. Visualizing such hyper-dimensional data parameter spaces is essentially impossible, but there are ways of visualizing up to ~ 10 dimensions in a pseudo-3D display. We have been experimenting with the emerging technologies of immersive virtual reality (VR) as a platform for scientific, interactive, collaborative data visualization. Our initial experiments used the virtual world of Second Life, and more recently VR worlds based on its open source code, OpenSimulator. There we can visualize up to ~ 100,000 data points in ~ 7 - 8 dimensions (3 spatial and others encoded as shapes, colors, sizes, etc.), in an immersive virtual space where scientists can interact with their data and with each other. We are now developing a more scalable visualization environment using the popular (practically an emerging standard) Unity 3D Game Engine, coded using C#, JavaScript, and the Unity Scripting Language. This visualization tool can be used through a standard web browser, or through a standalone browser of its own. Rather than merely plotting data points, the application creates interactive three-dimensional objects whose shapes, colors, sizes, and XYZ positions encode various dimensions of the parameter space and can be assigned interactively. Multiple users can navigate through this data space simultaneously, either with their own independent vantage points or with a shared view. At this stage, ~ 100,000 data points can be easily visualized within seconds on a simple laptop. The displayed data points can contain linked information; e.g., upon clicking on a data point, a webpage with additional information can be rendered within the 3D world. A range of functionalities has already been deployed, and more are being added. We expect to make this visualization tool freely available to the academic community within a few months, on an experimental (beta testing) basis.

  4. Touch Interaction with 3D Geographical Visualization on Web: Selected Technological and User Issues

    NASA Astrophysics Data System (ADS)

    Herman, L.; Stachoň, Z.; Stuchlík, R.; Hladík, J.; Kubíček, P.

    2016-10-01

    The use of both 3D visualization and devices with touch displays is increasing. In this paper, we focus on Web technologies for 3D visualization of spatial data and interaction with it via touch-screen gestures. In the first stage, we compared the support for touch interaction in selected JavaScript libraries on different hardware (desktop PCs with touch screens, tablets, and smartphones) and software platforms. Afterwards, we carried out a simple empirical test (within-subject design, 6 participants, 2 simple tasks, an Acer LCD touch monitor, and digital terrain models as stimuli) focusing on the ability of users to solve simple spatial tasks via touch screens. An in-house web testing tool was developed and used, based on JavaScript, PHP, X3DOM, and the Hammer.js library. The correctness of answers, the speed of users' performance, the gestures used, and a simple gesture metric were recorded and analysed. Preliminary results revealed that the pan gesture is the one most frequently used by test participants and is also supported by the majority of 3D libraries. Possible gesture metrics and future developments, including interpersonal differences, are discussed in the conclusion.

  5. Experimental research control software system

    NASA Astrophysics Data System (ADS)

    Cohn, I. A.; Kovalenko, A. G.; Vystavkin, A. N.

    2014-05-01

    A software system intended for the automation of small-scale research has been developed. The software allows one to control equipment and to acquire and process data by means of simple scripts. The main purpose of the development is to make experiment automation easier, thus significantly reducing the effort needed to automate an experimental setup. In particular, minimal programming skills are required, and scripts are easy for supervisors to review. Interactions between scripts and equipment are managed automatically, allowing multiple scripts to run simultaneously. Unlike well-known commercial data acquisition software systems, control is performed through an imperative scripting language. This approach eases the implementation of complex control and data acquisition algorithms. A modular interface library handles interaction with external interfaces. While the most widely used interfaces are already implemented, a simple framework is provided for fast implementation of new software and hardware interfaces. Although the software is under continuous development with new features being added, it is already used in our laboratory to automate control and data acquisition for a helium-3 cryostat. The software is open source and distributed under the GNU General Public License.
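
    A user-facing script in such a system might look like the hedged Python sketch below; the device classes and method names are hypothetical stand-ins for the modular interface library described above, not its real API.

        class Cryostat:                      # hypothetical instrument driver
            def set_temperature(self, kelvin): ...
            def read_temperature(self): return 0.3

        class Voltmeter:                     # hypothetical instrument driver
            def read(self): return 1.2e-6

        def sweep(cryostat, voltmeter, temperatures):
            """Step through temperature set points and record one reading per point."""
            results = []
            for t in temperatures:
                cryostat.set_temperature(t)
                results.append((t, voltmeter.read()))
            return results

        print(sweep(Cryostat(), Voltmeter(), [0.3, 0.5, 1.0]))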

  6. EntrezAJAX: direct web browser access to the Entrez Programming Utilities

    PubMed Central

    2010-01-01

    Web applications for biology and medicine often need to integrate data from Entrez services provided by the National Center for Biotechnology Information. However, direct access to Entrez from a web browser is not possible due to 'same-origin' security restrictions. The use of "Asynchronous JavaScript and XML" (AJAX) to create rich, interactive web applications is now commonplace. The ability to access Entrez via AJAX would be advantageous in the creation of integrated biomedical web resources. We describe EntrezAJAX, which provides access to Entrez eUtils and is able to circumvent same-origin browser restrictions. EntrezAJAX is easily implemented by JavaScript developers and provides the same functionality as Entrez eUtils as well as enhanced functionality to ease development. We provide easy-to-understand developer examples written in JavaScript to illustrate potential uses of this service. For the purposes of speed, reliability and scalability, EntrezAJAX has been deployed on Google App Engine, a freely available cloud service. The EntrezAJAX webpage is located at http://entrezajax.appspot.com/ PMID:20565938

  7. Information Security Considerations for Applications Using Apache Accumulo

    DTIC Science & Technology

    2014-09-01

    Distributed File System INSCOM United States Army Intelligence and Security Command JPA Java Persistence API JSON JavaScript Object Notation MAC Mandatory... MySQL [13]. BigTable can process 20 petabytes per day [14]. High degree of scalability on commodity hardware. NoSQL databases do not rely on highly...manipulation in relational databases. NoSQL databases each have a unique programming interface that uses a lower level procedural language (e.g., Java

  8. Scalable imprinting of shape-specific polymeric nanocarriers using a release layer of switchable water solubility.

    PubMed

    Agarwal, Rachit; Singh, Vikramjit; Jurney, Patrick; Shi, Li; Sreenivasan, S V; Roy, Krishnendu

    2012-03-27

    There is increasing interest in fabricating shape-specific polymeric nano- and microparticles for efficient delivery of drugs and imaging agents. The size and shape of these particles could significantly influence their transport properties and play an important role in in vivo biodistribution, targeting, and cellular uptake. Nanoimprint lithography methods, such as jet-and-flash imprint lithography (J-FIL), provide versatile top-down processes to fabricate shape-specific, biocompatible nanoscale hydrogels that can deliver therapeutic and diagnostic molecules in response to disease-specific cues. However, the key challenges in top-down fabrication of such nanocarriers are scalable imprinting with biological and biocompatible materials, ease of particle-surface modification using both aqueous and organic chemistry as well as simple yet biocompatible harvesting. Here we report that a biopolymer-based sacrificial release layer in combination with improved nanocarrier-material formulation can address these challenges. The sacrificial layer improves scalability and ease of imprint-surface modification due to its switchable solubility through simple ion exchange between monovalent and divalent cations. This process enables large-scale bionanoimprinting and efficient, one-step harvesting of hydrogel nanoparticles in both water- and organic-based imprint solutions. © 2012 American Chemical Society

  9. Viewing multiple sequence alignments with the JavaScript Sequence Alignment Viewer (JSAV)

    PubMed Central

    Martin, Andrew C. R.

    2014-01-01

    The JavaScript Sequence Alignment Viewer (JSAV) is designed as a simple-to-use JavaScript component for displaying sequence alignments on web pages. The display of sequences is highly configurable with options to allow alternative coloring schemes, sorting of sequences and ’dotifying’ repeated amino acids. An option is also available to submit selected sequences to another web site, or to other JavaScript code. JSAV is implemented purely in JavaScript making use of the JQuery and JQuery-UI libraries. It does not use any HTML5-specific options to help with browser compatibility. The code is documented using JSDOC and is available from http://www.bioinf.org.uk/software/jsav/. PMID:25653836

  10. Viewing multiple sequence alignments with the JavaScript Sequence Alignment Viewer (JSAV).

    PubMed

    Martin, Andrew C R

    2014-01-01

    The JavaScript Sequence Alignment Viewer (JSAV) is designed as a simple-to-use JavaScript component for displaying sequence alignments on web pages. The display of sequences is highly configurable with options to allow alternative coloring schemes, sorting of sequences and 'dotifying' repeated amino acids. An option is also available to submit selected sequences to another web site, or to other JavaScript code. JSAV is implemented purely in JavaScript making use of the JQuery and JQuery-UI libraries. It does not use any HTML5-specific options to help with browser compatibility. The code is documented using JSDOC and is available from http://www.bioinf.org.uk/software/jsav/.

  11. Benchmarking gate-based quantum computers

    NASA Astrophysics Data System (ADS)

    Michielsen, Kristel; Nocon, Madita; Willsch, Dennis; Jin, Fengping; Lippert, Thomas; De Raedt, Hans

    2017-11-01

    With the advent of public access to small gate-based quantum processors, it becomes necessary to develop a benchmarking methodology such that independent researchers can validate the operation of these processors. We explore the usefulness of a number of simple quantum circuits as benchmarks for gate-based quantum computing devices and show that circuits performing identity operations are very simple, scalable and sensitive to gate errors and are therefore very well suited for this task. We illustrate the procedure by presenting benchmark results for the IBM Quantum Experience, a cloud-based platform for gate-based quantum computing.
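
    The identity-circuit idea can be reproduced in a few lines of plain numpy (no particular quantum SDK is assumed, and the noise model is an illustrative phase over-rotation, not the paper's error model): a gate sequence that composes to the identity should return the register to its initial state, so any deviation measures accumulated gate error.

        import numpy as np

        X = np.array([[0, 1], [1, 0]], dtype=complex)
        H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

        def identity_circuit_fidelity(gates, repetitions, noise=0.0, seed=0):
            """Apply the gates and then their inverses 'repetitions' times on |0>,
            inserting a small random phase over-rotation after each gate."""
            rng = np.random.default_rng(seed)
            state = np.array([1, 0], dtype=complex)
            sequence = gates + [g.conj().T for g in reversed(gates)]
            for _ in range(repetitions):
                for g in sequence:
                    state = g @ state
                    theta = rng.normal(0.0, noise) if noise else 0.0
                    rz = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])
                    state = rz @ state
            return abs(state[0]) ** 2      # probability of measuring |0> again

        print(identity_circuit_fidelity([H, X], repetitions=20, noise=0.02))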

  12. A step-by-step solution for embedding user-controlled cines into educational Web pages.

    PubMed

    Cornfeld, Daniel

    2008-03-01

    The objective of this article is to introduce a simple method for embedding user-controlled cines into a Web page using a simple JavaScript. Step-by-step instructions are included and the source code is made available. This technique allows the creation of portable Web pages that allow the user to scroll through cases as if seated at a PACS workstation. A simple JavaScript allows scrollable image stacks to be included on Web pages. With this technique, you can quickly and easily incorporate entire stacks of CT or MR images into online teaching files. This technique has the potential for use in case presentations, online didactics, teaching archives, and resident testing.

  13. Menu-driven cloud computing and resource sharing for R and Bioconductor.

    PubMed

    Bolouri, Hamid; Dulepet, Rajiv; Angerman, Michael

    2011-08-15

    We report CRdata.org, a cloud-based, free, open-source web server for running analyses and sharing data and R scripts with others. In addition to using the free, public service, CRdata users can launch their own private Amazon Elastic Computing Cloud (EC2) nodes and store private data and scripts on Amazon's Simple Storage Service (S3) with user-controlled access rights. All CRdata services are provided via point-and-click menus. CRdata is open-source and free under the permissive MIT License (opensource.org/licenses/mit-license.php). The source code is in Ruby (ruby-lang.org/en/) and available at: github.com/seerdata/crdata. hbolouri@fhcrc.org.

  14. A generic interface to reduce the efficiency-stability-cost gap of perovskite solar cells

    NASA Astrophysics Data System (ADS)

    Hou, Yi; Du, Xiaoyan; Scheiner, Simon; McMeekin, David P.; Wang, Zhiping; Li, Ning; Killian, Manuela S.; Chen, Haiwei; Richter, Moses; Levchuk, Ievgen; Schrenker, Nadine; Spiecker, Erdmann; Stubhan, Tobias; Luechinger, Norman A.; Hirsch, Andreas; Schmuki, Patrik; Steinrück, Hans-Peter; Fink, Rainer H.; Halik, Marcus; Snaith, Henry J.; Brabec, Christoph J.

    2017-12-01

    A major bottleneck delaying the further commercialization of thin-film solar cells based on hybrid organohalide lead perovskites is interface loss in state-of-the-art devices. We present a generic interface architecture that combines solution-processed, reliable, and cost-efficient hole-transporting materials without compromising efficiency, stability, or scalability of perovskite solar cells. Tantalum-doped tungsten oxide (Ta-WOx)/conjugated polymer multilayers offer a surprisingly small interface barrier and form quasi-ohmic contacts universally with various scalable conjugated polymers. In a simple device with regular planar architecture and a self-assembled monolayer, Ta-WOx-doped interface-based perovskite solar cells achieve maximum efficiencies of 21.2% and offer more than 1000 hours of light stability. By eliminating additional ionic dopants, these findings open up the entire class of organics as scalable hole-transporting materials for perovskite solar cells.

  15. SLURM: Simple Linux Utility for Resource Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jette, M; Dunlap, C; Garlick, J

    2002-07-08

    Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling and stream copy modules. The design also includes a scalable, general-purpose communication infrastructure. This paper presents an overview of the SLURM architecture and functionality.

  16. Scalable Production of Graphene-Based Wearable E-Textiles.

    PubMed

    Karim, Nazmul; Afroj, Shaila; Tan, Sirui; He, Pei; Fernando, Anura; Carr, Chris; Novoselov, Kostya S

    2017-12-26

    Graphene-based wearable e-textiles are considered to be promising due to their advantages over traditional metal-based technology. However, the manufacturing process is complex and currently not suitable for industrial-scale application. Here we report a simple, scalable, and cost-effective method of producing graphene-based wearable e-textiles through the chemical reduction of graphene oxide (GO) to make a stable reduced graphene oxide (rGO) dispersion, which can then be applied to textile fabric using a simple pad-dry technique. This application method allows the potential manufacture of conductive graphene e-textiles at commercial production rates of ∼150 m/min. The graphene e-textile materials produced are durable and washable with acceptable softness/hand feel. The rGO coating enhanced the tensile strength of the cotton fabric and also its flexibility, due to the increase in strain% at maximum load. We demonstrate the potential application of these graphene e-textiles for wearable electronics with an activity-monitoring sensor. This could potentially lead to a multifunctional single graphene e-textile garment that can act both as a sensor and as a flexible heating element, powered by the energy stored in graphene textile supercapacitors.

  17. Compressing Test and Evaluation by Using Flow Data for Scalable Network Traffic Analysis

    DTIC Science & Technology

    2014-10-01

    test events, quality of service and other key metrics of military systems and networks are evaluated. Network data captured in standard flow formats...mentioned here. The Ozone Widget Framework (Next Century, n.d.) has proven to be very useful. Also, an extensive, clean, and optimized JavaScript ...library for visualizing many types of data can be found in D3–Data Driven Documents (Bostock, 2013). Quality of Service from Flow Two essential metrics of

  18. Scripting MODFLOW model development using Python and FloPy

    USGS Publications Warehouse

    Bakker, Mark; Post, Vincent E. A.; Langevin, Christian D.; Hughes, Joseph D.; White, Jeremy; Starn, Jeffrey; Fienen, Michael N.

    2016-01-01

    Graphical user interfaces (GUIs) are commonly used to construct and postprocess numerical groundwater flow and transport models. Scripting model development with the programming language Python is presented here as an alternative approach. One advantage of Python is that there are many packages available to facilitate the model development process, including packages for plotting, array manipulation, optimization, and data analysis. For MODFLOW-based models, the FloPy package was developed by the authors to construct model input files, run the model, and read and plot simulation results. Use of Python with the available scientific packages and FloPy facilitates data exploration, alternative model evaluations, and model analyses that can be difficult to perform with GUIs. Furthermore, Python scripts are a complete, transparent, and repeatable record of the modeling process. The approach is introduced with a simple FloPy example to create and postprocess a MODFLOW model. A more complicated capture-fraction analysis with a real-world model is presented to demonstrate the types of analyses that can be performed using Python and FloPy.
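
    A minimal FloPy workflow of the kind introduced above might look like the sketch below (assuming FloPy's classic MODFLOW-2005 interface and an 'mf2005' executable on the path; this is a bare skeleton to show the build-run-read cycle, not a hydrologically meaningful model).

        import flopy

        mf = flopy.modflow.Modflow(modelname="demo", exe_name="mf2005")
        dis = flopy.modflow.ModflowDis(mf, nlay=1, nrow=10, ncol=10,
                                       delr=100.0, delc=100.0, top=10.0, botm=0.0)
        bas = flopy.modflow.ModflowBas(mf, ibound=1, strt=10.0)
        lpf = flopy.modflow.ModflowLpf(mf, hk=5.0)
        pcg = flopy.modflow.ModflowPcg(mf)
        oc = flopy.modflow.ModflowOc(mf)

        mf.write_input()                # write the MODFLOW input files
        success, buff = mf.run_model()  # run mf2005 (must be installed separately)

        # postprocess: read the simulated heads back for plotting or analysis
        heads = flopy.utils.HeadFile("demo.hds").get_data()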

  19. Simplifying Chandra aperture photometry with srcflux

    NASA Astrophysics Data System (ADS)

    Glotfelty, Kenny

    2014-11-01

    This poster will highlight some of the features of the srcflux script in CIAO. This script combines many threads and tools to compute photometric properties for sources: counts, rates, various fluxes, and confidence intervals or upper limits. Beginning and casual X-ray astronomers benefit greatly from the simple interface: just specify the event file and a celestial location, while power users and X-ray astronomy experts can take advantage of all the parameters to automatically produce catalogs for entire fields. Current limitations and future enhancements of the script will also be presented.

  20. A Simple Modeling Tool and Exercises for Incoming Solar Radiation Demonstrations

    ERIC Educational Resources Information Center

    Werts, Scott; Hinnov, Linda

    2011-01-01

    We present a MATLAB script INSOLATE.m that calculates insolation at the top of the atmosphere and the total amount of daylight during the year (and other quantities) with respect to geographic latitude and Earth's obliquity (axial tilt). The script output displays insolation values for an entire year on a three-dimensional graph. This tool…
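
    A rough Python transcription of the core calculation such a script performs is shown below; it uses the standard textbook expression for daily-mean top-of-atmosphere insolation (Earth-Sun distance variations are ignored) and is not the INSOLATE.m code itself.

        import numpy as np

        S0 = 1361.0  # solar constant, W m^-2

        def daily_insolation(lat_deg, declination_deg):
            """Daily-mean TOA insolation and daylight hours for a given latitude
            and solar declination (the declination is set by the obliquity and
            the time of year)."""
            phi = np.radians(lat_deg)
            delta = np.radians(declination_deg)
            cos_h0 = np.clip(-np.tan(phi) * np.tan(delta), -1.0, 1.0)
            h0 = np.arccos(cos_h0)                      # sunrise hour angle, radians
            q = (S0 / np.pi) * (h0 * np.sin(phi) * np.sin(delta)
                                + np.cos(phi) * np.cos(delta) * np.sin(h0))
            daylight_hours = 24.0 * h0 / np.pi
            return q, daylight_hours

        print(daily_insolation(45.0, 23.44))   # 45 deg N at the June solstice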

  1. TEACHING AND TRAINING WITH MOTION PICTURES (MAGNETIC SOUND).

    ERIC Educational Resources Information Center

    Bell and Howell Co., Lincolnwood, IL.

    THE PREPARATION OF A MAGNETIC-SOUND TRACK FOR 16 MM. MOTION PICTURE FILMS IS DESCRIBED. IN SCRIPT PREPARATION, THE SCRIPT SHOULD BE WRITTEN IN NARRATIVE FORM TO INCLUDE ALL SHOTS NEEDED AND TO SUPPLEMENT AND GIVE INFORMATION NOT IN THE FILM. LANGUAGE SHOULD BE KEPT SIMPLE, AND UNAVOIDABLE TECHNICAL TERMS SHOULD BE EXPLAINED. IN REWRITING THE…

  2. Vanderbilt University Institute of Imaging Science Center for Computational Imaging XNAT: A multimodal data archive and processing environment.

    PubMed

    Harrigan, Robert L; Yvernault, Benjamin C; Boyd, Brian D; Damon, Stephen M; Gibney, Kyla David; Conrad, Benjamin N; Phillips, Nicholas S; Rogers, Baxter P; Gao, Yurui; Landman, Bennett A

    2016-01-01

    The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has developed a database, built on XNAT, housing over a quarter of a million scans. The database provides a framework for (1) rapid prototyping, (2) large-scale batch processing of images and (3) scalable project management. The system uses the web-based interfaces of XNAT and REDCap to allow for graphical interaction. A Python middleware layer, the Distributed Automation for XNAT (DAX) package, distributes computation across the Vanderbilt Advanced Computing Center for Research and Education high-performance computing center. All software is made available as open source for use in combining Portable Batch System (PBS) grids and XNAT servers. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Client-Server Connection Status Monitoring Using Ajax Push Technology

    NASA Technical Reports Server (NTRS)

    Lamongie, Julien R.

    2008-01-01

    This paper describes how simple client-server connection status monitoring can be implemented using Ajax (Asynchronous JavaScript and XML), JSF (Java Server Faces) and ICEfaces technologies. This functionality is required for NASA LCS (Launch Control System) displays used in the firing room for the Constellation project. Two separate implementations based on two distinct approaches are detailed and analyzed.

  4. Menu-driven cloud computing and resource sharing for R and Bioconductor

    PubMed Central

    Bolouri, Hamid; Angerman, Michael

    2011-01-01

    Summary: We report CRdata.org, a cloud-based, free, open-source web server for running analyses and sharing data and R scripts with others. In addition to using the free, public service, CRdata users can launch their own private Amazon Elastic Computing Cloud (EC2) nodes and store private data and scripts on Amazon's Simple Storage Service (S3) with user-controlled access rights. All CRdata services are provided via point-and-click menus. Availability and Implementation: CRdata is open-source and free under the permissive MIT License (opensource.org/licenses/mit-license.php). The source code is in Ruby (ruby-lang.org/en/) and available at: github.com/seerdata/crdata. Contact: hbolouri@fhcrc.org PMID:21685055

  5. Automated large-scale file preparation, docking, and scoring: evaluation of ITScore and STScore using the 2012 Community Structure-Activity Resource benchmark.

    PubMed

    Grinter, Sam Z; Yan, Chengfei; Huang, Sheng-You; Jiang, Lin; Zou, Xiaoqin

    2013-08-26

    In this study, we use the recently released 2012 Community Structure-Activity Resource (CSAR) data set to evaluate two knowledge-based scoring functions, ITScore and STScore, and a simple force-field-based potential (VDWScore). The CSAR data set contains 757 compounds, most with known affinities, and 57 crystal structures. With the help of the script files for docking preparation, we use the full CSAR data set to evaluate the performances of the scoring functions on binding affinity prediction and active/inactive compound discrimination. The CSAR subset that includes crystal structures is used as well, to evaluate the performances of the scoring functions on binding mode and affinity predictions. Within this structure subset, we investigate the importance of accurate ligand and protein conformational sampling and find that the binding affinity predictions are less sensitive to non-native ligand and protein conformations than the binding mode predictions. We also find the full CSAR data set to be more challenging in making binding mode predictions than the subset with structures. The script files used for preparing the CSAR data set for docking, including scripts for canonicalization of the ligand atoms, are offered freely to the academic community.

  6. Applying open source data visualization tools to standard based medical data.

    PubMed

    Kopanitsa, Georgy; Taranik, Maxim

    2014-01-01

    Presentation of medical data in personal health records (PHRs) requires flexible, platform-independent tools to ensure easy access to the information. The varied backgrounds of patients, especially elderly people, call for simple graphical presentation of the data. Data in PHRs can be collected from heterogeneous sources. The use of standards-based medical data allows the development of generic visualization methods. Focusing on the deployment of open source tools, in this paper we applied JavaScript libraries to create data presentations for standards-based medical data.

  7. SAME4HPC: A Promising Approach in Building a Scalable and Mobile Environment for High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karthik, Rajasekar

    2014-01-01

    In this paper, an architecture for building a Scalable And Mobile Environment for High-Performance Computing with spatial capabilities, called SAME4HPC, is described using cutting-edge technologies and standards such as Node.js, HTML5, ECMAScript 6, and PostgreSQL 9.4. Mobile devices are increasingly becoming powerful enough to run high-performance apps. At the same time, there exists a significant number of low-end and older devices that rely heavily on the server or the cloud infrastructure to do the heavy lifting. Our architecture aims to support both of these types of devices to provide high performance and a rich user experience. A cloud infrastructure consisting of OpenStack with Ubuntu, GeoServer, and high-performance JavaScript frameworks are some of the key open-source and industry-standard practices adopted in this architecture.

  8. Scalable quantum computation scheme based on quantum-actuated nuclear-spin decoherence-free qubits

    NASA Astrophysics Data System (ADS)

    Dong, Lihong; Rong, Xing; Geng, Jianpei; Shi, Fazhan; Li, Zhaokai; Duan, Changkui; Du, Jiangfeng

    2017-11-01

    We propose a novel theoretical scheme of quantum computation. Nuclear spin pairs are utilized to encode decoherence-free (DF) qubits. A nitrogen-vacancy center serves as a quantum actuator to initialize, read out, and control the DF qubits. The realization of CNOT gates between two DF qubits is also presented. Numerical simulations show high fidelities for all of these processes. Additionally, we discuss the potential for scalability. Our scheme reduces the challenge of classical interfaces, from controlling and observing a complex quantum system down to a simple quantum actuator. It also provides a novel way to handle complex quantum systems.

  9. A scalable and practical one-pass clustering algorithm for recommender system

    NASA Astrophysics Data System (ADS)

    Khalid, Asra; Ghazanfar, Mustansar Ali; Azam, Awais; Alahmari, Saad Ali

    2015-12-01

    KMeans clustering-based recommendation algorithms have been proposed with the claim of increasing the scalability of recommender systems. One potential drawback of these algorithms is that they perform training offline and hence cannot accommodate incremental updates as new data arrive, making them unsuitable for dynamic environments. From this line of research, a new clustering algorithm called One-Pass is proposed, which is simple, fast, and accurate. We show empirically that the proposed algorithm outperforms K-Means in terms of recommendation and training time while maintaining a good level of accuracy.
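
    The incremental idea can be sketched in a few lines of Python (the distance measure and threshold are illustrative choices, not the published algorithm's exact parameters): each incoming rating vector either joins the nearest existing cluster, whose centroid is updated in place, or seeds a new cluster, so new data are absorbed without retraining from scratch.

        import numpy as np

        class OnePassClusters:
            def __init__(self, threshold):
                self.threshold = threshold
                self.centroids = []     # running means, one per cluster
                self.counts = []        # number of members per cluster

            def add(self, x):
                """Assign one data point and return its cluster index."""
                x = np.asarray(x, dtype=float)
                if self.centroids:
                    dists = [np.linalg.norm(x - c) for c in self.centroids]
                    j = int(np.argmin(dists))
                    if dists[j] <= self.threshold:
                        self.counts[j] += 1
                        self.centroids[j] += (x - self.centroids[j]) / self.counts[j]
                        return j
                self.centroids.append(x.copy())
                self.counts.append(1)
                return len(self.centroids) - 1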

  10. Projecting 2D gene expression data into 3D and 4D space.

    PubMed

    Gerth, Victor E; Katsuyama, Kaori; Snyder, Kevin A; Bowes, Jeff B; Kitayama, Atsushi; Ueno, Naoto; Vize, Peter D

    2007-04-01

    Video games typically generate virtual 3D objects by texture mapping an image onto a 3D polygonal frame. The feeling of movement is then achieved by mathematically simulating camera movement relative to the polygonal frame. We have built customized scripts that adapt video game authoring software to texture mapping images of gene expression data onto b-spline based embryo models. This approach, known as UV mapping, associates two-dimensional (U and V) coordinates within images to the three dimensions (X, Y, and Z) of a b-spline model. B-spline model frameworks were built either from confocal data or de novo extracted from 2D images, once again using video game authoring approaches. This system was then used to build 3D models of 182 genes expressed in developing Xenopus embryos and to implement these in a web-accessible database. Models can be viewed via simple Internet browsers and utilize openGL hardware acceleration via a Shockwave plugin. Not only does this database display static data in a dynamic and scalable manner, the UV mapping system also serves as a method to align different images to a common framework, an approach that may make high-throughput automated comparisons of gene expression patterns possible. Finally, video game systems also have elegant methods for handling movement, allowing biomechanical algorithms to drive the animation of models. With further development, these biomechanical techniques offer practical methods for generating virtual embryos that recapitulate morphogenesis.
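
    The essence of UV mapping is that a (u, v) coordinate in [0, 1]^2 attached to a 3D vertex indexes into a 2D expression image, so the same image can be draped over any model that carries UV coordinates. The Python fragment below illustrates just that lookup (numpy and Pillow are assumed; the b-spline mesh itself is omitted).

        import numpy as np
        from PIL import Image

        def sample_texture(image_path, uv):
            """Return pixel values for an array of (u, v) texture coordinates."""
            img = np.asarray(Image.open(image_path))
            h, w = img.shape[:2]
            uv = np.clip(np.asarray(uv, dtype=float), 0.0, 1.0)
            cols = (uv[:, 0] * (w - 1)).round().astype(int)          # u -> column
            rows = ((1.0 - uv[:, 1]) * (h - 1)).round().astype(int)  # v -> row (image origin is top-left)
            return img[rows, cols]

        # e.g. colors = sample_texture("expression.png", [(0.25, 0.5), (0.8, 0.1)])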

  11. Reusable Client-Side JavaScript Modules for Immersive Web-Based Real-Time Collaborative Neuroimage Visualization.

    PubMed

    Bernal-Rusiel, Jorge L; Rannou, Nicolas; Gollub, Randy L; Pieper, Steve; Murphy, Shawn; Robertson, Richard; Grant, Patricia E; Pienaar, Rudolph

    2017-01-01

    In this paper we present a web-based software solution to the problem of implementing real-time collaborative neuroimage visualization. In both clinical and research settings, simple and powerful access to imaging technologies across multiple devices is becoming increasingly useful. Prior technical solutions have used a server-side rendering and push-to-client model wherein only the server has the full image dataset. We propose a rich client solution in which each client has all the data and uses the Google Drive Realtime API for state synchronization. We have developed a small set of reusable client-side object-oriented JavaScript modules that make use of the XTK toolkit, a popular open-source JavaScript library also developed by our team, for the in-browser rendering and visualization of brain image volumes. Efficient realtime communication among the remote instances is achieved by using just a small JSON object, comprising a representation of the XTK image renderers' state, as the Google Drive Realtime collaborative data model. The developed open-source JavaScript modules have already been instantiated in a web-app called MedView , a distributed collaborative neuroimage visualization application that is delivered to the users over the web without requiring the installation of any extra software or browser plugin. This responsive application allows multiple physically distant physicians or researchers to cooperate in real time to reach a diagnosis or scientific conclusion. It also serves as a proof of concept for the capabilities of the presented technological solution.

  12. Functional Two-Dimensional Coordination Polymeric Layer as a Charge Barrier in Li-S Batteries.

    PubMed

    Huang, Jing-Kai; Li, Mengliu; Wan, Yi; Dey, Sukumar; Ostwal, Mayur; Zhang, Daliang; Yang, Chih-Wen; Su, Chun-Jen; Jeng, U-Ser; Ming, Jun; Amassian, Aram; Lai, Zhiping; Han, Yu; Li, Sean; Li, Lain-Jong

    2018-01-23

    Ultrathin two-dimensional (2D) polymeric layers are capable of separating gases and molecules based on the reported size exclusion mechanism. What is equally important but missing today is an exploration of the 2D layers with charge functionality, which enables applications using the charge exclusion principle. This work demonstrates a simple and scalable method of synthesizing a free-standing 2D coordination polymer Zn2(benzimidazolate)2(OH)2 at the air-water interface. The hydroxyl (-OH) groups are stoichiometrically coordinated and implement electrostatic charges in the 2D structures, providing powerful functionality as a charge barrier. Electrochemical performance of the Li-S battery shows that the Zn2(benzimidazolate)2(OH)2 coordination polymer layers efficiently mitigate the polysulfide shuttling effects and largely enhance the battery capacity and cycle performance. The synthesis of the proposed coordination polymeric layers is simple, scalable, cost saving, and promising for practical use in batteries.

  13. Server-Side JavaScript Debugging: Viewing the Contents of an Object

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hampton, J.; Simons, R.

    1999-04-21

    JavaScript allows the definition and use of large, complex objects. Unlike some other object-oriented languages, it also allows run-time modifications not only of the values of object components, but also of the very structure of the object itself. This feature is powerful and sometimes very convenient, but it can be difficult to keep track of the object's structure and values throughout program execution. What's needed is a simple way to view the current state of an object at any point during execution. There is a debug function that is included in the Netscape server-side JavaScript environment. The function outputs the value(s) of the expression given as the argument to the function in the JavaScript Application Manager's debug window [SSJS].

  14. Scalable Production of Graphene-Based Wearable E-Textiles

    PubMed Central

    2017-01-01

    Graphene-based wearable e-textiles are considered to be promising due to their advantages over traditional metal-based technology. However, the manufacturing process is complex and currently not suitable for industrial-scale application. Here we report a simple, scalable, and cost-effective method of producing graphene-based wearable e-textiles through the chemical reduction of graphene oxide (GO) to make a stable reduced graphene oxide (rGO) dispersion, which can then be applied to textile fabric using a simple pad-dry technique. This application method allows the potential manufacture of conductive graphene e-textiles at commercial production rates of ∼150 m/min. The graphene e-textile materials produced are durable and washable with acceptable softness/hand feel. The rGO coating enhanced the tensile strength of the cotton fabric and also its flexibility, due to the increase in strain% at maximum load. We demonstrate the potential application of these graphene e-textiles for wearable electronics with an activity-monitoring sensor. This could potentially lead to a multifunctional single graphene e-textile garment that can act both as a sensor and as a flexible heating element, powered by the energy stored in graphene textile supercapacitors. PMID:29185706

  15. BigDataScript: a scripting language for data pipelines.

    PubMed

    Cingolani, Pablo; Sladek, Rob; Blanchette, Mathieu

    2015-01-01

    The analysis of large biological datasets often requires complex processing pipelines that run for a long time on large computational infrastructures. We designed and implemented a simple script-like programming language with a clean and minimalist syntax to develop and manage pipeline execution and provide robustness to various types of software and hardware failures as well as portability. We introduce the BigDataScript (BDS) programming language for data processing pipelines, which improves abstraction from hardware resources and assists with robustness. Hardware abstraction allows BDS pipelines to run without modification on a wide range of computer architectures, from a small laptop to multi-core servers, server farms, clusters and clouds. BDS achieves robustness by incorporating the concepts of absolute serialization and lazy processing, thus allowing pipelines to recover from errors. By abstracting pipeline concepts at programming language level, BDS simplifies implementation, execution and management of complex bioinformatics pipelines, resulting in reduced development and debugging cycles as well as cleaner code. BigDataScript is available under open-source license at http://pcingola.github.io/BigDataScript. © The Author 2014. Published by Oxford University Press.

  16. BigDataScript: a scripting language for data pipelines

    PubMed Central

    Cingolani, Pablo; Sladek, Rob; Blanchette, Mathieu

    2015-01-01

    Motivation: The analysis of large biological datasets often requires complex processing pipelines that run for a long time on large computational infrastructures. We designed and implemented a simple script-like programming language with a clean and minimalist syntax to develop and manage pipeline execution and provide robustness to various types of software and hardware failures as well as portability. Results: We introduce the BigDataScript (BDS) programming language for data processing pipelines, which improves abstraction from hardware resources and assists with robustness. Hardware abstraction allows BDS pipelines to run without modification on a wide range of computer architectures, from a small laptop to multi-core servers, server farms, clusters and clouds. BDS achieves robustness by incorporating the concepts of absolute serialization and lazy processing, thus allowing pipelines to recover from errors. By abstracting pipeline concepts at programming language level, BDS simplifies implementation, execution and management of complex bioinformatics pipelines, resulting in reduced development and debugging cycles as well as cleaner code. Availability and implementation: BigDataScript is available under open-source license at http://pcingola.github.io/BigDataScript. Contact: pablo.e.cingolani@gmail.com PMID:25189778

  17. mpiGraph

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, Adam

    2007-05-22

    MpiGraph consists of an MPI application called mpiGraph, written in C to measure message bandwidth, and an associated crunch_mpiGraph script, written in Perl to process the application output into an HTML report. The mpiGraph application is designed to inspect the health and scalability of a high-performance interconnect while under heavy load. This is useful for detecting hardware and software problems in a system, such as slow nodes, links, switches, or contention in switch routing. It is also useful for characterizing how interconnect performance changes with different settings or how one interconnect type compares to another.
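
    As a rough Python analogue of the measurement mpiGraph performs (a simplified stand-in using mpi4py and numpy, not the distributed C code), the fragment below times repeated point-to-point sends between two ranks and reports the observed bandwidth.

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        msg = np.zeros(1 << 20, dtype=np.uint8)   # 1 MiB payload
        iters = 100

        comm.Barrier()
        t0 = MPI.Wtime()
        if size >= 2 and rank == 0:
            for _ in range(iters):
                comm.Send(msg, dest=1)
        elif size >= 2 and rank == 1:
            for _ in range(iters):
                comm.Recv(msg, source=0)
        elapsed = MPI.Wtime() - t0
        if size >= 2 and rank == 0:
            print(f"0 -> 1 bandwidth: {iters * msg.nbytes / elapsed / 1e6:.1f} MB/s")
        # run with e.g.: mpirun -n 2 python bandwidth_sketch.py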

  18. ChemCalc: a building block for tomorrow's chemical infrastructure.

    PubMed

    Patiny, Luc; Borel, Alain

    2013-05-24

    Web services, as an aspect of cloud computing, are becoming an important part of the general IT infrastructure, and scientific computing is no exception to this trend. We propose a simple approach to develop chemical Web services, through which servers could expose the essential data manipulation functionality that students and researchers need for chemical calculations. These services return their results as JSON (JavaScript Object Notation) objects, which facilitates their use for Web applications. The ChemCalc project http://www.chemcalc.org demonstrates this approach: we present three Web services related with mass spectrometry, namely isotopic distribution simulation, peptide fragmentation simulation, and molecular formula determination. We also developed a complete Web application based on these three Web services, taking advantage of modern HTML5 and JavaScript libraries (ChemDoodle and jQuery).
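
    Calling such a JSON service from a script is straightforward; the hedged Python sketch below uses the 'requests' library, but the endpoint path and parameter names are assumptions for illustration only (the ChemCalc site documents the actual service interface).

        import requests

        def molecular_formula_info(formula):
            """Ask a ChemCalc-style web service for data about a molecular formula."""
            resp = requests.get("http://www.chemcalc.org/chemcalc/mf",   # assumed path
                                params={"mf": formula},                  # assumed parameter
                                timeout=30)
            resp.raise_for_status()
            return resp.json()   # a plain JSON object, easy to reuse in web or desktop apps

        # info = molecular_formula_info("C6H12O6")
        # print(info)   # available fields depend on the service; treat them as opaque here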

  19. The Latent Structure of Secure Base Script Knowledge

    ERIC Educational Resources Information Center

    Waters, Theodore E. A.; Fraley, R. Chris; Groh, Ashley M.; Steele, Ryan D.; Vaughn, Brian E.; Bost, Kelly K.; Veríssimo, Manuela; Coppola, Gabrielle; Roisman, Glenn I.

    2015-01-01

    There is increasing evidence that attachment representations abstracted from childhood experiences with primary caregivers are organized as a cognitive script describing secure base use and support (i.e., the "secure base script"). To date, however, the latent structure of secure base script knowledge has gone unexamined--this despite…

  20. A hierarchical SVG image abstraction layer for medical imaging

    NASA Astrophysics Data System (ADS)

    Kim, Edward; Huang, Xiaolei; Tan, Gang; Long, L. Rodney; Antani, Sameer

    2010-03-01

    As medical imaging rapidly expands, there is an increasing need to structure and organize image data for efficient analysis, storage and retrieval. In response, a large fraction of research in the areas of content-based image retrieval (CBIR) and picture archiving and communication systems (PACS) has focused on structuring information to bridge the "semantic gap", a disparity between machine and human image understanding. An additional consideration in medical images is the organization and integration of clinical diagnostic information. As a step towards bridging the semantic gap, we design and implement a hierarchical image abstraction layer using an XML based language, Scalable Vector Graphics (SVG). Our method encodes features from the raw image and clinical information into an extensible "layer" that can be stored in a SVG document and efficiently searched. Any feature extracted from the raw image including, color, texture, orientation, size, neighbor information, etc., can be combined in our abstraction with high level descriptions or classifications. And our representation can natively characterize an image in a hierarchical tree structure to support multiple levels of segmentation. Furthermore, being a world wide web consortium (W3C) standard, SVG is able to be displayed by most web browsers, interacted with by ECMAScript (standardized scripting language, e.g. JavaScript, JScript), and indexed and retrieved by XML databases and XQuery. Using these open source technologies enables straightforward integration into existing systems. From our results, we show that the flexibility and extensibility of our abstraction facilitates effective storage and retrieval of medical images.
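
    The general idea of the abstraction layer, machine-readable region descriptions stored in an SVG group that sits alongside the raw image, can be pictured with the short Python sketch below; the element structure and the data-label attribute are illustrative, not the paper's schema.

        import xml.etree.ElementTree as ET

        SVG_NS = "http://www.w3.org/2000/svg"

        def build_annotation_layer(image_href, regions):
            """regions: list of dicts with x, y, width, height and a free-text label."""
            ET.register_namespace("", SVG_NS)
            svg = ET.Element(f"{{{SVG_NS}}}svg", width="512", height="512")
            ET.SubElement(svg, f"{{{SVG_NS}}}image", href=image_href,
                          width="512", height="512")
            layer = ET.SubElement(svg, f"{{{SVG_NS}}}g", id="annotations")
            for r in regions:
                rect = ET.SubElement(layer, f"{{{SVG_NS}}}rect",
                                     x=str(r["x"]), y=str(r["y"]),
                                     width=str(r["width"]), height=str(r["height"]))
                rect.set("data-label", r["label"])   # searchable, human-readable description
            return ET.tostring(svg, encoding="unicode")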

  1. Scripting MODFLOW Model Development Using Python and FloPy.

    PubMed

    Bakker, M; Post, V; Langevin, C D; Hughes, J D; White, J T; Starn, J J; Fienen, M N

    2016-09-01

    Graphical user interfaces (GUIs) are commonly used to construct and postprocess numerical groundwater flow and transport models. Scripting model development with the programming language Python is presented here as an alternative approach. One advantage of Python is that there are many packages available to facilitate the model development process, including packages for plotting, array manipulation, optimization, and data analysis. For MODFLOW-based models, the FloPy package was developed by the authors to construct model input files, run the model, and read and plot simulation results. Use of Python with the available scientific packages and FloPy facilitates data exploration, alternative model evaluations, and model analyses that can be difficult to perform with GUIs. Furthermore, Python scripts are a complete, transparent, and repeatable record of the modeling process. The approach is introduced with a simple FloPy example to create and postprocess a MODFLOW model. A more complicated capture-fraction analysis with a real-world model is presented to demonstrate the types of analyses that can be performed using Python and FloPy. © 2016, National Ground Water Association.

  2. Development of a Web-Based Distributed Interactive Simulation (DIS) Environment Using JavaScript

    DTIC Science & Technology

    2014-09-01

    scripting that let users change or interact with web content depending on user input, which is in contrast with server-side scripts such as PHP, Java and...transfer, DIS usually broadcasts or multicasts its PDUs based on UDP socket. 3. JavaScript JavaScript is the scripting language of the web, and all...IDE) for developing desktop, mobile and web applications with JAVA , C++, HTML5, JavaScript and more. b. Framework The DIS implementation of

  3. Python as a federation tool for GENESIS 3.0.

    PubMed

    Cornelis, Hugo; Rodriguez, Armando L; Coop, Allan D; Bower, James M

    2012-01-01

    The GENESIS simulation platform was one of the first broad-scale modeling systems in computational biology to encourage modelers to develop and share model features and components. Supported by a large developer community, it participated in innovative simulator technologies such as benchmarking, parallelization, and declarative model specification and was the first neural simulator to define bindings for the Python scripting language. An important feature of the latest version of GENESIS is that it decomposes into self-contained software components complying with the Computational Biology Initiative federated software architecture. This architecture allows separate scripting bindings to be defined for different necessary components of the simulator, e.g., the mathematical solvers and graphical user interface. Python is a scripting language that provides rich sets of freely available open source libraries. With clean dynamic object-oriented designs, they produce highly readable code and are widely employed in specialized areas of software component integration. We employ a simplified wrapper and interface generator to examine an application programming interface and make it available to a given scripting language. This allows independent software components to be 'glued' together and connected to external libraries and applications from user-defined Python or Perl scripts. We illustrate our approach with three examples of Python scripting. (1) Generate and run a simple single-compartment model neuron connected to a stand-alone mathematical solver. (2) Interface a mathematical solver with GENESIS 3.0 to explore a neuron morphology from either an interactive command-line or graphical user interface. (3) Apply scripting bindings to connect the GENESIS 3.0 simulator to external graphical libraries and an open source three dimensional content creation suite that supports visualization of models based on electron microscopy and their conversion to computational models. Employed in this way, the stand-alone software components of the GENESIS 3.0 simulator provide a framework for progressive federated software development in computational neuroscience.

  4. Python as a Federation Tool for GENESIS 3.0

    PubMed Central

    Cornelis, Hugo; Rodriguez, Armando L.; Coop, Allan D.; Bower, James M.

    2012-01-01

    The GENESIS simulation platform was one of the first broad-scale modeling systems in computational biology to encourage modelers to develop and share model features and components. Supported by a large developer community, it participated in innovative simulator technologies such as benchmarking, parallelization, and declarative model specification and was the first neural simulator to define bindings for the Python scripting language. An important feature of the latest version of GENESIS is that it decomposes into self-contained software components complying with the Computational Biology Initiative federated software architecture. This architecture allows separate scripting bindings to be defined for different necessary components of the simulator, e.g., the mathematical solvers and graphical user interface. Python is a scripting language that provides rich sets of freely available open source libraries. With clean dynamic object-oriented designs, they produce highly readable code and are widely employed in specialized areas of software component integration. We employ a simplified wrapper and interface generator to examine an application programming interface and make it available to a given scripting language. This allows independent software components to be ‘glued’ together and connected to external libraries and applications from user-defined Python or Perl scripts. We illustrate our approach with three examples of Python scripting. (1) Generate and run a simple single-compartment model neuron connected to a stand-alone mathematical solver. (2) Interface a mathematical solver with GENESIS 3.0 to explore a neuron morphology from either an interactive command-line or graphical user interface. (3) Apply scripting bindings to connect the GENESIS 3.0 simulator to external graphical libraries and an open source three dimensional content creation suite that supports visualization of models based on electron microscopy and their conversion to computational models. Employed in this way, the stand-alone software components of the GENESIS 3.0 simulator provide a framework for progressive federated software development in computational neuroscience. PMID:22276101

  5. Secure web-based access to radiology: forms and databases for fast queries

    NASA Astrophysics Data System (ADS)

    McColl, Roderick W.; Lane, Thomas J.

    2002-05-01

    Currently, Web-based access to mini-PACS or similar databases commonly utilizes either JavaScript, Java applets or ActiveX controls. Many sites do not permit applets, controls or other binary objects for fear of viruses or worms sent by malicious users. In addition, the typical CGI query mechanism requires several parameters to be sent with the http GET/POST request, which may identify the patient in some way; this is unacceptable for privacy protection. Also unacceptable are pages produced by server-side scripts that can be cached by the browser, since these may also contain sensitive information. We propose a simple mechanism for access to patient information, including images, which guarantees the security of the information and makes it impossible to bookmark the page or to return to it after some defined length of time. The mechanism is also simple, permitting rapid access without the need to first download an interface such as an applet or control. In addition to image display, the design of the site allows the user to view and save movies of multi-phasic data, or to construct multi-frame datasets from entire series. These capabilities make the site attractive for research purposes such as teaching file preparation.
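
    The mechanism described (pages that cannot be bookmarked, cached, or revisited after a defined time) can be approximated with single-use, expiring tokens and no-cache headers. The sketch below, using only the Python standard library, is a generic illustration of that idea rather than the authors' implementation; all names and the TTL value are hypothetical.

        # Generic sketch of single-use, time-limited page tokens (not the
        # authors' system): a token is issued per request, expires quickly,
        # and the response is marked non-cacheable.
        import secrets, time

        TOKEN_TTL = 120          # seconds a token stays valid (assumed value)
        _tokens = {}             # token -> expiry timestamp

        def issue_token():
            token = secrets.token_urlsafe(16)
            _tokens[token] = time.time() + TOKEN_TTL
            return token

        def consume_token(token):
            """Valid only once and only before expiry."""
            expiry = _tokens.pop(token, None)
            return expiry is not None and time.time() < expiry

        NO_CACHE_HEADERS = {
            "Cache-Control": "no-store, no-cache, must-revalidate",
            "Pragma": "no-cache",
            "Expires": "0",
        }

        t = issue_token()
        print(consume_token(t))   # True the first time
        print(consume_token(t))   # False afterwards (cannot be bookmarked/replayed)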

  6. Conversion of the agent-oriented domain-specific language ALAS into JavaScript

    NASA Astrophysics Data System (ADS)

    Sredojević, Dejan; Vidaković, Milan; Okanović, Dušan; Mitrović, Dejan; Ivanović, Mirjana

    2016-06-01

    This paper describes the generation of JavaScript code from code written in ALAS, an agent-oriented domain-specific language for writing software agents that are executed within the XJAF middleware. Since the agents can be executed on various platforms, they must be converted into a language of the target platform. We also utilize existing tools and technologies to make the whole conversion process as simple, fast, and efficient as possible. We use the Xtext framework, which is compatible with Java, to implement the ALAS infrastructure: the editor and the code generator. Since Xtext supports Java, generation of Java code from ALAS code is straightforward. To generate JavaScript code that will be executed within the target JavaScript XJAF implementation, the Google Web Toolkit (GWT) is used.
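
    To make the source-to-target translation concrete, here is a toy Python sketch that turns statements of a made-up agent mini-language into JavaScript strings. It only illustrates the general code-generation step; the grammar below is invented and the real ALAS grammar and Xtext/GWT pipeline are not reproduced here.

        # Toy code generator: translate a made-up mini agent DSL into JavaScript.
        # The grammar below is invented for illustration and is not ALAS.
        import re

        def generate_js(dsl_source):
            js_lines = []
            for line in dsl_source.strip().splitlines():
                line = line.strip()
                if m := re.match(r"agent (\w+)", line):
                    js_lines.append(f"function {m.group(1)}Agent() {{")
                elif m := re.match(r"on message (\w+) send (\w+)", line):
                    js_lines.append(
                        f"  this.onMessage = function(msg) {{"
                        f" if (msg.type === '{m.group(1)}')"
                        f" this.send('{m.group(2)}'); }};")
                elif line == "end":
                    js_lines.append("}")
            return "\n".join(js_lines)

        print(generate_js("""
        agent Ping
        on message ping send pong
        end
        """))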

  7. [Ecologic evaluation in the cognitive assessment of brain injury patients: generation and execution of script].

    PubMed

    Baguena, N; Thomas-Antérion, C; Sciessere, K; Truche, A; Extier, C; Guyot, E; Paris, N

    2006-06-01

    Assessment of executive functions in an everyday-life activity: evaluating brain-injured subjects with script generation and execution tasks. We compared a script generation task to a script execution task in which subjects had to prepare a cooked dish. Two scoring grids, one qualitative and one quantitative, were used, together with the calculation of an anosognosia score. We checked whether the execution task was more sensitive to dysexecutive disorder than the script generation task and compared the scores obtained in this evaluation with those from classical frontal tests. Twelve subjects with brain injury sustained 6 (+/-4.79) years earlier and 12 healthy control subjects were tested. The subjects carried out a script generation task in which they had to explain the stages necessary to make a chocolate cake. They also performed a script execution task corresponding to actually making the cake. The two scoring grids proved operational and complementary. The quantitative grid is more sensitive to dysexecutive disorder. The brain-injured subjects made more errors in the execution task. It is important to evaluate the executive functions of subjects with brain injury in everyday-life tasks, not just in psychometric or script-generation tests. Indeed, the ecological performance of a very simple task can reveal executive difficulties, such as the planning or sequencing of actions, which are under-evaluated in laboratory tests.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, C

    Purpose: To implement a novel, automatic, institutionally customizable DVH quantities evaluation and PDF report tool on the Philips Pinnacle treatment planning system (TPS). Methods: An add-on program (P3DVHStats) is developed by us to enable automatic DVH quantities evaluation (including both volume- and dose-based quantities, such as V98, V100, D2) and automatic PDF-format report generation, for EMR convenience. The implementation is based on a combination of the Philips Pinnacle scripting tool and the Java language pre-installed on each Pinnacle Sun Solaris workstation. A single Pinnacle script provides users convenient access to the program when needed. The activated script first exports DVH data for the user-selected ROIs from the current Pinnacle plan trial; a Java program then provides a simple GUI, uses the data to compute any user-requested DVH quantities, and compares them with preset institutional DVH planning goals; if accepted by the user, the program also generates a PDF report of the results and exports it from Pinnacle to the EMR import folder via FTP. Results: The program was tested thoroughly and has been released for clinical use at our institution (Pinnacle Enterprise server with both thin clients and P3PC access) for all dosimetry and physics staff, with excellent feedback. It used to take a few minutes with an MS-Excel worksheet to calculate these DVH quantities for IMRT/VMAT plans and manually save them as a PDF report; with the new program, it literally takes a few mouse clicks in less than 30 seconds to complete the same tasks. Conclusion: A Pinnacle scripting and Java language based program is successfully implemented and customized to our institutional needs. It is shown to dramatically reduce the time and effort needed for DVH quantities computing and EMR reporting.
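
    For context, DVH quantities such as V100 (volume receiving at least 100% of the prescription dose) and D2 (minimum dose to the hottest 2% of the volume) can be read off a cumulative DVH by interpolation. The NumPy sketch below is a generic illustration of that calculation, not the P3DVHStats code; the array layout and toy DVH curve are assumptions.

        # Generic DVH-quantity calculation from a cumulative DVH
        # (dose in % of prescription vs. volume in %). Not the P3DVHStats code.
        import numpy as np

        dose_pct   = np.linspace(0, 120, 121)                          # assumed dose axis
        volume_pct = 100.0 / (1.0 + np.exp((dose_pct - 100) / 3.0))    # toy cumulative DVH

        def V(x):
            """Volume (%) receiving at least x% of the prescription dose."""
            return float(np.interp(x, dose_pct, volume_pct))

        def D(y):
            """Minimum dose (%) received by the hottest y% of the volume."""
            # volume_pct is monotonically decreasing; flip it for np.interp
            return float(np.interp(y, volume_pct[::-1], dose_pct[::-1]))

        print("V98  = %.1f %%" % V(98))
        print("V100 = %.1f %%" % V(100))
        print("D2   = %.1f %%" % D(2))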

  9. Decision Support Systems for Launch and Range Operations Using Jess

    NASA Technical Reports Server (NTRS)

    Thirumalainambi, Rajkumar

    2007-01-01

    The virtual test bed for launch and range operations developed at NASA Ames Research Center consists of various independent expert systems advising on weather effects, toxic gas dispersions and human health risk assessment during space-flight operations. An individual dedicated server supports each expert system, and the master system gathers information from the dedicated servers to support the launch decision-making process. Since the test bed is web-based, reducing network traffic and optimizing the knowledge base is critical to the success of real-time or near real-time operations. Jess, a fast rule engine and powerful scripting environment developed at Sandia National Laboratory, has been adopted to build the expert systems, providing robustness and scalability. Jess also supports XML representation of the knowledge base with forward- and backward-chaining inference mechanisms. Facts added to working memory during run-time operations facilitate analyses of multiple scenarios. The knowledge base can be distributed, with one inference engine performing the inference process. This paper discusses details of the knowledge base and inference engine using Jess for a launch and range virtual test bed.
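
    Jess itself uses a Rete-based rule engine with its own syntax; purely as an illustration of the forward-chaining idea behind such expert systems, here is a minimal Python sketch in which rules fire whenever their conditions match facts in working memory. It is not Jess code, and the facts and rules are invented rather than taken from the test bed's knowledge base.

        # Minimal forward-chaining illustration (not Jess): rules add new facts
        # to working memory until nothing more can be derived.
        facts = {("wind_speed", "high"), ("toxic_plume", "toward_pad")}

        rules = [
            # (name, required facts, fact to assert)
            ("weather-hold", {("wind_speed", "high")}, ("launch", "hold")),
            ("evacuate",     {("toxic_plume", "toward_pad"), ("launch", "hold")},
                             ("pad_crew", "evacuate")),
        ]

        changed = True
        while changed:
            changed = False
            for name, conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    print(f"rule '{name}' fired -> {conclusion}")
                    changed = True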

  10. It's in the Bag: Digital Backpacks for Project-Based Learning

    ERIC Educational Resources Information Center

    Basham, James D.; Perry, Ernest; Meyer, Helen

    2011-01-01

    When it comes to technology, many schools know what they want. They want targeted and scalable solutions that enhance learning and meet the NETS.S. And the teachers in those schools want simple, strategic instructional frameworks for developing their students' basic and digital age skills while meeting diverse learning needs. But as many…

  11. FastScript3D - A Companion to Java 3D

    NASA Technical Reports Server (NTRS)

    Koenig, Patti

    2005-01-01

    FastScript3D is a computer program, written in the Java 3D(TM) programming language, that establishes an alternative language that helps users who lack expertise in Java 3D to use Java 3D for constructing three-dimensional (3D)-appearing graphics. The FastScript3D language provides a set of simple, intuitive, one-line text-string commands for creating, controlling, and animating 3D models. The first word in a string is the name of a command; the rest of the string contains the data arguments for the command. The commands can also be used as an aid to learning Java 3D. Developers can extend the language by adding custom text-string commands. The commands can define new 3D objects or load representations of 3D objects from files in formats compatible with such other software systems as X3D. The text strings can be easily integrated into other languages. FastScript3D facilitates communication between scripting languages [which enable programming of hyper-text markup language (HTML) documents to interact with users] and Java 3D. The FastScript3D language can be extended and customized on both the scripting side and the Java 3D side.
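
    The command format described (the first word is the command name, the rest of the string contains its data arguments) maps naturally onto a small dispatcher. The Python sketch below illustrates that parsing pattern generically; the command names are invented and this is not the FastScript3D implementation.

        # Generic one-line text-command dispatcher in the spirit described:
        # first token selects the command, remaining tokens are its arguments.
        # Command names below are invented examples, not FastScript3D commands.

        def cmd_sphere(name, radius):
            print(f"create sphere '{name}' with radius {float(radius)}")

        def cmd_rotate(name, axis, degrees):
            print(f"rotate '{name}' about {axis} by {float(degrees)} degrees")

        COMMANDS = {"sphere": cmd_sphere, "rotate": cmd_rotate}

        def run_script(text):
            for line in text.strip().splitlines():
                tokens = line.split()
                if not tokens:
                    continue
                name, args = tokens[0], tokens[1:]
                COMMANDS[name](*args)      # unknown commands raise KeyError

        run_script("""
        sphere ball 2.5
        rotate ball y 45
        """)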

  12. Crossing boundaries in interprofessional education: A call for instructional integration of two script concepts.

    PubMed

    Kiesewetter, Jan; Kollar, Ingo; Fernandez, Nicolas; Lubarsky, Stuart; Kiessling, Claudia; Fischer, Martin R; Charlin, Bernard

    2016-09-01

    Clinical work occurs in a context which is heavily influenced by social interactions. The absence of theoretical frameworks underpinning the design of collaborative learning has become a roadblock for interprofessional education (IPE). This article proposes a script-based framework for the design of IPE. This framework provides suggestions for designing learning environments intended to foster competences we feel are fundamental to successful interprofessional care. The current literature describes two script concepts: "illness scripts" and "internal/external collaboration scripts". Illness scripts are specific knowledge structures that link general disease categories and specific examples of diseases. "Internal collaboration scripts" refer to an individual's knowledge about how to interact with others in a social situation. "External collaboration scripts" are instructional scaffolds designed to help groups collaborate. Instructional research relating to illness scripts and internal collaboration scripts supports (a) putting learners in authentic situations in which they need to engage in clinical reasoning, and (b) scaffolding their interaction with others with "external collaboration scripts". Thus, well-established experiential instructional approaches should be combined with more fine-grained script-based scaffolding approaches. The resulting script-based framework offers instructional designers insights into how students can be supported to develop the necessary skills to master complex interprofessional clinical situations.

  13. Reusable Client-Side JavaScript Modules for Immersive Web-Based Real-Time Collaborative Neuroimage Visualization

    PubMed Central

    Bernal-Rusiel, Jorge L.; Rannou, Nicolas; Gollub, Randy L.; Pieper, Steve; Murphy, Shawn; Robertson, Richard; Grant, Patricia E.; Pienaar, Rudolph

    2017-01-01

    In this paper we present a web-based software solution to the problem of implementing real-time collaborative neuroimage visualization. In both clinical and research settings, simple and powerful access to imaging technologies across multiple devices is becoming increasingly useful. Prior technical solutions have used a server-side rendering and push-to-client model wherein only the server has the full image dataset. We propose a rich client solution in which each client has all the data and uses the Google Drive Realtime API for state synchronization. We have developed a small set of reusable client-side object-oriented JavaScript modules that make use of the XTK toolkit, a popular open-source JavaScript library also developed by our team, for the in-browser rendering and visualization of brain image volumes. Efficient realtime communication among the remote instances is achieved by using just a small JSON object, comprising a representation of the XTK image renderers' state, as the Google Drive Realtime collaborative data model. The developed open-source JavaScript modules have already been instantiated in a web-app called MedView, a distributed collaborative neuroimage visualization application that is delivered to the users over the web without requiring the installation of any extra software or browser plugin. This responsive application allows multiple physically distant physicians or researchers to cooperate in real time to reach a diagnosis or scientific conclusion. It also serves as a proof of concept for the capabilities of the presented technological solution. PMID:28507515

  14. Recent Developments in OVERGRID, OVERFLOW-2 and Chimera Grid Tools Scripts

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    2004-01-01

    OVERGRID and OVERFLOW-2 feature easy to use multiple-body dynamics. The new features of OVERGRID include a preliminary chemistry interface, standard atmosphere and mass properties calculators, a simple unsteady solution viewer, and a debris tracking interface. Script library development in Chimera Grid Tools has applications in turbopump grid generation. This viewgraph presentation profiles multiple component dynamics, validation test cases for a sphere, cylinder, and oscillating airfoil, and debris analysis.

  15. A generic implementation of replica exchange with solute tempering (REST2) algorithm in NAMD for complex biophysical simulations

    NASA Astrophysics Data System (ADS)

    Jo, Sunhwan; Jiang, Wei

    2015-12-01

    Replica Exchange with Solute Tempering (REST2) is a powerful sampling-enhancement algorithm for molecular dynamics (MD) in that it needs a significantly smaller number of replicas yet achieves higher sampling efficiency than the standard temperature-exchange algorithm. In this paper, we extend the applicability of REST2 to quantitative biophysical simulations through a robust and generic implementation in the highly scalable MD software NAMD. The rescaling procedure for the force-field parameters controlling the REST2 "hot region" is implemented into NAMD at the source-code level. A user can conveniently select the hot region through VMD and write the selection information into a PDB file. The rescaling keyword/parameter is exposed through the NAMD Tcl scripting interface, which enables on-the-fly simulation parameter changes. Our implementation of REST2 is within a communication-enabled Tcl script built on top of Charm++; thus the communication overhead of an exchange attempt is vanishingly small. Such a generic implementation facilitates seamless cooperation between REST2 and other modules of NAMD to provide enhanced sampling for complex biomolecular simulations. Three challenging applications, including a native REST2 simulation of a peptide folding-unfolding transition, free energy perturbation/REST2 for the absolute binding affinity of a protein-ligand complex, and umbrella sampling/REST2 Hamiltonian exchange for free energy landscape calculation, were carried out on an IBM Blue Gene/Q supercomputer to demonstrate the efficacy of REST2 based on the present implementation.
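
    For orientation, in the usual REST2 formulation the effective tempering of the "hot region" comes from rescaling its force-field parameters rather than the thermostat: intra-solute interactions are scaled by lam = beta_m/beta_0 and solute-water interactions by sqrt(lam), while water-water terms are untouched. The sketch below shows that rescaling for Lennard-Jones epsilons and partial charges under those stated assumptions; it is illustrative and not the NAMD source-level implementation.

        # Illustrative REST2-style parameter rescaling for a "hot region"
        # (assumes the common convention: intra-solute energies scaled by
        # lam = beta_m/beta_0, solute-water by sqrt(lam)). Not NAMD source code.
        import math

        def rest2_scale(epsilons, charges, t0=300.0, t_eff=450.0):
            """Scale solute LJ epsilons by lam and partial charges by sqrt(lam).

            With geometric-mean LJ combination rules this yields lam for
            solute-solute terms and sqrt(lam) for solute-water terms.
            """
            lam = t0 / t_eff                      # beta_m / beta_0
            scaled_eps = [e * lam for e in epsilons]
            scaled_q = [q * math.sqrt(lam) for q in charges]
            return scaled_eps, scaled_q, lam

        eps, q, lam = rest2_scale(epsilons=[0.21, 0.11], charges=[-0.51, 0.42])
        print(f"lambda = {lam:.3f}")
        print("scaled epsilons:", [round(e, 4) for e in eps])
        print("scaled charges: ", [round(c, 4) for c in q])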

  16. A Browser-Based Multi-User Working Environment for Physicists

    NASA Astrophysics Data System (ADS)

    Erdmann, M.; Fischer, R.; Glaser, C.; Klingebiel, D.; Komm, M.; Müller, G.; Rieger, M.; Steggemann, J.; Urban, M.; Winchen, T.

    2014-06-01

    Many programs in experimental particle physics do not yet have a graphical interface, or impose strong platform and software requirements. With the most recent development of the VISPA project, we provide graphical interfaces to existing software programs and access to multiple computing clusters through standard web browsers. The scalable client-server system allows analyses to be performed in sizable teams, and frees the individual physicist from installing and maintaining a software environment. The VISPA graphical interfaces are implemented in HTML, JavaScript and extensions to the Python webserver. The webserver uses SSH and RPC to access user data, code and processes on remote sites. As example applications we present graphical interfaces for steering the reconstruction framework OFFLINE of the Pierre Auger experiment, and the analysis development toolkit PXL. The browser-based VISPA system was field-tested in biweekly homework of a third-year physics course by more than 100 students. We discuss the system deployment and the evaluation by the students.

  17. Novel flat datacenter network architecture based on scalable and flow-controlled optical switch system.

    PubMed

    Miao, Wang; Luo, Jun; Di Lucente, Stefano; Dorren, Harm; Calabretta, Nicola

    2014-02-10

    We propose and demonstrate an optical flat datacenter network based on a scalable optical switch system with optical flow control. A modular structure with distributed control results in a port-count-independent optical switch reconfiguration time. An RF-tone in-band labeling technique allowing parallel processing of the label bits ensures low-latency operation regardless of the switch port count. Hardware flow control is conducted at the optical level by re-using the label wavelength without occupying extra bandwidth, space, or network resources, which further improves the latency performance within a simple structure. Dynamic switching including multicasting operation is validated for a 4 x 4 system. Error-free operation of 40 Gb/s data packets has been achieved with only 1 dB penalty. The system could handle an input load up to 0.5, providing a packet loss lower than 10^-5 and an average latency of less than 500 ns when a buffer size of 16 packets is employed. Investigation of scalability also indicates that the proposed system could potentially scale up to a large port count with limited power penalty.

  18. High-accuracy self-mixing interferometer based on multiple reflections using a simple external reflecting mirror

    NASA Astrophysics Data System (ADS)

    Wang, Xiu-lin; Wei, Zheng; Wang, Rui; Huang, Wen-cai

    2018-05-01

    A self-mixing interferometer (SMI) with resolution twenty times higher than that of a conventional interferometer is developed by using multiple reflections. The multiple-pass optical configuration is constructed simply by employing an external reflecting mirror. The configuration is simple and makes it easy to re-inject the light back into the laser cavity. Theoretical analysis shows that the measurement resolution is scalable by adjusting the number of reflections. The experiment shows that the proposed method achieves an optical resolution of approximately λ/40. The influence of the displacement sensitivity gain (G) is further analyzed and discussed in practical experiments.

  19. Graphene-based absorber exploiting guided mode resonances in one-dimensional gratings.

    PubMed

    Grande, M; Vincenti, M A; Stomeo, T; Bianco, G V; de Ceglia, D; Aközbek, N; Petruzzelli, V; Bruno, G; De Vittorio, M; Scalora, M; D'Orazio, A

    2014-12-15

    A one-dimensional dielectric grating, based on a simple geometry, is proposed and investigated to enhance light absorption in monolayer graphene by exploiting guided-mode resonances. Numerical findings reveal that the optimized configuration is able to absorb up to 60% of the impinging light at normal incidence for both TE and TM polarizations, resulting in a theoretical enhancement factor of about 26 with respect to the monolayer graphene absorption (≈2.3%). Experimental results confirm this behavior, showing CVD graphene absorbance peaks up to about 40% over narrow bands of a few nanometers. The simple and flexible design points to a way to realize innovative, scalable and easy-to-fabricate graphene-based optical absorbers.

  20. Multicopter control with Navio using REX control system

    NASA Astrophysics Data System (ADS)

    Golembiovsky, Matej; Dedek, Jan; Ozana, Stepan

    2017-06-01

    This article deals with the possible connection of the REXcontrols platform to a Raspberry Pi-based control system with the Navio2 expansion board. This board is designed for the development of autonomous robotic platforms of the car, plane or multicopter type. In this article, the REXcontrols control system is introduced and its integration possibilities for the Navio2 control board are discussed. The main aspects discussed are the communication possibilities of the REXcontrols system with external scripts, which in turn allow control of this board. The main reasons for this undertaking are the extensive archiving, visualization, signal-processing and control capabilities that the REXcontrols system offers. Control of the Navio2 board itself is done through numerous interfaces: specifically, a pair of SPI data buses, an I2C data bus, a UART and multiple GPIO pins. However, since the REXcontrols system has only limited access to these data buses, it is necessary to establish the communication through external scripts. For this purpose REXcontrols is equipped with the mechanisms SILO, EPC and REXLANG, which are described in the article. Due to its simple implementation into REXcontrols and the option to utilize available libraries for communication with the Navio2 board in an external script, the EPC block was selected for the final implementation.
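
    Because REXcontrols reaches the Navio2 hardware only through external scripts, the bridge can be as simple as a child process that answers requests over stdin/stdout. The Python sketch below shows that generic pattern only; the actual EPC/REXLANG exchange format and any Navio2 driver calls are not reproduced here (the sensor read is a stub).

        # Generic external-script bridge pattern: read one request per line on
        # stdin, reply with a value on stdout. The real EPC/REXLANG exchange
        # format and Navio2 driver calls are not shown; the read is a stub.
        import sys
        import random

        def read_sensor(channel: int) -> float:
            # Stub standing in for an actual Navio2 driver call (e.g., over I2C/SPI).
            return 20.0 + random.random() + channel

        def main():
            for line in sys.stdin:
                parts = line.split()
                if not parts:
                    continue
                if parts[0] == "READ" and len(parts) == 2:
                    print(f"{read_sensor(int(parts[1])):.3f}", flush=True)
                elif parts[0] == "QUIT":
                    break
                else:
                    print("ERR unknown command", flush=True)

        if __name__ == "__main__":
            main()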

  1. Secure base representations in middle childhood across two Western cultures: Associations with parental attachment representations and maternal reports of behavior problems.

    PubMed

    Waters, Theodore E A; Bosmans, Guy; Vandevivere, Eva; Dujardin, Adinda; Waters, Harriet S

    2015-08-01

    Recent work examining the content and organization of attachment representations suggests that 1 way in which we represent the attachment relationship is in the form of a cognitive script. This work has largely focused on early childhood or adolescence/adulthood, leaving a large gap in our understanding of script-like attachment representations in the middle childhood period. We present 2 studies and provide 3 critical pieces of evidence regarding the presence of a script-like representation of the attachment relationship in middle childhood. We present evidence that a middle childhood attachment script assessment tapped a stable underlying script using samples drawn from 2 western cultures, the United States (Study 1) and Belgium (Study 2). We also found evidence suggestive of the intergenerational transmission of secure base script knowledge (Study 1) and relations between secure base script knowledge and symptoms of psychopathology in middle childhood (Study 2). The results from this investigation represent an important downward extension of the secure base script construct. (c) 2015 APA, all rights reserved.

  2. Secure Base Representations in Middle Childhood Across Two Western Cultures: Associations with Parental Attachment Representations and Maternal Reports of Behavior Problems

    PubMed Central

    Waters, Theodore E. A.; Bosmans, Guy; Vandevivere, Eva; Dujardin, Adinda; Waters, Harriet S.

    2015-01-01

    Recent work examining the content and organization of attachment representations suggests that one way in which we represent the attachment relationship is in the form of a cognitive script. That said, this work has largely focused on early childhood or adolescence/adulthood, leaving a large gap in our understanding of script-like attachment representations in the middle childhood period. We present two studies and provide three critical pieces of evidence regarding the presence of a script-like representation of the attachment relationship in middle childhood. We present evidence that a middle childhood attachment script assessment tapped a stable underlying script using samples drawn from two western cultures, the United States (Study 1) and Belgium (Study 2). We also found evidence suggestive of the intergenerational transmission of secure base script knowledge (Study 1) and relations between secure base script knowledge and symptoms of psychopathology in middle childhood (Study 2). The results from this investigation represent an important downward extension of the secure base script construct. PMID:26147774

  3. Catch the A-Train from the NASA GIBS/Worldview Platform

    NASA Astrophysics Data System (ADS)

    Schmaltz, J. E.; Alarcon, C.; Baynes, K.; Boller, R. A.; Cechini, M. F.; De Cesare, C.; De Luca, A. P.; Gunnoe, T.; King, B. A.; King, J.; Pressley, N. N.; Roberts, J. T.; Rodriguez, J.; Thompson, C. K.; Wong, M. M.

    2016-12-01

    The satellites and instruments of the Afternoon Train are providing an unprecedented combination of nearly simultaneous measurements. One of the challenges for researchers and applications users is to sift through these combinations to find particular sets of data that correspond to their interests. Using visualization of the data is one way to explore these combinations. NASA's Worldview tool is designed to do just that - to interactively browse full-resolution satellite imagery. Worldview (https://worldview.earthdata.nasa.gov/) is web-based and developed using open libraries and standards (OpenLayers, JavaScript, CSS, HTML) for cross-platform compatibility. It addresses growing user demands for access to full-resolution imagery by providing a responsive, interactive interface with global coverage and no artificial boundaries. In addition to science data imagery, Worldview provides ancillary datasets such as coastlines and borders, socio-economic layers, and satellite orbit tracks. Worldview interacts with the Earthdata Search Client to provide download of the data files associated with the imagery being viewed. The imagery used by Worldview is provided by NASA's Global Imagery Browse Services (GIBS - https://earthdata.nasa.gov/gibs), which provide highly responsive, highly scalable imagery services. Requests are made via the OGC Web Map Tile Service (WMTS) standard. In addition to Worldview, other clients can be developed using a variety of web-based libraries, desktop and mobile app libraries, and GDAL script-based access. GIBS currently includes more than 106 science data sets from seven instruments aboard three of the A-Train satellites and new data sets are being added as part of the President's Big Earth Data Initiative (BEDI). Efforts are underway to include new imagery types, such as vectors and curtains, into Worldview/GIBS which will be used to visualize additional A-Train science parameters.
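
    Since GIBS serves imagery through the OGC WMTS standard, a request for a single tile can be assembled from standard WMTS key-value parameters. The Python sketch below builds such a GetTile URL; the endpoint, layer name, tile matrix set, and date are assumptions used for illustration and should be checked against the GIBS documentation.

        # Build an OGC WMTS KVP GetTile request for a GIBS layer.
        # Endpoint, layer, tile matrix set and date below are assumed examples.
        from urllib.parse import urlencode

        ENDPOINT = "https://gibs.earthdata.nasa.gov/wmts/epsg4326/best/wmts.cgi"  # assumed

        params = {
            "SERVICE": "WMTS",
            "REQUEST": "GetTile",
            "VERSION": "1.0.0",
            "LAYER": "MODIS_Terra_CorrectedReflectance_TrueColor",  # assumed layer id
            "STYLE": "default",
            "FORMAT": "image/jpeg",
            "TILEMATRIXSET": "250m",        # assumed tile matrix set name
            "TILEMATRIX": "3",
            "TILEROW": "2",
            "TILECOL": "5",
            "TIME": "2016-08-01",
        }

        print(ENDPOINT + "?" + urlencode(params))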

  4. Decision-Making in Pediatric Transport Team Dispatch Using Script Concordance Testing.

    PubMed

    Rajapreyar, Prakadeshwari; Marcdante, Karen; Zhang, Liyun; Simpson, Pippa; Meyer, Michael T

    2017-11-01

    Our objective was to compare decision-making in dispatching pediatric transport teams by Medical Directors of pediatric transport teams (serving as experts) to that of Pediatric Intensivists and Critical Care fellows who often serve as Medical Control physicians. Understanding decision-making around team composition and dispatch could impact clinical management, cost effectiveness, and educational needs. The survey was developed using Script Concordance Testing guidelines. The survey contained 15 transport case vignettes covering 20 scenarios (45 questions). Eleven scenarios assessed the impact of intrinsic patient factors (e.g., procedural needs), whereas nine assessed extrinsic factors (e.g., weather). Pediatric Critical Care programs accredited by the Accreditation Council for Graduate Medical Education (the United States). Pediatric Intensivists and senior Critical Care fellows at Pediatric Critical Care programs were the target population with Transport Medical Directors serving as the expert panel. None. Survey results were scored per Script Concordance Testing guidelines. Concordance within groups was assessed using simple percentage agreement. There was little concordance in decision-making by Transport Medical Directors (median Script Concordance Testing percentage score [interquartile range] of 33.9 [30.4-37.3]). In addition, there was no statistically significant difference between the median Script Concordance Testing scores among the senior fellows and Pediatric Intensivists (31.1 [29.6-33.2] vs 29.7 [28.3-32.3], respectively; p = 0.12). Transport Medical Directors were more concordant on reasoning involving intrinsic patient factors rather than extrinsic factors (10/21 vs 4/24). Our study demonstrates pediatric transport team dispatch decision-making discordance by pediatric critical care physicians of varying levels of expertise and experience. Script Concordance Testing at a local level may better elucidate standards in medical decision-making among pediatric critical care physicians. The development of a curriculum, which provides education and trains our workforce on the logistics of pediatric transport team dispatch, would help standardize practice and evaluate outcomes based on decision-making.
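
    Script Concordance Testing typically scores each examinee response by how many panel experts chose the same option, normalized by the modal (most frequent) expert choice. The Python sketch below implements that aggregate scoring rule as commonly described in SCT guidelines; it is a generic illustration, not the instrument used in this study, and the panel data are invented.

        # Aggregate SCT scoring sketch: an answer earns (votes for that option) /
        # (votes for the modal option) from the expert panel. Generic illustration.
        from collections import Counter

        def sct_item_score(examinee_answer, expert_answers):
            counts = Counter(expert_answers)
            modal_votes = max(counts.values())
            return counts.get(examinee_answer, 0) / modal_votes

        def sct_total(examinee_answers, panel_answers_per_item):
            return sum(sct_item_score(a, panel)
                       for a, panel in zip(examinee_answers, panel_answers_per_item))

        # Example: two items, a five-expert panel answering on a -2..+2 Likert scale.
        panel = [[+1, +1, 0, +1, -1],
                 [0, 0, -1, 0, +1]]
        print(sct_total([+1, -1], panel))   # 1.0 for item 1, 1/3 for item 2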

  5. A Simple Molecular Dynamics Lab to Calculate Viscosity as a Function of Temperature

    ERIC Educational Resources Information Center

    Eckler, Logan H.; Nee, Matthew J.

    2016-01-01

    A simple molecular dynamics experiment is described to demonstrate transport properties for the undergraduate physical chemistry laboratory. The AMBER package is used to monitor self-diffusion in "n"-hexane. Scripts (available in the Supporting Information) make the process considerably easier for students, allowing them to focus on the…
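
    The standard route from such a trajectory to a transport coefficient is the Einstein relation, in which the self-diffusion coefficient D is the long-time slope of the mean-squared displacement divided by 6, with viscosity then estimated from D (for example via the Stokes-Einstein relation). The NumPy sketch below fits D from an MSD curve; it is a generic post-processing illustration, independent of the AMBER scripts distributed with the article, and the trajectory here is synthetic.

        # Estimate a self-diffusion coefficient from mean-squared displacement
        # using the Einstein relation MSD(t) ~ 6 D t. Generic post-processing
        # sketch; not the AMBER scripts supplied with the article.
        import numpy as np

        def msd(positions):
            """positions: array (n_frames, n_atoms, 3) of unwrapped coordinates."""
            disp = positions - positions[0]            # displacement from frame 0
            return (disp ** 2).sum(axis=2).mean(axis=1)

        def diffusion_coefficient(times, msd_values, fit_start=0.2):
            """Linear fit of MSD vs. t over the tail of the trajectory."""
            i0 = int(len(times) * fit_start)
            slope, _ = np.polyfit(times[i0:], msd_values[i0:], 1)
            return slope / 6.0

        # Toy data: a random walk standing in for an MD trajectory.
        rng = np.random.default_rng(0)
        steps = rng.normal(scale=0.05, size=(1000, 64, 3))       # nm per 1 ps frame
        pos = np.cumsum(steps, axis=0)
        t = np.arange(1000) * 1.0                                # ps
        print("D ~ %.4f nm^2/ps" % diffusion_coefficient(t, msd(pos)))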

  6. Kip, Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Staley, Martin

    2017-09-20

    This high-performance ray tracing library provides very fast rendering; compact code; type flexibility through C++ "generic programming" techniques; and ease of use via an application programming interface (API) that operates independently of any GUI, on-screen display, or other enclosing application. Kip supports constructive solid geometry (CSG) models based on a wide variety of built-in shapes and logical operators, and also allows for user-defined shapes and operators to be provided. Additional features include basic texturing; input/output of models using a simple human-readable file format and with full error checking and detailed diagnostics; and support for shared data parallelism. Kip is writtenmore » in pure, ANSI standard C++; is entirely platform independent; and is very easy to use. As a C++ "header only" library, it requires no build system, configuration or installation scripts, wizards, non-C++ preprocessing, makefiles, shell scripts, or external libraries.« less

  7. Scalable alignment and transfer of nanowires based on oriented polymer nanofibers.

    PubMed

    Yan, Shancheng; Lu, Lanxin; Meng, Hao; Huang, Ningping; Xiao, Zhongdang

    2010-03-05

    We develop a simple and scalable method based on oriented polymer nanofiber films for the parallel assembly and transfer of nanowires at high density. Nanowires dispersed in solution are aligned and selectively deposited at the central space of parallel nanochannels formed by the well-oriented nanofibers as a result of evaporation-induced flow and capillarity. A general contact printing method is used to realize the transfer of the nanowires from the donor nanofiber film to a receiver substrate. The mechanism, which involves ordered alignment of nanowires on oriented polymer nanofiber films, is also explored with an evaporation model of cylindrical droplets. The simplicity of the assembly and transfer, and the facile fabrication of large-area well-oriented nanofiber films, make the present method promising for the application of nanowires, especially for the disordered nanowires synthesized by solution chemistry.

  8. Origins of Secure Base Script Knowledge and the Developmental Construction of Attachment Representations

    PubMed Central

    Waters, Theodore E. A.; Ruiz, Sarah K.; Roisman, Glenn I.

    2016-01-01

    Increasing evidence suggests that attachment representations take at least two forms—a secure base script and an autobiographical narrative of childhood caregiving experiences. This study presents data from the first 26 years of the Minnesota Longitudinal Study of Risk and Adaptation (N = 169), examining the developmental origins of secure base script knowledge in a high-risk sample, and testing alternative models of the developmental sequencing of the construction of attachment representations. Results demonstrated that secure base script knowledge was predicted by observations of maternal sensitivity across childhood and adolescence. Further, findings suggest that the construction of a secure base script supports the development of a coherent autobiographical representation of childhood attachment experiences with primary caregivers by early adulthood. PMID:27302650

  9. CH5M3D: an HTML5 program for creating 3D molecular structures.

    PubMed

    Earley, Clarke W

    2013-11-18

    While a number of programs and web-based applications are available for the interactive display of 3-dimensional molecular structures, few of these provide the ability to edit these structures. For this reason, we have developed a library written in JavaScript to allow for the simple creation of web-based applications that should run on any browser capable of rendering HTML5 web pages. While our primary interest in developing this application was for educational use, it may also prove useful to researchers who want a light-weight application for viewing and editing small molecular structures. Molecular compounds are drawn on the HTML5 Canvas element, with the JavaScript code making use of standard techniques to allow display of three-dimensional structures on a two-dimensional canvas. Information about the structure (bond lengths, bond angles, and dihedral angles) can be obtained using a mouse or other pointing device. Both atoms and bonds can be added or deleted, and rotation about bonds is allowed. Routines are provided to read structures either from the web server or from the user's computer, and creation of galleries of structures can be accomplished with only a few lines of code. Documentation and examples are provided to demonstrate how users can access all of the molecular information for creation of web pages with more advanced features. A light-weight (≈ 75 kb) JavaScript library has been made available that allows for the simple creation of web pages containing interactive 3-dimensional molecular structures. Although this library is designed to create web pages, a web server is not required. Installation on a web server is straightforward and does not require any server-side modules or special permissions. The ch5m3d.js library has been released under the GNU GPL version 3 open-source license and is available from http://sourceforge.net/projects/ch5m3d/.
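
    The geometric quantities the library reports (bond lengths, bond angles, and dihedral angles) follow directly from atomic coordinates. As a language-neutral illustration (the library itself is JavaScript), the NumPy sketch below computes all three from Cartesian positions; the coordinates are arbitrary examples.

        # Bond length, bond angle and dihedral angle from Cartesian coordinates.
        # Generic geometry, shown in Python for illustration (CH5M3D is JavaScript).
        import numpy as np

        def bond_length(a, b):
            return np.linalg.norm(b - a)

        def bond_angle(a, b, c):
            """Angle at atom b, in degrees."""
            u, v = a - b, c - b
            cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

        def dihedral(a, b, c, d):
            """Signed dihedral a-b-c-d, in degrees."""
            b1, b2, b3 = b - a, c - b, d - c
            n1, n2 = np.cross(b1, b2), np.cross(b2, b3)
            m1 = np.cross(n1, b2 / np.linalg.norm(b2))
            return np.degrees(np.arctan2(np.dot(m1, n2), np.dot(n1, n2)))

        a, b, c, d = map(np.array, ([0.0, 0, 0], [1.5, 0, 0], [2.0, 1.4, 0], [2.0, 1.9, 1.4]))
        print(bond_length(a, b), bond_angle(a, b, c), dihedral(a, b, c, d))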

  10. CH5M3D: an HTML5 program for creating 3D molecular structures

    PubMed Central

    2013-01-01

    Background While a number of programs and web-based applications are available for the interactive display of 3-dimensional molecular structures, few of these provide the ability to edit these structures. For this reason, we have developed a library written in JavaScript to allow for the simple creation of web-based applications that should run on any browser capable of rendering HTML5 web pages. While our primary interest in developing this application was for educational use, it may also prove useful to researchers who want a light-weight application for viewing and editing small molecular structures. Results Molecular compounds are drawn on the HTML5 Canvas element, with the JavaScript code making use of standard techniques to allow display of three-dimensional structures on a two-dimensional canvas. Information about the structure (bond lengths, bond angles, and dihedral angles) can be obtained using a mouse or other pointing device. Both atoms and bonds can be added or deleted, and rotation about bonds is allowed. Routines are provided to read structures either from the web server or from the user’s computer, and creation of galleries of structures can be accomplished with only a few lines of code. Documentation and examples are provided to demonstrate how users can access all of the molecular information for creation of web pages with more advanced features. Conclusions A light-weight (≈ 75 kb) JavaScript library has been made available that allows for the simple creation of web pages containing interactive 3-dimensional molecular structures. Although this library is designed to create web pages, a web server is not required. Installation on a web server is straightforward and does not require any server-side modules or special permissions. The ch5m3d.js library has been released under the GNU GPL version 3 open-source license and is available from http://sourceforge.net/projects/ch5m3d/. PMID:24246004

  11. Scalable Parallel Computation for Extended MHD Modeling of Fusion Plasmas

    NASA Astrophysics Data System (ADS)

    Glasser, Alan H.

    2008-11-01

    Parallel solution of a linear system is scalable if simultaneously doubling the number of dependent variables and the number of processors results in little or no increase in the computation time to solution. Two approaches have this property for parabolic systems: multigrid and domain decomposition. Since extended MHD is primarily a hyperbolic rather than a parabolic system, additional steps must be taken to parabolize the linear system to be solved by such a method. Such physics-based preconditioning (PBP) methods have been pioneered by Chacón, using finite volumes for spatial discretization, multigrid for solution of the preconditioning equations, and matrix-free Newton-Krylov methods for the accurate solution of the full nonlinear preconditioned equations. The work described here is an extension of these methods using high-order spectral element methods and FETI-DP domain decomposition. Application of PBP to a flux-source representation of the physics equations is discussed. The resulting scalability will be demonstrated for simple waves and for ideal and Hall MHD waves.
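
    As a small, generic illustration of the matrix-free Newton-Krylov idea mentioned here (not of the MHD solver itself), SciPy exposes exactly this combination: the Jacobian is never formed, and each Newton step is solved with a Krylov method using only residual evaluations. The nonlinear system below is an arbitrary toy problem.

        # Matrix-free Newton-Krylov demo with SciPy: solve a small nonlinear
        # system F(x) = 0 using only residual evaluations (no explicit Jacobian).
        # Generic illustration; not the extended-MHD code discussed above.
        import numpy as np
        from scipy.optimize import newton_krylov

        def residual(x):
            # Discrete nonlinear Poisson-type problem: u'' - u**3 + 0.1 = 0
            # with homogeneous Dirichlet boundaries (unit grid spacing).
            f = np.zeros_like(x)
            f[1:-1] = x[:-2] - 2 * x[1:-1] + x[2:] - x[1:-1] ** 3 + 0.1
            f[0], f[-1] = x[0], x[-1]
            return f

        x0 = np.zeros(50)
        solution = newton_krylov(residual, x0, method="lgmres", verbose=False)
        print("residual norm:", np.linalg.norm(residual(solution)))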

  12. Multiple Domains of Parental Secure Base Support During Childhood and Adolescence Contribute to Adolescents’ Representations of Attachment as a Secure Base Script

    PubMed Central

    Vaughn, Brian E.; Waters, Theodore E. A.; Steele, Ryan D.; Roisman, Glenn I.; Bost, Kelly K.; Truitt, Warren; Waters, Harriet S.; Booth-LaForce, Cathryn

    2016-01-01

    Although attachment theory claims that early attachment representations reflecting the quality of the child’s “lived experiences” are maintained across developmental transitions, evidence that has emerged over the last decade suggests that the association between early relationship quality and adolescents’ attachment representations is fairly modest in magnitude. We used aspects of parenting beyond sensitivity over childhood and adolescence and early security to predict adolescents’ scripted attachment representations. At age 18 years, 673 participants from the NICHD Study of Early Child Care and Youth Development (SECCYD) completed the Attachment Script Assessment (ASA) from which we derived an assessment of secure base script knowledge. Measures of secure base support from childhood through age 15 years (e.g., parental monitoring of child activity, father presence in the home) were selected as predictors and accounted for an additional 8% of the variance in secure base script knowledge scores above and beyond direct observations of sensitivity and early attachment status alone, suggesting that adolescents’ scripted attachment representations reflect multiple domains of parenting. Cognitive and demographic variables also significantly increased predicted variance in secure base script knowledge by 2% each. PMID:27032953

  13. A Markov model of the Indus script

    PubMed Central

    Rao, Rajesh P. N.; Yadav, Nisha; Vahia, Mayank N.; Joglekar, Hrishikesh; Adhikari, R.; Mahadevan, Iravatham

    2009-01-01

    Although no historical information exists about the Indus civilization (flourished ca. 2600–1900 B.C.), archaeologists have uncovered about 3,800 short samples of a script that was used throughout the civilization. The script remains undeciphered, despite a large number of attempts and claimed decipherments over the past 80 years. Here, we propose the use of probabilistic models to analyze the structure of the Indus script. The goal is to reveal, through probabilistic analysis, syntactic patterns that could point the way to eventual decipherment. We illustrate the approach using a simple Markov chain model to capture sequential dependencies between signs in the Indus script. The trained model allows new sample texts to be generated, revealing recurring patterns of signs that could potentially form functional subunits of a possible underlying language. The model also provides a quantitative way of testing whether a particular string belongs to the putative language as captured by the Markov model. Application of this test to Indus seals found in Mesopotamia and other sites in West Asia reveals that the script may have been used to express different content in these regions. Finally, we show how missing, ambiguous, or unreadable signs on damaged objects can be filled in with most likely predictions from the model. Taken together, our results indicate that the Indus script exhibits rich syntactic structure and the ability to represent diverse content, both of which are suggestive of a linguistic writing system rather than a nonlinguistic symbol system. PMID:19666571
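
    To make the modelling approach concrete, the Python sketch below trains a first-order (bigram) Markov chain on toy sign sequences, generates new sequences, and scores the likelihood of a candidate string, mirroring the three uses described (pattern discovery, language-membership testing, and filling in signs). The sign inventory here is invented; it is not the Indus corpus.

        # First-order Markov chain over sign sequences: train transition counts,
        # generate new sequences, and score a candidate string's log-likelihood.
        # The "signs" below are toy symbols, not the Indus corpus.
        import math, random
        from collections import defaultdict

        corpus = [["A", "B", "C"], ["A", "B", "D"], ["B", "C", "A"], ["A", "B", "C", "D"]]

        counts = defaultdict(lambda: defaultdict(int))
        for seq in corpus:
            for prev, cur in zip(["<s>"] + seq, seq + ["</s>"]):
                counts[prev][cur] += 1

        def prob(prev, cur, alpha=0.1, vocab_size=6):
            # Additive smoothing so unseen transitions get nonzero probability.
            row = counts[prev]
            return (row[cur] + alpha) / (sum(row.values()) + alpha * vocab_size)

        def log_likelihood(seq):
            return sum(math.log(prob(p, c))
                       for p, c in zip(["<s>"] + seq, seq + ["</s>"]))

        def generate(max_len=10):
            seq, prev = [], "<s>"
            while len(seq) < max_len:
                choices, weights = zip(*counts[prev].items())
                prev = random.choices(choices, weights=weights)[0]
                if prev == "</s>":
                    break
                seq.append(prev)
            return seq

        print(generate())
        print(log_likelihood(["A", "B", "C"]), log_likelihood(["D", "D", "D"]))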

  14. The 2-d CCD Data Reduction Cookbook

    NASA Astrophysics Data System (ADS)

    Davenhall, A. C.; Privett, G. J.; Taylor, M. B.

    This cookbook presents simple recipes and scripts for reducing direct images acquired with optical CCD detectors. Using these recipes and scripts you can correct un-processed images obtained from CCDs for various instrumental effects to retrieve an accurate picture of the field of sky observed. The recipes and scripts use standard software available at all Starlink sites. The topics covered include: creating and applying bias and flat-field corrections, registering frames and creating a stack or mosaic of registered frames. Related auxiliary tasks, such as converting between different data formats, displaying images and calculating image statistics are also presented. In addition to the recipes and scripts, sufficient background material is presented to explain the procedures and techniques used. The treatment is deliberately practical rather than theoretical, in keeping with the aim of providing advice on the actual reduction of observations. Additional material outlines some of the differences between using conventional optical CCDs and the similar arrays used to observe at infrared wavelengths.
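
    The core calibration step the cookbook teaches reduces to simple array arithmetic: subtract a combined bias frame, then divide by a normalized flat field. The NumPy sketch below shows that arithmetic generically; the cookbook's own recipes use Starlink software rather than Python, and the frames here are simulated.

        # Basic CCD calibration arithmetic: master bias subtraction and
        # flat-field division. Generic NumPy illustration; the cookbook's
        # recipes themselves use Starlink tools.
        import numpy as np

        def master_frame(frames):
            """Median-combine a stack of frames to suppress outliers."""
            return np.median(np.stack(frames), axis=0)

        def calibrate(raw, master_bias, master_flat):
            flat = master_flat - master_bias
            flat_norm = flat / np.median(flat)          # normalize flat to unity
            return (raw - master_bias) / flat_norm

        rng = np.random.default_rng(1)
        bias_frames = [rng.normal(300, 5, (64, 64)) for _ in range(5)]
        flat_frames = [rng.normal(12000, 100, (64, 64)) for _ in range(5)]
        raw = rng.normal(5000, 70, (64, 64))

        science = calibrate(raw, master_frame(bias_frames), master_frame(flat_frames))
        print("calibrated mean:", science.mean())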

  15. Review of Software Platforms for Agent Based Models

    DTIC Science & Technology

    2008-04-01

    EINSTein (section 4.3.2; Battlefield domain; Python scripting, optional, for batch runs), MANA (4.3.3; Battlefield; no scripting), MASON (4.3.4; General; Java), NetLogo (4.3.5; General; Logo-variant) ... through the use of relatively simple Python scripts. It also has built-in functions for parameter sweeps, and can plot the resulting fitness landscape ac... Nonetheless its ease of use, and support for automatic drawing of agents in 2D or 3D, makes this a suitable platform for beginner programmers.

  16. Optimizing R with SparkR on a commodity cluster for biomedical research.

    PubMed

    Sedlmayr, Martin; Würfl, Tobias; Maier, Christian; Häberle, Lothar; Fasching, Peter; Prokosch, Hans-Ulrich; Christoph, Jan

    2016-12-01

    Medical researchers are challenged today by the enormous amount of data collected in healthcare. Analysis methods such as genome-wide association studies (GWAS) are often computationally intensive and thus require enormous resources to be performed in a reasonable amount of time. While dedicated clusters and public clouds may deliver the desired performance, their use requires upfront financial efforts or anonymous data, which is often not possible for preliminary or occasional tasks. We explored the possibility of building a private, flexible cluster for processing scripts in R based on commodity, non-dedicated hardware of our department. For this, a GWAS-calculation in R on a single desktop computer, a Message Passing Interface (MPI)-cluster, and a SparkR-cluster were compared with regard to performance, scalability, quality, and simplicity. The original script had a projected runtime of three years on a single desktop computer. Optimizing the script in R already yielded a significant reduction in computing time (2 weeks). By using R-MPI and SparkR, we were able to parallelize the computation and reduce the time to less than three hours (2.6 h) on already available, standard office computers. While MPI is a proven approach in high-performance clusters, it requires rather static, dedicated nodes. SparkR and its Hadoop siblings allow for a dynamic, elastic environment with automated failure handling. SparkR also scales better with the number of nodes in the cluster than MPI due to optimized data communication. R is a popular environment for clinical data analysis. The new SparkR solution offers elastic resources and allows supporting big data analysis using R even on non-dedicated resources with minimal change to the original code. To unleash the full potential, additional efforts should be invested to customize and improve the algorithms, especially with regard to data distribution. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
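
    The speed-up reported comes from splitting an embarrassingly parallel per-marker computation across workers. As a language-agnostic illustration of that pattern (the study itself uses R with MPI and SparkR), the Python sketch below distributes chunks of a per-SNP computation over local processes; the workload function is a hypothetical stand-in.

        # Data-parallel pattern behind the reported speed-up: split independent
        # per-marker computations into chunks and process them in parallel.
        # Illustration only; the study itself uses R with MPI / SparkR.
        from multiprocessing import Pool
        import random

        def analyze_snp(snp_id):
            # Stand-in for a per-SNP association test (hypothetical workload).
            random.seed(snp_id)
            return snp_id, sum(random.random() for _ in range(10_000))

        if __name__ == "__main__":
            snp_ids = range(1_000)
            with Pool(processes=4) as pool:
                results = pool.map(analyze_snp, snp_ids, chunksize=50)
            print("analyzed", len(results), "markers")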

  17. Large Scale Flutter Data for Design of Rotating Blades Using Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2012-01-01

    A procedure to compute flutter boundaries of rotating blades is presented, based on (a) the Navier-Stokes equations and (b) a frequency-domain method compatible with industry practice. The procedure is initially validated against (a) unsteady loads from a flapping-wing experiment and (b) the flutter boundary from a fixed-wing experiment. Large-scale flutter computation is then demonstrated for a rotating blade: (a) a single job-submission script; (b) a flutter boundary obtained in 24 hours of wall-clock time with 100 cores; (c) linear scalability with the number of cores, tested with 1000 cores producing data for 10 flutter boundaries in 25 hours. Further wall-clock speed-up is possible by performing parallel computations within each case.

  18. First results from simulations of supersymmetric lattices

    NASA Astrophysics Data System (ADS)

    Catterall, Simon

    2009-01-01

    We conduct the first numerical simulations of lattice theories with exact supersymmetry arising from the orbifold constructions of [Cohen:2003xe, Cohen:2003qw, Kaplan:2005ta]. We consider the 𝒬 = 4 theory in D = 0,2 dimensions and the 𝒬 = 16 theory in D = 0,2,4 dimensions. We show that the U(N) theories do not possess vacua which are stable non-perturbatively, but that this problem can be circumvented after truncation to SU(N). We measure the distribution of scalar field eigenvalues, the spectrum of the fermion operator and the phase of the Pfaffian arising after integration over the fermions. We monitor supersymmetry breaking effects by measuring a simple Ward identity. Our results indicate that simulations of 𝒩 = 4 super Yang-Mills may be achievable in the near future.

  19. Ultrathin nanoporous membranes for insulator-based dielectrophoresis

    NASA Astrophysics Data System (ADS)

    Mukaibo, Hitomi; Wang, Tonghui; Perez-Gonzalez, Victor H.; Getpreecharsawas, Jirachai; Wurzer, Jack; Lapizco-Encinas, Blanca H.; McGrath, James L.

    2018-06-01

    Insulator-based dielectrophoresis (iDEP) is a simple, scalable mechanism that can be used for directly manipulating particle trajectories in pore-based filtration and separation processes. However, iDEP manipulation of nanoparticles presents unique challenges, as the dielectrophoretic force (F_DEP) exerted on the nanoparticles can easily be overshadowed by opposing kinetic forces. In this study, a molecularly thin, SiN-based nanoporous membrane (NPN) is explored as a breakthrough technology that enhances F_DEP. By numerically assessing the gradient of the electric field squared (∇|E|²) …
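
    For reference, the quantity being assessed, the gradient of the electric field squared, enters the standard textbook expression for the time-averaged dielectrophoretic force on a spherical particle of radius r in a medium of permittivity ε_m, with f_CM the Clausius-Mossotti factor; the formula below is stated for context rather than taken from the truncated abstract.

        % Time-averaged DEP force on a spherical particle (textbook form)
        \mathbf{F}_{\mathrm{DEP}} = 2\pi \varepsilon_m r^{3}\,
          \mathrm{Re}\!\left[f_{\mathrm{CM}}(\omega)\right]\, \nabla \lvert \mathbf{E} \rvert^{2},
        \qquad
        f_{\mathrm{CM}}(\omega) = \frac{\varepsilon_p^{*} - \varepsilon_m^{*}}
                                       {\varepsilon_p^{*} + 2\varepsilon_m^{*}}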

  20. Three basic principles of success.

    PubMed

    Levin, Roger

    2003-06-01

    Basic business principles all but ensure success when they are followed consistently. Putting strategies, objectives and tactics in place is the first step toward being able to document systems, initiate scripting and improve staff training. Without the basic steps, systems, scripting and training the practice for performance would be hit or miss, at best. More importantly, applying business principles ensures that limited practice resources are dedicated to the achievement of the strategy. By following this simple, three-step process, a dental practice can significantly enhance both financial success and dentist and staff satisfaction.

  1. ZeBase: an open-source relational database for zebrafish laboratories.

    PubMed

    Hensley, Monica R; Hassenplug, Eric; McPhail, Rodney; Leung, Yuk Fai

    2012-03-01

    ZeBase is an open-source relational database for zebrafish inventory. It is designed for the recording of genetic, breeding, and survival information of fish lines maintained in a single- or multi-laboratory environment. Users can easily access ZeBase through standard web-browsers anywhere on a network. Convenient search and reporting functions are available to facilitate routine inventory work; such functions can also be automated by simple scripting. Optional barcode generation and scanning are also built-in for easy access to the information related to any fish. Further information of the database and an example implementation can be found at http://zebase.bio.purdue.edu.

  2. Origins of Secure Base Script Knowledge and the Developmental Construction of Attachment Representations.

    PubMed

    Waters, Theodore E A; Ruiz, Sarah K; Roisman, Glenn I

    2017-01-01

    Increasing evidence suggests that attachment representations take at least two forms: a secure base script and an autobiographical narrative of childhood caregiving experiences. This study presents data from the first 26 years of the Minnesota Longitudinal Study of Risk and Adaptation (N = 169), examining the developmental origins of secure base script knowledge in a high-risk sample and testing alternative models of the developmental sequencing of the construction of attachment representations. Results demonstrated that secure base script knowledge was predicted by observations of maternal sensitivity across childhood and adolescence. Furthermore, findings suggest that the construction of a secure base script supports the development of a coherent autobiographical representation of childhood attachment experiences with primary caregivers by early adulthood. © 2016 The Authors. Child Development © 2016 Society for Research in Child Development, Inc.

  3. ERPLAB: an open-source toolbox for the analysis of event-related potentials

    PubMed Central

    Lopez-Calderon, Javier; Luck, Steven J.

    2014-01-01

    ERPLAB toolbox is a freely available, open-source toolbox for processing and analyzing event-related potential (ERP) data in the MATLAB environment. ERPLAB is closely integrated with EEGLAB, a popular open-source toolbox that provides many EEG preprocessing steps and an excellent user interface design. ERPLAB adds to EEGLAB’s EEG processing functions, providing additional tools for filtering, artifact detection, re-referencing, and sorting of events, among others. ERPLAB also provides robust tools for averaging EEG segments together to create averaged ERPs, for creating difference waves and other recombinations of ERP waveforms through algebraic expressions, for filtering and re-referencing the averaged ERPs, for plotting ERP waveforms and scalp maps, and for quantifying several types of amplitudes and latencies. ERPLAB’s tools can be accessed either from an easy-to-learn graphical user interface or from MATLAB scripts, and a command history function makes it easy for users with no programming experience to write scripts. Consequently, ERPLAB provides both ease of use and virtually unlimited power and flexibility, making it appropriate for the analysis of both simple and complex ERP experiments. Several forms of documentation are available, including a detailed user’s guide, a step-by-step tutorial, a scripting guide, and a set of video-based demonstrations. PMID:24782741
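
    The central operation described, averaging time-locked EEG segments into an ERP and forming difference waves, is simple array arithmetic. The NumPy sketch below illustrates it generically with simulated data; ERPLAB itself is a MATLAB toolbox and this is not its API.

        # Generic ERP averaging and difference-wave computation (illustration
        # only; ERPLAB itself is a MATLAB toolbox with its own functions).
        import numpy as np

        rng = np.random.default_rng(42)
        n_trials, n_channels, n_samples = 80, 32, 500

        # Simulated epoched EEG for two conditions: noise plus a small "component".
        signal = np.sin(np.linspace(0, np.pi, n_samples)) * 2.0
        cond_a = rng.normal(0, 10, (n_trials, n_channels, n_samples)) + signal
        cond_b = rng.normal(0, 10, (n_trials, n_channels, n_samples))

        erp_a = cond_a.mean(axis=0)          # average across trials -> (chan, time)
        erp_b = cond_b.mean(axis=0)
        difference_wave = erp_a - erp_b

        peak_amp = difference_wave.max(axis=1)         # per-channel peak amplitude
        peak_lat = difference_wave.argmax(axis=1)      # per-channel peak latency (samples)
        print(peak_amp[:3], peak_lat[:3])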

  4. Observing proposals on the Web at the National Optical Astronomy Observatories

    NASA Astrophysics Data System (ADS)

    Pilachowski, Catherine A.; Barnes, Jeannette; Bell, David J.

    1998-07-01

    Proposals for telescope time at facilities available through the National Optical Astronomy Observatories can now be prepared and submitted via the WWW. Investigators submit proposal information through a series of HTML forms to the NOAO server, where the information is processed by Perl CGI scripts. PostScript figures and ASCII files may be attached by investigators for inclusion in their proposals using their browser's upload feature. Proposal information is saved on the server so that investigators can return in later sessions to continue work on a proposal and so that collaborators can participate in writing the proposal if they have access to the proposal account name and password. The system provides on-line verification of LaTeX syntax and a spellchecker, and confirms that all sections of the proposal are filled out. Users can request a LaTeX or PostScript copy of their proposal by e-mail, or view the proposal on line. The advantages of the Web-based process for our users are convenience, access to on-line documentation, and the simple interface which avoids direct confrontation with LaTeX. From the NOAO point of view, the advantage is the use of standardized formats and syntax, particularly as we begin to receive proposals for the Gemini telescopes and some independent observatories.

  5. ERPLAB: an open-source toolbox for the analysis of event-related potentials.

    PubMed

    Lopez-Calderon, Javier; Luck, Steven J

    2014-01-01

    ERPLAB toolbox is a freely available, open-source toolbox for processing and analyzing event-related potential (ERP) data in the MATLAB environment. ERPLAB is closely integrated with EEGLAB, a popular open-source toolbox that provides many EEG preprocessing steps and an excellent user interface design. ERPLAB adds to EEGLAB's EEG processing functions, providing additional tools for filtering, artifact detection, re-referencing, and sorting of events, among others. ERPLAB also provides robust tools for averaging EEG segments together to create averaged ERPs, for creating difference waves and other recombinations of ERP waveforms through algebraic expressions, for filtering and re-referencing the averaged ERPs, for plotting ERP waveforms and scalp maps, and for quantifying several types of amplitudes and latencies. ERPLAB's tools can be accessed either from an easy-to-learn graphical user interface or from MATLAB scripts, and a command history function makes it easy for users with no programming experience to write scripts. Consequently, ERPLAB provides both ease of use and virtually unlimited power and flexibility, making it appropriate for the analysis of both simple and complex ERP experiments. Several forms of documentation are available, including a detailed user's guide, a step-by-step tutorial, a scripting guide, and a set of video-based demonstrations.

  6. Knowledge Support and Automation for Performance Analysis with PerfExplorer 2.0

    DOE PAGES

    Huck, Kevin A.; Malony, Allen D.; Shende, Sameer; ...

    2008-01-01

    The integration of scalable performance analysis in parallel development tools is difficult. The potential size of data sets and the need to compare results from multiple experiments presents a challenge to manage and process the information. Simply to characterize the performance of parallel applications running on potentially hundreds of thousands of processor cores requires new scalable analysis techniques. Furthermore, many exploratory analysis processes are repeatable and could be automated, but are now implemented as manual procedures. In this paper, we will discuss the current version of PerfExplorer, a performance analysis framework which provides dimension reduction, clustering and correlation analysis of individual trials of large dimensions, and can perform relative performance analysis between multiple application executions. PerfExplorer analysis processes can be captured in the form of Python scripts, automating what would otherwise be time-consuming tasks. We will give examples of large-scale analysis results, and discuss the future development of the framework, including the encoding and processing of expert performance rules, and the increasing use of performance metadata.
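
    One of the analyses PerfExplorer automates, clustering of per-process performance profiles, can be sketched in a few lines of scikit-learn. The example below is a generic stand-in for that analysis step: it does not call the PerfExplorer API, and the profile data are synthetic.

        # Generic stand-in for one PerfExplorer analysis step: cluster per-process
        # performance profiles (here, synthetic time-per-routine vectors).
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(7)
        # 256 "processes", each described by time spent in 12 routines.
        workers  = rng.normal(loc=1.0, scale=0.05, size=(192, 12))
        laggards = rng.normal(loc=1.4, scale=0.05, size=(64, 12))
        profiles = np.vstack([workers, laggards])

        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
        for k in range(2):
            members = profiles[labels == k]
            print(f"cluster {k}: {len(members)} processes, "
                  f"mean total time {members.sum(axis=1).mean():.2f}")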

  7. Scalability Issues for Remote Sensing Infrastructure: A Case Study.

    PubMed

    Liu, Yang; Picard, Sean; Williamson, Carey

    2017-04-29

    For the past decade, a team of University of Calgary researchers has operated a large "sensor Web" to collect, analyze, and share scientific data from remote measurement instruments across northern Canada. This sensor Web receives real-time data streams from over a thousand Internet-connected sensors, with a particular emphasis on environmental data (e.g., space weather, auroral phenomena, atmospheric imaging). Through research collaborations, we had the opportunity to evaluate the performance and scalability of their remote sensing infrastructure. This article reports the lessons learned from our study, which considered both data collection and data dissemination aspects of their system. On the data collection front, we used benchmarking techniques to identify and fix a performance bottleneck in the system's memory management for TCP data streams, while also improving system efficiency on multi-core architectures. On the data dissemination front, we used passive and active network traffic measurements to identify and reduce excessive network traffic from the Web robots and JavaScript techniques used for data sharing. While our results are from one specific sensor Web system, the lessons learned may apply to other scientific Web sites with remote sensing infrastructure.

  8. Image-Based Environmental Monitoring Sensor Application Using an Embedded Wireless Sensor Network

    PubMed Central

    Paek, Jeongyeup; Hicks, John; Coe, Sharon; Govindan, Ramesh

    2014-01-01

    This article discusses the experiences from the development and deployment of two image-based environmental monitoring sensor applications using an embedded wireless sensor network. Our system uses low-power image sensors and the Tenet general purpose sensing system for tiered embedded wireless sensor networks. It leverages Tenet's built-in support for reliable delivery of high rate sensing data, scalability and its flexible scripting language, which enables mote-side image compression and the ease of deployment. Our first deployment of a pitfall trap monitoring application at the James San Jacinto Mountain Reserve provided us with insights into, and lessons learned about, the deployment of and compression schemes for these embedded wireless imaging systems. Our three-month-long deployment of a bird nest monitoring application resulted in over 100,000 images collected from a 19-camera node network deployed over an area of 0.05 square miles, despite highly variable environmental conditions. Our biologists found the on-line, near-real-time access to images to be useful for obtaining data to answer their biological questions. PMID:25171121

  9. Image-based environmental monitoring sensor application using an embedded wireless sensor network.

    PubMed

    Paek, Jeongyeup; Hicks, John; Coe, Sharon; Govindan, Ramesh

    2014-08-28

    This article discusses the experiences from the development and deployment of two image-based environmental monitoring sensor applications using an embedded wireless sensor network. Our system uses low-power image sensors and the Tenet general purpose sensing system for tiered embedded wireless sensor networks. It leverages Tenet's built-in support for reliable delivery of high rate sensing data, scalability and its flexible scripting language, which enables mote-side image compression and the ease of deployment. Our first deployment of a pitfall trap monitoring application at the James San Jacinto Mountain Reserve provided us with insights into, and lessons learned about, the deployment of and compression schemes for these embedded wireless imaging systems. Our three-month-long deployment of a bird nest monitoring application resulted in over 100,000 images collected from a 19-camera node network deployed over an area of 0.05 square miles, despite highly variable environmental conditions. Our biologists found the on-line, near-real-time access to images to be useful for obtaining data to answer their biological questions.

  10. PsyScript: a Macintosh application for scripting experiments.

    PubMed

    Bates, Timothy C; D'Oliveiro, Lawrence

    2003-11-01

    PsyScript is a scriptable application allowing users to describe experiments in Apple's compiled high-level object-oriented AppleScript language, while still supporting millisecond or better within-trial event timing (delays can be in milliseconds or refresh-based, and PsyScript can wait on external I/O, such as eye movement fixations). Because AppleScript is object oriented and system-wide, PsyScript experiments support complex branching, code reuse, and integration with other applications. Included AppleScript-based libraries support file handling and stimulus randomization and sampling, as well as more specialized tasks, such as adaptive testing. Advanced features include support for the BBox serial port button box, as well as a low-cost USB-based digital I/O card for millisecond timing, recording of any number and types of responses within a trial, novel responses, such as graphics tablet drawing, and use of the Macintosh sound facilities to provide an accurate voice key, saving voice responses to disk, scriptable image creation, support for flicker-free animation, and gaze-dependent masking. The application is open source, allowing researchers to enhance the feature set and verify internal functions. Both the application and the source are available for free download at www.maccs.mq.edu.au/-tim/psyscript/.

  11. Writing for the Tube.

    ERIC Educational Resources Information Center

    Lin, Sam Chu

    1989-01-01

    Addresses the differences between reporting for print and reporting for television news. Suggests that television journalists must use a simple, conversational style, while print journalists must be more descriptive. Offers suggestions for taping interviews and writing news scripts. (LS)

  12. Secure Base Scripts are Associated with Maternal Parenting Behavior across Contexts and Reflective Functioning among Trauma-Exposed Mothers

    PubMed Central

    Huth-Bocks, Alissa C.; Muzik, Maria; Beeghly, Marjorie; Earls, Lauren; Stacks, Ann M.

    2015-01-01

    There is growing evidence that ‘secure-base scripts’ (Waters & Waters, 2006) are an important part of the cognitive underpinnings of internal working models of attachment. Recent research in middle class samples has shown that secure-base scripts are linked to maternal attachment-oriented behavior and child outcomes. However, little is known about the correlates of secure base scripts in higher-risk samples. Participants in the current study included 115 mothers who were oversampled for childhood maltreatment and their infants. Results revealed that a higher level of secure base scriptedness was significantly related to more positive and less negative maternal parenting in both unstructured free play and structured teaching contexts, and to higher reflective functioning scores on the Parent Development Interview-Revised Short Form (Slade, Aber, Berger, Bresgi, & Kaplan, 2003). Associations with parent-child secure base scripts, specifically, indicate some level of relationship-specificity in attachment scripts. Many, but not all, significant associations remained after controlling for family income and maternal age. Findings suggest that assessing secure base scripts among mothers known to be at risk for parenting difficulties may be important for interventions aimed at altering problematic parental representations and caregiving behavior. PMID:25319230

  13. Scalable synthesis of interconnected porous silicon/carbon composites by the Rochow reaction as high-performance anodes of lithium ion batteries.

    PubMed

    Zhang, Zailei; Wang, Yanhong; Ren, Wenfeng; Tan, Qiangqiang; Chen, Yunfa; Li, Hong; Zhong, Ziyi; Su, Fabing

    2014-05-12

    Despite the promising application of porous Si-based anodes in future Li ion batteries, the large-scale synthesis of these materials is still a great challenge. A scalable synthesis of porous Si materials is presented by the Rochow reaction, which is commonly used to produce organosilane monomers for synthesizing organosilane products in the chemical industry. Commercial Si microparticles reacted with gaseous CH3Cl over various Cu-based catalyst particles to create macropores within the unreacted Si, accompanied by carbon deposition, generating porous Si/C composites. Taking advantage of the interconnected porous structure and the conductive carbon coating obtained after simple post-treatment, these composites as anodes exhibit high reversible capacity and long cycle life. It is expected that by integrating the organosilane synthesis process and controlling reaction conditions, the manufacture of porous Si-based anodes on an industrial scale is highly possible. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Internal and External Scripts in Computer-Supported Collaborative Inquiry Learning

    ERIC Educational Resources Information Center

    Kollar, Ingo; Fischer, Frank; Slotta, James D.

    2007-01-01

    We investigated how differently structured external scripts interact with learners' internal scripts with respect to individual knowledge acquisition in a Web-based collaborative inquiry learning environment. Ninety students from two secondary schools participated. Two versions of an external collaboration script (high vs. low structured)…

  15. Privacy-Aware Location Database Service for Granular Queries

    NASA Astrophysics Data System (ADS)

    Kiyomoto, Shinsaku; Martin, Keith M.; Fukushima, Kazuhide

    Future mobile markets are expected to increasingly embrace location-based services. This paper presents a new system architecture for location-based services, which consists of a location database and distributed location anonymizers. The service is privacy-aware in the sense that the location database always maintains a degree of anonymity. The location database service permits three different levels of query and can thus be used to implement a wide range of location-based services. Furthermore, the architecture is scalable and employs simple functions that are similar to those found in general database systems.
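
    A minimal sketch of the granular-query idea follows. It assumes a toy in-memory database that stores exact coordinates and answers queries at three precision levels by rounding latitude and longitude; the class, level values, and precision table are illustrative only and do not reproduce the paper's architecture, which additionally relies on distributed location anonymizers.

      # Hypothetical illustration of three query granularity levels for a
      # location database: coarser levels return coordinates rounded to fewer
      # decimal places, so exact positions need not be revealed.
      from dataclasses import dataclass

      LEVEL_PRECISION = {1: 0, 2: 1, 3: 3}   # decimal places kept per level (level 3 is finest)

      @dataclass
      class LocationRecord:
          user_id: str
          lat: float
          lon: float

      class GranularLocationDB:
          def __init__(self):
              self._records = {}

          def update(self, record: LocationRecord) -> None:
              self._records[record.user_id] = record

          def query(self, user_id: str, level: int) -> tuple:
              """Return the stored position blurred according to the query level."""
              rec = self._records[user_id]
              digits = LEVEL_PRECISION[level]
              return (round(rec.lat, digits), round(rec.lon, digits))

      db = GranularLocationDB()
      db.update(LocationRecord("alice", 35.68123, 139.76712))
      print(db.query("alice", 1))   # coarse: (36.0, 140.0)
      print(db.query("alice", 3))   # fine:   (35.681, 139.767)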

  16. Scalable alignment and transfer of nanowires in a Spinning Langmuir Film.

    PubMed

    Zhu, Ren; Lai, Yicong; Nguyen, Vu; Yang, Rusen

    2014-10-21

    Many nanomaterial-based integrated nanosystems require the assembly of nanowires and nanotubes into ordered arrays. A generic alignment method should be simple and fast for the proof-of-concept study by a researcher, and low-cost and scalable for mass production in industries. Here we have developed a novel Spinning-Langmuir-Film technique to fulfill both requirements. We used surfactant-enhanced shear flow to align inorganic and organic nanowires, which could be easily transferred to other substrates and ready for device fabrication in less than 20 minutes. The aligned nanowire areal density can be controlled in a wide range from 16 mm(-2) to 258 mm(-2), through the compression of the film. The surface surfactant layer significantly influences the quality of alignment and has been investigated in detail.

  17. Development of a web application for water resources based on open source software

    NASA Astrophysics Data System (ADS)

    Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri P.

    2014-01-01

    This article presents research and development of a prototype web application for water resources using the latest advancements in Information and Communication Technologies (ICT), open source software and web GIS. The web application has three web services for: (1) managing, presenting and storing geospatial data, (2) support of water resources modeling and (3) water resources optimization. The web application is developed using several programming languages (PHP, Ajax, JavaScript, Java), libraries (OpenLayers, jQuery) and open source software components (GeoServer, PostgreSQL, PostGIS). The presented web application has several main advantages: it is available all the time, it is accessible from everywhere, it creates a real-time multi-user collaboration platform, the programming language code and components are interoperable and designed to work in a distributed computer environment, it is flexible for adding additional components and services, and it is scalable depending on the workload. The application was successfully tested on a case study with concurrent multi-user access.

  18. Middleware Case Study: MeDICi

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wynne, Adam S.

    2011-05-05

    In many application domains in science and engineering, data produced by sensors, instruments and networks is naturally processed by software applications structured as a pipeline. Pipelines comprise a sequence of software components that progressively process discrete units of data to produce a desired outcome. For example, in a Web crawler that is extracting semantics from text on Web sites, the first stage in the pipeline might be to remove all HTML tags to leave only the raw text of the document. The second step may parse the raw text to break it down into its constituent grammatical parts, such as nouns, verbs and so on. Subsequent steps may look for names of people or places, interesting events or times so documents can be sequenced on a time line. Each of these steps can be written as a specialized program that works in isolation with other steps in the pipeline. In many applications, simple linear software pipelines are sufficient. However, more complex applications require topologies that contain forks and joins, creating pipelines comprising branches where parallel execution is desirable. It is also increasingly common for pipelines to process very large files or high volume data streams which impose end-to-end performance constraints. Additionally, processes in a pipeline may have specific execution requirements and hence need to be distributed as services across a heterogeneous computing and data management infrastructure. From a software engineering perspective, these more complex pipelines become problematic to implement. While simple linear pipelines can be built using minimal infrastructure such as scripting languages, complex topologies and large, high volume data processing requires suitable abstractions, run-time infrastructures and development tools to construct pipelines with the desired qualities-of-service and flexibility to evolve to handle new requirements. The above summarizes the reasons we created the MeDICi Integration Framework (MIF) that is designed for creating high-performance, scalable and modifiable software pipelines. MIF exploits a low friction, robust, open source middleware platform and extends it with component and service-based programmatic interfaces that make implementing complex pipelines simple. The MIF run-time automatically handles queues between pipeline elements in order to handle request bursts, and automatically executes multiple instances of pipeline elements to increase pipeline throughput. Distributed pipeline elements are supported using a range of configurable communications protocols, and the MIF interfaces provide efficient mechanisms for moving data directly between two distributed pipeline elements.
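
    The pipeline idea described above can be illustrated with a short, generic sketch: three stages, each consuming the output of the previous one, applied to a document. The stage functions and sample text below are hypothetical and are not the MeDICi Integration Framework (MIF) API.

      # Generic linear text pipeline in the spirit described above; each stage
      # consumes the output of the previous one. Illustration only, not MIF.
      import re

      def strip_html(doc: str) -> str:
          """Stage 1: remove HTML tags, leaving only the raw text."""
          return re.sub(r"<[^>]+>", " ", doc)

      def tokenize(text: str) -> list:
          """Stage 2: break the raw text into word tokens."""
          return re.findall(r"[A-Za-z']+", text)

      def find_candidate_names(tokens: list) -> list:
          """Stage 3: crude name spotting: capitalized tokens not at sentence start."""
          return [t for t in tokens[1:] if t[0].isupper()]

      def run_pipeline(doc: str) -> list:
          stages = [strip_html, tokenize, find_candidate_names]
          data = doc
          for stage in stages:          # each unit of data flows through every stage
              data = stage(data)
          return data

      sample = "<p>The crawler visited Boston and met Ada Lovelace yesterday.</p>"
      print(run_pipeline(sample))       # ['Boston', 'Ada', 'Lovelace']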

  19. Piezoresistive Sensor with High Elasticity Based on 3D Hybrid Network of Sponge@CNTs@Ag NPs.

    PubMed

    Zhang, Hui; Liu, Nishuang; Shi, Yuling; Liu, Weijie; Yue, Yang; Wang, Siliang; Ma, Yanan; Wen, Li; Li, Luying; Long, Fei; Zou, Zhengguang; Gao, Yihua

    2016-08-31

    Pressure sensors with high elasticity are in great demand for the realization of intelligent sensing, but there is a need to develop a simple, inexpensive, and scalable method for the manufacture of the sensors. Here, we report an efficient, simple, facile, and repeatable "dipping and coating" process to manufacture a piezoresistive sensor with high elasticity, based on a homogeneous 3D hybrid network of carbon nanotubes@silver nanoparticles (CNTs@Ag NPs) anchored on a skeleton sponge. Highly elastic, sensitive, and wearable sensors are obtained using the porous structure of the sponge and the synergy effect of CNTs/Ag NPs. Our sensor was also tested for over 2000 compression-release cycles, exhibiting excellent elasticity and cycling stability. Sensors with high performance and a simple fabrication process are promising devices for commercial production in various electronic devices, for example, sport performance monitoring and man-machine interfaces.

  20. A scalable and operationally simple radical trifluoromethylation

    PubMed Central

    Beatty, Joel W.; Douglas, James J.; Cole, Kevin P.; Stephenson, Corey R. J.

    2015-01-01

    The large number of reagents that have been developed for the synthesis of trifluoromethylated compounds is a testament to the importance of the CF3 group as well as the associated synthetic challenge. Current state-of-the-art reagents for appending the CF3 functionality directly are highly effective; however, their use on preparative scale has minimal precedent because they require multistep synthesis for their preparation, and/or are prohibitively expensive for large-scale application. For a scalable trifluoromethylation methodology, trifluoroacetic acid and its anhydride represent an attractive solution in terms of cost and availability; however, because of the exceedingly high oxidation potential of trifluoroacetate, previous endeavours to use this material as a CF3 source have required the use of highly forcing conditions. Here we report a strategy for the use of trifluoroacetic anhydride for a scalable and operationally simple trifluoromethylation reaction using pyridine N-oxide and photoredox catalysis to effect a facile decarboxylation to the CF3 radical. PMID:26258541

  1. The Virtual Climate Data Server (vCDS): An iRODS-Based Data Management Software Appliance Supporting Climate Data Services and Virtualization-as-a-Service in the NASA Center for Climate Simulation

    NASA Technical Reports Server (NTRS)

    Schnase, John L.; Tamkin, Glenn S.; Ripley, W. David III; Stong, Savannah; Gill, Roger; Duffy, Daniel Q.

    2012-01-01

    Scientific data services are becoming an important part of the NASA Center for Climate Simulation's mission. Our technological response to this expanding role is built around the concept of a Virtual Climate Data Server (vCDS), repetitive provisioning, image-based deployment and distribution, and virtualization-as-a-service. The vCDS is an iRODS-based data server specialized to the needs of a particular data-centric application. We use RPM scripts to build vCDS images in our local computing environment, our local Virtual Machine Environment, NASA's Nebula Cloud Services, and Amazon's Elastic Compute Cloud. Once provisioned into one or more of these virtualized resource classes, vCDSs can use iRODS's federation capabilities to create an integrated ecosystem of managed collections that is scalable and adaptable to changing resource requirements. This approach enables platform- or software-as-a-service deployment of vCDS and allows the NCCS to offer virtualization-as-a-service: a capacity to respond in an agile way to new customer requests for data services.

  2. Gender differences in performance of script analysis by older adults.

    PubMed

    Helmes, E; Bush, J D; Pike, D L; Drake, D G

    2006-12-01

    Script analysis as a test of executive functions is presumed sensitive to cognitive changes seen with increasing age. Two studies evaluated whether gender differences exist in performance on scripts for familiar and unfamiliar tasks in groups of cognitively intact older adults. In Study 1, 26 older adults completed male and female stereotypical scripts. Results were not significant, but a tendency was present, with each gender making fewer impossible errors on the gender-typical script. Such an interaction was also noted in Study 2, which contrasted 50 older with 50 younger adults on three scripts, including a script with neutral familiarity. The pattern of significant interactions for errors suggested the need to use scripts that are based upon tasks that are equally familiar to both genders.

  3. Towards Scalable Graph Computation on Mobile Devices.

    PubMed

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2014-10-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach.
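
    A small sketch of the memory-mapping technique is given below: a hypothetical binary file of int32 (source, destination) edge pairs is memory-mapped and processed in slices, so the full edge list never has to fit in RAM. It illustrates the general approach rather than the authors' iOS/Android implementation, and the file name and layout are assumptions.

      # Sketch: out-degree counting over a memory-mapped binary edge list.
      # Assumes a file of int32 pairs (src, dst); the OS pages edges in on
      # demand, so the whole graph need not be loaded into memory at once.
      import numpy as np
      from collections import Counter

      def out_degrees(path: str, chunk_edges: int = 1_000_000) -> Counter:
          edges = np.memmap(path, dtype=np.int32, mode="r").reshape(-1, 2)
          degrees = Counter()
          for start in range(0, len(edges), chunk_edges):
              chunk = edges[start:start + chunk_edges]      # touches only this slice
              src, counts = np.unique(chunk[:, 0], return_counts=True)
              degrees.update(dict(zip(src.tolist(), counts.tolist())))
          return degrees

      if __name__ == "__main__":
          # Write a tiny demo graph, then compute degrees through the memmap.
          demo = np.array([[0, 1], [0, 2], [1, 2], [2, 0]], dtype=np.int32)
          demo.tofile("edges.bin")
          print(out_degrees("edges.bin"))   # Counter({0: 2, 1: 1, 2: 1})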

  4. Bio-inspired silicon nanospikes fabricated by metal-assisted chemical etching for antibacterial surfaces

    NASA Astrophysics Data System (ADS)

    Hu, Huan; Siu, Vince S.; Gifford, Stacey M.; Kim, Sungcheol; Lu, Minhua; Meyer, Pablo; Stolovitzky, Gustavo A.

    2017-12-01

    The recently discovered bactericidal properties of nanostructures on wings of insects such as cicadas and dragonflies have inspired the development of similar nanostructured surfaces for antibacterial applications. Since most antibacterial applications require nanostructures covering a considerable amount of area, a practical fabrication method needs to be cost-effective and scalable. However, most reported nanofabrication methods require either expensive equipment or a high temperature process, limiting cost efficiency and scalability. Here, we report a simple, fast, low-cost, and scalable antibacterial surface nanofabrication methodology. Our method is based on metal-assisted chemical etching that only requires etching a single crystal silicon substrate in a mixture of silver nitrate and hydrofluoric acid for several minutes. We experimentally studied the effects of etching time on the morphology of the silicon nanospikes and the bactericidal properties of the resulting surface. We discovered that 6 minutes of etching results in a surface containing silicon nanospikes with optimal geometry. The bactericidal properties of the silicon nanospikes were supported by bacterial plating results, fluorescence images, and scanning electron microscopy images.

  5. Towards Scalable Graph Computation on Mobile Devices

    PubMed Central

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2015-01-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach. PMID:25859564

  6. JBrowse: a dynamic web platform for genome visualization and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buels, Robert; Yao, Eric; Diesh, Colin M.

    JBrowse is a fast and full-featured genome browser built with JavaScript and HTML5. It is easily embedded into websites or apps but can also be served as a standalone web page. Overall improvements to speed and scalability are accompanied by specific enhancements that support complex interactive queries on large track sets. Analysis functions can readily be added using the plugin framework; most visual aspects of tracks can also be customized, along with clicks, mouseovers, menus, and popup boxes. JBrowse can also be used to browse local annotation files offline and to generate high-resolution figures for publication. JBrowse is a mature web application suitable for genome visualization and analysis.

  7. JBrowse: a dynamic web platform for genome visualization and analysis.

    PubMed

    Buels, Robert; Yao, Eric; Diesh, Colin M; Hayes, Richard D; Munoz-Torres, Monica; Helt, Gregg; Goodstein, David M; Elsik, Christine G; Lewis, Suzanna E; Stein, Lincoln; Holmes, Ian H

    2016-04-12

    JBrowse is a fast and full-featured genome browser built with JavaScript and HTML5. It is easily embedded into websites or apps but can also be served as a standalone web page. Overall improvements to speed and scalability are accompanied by specific enhancements that support complex interactive queries on large track sets. Analysis functions can readily be added using the plugin framework; most visual aspects of tracks can also be customized, along with clicks, mouseovers, menus, and popup boxes. JBrowse can also be used to browse local annotation files offline and to generate high-resolution figures for publication. JBrowse is a mature web application suitable for genome visualization and analysis.

  8. QUADrATiC: scalable gene expression connectivity mapping for repurposing FDA-approved therapeutics.

    PubMed

    O'Reilly, Paul G; Wen, Qing; Bankhead, Peter; Dunne, Philip D; McArt, Darragh G; McPherson, Suzanne; Hamilton, Peter W; Mills, Ken I; Zhang, Shu-Dong

    2016-05-04

    Gene expression connectivity mapping has proven to be a powerful and flexible tool for research. Its application has been shown in a broad range of research topics, most commonly as a means of identifying potential small molecule compounds, which may be further investigated as candidates for repurposing to treat diseases. The public release of voluminous data from the Library of Integrated Cellular Signatures (LINCS) programme further enhanced the utilities and potentials of gene expression connectivity mapping in biomedicine. We describe QUADrATiC (http://go.qub.ac.uk/QUADrATiC), a user-friendly tool for the exploration of gene expression connectivity on the subset of the LINCS data set corresponding to FDA-approved small molecule compounds. It enables the identification of compounds for repurposing therapeutic potentials. The software is designed to cope with the increased volume of data over existing tools, by taking advantage of multicore computing architectures to provide a scalable solution, which may be installed and operated on a range of computers, from laptops to servers. This scalability is provided by the use of the modern concurrent programming paradigm provided by the Akka framework. The QUADrATiC Graphical User Interface (GUI) has been developed using advanced JavaScript frameworks, providing novel visualization capabilities for further analysis of connections. There is also a web services interface, allowing integration with other programs or scripts. QUADrATiC has been shown to provide an improvement over existing connectivity map software, in terms of scope (based on the LINCS data set), applicability (using FDA-approved compounds), usability and speed. It offers biological researchers the potential to analyze transcriptional data and generate potential therapeutics for focussed study in the lab. QUADrATiC represents a step change in the process of investigating gene expression connectivity and provides more biologically-relevant results than previous alternative solutions.

  9. Everware toolkit. Supporting reproducible science and challenge-driven education.

    NASA Astrophysics Data System (ADS)

    Ustyuzhanin, A.; Head, T.; Babuschkin, I.; Tiunov, A.

    2017-10-01

    Modern science clearly demands a higher level of reproducibility and collaboration. To make research fully reproducible one has to take care of several aspects: research protocol description, data access, environment preservation, workflow pipeline, and analysis script preservation. Version control systems like git help with the workflow and analysis script parts. Virtualization techniques like Docker or Vagrant can help deal with environments. Jupyter notebooks are a powerful platform for conducting research in a collaborative manner. We present the Everware project, which seamlessly integrates git repository management systems such as GitHub or GitLab, Docker and Jupyter, helping with a) sharing the results of real research and b) boosting education activities. With the help of Everware one can share not only the final artifacts of research but also the full depth of the research process. This has been shown to be extremely helpful during the organization of several data analysis hackathons and machine learning schools. Using Everware, participants could start from an existing solution instead of starting from scratch, and could begin contributing immediately. Everware allows its users to make use of their own computational resources to run the workflows they are interested in, which leads to higher scalability of the toolkit.

  10. SLURM: Simple Linux Utility for Resource Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jette, M; Dunlap, C; Garlick, J

    2002-04-24

    Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, and scheduling modules. The design also includes a scalable, general-purpose communication infrastructure. Development will take place in four phases: Phase I results in a solid infrastructure; Phase II produces a functional but limited interactive job initiation capability without use of the interconnect/switch; Phase III provides switch support and documentation; Phase IV provides job status, fault-tolerance, and job queuing and control through Livermore's Distributed Production Control System (DPCS), a meta-batch and resource management system.

  11. The development of videos in culturally grounded drug prevention for rural native Hawaiian youth.

    PubMed

    Okamoto, Scott K; Helm, Susana; McClain, Latoya L; Dinson, Ay-Laina

    2012-12-01

    The purpose of this study was to adapt and validate narrative scripts to be used for the video components of a culturally grounded drug prevention program for rural Native Hawaiian youth. Scripts to be used to film short video vignettes of drug-related problem situations were developed based on a foundation of pre-prevention research funded by the National Institute on Drug Abuse. Seventy-four middle- and high-school-aged youth in 15 focus groups adapted and validated the details of the scripts to make them more realistic. Specifically, youth participants affirmed the situations described in the scripts and suggested changes to details of the scripts to make them more culturally specific. Suggested changes to the scripts also reflected preferred drug resistance strategies described in prior research, and varied based on the type of drug offerer described in each script (i.e., peer/friend, parent, or cousin/sibling). Implications for culturally grounded drug prevention are discussed.

  12. Trial-Based Functional Analysis Informs Treatment for Vocal Scripting.

    PubMed

    Rispoli, Mandy; Brodhead, Matthew; Wolfe, Katie; Gregori, Emily

    2018-05-01

    Research on trial-based functional analysis has primarily focused on socially maintained challenging behaviors. However, procedural modifications may be necessary to clarify ambiguous assessment results. The purposes of this study were to evaluate the utility of iterative modifications to trial-based functional analysis on the identification of putative reinforcement and subsequent treatment for vocal scripting. For all participants, modifications to the trial-based functional analysis identified a primary function of automatic reinforcement. The structure of the trial-based format led to identification of social attention as an abolishing operation for vocal scripting. A noncontingent attention treatment was evaluated using withdrawal designs for each participant. This noncontingent attention treatment resulted in near zero levels of vocal scripting for all participants. Implications for research and practice are presented.

  13. Arabic Script and the Rise of Arabic Calligraphy

    ERIC Educational Resources Information Center

    Alshahrani, Ali A.

    2008-01-01

    The aim of this paper is to present a concise, coherent literature review of the Arabic language script system as one of the oldest living Semitic languages in the world. The article first discusses in depth Arabic script as a phonemic, sound-based writing system of twenty-eight letters, a right-to-left cursive script where letterforms are shaped by their…

  14. Scripted Collaboration and Group-Based Variations in a Higher Education CSCL Context

    ERIC Educational Resources Information Center

    Hamalainen, Raija; Arvaja, Maarit

    2009-01-01

    Scripting student activities is one way to make Computer-Supported Collaborative Learning more efficient. This case study examines how scripting guided student group activities and also how different groups interpreted the script; what kinds of roles students adopted and what kinds of differences there were between the groups in terms of their…

  15. An End-to-End System to Enable Quick, Easy and Inexpensive Deployment of Hydrometeorological Stations

    NASA Astrophysics Data System (ADS)

    Celicourt, P.; Piasecki, M.

    2014-12-01

    The high cost of hydro-meteorological data acquisition, communication and publication systems, along with limited qualified human resources, is considered the main reason why hydro-meteorological data collection remains a challenge, especially in developing countries. Despite significant advances in sensor network technologies, which in the last two decades gave birth to open hardware and software and to low-cost (less than $50), low-power (on the order of a few milliwatts) sensor platforms, sensor and sensor network deployment remains a labor-intensive, time-consuming, cumbersome, and thus expensive task. These factors give rise to the need to develop an affordable, simple-to-deploy, scalable and self-organizing end-to-end (from sensor to publication) system suitable for deployment in such countries. The design of the envisioned system will consist of a few Sensed-And-Programmed Arduino-based sensor nodes with low-cost sensors measuring parameters relevant to hydrological processes and a Raspberry Pi micro-computer hosting the in-the-field back-end data management. The latter comprises the Python/Django model of the CUAHSI Observations Data Model (ODM), namely DjangODM, backed by a PostgreSQL Database Server. We are also developing a Python-based data processing script which will be paired with the data autoloading capability of Django to populate the DjangODM database with the incoming data. To publish the data, WOFpy (WaterOneFlow Web Services in Python), developed by the Texas Water Development Board for 'Water Data for Texas' and able to produce WaterML web services from a variety of back-end database installations such as SQLite, MySQL, and PostgreSQL, will be used. A step further would be the development of an appealing online visualization tool using Python statistics and analytics tools (SciPy, NumPy, Pandas) showing the spatial distribution of variables across an entire watershed as a time-variant layer on top of a basemap.
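
    The in-the-field ingest step can be sketched in simplified form: parse an incoming comma-separated sensor reading and store it in a local table whose columns loosely mirror an ODM-style data values record. The message format, field names, and SQLite back end below are assumptions made for illustration; the system described above targets DjangODM on PostgreSQL with WOFpy for publication.

      # Simplified ingest sketch: parse "site,variable,timestamp,value" lines
      # from a sensor node and store them in a table with ODM-like columns.
      # The message format and schema here are illustrative assumptions.
      import sqlite3
      from datetime import datetime

      SCHEMA = """CREATE TABLE IF NOT EXISTS data_values (
          site_code TEXT, variable_code TEXT, local_datetime TEXT, data_value REAL)"""

      def parse_reading(line: str) -> tuple:
          site, variable, stamp, value = line.strip().split(",")
          datetime.fromisoformat(stamp)        # validate the timestamp
          return site, variable, stamp, float(value)

      def ingest(lines, db_path="hydromet.db") -> int:
          con = sqlite3.connect(db_path)
          con.execute(SCHEMA)
          rows = [parse_reading(l) for l in lines]
          con.executemany("INSERT INTO data_values VALUES (?, ?, ?, ?)", rows)
          con.commit()
          con.close()
          return len(rows)

      incoming = ["STATION01,RainfallTot,2014-12-01T13:00:00,2.4",
                  "STATION01,AirTemp,2014-12-01T13:00:00,27.1"]
      print(ingest(incoming), "readings stored")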

  16. Automation of the CFD Process on Distributed Computing Systems

    NASA Technical Reports Server (NTRS)

    Tejnil, Ed; Gee, Ken; Rizk, Yehia M.

    2000-01-01

    A script system was developed to automate and streamline portions of the CFD process. The system was designed to facilitate the use of CFD flow solvers on supercomputer and workstation platforms within a parametric design event. Integrating solver pre- and postprocessing phases, the fully automated ADTT script system marshalled the required input data, submitted the jobs to available computational resources, and processed the resulting output data. A number of codes were incorporated into the script system, which itself was part of a larger integrated design environment software package. The IDE and scripts were used in a design event involving a wind tunnel test. This experience highlighted the need for efficient data and resource management in all parts of the CFD process. To facilitate the use of CFD methods to perform parametric design studies, the script system was developed using UNIX shell and Perl languages. The goal of the work was to minimize the user interaction required to generate the data necessary to fill a parametric design space. The scripts wrote out the required input files for the user-specified flow solver, transferred all necessary input files to the computational resource, submitted and tracked the jobs using the resource queuing structure, and retrieved and post-processed the resulting dataset. For computational resources that did not run queueing software, the script system established its own simple first-in-first-out queueing structure to manage the workload. A variety of flow solvers were incorporated in the script system, including INS2D, PMARC, TIGER and GASP. Adapting the script system to a new flow solver was made easier through the use of object-oriented programming methods. The script system was incorporated into an ADTT integrated design environment and evaluated as part of a wind tunnel experiment. The system successfully generated the data required to fill the desired parametric design space. This stressed the computational resources required to compute and store the information. The scripts were continually modified to improve the utilization of the computational resources and reduce the likelihood of data loss due to failures. An ad-hoc file server was created to manage the large amount of data being generated as part of the design event. Files were stored and retrieved as needed to create new jobs and analyze the results. Additional information is contained in the original.
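
    The original system was written in UNIX shell and Perl; the sketch below re-expresses only the ad-hoc first-in-first-out job handling in Python to show the idea. The job commands and fixed worker count are placeholders standing in for the real solver invocations and resource checks.

      # Sketch of a simple first-in-first-out job queue of the kind described
      # above for hosts without batch queueing software. Commands and worker
      # count are placeholders; the original system used Perl and UNIX shell.
      import queue
      import subprocess
      import threading

      def worker(jobs):
          while True:
              cmd = jobs.get()
              if cmd is None:                   # sentinel: no more work
                  jobs.task_done()
                  return
              subprocess.run(cmd, check=False)  # launch one solver run
              jobs.task_done()

      def run_fifo(commands, max_workers=2):
          jobs = queue.Queue()
          for cmd in commands:
              jobs.put(cmd)                     # jobs start in submission order
          for _ in range(max_workers):
              jobs.put(None)
          for _ in range(max_workers):
              threading.Thread(target=worker, args=(jobs,)).start()
          jobs.join()                           # wait until the queue drains

      # Hypothetical parametric sweep: one solver invocation per design point.
      run_fifo([["echo", f"solve case {i}"] for i in range(4)])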

  17. Generating Stimuli for Neuroscience Using PsychoPy.

    PubMed

    Peirce, Jonathan W

    2008-01-01

    PsychoPy is a software library written in Python, using OpenGL to generate very precise visual stimuli on standard personal computers. It is designed to allow the construction of as wide a variety of neuroscience experiments as possible, with the least effort. By writing scripts in standard Python syntax users can generate an enormous variety of visual and auditory stimuli and can interact with a wide range of external hardware (enabling its use in fMRI, EEG, MEG etc.). The structure of scripts is simple and intuitive. As a result, new experiments can be written very quickly, and trying to understand a previously written script is easy, even with minimal code comments. PsychoPy can also generate movies and image sequences to be used in demos or simulated neuroscience experiments. This paper describes the range of tools and stimuli that it provides and the environment in which experiments are conducted.
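
    A minimal script along the lines described above might look like the following sketch: it draws a drifting grating for two seconds and then waits for a keypress. The window size, stimulus parameters, and durations are arbitrary choices for illustration rather than recommended settings.

      # Minimal PsychoPy sketch: one drifting grating, then wait for a key.
      # Parameter values (window size, spatial frequency, duration) are
      # arbitrary illustrations.
      from psychopy import visual, core, event

      win = visual.Window(size=(800, 600), color="grey", units="pix")
      grating = visual.GratingStim(win, tex="sin", mask="gauss", size=256, sf=0.02)

      clock = core.Clock()
      while clock.getTime() < 2.0:          # drift the grating for two seconds
          grating.phase += 0.01             # advance the phase each frame
          grating.draw()
          win.flip()

      event.waitKeys()                      # block until any keypress
      win.close()
      core.quit()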

  18. Computer-Based Script Training for Aphasia: Emerging Themes from Post-Treatment Interviews

    ERIC Educational Resources Information Center

    Cherney, Leora R.; Halper, Anita S.; Kaye, Rosalind C.

    2011-01-01

    This study presents results of post-treatment interviews following computer-based script training for persons with chronic aphasia. Each of the 23 participants received 9 weeks of AphasiaScripts training. Post-treatment interviews were conducted with the person with aphasia and/or a significant other person. The 23 interviews yielded 584 coded…

  19. Increasing play-based commenting in children with autism spectrum disorder using a novel script-frame procedure.

    PubMed

    Groskreutz, Mark P; Peters, Amy; Groskreutz, Nicole C; Higbee, Thomas S

    2015-01-01

    Children with developmental disabilities may engage in less frequent and more repetitious language than peers with typical development. Scripts have been used to increase communication by teaching one or more specific statements and then fading the scripts. In the current study, preschoolers with developmental disabilities experienced a novel script-frame protocol and learned to make play-related comments about toys. After the script-frame protocol, commenting occurred in the absence of scripts, with untrained play activities, and included untrained comments. © Society for the Experimental Analysis of Behavior.

  20. XMDS2: Fast, scalable simulation of coupled stochastic partial differential equations

    NASA Astrophysics Data System (ADS)

    Dennis, Graham R.; Hope, Joseph J.; Johnsson, Mattias T.

    2013-01-01

    XMDS2 is a cross-platform, GPL-licensed, open source package for numerically integrating initial value problems that range from a single ordinary differential equation up to systems of coupled stochastic partial differential equations. The equations are described in a high-level XML-based script, and the package generates low-level optionally parallelised C++ code for the efficient solution of those equations. It combines the advantages of high-level simulations, namely fast and low-error development, with the speed, portability and scalability of hand-written code. XMDS2 is a complete redesign of the XMDS package, and features support for a much wider problem space while also producing faster code.

    Program summary:
    Program title: XMDS2
    Catalogue identifier: AENK_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENK_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License, version 2
    No. of lines in distributed program, including test data, etc.: 872490
    No. of bytes in distributed program, including test data, etc.: 45522370
    Distribution format: tar.gz
    Programming language: Python and C++.
    Computer: Any computer with a Unix-like system, a C++ compiler and Python.
    Operating system: Any Unix-like system; developed under Mac OS X and GNU/Linux.
    RAM: Problem dependent (roughly 50 bytes per grid point)
    Classification: 4.3, 6.5.
    External routines: The external libraries required are problem-dependent. Uses FFTW3 Fourier transforms (used only for FFT-based spectral methods), dSFMT random number generation (used only for stochastic problems), MPI message-passing interface (used only for distributed problems), HDF5, GNU Scientific Library (used only for Bessel-based spectral methods) and a BLAS implementation (used only for non-FFT-based spectral methods).
    Nature of problem: General coupled initial-value stochastic partial differential equations.
    Solution method: Spectral method with method-of-lines integration
    Running time: Determined by the size of the problem

  1. Evidence-Based Scripted Videos on Handling Student Misbehavior: The Development and Evaluation of Video Cases for Teacher Education

    ERIC Educational Resources Information Center

    Piwowar, Valentina; Barth, Victoria L.; Ophardt, Diemut; Thiel, Felicitas

    2018-01-01

    Scripted videos are based on a screenplay and are a viable and widely used tool for learning. Yet, reservations exist due to limited authenticity and high production costs. The present paper comprehensively describes a video production process for scripted videos on the topic of student misbehavior in the classroom. In a three step…

  2. A common base method for analysis of qPCR data and the application of simple blocking in qPCR experiments.

    PubMed

    Ganger, Michael T; Dietz, Geoffrey D; Ewing, Sarah J

    2017-12-01

    qPCR has established itself as the technique of choice for the quantification of gene expression. Procedures for conducting qPCR have received significant attention; however, more rigorous approaches to the statistical analysis of qPCR data are needed. Here we develop a mathematical model, termed the Common Base Method, for analysis of qPCR data based on threshold cycle values (Cq) and efficiencies of reactions (E). The Common Base Method keeps all calculations in the log scale as long as possible by working with log10(E) · Cq, which we call the efficiency-weighted Cq value; subsequent statistical analyses are then applied in the log scale. We show how efficiency-weighted Cq values may be analyzed using a simple paired or unpaired experimental design and develop blocking methods to help reduce unexplained variation. The Common Base Method has several advantages. It allows for the incorporation of well-specific efficiencies and multiple reference genes. The method does not necessitate the pairing of samples that must be performed using traditional analysis methods in order to calculate relative expression ratios. Our method is also simple enough to be implemented in any spreadsheet or statistical software without additional scripts or proprietary components.
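
    A small worked sketch of the efficiency-weighted value is given below: for each well the quantity log10(E) · Cq is computed, and group differences are then examined in the log scale. The example numbers, group labels, and the unpaired Welch-style comparison are illustrative assumptions, not the authors' data or complete procedure (reference-gene normalization is omitted).

      # Sketch of the efficiency-weighted Cq idea: work with w = log10(E) * Cq
      # and keep the statistics in the log scale. All numbers are made up.
      import math
      from statistics import mean, stdev

      def efficiency_weighted(cq_values, efficiencies):
          """Return w = log10(E) * Cq for each well (well-specific E allowed)."""
          return [math.log10(e) * cq for cq, e in zip(cq_values, efficiencies)]

      # Hypothetical target-gene wells for two unpaired groups: Cq values and per-well E.
      control = efficiency_weighted([24.1, 24.4, 23.9], [1.95, 1.93, 1.96])
      treated = efficiency_weighted([22.3, 22.0, 22.6], [1.94, 1.95, 1.92])

      # Initial template N0 is proportional to 10**(-w), so the difference
      # mean(w_control) - mean(w_treated) estimates log10(N0_treated / N0_control).
      log10_fold_change = mean(control) - mean(treated)
      se = math.sqrt(stdev(control)**2 / len(control) + stdev(treated)**2 / len(treated))
      print("log10 fold change (treated vs. control):", round(log10_fold_change, 3))
      print("approximate t statistic:", round(log10_fold_change / se, 2))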

  3. Scalability Issues for Remote Sensing Infrastructure: A Case Study

    PubMed Central

    Liu, Yang; Picard, Sean; Williamson, Carey

    2017-01-01

    For the past decade, a team of University of Calgary researchers has operated a large “sensor Web” to collect, analyze, and share scientific data from remote measurement instruments across northern Canada. This sensor Web receives real-time data streams from over a thousand Internet-connected sensors, with a particular emphasis on environmental data (e.g., space weather, auroral phenomena, atmospheric imaging). Through research collaborations, we had the opportunity to evaluate the performance and scalability of their remote sensing infrastructure. This article reports the lessons learned from our study, which considered both data collection and data dissemination aspects of their system. On the data collection front, we used benchmarking techniques to identify and fix a performance bottleneck in the system’s memory management for TCP data streams, while also improving system efficiency on multi-core architectures. On the data dissemination front, we used passive and active network traffic measurements to identify and reduce excessive network traffic from the Web robots and JavaScript techniques used for data sharing. While our results are from one specific sensor Web system, the lessons learned may apply to other scientific Web sites with remote sensing infrastructure. PMID:28468262

  4. LK Scripting Language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The LK scripting language is a simple and fast computer programming language designed for easy integration with existing software to enable automation of tasks. The LK language is used by NREL's System Advisor Model (SAM), the SAM Software Development Kit (SDK), and SolTrace products. LK is easily extensible and adaptable to new software due to its small footprint and is designed to be statically linked into other software. It is written in standard C++, is cross-platform (Windows, Linux, and OSX), and includes optional portions that enable direct integration with graphical user interfaces written in the open source C++ wxWidgets Version 3.0+ toolkit.

  5. Automation Hooks Architecture Trade Study for Flexible Test Orchestration

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin A.; Maclean, John R.; Graffagnino, Frank J.; McCartney, Patrick A.

    2010-01-01

    We describe the conclusions of a technology and communities survey supported by concurrent and follow-on proof-of-concept prototyping to evaluate the feasibility of defining a durable, versatile, reliable, visible software interface to support strategic modularization of test software development. The objective is that test sets and support software with diverse origins, ages, and abilities can be reliably integrated into test configurations that assemble and tear down and reassemble with scalable complexity in order to conduct both parametric tests and monitored trial runs. The resulting approach is based on integration of three recognized technologies that are currently gaining acceptance within the test industry and when combined provide a simple, open and scalable test orchestration architecture that addresses the objectives of the Automation Hooks task. The technologies are automated discovery using multicast DNS Zero Configuration Networking (zeroconf), commanding and data retrieval using resource-oriented RESTful Web Services, and XML data transfer formats based on Automatic Test Markup Language (ATML).
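
    The data-retrieval side of the approach can be sketched as follows: an ATML-style XML payload, of the kind a RESTful GET on a test resource might return, is parsed for a single measurement. The element and attribute names below are hypothetical and are not drawn from the ATML schemas or from any specific test-station interface.

      # Sketch: parse an ATML-style XML payload (as would be returned by a
      # RESTful GET on a test resource) for one named measurement.
      # Element and attribute names here are hypothetical.
      import xml.etree.ElementTree as ET

      SAMPLE_PAYLOAD = """
      <TestResult>
        <Measurement name="voltage1" units="V">4.98</Measurement>
      </TestResult>
      """

      def read_measurement(xml_text: str, name: str) -> float:
          doc = ET.fromstring(xml_text)
          for elem in doc.iter("Measurement"):
              if elem.get("name") == name:
                  return float(elem.text)
          raise KeyError(name)

      print("voltage1 =", read_measurement(SAMPLE_PAYLOAD, "voltage1"), "V")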

  6. Scalable graphene production: perspectives and challenges of plasma applications

    NASA Astrophysics Data System (ADS)

    Levchenko, Igor; Ostrikov, Kostya (Ken); Zheng, Jie; Li, Xingguo; Keidar, Michael; B. K. Teo, Kenneth

    2016-05-01

    Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the perspectives and challenges of scalability, equipment, and technological perspectives of the plasma-based techniques which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable for scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in a tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant potential of scalability for plasma-based technologies, based on the scaling-related process characteristics. Among other processes, a greater yield of 1 g × h(-1) m(-2) was reached for the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas based methods. Selected plasma-based techniques show lower energy consumption than in thermal CVD processes, and the ability to produce graphene flakes of various sizes reaching hundreds of square millimetres, and the thickness varying from a monolayer to 10-20 layers. Additional factors such as electrical voltage and current, not available in thermal CVD processes could potentially lead to better scalability, flexibility and control of the plasma-based processes. Advantages and disadvantages of various systems are also considered.

  7. Scalable graphene production: perspectives and challenges of plasma applications.

    PubMed

    Levchenko, Igor; Ostrikov, Kostya Ken; Zheng, Jie; Li, Xingguo; Keidar, Michael; B K Teo, Kenneth

    2016-05-19

    Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the perspectives and challenges of scalability, equipment, and technological perspectives of the plasma-based techniques which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable for scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in a tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant potential of scalability for plasma-based technologies, based on the scaling-related process characteristics. Among other processes, a greater yield of 1 g × h(-1) m(-2) was reached for the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas based methods. Selected plasma-based techniques show lower energy consumption than in thermal CVD processes, and the ability to produce graphene flakes of various sizes reaching hundreds of square millimetres, and the thickness varying from a monolayer to 10-20 layers. Additional factors such as electrical voltage and current, not available in thermal CVD processes could potentially lead to better scalability, flexibility and control of the plasma-based processes. Advantages and disadvantages of various systems are also considered.

  8. Automatic Reconstruction of 3D Building Models from Terrestrial Laser Scanner Data

    NASA Astrophysics Data System (ADS)

    El Meouche, R.; Rezoug, M.; Hijazi, I.; Maes, D.

    2013-11-01

    With modern 3D laser scanners we can acquire a large amount of 3D data in only a few minutes. This technology results in a growing number of applications ranging from the digitalization of historical artifacts to facial authentication. The modeling process demands a lot of time and work (Tim Volodine, 2007). In comparison with the other two stages, the acquisition and the registration, the degree of automation of the modeling stage is almost zero. In this paper, we propose a new surface reconstruction technique for buildings to process the data obtained by a 3D laser scanner. These data are called a point cloud, which is a collection of points sampled from the surface of a 3D object. Such a point cloud can consist of millions of points. In order to work more efficiently, we worked with simplified models which contain fewer points, and so less detail, than a point cloud obtained in situ. The goal of this study was to facilitate the modeling process of a building starting from 3D laser scanner data. In order to do this, we wrote two scripts for Rhinoceros 5.0 based on intelligent algorithms. The first script finds the exterior outline of a building. With a minimum of human interaction, a thin box is drawn around the surface of a wall. This box is able to rotate 360° around an axis in a corner of the wall in search of the points of other walls. In this way we can eliminate noise points, that is, unwanted or irrelevant points. If there is an angled roof, the box can also turn around the edge between the wall and the roof. With the different positions of the box we can calculate the exterior outline. The second script draws the interior outline in a surface of a building. By interior outline we mean the outline of openings such as windows or doors. This script is based on the distances between the points and on vector characteristics. Two consecutive points with a relatively big distance between them form the outline of an opening. Once those points are found, the interior outline can be drawn. For simple point clouds, the scripts are able to eliminate almost all noise points and reconstruct a CAD model.
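
    The second script's gap-based idea can be sketched simply: points sampled along a horizontal scan line across a wall are sorted, and any two consecutive points separated by much more than the sampling spacing are taken as the edges of an opening. The synthetic scan line and gap threshold below are illustrative; the actual scripts operate on Rhinoceros 5.0 point clouds and also use vector characteristics.

      # Sketch of opening (window/door) detection along one scan line of wall
      # points: a large gap between consecutive sorted points marks an opening.
      # The synthetic data and the gap threshold are illustrative assumptions.
      def find_openings(xs, gap_threshold):
          """Return (start, end) intervals where consecutive points are far apart."""
          xs = sorted(xs)
          return [(a, b) for a, b in zip(xs, xs[1:]) if b - a > gap_threshold]

      # Wall sampled every 0.05 m, with no points between 1.2 m and 2.0 m (a window).
      scan_line = [i * 0.05 for i in range(25)] + [2.0 + i * 0.05 for i in range(20)]
      print(find_openings(scan_line, gap_threshold=0.15))   # roughly [(1.2, 2.0)]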

  9. Towards a measurement of internalization of collaboration scripts in the medical context - results of a pilot study.

    PubMed

    Kiesewetter, Jan; Gluza, Martin; Holzer, Matthias; Saravo, Barbara; Hammitzsch, Laura; Fischer, Martin R

    2015-01-01

    Collaboration as a key qualification in medical education and everyday routine in clinical care can substantially contribute to improving patient safety. Internal collaboration scripts are conceptualized as organized - yet adaptive - knowledge that can be used in specific situations in professional everyday life. This study examines the level of internalization of collaboration scripts in medicine. Internalization is understood as fast retrieval of script information. The goals of the current study were the assessment of collaborative information, which is part of collaboration scripts, and the development of a methodology for measuring the level of internalization of collaboration scripts in medicine. For the contrastive comparison of internal collaboration scripts, 20 collaborative novices (medical students in their final year) and 20 collaborative experts (physicians with specialist degrees in internal medicine or anesthesiology) were included in the study. Eight typical medical collaborative situations as shown on a photo or video were presented to the participants for five seconds each. Afterwards, the participants were asked to describe what they saw on the photo or video. Based on the answers, the amount of information belonging to a collaboration script (script-information) was determined and the time each participant needed for answering was measured. In order to measure the level of internalization, script-information per recall time was calculated. As expected, collaborative experts stated significantly more script-information than collaborative novices. As well, collaborative experts showed a significantly higher level of internalization. Based on the findings of this research, we conclude that our instrument can discriminate between collaboration novices and experts. It therefore can be used to analyze measures to foster subject-specific competency in medical education.
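
    The internalization measure described above reduces to a simple ratio of recalled script-information to recall time. A minimal sketch, with invented numbers rather than study data, is:

        # Level of internalization as defined in the study: amount of collaboration
        # script-information stated, divided by the time needed to answer.
        # The example values below are illustrative only.
        def internalization(script_info_units, recall_time_seconds):
            return script_info_units / recall_time_seconds

        expert = internalization(script_info_units=9, recall_time_seconds=12.0)
        novice = internalization(script_info_units=4, recall_time_seconds=15.0)
        print(f"expert: {expert:.2f} units/s, novice: {novice:.2f} units/s")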

  10. Types and Characteristics of Fish and Seafood Provisioning Scripts Used by Rural Midlife Adults.

    PubMed

    Bostic, Stephanie M; Sobal, Jeffery; Bisogni, Carole A; Monclova, Juliet M

    To examine rural New York State consumers' cognitive scripts for fish and seafood provisioning. A cross-sectional design with in-depth, semistructured interviews. Three rural New York State counties. Adults (n = 31) with diverse fish-related experiences were purposefully recruited. Scripts describing fish and seafood acquisition, preparation, and eating out. Interview transcripts were coded for emergent themes using Atlas.ti. Diagrams of scripts for each participant were constructed. Five types of acquisition scripts included quality-oriented, price-oriented, routine, special occasion, and fresh catch. Frequently used preparation scripts included everyday cooking, fast meal, entertaining, and grilling. Scripts for eating out included fish as first choice, Friday outing, convenient meals, special event, and travel meals. Personal values and resources influenced script development. Individuals drew on a repertoire of scripts based on their goals and resources at that time and in that place. Script characteristics of scope, flexibility, and complexity varied widely. Scripts incorporated goals, values, and resources into routine food behaviors. Understanding the characteristics of scripts provided insights about fish provisioning and opportunities to reduce the gap between current intake and dietary guidelines in this rural setting. Copyright © 2017 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  11. JBrowse: A dynamic web platform for genome visualization and analysis

    DOE PAGES

    Buels, Robert; Yao, Eric; Diesh, Colin M.; ...

    2016-04-12

    Background: JBrowse is a fast and full-featured genome browser built with JavaScript and HTML5. It is easily embedded into websites or apps but can also be served as a standalone web page. Results: Overall improvements to speed and scalability are accompanied by specific enhancements that support complex interactive queries on large track sets. Analysis functions can readily be added using the plugin framework; most visual aspects of tracks can also be customized, along with clicks, mouseovers, menus, and popup boxes. JBrowse can also be used to browse local annotation files offline and to generate high-resolution figures for publication. Conclusions: JBrowse is a mature web application suitable for genome visualization and analysis.

  12. SLURM: Simple Linux Utility for Resource Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jette, M; Grondona, M

    2002-12-19

    Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling and stream copy modules. This paper presents an overview of the SLURM architecture and functionality.

  13. Leveraging Globus to Support Access and Delivery of Scientific Data

    NASA Astrophysics Data System (ADS)

    Cram, T.; Schuster, D.; Ji, Z.; Worley, S. J.

    2015-12-01

    The NCAR Research Data Archive (RDA; http://rda.ucar.edu) contains a large and diverse collection of meteorological and oceanographic observations, operational and reanalysis outputs, and remote sensing datasets to support atmospheric and geoscience research. The RDA contains more than 600 dataset collections which support the varying needs of a diverse user community. The number of RDA users is increasing annually, and the most popular method used to access the RDA data holdings is through web-based protocols, such as wget- and cURL-based scripts. In 2014, 11,000 unique users downloaded more than 1.1 petabytes of data from the RDA, and customized data products were prepared for more than 45,000 user-driven requests. To further support this increase in web download usage, the RDA has implemented the Globus data transfer service (www.globus.org) to provide a GridFTP data transfer option for the user community. The Globus service is broadly scalable, has an easy-to-install client, is sustainably supported, and provides a robust, efficient, and reliable data transfer option for the research community. This presentation will highlight the technical functionality, challenges, and usefulness of the Globus data transfer service for accessing the RDA data holdings.

  14. R suite for the Reduction and Analysis of UFO Orbit Data

    NASA Astrophysics Data System (ADS)

    Campbell-Burns, P.; Kacerek, R.

    2016-02-01

    This paper presents work undertaken by UKMON to compile a suite of simple R scripts for the reduction and analysis of meteor data. The application of R in this context is by no means an original idea and there is no doubt that it has been used already in many reports to the IMO. However, we are unaware of any common libraries or shared resources available to the meteor community. By sharing our work we hope to stimulate interest and discussion. Graphs shown in this paper are illustrative and are based on current data from both EDMOND and UKMON.

  15. A programmable rules engine to provide clinical decision support using HTML forms.

    PubMed

    Heusinkveld, J; Geissbuhler, A; Sheshelidze, D; Miller, R

    1999-01-01

    The authors have developed a simple method for specifying rules to be applied to information on HTML forms. This approach allows clinical experts, who lack the programming expertise needed to write CGI scripts, to construct and maintain domain-specific knowledge and ordering capabilities within WizOrder, the order-entry and decision support system used at Vanderbilt Hospital. The clinical knowledge base maintainers use HTML editors to create forms and spreadsheet programs for rule entry. A test environment has been developed which uses Netscape to display forms; the production environment displays forms using an embedded browser.
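
    The general idea of a table-driven rules engine applied to submitted form values can be sketched as follows; this is a conceptual illustration in Python, not WizOrder's actual rule format, and the field names, thresholds, and messages are invented:

        # Conceptual sketch of a table-driven rules engine for form data.
        # Each rule is (field, operator, threshold, message); domain experts could
        # maintain such rows in a spreadsheet, and the engine applies them to the
        # values submitted from an HTML form. All names and values are invented.
        OPS = {">": lambda a, b: a > b, "<": lambda a, b: a < b, "==": lambda a, b: a == b}

        RULES = [
            ("serum_potassium", ">", 5.5, "High potassium: review potassium-containing orders."),
            ("creatinine", ">", 2.0, "Elevated creatinine: consider renal dose adjustment."),
        ]

        def apply_rules(form_values, rules=RULES):
            """Return the advisory messages for every rule triggered by the form."""
            alerts = []
            for field, op, threshold, message in rules:
                value = form_values.get(field)
                if value is not None and OPS[op](float(value), threshold):
                    alerts.append(message)
            return alerts

        # Values as they might arrive from an HTML form submission (all strings).
        print(apply_rules({"serum_potassium": "5.9", "creatinine": "1.1"}))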

  16. A Simple Picaxe Microcontroller Pulse Source for Juxtacellular Neuronal Labelling.

    PubMed

    Verberne, Anthony J M

    2016-10-19

    Juxtacellular neuronal labelling is a method which allows neurophysiologists to fill physiologically-identified neurons with small positively-charged marker molecules. Labelled neurons are identified by histochemical processing of brain sections along with immunohistochemical identification of neuropeptides, neurotransmitters, neurotransmitter transporters or biosynthetic enzymes. A microcontroller-based pulser circuit and associated BASIC software script is described for incorporation into the design of a commercially-available intracellular electrometer for use in juxtacellular neuronal labelling. Printed circuit board construction has been used for reliability and reproducibility. The current design obviates the need for a separate digital pulse source and simplifies the juxtacellular neuronal labelling procedure.

  17. Scalable lithography from Natural DNA Patterns via polyacrylamide gel

    NASA Astrophysics Data System (ADS)

    Qu, Jiehao; Hou, Xianliang; Fan, Wanchao; Xi, Guanghui; Diao, Hongyan; Liu, Xiangdon

    2015-12-01

    A facile strategy for fabricating scalable stamps has been developed using cross-linked polyacrylamide gel (PAMG) that controllably and precisely shrinks and swells with water content. Aligned patterns of natural DNA molecules were prepared by evaporative self-assembly on a PMMA substrate, and were transferred to unsaturated polyester resin (UPR) to form a negative replica. The negative was used to pattern the linear structures onto the surface of water-swollen PAMG, and the pattern sizes on the PAMG stamp were customized by adjusting the water content of the PAMG. As a result, consistent reproduction of DNA patterns could be achieved with feature sizes that can be controlled over the range of 40%-200% of the original pattern dimensions. This methodology is novel and may pave a new avenue for manufacturing stamp-based functional nanostructures in a simple and cost-effective manner on a large scale.

  18. QR Codes: Outlook for Food Science and Nutrition.

    PubMed

    Sanz-Valero, Javier; Álvarez Sabucedo, Luis M; Wanden-Berghe, Carmina; Santos Gago, Juan M

    2016-01-01

    QR codes open up the possibility to develop simple-to-use, cost-effective, and functional systems based on the optical recognition of inexpensive tags attached to physical objects. These systems, combined with Web platforms, can provide us with advanced services that are already broadly used in many contexts of everyday life. Due to its philosophy, based on the automatic recognition of messages embedded in simple graphics by means of common devices such as mobile phones, the QR code is very convenient for the average user. Regrettably, its potential has not yet been fully exploited in the domains of food science and nutrition. This paper points out some applications to make the most of this technology for these domains in a straightforward manner. For its characteristics, we are addressing systems with low barriers to entry and high scalability for their deployment. Therefore, their launch among professional and final users is quite simple. The paper also provides high-level indications for the evaluation of the technological frame required to implement the identified possibilities of use.
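
    As an illustration of how little code such optical tags require, a tag encoding a hypothetical nutrition-information URL can be generated with the third-party qrcode package; the URL and output filename are placeholders:

        # Generate a QR code image for a (hypothetical) product-information URL.
        # Requires the third-party packages qrcode and Pillow.
        import qrcode

        url = "https://example.org/products/12345/nutrition"   # placeholder URL
        img = qrcode.make(url)          # returns an image object with the encoded tag
        img.save("nutrition_tag.png")   # image to be printed on the product label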

  19. p3d--Python module for structural bioinformatics.

    PubMed

    Fufezan, Christian; Specht, Michael

    2009-08-21

    High-throughput bioinformatic analysis tools are needed to mine the large amount of structural data via knowledge-based approaches. The development of such tools requires a robust interface to access the structural data in an easy way. For this the Python scripting language is the optimal choice, since its philosophy is to write understandable source code. p3d is an object-oriented Python module that adds a simple yet powerful interface to the Python interpreter to process and analyse three-dimensional protein structure files (PDB files). p3d's strength arises from the combination of (a) very fast spatial access to the structural data due to the implementation of a binary space partitioning (BSP) tree, (b) set theory and (c) functions that allow combining (a) and (b) and that use human-readable language in the search queries rather than complex computer language. All these factors combined facilitate the rapid development of bioinformatic tools that can perform quick and complex analyses of protein structures. p3d is the perfect tool to quickly develop tools for structural bioinformatics using the Python scripting language.
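
    The kind of query p3d enables, combining spatial proximity with set-like selections, can be approximated in plain Python; the sketch below parses ATOM records directly and uses a brute-force distance search rather than p3d's BSP tree, and none of the names come from the p3d API:

        # Conceptual sketch only: select all atoms within a cutoff of a chosen
        # residue by brute force. The input filename is a placeholder.
        import math

        def parse_atoms(pdb_path):
            """Yield (atom_name, res_name, res_num, chain, x, y, z) from ATOM/HETATM records."""
            with open(pdb_path) as fh:
                for line in fh:
                    if line.startswith(("ATOM", "HETATM")):
                        yield (line[12:16].strip(), line[17:20].strip(),
                               int(line[22:26]), line[21].strip(),
                               float(line[30:38]), float(line[38:46]), float(line[46:54]))

        def atoms_near(atoms, center, cutoff=5.0):
            return [a for a in atoms if math.dist(a[4:7], center) <= cutoff]

        atoms = list(parse_atoms("example.pdb"))                   # hypothetical input
        site = [a for a in atoms if a[2] == 57 and a[3] == "A"]    # residue 57, chain A
        if site:
            print(len(atoms_near(atoms, site[0][4:7])), "atoms within 5 Å")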

  20. TREE2FASTA: a flexible Perl script for batch extraction of FASTA sequences from exploratory phylogenetic trees.

    PubMed

    Sauvage, Thomas; Plouviez, Sophie; Schmidt, William E; Fredericq, Suzanne

    2018-03-05

    The body of DNA sequence data lacking taxonomically informative sequence headers is rapidly growing in user and public databases (e.g. sequences lacking identification and contaminants). In the context of systematics studies, sorting such sequence data for taxonomic curation and/or molecular diversity characterization (e.g. crypticism) often requires the building of exploratory phylogenetic trees with reference taxa. The subsequent step of segregating DNA sequences of interest based on observed topological relationships can represent a challenging task, especially for large datasets. We have written TREE2FASTA, a Perl script that enables and expedites the sorting of FASTA-formatted sequence data from exploratory phylogenetic trees. TREE2FASTA takes advantage of the interactive, rapid point-and-click color selection and/or annotations of tree leaves in the popular Java tree-viewer FigTree to segregate groups of FASTA sequences of interest to separate files. TREE2FASTA allows for both simple and nested segregation designs to facilitate the simultaneous preparation of multiple data sets that may overlap in sequence content.
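
    The underlying operation, segregating FASTA records into separate files according to groups read off a tree, can be sketched briefly. This is a conceptual Python illustration, not the TREE2FASTA Perl script; in practice the group assignments would come from the colored or annotated leaves exported from FigTree:

        # Conceptual sketch: write each FASTA record to a file named after the
        # group its identifier belongs to. Group names and filenames are invented.
        def read_fasta(path):
            header, seq = None, []
            with open(path) as fh:
                for line in fh:
                    line = line.rstrip()
                    if line.startswith(">"):
                        if header is not None:
                            yield header, "".join(seq)
                        header, seq = line[1:], []
                    else:
                        seq.append(line)
            if header is not None:
                yield header, "".join(seq)

        def split_by_group(fasta_path, groups):
            """groups maps a sequence identifier to a group name (e.g. a clade label)."""
            for header, seq in read_fasta(fasta_path):
                group = groups.get(header.split()[0], "unassigned")
                with open(f"{group}.fasta", "a") as out:
                    out.write(f">{header}\n{seq}\n")

        # Hypothetical example: two clades picked off an exploratory tree.
        split_by_group("all_sequences.fasta", {"seq1": "cladeA", "seq2": "cladeB"})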

  1. PyRosetta: a script-based interface for implementing molecular modeling algorithms using Rosetta.

    PubMed

    Chaudhury, Sidhartha; Lyskov, Sergey; Gray, Jeffrey J

    2010-03-01

    PyRosetta is a stand-alone Python-based implementation of the Rosetta molecular modeling package that allows users to write custom structure prediction and design algorithms using the major Rosetta sampling and scoring functions. PyRosetta contains Python bindings to libraries that define Rosetta functions including those for accessing and manipulating protein structure, calculating energies and running Monte Carlo-based simulations. PyRosetta can be used in two ways: (i) interactively, using iPython and (ii) script-based, using Python scripting. Interactive mode contains a number of help features and is ideal for beginners while script-mode is best suited for algorithm development. PyRosetta has similar computational performance to Rosetta, can be easily scaled up for cluster applications and has been implemented for algorithms demonstrating protein docking, protein folding, loop modeling and design. PyRosetta is a stand-alone package available at http://www.pyrosetta.org under the Rosetta license which is free for academic and non-profit users. A tutorial, user's manual and sample scripts demonstrating usage are also available on the web site.
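
    A minimal interactive-style session of the kind described might look like the sketch below. It assumes a licensed PyRosetta 4-style installation and a local PDB file; the exact import layout can differ between releases:

        # Minimal PyRosetta sketch (assumes PyRosetta 4 is installed and licensed).
        # The PDB filename is a placeholder.
        import pyrosetta

        pyrosetta.init()                                  # start the Rosetta runtime
        pose = pyrosetta.pose_from_pdb("my_protein.pdb")  # load a structure into a Pose
        print(pose.total_residue(), "residues")
        print(pose.sequence())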

  2. PyRosetta: a script-based interface for implementing molecular modeling algorithms using Rosetta

    PubMed Central

    Chaudhury, Sidhartha; Lyskov, Sergey; Gray, Jeffrey J.

    2010-01-01

    Summary: PyRosetta is a stand-alone Python-based implementation of the Rosetta molecular modeling package that allows users to write custom structure prediction and design algorithms using the major Rosetta sampling and scoring functions. PyRosetta contains Python bindings to libraries that define Rosetta functions including those for accessing and manipulating protein structure, calculating energies and running Monte Carlo-based simulations. PyRosetta can be used in two ways: (i) interactively, using iPython and (ii) script-based, using Python scripting. Interactive mode contains a number of help features and is ideal for beginners while script-mode is best suited for algorithm development. PyRosetta has similar computational performance to Rosetta, can be easily scaled up for cluster applications and has been implemented for algorithms demonstrating protein docking, protein folding, loop modeling and design. Availability: PyRosetta is a stand-alone package available at http://www.pyrosetta.org under the Rosetta license which is free for academic and non-profit users. A tutorial, user's manual and sample scripts demonstrating usage are also available on the web site. Contact: pyrosetta@graylab.jhu.edu PMID:20061306

  3. Couple decision making and use of cultural scripts in Malawi.

    PubMed

    Mbweza, Ellen; Norr, Kathleen F; McElmurry, Beverly

    2008-01-01

    To examine the decision-making processes of husband and wife dyads in matrilineal and patrilineal marriage traditions of Malawi in the areas of money, food, pregnancy, contraception, and sexual relations. Qualitative grounded theory using simultaneous interviews of 60 husbands and wives (30 couples). Data were analyzed according to the guidelines of simultaneous data collection and analysis. The analysis resulted in development of core categories and categories of decision-making process. Data matrixes were used to identify similarities and differences within couples and across cases. Most couples reported using a mix of final decision-making approaches: husband-dominated, wife-dominated, and shared. Gender based and nongender based cultural scripts provided rationales for their approaches to decision making. Gender based cultural scripts (husband-dominant and wife-dominant) were used to justify decision-making approaches. Non-gender based cultural scripts (communicating openly, maintaining harmony, and children's welfare) supported shared decision making. Gender based cultural scripts were used in decision making more often among couples from the district with a patrilineal marriage tradition and where the husband had less than secondary school education and was not formally employed. Nongender based cultural scripts to encourage shared decision making can be used in designing culturally tailored reproductive health interventions for couples. Nurses who work with women and families should be aware of the variations that occur in actual couple decision-making approaches. Shared decision making can be used to encourage the involvement of men in reproductive health programs.

  4. MedlinePlus FAQ: RSS Service

    MedlinePlus

    ... rss.html Question: Do you have a Really Simple Syndication (RSS) feed for MedlinePlus? Answer: MedlinePlus offers a variety of RSS feeds to suit your particular interests. You can subscribe to general interest feeds that ...

  5. Microfluidization of Graphite and Formulation of Graphene-Based Conductive Inks

    PubMed Central

    2017-01-01

    We report the exfoliation of graphite in aqueous solutions under high shear rate [∼10⁸ s⁻¹] turbulent flow conditions, with a 100% exfoliation yield. The material is stabilized without centrifugation at concentrations up to 100 g/L using carboxymethylcellulose sodium salt to formulate conductive printable inks. The sheet resistance of blade-coated films is below ∼2 Ω/□. This is a simple and scalable production route for conductive inks for large-area printing in flexible electronics. PMID:28102670

  6. HPLC-Assisted Automated Oligosaccharide Synthesis: Implementation of the Autosampler as a Mode of the Reagent Delivery.

    PubMed

    Pistorio, Salvatore G; Nigudkar, Swati S; Stine, Keith J; Demchenko, Alexei V

    2016-10-07

    The development of a useful methodology for simple, scalable, and transformative automation of oligosaccharide synthesis that easily interfaces with existing methods is reported. The automated synthesis can now be performed using accessible equipment where the reactants and reagents are delivered by the pump or the autosampler and the reactions can be monitored by the UV detector. The HPLC-based platform for automation is easy to setup and adapt to different systems and targets.

  7. TagDigger: user-friendly extraction of read counts from GBS and RAD-seq data.

    PubMed

    Clark, Lindsay V; Sacks, Erik J

    2016-01-01

    In genotyping-by-sequencing (GBS) and restriction site-associated DNA sequencing (RAD-seq), read depth is important for assessing the quality of genotype calls and estimating allele dosage in polyploids. However, existing pipelines for GBS and RAD-seq do not provide read counts in formats that are both accurate and easy to access. Additionally, although existing pipelines allow previously-mined SNPs to be genotyped on new samples, they do not allow the user to manually specify a subset of loci to examine. Pipelines that do not use a reference genome assign arbitrary names to SNPs, making meta-analysis across projects difficult. We created the software TagDigger, which includes three programs for analyzing GBS and RAD-seq data. The first script, tagdigger_interactive.py, rapidly extracts read counts and genotypes from FASTQ files using user-supplied sets of barcodes and tags. Input and output is in CSV format so that it can be opened by spreadsheet software. Tag sequences can also be imported from the Stacks, TASSEL-GBSv2, TASSEL-UNEAK, or pyRAD pipelines, and a separate file can be imported listing the names of markers to retain. A second script, tag_manager.py, consolidates marker names and sequences across multiple projects. A third script, barcode_splitter.py, assists with preparing FASTQ data for deposit in a public archive by splitting FASTQ files by barcode and generating MD5 checksums for the resulting files. TagDigger is open-source and freely available software written in Python 3. It uses a scalable, rapid search algorithm that can process over 100 million FASTQ reads per hour. TagDigger will run on a laptop with any operating system, does not consume hard drive space with intermediate files, and does not require programming skill to use.
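
    The core counting step can be illustrated with a short sketch: reads whose 5' end matches a barcode followed by a tag sequence are tallied per sample and marker. This is a conceptual illustration, not TagDigger's actual implementation or file formats, and the barcode and tag values are invented:

        # Conceptual sketch of tag counting in FASTQ reads: a read is assigned to a
        # sample by its barcode and to a marker by the tag that follows the barcode.
        from collections import Counter
        import gzip

        BARCODES = {"ACGT": "sample1", "TGCA": "sample2"}
        TAGS = {"CAGCTTAGG": "marker1_allele0", "CAGCTTAGA": "marker1_allele1"}

        def count_tags(fastq_path):
            counts = Counter()
            opener = gzip.open if fastq_path.endswith(".gz") else open
            with opener(fastq_path, "rt") as fh:
                for i, line in enumerate(fh):
                    if i % 4 != 1:          # sequence lines are every fourth line
                        continue
                    read = line.strip()
                    for bc, sample in BARCODES.items():
                        if read.startswith(bc):
                            for tag, marker in TAGS.items():
                                if read.startswith(bc + tag):
                                    counts[(sample, marker)] += 1
                            break
            return counts

        print(count_tags("lane1.fastq.gz"))   # hypothetical input file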

  8. Novel Technology for Treating Individuals with Aphasia and Concomitant Cognitive Deficits

    PubMed Central

    Cherney, Leora R.; Halper, Anita S.

    2009-01-01

    Purpose This article describes three individuals with aphasia and concomitant cognitive deficits who used state-of-the-art computer software for training conversational scripts. Method Participants were assessed before and after 9 weeks of a computer script training program. For each participant, three individualized scripts were developed, recorded on the software, and practiced sequentially at home. Weekly meetings with the speech-language pathologist occurred to monitor practice and assess progress. Baseline and posttreatment scripts were audiotaped, transcribed, and compared to the target scripts for content, grammatical productivity, and rate of production of script-related words. Interviews were conducted at the conclusion of treatment. Results There was great variability in improvements across scripts, with two participants improving on two of their three scripts in measures of content, grammatical productivity, and rate of production of script-related words. One participant gained more than 5 points on the Aphasia Quotient of the Western Aphasia Battery. Five positive themes were consistently identified from exit interviews: increased verbal communication, improvements in other modalities and situations, communication changes noticed by others, increased confidence, and satisfaction with the software. Conclusion Computer-based script training potentially may be an effective intervention for persons with chronic aphasia and concomitant cognitive deficits. PMID:19158062

  9. Collaboration Scripts for Enhancing Metacognitive Self-Regulation and Mathematics Literacy

    ERIC Educational Resources Information Center

    Chen, Cheng-Huan; Chiu, Chiung-Hui

    2016-01-01

    This study designed a set of computerized collaboration scripts for multi-touch supported collaborative design-based learning and evaluated its effects on multiple aspects of metacognitive self-regulation in terms of planning and controlling and mathematical literacy achievement at higher and lower levels. The computerized scripts provided a…

  10. SeqDepot: streamlined database of biological sequences and precomputed features.

    PubMed

    Ulrich, Luke E; Zhulin, Igor B

    2014-01-15

    Assembling and/or producing integrated knowledge of sequence features continues to be an onerous and redundant task despite a large number of existing resources. We have developed SeqDepot, a novel database that focuses solely on two primary goals: (i) assimilating known primary sequences with predicted feature data and (ii) providing the most simple and straightforward means to procure and readily use this information. Access to >28.5 million sequences and 300 million features is provided through a well-documented and flexible RESTful interface that supports fetching specific data subsets, bulk queries, visualization and searching by MD5 digests or external database identifiers. We have also developed an HTML5/JavaScript web application exemplifying how to interact with SeqDepot and Perl/Python scripts for use with local processing pipelines. Freely available on the web at http://seqdepot.net/. REST access via http://seqdepot.net/api/v1. Database files and scripts may be downloaded from http://seqdepot.net/download.
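
    Because sequences are keyed by MD5 digests, a client can compute the digest locally and use it as the lookup key. The digest computation below uses only the Python standard library; the exact resource path appended to http://seqdepot.net/api/v1 is an assumption for illustration, not documented SeqDepot usage:

        # Compute the MD5 digest of a protein sequence for use as a lookup key.
        # The URL construction at the end is hypothetical.
        import hashlib

        def sequence_md5(seq):
            """MD5 hex digest of an upper-cased, whitespace-free sequence string."""
            clean = "".join(seq.split()).upper()
            return hashlib.md5(clean.encode("ascii")).hexdigest()

        digest = sequence_md5("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")  # example fragment
        print(digest)
        print(f"http://seqdepot.net/api/v1/{digest}")   # hypothetical endpoint path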

  11. Automatic script identification from images using cluster-based templates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hochberg, J.; Kerns, L.; Kelly, P.

    We have developed a technique for automatically identifying the script used to generate a document that is stored electronically in bit image form. Our approach differs from previous work in that the distinctions among scripts are discovered by an automatic learning procedure, without any hands-on analysis. We first develop a set of representative symbols (templates) for each script in our database (Cyrillic, Roman, etc.). We do this by identifying all textual symbols in a set of training documents, scaling each symbol to a fixed size, clustering similar symbols, pruning minor clusters, and finding each cluster's centroid. To identify a new document's script, we identify and scale a subset of symbols from the document and compare them to the templates for each script. We choose the script whose templates provide the best match. Our current system distinguishes among the Armenian, Burmese, Chinese, Cyrillic, Ethiopic, Greek, Hebrew, Japanese, Korean, Roman, and Thai scripts with over 90% accuracy.
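
    The matching step of this approach can be sketched generically: rescaled document symbols are compared with each script's stored cluster centroids, and the script whose templates give the best cumulative match is chosen. The NumPy sketch below uses random stand-in templates and is a generic illustration, not the system described above:

        # Generic sketch of template-based script identification: each script has a
        # set of centroid templates (fixed-size binary symbol images); a document is
        # assigned to the script whose templates best match its symbols.
        import numpy as np

        rng = np.random.default_rng(0)
        TEMPLATES = {                      # script name -> (n_templates, 16, 16) array
            "Roman":    rng.random((40, 16, 16)) > 0.5,
            "Cyrillic": rng.random((40, 16, 16)) > 0.5,
        }

        def best_match_distance(symbol, templates):
            """Smallest Hamming distance between a 16x16 symbol and any template."""
            return int(np.min(np.sum(templates != symbol, axis=(1, 2))))

        def identify_script(symbols):
            """symbols: iterable of 16x16 boolean arrays sampled from the document."""
            scores = {name: sum(best_match_distance(s, t) for s in symbols)
                      for name, t in TEMPLATES.items()}
            return min(scores, key=scores.get)    # script with the lowest total distance

        doc_symbols = [rng.random((16, 16)) > 0.5 for _ in range(25)]
        print(identify_script(doc_symbols))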

  12. Tools for Integrating Data Access from the IRIS DMC into Research Workflows

    NASA Astrophysics Data System (ADS)

    Reyes, C. G.; Suleiman, Y. Y.; Trabant, C.; Karstens, R.; Weertman, B. R.

    2012-12-01

    Web service interfaces at the IRIS Data Management Center (DMC) provide access to a vast archive of seismological and related geophysical data. These interfaces are designed to easily incorporate data access into data processing workflows. Examples of data that may be accessed include time series data, related metadata, and earthquake information. The DMC has developed command line scripts, MATLAB® interfaces and a Java library to support a wide variety of data access needs. Users of these interfaces do not need to concern themselves with web service details, networking, or even (in most cases) data conversion. Fetch scripts allow access to the DMC archive and are a comfortable fit for command line users. These scripts are written in Perl and are well suited for automation and integration into existing workflows on most operating systems. For metadata and event information, the Fetch scripts even parse the returned data into simple text summaries. The IRIS Java Web Services Library (IRIS-WS Library) allows Java developers to create programs that access the DMC archives seamlessly. By returning the data and information as native Java objects, the Library insulates the developer from data formats, network programming and web service details. The MATLAB interfaces leverage this library to allow users to access the DMC archive directly from within MATLAB (r2009b or newer), returning data into variables for immediate use. Data users and research groups are developing other toolkits that use the DMC's web services. Notably, the ObsPy framework developed at LMU Munich is a Python toolbox that allows seamless access to data and information via the DMC services. Other examples are the MATLAB-based GISMO and Waveform Suite developments, which can now access data via web services. In summary, there are now many ways that researchers can bring IRIS DMC data directly into their workflows: MATLAB users can use irisFetch.m, command line users can use the various Fetch scripts, Java users can use the IRIS-WS library, and Python users may request data through ObsPy. To learn more about any of these clients see http://www.iris.edu/ws/wsclients/.
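
    For example, the ObsPy route mentioned above can request waveforms from the IRIS DMC in a few lines using its current FDSN client; the network, station, channel, and time window below are arbitrary examples:

        # Fetch one hour of waveform data from the IRIS DMC via ObsPy's FDSN client.
        # Requires the obspy package; the request parameters are arbitrary examples.
        from obspy import UTCDateTime
        from obspy.clients.fdsn import Client

        client = Client("IRIS")
        t0 = UTCDateTime("2012-06-01T00:00:00")
        st = client.get_waveforms(network="IU", station="ANMO", location="00",
                                  channel="BHZ", starttime=t0, endtime=t0 + 3600)
        print(st)        # summary of the returned Stream
        st.plot()        # quick-look plot of the traces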

  13. A Simple and Scalable Fabrication Method for Organic Electronic Devices on Textiles.

    PubMed

    Ismailov, Usein; Ismailova, Esma; Takamatsu, Seiichi

    2017-03-13

    Today, wearable electronic devices combine a large variety of functional, stretchable, and flexible technologies. However, in many cases, these devices cannot be worn under everyday conditions. Therefore, textiles are commonly considered the best substrate to accommodate electronic devices in wearable use. In this paper, we describe how to selectively pattern organic electroactive materials on textiles from a solution in an easy and scalable manner. This versatile deposition technique enables the fabrication of wearable organic electronic devices on clothes.

  14. Scalable, Stereocontrolled Total Syntheses of (±)–Axinellamines A and B

    PubMed Central

    Su, Shun; Rodriguez, Rodrigo A.; Baran, Phil S.

    2011-01-01

    The development of a simple, efficient, scalable, and stereocontrolled synthesis of a common intermediate en route to the axinellamines, massadines, and palau’amine is reported. This completely new route was utilized to prepare the axinellamines on a gram scale. In a more general sense, three distinct and enabling methodological advances were made during these studies: 1. ethylene glycol-assisted Pauson-Khand cycloaddition reaction, 2. a Zn/In-mediated Barbier type reaction, and 3. a TfNH2-assisted chlorination-spirocyclization. PMID:21846138

  15. A simple modern correctness condition for a space-based high-performance multiprocessor

    NASA Technical Reports Server (NTRS)

    Probst, David K.; Li, Hon F.

    1992-01-01

    A number of U.S. national programs, including space-based detection of ballistic missile launches, envisage putting significant computing power into space. Given sufficient progress in low-power VLSI, multichip-module packaging and liquid-cooling technologies, we will see design of high-performance multiprocessors for individual satellites. In very high speed implementations, performance depends critically on tolerating large latencies in interprocessor communication; without latency tolerance, performance is limited by the vastly differing time scales in processor and data-memory modules, including interconnect times. The modern approach to tolerating remote-communication cost in scalable, shared-memory multiprocessors is to use a multithreaded architecture, and alter the semantics of shared memory slightly, at the price of forcing the programmer either to reason about program correctness in a relaxed consistency model or to agree to program in a constrained style. The literature on multiprocessor correctness conditions has become increasingly complex, and sometimes confusing, which may hinder its practical application. We propose a simple modern correctness condition for a high-performance, shared-memory multiprocessor; the correctness condition is based on a simple interface between the multiprocessor architecture and the parallel programming system.

  16. An open source, web based, simple solution for seismic data dissemination and collaborative research

    NASA Astrophysics Data System (ADS)

    Diviacco, Paolo

    2005-06-01

    Collaborative research and data dissemination in the field of geophysical exploration need network tools that can access large amounts of data from anywhere using any PC or workstation. Simple solutions based on a combination of Open Source software can be developed to address such requests, exploiting the possibilities offered by web technologies and at the same time avoiding the costs and inflexibility of commercial systems. A viable solution consists of MySQL for data storage and retrieval, CWP/SU and GMT for data visualisation, and a scripting layer driven by PHP that allows users to access the system via an Apache web server. In the light of the experience of building the on-line archive of seismic data of the Istituto Nazionale di Oceanografia e di Geofisica Sperimentale (OGS), we describe the solutions and the methods adopted, with a view to stimulating both collaborative network research by other institutions similar to ours and the development of different applications.

  17. A Survey of Complex Object Technologies for Digital Libraries

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Argue, Brad; Efron, Miles; Denn, Sheila; Pattuelli, Maria Cristina

    2001-01-01

    Many early web-based digital libraries (DLs) had implicit assumptions reflected in their architecture that the unit of focus in the DL (frequently "reports" or "e-prints") would only be manifested in a single, or at most a few, common file formats such as PDF or PostScript. DLs have now matured to the point where their contents are commonly no longer simple files. Complex objects in DLs have emerged in response to various requirements, including: simple aggregation of formats and supporting files, bundling additional information to aid digital preservation, creating opaque digital objects for e-commerce applications, and the incorporation of dynamic services with the traditional data files. We examine a representative (but not necessarily exhaustive) number of current and recent historical web-based complex object technologies and projects that are applicable to DLs: Aurora, Buckets, ComMentor, Cryptolopes, Digibox, Document Management Alliance, FEDORA, Kahn-Wilensky Framework Digital Objects, Metadata Encoding & Transmission Standard, Multivalent Documents, Open eBooks, VERS Encapsulated Objects, and the Warwick Framework.

  18. Script identification from images using cluster-based templates

    DOEpatents

    Hochberg, J.G.; Kelly, P.M.; Thomas, T.R.

    1998-12-01

    A computer-implemented method identifies a script used to create a document. A set of training documents for each script to be identified is scanned into the computer to store a series of exemplary images representing each script. Pixels forming the exemplary images are electronically processed to define a set of textual symbols corresponding to the exemplary images. Each textual symbol is assigned to a cluster of textual symbols that most closely represents the textual symbol. The cluster of textual symbols is processed to form a representative electronic template for each cluster. A document having a script to be identified is scanned into the computer to form one or more document images representing the script to be identified. Pixels forming the document images are electronically processed to define a set of document textual symbols corresponding to the document images. The set of document textual symbols is compared to the electronic templates to identify the script. 17 figs.

  19. Script identification from images using cluster-based templates

    DOEpatents

    Hochberg, Judith G.; Kelly, Patrick M.; Thomas, Timothy R.

    1998-01-01

    A computer-implemented method identifies a script used to create a document. A set of training documents for each script to be identified is scanned into the computer to store a series of exemplary images representing each script. Pixels forming the exemplary images are electronically processed to define a set of textual symbols corresponding to the exemplary images. Each textual symbol is assigned to a cluster of textual symbols that most closely represents the textual symbol. The cluster of textual symbols is processed to form a representative electronic template for each cluster. A document having a script to be identified is scanned into the computer to form one or more document images representing the script to be identified. Pixels forming the document images are electronically processed to define a set of document textual symbols corresponding to the document images. The set of document textual symbols is compared to the electronic templates to identify the script.

  20. Game-based e-learning is more effective than a conventional instructional method: a randomized controlled trial with third-year medical students.

    PubMed

    Boeker, Martin; Andel, Peter; Vach, Werner; Frankenschmidt, Alexander

    2013-01-01

    When compared with more traditional instructional methods, Game-based e-learning (GbEl) promises a higher motivation of learners by presenting contents in an interactive, rule-based and competitive way. Most recent systematic reviews and meta-analyses of studies on Game-based learning and GbEl in the medical professions have shown limited effects of these instructional methods. To compare the effectiveness on the learning outcome of a Game-based e-learning (GbEl) instruction with a conventional script-based instruction in the teaching of phase contrast microscopy urinalysis under routine training conditions of undergraduate medical students. A randomized controlled trial was conducted with 145 medical students in their third year of training in the Department of Urology at the University Medical Center Freiburg, Germany. 82 subjects were allocated for training with an educational adventure game (GbEl group) and 69 subjects for conventional training with a written script-based approach (script group). Learning outcome was measured with a 34-item single choice test. Students' attitudes were collected by a questionnaire regarding fun with the training, motivation to continue the training and self-assessment of acquired knowledge. The students in the GbEl group achieved significantly better results in the cognitive knowledge test than the students in the script group: the mean score was 28.6 for the GbEl group and 26.0 for the script group out of a total of 34.0 points, with a Cohen's d effect size of 0.71 (ITT analysis). Attitudes towards the recent learning experience were significantly more positive with GbEl. Students reported having more fun while learning with the game when compared to the script-based approach. Game-based e-learning is more effective than a script-based approach for the training of urinalysis in regard to cognitive learning outcome and has a high positive motivational impact on learning. Game-based e-learning can be used as an effective teaching method for self-instruction.

  1. Game-Based E-Learning Is More Effective than a Conventional Instructional Method: A Randomized Controlled Trial with Third-Year Medical Students

    PubMed Central

    Boeker, Martin; Andel, Peter; Vach, Werner; Frankenschmidt, Alexander

    2013-01-01

    Background When compared with more traditional instructional methods, Game-based e-learning (GbEl) promises a higher motivation of learners by presenting contents in an interactive, rule-based and competitive way. Most recent systematic reviews and meta-analyses of studies on Game-based learning and GbEl in the medical professions have shown limited effects of these instructional methods. Objectives To compare the effectiveness on the learning outcome of a Game-based e-learning (GbEl) instruction with a conventional script-based instruction in the teaching of phase contrast microscopy urinalysis under routine training conditions of undergraduate medical students. Methods A randomized controlled trial was conducted with 145 medical students in their third year of training in the Department of Urology at the University Medical Center Freiburg, Germany. 82 subjects were allocated for training with an educational adventure game (GbEl group) and 69 subjects for conventional training with a written script-based approach (script group). Learning outcome was measured with a 34-item single choice test. Students' attitudes were collected by a questionnaire regarding fun with the training, motivation to continue the training and self-assessment of acquired knowledge. Results The students in the GbEl group achieved significantly better results in the cognitive knowledge test than the students in the script group: the mean score was 28.6 for the GbEl group and 26.0 for the script group out of a total of 34.0 points, with a Cohen's d effect size of 0.71 (ITT analysis). Attitudes towards the recent learning experience were significantly more positive with GbEl. Students reported having more fun while learning with the game when compared to the script-based approach. Conclusions Game-based e-learning is more effective than a script-based approach for the training of urinalysis in regard to cognitive learning outcome and has a high positive motivational impact on learning. Game-based e-learning can be used as an effective teaching method for self-instruction. PMID:24349257

  2. Captioning Effects on Television News Learning.

    ERIC Educational Resources Information Center

    Reese, Stephen D.; Davie, William R.

    Noting that the use of captions in television newscasts has grown from simple labeling of newsmakers to more complicated titling of graphics and enumerating important points in a script, a study examined the extent to which captioning assisted viewers in learning from different types of television news stories. Subjects, 100 undergraduate…

  3. A simple language to script and simulate breeding schemes: the breeding scheme language

    USDA-ARS?s Scientific Manuscript database

    It is difficult for plant breeders to determine an optimal breeding strategy given that the problem involves many factors, such as target trait genetic architecture and breeding resource availability. There are many possible breeding schemes for each breeding program. Although simulation study may b...

  4. Script-like attachment representations in dreams containing current romantic partners.

    PubMed

    Selterman, Dylan; Apetroaia, Adela; Waters, Everett

    2012-01-01

    Recent research has demonstrated parallels between romantic attachment styles and general dream content. The current study examined partner-specific attachment representations alongside dreams that contained significant others. The general prediction was that dreams would follow the "secure base script," and a general correspondence would emerge between secure attachment cognitions in waking life and in dreams. Sixty-one undergraduate student participants in committed dating relationships of six months duration or longer completed the Secure Base Script Narrative Assessment at Time 1, and then completed a dream diary for 14 consecutive days. Blind coders scored dreams that contained significant others using the same criteria for secure base content in laboratory narratives. Results revealed a significant association between relationship-specific attachment security and the degree to which dreams about romantic partners followed the secure base script. The findings illuminate our understanding of mental representations with regards to specific attachment figures. Implications for attachment theory and clinical applications are discussed.

  5. Screen printing as a scalable and low-cost approach for rigid and flexible thin-film transistors using separated carbon nanotubes.

    PubMed

    Cao, Xuan; Chen, Haitian; Gu, Xiaofei; Liu, Bilu; Wang, Wenli; Cao, Yu; Wu, Fanqi; Zhou, Chongwu

    2014-12-23

    Semiconducting single-wall carbon nanotubes are very promising materials in printed electronics due to their excellent mechanical and electrical properties, outstanding printability, and great potential for flexible electronics. Nonetheless, developing scalable and low-cost approaches for manufacturing fully printed high-performance single-wall carbon nanotube thin-film transistors remains a major challenge. Here we report that screen printing, which is a simple, scalable, and cost-effective technique, can be used to produce both rigid and flexible thin-film transistors using separated single-wall carbon nanotubes. Our fully printed top-gated nanotube thin-film transistors on rigid and flexible substrates exhibit decent performance, with mobility up to 7.67 cm² V⁻¹ s⁻¹, on/off ratio of 10⁴-10⁵, minimal hysteresis, and low operation voltage (<10 V). In addition, outstanding mechanical flexibility of printed nanotube thin-film transistors (bent with radius of curvature down to 3 mm) and driving capability for organic light-emitting diodes have been demonstrated. Given the high performance of the fully screen-printed single-wall carbon nanotube thin-film transistors, we believe screen printing stands as a low-cost, scalable, and reliable approach to manufacture high-performance nanotube thin-film transistors for application in display electronics. Moreover, this technique may be used to fabricate thin-film transistors based on other materials for large-area flexible macroelectronics and low-cost display electronics.

  6. ACME Priority Metrics (A-PRIME)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Katherine J; Zender, Charlie; Van Roekel, Luke

    A-PRIME is a collection of scripts designed to provide Accelerated Climate Model for Energy (ACME) model developers and analysts with a variety of analyses needed to determine whether the model is producing the desired results, depending on the goals of the simulation. The software is based on csh scripts at the top level, which enable scientists to provide the input parameters. Within these scripts, code is called to perform the post-processing of the raw data and to create plots for visual assessment.

  7. Dehydration Polymerization for Poly(hetero)arene Conjugated Polymers.

    PubMed

    Mirabal, Rafael A; Vanderzwet, Luke; Abuadas, Sara; Emmett, Michael R; Schipper, Derek

    2018-02-18

    The lack of scalable and sustainable methods to prepare conjugated polymers belies their importance in many enabling technologies. Accessing high-performance poly(hetero)arene conjugated polymers by dehydration has remained an unsolved problem in synthetic chemistry and has historically required transition-metal coupling reactions. Herein, we report a dehydration method that allows access to conjugated heterocyclic materials. Using this technique, we have prepared a series of small molecules and polymers. The reaction avoids transition metals, proceeds at room temperature, the only required reactant is a simple base, and water is the sole by-product. The dehydration reaction is technically simple and provides a sustainable and straightforward method to prepare conjugated heteroarene motifs. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Transformation of topic-specific professional knowledge into personal pedagogical content knowledge through lesson planning

    NASA Astrophysics Data System (ADS)

    Stender, Anita; Brückmann, Maja; Neumann, Knut

    2017-08-01

    This study investigates the relationship between two different types of pedagogical content knowledge (PCK): the topic-specific professional knowledge (TSPK) and practical routines, so-called teaching scripts. Based on the Transformation Model of Lesson Planning, we assume that teaching scripts originate from a transformation of TSPK during lesson planning: When planning lessons, teachers use their TSPK to create lesson plans. The implementation of these lesson plans and teachers' reflection upon them lead to their improvement. Gradually, successful lesson plans are mentally stored as teaching scripts and can easily be retrieved during instruction. This process is affected by teacher's beliefs, motivation and self-regulation. In order to examine the influence of TSPK on teaching scripts as well as the moderating effects of beliefs, motivation and self-regulation, we conducted a cross-sectional study with n = 49 in-service teachers in physics. The TSPK, beliefs, motivation, self-regulation and the quality of teaching scripts of in-service teachers were assessed by using an online questionnaire adapted to teaching the force concept and Newton's law for 9th grade instruction. Based on the measurement of the quality of teaching scripts, the results provide evidence that TSPK influences the quality of teaching scripts. Motivation and self-regulation moderate this influence.

  9. Processing Information about Support Exchanges in Close Relationships: The Role of a Knowledge Structure.

    PubMed

    Turan, Bulent

    2016-01-01

    People develop knowledge of interpersonal interaction patterns (e.g., prototypes and schemas), which shape how they process incoming information. One such knowledge structure based on attachment theory was examined: the secure base script (the prototypic sequence of events when an attachment figure comforts a close relationship partner in distress). In two studies (N = 53 and N = 119), participants were shown animated film clips in which geometric figures depicted the secure base script and asked to describe the animations. Both studies found that many people readily recognize the secure-base script from these minimal cues quite well, suggesting that this script is not only available in the context of specific relationships (i.e., a relationship-specific knowledge): The generalized (abstract) structure of the script is also readily accessible, which would make it possible to apply it to any relationship (including new relationships). Regression analyses suggested that participants who recognized the script were more likely to (a) include more animation elements when describing the animations, (b) see a common theme in different animations, (c) create better organized stories, and (d) later recall more details of the animations. These findings suggest that access to this knowledge structure helps a person organize and remember relevant incoming information. Furthermore, in both Study 1 and Study 2, individual differences in the ready recognition of the script were associated with individual differences in having access to another related knowledge: indicators suggesting that a potential relationship partner can be trusted to be supportive and responsive at times of stress. Results of Study 2 also suggest that recognizing the script is associated with those items of an attachment measure that concern giving and receiving support. Thus, these knowledge structures may shape how people process support-relevant information in their everyday lives, potentially affecting relationship outcomes and mental and physical health.

  10. Processing Information about Support Exchanges in Close Relationships: The Role of a Knowledge Structure

    PubMed Central

    Turan, Bulent

    2016-01-01

    People develop knowledge of interpersonal interaction patterns (e.g., prototypes and schemas), which shape how they process incoming information. One such knowledge structure based on attachment theory was examined: the secure base script (the prototypic sequence of events when an attachment figure comforts a close relationship partner in distress). In two studies (N = 53 and N = 119), participants were shown animated film clips in which geometric figures depicted the secure base script and asked to describe the animations. Both studies found that many people readily recognize the secure-base script from these minimal cues quite well, suggesting that this script is not only available in the context of specific relationships (i.e., a relationship-specific knowledge): The generalized (abstract) structure of the script is also readily accessible, which would make it possible to apply it to any relationship (including new relationships). Regression analyses suggested that participants who recognized the script were more likely to (a) include more animation elements when describing the animations, (b) see a common theme in different animations, (c) create better organized stories, and (d) later recall more details of the animations. These findings suggest that access to this knowledge structure helps a person organize and remember relevant incoming information. Furthermore, in both Study 1 and Study 2, individual differences in the ready recognition of the script were associated with individual differences in having access to another related knowledge: indicators suggesting that a potential relationship partner can be trusted to be supportive and responsive at times of stress. Results of Study 2 also suggest that recognizing the script is associated with those items of an attachment measure that concern giving and receiving support. Thus, these knowledge structures may shape how people process support-relevant information in their everyday lives, potentially affecting relationship outcomes and mental and physical health. PMID:26973562

  11. A GIS-based automated procedure for landslide susceptibility mapping by the Conditional Analysis method: the Baganza valley case study (Italian Northern Apennines)

    NASA Astrophysics Data System (ADS)

    Clerici, Aldo; Perego, Susanna; Tellini, Claudio; Vescovi, Paolo

    2006-08-01

    Among the many GIS based multivariate statistical methods for landslide susceptibility zonation, the so called “Conditional Analysis method” holds a special place for its conceptual simplicity. In fact, in this method landslide susceptibility is simply expressed as landslide density in correspondence with different combinations of instability-factor classes. To overcome the operational complexity connected to the long, tedious and error prone sequence of commands required by the procedure, a shell script mainly based on the GRASS GIS was created. The script, starting from a landslide inventory map and a number of factor maps, automatically carries out the whole procedure resulting in the construction of a map with five landslide susceptibility classes. A validation procedure allows to assess the reliability of the resulting model, while the simple mean deviation of the density values in the factor class combinations, helps to evaluate the goodness of landslide density distribution. The procedure was applied to a relatively small basin (167 km2) in the Italian Northern Apennines considering three landslide types, namely rotational slides, flows and complex landslides, for a total of 1,137 landslides, and five factors, namely lithology, slope angle and aspect, elevation and slope/bedding relations. The analysis of the resulting 31 different models obtained combining the five factors, confirms the role of lithology, slope angle and slope/bedding relations in influencing slope stability.
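
    The core of the Conditional Analysis method, landslide density per combination of factor classes, reduces to a cross-tabulation. A minimal sketch outside any GIS, with invented cell values, is shown below; in the study this is computed on raster maps within GRASS GIS:

        # Minimal sketch of the Conditional Analysis idea: the susceptibility of a
        # factor-class combination is the landslide density within that combination,
        # i.e. landslide cells divided by total cells. Cell values are invented.
        from collections import defaultdict

        # Each cell: (lithology class, slope class, landslide present?)
        cells = [
            ("clay", "steep", True), ("clay", "steep", True), ("clay", "steep", False),
            ("clay", "gentle", False), ("sand", "steep", False), ("sand", "steep", True),
            ("sand", "gentle", False), ("sand", "gentle", False),
        ]

        totals = defaultdict(int)
        slides = defaultdict(int)
        for litho, slope, has_slide in cells:
            key = (litho, slope)
            totals[key] += 1
            slides[key] += has_slide

        for key in sorted(totals):
            print(key, f"landslide density = {slides[key] / totals[key]:.2f}")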

  12. Neural substrates of embodied natural beauty and social endowed beauty: An fMRI study.

    PubMed

    Zhang, Wei; He, Xianyou; Lai, Siyan; Wan, Juan; Lai, Shuxian; Zhao, Xueru; Li, Darong

    2017-08-02

    What are the neural mechanisms underlying beauty based on objective parameters and beauty based on subjective social construction? This study scanned participants with fMRI while they performed aesthetic judgments on concrete pictographs and abstract oracle bone scripts. Behavioral results showed both pictographs and oracle bone scripts were judged to be more beautiful when they referred to beautiful objects and positive social meanings, respectively. Imaging results revealed regions associated with perceptual, cognitive, emotional and reward processing were commonly activated both in beautiful judgments of pictographs and oracle bone scripts. Moreover, stronger activations of orbitofrontal cortex (OFC) and motor-related areas were found in beautiful judgments of pictographs, whereas beautiful judgments of oracle bone scripts were associated with putamen activity, implying stronger aesthetic experience and embodied approaching for beauty were elicited by the pictographs. In contrast, only visual processing areas were activated in the judgments of ugly pictographs and negative oracle bone scripts. Results provide evidence that the sense of beauty is triggered by two processes: one based on the objective parameters of stimuli (embodied natural beauty) and the other based on the subjective social construction (social endowed beauty).

  13. Novel technology for treating individuals with aphasia and concomitant cognitive deficits.

    PubMed

    Cherney, Leora R; Halper, Anita S

    2008-01-01

    This article describes three individuals with aphasia and concomitant cognitive deficits who used state-of-the-art computer software for training conversational scripts. Participants were assessed before and after 9 weeks of a computer script training program. For each participant, three individualized scripts were developed, recorded on the software, and practiced sequentially at home. Weekly meetings with the speech-language pathologist occurred to monitor practice and assess progress. Baseline and posttreatment scripts were audiotaped, transcribed, and compared to the target scripts for content, grammatical productivity, and rate of production of script-related words. Interviews were conducted at the conclusion of treatment. There was great variability in improvements across scripts, with two participants improving on two of their three scripts in measures of content, grammatical productivity, and rate of production of script-related words. One participant gained more than 5 points on the Aphasia Quotient of the Western Aphasia Battery. Five positive themes were consistently identified from exit interviews: increased verbal communication, improvements in other modalities and situations, communication changes noticed by others, increased confidence, and satisfaction with the software. Computer-based script training potentially may be an effective intervention for persons with chronic aphasia and concomitant cognitive deficits.

  14. Using NetCloak to develop server-side Web-based experiments without writing CGI programs.

    PubMed

    Wolfe, Christopher R; Reyna, Valerie F

    2002-05-01

    Server-side experiments use the Web server, rather than the participant's browser, to handle tasks such as random assignment, eliminating inconsistencies with JAVA and other client-side applications. Heretofore, experimenters wishing to create server-side experiments have had to write programs to create common gateway interface (CGI) scripts in programming languages such as Perl and C++. NetCloak uses simple, HTML-like commands to create CGIs. We used NetCloak to implement an experiment on probability estimation. Measurements of time on task and participants' IP addresses assisted quality control. Without prior training, in less than 1 month, we were able to use NetCloak to design and create a Web-based experiment and to help graduate students create three Web-based experiments of their own.

  15. Quality Scalability Aware Watermarking for Visual Content.

    PubMed

    Bhowmik, Deepayan; Abhayaratne, Charith

    2016-11-01

    Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by proposing a new wavelet domain blind watermarking algorithm guided by a quantization based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting watermark data from the watermarked images. The algorithm is further extended to incorporate a bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality scalable content adaptation. Our proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.

  16. Increasing Play-Based Commenting in Children with Autism Spectrum Disorder Using a Novel Script-Frame Procedure

    ERIC Educational Resources Information Center

    Groskreutz, Mark P.; Peters, Amy; Groskreutz, Nicole C.; Higbee, Thomas S.

    2015-01-01

    Children with developmental disabilities may engage in less frequent and more repetitious language than peers with typical development. Scripts have been used to increase communication by teaching one or more specific statements and then fading the scripts. In the current study, preschoolers with developmental disabilities experienced a novel…

  17. The Use of Interactive Whiteboards in Teaching Non-Roman Scripts

    ERIC Educational Resources Information Center

    Tozcu, Anjel

    2008-01-01

    This study explores the use of the interactive whiteboards in teaching the non-Latin based orthographies of Hindi, Pashto, Dari, Persian (Farsi), and Hebrew. All these languages use non-roman scripts, and except for Hindi, they are cursive. Thus, letters within words are connected and for beginners the script may look quite complicated,…

  18. A video depicting resuscitation did not impact upon patients' decision-making.

    PubMed

    Richardson-Royer, Caitlin; Naqvi, Imran; Riffel, Christopher; Harvey, Lawrence; Smith, Domonique; Ayalew, Dagmawe; Motayar, Nasim; Amoateng-Adjepong, Yaw; Manthous, Constantine A

    2018-01-01

    Previous studies have demonstrated that video of and scripted information about cardiopulmonary resuscitation (CPR) can be deployed during clinician-patient end-of-life discussions. Few studies, however, examine whether video adds to verbal information-sharing. We hypothesized that video augments script-only decision-making. Patients aged >65 years admitted to hospital wards were randomized to receive evidence-based information ("script") vs. script plus video of simulated CPR and intubation. Patients' decisions, as registered in the hospital record by the time of hospital discharge, were compared between the two groups. Fifty script-only intervention patients averaging 77.7 years were compared to 50 script+video patients with a mean age of 74.7 years. Eleven of 50 (22%) in each group declined CPR; and an additional three (script) vs. four (script+video) refused intubation for respiratory failure. There were no differences in sex, self-reported health trajectory, functional limitations, length of stay, or mortality associated with decisions. The rate at which verbally informed hospitalized elders opted out of resuscitation was not impacted by adding a video depiction of CPR.

  19. Parents, peers and pornography: the influence of formative sexual scripts on adult HIV sexual risk behaviour among Black men in the USA.

    PubMed

    Hussen, Sophia A; Bowleg, Lisa; Sangaramoorthy, Thurka; Malebranche, David J

    2012-01-01

    Black men in the USA experience disproportionately high rates of HIV infection, particularly in the Southeastern part of the country. We conducted 90 qualitative in-depth interviews with Black men living in the state of Georgia and analysed the transcripts using Sexual Script Theory to: (1) characterise the sources and content of sexual scripts that Black men were exposed to during their childhood and adolescence and (2) describe the potential influence of formative scripts on adult HIV sexual risk behaviour. Our analyses highlighted salient sources of cultural scenarios (parents, peers, pornography, sexual education and television), interpersonal scripts (early sex-play, older female partners, experiences of child abuse) and intrapsychic scripts that participants described. Stratification of participant responses based on sexual-risk behaviour revealed that lower- and higher-risk men described exposure to similar scripts during their formative years; however, lower-risk men reported an ability to cognitively process and challenge the validity of risk-promoting scripts that they encountered. Implications for future research are discussed.

  20. Omics Metadata Management Software v. 1 (OMMS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata, and to perform bioinformatics analyses and information management tasks via a simple and intuitive web-based interface. Several use cases with short-read sequence datasets are provided to showcase the full functionality of the OMMS, from metadata curation tasks, to bioinformatics analyses and results management and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or can be configured for web-based deployment supporting geographically dispersed research teams. Our software was developed with open-source bundles, is flexible, extensible, and easily installed and run by operators with general system administration and scripting language literacy.

  1. A programmable rules engine to provide clinical decision support using HTML forms.

    PubMed Central

    Heusinkveld, J.; Geissbuhler, A.; Sheshelidze, D.; Miller, R.

    1999-01-01

    The authors have developed a simple method for specifying rules to be applied to information on HTML forms. This approach allows clinical experts, who lack the programming expertise needed to write CGI scripts, to construct and maintain domain-specific knowledge and ordering capabilities within WizOrder, the order-entry and decision support system used at Vanderbilt Hospital. The clinical knowledge base maintainers use HTML editors to create forms and spreadsheet programs for rule entry. A test environment has been developed which uses Netscape to display forms; the production environment displays forms using an embedded browser. PMID:10566470
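
    A minimal sketch of the general idea described above: declarative rules evaluated against submitted HTML-form values, firing advisory messages when a condition holds. The rule format, field names and thresholds are hypothetical; the actual WizOrder rules are maintained by clinical experts in spreadsheets and HTML forms, not in Python.

    ```python
    # Minimal, hypothetical sketch of a rules engine applied to submitted
    # HTML-form values. Field names, thresholds and messages are invented
    # for illustration only.
    from operator import gt, lt, eq

    OPS = {">": gt, "<": lt, "==": eq}

    # Each rule: (field, operator, threshold, advisory message)
    rules = [
        ("serum_creatinine", ">", 2.0, "Consider renal dose adjustment."),
        ("age", ">", 65, "Review medication list for geriatric precautions."),
    ]

    def evaluate(form_values, rules):
        """Return the advisory messages whose conditions hold for this form."""
        fired = []
        for field, op, threshold, message in rules:
            value = form_values.get(field)
            if value is not None and OPS[op](float(value), threshold):
                fired.append(message)
        return fired

    # Values as they might arrive from a CGI/HTTP POST (all strings).
    submitted = {"serum_creatinine": "2.4", "age": "58"}
    for msg in evaluate(submitted, rules):
        print(msg)
    ```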

  2. A Simple Picaxe Microcontroller Pulse Source for Juxtacellular Neuronal Labelling †

    PubMed Central

    Verberne, Anthony J. M.

    2016-01-01

    Juxtacellular neuronal labelling is a method which allows neurophysiologists to fill physiologically-identified neurons with small positively-charged marker molecules. Labelled neurons are identified by histochemical processing of brain sections along with immunohistochemical identification of neuropeptides, neurotransmitters, neurotransmitter transporters or biosynthetic enzymes. A microcontroller-based pulser circuit and associated BASIC software script is described for incorporation into the design of a commercially-available intracellular electrometer for use in juxtacellular neuronal labelling. Printed circuit board construction has been used for reliability and reproducibility. The current design obviates the need for a separate digital pulse source and simplifies the juxtacellular neuronal labelling procedure. PMID:28952589

  3. The NCAR Research Data Archive's Hybrid Approach for Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Schuster, D.; Worley, S. J.

    2013-12-01

    The NCAR Research Data Archive (RDA http://rda.ucar.edu) maintains a variety of data discovery and access capabilities for its 600+ dataset collections to support the varying needs of a diverse user community. In-house developed and standards-based community tools offer services to more than 10,000 users annually. By number of users, the largest group is external and accesses the RDA through web-based protocols; the internal NCAR HPC users are fewer in number, but typically access more data volume. This paper will detail the data discovery and access services maintained by the RDA to support both user groups, and show metrics that illustrate how the community is using the services. The distributed search capability enabled by standards-based community tools, such as Geoportal and an OAI-PMH access point that serves multiple metadata standards, provides pathways for external users to initially discover RDA holdings. From here, in-house developed web interfaces leverage primary discovery-level metadata databases that support keyword and faceted searches. Internal NCAR HPC users, or those familiar with the RDA, may go directly to the dataset collection of interest and refine their search based on rich file collection metadata. Multiple levels of metadata have proven to be invaluable for discovery within terabyte-sized archives composed of many atmospheric or oceanic levels, hundreds of parameters, and often numerous grid and time resolutions. Once users find the data they want, their access needs may vary as well. A THREDDS data server running on targeted dataset collections enables remote file access through OPENDAP and other web-based protocols primarily for external users. In-house developed tools give all users the capability to submit data subset extraction and format conversion requests through scalable, HPC-based delayed-mode batch processing. Users can monitor their RDA-based data processing progress and receive instructions on how to access the data when it is ready. External users are provided with RDA server-generated scripts to download the resulting request output. Similarly, they can download native dataset collection files or partial files using Wget- or cURL-based scripts supplied by the RDA server. Internal users can access the resulting request output or native dataset collection files directly from centralized file systems.
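
    A minimal Python analogue of the kind of batch-download script the RDA server generates for external users, assuming a plain list of publicly reachable HTTPS URLs. The URLs are placeholders, and the authentication handling that the real Wget/cURL scripts include is omitted.

    ```python
    # Minimal analogue of a server-generated batch-download script. The URLs
    # are hypothetical placeholders; real RDA scripts also handle login
    # cookies, which are not shown here.
    import os
    import urllib.request

    file_urls = [
        "https://example.org/data/file_2013_01.grib2",
        "https://example.org/data/file_2013_02.grib2",
    ]

    for url in file_urls:
        local_name = os.path.basename(url)
        if os.path.exists(local_name):
            print(f"skipping {local_name} (already downloaded)")
            continue
        print(f"downloading {local_name} ...")
        urllib.request.urlretrieve(url, local_name)
    ```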

  4. A Key Pre-Distribution Scheme Based on µ-PBIBD for Enhancing Resilience in Wireless Sensor Networks.

    PubMed

    Yuan, Qi; Ma, Chunguang; Yu, Haitao; Bian, Xuefen

    2018-05-12

    Many key pre-distribution (KPD) schemes based on combinatorial design were proposed for secure communication of wireless sensor networks (WSNs). Due to the complexity of constructing the combinatorial design, it is infeasible to generate key rings using the corresponding combinatorial design in large-scale deployment of WSNs. In this paper, we present a definition of a new combinatorial design, termed “µ-partially balanced incomplete block design (µ-PBIBD)”, which is a refinement of partially balanced incomplete block design (PBIBD), and then describe a 2-D construction of µ-PBIBD which is mapped to KPD in WSNs. Our approach has a simple construction that provides strong key connectivity but poor network resilience. To improve the network resilience of KPD based on 2-D µ-PBIBD, we propose a KPD scheme based on 3-D Ex-µ-PBIBD, which is a construction of µ-PBIBD extended from 2-D space to 3-D space. The Ex-µ-PBIBD KPD scheme improves network scalability and resilience while maintaining better key connectivity. Theoretical analysis and comparison with the related schemes show that the key pre-distribution scheme based on Ex-µ-PBIBD provides high network resilience and better key scalability, while it achieves a trade-off between network resilience and network connectivity.

  5. A Key Pre-Distribution Scheme Based on µ-PBIBD for Enhancing Resilience in Wireless Sensor Networks

    PubMed Central

    Yuan, Qi; Ma, Chunguang; Yu, Haitao; Bian, Xuefen

    2018-01-01

    Many key pre-distribution (KPD) schemes based on combinatorial design were proposed for secure communication of wireless sensor networks (WSNs). Due to the complexity of constructing the combinatorial design, it is infeasible to generate key rings using the corresponding combinatorial design in large-scale deployment of WSNs. In this paper, we present a definition of a new combinatorial design, termed “µ-partially balanced incomplete block design (µ-PBIBD)”, which is a refinement of partially balanced incomplete block design (PBIBD), and then describe a 2-D construction of µ-PBIBD which is mapped to KPD in WSNs. Our approach has a simple construction that provides strong key connectivity but poor network resilience. To improve the network resilience of KPD based on 2-D µ-PBIBD, we propose a KPD scheme based on 3-D Ex-µ-PBIBD, which is a construction of µ-PBIBD extended from 2-D space to 3-D space. The Ex-µ-PBIBD KPD scheme improves network scalability and resilience while maintaining better key connectivity. Theoretical analysis and comparison with the related schemes show that the key pre-distribution scheme based on Ex-µ-PBIBD provides high network resilience and better key scalability, while it achieves a trade-off between network resilience and network connectivity. PMID:29757244

  6. Nanoscale thermocapillarity enabled purification for horizontally aligned arrays of single walled carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Jin, Sung Hun; Dunham, Simon; Xie, Xu; Rogers, John A.

    2015-09-01

    Among the remarkable variety of semiconducting nanomaterials that have been discovered over the past two decades, single-walled carbon nanotubes remain uniquely well suited for applications in high-performance electronics, sensors and other technologies. The most advanced opportunities demand the ability to form perfectly aligned, horizontal arrays of purely semiconducting, chemically pristine carbon nanotubes. Here, we present strategies that offer this capability. Nanoscale thermocapillary flows in thin-film organic coatings followed by reactive ion etching serve as highly efficient means for selectively removing metallic carbon nanotubes from electronically heterogeneous aligned arrays grown on quartz substrates. The low temperatures and unusual physics associated with this process enable robust, scalable operation, with clear potential for practical use. To achieve selective Joule heating of only the metallic nanotubes, two representative platforms are proposed and demonstrated. One uses selective Joule heating in thin-film transistors with a partial gate structure. The other is based on a simple, scalable, large-area scheme using microwave irradiation with micro-strip dipole antennas made of low work-function metals. In this study, based on the purified semiconducting SWNTs, we demonstrated field-effect transistors with mobility (>1,000 cm²/V·s) and on/off switching ratio (~10,000), with current outputs in the milliamp range. Furthermore, as one demonstration of the large-area scalability and simplicity, implementing the microwave-based purification on large arrays consisting of ~20,000 SWNTs completely removed all of the m-SWNTs (~7,000), yielding a purity of s-SWNTs that corresponds, quantitatively, to at least 99.9925% and likely significantly higher.

  7. A scalable healthcare information system based on a service-oriented architecture.

    PubMed

    Yang, Tzu-Hsiang; Sun, Yeali S; Lai, Feipei

    2011-06-01

    Many existing healthcare information systems are composed of a number of heterogeneous systems and face the important issue of system scalability. This paper first describes the comprehensive healthcare information systems used in National Taiwan University Hospital (NTUH) and then presents a service-oriented architecture (SOA)-based healthcare information system (HIS) based on the service standard HL7. The proposed architecture focuses on system scalability, in terms of both hardware and software. Moreover, we describe how scalability is implemented in rightsizing, service groups, databases, and hardware scalability. Although SOA-based systems sometimes display poor performance, a performance evaluation of our SOA-based HIS shows that the average response times for the outpatient, inpatient, and emergency HL7Central systems are 0.035, 0.04, and 0.036 s, respectively. The outpatient, inpatient, and emergency WebUI average response times are 0.79, 1.25, and 0.82 s. The scalability of the rightsizing project and our evaluation results provide evidence that SOA can deliver system scalability and sustainability in a highly demanding healthcare information system.

  8. Ray Bradbury: Hieroglyphics of the Future.

    ERIC Educational Resources Information Center

    Braniff, Beverly S.

    2002-01-01

    Discusses how Ray Bradbury's script for the one hundredth episode of "The Twilight Zone" and the short story of the same title, "I Sing the Body Electric," rings so true in 2002. Notes that this "simple story" has never failed to generate some of the best discussions and papers. Discusses how the author teaches this…

  9. Static and Current Electricity.

    ERIC Educational Resources Information Center

    Schlenker, Richard M.; Murtha, Kathy T.

    This is a copy of the script for the electrical relationships unit in an auto-tutorial physical science course for non-science majors, offered at the University of Maine at Orono. The unit includes 15 simple experiments designed to allow the student to discover various fundamental electrical relationships. The student has the option of reading the…

  10. LBNL Neutrino Astrophysics

    Science.gov Websites

    The KATRIN experiment is designed to make a direct measurement of the neutrino mass. It is a tritium beta-decay experiment, scaled up by an order of magnitude in size, precision and tritium source intensity from previous experiments. Visit the experiment home page for more information.

  11. A general UNIX interface for biocomputing and network information retrieval software.

    PubMed

    Kiong, B K; Tan, T W

    1993-10-01

    We describe a UNIX program, HYBROW, which can integrate without modification a wide range of UNIX biocomputing and network information retrieval software. HYBROW works in conjunction with a separate set of ASCII files containing embedded hypertext-like links. The program operates like a hypertext browser featuring five basic links: file link, execute-only link, execute-display link, directory-browse link and field-filling link. Useful features of the interface may be developed using combinations of these links with simple shell scripts and examples of these are briefly described. The system manager who supports biocomputing users should find the program easy to maintain, and useful in assisting new and infrequent users; it is also simple to incorporate new programs. Moreover, the individual user can customize the interface, create dynamic menus, hypertext a document, invoke shell scripts and new programs simply with a basic understanding of the UNIX operating system and any text editor. This program was written in C language and uses the UNIX curses and termcap libraries. It is freely available as a tar compressed file (by anonymous FTP from nuscc.nus.sg).

  12. BiDiBlast: comparative genomics pipeline for the PC.

    PubMed

    de Almeida, João M G C F

    2010-06-01

    Bi-directional BLAST is a simple approach to detect, annotate, and analyze candidate orthologous or paralogous sequences in a single go. This procedure is usually confined to the realm of customized Perl scripts, typically tuned for UNIX-like environments. Porting those scripts to other operating systems involves refactoring them, and also the installation of the Perl programming environment with the required libraries. To overcome these limitations, a data pipeline was implemented in Java. This application submits two batches of sequences to local versions of the NCBI BLAST tool, manages result lists, and refines both bi-directional and simple hits. GO Slim terms are attached to hits, several statistics are derived, and molecular evolution rates are estimated through PAML. The results are written to a set of delimited text tables intended for further analysis. The provided graphical user interface allows a friendly interaction with this application, which is documented and available to download at http://moodle.fct.unl.pt/course/view.php?id=2079 or https://sourceforge.net/projects/bidiblast/ under the GNU GPL license. Copyright 2010 Beijing Genomics Institute. Published by Elsevier Ltd. All rights reserved.
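
    BiDiBlast itself is a Java pipeline; the sketch below only illustrates the core bi-directional (reciprocal) best-hit logic in Python, assuming two BLAST tabular outputs (-outfmt 6) with query id, subject id and bit score in columns 1, 2 and 12. The file names are placeholders.

    ```python
    # Minimal sketch of reciprocal (bi-directional) best-hit detection from
    # two BLAST tabular result files (A-vs-B and B-vs-A). File names are
    # placeholders; this is not BiDiBlast's own code.
    import csv

    def best_hits(blast_tab):
        """Map each query to its highest-scoring subject."""
        best = {}
        with open(blast_tab) as handle:
            for row in csv.reader(handle, delimiter="\t"):
                query, subject, bitscore = row[0], row[1], float(row[11])
                if query not in best or bitscore > best[query][1]:
                    best[query] = (subject, bitscore)
        return {q: s for q, (s, _) in best.items()}

    def reciprocal_best_hits(ab_file, ba_file):
        """Pairs (a, b) where b is a's best hit and a is b's best hit."""
        ab, ba = best_hits(ab_file), best_hits(ba_file)
        return [(a, b) for a, b in ab.items() if ba.get(b) == a]

    if __name__ == "__main__":
        for a, b in reciprocal_best_hits("A_vs_B.tsv", "B_vs_A.tsv"):
            print(a, b, sep="\t")
    ```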

  13. Running Gaussian16 Software Jobs on the Peregrine System | High-Performance

    Science.gov Websites

    This web page describes how to run Gaussian16 jobs on the Peregrine system. Parallel setup is handled automatically based on settings in the PBS batch script, and scratch space on the /dev/shm filesystem is set automatically by the example script. An example batch-submission script is provided, beginning with "#!/bin/bash" and "#PBS -l nodes=2" resource directives.

  14. Peer Review-Based Scripted Collaboration to Support Domain-Specific and Domain-General Knowledge Acquisition in Computer Science

    ERIC Educational Resources Information Center

    Demetriadis, Stavros; Egerter, Tina; Hanisch, Frank; Fischer, Frank

    2011-01-01

    This study investigates the effectiveness of using peer review in the context of scripted collaboration to foster both domain-specific and domain-general knowledge acquisition in the computer science domain. Using a one-factor design with a script and a control condition, students worked in small groups on a series of computer science problems…

  15. Prediction of Sexual Assault Experiences in College Women Based on Rape Scripts: A Prospective Analysis

    ERIC Educational Resources Information Center

    Turchik, Jessica A.; Probst, Danielle R.; Irvin, Clinton R.; Chau, Minna; Gidycz, Christine A.

    2009-01-01

    Although script theory has been applied to sexual assault (e.g., H. Frith & C. Kitzinger, 2001; A. S. Kahn, V. A. Andreoli Mathie, & C. Torgler, 1994), women's scripts of rape have not been examined in relation to predicting sexual victimization experiences. The purpose of the current study was to examine how elements of women's sexual assault…

  16. Cultural scripts for a good death in Japan and the United States: similarities and differences.

    PubMed

    Long, Susan Orpett

    2004-03-01

    Japan and the United States are both post-industrial societies, characterised by distinct trajectories of dying. Both contain multiple "cultural scripts" of the good death. Seale (Constructing Death: the Sociology of Dying and Bereavement, Cambridge University Press, Cambridge, 1998) has identified at least four "cultural scripts", or ways to die well, that are found in contemporary anglophone countries: modern medicine, revivalism, an anti-revivalist script and a religious script. Although these scripts can also be found in Japan, different historical experiences and religious traditions provide a context in which their content and interpretation sometimes differ from those of the anglophone countries. To understand ordinary people's ideas about dying well and dying poorly, we must recognise not only that post-industrial society offers multiple scripts and varying interpretive frameworks, but also that people actively select from among them in making decisions and explaining their views. Moreover, ideas and metaphors may be based on multiple scripts simultaneously or may offer different interpretations for different social contexts. Based on ethnographic fieldwork in both countries, this paper explores the metaphors that ordinary patients and caregivers draw upon as they use, modify, combine or ignore these cultural scripts of dying. Ideas about choice, time, place and personhood, elements of a good death that were derived inductively from interviews, are described. These Japanese and American data suggest somewhat different concerns and assumptions about human life and the relation of the person to the wider social world, but indicate similar concerns about the process of medicalised dying and the creation of meaning for those involved. While cultural differences do exist, they cannot be explained by reference to 'an American' and 'a Japanese' way to die. Rather, the process of creating and maintaining cultural scripts requires the active participation of ordinary people as they in turn respond to the constraints of post-industrial technology, institutions, demographics and notions of self.

  17. Attachment to Mother and Father at Transition to Middle Childhood.

    PubMed

    Di Folco, Simona; Messina, Serena; Zavattini, Giulio Cesare; Psouni, Elia

    2017-01-01

    The present study investigated concordance between representations of attachment to mother and attachment to father, and convergence between two narrative-based methods addressing these representations in middle childhood: the Manchester Child Attachment Story Task (MCAST) and the Secure Base Script Test (SBST). One hundred and twenty 6-year-old children were assessed by separate administrations of the MCAST for mother and father, respectively, and results showed concordance of representations of attachment to mother and attachment to father at age 6.5 years. 75 children were additionally tested about 12 months later, with the SBST, which assesses scripted knowledge of secure base (and safe haven), not differentiating between mother and father attachment relationships. Concerning attachment to father, dichotomous classifications (MCAST) and a continuous dimension capturing scripted secure base knowledge (MCAST) converged with secure base scriptedness (SBST), yet we could not show the same pattern of convergence concerning attachment to mother. Results suggest some convergence between the two narrative methods of assessment of secure base script but also highlight complications when using the MCAST for measuring attachment to father in middle childhood.

  18. XML-Based Visual Specification of Multidisciplinary Applications

    NASA Technical Reports Server (NTRS)

    Al-Theneyan, Ahmed; Jakatdar, Amol; Mehrotra, Piyush; Zubair, Mohammad

    2001-01-01

    The advancements in the Internet and Web technologies have fueled a growing interest in developing a web-based distributed computing environment. We have designed and developed Arcade, a web-based environment for designing, executing, monitoring, and controlling distributed heterogeneous applications, which is easy to use and access, portable, and provides support through all phases of the application development and execution. A major focus of the environment is the specification of heterogeneous, multidisciplinary applications. In this paper we focus on the visual and script-based specification interface of Arcade. The web/browser-based visual interface is designed to be intuitive to use and can also be used for visual monitoring during execution. The script specification is based on XML to: (1) make it portable across different frameworks, and (2) make the development of our tools easier by using the existing freely available XML parsers and editors. There is a one-to-one correspondence between the visual and script-based interfaces allowing users to go back and forth between the two. To support this we have developed translators that translate a script-based specification to a visual-based specification, and vice-versa. These translators are integrated with our tools and are transparent to users.

  19. Family cumulative risk and at-risk kindergarteners' social competence: the mediating role of parent representations of the attachment relationship.

    PubMed

    Sparks, Lauren A; Trentacosta, Christopher J; Owusu, Erika; McLear, Caitlin; Smith-Darden, Joanne

    2018-08-01

    Secure attachment relationships have been linked to social competence in at-risk children. In the current study, we examined the role of parent secure base scripts in predicting at-risk kindergarteners' social competence. Parent representations of secure attachment were hypothesized to mediate the relationship between lower family cumulative risk and children's social competence. Participants included 106 kindergarteners and their primary caregivers recruited from three urban charter schools serving low-income families as a part of a longitudinal study. Lower levels of cumulative risk predicted greater secure attachment representations in parents, and scores on the secure base script assessment predicted children's social competence. An indirect relationship between lower cumulative risk and kindergarteners' social competence via parent secure base script scores was also supported. Parent script-based representations of the attachment relationship appear to be an important link between lower levels of cumulative risk and low-income kindergarteners' social competence. Implications of these findings for future interventions are discussed.

  20. Mothers' electrophysiological, subjective, and observed emotional responding to infant crying: The role of secure base script knowledge.

    PubMed

    Groh, Ashley M; Roisman, Glenn I; Haydon, Katherine C; Bost, Kelly; McElwain, Nancy; Garcia, Leanna; Hester, Colleen

    2015-11-01

    This study examined the extent to which secure base script knowledge-reflected in the ability to generate narratives in which attachment-relevant events are encountered, a clear need for assistance is communicated, competent help is provided and accepted, and the problem is resolved-is associated with mothers' electrophysiological, subjective, and observed emotional responses to an infant distress vocalization. While listening to an infant crying, mothers (N = 108, M age = 34 years) lower on secure base script knowledge exhibited smaller shifts in relative left (vs. right) frontal EEG activation from rest, reported smaller reductions in feelings of positive emotion from rest, and expressed greater levels of tension. Findings indicate that lower levels of secure base script knowledge are associated with an organization of emotional responding indicative of a less flexible and more emotionally restricted response to infant distress. Discussion focuses on the contribution of mothers' attachment representations to their ability to effectively manage emotional responding to infant distress in a manner expected to support sensitive caregiving.

  1. High-order UWB pulses scheme to generate multilevel modulation formats based on incoherent optical sources.

    PubMed

    Bolea, Mario; Mora, José; Ortega, Beatriz; Capmany, José

    2013-11-18

    We present a high-order UWB pulse generator based on a microwave photonic filter which provides a set of positive and negative samples by using the slicing of an incoherent optical source and the phase inversion in a Mach-Zehnder modulator. The simple scalability and high reconfigurability of the system permit better compliance with the FCC requirements. Moreover, the proposed scheme permits easy adaptation to pulse amplitude modulation, bi-phase modulation, pulse shape modulation and pulse position modulation. The flexibility of the scheme in adapting to multilevel modulation formats makes it possible to increase the transmission bit rate by using hybrid modulation formats.

  2. Robust Polypropylene Fabrics Super-Repelling Various Liquids: A Simple, Rapid and Scalable Fabrication Method by Solvent Swelling.

    PubMed

    Zhu, Tang; Cai, Chao; Duan, Chunting; Zhai, Shuai; Liang, Songmiao; Jin, Yan; Zhao, Ning; Xu, Jian

    2015-07-01

    A simple, rapid (10 s) and scalable method to fabricate superhydrophobic polypropylene (PP) fabrics is developed by swelling the fabrics in a cyclohexane/heptane mixture at 80 °C. The recrystallization of the swollen macromolecules on the fiber surface contributes to the formation of submicron protuberances, which increase the surface roughness dramatically and result in superhydrophobic behavior. The superhydrophobic PP fabrics possess excellent repellency to blood, urine, milk, coffee, and other common liquids, and show good durability and robustness, such as remarkable resistance to water penetration, abrasion, acidic/alkaline solution, and boiling water. The excellent comprehensive performance of the superhydrophobic PP fabrics indicates their potential applications as oil/water separation materials, protective garments, diaper pads, or other medical and health supplies. This simple, fast and low-cost method operating at a relatively low temperature is superior to other reported techniques for fabricating superhydrophobic PP materials as far as large-scale manufacturing is concerned. Moreover, the proposed method is applicable for preparing superhydrophobic PP films and sheets as well.

  3. WarpEngine, a Flexible Platform for Distributed Computing Implemented in the VEGA Program and Specially Targeted for Virtual Screening Studies.

    PubMed

    Pedretti, Alessandro; Mazzolari, Angelica; Vistoli, Giulio

    2018-05-21

    The manuscript describes WarpEngine, a novel platform implemented within the VEGA ZZ suite of software for performing distributed simulations both in local and wide area networks. Despite being tailored for structure-based virtual screening campaigns, WarpEngine possesses the required flexibility to carry out distributed calculations utilizing various pieces of software, which can be easily encapsulated within this platform without changing their source codes. WarpEngine takes advantage of all the cheminformatics features implemented in the VEGA ZZ program as well as its largely customizable scripting architecture, thus allowing efficient distribution of various time-demanding simulations. To offer an example of WarpEngine's potential, the manuscript includes a set of virtual screening campaigns based on the ACE data set of the DUD-E collections using PLANTS as the docking application. Benchmarking analyses revealed a satisfactory linearity of the WarpEngine performances, the speed-up values being roughly equal to the number of utilized cores. Moreover, the computed scalability values emphasized that a vast majority (i.e., >90%) of the performed simulations benefit from the distributed platform presented here. WarpEngine can be freely downloaded along with the VEGA ZZ program at www.vegazz.net.

  4. Universal Plug-n-Play Sensor Integration for Advanced Navigation

    DTIC Science & Technology

    2012-03-22

    Report excerpt (figure and section listings): plots of AHRS orientation (top) and angular velocity (bottom); execution of an AHRS script with roscore running on a separate machine, in single-host and two-host scenarios; a component-based system using ROS; and autonomous behavior using scripting, including udev configuration.

  5. Self-assembled hierarchically structured organic-inorganic composite systems.

    PubMed

    Tritschler, Ulrich; Cölfen, Helmut

    2016-05-13

    Designing bio-inspired, multifunctional organic-inorganic composite materials is one of the most popular current research objectives. Due to the high complexity of biocomposite structures found in nacre and bone, for example, a one-pot scalable and versatile synthesis approach addressing structural key features of biominerals and affording bio-inspired, multifunctional organic-inorganic composites with advanced physical properties is highly challenging. This article reviews recent progress in synthesizing organic-inorganic composite materials via various self-assembly techniques and in this context highlights a recently developed bio-inspired synthesis concept for the fabrication of hierarchically structured, organic-inorganic composite materials. This one-step self-organization concept based on simultaneous liquid crystal formation of anisotropic inorganic nanoparticles and a functional liquid crystalline polymer turned out to be simple, fast, scalable and versatile, leading to various (multi-)functional composite materials, which exhibit hierarchical structuring over several length scales. Consequently, this synthesis approach is relevant for further progress and scientific breakthrough in the research field of bio-inspired and biomimetic materials.

  6. Scalable Functionalized Graphene Nano-platelets as Tunable Cathodes for High-performance Lithium Rechargeable Batteries

    PubMed Central

    Kim, Haegyeom; Lim, Hee-Dae; Kim, Sung-Wook; Hong, Jihyun; Seo, Dong-Hwa; Kim, Dae-chul; Jeon, Seokwoo; Park, Sungjin; Kang, Kisuk

    2013-01-01

    High-performance and cost-effective rechargeable batteries are key to the success of electric vehicles and large-scale energy storage systems. Extensive research has focused on the development of (i) new high-energy electrodes that can store more lithium or (ii) high-power nano-structured electrodes hybridized with carbonaceous materials. However, the current status of lithium batteries based on redox reactions of heavy transition metals still remains far below the demands required for the proposed applications. Herein, we present a novel approach using tunable functional groups on graphene nano-platelets as redox centers. The electrode can deliver high capacity of ~250 mAh g−1, power of ~20 kW kg−1 in an acceptable cathode voltage range, and provide excellent cyclability up to thousands of repeated charge/discharge cycles. The simple, mass-scalable synthetic route for the functionalized graphene nano-platelets proposed in this work suggests that the graphene cathode can be a promising new class of electrode. PMID:23514953

  7. BlueSky Cloud Framework: An E-Learning Framework Embracing Cloud Computing

    NASA Astrophysics Data System (ADS)

    Dong, Bo; Zheng, Qinghua; Qiao, Mu; Shu, Jian; Yang, Jie

    Currently, E-Learning has grown into a widely accepted way of learning. With the huge growth of users, services, education contents and resources, E-Learning systems are facing challenges of optimizing resource allocations, dealing with dynamic concurrency demands, handling rapid storage growth requirements and controlling costs. In this paper, an E-Learning framework based on cloud computing is presented, namely the BlueSky cloud framework. Particularly, the architecture and core components of the BlueSky cloud framework are introduced. In the BlueSky cloud framework, physical machines are virtualized and allocated on demand for E-Learning systems. Moreover, the BlueSky cloud framework incorporates traditional middleware functions (such as load balancing and data caching) to serve E-Learning systems as a general architecture. It delivers reliable, scalable and cost-efficient services to E-Learning systems, and E-Learning organizations can establish systems through these services in a simple way. The BlueSky cloud framework solves the challenges faced by E-Learning, and improves the performance, availability and scalability of E-Learning systems.

  8. OWL: A scalable Monte Carlo simulation suite for finite-temperature study of materials

    NASA Astrophysics Data System (ADS)

    Li, Ying Wai; Yuk, Simuck F.; Cooper, Valentino R.; Eisenbach, Markus; Odbadrakh, Khorgolkhuu

    The OWL suite is a simulation package for performing large-scale Monte Carlo simulations. Its object-oriented, modular design enables it to interface with various external packages for energy evaluations. It is therefore applicable to study the finite-temperature properties for a wide range of systems: from simple classical spin models to materials where the energy is evaluated by ab initio methods. This scheme not only allows for the study of thermodynamic properties based on first-principles statistical mechanics, it also provides a means for massive, multi-level parallelism to fully exploit the capacity of modern heterogeneous computer architectures. We will demonstrate how improved strong and weak scaling is achieved by employing novel, parallel and scalable Monte Carlo algorithms, as well as the applications of OWL to a few selected frontier materials research problems. This research was supported by the Office of Science of the Department of Energy under contract DE-AC05-00OR22725.
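
    As an illustration of the kind of "simple classical spin model" such a suite samples, here is a minimal, self-contained Metropolis Monte Carlo sketch for a 2-D Ising model in Python. This is not OWL's code or interface; the lattice size, temperature and step count are arbitrary illustrative choices.

    ```python
    # Minimal Metropolis Monte Carlo for a 2-D Ising model (J = 1, periodic
    # boundaries). Illustrative only; not OWL's implementation or API.
    import numpy as np

    rng = np.random.default_rng(0)
    L, T, steps = 16, 2.3, 100_000      # lattice size, temperature (J/k_B), MC steps
    spins = rng.choice([-1, 1], size=(L, L))

    def delta_E(s, i, j):
        """Energy change for flipping spin (i, j)."""
        nn = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        return 2.0 * s[i, j] * nn

    for _ in range(steps):
        i, j = rng.integers(L), rng.integers(L)
        dE = delta_E(spins, i, j)
        # Metropolis rule: accept downhill moves always, uphill moves with
        # probability exp(-dE / T).
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

    print("magnetization per spin:", abs(spins.mean()))
    ```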

  9. Exfoliation of non-oxidized graphene flakes for scalable conductive film.

    PubMed

    Park, Kwang Hyun; Kim, Bo Hyun; Song, Sung Ho; Kwon, Jiyoung; Kong, Byung Seon; Kang, Kisuk; Jeon, Seokwoo

    2012-06-13

    The increasing demand for graphene has required a new route for its mass production without causing extreme damages. Here we demonstrate a simple and cost-effective intercalation based exfoliation method for preparing high quality graphene flakes, which form a stable dispersion in organic solvents without any functionalization and surfactant. Successful intercalation of alkali metal between graphite interlayers through liquid-state diffusion from ternary KCl-NaCl-ZnCl(2) eutectic system is confirmed by X-ray diffraction and X-ray photoelectric spectroscopy. Chemical composition and morphology analyses prove that the graphene flakes preserve their intrinsic properties without any degradation. The graphene flakes remain dispersed in a mixture of pyridine and salts for more than 6 months. We apply these results to produce transparent conducting (∼930 Ω/□ at ∼75% transmission) graphene films using the modified Langmuir-Blodgett method. The overall results suggest that our method can be a scalable (>1 g/batch) and economical route for the synthesis of nonoxidized graphene flakes.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dasari, Venkateswara Rao

    The need for sustainable and secure nuclear energy is summarized. Driven by economics and public-private partnerships, the technology is evolving. Cost control and regulatory simplification are needed for a nuclear renaissance. Small modular reactors--simple, scalable, and inherently safe--may be the future.

  11. Mixed Messages: Inconsistent Sexual Scripts in Australian Teenage Magazines and Implications for Sexual Health Practices

    ERIC Educational Resources Information Center

    Burns, Melanie C.

    2018-01-01

    Condom use among Australian adolescents has been shown to be variable, despite good knowledge among this group about sexual health risks and the promotion of condoms as a simple way to reduce the spread of sexually transmitted infections. This study explores dominant constructions of condom use within two Australian lifestyle magazines targeted…

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    "rsed" is an R package that contains tools for stream editing: manipulating text files by making insertions, replacements, deletions, substitutions, or commenting. It hails from the powerful Unix command, "sed". While the "rsed" package is not nearly as powerful as "see", it is much simpler to use. R programmers often write scripts that may require simple manipulation of text files. "rsed" addresses that need.

  13. Cross-species transferability and mapping of genomic and cDNA SSRs in pines

    Treesearch

    D. Chagne; P. Chaumeil; A. Ramboer; C. Collada; A. Guevara; M. T. Cervera; G. G. Vendramin; V. Garcia; J-M. Frigerio; Craig Echt; T. Richardson; Christophe Plomion

    2004-01-01

    Two unigene datasets of Pinus taeda and Pinus pinaster were screened to detect di-, tri- and tetranucleotide repeat motifs using the SSRIT script. A total of 419 simple sequence repeats (SSRs) were identified, of which only 12.8% overlapped between the two sets. The positions of the SSRs within the coding sequences were predicted...
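
    SSRIT is a Perl script; the sketch below shows the same kind of SSR search in Python, using a regular expression to find di-, tri- and tetranucleotide motifs repeated in tandem. The minimum-repeat threshold and the example sequence are illustrative assumptions, not the thresholds used in the study.

    ```python
    # Minimal SSR (microsatellite) finder: report di-, tri- and tetranucleotide
    # motifs tandemly repeated at least `min_repeats` times. Threshold and
    # example sequence are illustrative.
    import re

    def find_ssrs(sequence, motif_lengths=(2, 3, 4), min_repeats=5):
        sequence = sequence.upper()
        hits = []
        for k in motif_lengths:
            # A k-mer followed by at least (min_repeats - 1) copies of itself.
            pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (k, min_repeats - 1))
            for m in pattern.finditer(sequence):
                copies = len(m.group(0)) // k
                hits.append((m.group(1), copies, m.start() + 1))  # 1-based position
        return hits

    seq = "TTGACACACACACACAGGTAGTAGTAGTAGTAGCCTTTT"
    for motif, copies, pos in find_ssrs(seq, min_repeats=5):
        print(f"{motif} x{copies} at position {pos}")
    ```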

  14. Making History: An Indiana Teacher Uses Technology to Feel the History

    ERIC Educational Resources Information Center

    Technology & Learning, 2008

    2008-01-01

    Jon Carl's vision is simple: get students passionate about history by turning them into historians. To accomplish this, he created a class centered on documentary film-making. Students choose a topic, conduct research at local libraries, write a script, film video interviews, and create video segments of four to 15 minutes. District technology…

  15. From Tutor Scripts to Talking Sticks: 100 Ways to Differentiate Instruction in K-12 Inclusive Classrooms

    ERIC Educational Resources Information Center

    Kluth, Paula; Danaher, Sheila

    2010-01-01

    Differentiated instruction engages students of all abilities as active learners, decision-makers, and problem solvers--making educational experiences more meaningful for all. This one-of-a-kind book proves that designing differentiated instruction can be simple and fun! Packed with creative adaptation ideas like fidget bags, doodle notes, and…

  16. Performance prediction: A case study using a multi-ring KSR-1 machine

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He; Zhu, Jianping

    1995-01-01

    While computers with tens of thousands of processors have successfully delivered high computational power for solving some of the so-called 'grand-challenge' applications, the notion of scalability is becoming an important metric in the evaluation of parallel machine architectures and algorithms. In this study, the prediction of scalability and its application are carefully investigated. A simple formula is presented to show the relation between scalability, single-processor computing power, and degradation of parallelism. A case study is conducted on a multi-ring KSR-1 shared virtual memory machine. Experimental and theoretical results show that the influence of topology variation of an architecture is predictable. Therefore, the performance of an algorithm on a sophisticated, hierarchical architecture can be predicted and the best algorithm-machine combination can be selected for a given application.
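
    The abstract does not reproduce the formula. As a hedged sketch only, the isospeed scalability metric commonly used in this line of work (an assumption here, not necessarily the exact relation derived in the paper) can be written as follows.

    ```latex
    % Hedged sketch of the isospeed scalability metric (an assumption; the
    % paper's exact formula is not given in the abstract).
    % Average unit speed of a run with work $W$ on $p$ processors, where
    % $T_p(W)$ is the elapsed time:
    \[
      a = \frac{W}{p\,T_p(W)} .
    \]
    % If $W'$ is the work required on $p'$ processors to keep $a$ fixed, the
    % scalability of the algorithm-machine combination is
    \[
      \psi(p, p') = \frac{p'\,W}{p\,W'} ,
    \]
    % which equals 1 for ideal scaling and falls below 1 as parallelism degrades.
    ```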

  17. Graph Mining Meets the Semantic Web

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sangkeun; Sukumar, Sreenivas R; Lim, Seung-Hwan

    The Resource Description Framework (RDF) and SPARQL Protocol and RDF Query Language (SPARQL) were introduced about a decade ago to enable flexible schema-free data interchange on the Semantic Web. Today, data scientists use the framework as a scalable graph representation for integrating, querying, exploring and analyzing data sets hosted at different sources. With increasing adoption, the need for graph mining capabilities for the Semantic Web has emerged. We address that need through implementation of three popular iterative Graph Mining algorithms (Triangle count, Connected component analysis, and PageRank). We implement these algorithms as SPARQL queries, wrapped within Python scripts. We evaluate the performance of our implementation on 6 real-world data sets and show that graph mining algorithms (that have a linear-algebra formulation) can indeed be unleashed on data represented as RDF graphs using the SPARQL query interface.
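
    A minimal sketch of one of the three algorithms named above (triangle counting) expressed as a SPARQL query wrapped in a Python script. The rdflib query engine and the data file name are assumptions for illustration; the paper's own scripts and SPARQL endpoint are not specified in the abstract.

    ```python
    # Triangle count as a SPARQL query wrapped in Python, in the spirit of the
    # approach above. Uses rdflib as the engine (an assumption); the data file
    # name is a placeholder.
    from rdflib import Graph

    TRIANGLE_COUNT = """
    SELECT (COUNT(*) AS ?triangles)
    WHERE {
      ?a ?p1 ?b .
      ?b ?p2 ?c .
      ?c ?p3 ?a .
      FILTER (?a != ?b && ?b != ?c && ?a != ?c)
    }
    """

    g = Graph()
    g.parse("data.ttl", format="turtle")       # any RDF serialization works
    for row in g.query(TRIANGLE_COUNT):
        # Each directed 3-cycle is matched once per starting vertex, so divide
        # by 3 to count each cycle of three distinct nodes once.
        print("triangles:", int(row.triangles) // 3)
    ```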

  18. Echo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harvey, Dustin Yewell

    This document is a white paper marketing proposal for Echo™, a data analysis platform designed for efficient, robust, and scalable creation and execution of complex workflows. Echo’s analysis management system refers to the ability to track, understand, and reproduce the workflows used for arriving at results and decisions. Echo improves on traditional scripted data analysis in MATLAB, Python, R, and other languages to allow analysts to make better use of their time. Additionally, the Echo platform provides a powerful data management and curation solution allowing analysts to quickly find, access, and consume datasets. After two years of development and a first release in early 2016, Echo is now available for use with many data types in a wide range of application domains. Echo provides tools that allow users to focus on data analysis and decisions with confidence that results are reported accurately.

  19. Simulation for Dynamic Situation Awareness and Prediction III

    DTIC Science & Technology

    2010-03-01

    Report excerpt (fragments): the software requirements include an open-source Java library for capturing and sending network packets, and Groovy (version 1.6 or newer), an open-source, Java-based dynamic scripting language for the Java Virtual Machine with Java-consistent syntax, used with the DMOTH Analyzer application. The Editor component covers relationships between temperature, pressure, wind and relative humidity as well as a precipitation editing algorithm, and can be used to prepare scripted changes.

  20. Derivation of an optimal directivity pattern for sweet spot widening in stereo sound reproduction

    NASA Astrophysics Data System (ADS)

    Ródenas, Josep A.; Aarts, Ronald M.; Janssen, A. J. E. M.

    2003-01-01

    In this paper the correction of the degradation of the stereophonic illusion during sound reproduction due to off-center listening is investigated. The main idea is that the directivity pattern of a loudspeaker array should have a well-defined shape such that good stereo reproduction is achieved in a large listening area. Therefore, a mathematical description is given for deriving an optimal directivity pattern that achieves sweet spot widening over a large listening area for stereophonic sound applications. This optimal directivity pattern is based on parametrized time/intensity trading data coming from psycho-acoustic experiments within a wide listening area. After this derivation, the required digital FIR filters are determined by means of a least-squares optimization method for a given stereo base setup (two pairs of drivers for the loudspeaker arrays and a 2.5-m distance between loudspeakers) that radiates sound over a broad range of listening positions in accordance with the derived optimal pattern. Informal listening tests have shown that the optimal directivity pattern worked as predicted by the theoretical simulations. They also demonstrated correct central sound localization for speech and music at a number of listening positions. This application is referred to as "Position-Independent (PI) stereo."

  1. Assembly of Terpenoid Cores by a Simple, Tunable Strategy.

    PubMed

    Lahtigui, Ouidad; Emmetiere, Fabien; Zhang, Wei; Jirmo, Liban; Toledo-Roy, Samira; Hershberger, John C; Macho, Jocelyn M; Grenning, Alexander J

    2016-12-19

    Oxygenated, polycyclic terpenoid natural products have important biological activities. Although total synthesis of such terpenes is widely studied, synthetic strategies that allow for controlled placement of oxygen atoms and other functionality remains a challenge. Herein, we present a simple, scalable, and tunable synthetic strategy to assemble terpenoid-like polycycloalkanes from cycloalkanones, malononitrile, and allylic electrophiles, abundantly available reagent classes. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Outburst floods from glacier-dammed lakes: The effect of mode of lake drainage on flood magnitude

    USGS Publications Warehouse

    Walder, J.S.; Costa, J.E.

    1996-01-01

    Published accounts of outburst floods from glacier-dammed lakes show that a significant number of such floods are associated not with drainage through a tunnel incised into the basal ice - the process generally assumed - but rather with ice-marginal drainage, mechanical failure of part of the ice dam, or both. Non-tunnel floods are strongly correlated with formation of an ice dam by a glacier advancing from a tributary drainage into either a main river valley or a pre-existing body of water (lake or fiord). For a given lake volume, non-tunnel floods tend to have significantly higher peak discharges than tunnel-drainage floods. Statistical analysis of data for floods associated with subglacial tunnels yields the following empirical relation between lake volume V and peak discharge Q_p: Q_p = 46 V^0.66 (r² = 0.70), where Q_p is expressed in cubic metres per second and V in millions of cubic metres. This updates the so-called Clague-Mathews relation. For non-tunnel floods, the analogous relation is Q_p = 1100 V^0.44 (r² = 0.58). The latter relation is close to one found by Costa (1988) for failure of constructed earthen dams. This closeness is probably not coincidental but rather reflects similarities in modes of dam failure and lake drainage. We develop a simple physical model of the breach-widening process for non-tunnel floods, assuming that (1) the rate of breach widening is controlled by melting of the ice, (2) outflow from the lake is regulated by the hydraulic condition of critical flow where water enters the breach, and (3) the effect of lake temperature may be dealt with as done by Clarke (1982). Calculations based on the model simulate quite well outbursts from Lake George, Alaska. Dimensional analysis leads to two approximations of the form Q_p ≈ V^q · f(h_i, θ_0), where q = 0.5 to 0.6, h_i is initial lake depth, θ_0 is lake temperature, and the form of f(h_i, θ_0) depends on the relative importance of viscous dissipation and the lake's thermal energy in determining the rate of breach opening. These expressions, along with the regression relations, should prove useful for assessing the probable magnitude of breach-type outburst floods.

  3. Illness script development in pre-clinical education through case-based clinical reasoning training

    PubMed Central

    Keemink, Yvette; van Dijk, Savannah; ten Cate, Olle

    2018-01-01

    Objectives To assess illness script richness and maturity in preclinical students after they attended a specifically structured instructional format, i.e., a case based clinical reasoning (CBCR) course. Methods In a within-subject experimental design, medical students who had finished the CBCR course participated in an illness script experiment. In the first session, richness and maturity of students’ illness scripts for diseases discussed during the CBCR course were compared to illness script richness and maturity for similar diseases not included in the course. In the second session, diagnostic performance was tested, to test for differences between CBCR cases and non-CBCR cases. Scores on the CBCR course exam were related to both experimental outcomes. Results Thirty-two medical students participated. Illness script richness for CBCR diseases was almost 20% higher than for non-CBCR diseases, on average 14.47 (SD=3.25) versus 12.14 (SD=2.80), respectively (p<0.001). In addition, students provided more information on Enabling Conditions and less on Fault-related aspects of the disease. Diagnostic performance was better for the diseases discussed in the CBCR course, mean score 1.63 (SD=0.32) versus 1.15 (SD=0.29) for non-CBCR diseases (p<0.001). A significant correlation of exam results with recognition of CBCR cases was found (r=0.571, p<0.001), but not with illness script richness (r=–0.006, p=NS). Conclusions The CBCR-course fosters early development of clinical reasoning skills by increasing the illness script richness and diagnostic performance of pre-clinical students. However, these results are disease-specific and therefore we cannot conclude that students develop a more general clinical reasoning ability. PMID:29428911

  4. Illness script development in pre-clinical education through case-based clinical reasoning training.

    PubMed

    Keemink, Yvette; Custers, Eugene J F M; van Dijk, Savannah; Ten Cate, Olle

    2018-02-09

    To assess illness script richness and maturity in preclinical students after they attended a specifically structured instructional format, i.e., a case based clinical reasoning (CBCR) course. In a within-subject experimental design, medical students who had finished the CBCR course participated in an illness script experiment. In the first session, richness and maturity of students' illness scripts for diseases discussed during the CBCR course were compared to illness script richness and maturity for similar diseases not included in the course. In the second session, diagnostic performance was tested, to test for differences between CBCR cases and non-CBCR cases. Scores on the CBCR course exam were related to both experimental outcomes. Thirty-two medical students participated. Illness script richness for CBCR diseases was almost 20% higher than for non-CBCR diseases, on average 14.47 (SD=3.25) versus 12.14 (SD=2.80), respectively (p<0.001). In addition, students provided more information on Enabling Conditions and less on Fault-related aspects of the disease. Diagnostic performance was better for the diseases discussed in the CBCR course, mean score 1.63 (SD=0.32) versus 1.15 (SD=0.29) for non-CBCR diseases (p<0.001). A significant correlation of exam results with recognition of CBCR cases was found (r=0.571, p<0.001), but not with illness script richness (r=-0.006, p=NS). The CBCR-course fosters early development of clinical reasoning skills by increasing the illness script richness and diagnostic performance of pre-clinical students. However, these results are disease-specific and therefore we cannot conclude that students develop a more general clinical reasoning ability.

  5. Caregiving Antecedents of Secure Base Script Knowledge: A Comparative Analysis of Young Adult Attachment Representations

    ERIC Educational Resources Information Center

    Steele, Ryan D.; Waters, Theodore E. A.; Bost, Kelly K.; Vaughn, Brian E.; Truitt, Warren; Waters, Harriet S.; Booth-LaForce, Cathryn; Roisman, Glenn I.

    2014-01-01

    Based on a subsample (N = 673) of the NICHD Study of Early Child Care and Youth Development (SECCYD) cohort, this article reports data from a follow-up assessment at age 18 years on the antecedents of "secure base script knowledge", as reflected in the ability to generate narratives in which attachment-related difficulties are…

  6. Turbulence-assisted shear exfoliation of graphene using household detergent and a kitchen blender

    NASA Astrophysics Data System (ADS)

    Varrla, Eswaraiah; Paton, Keith R.; Backes, Claudia; Harvey, Andrew; Smith, Ronan J.; McCauley, Joe; Coleman, Jonathan N.

    2014-09-01

    To facilitate progression from the lab to commercial applications, it will be necessary to develop simple, scalable methods to produce high quality graphene. Here we demonstrate the production of large quantities of defect-free graphene using a kitchen blender and household detergent. We have characterised the scaling of both graphene concentration and production rate with the mixing parameters: mixing time, initial graphite concentration, rotor speed and liquid volume. We find the production rate to be invariant with mixing time and to increase strongly with mixing volume, results which are important for scale-up. Even in this simple system, concentrations of up to 1 mg ml⁻¹ and graphene masses of >500 mg can be achieved after a few hours mixing. The maximum production rate was ~0.15 g h⁻¹, much higher than for standard sonication-based exfoliation methods. We demonstrate that graphene production occurs because the mean turbulent shear rate in the blender exceeds the critical shear rate for exfoliation. Electronic supplementary information (ESI) available. See DOI: 10.1039/c4nr03560g

  7. Meta-analyzing dependent correlations: an SPSS macro and an R script.

    PubMed

    Cheung, Shu Fai; Chan, Darius K-S

    2014-06-01

    The presence of dependent correlation is a common problem in meta-analysis. Cheung and Chan (2004, 2008) have shown that samplewise-adjusted procedures perform better than the more commonly adopted simple within-sample mean procedures. However, samplewise-adjusted procedures have rarely been applied in meta-analytic reviews, probably due to the lack of suitable ready-to-use programs. In this article, we compare the samplewise-adjusted procedures with existing procedures to handle dependent effect sizes, and present the samplewise-adjusted procedures in a way that will make them more accessible to researchers conducting meta-analysis. We also introduce two tools, an SPSS macro and an R script, that researchers can apply to their meta-analyses; these tools are compatible with existing meta-analysis software packages.
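
    For context, a minimal Python sketch of the baseline "simple within-sample mean" procedure that the abstract contrasts with the samplewise-adjusted approach (the adjustment itself, described by Cheung and Chan, is not reproduced here); the study data are invented for illustration:

        import math

        # (sample size, [dependent correlations from that sample]) per study -- invented values.
        studies = [(50, [0.30, 0.25, 0.40]), (120, [0.10, 0.15]), (80, [0.45])]

        num = den = 0.0
        for n, rs in studies:
            r_bar = sum(rs) / len(rs)                        # within-sample mean correlation
            z = 0.5 * math.log((1 + r_bar) / (1 - r_bar))    # Fisher z transform
            w = n - 3                                        # inverse-variance weight for z
            num += w * z
            den += w

        z_pooled = num / den
        r_pooled = (math.exp(2 * z_pooled) - 1) / (math.exp(2 * z_pooled) + 1)
        print(f"pooled r = {r_pooled:.3f}")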

  8. Data Curation and Visualization for MuSIASEM Analysis of the Nexus

    NASA Astrophysics Data System (ADS)

    Renner, Ansel

    2017-04-01

    A novel software-based approach to relational analysis applying recent theoretical advancements of the Multi-Scale Integrated Analysis of Societal and Ecosystem Metabolism (MuSIASEM) accounting framework is presented. This research explores and explains underutilized ways software can assist complex system analysis across the stages of data collection, exploration, analysis and dissemination, in a transparent and collaborative manner. This work is being conducted as part of, and in support of, the four-year European Commission H2020 project: Moving Towards Adaptive Governance in Complexity: Informing Nexus Security (MAGIC). In MAGIC, theoretical advancements to MuSIASEM propose a powerful new approach to spatial-temporal WEFC relational analysis in accordance with a structural-functional scaling mechanism appropriate for biophysically relevant complex system analyses. Software is designed primarily with JavaScript using the Angular2 model-view-controller framework and the Data-Driven Documents (D3) library. These design choices clarify and modularize data flow, simplify research practitioners' work, allow for and assist stakeholder involvement, and advance collaboration at all stages. Data requirements and scalable, robust yet light-weight structuring will first be explained. Next, algorithms to process these data will be explored. Lastly, data interfaces and data visualization approaches will be presented and described.

  9. Heat-treated stainless steel felt as scalable anode material for bioelectrochemical systems.

    PubMed

    Guo, Kun; Soeriyadi, Alexander H; Feng, Huajun; Prévoteau, Antonin; Patil, Sunil A; Gooding, J Justin; Rabaey, Korneel

    2015-11-01

    This work reports a simple and scalable method to convert stainless steel (SS) felt into an effective anode for bioelectrochemical systems (BESs) by means of heat treatment. X-ray photoelectron spectroscopy and cyclic voltammetry elucidated that the heat treatment generated an iron oxide rich layer on the SS felt surface. The iron oxide layer dramatically enhanced the electroactive biofilm formation on SS felt surface in BESs. Consequently, the sustained current densities achieved on the treated electrodes (1 cm²) were around 1.5±0.13 mA/cm², which was seven times higher than the untreated electrodes (0.22±0.04 mA/cm²). To test the scalability of this material, the heat-treated SS felt was scaled up to 150 cm² and similar current density (1.5 mA/cm²) was achieved on the larger electrode. The low cost, straightforwardness of the treatment, high conductivity and high bioelectrocatalytic performance make heat-treated SS felt a scalable anodic material for BESs. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. A Testbed to Evaluate the FIWARE-Based IoT Platform in the Domain of Precision Agriculture.

    PubMed

    Martínez, Ramón; Pastor, Juan Ángel; Álvarez, Bárbara; Iborra, Andrés

    2016-11-23

    Wireless sensor networks (WSNs) represent one of the most promising technologies for precision farming. Over the next few years, a significant increase in the use of such systems on commercial farms is expected. WSNs present a number of problems, regarding scalability, interoperability, communications, connectivity with databases and data processing. Different Internet of Things middleware is appearing to overcome these challenges. This paper checks whether one of these middleware, FIWARE, is suitable for the development of agricultural applications. To the authors' knowledge, there are no works that show how to use FIWARE in precision agriculture and study its appropriateness, its scalability and its efficiency for this kind of applications. To do this, a testbed has been designed and implemented to simulate different deployments and load conditions. The testbed is a typical FIWARE application, complete, yet simple and comprehensible enough to show the main features and components of FIWARE, as well as the complexity of using this technology. Although the testbed has been deployed in a laboratory environment, its design is based on the analysis of an Internet of Things use case scenario in the domain of precision agriculture.

  11. A Testbed to Evaluate the FIWARE-Based IoT Platform in the Domain of Precision Agriculture

    PubMed Central

    Martínez, Ramón; Pastor, Juan Ángel; Álvarez, Bárbara; Iborra, Andrés

    2016-01-01

    Wireless sensor networks (WSNs) represent one of the most promising technologies for precision farming. Over the next few years, a significant increase in the use of such systems on commercial farms is expected. WSNs present a number of problems, regarding scalability, interoperability, communications, connectivity with databases and data processing. Different Internet of Things middleware is appearing to overcome these challenges. This paper checks whether one of these middleware, FIWARE, is suitable for the development of agricultural applications. To the authors’ knowledge, there are no works that show how to use FIWARE in precision agriculture and study its appropriateness, its scalability and its efficiency for this kind of applications. To do this, a testbed has been designed and implemented to simulate different deployments and load conditions. The testbed is a typical FIWARE application, complete, yet simple and comprehensible enough to show the main features and components of FIWARE, as well as the complexity of using this technology. Although the testbed has been deployed in a laboratory environment, its design is based on the analysis of an Internet of Things use case scenario in the domain of precision agriculture. PMID:27886091

  12. Diameter and Geometry Control of Vertically Aligned SWNTs through Catalyst Manipulation

    NASA Astrophysics Data System (ADS)

    Xiang, Rong; Einarsson, Erik; Okawa, Jun; Murakami, Yoichi; Maruyama, Shigeo

    2009-03-01

    We present our recent progress on manipulating our liquid-based catalyst loading process, which possesses greater potential than conventional deposition in terms of cost and scalability, to control the diameter and morphology of single-walled carbon nanotubes (SWNTs). We demonstrate that the diameter of aligned SWNTs synthesized by alcohol catalytic CVD can be tailored over a wide range by modifying the catalyst recipe. SWNT arrays with an average diameter as small as 1.2 nm were obtained by this method. Additionally, owing to the alignment of the array, the continuous change of the SWNT diameter during a single CVD process can be clearly observed and quantitatively characterized. We have also developed a versatile wet chemistry method to localize the growth of SWNTs to desired regions via surface modification. By functionalizing the silicon surface using a classic self-assembled monolayer, the catalyst can be selectively dip-coated onto hydrophilic areas of the substrate. This technique was successful in producing both random and aligned SWNTs with various patterns. The precise control of the diameter and morphology of SWNTs, achieved by simple and scalable liquid-based surface chemistry, could greatly facilitate the application of SWNTs as the building blocks of future nano-devices.

  13. Scalable Robust Principal Component Analysis Using Grassmann Averages.

    PubMed

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi; Black, Michael J

    2016-11-01

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie, a task beyond any current method. Source code is available online.
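
    A minimal NumPy sketch of the averaging idea described above: iteratively average sign-aligned observations on the unit sphere to estimate the leading subspace. The published GA/TGA algorithms add weighting and trimming details that are not reproduced here, so this is an illustrative approximation, not the authors' implementation:

        import numpy as np

        def grassmann_average(X, iters=50, seed=0):
            """X: (n_samples, n_features) zero-mean data; returns a unit direction."""
            rng = np.random.default_rng(seed)
            norms = np.linalg.norm(X, axis=1, keepdims=True)
            U = X / np.clip(norms, 1e-12, None)      # each observation spans a 1-D subspace
            q = rng.standard_normal(X.shape[1])
            q /= np.linalg.norm(q)
            for _ in range(iters):
                signs = np.sign(U @ q)
                signs[signs == 0] = 1.0
                q_new = (signs[:, None] * norms * U).sum(axis=0)
                q_new /= np.linalg.norm(q_new)
                if np.allclose(q_new, q):
                    break
                q = q_new
            return q

        # Toy check: the recovered direction should align with the dominant axis.
        rng = np.random.default_rng(1)
        X = rng.standard_normal((500, 5)) * np.array([5.0, 1.0, 1.0, 1.0, 1.0])
        print(grassmann_average(X - X.mean(axis=0)))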

  14. Scalable Integrated Region-Based Image Retrieval Using IRM and Statistical Clustering.

    ERIC Educational Resources Information Center

    Wang, James Z.; Du, Yanping

    Statistical clustering is critical in designing scalable image retrieval systems. This paper presents a scalable algorithm for indexing and retrieving images based on region segmentation. The method uses statistical clustering on region features and IRM (Integrated Region Matching), a measure developed to evaluate overall similarity between images…

  15. Recent advances in the Lesser Antilles observatories Part 2 : WebObs - an integrated web-based system for monitoring and networks management

    NASA Astrophysics Data System (ADS)

    Beauducel, François; Bosson, Alexis; Randriamora, Frédéric; Anténor-Habazac, Christian; Lemarchand, Arnaud; Saurel, Jean-Marie; Nercessian, Alexandre; Bouin, Marie-Paule; de Chabalier, Jean-Bernard; Clouard, Valérie

    2010-05-01

    Seismological and volcanological observatories have common needs and often common practical problems for multidisciplinary data monitoring applications. In fact, access to integrated data in real-time and estimation of measurement uncertainties are keys for an efficient interpretation, but instrument variety and heterogeneity of data sampling and acquisition systems lead to difficulties that may hinder crisis management. At the Guadeloupe observatory, we have developed over the last years an operational system that attempts to answer these questions in the context of a pluri-instrumental observatory. Based on a single computer server, open source scripts (Matlab, Perl, Bash, Nagios) and a Web interface, the system proposes: an extended database for networks management, stations and sensors (maps, station file with log history, technical characteristics, meta-data, photos and associated documents); web-form interfaces for manual data input/editing and export (like geochemical analysis, some of the deformation measurements, ...); routine data processing with dedicated automatic scripts for each technique, production of validated data outputs, static graphs on preset moving time intervals, and possible e-mail alarms; automatic status checks of computers, acquisition processes, stations and individual sensors using simple criteria (file updates and signal quality), displayed as synthetic pages for technical control. In the special case of seismology, WebObs includes a digital stripchart multichannel continuous seismogram associated with the EarthWorm acquisition chain (see companion paper Part 1), an event classification database, location scripts, automatic shakemaps and a regional catalog with associated hypocenter maps accessed through a user request form. This system provides real-time Internet access for integrated monitoring, is a strong support for exchanges between scientists and technicians, and is widely open to interdisciplinary real-time modeling. It has been set up at the Martinique observatory and installation is planned this year at Montserrat Volcano Observatory. It is also in production at the geomagnetic observatory of Addis Ababa in Ethiopia.
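
    As an illustration of the "file update" freshness criterion used for the automatic status pages, a minimal Python sketch; the station names, file paths and one-hour threshold are assumptions, not WebObs configuration:

        import os, time

        STATIONS = {                      # station -> latest expected data file (assumed paths)
            "GBGI": "/data/seismo/GBGI/latest.msd",
            "TBGZ": "/data/seismo/TBGZ/latest.msd",
        }
        MAX_AGE_S = 3600                  # flag a station whose file is older than one hour

        def station_status(path, max_age=MAX_AGE_S):
            if not os.path.exists(path):
                return "MISSING"
            age = time.time() - os.path.getmtime(path)
            return "OK" if age <= max_age else f"STALE ({age / 3600:.1f} h)"

        for name, path in STATIONS.items():
            print(f"{name}: {station_status(path)}")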

  16. Scalable High-Performance Ultraminiature Graphene Micro-Supercapacitors by a Hybrid Technique Combining Direct Writing and Controllable Microdroplet Transfer.

    PubMed

    Shen, Daozhi; Zou, Guisheng; Liu, Lei; Zhao, Wenzheng; Wu, Aiping; Duley, Walter W; Zhou, Y Norman

    2018-02-14

    Miniaturization of energy storage devices can significantly decrease the overall size of electronic systems. However, this miniaturization is limited by the reduction of electrode dimensions and the reproducible transfer of small electrolyte drops. This paper first reports a simple, scalable direct-writing method for the production of ultraminiature microsupercapacitor (MSC) electrodes, based on femtosecond laser reduced graphene oxide (fsrGO) interlaced pads. These pads, separated by 2 μm spacing, are 100 μm long and 8 μm wide. A second stage involves the accurate transfer of an electrolyte microdroplet on top of each individual electrode, which can avoid any interference of the electrolyte with other electronic components. Abundant in-plane mesopores in fsrGO induced by the fs laser, together with the ultrashort interelectrode spacing, enable MSCs to exhibit a high specific capacitance (6.3 mF cm⁻² and 105 F cm⁻³) and ∼100% retention after 1000 cycles. An all-graphene resistor-capacitor (RC) filter is also constructed by combining the MSC and a fsrGO resistor, which is confirmed to exhibit highly enhanced performance characteristics. This new hybrid technique combining fs laser direct writing and precise microdroplet transfer easily enables scalable production of ultraminiature MSCs, which is believed to be significant for the practical application of micro-supercapacitors in microelectronic systems.

  17. Mg(OH)2 nanoparticles produced at room temperature by an innovative, facile, and scalable synthesis route

    NASA Astrophysics Data System (ADS)

    Taglieri, Giuliana; Felice, Benito; Daniele, Valeria; Ferrante, Fabiola

    2015-10-01

    Nanoparticles form the fundamental building blocks for many exciting applications in various scientific disciplines. However, the problem of the large-scale synthesis of nanoparticles remains challenging. An original, eco-friendly, single step, and scalable method to produce magnesium hydroxide nanoparticles in aqueous suspensions is presented here. The method, based on an ion exchange process, is extremely simple and rapid (a few minutes). It employs cheap or renewable reactants, operates at room temperature and does not require intermediate steps (washings/purifications) to eliminate undesired compounds. Moreover, it is possible to regenerate the exchange material and to reuse it for new synthesis operations, according to a cyclic procedure, demonstrating the potential scalability of nanoparticle production. Some of the synthesis parameters are varied, and structural and morphological features of the produced nanoparticles, from a few seconds after the beginning of the synthesis up to the ending time, are investigated by means of several techniques, such as X-ray diffraction (profile fitting and Rietveld refinement), transmission electron microscopy, infrared spectroscopy, thermal analyses, and surface area measurements. In all cases, pure and stable suspensions are produced, characterized by crystalline and mesoporous Mg(OH)2 nanoparticles with lamellar morphology. In particular, the nanolamellas appear to consist of a crystallographically oriented superimposition of hexagonally plated, crystalline nanosized precursors (2-3 nm in size).

  18. All-solid-state micro-supercapacitors based on inkjet printed graphene electrodes

    NASA Astrophysics Data System (ADS)

    Li, Jiantong; Mishukova, Viktoriia; Östling, Mikael

    2016-09-01

    The all-solid-state graphene-based in-plane micro-supercapacitors are fabricated simply through reliable inkjet printing of pristine graphene in an interdigitated structure on silicon wafers, serving as both electrodes and current collectors, followed by drop casting of polymer electrolytes (polyvinyl alcohol/H3PO4). Benefiting from the printing process, an attractive porous electrode microstructure with a large number of vertically orientated graphene flakes is observed. The devices exhibit commendable areal capacitance over 0.1 mF/cm² and a long cycle life of over 1000 cycles. The simple and scalable fabrication technique for efficient micro-supercapacitors is promising for on-chip energy storage applications in emerging electronics.

  19. Progress and prospects of GaN-based VCSEL from near UV to green emission

    NASA Astrophysics Data System (ADS)

    Yu, Hsin-chieh; Zheng, Zhi-wei; Mei, Yang; Xu, Rong-bin; Liu, Jian-ping; Yang, Hui; Zhang, Bao-ping; Lu, Tien-chang; Kuo, Hao-chung

    2018-01-01

    GaN is an excellent material for making optoelectronic devices in the blue, blue-violet and green bands. Vertical-cavity surface-emitting lasers (VCSELs) have many advantages, including small footprint, circular symmetry of the output beam, two-dimensional scalability and/or addressability, surface-mount packaging, good price-performance ratio, and simple optics/alignment for output coupling. In this paper, we (1) review the design and fabrication of GaN-based VCSELs, including some technology challenges, (2) discuss the design and metalorganic chemical vapor deposition (MOCVD) growth of electrically pumped blue VCSELs, and (3) demonstrate the world's first green VCSEL using a quantum dot (QD) active region to overcome the 'green gap'.

  20. Scalable high-precision tuning of photonic resonators by resonant cavity-enhanced photoelectrochemical etching

    PubMed Central

    Gil-Santos, Eduardo; Baker, Christopher; Lemaître, Aristide; Gomez, Carmen; Leo, Giuseppe; Favero, Ivan

    2017-01-01

    Photonic lattices of mutually interacting indistinguishable cavities represent a cornerstone of collective phenomena in optics and could become important in advanced sensing or communication devices. The disorder induced by fabrication technologies has so far hindered the development of such resonant cavity architectures, while post-fabrication tuning methods have been limited by complexity and poor scalability. Here we present a new simple and scalable tuning method for ensembles of microphotonic and nanophotonic resonators, which enables their permanent collective spectral alignment. The method introduces an approach of cavity-enhanced photoelectrochemical etching in a fluid, a resonant process triggered by sub-bandgap light that allows for high selectivity and precision. The technique is presented on a gallium arsenide nanophotonic platform and illustrated by finely tuning one, two and up to five resonators. It opens the way to applications requiring large networks of identical resonators and their spectral referencing to external etalons. PMID:28117394

  1. Effects of script-based role play in cardiopulmonary resuscitation team training.

    PubMed

    Chung, Sung Phil; Cho, Junho; Park, Yoo Seok; Kang, Hyung Goo; Kim, Chan Woong; Song, Keun Jeong; Lim, Hoon; Cho, Gyu Chong

    2011-08-01

    The purpose of this study is to compare the cardiopulmonary resuscitation (CPR) team dynamics and performance between a conventional simulation training group and a script-based training group. This was a prospective randomised controlled trial of educational intervention for CPR team training. Fourteen teams, each consisting of five members, were recruited. The conventional group (C) received training using a didactic lecture and simulation with debriefing, while the script group (S) received training using a resuscitation script. The team activity was evaluated with checklists both before and after 1 week of training. The videotaped simulated resuscitation events were compared in terms of team dynamics and performance aspects. Both groups showed significantly higher leadership scores after training (C: 58.2 ± 9.2 vs. 67.2 ± 9.5, p=0.007; S: 57.9 ± 8.1 vs. 65.4 ± 12.1, p=0.034). However, there were no significant improvements in performance scores in either group after training. There were no differences in the score improvement after training between the two groups in dynamics (C: 9.1 ± 12.6 vs. S: 7.4 ± 13.7, p=0.715), performance (C: 5.5 ± 11.4 vs. S: 4.7 ± 9.6, p=0.838) and total scores (C: 14.6 ± 20.1 vs. S: 12.2 ± 19.5, p=0.726). Script-based CPR team training resulted in comparable improvements in team dynamics scores compared with conventional simulation training. Resuscitation scripts may be used as an adjunct for CPR team training.

  2. Adults' Autonomic and Subjective Emotional Responses to Infant Vocalizations: The Role of Secure Base Script Knowledge

    ERIC Educational Resources Information Center

    Groh, Ashley M.; Roisman, Glenn I.

    2009-01-01

    This article examines the extent to which secure base script knowledge--as reflected in an adult's ability to generate narratives in which attachment-related threats are recognized, competent help is provided, and the problem is resolved--is associated with adults' autonomic and subjective emotional responses to infant distress and nondistress…

  3. Pilot interaction with automated airborne decision making systems

    NASA Technical Reports Server (NTRS)

    Hammer, John M.; Wan, C. Yoon; Vasandani, Vijay

    1987-01-01

    The current research is focused on detection of human error and protection from its consequences. A program for monitoring pilot error by comparing pilot actions to a script was described. It dealt primarily with routine errors (slips) that occurred during checklist activity. The model to which operator actions were compared was a script. Current research is an extension along these two dimensions. The ORS fault detection aid uses a sophisticated device model rather than a script. The newer initiative, the model-based and constraint-based warning system, uses an even more sophisticated device model and is intended to prevent all types of error, not just slips or bad decisions.

  4. Scalability of grid- and subbasin-based land surface modeling approaches for hydrologic simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tesfa, Teklu K.; Ruby Leung, L.; Huang, Maoyi

    2014-03-27

    This paper investigates the relative merits of grid- and subbasin-based land surface modeling approaches for hydrologic simulations, with a focus on their scalability (i.e., abilities to perform consistently across a range of spatial resolutions) in simulating runoff generation. Simulations produced by the grid- and subbasin-based configurations of the Community Land Model (CLM) are compared at four spatial resolutions (0.125°, 0.25°, 0.5° and 1°) over the topographically diverse region of the U.S. Pacific Northwest. Using the 0.125° resolution simulation as the “reference”, statistical skill metrics are calculated and compared across simulations at 0.25°, 0.5° and 1° spatial resolutions of each modeling approach at basin and topographic region levels. Results suggest significant scalability advantage for the subbasin-based approach compared to the grid-based approach for runoff generation. Basin level annual average relative errors of surface runoff at 0.25°, 0.5°, and 1° compared to 0.125° are 3%, 4%, and 6% for the subbasin-based configuration and 4%, 7%, and 11% for the grid-based configuration, respectively. The scalability advantages of the subbasin-based approach are more pronounced during winter/spring and over mountainous regions. The source of runoff scalability is found to be related to the scalability of major meteorological and land surface parameters of runoff generation. More specifically, the subbasin-based approach is more consistent across spatial scales than the grid-based approach in snowfall/rainfall partitioning, which is related to air temperature and surface elevation. Scalability of a topographic parameter used in the runoff parameterization also contributes to improved scalability of the rain driven saturated surface runoff component, particularly during winter. Hence this study demonstrates the importance of spatial structure for multi-scale modeling of hydrological processes, with implications to surface heat fluxes in coupled land-atmosphere modeling.

  5. Modules based on the geochemical model PHREEQC for use in scripting and programming languages

    USGS Publications Warehouse

    Charlton, Scott R.; Parkhurst, David L.

    2011-01-01

    The geochemical model PHREEQC is capable of simulating a wide range of equilibrium reactions between water and minerals, ion exchangers, surface complexes, solid solutions, and gases. It also has a general kinetic formulation that allows modeling of nonequilibrium mineral dissolution and precipitation, microbial reactions, decomposition of organic compounds, and other kinetic reactions. To facilitate use of these reaction capabilities in scripting languages and other models, PHREEQC has been implemented in modules that easily interface with other software. A Microsoft COM (component object model) has been implemented, which allows PHREEQC to be used by any software that can interface with a COM server—for example, Excel®, Visual Basic®, Python, or MATLAB®. PHREEQC has been converted to a C++ class, which can be included in programs written in C++. The class also has been compiled in libraries for Linux and Windows that allow PHREEQC to be called from C++, C, and Fortran. A limited set of methods implements the full reaction capabilities of PHREEQC for each module. Input methods use strings or files to define reaction calculations in exactly the same formats used by PHREEQC. Output methods provide a table of user-selected model results, such as concentrations, activities, saturation indices, and densities. The PHREEQC module can add geochemical reaction capabilities to surface-water, groundwater, and watershed transport models. It is possible to store and manipulate solution compositions and reaction information for many cells within the module. In addition, the object-oriented nature of the PHREEQC modules simplifies implementation of parallel processing for reactive-transport models. The PHREEQC COM module may be used in scripting languages to fit parameters; to plot PHREEQC results for field, laboratory, or theoretical investigations; or to develop new models that include simple or complex geochemical calculations.
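
    A hedged sketch of driving the COM module from Python on Windows via pywin32; the ProgID and method names (IPhreeqcCOM.Object, LoadDatabase, RunString, GetSelectedOutputArray) follow our reading of the IPhreeqc COM documentation and, like the database path and the example chemistry, should be treated as assumptions:

        import win32com.client   # pywin32; Windows-only

        # PHREEQC input defining a solution and a selected-output block (illustrative).
        input_string = """
        SOLUTION 1
            temp      25
            pH        7.0
            units     mmol/kgw
            Ca        10
            S(6)      10
        SELECTED_OUTPUT
            -pH                   true
            -saturation_indices   Gypsum
        END
        """

        phreeqc = win32com.client.Dispatch("IPhreeqcCOM.Object")
        phreeqc.LoadDatabase(r"C:\phreeqc\database\phreeqc.dat")   # assumed install path
        phreeqc.RunString(input_string)

        # The selected-output table (header row plus data rows).
        for row in phreeqc.GetSelectedOutputArray():
            print(row)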

  6. Modules based on the geochemical model PHREEQC for use in scripting and programming languages

    USGS Publications Warehouse

    Charlton, S.R.; Parkhurst, D.L.

    2011-01-01

    The geochemical model PHREEQC is capable of simulating a wide range of equilibrium reactions between water and minerals, ion exchangers, surface complexes, solid solutions, and gases. It also has a general kinetic formulation that allows modeling of nonequilibrium mineral dissolution and precipitation, microbial reactions, decomposition of organic compounds, and other kinetic reactions. To facilitate use of these reaction capabilities in scripting languages and other models, PHREEQC has been implemented in modules that easily interface with other software. A Microsoft COM (component object model) has been implemented, which allows PHREEQC to be used by any software that can interface with a COM server-for example, Excel®, Visual Basic®, Python, or MATLAB®. PHREEQC has been converted to a C++ class, which can be included in programs written in C++. The class also has been compiled in libraries for Linux and Windows that allow PHREEQC to be called from C++, C, and Fortran. A limited set of methods implements the full reaction capabilities of PHREEQC for each module. Input methods use strings or files to define reaction calculations in exactly the same formats used by PHREEQC. Output methods provide a table of user-selected model results, such as concentrations, activities, saturation indices, and densities. The PHREEQC module can add geochemical reaction capabilities to surface-water, groundwater, and watershed transport models. It is possible to store and manipulate solution compositions and reaction information for many cells within the module. In addition, the object-oriented nature of the PHREEQC modules simplifies implementation of parallel processing for reactive-transport models. The PHREEQC COM module may be used in scripting languages to fit parameters; to plot PHREEQC results for field, laboratory, or theoretical investigations; or to develop new models that include simple or complex geochemical calculations. © 2011.

  7. Isca, v1.0: a framework for the global modelling of the atmospheres of Earth and other planets at varying levels of complexity

    NASA Astrophysics Data System (ADS)

    Vallis, Geoffrey K.; Colyer, Greg; Geen, Ruth; Gerber, Edwin; Jucker, Martin; Maher, Penelope; Paterson, Alexander; Pietschnig, Marianne; Penn, James; Thomson, Stephen I.

    2018-03-01

    Isca is a framework for the idealized modelling of the global circulation of planetary atmospheres at varying levels of complexity and realism. The framework is an outgrowth of models from the Geophysical Fluid Dynamics Laboratory in Princeton, USA, designed for Earth's atmosphere, but it may readily be extended into other planetary regimes. Various forcing and radiation options are available, from dry, time invariant, Newtonian thermal relaxation to moist dynamics with radiative transfer. Options are available in the dry thermal relaxation scheme to account for the effects of obliquity and eccentricity (and so seasonality), different atmospheric optical depths and a surface mixed layer. An idealized grey radiation scheme, a two-band scheme, and a multiband scheme are also available, all with simple moist effects and astronomically based solar forcing. At the complex end of the spectrum the framework provides a direct connection to comprehensive atmospheric general circulation models. For Earth modelling, options include an aquaplanet and configurable continental outlines and topography. Continents may be defined by changing albedo, heat capacity, and evaporative parameters and/or by using a simple bucket hydrology model. Oceanic Q fluxes may be added to reproduce specified sea surface temperatures, with arbitrary continental distributions. Planetary atmospheres may be configured by changing planetary size and mass, solar forcing, atmospheric mass, radiation, and other parameters. Examples are given of various Earth configurations as well as a giant planet simulation, a slowly rotating terrestrial planet simulation, and tidally locked and other orbitally resonant exoplanet simulations. The underlying model is written in Fortran and may largely be configured with Python scripts. Python scripts are also used to run the model on different architectures, to archive the output, and for diagnostics, graphics, and post-processing. All of these features are publicly available in a Git-based repository.
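
    To make the "configured and run with Python scripts" idea concrete, a generic sketch of script-driven configuration of a Fortran model: write a namelist, then launch the executable. This is not Isca's actual Python API, which provides its own experiment and diagnostic classes; the namelist groups, executable name and MPI launch line are illustrative assumptions:

        import subprocess
        from pathlib import Path

        run_dir = Path("run_aquaplanet")
        run_dir.mkdir(exist_ok=True)

        # Hypothetical Fortran namelist controlling run length and radiation scheme.
        namelist = """\
        &main_nml
            days     = 30,
            dt_atmos = 600
        /
        &radiation_nml
            scheme = 'grey'
        /
        """
        (run_dir / "input.nml").write_text(namelist)

        # Launch the (assumed) model executable under MPI and capture the log.
        with open(run_dir / "run.log", "w") as log:
            subprocess.run(["mpirun", "-n", "16", "./model.x"], cwd=run_dir,
                           stdout=log, stderr=subprocess.STDOUT, check=False)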

  8. Combined didactic and scenario-based education improves the ability of intensive care unit staff to recognize delirium at the bedside

    PubMed Central

    Devlin, John W; Marquis, Francois; Riker, Richard R; Robbins, Tracey; Garpestad, Erik; Fong, Jeffrey J; Didomenico, Dorothy; Skrobik, Yoanna

    2008-01-01

    Background While nurses play a key role in identifying delirium, several authors have noted variability in their ability to recognize delirium. We sought to measure the impact of a simple educational intervention on the ability of intensive care unit (ICU) nurses to clinically identify delirium and to use a standardized delirium scale correctly. Methods Fifty ICU nurses from two different hospitals (university medical and community teaching) evaluated an ICU patient for pain, level of sedation and presence of delirium before and after an educational intervention. The same patient was concomitantly, but independently, evaluated by a validated judge (ρ = 0.98) who acted as the reference standard in all cases. The education consisted of two script concordance case scenarios, a slide presentation regarding scale-based delirium assessment, and two further cases. Results Nurses' clinical recognition of delirium was poor in the before-education period as only 24% of nurses reported the presence or absence of delirium and only 16% were correct compared with the judge. After education, the number of nurses able to evaluate delirium using any scale (12% vs 82%, P < 0.0005) and use it correctly (8% vs 62%, P < 0.0005) increased significantly. While judge-nurse agreement (Spearman ρ) for the presence of delirium was relatively high for both the before-education period (r = 0.74, P = 0.262) and after-education period (r = 0.71, P < 0.0005), the low number of nurses evaluating delirium before education led to statistical significance only after education. Education did not alter nurses' self-reported evaluation of delirium (before 76% vs after 100%, P = 0.125). Conclusion A simple composite educational intervention incorporating script concordance theory improves the capacity for ICU nurses to screen for delirium nearly as well as experts. Self-reporting by nurses of completion of delirium screening may not constitute an adequate quality assurance process. PMID:18291021

  9. Combined didactic and scenario-based education improves the ability of intensive care unit staff to recognize delirium at the bedside.

    PubMed

    Devlin, John W; Marquis, Francois; Riker, Richard R; Robbins, Tracey; Garpestad, Erik; Fong, Jeffrey J; Didomenico, Dorothy; Skrobik, Yoanna

    2008-01-01

    While nurses play a key role in identifying delirium, several authors have noted variability in their ability to recognize delirium. We sought to measure the impact of a simple educational intervention on the ability of intensive care unit (ICU) nurses to clinically identify delirium and to use a standardized delirium scale correctly. Fifty ICU nurses from two different hospitals (university medical and community teaching) evaluated an ICU patient for pain, level of sedation and presence of delirium before and after an educational intervention. The same patient was concomitantly, but independently, evaluated by a validated judge (rho = 0.98) who acted as the reference standard in all cases. The education consisted of two script concordance case scenarios, a slide presentation regarding scale-based delirium assessment, and two further cases. Nurses' clinical recognition of delirium was poor in the before-education period as only 24% of nurses reported the presence or absence of delirium and only 16% were correct compared with the judge. After education, the number of nurses able to evaluate delirium using any scale (12% vs 82%, P < 0.0005) and use it correctly (8% vs 62%, P < 0.0005) increased significantly. While judge-nurse agreement (Spearman rho) for the presence of delirium was relatively high for both the before-education period (r = 0.74, P = 0.262) and after-education period (r = 0.71, P < 0.0005), the low number of nurses evaluating delirium before education led to statistical significance only after education. Education did not alter nurses' self-reported evaluation of delirium (before 76% vs after 100%, P = 0.125). A simple composite educational intervention incorporating script concordance theory improves the capacity for ICU nurses to screen for delirium nearly as well as experts. Self-reporting by nurses of completion of delirium screening may not constitute an adequate quality assurance process.

  10. Managing an archive of weather satellite images

    NASA Technical Reports Server (NTRS)

    Seaman, R. L.

    1992-01-01

    The author's experiences of building and maintaining an archive of hourly weather satellite pictures at NOAO are described. This archive has proven very popular with visiting and staff astronomers - especially on windy days and cloudy nights. Given access to a source of such pictures, a suite of simple shell and IRAF CL scripts can provide a great deal of robust functionality with little effort. These pictures and associated data products such as surface analysis (radar) maps and National Weather Service forecasts are updated hourly at anonymous ftp sites on the Internet, although your local Atmospheric Sciences Department may prove to be a more reliable source. The raw image formats are unfamiliar to most astronomers, but reading them into IRAF is straightforward. Techniques for performing this format conversion at the host computer level are described which may prove useful for other chores. Pointers are given to sources of data and of software, including a package of example tools. These tools include shell and Perl scripts for downloading pictures, maps, and forecasts, as well as IRAF scripts and host level programs for translating the images into IRAF and GIF formats and for slicing & dicing the resulting images. Hints for displaying the images and for making hardcopies are given.
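
    In the same spirit, a minimal Python sketch of an hourly fetch-and-timestamp script; the URLs are placeholders, not the sites referenced above:

        import time
        import urllib.request

        SOURCES = {                                   # local name -> source URL (placeholders)
            "goes_vis.gif": "ftp://ftp.example.edu/weather/latest_vis.gif",
            "forecast.txt": "https://weather.example.edu/forecast.txt",
        }

        stamp = time.strftime("%Y%m%d_%H")            # one set of files per hour
        for name, url in SOURCES.items():
            try:
                urllib.request.urlretrieve(url, f"{stamp}_{name}")
            except OSError as err:                    # network or FTP failure
                print(f"skipping {name}: {err}")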

  11. Design of 3D simulation engine for oilfield safety training

    NASA Astrophysics Data System (ADS)

    Li, Hua-Ming; Kang, Bao-Sheng

    2015-03-01

    Aiming at the demand for rapid custom development of 3D simulation systems for oilfield safety training, this paper designs and implements a 3D simulation engine based on a script-driven method, a multi-layer structure, pre-defined entity objects and high-level tools such as a scene editor, a script editor and a program loader. A scripting language has been defined to control the system's progress, events and operating results. Training teachers can use this engine to edit 3D virtual scenes, set the properties of entity objects, define the logic script of a task, and produce a 3D simulation training system without any programming skills. By extending the entity classes, this engine can be quickly applied to other virtual training areas.
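
    A hedged sketch of the script-driven idea: a tiny interpreter that dispatches each line of a training script to an engine action. The command names and handler API are invented for illustration; the paper does not specify its scripting syntax:

        # Each script line: <command> <arguments...>; the engine registers a handler per command.
        SCRIPT = """
        load_scene   drilling_platform
        spawn        worker  x=12 y=3
        trigger      gas_leak after=30
        expect       worker wears_mask
        """

        def run_script(script, handlers):
            for line in script.strip().splitlines():
                cmd, *args = line.split()
                handlers[cmd](*args)                  # dispatch to the engine

        handlers = {                                  # stand-ins for real engine calls
            "load_scene": lambda name: print("loading scene", name),
            "spawn":      lambda kind, *kv: print("spawning", kind, kv),
            "trigger":    lambda event, *kv: print("scheduling", event, kv),
            "expect":     lambda actor, cond: print("checking", actor, cond),
        }
        run_script(SCRIPT, handlers)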

  12. Secure Base Script and Psychological Dysfunction in Japanese Young Adults in the 21st Century: Using the Attachment Script Assessment

    ERIC Educational Resources Information Center

    Umemura, Tomotaka; Watanabe, Manami; Tazuke, Kohei; Asada-Hirano, Shintaro; Kudo, Shimpei

    2018-01-01

    The universality of secure base construct, which suggests that one's use of an attachment figure as a secure base from which to explore the environment is an evolutionary outcome, is one of the core ideas of attachment theory. However, this universality idea has been critiqued because exploration is not as valued in Japanese culture as it is in…

  13. External access to ALICE controls conditions data

    NASA Astrophysics Data System (ADS)

    Jadlovský, J.; Jadlovská, A.; Sarnovský, J.; Jajčišin, Š.; Čopík, M.; Jadlovská, S.; Papcun, P.; Bielek, R.; Čerkala, J.; Kopčík, M.; Chochula, P.; Augustinus, A.

    2014-06-01

    ALICE Controls data produced by the commercial SCADA system WINCCOA is stored in an ORACLE database on the private experiment network. The SCADA system allows for basic access and processing of the historical data. More advanced analysis requires tools like ROOT and therefore needs a separate access method to the archives. The present scenario expects that detector experts create simple WINCCOA scripts, which retrieve and store data in a form usable for further studies. This relatively simple procedure generates a lot of administrative overhead - users have to request the data, experts are needed to run the scripts, and the results have to be exported outside of the experiment network. The new mechanism profits from a database replica running on the CERN campus network. Access to this database is not restricted and there is no risk of generating a heavy load affecting the operation of the experiment. The tools presented in this paper allow for access to this data. The users can use web-based tools to generate the requests, consisting of the data identifiers and the period of time of interest. The administrators maintain full control over the data - an authorization and authentication mechanism helps to assign privileges to selected users and restrict access to certain groups of data. An advanced caching mechanism allows the user to profit from the presence of already processed data sets. This feature significantly reduces the time required for debugging, as the retrieval of raw data can last tens of minutes. A highly configurable client allows for information retrieval, bypassing the interactive interface. This method is for example used by ALICE Offline to extract operational conditions after a run is completed. Last but not least, the software can be easily adapted to any underlying database structure and is therefore not limited to WINCCOA.
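
    A hedged sketch of the kind of non-interactive request such a client might issue against the replica's web service; the endpoint, parameter names and JSON response format are assumptions for illustration, not the actual ALICE interface:

        import json
        import urllib.parse
        import urllib.request

        params = urllib.parse.urlencode({
            "element": "HV/sector00/chamber01/vMon",      # assumed data identifier
            "from": "2014-06-01T00:00:00",
            "to":   "2014-06-02T00:00:00",
        })
        url = "https://alice-conditions.example.cern.ch/query?" + params   # placeholder host

        with urllib.request.urlopen(url) as resp:
            points = json.load(resp)                      # assumed: list of (time, value) samples
        print(len(points), "samples retrieved")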

  14. A SQL-Database Based Meta-CASE System and its Query Subsystem

    NASA Astrophysics Data System (ADS)

    Eessaar, Erki; Sgirka, Rünno

    Meta-CASE systems simplify the creation of CASE (Computer Aided System Engineering) systems. In this paper, we present a meta-CASE system that provides a web-based user interface and uses an object-relational database system (ORDBMS) as its basis. The use of ORDBMSs allows us to integrate different parts of the system and simplify the creation of meta-CASE and CASE systems. ORDBMSs provide a powerful query mechanism. The proposed system allows developers to use queries to evaluate and gradually improve artifacts and calculate values of software measures. We illustrate the use of the system by using the SimpleM modeling language and discuss the use of SQL in the context of queries about artifacts. We have created a prototype of the meta-CASE system by using the PostgreSQL™ ORDBMS and the PHP scripting language.
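
    As an illustration of a software-measure query of the kind described, a short Python/psycopg2 sketch; the table and column names are assumptions, since the repository schema is not given in the abstract:

        import psycopg2

        conn = psycopg2.connect(dbname="metacase", user="case", password="secret")
        with conn, conn.cursor() as cur:
            cur.execute("""
                SELECT d.name, COUNT(e.id) AS element_count   -- a simple size measure per diagram
                FROM diagram d
                LEFT JOIN element e ON e.diagram_id = d.id
                GROUP BY d.name
                ORDER BY element_count DESC;
            """)
            for name, count in cur.fetchall():
                print(name, count)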

  15. Architecture, Design, and Development of an HTML/JavaScript Web-Based Group Support System.

    ERIC Educational Resources Information Center

    Romano, Nicholas C., Jr.; Nunamaker, Jay F., Jr.; Briggs, Robert O.; Vogel, Douglas R.

    1998-01-01

    Examines the need for virtual workspaces and describes the architecture, design, and development of GroupSystems for the World Wide Web (GSWeb), an HTML/JavaScript Web-based Group Support System (GSS). GSWeb, an application interface similar to a Graphical User Interface (GUI), is currently used by teams around the world and relies on user…

  16. Program Instrumentation and Trace Analysis

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Goldberg, Allen; Filman, Robert; Rosu, Grigore; Koga, Dennis (Technical Monitor)

    2002-01-01

    Several attempts have been made recently to apply techniques such as model checking and theorem proving to the analysis of programs. This reflects a current trend toward analyzing real software systems instead of just their designs. This includes our own effort to develop a model checker for Java, the Java PathFinder 1, one of the very first of its kind in 1998. However, model checking cannot handle very large programs without some kind of abstraction of the program. This paper describes a complementary scalable technique to handle such large programs. Our interest here is in the observation part of the equation: How much information can be extracted about a program from observing a single execution trace? It is our intention to develop a technology that can be applied automatically to large, full-size applications, with minimal modification to the code. We present a tool, Java PathExplorer (JPaX), for exploring execution traces of Java programs. The tool prioritizes scalability over completeness, and is directed towards detecting errors in programs, not proving correctness. One core element in JPaX is an instrumentation package that allows one to instrument Java byte code files to log various events when executed. The instrumentation is driven by a user-provided script that specifies what information to log. Examples of instructions that such a script can contain are: 'report name and arguments of all called methods defined in class C, together with a timestamp'; 'report all updates to all variables'; and 'report all acquisitions and releases of locks'. In more complex instructions one can specify that certain expressions should be evaluated and even that certain code should be executed under various conditions. The instrumentation package can hence be seen as implementing Aspect Oriented Programming for Java in the sense that one can add functionality to a Java program without explicitly changing the code of the original program; rather, one writes an aspect and compiles it into the original program using the instrumentation. Another core element of JPaX is an observation package that supports the analysis of the generated event stream. Two kinds of analysis are currently supported. In temporal analysis the execution trace is evaluated against formulae written in temporal logic. We have implemented a temporal logic evaluator on finite traces using the Maude rewriting system from SRI International, USA. Temporal logic is defined in Maude by giving its syntax as a signature and its semantics as rewrite equations. The resulting semantics is extremely efficient and can handle event streams of hundreds of millions of events in a few minutes. Furthermore, the implementation is very succinct. The second form of event stream analysis supported is error pattern analysis, where an execution trace is analyzed using various error detection algorithms that can identify error-prone programming practices that may potentially lead to errors in some different executions. Two such algorithms focusing on concurrency errors have been implemented in JPaX, one for deadlocks and the other for data races. It is important to note that a deadlock or data race does not need to actually occur in order for its potential to be detected with these algorithms. This is what makes them very scalable in practice. The data race algorithm implemented is the Eraser algorithm from Compaq, adapted to Java.
The tool is currently being applied by the developers of spacecraft-control software to their code base in order to evaluate its applicability.
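
    A minimal sketch of the lockset idea behind the Eraser-style data race analysis mentioned above: for each shared variable, intersect the sets of locks held at its accesses, and flag an empty intersection as a potential race. The event format is an assumption, and Eraser's state-machine refinements (e.g., for initialization and read sharing) are omitted:

        def lockset_races(events):
            held = {}       # thread -> set of locks currently held
            lockset = {}    # variable -> candidate lockset
            races = set()
            for op, thread, target in events:
                locks = held.setdefault(thread, set())
                if op == "acquire":
                    locks.add(target)
                elif op == "release":
                    locks.discard(target)
                elif op == "access":
                    cand = lockset.setdefault(target, set(locks))
                    cand &= locks                      # intersect with locks held now
                    if not cand:
                        races.add(target)              # no common lock protects this variable
            return races

        trace = [
            ("acquire", "T1", "L"), ("access", "T1", "x"), ("release", "T1", "L"),
            ("access", "T2", "x"),                     # unprotected access from another thread
        ]
        print(lockset_races(trace))                    # -> {'x'}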

  17. The impact of marital withdrawal and secure base script knowledge on mothers' and fathers' parenting.

    PubMed

    Trumbell, Jill M; Hibel, Leah C; Mercado, Evelyn; Posada, Germán

    2018-06-21

    The current study examines associations between marital conflict and negative parenting behaviors among fathers and mothers, and the extent to which internal working models (IWMs) of attachment relationships may serve as sources of risk or resilience during family interactions. The sample consisted of 115 families (mothers, fathers, and their 6-month-old infants) who participated in a controlled experiment. Couples were randomly assigned to engage in either a conflict or positive marital discussion, followed by parent-infant freeplay sessions and assessment of parental IWMs of attachment (i.e., secure base script knowledge). While no differences in parenting behaviors emerged between the conflict and positive groups, findings revealed that couple withdrawal during the marital discussion was related to more intrusive and emotionally disengaged parenting for mothers and fathers. Interestingly, secure base script knowledge was inversely related to intrusion and emotional disengagement for fathers, but not for mothers. Furthermore, only among fathers did secure base script knowledge serve to significantly buffer the impact of marital disengagement on negative parenting (emotional disengagement). Findings are discussed using a family systems framework and expand our understanding of families, and family members, at risk. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  18. EAGLE: 'EAGLE'Is an' Algorithmic Graph Library for Exploration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-01-16

    The Resource Description Framework (RDF) and SPARQL Protocol and RDF Query Language (SPARQL) were introduced about a decade ago to enable flexible schema-free data interchange on the Semantic Web. Today data scientists use the framework as a scalable graph representation for integrating, querying, exploring and analyzing data sets hosted at different sources. With increasing adoption, the need for graph mining capabilities for the Semantic Web has emerged. Today there are no tools to conduct "graph mining" on standard RDF data sets. We address that need through implementation of popular iterative graph mining algorithms (triangle count, connected component analysis, degree distribution, diversity degree, PageRank, etc.). We implement these algorithms as SPARQL queries wrapped within Python scripts and call our software tool EAGLE. In RDF style, EAGLE stands for "EAGLE 'Is an' algorithmic graph library for exploration." EAGLE is like 'MATLAB' for 'Linked Data.'
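
    A minimal sketch of the "graph algorithm as a SPARQL query wrapped in a Python script" idea, here an out-degree distribution computed with rdflib over a few illustrative triples:

        import rdflib

        data = """
        @prefix ex: <http://example.org/> .
        ex:a ex:knows ex:b , ex:c .
        ex:b ex:knows ex:c .
        """
        g = rdflib.Graph()
        g.parse(data=data, format="turtle")

        query = """
        SELECT ?s (COUNT(?o) AS ?degree)
        WHERE { ?s ?p ?o }
        GROUP BY ?s
        ORDER BY DESC(?degree)
        """
        for s, degree in g.query(query):
            print(s, degree)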

  19. Modernizing Earth and Space Science Modeling Workflows in the Big Data Era

    NASA Astrophysics Data System (ADS)

    Kinter, J. L.; Feigelson, E.; Walker, R. J.; Tino, C.

    2017-12-01

    Modeling is a major aspect of Earth and space science research. The development of numerical models of the Earth system, planetary systems or astrophysical systems is essential to linking theory with observations. Optimal use of observations that are quite expensive to obtain and maintain typically requires data assimilation that involves numerical models. In the Earth sciences, models of the physical climate system are typically used for data assimilation, climate projection, and inter-disciplinary research, spanning applications from analysis of multi-sensor data sets to decision-making in climate-sensitive sectors with applications to ecosystems, hazards, and various biogeochemical processes. In space physics, most models are built from first principles, require considerable expertise to run, and are frequently modified significantly for each case study. The volume and variety of model output data from modeling Earth and space systems are rapidly increasing and have reached a scale where human interaction with the data is prohibitively inefficient. A major barrier to progress is that modeling workflows are not treated by practitioners as a design problem. Existing workflows have been created by a slow accretion of software, typically based on undocumented, inflexible scripts haphazardly modified by a succession of scientists and students not trained in modern software engineering methods. As a result, existing modeling workflows suffer from an inability to onboard new datasets into models, an inability to keep pace with accelerating data production rates, and irreproducibility, among other problems. These factors are creating an untenable situation for those conducting and supporting Earth system and space science. Improving modeling workflows requires investments in hardware, software and human resources. This paper describes the critical path issues that must be targeted to accelerate modeling workflows, including script modularization, parallelization, and automation in the near term, and longer term investments in virtualized environments for improved scalability, tolerance for lossy data compression, novel data-centric memory and storage technologies, and tools for peer reviewing, preserving and sharing workflows, as well as fundamental statistical and machine learning algorithms.

  20. The scalable implementation of quantum walks using classical light

    NASA Astrophysics Data System (ADS)

    Goyal, Sandeep K.; Roux, F. S.; Forbes, Andrew; Konrad, Thomas

    2014-02-01

    A quantum walk is the quantum analog of the classical random walk. Despite their simple structure, quantum walks form a universal platform on which to implement any algorithm of quantum computation. However, it is very hard to realize quantum walks with a sufficient number of iterations in quantum systems due to their sensitivity to environmental influences and the subsequent loss of coherence. Here we present a scalable implementation scheme for one-dimensional quantum walks with an arbitrary number of steps using the orbital angular momentum modes of classical light beams. Furthermore, we show that the same setup, with a minor adjustment, can also realize electric quantum walks.
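
    As a purely numerical illustration of the walk dynamics discussed above (not of the optical orbital-angular-momentum implementation itself), the following sketch simulates a one-dimensional discrete-time quantum walk with a Hadamard coin using NumPy.

    # Numerical sketch of a one-dimensional discrete-time quantum walk with a Hadamard coin.
    # This illustrates the abstract walk dynamics, not the optical OAM implementation itself.
    import numpy as np

    def hadamard_walk(steps):
        n_pos = 2 * steps + 1                      # positions -steps..steps
        # amplitudes[coin, position]; coin 0 moves left, coin 1 moves right
        amp = np.zeros((2, n_pos), dtype=complex)
        amp[0, steps] = 1 / np.sqrt(2)             # symmetric initial coin state
        amp[1, steps] = 1j / np.sqrt(2)
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        for _ in range(steps):
            amp = H @ amp                          # coin toss applied at every position
            shifted = np.zeros_like(amp)
            shifted[0, :-1] = amp[0, 1:]           # coin 0 shifts one site to the left
            shifted[1, 1:] = amp[1, :-1]           # coin 1 shifts one site to the right
            amp = shifted
        return np.abs(amp[0]) ** 2 + np.abs(amp[1]) ** 2   # position probabilities

    probs = hadamard_walk(50)
    print(probs.sum())   # ~1.0; the distribution spreads ballistically, unlike a classical walk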

  1. A simple approach to hybrid inorganic–organic step-growth hydrogels with scalable control of physicochemical properties and biodegradability

    PubMed Central

    Alves, F.

    2015-01-01

    We prepared new and scalable hybrid inorganic–organic step-growth hydrogels with polyhedral oligomeric silsesquioxane (POSS) network knots as construction elements and hydrolytically degradable poly(ethylene glycol) (PEG) di-ester macromonomers, by in situ radical-mediated thiol–ene photopolymerization. The physicochemical properties of the gels can be fine-tuned over orders of magnitude, including functionalization of their interior, a hierarchical gel structure, and biodegradability. PMID:25821524

  2. A New Metre for Cheap, Quick, Reliable and Simple Thermal Transmittance (U-Value) Measurements in Buildings.

    PubMed

    Andújar Márquez, José Manuel; Martínez Bohórquez, Miguel Ángel; Gómez Melgar, Sergio

    2017-09-03

    This paper deals with thermal transmittance measurement focused on buildings, and specifically on building energy retrofitting. Today, if many thermal transmittance measurements are needed in a short time, the current devices, based on measuring the heat flow through the wall, cannot carry them out unless a large number of devices are used at once, along with intensive and tedious post-processing and analysis work. In this paper, starting from well-known physical laws, the authors develop a methodology based on three temperature measurements, which is implemented by a novel thermal transmittance metre. The paper shows its development step by step. The resulting device is modular, scalable, and fully wireless; it is capable of taking as many measurements at once as the user needs. The developed system is compared, working together on the same test, with the currently used one based on heat flow. The results show that the developed metre allows thermal transmittance measurements in buildings to be carried out in a cheap, quick, reliable and simple way.
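
    The abstract does not spell out the three-temperature formula. For illustration only, a commonly used approximation estimates U from the indoor air, interior wall surface, and outdoor air temperatures together with a standardized interior surface heat-transfer coefficient (about 7.69 W/(m^2 K), the ISO 6946 value); the sketch below assumes that formulation and is not necessarily the paper's exact method.

    # Illustrative sketch of a three-temperature U-value estimate (not the paper's exact method).
    # Assumes the common approximation U ~= h_i * (T_in - T_wall_in) / (T_in - T_out),
    # with h_i the standardized interior surface heat-transfer coefficient.

    H_INTERIOR = 7.69  # W/(m^2*K), assumed interior surface coefficient (ISO 6946 value)

    def u_value(t_indoor_air, t_interior_surface, t_outdoor_air, h_i=H_INTERIOR):
        """Estimate thermal transmittance from three temperature readings (degrees C)."""
        delta_total = t_indoor_air - t_outdoor_air
        if abs(delta_total) < 5:
            raise ValueError("Indoor/outdoor difference too small for a reliable estimate")
        return h_i * (t_indoor_air - t_interior_surface) / delta_total

    # Example: 20 C indoors, 18.2 C on the inner wall surface, 2 C outdoors.
    print(round(u_value(20.0, 18.2, 2.0), 2), "W/(m^2*K)")   # ~0.77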

  3. Interactive Light Stimulus Generation with High Performance Real-Time Image Processing and Simple Scripting.

    PubMed

    Szécsi, László; Kacsó, Ágota; Zeck, Günther; Hantz, Péter

    2017-01-01

    Light stimulation with precise and complex spatial and temporal modulation is demanded by a series of research fields like visual neuroscience, optogenetics, ophthalmology, and visual psychophysics. We developed a user-friendly and flexible stimulus generating framework (GEARS GPU-based Eye And Retina Stimulation Software), which offers access to GPU computing power, and allows interactive modification of stimulus parameters during experiments. Furthermore, it has built-in support for driving external equipment, as well as for synchronization tasks, via USB ports. The use of GEARS does not require elaborate programming skills. The necessary scripting is visually aided by an intuitive interface, while the details of the underlying software and hardware components remain hidden. Internally, the software is a C++/Python hybrid using OpenGL graphics. Computations are performed on the GPU, and are defined in the GLSL shading language. However, all GPU settings, including the GPU shader programs, are automatically generated by GEARS. This is configured through a method encountered in game programming, which allows high flexibility: stimuli are straightforwardly composed using a broad library of basic components. Stimulus rendering is implemented solely in C++, therefore intermediary libraries for interfacing could be omitted. This enables the program to perform computationally demanding tasks like en-masse random number generation or real-time image processing by local and global operations.

  4. Acquisition and Maintenance of Scripts in Aphasia: A Comparison of Two Cuing Conditions

    PubMed Central

    Cherney, Leora R.; Kaye, Rosalind C.; van Vuuren, Sarel

    2014-01-01

    Purpose This study was designed to compare acquisition and maintenance of scripts under two conditions: High Cue which provided numerous multimodality cues designed to minimize errors, and Low Cue which provided minimal cues. Methods In a randomized controlled cross-over study, eight individuals with chronic aphasia received intensive computer-based script training under two cuing conditions. Each condition lasted three weeks, with a three-week washout period. Trained and untrained scripts were probed for accuracy and rate at baseline, during treatment, immediately post-treatment, and at three and six weeks post-treatment. Significance testing was conducted on gain scores and effect sizes were calculated. Results Training resulted in significant gains in script acquisition with maintenance of skills at three and six weeks post-treatment. Differences between cuing conditions were not significant. When severity of aphasia was considered, there also were no significant differences between conditions, although magnitude of change was greater in the High Cue condition versus the Low Cue condition for those with more severe aphasia. Conclusions Both cuing conditions were effective in acquisition and maintenance of scripts. The High Cue condition may be advantageous for those with more severe aphasia. Findings support the clinical use of script training and importance of considering aphasia severity. PMID:24686911

  5. Handwritten numeral databases of Indian scripts and multistage recognition of mixed numerals.

    PubMed

    Bhattacharya, Ujjwal; Chaudhuri, B B

    2009-03-01

    This article primarily concerns the problem of isolated handwritten numeral recognition of major Indian scripts. The principal contributions presented here are (a) pioneering development of two databases for handwritten numerals of two most popular Indian scripts, (b) a multistage cascaded recognition scheme using wavelet based multiresolution representations and multilayer perceptron classifiers and (c) application of (b) for the recognition of mixed handwritten numerals of three Indian scripts Devanagari, Bangla and English. The present databases include respectively 22,556 and 23,392 handwritten isolated numeral samples of Devanagari and Bangla collected from real-life situations and these can be made available free of cost to researchers of other academic Institutions. In the proposed scheme, a numeral is subjected to three multilayer perceptron classifiers corresponding to three coarse-to-fine resolution levels in a cascaded manner. If rejection occurred even at the highest resolution, another multilayer perceptron is used as the final attempt to recognize the input numeral by combining the outputs of three classifiers of the previous stages. This scheme has been extended to the situation when the script of a document is not known a priori or the numerals written on a document belong to different scripts. Handwritten numerals in mixed scripts are frequently found in Indian postal mails and table-form documents.
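
    The coarse-to-fine cascade with rejection described above can be sketched schematically as follows; the stage classifiers, the confidence threshold, and the final combining classifier are placeholders (any objects exposing scikit-learn-style predict_proba/predict), not the authors' trained networks.

    # Schematic sketch of a coarse-to-fine classifier cascade with rejection.
    # Each stage is any classifier exposing predict_proba(); the threshold is illustrative.
    import numpy as np

    def cascaded_predict(stages, final_combiner, x, threshold=0.9):
        """Try each resolution-level classifier in turn on feature vector x; accept when
        the top class is confident enough, otherwise fall through to a final combiner."""
        stage_probs = []
        for clf in stages:                       # coarse -> fine resolution levels
            probs = clf.predict_proba(x.reshape(1, -1))[0]
            stage_probs.append(probs)
            if probs.max() >= threshold:
                return int(np.argmax(probs))     # accepted at this stage
        # Rejected by all stages: combine the stage outputs and classify once more.
        combined = np.concatenate(stage_probs).reshape(1, -1)
        return int(final_combiner.predict(combined)[0])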

  6. An Idealized, Single Radial Swirler, Lean-Direct-Injection (LDI) Concept Meshing Script

    NASA Technical Reports Server (NTRS)

    Iannetti, Anthony C.; Thompson, Daniel

    2008-01-01

    To easily study combustor design parameters using computational fluid dynamics codes (CFD), a Gridgen Glyph-based macro (based on the Tcl scripting language) dubbed BladeMaker has been developed for the meshing of an idealized, single radial swirler, lean-direct-injection (LDI) combustor. BladeMaker is capable of taking in a number of parameters, such as blade width, blade tilt with respect to the perpendicular, swirler cup radius, and grid densities, and producing a three-dimensional meshed radial swirler with a can-annular (canned) combustor. This complex script produces a data format suitable for but not specific to the National Combustion Code (NCC), a state-of-the-art CFD code developed for reacting flow processes.

  7. Efficient etching-free transfer of high quality, large-area CVD grown graphene onto polyvinyl alcohol films

    NASA Astrophysics Data System (ADS)

    Marta, Bogdan; Leordean, Cosmin; Istvan, Todor; Botiz, Ioan; Astilean, Simion

    2016-02-01

    Graphene transfer is a procedure of paramount importance for the production of graphene-based electronic devices. The transfer procedure can affect the electronic properties of the transferred graphene and can be detrimental for possible applications both due to procedure induced defects which can appear and due to scalability of the method. Hence, it is important to investigate new transfer methods for graphene that are less time consuming and show great promise. In the present study we propose an efficient, etching-free transfer method that consists in applying a thin polyvinyl alcohol layer on top of the CVD grown graphene on Cu and then peeling-off the graphene onto the polyvinyl alcohol film. We investigate the quality of the transferred graphene before and after the transfer, using Raman spectroscopy and imaging as well as optical and atomic force microscopy techniques. This simple transfer method is scalable and can lead to complete transfer of graphene onto flexible and transparent polymer support films without affecting the quality of the graphene during the transfer procedure.

  8. Rapid electrostatics-assisted layer-by-layer assembly of near-infrared-active colloidal photonic crystals.

    PubMed

    Askar, Khalid; Leo, Sin-Yen; Xu, Can; Liu, Danielle; Jiang, Peng

    2016-11-15

    Here we report a rapid and scalable bottom-up technique for layer-by-layer (LBL) assembling near-infrared-active colloidal photonic crystals consisting of large (⩾1μm) silica microspheres. By combining a new electrostatics-assisted colloidal transferring approach with spontaneous colloidal crystallization at an air/water interface, we have demonstrated that the crystal transfer speed of traditional Langmuir-Blodgett-based colloidal assembly technologies can be enhanced by nearly 2 orders of magnitude. Importantly, the crystalline quality of the resultant photonic crystals is not compromised by this rapid colloidal assembly approach. They exhibit thickness-dependent near-infrared stop bands and well-defined Fabry-Perot fringes in the specular transmission and reflection spectra, which match well with the theoretical calculations using a scalar-wave approximation model and Fabry-Perot analysis. This simple yet scalable bottom-up technology can significantly improve the throughput in assembling large-area, multilayer colloidal crystals, which are of great technological importance in a variety of optical and non-optical applications ranging from all-optical integrated circuits to tissue engineering. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Participatory monitoring to connect local and global priorities for forest restoration.

    PubMed

    Evans, Kristen; Guariguata, Manuel R; Brancalion, Pedro H S

    2018-06-01

    New global initiatives to restore forest landscapes present an unparalleled opportunity to reverse deforestation and forest degradation. Participatory monitoring could play a crucial role in providing accountability, generating local buy in, and catalyzing learning in monitoring systems that need scalability and adaptability to a range of local sites. We synthesized current knowledge from literature searches and interviews to provide lessons for the development of a scalable, multisite participatory monitoring system. Studies show that local people can collect accurate data on forest change, drivers of change, threats to reforestation, and biophysical and socioeconomic impacts that remote sensing cannot. They can do this at one-third the cost of professionals. Successful participatory monitoring systems collect information on a few simple indicators, respond to local priorities, provide appropriate incentives for participation, and catalyze learning and decision making based on frequent analyses and multilevel interactions with other stakeholders. Participatory monitoring could provide a framework for linking global, national, and local needs, aspirations, and capacities for forest restoration. © 2018 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.

  10. Foam separation of Rhodamine-G and Evans Blue using a simple separatory bottle system.

    PubMed

    Dasarathy, Dhweeja; Ito, Yoichiro

    2017-09-29

    A simple separatory glass bottle was used to improve separation effectiveness and cost efficiency while simultaneously creating a simpler system for separating biological compounds. Additionally, it was important to develop a scalable separation method so that it would be applicable to both analytical and preparative separations. Compared to conventional foam separation methods, this method easily forms a stable dry foam, which ensures high purity of the yielded fractions. A negatively charged surfactant, sodium dodecyl sulfate (SDS), was used as the ligand to carry the positively charged Rhodamine-G, leaving the negatively charged Evans Blue in the bottle. The performance of the separatory bottle was tested for separating Rhodamine-G from Evans Blue with sample sizes ranging from 1 to 12 mg in preparative separations and 1-20 μg in analytical separations under optimum conditions. These conditions, including N2 gas pressure, spinning speed of the contents with a magnetic stirrer, concentration of the ligand, volume of the solvent, and concentration of the sample, were all varied and optimized. Based on calculations at their peak absorbances, Rhodamine-G and Evans Blue were efficiently separated in times ranging from 1 h to 3 h, depending on sample volume. Optimal conditions were found to be 60 psi N2 pressure and 2 mM SDS as the affinity ligand. This novel separation method will allow rapid separation of biological compounds while being scalable and cost effective. Published by Elsevier B.V.

  11. PVEX: An expert system for producibility/value engineering

    NASA Technical Reports Server (NTRS)

    Lam, Chun S.; Moseley, Warren

    1991-01-01

    PVEX is an expert system that addresses the selection of materials and processes in missile manufacturing. The producibility and value problem has been studied in depth in past years, and earlier systems were written in dBase III and PROLOG. A new approach is presented in which the solution is achieved by introducing hypothetical reasoning and heuristic criteria, integrated with a simple hypertext system and shell programming. PVEX combines KMS with Unix scripts that graphically depict decision trees. The decision trees convey high-level qualitative problem-solving knowledge to users, and a stand-alone help facility and technical documentation are available through KMS. The system was developed at considerably lower cost than any other comparable expert system.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bastian, Mark; Trigueros, Jose V.

    Phoenix is a Java Virtual Machine (JVM) based library for performing mathematical and astrodynamics calculations. It consists of two primary sub-modules, phoenix-math and phoenix-astrodynamics. The mathematics package has a variety of mathematical classes for performing 3D transformations, geometric reasoning, and numerical analysis. The astrodynamics package has various classes and methods for computing locations, attitudes, accesses, and other values useful for general satellite modeling and simulation. Methods for computing celestial locations, such as the location of the Sun and Moon, are also included. Phoenix is meant to be used as a library within the context of a larger application. For example, it could be used for a web service, desktop client, or to compute simple values in a scripting environment.

  13. The effect of written text on comprehension of spoken English as a foreign language.

    PubMed

    Diao, Yali; Chandler, Paul; Sweller, John

    2007-01-01

    Based on cognitive load theory, this study investigated the effect of simultaneous written presentations on comprehension of spoken English as a foreign language. Learners' language comprehension was compared while they used 3 instructional formats: listening with auditory materials only, listening with a full, written script, and listening with simultaneous subtitled text. Listening with the presence of a script and subtitles led to better understanding of the scripted and subtitled passage but poorer performance on a subsequent auditory passage than listening with the auditory materials only. These findings indicated that where the intention was learning to listen, the use of a full script or subtitles had detrimental effects on the construction and automation of listening comprehension schemas.

  14. Texture for script identification.

    PubMed

    Busch, Andrew; Boles, Wageeh W; Sridharan, Sridha

    2005-11-01

    The problem of determining the script and language of a document image has a number of important applications in the field of document analysis, such as indexing and sorting of large collections of such images, or as a precursor to optical character recognition (OCR). In this paper, we investigate the use of texture as a tool for determining the script of a document image, based on the observation that text has a distinct visual texture. An experimental evaluation of a number of commonly used texture features is conducted on a newly created script database, providing a qualitative measure of which features are most appropriate for this task. Strategies for improving classification results in situations with limited training data and multiple font types are also proposed.
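
    As a rough illustration of the texture-as-feature idea above (the specific features and classifier here are assumptions, not necessarily the ones evaluated in the paper), gray-level co-occurrence (GLCM) statistics from scikit-image can be fed to a generic classifier:

    # Sketch of texture-based script classification: GLCM features plus a generic classifier.
    # Feature choice and classifier are illustrative only.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops   # scikit-image >= 0.19
    from sklearn.neighbors import KNeighborsClassifier

    def texture_features(text_block):
        """Compute a small GLCM feature vector for an 8-bit grayscale text-block image."""
        glcm = graycomatrix(text_block, distances=[1, 2], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        props = ["contrast", "homogeneity", "energy", "correlation"]
        return np.hstack([graycoprops(glcm, p).ravel() for p in props])

    # Hypothetical usage: blocks is a list of 8-bit images, labels the known scripts.
    # clf = KNeighborsClassifier(n_neighbors=3).fit([texture_features(b) for b in blocks], labels)
    # print(clf.predict([texture_features(new_block)]))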

  15. The secure base script and the task of caring for elderly parents: implications for attachment theory and clinical practice.

    PubMed

    Chen, Cory K; Waters, Harriet Salatas; Hartman, Marilyn; Zimmerman, Sheryl; Miklowitz, David J; Waters, Everett

    2013-01-01

    This study explores links between adults' attachment representations and the task of caring for elderly parents with dementia. Participants were 87 adults serving as primary caregivers of a parent or parent-in-law with dementia. Waters and Waters' (2006) Attachment Script Assessment was adapted to assess script-like attachment representation in the context of caring for their elderly parent. The quality of adult-elderly parent interactions was assessed using the Level of Expressed Emotions Scale (Cole & Kazarian, 1988) and self-report measures of caregivers' perception of caregiving as difficult. Caregivers' secure base script knowledge predicted lower levels of negative expressed emotion. This effect was moderated by the extent to which participants experienced caring for elderly parents as difficult. Attachment representations played a greater role in caregiving when caregiving tasks were perceived as more difficult. These results support the hypothesis that attachment representations influence the quality of care that adults provide their elderly parents. Clinical implications are discussed.

  16. galaxie--CGI scripts for sequence identification through automated phylogenetic analysis.

    PubMed

    Nilsson, R Henrik; Larsson, Karl-Henrik; Ursing, Björn M

    2004-06-12

    The prevalent use of similarity searches such as BLAST to identify sequences and species implicitly assumes that the reference database provides extensive sequence sampling. This is often not the case, limiting the reliability of the outcome as a basis for sequence identification. Phylogenetic inference outperforms similarity searches in retrieving correct phylogenies, and consequently sequence identities, and a project was initiated to design a freely available script package for sequence identification through automated Web-based phylogenetic analysis. Three CGI scripts were designed to facilitate qualified sequence identification from a Web interface. Query sequences are aligned to pre-made alignments or to alignments made by ClustalW with entries retrieved from a BLAST search. The subsequent phylogenetic analysis is based on the PHYLIP package for inferring neighbor-joining and parsimony trees. The scripts are highly configurable. A service installation and a version for local use are found at http://andromeda.botany.gu.se/galaxiewelcome.html and http://galaxie.cgb.ki.se

  17. Electrospun ultra-fine cellulose acetate fibrous mats containing tannic acid-Fe+++ complexes

    USDA-ARS?s Scientific Manuscript database

    Cellulose acetate (CA) fibrous mats with improved mechanical and antioxidant properties were produced by a simple, scalable and cost-effective electrospinning method. Fibers loaded with small amounts of TA-Fe+++ complexes showed an increase in tensile strength of approximately 117% when compared to ...

  18. Lithography-based fabrication of nanopore arrays in freestanding SiN and graphene membranes

    NASA Astrophysics Data System (ADS)

    Verschueren, Daniel V.; Yang, Wayne; Dekker, Cees

    2018-04-01

    We report a simple and scalable technique for the fabrication of nanopore arrays on freestanding SiN and graphene membranes based on electron-beam lithography and reactive ion etching. By controlling the dose of the single-shot electron-beam exposure, circular nanopores of any size down to 16 nm in diameter can be fabricated in both materials at high accuracy and precision. We demonstrate the sensing capabilities of these nanopores by translocating dsDNA through pores fabricated using this method, and find signal-to-noise characteristics on par with transmission-electron-microscope-drilled nanopores. This versatile lithography-based approach allows for the high-throughput manufacturing of nanopores and can in principle be used on any substrate, in particular membranes made out of transferable two-dimensional materials.

  19. Density-controlled, solution-based growth of ZnO nanorod arrays via layer-by-layer polymer thin films for enhanced field emission

    NASA Astrophysics Data System (ADS)

    Weintraub, Benjamin; Chang, Sehoon; Singamaneni, Srikanth; Han, Won Hee; Choi, Young Jin; Bae, Joonho; Kirkham, Melanie; Tsukruk, Vladimir V.; Deng, Yulin

    2008-10-01

    A simple, scalable, and cost-effective technique for controlling the growth density of ZnO nanorod arrays based on a layer-by-layer polyelectrolyte polymer film is demonstrated. The ZnO nanorods were synthesized using a low temperature (T = 90 °C), solution-based method. The density-control technique utilizes a polymer thin film pre-coated on the substrate to control the mass transport of the reactant to the substrate. The density-controlled arrays were investigated as potential field emission candidates. The field emission results revealed that an emitter density of 7 nanorods µm-2 and a tapered nanorod morphology generated a high field enhancement factor of 5884. This novel technique shows promise for applications in flat panel display technology.

  20. Online versus offline: The Web as a medium for response time data collection.

    PubMed

    Chetverikov, Andrey; Upravitelev, Philipp

    2016-09-01

    The Internet provides a convenient environment for data collection in psychology. Modern Web programming languages, such as JavaScript or Flash (ActionScript), facilitate complex experiments without the necessity of experimenter presence. Yet there is always a question of how much noise is added due to the differences between the setups used by participants and whether it is compensated for by increased ecological validity and larger sample sizes. This is especially a problem for experiments that measure response times (RTs), because they are more sensitive (and hence more susceptible to noise) than, for example, choices per se. We used a simple visual search task with different set sizes to compare laboratory performance with Web performance. The results suggest that although the locations (means) of RT distributions are different, other distribution parameters are not. Furthermore, the effect of experiment setting does not depend on set size, suggesting that task difficulty is not important in the choice of a data collection method. We also collected an additional online sample to investigate the effects of hardware and software diversity on the accuracy of RT data. We found that the high diversity of browsers, operating systems, and CPU performance may have a detrimental effect, though it can partly be compensated for by increased sample sizes and trial numbers. In sum, the findings show that Web-based experiments are an acceptable source of RT data, comparable to a common keyboard-based setup in the laboratory.

  1. Effectiveness of the Smoking Cessation and Reduction in Pregnancy Treatment (SCRIPT) dissemination project: a science to prenatal care practice partnership.

    PubMed

    Windsor, Richard; Clark, Jeannie; Cleary, Sean; Davis, Amanda; Thorn, Stephanie; Abroms, Lorien; Wedeles, John

    2014-01-01

    This study evaluated the effectiveness of the Smoking Cessation and Reduction in Pregnancy Treatment (SCRIPT) Program selected by the West Virginia-Right From The Start Project for state-wide dissemination. A process evaluation documented the fidelity of SCRIPT delivery by Designated Care Coordinators (DCC), licensed nurses and social workers who provide home-based case management to Medicaid-eligible clients in all 55 counties. We implemented a quasi-experimental, non-randomized, matched Comparison (C) Group design. The SCRIPT Experimental E Group (N = 259) were all clients in 2009-2010 that wanted to quit, provided a screening carbon monoxide (CO), and received a SCRIPT home visit. The (C) Group was derived from all clients in 2006-2007 who had the same CO assessments as E Group clients and reported receiving cessation counseling. We stratified the baseline CO of E Group clients into 10 strata, and randomly selected the same number of (C) Group clients (N = 259) from each matched strata to evaluate the effectiveness of the SCRIPT Program. There were no significant baseline differences in the E and (C) Group. A Process Evaluation documented a significant increase in the fidelity of DCC delivery of SCRIPT Program procedures: from 63 % in 2006 to 74 % in 2010. Significant increases were documented in the E Group cessation rate (+9.3 %) and significant reduction rate (+4.5 %), a ≥50 % reduction from a baseline CO. Perinatal health case management staff can deliver the SCRIPT Program, and Medicaid-supported clients can change smoking behavior, even very late in pregnancy. When multiple biases were analyzed, we concluded the SCRIPT Dissemination Project was the most plausible reason for the significant changes in behavior.

  2. Automation of radiation treatment planning : Evaluation of head and neck cancer patient plans created by the Pinnacle3 scripting and Auto-Planning functions.

    PubMed

    Speer, Stefan; Klein, Andreas; Kober, Lukas; Weiss, Alexander; Yohannes, Indra; Bert, Christoph

    2017-08-01

    Intensity-modulated radiotherapy (IMRT) techniques are now standard practice. IMRT or volumetric-modulated arc therapy (VMAT) allow treatment of the tumor while simultaneously sparing organs at risk. Nevertheless, treatment plan quality still depends on the physicist's individual skills, experiences, and personal preferences. It would therefore be advantageous to automate the planning process. This possibility is offered by the Pinnacle 3 treatment planning system (Philips Healthcare, Hamburg, Germany) via its scripting language or Auto-Planning (AP) module. AP module results were compared to in-house scripts and manually optimized treatment plans for standard head and neck cancer plans. Multiple treatment parameters were scored to judge plan quality (100 points = optimum plan). Patients were initially planned manually by different physicists and re-planned using scripts or AP. Script-based head and neck plans achieved a mean of 67.0 points and were, on average, superior to manually created (59.1 points) and AP plans (62.3 points). Moreover, they are characterized by reproducibility and lower standard deviation of treatment parameters. Even less experienced staff are able to create at least a good starting point for further optimization in a short time. However, for particular plans, experienced planners perform even better than scripts or AP. Experienced-user input is needed when setting up scripts or AP templates for the first time. Moreover, some minor drawbacks exist, such as the increase of monitor units (+35.5% for scripted plans). On average, automatically created plans are superior to manually created treatment plans. For particular plans, experienced physicists were able to perform better than scripts or AP; thus, the benefit is greatest when time is short or staff inexperienced.

  3. The Emergence of Agent-Based Technology as an Architectural Component of Serious Games

    NASA Technical Reports Server (NTRS)

    Phillips, Mark; Scolaro, Jackie; Scolaro, Daniel

    2010-01-01

    The evolution of games as an alternative to traditional simulations in the military context has been gathering momentum over the past five years, even though the exploration of their use in the serious sense has been ongoing since the mid-nineties. Much of the focus has been on the aesthetics of the visuals provided by the core game engine as well as the artistry provided by talented development teams to produce not only breathtaking artwork, but highly immersive game play. Consideration of game technology is now so much a part of the modeling and simulation landscape that it is becoming difficult to distinguish traditional simulation solutions from game-based approaches. But games have yet to provide the much needed interactive free play that has been the domain of semi-autonomous forces (SAF). The component-based middleware architecture that game engines provide promises a great deal in terms of options for the integration of agent solutions to support the development of non-player characters that engage the human player without the deterministic nature of scripted behaviors. However, there are a number of hard-learned lessons on the modeling and simulation side of the equation that game developers have yet to learn, such as: correlation of heterogeneous systems, scalability of both terrain and numbers of non-player entities, and the bi-directional nature of simulation to game interaction provided by Distributed Interactive Simulation (DIS) and High Level Architecture (HLA).

  4. Scalability of a Low-Cost Multi-Teraflop Linux Cluster for High-End Classical Atomistic and Quantum Mechanical Simulations

    NASA Technical Reports Server (NTRS)

    Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash

    2003-01-01

    Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: Classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on the density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and spacefilling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.

  5. Early Market Site Identification Data

    DOE Data Explorer

    Levi Kilcher

    2016-04-01

    This data was compiled for the 'Early Market Opportunity Hot Spot Identification' project. The data and scripts included were used in the 'MHK Energy Site Identification and Ranking Methodology' Reports (Part I: Wave, NREL Report #66038; Part II: Tidal, NREL Report #66079). The Python scripts will generate a set of results--based on the Excel data files--some of which were described in the reports. The scripts depend on the 'score_site' package, and the score site package depends on a number of standard Python libraries (see the score_site install instructions).

  6. HMM-based lexicon-driven and lexicon-free word recognition for online handwritten Indic scripts.

    PubMed

    Bharath, A; Madhvanath, Sriganesh

    2012-04-01

    Research for recognizing online handwritten words in Indic scripts is at its early stages when compared to Latin and Oriental scripts. In this paper, we address this problem specifically for two major Indic scripts--Devanagari and Tamil. In contrast to previous approaches, the techniques we propose are largely data driven and script independent. We propose two different techniques for word recognition based on Hidden Markov Models (HMM): lexicon driven and lexicon free. The lexicon-driven technique models each word in the lexicon as a sequence of symbol HMMs according to a standard symbol writing order derived from the phonetic representation. The lexicon-free technique uses a novel Bag-of-Symbols representation of the handwritten word that is independent of symbol order and allows rapid pruning of the lexicon. On handwritten Devanagari word samples featuring both standard and nonstandard symbol writing orders, a combination of lexicon-driven and lexicon-free recognizers significantly outperforms either of them used in isolation. In contrast, most Tamil word samples feature the standard symbol order, and the lexicon-driven recognizer outperforms the lexicon free one as well as their combination. The best recognition accuracies obtained for 20,000 word lexicons are 87.13 percent for Devanagari when the two recognizers are combined, and 91.8 percent for Tamil using the lexicon-driven technique.
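
    The order-free Bag-of-Symbols idea used for rapid lexicon pruning can be illustrated with a toy sketch; the symbol inventory, distance measure, and threshold below are invented for the example and are not the paper's implementation.

    # Toy sketch of order-free Bag-of-Symbols lexicon pruning (illustrative only).
    from collections import Counter

    def bag_distance(bag_a, bag_b):
        """Size of the symmetric difference of two symbol multisets; 0 means identical bags."""
        return sum((bag_a - bag_b).values()) + sum((bag_b - bag_a).values())

    def prune_lexicon(recognized_symbols, lexicon, max_distance=0):
        """Keep only lexicon words whose symbol bag is close to the recognized bag,
        regardless of the order in which the symbols were written."""
        query = Counter(recognized_symbols)
        return [word for word, symbols in lexicon.items()
                if bag_distance(query, Counter(symbols)) <= max_distance]

    # Hypothetical lexicon: word -> list of handwriting symbols (order-independent here).
    lexicon = {"kamal": ["ka", "ma", "la"], "kalam": ["ka", "la", "ma"], "mala": ["ma", "la"]}
    print(prune_lexicon(["la", "ka", "ma"], lexicon))   # both anagram words survive the pruning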

  7. Biotool2Web: creating simple Web interfaces for bioinformatics applications.

    PubMed

    Shahid, Mohammad; Alam, Intikhab; Fuellen, Georg

    2006-01-01

    Currently there are many bioinformatics applications being developed, but there is no easy way to publish them on the World Wide Web. We have developed a Perl script, called Biotool2Web, which makes the task of creating web interfaces for simple ('home-made') bioinformatics applications quick and easy. Biotool2Web uses an XML document containing the parameters to run the tool on the Web, and generates the corresponding HTML and common gateway interface (CGI) files ready to be published on a web server. This tool is available for download at http://www.uni-muenster.de/Bioinformatics/services/biotool2web/. Contact: Georg Fuellen (fuellen@alum.mit.edu).
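
    Biotool2Web itself is a Perl script; purely as an illustration of the same idea, the Python sketch below reads a small XML description of a tool's parameters and emits an HTML form for it. The XML element and attribute names are invented for this example.

    # Minimal illustration of the Biotool2Web idea in Python (the real tool is a Perl script):
    # read an XML description of a tool's parameters and emit an HTML form for it.
    # The XML element/attribute names below are invented for this sketch.
    import xml.etree.ElementTree as ET

    TOOL_XML = """
    <tool name="gc_content">
      <param name="sequence" label="DNA sequence" type="textarea"/>
      <param name="window"   label="Window size"  type="text" default="100"/>
    </tool>
    """

    def xml_to_form(xml_text):
        tool = ET.fromstring(xml_text)
        rows = []
        for p in tool.findall("param"):
            if p.get("type") == "textarea":
                field = f'<textarea name="{p.get("name")}"></textarea>'
            else:
                field = f'<input name="{p.get("name")}" value="{p.get("default", "")}">'
            rows.append(f'<label>{p.get("label")}: {field}</label><br>')
        return (f'<form action="/cgi-bin/{tool.get("name")}.cgi" method="post">\n'
                + "\n".join(rows) + '\n<input type="submit" value="Run"></form>')

    print(xml_to_form(TOOL_XML))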

  8. The MOLGENIS toolkit: rapid prototyping of biosoftware at the push of a button.

    PubMed

    Swertz, Morris A; Dijkstra, Martijn; Adamusiak, Tomasz; van der Velde, Joeri K; Kanterakis, Alexandros; Roos, Erik T; Lops, Joris; Thorisson, Gudmundur A; Arends, Danny; Byelas, George; Muilu, Juha; Brookes, Anthony J; de Brock, Engbert O; Jansen, Ritsert C; Parkinson, Helen

    2010-12-21

    There is a huge demand on bioinformaticians to provide their biologists with user friendly and scalable software infrastructures to capture, exchange, and exploit the unprecedented amounts of new *omics data. We here present MOLGENIS, a generic, open source, software toolkit to quickly produce the bespoke MOLecular GENetics Information Systems needed. The MOLGENIS toolkit provides bioinformaticians with a simple language to model biological data structures and user interfaces. At the push of a button, MOLGENIS' generator suite automatically translates these models into a feature-rich, ready-to-use web application including database, user interfaces, exchange formats, and scriptable interfaces. Each generator is a template of SQL, JAVA, R, or HTML code that would require much effort to write by hand. This 'model-driven' method ensures reuse of best practices and improves quality because the modeling language and generators are shared between all MOLGENIS applications, so that errors are found quickly and improvements are shared easily by a re-generation. A plug-in mechanism ensures that both the generator suite and generated product can be customized just as much as hand-written software. In recent years we have successfully evaluated the MOLGENIS toolkit for the rapid prototyping of many types of biomedical applications, including next-generation sequencing, GWAS, QTL, proteomics and biobanking. Writing 500 lines of model XML typically replaces 15,000 lines of hand-written programming code, which allows for quick adaptation if the information system is not yet to the biologist's satisfaction. Each application generated with MOLGENIS comes with an optimized database back-end, user interfaces for biologists to manage and exploit their data, programming interfaces for bioinformaticians to script analysis tools in R, Java, SOAP, REST/JSON and RDF, a tab-delimited file format to ease upload and exchange of data, and detailed technical documentation. Existing databases can be quickly enhanced with MOLGENIS generated interfaces using the 'ExtractModel' procedure. The MOLGENIS toolkit provides bioinformaticians with a simple model to quickly generate flexible web platforms for all possible genomic, molecular and phenotypic experiments with a richness of interfaces not provided by other tools. All the software and manuals are available free as LGPLv3 open source at http://www.molgenis.org.

  9. A simple proof of a lemma of Coleman

    NASA Astrophysics Data System (ADS)

    Saikia, A.

    2001-03-01

    Let p be an odd prime. The results in this paper concern the units of the infinite extension of Q_p generated by all p-power roots of unity. Let Φ_n = Q_p(μ_{p^{n+1}}), where μ_{p^{n+1}} denotes the p^{n+1}-th roots of 1. Let 𝔭_n be the maximal ideal of the ring of integers of Φ_n and let U_n be the units congruent to 1 modulo 𝔭_n. Let ζ_n be a fixed primitive p^{n+1}-th root of unity such that ζ_n^p = ζ_{n−1} for all n ≥ 1. Put π_n = ζ_n − 1; thus π_n is a local parameter for Φ_n. Let [formula]. Kummer already exploited the obvious fact that every u_0 ∈ U_0 can be written in the form u_0 = f_0(π_0), where f_0(T) is some power series in Z_p[[T]]

  10. Advances in Software Tools for Pre-processing and Post-processing of Overset Grid Computations

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    2004-01-01

    Recent developments in three pieces of software for performing pre-processing and post-processing work on numerical computations using overset grids are presented. The first is the OVERGRID graphical interface which provides a unified environment for the visualization, manipulation, generation and diagnostics of geometry and grids. Modules are also available for automatic boundary conditions detection, flow solver input preparation, multiple component dynamics input preparation and dynamics animation, simple solution viewing for moving components, and debris trajectory analysis input preparation. The second is a grid generation script library that enables rapid creation of grid generation scripts. A sample of recent applications will be described. The third is the OVERPLOT graphical interface for displaying and analyzing history files generated by the flow solver. Data displayed include residuals, component forces and moments, number of supersonic and reverse flow points, and various dynamics parameters.

  11. AUX: a scripting language for auditory signal processing and software packages for psychoacoustic experiments and education.

    PubMed

    Kwon, Bomjun J

    2012-06-01

    This article introduces AUX (AUditory syntaX), a scripting syntax specifically designed to describe auditory signals and processing, to the members of the behavioral research community. The syntax is based on descriptive function names and intuitive operators suitable for researchers and students without substantial training in programming, who wish to generate and examine sound signals using a written script. In this article, the essence of AUX is discussed and practical examples of AUX scripts specifying various signals are illustrated. Additionally, two accompanying Windows-based programs and development libraries are described. AUX Viewer is a program that generates, visualizes, and plays sounds specified in AUX. AUX Viewer can also be used for class demonstrations or presentations. Another program, Psycon, allows a wide range of sound signals to be used as stimuli in common psychophysical testing paradigms, such as the adaptive procedure, the method of constant stimuli, and the method of adjustment. AUX Library is also provided, so that researchers can develop their own programs utilizing AUX. The philosophical basis of AUX is to separate signal generation from the user interface needed for experiments. AUX scripts are portable and reusable; they can be shared by other researchers, regardless of differences in actual AUX-based programs, and reused for future experiments. In short, the use of AUX can be potentially beneficial to all members of the research community-both those with programming backgrounds and those without.

  12. Collagen based magnetic nanocomposites for oil removal applications

    PubMed Central

    Thanikaivelan, Palanisamy; Narayanan, Narayanan T.; Pradhan, Bhabendra K.; Ajayan, Pulickel M.

    2012-01-01

    A stable magnetic nanocomposite of collagen and superparamagnetic iron oxide nanoparticles (SPIONs) is prepared by a simple process utilizing protein wastes from leather industry. Molecular interaction between helical collagen fibers and spherical SPIONs is proven through calorimetric, microscopic and spectroscopic techniques. This nanocomposite exhibited selective oil absorption and magnetic tracking ability, allowing it to be used in oil removal applications. The environmental sustainability of the oil adsorbed nanobiocomposite is also demonstrated here through its conversion into a bi-functional graphitic nanocarbon material via heat treatment. The approach highlights new avenues for converting bio-wastes into useful nanomaterials in scalable and inexpensive ways. PMID:22355744

  13. The IATH ELAN Text-Sync Tool: A Simple System for Mobilizing ELAN Transcripts On- or Off-Line

    ERIC Educational Resources Information Center

    Dobrin, Lise M.; Ross, Douglas

    2017-01-01

    In this article we present the IATH ELAN Text-Sync Tool (ETST; see http://community.village.virginia.edu/etst), a series of scripts and workflow for playing ELAN files and associated audiovisual media in a web browser either on- or off-line. ELAN has become an indispensable part of documentary linguists' toolkit, but it is less than ideal for…

  14. ATLAS@AWS

    NASA Astrophysics Data System (ADS)

    Gehrcke, Jan-Philip; Kluth, Stefan; Stonjek, Stefan

    2010-04-01

    We show how the ATLAS offline software is ported to the Amazon Elastic Compute Cloud (EC2). We prepare an Amazon Machine Image (AMI) on the basis of the standard ATLAS platform, Scientific Linux 4 (SL4). An instance of the SL4 AMI is then started on EC2, and we install and validate a recent release of the ATLAS offline software distribution kit. The installed software is archived as an image on the Amazon Simple Storage Service (S3) and can be quickly retrieved and connected to new SL4 AMI instances using the Amazon Elastic Block Store (EBS). ATLAS jobs can then be configured against the release kit using the ATLAS configuration management tool (cmt) in the standard way. The output of jobs is exported to S3 before the SL4 AMI is terminated. Job status information is transferred to the Amazon SimpleDB service. The whole process of launching instances of our AMI, starting, monitoring and stopping jobs, and retrieving job output from S3 is controlled from a client machine using Python scripts implementing the Amazon EC2/S3 API via the boto library, working together with small scripts embedded in the SL4 AMI. We report our experience with setting up and operating the system using standard ATLAS job transforms.
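
    The client-side control described above was built on the original boto library; below is a rough sketch of the same launch-and-stage-out pattern using today's boto3, with the AMI ID, bucket name, and file names as placeholders.

    # Sketch of the launch-run-stage-out pattern described above, using today's boto3
    # (the paper used the older boto library). All identifiers below are placeholders.
    import boto3

    AMI_ID = "ami-0123456789abcdef0"        # hypothetical image with the software kit installed
    OUTPUT_BUCKET = "my-job-output-bucket"  # hypothetical S3 bucket for job results

    def launch_worker(instance_type="m5.large"):
        """Start one EC2 instance from the prepared AMI and return its instance object."""
        ec2 = boto3.resource("ec2")
        (instance,) = ec2.create_instances(
            ImageId=AMI_ID, InstanceType=instance_type, MinCount=1, MaxCount=1)
        instance.wait_until_running()
        return instance

    def stage_out(local_path, key):
        """Copy a local job output file to S3 before the instance is terminated."""
        boto3.client("s3").upload_file(local_path, OUTPUT_BUCKET, key)

    if __name__ == "__main__":
        worker = launch_worker()
        print("worker running:", worker.id)
        # ... submit and monitor the job, then on the worker side:
        # stage_out("job_output.root", "run123/job_output.root")
        worker.terminate()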

  15. A Review of the Use of Script-Based Tracking in CALL Research for Data Sharing: Applications Providing Meaning Aids

    ERIC Educational Resources Information Center

    Hwu, Fenfang

    2013-01-01

    Using script-based tracking to gain insights into the way students learn or process language information can be traced as far back as to the 1980s. Nevertheless, researchers continue to face challenges in collecting and studying this type of data. The objective of this study is to propose data sharing through data repositories as a way to (a) ease…

  16. Disrupting Racialized Institutional Scripts: Toward Parent-Teacher Transformative Agency for Educational Justice

    ERIC Educational Resources Information Center

    Ishimaru, Ann M.; Takahashi, Sola

    2017-01-01

    Partnerships between teachers and parents from nondominant communities hold promise for reducing race- and class-based educational disparities, but the ways families and teachers work together often fall short of delivering systemic change. Racialized institutional scripts provide "taken-for-granted" norms, expectations, and assumptions…

  17. Supporting Component-Based Courseware Development Using Virtual Apparatus Framework Script.

    ERIC Educational Resources Information Center

    Ip, Albert; Fritze, Paul

    This paper reports on the latest development of the Virtual Apparatus (VA) framework, a contribution to efforts at the University of Melbourne (Australia) to mainstream content and pedagogical functions of curricula. The integration of the educational content and pedagogical functions of learning components using an XML compatible script,…

  18. Script Concordance Testing in Continuing Professional Development: Local or International Reference Panels?

    ERIC Educational Resources Information Center

    Pleguezuelos, E. M.; Hornos, E.; Dory, V.; Gagnon, R.; Malagrino, P.; Brailovsky, C. A.; Charlin, B.

    2013-01-01

    Context: The PRACTICUM Institute has developed large-scale international programs of on-line continuing professional development (CPD) based on self-testing and feedback using the Practicum Script Concordance Test© (PSCT). Aims: To examine the psychometric consequences of pooling the responses of panelists from different countries (composite…

  19. Promoting Critical, Elaborative Discussions through a Collaboration Script and Argument Diagrams

    ERIC Educational Resources Information Center

    Scheuer, Oliver; McLaren, Bruce M.; Weinberger, Armin; Niebuhr, Sabine

    2014-01-01

    During the past two decades a variety of approaches to support argumentation learning in computer-based learning environments have been investigated. We present an approach that combines argumentation diagramming and collaboration scripts, two methods successfully used in the past individually. The rationale for combining the methods is to…

  20. An Accessible User Interface for Geoscience and Programming

    NASA Astrophysics Data System (ADS)

    Sevre, E. O.; Lee, S.

    2012-12-01

    The goal of this research is to develop an interface that will simplify user interaction with software for scientists. The motivating factor of the research is to develop tools that help scientists with limited motor skills generate and use software efficiently. Reliance on computers and programming is increasing in the world of geology, and it is increasingly important for geologists and geophysicists to have the computational resources to use advanced software and edit programs for their research. I have developed a prototype of a program to help geophysicists write programs using a simple interface that requires only single mouse clicks to input code. My goal is to minimize the amount of typing necessary to create simple programs and scripts, in order to increase accessibility for people with disabilities that limit fine motor skills. This interface can be adapted for various programming and scripting languages. Using this interface will simplify development of code for C/C++, Java, and GMT, and it can be expanded to support any other text-based programming language. The interface is designed around the concept of maximizing the amount of code that can be written using a minimum of clicks and typing. The screen is split into two sections: a list of click-commands is on the left hand side, and a text area is on the right hand side. When the user clicks on a command on the left hand side, the applicable code is automatically inserted at the insertion point in the text area. Currently, in the C/C++ interface, there are commands for common code segments that are often used, such as for loops, comments, print statements, and structured code creation. The primary goal is to provide an interface that will work across many devices for developing code. A simple prototype has been developed for the iPad. Due to the limited number of devices that an iOS application can be used with, the code has been re-written in Java to run on a wider range of devices. Currently, the software works in prototype mode, and our goal is to develop it further into software that can benefit a wide range of people working in the geosciences, making code development practical and accessible for a wider audience of scientists. An interface like this also reduces the potential for errors by reusing known working code.
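
    The click-to-insert layout described above (command list on the left, text area on the right, snippets inserted at the cursor) can be sketched in a few lines of Tkinter; the snippet catalogue is illustrative only, and the sketch is not the author's prototype.

    # Minimal sketch of a click-to-insert interface: buttons on the left insert code
    # snippets at the cursor in the text area on the right. Snippets are illustrative.
    import tkinter as tk

    SNIPPETS = {
        "for loop": "for (int i = 0; i < n; i++) {\n    \n}\n",
        "comment":  "/*  */\n",
        "print":    'printf("%d\\n", value);\n',
    }

    root = tk.Tk()
    root.title("Click-to-insert prototype")

    editor = tk.Text(root, width=60, height=20)
    editor.pack(side=tk.RIGHT, fill=tk.BOTH, expand=True)

    buttons = tk.Frame(root)
    buttons.pack(side=tk.LEFT, fill=tk.Y)

    def make_inserter(snippet):
        # Insert the snippet at the current cursor position in the text area.
        return lambda: editor.insert(tk.INSERT, snippet)

    for name, snippet in SNIPPETS.items():
        tk.Button(buttons, text=name, command=make_inserter(snippet)).pack(fill=tk.X)

    root.mainloop()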

  1. Molecular structure input on the web.

    PubMed

    Ertl, Peter

    2010-02-02

    A molecule editor, that is, a program for the input and editing of molecules, is an indispensable part of every cheminformatics or molecular processing system. This review focuses on a special type of molecule editor, namely those used for molecule structure input on the web. Scientific computing is now moving more and more in the direction of web services and cloud computing, with servers scattered all around the Internet. Thus a web browser has become the universal scientific user interface, and a tool to edit molecules directly within the web browser is essential. The review covers the history of web-based structure input, starting with simple text entry boxes and early molecule editors based on clickable maps, before moving to the current situation dominated by Java applets. One typical example, the popular JME Molecule Editor, is described in more detail. Modern Ajax server-side molecule editors are also presented. Finally, the possible future direction of web-based molecule editing, based on technologies like JavaScript and Flash, is discussed.

  2. SoS Notebook: An Interactive Multi-Language Data Analysis Environment.

    PubMed

    Peng, Bo; Wang, Gao; Ma, Jun; Leong, Man Chong; Wakefield, Chris; Melott, James; Chiu, Yulun; Du, Di; Weinstein, John N

    2018-05-22

    Complex bioinformatic data analysis workflows involving multiple scripts in different languages can be difficult to consolidate, share, and reproduce. An environment that streamlines the entire process of data collection, analysis, visualization and reporting for such multi-language analyses is currently lacking. We developed Script of Scripts (SoS) Notebook, a web-based notebook environment that allows the use of multiple scripting languages in a single notebook, with data flowing freely within and across languages. SoS Notebook enables researchers to perform sophisticated bioinformatic analyses using the most suitable tools for different parts of the workflow, without the limitations of a particular language or the complications of cross-language communication. SoS Notebook is hosted at http://vatlab.github.io/SoS/ and is distributed under a BSD license. Contact: bpeng@mdanderson.org.

  3. The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience.

    PubMed

    Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R; Bock, Davi D; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R Clay; Smith, Stephen J; Szalay, Alexander S; Vogelstein, Joshua T; Vogelstein, R Jacob

    2013-01-01

    We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes - neural connectivity maps of the brain - using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems - reads to parallel disk arrays and writes to solid-state storage - to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization.

  4. The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience

    PubMed Central

    Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R.; Bock, Davi D.; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C.; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R. Clay; Smith, Stephen J.; Szalay, Alexander S.; Vogelstein, Joshua T.; Vogelstein, R. Jacob

    2013-01-01

    We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes (neural connectivity maps of the brain) using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems, sending reads to parallel disk arrays and writes to solid-state storage, to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization. PMID:24401992

  5. Solid State Inflation Balloon Active Deorbiter: Scalable Low-Cost Deorbit System for Small Satellites

    NASA Technical Reports Server (NTRS)

    Huang, Adam

    2016-01-01

    The goal of the Solid State Inflation Balloon Active Deorbiter project is to develop and demonstrate a scalable, simple, reliable, and low-cost active deorbiting system capable of controlling the downrange point of impact for the full range of small satellites from 1 kg to 180 kg. The key enabling technology being developed is the Solid State Gas Generator (SSGG) chip, which generates pure nitrogen gas from sodium azide (NaN3) micro-crystals. Coupled with a metalized nonelastic drag balloon, the complete Solid State Inflation Balloon (SSIB) system is capable of repeated inflation/deflation cycles. The SSGG minimizes size, weight, electrical power, and cost when compared to the current state of the art.

  6. Displaying R spatial statistics on Google dynamic maps with web applications created by Rwui

    PubMed Central

    2012-01-01

    Background: The R project includes a large variety of packages designed for spatial statistics. Google dynamic maps provide web-based access to global maps and satellite imagery. We describe a method for displaying the spatial output from an R script directly onto a Google dynamic map. Methods: This is achieved by creating a Java based web application which runs the R script and then displays the results on the dynamic map. In order to make this method easy to implement by those unfamiliar with programming Java based web applications, we have added the method to the options available in the R Web User Interface (Rwui) application. Rwui is an established web application for creating web applications for running R scripts. A feature of Rwui is that all the code for the web application being created is generated automatically, so that someone with no knowledge of web programming can make a fully functional web application for running an R script in a matter of minutes. Results: Rwui can now be used to create web applications that will display the results from an R script on a Google dynamic map. Results may be displayed as discrete markers and/or as continuous overlays. In addition, users of the web application may select regions of interest on the dynamic map with mouse clicks, and the coordinates of the region of interest will automatically be made available for use by the R script. Conclusions: This method of displaying R output on dynamic maps is designed to be of use in a number of areas. Firstly, it allows statisticians working in R and developing methods in spatial statistics to easily visualise the results of applying their methods to real world data. Secondly, it allows researchers who are using R to study health geographics data to display their results directly onto dynamic maps. Thirdly, by creating a web application for running an R script, a statistician can enable users entirely unfamiliar with R to run R coded statistical analyses of health geographics data. Fourthly, we envisage an educational role for such applications. PMID:22998945

  7. Displaying R spatial statistics on Google dynamic maps with web applications created by Rwui.

    PubMed

    Newton, Richard; Deonarine, Andrew; Wernisch, Lorenz

    2012-09-24

    The R project includes a large variety of packages designed for spatial statistics. Google dynamic maps provide web-based access to global maps and satellite imagery. We describe a method for displaying the spatial output from an R script directly onto a Google dynamic map. This is achieved by creating a Java based web application which runs the R script and then displays the results on the dynamic map. In order to make this method easy to implement by those unfamiliar with programming Java based web applications, we have added the method to the options available in the R Web User Interface (Rwui) application. Rwui is an established web application for creating web applications for running R scripts. A feature of Rwui is that all the code for the web application being created is generated automatically, so that someone with no knowledge of web programming can make a fully functional web application for running an R script in a matter of minutes. Rwui can now be used to create web applications that will display the results from an R script on a Google dynamic map. Results may be displayed as discrete markers and/or as continuous overlays. In addition, users of the web application may select regions of interest on the dynamic map with mouse clicks, and the coordinates of the region of interest will automatically be made available for use by the R script. This method of displaying R output on dynamic maps is designed to be of use in a number of areas. Firstly, it allows statisticians working in R and developing methods in spatial statistics to easily visualise the results of applying their methods to real world data. Secondly, it allows researchers who are using R to study health geographics data to display their results directly onto dynamic maps. Thirdly, by creating a web application for running an R script, a statistician can enable users entirely unfamiliar with R to run R coded statistical analyses of health geographics data. Fourthly, we envisage an educational role for such applications.

  8. Sexual scripts and sexual risk behaviors among Black heterosexual men: development of the Sexual Scripts Scale.

    PubMed

    Bowleg, Lisa; Burkholder, Gary J; Noar, Seth M; Teti, Michelle; Malebranche, David J; Tschann, Jeanne M

    2015-04-01

    Sexual scripts are widely shared gender and culture-specific guides for sexual behavior with important implications for HIV prevention. Although several qualitative studies document how sexual scripts may influence sexual risk behaviors, quantitative investigations of sexual scripts in the context of sexual risk are rare. This mixed methods study involved the qualitative development and quantitative testing of the Sexual Scripts Scale (SSS). Study 1 included qualitative semi-structured interviews with 30 Black heterosexual men about sexual experiences with main and casual sex partners to develop the SSS. Study 2 included a quantitative test of the SSS with 526 predominantly low-income Black heterosexual men. A factor analysis of the SSS resulted in a 34-item, seven-factor solution that explained 68% of the variance. The subscales and coefficient alphas were: Romantic Intimacy Scripts (α = .86), Condom Scripts (α = .82), Alcohol Scripts (α = .83), Sexual Initiation Scripts (α = .79), Media Sexual Socialization Scripts (α = .84), Marijuana Scripts (α = .85), and Sexual Experimentation Scripts (α = .84). Among men who reported a main partner (n = 401), higher Alcohol Scripts, Media Sexual Socialization Scripts, and Marijuana Scripts scores, and lower Condom Scripts scores were related to more sexual risk behavior. Among men who reported at least one casual partner (n = 238), higher Romantic Intimacy Scripts, Sexual Initiation Scripts, and Media Sexual Socialization Scripts, and lower Condom Scripts scores were related to higher sexual risk. The SSS may have considerable utility for future research on Black heterosexual men's HIV risk.

  9. An MPI-based MoSST core dynamics model

    NASA Astrophysics Data System (ADS)

    Jiang, Weiyuan; Kuang, Weijia

    2008-09-01

    Distributed systems are among the main cost-effective and expandable platforms for high-end scientific computing. Therefore scalable numerical models are important for effective use of such systems. In this paper, we present an MPI-based numerical core dynamics model for simulation of geodynamo and planetary dynamos, and for simulation of core-mantle interactions. The model is developed based on MPI libraries. Two algorithms are used for node-node communication: a "master-slave" architecture and a "divide-and-conquer" architecture. The former is easy to implement but not scalable in communication. The latter is scalable in both computation and communication. The model scalability is tested on Linux PC clusters with up to 128 nodes. This model is also benchmarked with a published numerical dynamo model solution.
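
    As a rough, language-swapped illustration of the "master-slave" communication layout contrasted above (this is not the MoSST code; the task content, tags and counts are hypothetical, and mpi4py is assumed to be installed), a minimal master-worker pattern might look as follows.

        # Minimal master-worker sketch with mpi4py (illustrative only).
        # Run with e.g.: mpirun -n 4 python master_worker.py
        # Assumes at least two ranks and more tasks than workers.
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        size = comm.Get_size()

        TASK_TAG, STOP_TAG = 1, 2

        if rank == 0:
            tasks = list(range(20))          # hypothetical work items
            results = []
            status = MPI.Status()
            # Hand out one task per worker, then feed whichever worker finishes first.
            for worker in range(1, size):
                comm.send(tasks.pop(), dest=worker, tag=TASK_TAG)
            while tasks:
                result = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
                results.append(result)
                comm.send(tasks.pop(), dest=status.Get_source(), tag=TASK_TAG)
            # Collect the outstanding results and tell each worker to stop.
            for worker in range(1, size):
                result = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
                results.append(result)
                comm.send(None, dest=status.Get_source(), tag=STOP_TAG)
            print("master collected", len(results), "results")
        else:
            status = MPI.Status()
            while True:
                task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
                if status.Get_tag() == STOP_TAG:
                    break
                comm.send(task * task, dest=0, tag=TASK_TAG)   # stand-in computation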

  10. The NOAO NEWFIRM Data Handling System

    NASA Astrophysics Data System (ADS)

    Zárate, N.; Fitzpatrick, M.

    2008-08-01

    The NOAO Extremely Wide-Field IR Mosaic (NEWFIRM) is a new 1-2.4 micron IR camera that is now being commissioned for the 4m Mayall telescope at Kitt Peak. The focal plane consists of a 2x2 mosaic of 2048x2048 arrays offering a field of view of 27.6' on a side. The use of dual MONSOON array controllers permits very fast readout, and a scripting interface allows for highly efficient observing modes. We describe the Data Handling System (DHS) for the NEWFIRM camera, which is designed to meet the performance requirements of the instrument as well as the observing environment in which it operates. It is responsible for receiving the data stream from the detector and instrument software, rectifying the image geometry, presenting a real-time display of the image to the user, and assembling a final science-grade image with complete headers, as well as triggering automated pipeline and archival functions. The DHS uses an event-based messaging system to control multiple processes on a distributed network of machines. The asynchronous nature of this processing means the DHS operates independently from the camera readout, and the design of the system is inherently scalable to larger focal planes that use a greater number of array controllers. Current status and future plans for the DHS are also discussed.

  11. Sexual Scripts and Sexual Risk Behaviors among Black Heterosexual Men: Development of the Sexual Scripts Scale

    PubMed Central

    Bowleg, Lisa; Burkholder, Gary J.; Noar, Seth M.; Teti, Michelle; Malebranche, David J.; Tschann, Jeanne M.

    2014-01-01

    Sexual scripts are widely shared gender and culture-specific guides for sexual behavior with important implications for HIV prevention. Although several qualitative studies document how sexual scripts may influence sexual risk behaviors, quantitative investigations of sexual scripts in the context of sexual risk are rare. This mixed methods study involved the qualitative development and quantitative testing of the Sexual Scripts Scale (SSS). Study 1 included qualitative semi-structured interviews with 30 Black heterosexual men about sexual experiences with main and casual sex partners to develop the SSS. Study 2 included a quantitative test of the SSS with 526 predominantly low-income Black heterosexual men. A factor analysis of the SSS resulted in a 34-item, seven-factor solution that explained 68% of the variance. The subscales and coefficient alphas were: Romantic Intimacy Scripts (α = .86), Condom Scripts (α = .82), Alcohol Scripts (α = .83), Sexual Initiation Scripts (α = .79), Media Sexual Socialization Scripts (α = .84), Marijuana Scripts (α = .85), and Sexual Experimentation Scripts (α = .84). Among men who reported a main partner (n = 401), higher Alcohol Scripts, Media Sexual Socialization Scripts, and Marijuana Scripts scores, and lower Condom Scripts scores were related to more sexual risk behavior. Among men who reported at least one casual partner (n = 238), higher Romantic Intimacy Scripts, Sexual Initiation Scripts, and Media Sexual Socialization Scripts, and lower Condom Scripts scores were related to higher sexual risk. The SSS may have considerable utility for future research on Black heterosexual men’s HIV risk. PMID:24311105

  12. CoreFlow: a computational platform for integration, analysis and modeling of complex biological data.

    PubMed

    Pasculescu, Adrian; Schoof, Erwin M; Creixell, Pau; Zheng, Yong; Olhovsky, Marina; Tian, Ruijun; So, Jonathan; Vanderlaan, Rachel D; Pawson, Tony; Linding, Rune; Colwill, Karen

    2014-04-04

    A major challenge in mass spectrometry and other large-scale applications is how to handle, integrate, and model the data that is produced. Given the speed at which technology advances and the need to keep pace with biological experiments, we designed a computational platform, CoreFlow, which provides programmers with a framework to manage data in real-time. It allows users to upload data into a relational database (MySQL), and to create custom scripts in high-level languages such as R, Python, or Perl for processing, correcting and modeling this data. CoreFlow organizes these scripts into project-specific pipelines, tracks interdependencies between related tasks, and enables the generation of summary reports as well as publication-quality images. As a result, the gap between experimental and computational components of a typical large-scale biology project is reduced, decreasing the time between data generation, analysis and manuscript writing. CoreFlow is being released to the scientific community as an open-sourced software package complete with proteomics-specific examples, which include corrections for incomplete isotopic labeling of peptides (SILAC) or arginine-to-proline conversion, and modeling of multiple/selected reaction monitoring (MRM/SRM) results. CoreFlow was purposely designed as an environment for programmers to rapidly perform data analysis. These analyses are assembled into project-specific workflows that are readily shared with biologists to guide the next stages of experimentation. Its simple yet powerful interface provides a structure where scripts can be written and tested virtually simultaneously to shorten the life cycle of code development for a particular task. The scripts are exposed at every step so that a user can quickly see the relationships between the data, the assumptions that have been made, and the manipulations that have been performed. Since the scripts use commonly available programming languages, they can easily be transferred to and from other computational environments for debugging or faster processing. This focus on 'on the fly' analysis sets CoreFlow apart from other workflow applications that require wrapping of scripts into particular formats and development of specific user interfaces. Importantly, current and future releases of data analysis scripts in CoreFlow format will be of widespread benefit to the proteomics community, not only for uptake and use in individual labs, but to enable full scrutiny of all analysis steps, thus increasing experimental reproducibility and decreasing errors. This article is part of a Special Issue entitled: Can Proteomics Fill the Gap Between Genomics and Phenotypes? Copyright © 2014 Elsevier B.V. All rights reserved.
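
    As a much-simplified, generic illustration of the idea of organizing scripts into a pipeline with tracked interdependencies (this is not CoreFlow's interface; the task names and script commands below are hypothetical), a Python sketch could look like this.

        # Generic dependency-tracked script pipeline (hypothetical tasks; not CoreFlow).
        from graphlib import TopologicalSorter   # Python 3.9+
        import subprocess

        # Each task names a script command and the tasks whose output it depends on.
        PIPELINE = {
            "load_raw":      {"cmd": ["Rscript", "load_raw.R"],      "needs": []},
            "correct_silac": {"cmd": ["python", "correct_silac.py"], "needs": ["load_raw"]},
            "model_mrm":     {"cmd": ["Rscript", "model_mrm.R"],     "needs": ["correct_silac"]},
            "report":        {"cmd": ["python", "report.py"],        "needs": ["model_mrm"]},
        }

        def run_pipeline(pipeline):
            order = TopologicalSorter({k: v["needs"] for k, v in pipeline.items()})
            for task in order.static_order():    # dependencies always run first
                print("running", task)
                subprocess.run(pipeline[task]["cmd"], check=True)

        if __name__ == "__main__":
            run_pipeline(PIPELINE)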

  13. A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system

    NASA Astrophysics Data System (ADS)

    Toor, S.; Osmani, L.; Eerola, P.; Kraemer, O.; Lindén, T.; Tarkoma, S.; White, J.

    2014-06-01

    The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.

  14. Construction and utilization of a script concordance test as an assessment tool for DCEM3 (5th year) medical students in rheumatology.

    PubMed

    Mathieu, Sylvain; Couderc, Marion; Glace, Baptiste; Tournadre, Anne; Malochet-Guinamand, Sandrine; Pereira, Bruno; Dubost, Jean-Jacques; Soubrier, Martin

    2013-12-13

    The script concordance test (SCT) is a method for assessing clinical reasoning of medical students by placing them in a context of uncertainty such as they will encounter in their future daily practice. Script concordance testing is going to be included as part of the computer-based national ranking examination (iNRE). This study was designed to create a script concordance test in rheumatology and administer it to DCEM3 (fifth year) medical students via the online platform of the Clermont-Ferrand medical school. Our SCT for rheumatology teaching was constructed by a panel of 19 experts in rheumatology (6 hospital-based and 13 community-based). One hundred seventy-nine DCEM3 (fifth year) medical students were invited to take the test. Scores were computed using the scoring key available on the University of Montreal website. Reliability of the test was estimated by the Cronbach alpha coefficient for internal consistency. The test comprised 60 questions. Among the 26 students who took the test (26/179: 14.5%), 15 completed it in its entirety. The reference panel of rheumatologists obtained a mean score of 76.6 and the 15 students had a mean score of 61.5 (p = 0.001). The Cronbach alpha value was 0.82. An online SCT can be used as an assessment tool for medical students in rheumatology. This study also highlights the active participation of community-based rheumatologists, who accounted for the majority of the 19 experts in the reference panel. A script concordance test in rheumatology for 5th year medical students.
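
    The reported reliability figure is a Cronbach alpha; as a small illustration of that calculation (the study itself scored answers with the University of Montreal scoring key, and the matrix below is random placeholder data), a Python sketch follows.

        # Cronbach's alpha for a respondents x items score matrix (illustrative only).
        import numpy as np

        def cronbach_alpha(scores: np.ndarray) -> float:
            """scores: 2-d array, rows = respondents, columns = test items."""
            k = scores.shape[1]                          # number of items
            item_vars = scores.var(axis=0, ddof=1)       # variance of each item
            total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
            return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            fake_scores = rng.normal(size=(15, 60))      # placeholder: 15 students, 60 questions
            print(round(cronbach_alpha(fake_scores), 2))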

  15. Reusing Information Management Services for Recommended Decadal Study Missions That Facilitate Aerosol and Cloud Studies

    NASA Astrophysics Data System (ADS)

    Alcott, G.; Kempler, S.; Lynnes, C.; Leptoukh, G.; Vollmer, B.; Berrick, S.

    2008-12-01

    NASA Earth Sciences Division (ESD), and its preceding Earth science organizations, has made great investments in the development and maintenance of data management systems, as well as information technologies, for the purpose of maximizing the use and usefulness of NASA-generated Earth science data. Earth science information systems, evolving with the maturation and implementation of advancing technologies, reside at NASA data centers, known as Distributed Active Archive Centers (DAACs). With information management system infrastructure in place, and system data and user services already developed and operational, only very small delta costs are required to fully support data archival, processing, and data support services required by the recommended Decadal Study missions. This presentation describes the services and capabilities of the Goddard Space Flight Center (GSFC) Earth Sciences Data and Information Services Center (GES DISC) (one of NASA's DAACs) and their potential reuse for these future missions. After 14 years of working with instrument teams and the broader science community, GES DISC personnel, with expertise in atmospheric, water cycle, and atmospheric modeling data and information services, as well as in Earth science missions, information system engineering, operations, and user services, have developed a series of modular, reusable data management components currently in use in several projects. The knowledge and experience gained at the GES DISC lend themselves to providing science-driven information systems in the areas of aerosols, clouds, and atmospheric chemicals to be measured by recommended Decadal Survey missions. Available reusable capabilities include data archive and distribution (Simple, Scalable, Script-based, Science [S4] Product Archive aka S4PA), data processing (S4 Processor for Measurements aka S4PM), data search (Mirador), data browse, visualization, and analysis (Giovanni), and data mining services. In addition, recent enhancements, such as Open Geospatial Consortium, Inc. (OGC) interoperability implementations and data fusion prototypes, will be described. As a result of the information management systems developed by NASA's GES DISC, not only are large cost savings realized through system reuse, but maintenance costs are also minimized due to the simplicity of their implementations.

  16. Near-line Archive Data Mining at the Goddard Distributed Active Archive Center

    NASA Astrophysics Data System (ADS)

    Pham, L.; Mack, R.; Eng, E.; Lynnes, C.

    2002-12-01

    NASA's Earth Observing System (EOS) is generating immense volumes of data, in some cases too much to provide to users with data-intensive needs. As an alternative to moving the data to the user and his/her research algorithms, we are providing a means to move the algorithms to the data. The Near-line Archive Data Mining (NADM) system is the Goddard Earth Sciences Distributed Active Archive Center's (GES DAAC) web data mining portal to the EOS Data and Information System (EOSDIS) data pool, a 50-TB online disk cache. The NADM web portal enables registered users to submit and execute data mining algorithm codes on the data in the EOSDIS data pool. A web interface allows the user to access the NADM system. Users first develop personalized data mining code on their home platforms and then upload it to the NADM system. The C, FORTRAN and IDL languages are currently supported. The user-developed code is automatically audited for any potential security problems before it is installed within the NADM system and made available to the user. Once the code has been installed, the user is provided a test environment where he/she can test the execution of the software against data sets of his/her choosing. When the user is satisfied with the results, he/she can promote the code to the "operational" environment. From here the user can interactively run his/her code on the data available in the EOSDIS data pool. The user can also set up a processing subscription. The subscription will automatically process new data as it becomes available in the EOSDIS data pool. The generated mined data products are then made available for FTP pickup. The NADM system uses the GES DAAC-developed Simple Scalable Script-based Science Processor (S4P) to automate tasks and perform the actual data processing. Users will also have the option of selecting a DAAC-provided data mining algorithm and using it to process the data of their choice.
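
    S4P itself is a Perl system; the sketch below is only a conceptual, language-swapped illustration of the general station pattern it automates (watch an input directory, run a user-supplied script on each new granule, and drop products in an output directory for pickup). The directory names, file pattern and mining command are hypothetical, and this is not the NADM or S4P implementation.

        # Conceptual "station" sketch in the spirit of script-based processing;
        # paths and the user mining command are made up for illustration.
        import subprocess, time
        from pathlib import Path

        INBOX = Path("station/input")     # new granules appear here (e.g. via subscription)
        OUTBOX = Path("station/output")   # mined products are dropped here for FTP pickup
        MINER = ["python", "user_mining_algorithm.py"]   # user-supplied, pre-audited code

        def process_pending():
            for granule in sorted(INBOX.glob("*.hdf")):
                product = OUTBOX / (granule.stem + ".mined")
                result = subprocess.run(MINER + [str(granule), str(product)])
                if result.returncode == 0:
                    granule.unlink()                     # done: remove from the inbox
                else:
                    granule.rename(granule.with_suffix(".failed"))

        if __name__ == "__main__":
            INBOX.mkdir(parents=True, exist_ok=True)
            OUTBOX.mkdir(parents=True, exist_ok=True)
            while True:                                  # poll like a long-running station
                process_pending()
                time.sleep(30)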

  17. Transformation of Topic-Specific Professional Knowledge into Personal Pedagogical Content Knowledge through Lesson Planning

    ERIC Educational Resources Information Center

    Stender, Anita; Brückmann, Maja; Neumann, Knut

    2017-01-01

    This study investigates the relationship between two different types of pedagogical content knowledge (PCK): the topic-specific professional knowledge (TSPK) and practical routines, so-called teaching scripts. Based on the Transformation Model of Lesson Planning, we assume that teaching scripts originate from a transformation of TSPK during lesson…

  18. MoleculaRnetworks: an integrated graph theoretic and data mining tool to explore solvent organization in molecular simulation.

    PubMed

    Mooney, Barbara Logan; Corrales, L René; Clark, Aurora E

    2012-03-30

    This work discusses scripts for processing molecular simulations data written using the software package R: A Language and Environment for Statistical Computing. These scripts, named moleculaRnetworks, are intended for the geometric and solvent network analysis of aqueous solutes and can be extended to other H-bonded solvents. New algorithms, several of which are based on graph theory, that interrogate the solvent environment about a solute are presented and described. This includes a novel method for identifying the geometric shape adopted by the solvent in the immediate vicinity of the solute and an exploratory approach for describing H-bonding, both based on the PageRank algorithm of Google search fame. The moleculaRnetworks codes include a preprocessor, which distills simulation trajectories into physicochemical data arrays, and an interactive analysis script that enables statistical, trend, and correlation analysis, and other data mining. The goal of these scripts is to increase access to the wealth of structural and dynamical information that can be obtained from molecular simulations. Copyright © 2012 Wiley Periodicals, Inc.
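
    moleculaRnetworks is written in R; the following minimal Python/networkx sketch only illustrates the underlying idea of ranking molecules in an H-bond network with PageRank. The edge list and damping factor are placeholders, not output of the package.

        # Minimal illustration of PageRank on an H-bond network (not moleculaRnetworks).
        import networkx as nx

        # Nodes are molecules; edges are hydrogen bonds found in one trajectory frame.
        hbonds = [("ion", "w1"), ("ion", "w2"), ("w1", "w3"),
                  ("w2", "w3"), ("w3", "w4"), ("w4", "w5")]

        G = nx.Graph(hbonds)
        ranks = nx.pagerank(G, alpha=0.85)   # damping factor as in the web-search usage

        # Molecules with high PageRank are the most "central" members of the network.
        for node, score in sorted(ranks.items(), key=lambda kv: -kv[1]):
            print(f"{node}: {score:.3f}")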

  19. Predictors of Physical Altercation among Adolescents in Residential Substance Abuse Treatment

    PubMed Central

    Crawley, Rachel D.; Becan, Jennifer Edwards; Knight, Danica Kalling; Joe, George W.; Flynn, Patrick M.

    2014-01-01

    This study tested the hypothesis that basic social information-processing components represented by family conflict, peer aggression, and pro-aggression cognitive scripts are related to aggression and social problems among adolescents in substance abuse treatment. The sample consisted of 547 adolescents in two community-based residential facilities. Correlation results indicated that more peer aggression is related to more pro-aggression scripts; scripts, peer aggression, and family conflict are associated with social problems; and in-treatment physical altercation involvement is predicted by higher peer aggression. Findings suggest that social information-processing components are valuable for treatment research. PMID:26622072

  20. Equalizer: a scalable parallel rendering framework.

    PubMed

    Eilemann, Stefan; Makhinya, Maxim; Pajarola, Renato

    2009-01-01

    Continuing improvements in CPU and GPU performance, as well as increasing multi-core processor and cluster-based parallelism, create demand for flexible and scalable parallel rendering solutions that can exploit multipipe, hardware-accelerated graphics. In fact, to achieve interactive visualization, scalable rendering systems are essential to cope with the rapid growth of data sets. However, parallel rendering systems are non-trivial to develop, and often only application-specific implementations have been proposed. The task of developing a scalable parallel rendering framework is even more difficult if it is to be generic enough to support various types of data and visualization applications, and at the same time work efficiently on a cluster with distributed graphics cards. In this paper we introduce a novel system called Equalizer, a toolkit for scalable parallel rendering based on OpenGL which provides an application programming interface (API) to develop scalable graphics applications for a wide range of systems, from large distributed visualization clusters and multi-processor multipipe graphics systems to single-processor single-pipe desktop machines. We describe the system architecture and the basic API, discuss its advantages over previous approaches, and present example configurations, usage scenarios and scalability results.

  1. Adolescents' sexual scripts: schematic representations of consensual and nonconsensual heterosexual interactions.

    PubMed

    Krahé, Barbara; Bieneck, Steffen; Scheinberger-Olwig, Renate

    2007-11-01

    The characteristic features of adolescents' sexual scripts were explored in 400 tenth and eleventh graders from Berlin, Germany. Participants rated the prototypical elements of three scripts for heterosexual interactions: (1) the prototypical script for the first consensual sexual intercourse with a new partner as pertaining to adolescents in general (general script); (2) the prototypical script for the first consensual sexual intercourse with a new partner as pertaining to themselves personally (individual script); and (3) the script for a nonconsensual sexual intercourse (rape script). Compared with the general script for the age group as a whole, the individual script contained fewer risk elements related to sexual aggression and portrayed more positive consequences of the sexual interaction. Few gender differences were found, and coital experience did not affect sexual scripts. The rape script was found to be close to the "real rape stereotype." The findings are discussed with respect to the role of sexual scripts as guidelines for behavior, particularly in terms of their significance for the prediction of sexual aggression.

  2. SLURM: Simple Linux Utility for Resource Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jette, M; Grondona, M

    2003-04-22

    Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling, and stream copy modules. This paper presents an overview of the SLURM architecture and functionality.

  3. A Simple, Scalable Synthetic Route to (+)- and (−)-Pseudoephenamine

    PubMed Central

    Mellem, Kevin T.

    2013-01-01

    A three-step synthesis of pseudoephenamine suitable for preparing multigram amounts of both enantiomers of the auxiliary from the inexpensive starting material benzil is described. The sequence involves synthesis of the crystalline mono-methylimine derivative of benzil, reduction of that substance with lithium aluminum hydride, and resolution of pseudoephenamine with mandelic acid. PMID:24138164

  4. Cloud-Based Tools to Support High-Resolution Modeling (Invited)

    NASA Astrophysics Data System (ADS)

    Jones, N.; Nelson, J.; Swain, N.; Christensen, S.

    2013-12-01

    The majority of watershed models developed to support decision-making by water management agencies are simple, lumped-parameter models. Maturity in research codes and advances in the computational power from multi-core processors on desktop machines, commercial cloud-computing resources, and supercomputers with thousands of cores have created new opportunities for employing more accurate, high-resolution distributed models for routine use in decision support. The barriers to using such models on a more routine basis include massive amounts of spatial data that must be processed for each new scenario and lack of efficient visualization tools. In this presentation we will review a current NSF-funded project called CI-WATER that is intended to overcome many of these roadblocks associated with high-resolution modeling. We are developing a suite of tools that will make it possible to deploy customized web-based apps for running custom scenarios for high-resolution models with minimal effort. These tools are based on a software stack that includes 52 North, MapServer, PostGIS, HTCondor, CKAN, and Python. This open source stack provides a simple scripting environment for quickly configuring new custom applications for running high-resolution models as geoprocessing workflows. The HTCondor component facilitates simple access to local distributed computers or commercial cloud resources when necessary for stochastic simulations. The CKAN framework provides a powerful suite of tools for hosting such workflows in a web-based environment that includes visualization tools and storage of model simulations in a database for archival, querying, and sharing of model results. Prototype applications including land use change, snow melt, and burned area analysis will be presented. This material is based upon work supported by the National Science Foundation under Grant No. 1135482.

  5. The fabrication of vertically aligned and periodically distributed carbon nanotube bundles and periodically porous carbon nanotube films through a combination of laser interference ablation and metal-catalyzed chemical vapor deposition.

    PubMed

    Yuan, Dajun; Lin, Wei; Guo, Rui; Wong, C P; Das, Suman

    2012-06-01

    Scalable fabrication of carbon nanotube (CNT) bundles is essential to future advances in several applications. Here, we report on the development of a simple, two-step method for fabricating vertically aligned and periodically distributed CNT bundles and periodically porous CNT films at the sub-micron scale. The method involves laser interference ablation (LIA) of an iron film followed by CNT growth via iron-catalyzed chemical vapor deposition. CNT bundles with square widths ranging from 0.5 to 1.5 µm and lengths of 50-200 µm are grown atop the patterned catalyst over areas spanning 8 cm(2). The CNT bundles exhibit a high degree of control over square width, orientation, uniformity, and periodicity. This simple scalable method of producing well-placed and oriented CNT bundles demonstrates a high application potential for wafer-scale integration of CNT structures into various device applications, including IC interconnects, field emitters, sensors, batteries, and optoelectronics.

  6. A Simple Method for High-Lift Propeller Conceptual Design

    NASA Technical Reports Server (NTRS)

    Patterson, Michael; Borer, Nick; German, Brian

    2016-01-01

    In this paper, we present a simple method for designing propellers that are placed upstream of the leading edge of a wing in order to augment lift. Because the primary purpose of these "high-lift propellers" is to increase lift rather than produce thrust, these props are best viewed as a form of high-lift device; consequently, they should be designed differently than traditional propellers. We present a theory that describes how these props can be designed to provide a relatively uniform axial velocity increase, which is hypothesized to be advantageous for lift augmentation based on a literature survey. Computational modeling indicates that such propellers can generate the same average induced axial velocity while consuming less power and producing less thrust than conventional propeller designs. For an example problem based on specifications for NASA's Scalable Convergent Electric Propulsion Technology and Operations Research (SCEPTOR) flight demonstrator, a propeller designed with the new method requires approximately 15% less power and produces approximately 11% less thrust than one designed for minimum induced loss. Higher-order modeling and/or wind tunnel testing are needed to verify the predicted performance.

  7. Masking as an effective quality control method for next-generation sequencing data analysis.

    PubMed

    Yun, Sajung; Yun, Sijung

    2014-12-13

    Next generation sequencing produces base calls with low quality scores that can affect the accuracy of identifying simple nucleotide variation calls, including single nucleotide polymorphisms and small insertions and deletions. Here we compare the effectiveness of two data preprocessing methods, masking and trimming, and the accuracy of simple nucleotide variation calls on whole-genome sequence data from Caenorhabditis elegans. Masking substitutes low quality base calls with 'N's (undetermined bases), whereas trimming removes low quality bases, which results in shorter read lengths. We demonstrate that masking is more effective than trimming in reducing the false-positive rate in single nucleotide polymorphism (SNP) calling. However, neither preprocessing method affected the false-negative rate in SNP calling with statistical significance compared to data analysis without preprocessing. False-positive and false-negative rates for small insertions and deletions did not differ between masking and trimming. We recommend masking over trimming as a more effective preprocessing method for next generation sequencing data analysis, since masking reduces the false-positive rate in SNP calling without sacrificing the false-negative rate, although trimming is currently more commonly used in the field. The perl script for masking is available at http://code.google.com/p/subn/. The sequencing data used in the study were deposited in the Sequence Read Archive (SRX450968 and SRX451773).
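
    The authors distribute a Perl script; the snippet below is only a simplified Python illustration of the masking idea, assuming Phred+33 encoded FASTQ qualities and an arbitrary quality threshold of 20.

        # Simplified illustration of masking (not the authors' Perl script): substitute
        # base calls whose Phred+33 quality falls below a threshold with 'N'.
        def mask_read(seq: str, qual: str, min_phred: int = 20) -> str:
            return "".join(
                base if (ord(q) - 33) >= min_phred else "N"
                for base, q in zip(seq, qual)
            )

        def mask_fastq(in_path: str, out_path: str, min_phred: int = 20) -> None:
            with open(in_path) as fin, open(out_path, "w") as fout:
                while True:
                    record = [fin.readline() for _ in range(4)]   # @id, seq, +, qual
                    if not record[0]:
                        break
                    record[1] = mask_read(record[1].rstrip("\n"),
                                          record[3].rstrip("\n"), min_phred) + "\n"
                    fout.writelines(record)

        if __name__ == "__main__":
            print(mask_read("ACGTACGT", "IIII##II"))   # low-quality '#' bases become 'N'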

  8. Technical development of PubMed interact: an improved interface for MEDLINE/PubMed searches.

    PubMed

    Muin, Michael; Fontelo, Paul

    2006-11-03

    The project aims to create an alternative search interface for MEDLINE/PubMed that may provide assistance to the novice user and added convenience to the advanced user. An earlier version of the project was the 'Slider Interface for MEDLINE/PubMed searches' (SLIM), which provided JavaScript slider bars to control search parameters. In this new version, recent developments in Web-based technologies were implemented. These changes may prove to be even more valuable in enhancing user interactivity through client-side manipulation and management of results. PubMed Interact is a Web-based MEDLINE/PubMed search application built with HTML, JavaScript and PHP. It is implemented on a Windows Server 2003 with Apache 2.0.52, PHP 4.4.1 and MySQL 4.1.18. PHP scripts provide the backend engine that connects with E-Utilities and parses XML files. JavaScript manages client-side functionalities and converts Web pages into interactive platforms using dynamic HTML (DHTML), Document Object Model (DOM) tree manipulation and Ajax methods. With PubMed Interact, users can limit searches with JavaScript slider bars, preview result counts, delete citations from the list, display and add related articles and create relevance lists. Many interactive features occur client-side, which allows instant feedback without reloading or refreshing the page, resulting in a more efficient user experience. PubMed Interact is a highly interactive Web-based search application for MEDLINE/PubMed that explores recent trends in Web technologies like DOM tree manipulation and Ajax. It may become a valuable technical development for online medical search applications.
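
    The backend described above is PHP talking to the NCBI E-Utilities; as a hedged illustration of the kind of ESearch request involved (the query term and result cap below are made up, and the JSON return mode is simply one convenient option), an equivalent call can be made from Python.

        # Illustrative ESearch call against PubMed via the NCBI E-Utilities.
        import json
        import urllib.parse
        import urllib.request

        EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

        def pubmed_search(term: str, retmax: int = 20):
            params = urllib.parse.urlencode({
                "db": "pubmed",
                "term": term,
                "retmax": retmax,       # analogous to a slider-controlled result limit
                "retmode": "json",
            })
            with urllib.request.urlopen(f"{EUTILS}?{params}") as resp:
                result = json.load(resp)["esearchresult"]
            return int(result["count"]), result["idlist"]

        if __name__ == "__main__":
            count, ids = pubmed_search("medline search interface", retmax=5)
            print(count, ids)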

  9. Caregiving Antecedents of Secure Base Script Knowledge: A Comparative Analysis of Young Adult Attachment Representations

    PubMed Central

    Steele, Ryan D.; Waters, Theodore E. A.; Bost, Kelly K.; Vaughn, Brian E.; Truitt, Warren; Waters, Harriet S.; Booth-LaForce, Cathryn; Roisman, Glenn I.

    2015-01-01

    Based on a sub-sample (N = 673) of the NICHD Study of Early Child Care and Youth Development (SECCYD) cohort, this paper reports data from a follow-up assessment at age 18 years on the antecedents of secure base script knowledge, as reflected in the ability to generate narratives in which attachment-related difficulties are recognized, competent help is provided, and the problem is resolved. Secure base script knowledge was (a) modestly to moderately correlated with more well established assessments of adult attachment, (b) associated with mother-child attachment in the first three years of life and with observations of maternal and paternal sensitivity from childhood to adolescence, and (c) partially accounted for associations previously documented in the SECCYD cohort between early caregiving experiences and Adult Attachment Interview states of mind (Booth-LaForce & Roisman, 2014) as well as self-reported attachment styles (Fraley, Roisman, Booth-LaForce, Owen, & Holland, 2013). PMID:25264703

  10. Designing and developing portable large-scale JavaScript web applications within the Experiment Dashboard framework

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Dzhunov, I.; Karavakis, E.; Kokoszkiewicz, L.; Nowotka, M.; Saiz, P.; Tuckett, D.

    2012-12-01

    Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provide an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Computing Grid. We demonstrate the benefits of the approach for large-scale JavaScript web applications in this context by examining the design of several Experiment Dashboard applications for data processing, data transfer and site status monitoring, and by showing how they have been ported for different virtual organisations and technologies.

  11. Line Segmentation in Handwritten Assamese and Meetei Mayek Script Using Seam Carving Based Algorithm

    NASA Astrophysics Data System (ADS)

    Kumar, Chandan Jyoti; Kalita, Sanjib Kr.

    Line segmentation is a key stage in an Optical Character Recognition (OCR) system. This paper primarily concerns the problem of text line extraction on color and grayscale manuscript pages of two major North-east Indian regional scripts, Assamese and Meetei Mayek. Line segmentation of handwritten text in Assamese and Meetei Mayek scripts is an uphill task, primarily because of the structural features of both scripts and varied writing styles. In this paper, line segmentation of a document image is achieved using the seam carving technique. This approach was originally used for content-aware resizing of images, but many researchers are now applying seam carving to the line segmentation phase of OCR. Although it is a language-independent technique, most experiments have been performed on Arabic, Greek, German and Chinese scripts. Two types of seams are generated: medial seams approximate the orientation of each text line, and separating seams separate one line of text from another. Experiments are performed extensively over various types of documents, and detailed analysis of the evaluations shows that the algorithm performs well even for documents with multiple scripts. We also present a comparative study of the accuracy of this method over different types of data.
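
    As a rough illustration of the dynamic-programming core of seam carving (the paper's energy map and its medial versus separating seams are more elaborate than this), the sketch below finds one minimum-energy horizontal seam across a toy grayscale page with numpy.

        # Minimum-energy horizontal seam (left to right) through a grayscale page;
        # a separating seam would thread through the low-ink gap between text lines.
        import numpy as np

        def horizontal_seam(energy: np.ndarray) -> np.ndarray:
            """Return, for each column, the row index of the cheapest crossing seam."""
            rows, cols = energy.shape
            cost = energy.astype(float).copy()
            for j in range(1, cols):
                prev = cost[:, j - 1]
                up = np.roll(prev, 1);   up[0] = np.inf    # neighbour one row above
                down = np.roll(prev, -1); down[-1] = np.inf  # neighbour one row below
                cost[:, j] += np.minimum(prev, np.minimum(up, down))
            # Backtrack from the cheapest endpoint in the last column.
            seam = np.empty(cols, dtype=int)
            seam[-1] = int(np.argmin(cost[:, -1]))
            for j in range(cols - 2, -1, -1):
                i = seam[j + 1]
                lo, hi = max(i - 1, 0), min(i + 2, rows)
                seam[j] = lo + int(np.argmin(cost[lo:hi, j]))
            return seam

        if __name__ == "__main__":
            # Rows 2 and 6 mimic dark text lines (high ink energy); gaps are cheap.
            page = np.ones((8, 12)); page[2, :] = 5.0; page[6, :] = 5.0
            print(horizontal_seam(page))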

  12. Systematic errors of EIT systems determined by easily-scalable resistive phantoms.

    PubMed

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-06-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.

  13. Real-time Data Access to First Responders: A VORB application

    NASA Astrophysics Data System (ADS)

    Lu, S.; Kim, J. B.; Bryant, P.; Foley, S.; Vernon, F.; Rajasekar, A.; Meier, S.

    2006-12-01

    Getting information to first responders is not an easy task. The sensors that provide the information are diverse in formats and come from many disciplines. They are also distributed by location, transmit data at different frequencies and are managed and owned by autonomous administrative entities. Pulling in such data in real time requires a very robust sensor network with reliable data transport and buffering capabilities. Moreover, the system should be extensible and scalable in numbers and sensor types. ROADNet is a real-time sensor network project at UCSD gathering diverse environmental data in real-time or near-real-time. VORB (Virtual Object Ring Buffer) is the middleware used in ROADNet, offering simple, uniform and scalable real-time data management for discovering (through metadata), accessing and archiving real-time data and data streams. A recent development in VORB, a web API, offers quick and simple integration of real-time data with web applications. In this poster, we discuss one application developed as part of ROADNet. SMER (Santa Margarita Ecological Reserve) is located in interior Southern California, a region prone to catastrophic wildfires each summer and fall. To provide data during emergencies, we have applied the VORB framework to develop a web-based application for providing access to diverse sensor data including weather data, heat sensor information, and images from cameras. Wildfire fighters have access to real-time data about weather and heat conditions in the area and can view pictures taken from cameras at multiple points in the Reserve to pinpoint problem areas. Moreover, they can browse archived images and sensor data from earlier times to provide a comparison framework. To show scalability of the system, we have expanded the sensor network under consideration to other areas in Southern California, including sensors accessible by the Los Angeles County Fire Department (LACOFD) and those available through the High Performance Wireless Research and Education Network (HPWREN). The poster will discuss the system architecture and components, the types of sensors being used, and usage scenarios. The system is currently operational through the SMER website.

  14. Metal-Free Oxidation of Primary Amines to Nitriles through Coupled Catalytic Cycles.

    PubMed

    Lambert, Kyle M; Bobbitt, James M; Eldirany, Sherif A; Kissane, Liam E; Sheridan, Rose K; Stempel, Zachary D; Sternberg, Francis H; Bailey, William F

    2016-04-04

    Synergism among several intertwined catalytic cycles allows for selective, room temperature oxidation of primary amines to the corresponding nitriles in 85-98% isolated yield. This metal-free, scalable, operationally simple method employs a catalytic quantity of 4-acetamido-TEMPO (ACT; TEMPO=2,2,6,6-tetramethylpiperidine N-oxide) radical and the inexpensive, environmentally benign triple salt oxone as the terminal oxidant under mild conditions. Simple filtration of the reaction mixture through silica gel affords pure nitrile products. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Detection of Sub-fM DNA with Target Recycling and Self-Assembly Amplification on Graphene Field-Effect Biosensors

    PubMed Central

    2018-01-01

    All-electronic DNA biosensors based on graphene field-effect transistors (GFETs) offer the prospect of simple and cost-effective diagnostics. For GFET sensors based on complementary probe DNA, the sensitivity is limited by the binding affinity of the target oligonucleotide, in the nM range for 20 mer targets. We report a ∼20 000× improvement in sensitivity through the use of engineered hairpin probe DNA that allows for target recycling and hybridization chain reaction. This enables detection of 21 mer target DNA at sub-fM concentration and provides superior specificity against single-base mismatched oligomers. The work is based on a scalable fabrication process for biosensor arrays that is suitable for multiplexed detection. This approach overcomes the binding-affinity-dependent sensitivity of nucleic acid biosensors and offers a pathway toward multiplexed and label-free nucleic acid testing with high accuracy and selectivity. PMID:29768011

  16. Detection of Sub-fM DNA with Target Recycling and Self-Assembly Amplification on Graphene Field-Effect Biosensors.

    PubMed

    Gao, Zhaoli; Xia, Han; Zauberman, Jonathan; Tomaiuolo, Maurizio; Ping, Jinglei; Zhang, Qicheng; Ducos, Pedro; Ye, Huacheng; Wang, Sheng; Yang, Xinping; Lubna, Fahmida; Luo, Zhengtang; Ren, Li; Johnson, Alan T Charlie

    2018-06-13

    All-electronic DNA biosensors based on graphene field-effect transistors (GFETs) offer the prospect of simple and cost-effective diagnostics. For GFET sensors based on complementary probe DNA, the sensitivity is limited by the binding affinity of the target oligonucleotide, in the nM range for 20 mer targets. We report a ∼20 000× improvement in sensitivity through the use of engineered hairpin probe DNA that allows for target recycling and hybridization chain reaction. This enables detection of 21 mer target DNA at sub-fM concentration and provides superior specificity against single-base mismatched oligomers. The work is based on a scalable fabrication process for biosensor arrays that is suitable for multiplexed detection. This approach overcomes the binding-affinity-dependent sensitivity of nucleic acid biosensors and offers a pathway toward multiplexed and label-free nucleic acid testing with high accuracy and selectivity.

  17. Interactive Light Stimulus Generation with High Performance Real-Time Image Processing and Simple Scripting

    PubMed Central

    Szécsi, László; Kacsó, Ágota; Zeck, Günther; Hantz, Péter

    2017-01-01

    Light stimulation with precise and complex spatial and temporal modulation is demanded by a series of research fields like visual neuroscience, optogenetics, ophthalmology, and visual psychophysics. We developed a user-friendly and flexible stimulus-generating framework (GEARS: GPU-based Eye And Retina Stimulation Software), which offers access to GPU computing power and allows interactive modification of stimulus parameters during experiments. Furthermore, it has built-in support for driving external equipment, as well as for synchronization tasks, via USB ports. The use of GEARS does not require elaborate programming skills. The necessary scripting is visually aided by an intuitive interface, while the details of the underlying software and hardware components remain hidden. Internally, the software is a C++/Python hybrid using OpenGL graphics. Computations are performed on the GPU, and are defined in the GLSL shading language. However, all GPU settings, including the GPU shader programs, are automatically generated by GEARS. This is configured through a method encountered in game programming, which allows high flexibility: stimuli are straightforwardly composed using a broad library of basic components. Stimulus rendering is implemented solely in C++; therefore, intermediary libraries for interfacing could be omitted. This enables the program to perform computationally demanding tasks like en-masse random number generation or real-time image processing by local and global operations. PMID:29326579

  18. Reproducible, Component-based Modeling with TopoFlow, A Spatial Hydrologic Modeling Toolkit

    DOE PAGES

    Peckham, Scott D.; Stoica, Maria; Jafarov, Elchin; ...

    2017-04-26

    Modern geoscientists have online access to an abundance of different data sets and models, but these resources differ from each other in myriad ways and this heterogeneity works against interoperability as well as reproducibility. The purpose of this paper is to illustrate the main issues and some best practices for addressing the challenge of reproducible science in the context of a relatively simple hydrologic modeling study for a small Arctic watershed near Fairbanks, Alaska. This study requires several different types of input data in addition to several coupled model components. All data sets, model components and processing scripts (e.g. for preparation of data and figures, and for analysis of model output) are fully documented and made available online at persistent URLs. Similarly, all source code for the models and scripts is open-source, version controlled and made available online via GitHub. Each model component has a Basic Model Interface (BMI) to simplify coupling and its own HTML help page that includes a list of all equations and variables used. The set of all model components (TopoFlow) has also been made available as a Python package for easy installation. Three different graphical user interfaces for setting up TopoFlow runs are described, including one that allows model components to run and be coupled as web services.
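
    As a sketch of what a minimal BMI-flavoured component can look like in Python (the method names follow the general initialize/update/finalize pattern of BMI, but the exact signatures in the BMI specification and in TopoFlow differ; the linear-reservoir process and config file name are purely illustrative):

        # Toy BMI-style component: not a TopoFlow component, just the shape of one.
        class LinearReservoirComponent:
            def initialize(self, cfg_file: str) -> None:
                # A real component would parse cfg_file; here toy values are hard-coded.
                self.dt = 3600.0          # time step [s]
                self.k = 1e-5             # outflow coefficient [1/s]
                self.storage = 100.0      # state variable [mm]
                self.time = 0.0

            def update(self) -> None:
                outflow = self.k * self.storage
                self.storage -= outflow * self.dt
                self.time += self.dt

            def finalize(self) -> None:
                pass                      # close files / release resources here

            def get_current_time(self) -> float:
                return self.time

            def get_value(self, name: str) -> float:
                return {"water_storage": self.storage}[name]

            def set_value(self, name: str, value: float) -> None:
                if name == "water_storage":
                    self.storage = value

        if __name__ == "__main__":
            comp = LinearReservoirComponent()
            comp.initialize("toy_config.cfg")        # hypothetical config file name
            for _ in range(24):                      # step-by-step updates enable coupling
                comp.update()
            print(comp.get_current_time(), round(comp.get_value("water_storage"), 2))
            comp.finalize()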

  19. Reproducible, Component-based Modeling with TopoFlow, A Spatial Hydrologic Modeling Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peckham, Scott D.; Stoica, Maria; Jafarov, Elchin

    Modern geoscientists have online access to an abundance of different data sets and models, but these resources differ from each other in myriad ways and this heterogeneity works against interoperability as well as reproducibility. The purpose of this paper is to illustrate the main issues and some best practices for addressing the challenge of reproducible science in the context of a relatively simple hydrologic modeling study for a small Arctic watershed near Fairbanks, Alaska. This study requires several different types of input data in addition to several coupled model components. All data sets, model components and processing scripts (e.g. for preparation of data and figures, and for analysis of model output) are fully documented and made available online at persistent URLs. Similarly, all source code for the models and scripts is open-source, version controlled and made available online via GitHub. Each model component has a Basic Model Interface (BMI) to simplify coupling and its own HTML help page that includes a list of all equations and variables used. The set of all model components (TopoFlow) has also been made available as a Python package for easy installation. Three different graphical user interfaces for setting up TopoFlow runs are described, including one that allows model components to run and be coupled as web services.

  20. Metasurface-based anti-reflection coatings at optical frequencies

    NASA Astrophysics Data System (ADS)

    Monti, Alessio; Alù, Andrea; Toscano, Alessandro; Bilotti, Filiberto

    2018-05-01

    In this manuscript, we propose a metasurface approach for the reduction of electromagnetic reflection from an arbitrary air‑dielectric interface. The proposed technique exploits the exotic optical response of plasmonic nanoparticles to achieve complete cancellation of the field reflected by a dielectric substrate by means of destructive interference. Differently from other, earlier anti-reflection approaches based on nanoparticles, our design scheme is supported by a simple transmission-line formulation that allows a closed-form characterization of the anti-reflection performance of a nanoparticle array. Furthermore, since the working principle of the proposed devices relies on an average effect that does not critically depend on the array geometry, our approach enables low-cost production and easy scalability to large sizes. Our theoretical considerations are supported by full-wave simulations confirming the effectiveness of this design principle.

  1. Ontology and modeling patterns for state-based behavior representation

    NASA Technical Reports Server (NTRS)

    Castet, Jean-Francois; Rozek, Matthew L.; Ingham, Michel D.; Rouquette, Nicolas F.; Chung, Seung H.; Kerzhner, Aleksandr A.; Donahue, Kenneth M.; Jenkins, J. Steven; Wagner, David A.; Dvorak, Daniel L.; hide

    2015-01-01

    This paper provides an approach to capture state-based behavior of elements, that is, the specification of their state evolution in time, and the interactions amongst them. Elements can be components (e.g., sensors, actuators) or environments, and are characterized by state variables that vary with time. The behaviors of these elements, as well as interactions among them are represented through constraints on state variables. This paper discusses the concepts and relationships introduced in this behavior ontology, and the modeling patterns associated with it. Two example cases are provided to illustrate their usage, as well as to demonstrate the flexibility and scalability of the behavior ontology: a simple flashlight electrical model and a more complex spacecraft model involving instruments, power and data behaviors. Finally, an implementation in a SysML profile is provided.

  2. Dynamic online surveys and experiments with the free open-source software dynQuest.

    PubMed

    Rademacher, Jens D M; Lippke, Sonia

    2007-08-01

    With computers and the World Wide Web widely available, collecting data through Web browsers is an attractive method utilized by the social sciences. In this article, conducting PC- and Web-based trials with the software package dynQuest is described. The software manages dynamic questionnaire-based trials over the Internet or on single computers, possibly as randomized controlled trials (RCTs), if two or more groups are involved. The choice of follow-up questions can depend on previous responses, as needed for matched interventions. Data are collected in a simple text-based database that can be imported easily into other programs for postprocessing and statistical analysis. The software consists of platform-independent scripts written in the programming language Perl that use the Common Gateway Interface (CGI) between Web browser and server for submission of data through HTML forms. Advantages of dynQuest are parsimony, simplicity in use and installation, transparency, and reliability. The program is available as open-source freeware from the authors.
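
    A minimal sketch of the branching-questionnaire idea described above, written in Python rather than dynQuest's Perl/CGI and using a hypothetical question set and file name; it is meant only to illustrate how follow-up questions can depend on previous responses while answers accumulate in a plain-text database.

        # Each answer maps to the id of the next question; an unmapped answer ends the session.
        import csv, datetime

        QUESTIONS = {
            "q1": ("Do you exercise regularly?", {"yes": "q2", "no": "q3"}),
            "q2": ("How many sessions per week?", {}),
            "q3": ("What is the main barrier to exercising?", {}),
        }

        def run_session(participant_id, ask, db_path="responses.txt"):
            """Walk the question graph, branching on each answer, and append to a text file."""
            qid = "q1"
            with open(db_path, "a", newline="") as fh:
                writer = csv.writer(fh, delimiter="\t")
                while qid is not None:
                    text, branches = QUESTIONS[qid]
                    answer = ask(text)                  # e.g. read from a submitted HTML form
                    writer.writerow([datetime.datetime.now().isoformat(),
                                     participant_id, qid, answer])
                    qid = branches.get(answer)          # branch on the response

        if __name__ == "__main__":
            run_session("p001", lambda q: input(q + " "))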

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mellors, R J

    The Comprehensive Nuclear Test Ban Treaty (CTBT) includes provisions for an on-site inspection (OSI), which allows the use of specific techniques to detect underground anomalies including cavities and rubble zones. One permitted technique is active seismic surveys such as seismic refraction or reflection. The purpose of this report is to conduct some simple modeling to evaluate the potential use of seismic reflection in detecting cavities and to test the use of open-source software in modeling possible scenarios. It should be noted that OSI inspections are conducted under specific constraints regarding duration and logistics. These constraints are likely to significantly impact active seismic surveying, as a seismic survey typically requires considerable equipment, effort, and expertise. For the purposes of this study, which is a first-order feasibility study, these issues will not be considered. This report provides a brief description of the seismic reflection method along with some commonly used software packages. This is followed by an outline of a simple processing stream based on a synthetic model, along with results from a set of models representing underground cavities. A set of scripts used to generate the models are presented in an appendix. We do not consider detection of underground facilities in this work and the geologic setting used in these tests is an extremely simple one.

  4. Joint source-channel coding for motion-compensated DCT-based SNR scalable video.

    PubMed

    Kondi, Lisimachos P; Ishtiaq, Faisal; Katsaggelos, Aggelos K

    2002-01-01

    In this paper, we develop an approach toward joint source-channel coding for motion-compensated DCT-based scalable video coding and transmission. A framework for the optimal selection of the source and channel coding rates over all scalable layers is presented such that the overall distortion is minimized. The algorithm utilizes universal rate distortion characteristics which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. The proposed algorithm allocates the available bit rate between scalable layers and, within each layer, between source and channel coding. We present the results of this rate allocation algorithm for video transmission over a wireless channel using the H.263 Version 2 signal-to-noise ratio (SNR) scalable codec for source coding and rate-compatible punctured convolutional (RCPC) codes for channel coding. We discuss the performance of the algorithm with respect to the channel conditions, coding methodologies, layer rates, and number of layers.
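
    The following is an illustrative sketch, not the authors' algorithm: it exhaustively searches one (source rate, channel rate, expected distortion) operating point per scalable layer so that the total rate stays within a budget while the summed distortion is minimized. The operating points are hypothetical placeholders for the experimentally measured universal rate-distortion characteristics mentioned above.

        from itertools import product

        # (source kbps, channel-coding kbps, expected distortion) per layer -- made-up numbers
        LAYER_POINTS = [
            [(32, 16, 40.0), (48, 24, 30.0), (64, 16, 34.0)],   # base layer
            [(16,  8, 12.0), (32, 16,  8.0), (48,  8, 10.5)],   # enhancement layer
        ]

        def allocate(budget_kbps):
            """Return (total distortion, chosen operating points) for the best feasible split."""
            best = None
            for combo in product(*LAYER_POINTS):
                rate = sum(s + c for s, c, _ in combo)
                dist = sum(d for _, _, d in combo)
                if rate <= budget_kbps and (best is None or dist < best[0]):
                    best = (dist, combo)
            return best

        print(allocate(160))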

  5. Scalable software-defined optical networking with high-performance routing and wavelength assignment algorithms.

    PubMed

    Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin

    2015-10-19

    The feasibility of software-defined optical networking (SDON) for practical application critically depends on the scalability of centralized control performance. In this paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for a proof-of-concept demonstration. Efficient RWA algorithms are proposed that achieve high network capacity at reduced computation cost, a significant attribute of a scalable centralized-control SDON. The proposed heuristic RWA algorithms differ in the order in which requests are processed and in the procedures used to update routing tables. Combined with a shortest-path-based routing algorithm, a hottest-request-first processing policy that considers demand intensity and end-to-end distance information offers both the highest network throughput and acceptable computation scalability. We further investigate the trade-off between network throughput and the computational complexity of the routing-table update procedure in a simulation study.
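
    As a rough illustration of a hottest-request-first heuristic of the kind described above (not the paper's algorithms), the sketch below sorts lightpath requests by demand intensity and hop count, routes each over an unweighted shortest path, and assigns the first free wavelength; the topology, demands and wavelength count are hypothetical.

        from collections import deque

        LINKS = {("A", "B"), ("B", "C"), ("C", "D"), ("A", "D"), ("B", "D")}
        ADJ = {}
        for u, v in LINKS:
            ADJ.setdefault(u, set()).add(v)
            ADJ.setdefault(v, set()).add(u)
        N_WAVELENGTHS = 4
        used = {tuple(sorted(l)): set() for l in LINKS}      # wavelengths in use per link

        def shortest_path(src, dst):
            """Breadth-first search on the unweighted topology."""
            prev, seen, q = {}, {src}, deque([src])
            while q:
                u = q.popleft()
                if u == dst:
                    path = [dst]
                    while path[-1] != src:
                        path.append(prev[path[-1]])
                    return path[::-1]
                for v in ADJ[u]:
                    if v not in seen:
                        seen.add(v); prev[v] = u; q.append(v)
            return None

        def route(requests):
            # "Hottest" requests first: highest demand, then shortest end-to-end path.
            for src, dst, demand in sorted(
                    requests, key=lambda r: (-r[2], len(shortest_path(r[0], r[1])))):
                path = shortest_path(src, dst)
                edges = [tuple(sorted(e)) for e in zip(path, path[1:])]
                for w in range(N_WAVELENGTHS):               # first-fit wavelength assignment
                    if all(w not in used[e] for e in edges):
                        for e in edges:
                            used[e].add(w)
                        print(src, "->", dst, "path", path, "wavelength", w)
                        break
                else:
                    print(src, "->", dst, "blocked")

        route([("A", "C", 3), ("A", "D", 5), ("B", "D", 1)])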

  6. Drama as Arts-Based Pedagogy and Research: Media Advertising and Inner-City Youth.

    ERIC Educational Resources Information Center

    Conrad, Diane

    2002-01-01

    A media unit for inner city high school students examined the relationship between youth and advertising by using drama as the medium through which learning and research occurred. Data were presented through scripted dramatic scenes. How the interpretation and generation of data were embedded in the process of writing these scripts is explained.…

  7. A Model for Flexibly Editing CSCL Scripts

    ERIC Educational Resources Information Center

    Sobreira, Pericles; Tchounikine, Pierre

    2012-01-01

    This article presents a model whose primary concern and design rationale is to offer users (teachers) with basic ICT skills an intuitive, easy, and flexible way of editing scripts. The proposal is based on relating an end-user representation as a table and a machine model as a tree. The table-tree model introduces structural expressiveness and…

  8. A Courseware to Script Animated Pedagogical Agents in Instructional Material for Elementary Students in English Education

    ERIC Educational Resources Information Center

    Hong, Zeng-Wei; Chen, Yen-Lin; Lan, Chien-Ho

    2014-01-01

    Animated agents are virtual characters who demonstrate facial expressions, gestures, movements, and speech to facilitate students' engagement in the learning environment. Our research developed a courseware that supports an XML-based markup language and an authoring tool for teachers to script animated pedagogical agents in teaching materials. The…

  9. Comparing the Effects of Augmented Reality Phonics and Scripted Phonics Approaches on Achievement of At-Risk Kindergarten Students

    ERIC Educational Resources Information Center

    Ladd, Melissa

    2016-01-01

    This study strived to determine the effectiveness of the AR phonics program relative to the effectiveness of the scripted phonics program for developing the letter identification, sound verbalization, and blending abilities of kindergarten students considered at-risk based on state assessments. The researcher was interested in pretest and posttest…

  10. The Call for Cultural Responsiveness: Teachers' Perceptions about the Interplay between Culturally Responsive Instruction and Scripted Curricula

    ERIC Educational Resources Information Center

    Toppel, Kathryn Elizabeth

    2013-01-01

    The increased focus on the implementation of scientifically research-based instruction as an outcome of No Child Left Behind ("Understanding NCLB," 2007) has resulted in the widespread use of scripted reading curricula (Dewitz, Leahy, Jones, and Sullivan, 2010), which typically represents Eurocentric and middle class forms of discourse,…

  11. Effectiveness of medicines review with web-based pharmaceutical treatment algorithms in reducing potentially inappropriate prescribing in older people in primary care: a cluster randomized trial (OPTI-SCRIPT study protocol).

    PubMed

    Clyne, Barbara; Bradley, Marie C; Smith, Susan M; Hughes, Carmel M; Motterlini, Nicola; Clear, Daniel; McDonnell, Ronan; Williams, David; Fahey, Tom

    2013-03-13

    Potentially inappropriate prescribing in older people is common in primary care and can result in increased morbidity, adverse drug events, hospitalizations and mortality. In Ireland, 36% of those aged 70 years or over received at least one potentially inappropriate medication, with an associated expenditure of over €45 million. The main objective of this study is to determine the effectiveness and acceptability of a complex, multifaceted intervention in reducing the level of potentially inappropriate prescribing in primary care. This study is a pragmatic cluster randomized controlled trial, conducted in primary care (OPTI-SCRIPT trial), involving 22 practices (clusters) and 220 patients. Practices will be allocated to intervention or control arms using minimization, with intervention participants receiving a complex multifaceted intervention incorporating academic detailing, medicines review with web-based pharmaceutical treatment algorithms that provide recommended alternative treatment options, and tailored patient information leaflets. Control practices will deliver usual care and receive simple patient-level feedback on potentially inappropriate prescribing. Routinely collected national prescribing data will also be analyzed for nonparticipating practices, acting as a contemporary national control. The primary outcomes are the proportion of participant patients with potentially inappropriate prescribing and the mean number of potentially inappropriate prescriptions per patient. In addition, economic and qualitative evaluations will be conducted. This study will establish the effectiveness of a multifaceted intervention in reducing potentially inappropriate prescribing in older people in Irish primary care that is generalizable to countries with similar prescribing challenges. Current Controlled Trials ISRCTN41694007.

  12. Microfluidic CODES: a scalable multiplexed electronic sensor for orthogonal detection of particles in microfluidic channels.

    PubMed

    Liu, Ruxiu; Wang, Ningquan; Kamili, Farhan; Sarioglu, A Fatih

    2016-04-21

    Numerous biophysical and biochemical assays rely on spatial manipulation of particles/cells as they are processed on lab-on-a-chip devices. Analysis of spatially distributed particles on these devices typically requires microscopy negating the cost and size advantages of microfluidic assays. In this paper, we introduce a scalable electronic sensor technology, called microfluidic CODES, that utilizes resistive pulse sensing to orthogonally detect particles in multiple microfluidic channels from a single electrical output. Combining the techniques from telecommunications and microfluidics, we route three coplanar electrodes on a glass substrate to create multiple Coulter counters producing distinct orthogonal digital codes when they detect particles. We specifically design a digital code set using the mathematical principles of Code Division Multiple Access (CDMA) telecommunication networks and can decode signals from different microfluidic channels with >90% accuracy through computation even if these signals overlap. As a proof of principle, we use this technology to detect human ovarian cancer cells in four different microfluidic channels fabricated using soft lithography. Microfluidic CODES offers a simple, all-electronic interface that is well suited to create integrated, low-cost lab-on-a-chip devices for cell- or particle-based assays in resource-limited settings.
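
    A toy numerical illustration of the CDMA-style idea described above (the code set here is just a 4x4 Hadamard matrix, not the authors' sensor waveforms): each channel contributes a distinct orthogonal code, coincident detections add up in one output, and correlating the mixed signal against each template recovers which channels fired.

        import numpy as np

        # Rows of a 4x4 Hadamard matrix serve as mutually orthogonal channel codes.
        H = np.array([[ 1,  1,  1,  1],
                      [ 1, -1,  1, -1],
                      [ 1,  1, -1, -1],
                      [ 1, -1, -1,  1]], dtype=float)

        def encode(active_channels):
            """Superimpose the codes of all channels that detect a particle at the same time."""
            return H[list(active_channels)].sum(axis=0)

        def decode(signal, threshold=0.5):
            """Correlate with each template; a large positive score indicates a detection."""
            scores = H @ signal / H.shape[1]
            return [ch for ch, s in enumerate(scores) if s > threshold]

        mixed = encode({0, 2})      # particles in channels 0 and 2 overlap in time
        print(decode(mixed))        # -> [0, 2]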

  13. Scalable Manufacturing of Plasmonic Nanodisk Dimers and Cusp Nanostructures using Salting-out Quenching Method and Colloidal Lithography

    PubMed Central

    Juluri, Bala Krishna; Chaturvedi, Neetu; Hao, Qingzhen; Lu, Mengqian; Velegol, Darrell; Jensen, Lasse; Huang, Tony Jun

    2014-01-01

    Localization of large electric fields in plasmonic nanostructures enables various processes such as single molecule detection, higher harmonic light generation, and control of molecular fluorescence and absorption. High-throughput, simple nanofabrication techniques are essential for implementing plasmonic nanostructures with large electric fields for practical applications. In this article we demonstrate a scalable, rapid, and inexpensive fabrication method based on the salting-out quenching technique and colloidal lithography for the fabrication of two types of nanostructures with large electric field: nanodisk dimers and cusp nanostructures. Our technique relies on fabricating polystyrene doublets from single beads by controlled aggregation and later using them as soft masks to fabricate metal nanodisk dimers and nanocusp structures. Both of these structures have a well-defined geometry for the localization of large electric fields comparable to structures fabricated by conventional nanofabrication techniques. We also show that various parameters in the fabrication process can be adjusted to tune the geometry of the final structures and control their plasmonic properties. With advantages in throughput, cost, and geometric tunability, our fabrication method can be valuable in many applications that require plasmonic nanostructures with large electric fields. PMID:21692473

  14. Aqueous-Processed, High-Capacity Electrodes for Membrane Capacitive Deionization.

    PubMed

    Jain, Amit; Kim, Jun; Owoseni, Oluwaseye M; Weathers, Cierra; Caña, Daniel; Zuo, Kuichang; Walker, W Shane; Li, Qilin; Verduzco, Rafael

    2018-05-15

    Membrane capacitive deionization (MCDI) is a low-cost technology for desalination. Typically, MCDI electrodes are fabricated using a slurry of nanoparticles in an organic solvent along with polyvinylidene fluoride (PVDF) polymeric binder. Recent studies of the environmental impact of CDI have pointed to the organic solvents used in the fabrication of CDI electrodes as key contributors to the overall environmental impact of the technology. Here, we report a scalable, aqueous processing approach to prepare MCDI electrodes using the water-soluble polymer poly(vinyl alcohol) (PVA) as a binder and ion-exchange polymer. Electrodes are prepared by depositing an aqueous slurry of activated carbon and PVA binder, followed by coating with a thin layer of PVA-based cation- or anion-exchange polymer. When coated with ion-exchange layers, the PVA-bound electrodes exhibit salt adsorption capacities up to 14.4 mg/g and charge efficiencies up to 86.3%, higher than typically achieved for activated carbon electrodes with a hydrophobic polymer binder and ion-exchange membranes (5-13 mg/g). Furthermore, when paired with low-resistance commercial ion-exchange membranes, salt adsorption capacities exceed 18 mg/g. Our overall approach demonstrates a simple, environmentally friendly, cost-effective, and scalable method for the fabrication of high-capacity MCDI electrodes.

  15. A molecular quantum spin network controlled by a single qubit.

    PubMed

    Schlipf, Lukas; Oeckinghaus, Thomas; Xu, Kebiao; Dasari, Durga Bhaktavatsala Rao; Zappe, Andrea; de Oliveira, Felipe Fávaro; Kern, Bastian; Azarkh, Mykhailo; Drescher, Malte; Ternes, Markus; Kern, Klaus; Wrachtrup, Jörg; Finkler, Amit

    2017-08-01

    Scalable quantum technologies require an unprecedented combination of precision and complexity for designing stable structures of well-controllable quantum systems on the nanoscale. It is a challenging task to find a suitable elementary building block from which a quantum network can be composed in a scalable way. We present the working principle of such a basic unit, engineered using molecular chemistry, whose collective control and readout are executed using a nitrogen vacancy (NV) center in diamond. The basic unit we investigate is a synthetic polyproline with electron spins localized on attached molecular side groups separated by a few nanometers. We demonstrate the collective readout and coherent manipulation of very few (≤ 6) of these S = 1/2 electronic spin systems and access their direct dipolar coupling tensor. Our results show that it is feasible to use spin-labeled peptides as a resource for a molecular qubit-based network, while at the same time providing simple optical readout of single quantum states through NV magnetometry. This work lays the foundation for building arbitrary quantum networks using well-established chemistry methods, which has many applications ranging from mapping distances in single molecules to quantum information processing.

  16. Soil CO2 flux from three ecosystems in tropical peatland of Sarawak, Malaysia

    NASA Astrophysics Data System (ADS)

    Melling, Lulie; Hatano, Ryusuke; Goh, Kah Joo

    2005-02-01

    Soil CO2 flux was measured monthly over a year from tropical peatland of Sarawak, Malaysia using a closed-chamber technique. The soil CO2 flux ranged from 100 to 533 mg C m-2 h-1 for the forest ecosystem, 63 to 245 mg C m-2 h-1 for the sago and 46 to 335 mg C m-2 h-1 for the oil palm. Based on principal component analysis (PCA), the environmental variables over all sites could be classified into three components, namely, climate, soil moisture and soil bulk density, which accounted for 86% of the seasonal variability. A regression tree approach showed that CO2 flux in each ecosystem was related to different underlying environmental factors. They were relative humidity for forest, soil temperature at 5 cm for sago and water-filled pore space for oil palm. On an annual basis, the soil CO2 flux was highest in the forest ecosystem with an estimated production of 2.1 kg C m-2 yr-1 followed by oil palm at 1.5 kg C m-2 yr-1 and sago at 1.1 kg C m-2 yr-1. The different dominant controlling factors in CO2 flux among the studied ecosystems suggested that land use affected the exchange of CO2 between tropical peatland and the atmosphere.
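
    For readers unfamiliar with the analysis pattern used above (principal component analysis on the environmental variables, then a regression tree relating CO2 flux to them), the sketch below shows the general workflow with randomly generated placeholder data, not the study's measurements; the variable names are illustrative only.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(0)
        # Columns stand in for relative humidity, soil temperature at 5 cm, water-filled pore space.
        env = rng.normal(size=(60, 3))
        flux = 200 + 80 * env[:, 1] + 40 * env[:, 2] + rng.normal(scale=20, size=60)

        pca = PCA(n_components=3).fit(env)
        print("variance explained:", pca.explained_variance_ratio_)

        tree = DecisionTreeRegressor(max_depth=2).fit(env, flux)
        print("variable importances:", tree.feature_importances_)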

  17. MATLAB tools for improved characterization and quantification of volcanic incandescence in Webcam imagery; applications at Kilauea Volcano, Hawai'i

    USGS Publications Warehouse

    Patrick, Matthew R.; Kauahikaua, James P.; Antolik, Loren

    2010-01-01

    Webcams are now standard tools for volcano monitoring and are used at observatories in Alaska, the Cascades, Kamchatka, Hawai'i, Italy, and Japan, among other locations. Webcam images allow invaluable documentation of activity and provide a powerful comparative tool for interpreting other monitoring datastreams, such as seismicity and deformation. Automated image processing can improve the time efficiency and rigor of Webcam image interpretation, and potentially extract more information on eruptive activity. For instance, Lovick and others (2008) provided a suite of processing tools that performed such tasks as noise reduction, eliminating uninteresting images from an image collection, and detecting incandescence, with an application to dome activity at Mount St. Helens during 2007. In this paper, we present two very simple automated approaches for improved characterization and quantification of volcanic incandescence in Webcam images at Kilauea Volcano, Hawai'i. The techniques are implemented in MATLAB (version 2009b, Copyright: The Mathworks, Inc.) to take advantage of the ease of matrix operations. Incandescence is a useful indicator of the location and extent of active lava flows and also a potentially powerful proxy for activity levels at open vents. We apply our techniques to a period covering both summit and east rift zone activity at Kilauea during 2008–2009 and compare the results to complementary datasets (seismicity, tilt) to demonstrate their integrative potential. A great strength of this study is the demonstrated success of these tools in an operational setting at the Hawaiian Volcano Observatory (HVO) over the course of more than a year. Although applied only to Webcam images here, the techniques could be applied to any type of sequential images, such as time-lapse photography. We expect that these tools are applicable to many other volcano monitoring scenarios, and the two MATLAB scripts, as they are implemented at HVO, are included in the appendixes. These scripts would require minor to moderate modifications for use elsewhere, primarily to customize directory navigation. If the user has some familiarity with MATLAB, or programming in general, these modifications should be easy. Although we originally anticipated needing the Image Processing Toolbox, the scripts in the appendixes do not require it. Thus, only the base installation of MATLAB is needed. Because fairly basic MATLAB functions are used, we expect that the script can be run successfully by versions earlier than 2009b.
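
    The HVO scripts themselves are MATLAB and are reproduced in the report's appendixes; purely as an illustration of the underlying idea, the Python/numpy sketch below counts bright, red-dominated "incandescent" pixels in a frame, a number that can be tracked through an image sequence as an activity proxy (the thresholds and synthetic frame are hypothetical).

        import numpy as np

        def incandescent_pixel_count(rgb_frame, red_min=200, blue_max=120):
            """rgb_frame: H x W x 3 uint8 array; thresholds are illustrative only."""
            r = rgb_frame[..., 0].astype(int)
            b = rgb_frame[..., 2].astype(int)
            mask = (r >= red_min) & (b <= blue_max)      # bright and red-dominated
            return int(mask.sum())

        # Synthetic frame with a small "glowing" patch.
        frame = np.zeros((240, 320, 3), dtype=np.uint8)
        frame[100:110, 150:160] = (255, 140, 40)
        print(incandescent_pixel_count(frame))           # -> 100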

  18. Implementing a distributed intranet-based information system.

    PubMed

    O'Kane, K C; McColligan, E E; Davis, G A

    1996-11-01

    The article discusses Internet and intranet technologies and describes how to install an intranet-based information system using the Merle language facility and other readily available components. Merle is a script language designed to support decentralized medical record information retrieval applications on the World Wide Web. The goal of this work is to provide a script language tool to facilitate construction of efficient, fully functional, multipoint medical record information systems that can be accessed anywhere by low-cost Web browsers to search, retrieve, and analyze patient information. The language allows legacy MUMPS applications to function in a Web environment and to make use of the Web graphical, sound, and video presentation services. It also permits downloading of script applets for execution on client browsers, and it can be used in standalone mode with the Unix, Windows 95, Windows NT, and OS/2 operating systems.

  19. snpTree--a web-server to identify and construct SNP trees from whole genome sequence data.

    PubMed

    Leekitcharoenphon, Pimlapas; Kaas, Rolf S; Thomsen, Martin Christen Frølund; Friis, Carsten; Rasmussen, Simon; Aarestrup, Frank M

    2012-01-01

    The advances in and decreasing cost of whole genome sequencing (WGS) will soon make this technology available for routine infectious disease epidemiology. In epidemiological studies, outbreak isolates have very little diversity and require extensive genomic analysis to differentiate and classify isolates. One of the most successful and broadly used methods is the analysis of single nucleotide polymorphisms (SNPs). Currently, there are different tools and methods to identify SNPs, including various options and cut-off values. Furthermore, all current methods require bioinformatic skills. Thus, we lack a standard and simple automatic tool to determine SNPs and construct phylogenetic trees from WGS data. Here we introduce snpTree, a server for online, automatic SNP analysis. This tool is composed of different SNP analysis suites and Perl and Python scripts. snpTree can identify SNPs and construct phylogenetic trees from WGS as well as from assembled genomes or contigs. WGS data in fastq format are aligned to reference genomes by BWA, while contigs in fasta format are processed by Nucmer. SNPs are concatenated based on their position on the reference genome, and a tree is constructed from the concatenated SNPs using FastTree and a Perl script. The online server was implemented with HTML, Java and Python scripts. The server was evaluated using four published bacterial WGS data sets (V. cholerae, S. aureus CC398, S. Typhimurium and M. tuberculosis). The evaluation results for the first three cases were consistent and concordant for both raw reads and assembled genomes. In the latter case the original publication involved extensive filtering of SNPs, which could not be repeated using snpTree. The snpTree server is an easy-to-use option for rapid, standardised and automatic SNP analysis in epidemiological studies, also for users with limited bioinformatic experience. The web server is freely accessible at http://www.cbs.dtu.dk/services/snpTree-1.0/.
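
    A hedged sketch of one step in the pipeline described above, namely concatenating per-isolate SNP calls by reference position into a multi-FASTA alignment that a tree builder such as FastTree could consume; the SNP calls, reference bases and file name are hypothetical, and snpTree's own BWA/Nucmer/Perl glue is not reproduced.

        snp_calls = {                          # isolate -> {reference position: base}
            "isolate_A": {1042: "T", 5310: "G"},
            "isolate_B": {1042: "C", 8777: "A"},
            "isolate_C": {5310: "G", 8777: "A"},
        }
        reference = {1042: "C", 5310: "T", 8777: "G"}    # reference base at each variant site

        positions = sorted(reference)
        with open("concatenated_snps.fasta", "w") as out:
            for isolate, calls in snp_calls.items():
                seq = "".join(calls.get(pos, reference[pos]) for pos in positions)
                out.write(f">{isolate}\n{seq}\n")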

  20. An interactive HTML ocean nowcast GUI based on Perl and JavaScript

    NASA Astrophysics Data System (ADS)

    Sakalaukus, Peter J.; Fox, Daniel N.; Louise Perkins, A.; Smedstad, Lucy F.

    1999-02-01

    We describe the use of Hyper Text Markup Language (HTML), JavaScript code, and Perl I/O to create and validate forms in an Internet-based graphical user interface (GUI) for the Naval Research Laboratory (NRL) Ocean models and Assimilation Demonstration System (NOMADS). The resulting nowcast system can be operated from any compatible browser across the Internet, for although the GUI was prepared in a Netscape browser, it used no Netscape extensions. Code available at: http://www.iamg.org/CGEditor/index.htm

  1. Management-focused approach to investigating coastal water-quality drivers and impacts in the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Vigouroux, G.; Destouni, G.; Chen, Y.; Bring, A.; Jönsson, A.; Cvetkovic, V.

    2017-12-01

    Coastal areas link human-driven conditions on land with open sea conditions, and include crucial and vulnerable ecosystems that provide a variety of ecosystem services. Eutrophication is a common problem that is not least observed in the Baltic Sea, where coastal water quality is influenced both by land-based nutrient loading and by partly eutrophic open sea conditions. Robust and adaptive management of coastal systems is essential and necessitates integration of large-scale catchment-coastal-marine systems as well as consideration of anthropogenic drivers and impacts, and climate change. To address this coastal challenge, relevant methodological approaches are required for characterization of coupled land, local coastal, and open sea conditions under an adaptive management framework for water quality. In this paper we present a new general and scalable dynamic characterization approach, developed for and applied to the Baltic Sea and its coastal areas. A simple carbon-based water quality model is implemented, dividing the Baltic Sea into main management basins that are linked to corresponding hydrological catchments on land, as well as to each other through aggregated three-dimensional marine hydrodynamics. Relevant hydrodynamic variables and associated water quality results have been validated on the Baltic Sea scale and show good accordance with available observation data and other modelling approaches. Based on its scalability, this methodology is further used on the coastal zone scale to investigate the effects of hydrodynamic, hydro-climatic and nutrient load drivers on water quality and management implications for coastal areas in the Baltic Sea.
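
    The sketch below is not the authors' model; it only illustrates, with two hypothetical basins and made-up loads, exchange flows and loss rate, the kind of simple linked-basin mass balance that such a scalable carbon-based water-quality formulation builds on.

        import numpy as np

        V = np.array([4.0e11, 1.5e11])        # basin volumes (m^3), illustrative
        load = np.array([2.0e6, 0.5e6])       # land-based input (kg/day)
        Q_ex = 5.0e8                          # exchange flow between the basins (m^3/day)
        k = 0.002                             # first-order loss rate (1/day)
        C = np.array([0.05, 0.03])            # initial concentrations (kg/m^3)

        dt = 1.0                              # time step (days)
        for _ in range(365):                  # one year, forward Euler
            exchange = Q_ex * (C[::-1] - C)   # mixing between the two basins
            C = C + dt * (load / V + exchange / V - k * C)
        print(C)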

  2. Visual Analysis of North Atlantic Hurricane Trends Using Parallel Coordinates and Statistical Techniques

    DTIC Science & Technology

    2008-07-07

    The system for analyzing multivariate data sets was developed using the Java Development Kit (JDK) version 1.5 and yields interactive performance. A script captures output from MATLAB's "regress" and "stepwisefit" utilities, which perform simple and stepwise regression, respectively.

  3. Impact of packet losses in scalable 3D holoscopic video coding

    NASA Astrophysics Data System (ADS)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2014-05-01

    Holoscopic imaging became a prospective glassless 3D technology to provide more natural 3D viewing experiences to the end user. Additionally, holoscopic systems also allow new post-production degrees of freedom, such as controlling the plane of focus or the viewing angle presented to the user. However, to successfully introduce this technology into the consumer market, a display scalable coding approach is essential to achieve backward compatibility with legacy 2D and 3D displays. Moreover, to effectively transmit 3D holoscopic content over error-prone networks, e.g., wireless networks or the Internet, error resilience techniques are required to mitigate the impact of data impairments in the user quality perception. Therefore, it is essential to deeply understand the impact of packet losses in terms of decoding video quality for the specific case of 3D holoscopic content, notably when a scalable approach is used. In this context, this paper studies the impact of packet losses when using a three-layer display scalable 3D holoscopic video coding architecture previously proposed, where each layer represents a different level of display scalability (i.e., L0 - 2D, L1 - stereo or multiview, and L2 - full 3D holoscopic). For this, a simple error concealment algorithm is used, which makes use of inter-layer redundancy between multiview and 3D holoscopic content and the inherent correlation of the 3D holoscopic content to estimate lost data. Furthermore, a study of the influence of 2D views generation parameters used in lower layers on the performance of the used error concealment algorithm is also presented.

  4. Intranasal Oxytocin Affects Amygdala Functional Connectivity after Trauma Script-Driven Imagery in Distressed Recently Trauma-Exposed Individuals.

    PubMed

    Frijling, Jessie L; van Zuiden, Mirjam; Koch, Saskia B J; Nawijn, Laura; Veltman, Dick J; Olff, Miranda

    2016-04-01

    Approximately 10% of trauma-exposed individuals go on to develop post-traumatic stress disorder (PTSD). Neural emotion regulation may be etiologically involved in PTSD development. Oxytocin administration early post-trauma may be a promising avenue for PTSD prevention, as intranasal oxytocin has previously been found to affect emotion regulation networks in healthy individuals and psychiatric patients. In a randomized double-blind placebo-controlled between-subjects functional magnetic resonance imaging (fMRI) study, we assessed the effects of a single intranasal oxytocin administration (40 IU) on seed-based amygdala resting-state functional connectivity (FC) with emotion regulation areas (ventromedial prefrontal cortex (vmPFC), ventrolateral prefrontal cortex (vlPFC)), and salience processing areas (insula, dorsal anterior cingulate cortex (dACC)) in 37 individuals within 11 days post trauma. Two resting-state scans were acquired; one after neutral- and one after trauma-script-driven imagery. We found that oxytocin administration reduced amygdala-left vlPFC FC after trauma script-driven imagery, compared with neutral script-driven imagery, whereas in placebo-treated participants enhanced amygdala-left vlPFC FC was observed following trauma script-driven imagery. Irrespective of script condition, oxytocin increased amygdala-insula FC and decreased amygdala-vmPFC FC. These neural effects were accompanied by lower levels of sleepiness and higher flashback intensity in the oxytocin group after the trauma script. Together, our findings show that oxytocin administration may impede emotion regulation network functioning in response to trauma reminders in recently trauma-exposed individuals. Therefore, caution may be warranted in administering oxytocin to prevent PTSD in distressed, recently trauma-exposed individuals.

  5. Centralized Fabric Management Using Puppet, Git, and GLPI

    NASA Astrophysics Data System (ADS)

    Smith, Jason A.; De Stefano, John S., Jr.; Fetzko, John; Hollowell, Christopher; Ito, Hironori; Karasawa, Mizuki; Pryor, James; Rao, Tejas; Strecker-Kellogg, William

    2012-12-01

    Managing the infrastructure of a large and complex data center can be extremely difficult without taking advantage of recent technological advances in administrative automation. Puppet is a seasoned open-source tool that is designed for enterprise class centralized configuration management. At the RHIC and ATLAS Computing Facility (RACF) at Brookhaven National Laboratory, we use Puppet along with Git, GLPI, and some custom scripts as part of our centralized configuration management system. In this paper, we discuss how we use these tools for centralized configuration management of our servers and services, change management requiring authorized approval of production changes, a complete version controlled history of all changes made, separation of production, testing and development systems using puppet environments, semi-automated server inventory using GLPI, and configuration change monitoring and reporting using the Puppet dashboard. We will also discuss scalability and performance results from using these tools on a 2,000+ node cluster and 400+ infrastructure servers with an administrative staff of approximately 25 full-time employees (FTEs).

  6. Addressing scalability while feature requests persist. A look at NASA Worldview's new features and their implementation.

    NASA Astrophysics Data System (ADS)

    King, B. A.

    2017-12-01

    Worldview is a high-traffic web mapping application created using the JavaScript mapping library OpenLayers. This presentation will primarily focus on three new features: a wrapping component that seamlessly shows satellite imagery across the dateline, where most maps either stop or wrap around to imagery of the same date; an animation feature that allows users to select date ranges over which they can animate; and an A/B comparison feature that gives users the power to compare imagery between dates and layers. In response to an increasingly large codebase caused by ongoing feature requests, Worldview is transitioning to a smaller core codebase composed of external reusable modules. When creating a module with the intention of having someone else reuse it for a different task, one inherently starts generating code that is easier to read and easier to maintain. This presentation will show demos of these features and cover development techniques used to create them.

  7. A GPU accelerated PDF transparency engine

    NASA Astrophysics Data System (ADS)

    Recker, John; Lin, I.-Jong; Tastl, Ingeborg

    2011-01-01

    As commercial printing presses become faster, cheaper and more efficient, so too must the Raster Image Processors (RIP) that prepare data for them to print. Digital press RIPs, however, have been challenged to on the one hand meet the ever increasing print performance of the latest digital presses, and on the other hand process increasingly complex documents with transparent layers and embedded ICC profiles. This paper explores the challenges encountered when implementing a GPU accelerated driver for the open source Ghostscript Adobe PostScript and PDF language interpreter targeted at accelerating PDF transparency for high speed commercial presses. It further describes our solution, including an image memory manager for tiling input and output images and documents, a PDF compatible multiple image layer blending engine, and a GPU accelerated ICC v4 compatible color transformation engine. The result, we believe, is the foundation for a scalable, efficient, distributed RIP system that can meet current and future RIP requirements for a wide range of commercial digital presses.

  8. Massively parallel and linear-scaling algorithm for second-order Møller-Plesset perturbation theory applied to the study of supramolecular wires

    NASA Astrophysics Data System (ADS)

    Kjærgaard, Thomas; Baudin, Pablo; Bykov, Dmytro; Eriksen, Janus Juul; Ettenhuber, Patrick; Kristensen, Kasper; Larkin, Jeff; Liakh, Dmitry; Pawłowski, Filip; Vose, Aaron; Wang, Yang Min; Jørgensen, Poul

    2017-03-01

    We present a scalable cross-platform hybrid MPI/OpenMP/OpenACC implementation of the Divide-Expand-Consolidate (DEC) formalism with portable performance on heterogeneous HPC architectures. The Divide-Expand-Consolidate formalism is designed to reduce the steep computational scaling of conventional many-body methods employed in electronic structure theory to linear scaling, while providing a simple mechanism for controlling the error introduced by this approximation. Our massively parallel implementation of this general scheme has three levels of parallelism, being a hybrid of the loosely coupled task-based parallelization approach and the conventional MPI +X programming model, where X is either OpenMP or OpenACC. We demonstrate strong and weak scalability of this implementation on heterogeneous HPC systems, namely on the GPU-based Cray XK7 Titan supercomputer at the Oak Ridge National Laboratory. Using the "resolution of the identity second-order Møller-Plesset perturbation theory" (RI-MP2) as the physical model for simulating correlated electron motion, the linear-scaling DEC implementation is applied to 1-aza-adamantane-trione (AAT) supramolecular wires containing up to 40 monomers (2440 atoms, 6800 correlated electrons, 24 440 basis functions and 91 280 auxiliary functions). This represents the largest molecular system treated at the MP2 level of theory, demonstrating an efficient removal of the scaling wall pertinent to conventional quantum many-body methods.

  9. Technical development of PubMed Interact: an improved interface for MEDLINE/PubMed searches

    PubMed Central

    Muin, Michael; Fontelo, Paul

    2006-01-01

    Background The project aims to create an alternative search interface for MEDLINE/PubMed that may provide assistance to the novice user and added convenience to the advanced user. An earlier version of the project was the 'Slider Interface for MEDLINE/PubMed searches' (SLIM) which provided JavaScript slider bars to control search parameters. In this new version, recent developments in Web-based technologies were implemented. These changes may prove to be even more valuable in enhancing user interactivity through client-side manipulation and management of results. Results PubMed Interact is a Web-based MEDLINE/PubMed search application built with HTML, JavaScript and PHP. It is implemented on a Windows Server 2003 with Apache 2.0.52, PHP 4.4.1 and MySQL 4.1.18. PHP scripts provide the backend engine that connects with E-Utilities and parses XML files. JavaScript manages client-side functionalities and converts Web pages into interactive platforms using dynamic HTML (DHTML), Document Object Model (DOM) tree manipulation and Ajax methods. With PubMed Interact, users can limit searches with JavaScript slider bars, preview result counts, delete citations from the list, display and add related articles and create relevance lists. Many interactive features occur at client-side, which allow instant feedback without reloading or refreshing the page resulting in a more efficient user experience. Conclusion PubMed Interact is a highly interactive Web-based search application for MEDLINE/PubMed that explores recent trends in Web technologies like DOM tree manipulation and Ajax. It may become a valuable technical development for online medical search applications. PMID:17083729
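
    PubMed Interact itself is built with PHP and JavaScript; the Python sketch below only illustrates the kind of E-Utilities round trip its backend performs, sending a query to NCBI's ESearch endpoint and pulling the result count and PMIDs out of the returned XML (the search term is arbitrary).

        import urllib.parse, urllib.request
        import xml.etree.ElementTree as ET

        def esearch(term, retmax=5):
            params = urllib.parse.urlencode({"db": "pubmed", "term": term, "retmax": retmax})
            url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
            with urllib.request.urlopen(url) as resp:
                root = ET.parse(resp).getroot()
            count = int(root.findtext("Count"))
            pmids = [e.text for e in root.findall(".//Id")]
            return count, pmids

        print(esearch("scalable script-based processing"))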

  10. From Provenance Standards and Tools to Queries and Actionable Provenance

    NASA Astrophysics Data System (ADS)

    Ludaescher, B.

    2017-12-01

    The W3C PROV standard provides a minimal core for sharing retrospective provenance information for scientific workflows and scripts. PROV extensions such as DataONE's ProvONE model are necessary for linking runtime observables in retrospective provenance records with conceptual-level prospective provenance information, i.e., workflow (or dataflow) graphs. Runtime provenance recorders, such as DataONE's RunManager for R or noWorkflow for Python, capture retrospective provenance automatically. YesWorkflow (YW) is a toolkit that allows researchers to declare high-level prospective provenance models of scripts via simple inline comments (YW-annotations), revealing the computational modules and dataflow dependencies in the script. By combining and linking both forms of provenance, important queries and use cases can be supported that neither provenance model can afford on its own. We present existing and emerging provenance tools developed for the DataONE and SKOPE (Synthesizing Knowledge of Past Environments) projects. We show how the different tools can be used individually and in combination to model, capture, share, query, and visualize provenance information. We also present challenges and opportunities for making provenance information more immediately actionable for the researchers who create it in the first place. We argue that such a shift towards "provenance-for-self" is necessary to accelerate the creation, sharing, and use of provenance in support of transparent, reproducible computational and data science.
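
    To make the idea of YW-annotations concrete, here is a small made-up Python script marked up with YesWorkflow-style inline comments (the @begin/@in/@out/@end tags follow YW's documented convention; the script, its block names and its file names are hypothetical and not from the DataONE or SKOPE projects).

        # @begin clean_and_summarize
        # @in raw_csv
        # @out summary_txt
        import csv
        from collections import defaultdict

        # @begin load_and_average
        # @in raw_csv
        # @out yearly_means
        yearly = defaultdict(list)
        with open("raw_temperatures.csv") as fh:           # hypothetical input file
            for row in csv.DictReader(fh):
                yearly[row["year"]].append(float(row["temp_c"]))
        yearly_means = {y: sum(v) / len(v) for y, v in yearly.items()}
        # @end load_and_average

        # @begin write_summary
        # @in yearly_means
        # @out summary_txt
        with open("mean_by_year.txt", "w") as out:
            for year in sorted(yearly_means):
                out.write(f"{year}\t{yearly_means[year]:.2f}\n")
        # @end write_summary
        # @end clean_and_summarize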

  11. Parallel, Distributed Scripting with Python

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, P J

    2002-05-24

    Parallel computers used to be, for the most part, one-of-a-kind systems which were extremely difficult to program portably. With SMP architectures, the advent of the POSIX thread API and OpenMP gave developers ways to portably exploit on-the-box shared memory parallelism. Since these architectures didn't scale cost-effectively, distributed memory clusters were developed. The associated MPI message passing libraries gave these systems a portable paradigm too. Having programmers effectively use this paradigm is a somewhat different question. Distributed data has to be explicitly transported via the messaging system in order for it to be useful. In high level languages, the MPI library gives access to data distribution routines in C, C++, and FORTRAN. But we need more than that. Many reasonable and common tasks are best done in (or as extensions to) scripting languages. Consider sysadm tools such as password crackers, file purgers, etc ... These are simple to write in a scripting language such as Python (an open source, portable, and freely available interpreter). But these tasks beg to be done in parallel. Consider a password checker that checks an encrypted password against a 25,000 word dictionary. This can take around 10 seconds in Python (6 seconds in C). It is trivial to parallelize if you can distribute the information and co-ordinate the work.
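
    A minimal mpi4py sketch of the dictionary-check example mentioned above; the file names are hypothetical, and SHA-256 hashing stands in for Unix crypt to keep the sketch portable. Each rank checks its own slice of the word list, and rank 0 gathers the matches (run with, e.g., mpiexec -n 4 python check_password.py).

        import hashlib
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        if rank == 0:
            words = [w.strip() for w in open("dictionary.txt")]        # hypothetical inputs
            target = open("target_hash.txt").read().strip()
        else:
            words, target = None, None
        words = comm.bcast(words, root=0)
        target = comm.bcast(target, root=0)

        # Round-robin slice of the dictionary for this rank.
        hits = [w for w in words[rank::size]
                if hashlib.sha256(w.encode()).hexdigest() == target]

        all_hits = comm.gather(hits, root=0)
        if rank == 0:
            print("matches:", [w for part in all_hits for w in part])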

  12. Optimization of Answer Keys for Script Concordance Testing: Should We Exclude Deviant Panelists, Deviant Responses, or Neither?

    ERIC Educational Resources Information Center

    Gagnon, Robert; Lubarsky, Stuart; Lambert, Carole; Charlin, Bernard

    2011-01-01

    The Script Concordance Test (SCT) uses a panel-based, aggregate scoring method that aims to capture the variability of responses of experienced practitioners to particular clinical situations. The use of this type of scoring method is a key determinant of the tool's discriminatory power, but deviant answers could potentially diminish the…

  13. Semantic Memory Organization in Young Children: The Script-Based Categorization of Early Words.

    ERIC Educational Resources Information Center

    Maaka, Margaret J.; Wong, Eddie K.

    This study examined whether scripts provide a basis for the categories preschool children use to structure their semantic memories and whether the use of taxonomies to structure memory becomes more common only after children enter elementary school. Subjects were 108 children in three equal groups of 18 boys and 18 girls children each of 4-, 5-,…

  14. Bilingual Writing as an Act of Identity: Sign-Making in Multiple Scripts

    ERIC Educational Resources Information Center

    Kabuto, Bobbie

    2010-01-01

    This article explores early bilingual script writing as an act of identity. Using multiple theoretical perspectives related to social semiotics and social constructivist perspectives on identity and writing, the research presented in this article is based on a case study of an early biliterate learner of Japanese and English from the ages of 3-7.…

  15. Autonomic correlates of physical and moral disgust.

    PubMed

    Ottaviani, Cristina; Mancini, Francesco; Petrocchi, Nicola; Medea, Barbara; Couyoumdjian, Alessandro

    2013-07-01

    Given that the hypothesis of a common origin of physical and moral disgust has received sparse empirical support, this study aimed to shed light on the subjective and autonomic signatures of these two facets of the same emotional response. Participants (20 men, 20 women) were randomly assigned to physical or moral disgust induction by the use of audio scripts while their electrocardiogram was continuously recorded. Affect ratings were obtained before and after the induction. Time and frequency domain heart rate variability (HRV) measures were obtained. After controlling for disgust sensitivity (DS-R) and obsessive-compulsive (OCI-R) tendencies, both scripts elicited disgust, but whereas the physical script elicited a feeling of dirtiness, the moral script evoked more indignation and contempt. The disgust-induced subjective responses were associated with opposite patterns of autonomic reactivity: enhanced activity of the parasympathetic nervous system without concurrent changes in heart rate (HR) for physical disgust and decreased vagal tone and increased HR and autonomic imbalance for moral disgust. Results suggest that immorality relies on the same biological root as physical disgust only in subjects with obsessive compulsive tendencies. Disgust appears to be a heterogeneous response that varies based on the individuals' contamination-based appraisal.

  16. Rapid Production of Internally Structured Colloids by Flash Nanoprecipitation of Block Copolymer Blends.

    PubMed

    Grundy, Lorena S; Lee, Victoria E; Li, Nannan; Sosa, Chris; Mulhearn, William D; Liu, Rui; Register, Richard A; Nikoubashman, Arash; Prud'homme, Robert K; Panagiotopoulos, Athanassios Z; Priestley, Rodney D

    2018-05-08

    Colloids with internally structured geometries have shown great promise in applications ranging from biosensors to optics to drug delivery, where the internal particle structure is paramount to performance. The growing demand for such nanomaterials necessitates the development of a scalable processing platform for their production. Flash nanoprecipitation (FNP), a rapid and inherently scalable colloid precipitation technology, is used to prepare internally structured colloids from blends of block copolymers and homopolymers. As revealed by a combination of experiments and simulations, colloids prepared from different molecular weight diblock copolymers adopt either an ordered lamellar morphology consisting of concentric shells or a disordered lamellar morphology when chain dynamics are sufficiently slow to prevent defect annealing during solvent exchange. Blends of homopolymer and block copolymer in the feed stream generate more complex internally structured colloids, such as those with hierarchically structured Janus and patchy morphologies, due to additional phase separation and kinetic trapping effects. The ability of the FNP process to generate such a wide range of morphologies using a simple and scalable setup provides a pathway to manufacturing internally structured colloids on an industrial scale.

  17. mGrid: A load-balanced distributed computing environment for the remote execution of the user-defined Matlab code

    PubMed Central

    Karpievitch, Yuliya V; Almeida, Jonas S

    2006-01-01

    Background Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. Results mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Conclusion Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows for it to be easily extensible over the Internet. PMID:16539707

  18. mGrid: a load-balanced distributed computing environment for the remote execution of the user-defined Matlab code.

    PubMed

    Karpievitch, Yuliya V; Almeida, Jonas S

    2006-03-15

    Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows for it to be easily extensible over the Internet.

  19. Orthographic Transparency Enhances Morphological Segmentation in Children Reading Hebrew Words.

    PubMed

    Haddad, Laurice; Weiss, Yael; Katzir, Tami; Bitan, Tali

    2017-01-01

    Morphological processing of derived words develops simultaneously with reading acquisition. However, the reader's engagement in morphological segmentation may depend on the language morphological richness and orthographic transparency, and the readers' reading skills. The current study tested the common idea that morphological segmentation is enhanced in non-transparent orthographies to compensate for the absence of phonological information. Hebrew's rich morphology and the dual version of the Hebrew script (with and without diacritic marks) provides an opportunity to study the interaction of orthographic transparency and morphological segmentation on the development of reading skills in a within-language design. Hebrew speaking 2nd ( N = 27) and 5th ( N = 29) grade children read aloud 96 noun words. Half of the words were simple mono-morphemic words and half were bi-morphemic derivations composed of a productive root and a morphemic pattern. In each list half of the words were presented in the transparent version of the script (with diacritic marks), and half in the non-transparent version (without diacritic marks). Our results show that in both groups, derived bi-morphemic words were identified more accurately than mono-morphemic words, but only for the transparent, pointed, script. For the un-pointed script the reverse was found, namely, that bi-morphemic words were read less accurately than mono-morphemic words, especially in second grade. Second grade children also read mono-morphemic words faster than bi-morphemic words. Finally, correlations with a standardized measure of morphological awareness were found only for second grade children, and only in bi-morphemic words. These results, showing greater morphological effects in second grade compared to fifth grade children suggest that for children raised in a language with a rich morphology, common and easily segmented morphemic units may be more beneficial for younger compared to older readers. Moreover, in contrast to the common hypothesis, our results show that morphemic segmentation does not compensate for the missing phonological information in a non-transparent orthography, but rather that morphological segmentation is most beneficial in the highly transparent script. These results are consistent with the idea that morphological and phonological segmentation processes occur simultaneously and do not constitute alternative pathways to visual word recognition.

  20. Orthographic Transparency Enhances Morphological Segmentation in Children Reading Hebrew Words

    PubMed Central

    Haddad, Laurice; Weiss, Yael; Katzir, Tami; Bitan, Tali

    2018-01-01

    Morphological processing of derived words develops simultaneously with reading acquisition. However, the reader’s engagement in morphological segmentation may depend on the language morphological richness and orthographic transparency, and the readers’ reading skills. The current study tested the common idea that morphological segmentation is enhanced in non-transparent orthographies to compensate for the absence of phonological information. Hebrew’s rich morphology and the dual version of the Hebrew script (with and without diacritic marks) provides an opportunity to study the interaction of orthographic transparency and morphological segmentation on the development of reading skills in a within-language design. Hebrew speaking 2nd (N = 27) and 5th (N = 29) grade children read aloud 96 noun words. Half of the words were simple mono-morphemic words and half were bi-morphemic derivations composed of a productive root and a morphemic pattern. In each list half of the words were presented in the transparent version of the script (with diacritic marks), and half in the non-transparent version (without diacritic marks). Our results show that in both groups, derived bi-morphemic words were identified more accurately than mono-morphemic words, but only for the transparent, pointed, script. For the un-pointed script the reverse was found, namely, that bi-morphemic words were read less accurately than mono-morphemic words, especially in second grade. Second grade children also read mono-morphemic words faster than bi-morphemic words. Finally, correlations with a standardized measure of morphological awareness were found only for second grade children, and only in bi-morphemic words. These results, showing greater morphological effects in second grade compared to fifth grade children suggest that for children raised in a language with a rich morphology, common and easily segmented morphemic units may be more beneficial for younger compared to older readers. Moreover, in contrast to the common hypothesis, our results show that morphemic segmentation does not compensate for the missing phonological information in a non-transparent orthography, but rather that morphological segmentation is most beneficial in the highly transparent script. These results are consistent with the idea that morphological and phonological segmentation processes occur simultaneously and do not constitute alternative pathways to visual word recognition. PMID:29403413

  1. Python-Based Applications for Hydrogeological Modeling

    NASA Astrophysics Data System (ADS)

    Khambhammettu, P.

    2013-12-01

    Python is a general-purpose, high-level programming language whose design philosophy emphasizes code readability. Add-on packages supporting fast array computation (numpy), plotting (matplotlib), and scientific/mathematical functions (scipy) have resulted in a powerful ecosystem for scientists interested in exploratory data analysis, high-performance computing and data visualization. Three examples are provided to demonstrate the applicability of the Python environment in hydrogeological applications. Python programs were used to model an aquifer test and estimate aquifer parameters at a Superfund site. The aquifer test conducted at a Groundwater Circulation Well was modeled with the Python/FORTRAN-based TTIM Analytic Element Code. The aquifer parameters were estimated with PEST such that a good match was produced between the simulated and observed drawdowns. Python scripts were written to interface with PEST and visualize the results. A convolution-based approach was used to estimate source concentration histories based on observed concentrations at receptor locations. Unit Response Functions (URFs) that relate the receptor concentrations to a unit release at the source were derived with the ATRANS code. The impact of any releases at the source could then be estimated by convolving the source release history with the URFs. Python scripts were written to compute and visualize receptor concentrations for user-specified source histories. The framework provided a simple and elegant way to test various hypotheses about the site. A Python/FORTRAN-based program TYPECURVEGRID-Py was developed to compute and visualize groundwater elevations and drawdown through time in response to a regional uniform hydraulic gradient and the influence of pumping wells using either the Theis solution for a fully-confined aquifer or the Hantush-Jacob solution for a leaky confined aquifer. The program supports an arbitrary number of wells that can operate according to arbitrary schedules. The Python wrapper invokes the underlying FORTRAN layer to compute transient groundwater elevations and processes this information to create time-series and 2D plots.
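
    The Theis step of such a tool can be illustrated with a short, self-contained sketch: the well function W(u) is scipy's exponential integral E1, so confined-aquifer drawdown reduces to a few lines of numpy/scipy. The parameter values and function name below are illustrative assumptions, not taken from the program described above; drawdowns from several wells would simply superpose.

```python
# A minimal sketch of the Theis confined-aquifer drawdown calculation that a
# tool like TYPECURVEGRID-Py wraps; parameter values here are illustrative.
import numpy as np
from scipy.special import exp1  # exponential integral E1(u) = Theis well function W(u)

def theis_drawdown(Q, T, S, r, t):
    """Drawdown s(r, t) for a fully confined aquifer (Theis solution).

    Q : pumping rate [m^3/d], T : transmissivity [m^2/d],
    S : storativity [-],      r : radial distance [m], t : time [d].
    """
    u = (r**2 * S) / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Example: drawdown time series 50 m from a well pumping 500 m^3/d.
times = np.logspace(-3, 1, 50)  # days
s = theis_drawdown(Q=500.0, T=250.0, S=2e-4, r=50.0, t=times)
print(s[:5])
```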

  2. Phased development of a web-based PACS viewer

    NASA Astrophysics Data System (ADS)

    Gidron, Yoad; Shani, Uri; Shifrin, Mark

    2000-05-01

    The Web browser is an excellent environment for the rapid development of an effective and inexpensive PACS viewer. In this paper we will share our experience in developing a browser-based viewer, from the inception and prototype stages to its current state of maturity. There are many operational advantages to a browser-based viewer, even when native viewers already exist in the system (with multiple and/or high resolution screens): (1) It can be used on existing personal workstations throughout the hospital. (2) It is easy to make the service available from physicians' homes. (3) The viewer is extremely portable and platform independent. There is a wide variety of means available for implementing the browser-based viewer. Each file sent to the client by the server can perform some end-user or client/server interaction. These means range from HTML (HyperText Markup Language) files, through JavaScript, to Java applets. Some data types may also invoke plug-in code in the client; although this would reduce the portability of the viewer, it would provide the needed efficiency in critical places. On the server side the range of means is also very rich: (1) A set of files: HTML, JavaScript, Java applets, etc. (2) Extensions of the server via cgi-bin programs, (3) Extensions of the server via servlets, (4) Any other helper application residing and working with the server to access the DICOM archive. The viewer architecture consists of two basic parts: The first part performs query and navigation through the DICOM archive image folders. The second part does the image access and display. While the first part deals with low data traffic, it involves many database transactions. The second part is simple as far as access transactions are concerned, but requires much more data traffic and display functions. Our web-based viewer has gone through three development stages characterized by the complexity of the means and tools employed on both client and server sides.

  3. NMRbot: Python scripts enable high-throughput data collection on current Bruker BioSpin NMR spectrometers.

    PubMed

    Clos, Lawrence J; Jofre, M Fransisca; Ellinger, James J; Westler, William M; Markley, John L

    2013-06-01

    To facilitate the high-throughput acquisition of nuclear magnetic resonance (NMR) experimental data on large sets of samples, we have developed a simple and straightforward automated methodology that capitalizes on recent advances in Bruker BioSpin NMR spectrometer hardware and software. Given the daunting challenge for non-NMR experts to collect quality spectra, our goal was to increase user accessibility, provide customized functionality, and improve the consistency and reliability of resultant data. This methodology, NMRbot, is encoded in a set of scripts written in the Python programming language accessible within the Bruker BioSpin TopSpin ™ software. NMRbot improves automated data acquisition and offers novel tools for use in optimizing experimental parameters on the fly. This automated procedure has been successfully implemented for investigations in metabolomics, small-molecule library profiling, and protein-ligand titrations on four Bruker BioSpin NMR spectrometers at the National Magnetic Resonance Facility at Madison. The investigators reported benefits from ease of setup, improved spectral quality, convenient customizations, and overall time savings.

  4. dREL: a relational expression language for dictionary methods.

    PubMed

    Spadaccini, Nick; Castleden, Ian R; du Boulay, Doug; Hall, Sydney R

    2012-08-27

    The provision of precise metadata is an important but a largely underrated challenge for modern science [Nature 2009, 461, 145]. We describe here a dictionary methods language dREL that has been designed to enable complex data relationships to be expressed as formulaic scripts in data dictionaries written in DDLm [Spadaccini and Hall J. Chem. Inf. Model.2012 doi:10.1021/ci300075z]. dREL describes data relationships in a simple but powerful canonical form that is easy to read and understand and can be executed computationally to evaluate or validate data. The execution of dREL expressions is not a substitute for traditional scientific computation; it is to provide precise data dependency information to domain-specific definitions and a means for cross-validating data. Some scientific fields apply conventional programming languages to methods scripts but these tend to inhibit both dictionary development and accessibility. dREL removes the programming barrier and encourages the production of the metadata needed for seamless data archiving and exchange in science.

  5. GUI implementation of image encryption and decryption using Open CV-Python script on secured TFTP protocol

    NASA Astrophysics Data System (ADS)

    Reddy, K. Rasool; Rao, Ch. Madhava

    2018-04-01

    Security is currently one of the primary concerns in the transmission of images, owing to their increasing use in industrial applications, so it is necessary to protect image data from unauthorized individuals. Various strategies have been investigated to secure such data, and encryption is one of the most prominent. This paper presents a sophisticated Rijndael (AES) algorithm to shield image data from unauthorized users. An Exponential Key Exchange (EKE) concept is also introduced to exchange the key between client and server. Data are exchanged over the network between client and server through a simple protocol known as the Trivial File Transfer Protocol (TFTP), which is used mainly in embedded servers to transfer data and can also protect the data if protection capabilities are integrated. This paper implements a GUI environment for image encryption and decryption, with all experiments carried out in a Linux environment using an OpenCV-Python script.
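
    As a rough illustration of the processing pipeline described above (not the authors' GUI, TFTP transport, or EKE key negotiation), the following sketch encrypts and decrypts an image loaded with OpenCV-Python using AES (Rijndael) from PyCryptodome. The file name is a placeholder and the key is generated locally instead of being exchanged.

```python
# Illustrative sketch only: AES encryption/decryption of an image loaded with
# OpenCV-Python, using PyCryptodome. Not the paper's GUI or TFTP/EKE code.
import cv2
import numpy as np
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

key = get_random_bytes(16)            # in the paper the key is negotiated; here it is just random
img = cv2.imread("input.png")         # hypothetical input file
shape, raw = img.shape, img.tobytes()

iv = get_random_bytes(16)
cipher = AES.new(key, AES.MODE_CBC, iv)
ciphertext = iv + cipher.encrypt(pad(raw, AES.block_size))  # bytes that would travel over TFTP

# Receiver side: decrypt and rebuild the image array.
decipher = AES.new(key, AES.MODE_CBC, ciphertext[:16])
plain = unpad(decipher.decrypt(ciphertext[16:]), AES.block_size)
restored = np.frombuffer(plain, dtype=np.uint8).reshape(shape)
cv2.imwrite("restored.png", restored)
```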

  6. Shrink-induced silica multiscale structures for enhanced fluorescence from DNA microarrays.

    PubMed

    Sharma, Himanshu; Wood, Jennifer B; Lin, Sophia; Corn, Robert M; Khine, Michelle

    2014-09-23

    We describe a manufacturable and scalable method for fabrication of multiscale wrinkled silica (SiO2) structures on shrink-wrap film to enhance fluorescence signals in DNA fluorescence microarrays. We are able to enhance the fluorescence signal of hybridized DNA by more than 120 fold relative to a planar glass slide. Notably, our substrate has improved detection sensitivity (280 pM) relative to planar glass slide (11 nM). Furthermore, this is accompanied by a 30-45 times improvement in the signal-to-noise ratio (SNR). Unlike metal enhanced fluorescence (MEF) based enhancements, this is a far-field and uniform effect based on surface concentration and photophysical effects from the nano- to microscale SiO2 structures. Notably, the photophysical effects contribute an almost 2.5 fold enhancement over the concentration effects alone. Therefore, this simple and robust method offers an efficient technique to enhance the detection capabilities of fluorescence based DNA microarrays.

  7. Shrink-Induced Silica Multiscale Structures for Enhanced Fluorescence from DNA Microarrays

    PubMed Central

    2015-01-01

    We describe a manufacturable and scalable method for fabrication of multiscale wrinkled silica (SiO2) structures on shrink-wrap film to enhance fluorescence signals in DNA fluorescence microarrays. We are able to enhance the fluorescence signal of hybridized DNA by more than 120 fold relative to a planar glass slide. Notably, our substrate has improved detection sensitivity (280 pM) relative to planar glass slide (11 nM). Furthermore, this is accompanied by a 30–45 times improvement in the signal-to-noise ratio (SNR). Unlike metal enhanced fluorescence (MEF) based enhancements, this is a far-field and uniform effect based on surface concentration and photophysical effects from the nano- to microscale SiO2 structures. Notably, the photophysical effects contribute an almost 2.5 fold enhancement over the concentration effects alone. Therefore, this simple and robust method offers an efficient technique to enhance the detection capabilities of fluorescence based DNA microarrays. PMID:25191785

  8. RGG: A general GUI Framework for R scripts

    PubMed Central

    Visne, Ilhami; Dilaveroglu, Erkan; Vierlinger, Klemens; Lauss, Martin; Yildiz, Ahmet; Weinhaeusel, Andreas; Noehammer, Christa; Leisch, Friedrich; Kriegner, Albert

    2009-01-01

    Background R is the leading open source statistics software with a vast number of biostatistical and bioinformatical analysis packages. To exploit the advantages of R, extensive scripting/programming skills are required. Results We have developed a software tool called R GUI Generator (RGG) which enables the easy generation of Graphical User Interfaces (GUIs) for the programming language R by adding a few Extensible Markup Language (XML) – tags. RGG consists of an XML-based GUI definition language and a Java-based GUI engine. GUIs are generated in runtime from defined GUI tags that are embedded into the R script. User-GUI input is returned to the R code and replaces the XML-tags. RGG files can be developed using any text editor. The current version of RGG is available as a stand-alone software (RGGRunner) and as a plug-in for JGR. Conclusion RGG is a general GUI framework for R that has the potential to introduce R statistics (R packages, built-in functions and scripts) to users with limited programming skills and helps to bridge the gap between R developers and GUI-dependent users. RGG aims to abstract the GUI development from individual GUI toolkits by using an XML-based GUI definition language. Thus RGG can be easily integrated in any software. The RGG project further includes the development of a web-based repository for RGG-GUIs. RGG is an open source project licensed under the Lesser General Public License (LGPL) and can be downloaded freely at PMID:19254356

  9. Simplifying and enhancing the use of PyMOL with horizontal scripts

    PubMed Central

    2016-01-01

    Abstract Scripts are used in PyMOL to exert precise control over the appearance of the output and to ease remaking similar images at a later time. We developed horizontal scripts to ease script development. A horizontal script makes a complete scene in PyMOL like a traditional vertical script. The commands in a horizontal script are separated by semicolons. These scripts are edited interactively on the command line with no need for an external text editor. This simpler workflow accelerates script development. In using PyMOL, the illustration of a molecular scene requires an 18-element matrix of view port settings. The default format spans several lines and is laborious to manually reformat for one line. This default format prevents the fast assembly of horizontal scripts that can reproduce a molecular scene. We solved this problem by writing a function that displays the settings on one line in a compact format suitable for horizontal scripts. We also demonstrate the mapping of aliases to horizontal scripts. Many aliases can be defined in a single script file, which can be useful for applying custom molecular representations to any structure. We also redefined horizontal scripts as Python functions to enable the use of the help function to print documentation about an alias to the command history window. We discuss how these methods of using horizontal scripts both simplify and enhance the use of PyMOL in research and education. PMID:27488983

  10. Access to the NCAR Research Data Archive via the Globus Data Transfer Service

    NASA Astrophysics Data System (ADS)

    Cram, T.; Schuster, D.; Ji, Z.; Worley, S. J.

    2014-12-01

    The NCAR Research Data Archive (RDA; http://rda.ucar.edu) contains a large and diverse collection of meteorological and oceanographic observations, operational and reanalysis outputs, and remote sensing datasets to support atmospheric and geoscience research. The RDA contains greater than 600 dataset collections which support the varying needs of a diverse user community. The number of RDA users is increasing annually, and the most popular method used to access the RDA data holdings is through web based protocols, such as wget and cURL based scripts. In the year 2013, 10,000 unique users downloaded greater than 820 terabytes of data from the RDA, and customized data products were prepared for more than 29,000 user-driven requests. In order to further support this increase in web download usage, the RDA is implementing the Globus data transfer service (www.globus.org) to provide a GridFTP data transfer option for the user community. The Globus service is broadly scalable, has an easy to install client, is sustainably supported, and provides a robust, efficient, and reliable data transfer option for RDA users. This paper highlights the main functionality and usefulness of the Globus data transfer service for accessing the RDA holdings. The Globus data transfer service, developed and supported by the Computation Institute at The University of Chicago and Argonne National Laboratory, uses the GridFTP as a fast, secure, and reliable method for transferring data between two endpoints. A Globus user account is required to use this service, and data transfer endpoints are defined on the Globus web interface. In the RDA use cases, the access endpoint is created on the RDA data server at NCAR. The data user defines the receiving endpoint for the data transfer, which can be the main file system at a host institution, a personal work station, or laptop. Once initiated, the data transfer runs as an unattended background process by Globus, and Globus ensures that the transfer is accurately fulfilled. Users can monitor the data transfer progress on the Globus web interface and optionally receive an email notification once it is complete. Globus also provides a command-line interface to support scripted transfers, which can be useful when embedded in data processing workflows.

  11. Development and application of General Purpose Data Acquisition Shell (GPDAS) at advanced photon source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, Youngjoo; Kim, Keeman.

    1991-01-01

    An operating system shell GPDAS (General Purpose Data Acquisition Shell) on MS-DOS-based microcomputers has been developed to provide flexibility in data acquisition and device control for magnet measurements at the Advanced Photon Source. GPDAS is both a command interpreter and an integrated script-based programming environment. It also incorporates the MS-DOS shell to make use of the existing utility programs for file manipulation and data analysis. Features include: alias definition, virtual memory, windows, graphics, data and procedure backup, background operation, script programming language, and script level debugging. Data acquisition system devices can be controlled through IEEE488 board, multifunction I/O board, digital I/O board and Gespac crate via Euro G-64 bus. GPDAS is now being used for diagnostics R&D and accelerator physics studies as well as for magnet measurements. Their hardware configurations will also be discussed. 3 refs., 3 figs.

  12. Optimal bit allocation for hybrid scalable/multiple-description video transmission over wireless channels

    NASA Astrophysics Data System (ADS)

    Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.

    2006-01-01

    In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers. Any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding and, product code Reed Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed for discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Also, comparisons with classical scalable coding show the effectiveness of using hybrid scalable/multiple-description coding for wireless transmission.
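
    The allocation step described above can be pictured with a toy search over discrete source/channel rate pairs. The candidate rates, the rate budget, and the expected-distortion model below are invented for illustration only; the paper's optimization uses the actual hybrid scalable/multiple-description codec and channel simulations.

```python
# Toy sketch of rate allocation over discrete source/channel rate pairs.
# The candidate rates and the distortion model are invented for illustration.
from itertools import product

SOURCE_RATES = [0.25, 0.5, 1.0, 2.0]        # Mbps (illustrative)
CHANNEL_CODE_RATES = [1/3, 1/2, 2/3, 4/5]   # RCPC code rates (illustrative)
BUDGET = 2.0                                # total transmission rate in Mbps

def expected_distortion(rs, rc, loss_prob=0.05):
    # Stand-in model: distortion falls with source rate and rises with the
    # residual loss left after weaker channel protection (higher code rate).
    residual_loss = loss_prob * rc
    return 1.0 / rs + 50.0 * residual_loss

best = min(
    ((rs, rc) for rs, rc in product(SOURCE_RATES, CHANNEL_CODE_RATES)
     if rs / rc <= BUDGET),                 # transmitted rate = source rate / code rate
    key=lambda p: expected_distortion(*p),
)
print("source rate %.2f Mbps, channel code rate %.2f" % best)
```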

  13. Scalable Quantum Networks for Distributed Computing and Sensing

    DTIC Science & Technology

    2016-04-01

    probabilistic measurement, so we developed quantum memories and guided-wave implementations of same, demonstrating controlled delay of a heralded single... Second, fundamental scalability requires a method to synchronize protocols based on quantum measurements, which are inherently probabilistic. To meet... AFRL-AFOSR-UK-TR-2016-0007, Scalable Quantum Networks for Distributed Computing and Sensing, Ian Walmsley, The University of Oxford, Final Report, 04/01.

  14. Sexual scripts among young heterosexually active men and women: continuity and change.

    PubMed

    Masters, N Tatiana; Casey, Erin; Wells, Elizabeth A; Morrison, Diane M

    2013-01-01

    Whereas gendered sexual scripts are hegemonic at the cultural level, research suggests they may be less so at dyadic and individual levels. Understanding "disjunctures" between sexual scripts at different levels holds promise for illuminating mechanisms through which sexual scripts can change. Through interviews with 44 heterosexually active men and women aged 18 to 25, the ways young people grappled with culture-level scripts for sexuality and relationships were delineated. Findings suggest that, although most participants' culture-level gender scripts for behavior in sexual relationships were congruent with descriptions of traditional masculine and feminine sexuality, there was heterogeneity in how or whether these scripts were incorporated into individual relationships. Specifically, three styles of working with sexual scripts were found: conforming, in which personal gender scripts for sexual behavior overlapped with traditional scripts; exception-finding, in which interviewees accepted culture-level gender scripts as a reality, but created exceptions to gender rules for themselves; and transforming, in which participants either attempted to remake culture-level gender scripts or interpreted their own nontraditional styles as equally normative. Changing sexual scripts can potentially contribute to decreased gender inequity in the sexual realm and to increased opportunities for sexual satisfaction, safety, and well-being, particularly for women, but for men as well.

  15. Self-assembly of 3D Carbon Nanotube Sponges: A Simple and Controllable Way to Build Macroscopic and Ultralight Porous Architectures.

    PubMed

    Luo, Shu; Luo, Yufeng; Wu, Hengcai; Li, Mengya; Yan, Lingjia; Jiang, Kaili; Liu, Liang; Li, Qunqing; Fan, Shoushan; Wang, Jiaping

    2017-01-01

    Macroscopic and 3D superaligned CNT (SACNT) sponges are fabricated through a simple, low-cost, controllable, and scalable self-assembly method without using organic binder. Sponges with specific shapes and densities can be achieved. SACNT sponges are ultralight (1-50 mg cm -3 ), highly porous (97.5%-99.9%) with honeycomb-like hierarchical structure, and highly conductive. Using SACNT sponges as templates, various materials with honeycomb-like structure can be obtained for wide applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Scripts or Components? A Comparative Study of Basic Emotion Knowledge in Roma and Non-Roma Children

    ERIC Educational Resources Information Center

    Giménez-Dasí, Marta; Quintanilla, Laura; Lucas-Molina, Beatriz

    2018-01-01

    The basic aspects of emotional comprehension seem to be acquired around the age of 5. However, it is not clear whether children's emotion knowledge is based on facial expression, organized in scripts, or determined by sociocultural context. This study aims to shed some light on these subjects by assessing knowledge of basic emotions in 4- and…

  17. The Development of Videos in Culturally Grounded Drug Prevention for Rural Native Hawaiian Youth

    ERIC Educational Resources Information Center

    Okamoto, Scott K.; Helm, Susana; McClain, Latoya L.; Dinson, Ay-Laina

    2012-01-01

    The purpose of this study was to adapt and validate narrative scripts to be used for the video components of a culturally grounded drug prevention program for rural Native Hawaiian youth. Scripts to be used to film short video vignettes of drug-related problem situations were developed based on a foundation of pre-prevention research funded by the…

  18. Children of the Drum: Equity Pedagogy, Knowledge Construction, and African American Student Learning through Drama.

    ERIC Educational Resources Information Center

    Hanley, Mary Stone

    This paper is an analysis of a project that involved African American middle school students in a drama program that was based on their lives and the stories of their community. Students were trained in performance skills, participated in the development of a script, and then performed the script in local schools. The 10 student participants, 5…

  19. Secure Base Representations in Middle Childhood across Two Western Cultures: Associations with Parental Attachment Representations and Maternal Reports of Behavior Problems

    ERIC Educational Resources Information Center

    Waters, Theodore E. A.; Bosmans, Guy; Vandevivere, Eva; Dujardin, Adinda; Waters, Harriet S.

    2015-01-01

    Recent work examining the content and organization of attachment representations suggests that 1 way in which we represent the attachment relationship is in the form of a cognitive script. This work has largely focused on early childhood or adolescence/adulthood, leaving a large gap in our understanding of script-like attachment representations in…

  20. Teaching a Student with Autism Spectrum Disorder On-Topic Conversational Responses with an iPad: A Pilot Study

    ERIC Educational Resources Information Center

    Sng, Cheong Ying; Carter, Mark; Stephenson, Jennifer

    2017-01-01

    Scripts in written or auditory form have been used to teach conversational skills to individuals with autism spectrum disorder (ASD), but with the proliferation of handheld tablet devices the scope to combine these 2 formats has broadened. The aim of this pilot study was to investigate if a script-based intervention, presented on an iPad…

  1. FNV: light-weight flash-based network and pathway viewer.

    PubMed

    Dannenfelser, Ruth; Lachmann, Alexander; Szenk, Mariola; Ma'ayan, Avi

    2011-04-15

    Network diagrams are commonly used to visualize biochemical pathways by displaying the relationships between genes, proteins, mRNAs, microRNAs, metabolites, regulatory DNA elements, diseases, viruses and drugs. While there are several currently available web-based pathway viewers, there is still room for improvement. To this end, we have developed a flash-based network viewer (FNV) for the visualization of small to moderately sized biological networks and pathways. Written in Adobe ActionScript 3.0, the viewer accepts simple Extensible Markup Language (XML) formatted input files to display pathways in vector graphics on any web-page providing flexible layout options, interactivity with the user through tool tips, hyperlinks and the ability to rearrange nodes on the screen. FNV was utilized as a component in several web-based systems, namely Genes2Networks, Lists2Networks, KEA, ChEA and PathwayGenerator. In addition, FNV can be used to embed pathways inside PDF files for the communication of pathways in soft publication materials. FNV is available for use and download along with the supporting documentation and sample networks at http://www.maayanlab.net/FNV. avi.maayan@mssm.edu.

  2. Exploring JavaScript and ROOT technologies to create Web-based ATLAS analysis and monitoring tools

    NASA Astrophysics Data System (ADS)

    Sánchez Pineda, A.

    2015-12-01

    We explore the potential of current web applications to create online interfaces that allow the visualization, interaction and real cut-based physics analysis and monitoring of processes through a web browser. The project consists of the initial development of web-based and cloud computing services to allow students and researchers to perform fast and very useful cut-based analysis on a browser, reading and using real data and official Monte Carlo simulations stored in ATLAS computing facilities. Several tools are considered: ROOT, JavaScript and HTML. Our study case is the current cut-based H → ZZ → llqq analysis of the ATLAS experiment. Preliminary but satisfactory results have been obtained online.

  3. Evaluating a NoSQL Alternative for Chilean Virtual Observatory Services

    NASA Astrophysics Data System (ADS)

    Antognini, J.; Araya, M.; Solar, M.; Valenzuela, C.; Lira, F.

    2015-09-01

    Currently, the standards and protocols for data access in the Virtual Observatory architecture (DAL) are generally implemented with relational databases based on SQL. In particular, the Astronomical Data Query Language (ADQL), the language used by IVOA to represent queries to VO services, was created to satisfy the different data access protocols, such as Simple Cone Search. ADQL is based on SQL92, and has extra functionality implemented using PgSphere. An emergent alternative to SQL are the so-called NoSQL databases, which can be classified in several categories such as Column, Document, Key-Value, Graph, Object, etc.; each one recommended for different scenarios. Among their notable characteristics are: schema-free design, easy replication support, simple APIs, Big Data support, etc. The Chilean Virtual Observatory (ChiVO) is developing a functional prototype based on the IVOA architecture, with the following relevant factors: Performance, Scalability, Flexibility, Complexity, and Functionality. Currently, it's very difficult to compare these factors, due to a lack of alternatives. The objective of this paper is to compare NoSQL alternatives with SQL through the implementation of a Web API REST that satisfies ChiVO's needs: a SESAME-style name resolver for the data from ALMA. Therefore, we propose a test scenario by configuring a NoSQL database with data from different sources and evaluating the feasibility of creating a Simple Cone Search service and its performance. This comparison will help pave the way for the application of Big Data databases in the Virtual Observatory.
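
    Independent of whether the backend is SQL or NoSQL, the core of a Simple Cone Search request is an angular-separation filter around a sky position. The sketch below shows only that geometric kernel over an invented in-memory catalogue; it is not ChiVO's REST service or its database layer.

```python
# Minimal sketch of the core Simple Cone Search operation: keep sources whose
# angular separation from (ra0, dec0) is below a search radius. The tiny
# in-memory "catalogue" stands in for whatever SQL/NoSQL backend serves it.
import math

def angular_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (spherical law of cosines)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (math.sin(dec1) * math.sin(dec2)
               + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(min(1.0, max(-1.0, cos_sep))))

def cone_search(catalogue, ra0, dec0, radius_deg):
    return [row for row in catalogue
            if angular_sep_deg(ra0, dec0, row["ra"], row["dec"]) <= radius_deg]

catalogue = [                                  # invented example rows
    {"name": "src-1", "ra": 83.82, "dec": -5.39},
    {"name": "src-2", "ra": 84.10, "dec": -5.10},
    {"name": "src-3", "ra": 120.0, "dec": 10.0},
]
print(cone_search(catalogue, ra0=83.8, dec0=-5.4, radius_deg=0.5))
```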

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lusk, Ewing; Butler, Ralph; Pieper, Steven C.

    Here, we take a historical approach to our presentation of self-scheduled task parallelism, a programming model with its origins in early irregular and nondeterministic computations encountered in automated theorem proving and logic programming. We show how an extremely simple task model has evolved into a system, asynchronous dynamic load balancing (ADLB), and a scalable implementation capable of supporting sophisticated applications on today’s (and tomorrow’s) largest supercomputers; and we illustrate the use of ADLB with a Green’s function Monte Carlo application, a modern, mature nuclear physics code in production use. Our lesson is that by surrendering a certain amount of generality and thus applicability, a minimal programming model (in terms of its basic concepts and the size of its application programmer interface) can achieve extreme scalability without introducing complexity.
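
    The self-scheduling idea itself is easy to demonstrate in miniature: idle workers repeatedly pull the next available task, so irregular task costs balance automatically. The sketch below uses Python's process pool with per-task scheduling as a stand-in; ADLB itself is an MPI library operating at a vastly larger scale, and the task function here is invented.

```python
# Minimal sketch of self-scheduled task parallelism: workers pull one task at a
# time from a shared pool, so load balances dynamically. This only illustrates
# the core idea; it is not ADLB and does not approach its scale.
from concurrent.futures import ProcessPoolExecutor
import math

def task(seed):
    # Stand-in for an irregular unit of work (e.g., one Monte Carlo walker).
    return sum(math.sin(i * seed) for i in range(10_000 * (seed % 7 + 1)))

if __name__ == "__main__":
    tasks = range(64)
    with ProcessPoolExecutor(max_workers=4) as pool:
        # chunksize=1 keeps scheduling dynamic: each worker grabs one task at a time.
        results = list(pool.map(task, tasks, chunksize=1))
    print(len(results), "tasks completed")
```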

  5. Iridium-catalyst-based autonomous bubble-propelled graphene micromotors with ultralow catalyst loading.

    PubMed

    Wang, Hong; Sofer, Zdeněk; Eng, Alex Yong Sheng; Pumera, Martin

    2014-11-10

    A novel concept of an iridium-based bubble-propelled Janus-particle-type graphene micromotor with very high surface area and with very low catalyst loading is described. The low loading of Ir catalyst (0.54 at %) allows for fast motion of graphene microparticles with high surface area of 316.2 m(2)  g(-1). The micromotor was prepared with a simple and scalable method by thermal exfoliation of iridium-doped graphite oxide precursor composite in hydrogen atmosphere. Oxygen bubbles generated from the decomposition of hydrogen peroxide at the iridium catalytic sites provide robust propulsion thrust for the graphene micromotor. The high surface area and low iridium catalyst loading of the bubble-propelled graphene motors offer great possibilities for dramatically enhanced cargo delivery. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Efficient Prediction Structures for H.264 Multi View Coding Using Temporal Scalability

    NASA Astrophysics Data System (ADS)

    Guruvareddiar, Palanivel; Joseph, Biju K.

    2014-03-01

    Prediction structures with "disposable view components based" hierarchical coding have been proven to be efficient for H.264 multi view coding. Though these prediction structures along with the QP cascading schemes provide superior compression efficiency when compared to the traditional IBBP coding scheme, the temporal scalability requirements of the bit stream could not be met to the fullest. On the other hand, a fully scalable bit stream, obtained by "temporal identifier based" hierarchical coding, provides a number of advantages including bit rate adaptations and improved error resilience, but lacks compression efficiency when compared to the former scheme. In this paper it is proposed to combine the two approaches such that a fully scalable bit stream could be realized with minimal reduction in compression efficiency when compared to state-of-the-art "disposable view components based" hierarchical coding. Simulation results show that the proposed method enables full temporal scalability with a maximum BDPSNR reduction of only 0.34 dB. A novel method also has been proposed for the identification of the temporal identifier for legacy H.264/AVC base layer packets. Simulation results also show that this enables the scenario where the enhancement views could be extracted at a lower frame rate (1/2nd or 1/4th of the base view) with an average extraction time for a view component of only 0.38 ms.

  7. Wrapping up BLAST and other applications for use on Unix clusters.

    PubMed

    Hokamp, Karsten; Shields, Denis C; Wolfe, Kenneth H; Caffrey, Daniel R

    2003-02-12

    We have developed two programs that speed up common bioinformatic applications by spreading them across a UNIX cluster: (1) BLAST.pm, a new module for the 'MOLLUSC' package, and (2) WRAPID, a simple tool for parallelizing large numbers of small instances of programs such as BLAST, FASTA and CLUSTALW. The packages were developed in Perl on a 20-node Linux cluster and are provided together with a configuration script and documentation. They can be freely downloaded from http://wolfe.gen.tcd.ie/wrapper.
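
    The general pattern such wrappers implement (split a large query set into chunks and fan many small jobs out over the available processors) can be sketched as follows. The sketch assumes the modern BLAST+ blastp binary on PATH and a formatted database called "mydb", both illustrative stand-ins; it is not the Perl BLAST.pm/WRAPID code itself.

```python
# Generic sketch of the "split the query file, fan out many small BLAST jobs"
# pattern. Assumes a BLAST+ `blastp` binary and a database "mydb" exist; both
# are illustrative stand-ins, not the tools the paper wrapped.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def split_fasta(path, n_chunks):
    records = Path(path).read_text().split(">")[1:]      # crude FASTA split
    chunks = [records[i::n_chunks] for i in range(n_chunks)]
    files = []
    for i, chunk in enumerate(chunks):
        f = Path(f"chunk_{i}.fa")
        f.write_text("".join(">" + r for r in chunk))
        files.append(f)
    return files

def run_blast(chunk):
    out = chunk.with_suffix(".out")
    subprocess.run(["blastp", "-query", str(chunk), "-db", "mydb",
                    "-out", str(out)], check=True)
    return out

if __name__ == "__main__":
    chunks = split_fasta("queries.fa", n_chunks=8)       # hypothetical query file
    with ThreadPoolExecutor(max_workers=8) as pool:
        outputs = list(pool.map(run_blast, chunks))
    print("wrote", [str(o) for o in outputs])
```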

  8. Scalable Track Detection in SAR CCD Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chow, James G; Quach, Tu-Thach

    Existing methods to detect vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times of the same scene, rely on simple, fast models to label track pixels. These models, however, are often too simple to capture natural track features such as continuity and parallelism. We present a simple convolutional network architecture consisting of a series of 3-by-3 convolutions to detect tracks. The network is trained end-to-end to learn natural track features entirely from data. The network is computationally efficient and improves the F-score on a standard dataset to 0.988, up from 0.907 obtained by the current state-of-the-art method.
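
    The abstract specifies only "a series of 3-by-3 convolutions" trained end-to-end; one plausible instantiation of that idea is sketched below in PyTorch, with invented channel counts and depth. It is not the authors' trained network or weights.

```python
# One plausible instantiation of "a series of 3-by-3 convolutions" ending in a
# per-pixel track probability. Channel counts and depth are invented; this is
# not the authors' architecture or trained model.
import torch
import torch.nn as nn

class TrackNet(nn.Module):
    def __init__(self, channels=(1, 16, 16, 16)):
        super().__init__()
        layers = []
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            layers += [nn.Conv2d(c_in, c_out, kernel_size=3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(channels[-1], 1, kernel_size=3, padding=1)]  # logits
        self.net = nn.Sequential(*layers)

    def forward(self, x):                      # x: (N, 1, H, W) CCD image tile
        return torch.sigmoid(self.net(x))      # per-pixel track probability

model = TrackNet()
dummy = torch.randn(1, 1, 128, 128)            # fake single-channel CCD tile
print(model(dummy).shape)                      # -> torch.Size([1, 1, 128, 128])
```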

  9. Sparks Will Fly: engineering creative script conflicts

    NASA Astrophysics Data System (ADS)

    Veale, Tony; Valitutti, Alessandro

    2017-10-01

    Scripts are often dismissed as the stuff of good movies and bad politics. They codify cultural experience so rigidly that they remove our freedom of choice and become the very antithesis of creativity. Yet, mental scripts have an important role to play in our understanding of creative behaviour, since a deliberate departure from an established script can produce results that are simultaneously novel and familiar, especially when others stick to the conventional script. Indeed, creative opportunities often arise at the overlapping boundaries of two scripts that antagonistically compete to mentally organise the same situation. This work explores the computational integration of competing scripts to generate creative friction in short texts that are surprising but meaningful. Our exploration considers conventional macro-scripts - ordered sequences of actions - and the less obvious micro-scripts that operate at even the lowest levels of language. For the former, we generate plots that squeeze two scripts into a single mini-narrative; for the latter, we generate ironic descriptions that use conflicting scripts to highlight the speaker's pragmatic insincerity. We show experimentally that verbal irony requires both kinds of scripts - macro and micro - to work together to reliably generate creative sparks from a speaker's subversive intent.

  10. Using script theory to cultivate illness script formation and clinical reasoning in health professions education.

    PubMed

    Lubarsky, Stuart; Dory, Valérie; Audétat, Marie-Claude; Custers, Eugène; Charlin, Bernard

    2015-01-01

    Script theory proposes an explanation for how information is stored in and retrieved from the human mind to influence individuals' interpretation of events in the world. Applied to medicine, script theory focuses on knowledge organization as the foundation of clinical reasoning during patient encounters. According to script theory, medical knowledge is bundled into networks called 'illness scripts' that allow physicians to integrate new incoming information with existing knowledge, recognize patterns and irregularities in symptom complexes, identify similarities and differences between disease states, and make predictions about how diseases are likely to unfold. These knowledge networks become updated and refined through experience and learning. The implications of script theory on medical education are profound. Since clinician-teachers cannot simply transfer their customized collections of illness scripts into the minds of learners, they must create opportunities to help learners develop and fine-tune their own sets of scripts. In this essay, we provide a basic sketch of script theory, outline the role that illness scripts play in guiding reasoning during clinical encounters, and propose strategies for aligning teaching practices in the classroom and the clinical setting with the basic principles of script theory.

  11. Chemoselective N-arylation of aminobenzamides via copper catalysed Chan-Evans-Lam reactions.

    PubMed

    Liu, Shuai; Zu, Weisai; Zhang, Jinli; Xu, Liang

    2017-11-15

    Chemoselective N-arylation of unprotected aminobenzamides was achieved via Cu-catalysed Chan-Evans-Lam cross-coupling with aryl boronic acids for the first time. Simple copper catalysts enable the selective arylation of amino groups in ortho/meta/para-aminobenzamides under open-flask conditions. The reactions were scalable and compatible with a wide range of functional groups.

  12. Copper(II) mediated facile and ultra fast peptide synthesis in methanol.

    PubMed

    Mali, Sachitanand M; Jadhav, Sandip V; Gopi, Hosahudya N

    2012-07-18

    A novel, ultrafast, mild and scalable amide bond formation strategy in methanol using simple thioacids and amines is described. The mechanism suggests that the coupling reactions are initially mediated by CuSO(4)·5H(2)O and subsequently catalyzed by in situ generated copper sulfide. The pure peptides were isolated in satisfactory yields in less than 5 minutes.

  13. Three-Dimensional Wiring for Extensible Quantum Computing: The Quantum Socket

    NASA Astrophysics Data System (ADS)

    Béjanin, J. H.; McConkey, T. G.; Rinehart, J. R.; Earnest, C. T.; McRae, C. R. H.; Shiri, D.; Bateman, J. D.; Rohanizadegan, Y.; Penava, B.; Breul, P.; Royak, S.; Zapatka, M.; Fowler, A. G.; Mariantoni, M.

    2016-10-01

    Quantum computing architectures are on the verge of scalability, a key requirement for the implementation of a universal quantum computer. The next stage in this quest is the realization of quantum error-correction codes, which will mitigate the impact of faulty quantum information on a quantum computer. Architectures with ten or more quantum bits (qubits) have been realized using trapped ions and superconducting circuits. While these implementations are potentially scalable, true scalability will require systems engineering to combine quantum and classical hardware. One technology demanding imminent efforts is the realization of a suitable wiring method for the control and the measurement of a large number of qubits. In this work, we introduce an interconnect solution for solid-state qubits: the quantum socket. The quantum socket fully exploits the third dimension to connect classical electronics to qubits with higher density and better performance than two-dimensional methods based on wire bonding. The quantum socket is based on spring-mounted microwires—the three-dimensional wires—that push directly on a microfabricated chip, making electrical contact. A small wire cross section (approximately 1 mm), nearly nonmagnetic components, and functionality at low temperatures make the quantum socket ideal for operating solid-state qubits. The wires have a coaxial geometry and operate over a frequency range from dc to 8 GHz, with a contact resistance of approximately 150 m Ω , an impedance mismatch of approximately 10 Ω , and minimal cross talk. As a proof of principle, we fabricate and use a quantum socket to measure high-quality superconducting resonators at a temperature of approximately 10 mK. Quantum error-correction codes such as the surface code will largely benefit from the quantum socket, which will make it possible to address qubits located on a two-dimensional lattice. The present implementation of the socket could be readily extended to accommodate a quantum processor with a (10 ×10 )-qubit lattice, which would allow for the realization of a simple quantum memory.

  14. Rapid and Scalable Plant-based Production of a Cholera Toxin B Subunit Variant to Aid in Mass Vaccination against Cholera Outbreaks

    PubMed Central

    Bennett, Lauren J.; Baldauf, Keegan J.; Kajiura, Hiroyuki; Fujiyama, Kazuhito; Matoba, Nobuyuki

    2013-01-01

    Introduction Cholera toxin B subunit (CTB) is a component of an internationally licensed oral cholera vaccine. The protein induces neutralizing antibodies against the holotoxin, the virulence factor responsible for severe diarrhea. A field clinical trial has suggested that the addition of CTB to killed whole-cell bacteria provides superior short-term protection to whole-cell-only vaccines; however, challenges in CTB biomanufacturing (i.e., cost and scale) hamper its implementation to mass vaccination in developing countries. To provide a potential solution to this issue, we developed a rapid, robust, and scalable CTB production system in plants. Methodology/Principal Findings In a preliminary study of expressing original CTB in transgenic Nicotiana benthamiana, the protein was N-glycosylated with plant-specific glycans. Thus, an aglycosylated CTB variant (pCTB) was created and overexpressed via a plant virus vector. Upon additional transgene engineering for retention in the endoplasmic reticulum and optimization of a secretory signal, the yield of pCTB was dramatically improved, reaching >1 g per kg of fresh leaf material. The protein was efficiently purified by simple two-step chromatography. The GM1-ganglioside binding capacity and conformational stability of pCTB were virtually identical to the bacteria-derived original B subunit, as demonstrated in competitive enzyme-linked immunosorbent assay, surface plasmon resonance, and fluorescence-based thermal shift assay. Mammalian cell surface-binding was corroborated by immunofluorescence and flow cytometry. pCTB exhibited strong oral immunogenicity in mice, inducing significant levels of CTB-specific intestinal antibodies that persisted over 6 months. Moreover, these antibodies effectively neutralized the cholera holotoxin in vitro. Conclusions/Significance Taken together, these results demonstrated that pCTB has robust producibility in Nicotiana plants and retains most, if not all, of major biological activities of the original protein. This rapid and easily scalable system may enable the implementation of pCTB to mass vaccination against outbreaks, thereby providing better protection of high-risk populations in developing countries. PMID:23505583

  15. PM2006: a highly scalable urban planning management information system--Case study: Suzhou Urban Planning Bureau

    NASA Astrophysics Data System (ADS)

    Jing, Changfeng; Liang, Song; Ruan, Yong; Huang, Jie

    2008-10-01

    During the urbanization process, when facing complex requirements of city development, ever-growing urban data, rapid development of planning business and increasing planning complexity, a scalable, extensible urban planning management information system is needed urgently. PM2006 is such a system that can deal with these problems. In response to the status and problems in urban planning, the scalability and extensibility of PM2006 are introduced which can be seen as business-oriented workflow extensibility, scalability of DLL-based architecture, flexibility on platforms of GIS and database, scalability of data updating and maintenance and so on. It is verified that PM2006 system has good extensibility and scalability which can meet the requirements of all levels of administrative divisions and can adapt to ever-growing changes in urban planning business. At the end of this paper, the application of PM2006 in Urban Planning Bureau of Suzhou city is described.

  16. Scalable Molecular Dynamics with NAMD

    PubMed Central

    Phillips, James C.; Braun, Rosemary; Wang, Wei; Gumbart, James; Tajkhorshid, Emad; Villa, Elizabeth; Chipot, Christophe; Skeel, Robert D.; Kalé, Laxmikant; Schulten, Klaus

    2008-01-01

    NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD scales to hundreds of processors on high-end parallel platforms, as well as tens of processors on low-cost commodity clusters, and also runs on individual desktop and laptop computers. NAMD works with AMBER and CHARMM potential functions, parameters, and file formats. This paper, directed to novices as well as experts, first introduces concepts and methods used in the NAMD program, describing the classical molecular dynamics force field, equations of motion, and integration methods along with the efficient electrostatics evaluation algorithms employed and temperature and pressure controls used. Features for steering the simulation across barriers and for calculating both alchemical and conformational free energy differences are presented. The motivations for and a roadmap to the internal design of NAMD, implemented in C++ and based on Charm++ parallel objects, are outlined. The factors affecting the serial and parallel performance of a simulation are discussed. Next, typical NAMD use is illustrated with representative applications to a small, a medium, and a large biomolecular system, highlighting particular features of NAMD, e.g., the Tcl scripting language. Finally, the paper provides a list of the key features of NAMD and discusses the benefits of combining NAMD with the molecular graphics/sequence analysis software VMD and the grid computing/collaboratory software BioCoRE. NAMD is distributed free of charge with source code at www.ks.uiuc.edu. PMID:16222654

  17. PyMOOSE: Interoperable Scripting in Python for MOOSE

    PubMed Central

    Ray, Subhasis; Bhalla, Upinder S.

    2008-01-01

    Python is emerging as a common scripting language for simulators. This opens up many possibilities for interoperability in the form of analysis, interfaces, and communications between simulators. We report the integration of Python scripting with the Multi-scale Object Oriented Simulation Environment (MOOSE). MOOSE is a general-purpose simulation system for compartmental neuronal models and for models of signaling pathways based on chemical kinetics. We show how the Python-scripting version of MOOSE, PyMOOSE, combines the power of a compiled simulator with the versatility and ease of use of Python. We illustrate this by using Python numerical libraries to analyze MOOSE output online, and by developing a GUI in Python/Qt for a MOOSE simulation. Finally, we build and run a composite neuronal/signaling model that uses both the NEURON and MOOSE numerical engines, and Python as a bridge between the two. Thus PyMOOSE has a high degree of interoperability with analysis routines, with graphical toolkits, and with other simulators. PMID:19129924

  18. Writers Identification Based on Multiple Windows Features Mining

    NASA Astrophysics Data System (ADS)

    Fadhil, Murad Saadi; Alkawaz, Mohammed Hazim; Rehman, Amjad; Saba, Tanzila

    2016-03-01

    Nowadays, writer identification is in high demand for identifying the original writer of a script with high accuracy. One of the main challenges in writer identification is how to extract the discriminative features of different authors' scripts for precise classification. In this paper, an adaptive division method for offline Latin script is implemented using several window sizes. From fragments of binarized text, a set of features is extracted and classified into clusters in the form of groups or classes. The proposed approach is evaluated with respect to text division and window size, and it is observed that selecting the right window size yields well-positioned window divisions. The approach is tested on the IAM standard dataset (IAM, Institut für Informatik und angewandte Mathematik, University of Bern, Bern, Switzerland), a constraint-free script database, and the results are compared with several techniques reported in the literature.
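
    The windowing step can be pictured with a small numpy sketch: tile the binarized fragment with fixed-size windows and compute simple per-window features that a later clustering stage would consume. The window size and the particular features below (ink density and centroid offsets) are invented stand-ins for the paper's actual feature set.

```python
# Sketch of the "multiple windows" idea: tile a binarized handwriting fragment
# with windows of a chosen size and compute simple per-window features. The
# features here are illustrative stand-ins, not the authors' feature set.
import numpy as np

def window_features(binary_img, win=32):
    h, w = binary_img.shape
    feats = []
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            patch = binary_img[y:y + win, x:x + win]
            ink = patch.sum()
            if ink == 0:
                continue                          # skip empty background windows
            ys, xs = np.nonzero(patch)
            feats.append([ink / patch.size,       # ink density
                          ys.mean() / win,        # vertical centroid
                          xs.mean() / win])       # horizontal centroid
    return np.array(feats)

fake_page = (np.random.rand(256, 256) > 0.9).astype(np.uint8)  # stand-in binarized image
print(window_features(fake_page, win=32).shape)
```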

  19. Southeast Asian palm leaf manuscript images: a review of handwritten text line segmentation methods and new challenges

    NASA Astrophysics Data System (ADS)

    Kesiman, Made Windu Antara; Valy, Dona; Burie, Jean-Christophe; Paulus, Erick; Sunarya, I. Made Gede; Hadi, Setiawan; Sok, Kim Heng; Ogier, Jean-Marc

    2017-01-01

    Due to their specific characteristics, palm leaf manuscripts provide new challenges for text line segmentation tasks in document analysis. We investigated the performance of six text line segmentation methods by conducting comparative experimental studies for the collection of palm leaf manuscript images. The image corpus used in this study comes from the sample images of palm leaf manuscripts of three different Southeast Asian scripts: Balinese script from Bali and Sundanese script from West Java, both from Indonesia, and Khmer script from Cambodia. For the experiments, four text line segmentation methods that work on binary images are tested: the adaptive partial projection line segmentation approach, the A* path planning approach, the shredding method, and our proposed energy function for shredding method. Two other methods that can be directly applied on grayscale images are also investigated: the adaptive local connectivity map method and the seam carving-based method. The evaluation criteria and tool provided by ICDAR2013 Handwriting Segmentation Contest were used in this experiment.

  20. Testing for dual brain processing routes in reading: a direct contrast of chinese character and pinyin reading using FMRI.

    PubMed

    Chen, Yiping; Fu, Shimin; Iversen, Susan D; Smith, Steve M; Matthews, Paul M

    2002-10-01

    Chinese offers a unique tool for testing the effects of word form on language processing during reading. The processes of letter-mediated grapheme-to-phoneme translation and phonemic assembly (assembled phonology) critical for reading and spelling in any alphabetic orthography are largely absent when reading nonalphabetic Chinese characters. In contrast, script-to-sound translation based on the script as a whole (addressed phonology) is absent when reading the Chinese alphabetic sound symbols known as pinyin, for which the script-to-sound translation is based exclusively on assembled phonology. The present study aims to contrast patterns of brain activity associated with the different cognitive mechanisms needed for reading the two scripts. fMRI was used with a block design involving a phonological and lexical task in which subjects were asked to decide whether visually presented, paired Chinese characters or pinyin "sounded like" a word. Results demonstrate that reading Chinese characters and pinyin activate a common brain network including the inferior frontal, middle, and inferior temporal gyri, the inferior and superior parietal lobules, and the extrastriate areas. However, some regions show relatively greater activation for either pinyin or Chinese reading. Reading pinyin led to a greater activation in the inferior parietal cortex bilaterally, the precuneus, and the anterior middle temporal gyrus. In contrast, activation in the left fusiform gyrus, the bilateral cuneus, the posterior middle temporal, the right inferior frontal gyrus, and the bilateral superior frontal gyrus were greater for nonalphabetic Chinese reading. We conclude that both alphabetic and nonalphabetic scripts activate a common brain network for reading. Overall, there are no differences in terms of hemispheric specialization between alphabetic and nonalphabetic scripts. However, differences in language surface form appear to determine relative activation in other regions. Some of these regions (e.g., the inferior parietal cortex for pinyin and fusiform gyrus for Chinese characters) are candidate regions for specialized processes associated with reading via predominantly assembled (pinyin) or addressed (Chinese character) procedures.

  1. SSRPrimer and SSR Taxonomy Tree: Biome SSR discovery

    PubMed Central

    Jewell, Erica; Robinson, Andrew; Savage, David; Erwin, Tim; Love, Christopher G.; Lim, Geraldine A. C.; Li, Xi; Batley, Jacqueline; Spangenberg, German C.; Edwards, David

    2006-01-01

    Simple sequence repeat (SSR) molecular genetic markers have become important tools for a broad range of applications such as genome mapping and genetic diversity studies. SSRs are readily identified within DNA sequence data and PCR primers can be designed for their amplification. These PCR primers frequently cross amplify within related species. We report a web-based tool, SSR Primer, that integrates SPUTNIK, an SSR repeat finder, with Primer3, a primer design program, within one pipeline. On submission of multiple FASTA formatted sequences, the script screens each sequence for SSRs using SPUTNIK. Results are then parsed to Primer3 for locus specific primer design. We have applied this tool for the discovery of SSRs within the complete GenBank database, and have designed PCR amplification primers for over 13 million SSRs. The SSR Taxonomy Tree server provides web-based searching and browsing of species and taxa for the visualisation and download of these SSR amplification primers. These tools are available at . PMID:16845092

  2. SSRPrimer and SSR Taxonomy Tree: Biome SSR discovery.

    PubMed

    Jewell, Erica; Robinson, Andrew; Savage, David; Erwin, Tim; Love, Christopher G; Lim, Geraldine A C; Li, Xi; Batley, Jacqueline; Spangenberg, German C; Edwards, David

    2006-07-01

    Simple sequence repeat (SSR) molecular genetic markers have become important tools for a broad range of applications such as genome mapping and genetic diversity studies. SSRs are readily identified within DNA sequence data and PCR primers can be designed for their amplification. These PCR primers frequently cross amplify within related species. We report a web-based tool, SSR Primer, that integrates SPUTNIK, an SSR repeat finder, with Primer3, a primer design program, within one pipeline. On submission of multiple FASTA formatted sequences, the script screens each sequence for SSRs using SPUTNIK. Results are then parsed to Primer3 for locus specific primer design. We have applied this tool for the discovery of SSRs within the complete GenBank database, and have designed PCR amplification primers for over 13 million SSRs. The SSR Taxonomy Tree server provides web-based searching and browsing of species and taxa for the visualisation and download of these SSR amplification primers. These tools are available at http://bioinformatics.pbcbasc.latrobe.edu.au/ssrdiscovery.html.
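
    The SSR-detection step that SPUTNIK performs can be illustrated with a tiny backreference regex, shown below; this is only a toy stand-in for SPUTNIK, and the subsequent Primer3 primer-design step is not reproduced.

```python
# Tiny illustration of SSR (microsatellite) detection in a DNA sequence using a
# backreference regex; SSR Primer itself uses SPUTNIK for this step and then
# hands each locus to Primer3, neither of which is reproduced here.
import re

def find_ssrs(seq, min_unit=2, max_unit=6, min_repeats=4):
    """Yield (start, motif, repeat_count) for simple tandem repeats."""
    pattern = re.compile(r"([ACGT]{%d,%d}?)\1{%d,}" % (min_unit, max_unit, min_repeats - 1))
    for m in pattern.finditer(seq.upper()):
        motif = m.group(1)
        yield m.start(), motif, len(m.group(0)) // len(motif)

seq = "TTGACAGACAGACAGACAGTTTCGATATATATATATGGC"
for start, motif, n in find_ssrs(seq):
    print(f"pos {start}: ({motif})x{n}")
```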

  3. WEBSLIDE: A "Virtual" Slide Projector Based on World Wide Web

    NASA Astrophysics Data System (ADS)

    Barra, Maria; Ferrandino, Salvatore; Scarano, Vittorio

    1999-03-01

    We present here the design key concepts of WEBSLIDE, a software project whose objective is to provide a simple, cheap and efficient solution for showing slides during lessons in computer labs. In fact, WEBSLIDE allows the video monitors of several client machines (the "STUDENTS") to be synchronously updated by the actions of a particular client machine, called the "INSTRUCTOR." The system is based on the World Wide Web, and the software components of WEBSLIDE consist mainly of a WWW server, browsers and small CGI-Bin scripts. What makes WEBSLIDE particularly appealing for small educational institutions is that WEBSLIDE is built with "off the shelf" products: it does not require a specifically designed program; any Netscape browser, one of the most popular browsers available on the market, is sufficient. Another possible use of our system is to implement "guided automatic tours" through several pages or intranet internal news bulletins: the company Web server can broadcast relevant information to all employees' browsers.
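
    The synchronization idea (STUDENT browsers repeatedly ask the server which slide the INSTRUCTOR is on) can be sketched with a minimal HTTP endpoint. The sketch below uses Python's standard http.server purely as an illustration; the original WEBSLIDE used a WWW server with small CGI scripts, and the paths and polling scheme here are invented.

```python
# Minimal sketch of the WEBSLIDE synchronization idea (not its CGI code): the
# INSTRUCTOR posts the current slide number, STUDENT pages poll GET /current
# and jump to that slide. Paths and the single shared counter are invented.
from http.server import BaseHTTPRequestHandler, HTTPServer

current_slide = {"n": 1}

class SlideSync(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/current":
            body = str(current_slide["n"]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def do_POST(self):                       # instructor advances the slide
        if self.path.startswith("/set/"):
            current_slide["n"] = int(self.path.rsplit("/", 1)[1])
            self.send_response(204)
            self.end_headers()
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), SlideSync).serve_forever()
```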

  4. ArControl: An Arduino-Based Comprehensive Behavioral Platform with Real-Time Performance.

    PubMed

    Chen, Xinfeng; Li, Haohong

    2017-01-01

    Studying animal behavior in the lab requires reliably delivering stimuli and monitoring responses. We constructed a comprehensive behavioral platform (ArControl: Arduino Control Platform) that was an affordable, easy-to-use, high-performance solution combining software and hardware components. The hardware component consisted of an Arduino UNO board and a simple drive circuit. As for software, the ArControl provided a stand-alone and intuitive GUI (graphical user interface) application that did not require users to master scripts. The experiment data were automatically recorded with the built-in DAQ (data acquisition) function. The ArControl also allowed the behavioral schedule to be stored entirely in, and operated on, the Arduino chip. This made the ArControl a genuine real-time system with high temporal resolution (<1 ms). We tested the ArControl based on strict performance measurements and two mouse behavioral experiments. The results showed that the ArControl was an adaptive and reliable system suitable for behavioral research.

  5. ArControl: An Arduino-Based Comprehensive Behavioral Platform with Real-Time Performance

    PubMed Central

    Chen, Xinfeng; Li, Haohong

    2017-01-01

    Studying animal behavior in the lab requires reliably delivering stimuli and monitoring responses. We constructed a comprehensive behavioral platform (ArControl: Arduino Control Platform) that was an affordable, easy-to-use, high-performance solution combining software and hardware components. The hardware component consisted of an Arduino UNO board and a simple drive circuit. As for software, the ArControl provided a stand-alone and intuitive GUI (graphical user interface) application that did not require users to master scripts. The experiment data were automatically recorded with the built-in DAQ (data acquisition) function. The ArControl also allowed the behavioral schedule to be stored entirely in, and operated on, the Arduino chip. This made the ArControl a genuine real-time system with high temporal resolution (<1 ms). We tested the ArControl based on strict performance measurements and two mouse behavioral experiments. The results showed that the ArControl was an adaptive and reliable system suitable for behavioral research. PMID:29321735

  6. The role of scripts in personal consistency and individual differences.

    PubMed

    Demorest, Amy; Popovska, Ana; Dabova, Milena

    2012-02-01

    This article examines the role of scripts in personal consistency and individual differences. Scripts are personally distinctive rules for understanding emotionally significant experiences. In 2 studies, scripts were identified from autobiographical memories of college students (Ns = 47 and 50) using standard categories of events and emotions to derive event-emotion compounds (e.g., Affiliation-Joy). In Study 1, scripts predicted responses to a reaction-time task 1 month later, such that participants responded more quickly to the event from their script when asked to indicate what emotion would be evoked by a series of events. In Study 2, individual differences in 5 common scripts were found to be systematically related to individual differences in traits of the Five-Factor Model. Distinct patterns of correlation revealed the importance of studying events and emotions in compound units, that is, in script form (e.g., Agreeableness was correlated with the script Affiliation-Joy but not with the scripts Fun-Joy or Affiliation-Love). © 2012 The Authors. Journal of Personality © 2012, Wiley Periodicals, Inc.

  7. [Effects of planning and executive functions on young children's script change strategy: A developmental perspective].

    PubMed

    Yanaoka, Kaichi

    2016-02-01

    This research examined the effects of planning and executive functions on young children's (ages 3 to 5 years) strategies for changing scripts. Young children (N = 77) performed a script task (doll task), three executive function tasks (DCCS, red/blue task, and nine box task), a planning task, and a receptive vocabulary task. In the doll task, young children first enacted a "changing clothes" script, and then faced a situation in which some elements of the script were inappropriate. They needed to enact a script either by compensating for the inappropriate items with items from the other script, or by switching to the other script in advance. The results showed that shifting, a factor of executive function, had a positive influence on whether young children could compensate for inappropriate items. In addition, planning was an important factor that helped children to switch to the other script in advance. These findings suggest that shifting and planning play different roles in using the two strategies appropriately when young children enact scripts in unexpected situations.

  8. Coevolving memetic algorithms: a review and progress report.

    PubMed

    Smith, Jim E

    2007-02-01

    Coevolving memetic algorithms are a family of metaheuristic search algorithms in which a rule-based representation of local search (LS) is coadapted alongside candidate solutions within a hybrid evolutionary system. Simple versions of these systems have been shown to outperform other nonadaptive memetic and evolutionary algorithms on a range of problems. This paper presents a rationale for such systems and places them in the context of other recent work on adaptive memetic algorithms. It then proposes a general structure within which a population of LS algorithms can be evolved in tandem with the solutions to which they are applied. Previous research started with a simple self-adaptive system before moving on to more complex models. Results showed that the algorithm was able to discover and exploit certain forms of structure and regularities within the problems. This "metalearning" of problem features provided a means of creating highly scalable algorithms. This work is briefly reviewed to highlight some of the important findings and behaviors exhibited. Based on this analysis, new results are then presented from systems with more flexible representations, which, again, show significant improvements. Finally, the current state of, and future directions for, research in this area is discussed.

  9. Network structure of multivariate time series.

    PubMed

    Lacasa, Lucas; Nicosia, Vincenzo; Latora, Vito

    2015-10-21

    Our understanding of a variety of phenomena in physics, biology and economics crucially depends on the analysis of multivariate time series. While a wide range of tools and techniques for time series analysis already exist, the increasing availability of massive data structures calls for new approaches for multidimensional signal processing. We present here a non-parametric method to analyse multivariate time series, based on the mapping of a multidimensional time series into a multilayer network, which allows information on a high-dimensional dynamical system to be extracted through the analysis of the structure of the associated multiplex network. The method is simple to implement, general, scalable, does not require ad hoc phase space partitioning, and is thus suitable for the analysis of large, heterogeneous and non-stationary time series. We show that simple structural descriptors of the associated multiplex networks make it possible to extract and quantify nontrivial properties of coupled chaotic maps, including the transition between different dynamical phases and the onset of various types of synchronization. As a concrete example we then study financial time series, showing that a multiplex network analysis can efficiently discriminate crises from periods of financial stability, where standard methods based on time-series symbolization often fail.
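    The mapping described above turns a multivariate series into a multilayer (multiplex) network. As a hedged, minimal sketch of the general idea rather than the authors' exact construction, the Python function below builds one layer as the horizontal visibility graph of a single channel; repeating it for each channel of an M-dimensional series would give an M-layer multiplex network. The random test series is an arbitrary example.

      import numpy as np

      def horizontal_visibility_edges(x):
          """Horizontal visibility graph of one time series (one candidate layer of
          a multiplex network): nodes are time indices, and i, j are linked if every
          intermediate value is strictly smaller than both x[i] and x[j]."""
          x = np.asarray(x, dtype=float)
          n = len(x)
          edges = []
          for i in range(n - 1):
              edges.append((i, i + 1))           # consecutive points always see each other
              ceiling = x[i + 1]
              for j in range(i + 2, n):
                  if ceiling < min(x[i], x[j]):  # all intermediate values lie below both endpoints
                      edges.append((i, j))
                  ceiling = max(ceiling, x[j])
                  if ceiling >= x[i]:            # nothing farther right can be visible from i
                      break
          return edges

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          series = rng.random(50)                # one channel of a multivariate series
          print(len(horizontal_visibility_edges(series)), "edges in this layer")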

  10. Evaluating variability with atomistic simulations: the effect of potential and calculation methodology on the modeling of lattice and elastic constants

    NASA Astrophysics Data System (ADS)

    Hale, Lucas M.; Trautt, Zachary T.; Becker, Chandler A.

    2018-07-01

    Atomistic simulations using classical interatomic potentials are powerful investigative tools linking atomic structures to dynamic properties and behaviors. It is well known that different interatomic potentials produce different results, thus making it necessary to characterize potentials based on how they predict basic properties. Doing so makes it possible to compare existing interatomic models in order to select those best suited for specific use cases, and to identify any limitations of the models that may lead to unrealistic responses. While the methods for obtaining many of these properties are often thought of as simple calculations, there are many underlying aspects that can lead to variability in the reported property values. For instance, multiple methods may exist for computing the same property and values may be sensitive to certain simulation parameters. Here, we introduce a new high-throughput computational framework that encodes various simulation methodologies as Python calculation scripts. Three distinct methods for evaluating the lattice and elastic constants of bulk crystal structures are implemented and used to evaluate the properties across 120 interatomic potentials, 18 crystal prototypes, and all possible combinations of unique lattice site and elemental model pairings. Analysis of the results reveals which potentials and crystal prototypes are sensitive to the calculation methods and parameters, and it assists with the verification of potentials, methods, and molecular dynamics software. The results, calculation scripts, and computational infrastructure are self-contained and openly available to support researchers in performing meaningful simulations.
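    As a hedged toy illustration of one such calculation methodology (scan the lattice parameter, evaluate the energy with an interatomic potential, and locate the minimum), the Python sketch below substitutes a made-up energy function for a real potential and molecular statics code; the numerical values and the function itself are assumptions for illustration, not results from the framework described above.

      import numpy as np

      def energy_per_atom(a):
          """Toy stand-in for an interatomic-potential evaluation of cohesive energy
          as a function of lattice parameter a (angstroms). A real workflow would
          call a molecular statics/dynamics code here."""
          a0, e0, k = 3.52, -4.45, 2.1           # hypothetical equilibrium values
          return e0 + k * (a - a0) ** 2 - 0.3 * (a - a0) ** 3

      def estimate_lattice_constant(scan=np.linspace(3.2, 3.9, 29)):
          """Scan a, fit a quadratic near the minimum, and return the estimated
          equilibrium lattice parameter -- one simple methodology among several."""
          energies = np.array([energy_per_atom(a) for a in scan])
          i = int(np.argmin(energies))
          window = slice(max(i - 2, 0), min(i + 3, len(scan)))
          c2, c1, c0 = np.polyfit(scan[window], energies[window], 2)
          return -c1 / (2.0 * c2)                # vertex of the fitted parabola

      if __name__ == "__main__":
          print(f"estimated lattice constant: {estimate_lattice_constant():.4f} angstrom")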

  11. Adaptive format conversion for scalable video coding

    NASA Astrophysics Data System (ADS)

    Wan, Wade K.; Lim, Jae S.

    2001-12-01

    The enhancement layer in many scalable coding algorithms is composed of residual coding information. There is another type of information that can be transmitted instead of (or in addition to) residual coding. Since the encoder has access to the original sequence, it can utilize adaptive format conversion (AFC) to generate the enhancement layer and transmit the different format conversion methods as enhancement data. This paper investigates the use of adaptive format conversion information as enhancement data in scalable video coding. Experimental results are shown for a wide range of base layer qualities and enhancement bitrates to determine when AFC can improve video scalability. Since the parameters needed for AFC are small compared to residual coding, AFC can provide video scalability at low enhancement layer bitrates that are not possible with residual coding. In addition, AFC can also be used in addition to residual coding to improve video scalability at higher enhancement layer bitrates. Adaptive format conversion has not been studied in detail, but many scalable applications may benefit from it. An example of an application that AFC is well-suited for is the migration path for digital television where AFC can provide immediate video scalability as well as assist future migrations.

  12. Engineering metamaterial absorbers from dense gold nanoparticle stacks

    NASA Astrophysics Data System (ADS)

    Hewlett, Sheldon; Mock, Adam

    2017-09-01

    Both ordered and disordered electromagnetic metamaterials have been shown to exhibit interesting and technologically relevant properties that would not be present in the constituent materials in their bulk form. Disordered metamaterials can be fabricated using low-cost and scalable fabrication approaches which are particularly advantageous at the nanoscale. This work shows how a solution-based deposition process can be leveraged to introduce quasi-ordering in disordered gold metamaterials to achieve 94% absorption over the visible spectrum. Full-wave electrodynamic simulations suggest that more advanced structures consistent with this fabrication approach could exhibit 98% average absorption over the entire solar spectrum. We envision this simple and cost-effective fabrication of highly absorbing disordered metamaterials to be of use for thermovoltaics and solar cells.

  13. Preparation of Three-Dimensional Graphene Foams Using Powder Metallurgy Templates.

    PubMed

    Sha, Junwei; Gao, Caitian; Lee, Seoung-Ki; Li, Yilun; Zhao, Naiqin; Tour, James M

    2016-01-26

    A simple and scalable method which combines traditional powder metallurgy and chemical vapor deposition is developed for the synthesis of mesoporous free-standing 3D graphene foams. The powder metallurgy templates for 3D graphene foams (PMT-GFs) consist of particle-like carbon shells which are connected by multilayered graphene that shows high specific surface area (1080 m(2) g(-1)), good crystallization, good electrical conductivity (13.8 S cm(-1)), and a mechanically robust structure. The PMT-GFs did not break under direct flushing with DI water, and they were able to recover after being compressed. These properties indicate promising applications of PMT-GFs for fields requiring 3D carbon frameworks such as in energy-based electrodes and mechanical dampening.

  14. LUMA: A many-core, Fluid-Structure Interaction solver based on the Lattice-Boltzmann Method

    NASA Astrophysics Data System (ADS)

    Harwood, Adrian R. G.; O'Connor, Joseph; Sanchez Muñoz, Jonathan; Camps Santasmasas, Marta; Revell, Alistair J.

    2018-01-01

    The Lattice-Boltzmann Method at the University of Manchester (LUMA) project was commissioned to build a collaborative research environment in which researchers of all abilities can study fluid-structure interaction (FSI) problems in engineering applications from aerodynamics to medicine. It is built on the principles of accessibility, simplicity and flexibility. The LUMA software at the core of the project is a capable FSI solver with turbulence modelling and many-core scalability as well as a wealth of input/output and pre- and post-processing facilities. The software has been validated and several major releases benchmarked on supercomputing facilities internationally. The software architecture is modular and arranged logically using a minimal amount of object-orientation to maintain a simple and accessible software.

  15. Convergent optical wired and wireless long-reach access network using high spectral-efficient modulation.

    PubMed

    Chow, C W; Lin, Y H

    2012-04-09

    To provide broadband services on a single and low-cost platform, the convergent optical wired and wireless access network is promising. Here, we propose and demonstrate a convergent optical wired and wireless long-reach access network based on orthogonal wavelength division multiplexing (WDM). Both the baseband signal and the radio-over-fiber (ROF) signal are multiplexed and de-multiplexed in the optical domain; hence the system is simple and its operation speed is not limited by the electronic bottleneck caused by digital signal processing (DSP). Error-free de-multiplexing and down-conversion can be achieved for all the signals after 60 km (long-reach) fiber transmission. The scalability of the system to a higher bit rate (60 GHz) is also simulated and discussed.

  16. Como preparar un programa de informacion sobre la asistencia economica (Planning a Financial Aid Awareness Program).

    ERIC Educational Resources Information Center

    Department of Education, Washington, DC.

    This booklet, written in Spanish, is intended to be used with a set of slides as part of a presentation to students on "How To Apply for Federal Student Aid" ("Como Solicitar la Asistencia Economica Federal para Estudiantes"). The first part of the book is a script based on the slides. After the script is a guide to hosting a financial aid…

  17. Experimental design and data-analysis in label-free quantitative LC/MS proteomics: A tutorial with MSqRob.

    PubMed

    Goeminne, Ludger J E; Gevaert, Kris; Clement, Lieven

    2018-01-16

    Label-free shotgun proteomics is routinely used to assess proteomes. However, extracting relevant information from the massive amounts of generated data remains difficult. This tutorial provides a strong foundation on analysis of quantitative proteomics data. We provide key statistical concepts that help researchers to design proteomics experiments and we showcase how to analyze quantitative proteomics data using our recent free and open-source R package MSqRob, which was developed to implement the peptide-level robust ridge regression method for relative protein quantification described by Goeminne et al. MSqRob can handle virtually any experimental proteomics design and outputs proteins ordered by statistical significance. Moreover, its graphical user interface and interactive diagnostic plots provide easy inspection and also detection of anomalies in the data and flaws in the data analysis, allowing deeper assessment of the validity of results and a critical review of the experimental design. Our tutorial discusses interactive preprocessing, data analysis and visualization of label-free MS-based quantitative proteomics experiments with simple and more complex designs. We provide well-documented scripts to run analyses in bash mode on GitHub, enabling the integration of MSqRob in automated pipelines on cluster environments (https://github.com/statOmics/MSqRob). The concepts outlined in this tutorial aid in designing better experiments and analyzing the resulting data more appropriately. The two case studies using the MSqRob graphical user interface will contribute to a wider adaptation of advanced peptide-based models, resulting in higher quality data analysis workflows and more reproducible results in the proteomics community. We also provide well-documented scripts for experienced users that aim at automating MSqRob on cluster environments. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Development of an E-Prime Based Computer Simulation of an Interactive Human Rights Violation Negotiation Script (Developpement d’un Programme de Simulation par Ordinateur Fonde sur le Logiciel E Prime pour la Negociation Interactive en cas de Violation des Droits de la Personne)

    DTIC Science & Technology

    2010-12-01

    This report (DRDC Toronto No. CR 2010-055) describes the method of developing an E-Prime based computer simulation of an interactive Human Rights Violation (HRV) negotiation script, carried out in connection with Canadian Forces Base (CFB) Kingston. The computer simulation developed in this project is intended to be used for future research and as a possible training platform.

  19. A cluster randomized controlled trial of a brief tobacco cessation intervention for low-income communities in India: study protocol.

    PubMed

    Sarkar, Bidyut K; Shahab, Lion; Arora, Monika; Lorencatto, Fabiana; Reddy, K Srinath; West, Robert

    2014-03-01

    India has 275 million adult tobacco users and tobacco use is estimated to contribute to more than a million deaths in the country each year. There is an urgent need to develop and evaluate affordable, practicable and scalable interventions to promote cessation of tobacco use. Because tobacco use is so harmful, an increase of as little as 1 percentage point in long-term quit success rates can have an important public health impact. This protocol paper describes the rationale and methods of a large randomized controlled trial which aims to evaluate the effectiveness of a brief scalable smoking cessation intervention delivered by trained health professionals as an outreach programme in poor urban communities in India. This is a pragmatic, two-arm, community-based cluster randomized controlled trial focused on tobacco users in low-income communities. The treatment arm is a brief intervention comprising brief advice including training in craving control using simple yogic breathing exercises (BA-YBA) and the control arm is very brief advice (VBA). Of a total of 32 clusters, 16 will be allocated to the intervention arm and 16 to the control arm. Each cluster will have 31 participants, making a total of 992 participants. The primary outcome measure will follow the Russell Standard: self-report of sustained abstinence for at least 6 months following the intervention confirmed at the final follow-up by salivary cotinine. This trial will inform national and international policy on delivery of scalable and affordable brief outreach interventions to promote tobacco use cessation in low resource settings where tobacco users have limited access to physicians and medications. © 2014 Society for the Study of Addiction.

  20. Thin-film copper indium gallium selenide solar cell based on low-temperature all-printing process.

    PubMed

    Singh, Manjeet; Jiu, Jinting; Sugahara, Tohru; Suganuma, Katsuaki

    2014-09-24

    In the solar cell field, development of simple, low-cost, and low-temperature fabrication processes has become an important trend for energy-saving and environmental issues. Copper indium gallium selenide (CIGS) solar cells have attracted much attention due to the high absorption coefficient, tunable band gap energy, and high efficiency. However, vacuum and high-temperature processing in fabrication of solar cells have limited the applications. There is a strong need to develop simple and scalable methods. In this work, a CIGS solar cell based on all printing steps and low-temperature annealing is developed. CIGS absorber thin film is deposited by using dodecylamine-stabilized CIGS nanoparticle ink followed by printing buffer layer. Silver nanowire (AgNW) ink and sol-gel-derived ZnO precursor solution are used to prepare a highly conductive window layer ZnO/[AgNW/ZnO] electrode with a printing method that achieves 16 Ω/sq sheet resistance and 94% transparency. A CIGS solar cell based on all printing processes exhibits efficiency of 1.6% with open circuit voltage of 0.48 V, short circuit current density of 9.7 mA/cm(2), and fill factor of 0.34 for 200 nm thick CIGS film, fabricated under ambient conditions and annealed at 250 °C.

  1. Tracking delays in report availability caused by incorrect exam status with Web-based issue tracking: a quality initiative.

    PubMed

    Awan, Omer Abdulrehman; van Wagenberg, Frans; Daly, Mark; Safdar, Nabile; Nagy, Paul

    2011-04-01

    Many radiology information systems (RIS) cannot accept a final report from a dictation reporting system before the exam has been completed in the RIS by a technologist. A radiologist can still render a report in a reporting system once images are available, but the RIS and ancillary systems may not get the results because of the study's uncompleted status. This delay in completing the study caused an alarming number of delayed reports and was undetected by conventional RIS reporting techniques. We developed a Web-based reporting tool to monitor uncompleted exams and automatically page section supervisors when a report was being delayed by its incomplete status in the RIS. Institutional Review Board exemption was obtained. At four imaging centers, a Python script was developed to poll the dictation system every 10 min for exams in five different modalities that were signed by the radiologist but could not be sent to the RIS. This script logged the exams into an existing Web-based tracking tool using PHP and a MySQL database. The script also text-paged the modality supervisor. The script logged the time at which the report was finally sent, and statistics were aggregated onto a separate Web-based reporting tool. Over a 1-year period, the average number of uncompleted exams per month and time to problem resolution decreased at every imaging center and in almost every imaging modality. Automated feedback provides a vital link in improving technologist performance and patient care without assigning a human resource to manage report queues.
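    A minimal sketch of the polling-and-paging pattern described above, assuming hypothetical data sources: the query of the dictation system is stubbed out, SQLite stands in for the MySQL/PHP tracking tool, and the text page is represented by a print statement.

      import time
      import sqlite3   # stand-in for the MySQL backing store described above

      POLL_SECONDS = 600  # the described script polled every 10 minutes

      def fetch_stuck_reports():
          """Hypothetical query of the dictation system for signed reports that
          cannot be sent to the RIS because the exam is not marked complete.
          In a real deployment this would query the vendor's API or database."""
          return []  # e.g. [{"accession": "A123", "modality": "CT"}]

      def log_and_page(report, db):
          db.execute("INSERT OR IGNORE INTO stuck_exams (accession, modality, first_seen)"
                     " VALUES (?, ?, ?)", (report["accession"], report["modality"], time.time()))
          db.commit()
          # A real implementation would text-page the modality supervisor here.
          print(f"paging supervisor: {report['modality']} exam {report['accession']} is stuck")

      def main():
          db = sqlite3.connect("stuck_exams.db")
          db.execute("CREATE TABLE IF NOT EXISTS stuck_exams"
                     " (accession TEXT PRIMARY KEY, modality TEXT, first_seen REAL)")
          while True:
              for report in fetch_stuck_reports():
                  log_and_page(report, db)
              time.sleep(POLL_SECONDS)

      if __name__ == "__main__":
          main()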

  2. The Mechanics of CSCL Macro Scripts

    ERIC Educational Resources Information Center

    Dillenbourg, Pierre; Hong, Fabrice

    2008-01-01

    Macro scripts structure collaborative learning and foster the emergence of knowledge-productive interactions such as argumentation, explanations and mutual regulation. We propose a pedagogical model for the designing of scripts and illustrate this model using three scripts. In brief, a script disturbs the natural convergence of a team and in doing…

  3. Script Reforms--Are They Necessary?

    ERIC Educational Resources Information Center

    James, Gregory

    Script reform, the modification of an existing writing system, is often confused with script replacement, the substitution of one writing system for another. Turkish underwent the replacement of Arabic script by an adaptation of Roman script under Kemal Atatürk, but a similar replacement in Persian was rejected because of the high rate of existing literacy in…

  4. The Departmental Script as an Ongoing Conversation into the Phronesis of Teaching Science as Inquiry

    NASA Astrophysics Data System (ADS)

    Melville, Wayne; Campbell, Todd; Fazio, Xavier; Bartley, Anthony

    2012-12-01

    This article investigates the extent to which a science department script supports the teaching and learning of science as inquiry and how this script is translated into individual teachers' classrooms. This study was completed at one school in Canada which, since 2000, has developed a departmental script supportive of teaching and learning of science as inquiry. Through a mixed-method strategy, multiple data sources were drawn together to inform a cohesive narrative about scripts, science departments, and individual classrooms. Results of the study reveal three important findings: (1) the departmental script is not an artefact, but instead is an ongoing conversation into the episteme, techne and phronesis of science teaching; (2) the consistently reformed teaching practices that were observed lead us to believe that a departmental script has the capacity to enhance the teaching of science as inquiry; and, (3) the existence of a departmental script does not mean that teaching will be `standardized' in the bureaucratic sense of the word. Our findings indicate that a departmental script can be considered to concurrently operate as an epistemic script that is translated consistently across the classes, and a social script that was more open to interpretation within individual teachers' classrooms.

  5. Page segmentation using script identification vectors: A first look

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hochberg, J.; Cannon, M.; Kelly, P.

    1997-07-01

    Document images in which different scripts, such as Chinese and Roman, appear on a single page pose a problem for optical character recognition (OCR) systems. This paper explores the use of script identification vectors in the analysis of multilingual document images. A script identification vector is calculated for each connected component in a document. The vector expresses the closest distance between the component and templates developed for each of thirteen scripts, including Arabic, Chinese, Cyrillic, and Roman. The authors calculate the first three principal components within the resulting thirteen-dimensional space for each image. By mapping these components to red, green, and blue, they can visualize the information contained in the script identification vectors. The visualization of several multilingual images suggests that the script identification vectors can be used to segment images into script-specific regions as large as several paragraphs or as small as a few characters. The visualized vectors also reveal distinctions within scripts, such as font in Roman documents, and kanji vs. kana in Japanese. Results are best for documents containing highly dissimilar scripts such as Roman and Japanese. Documents containing similar scripts, such as Roman and Cyrillic, will require further investigation.

  6. Living in history and living by the cultural life script: How older Germans date their autobiographical memories.

    PubMed

    Bohn, Annette; Habermas, Tilmann

    2016-01-01

    This study examines predictions from two theories on the organisation of autobiographical memory: Cultural Life Script Theory, which conceptualises the organisation of autobiographical memory by cultural schemata, and Transition Theory, which proposes that people organise their memories in relation to personal events that changed the fabric of their daily lives, or in relation to negative collective public transitions, called the Living-in-History effect. Predictions from both theories were tested in forty-eight older Germans from Berlin and Northern Germany. We tested whether the Living-in-History effect exists for both negative (the Second World War) and positive (the fall of the Berlin Wall) collectively experienced events, and whether cultural life script events serve as a prominent strategy to date personal memories. Results showed a powerful, long-lasting Living-in-History effect for the negative, but not the positive, event. Berlin participants dated 26% of their memories in relation to the Second World War. Supporting cultural life script theory, life script events were frequently used to date personal memories. This provides evidence that people use a combination of culturally transmitted knowledge and knowledge based on personal experience to navigate through their autobiographical memories, and that experiencing war has a lasting impact on the organisation of autobiographical memories across the life span.

  7. Trainable multiscript orientation detection

    NASA Astrophysics Data System (ADS)

    Van Beusekom, Joost; Rangoni, Yves; Breuel, Thomas M.

    2010-01-01

    Detecting the correct orientation of document images is an important step in large-scale digitization processes, as most subsequent document analysis and optical character recognition methods assume an upright position of the document page. Many methods have been proposed to solve the problem, most of which are based on computing the ascender-to-descender ratio. Unfortunately, this cannot be used for scripts that have neither ascenders nor descenders. Therefore, we present a trainable method that uses character similarity to compute the correct orientation. A connected-component-based distance measure is computed to compare the characters of the document image to characters whose orientation is known. The orientation for which this distance is lowest is selected as the correct orientation. Training is easily achieved by replacing the reference characters with characters of the script to be analyzed. Evaluation of the proposed approach showed an accuracy above 99% for Latin and Japanese script from the public UW-III and UW-II datasets. An accuracy of 98.9% was obtained for Fraktur on a non-public dataset. Comparison of the proposed method to two methods using ascender/descender ratio based orientation detection shows a significant improvement.
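    As a much-simplified, hedged sketch of the underlying idea (not the trainable distance measure evaluated above), the Python code below rotates connected-component images through the four page orientations and picks the one whose components best match known-upright reference characters; the toy glyphs and the mean-squared-difference score are assumptions made for illustration.

      import numpy as np

      def component_distance(component, references):
          """Smallest mean-squared difference between a (binarized, size-normalized)
          component image and any reference character image."""
          return min(float(np.mean((component - ref) ** 2)) for ref in references)

      def detect_orientation(components, references):
          """Score the four candidate page orientations: rotate every component by
          0/90/180/270 degrees and pick the rotation whose components are, on
          average, closest to the known-upright reference characters."""
          scores = {}
          for k in range(4):                              # k * 90 degrees
              rotated = [np.rot90(c, k) for c in components]
              scores[k * 90] = np.mean([component_distance(r, references) for r in rotated])
          return min(scores, key=scores.get), scores

      if __name__ == "__main__":
          # Toy 8x8 binary "glyphs"; a real system would extract and normalize
          # connected components from the scanned page instead.
          rng = np.random.default_rng(1)
          references = [(rng.random((8, 8)) > 0.6).astype(float) for _ in range(5)]
          page_components = [np.rot90(ref, 2) for ref in references]  # page scanned upside down
          best, _ = detect_orientation(page_components, references)
          print("estimated rotation needed:", best, "degrees")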

  8. Green and scalable production of colloidal perovskite nanocrystals and transparent sols by a controlled self-collection process

    NASA Astrophysics Data System (ADS)

    Liu, Shuangyi; Huang, Limin; Li, Wanlu; Liu, Xiaohua; Jing, Shui; Li, Jackie; O'Brien, Stephen

    2015-07-01

    Colloidal perovskite oxide nanocrystals have attracted a great deal of interest owing to the ability to tune physical properties by virtue of the nanoscale, and generate thin film structures under mild chemical conditions, relying on self-assembly or heterogeneous mixing. This is particularly true for ferroelectric/dielectric perovskite oxide materials, for which device applications cover piezoelectrics, MEMS, memory, gate dielectrics and energy storage. The synthesis of complex oxide nanocrystals, however, continues to present issues pertaining to quality, yield, % crystallinity, purity and may also suffer from tedious separation and purification processes, which are disadvantageous to scaling production. We report a simple, green and scalable "self-collection" growth method that produces uniform and aggregate-free colloidal perovskite oxide nanocrystals including BaTiO3 (BT), BaxSr1-xTiO3 (BST) and quaternary oxide BaSrTiHfO3 (BSTH) in high crystallinity and high purity. The synthesis approach is solution processed, based on the sol-gel transformation of metal alkoxides in alcohol solvents with controlled or stoichiometric amounts of water and in the stark absence of surfactants and stabilizers, providing pure colloidal nanocrystals in a remarkably low temperature range (15 °C-55 °C). Under a static condition, the nanoscale hydrolysis of the metal alkoxides accomplishes a complete transformation to fully crystallized single domain perovskite nanocrystals with a passivated surface layer of hydroxyl/alkyl groups, such that the as-synthesized nanocrystals can exist in the form of a super-stable and transparent sol, or self-accumulate to form a highly crystalline solid gel monolith of nearly 100% yield for easy separation/purification. The process produces high purity ligand-free nanocrystals with excellent dispersibility in polar solvents, with no impurity remaining in the mother solution other than trace alcohol byproducts (such as isopropanol). The afforded stable and transparent suspension/solution can be treated as an ink, suitable for printing or spin/spray coating, demonstrating great capabilities of this process for the fabrication of high performance dielectric thin films. The simple "self-collection" strategy can be described as green and scalable due to the simplified procedure from synthesis to separation/purification, minimum waste generation, and near room temperature crystallization of nanocrystal products with tunable sizes in extremely high yield and high purity.

  9. Grid Generation Techniques Utilizing the Volume Grid Manipulator

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    1998-01-01

    This paper presents grid generation techniques available in the Volume Grid Manipulation (VGM) code. The VGM code is designed to manipulate existing line, surface and volume grids to improve the quality of the data. It embodies an easy to read rich language of commands that enables such alterations as topology changes, grid adaption and smoothing. Additionally, the VGM code can be used to construct simplified straight lines, splines, and conic sections which are common curves used in the generation and manipulation of points, lines, surfaces and volumes (i.e., grid data). These simple geometric curves are essential in the construction of domain discretizations for computational fluid dynamic simulations. By comparison to previously established methods of generating these curves interactively, the VGM code provides control of slope continuity and grid point-to-point stretchings as well as quick changes in the controlling parameters. The VGM code offers the capability to couple the generation of these geometries with an extensive manipulation methodology in a scripting language. The scripting language allows parametric studies of a vehicle geometry to be efficiently performed to evaluate favorable trends in the design process. As examples of the powerful capabilities of the VGM code, a wake flow field domain will be appended to an existing X33 Venturestar volume grid; negative volumes resulting from grid expansions to enable flow field capture on a simple geometry, will be corrected; and geometrical changes to a vehicle component of the X33 Venturestar will be shown.

  10. Scalable and Axiomatic Ranking of Network Role Similarity

    PubMed Central

    Jin, Ruoming; Lee, Victor E.; Li, Longjie

    2014-01-01

    A key task in analyzing social networks and other complex networks is role analysis: describing and categorizing nodes according to how they interact with other nodes. Two nodes have the same role if they interact with equivalent sets of neighbors. The most fundamental role equivalence is automorphic equivalence. Unfortunately, the fastest algorithms known for graph automorphism are nonpolynomial. Moreover, since exact equivalence is rare, a more meaningful task is measuring the role similarity between any two nodes. This task is closely related to the structural or link-based similarity problem that SimRank addresses. However, SimRank and other existing similarity measures are not sufficient because they do not guarantee to recognize automorphically or structurally equivalent nodes. This paper makes two contributions. First, we present and justify several axiomatic properties necessary for a role similarity measure or metric. Second, we present RoleSim, a new similarity metric which satisfies these axioms and which can be computed with a simple iterative algorithm. We rigorously prove that RoleSim satisfies all these axiomatic properties. We also introduce Iceberg RoleSim, a scalable algorithm which discovers all pairs with RoleSim scores above a user-defined threshold θ. We demonstrate the interpretative power of RoleSim on both synthetic and real datasets. PMID:25383066
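    A hedged sketch of the iterative structure of such a role-similarity computation (not the exact RoleSim or Iceberg RoleSim algorithms): the update below matches the neighbors of two nodes and mixes the matched similarity with a decay term, using a greedy matching as a simplified stand-in for the maximal weighted matching the paper's metric requires.

      import numpy as np

      def greedy_match_weight(weights):
          """Greedy stand-in for the maximal weighted matching used by the exact
          update: repeatedly pick the heaviest still-available neighbor pair."""
          pairs = sorted(((w, i, j) for (i, j), w in np.ndenumerate(weights)), reverse=True)
          used_rows, used_cols, total = set(), set(), 0.0
          for w, i, j in pairs:
              if i not in used_rows and j not in used_cols:
                  used_rows.add(i)
                  used_cols.add(j)
                  total += float(w)
          return total

      def role_similarity(adj, beta=0.15, iterations=10):
          """Iterative role similarity in the spirit of RoleSim: two nodes are
          similar if their neighbors can be matched to pairwise-similar neighbors."""
          n = len(adj)
          neighbors = [np.flatnonzero(adj[v]) for v in range(n)]
          sim = np.ones((n, n))
          for _ in range(iterations):
              new = np.ones((n, n))  # pairs of isolated nodes keep similarity 1
              for u in range(n):
                  for v in range(n):
                      denom = max(len(neighbors[u]), len(neighbors[v]))
                      if denom == 0:
                          continue
                      weights = sim[np.ix_(neighbors[u], neighbors[v])]
                      new[u, v] = (1 - beta) * greedy_match_weight(weights) / denom + beta
              sim = new
          return sim

      if __name__ == "__main__":
          # Path graph 0-1-2-3: the end nodes share a role, as do the middle nodes.
          adj = np.array([[0, 1, 0, 0],
                          [1, 0, 1, 0],
                          [0, 1, 0, 1],
                          [0, 0, 1, 0]])
          print(np.round(role_similarity(adj), 3))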

  11. Graphene Inks with Cellulosic Dispersants: Development and Applications for Printed Electronics

    NASA Astrophysics Data System (ADS)

    Secor, Ethan Benjamin

    Graphene offers promising opportunities for applications in printed and flexible electronic devices due to its high electrical and thermal conductivity, mechanical flexibility and strength, and chemical and environmental stability. However, scalable production and processing of graphene presents a critical technological challenge preventing the application of graphene for flexible electronic interconnects, electrochemical energy storage, and chemically robust electrical contacts. In this thesis, a promising and versatile platform for the production, patterning, and application of graphene inks is presented based on cellulosic dispersants. Graphene is produced from flake graphite using scalable liquid-phase exfoliation methods, using the polymers ethyl cellulose and nitrocellulose as multifunctional dispersing agents. These cellulose derivatives offer high colloidal stability and broadly tunable rheology for graphene dispersions, providing an effective and tunable platform for graphene ink development. Thermal or photonic annealing decomposes the polymer dispersant to yield high conductivity, flexible graphene patterns for various electronics applications. In particular, the chemical stability of graphene enables robust electrical contacts for ceramic, metallic, organic and electrolytic materials, validating the diverse applicability of graphene in printed electronics. Overall, the strategy for graphene ink design presented here offers a simple, efficient, and versatile method for integrating graphene in a wide range of printed devices and systems, providing both fundamental insight for nanomaterial ink development and realistic opportunities for practical applications.

  12. Intense visible emission from ZnO/PAAX (X = H or Na) nanocomposite synthesized via a simple and scalable sol-gel method

    NASA Astrophysics Data System (ADS)

    Zhu, Y.; Apostoluk, A.; Gautier, P.; Valette, A.; Omar, L.; Cornier, T.; Bluet, J. M.; Masenelli-Varlot, K.; Daniele, S.; Masenelli, B.

    2016-03-01

    Intense visible nano-emitters are key objects for many technologies such as single-photon sources, bio-labels or energy converters. Chalcogenide nanocrystals have ruled this domain for several decades. However, there is a demand for cheaper and less toxic materials. In this context, ZnO nanoparticles have appeared as potential candidates. At the nanoscale, they exhibit crystalline defects which can generate intense visible emission. However, even though photoluminescence quantum yields as high as 60% have been reported, it remains challenging to obtain a quantum yield of that order of magnitude that stays stable over a long period. To this end, we present hybrid ZnO/polyacrylic acid (PAAH) nanocomposites, obtained from the hydrolysis of diethylzinc in the presence of PAAH, exhibiting quantum yields systematically larger than 20%. By optimizing the nature and properties of the polymeric acid, the quantum yield is increased up to 70% and remains stable over months. This enhancement is explained by a model based on the hybrid type II heterostructure formed by ZnO/PAAH. The addition of PAAX (X = H or Na) during the hydrolysis of ZnEt2 represents a cost-effective method to synthesize scalable amounts of highly luminescent ZnO/PAAX nanocomposites.

  13. Scuba: scalable kernel-based gene prioritization.

    PubMed

    Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio

    2018-01-25

    The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge; however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large scale predictions are required. Importantly, it is able to efficiently deal with both a large number of candidate genes and an arbitrary number of data sources. As a direct consequence of scalability, Scuba also integrates a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when input data is highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba.

  14. The MOLGENIS toolkit: rapid prototyping of biosoftware at the push of a button

    PubMed Central

    2010-01-01

    Background There is a huge demand on bioinformaticians to provide their biologists with user friendly and scalable software infrastructures to capture, exchange, and exploit the unprecedented amounts of new *omics data. We here present MOLGENIS, a generic, open source, software toolkit to quickly produce the bespoke MOLecular GENetics Information Systems needed. Methods The MOLGENIS toolkit provides bioinformaticians with a simple language to model biological data structures and user interfaces. At the push of a button, MOLGENIS’ generator suite automatically translates these models into a feature-rich, ready-to-use web application including database, user interfaces, exchange formats, and scriptable interfaces. Each generator is a template of SQL, JAVA, R, or HTML code that would require much effort to write by hand. This ‘model-driven’ method ensures reuse of best practices and improves quality because the modeling language and generators are shared between all MOLGENIS applications, so that errors are found quickly and improvements are shared easily by a re-generation. A plug-in mechanism ensures that both the generator suite and generated product can be customized just as much as hand-written software. Results In recent years we have successfully evaluated the MOLGENIS toolkit for the rapid prototyping of many types of biomedical applications, including next-generation sequencing, GWAS, QTL, proteomics and biobanking. Writing 500 lines of model XML typically replaces 15,000 lines of hand-written programming code, which allows for quick adaptation if the information system is not yet to the biologist’s satisfaction. Each application generated with MOLGENIS comes with an optimized database back-end, user interfaces for biologists to manage and exploit their data, programming interfaces for bioinformaticians to script analysis tools in R, Java, SOAP, REST/JSON and RDF, a tab-delimited file format to ease upload and exchange of data, and detailed technical documentation. Existing databases can be quickly enhanced with MOLGENIS generated interfaces using the ‘ExtractModel’ procedure. Conclusions The MOLGENIS toolkit provides bioinformaticians with a simple model to quickly generate flexible web platforms for all possible genomic, molecular and phenotypic experiments with a richness of interfaces not provided by other tools. All the software and manuals are available free as LGPLv3 open source at http://www.molgenis.org. PMID:21210979

  15. Dr.LiTHO: a development and research lithography simulator

    NASA Astrophysics Data System (ADS)

    Fühner, Tim; Schnattinger, Thomas; Ardelean, Gheorghe; Erdmann, Andreas

    2007-03-01

    This paper introduces Dr.LiTHO, a research and development oriented lithography simulation environment developed at Fraunhofer IISB to flexibly integrate our simulation models into one coherent platform. We propose a light-weight approach to a lithography simulation environment: the use of a scripting (batch) language as an integration platform. Out of the great variety of different scripting languages, Python proved superior in many ways: it exhibits a good-natured learning curve, it is efficient, available on virtually any platform, and provides sophisticated integration mechanisms for existing programs. In this paper, we will describe the steps required to provide Python bindings for existing programs and to finally generate an integrated simulation environment. In addition, we will give a short introduction into selected software design demands associated with the development of such a framework. We will especially focus on testing and (both technical and user-oriented) documentation issues. Dr.LiTHO Python files contain not only all simulation parameter settings but also the simulation flow, providing maximum flexibility. In addition to relatively simple batch jobs, repetitive tasks can be pooled in libraries. And as Python is a full-blown programming language, users can add virtually any functionality, which is especially useful in the scope of simulation studies or optimization tasks that often require masses of evaluations. Furthermore, we will give a short overview of the numerous existing Python packages. Several examples demonstrate the feasibility and productiveness of integrating Python packages into custom Dr.LiTHO scripts.

  16. [Script crossing in scanning electron microscopy].

    PubMed

    Oehmichen, M; von Kortzfleisch, D; Hegner, B

    1989-01-01

    A case of mixed script in which ball-point pen ink was contaminated with typewriting prompted a survey of the literature and a systematic SEM study of mixed script produced with various writing instruments or inks. Mixed scripts produced with the following instruments or inks were investigated: pencil, ink/India ink, ball-point pen, felt-tip pen, copied script and typewriter. This investigation showed SEM to be the method of choice for visualizing overlying scripts produced by different writing instruments or inks.

  17. Formatting scripts with computers and Extended BASIC.

    PubMed

    Menning, C B

    1984-02-01

    A computer program, written in the language of Extended BASIC, is presented which enables scripts, for educational media, to be quickly written in a nearly unformatted style. From the resulting script file, stored on magnetic tape or disk, the computer program formats the script into either a storyboard, a presentation, or a narrator's script. Script headings and page and paragraph numbers are automatic features in the word processing. Suggestions are given for making personal modifications to the computer program.

  18. Efficient generation of perfluoroalkyl radicals from sodium perfluoroalkanesulfinates and a hypervalent iodine(iii) reagent: mild, metal-free synthesis of perfluoroalkylated organic molecules.

    PubMed

    Sakamoto, Ryu; Kashiwagi, Hirotaka; Selvakumar, Sermadurai; Moteki, Shin A; Maruoka, Keiji

    2016-07-06

    This article describes an efficient method for the introduction of perfluoroalkyl groups into N-acrylamides, 2-isocyanides, olefins, and other heterocycles using perfluoroalkyl radicals that were generated from the reaction between sodium perfluoroalkanesulfinates and a hypervalent iodine(iii) reagent. This approach represents a simple, scalable perfluoroalkylation method under mild and metal-free conditions.

  19. Poly(oligoethylene glycol methacrylate) dip-coating: turning cellulose paper into a protein-repellent platform for biosensors.

    PubMed

    Deng, Xudong; Smeets, Niels M B; Sicard, Clémence; Wang, Jingyun; Brennan, John D; Filipe, Carlos D M; Hoare, Todd

    2014-09-17

    The passivation of nonspecific protein adsorption to paper is a major barrier to the use of paper as a platform for microfluidic bioassays. Herein we describe a simple, scalable protocol based on adsorption and cross-linking of poly(oligoethylene glycol methacrylate) (POEGMA) derivatives that reduces nonspecific adsorption of a range of proteins to filter paper by at least 1 order of magnitude without significantly changing the fiber morphology or paper macroporosity. A lateral-flow test strip coated with POEGMA facilitates effective protein transport while also confining the colorimetric reporting signal for easier detection, giving improved performance relative to bovine serum albumin (BSA)-blocked paper. Enzyme-linked immunosorbent assays based on POEGMA-coated paper also achieve lower blank values, higher sensitivities, and lower detection limits relative to ones based on paper blocked with BSA or skim milk. We anticipate that POEGMA-coated paper can function as a platform for the design of portable, disposable, and low-cost paper-based biosensors.

  20. Designing of smart home automation system based on Raspberry Pi

    NASA Astrophysics Data System (ADS)

    Saini, Ravi Prakash; Singh, Bhanu Pratap; Sharma, Mahesh Kumar; Wattanawisuth, Nattapol; Leeprechanon, Nopbhorn

    2016-03-01

    Locally networked or remotely controlled home automation systems have become a popular paradigm because of their numerous advantages and are suitable for academic research. This paper proposes a method for implementing a Raspberry Pi based home automation system with an Android phone access interface. The power consumption profile across the connected load is measured accurately through programming. Users can access the graph of total power consumption with respect to time from anywhere in the world using their Dropbox account. An Android application has been developed to channel the remote monitoring and control of home appliances. This application controls the operating pins of the Raspberry Pi: pressing the corresponding key turns any desired appliance "on" or "off". Systems can range from simple room lighting control to smart microcontroller-based hybrid systems incorporating several additional features. Smart home automation systems are being adopted to achieve flexibility, scalability, security in the sense of data protection through a cloud-based data storage protocol, reliability, energy efficiency, etc.
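    A minimal sketch of the appliance-switching step on the Raspberry Pi, assuming the standard RPi.GPIO library and hypothetical relay wiring (the pin numbers and appliance names are placeholders); the Android interface, power logging and Dropbox upload described above are not shown.

      import time
      import RPi.GPIO as GPIO   # standard GPIO library on Raspberry Pi OS

      # Hypothetical mapping of appliances to BCM pin numbers driving relay channels;
      # the actual wiring of the described system is not specified here.
      APPLIANCES = {"lamp": 17, "fan": 27}

      def setup():
          GPIO.setmode(GPIO.BCM)
          for pin in APPLIANCES.values():
              GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

      def switch(appliance, on):
          """Turn a single appliance on or off, mirroring the on/off keys exposed
          by the Android interface described above."""
          GPIO.output(APPLIANCES[appliance], GPIO.HIGH if on else GPIO.LOW)

      if __name__ == "__main__":
          setup()
          try:
              switch("lamp", True)
              time.sleep(5)          # keep the lamp on for five seconds
              switch("lamp", False)
          finally:
              GPIO.cleanup()         # release the pins on exit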

Top