Sample records for open source digital

  1. Open Access, Open Source and Digital Libraries: A Current Trend in University Libraries around the World

    ERIC Educational Resources Information Center

    Krishnamurthy, M.

    2008-01-01

    Purpose: The purpose of this paper is to describe the open access and open source movement in the digital library world. Design/methodology/approach: A review of key developments in the open access and open source movement is provided. Findings: Open source software and open access to research findings are of great use to scholars in developing…

  2. Digital Preservation in Open-Source Digital Library Software

    ERIC Educational Resources Information Center

    Madalli, Devika P.; Barve, Sunita; Amin, Saiful

    2012-01-01

    Digital archives and digital library projects are being initiated all over the world for materials of different formats and domains. To organize, store, and retrieve digital content, many libraries as well as archiving centers are using either proprietary or open-source software. While it is accepted that print media can survive for centuries with…

  3. Open source tools for management and archiving of digital microscopy data to allow integration with patient pathology and treatment information.

    PubMed

    Khushi, Matloob; Edwards, Georgina; de Marcos, Diego Alonso; Carpenter, Jane E; Graham, J Dinny; Clarke, Christine L

    2013-02-12

    Virtual microscopy includes digitisation of histology slides and the use of computer technologies for complex investigation of diseases such as cancer. However, automated image analysis, or website publishing of such digital images, is hampered by their large file sizes. We have developed two Java-based open source tools: Snapshot Creator and NDPI-Splitter. Snapshot Creator converts a portion of a large digital slide into a desired-quality JPEG image. The image is linked to the patient's clinical and treatment information in a customised open source cancer data management software (Caisis) in use at the Australian Breast Cancer Tissue Bank (ABCTB) and then published on the ABCTB website (http://www.abctb.org.au) using Deep Zoom open source technology. Using the ABCTB online search engine, digital images can be searched by defining various criteria such as cancer type, or biomarkers expressed. NDPI-Splitter splits a large image file into smaller sections of TIFF images so that they can be easily analysed by image analysis software such as Metamorph or MATLAB. NDPI-Splitter also has the capacity to filter out empty images. Snapshot Creator and NDPI-Splitter are novel open source Java tools. They convert digital slides into files of smaller size for further processing. In conjunction with other open source tools such as Deep Zoom and Caisis, this suite of tools is used for the management and archiving of digital microscopy images, enabling digitised images to be explored and zoomed online. Our online image repository also has the capacity to be used as a teaching resource. These tools also enable large files to be sectioned for image analysis. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5330903258483934.
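The tile-splitting idea behind NDPI-Splitter can be sketched in a few lines of Python. This is a hypothetical illustration of the coordinate arithmetic any such splitter must perform, not the tool's actual Java code; the function name and tile sizes are invented for the example.

```python
# Illustrative sketch of splitting a large slide image into fixed-size
# tiles: compute the bounding box of every tile covering the image.
# NOT NDPI-Splitter's actual implementation, just the core arithmetic.

def tile_boxes(width, height, tile_w, tile_h):
    """Return (left, upper, right, lower) boxes covering the image.

    Edge tiles are clipped to the image bounds, so the last row/column
    may be smaller than tile_w x tile_h.
    """
    boxes = []
    for top in range(0, height, tile_h):
        for left in range(0, width, tile_w):
            right = min(left + tile_w, width)
            lower = min(top + tile_h, height)
            boxes.append((left, top, right, lower))
    return boxes

if __name__ == "__main__":
    # A 100000 x 80000 pixel virtual slide split into 4096-pixel tiles:
    boxes = tile_boxes(100_000, 80_000, 4096, 4096)
    print(len(boxes))  # 25 columns x 20 rows = 500 tiles
```

Each box could then be handed to an image library's crop call and written out as a separate TIFF, which is the step that makes the pieces small enough for tools like Metamorph or MATLAB.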

  4. Open Source Software and the Intellectual Commons.

    ERIC Educational Resources Information Center

    Dorman, David

    2002-01-01

    Discusses the Open Source Software method of software development and its relationship to control over information content. Topics include digital library resources; reference services; preservation; the legal and economic status of information; technical standards; access to digital data; control of information use; and copyright and patent laws.…

  5. DigitalVHI--a freeware open-source software application to capture the Voice Handicap Index and other questionnaire data in various languages.

    PubMed

    Herbst, Christian T; Oh, Jinook; Vydrová, Jitka; Švec, Jan G

    2015-07-01

    In this short report we introduce DigitalVHI, a free open-source software application for obtaining Voice Handicap Index (VHI) and other questionnaire data, which can be put on a computer in clinics and used in clinical practice. The software can simplify performing clinical studies since it makes the VHI scores directly available for analysis in a digital form. It can be downloaded from http://www.christian-herbst.org/DigitalVHI/.

  6. Open source tools for management and archiving of digital microscopy data to allow integration with patient pathology and treatment information

    PubMed Central

    2013-01-01

    Background: Virtual microscopy includes digitisation of histology slides and the use of computer technologies for complex investigation of diseases such as cancer. However, automated image analysis, or website publishing of such digital images, is hampered by their large file sizes. Results: We have developed two Java-based open source tools: Snapshot Creator and NDPI-Splitter. Snapshot Creator converts a portion of a large digital slide into a desired-quality JPEG image. The image is linked to the patient’s clinical and treatment information in a customised open source cancer data management software (Caisis) in use at the Australian Breast Cancer Tissue Bank (ABCTB) and then published on the ABCTB website (http://www.abctb.org.au) using Deep Zoom open source technology. Using the ABCTB online search engine, digital images can be searched by defining various criteria such as cancer type, or biomarkers expressed. NDPI-Splitter splits a large image file into smaller sections of TIFF images so that they can be easily analysed by image analysis software such as Metamorph or MATLAB. NDPI-Splitter also has the capacity to filter out empty images. Conclusions: Snapshot Creator and NDPI-Splitter are novel open source Java tools. They convert digital slides into files of smaller size for further processing. In conjunction with other open source tools such as Deep Zoom and Caisis, this suite of tools is used for the management and archiving of digital microscopy images, enabling digitised images to be explored and zoomed online. Our online image repository also has the capacity to be used as a teaching resource. These tools also enable large files to be sectioned for image analysis. Virtual Slides: The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5330903258483934 PMID:23402499

  7. Preliminary geologic map of the Piru 7.5' quadrangle, southern California: a digital database

    USGS Publications Warehouse

    Yerkes, R.F.; Campbell, Russell H.

    1995-01-01

    This Open-File report is a digital geologic map database. This pamphlet serves to introduce and describe the digital data. There is no paper map included in the Open-File report. This digital map database is compiled from previously published sources combined with some new mapping and modifications in nomenclature. The geologic map database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. For detailed descriptions of the units, their stratigraphic relations, and sources of geologic mapping, consult Yerkes and Campbell (1995). More specific information about the units may be available in the original sources.

  8. Open Standards, Open Source, and Open Innovation: Harnessing the Benefits of Openness

    ERIC Educational Resources Information Center

    Committee for Economic Development, 2006

    2006-01-01

    Digitization of information and the Internet have profoundly expanded the capacity for openness. This report details the benefits of openness in three areas--open standards, open-source software, and open innovation--and examines the major issues in the debate over whether openness should be encouraged or not. The report explains each of these…

  9. Improving Software Sustainability: Lessons Learned from Profiles in Science.

    PubMed

    Gallagher, Marie E

    2013-01-01

    The Profiles in Science® digital library features digitized surrogates of historical items selected from the archival collections of the U.S. National Library of Medicine as well as collaborating institutions. In addition, it contains a database of descriptive, technical and administrative metadata. It also contains various software components that allow creation of the metadata, management of the digital items, and access to the items and metadata through the Profiles in Science Web site [1]. The choices made in building the digital library were designed to maximize the sustainability and long-term survival of all of the components of the digital library [2]. For example, selecting standard and open digital file formats rather than proprietary formats increases the sustainability of the digital files [3]. Correspondingly, using non-proprietary software may improve the sustainability of the software--either through in-house expertise or through the open source community. Limiting our digital library software exclusively to open source software or to software developed in-house has not been feasible. For example, we have used proprietary operating systems, scanning software, a search engine, and office productivity software. We did this when either lack of essential capabilities or the cost-benefit trade-off favored using proprietary software. We also did so knowing that in the future we would need to replace or upgrade some of our proprietary software, analogous to migrating from an obsolete digital file format to a new format as the technological landscape changes. Since our digital library's start in 1998, all of its software has been upgraded or replaced, but the digitized items have not yet required migration to other formats. Technological changes that compelled us to replace proprietary software included the cost of product licensing, product support, incompatibility with other software, prohibited use due to evolving security policies, and product abandonment. Sometimes these changes happen on short notice, so we continually monitor our library's software for signs of endangerment. We have attempted to replace proprietary software with suitable in-house or open source software. When the replacement involves a standalone piece of software with a nearly equivalent version, such as replacing a commercial HTTP server with an open source HTTP server, the replacement is straightforward. Recently we replaced software that functioned not only as our search engine but also as the backbone of the architecture of our Web site. In this paper, we describe the lessons learned and the pros and cons of replacing this software with open source software.

  10. The Setup Phase of Project Open Book: A Report to the Commission on Preservation and Access on the Status of an Effort to Convert Microfilm to Digital Imagery.

    ERIC Educational Resources Information Center

    Conway, Paul; Weaver, Shari

    1994-01-01

    This report documents the second phase of Yale University's Project Open Book, which explored the uses of digital technology for preservation of and access to deteriorating documents. Highlights include preconditions for project implementation; quality digital conversion; characteristics of source materials; digital document indexing; workflow…

  11. Cyberscience and the Knowledge-Based Economy. Open Access and Trade Publishing: From Contradiction to Compatibility with Non-Exclusive Copyright Licensing

    ERIC Educational Resources Information Center

    Armbruster, Chris

    2008-01-01

    Open source, open content and open access are set to fundamentally alter the conditions of knowledge production and distribution. Open source, open content and open access are also the most tangible result of the shift towards e-science and digital networking. Yet, widespread misperceptions exist about the impact of this shift on knowledge…

  12. Better value digital health: the medium, the market and the role of openness.

    PubMed

    Reynolds, Carl J

    2013-08-01

    The recent NHS 'hack days' have showcased the enthusiasm and talent of the junior doctor body as well as the potential of open source, open governance and small-medium enterprise. There still remains much scope for developing better value digital health services within the NHS. This article sets out the current state of NHS information technology (IT), how it fails to meet the needs of patients and professionals alike and suggests how better value digital health can be achieved.

  13. FIA: An Open Forensic Integration Architecture for Composing Digital Evidence

    NASA Astrophysics Data System (ADS)

    Raghavan, Sriram; Clark, Andrew; Mohay, George

    The analysis and value of digital evidence in an investigation has been the domain of discourse in the digital forensic community for several years. While many works have considered different approaches to model digital evidence, a comprehensive understanding of the process of merging different evidence items recovered during a forensic analysis is still a distant dream. With the advent of modern technologies, pro-active measures are integral to keeping abreast of all forms of cyber crimes and attacks. This paper motivates the need to formalize the process of analyzing digital evidence from multiple sources simultaneously. We present the forensic integration architecture (FIA), which provides a framework for abstracting the evidence source and storage format information from digital evidence and explores the concept of integrating evidence information from multiple sources. The FIA architecture identifies evidence information from multiple sources, enabling an investigator to build theories to reconstruct the past. FIA is hierarchically composed of multiple layers and adopts a technology-independent approach. FIA is also open and extensible, making it simple to adapt to technological changes. We present a case study using a hypothetical car theft case to demonstrate the concepts and illustrate the value the architecture brings to the field.

  14. From OSS CAD to BIM for Cultural Heritage Digital Representation

    NASA Astrophysics Data System (ADS)

    Logothetis, S.; Karachaliou, E.; Stylianidis, E.

    2017-02-01

    The paper illustrates the use of open source Computer-aided design (CAD) environments in order to develop Building Information Modelling (BIM) tools able to manage 3D models in the field of cultural heritage. Nowadays, the development of Free and Open Source Software (FOSS) has been rapidly growing and their use tends to be consolidated. Although BIM technology is widely known and used, there is a lack of integrated open source platforms able to support all stages of Historic Building Information Modelling (HBIM) processes. The present research aims to use a FOSS CAD environment in order to develop BIM plug-ins which will be able to import and edit digital representations of cultural heritage models derived by photogrammetric methods.

  15. Open Education and the Open Science Economy

    ERIC Educational Resources Information Center

    Peters, Michael A.

    2009-01-01

    Openness as a complex code word for a variety of digital trends and movements has emerged as an alternative mode of "social production" based on the growing and overlapping complexities of open source, open access, open archiving, open publishing, and open science. This paper argues that the openness movement with its reinforcing structure of…

  16. Managing Digital Archives Using Open Source Software Tools

    NASA Astrophysics Data System (ADS)

    Barve, S.; Dongare, S.

    2007-10-01

    This paper describes the use of open source software tools such as MySQL and PHP for creating database-backed websites. Such websites offer many advantages over ones built from static HTML pages. This paper discusses how these OSS tools were used and the benefits they provided, and how, after their successful implementation, the library took the initiative to implement an institutional repository using the DSpace open source software.

  17. The Digital Slide Archive: A Software Platform for Management, Integration, and Analysis of Histology for Cancer Research.

    PubMed

    Gutman, David A; Khalilia, Mohammed; Lee, Sanghoon; Nalisnik, Michael; Mullen, Zach; Beezley, Jonathan; Chittajallu, Deepak R; Manthey, David; Cooper, Lee A D

    2017-11-01

    Tissue-based cancer studies can generate large amounts of histology data in the form of glass slides. These slides contain important diagnostic, prognostic, and biological information and can be digitized into expansive and high-resolution whole-slide images using slide-scanning devices. Effectively utilizing digital pathology data in cancer research requires the ability to manage, visualize, share, and perform quantitative analysis on these large amounts of image data, tasks that are often complex and difficult for investigators with the current state of commercial digital pathology software. In this article, we describe the Digital Slide Archive (DSA), an open-source web-based platform for digital pathology. DSA allows investigators to manage large collections of histologic images and integrate them with clinical and genomic metadata. The open-source model enables DSA to be extended to provide additional capabilities. Cancer Res; 77(21); e75-78. ©2017 American Association for Cancer Research (AACR).

  18. An open-source computational and data resource to analyze digital maps of immunopeptidomes

    DOE PAGES

    Caron, Etienne; Espona, Lucia; Kowalewski, Daniel J.; ...

    2015-07-08

    We present a novel mass spectrometry-based high-throughput workflow and an open-source computational and data resource to reproducibly identify and quantify HLA-associated peptides. Collectively, the resources support the generation of HLA allele-specific peptide assay libraries consisting of consensus fragment ion spectra, and the analysis of quantitative digital maps of HLA peptidomes generated from a range of biological sources by SWATH mass spectrometry (MS). This study represents the first community-based effort to develop a robust platform for the reproducible and quantitative measurement of the entire repertoire of peptides presented by HLA molecules, an essential step towards the design of efficient immunotherapies.

  19. Public Access to Digital Material; A Call to Researchers: Digital Libraries Need Collaboration across Disciplines; Greenstone: Open-Source Digital Library Software; Retrieval Issues for the Colorado Digitization Project's Heritage Database; Report on the 5th European Conference on Digital Libraries, ECDL 2001; Report on the First Joint Conference on Digital Libraries.

    ERIC Educational Resources Information Center

    Kahle, Brewster; Prelinger, Rick; Jackson, Mary E.; Boyack, Kevin W.; Wylie, Brian N.; Davidson, George S.; Witten, Ian H.; Bainbridge, David; Boddie, Stefan J.; Garrison, William A.; Cunningham, Sally Jo; Borgman, Christine L.; Hessel, Heather

    2001-01-01

    These six articles discuss various issues relating to digital libraries. Highlights include public access to digital materials; intellectual property concerns; the need for collaboration across disciplines; Greenstone software for construction and presentation of digital information collections; the Colorado Digitization Project; and conferences…

  20. "WWW.MDTF.ORG": a World Wide Web forum for developing open-architecture, freely distributed, digital teaching file software by participant consensus.

    PubMed

    Katzman, G L; Morris, D; Lauman, J; Cochella, C; Goede, P; Harnsberger, H R

    2001-06-01

    To foster a community-supported evaluation process for open-source digital teaching file (DTF) development and maintenance. The mechanisms used to support this process will include standard web browsers, web servers, forum software, and custom additions to the forum software to potentially enable a mediated voting protocol. The web server will also serve as a focal point for beta and release software distribution, which is the desired end-goal of this process. We foresee that www.mdtf.org will provide for widespread distribution of open source DTF software that will include function and interface design decisions from community participation on the website forums.

  21. Edaq530: A Transparent, Open-End and Open-Source Measurement Solution in Natural Science Education

    ERIC Educational Resources Information Center

    Kopasz, Katalin; Makra, Peter; Gingl, Zoltan

    2011-01-01

    We present Edaq530, a low-cost, compact and easy-to-use digital measurement solution consisting of a thumb-sized USB-to-sensor interface and measurement software. The solution is fully open-source, our aim being to provide a viable alternative to professional solutions. Our main focus in designing Edaq530 has been versatility and transparency. In…

  22. We Started a Digital Collection for next to Nothing and You Can Too

    ERIC Educational Resources Information Center

    Northam, Adam

    2010-01-01

    In this article, the author shares the successful digitization effort of their library and demonstrates how they were able to expand their first digital collection. The author started working at James G. Gee Library when the director asked him to try digital collections and was asked to study an open source collection management program called…

  23. Open Source in Higher Education: Towards an Understanding of Networked Universities

    ERIC Educational Resources Information Center

    Quint-Rapoport, Mia

    2012-01-01

    This article addresses the question of understanding more about networked universities by looking at open source software developers working in academic contexts. It sketches their identities and work as an emerging professional community that both relies upon and develops digitally mediated networks and contributes to the progress of academic…

  24. [The use of open source software in graphic anatomic reconstructions and in biomechanic simulations].

    PubMed

    Ciobanu, O

    2009-01-01

    The objective of this study was to obtain three-dimensional (3D) images and to perform biomechanical simulations starting from DICOM images obtained by computed tomography (CT). Open source software was used to prepare digitized 2D images of tissue sections and to create 3D reconstructions from the segmented structures. Finally, the 3D images were used in open source software to perform biomechanical simulations. This study demonstrates the applicability and feasibility of the open source software available today for 3D reconstruction and biomechanical simulation. The use of open source software may improve the efficiency of investments in imaging technologies and in CAD/CAM technologies for implant and prosthesis fabrication, which otherwise require expensive specialized software.

  25. All-digital signal-processing open-loop fiber-optic gyroscope with enlarged dynamic range.

    PubMed

    Wang, Qin; Yang, Chuanchuan; Wang, Xinyue; Wang, Ziyu

    2013-12-15

    We propose and realize a new open-loop fiber-optic gyroscope (FOG) with an all-digital signal-processing (DSP) system, in which an all-digital phase-locked loop is employed for digital demodulation to eliminate the variation of the source intensity and suppress the bias drift. A Sagnac phase-shift tracking method is proposed to enlarge the dynamic range; with its aid, a new open-loop FOG that achieves both a large dynamic range and high sensitivity is realized. The experimental results show that, compared with the conventional open-loop FOG with the same fiber coil and optical devices, the proposed FOG reduces the bias instability from 0.259 to 0.018 deg/h and the angle random walk from 0.031 to 0.006 deg/h(1/2); moreover, it enlarges the dynamic range to ±360 deg/s, far exceeding the ±63 deg/s maximum of the conventional open-loop FOG.

  26. Digital time stamping system based on open source technologies.

    PubMed

    Miskinis, Rimantas; Smirnov, Dmitrij; Urba, Emilis; Burokas, Andrius; Malysko, Bogdan; Laud, Peeter; Zuliani, Francesco

    2010-03-01

    A digital time stamping system based on open source technologies (LINUX-UBUNTU, OpenTSA, OpenSSL, MySQL) is described in detail, including all important testing results. The system, called BALTICTIME, was developed under a project sponsored by the European Commission under the FP6 Programme. It was designed to meet the requirements posed on systems for legal and accountable time stamping and to be applicable to the hardware commonly used by national time metrology laboratories. The BALTICTIME system is intended for use by governmental and other institutions as well as private bodies. Testing results demonstrate that the time stamps issued to the user by BALTICTIME and saved in BALTICTIME's archives (which implies that the time stamps are accountable) meet all the regulatory requirements. Moreover, BALTICTIME in its present implementation is able to issue more than 10 digital time stamps per second. The system can be enhanced if needed. The test version of the BALTICTIME service is free and available at http://baltictime.pfi.lt:8080/btws/ and http://baltictime.lnmc.lv:8080/btws/.
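The core idea of accountable time stamping, binding a document's hash to a time value under a key held only by the service, can be sketched with Python's standard library. This is a conceptual toy using HMAC; the key, function names, and JSON payload are invented for the example, whereas services built on OpenTSA/OpenSSL issue RFC 3161 tokens signed with X.509 certificates.

```python
# Conceptual sketch of trusted time stamping: the service sees only the
# document's hash, not the document, and returns a token binding that
# hash to a time value under the service's secret key. A toy analogue
# of an RFC 3161 service, not BALTICTIME's actual protocol.
import hashlib
import hmac
import json

SERVICE_KEY = b"demo-secret-key"  # hypothetical; held only by the service

def issue_stamp(doc_hash_hex, utc_time):
    payload = json.dumps({"hash": doc_hash_hex, "time": utc_time}, sort_keys=True)
    tag = hmac.new(SERVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_stamp(stamp):
    expected = hmac.new(SERVICE_KEY, stamp["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, stamp["tag"])

if __name__ == "__main__":
    digest = hashlib.sha256(b"contract text").hexdigest()
    stamp = issue_stamp(digest, "2010-03-01T12:00:00Z")
    print(verify_stamp(stamp))  # True
    stamp["payload"] = stamp["payload"].replace("12:00", "13:00")
    print(verify_stamp(stamp))  # False: any tampering breaks the tag
```

The accountability property mentioned in the abstract corresponds to the service archiving each issued token so a stamp can later be checked against the archive.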

  27. MULTIPLE INPUT BINARY ADDER EMPLOYING MAGNETIC DRUM DIGITAL COMPUTING APPARATUS

    DOEpatents

    Cooke-Yarborough, E.H.

    1960-12-01

    A digital computing apparatus is described for adding a plurality of multi-digit binary numbers. The apparatus comprises a rotating magnetic drum, a recording head, first and second reading heads disposed adjacent to the first and second recording tracks, and a series of timing signals recorded on the first track. A series of N groups of digit-representing signals is delivered to the recording head at time intervals corresponding to the timing signals, each group consisting of digits of the same significance in the numbers, and the signal series is recorded on the second track of the drum in synchronism with the timing signals on the first track. The multistage registers are stepped cyclically through all positions, and each of the multistage registers is coupled to the control lead of a separate gate circuit to open the corresponding gate at only one selected position in each cycle. One of the gates has its input coupled to the bistable element to receive the sum digit, and the output lead of this gate is coupled to the recording device. The inputs of the other gates receive the digits to be added from the second reading head, and the outputs of these gates are coupled to the adding register. A phase-setting pulse source is connected to each of the multistage registers individually to step the multistage registers to different initial positions in the cycle, and the phase-setting pulse source is actuated each N time interval to shift a sum digit to the bistable element, where the multistage register coupled to bistable element is operated by the phase- setting pulse source to that position in its cycle N steps before opening the first gate, so that this gate opens in synchronism with each of the shifts to pass the sum digits to the recording head.
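The adder's digit-serial principle, summing one bit position of all N numbers at a time while holding a carry for the next position, can be restated as a short modern sketch. This is a software analogue of the arithmetic, not the patented magnetic-drum circuit; the function name and word width are invented for the example.

```python
# Software analogue of the drum adder's digit-serial arithmetic: the
# least-significant digits of all N numbers are summed first, together
# with the carry from the previous position; the sum digit is "recorded"
# and the carry is held for the next position.

def digit_serial_add(numbers, width):
    """Add a list of non-negative integers one bit position at a time."""
    carry = 0
    recorded = []  # sum digits, LSB first, in the order they are recorded
    for pos in range(width):
        column = sum((n >> pos) & 1 for n in numbers) + carry
        recorded.append(column & 1)   # sum digit for this position
        carry = column >> 1           # carry into the next position
    while carry:                      # flush any remaining carry bits
        recorded.append(carry & 1)
        carry >>= 1
    return sum(bit << i for i, bit in enumerate(recorded))

if __name__ == "__main__":
    values = [0b1011, 0b0110, 0b1111]   # 11 + 6 + 15
    print(digit_serial_add(values, 4))  # 32
```

Note that with N inputs the carry out of a column can exceed one bit, which is why the sketch (like the drum apparatus) must hold a multi-bit carry between positions.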

  28. OpenCFU, a new free and open-source software to count cell colonies and other circular objects.

    PubMed

    Geissmann, Quentin

    2013-01-01

    Counting circular objects such as cell colonies is an important source of information for biologists. Although this task is often time-consuming and subjective, it is still predominantly performed manually. The aim of the present work is to provide a new tool to enumerate circular objects from digital pictures and video streams. Here, I demonstrate that the created program, OpenCFU, is very robust, accurate and fast. In addition, it provides control over the processing parameters and is implemented in an intuitive and modern interface. OpenCFU is a cross-platform and open-source software freely available at http://opencfu.sourceforge.net.
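The counting task itself can be illustrated with a minimal sketch: after thresholding an image to a binary grid, each colony appears as a connected blob of foreground pixels, countable by flood fill. This is a hypothetical illustration of the core counting step only; OpenCFU's real pipeline (filtering, particle splitting, shape tests) is considerably more robust.

```python
# Minimal illustration of colony counting on an already-thresholded
# image: each 4-connected blob of 1s is treated as one colony.
# Not OpenCFU's actual algorithm, just the basic counting idea.
from collections import deque

def count_blobs(grid):
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                count += 1                  # new blob found
                queue = deque([(r, c)])     # flood-fill the whole blob
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

if __name__ == "__main__":
    plate = [
        [1, 1, 0, 0, 1],
        [1, 0, 0, 0, 1],
        [0, 0, 1, 0, 0],
        [0, 0, 1, 1, 0],
    ]
    print(count_blobs(plate))  # 3 colonies
```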

  29. An open-source computational and data resource to analyze digital maps of immunopeptidomes

    PubMed Central

    Caron, Etienne; Espona, Lucia; Kowalewski, Daniel J; Schuster, Heiko; Ternette, Nicola; Alpízar, Adán; Schittenhelm, Ralf B; Ramarathinam, Sri H; Lindestam Arlehamn, Cecilia S; Chiek Koh, Ching; Gillet, Ludovic C; Rabsteyn, Armin; Navarro, Pedro; Kim, Sangtae; Lam, Henry; Sturm, Theo; Marcilla, Miguel; Sette, Alessandro; Campbell, David S; Deutsch, Eric W; Moritz, Robert L; Purcell, Anthony W; Rammensee, Hans-Georg; Stevanovic, Stefan; Aebersold, Ruedi

    2015-01-01

    We present a novel mass spectrometry-based high-throughput workflow and an open-source computational and data resource to reproducibly identify and quantify HLA-associated peptides. Collectively, the resources support the generation of HLA allele-specific peptide assay libraries consisting of consensus fragment ion spectra, and the analysis of quantitative digital maps of HLA peptidomes generated from a range of biological sources by SWATH mass spectrometry (MS). This study represents the first community-based effort to develop a robust platform for the reproducible and quantitative measurement of the entire repertoire of peptides presented by HLA molecules, an essential step towards the design of efficient immunotherapies. DOI: http://dx.doi.org/10.7554/eLife.07661.001 PMID:26154972

  30. Open Source Live Distributions for Computer Forensics

    NASA Astrophysics Data System (ADS)

    Giustini, Giancarlo; Andreolini, Mauro; Colajanni, Michele

    Current distributions of open source forensic software provide digital investigators with a large set of heterogeneous tools. Their use is not always focused on the target and requires high technical expertise. We present a new GNU/Linux live distribution, named CAINE (Computer Aided INvestigative Environment), that contains a collection of tools wrapped up into a user-friendly environment. The CAINE forensic framework introduces important novel features aimed at filling the interoperability gap across different forensic tools. Moreover, it provides a homogeneous graphical interface that guides digital investigators during the acquisition and analysis of electronic evidence, and it offers a semi-automatic mechanism for the creation of the final report.

  31. Free and Open Source Tools (FOSTs): An Empirical Investigation of Pre-Service Teachers' Competencies, Attitudes, and Pedagogical Intentions

    ERIC Educational Resources Information Center

    Asing-Cashman, Joyce G.; Gurung, Binod; Limbu, Yam B.; Rutledge, David

    2014-01-01

    This study examines the digital native pre-service teachers' (DNPSTs) perceptions of their competency, attitude, and pedagogical intention to use free and open source tools (FOSTs) in their future teaching. Participants were 294 PSTs who responded to pre-course surveys at the beginning of an educational technology course. Using the structural…

  32. OpenCFU, a New Free and Open-Source Software to Count Cell Colonies and Other Circular Objects

    PubMed Central

    Geissmann, Quentin

    2013-01-01

    Counting circular objects such as cell colonies is an important source of information for biologists. Although this task is often time-consuming and subjective, it is still predominantly performed manually. The aim of the present work is to provide a new tool to enumerate circular objects from digital pictures and video streams. Here, I demonstrate that the created program, OpenCFU, is very robust, accurate and fast. In addition, it provides control over the processing parameters and is implemented in an intuitive and modern interface. OpenCFU is a cross-platform and open-source software freely available at http://opencfu.sourceforge.net. PMID:23457446

  33. Digital Textbooks. Research Brief

    ERIC Educational Resources Information Center

    Johnston, Howard

    2011-01-01

    Despite their growing popularity, digital alternatives to conventional textbooks are stirring up controversy. With the introduction of tablet computers, and the growing trend toward "cloud computing" and "open source" software, the trend is accelerating because costs are coming down and free or inexpensive materials are becoming more available.…

  14. Stepping Stones for People with Cognitive Disabilities and Low Digital Literacy.

    PubMed

    Lee, Steve

    2017-01-01

    The open source components presented have been designed for use by developers creating applications for people with cognitive disabilities or low digital literacy. They provide easy access to common online activities and include configurable levels of complexity to address varying preferences.

  15. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    PubMed Central

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  16. Towards Networked Knowledge: The Learning Registry, an Infrastructure for Sharing Online Learning Resources

    ERIC Educational Resources Information Center

    Lee, Ashley; Hobson, Joe; Bienkowski, Marie; Midgley, Steve; Currier, Sarah; Campbell, Lorna M.; Novoselova, Tatiana

    2012-01-01

    In this article, the authors describe an open-source, open-data digital infrastructure for sharing information about open educational resources (OERs) across disparate systems and platforms. The Learning Registry, which began as a project funded by the U.S. Departments of Education and Defense, currently has an active international community…

  17. Digital beacon receiver for ionospheric TEC measurement developed with GNU Radio

    NASA Astrophysics Data System (ADS)

    Yamamoto, M.

    2008-11-01

    A simple digital receiver named GNU Radio Beacon Receiver (GRBR) was developed for the satellite-ground beacon experiment to measure the ionospheric total electron content (TEC). The open-source software toolkit for the software defined radio, GNU Radio, is utilized to realize the basic function of the receiver and perform fast signal processing. The software is written in Python for a LINUX PC. The open-source hardware called Universal Software Radio Peripheral (USRP), which best matches the GNU Radio, is used as a front-end to acquire the satellite beacon signals of 150 and 400 MHz. The first experiment was successful as results from GRBR showed very good agreement to those from the co-located analog beacon receiver. Detailed design information and software codes are open at the URL http://www.rish.kyoto-u.ac.jp/digitalbeacon/.
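
The dual-frequency principle behind such TEC measurements can be sketched with the standard first-order ionospheric relation. Note this is the generic textbook group-delay form, not GRBR's actual processing chain, which works on the differential carrier phase of the coherent 150/400 MHz beacon tones.

```python
# Dual-frequency slant TEC estimate from the range difference between two
# frequencies. Generic textbook relation, hedged: not GRBR's actual code.

K = 40.3  # first-order ionospheric constant (m^3 s^-2 per electron/m^2)

def slant_tec(p_low_m, p_high_m, f_low_hz=150e6, f_high_hz=400e6):
    """Slant TEC (electrons/m^2) from ranges measured on two frequencies.

    The lower frequency is delayed more: P_f = rho + K*TEC/f**2, so the
    range difference isolates TEC and cancels the true range rho.
    """
    return (p_low_m - p_high_m) / (K * (1.0 / f_low_hz**2 - 1.0 / f_high_hz**2))

# Example: 1 TECU = 1e16 el/m^2 adds ~18 m of group delay at 150 MHz
tec = 1e16
p_extra_low = K * tec / 150e6**2
p_extra_high = K * tec / 400e6**2
print(slant_tec(100.0 + p_extra_low, 100.0 + p_extra_high))  # recovers ~1e16
```

The same cancellation of the geometric range is what makes coherent two-frequency beacons attractive for relative TEC profiling along a satellite pass.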

  18. Framework for Analytic Cognition (FAC): A Guide for Doing All-Source Intelligence Analysis

    DTIC Science & Technology

    2011-12-01

    humans as rational decision makers has been thoroughly discounted in the last decade. Recent research in neuroscience and cognitive psychology has...Intelligence and Counterintelligence, Vol. 18, No. 2, 2005, p. 206. 60 Moore, D.T. & Krizan, L. "Intelligence Analysis: Does NSA have what it Takes...SIGINT NSA Online TS/SCI Online Digital Yes COMINT Internet None N/A Unclassified Online Digital Yes Open Source STRATFOR Local information

  19. Profiling Students' Multiple Source Use by Question Type

    ERIC Educational Resources Information Center

    List, Alexandra; Grossnickle, Emily M.; Alexander, Patricia A.

    2016-01-01

    The present study examined undergraduate students' multiple source use in response to two different types of academic questions, one discrete and one open-ended. Participants (N = 240) responded to two questions using a library of eight digital sources, varying in source type (e.g., newspaper article) and reliability (e.g., authors' credentials).…

  20. Open-Source 3-D Platform for Low-Cost Scientific Instrument Ecosystem.

    PubMed

    Zhang, C; Wijnen, B; Pearce, J M

    2016-08-01

    The combination of open-source software and hardware provides technically feasible methods to create low-cost, highly customized scientific research equipment. Open-source 3-D printers have proven useful for fabricating scientific tools. Here the capabilities of an open-source 3-D printer are expanded to become a highly flexible scientific platform. An automated low-cost 3-D motion control platform is presented that has the capacity to perform scientific applications, including (1) 3-D printing of scientific hardware; (2) laboratory auto-stirring, measuring, and probing; (3) automated fluid handling; and (4) shaking and mixing. The open-source 3-D platform not only facilitates routine research while radically reducing its cost, but also inspires the creation of a diverse array of custom instruments that can be shared and replicated digitally throughout the world to drive down the cost of research and education further. © 2016 Society for Laboratory Automation and Screening.
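
An application such as the auto-stirring mentioned above amounts to streaming motion commands to the platform. The sketch below generates a circular stir pass as short linear G-code moves; the RepRap-style G1 dialect, feed rate, and geometry are assumptions for illustration, not taken from the paper.

```python
# Sketch: G-code generation for a circular "auto-stirring" pass on a 3-D
# motion platform. The G1 dialect and parameters are illustrative assumptions.
import math

def stir_gcode(cx, cy, radius, segments=12, feed=1200):
    """Approximate one circular stir pass with short linear G1 moves."""
    lines = []
    for i in range(segments + 1):
        a = 2 * math.pi * i / segments
        x = cx + radius * math.cos(a)
        y = cy + radius * math.sin(a)
        lines.append(f"G1 X{x:.2f} Y{y:.2f} F{feed}")
    return lines

for line in stir_gcode(100.0, 100.0, 15.0)[:3]:
    print(line)
```

In practice the generated lines would be sent over the printer's serial interface, and the number of segments trades smoothness against command rate.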

  1. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    PubMed

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyzephenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Punnoose, Ratish J.; Armstrong, Robert C.; Wong, Matthew H.

    Formal methods have come into wide use because of their effectiveness in verifying "safety and security" requirements of digital systems; a set of requirements for which testing is mostly ineffective. Formal methods are routinely used in the design and verification of high-consequence digital systems in industry. This report outlines our work in assessing the capabilities of commercial and open source formal tools and the ways in which they can be leveraged in digital design workflows.

  3. An Open Source Web Map Server Implementation For California and the Digital Earth: Lessons Learned

    NASA Technical Reports Server (NTRS)

    Sullivan, D. V.; Sheffner, E. J.; Skiles, J. W.; Brass, J. A.; Condon, Estelle (Technical Monitor)

    2000-01-01

    This paper describes an Open Source implementation of the Open GIS Consortium's Web Map interface. It is based on the very popular Apache WWW Server, the Sun Microsystems Java Servlet Development Kit, and a C language shared library interface to a spatial datastore. This server was initially written as a proof of concept, to support a National Aeronautics and Space Administration (NASA) Digital Earth test bed demonstration. It will also find use in the California Land Science Information Partnership (CaLSIP), a joint program between NASA and the state of California. At least one Web Map-enabled server will be installed in every one of the state's 58 counties. This server will form a basis for a simple, easily maintained installation for those entities that do not yet require one of the larger, more expensive, commercial offerings.

  4. Web GIS in practice IV: publishing your health maps and connecting to remote WMS sources using the Open Source UMN MapServer and DM Solutions MapLab

    PubMed Central

    Boulos, Maged N Kamel; Honda, Kiyoshi

    2006-01-01

    Open Source Web GIS software systems have reached a stage of maturity, sophistication, robustness and stability, and usability and user friendliness rivalling that of commercial, proprietary GIS and Web GIS server products. The Open Source Web GIS community is also actively embracing OGC (Open Geospatial Consortium) standards, including WMS (Web Map Service). WMS enables the creation of Web maps that have layers coming from multiple different remote servers/sources. In this article we present one easy to implement Web GIS server solution that is based on the Open Source University of Minnesota (UMN) MapServer. By following the accompanying step-by-step tutorial instructions, interested readers running mainstream Microsoft® Windows machines and with no prior technical experience in Web GIS or Internet map servers will be able to publish their own health maps on the Web and add to those maps additional layers retrieved from remote WMS servers. The 'digital Asia' and 2004 Indian Ocean tsunami experiences in using free Open Source Web GIS software are also briefly described. PMID:16420699
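
The WMS interoperability described above rests on a simple HTTP protocol: a client retrieves a map layer by constructing a GetMap request. The standard-library sketch below builds such a URL; the endpoint and layer names are placeholders, not a real server.

```python
# Minimal WMS 1.1.1 GetMap request builder using only the standard library.
# The base URL and layer names are hypothetical placeholders.
from urllib.parse import urlencode

def wms_getmap_url(base_url, layers, bbox, width=600, height=400,
                   srs="EPSG:4326", fmt="image/png"):
    """Return a GetMap URL; bbox is (minx, miny, maxx, maxy) in SRS units."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": ",".join(layers),
        "STYLES": "",
        "SRS": srs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)

url = wms_getmap_url("http://example.org/cgi-bin/mapserv",
                     ["health_centres", "districts"],
                     (87.0, 26.5, 92.5, 28.5))
print(url)
```

Because every WMS server answers the same request shape, a client can overlay layers fetched from multiple remote servers on one map, which is exactly the multi-source layering the article exploits.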

  5. Inexpensive Open-Source Data Logging in the Field

    NASA Astrophysics Data System (ADS)

    Wickert, A. D.

    2013-12-01

    I present a general-purpose open-source field-capable data logger, which provides a mechanism to develop dense networks of inexpensive environmental sensors. This data logger was developed as a low-power variant of the Arduino open-source development system, and is named the ALog ("Arduino Logger") BottleLogger (it is slim enough to fit inside a Nalgene water bottle) version 1.0. It features an integrated high-precision real-time clock, an SD card slot for high-volume data storage, and integrated power switching. The ALog can interface with sensors via six analog/digital pins, two digital pins, and one digital interrupt pin that can read event-based inputs, such as those from a tipping-bucket rain gauge. We have successfully tested the ALog BottleLogger with ultrasonic rangefinders (for water stage and snow accumulation and melt), temperature sensors, tipping-bucket rain gauges, soil moisture and water potential sensors, resistance-based tools to measure frost heave, and cameras that it triggers based on events. The source code for the ALog, including functions to interface with a range of commercially available sensors, is provided as an Arduino C++ library with example implementations. All schematics, circuit board layouts, and source code files are open-source and freely available under GNU GPL v3.0 and Creative Commons Attribution-ShareAlike 3.0 Unported licenses. Through this work, we hope to foster a community-driven movement to collect field environmental data on a budget that permits citizen-scientists and researchers from low-income countries to collect the same high-quality data as researchers in wealthy countries. These data can provide information about global change to managers, governments, scientists, and interested citizens worldwide. [Figure: watertight box with the ALog BottleLogger data logger on the left and a battery pack with three D cells on the right. Data can be collected for 3-5 years on one set of batteries.]
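
The interrupt-driven tipping-bucket input mentioned above yields a stream of tip timestamps; turning those into rainfall is a one-line reduction. The sketch below shows that post-processing step; the 0.2 mm-per-tip calibration is a common gauge value assumed here, not a figure from the abstract.

```python
# Sketch: converting tipping-bucket tip timestamps (seconds) into rainfall,
# as one might post-process an event log from a logger like the ALog.
# The 0.2 mm-per-tip calibration is an assumed, typical gauge constant.

MM_PER_TIP = 0.2

def rainfall_mm(tip_times_s, t_start, t_end):
    """Total rainfall between t_start (inclusive) and t_end (exclusive)."""
    return MM_PER_TIP * sum(1 for t in tip_times_s if t_start <= t < t_end)

tips = [3.1, 47.9, 48.2, 120.0, 130.5]
print(rainfall_mm(tips, 0, 60))  # three tips in the first minute -> 0.6 mm
```

On the logger itself, each tip would fire the digital interrupt pin and the firmware would simply append a real-time-clock timestamp to the SD card.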

  6. Concierge: Personal Database Software for Managing Digital Research Resources

    PubMed Central

    Sakai, Hiroyuki; Aoyama, Toshihiro; Yamaji, Kazutsuna; Usui, Shiro

    2007-01-01

    This article introduces a desktop application, named Concierge, for managing personal digital research resources. Using simple operations, it enables storage of various types of files and indexes them based on content descriptions. A key feature of the software is a high level of extensibility. By installing optional plug-ins, users can customize and extend the usability of the software based on their needs. In this paper, we also introduce a few optional plug-ins: literature management, electronic laboratory notebook, and XooNIps client plug-ins. XooNIps is a content management system developed to share digital research resources among neuroscience communities. It has been adopted as the standard database system in Japanese neuroinformatics projects. Concierge, therefore, offers comprehensive support from management of personal digital research resources to their sharing in open-access neuroinformatics databases such as XooNIps. This interaction between personal and open-access neuroinformatics databases is expected to enhance the dissemination of digital research resources. Concierge is developed as an open source project; Mac OS X and Windows XP versions have been released at the official site (http://concierge.sourceforge.jp). PMID:18974800

  7. Pemphigus

    MedlinePlus

    ... information on research progress in these diseases. Contact Us NIAMS Archive Viewers and Players Social Media Moderation Policy FOIA Privacy Statement Accessibility Disclaimer Digital Strategy Open Source Data Public Data Listing NIH... ...

  8. Vitiligo

    MedlinePlus


  9. Rosacea

    MedlinePlus


  10. Bursitis

    MedlinePlus


  11. Digital Image Correlation Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, Dan; Crozier, Paul; Reu, Phil

    DICe is an open source digital image correlation (DIC) tool intended for use as a module in an external application or as a standalone analysis code. Its primary capability is computing full-field displacements and strains from sequences of digital images. These images are typically of a material sample undergoing a materials characterization experiment, but DICe is also useful for other applications (for example, trajectory tracking). DICe is machine portable (Windows, Linux and Mac) and can be effectively deployed on a high performance computing platform. Capabilities from DICe can be invoked through a library interface, via source code integration of DICe classes, or through a graphical user interface.

  12. Rapid Optical Shutter, Chopper, Modulator and Deflector

    NASA Technical Reports Server (NTRS)

    Danehy, Paul M. (Inventor)

    2017-01-01

    An optical device with a light source and a detector is provided. A digital micromirror device positioned between the detector and the light source may deflect light beams projected from the light source. An aperture in front of the detector may block an incoming light beam from the detector when the incoming light beam is incident on the detector outside of a passable incident range and including an aperture opening configured to pass the incoming light beam to the detector when the incoming light beam is incident on the detector within a passable incident range. The digital micromirror device may rotate between a first position causing the light beam to pass through the aperture opening and a second position causing the light beam to be blocked by the aperture. The optical device may be configured to operate as a shutter, chopper, modulator and/or deflector.

  13. Resurrecting Legacy Code Using Ontosoft Knowledge-Sharing and Digital Object Management to Revitalize and Reproduce Software for Groundwater Management Research

    NASA Astrophysics Data System (ADS)

    Kwon, N.; Gentle, J.; Pierce, S. A.

    2015-12-01

    Software code developed for research is often used for a relatively short period of time before it is abandoned, lost, or becomes outdated. This unintentional abandonment of code is a real problem in the 21st century scientific process, hindering widespread reusability and increasing the effort needed to develop research software. Potentially important assets, these legacy codes may be resurrected and documented digitally for long-term reuse, often with modest effort. Furthermore, the revived code may be made openly accessible in a public repository for researchers to reuse or improve. For this study, the research team has begun to revive the codebase for the Groundwater Decision Support System (GWDSS), originally developed for participatory decision making to aid urban planning and groundwater management, though it may serve multiple use cases beyond those originally envisioned. GWDSS was designed as a Java-based wrapper with loosely federated commercial and open source components. If successfully revitalized, GWDSS will be useful both for practical applications, as a teaching tool and case study for groundwater management, and for informing theoretical research. Using the knowledge-sharing approaches documented by the NSF-funded OntoSoft project, digital documentation of GWDSS is underway, from conception to development, deployment, characterization, integration, composition, and dissemination through open source communities and geosciences modeling frameworks. Information assets, documentation, and examples are shared using open platforms for data sharing and assigned digital object identifiers.
Two instances of GWDSS version 3.0 are being created: 1) a virtual machine instance for the original case study to serve as a live demonstration of the decision support tool, assuring the original version is usable, and 2) an open version of the codebase, executable installation files, and developer guide available via an open repository, assuring the source for the application is accessible with version control and potential for new branch developments. Finally, metadata about the software has been completed within the OntoSoft portal to provide descriptive curation, make GWDSS searchable, and complete documentation of the scientific software lifecycle.

  14. Ankylosing Spondylitis

    MedlinePlus


  15. Pachyonychia Congenita

    MedlinePlus


  16. radR: an open-source platform for acquiring and analysing data on biological targets observed by surveillance radar.

    PubMed

    Taylor, Philip D; Brzustowski, John M; Matkovich, Carolyn; Peckford, Michael L; Wilson, Dave

    2010-10-26

    Radar has been used for decades to study movement of insects, birds and bats. In spite of this, there are few readily available software tools for the acquisition, storage and processing of such data. Program radR was developed to solve this problem. Program radR is an open source software tool for the acquisition, storage and analysis of data from marine radars operating in surveillance mode. radR takes time series data with a two-dimensional spatial component as input from some source (typically a radar digitizing card) and extracts and retains information of biological relevance (i.e. moving targets). Low-level data processing is implemented in "C" code, but user-defined functions written in the "R" statistical programming language can be called at pre-defined steps in the calculations. Output data formats are designed to allow for future inclusion of additional data items without requiring change to C code. Two brands of radar digitizing card are currently supported as data sources. We also provide an overview of the basic considerations of setting up and running a biological radar study. Program radR provides a convenient, open source platform for the acquisition and analysis of radar data of biological targets.
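
The core idea of "extracting and retaining moving targets" from a sequence of scans can be sketched as background subtraction: estimate a static clutter field from many sweeps and keep cells that rise well above it. This is an illustrative simplification, not radR's actual C implementation.

```python
# Sketch of moving-target extraction from a sequence of 2-D radar sweeps:
# subtract a per-cell background mean and keep strong transient echoes.
# Illustrative only; radR's real blip detection is more sophisticated.

def moving_targets(sweeps, threshold):
    """sweeps: list of equally-sized 2-D scans (lists of lists of floats).
    Returns (sweep_index, row, col) for cells well above the background mean."""
    n = len(sweeps)
    rows, cols = len(sweeps[0]), len(sweeps[0][0])
    background = [[sum(s[r][c] for s in sweeps) / n for c in range(cols)]
                  for r in range(rows)]
    hits = []
    for i, s in enumerate(sweeps):
        for r in range(rows):
            for c in range(cols):
                if s[r][c] - background[r][c] > threshold:
                    hits.append((i, r, c))
    return hits

clutter = [[1.0, 1.0], [1.0, 1.0]]
bird = [[1.0, 9.0], [1.0, 1.0]]      # one strong transient echo
print(moving_targets([clutter, bird, clutter], threshold=3.0))
```

In a real deployment the background would be updated adaptively as sweeps stream in from the digitizing card, and neighbouring hit cells would be clustered into single targets before tracking.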

  17. radR: an open-source platform for acquiring and analysing data on biological targets observed by surveillance radar

    PubMed Central

    2010-01-01

    Background Radar has been used for decades to study movement of insects, birds and bats. In spite of this, there are few readily available software tools for the acquisition, storage and processing of such data. Program radR was developed to solve this problem. Results Program radR is an open source software tool for the acquisition, storage and analysis of data from marine radars operating in surveillance mode. radR takes time series data with a two-dimensional spatial component as input from some source (typically a radar digitizing card) and extracts and retains information of biological relevance (i.e. moving targets). Low-level data processing is implemented in "C" code, but user-defined functions written in the "R" statistical programming language can be called at pre-defined steps in the calculations. Output data formats are designed to allow for future inclusion of additional data items without requiring change to C code. Two brands of radar digitizing card are currently supported as data sources. We also provide an overview of the basic considerations of setting up and running a biological radar study. Conclusions Program radR provides a convenient, open source platform for the acquisition and analysis of radar data of biological targets. PMID:20977735

  18. Open Source Digital Image Management Took Us from Raging Rivers to Quiet Waters

    ERIC Educational Resources Information Center

    Dunlap, Isaac Hunter

    2005-01-01

    In this article, the author describes his experience when Kathy Nichols contacted him seeking suggestions, recommendations--really anything he could think of--that might help her seize control of a bewildering and rapidly surging torrent of digital image files. The challenges that Nichols described interested him, and he felt they might be…

  19. Understanding Autoimmune Diseases

    MedlinePlus


  20. PACS for Bhutan: a cost effective open source architecture for emerging countries.

    PubMed

    Ratib, Osman; Roduit, Nicolas; Nidup, Dechen; De Geer, Gerard; Rosset, Antoine; Geissbuhler, Antoine

    2016-10-01

    This paper reports the design and implementation of an innovative and cost-effective imaging management infrastructure suitable for radiology centres in emerging countries. It was implemented in the main referring hospital of Bhutan, which is equipped with a CT, an MRI, digital radiology, and a suite of several ultrasound units. The hospital lacked the necessary informatics infrastructure for image archiving and interpretation and needed a system for distributing images to clinical wards. The solution developed for this project combines several open source software platforms into a robust and versatile archiving and communication system connected to analysis workstations equipped with an FDA-certified version of the highly popular open-source software. The whole system was implemented on standard off-the-shelf hardware. The system was installed in three days, and training of the radiologists as well as the technical and IT staff was provided onsite to ensure full ownership of the system by the local team. Radiologists were rapidly capable of reading and interpreting studies on the diagnostic workstations, which had a significant benefit on their workflow and ability to perform diagnostic tasks more efficiently. Furthermore, images were also made available to several clinical units on standard desktop computers through a web-based viewer. • Open source imaging informatics platforms can provide cost-effective alternatives for PACS • Robust and cost-effective open architecture can provide adequate solutions for emerging countries • Imaging informatics is often lacking in hospitals equipped with digital modalities.

  1. High-Throughput Method for Automated Colony and Cell Counting by Digital Image Analysis Based on Edge Detection

    PubMed Central

    Choudhry, Priya

    2016-01-01

    Counting cells and colonies is an integral part of high-throughput screens and quantitative cellular assays. Due to its subjective and time-intensive nature, manual counting has hindered the adoption of cellular assays such as tumor spheroid formation in high-throughput screens. The objective of this study was to develop an automated method for quick and reliable counting of cells and colonies from digital images. For this purpose, I developed an ImageJ macro Cell Colony Edge and a CellProfiler Pipeline Cell Colony Counting, and compared them to other open-source digital methods and manual counts. The ImageJ macro Cell Colony Edge is valuable in counting cells and colonies, and measuring their area, volume, morphology, and intensity. In this study, I demonstrate that Cell Colony Edge is superior to other open-source methods, in speed, accuracy and applicability to diverse cellular assays. It can fulfill the need to automate colony/cell counting in high-throughput screens, colony forming assays, and cellular assays. PMID:26848849

  2. Open source OCR framework using mobile devices

    NASA Astrophysics Data System (ADS)

    Zhou, Steven Zhiying; Gilani, Syed Omer; Winkler, Stefan

    2008-02-01

    Mobile phones have evolved from passive one-to-one communication devices into powerful handheld computing devices. Today most new mobile phones are capable of capturing images, recording video, browsing the internet, and much more. Exciting new social applications are emerging on the mobile landscape, such as business card readers, sign detectors, and translators. These applications help people quickly gather information in digital format and interpret it without needing to carry laptops or tablet PCs. However, with all these advancements we find very little open source software available for mobile phones. For instance, there are currently many open source OCR engines for the desktop platform but, to our knowledge, none available on the mobile platform. Keeping this in perspective, we propose a complete text detection and recognition system with speech synthesis ability, using existing desktop technology. In this work we developed a complete OCR framework with subsystems from the open source desktop community. This includes a popular open source OCR engine named Tesseract for text detection and recognition, and the Flite speech synthesis module for adding text-to-speech ability.
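
One classic building block of the text-detection stage in such a pipeline is a horizontal projection profile: rows of a binarized page that contain many ink pixels mark candidate text lines. The sketch below shows that generic technique; it is not Tesseract's actual layout analysis.

```python
# Sketch of text-line detection via a horizontal projection profile on a
# binary page image. Generic technique, not Tesseract's layout analysis.

def text_line_bands(img, min_ink=1):
    """Return (start_row, end_row) bands whose rows contain >= min_ink ink
    pixels; img is a 2-D list of 0 (white) / 1 (ink)."""
    profile = [sum(row) for row in img]   # ink pixels per row
    bands, start = [], None
    for r, ink in enumerate(profile):
        if ink >= min_ink and start is None:
            start = r                     # band opens
        elif ink < min_ink and start is not None:
            bands.append((start, r - 1))  # band closes
            start = None
    if start is not None:
        bands.append((start, len(img) - 1))
    return bands

page = [[0, 0, 0, 0],
        [1, 1, 0, 1],   # text line 1
        [0, 0, 0, 0],
        [1, 0, 1, 1],   # text line 2 spans two rows
        [1, 1, 1, 0]]
print(text_line_bands(page))  # [(1, 1), (3, 4)]
```

Each detected band would then be cropped and handed to the recognition engine, and the recognized text passed on to the speech synthesizer.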

  3. Automated Gait Analysis Through Hues and Areas (AGATHA): a method to characterize the spatiotemporal pattern of rat gait

    PubMed Central

    Kloefkorn, Heidi E.; Pettengill, Travis R.; Turner, Sara M. F.; Streeter, Kristi A.; Gonzalez-Rothi, Elisa J.; Fuller, David D.; Allen, Kyle D.

    2016-01-01

    While rodent gait analysis can quantify the behavioral consequences of disease, significant methodological differences exist between analysis platforms and little validation has been performed to understand or mitigate these sources of variance. By providing the algorithms used to quantify gait, open-source gait analysis software can be validated and used to explore methodological differences. Our group is introducing, for the first time, a fully-automated, open-source method for the characterization of rodent spatiotemporal gait patterns, termed Automated Gait Analysis Through Hues and Areas (AGATHA). This study describes how AGATHA identifies gait events, validates AGATHA relative to manual digitization methods, and utilizes AGATHA to detect gait compensations in orthopaedic and spinal cord injury models. To validate AGATHA against manual digitization, results from videos of rodent gait, recorded at 1000 frames per second (fps), were compared. To assess one common source of variance (the effects of video frame rate), these 1000 fps videos were re-sampled to mimic several lower fps and compared again. While spatial variables were indistinguishable between AGATHA and manual digitization, low video frame rates resulted in temporal errors for both methods. At frame rates over 125 fps, AGATHA achieved a comparable accuracy and precision to manual digitization for all gait variables. Moreover, AGATHA detected unique gait changes in each injury model. These data demonstrate AGATHA is an accurate and precise platform for the analysis of rodent spatiotemporal gait patterns. PMID:27554674
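
The frame-rate sensitivity reported above has a simple quantitative intuition: an event timed from video can be off by up to one frame period, so temporal gait variables carry a quantization error of roughly 1/fps. A minimal sketch of that bound, offered as an illustration rather than AGATHA's error model:

```python
# Illustrative bound on video-based event timing: a gait event (e.g. paw
# touchdown) observed at fps frames/second is localized only to within one
# frame period, so the worst-case single-event error is 1/fps.

def worst_case_timing_error_ms(fps):
    """Upper bound on single-event timing error, in milliseconds."""
    return 1000.0 / fps

for fps in (1000, 125, 30):
    print(fps, "fps ->", worst_case_timing_error_ms(fps), "ms")
```

At 125 fps the bound is 8 ms per event, which is consistent with the abstract's finding that frame rates of 125 fps and above gave accuracy comparable to 1000 fps manual digitization for the gait variables studied.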

  4. Automated Gait Analysis Through Hues and Areas (AGATHA): A Method to Characterize the Spatiotemporal Pattern of Rat Gait.

    PubMed

    Kloefkorn, Heidi E; Pettengill, Travis R; Turner, Sara M F; Streeter, Kristi A; Gonzalez-Rothi, Elisa J; Fuller, David D; Allen, Kyle D

    2017-03-01

    While rodent gait analysis can quantify the behavioral consequences of disease, significant methodological differences exist between analysis platforms and little validation has been performed to understand or mitigate these sources of variance. By providing the algorithms used to quantify gait, open-source gait analysis software can be validated and used to explore methodological differences. Our group is introducing, for the first time, a fully-automated, open-source method for the characterization of rodent spatiotemporal gait patterns, termed Automated Gait Analysis Through Hues and Areas (AGATHA). This study describes how AGATHA identifies gait events, validates AGATHA relative to manual digitization methods, and utilizes AGATHA to detect gait compensations in orthopaedic and spinal cord injury models. To validate AGATHA against manual digitization, results from videos of rodent gait, recorded at 1000 frames per second (fps), were compared. To assess one common source of variance (the effects of video frame rate), these 1000 fps videos were re-sampled to mimic several lower fps and compared again. While spatial variables were indistinguishable between AGATHA and manual digitization, low video frame rates resulted in temporal errors for both methods. At frame rates over 125 fps, AGATHA achieved a comparable accuracy and precision to manual digitization for all gait variables. Moreover, AGATHA detected unique gait changes in each injury model. These data demonstrate AGATHA is an accurate and precise platform for the analysis of rodent spatiotemporal gait patterns.

  5. Public Data Set: Impedance of an Intense Plasma-Cathode Electron Source for Tokamak Plasma Startup

    DOE Data Explorer

    Hinson, Edward T. [University of Wisconsin-Madison] (ORCID:0000-0001-9713-140X); Barr, Jayson L. [University of Wisconsin-Madison] (ORCID:0000-0001-7768-5931); Bongard, Michael W. [University of Wisconsin-Madison] (ORCID:0000-0002-3160-9746); Burke, Marcus G. [University of Wisconsin-Madison] (ORCID:0000-0001-7619-3724); Fonck, Raymond J. [University of Wisconsin-Madison] (ORCID:0000-0002-9438-6762); Perry, Justin M. [University of Wisconsin-Madison] (ORCID:0000-0001-7122-8609)

    2016-05-31

    This data set contains openly-documented, machine readable digital research data corresponding to figures published in E.T. Hinson et al., 'Impedance of an Intense Plasma-Cathode Electron Source for Tokamak Plasma Startup,' Physics of Plasmas 23, 052515 (2016).

  6. What Is Juvenile Arthritis?

    MedlinePlus


  7. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers.

    PubMed

    Cui, Yang; Hanley, Luke

    2015-06-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser-based mass spectrometer imaging and various other experiments in laser physics, physical chemistry, and surface science.
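
The oscilloscope-style acquisition described above can be sketched as a shot-accumulation loop: each trigger yields one digitizer trace, traces are summed in memory, and the sum is normalized into an averaged spectrum. This is a hypothetical Python stand-in for ChiMS's LabVIEW internals, with a simulated digitizer read and made-up sizes:

```python
# Shot-accumulation sketch of an oscilloscope-style acquisition: each
# trigger yields one digitizer trace, traces are summed in memory, and
# the sum is normalized into an averaged spectrum. The digitizer read is
# simulated; names and sizes here are illustrative, not ChiMS's.
import random

random.seed(42)
N_SAMPLES = 1024   # points per trace (assumed)
N_SHOTS = 100      # triggers to average

def read_trace():
    """Simulated digitizer read: Gaussian noise plus a peak at bin 300."""
    trace = [random.gauss(0.0, 1.0) for _ in range(N_SAMPLES)]
    trace[300] += 50.0
    return trace

accum = [0.0] * N_SAMPLES
for _ in range(N_SHOTS):
    for i, v in enumerate(read_trace()):
        accum[i] += v

spectrum = [v / N_SHOTS for v in accum]
peak_bin = max(range(N_SAMPLES), key=lambda i: spectrum[i])
```

Averaging N shots suppresses the noise floor by roughly the square root of N, which is why the peak stands out cleanly in the averaged spectrum.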

  8. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers

    PubMed Central

    Cui, Yang; Hanley, Luke

    2015-01-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser-based mass spectrometer imaging and various other experiments in laser physics, physical chemistry, and surface science. PMID:26133872

  9. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers

    NASA Astrophysics Data System (ADS)

    Cui, Yang; Hanley, Luke

    2015-06-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser-based mass spectrometer imaging and various other experiments in laser physics, physical chemistry, and surface science.

  10. Free and open source software for the manipulation of digital images.

    PubMed

    Solomon, Robert W

    2009-06-01

    Free and open source software is software that is freely downloadable yet nearly as powerful as commercial software, capable of almost everything the expensive programs can do. GIMP (GNU Image Manipulation Program) is a free program comparable to Photoshop, with versions available for Windows, Macintosh, and Linux. This article briefly describes how GIMP can be installed and used to manipulate radiology images. It is no longer necessary to budget large amounts of money for high-quality image-processing and document-creation software, because free and open source software is available for the user to download at will.

  11. Technology Literacy and the MySpace Generation: They're Not Asking Permission

    ERIC Educational Resources Information Center

    McLester, Susan

    2007-01-01

    As open source and other participatory Web venues become the norm in the new century, educators will be facing an even more overwhelming technology learning curve. A new digital divide is in the future--one that is largely generational. At its heart will be the fundamental questions of what "school" really means and whether digital immigrants can…

  12. Polymyalgia Rheumatica and Giant Cell Arteritis

    MedlinePlus


  13. Open-source, community-driven microfluidics with Metafluidics.

    PubMed

    Kong, David S; Thorsen, Todd A; Babb, Jonathan; Wick, Scott T; Gam, Jeremy J; Weiss, Ron; Carr, Peter A

    2017-06-07

    Microfluidic devices have the potential to automate and miniaturize biological experiments, but open-source sharing of device designs has lagged behind sharing of other resources such as software. Synthetic biologists have used microfluidics for DNA assembly, cell-free expression, and cell culture, but a combination of expense, device complexity, and reliance on custom set-ups hampers their widespread adoption. We present Metafluidics, an open-source, community-driven repository that hosts digital design files, assembly specifications, and open-source software to enable users to build, configure, and operate a microfluidic device. We use Metafluidics to share designs and fabrication instructions for both a microfluidic ring-mixer device and a 32-channel tabletop microfluidic controller. This device and controller are applied to build genetic circuits using standard DNA assembly methods including ligation, Gateway, Gibson, and Golden Gate. Metafluidics is intended to enable a broad community of engineers, DIY enthusiasts, and other nontraditional participants with limited fabrication skills to contribute to microfluidic research.

  14. Open-source mobile digital platform for clinical trial data collection in low-resource settings.

    PubMed

    van Dam, Joris; Omondi Onyango, Kevin; Midamba, Brian; Groosman, Nele; Hooper, Norman; Spector, Jonathan; Pillai, Goonaseelan Colin; Ogutu, Bernhards

    2017-02-01

    Governments, universities and pan-African research networks are building durable infrastructure and capabilities for biomedical research in Africa. This offers the opportunity to adopt from the outset innovative approaches and technologies that would be challenging to retrofit into fully established research infrastructures such as those regularly found in high-income countries. In this context we piloted the use of a novel mobile digital health platform, designed specifically for low-resource environments, to support high-quality data collection in a clinical research study. Our primary aim was to assess the feasibility of using a mobile digital platform for clinical trial data collection in a low-resource setting. Secondarily, we sought to explore the potential benefits of such an approach. The investigative site was a research institute in Nairobi, Kenya. We integrated an open-source platform for mobile data collection commonly used in the developing world with an open-source, standard platform for electronic data capture in clinical trials. The integration was developed using common data standards (Clinical Data Interchange Standards Consortium (CDISC) Operational Data Model), maximising the potential to extend the approach to other platforms. The system was deployed in a pharmacokinetic study involving healthy human volunteers. The electronic data collection platform successfully supported conduct of the study. Multidisciplinary users reported high levels of satisfaction with the mobile application and highlighted substantial advantages when compared with traditional paper record systems. The new system also demonstrated a potential for expediting data quality review. This pilot study demonstrated the feasibility of using a mobile digital platform for clinical research data collection in low-resource settings. Sustainable scientific capabilities and infrastructure are essential to attract and support clinical research studies. 
Since many research structures in Africa are being developed anew, stakeholders should consider implementing innovative technologies and approaches.
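
The CDISC ODM standard mentioned above is an XML interchange format. The following is a simplified sketch of a ClinicalData payload built with the Python standard library; the element and attribute names follow the ODM model, but all OIDs and values are made up, and a real document additionally carries namespaces, creation metadata, and the intermediate StudyEventData/FormData nesting:

```python
# Simplified CDISC ODM ClinicalData payload built with the standard
# library. OIDs and values are made up for illustration; a real ODM file
# adds namespaces, creation metadata, and StudyEventData/FormData nesting.
import xml.etree.ElementTree as ET

odm = ET.Element("ODM", FileOID="F.001", FileType="Transactional")
clinical = ET.SubElement(odm, "ClinicalData",
                         StudyOID="S.PK01", MetaDataVersionOID="v1.0")
subject = ET.SubElement(clinical, "SubjectData", SubjectKey="SUBJ-001")
group = ET.SubElement(subject, "ItemGroupData", ItemGroupOID="IG.VITALS")
ET.SubElement(group, "ItemData", ItemOID="IT.WEIGHT_KG", Value="72.5")

xml_bytes = ET.tostring(odm, encoding="utf-8")
```

Expressing each collected value as an `ItemData` element keyed by OID is what lets a mobile collection tool and a clinical data capture system exchange records without sharing an internal database schema.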

  15. Trends in the Evolution of the Public Web, 1998-2002; The Fedora Project: An Open-source Digital Object Repository Management System; State of the Dublin Core Metadata Initiative, April 2003; Preservation Metadata; How Many People Search the ERIC Database Each Day?

    ERIC Educational Resources Information Center

    O'Neill, Edward T.; Lavoie, Brian F.; Bennett, Rick; Staples, Thornton; Wayland, Ross; Payette, Sandra; Dekkers, Makx; Weibel, Stuart; Searle, Sam; Thompson, Dave; Rudner, Lawrence M.

    2003-01-01

    Includes five articles that examine key trends in the development of the public Web: size and growth, internationalization, and metadata usage; Flexible Extensible Digital Object and Repository Architecture (Fedora) for use in digital libraries; developments in the Dublin Core Metadata Initiative (DCMI); the National Library of New Zealand Te Puna…

  16. Analyzing huge pathology images with open source software.

    PubMed

    Deroulers, Christophe; Ameisen, David; Badoual, Mathilde; Gerin, Chloé; Granier, Alexandre; Lartaud, Marc

    2013-06-06

    Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are now broadly available and because the quantitative study of these images yields new insights in systems biology. However, such virtual slides pose a technical challenge, since the images often occupy several gigabytes and cannot be fully opened in a computer's memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail to handle them, and the others require expensive hardware while still being prohibitively slow. We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Our open source software enables dealing with huge images with standard software on average computers. The tools are cross-platform, independent of proprietary libraries, and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre that does image analysis of many slides on a computer cluster. 
The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5955513929846272.
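
The mosaic step described above amounts to computing crop boxes that tile a huge image, with optional overlap between neighbouring tiles. A coordinate-only sketch (illustrative Python; NDPITools itself operates on NDPI/TIFF pixel data directly):

```python
# Compute the crop boxes that divide a large image into fixed-size tiles
# with optional overlap, as in the mosaic step. Coordinates only; pixel
# extraction would go through a TIFF/NDPI reader.

def tile_boxes(width, height, tile, overlap=0):
    """Return (left, top, right, bottom) boxes covering a width x height image."""
    step = tile - overlap
    return [(left, top, min(left + tile, width), min(top + tile, height))
            for top in range(0, height, step)
            for left in range(0, width, step)]

# Example: a 10000 x 8000 pixel slide cut into 4096 px tiles, 256 px overlap.
boxes = tile_boxes(10000, 8000, 4096, overlap=256)
```

Overlap matters when tiles are analysed independently: a cell straddling a tile border appears whole in at least one tile, so per-tile results can be merged without losing boundary objects.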

  17. Analyzing huge pathology images with open source software

    PubMed Central

    2013-01-01

    Background Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are now broadly available and because the quantitative study of these images yields new insights in systems biology. However, such virtual slides pose a technical challenge, since the images often occupy several gigabytes and cannot be fully opened in a computer’s memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail to handle them, and the others require expensive hardware while still being prohibitively slow. Results We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Conclusions Our open source software enables dealing with huge images with standard software on average computers. The tools are cross-platform, independent of proprietary libraries, and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre that does image analysis of many slides on a computer cluster. 
Virtual slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5955513929846272 PMID:23829479

  18. Digital geologic map and GIS database of Venezuela

    USGS Publications Warehouse

    Garrity, Christopher P.; Hackley, Paul C.; Urbani, Franco

    2006-01-01

    The digital geologic map and GIS database of Venezuela captures GIS compatible geologic and hydrologic data from the 'Geologic Shaded Relief Map of Venezuela,' which was released online as U.S. Geological Survey Open-File Report 2005-1038. Digital datasets and corresponding metadata files are stored in ESRI geodatabase format; accessible via ArcGIS 9.X. Feature classes in the geodatabase include geologic unit polygons, open water polygons, coincident geologic unit linework (contacts, faults, etc.) and non-coincident geologic unit linework (folds, drainage networks, etc.). Geologic unit polygon data were attributed for age, name, and lithologic type following the Lexico Estratigrafico de Venezuela. All digital datasets were captured from source data at 1:750,000. Although users may view and analyze data at varying scales, the authors make no guarantee as to the accuracy of the data at scales larger than 1:750,000.

  19. ImTK: an open source multi-center information management toolkit

    NASA Astrophysics Data System (ADS)

    Alaoui, Adil; Ingeholm, Mary Lou; Padh, Shilpa; Dorobantu, Mihai; Desai, Mihir; Cleary, Kevin; Mun, Seong K.

    2008-03-01

    The Information Management Toolkit (ImTK) Consortium is an open source initiative to develop robust, freely available tools related to the information management needs of basic, clinical, and translational research. An open source framework and agile programming methodology can enable distributed software development, while an open architecture will encourage interoperability across different environments. The ISIS Center has conceptualized a prototype data sharing network that simulates a multi-center environment based on a federated data access model. This model includes the development of software tools to enable efficient exchange, sharing, management, and analysis of multimedia medical information such as clinical information, images, and bioinformatics data from multiple data sources. The envisioned ImTK data environment will include an open architecture and data model implementation that complies with existing standards such as Digital Imaging and Communications in Medicine (DICOM), Health Level 7 (HL7), and the technical framework and workflow defined by the Integrating the Healthcare Enterprise (IHE) Information Technology Infrastructure initiative, mainly the Cross Enterprise Document Sharing (XDS) specifications.

  20. The Role of Free/Libre and Open Source Software in Learning Health Systems.

    PubMed

    Paton, C; Karopka, T

    2017-08-01

    Objective: To give an overview of the role of Free/Libre and Open Source Software (FLOSS) in the context of secondary use of patient data to enable Learning Health Systems (LHSs). Methods: We conducted an environmental scan of the academic and grey literature, utilising the MedFLOSS database of open source systems in healthcare, to inform a discussion of the role of open source in developing LHSs that reuse patient data for research and quality improvement. Results: A wide range of FLOSS is identified that contributes to the information technology (IT) infrastructure of LHSs, including operating systems, databases, frameworks, interoperability software, and mobile and web apps. The recent literature around the development and use of key clinical data management tools is also reviewed. Conclusions: FLOSS already plays a critical role in modern health IT infrastructure for the collection, storage, and analysis of patient data. The collaborative, modular, and modifiable nature of FLOSS systems may make open source approaches appropriate for building the digital infrastructure of an LHS.

  1. Joint Replacement Surgery: Health Information Basics for You and Your Family

    MedlinePlus


  2. Sparking Innovative Learning & Creativity. 2007 NMC Summer Conference Proceedings (Indianapolis, IN, Jun 6-9, 2007)

    ERIC Educational Resources Information Center

    Smith, Rachel S., Ed.

    2007-01-01

    The conference proceedings include the following papers: (1) The Arts Metaverse in Open Croquet: Exploring an Open Source 3-D Online Digital World (Ulrich Rauch and Tim Wang); (2) Beyond World of Warcraft: the Universe of MMOGs (Ruben R. Puentedura); (3) ClevelandPlus in Second Life (Wendy Shapiro, Lev Gonick, and Sue Shick); (4) Folksemantic:…

  3. Herschel Observations of Protostellar and Young Stellar Objects in Nearby Molecular Clouds: The DIGIT Open Time Key Project

    NASA Astrophysics Data System (ADS)

    Green, Joel D.; DIGIT OTKP Team

    2010-01-01

    The DIGIT (Dust, Ice, and Gas In Time) Open Time Key Project utilizes the PACS spectrometer (57-210 um) onboard the Herschel Space Observatory to study the colder regions of young stellar objects and protostellar cores, complementary to recent observations from Spitzer and ground-based observatories. DIGIT focuses on 30 embedded sources and 64 disk sources, and includes supporting photometry from PACS and SPIRE, as well as spectroscopy from HIFI, selected from nearby molecular clouds. For the embedded sources, PACS spectroscopy will allow us to address the origin of [CI] and high-J CO lines observed with ISO-LWS. Our observations are sensitive to the presence of cold crystalline water ice, diopside, and carbonates. Additionally, PACS scans are 5x5 maps of the embedded sources and their outflows. Observations of more evolved disk sources will sample low and intermediate mass objects as well as a variety of spectral types from A to M. Many of these sources are extremely rich in mid-IR crystalline dust features, enabling us to test whether similar features can be detected at larger radii, via colder dust emission at longer wavelengths. If processed grains are present only in the inner disk (in the case of full disks) or from the emitting wall surface which marks the outer edge of the gap (in the case of transitional disks), there must be short timescales for dust processing; if processed grains are detected in the outer disk, radial transport must be rapid and efficient. Weak bands of forsterite and clino- and ortho-enstatite in the 60-75 um range provide information about the conditions under which these materials were formed. For the Science Demonstration Phase we are observing an embedded protostar (DK Cha) and a Herbig Ae/Be star (HD 100546), exemplars of the kind of science that DIGIT will achieve over the full program.

  4. Chemotion ELN: an Open Source electronic lab notebook for chemists in academia.

    PubMed

    Tremouilhac, Pierre; Nguyen, An; Huang, Yu-Chieh; Kotov, Serhii; Lütjohann, Dominic Sebastian; Hübsch, Florian; Jung, Nicole; Bräse, Stefan

    2017-09-25

    The development of an electronic lab notebook (ELN) for researchers working in the field of chemical sciences is presented. The web-based application is available as Open Source software that offers modern solutions for chemical researchers. The Chemotion ELN is equipped with the basic functionalities necessary for the acquisition and processing of chemical data, in particular the work with molecular structures and calculations based on molecular properties. The ELN supports planning, description, storage, and management for the routine work of organic chemists. It also provides tools for communicating and sharing the recorded research data among colleagues. Meeting the requirements of a state-of-the-art research infrastructure, the ELN allows the search for molecules and reactions not only within the user's data but also in conventional external sources as provided by SciFinder and PubChem. The presented development responds to the growing dependency of scientific activity on the availability of digital information by providing Open Source instruments to record and reuse research data. The current version of the ELN has been in use for over half a year in our chemistry research group, where it serves as a common infrastructure for chemistry research and enables researchers to build their own databases of digital information as a prerequisite for the detailed, systematic investigation and evaluation of chemical reactions and mechanisms.

  5. A new chapter in environmental sensing: The Open-Source Published Environmental Sensing (OPENS) laboratory

    NASA Astrophysics Data System (ADS)

    Selker, J. S.; Roques, C.; Higgins, C. W.; Good, S. P.; Hut, R.; Selker, A.

    2015-12-01

    The confluence of 3-dimensional printing, low-cost solid-state sensors, low-cost low-power digital controllers (e.g., Arduinos), and open-source publishing (e.g., GitHub) is poised to transform environmental sensing. The Open-Source Published Environmental Sensing (OPENS) laboratory has launched and is available for all to use. OPENS combines cutting-edge technologies and makes them available to the global environmental sensing community. OPENS includes a Maker lab space where people may collaborate in person or virtually via an online forum for the publication and discussion of environmental sensing technology (Corvallis, Oregon, USA; please feel free to request a free reservation for space and equipment use). The physical lab houses a test-bed for sensors, as well as a complete classical machine shop, 3-D printers, electronics development benches, and workstations for code development. OPENS will provide a web-based formal publishing framework wherein global students and scientists can publish peer-reviewed advancements (with DOIs) in environmental sensor systems. This curated and peer-reviewed digital collection will include complete sets of "printable" parts and operating computer code for sensing systems. The physical lab will include all of the machines required to produce these sensing systems. These tools can be accessed in person or virtually, creating a truly global venue for advancement in monitoring Earth's environment and agricultural systems. In this talk we will present an example of the process of designing and publishing the design and data for the OPENS-Permeameter. The publication includes 3-D printing code; Arduino (or other control/logging platform) operational code; sample data sets; and a full discussion of the design, set in the scientific context of previous related devices. Editors for the peer-review process are currently sought - contact John.Selker@Oregonstate.edu or Clement.Roques@Oregonstate.edu.
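
As an illustration of the kind of computation such a published instrument would document, the textbook falling-head permeameter reduction K = (aL / At) · ln(h1/h2) can be coded directly. This is a generic sketch with made-up numbers; the actual OPENS-Permeameter design, firmware, and data live in the OPENS repository:

```python
# Falling-head permeameter reduction: the textbook formula
# K = (a * L) / (A * t) * ln(h1 / h2). A generic illustration; the
# OPENS-Permeameter's own design, firmware, and data live in the
# OPENS repository, not here.
import math

def falling_head_K(a, A, L, t, h1, h2):
    """Hydraulic conductivity (m/s): a, A are standpipe and sample
    cross-sections (m^2), L sample length (m), t elapsed time (s),
    h1, h2 initial and final heads (m)."""
    return (a * L) / (A * t) * math.log(h1 / h2)

# Head falling from 1.0 m to 0.5 m over 10 minutes (made-up numbers):
K = falling_head_K(a=1e-4, A=8e-3, L=0.1, t=600.0, h1=1.0, h2=0.5)
```

Publishing the reduction formula alongside the printable hardware and logger code is what makes such an instrument reproducible end to end.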

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gabriele, Fatuzzo; Michele, Mangiameli, E-mail: amichele.mangiameli@dica.unict.it; Giuseppe, Mussumeci

    Laser scanning is a technology that makes it possible to survey the geometry of objects quickly, with a high level of detail and completeness, based on the signal emitted by the laser and the corresponding return signal. When the incident laser radiation hits the object being surveyed, the radiation is reflected. The purpose is to build a three-dimensional digital model that reconstructs the object and supports studies of its design, restoration and/or conservation. When the laser scanner is equipped with a digital camera, the result of the measurement process is a set of points in XYZ coordinates of high density and accuracy, with radiometric and RGB values. In this case, the set of measured points is called a “point cloud” and allows reconstruction of the Digital Surface Model. Although post-processing is usually performed with closed-source software, whose copyright restricts free use, free and open source software can perform at least as well. Indeed, the latter can be used freely and allows the source code to be inspected and even customized. The experience started at the Faculty of Engineering in Catania is aimed at evaluating a free and open source tool, MeshLab (an Italian data-processing package), against a reference closed-source package for data processing, RapidForm. In this work, we compare the results obtained with MeshLab and RapidForm through the planning of the survey and the acquisition of the point cloud of a morphologically complex statue.
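
A comparison like the one above ultimately reduces to measuring how far one processed point cloud lies from the other. A brute-force sketch of the mean nearest-neighbor (cloud-to-cloud) distance, using small synthetic clouds (illustrative Python; real survey-scale clouds need a spatial index such as a k-d tree or octree):

```python
# Mean nearest-neighbor distance from one point cloud to another, a basic
# cloud-to-cloud comparison metric. Brute force over small synthetic
# clouds; real scans require a spatial index (k-d tree or octree).
import math
import random

random.seed(0)
cloud_a = [(random.random(), random.random(), random.random())
           for _ in range(200)]
# A second pipeline's output: the same points shifted by 1 mm (unit = m).
cloud_b = [(x + 0.001, y, z) for (x, y, z) in cloud_a]

def mean_nn_distance(src, dst):
    """Average distance from each src point to its nearest dst point."""
    return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)

d = mean_nn_distance(cloud_a, cloud_b)
```

A near-zero mean distance indicates the two pipelines produced geometrically equivalent clouds; systematic offsets or smoothing differences show up as a distance well above the scanner's noise floor.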

  7. Adopting Open Source Software to Address Software Risks during the Scientific Data Life Cycle

    NASA Astrophysics Data System (ADS)

    Vinay, S.; Downs, R. R.

    2012-12-01

    Software enables the creation, management, storage, distribution, discovery, and use of scientific data throughout the data lifecycle. However, the capabilities offered by software also present risks for the stewardship of scientific data, since future access to digital data is dependent on the use of software. From operating systems to applications for analyzing data, the dependence of data on software presents challenges for the stewardship of scientific data. Adopting open source software provides opportunities to address some of the proprietary risks of data dependence on software. For example, in some cases, open source software can be deployed to avoid licensing restrictions for using, modifying, and transferring proprietary software. The availability of the source code of open source software also enables the inclusion of modifications, which may be contributed by various community members who are addressing similar issues. Likewise, an active community that is maintaining open source software can be a valuable source of help, providing an opportunity to collaborate to address common issues facing adopters. As part of the effort to meet the challenges of software dependence for scientific data stewardship, risks from software dependence have been identified that exist during various times of the data lifecycle. The identification of these risks should enable the development of plans for mitigating software dependencies, where applicable, using open source software, and to improve understanding of software dependency risks for scientific data and how they can be reduced during the data life cycle.

  8. Long-term Science Data Curation Using a Digital Object Model and Open-Source Frameworks

    NASA Astrophysics Data System (ADS)

    Pan, J.; Lenhardt, W.; Wilson, B. E.; Palanisamy, G.; Cook, R. B.

    2010-12-01

    Scientific digital content, including Earth Science observations and model output, has become more heterogeneous in format and more distributed across the Internet. In addition, data and metadata are becoming necessarily linked internally and externally on the Web. As a result, such content has become more difficult for providers to manage and preserve and for users to locate, understand, and consume. Specifically, it is increasingly harder to deliver relevant metadata and data processing lineage information along with the actual content consistently. Readme files, data quality information, production provenance, and other descriptive metadata are often separated at the storage level as well as in the data search and retrieval interfaces available to a user. Critical archival metadata, such as auditing trails and integrity checks, are often even more difficult for users to access, if they exist at all. We investigate the use of several open-source software frameworks to address these challenges. We use the Fedora Commons framework and its digital object abstraction as the repository, the Drupal CMS as the user interface, and the Islandora module as the connector from Drupal to the Fedora repository. With the digital object model, descriptive and provenance metadata can be associated with data content in a formal manner, as can external references and other auxiliary information. Changes to an object are formally audited, and digital contents are versioned, with checksums computed automatically. Further, relationships among objects are formally expressed with RDF triples. Data replication, recovery, and metadata export are supported with standard protocols, such as OAI-PMH.
We provide a tentative comparative analysis of the chosen software stack with the Open Archival Information System (OAIS) reference model, along with our initial results with the existing terrestrial ecology data collections at NASA’s ORNL Distributed Active Archive Center for Biogeochemical Dynamics (ORNL DAAC).
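
The integrity checks mentioned above boil down to storing a digest with each datastream at ingest and recomputing it on audit. A minimal sketch of that fixity idea (a repository such as Fedora records and verifies checksums through its own object model; the payload and object ID below are made up):

```python
# Fixity checking in miniature: store a SHA-256 digest with the datastream
# at ingest, recompute on audit. Shows the idea only; a repository like
# Fedora records and verifies checksums through its own object model.
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

datastream = b"NDVI,2010-06-01,0.71\n"        # made-up payload
record = {
    "pid": "demo:1",                          # hypothetical object ID
    "data": datastream,
    "checksum": checksum(datastream),         # stored at ingest
}

# A later audit recomputes the digest to detect silent corruption:
intact = checksum(record["data"]) == record["checksum"]
```

Running such audits on a schedule is what turns stored checksums into an actual preservation guarantee against bit rot.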

  9. Internet Research, Uncensored

    ERIC Educational Resources Information Center

    Kean, Sam

    2007-01-01

    In this article, the author discusses a computer program called Psiphon which bypasses government filters undetected. The University of Toronto's Citizen Lab, a research center for digital media and politics, designed Psiphon for technology-savvy activists. Some technology-savvy activists use other open-source software, like Tor (which relies on…

  10. Open Ephys: an open-source, plugin-based platform for multichannel electrophysiology.

    PubMed

    Siegle, Joshua H; López, Aarón Cuevas; Patel, Yogi A; Abramov, Kirill; Ohayon, Shay; Voigts, Jakob

    2017-08-01

    Closed-loop experiments, in which causal interventions are conditioned on the state of the system under investigation, have become increasingly common in neuroscience. Such experiments can have a high degree of explanatory power, but they require a precise implementation that can be difficult to replicate across laboratories. We sought to overcome this limitation by building open-source software that makes it easier to develop and share algorithms for closed-loop control. We created the Open Ephys GUI, an open-source platform for multichannel electrophysiology experiments. In addition to the standard 'open-loop' visualization and recording functionality, the GUI also includes modules for delivering feedback in response to events detected in the incoming data stream. Importantly, these modules can be built and shared as plugins, which makes it possible for users to extend the functionality of the GUI through a simple API, without having to understand the inner workings of the entire application. In combination with low-cost, open-source hardware for amplifying and digitizing neural signals, the GUI has been used for closed-loop experiments that perturb the hippocampal theta rhythm in a phase-specific manner. The Open Ephys GUI is the first widely used application for multichannel electrophysiology that leverages a plugin-based workflow. We hope that it will lower the barrier to entry for electrophysiologists who wish to incorporate real-time feedback into their research.
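    The closed-loop pattern the abstract describes can be sketched independently of the real plugin API: watch the incoming sample stream and fire a feedback callback when an event is detected. The ThresholdDetector below is a hypothetical, simplified stand-in; actual Open Ephys plugins are written in C++ against the GUI's plugin API.

```python
class ThresholdDetector:
    """Toy event-detection 'plugin': calls on_event with the sample index
    each time the signal crosses the threshold from below."""

    def __init__(self, threshold, on_event):
        self.threshold = threshold
        self.on_event = on_event
        self._above = False  # track state so each crossing fires once

    def process(self, samples):
        for i, s in enumerate(samples):
            if s >= self.threshold and not self._above:
                self._above = True
                self.on_event(i)   # closed-loop feedback, e.g. trigger a stimulator
            elif s < self.threshold:
                self._above = False

events = []
det = ThresholdDetector(threshold=0.5, on_event=events.append)
det.process([0.1, 0.2, 0.7, 0.9, 0.3, 0.6])
print(events)  # [2, 5]
```

    Phase-specific perturbation, as in the theta-rhythm experiments mentioned above, would replace the threshold test with a real-time phase estimate, but the stream-in, event-out structure is the same.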

  11. Open Ephys: an open-source, plugin-based platform for multichannel electrophysiology

    NASA Astrophysics Data System (ADS)

    Siegle, Joshua H.; Cuevas López, Aarón; Patel, Yogi A.; Abramov, Kirill; Ohayon, Shay; Voigts, Jakob

    2017-08-01

    Objective. Closed-loop experiments, in which causal interventions are conditioned on the state of the system under investigation, have become increasingly common in neuroscience. Such experiments can have a high degree of explanatory power, but they require a precise implementation that can be difficult to replicate across laboratories. We sought to overcome this limitation by building open-source software that makes it easier to develop and share algorithms for closed-loop control. Approach. We created the Open Ephys GUI, an open-source platform for multichannel electrophysiology experiments. In addition to the standard ‘open-loop’ visualization and recording functionality, the GUI also includes modules for delivering feedback in response to events detected in the incoming data stream. Importantly, these modules can be built and shared as plugins, which makes it possible for users to extend the functionality of the GUI through a simple API, without having to understand the inner workings of the entire application. Main results. In combination with low-cost, open-source hardware for amplifying and digitizing neural signals, the GUI has been used for closed-loop experiments that perturb the hippocampal theta rhythm in a phase-specific manner. Significance. The Open Ephys GUI is the first widely used application for multichannel electrophysiology that leverages a plugin-based workflow. We hope that it will lower the barrier to entry for electrophysiologists who wish to incorporate real-time feedback into their research.

  12. A Technology Enhanced Learning Model for Quality Education

    NASA Astrophysics Data System (ADS)

    Sherly, Elizabeth; Uddin, Md. Meraj

    The Technology Enhanced Learning and Teaching (TELT) Model provides learning through collaborations and interactions, with a framework for content development and collaborative knowledge sharing that supplements learning to improve the quality of the education system. TELT offers a unique pedagogy model for technology enhanced learning that includes a course management system, a digital library, multimedia-enriched content and video lectures, an open content management system, and collaboration and knowledge sharing systems. Open-source tools such as Moodle and wikis for content development, a video-on-demand solution built on a low-cost mid-range system, and an exhaustive digital library are provided in a portal system. The paper presents a case study of the e-learning initiatives built on the TELT model at IIITM-K and how effectively they were implemented.

  13. Development open source microcontroller based temperature data logger

    NASA Astrophysics Data System (ADS)

    Abdullah, M. H.; Che Ghani, S. A.; Zaulkafilai, Z.; Tajuddin, S. N.

    2017-10-01

    This article discusses the stages of designing, prototyping, testing and deploying a portable open source microcontroller based temperature data logger for use in rough industrial environments. The 5 V prototype data logger is built around an open source Arduino microcontroller that integrates multiple thermocouple sensors and their modules, secure digital (SD) card storage, a liquid crystal display (LCD), a real time clock, and an electronic enclosure made of acrylic. The datalogger firmware is written so that eight thermocouple readings are acquired within a 3 s interval and displayed on the LCD simultaneously. The recorded temperature readings at four different points on both hydrodistillation units show similar profile patterns, and the highest yield of extracted oil, 0.004%, was achieved on hydrodistillation unit 2. From the obtained results, this study achieved its objective of developing an inexpensive, portable and robust eight-channel temperature measuring module capable of monitoring and storing real time data.
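    The logging cycle described above, eight channels sampled every 3 s and written to storage, can be sketched as follows. The actual prototype runs Arduino C firmware; this Python sketch only illustrates the acquire-and-record loop, and read_thermocouple is a hypothetical stand-in for the real sensor-module driver:

```python
import csv
import io
import random
import datetime

NUM_CHANNELS = 8  # matches the eight thermocouple inputs described above

def read_thermocouple(channel):
    """Hypothetical sensor read; the real logger queries thermocouple
    amplifier modules over a digital bus."""
    return round(95.0 + random.uniform(-0.5, 0.5), 2)

buf = io.StringIO()  # stands in for the SD-card file
writer = csv.writer(buf)
writer.writerow(["timestamp"] + [f"T{i+1}" for i in range(NUM_CHANNELS)])

timestamp = datetime.datetime(2017, 10, 1, 12, 0, 0)
for _ in range(3):  # three logging cycles; the prototype samples every 3 s
    row = [timestamp.isoformat()] + [read_thermocouple(c) for c in range(NUM_CHANNELS)]
    writer.writerow(row)
    timestamp += datetime.timedelta(seconds=3)

print(buf.getvalue().splitlines()[0])  # timestamp,T1,T2,T3,T4,T5,T6,T7,T8
```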

  14. Generating Accurate 3d Models of Architectural Heritage Structures Using Low-Cost Camera and Open Source Algorithms

    NASA Astrophysics Data System (ADS)

    Zacharek, M.; Delis, P.; Kedzierski, M.; Fryskowska, A.

    2017-05-01

    These studies have been conducted using a non-metric digital camera and dense image matching algorithms as non-contact methods of creating monument documentation. In order to process the imagery, several open-source software packages and algorithms for generating a dense point cloud from images were used: OSM Bundler, the VisualSFM software, and the web application ARC3D. Images obtained for each of the investigated objects were processed using those applications, and then dense point clouds and textured 3D models were created. As a result of post-processing, the obtained models were filtered and scaled. The research showed that even with open-source software it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), but for the purpose of documentation and conservation of cultural and historical heritage, such accuracy can be insufficient.

  15. THE CDF ARCHIVE: HERSCHEL PACS AND SPIRE SPECTROSCOPIC DATA PIPELINE AND PRODUCTS FOR PROTOSTARS AND YOUNG STELLAR OBJECTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, Joel D.; Yang, Yao-Lun; Evans, Neal J., II

    2016-03-15

    We present the COPS-DIGIT-FOOSH (CDF) Herschel spectroscopy data product archive, and related ancillary data products, along with data fidelity assessments, and a user-created archive in collaboration with the Herschel-PACS and SPIRE ICC groups. Our products include datacubes, contour maps, automated line fitting results, and best 1D spectra products for all protostellar and disk sources observed with PACS in RangeScan mode for two observing programs: the DIGIT Open Time Key Program (KPOT-nevans-1 and SDP-nevans-1; PI: N. Evans), and the FOOSH Open Time Program (OT1-jgreen02-2; PI: J. Green). In addition, we provide our best SPIRE-FTS spectroscopic products for the COPS Open Time Program (OT2-jgreen02-6; PI: J. Green) and FOOSH sources. We include details of data processing, descriptions of output products, and tests of their reliability for user applications. We identify the parts of the data set to be used with caution. The resulting absolute flux calibration has improved in almost all cases. Compared to previous reductions, the resulting rotational temperatures and numbers of CO molecules have changed substantially in some sources. On average, however, the rotational temperatures have not changed substantially (<2%), but the number of warm (T{sub rot} ∼ 300 K) CO molecules has increased by about 18%.

  16. The CDF Archive: Herschel PACS and SPIRE Spectroscopic Data Pipeline and Products for Protostars and Young Stellar Objects

    NASA Astrophysics Data System (ADS)

    Green, Joel D.; Yang, Yao-Lun; Evans, Neal J., II; Karska, Agata; Herczeg, Gregory; van Dishoeck, Ewine F.; Lee, Jeong-Eun; Larson, Rebecca L.; Bouwman, Jeroen

    2016-03-01

    We present the COPS-DIGIT-FOOSH (CDF) Herschel spectroscopy data product archive, and related ancillary data products, along with data fidelity assessments, and a user-created archive in collaboration with the Herschel-PACS and SPIRE ICC groups. Our products include datacubes, contour maps, automated line fitting results, and best 1D spectra products for all protostellar and disk sources observed with PACS in RangeScan mode for two observing programs: the DIGIT Open Time Key Program (KPOT_nevans1 and SDP_nevans_1; PI: N. Evans), and the FOOSH Open Time Program (OT1_jgreen02_2; PI: J. Green). In addition, we provide our best SPIRE-FTS spectroscopic products for the COPS Open Time Program (OT2_jgreen02_6; PI: J. Green) and FOOSH sources. We include details of data processing, descriptions of output products, and tests of their reliability for user applications. We identify the parts of the data set to be used with caution. The resulting absolute flux calibration has improved in almost all cases. Compared to previous reductions, the resulting rotational temperatures and numbers of CO molecules have changed substantially in some sources. On average, however, the rotational temperatures have not changed substantially (<2%), but the number of warm (Trot ∼ 300 K) CO molecules has increased by about 18%.

  17. Big Data Meets Physics Education Research: From MOOCs to University-Led High School Programs

    NASA Astrophysics Data System (ADS)

    Seaton, Daniel

    2017-01-01

    The Massive Open Online Course (MOOC) movement has catalyzed discussions of digital learning on campuses around the world and highlighted the increasingly large, complex datasets related to learning. Physics Education Research can and should play a key role in measuring outcomes of this most recent wave of digital education. In this talk, I will discuss big data and learning analytics through multiple modes of teaching and learning enabled by the open-source edX platform: open-online, flipped, and blended. Open-Online learning will be described through analysis of MOOC offerings from Harvard and MIT, where 2.5 million unique users have led to 9 million enrollments across nearly 300 courses. Flipped instruction will be discussed through an Advanced Placement program at Davidson College that empowers high school teachers to use AP aligned, MOOC content directly in their classrooms with only their students. Analysis of this program will be highlighted, including results from a pilot study showing a positive correlation between content usage and externally validated AP exam scores. Lastly, blended learning will be discussed through specific residential use cases at Davidson College and MIT, highlighting unique course models that blend open-online and residential experiences. My hope for this talk is that listeners will better understand the current wave of digital education and the opportunities it provides for data-driven teaching and learning.

  18. HELI-DEM portal for geo-processing services

    NASA Astrophysics Data System (ADS)

    Cannata, Massimiliano; Antonovic, Milan; Molinari, Monia

    2014-05-01

    HELI-DEM (Helvetia-Italy Digital Elevation Model) is a project developed in the framework of Italy/Switzerland Operational Programme for Trans-frontier Cooperation 2007-2013 whose major aim is to create a unified digital terrain model that includes the alpine and sub-alpine areas between Italy and Switzerland. The partners of the project are: Lombardy Region, Piedmont Region, Polytechnic of Milan, Polytechnic of Turin and Fondazione Politecnico from Italy; Institute of Earth Sciences (SUPSI) from Switzerland. The digital terrain model has been produced by integrating and validating the different elevation data available for the areas of interest, characterized by different reference frame, resolutions and accuracies: DHM at 25 m resolution from Swisstopo, DTM at 20 m resolution from Lombardy Region, DTM at 5 m resolution from Piedmont Region and DTM LiDAR PST-A at about 1 m resolution, that covers the main river bed areas and is produced by the Italian Ministry of the Environment. Further results of the project are: the generation of a unique Italian Swiss geoid with an accuracy of few centimeters (Gilardoni et al. 2012); the establishment of a GNSS permanent network, prototype of a transnational positioning service; the development of a geo-portal, entirely based on open source technologies and open standards, which provides the cross-border DTM and offers some capabilities of analysis and processing through the Internet. With this talk, the authors want to present the main steps of the project with a focus on the HELI-DEM geo-portal development carried out by the Institute of Earth Sciences, which is the access point to the DTM outputted from the project. The portal, accessible at http://geoservice.ist.supsi.ch/helidem, is a demonstration of open source technologies combined for providing access to geospatial functionalities to wide non GIS expert public. 
In fact, the system is entirely developed using only Open Standards and Free and Open Source Software (FOSS) on both the server side (services) and the client side (interface). In addition to self-developed code, the system relies mainly on the software GRASS 7 [1], ZOO-Project [2], GeoServer [3] and OpenLayers [4] and the standards WMS [5], WCS [6] and WPS [7]. At the time of writing, the portal offers features such as profiling, contour extraction, watershed delineation and analysis, derivatives calculation, data extraction and coordinate conversion, but it is evolving, and it is planned to extend it with a series of environmental models that the IST developed in the past, such as dam break simulation, landslide run-out estimation and floods due to landslide impact in artificial basins. [1] Neteler M., Mitasova H., Open Source GIS: A GRASS GIS Approach. 3rd Ed. 406 pp, Springer, New York, 2008. [2] Fenoy G., Bozon N., Raghavan V., ZOO Project: The Open WPS Platform. Proceedings of the 1st International Workshop on Pervasive Web Mapping, Geoprocessing and Services (WebMGS). Como, http://www.isprs.org/proceedings/XXXVIII/4-W13/ID_32.pdf, 26-27 August 2010. [3] Giannecchini S., Aime A., GeoServer, il server open source per la gestione interoperabile dei dati geospaziali. Atti 15a Conferenza Nazionale ASITA. Reggia di Colorno, 15-18 November 2011. [4] Perez A.S., OpenLayers Cookbook. Packt Publishing, 2012. ISBN 1849517843. [5] OGC, OpenGIS Web Map Server Implementation Specification, http://www.opengeospatial.org/standards/wms, 2006. [6] OGC, OGC WCS 2.0 Interface Standard - Core, http://portal.opengeospatial.org/files/?artifact_id=41437, 2010. [7] OGC, OpenGIS Web Processing Service, http://portal.opengeospatial.org/files/?artifact_id=24151, 2007.
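    As an illustration of the kind of processing behind the portal's profiling feature, the sketch below samples a DTM along a straight line with bilinear interpolation. It is a simplified, self-contained example, not the portal's actual WPS implementation (which runs on GRASS 7 and the ZOO-Project):

```python
import numpy as np

def profile(dem, start, end, n=50):
    """Sample elevations along a straight line between two grid positions
    (row, col) using bilinear interpolation."""
    rows = np.linspace(start[0], end[0], n)
    cols = np.linspace(start[1], end[1], n)
    r0, c0 = np.floor(rows).astype(int), np.floor(cols).astype(int)
    r1 = np.minimum(r0 + 1, dem.shape[0] - 1)
    c1 = np.minimum(c0 + 1, dem.shape[1] - 1)
    fr, fc = rows - r0, cols - c0
    top = dem[r0, c0] * (1 - fc) + dem[r0, c1] * fc
    bot = dem[r1, c0] * (1 - fc) + dem[r1, c1] * fc
    return top * (1 - fr) + bot * fr

# A tilted synthetic terrain: elevation rises 10 m per row.
dem = np.arange(5)[:, None] * 10.0 + np.zeros((5, 5))
elev = profile(dem, (0, 0), (4, 4), n=5)
print(elev)  # [ 0. 10. 20. 30. 40.]
```

    A production service would add coordinate-system handling and expose the operation through a WPS Execute request rather than a direct function call.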

  19. Web Server Security on Open Source Environments

    NASA Astrophysics Data System (ADS)

    Gkoutzelis, Dimitrios X.; Sardis, Manolis S.

    Administering critical resources has never been more difficult than it is today. In a changing world of software innovation where major changes occur on a daily basis, it is crucial for webmasters and server administrators to shield their data against an unknown arsenal of attacks. Until recently this kind of defense was a privilege of the few and well-budgeted, while low-cost solutions left defenders vulnerable to ever-innovating attack methods. Luckily, the digital revolution of the past decade left its mark, changing the way we approach security forever: open source infrastructure today covers all the prerequisites for a secure web environment in a way we could never have imagined fifteen years ago. Online security of large corporations, military and government bodies is increasingly handled by open source applications, driving the technological trend of the 21st century toward adopting open solutions to e-commerce and privacy issues. This paper describes substantial security precautions for facing privacy and authentication issues in a totally open source web environment. Our goal is to state the best-known problems in data handling and to propose the most appealing techniques for facing these challenges through an open solution.

  20. Open-Source Wax RepRap 3-D Printer for Rapid Prototyping Paper-Based Microfluidics.

    PubMed

    Pearce, J M; Anzalone, N C; Heldt, C L

    2016-08-01

    The open-source release of self-replicating rapid prototypers (RepRaps) has created a rich opportunity for low-cost distributed digital fabrication of complex 3-D objects such as scientific equipment. For example, 3-D printable reactionware devices offer the opportunity to combine open hardware microfluidic handling with lab-on-a-chip reactionware to radically reduce costs and increase the number and complexity of microfluidic applications. To further drive down the cost while improving the performance of lab-on-a-chip paper-based microfluidic prototyping, this study reports on the development of a RepRap upgrade capable of converting a Prusa Mendel RepRap into a wax 3-D printer for paper-based microfluidic applications. An open-source hardware approach is used to demonstrate a 3-D printable upgrade for the 3-D printer, which combines a heated syringe pump with the RepRap/Arduino 3-D control. The bill of materials, designs, basic assembly, and use instructions are provided, along with a completely free and open-source software tool chain. The open-source hardware device described here accelerates the potential of the nascent field of electrochemical detection combined with paper-based microfluidics by dropping the marginal cost of prototyping to nearly zero while accelerating the turnover between paper-based microfluidic designs. © 2016 Society for Laboratory Automation and Screening.
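    To illustrate the kind of toolpath a wax extruder like this consumes, the sketch below emits simple G-code tracing a closed channel outline with a syringe extrusion (E) axis. It is a hypothetical example following common RepRap G-code conventions, not the project's actual tool chain:

```python
def wax_channel_gcode(points, extrude_per_mm=0.05, feed=600):
    """Emit G-code moves tracing a closed channel boundary for a
    syringe-based wax extruder. The extrusion ratio and feed rate are
    illustrative values, not calibrated settings."""
    lines = ["G21 ; millimetres", "G90 ; absolute positioning", "G92 E0"]
    x0, y0 = points[0]
    lines.append(f"G0 X{x0:.2f} Y{y0:.2f}")  # travel to start, no extrusion
    e = 0.0
    prev = points[0]
    for x, y in points[1:] + [points[0]]:    # close the loop
        dist = ((x - prev[0]) ** 2 + (y - prev[1]) ** 2) ** 0.5
        e += dist * extrude_per_mm           # advance the syringe with distance
        lines.append(f"G1 X{x:.2f} Y{y:.2f} E{e:.3f} F{feed}")
        prev = (x, y)
    return "\n".join(lines)

# 10 mm x 2 mm rectangular channel outline
print(wax_channel_gcode([(0, 0), (10, 0), (10, 2), (0, 2)]))
```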

  1. Vector-Based Ground Surface and Object Representation Using Cameras

    DTIC Science & Technology

    2009-12-01

    representations and it is a digital data structure used for the representation of a ground surface in geographical information systems (GIS). Figure...Vision API library, and the OpenCV library. Also, the Posix thread library was utilized to quickly capture the source images from cameras. Both

  2. Effect of Loss on Multiplexed Single-Photon Sources (Open Access Publisher’s Version)

    DTIC Science & Technology

    2015-04-28

    lossy components on near- and long-term experimental goals, we simulate the multiplexed sources when used for many-photon state generation under various...efficient integer factorization and digital quantum simulation [7, 8], which relies critically on the development of a high-performance, on-demand photon ...(SPDC) or spontaneous four-wave mixing: parametric processes which use a pump laser in a nonlinear material to spontaneously generate photon pairs

  3. A Stigmergy Approach for Open Source Software Developer Community Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Xiaohui; Beaver, Justin M; Potok, Thomas E

    2009-01-01

    The stigmergy collaboration approach provides a hypothesized explanation of how online groups work together. In this research, we present a stigmergy approach for building an agent based open source software (OSS) developer community collaboration simulation. We used groups of actors who collaborate on OSS projects as our frame of reference and investigated how the choices actors make in contributing their work to the projects determine the global status of the whole OSS project. In our simulation, forum posts and project code serve as the digital pheromone, and a modified Pierre-Paul Grassé pheromone model is used for computing each developer agent's behavior selection probability.
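    A common way to turn pheromone levels into a behavior selection probability is roulette-wheel selection, where an agent picks a project with probability proportional to its accumulated pheromone. The sketch below is a generic illustration of that idea, not the paper's exact modified Grassé model:

```python
import random

def choose_task(pheromone, rng=random.Random(42)):
    """Roulette-wheel selection: pick a project with probability
    proportional to its accumulated digital pheromone (e.g. forum posts
    and code commits in the simulation described above)."""
    total = sum(pheromone.values())
    r = rng.uniform(0, total)
    cum = 0.0
    for task, tau in pheromone.items():
        cum += tau
        if r <= cum:
            return task
    return task  # guard against floating-point edge cases

pheromone = {"projA": 5.0, "projB": 1.0, "projC": 4.0}
picks = [choose_task(pheromone) for _ in range(1000)]
print(picks.count("projA") / 1000)  # roughly 0.5: projA holds half the pheromone
```

    Positive feedback emerges when contributions deposit more pheromone on the chosen project, making it more attractive to subsequent agents.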

  4. Business intelligence tools for radiology: creating a prototype model using open-source tools.

    PubMed

    Prevedello, Luciano M; Andriole, Katherine P; Hanson, Richard; Kelly, Pauline; Khorasani, Ramin

    2010-04-01

    Digital radiology departments could benefit from the ability to integrate and visualize data (e.g. information reflecting complex workflow states) from all of their imaging and information management systems in one composite presentation view. Leveraging data warehousing tools developed in the business world may be one way to achieve this capability. Collectively, the practice of managing the information available in such a data repository is known as Business Intelligence (BI). This paper describes the concepts used in Business Intelligence, their importance to modern radiology, and the steps used in the creation of a prototype model of a data warehouse for BI using open-source tools.

  5. Digital Image Correlation from Commercial to FOS Software: a Mature Technique for Full-Field Displacement Measurements

    NASA Astrophysics Data System (ADS)

    Belloni, V.; Ravanelli, R.; Nascetti, A.; Di Rita, M.; Mattei, D.; Crespi, M.

    2018-05-01

    In the last few decades, there has been growing interest in non-contact methods for full-field displacement and strain measurement. Among such techniques, Digital Image Correlation (DIC) has received particular attention thanks to its ability to provide this information by comparing digital images of a sample surface before and after deformation. The method is now commonly adopted in the fields of civil, mechanical and aerospace engineering, and various companies and research groups have implemented 2D and 3D DIC software. In this work a review of the status of DIC software is first given. Moreover, a free and open source 2D DIC software package is presented, named py2DIC and developed in Python at the Geodesy and Geomatics Division of DICEA of the University of Rome "La Sapienza"; its potential was evaluated by processing images captured during tensile tests performed in the Structural Engineering Lab of the University of Rome "La Sapienza" and comparing the results to those obtained using the commercial software Vic-2D developed by Correlated Solutions Inc., USA. The agreement of these results at the one-hundredth-of-a-millimetre level demonstrates the possibility of using this open source software as a valuable 2D DIC tool to measure full-field displacements on the investigated sample surface.
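    The core operation of 2D DIC, locating a reference image subset inside the deformed image, can be illustrated with a minimal zero-normalized cross-correlation (ZNCC) search at integer-pixel resolution. This generic sketch is not py2DIC's implementation; real DIC software adds subpixel interpolation and subset shape functions:

```python
import numpy as np

def match_subset(ref, deformed, top_left, size, search=5):
    """Find the integer-pixel displacement of a reference subset inside the
    deformed image by maximizing the ZNCC score over a search window."""
    r, c = top_left
    tpl = ref[r:r + size, c:c + size].astype(float)
    tpl -= tpl.mean()
    best, best_dx = -np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            win = deformed[r + dr:r + dr + size, c + dc:c + dc + size].astype(float)
            if win.shape != tpl.shape:
                continue  # window fell off the image edge
            win = win - win.mean()
            denom = np.sqrt((tpl ** 2).sum() * (win ** 2).sum())
            if denom == 0:
                continue
            score = (tpl * win).sum() / denom
            if score > best:
                best, best_dx = score, (dr, dc)
    return best_dx

# Synthetic check: rigidly shift a random speckle image and recover the shift.
rng = np.random.default_rng(0)
ref = rng.random((40, 40))
deformed = np.roll(ref, shift=(2, 3), axis=(0, 1))  # 2 px down, 3 px right
print(match_subset(ref, deformed, top_left=(10, 10), size=9))  # (2, 3)
```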

  6. The role of open-source software in innovation and standardization in radiology.

    PubMed

    Erickson, Bradley J; Langer, Steve; Nagy, Paul

    2005-11-01

    The use of open-source software (OSS), in which developers release the source code to applications they have developed, is popular in the software industry. This is done to allow others to modify and improve software (which may or may not be shared back to the community) and to allow others to learn from the software. Radiology was an early participant in this model, supporting OSS that implemented the ACR-National Electrical Manufacturers Association (now Digital Imaging and Communications in Medicine) standard for medical image communications. In radiology and in other fields, OSS has promoted innovation and the adoption of standards. Popular OSS is of high quality because access to source code allows many people to identify and resolve errors. Open-source software is analogous to the peer-review scientific process: one must be able to see and reproduce results to understand and promote what is shared. The authors emphasize that support for OSS need not threaten vendors; most vendors embrace and benefit from standards. Open-source development does not replace vendors but more clearly defines their roles, typically focusing on areas in which proprietary differentiators benefit customers and on professional services such as implementation planning and service. Continued support for OSS is essential for the success of our field.

  7. Learning Analytics Platform, towards an Open Scalable Streaming Solution for Education

    ERIC Educational Resources Information Center

    Lewkow, Nicholas; Zimmerman, Neil; Riedesel, Mark; Essa, Alfred

    2015-01-01

    Next generation digital learning environments require delivering "just-in-time feedback" to learners and those who support them. Unlike traditional business intelligence environments, streaming data requires resilient infrastructure that can move data at scale from heterogeneous data sources, process the data quickly for use across…

  8. Playing, Debugging, Learning: A Proposal between Game and Instructional Designs via Extended Prototyping

    ERIC Educational Resources Information Center

    Gandolfi, Enrico

    2018-01-01

    This article investigates the phenomenon of open and participative development (e.g. beta testing, Kickstarter projects)--i.e. extended prototyping--in digital entertainment as a potential source of insights for instructional interventions. Despite the increasing popularity of this practice and the potential implications for educators and…

  9. Photogrammetry-Based Head Digitization for Rapid and Accurate Localization of EEG Electrodes and MEG Fiducial Markers Using a Single Digital SLR Camera.

    PubMed

    Clausner, Tommy; Dalal, Sarang S; Crespo-García, Maité

    2017-01-01

    The performance of EEG source reconstruction has benefited from the increasing use of advanced head modeling techniques that take advantage of MRI together with the precise positions of the recording electrodes. The prevailing technique for registering EEG electrode coordinates involves electromagnetic digitization. However, the procedure adds several minutes to experiment preparation and typical digitizers may not be accurate enough for optimal source reconstruction performance (Dalal et al., 2014). Here, we present a rapid, accurate, and cost-effective alternative method to register EEG electrode positions, using a single digital SLR camera, photogrammetry software, and computer vision techniques implemented in our open-source toolbox, janus3D . Our approach uses photogrammetry to construct 3D models from multiple photographs of the participant's head wearing the EEG electrode cap. Electrodes are detected automatically or semi-automatically using a template. The rigid facial features from these photo-based models are then surface-matched to MRI-based head reconstructions to facilitate coregistration to MRI space. This method yields a final electrode coregistration error of 0.8 mm, while a standard technique using an electromagnetic digitizer yielded an error of 6.1 mm. The technique furthermore reduces preparation time, and could be extended to a multi-camera array, which would make the procedure virtually instantaneous. In addition to EEG, the technique could likewise capture the position of the fiducial markers used in magnetoencephalography systems to register head position.
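    The surface-matching step, aligning rigid facial features of the photo-based model to the MRI-based head reconstruction, reduces to least-squares rigid registration once correspondences are known. The sketch below shows the standard Kabsch algorithm on synthetic point correspondences; it is a simplified illustration, not the janus3D implementation:

```python
import numpy as np

def rigid_align(src, dst):
    """Kabsch least-squares rigid registration: find rotation R and
    translation t such that R @ src_i + t best matches dst_i."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# Synthetic check: rotate + translate a point cloud, then recover the transform.
rng = np.random.default_rng(1)
pts = rng.random((20, 3))
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
moved = pts @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_align(pts, moved)
print(np.allclose(R, R_true), np.round(t, 3))  # expect True and t near [1, -2, 0.5]
```

    In practice the correspondences themselves come from an iterative surface-matching step (e.g. ICP), which calls a rigid solver like this at each iteration.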

  10. Photogrammetry-Based Head Digitization for Rapid and Accurate Localization of EEG Electrodes and MEG Fiducial Markers Using a Single Digital SLR Camera

    PubMed Central

    Clausner, Tommy; Dalal, Sarang S.; Crespo-García, Maité

    2017-01-01

    The performance of EEG source reconstruction has benefited from the increasing use of advanced head modeling techniques that take advantage of MRI together with the precise positions of the recording electrodes. The prevailing technique for registering EEG electrode coordinates involves electromagnetic digitization. However, the procedure adds several minutes to experiment preparation and typical digitizers may not be accurate enough for optimal source reconstruction performance (Dalal et al., 2014). Here, we present a rapid, accurate, and cost-effective alternative method to register EEG electrode positions, using a single digital SLR camera, photogrammetry software, and computer vision techniques implemented in our open-source toolbox, janus3D. Our approach uses photogrammetry to construct 3D models from multiple photographs of the participant's head wearing the EEG electrode cap. Electrodes are detected automatically or semi-automatically using a template. The rigid facial features from these photo-based models are then surface-matched to MRI-based head reconstructions to facilitate coregistration to MRI space. This method yields a final electrode coregistration error of 0.8 mm, while a standard technique using an electromagnetic digitizer yielded an error of 6.1 mm. The technique furthermore reduces preparation time, and could be extended to a multi-camera array, which would make the procedure virtually instantaneous. In addition to EEG, the technique could likewise capture the position of the fiducial markers used in magnetoencephalography systems to register head position. PMID:28559791

  11. Multinational Experiment 7: Access to the Global Commons. Objective 3.3 Lexicon and Abbreviations. Version 1.1

    DTIC Science & Technology

    2012-05-28

    operation in Europe OSINT Open Source Intelligence PDA Personal Digital Assistant SME Subject Matter Expert SWGCA Special Working Group on the...illegal and covert activity of exploiting vulnerabilities and collecting protected information or intelligence in cyberspace (MNE 7 Outcome 3 Working

  12. Digital synchronization and communication techniques

    NASA Technical Reports Server (NTRS)

    Lindsey, William C.

    1992-01-01

    Information on digital synchronization and communication techniques is given in viewgraph form. Topics covered include phase shift keying, modems, characteristics of open loop digital synchronizers, an open loop phase and frequency estimator, and a digital receiver structure using an open loop estimator in a decision directed architecture.

  13. Open-Source GIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vatsavai, Raju; Burk, Thomas E; Lime, Steve

    2012-01-01

    The components making up an Open Source GIS are explained in this chapter. A map server (Sect. 30.1) can broadly be defined as a software platform for dynamically generating spatially referenced digital map products. The University of Minnesota MapServer (UMN MapServer) is one such system; its basic features are visualization, overlay, and query. Section 30.2 names and explains many of the geospatial open source libraries, such as GDAL and OGR. The other libraries are FDO, JTS, GEOS, JCS, MetaCRS, and GPSBabel. The application examples include derived GIS software and data format conversions. Quantum GIS, its origin and its applications are explained in detail in Sect. 30.3. Its features include a rich GUI, attribute tables, vector symbols, labeling, editing functions, projections, georeferencing, GPS support, analysis, and Web Map Server functionality. Future developments will address mobile applications, 3-D, and multithreading. The origins of PostgreSQL are outlined and PostGIS is discussed in detail in Sect. 30.4; it extends PostgreSQL by implementing the Simple Feature standard. Section 30.5 details the most important open source licenses, such as the GPL, the LGPL, the MIT License, and the BSD License, as well as the role of the Creative Commons.

  14. DSPSR: Digital Signal Processing Software for Pulsar Astronomy

    NASA Astrophysics Data System (ADS)

    van Straten, W.; Bailes, M.

    2010-10-01

    DSPSR, written primarily in C++, is an open-source, object-oriented, digital signal processing software library and application suite for use in radio pulsar astronomy. The library implements an extensive range of modular algorithms for use in coherent dedispersion, filterbank formation, pulse folding, and other tasks. The software is installed and compiled using the standard GNU configure and make system, and is able to read astronomical data in 18 different file formats, including FITS, S2, CPSR, CPSR2, PuMa, PuMa2, WAPP, ASP, and Mark5.
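
    Coherent dedispersion, the central algorithm in the list above, multiplies the Fourier transform of the raw voltage by the inverse of the interstellar medium's transfer function. A compact NumPy sketch (the dispersion-constant value and sign convention are assumptions of this illustration; DSPSR itself handles channelization, overlap-save processing, and much more):

```python
import numpy as np

D_CONST = 4.148808e3  # assumed dispersion constant, s MHz^2 per (pc cm^-3)

def ism_phase(n, fs, f0, dm):
    """Phase (radians) accumulated by the ISM across the band, for complex
    baseband samples centred on f0 (MHz) with bandwidth fs (MHz)."""
    f = np.fft.fftfreq(n, d=1.0 / fs)                 # offset from f0, MHz
    return 2e6 * np.pi * D_CONST * dm * f ** 2 / (f0 ** 2 * (f0 + f))

def dedisperse(voltage, fs, f0, dm):
    """Coherent dedispersion: undo the ISM phase in the Fourier domain."""
    phase = ism_phase(voltage.size, fs, f0, dm)
    return np.fft.ifft(np.fft.fft(voltage) * np.exp(1j * phase))
```

    Dispersing an impulse with the conjugate phase and then calling `dedisperse` recovers it exactly, which is the self-consistency property the real pipeline relies on.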

  15. Geology of Point Reyes National Seashore and vicinity, California: a digital database

    USGS Publications Warehouse

    Clark, Joseph C.; Brabb, Earl E.

    1997-01-01

    This Open-File report is a digital geologic map database. This pamphlet serves to introduce and describe the digital data. There is no paper map included in the Open-File report. The report does include, however, a PostScript plot file containing an image of the geologic map sheet with explanation, as well as the accompanying text describing the geology of the area. For those interested in a paper plot of information contained in the database or in obtaining the PostScript plot files, please see the section entitled 'For Those Who Aren't Familiar With Digital Geologic Map Databases' below. This digital map database, compiled from previously published and unpublished data and new mapping by the authors, represents the general distribution of surficial deposits and rock units in Point Reyes and surrounding areas. Together with the accompanying text file (pr-geo.txt or pr-geo.ps), it provides current information on the stratigraphy and structural geology of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:48,000 or smaller.

  16. An informatics model for guiding assembly of telemicrobiology workstations for malaria collaborative diagnostics using commodity products and open-source software.

    PubMed

    Suhanic, West; Crandall, Ian; Pennefather, Peter

    2009-07-17

    Deficits in clinical microbiology infrastructure exacerbate global infectious disease burdens. This paper examines how commodity computation, communication, and measurement products combined with open-source analysis and communication applications can be incorporated into laboratory medicine microbiology protocols. Those commodity components are all now sourceable globally. An informatics model is presented for guiding the use of low-cost commodity components and free software in the assembly of clinically useful and usable telemicrobiology workstations. The model incorporates two general principles: 1) collaborative diagnostics, where free and open communication and networking applications are used to link distributed collaborators for reciprocal assistance in organizing and interpreting digital diagnostic data; and 2) commodity engineering, which leverages globally available consumer electronics and open-source informatics applications, to build generic open systems that measure needed information in ways substantially equivalent to more complex proprietary systems. Routine microscopic examination of Giemsa and fluorescently stained blood smears for diagnosing malaria is used as an example to validate the model. The model is used as a constraint-based guide for the design, assembly, and testing of a functioning, open, and commoditized telemicroscopy system that supports distributed acquisition, exploration, analysis, interpretation, and reporting of digital microscopy images of stained malarial blood smears while also supporting remote diagnostic tracking, quality assessment and diagnostic process development. The open telemicroscopy workstation design and use-process described here can address clinical microbiology infrastructure deficits in an economically sound and sustainable manner. It can boost capacity to deal with comprehensive measurement of disease and care outcomes in individuals and groups in a distributed and collaborative fashion. 
The workstation enables local control over the creation and use of diagnostic data, while allowing for remote collaborative support of diagnostic data interpretation and tracking. It can enable global pooling of malaria disease information and the development of open, participatory, and adaptable laboratory medicine practices. The informatic model highlights how the larger issue of access to generic commoditized measurement, information processing, and communication technology in both high- and low-income countries can enable diagnostic services that are much less expensive, but substantially equivalent to those currently in use in high-income countries.

  17. A computational- And storage-cloud for integration of biodiversity collections

    USGS Publications Warehouse

    Matsunaga, A.; Thompson, A.; Figueiredo, R. J.; Germain-Aubrey, C. C.; Collins, M.; Beeman, R. S.; Macfadden, B. J.; Riccardi, G.; Soltis, P. S.; Page, L. M.; Fortes, J. A. B.

    2013-01-01

    A core mission of the Integrated Digitized Biocollections (iDigBio) project is the building and deployment of a cloud computing environment customized to support the digitization workflow and integration of data from all U.S. nonfederal biocollections. iDigBio chose to use cloud computing technologies to deliver a cyberinfrastructure that is flexible, agile, resilient, and scalable to meet the needs of the biodiversity community. In this context, this paper describes the integration of open source cloud middleware, applications, and third party services using standard formats, protocols, and services. In addition, this paper demonstrates the value of the digitized information from collections in a broader scenario involving multiple disciplines.

  18. OpenPET: A Flexible Electronics System for Radiotracer Imaging

    NASA Astrophysics Data System (ADS)

    Moses, W. W.; Buckley, S.; Vu, C.; Peng, Q.; Pavlov, N.; Choong, W.-S.; Wu, J.; Jackson, C.

    2010-10-01

    We present the design for OpenPET, an electronics readout system designed for prototype radiotracer imaging instruments. The critical requirements are that it has sufficient performance, channel count, channel density, and power consumption to service a complete camera, and yet be simple, flexible, and customizable enough to be used with almost any detector or camera design. An important feature of this system is that each analog input is processed independently. Each input can be configured to accept signals of either polarity as well as either differential or ground referenced signals. Each signal is digitized by a continuously sampled ADC, which is processed by an FPGA to extract pulse height information. A leading edge discriminator creates a timing edge that is “time stamped” by a TDC implemented inside the FPGA. This digital information from each channel is sent to an FPGA that services 16 analog channels, and information from multiple channels is processed by this FPGA to perform logic for crystal lookup, DOI calculation, calibration, etc. As all of this processing is controlled by firmware and software, it can be modified/customized easily. The system is open source, meaning that all technical data (specifications, schematics and board layout files, source code, and instructions) will be publicly available.
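
    The per-channel processing described, pulse height from the continuously sampled waveform plus a time-stamped leading-edge threshold crossing, can be sketched in software (in OpenPET this runs in an FPGA with a TDC; the linear sub-sample interpolation here is a simplification for illustration):

```python
import numpy as np

def height_and_timestamp(samples, threshold, fs):
    """Return (pulse_height, timestamp) for one digitized pulse, or None if the
    leading-edge discriminator never fires. `samples` is a baseline-first array
    of ADC values, `fs` the sample rate."""
    above = samples >= threshold
    if not above.any():
        return None
    i = int(np.argmax(above))                 # first sample at/above threshold
    # linear interpolation between the two samples bracketing the crossing
    frac = (threshold - samples[i - 1]) / (samples[i] - samples[i - 1])
    return float(samples.max()), (i - 1 + frac) / fs
```

    For a ramp-like pulse the crossing lands between samples, which is exactly why hardware TDCs report finer-than-sample-period timestamps.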

  19. EO/IR scene generation open source initiative for real-time hardware-in-the-loop and all-digital simulation

    NASA Astrophysics Data System (ADS)

    Morris, Joseph W.; Lowry, Mac; Boren, Brett; Towers, James B.; Trimble, Darian E.; Bunfield, Dennis H.

    2011-06-01

    The US Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) and the Redstone Test Center (RTC) has formed the Scene Generation Development Center (SGDC) to support the Department of Defense (DoD) open source EO/IR Scene Generation initiative for real-time hardware-in-the-loop and all-digital simulation. Various branches of the DoD have invested significant resources in the development of advanced scene and target signature generation codes. The SGDC goal is to maintain unlimited government rights and controlled access to government open source scene generation and signature codes. In addition, the SGDC provides development support to a multi-service community of test and evaluation (T&E) users, developers, and integrators in a collaborative environment. The SGDC has leveraged the DoD Defense Information Systems Agency (DISA) ProjectForge (https://Project.Forge.mil) which provides a collaborative development and distribution environment for the DoD community. The SGDC will develop and maintain several codes for tactical and strategic simulation, such as the Joint Signature Image Generator (JSIG), the Multi-spectral Advanced Volumetric Real-time Imaging Compositor (MAVRIC), and Office of the Secretary of Defense (OSD) Test and Evaluation Science and Technology (T&E/S&T) thermal modeling and atmospherics packages, such as EOView, CHARM, and STAR. Other utility packages included are the ContinuumCore for real-time messaging and data management and IGStudio for run-time visualization and scenario generation.

  20. Developing Digital Competences Using an Educational Software. A Pedagogical Research

    ERIC Educational Resources Information Center

    Magdas, Ioana; Bontea, Timea

    2011-01-01

    Many teachers and people in educational institutions consider it necessary to prepare children for living in a computerized society. The Internet offers an incredible number of opportunities for teachers, and the Web's offering of open-source e-learning platforms has grown impressively. In this article, we present an educational software for…

  1. QSAR DataBank - an approach for the digital organization and archiving of QSAR model information

    PubMed Central

    2014-01-01

    Background Research efforts in the field of descriptive and predictive Quantitative Structure-Activity Relationships or Quantitative Structure-Property Relationships produce around one thousand scientific publications annually. All the materials and results are mainly communicated using printed media. The printed media in their present form have obvious limitations when it comes to effectively representing mathematical models, including complex and non-linear ones, and large bodies of associated numerical chemical data. They do not support secondary information extraction or reuse efforts, while in silico studies pose additional requirements for accessibility, transparency and reproducibility of the research. This gap can and should be bridged by introducing domain-specific digital data exchange standards and tools. The current publication presents a formal specification of the quantitative structure-activity relationship data organization and archival format called the QSAR DataBank (QsarDB for shorter, or QDB for shortest). Results The article describes the QsarDB data schema, which formalizes QSAR concepts (objects and relationships between them), and the QsarDB data format, which formalizes their presentation for computer systems. The utility and benefits of QsarDB have been thoroughly tested by solving everyday QSAR and predictive modeling problems, with examples in the field of predictive toxicology, and can be applied to a wide variety of other endpoints. The work is accompanied by an open source reference implementation and tools. Conclusions The proposed open data, open source, and open standards design is open to public and proprietary extensions on many levels. Selected use cases exemplify the benefits of the proposed QsarDB data format. General ideas for future development are discussed. PMID:24910716

  2. Open Biomedical Engineering education in Africa.

    PubMed

    Ahluwalia, Arti; Atwine, Daniel; De Maria, Carmelo; Ibingira, Charles; Kipkorir, Emmauel; Kiros, Fasil; Madete, June; Mazzei, Daniele; Molyneux, Elisabeth; Moonga, Kando; Moshi, Mainen; Nzomo, Martin; Oduol, Vitalice; Okuonzi, John

    2015-08-01

    Despite the virtual revolution, the mainstream academic community in most countries remains largely ignorant of the potential of web-based teaching resources and of the expansion of open source software, hardware and rapid prototyping. In the context of Biomedical Engineering (BME), where human safety and wellbeing is paramount, a high level of supervision and quality control is required before open source concepts can be embraced by universities and integrated into the curriculum. In the meantime, students, more than their teachers, have become attuned to continuous streams of digital information, and teaching methods need to adapt rapidly by giving them the skills to filter meaningful information and by supporting collaboration and co-construction of knowledge using open, cloud and crowd based technology. In this paper we present our experience in bringing these concepts to university education in Africa, as a way of enabling rapid development and self-sufficiency in health care. We describe the three summer schools held in sub-Saharan Africa where both students and teachers embraced the philosophy of open BME education with enthusiasm, and discuss the advantages and disadvantages of opening education in this way in the developing and developed world.

  3. Low-cost, portable open-source gas monitoring device based on chemosensory technology

    NASA Astrophysics Data System (ADS)

    Gotor, Raúl; Gaviña, Pablo; Costero, Ana M.

    2015-08-01

    We report herein the construction of an electronic device to perform real-time digitalization of the color state of the optical chemosensors used in the detection of dangerous gases. To construct the device, we used open-source modular electronics, such as Arduino and Sparkfun components, as well as free and open-source software (FOSS). The basic operating principle of the device is continuous color measurement of a chemosensor-doped sensing film whose color changes in the presence of a specific gas. The sensing film can be prepared using any of the widely available chemosensors for the desired gas. Color measurements are taken by two TCS230 color sensor ICs and reported to the microcontroller; the results are displayed on an LCD and pushed through a USB serial port. Using a cyanide optical chemosensor, we demonstrated the operation of the device as an HCN gas detector at low concentrations.
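
    The device's core loop, continuously comparing the sensing film's colour against a clean-air baseline, can be sketched as a simple distance threshold (the RGB values and threshold below are invented for illustration; the real device reads two TCS230 ICs and its thresholds depend on the specific chemosensor's colour change):

```python
def color_alarm(rgb, baseline, threshold):
    """Flag a gas event when the film's RGB reading drifts farther than
    `threshold` (Euclidean distance) from its baseline colour."""
    d2 = sum((a - b) ** 2 for a, b in zip(rgb, baseline))
    return d2 ** 0.5 >= threshold
```

    A pale film turning strongly red, for example, trips the alarm, while sensor noise of a few counts does not.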

  4. A new, open-source, multi-modality digital breast phantom

    NASA Astrophysics Data System (ADS)

    Graff, Christian G.

    2016-03-01

    An anthropomorphic digital breast phantom has been developed with the goal of generating random voxelized breast models that capture the anatomic variability observed in vivo. This is a new phantom and is not based on existing digital breast phantoms or segmentation of patient images. It has been designed at the outset to be modality agnostic (i.e., suitable for use in modeling x-ray based imaging systems, magnetic resonance imaging, and potentially other imaging systems) and open source so that users may freely modify the phantom to suit a particular study. In this work we describe the modeling techniques that have been developed, the capabilities and novel features of this phantom, and study simulated images produced from it. Starting from a base quadric, a series of deformations are performed to create a breast with a particular volume and shape. Initial glandular compartments are generated using a Voronoi technique and a ductal tree structure with terminal duct lobular units is grown from the nipple into each compartment. An additional step involving the creation of fat and glandular lobules using a Perlin noise function is performed to create more realistic glandular/fat tissue interfaces and generate a Cooper's ligament network. A vascular tree is grown from the chest muscle into the breast tissue. Breast compression is performed using a neo-Hookean elasticity model. We show simulated mammographic and T1-weighted MRI images and study properties of these images.
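
    The compartment-generation step can be illustrated with the basic Voronoi operation it relies on: label every voxel with the index of its nearest seed point (a sketch only; the phantom's actual implementation adds ductal growth, Perlin-noise lobules, and the other steps described above):

```python
import numpy as np

def voronoi_compartments(shape, seeds):
    """Assign each voxel of a grid of the given shape to its nearest seed,
    producing Voronoi-cell labels (brute-force, fine for small grids)."""
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"),
                    axis=-1)
    d2 = ((grid[..., None, :] - np.asarray(seeds)) ** 2).sum(axis=-1)
    return d2.argmin(axis=-1)
```

    Each resulting label region is a convex cell around its seed, which is why a subsequent noise step is needed to roughen the glandular/fat interfaces.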

  5. An automated, open-source pipeline for mass production of digital elevation models (DEMs) from very-high-resolution commercial stereo satellite imagery

    NASA Astrophysics Data System (ADS)

    Shean, David E.; Alexandrov, Oleg; Moratto, Zachary M.; Smith, Benjamin E.; Joughin, Ian R.; Porter, Claire; Morin, Paul

    2016-06-01

    We adapted the automated, open source NASA Ames Stereo Pipeline (ASP) to generate digital elevation models (DEMs) and orthoimages from very-high-resolution (VHR) commercial imagery of the Earth. These modifications include support for rigorous and rational polynomial coefficient (RPC) sensor models, sensor geometry correction, bundle adjustment, point cloud co-registration, and significant improvements to the ASP code base. We outline a processing workflow for ˜0.5 m ground sample distance (GSD) DigitalGlobe WorldView-1 and WorldView-2 along-track stereo image data, with an overview of ASP capabilities, an evaluation of ASP correlator options, benchmark test results, and two case studies of DEM accuracy. Output DEM products are posted at ˜2 m with direct geolocation accuracy of <5.0 m CE90/LE90. An automated iterative closest-point (ICP) co-registration tool reduces absolute vertical and horizontal error to <0.5 m where appropriate ground-control data are available, with observed standard deviation of ˜0.1-0.5 m for overlapping, co-registered DEMs (n = 14, 17). While ASP can be used to process individual stereo pairs on a local workstation, the methods presented here were developed for large-scale batch processing in a high-performance computing environment. We are leveraging these resources to produce dense time series and regional mosaics for the Earth's polar regions.
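
    The co-registration step can be shown in miniature: the sketch below removes only the vertical bias between a DEM and reference elevations over user-supplied stable pixels, a deliberately simplified stand-in for ASP's iterative-closest-point tool, which solves for a full 3-D transform:

```python
import numpy as np

def remove_vertical_bias(dem, ref, stable):
    """Estimate the vertical offset between `dem` and `ref` over `stable`
    (boolean mask, e.g. exposed bedrock) with a robust median, remove it,
    and return (corrected_dem, offset)."""
    dz = float(np.nanmedian((dem - ref)[stable]))
    return dem - dz, dz
```

    The median keeps the estimate robust to outliers in the control surfaces, in the same spirit as the error reduction to <0.5 m reported above.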

  6. Automatic landslide detection from LiDAR DTM derivatives by geographic-object-based image analysis based on open-source software

    NASA Astrophysics Data System (ADS)

    Knevels, Raphael; Leopold, Philip; Petschko, Helene

    2017-04-01

    With high-resolution airborne Light Detection and Ranging (LiDAR) data more commonly available, many studies have been performed to exploit the detailed information on the Earth's surface and to analyse its limitations. Specifically in the field of natural hazards, digital terrain models (DTM) have been used to map hazardous processes such as landslides, mainly by visual interpretation of LiDAR DTM derivatives. However, new approaches strive towards automatic detection of landslides to speed up the generation of landslide inventories. These studies usually use a combination of optical imagery and terrain data, and are designed in commercial software packages such as ESRI ArcGIS, Definiens eCognition, or MathWorks MATLAB. The objective of this study was to investigate the potential of open-source software for automatic landslide detection based only on high-resolution LiDAR DTM derivatives in a study area within the federal state of Burgenland, Austria. The study area is very prone to landslides, which have been mapped with different methodologies in recent years. The free development environment R was used to integrate open-source geographic information system (GIS) software such as SAGA (System for Automated Geoscientific Analyses), GRASS (Geographic Resources Analysis Support System), and TauDEM (Terrain Analysis Using Digital Elevation Models). The implemented geographic-object-based image analysis (GEOBIA) consisted of (1) derivation of land surface parameters such as slope, surface roughness, curvature, or flow direction; (2) finding the optimal scale parameter by the use of an objective function; (3) multi-scale segmentation; (4) classification of landslide parts (main scarp, body, flanks) by k-means thresholding; (5) assessment of the classification performance using a pre-existing landslide inventory; and (6) post-processing for further use in landslide inventories. The results of the developed open-source approach demonstrated good success rates in objectively detecting landslides in high-resolution topography data by GEOBIA.
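
    The k-means thresholding used to classify landslide parts can be illustrated with a tiny 1-D k-means over a terrain derivative such as slope (an illustrative sketch, not the study's R code; it assumes well-separated clusters so none becomes empty):

```python
import numpy as np

def kmeans_1d(values, k, iters=20, seed=0):
    """Cluster 1-D values (e.g. slope in degrees) into k classes by
    alternating nearest-centre assignment and centre update."""
    values = np.asarray(values, dtype=float)
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False)  # seed from data
    for _ in range(iters):
        labels = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        centers = np.array([values[labels == j].mean() for j in range(k)])
    return labels, centers
```

    On slope values this separates, say, flat terrain from steep scarp candidates; the study applies the same idea per landslide part.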

  7. Smartphone based scalable reverse engineering by digital image correlation

    NASA Astrophysics Data System (ADS)

    Vidvans, Amey; Basu, Saurabh

    2018-03-01

    There is a need for scalable, open-source 3D reconstruction systems for reverse engineering, because most commercially available reconstruction systems are capital- and resource-intensive. To address this, a novel reconstruction technique is proposed. The technique involves digital image correlation based characterization of surface speeds, followed by normalization with respect to angular speed during rigid-body rotational motion of the specimen. Proof of concept is demonstrated and validated using simulation and empirical characterization. Towards this, smartphone imaging and inexpensive off-the-shelf components, along with those fabricated additively from poly-lactic acid polymer on a standard 3D printer, are used. Some sources of error in this reconstruction methodology are discussed. It is seen that high curvatures on the surface suppress the accuracy of reconstruction. Reasons behind this are delineated in the nature of the correlation function. The theoretically achievable resolution during smartphone-based 3D reconstruction by digital image correlation is derived.
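
    The normalization step is simple enough to state in code: during rigid rotation a surface point at radius r moves at v = ωr, so dividing each DIC-measured surface speed by the angular speed recovers the local radius (a sketch of the principle only, not the paper's pipeline):

```python
def radii_from_speeds(speeds, omega):
    """Radial profile from DIC surface speeds measured during rigid rotation
    at angular speed `omega` (rad/s): r = v / omega for each tracked point."""
    return [v / omega for v in speeds]
```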

  8. Intellectual Property, Digital Technology and the Developing World

    NASA Astrophysics Data System (ADS)

    Pupillo, Lorenzo Maria

    This chapter provides an overview of how converging ICTs are challenging the traditional off-line copyright doctrine and suggests how developing countries should approach issues such as copyright in the digital world, software (protection, open source, reverse engineering), and database protection. The balance of the chapter is organized into three sections. After the introduction, the second section explains how digital technology is dramatically changing the entertainment industry, what the major challenges to the industry are, and what approaches the economic literature suggests for facing the structural changes that the digital revolution is bringing forward. Starting from the assumption that IPR frameworks need to be customized to countries' development needs, the third section makes recommendations on how developing countries should use copyright to support access to information and to creative industries.

  9. Rise of the Digitized Public Intellectual: Death of the Professor in the Network Neutral Internet Age

    ERIC Educational Resources Information Center

    Lange, Joshua

    2015-01-01

    The centralised discourse claiming ownership of "knowledge" and "higher education" seems to be declining as the decentralising discourse extolling open source software and informal social network communication are emerging: yet the two are complementary when higher education is seen as a commodity. Thus, in the internet age of…

  10. The Tacoma Narrows Bridge Collapse on Film and Video

    ERIC Educational Resources Information Center

    Olson, Don; Hook, Joseph; Doescher, Russell; Wolf, Steven

    2015-01-01

    This month marks the 75th anniversary of the Tacoma Narrows Bridge collapse. During a gale on Nov. 7, 1940, the bridge exhibited remarkable oscillations before collapsing spectacularly (Figs. 1-5). Physicists over the years have spent a great deal of time and energy studying this event. By using open-source analysis tools and digitized footage of…

  11. Processing Uav and LIDAR Point Clouds in Grass GIS

    NASA Astrophysics Data System (ADS)

    Petras, V.; Petrasova, A.; Jeziorska, J.; Mitasova, H.

    2016-06-01

    Today's methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties such as number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure from motion technique (SfM), and a low-cost 3D scanner. To take advantage of the vertical structure of multiple-return lidar point clouds, we demonstrate tools to process them using 3D raster techniques which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques in regard to their performance and the final digital surface model (DSM). Finally, we describe the processing of a point cloud from a low-cost 3D scanner, namely the Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically the Point Data Abstraction Library (PDAL), the Point Cloud Library (PCL), and the OpenKinect libfreenect2 library, to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long-term maintenance and reproducibility by the scientific community as well as by the original authors themselves.
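
    One family of decimation techniques alluded to above can be sketched as simple 2-D binning: keep the first point that falls in each grid cell (a generic illustration, not the specific GRASS GIS implementation):

```python
import numpy as np

def grid_decimate(points, cell):
    """Thin an (n, 3) point array by keeping one point per `cell`-sized
    bin in the x-y plane; z and any extra columns ride along."""
    keys = np.floor(points[:, :2] / cell).astype(np.int64)
    _, keep = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(keep)]
```

    Redundant UAV points within a cell collapse to one representative, at the cost of the sub-cell detail the paper's DSM comparison quantifies.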

  12. The digital code driven autonomous synthesis of ibuprofen automated in a 3D-printer-based robot.

    PubMed

    Kitson, Philip J; Glatzel, Stefan; Cronin, Leroy

    2016-01-01

    An automated synthesis robot was constructed by modifying an open source 3D printing platform. The resulting automated system was used to 3D print reaction vessels (reactionware) of differing internal volumes using polypropylene feedstock via a fused deposition modeling 3D printing approach and subsequently make use of these fabricated vessels to synthesize the nonsteroidal anti-inflammatory drug ibuprofen via a consecutive one-pot three-step approach. The synthesis of ibuprofen could be achieved on different scales simply by adjusting the parameters in the robot control software. The software for controlling the synthesis robot was written in the python programming language and hard-coded for the synthesis of ibuprofen by the method described, opening possibilities for the sharing of validated synthetic 'programs' which can run on similar low cost, user-constructed robotic platforms towards an 'open-source' regime in the area of chemical synthesis.

  13. The use of open data from social media for the creation of 3D georeferenced modeling

    NASA Astrophysics Data System (ADS)

    Themistocleous, Kyriacos

    2016-08-01

    There is a great deal of open source video on the internet posted by users on social media sites. With the release of low-cost unmanned aerial vehicles (UAVs), many hobbyists are uploading videos from different locations, especially in remote areas. Using open source data available on the internet, this study utilized structure from motion (SfM) as a range imaging technique to estimate 3-dimensional landscape features from 2-dimensional image sequences extracted from video, and applied image distortion correction and geo-referencing. This type of documentation may be necessary for cultural heritage sites that are inaccessible or difficult to document, where video from UAVs can be obtained. The resulting 3D models can be viewed in Google Earth and used to create orthoimages, drawings, and digital terrain models for cultural heritage and archaeological purposes in remote or inaccessible areas.

  14. Instant Grainification: Real-Time Grain-Size Analysis from Digital Images in the Field

    NASA Astrophysics Data System (ADS)

    Rubin, D. M.; Chezar, H.

    2007-12-01

    Over the past few years, digital cameras and underwater microscopes have been developed to collect in-situ images of sand-sized bed sediment, and software has been developed to measure grain size from those digital images (Chezar and Rubin, 2004; Rubin, 2004; Rubin et al., 2006). Until now, all image processing and grain-size analysis was done back in the office, where images were uploaded from cameras and processed on desktop computers. Computer hardware has become small and rugged enough to process images in the field, which for the first time allows real-time grain-size analysis of sand-sized bed sediment. We present such a system consisting of a weatherproof tablet computer, open source image-processing software (the autocorrelation code of Rubin, 2004, running under Octave and Cygwin), and a digital camera with macro lens. Chezar, H., and Rubin, D., 2004, Underwater microscope system: U.S. Patent and Trademark Office, patent number 6,680,795, January 20, 2004. Rubin, D.M., 2004, A simple autocorrelation algorithm for determining grain size from digital images of sediment: Journal of Sedimentary Research, v. 74, p. 160-165. Rubin, D.M., Chezar, H., Harney, J.N., Topping, D.J., Melis, T.S., and Sherwood, C.R., 2006, Underwater microscope for measuring spatial and temporal changes in bed-sediment grain size: USGS Open-File Report 2006-1360.
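
    The autocorrelation algorithm of Rubin (2004) rests on the observation that images of coarser sediment decorrelate more slowly with pixel offset. A sketch of the correlation curve it analyses (the published method goes on to calibrate such curves against images of known grain size, which is omitted here):

```python
import numpy as np

def autocorrelation_curve(img, max_offset):
    """Correlation of a standardized image with itself shifted horizontally
    by 0..max_offset pixels; coarse texture keeps the curve high longer."""
    img = (img - img.mean()) / img.std()
    n = img.shape[1]
    return np.array([
        np.mean(img[:, :n - k] * img[:, k:]) for k in range(max_offset + 1)
    ])
```

    A pixel-scale (fine) texture drops to near zero at a one-pixel offset, while a texture with 4-pixel "grains" stays strongly correlated.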

  15. Quaternary Geology and Liquefaction Susceptibility, San Francisco, California 1:100,000 Quadrangle: A Digital Database

    USGS Publications Warehouse

    Knudsen, Keith L.; Noller, Jay S.; Sowers, Janet M.; Lettis, William R.

    1997-01-01

    This Open-File report is a digital geologic map database. This pamphlet serves to introduce and describe the digital data. There are no paper maps included in the Open-File report. The report does include, however, PostScript plot files containing the images of the geologic map sheets with explanations, as well as the accompanying text describing the geology of the area. For those interested in a paper plot of information contained in the database or in obtaining the PostScript plot files, please see the section entitled 'For Those Who Aren't Familiar With Digital Geologic Map Databases' below. This digital map database, compiled from previously unpublished data, and new mapping by the authors, represents the general distribution of surficial deposits in the San Francisco bay region. Together with the accompanying text file (sf_geo.txt or sf_geo.pdf), it provides current information on Quaternary geology and liquefaction susceptibility of the San Francisco, California, 1:100,000 quadrangle. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:100,000 or smaller. The content and character of the database, as well as three methods of obtaining the database, are described below.

  16. Open-Source Automated Mapping Four-Point Probe.

    PubMed

    Chandra, Handy; Allen, Spencer W; Oberloier, Shane W; Bihari, Nupur; Gwamuri, Jephias; Pearce, Joshua M

    2017-01-26

    Scientists have begun using self-replicating rapid prototyper (RepRap) 3-D printers to manufacture open source digital designs of scientific equipment. This approach is refined here to develop a novel instrument capable of performing automated large-area four-point probe measurements. The designs for conversion of a RepRap 3-D printer to a 2-D open source four-point probe (OS4PP) measurement device are detailed for the mechanical and electrical systems. Free and open source software and firmware are developed to operate the tool. The OS4PP was validated against a wide range of discrete resistors and indium tin oxide (ITO) samples of different thicknesses, both pre- and post-annealing, and was then compared to two commercial proprietary systems. Results for discrete resistors from 10 Ω to 1 MΩ show errors of less than 1% for the OS4PP. The 3-D mapping of sheet resistance of ITO samples successfully demonstrated the automated capability to measure non-uniformities in large-area samples. All measured values are within the same order of magnitude of those from the two proprietary measurement systems. In conclusion, the OS4PP system, which costs less than 70% of manual proprietary systems, is comparable electrically while offering automated 100-micron positional accuracy for measuring sheet resistance over larger areas.
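
    Underlying every sheet-resistance map of this kind is the standard thin-sample relation for a collinear four-point probe, Rs = (π/ln 2)·V/I, valid for equal probe spacing on a laterally large, thin sample far from its edges:

```python
import math

def sheet_resistance(voltage, current):
    """Sheet resistance (ohms per square) from the voltage across the inner
    probe pair and the current through the outer pair; the geometric factor
    pi/ln 2 ~= 4.532 assumes equal spacing and negligible edge effects."""
    return (math.pi / math.log(2)) * voltage / current
```

    A reading of 2 mV at 1 mA, for example, corresponds to roughly 9.07 ohms per square; mapping instruments such as the OS4PP evaluate this relation at each probe position.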

  17. Histomorphometric Parameters of the Growth Plate and Trabecular Bone in Wild-Type and Trefoil Factor Family 3 (Tff3)-Deficient Mice Analyzed by Free and Open-Source Image Processing Software.

    PubMed

    Bijelić, Nikola; Belovari, Tatjana; Stolnik, Dunja; Lovrić, Ivana; Baus Lončar, Mirela

    2017-08-01

    Trefoil factor family 3 (Tff3) peptide is present during intrauterine endochondral ossification in mice, and its deficiency affects cancellous bone quality in secondary ossification centers of mouse tibiae. The aim of this study was to quantitatively analyze parameters describing the growth plate and primary ossification centers in tibiae of 1-month-old wild-type and Tff3 knock-out mice (n=5 per genotype) by using free and open-source software. Digital photographs of the growth plates and trabecular bone were processed by open-source computer programs GIMP and FIJI. Histomorphometric parameters were calculated using measurements made with FIJI. Tff3 knock-out mice had significantly smaller trabecular number and significantly larger trabecular separation. Trabecular bone volume, trabecular bone surface, and trabecular thickness showed no significant difference between the two groups. Although such histomorphological differences were found in the cancellous bone structure, no significant differences were found in the epiphyseal plate histomorphology. Tff3 peptide probably has an effect on the formation and quality of the cancellous bone in the primary ossification centers, but not through disrupting the epiphyseal plate morphology. This work emphasizes the benefits of using free and open-source programs for morphological studies in life sciences.
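    The trabecular indices reported (number, separation, thickness, volume, surface) are related by standard histomorphometric conventions. As a hedged sketch, assuming the classical plate model rather than the authors' exact FIJI-based measurement workflow, the derived indices follow from two primary measurements:

```python
def trabecular_parameters(bv_tv, bs_bv):
    """Derived trabecular indices under the classical plate model:
    bv_tv -- bone volume fraction BV/TV (dimensionless, 0..1)
    bs_bv -- specific bone surface BS/BV (1/mm)
    Returns (Tb.Th in mm, Tb.N in 1/mm, Tb.Sp in mm)."""
    tb_th = 2.0 / bs_bv            # trabecular thickness
    tb_n = bv_tv / tb_th           # trabecular number
    tb_sp = (1.0 - bv_tv) / tb_n   # trabecular separation
    return tb_th, tb_n, tb_sp
```

    Under this model, a smaller trabecular number with unchanged thickness necessarily yields a larger separation, consistent with the pattern reported for the knock-out mice.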

  18. Use of the 3D surgical modelling technique with open-source software for mandibular fibula free flap reconstruction and its surgical guides.

    PubMed

    Ganry, L; Hersant, B; Quilichini, J; Leyder, P; Meningaud, J P

    2017-06-01

    Tridimensional (3D) surgical modelling is a necessary step to create 3D-printed surgical tools, and expensive professional software is generally needed. Open-source software is functional, reliable, regularly updated, may be downloaded for free, and can be used to produce 3D models. Few surgical teams have used free solutions for mastering 3D surgical modelling for reconstructive surgery with osseous free flaps. We describe an open-source 3D surgical modelling protocol to perform a fast and nearly free mandibular reconstruction with microvascular fibula free flap and its surgical guides, with no need for engineering support. Four specialised open-source software packages were used in succession to perform our 3D modelling: OsiriX®, MeshLab®, Netfabb® and Blender®. Digital Imaging and Communications in Medicine (DICOM) data on the patient's skull and fibula, obtained with a computerised tomography (CT) scan, were needed. The 3D models of the reconstructed mandible and its surgical guides were created. This new strategy may improve surgical management in oral and craniomaxillofacial surgery. Further clinical studies are needed to demonstrate the feasibility, reproducibility, transfer of know-how and benefits of this technique. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  19. Automated Selection of Hotspots (ASH): enhanced automated segmentation and adaptive step finding for Ki67 hotspot detection in adrenal cortical cancer.

    PubMed

    Lu, Hao; Papathomas, Thomas G; van Zessen, David; Palli, Ivo; de Krijger, Ronald R; van der Spek, Peter J; Dinjens, Winand N M; Stubbs, Andrew P

    2014-11-25

    In the prognosis and therapeutics of adrenal cortical carcinoma (ACC), the selection of the most proliferatively active areas (hotspots) within a slide and objective quantification of the immunohistochemical Ki67 Labelling Index (LI) are of critical importance. In addition to intratumoral heterogeneity in proliferative rate, i.e. levels of Ki67 expression within a given ACC, a lack of uniformity and reproducibility in the method of quantification of Ki67 LI may confound an accurate assessment. We have implemented an open source toolset, Automated Selection of Hotspots (ASH), for automated hotspot detection and quantification of Ki67 LI. ASH utilizes the NanoZoomer Digital Pathology Image (NDPI) splitter to convert the NDPI-format digital slide scanned from the Hamamatsu instrument into a conventional TIFF or JPEG image for the automated segmentation and adaptive step-finding hotspot detection algorithm. Quantitative hotspot ranking is provided by functionality from the open source application ImmunoRatio as part of the ASH protocol. The output is a ranked set of hotspots with concomitant quantitative values based on whole-slide ranking. We have implemented open source automated detection and quantitative ranking of hotspots to support histopathologists in selecting the 'hottest' hotspot areas in adrenocortical carcinoma. To provide the wider community with easy access to ASH, we implemented a Galaxy virtual machine (VM) of ASH which is available from http://bioinformatics.erasmusmc.nl/wiki/Automated_Selection_of_Hotspots . The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/13000_2014_216.
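    ASH's segmentation and adaptive step finding are beyond a short example, but the final ranking stage can be sketched: tile the slide, compute a Ki67 labelling index per tile, and sort hottest-first. Tile names and nucleus counts below are hypothetical.

```python
def rank_hotspots(tiles):
    """Rank slide tiles by Ki67 labelling index (LI).

    `tiles` maps a tile id to (ki67_positive, ki67_negative) nucleus
    counts; LI = positive / (positive + negative).  Returns tile ids
    sorted hottest-first."""
    def li(counts):
        pos, neg = counts
        total = pos + neg
        return pos / total if total else 0.0
    return sorted(tiles, key=lambda t: li(tiles[t]), reverse=True)

# Hypothetical counts per tile: (Ki67-positive, Ki67-negative) nuclei.
tiles = {"A1": (120, 880), "B3": (340, 660), "C2": (55, 945)}
ranking = rank_hotspots(tiles)  # hottest tile first
```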

  20. Inexpensive, Low Power, Open-Source Data Logging in the Field

    NASA Astrophysics Data System (ADS)

    Sandell, C. T.; Wickert, A. D.

    2016-12-01

    Collecting a robust data set of environmental conditions with commercial equipment is often cost prohibitive. I present the ALog, a general-purpose, inexpensive, low-power, open-source data logger that has proven its durability on long-term deployments in the harsh conditions of high-altitude glaciers and humid river deltas. The ALog was developed to fill the need for capable, rugged, easy-to-use, inexpensive, open-source hardware targeted at long-term remote deployment in nearly any environment. Building on the popular Arduino platform, the hardware features a high-precision clock, a full-size SD card slot for high-volume data storage, screw terminals, six analog inputs, two digital inputs, one digital interrupt, 3.3V and 5V power outputs, and SPI and I2C communication capability. The design is focused on extremely low power consumption, allowing the ALog to be deployed for years on a single set of common alkaline batteries. The power efficiency of the ALog eliminates the difficulties associated with generating power in the field, including additional hardware and installation costs, dependence on weather conditions, possible equipment failure, and the transport of bulky or heavy equipment to a remote site. Battery power expands the set of suitable data collection sites (e.g., sites too shaded for photovoltaics) and allows for low-profile installation options (including underground). The ALog has gone through continuous development, with over four years of successful data collection in hydrologic field research. Over this time, software support has been added for a wide range of sensors, such as ultrasonic rangefinders (for water level, snow accumulation, and glacial melt), temperature sensors (air and groundwater), humidity sensors, pyranometers, inclinometers, rain gauges, soil moisture and water potential sensors, resistance-based tools to measure frost heave, and cameras that trigger on events. The software developed for use with the ALog allows simple integration of established commercial sensors, including example implementation code, so users with limited programming knowledge can get up and running with ease. All development files, including design schematics, circuit board layouts, and source code, are open-source to further eliminate barriers to use and to allow community development contributions.

  1. Device and Method of Scintillating Quantum Dots for Radiation Imaging

    NASA Technical Reports Server (NTRS)

    Burke, Eric R. (Inventor); DeHaven, Stanton L. (Inventor); Williams, Phillip A. (Inventor)

    2017-01-01

    A radiation imaging device includes a radiation source and a microstructured detector comprising a material defining a surface that faces the radiation source. The material includes a plurality of discrete cavities having openings in the surface. The detector also includes a plurality of quantum dots disposed in the cavities. The quantum dots are configured to interact with radiation from the radiation source, and to emit visible photons that indicate the presence of radiation. A digital camera and optics may be used to capture images formed by the detector in response to exposure to radiation.

  2. Flutrack.org: Open-source and linked data for epidemiology.

    PubMed

    Chorianopoulos, Konstantinos; Talvis, Karolos

    2016-12-01

    Epidemiology has made advances thanks to the availability of real-time surveillance data and by leveraging the geographic analysis of incidents. Many health information systems visualize the symptoms of influenza-like illness on a digital map, which is suitable for end users but does not afford further processing and analysis. Existing systems have emphasized the collection, analysis, and visualization of surveillance data, but they have neglected a modular and interoperable design that integrates high-resolution geo-location with real-time data. As a remedy, we have built an open-source project and have been operating an open service that detects flu-related symptoms and shares the data in real-time with anyone who wants to build upon this system. An analysis of a small number of precisely geo-located status updates (e.g. Twitter) correlates closely with the Google Flu Trends and the Centers for Disease Control and Prevention flu-positive reports. We suggest that public health information systems should embrace an open-source approach and offer linked data, in order to facilitate the development of an ecosystem of applications and services and to be transparent to the general public interest. © The Author(s) 2015.
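    The reported agreement with reference surveillance series reduces to correlating two aligned time series. A minimal sketch using Pearson's r with hypothetical weekly counts (an illustration, not the Flutrack codebase):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical weekly counts: flu-symptom status updates vs. reference
# flu-positive reports for the same six weeks.
tweets = [12, 30, 85, 140, 90, 40]
reports = [100, 250, 700, 1200, 800, 350]
r = pearson_r(tweets, reports)  # near 1 for strongly co-moving series
```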

  3. Digital phonocardiographic experiments and signal processing in multidisciplinary fields of university education

    NASA Astrophysics Data System (ADS)

    Nagy, Tamás; Vadai, Gergely; Gingl, Zoltán

    2017-09-01

    Modern measurement of physical signals is based on the use of sensors, electronic signal conditioning, analog-to-digital conversion and digital signal processing carried out by dedicated software. The same signal chain is used in many devices such as home appliances, automotive electronics, medical instruments, and smartphones. Teaching the theoretical, experimental, and signal processing background must be an essential part of improving the standard of higher education, and it fits well to the increasingly multidisciplinary nature of physics and engineering too. In this paper, we show how digital phonocardiography can be used in university education as a universal, highly scalable, exciting, and inspiring laboratory practice and as a demonstration at various levels and complexity. We have developed open-source software templates in modern programming languages to support immediate use and to serve as a basis of further modifications using personal computers, tablets, and smartphones.
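    As one example of a signal-processing exercise that fits this laboratory practice (a minimal envelope detector, not the authors' published templates): rectify the digitized microphone signal and smooth it with a moving average so the S1/S2 heart-sound bursts stand out.

```python
def smoothed_envelope(signal, window):
    """Rectify a phonocardiogram and smooth it with a centred moving
    average: a minimal envelope detector for locating the S1 and S2
    heart sounds as local maxima of the envelope."""
    rectified = [abs(s) for s in signal]
    half = window // 2
    env = []
    for i in range(len(rectified)):
        lo, hi = max(0, i - half), min(len(rectified), i + half + 1)
        env.append(sum(rectified[lo:hi]) / (hi - lo))
    return env
```

    Students can then time the interval between consecutive envelope peaks to estimate heart rate, tying the signal chain back to a physically meaningful quantity.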

  4. Open-Source Digital Elevation Model (DEMs) Evaluation with GPS and LiDAR Data

    NASA Astrophysics Data System (ADS)

    Khalid, N. F.; Din, A. H. M.; Omar, K. M.; Khanan, M. F. A.; Omar, A. H.; Hamid, A. I. A.; Pa'suya, M. F.

    2016-09-01

    Advanced Spaceborne Thermal Emission and Reflection Radiometer-Global Digital Elevation Model (ASTER GDEM), Shuttle Radar Topography Mission (SRTM), and Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010) are freely available Digital Elevation Model (DEM) datasets for environmental modeling and studies. The spatial resolution and vertical accuracy of the DEM data source have a great influence on the results, particularly for inundation mapping. Most coastal inundation risk studies use these publicly available DEMs to estimate coastal inundation and the associated damage, especially to human populations, under increments of sea level rise. In this study, each DEM is compared against ground truth data from Global Positioning System (GPS) observations to evaluate its accuracy. SRTM shows better vertical accuracy than ASTER GDEM and GMTED2010, with an RMSE of 6.054 m. SRTM also shows the strongest correlation with the ground truth, with a coefficient of determination of 0.912. For the coastal zone, DEMs based on an airborne light detection and ranging (LiDAR) dataset were used as ground truth data for terrain height. In this case, the LiDAR DEM is compared against the new SRTM DEM after applying a scale factor. The findings show that the accuracy of the new SRTM-derived model can be improved by applying the scale factor, with the RMSE reduced to 0.503 m. Hence, this new model is the most suitable of those evaluated and meets the accuracy requirement for coastal inundation risk assessment using open source data. Assessing the suitability of these datasets is vital for further coastal management studies of areas potentially vulnerable to coastal inundation.
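    The accuracy statistics quoted (RMSE and coefficient of determination) come from point-wise comparison of DEM elevations against ground-truth heights at the same locations; a minimal sketch:

```python
import math

def vertical_rmse(dem_h, truth_h):
    """Root-mean-square error of DEM elevations against ground-truth
    (GPS or LiDAR) heights sampled at the same locations."""
    diffs = [d - t for d, t in zip(dem_h, truth_h)]
    return math.sqrt(sum(e * e for e in diffs) / len(diffs))

def r_squared(dem_h, truth_h):
    """Coefficient of determination of DEM heights against truth."""
    mean_t = sum(truth_h) / len(truth_h)
    ss_res = sum((t - d) ** 2 for d, t in zip(dem_h, truth_h))
    ss_tot = sum((t - mean_t) ** 2 for t in truth_h)
    return 1.0 - ss_res / ss_tot
```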

  5. Inselect: Automating the Digitization of Natural History Collections

    PubMed Central

    Hudson, Lawrence N.; Blagoderov, Vladimir; Heaton, Alice; Holtzhausen, Pieter; Livermore, Laurence; Price, Benjamin W.; van der Walt, Stéfan; Smith, Vincent S.

    2015-01-01

    The world’s natural history collections constitute an enormous evidence base for scientific research on the natural world. To facilitate these studies and improve access to collections, many organisations are embarking on major programmes of digitization. This requires automated approaches to mass-digitization that support rapid imaging of specimens and associated data capture, in order to process the tens of millions of specimens common to most natural history collections. In this paper we present Inselect—a modular, easy-to-use, cross-platform suite of open-source software tools that supports the semi-automated processing of specimen images generated by natural history digitization programmes. The software is made up of a Windows, Mac OS X, and Linux desktop application, together with command-line tools that are designed for unattended operation on batches of images. Blending image visualisation algorithms that automatically recognise specimens together with workflows to support post-processing tasks such as barcode reading, label transcription and metadata capture, Inselect fills a critical gap to increase the rate of specimen digitization. PMID:26599208

  7. Three-Dimensional Printing of X-Ray Computed Tomography Datasets with Multiple Materials Using Open-Source Data Processing

    ERIC Educational Resources Information Center

    Sander, Ian M.; McGoldrick, Matthew T.; Helms, My N.; Betts, Aislinn; van Avermaete, Anthony; Owers, Elizabeth; Doney, Evan; Liepert, Taimi; Niebur, Glen; Liepert, Douglas; Leevy, W. Matthew

    2017-01-01

    Advances in three-dimensional (3D) printing allow for digital files to be turned into a "printed" physical product. For example, complex anatomical models derived from clinical or pre-clinical X-ray computed tomography (CT) data of patients or research specimens can be constructed using various printable materials. Although 3D printing…

  8. Design and Development of an Institutional Repository at the Indian Institute of Technology Kharagpur

    ERIC Educational Resources Information Center

    Sutradhar, B.

    2006-01-01

    Purpose: To describe how an institutional repository (IR) was set up, using open source software, at the Indian Institute of Technology (IIT) in Kharagpur. Members of the IIT can publish their research documents in the IR for online access as well as digital preservation. Material in this IR includes instructional materials, records, data sets,…

  9. Direct Measurement of the Speed of Sound Using a Microphone and a Speaker

    ERIC Educational Resources Information Center

    Gómez-Tejedor, José A.; Castro-Palacio, Juan C.; Monsoriu, Juan A.

    2014-01-01

    We present a simple and accurate experiment to obtain the speed of sound in air using a conventional speaker and a microphone connected to a computer. A free open source digital audio editor and recording computer software application allows determination of the time-of-flight of the wave for different distances, from which the speed of sound is…

  10. The Programmers' Collective: Fostering Participatory Culture by Making Music Videos in a High School Scratch Coding Workshop

    ERIC Educational Resources Information Center

    Fields, Deborah; Vasudevan, Veena; Kafai, Yasmin B.

    2015-01-01

    We highlight ways to support interest-driven creation of digital media in Scratch, a visual-based programming language and community, within a high school programming workshop. We describe a collaborative approach, the programmers' collective, that builds on social models found in do-it-yourself and open source communities, but with scaffolding…

  11. Digital data collection in paleoanthropology.

    PubMed

    Reed, Denné; Barr, W Andrew; Mcpherron, Shannon P; Bobe, René; Geraads, Denis; Wynn, Jonathan G; Alemseged, Zeresenay

    2015-01-01

    Understanding patterns of human evolution across space and time requires synthesizing data collected by independent research teams, and this effort is part of a larger trend to develop cyber infrastructure and e-science initiatives. At present, paleoanthropology cannot easily answer basic questions about the total number of fossils and artifacts that have been discovered, or exactly how those items were collected. In this paper, we examine the methodological challenges to data integration, with the hope that mitigating the technical obstacles will further promote data sharing. At a minimum, data integration efforts must document what data exist and how the data were collected (discovery), after which we can begin standardizing data collection practices with the aim of achieving combined analyses (synthesis). This paper outlines a digital data collection system for paleoanthropology. We review the relevant data management principles for a general audience and supplement this with technical details drawn from over 15 years of paleontological and archeological field experience in Africa and Europe. The system outlined here emphasizes free open-source software (FOSS) solutions that work on multiple computer platforms; it builds on recent advances in open-source geospatial software and mobile computing. © 2015 Wiley Periodicals, Inc.

  12. 3D Printing of CT Dataset: Validation of an Open Source and Consumer-Available Workflow.

    PubMed

    Bortolotto, Chandra; Eshja, Esmeralda; Peroni, Caterina; Orlandi, Matteo A; Bizzotto, Nicola; Poggi, Paolo

    2016-02-01

    The broad availability of cheap three-dimensional (3D) printing equipment has raised the need for a thorough analysis of its effects on clinical accuracy. Our aim is to determine whether the accuracy of the 3D printing process is affected by the use of a low-budget workflow based on open source software and consumer-grade, commercially available 3D printers. A group of test objects was scanned with a 64-slice computed tomography (CT) scanner in order to build their 3D copies. CT datasets were elaborated using a software chain based on three free and open source programs. Objects were printed out with a commercially available 3D printer. Both the 3D copies and the test objects were measured using a digital professional caliper. Overall, the mean absolute difference between the test objects and their 3D copies is 0.23 mm and the mean relative difference amounts to 0.55%. Our results demonstrate that the accuracy of the 3D printing process remains high despite the use of a low-budget workflow.
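    The two accuracy metrics used reduce to simple arithmetic over paired caliper measurements. A sketch with hypothetical dimensions in millimetres:

```python
def print_accuracy(true_dims_mm, printed_dims_mm):
    """Mean absolute difference (mm) and mean relative difference (%)
    between reference object dimensions and their 3D-printed copies,
    measured pairwise (e.g., with a digital caliper)."""
    abs_diffs = [abs(p - t) for t, p in zip(true_dims_mm, printed_dims_mm)]
    rel_diffs = [d / t * 100 for d, t in zip(abs_diffs, true_dims_mm)]
    n = len(abs_diffs)
    return sum(abs_diffs) / n, sum(rel_diffs) / n

# Hypothetical pair of measured features: 10 mm and 50 mm nominal.
mean_abs, mean_rel = print_accuracy([10, 50], [10.2, 49.9])
```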

  13. GeNeDA: An Open-Source Workflow for Design Automation of Gene Regulatory Networks Inspired from Microelectronics.

    PubMed

    Madec, Morgan; Pecheux, François; Gendrault, Yves; Rosati, Elise; Lallement, Christophe; Haiech, Jacques

    2016-10-01

    The topic of this article is the development of an open-source automated design framework for synthetic biology, specifically for the design of artificial gene regulatory networks based on a digital approach. In contrast to other tools, GeNeDA is an open-source online software based on existing tools used in microelectronics that have proven their efficiency over the last 30 years. The complete framework is composed of a computation core directly adapted from an Electronic Design Automation tool, input and output interfaces, a library of elementary parts that can be implemented with gene regulatory networks, and an interface with an electrical circuit simulator. Each of these modules is an extension of microelectronics tools and concepts: ODIN II, ABC, the Verilog language, the SPICE simulator, and SystemC-AMS. GeNeDA is first validated on a benchmark of several combinatorial circuits. The results highlight the importance of the part library. Then, the framework is used for the design of a sequential circuit including a biological state machine.
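    The digital abstraction underlying such a flow treats regulatory parts as logic gates. A common simplification, assumed here purely for illustration (not necessarily GeNeDA's actual part library), models a promoter repressed by two transcription factors as a NOR gate, from which any combinational function can be composed:

```python
def nor(a, b):
    """A promoter repressed by either of two transcription factors acts
    as a NOR gate: expression only when both repressors are absent."""
    return int(not (a or b))

def xor_from_nor(a, b):
    """XOR built entirely from NOR gates (five gates), the kind of
    technology mapping a microelectronics-style synthesis flow performs
    before assigning each gate to a repressor-based part."""
    n1 = nor(a, b)
    n2 = nor(a, n1)
    n3 = nor(b, n1)
    n4 = nor(n2, n3)      # XNOR at this point
    return nor(n4, n4)    # final inversion yields XOR
```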

  14. NeuroPG: open source software for optical pattern generation and data acquisition

    PubMed Central

    Avants, Benjamin W.; Murphy, Daniel B.; Dapello, Joel A.; Robinson, Jacob T.

    2015-01-01

    Patterned illumination using a digital micromirror device (DMD) is a powerful tool for optogenetics. Compared to a scanning laser, DMDs are inexpensive and can easily create complex illumination patterns. Combining these complex spatiotemporal illumination patterns with optogenetics allows DMD-equipped microscopes to probe neural circuits by selectively manipulating the activity of many individual cells or many subcellular regions at the same time. To use DMDs to study neural activity, scientists must develop specialized software to coordinate optical stimulation patterns with the acquisition of electrophysiological and fluorescence data. To meet this growing need we have developed an open source optical pattern generation software for neuroscience—NeuroPG—that combines DMD control, sample visualization, and data acquisition in one application. Built on a MATLAB platform, NeuroPG can also process, analyze, and visualize data. The software is designed specifically for the Mightex Polygon400; however, as an open source package, NeuroPG can be modified to incorporate any data acquisition, imaging, or illumination equipment that is compatible with MATLAB’s Data Acquisition and Image Acquisition toolboxes. PMID:25784873

  15. The Impact and Promise of Open-Source Computational Material for Physics Teaching

    NASA Astrophysics Data System (ADS)

    Christian, Wolfgang

    2017-01-01

    A computer-based modeling approach to teaching must be flexible because students and teachers have different skills and varying levels of preparation. Learning how to run the "software du jour" is not the objective for integrating computational physics material into the curriculum. Learning computational thinking, how to use computation and computer-based visualization to communicate ideas, how to design and build models, and how to use ready-to-run models to foster critical thinking is the objective. Our computational modeling approach to teaching is a research-proven pedagogy that predates computers. It attempts to enhance student achievement through the Modeling Cycle. This approach was pioneered by Robert Karplus and the SCIS Project in the 1960s and 70s and later extended by the Modeling Instruction Program led by Jane Jackson and David Hestenes at Arizona State University. This talk describes a no-cost open-source computational approach aligned with a Modeling Cycle pedagogy. Our tools, curricular material, and ready-to-run examples are freely available from the Open Source Physics Collection hosted on the AAPT-ComPADRE digital library. Examples will be presented.

  16. Influence of Elevation Data Source on 2D Hydraulic Modelling

    NASA Astrophysics Data System (ADS)

    Bakuła, Krzysztof; Stępnik, Mateusz; Kurczyński, Zdzisław

    2016-08-01

    The aim of this paper is to analyse the influence of various elevation data sources on hydraulic modelling in open channels. In the research, digital terrain models from different datasets were evaluated and used in two-dimensional hydraulic models. The following aerial and satellite elevation data were used to create the terrain representation (the digital terrain model): airborne laser scanning, image matching, elevation data collected in the LPIS, EuroDEM, and ASTER GDEM. From the results of five 2D hydrodynamic models with different input elevation data, the maximum depth and flow velocity of water were derived and compared with the results based on the most accurate ALS data. For this analysis, a statistical evaluation of the differences between hydraulic modelling results was prepared. The presented research proved the importance of the quality of elevation data in hydraulic modelling and showed that only ALS and photogrammetric data can be the most reliable elevation data sources for accurate 2D hydraulic modelling.

  17. Surface Model and Tomographic Archive of Fossil Primate and Other Mammal Holotype and Paratype Specimens of the Ditsong National Museum of Natural History, Pretoria, South Africa.

    PubMed

    Adams, Justin W; Olah, Angela; McCurry, Matthew R; Potze, Stephany

    2015-01-01

    Nearly a century of paleontological excavation and analysis from the cave deposits of the Cradle of Humankind UNESCO World Heritage Site in northeastern South Africa underlies much of our understanding of the evolutionary history of hominins, other primates and other mammal lineages in the late Pliocene and early Pleistocene of Africa. As one of the few designated fossil repositories, the Plio-Pleistocene Palaeontology Section of the Ditsong National Museum of Natural History (DNMNH; the former Transvaal Museum) curates much of the mammalian fauna recovered from the fossil-rich deposits of major South African hominin-bearing localities, including the holotype and paratype specimens of many primate, carnivore, and other mammal species (Orders Primates, Carnivora, Artiodactyla, Eulipotyphla, Hyracoidea, Lagomorpha, Perissodactyla, and Proboscidea). Here we describe an open-access digital archive of high-resolution, full-color three-dimensional (3D) surface meshes of all 89 non-hominin holotype, paratype and significant mammalian specimens curated in the Plio-Pleistocene Section vault. Surface meshes were generated using a commercial surface scanner (Artec Spider, Artec Group, Luxembourg), are provided in formats that can be opened in both open-source and commercial software, and can be readily downloaded either via an online data repository (MorphoSource) or via direct request from the DNMNH. In addition to providing surface meshes for each specimen, we also provide tomographic data (both computerized tomography [CT] and microfocus [microCT]) for a subset of these fossil specimens. This archive of the DNMNH Plio-Pleistocene collections represents the first research-quality 3D datasets of African mammal fossils to be made openly available.
This simultaneously provides the paleontological community with essential baseline information (e.g., updated listing and 3D record of specimens in their current state of preservation) and serves as a single resource of high-resolution digital data that improves collections accessibility, reduces unnecessary duplication of efforts by researchers, and encourages ongoing imaging-based paleobiological research across a range of South African non-hominin fossil faunas. Because the types, paratypes, and key specimens include globally-distributed mammal taxa, this digital archive not only provides 3D morphological data on taxa fundamental to Neogene and Quaternary South African palaeontology, but also lineages critical to research on African, other Old World, and New World paleocommunities. With such a broader impact of the DNMNH 3D data, we hope that establishing open access to this digital archive will encourage other researchers and institutions to provide similar resources that increase accessibility to paleontological collections and support advanced paleobiological analyses.

  18. Validation of model-based deformation correction in image-guided liver surgery via tracked intraoperative ultrasound: preliminary method and results

    NASA Astrophysics Data System (ADS)

    Clements, Logan W.; Collins, Jarrod A.; Wu, Yifei; Simpson, Amber L.; Jarnagin, William R.; Miga, Michael I.

    2015-03-01

    Soft tissue deformation represents a significant error source in current surgical navigation systems used for open hepatic procedures. While numerous algorithms have been proposed to rectify the tissue deformation that is encountered during open liver surgery, clinical validation of the proposed methods has been limited to surface based metrics and sub-surface validation has largely been performed via phantom experiments. Tracked intraoperative ultrasound (iUS) provides a means to digitize sub-surface anatomical landmarks during clinical procedures. The proposed method involves the validation of a deformation correction algorithm for open hepatic image-guided surgery systems via sub-surface targets digitized with tracked iUS. Intraoperative surface digitizations were acquired via a laser range scanner and an optically tracked stylus for the purposes of computing the physical-to-image space registration within the guidance system and for use in retrospective deformation correction. Upon completion of surface digitization, the organ was interrogated with a tracked iUS transducer where the iUS images and corresponding tracked locations were recorded. After the procedure, the clinician reviewed the iUS images to delineate contours of anatomical target features for use in the validation procedure. Mean closest point distances between the feature contours delineated in the iUS images and corresponding 3-D anatomical model generated from the preoperative tomograms were computed to quantify the extent to which the deformation correction algorithm improved registration accuracy. The preliminary results for two patients indicate that the deformation correction method resulted in a reduction in target error of approximately 50%.
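    The validation metric described, the mean closest-point distance from iUS feature contours to the preoperative model, can be sketched with a brute-force nearest-neighbour search over sampled surface points (coordinates below are hypothetical; a real implementation would use a spatial index):

```python
import math

def mean_closest_point_distance(contour_pts, model_pts):
    """Mean distance from each ultrasound-derived contour point to its
    nearest point on the (sampled) preoperative model surface.  Both
    arguments are sequences of (x, y, z) tuples."""
    total = 0.0
    for p in contour_pts:
        total += min(math.dist(p, q) for q in model_pts)
    return total / len(contour_pts)

# Hypothetical contour and surface samples (millimetres).
d = mean_closest_point_distance(
    [(0, 0, 0), (1, 0, 0)],
    [(0, 0, 1), (1, 0, 2)],
)
```

    Comparing this value before and after applying the deformation correction quantifies the roughly 50% error reduction reported.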

  19. RTSPM: real-time Linux control software for scanning probe microscopy.

    PubMed

    Chandrasekhar, V; Mehta, M M

    2013-01-01

    Real-time computer control is an essential feature of scanning probe microscopes, which have become important tools for the characterization and investigation of nanometer-scale samples. Most commercial (and some open-source) scanning probe data acquisition software uses digital signal processors to handle the real-time data processing and control, which adds to the expense and complexity of the control software. We describe here scan control software that uses a single computer and a data acquisition card to acquire scan data. The computer runs an open-source real-time Linux kernel, which permits fast acquisition and control while maintaining a responsive graphical user interface. Images from a simulated tuning-fork based microscope as well as a standard topographical sample are also presented, showing some of the capabilities of the software.
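
    Scan control software of this kind ultimately emits a deterministic stream of tip positions to the data acquisition card. A toy sketch of the serpentine (boustrophedon) raster pattern such a scan loop typically follows; the function name and parameters are illustrative, not from RTSPM:

```python
def serpentine_scan(nx, ny, dx, dy):
    """Yield (x, y) tip positions for a boustrophedon raster scan:
    even rows sweep left-to-right, odd rows right-to-left, so the
    tip never makes a long flyback move between lines."""
    for row in range(ny):
        cols = range(nx) if row % 2 == 0 else range(nx - 1, -1, -1)
        for col in cols:
            yield (col * dx, row * dy)
```

    In a real-time kernel each yielded setpoint would be written to the DAQ output at a fixed sample clock, with the corresponding probe signal read back on the same tick.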

  20. WHOI and SIO (I): Next Steps toward Multi-Institution Archiving of Shipboard and Deep Submergence Vehicle Data

    NASA Astrophysics Data System (ADS)

    Detrick, R. S.; Clark, D.; Gaylord, A.; Goldsmith, R.; Helly, J.; Lemmond, P.; Lerner, S.; Maffei, A.; Miller, S. P.; Norton, C.; Walden, B.

    2005-12-01

    The Scripps Institution of Oceanography (SIO) and the Woods Hole Oceanographic Institution (WHOI) have joined forces with the San Diego Supercomputer Center to build a testbed for multi-institutional archiving of shipboard and deep submergence vehicle data. Support has been provided by the Digital Archiving and Preservation program funded by NSF/CISE and the Library of Congress. In addition to the more than 92,000 objects stored in the SIOExplorer Digital Library, the testbed will provide access to data, photographs, video images and documents from WHOI ships, Alvin submersible and Jason ROV dives, and deep-towed vehicle surveys. An interactive digital library interface will allow combinations of distributed collections to be browsed, metadata inspected, and objects displayed or selected for download. The digital library architecture, and the search and display tools of the SIOExplorer project, are being combined with WHOI tools, such as the Alvin Framegrabber and the Jason Virtual Control Van, that have been designed using WHOI's GeoBrowser to handle the vast volumes of digital video and camera data generated by Alvin, Jason and other deep submergence vehicles. Notions of scalability will be tested, as data volumes range from 3 CDs per cruise to 200 DVDs per cruise. Much of the scalability of this proposal comes from an ability to attach digital library data and metadata acquisition processes to diverse sensor systems. We are able to run an entire digital library from a laptop computer as well as from supercomputer-center-size resources. It can be used in the field, laboratory, or classroom, covering data from acquisition to archive using a single coherent methodology. The design is an open architecture, supporting applications through well-defined external interfaces maintained as an open-source effort for community inclusion and enhancement.

  1. MicMac GIS application: free open source

    NASA Astrophysics Data System (ADS)

    Duarte, L.; Moutinho, O.; Teodoro, A.

    2016-10-01

    The use of Remotely Piloted Aerial Systems (RPAS) for remote sensing applications is becoming more frequent, as on-board camera technologies and the platforms themselves become serious contenders to satellite and airplane imagery. MicMac is a photogrammetric tool for image matching that can be used in different contexts. It is open source software and can be used from the command line or through a graphic interface (for each command). The main objective of this work was the integration of MicMac with QGIS, which is also open source software, in order to create a new open source tool for photogrammetry/remote sensing: a QGIS toolbar exposing the basic functionalities behind intuitive graphic interfaces. The Python language was used to develop the application, which is useful for the manipulation and 3D modelling of a set of images. The toolbar is composed of three buttons: produce the point cloud, create the Digital Elevation Model (DEM), and produce the orthophoto of the study area. The application was tested with 35 photos, a subset of images acquired by a RPAS in the Aguda beach area, Porto, Portugal. They were used to create a 3D terrain model and, from this model, to obtain an orthophoto and the corresponding DEM. The code is open and can be modified according to the user's requirements. This integration, combining photogrammetric processing with GIS capabilities, should be very useful to the photogrammetry and remote sensing community.
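
    A toolbar like the one described would typically shell out to MicMac's `mm3d` command-line tool. A hedged sketch of such a pipeline: the subcommand names (Tapioca, Tapas, Malt, Tawny) come from MicMac's documentation, but the specific arguments shown here are illustrative assumptions, not the plugin's actual calls:

```python
import subprocess

def build_micmac_pipeline(image_pattern):
    """Return mm3d command lines roughly matching the three toolbar steps
    (point cloud, DEM, orthophoto). Arguments are illustrative only."""
    return [
        ["mm3d", "Tapioca", "MulScale", image_pattern, "500", "2500"],  # tie points
        ["mm3d", "Tapas", "RadialStd", image_pattern],                  # camera orientation
        ["mm3d", "Malt", "Ortho", image_pattern, "RadialStd"],          # dense matching / DEM
        ["mm3d", "Tawny", "Ortho-MEC-Malt/"],                           # orthophoto mosaic
    ]

def run_pipeline(image_pattern):
    """Execute each step in order, aborting on the first failure."""
    for cmd in build_micmac_pipeline(image_pattern):
        subprocess.run(cmd, check=True)
```

    In the QGIS plugin, each button would trigger the corresponding subset of these steps and load the resulting raster back into the map canvas.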

  2. Teaching, Learning, and Sharing Openly Online

    ERIC Educational Resources Information Center

    O'Byrne, W. Ian; Roberts, Verena; LaBonte, Randy; Graham, Lee

    2015-01-01

    Open learning is becoming a critical focus for K-12 technology-supported programs as the importance of digital literacy and digital freedoms for all learners grows. This article describes current open learning policy, open educational resources and potential implications for open practice and ends with suggestions for future research in open…

  3. Examining Digital Literacy Competences and Learning Habits of Open and Distance Learners

    ERIC Educational Resources Information Center

    Ozdamar-Keskin, Nilgun; Ozata, Fatma Zeynep; Banar, Kerim; Royle, Karl

    2015-01-01

    The purpose of the study is to examine digital literacy competences and learning habits of learners enrolled in the open and distance education system of Anadolu University in Turkey. Data were gathered from 20,172 open and distance learners through a survey which included four parts: demographic information, abilities to use digital technologies,…

  4. Open-Source Automated Mapping Four-Point Probe

    PubMed Central

    Chandra, Handy; Allen, Spencer W.; Oberloier, Shane W.; Bihari, Nupur; Gwamuri, Jephias; Pearce, Joshua M.

    2017-01-01

    Scientists have begun using self-replicating rapid prototyper (RepRap) 3-D printers to manufacture open source digital designs of scientific equipment. This approach is refined here to develop a novel instrument capable of performing automated large-area four-point probe measurements. The designs for conversion of a RepRap 3-D printer to a 2-D open source four-point probe (OS4PP) measurement device are detailed for the mechanical and electrical systems. Free and open source software and firmware are developed to operate the tool. The OS4PP was validated against a wide range of discrete resistors and indium tin oxide (ITO) samples of different thicknesses both pre- and post-annealing. The OS4PP was then compared to two commercial proprietary systems. Results for discrete resistors from 10 Ω to 1 MΩ show errors of less than 1% for the OS4PP. The 3-D mapping of sheet resistance of ITO samples successfully demonstrated the automated capability to measure non-uniformities in large-area samples. The results indicate that all measured values are within the same order of magnitude when compared to two proprietary measurement systems. In conclusion, the OS4PP system, which costs less than 70% of manual proprietary systems, is comparable electrically while offering automated 100 micron positional accuracy for measuring sheet resistance over larger areas. PMID:28772471
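
    For a collinear, equally spaced probe on a thin, laterally infinite sample, the standard four-point conversion from measured voltage and current to sheet resistance is R_s = (π/ln 2)(V/I). A minimal sketch of that textbook conversion (not the OS4PP firmware itself):

```python
import math

def sheet_resistance(voltage, current):
    """Sheet resistance (ohms per square) for a collinear, equally spaced
    four-point probe on a thin, laterally infinite sample:
    R_s = (pi / ln 2) * V / I  ≈ 4.532 * V / I."""
    return (math.pi / math.log(2)) * voltage / current

def resistivity(voltage, current, thickness_m):
    """Bulk resistivity (ohm·m) when the film thickness is known and is
    much smaller than the probe spacing."""
    return sheet_resistance(voltage, current) * thickness_m
```

    An automated mapping instrument applies this conversion at each (x, y) position of the probe head to build the sheet-resistance map; finite-sample geometries need additional correction factors.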

  5. Digital Savings: A Study of Academic Libraries Finds that Going from Print to Electronic Journals Can Save Money, if It's Done Right, but Challenges Remain

    ERIC Educational Resources Information Center

    Schonfeld, Roger C.; Fenton, Eileen Gifford

    2005-01-01

    Without question, the ongoing transition from print to electronic periodicals has challenged librarians to rethink their strategies. While some effects of this change have been immediately apparent--greater breadth of material, easier access, exposure to new sources, publisher package deals, and open access--the broader outcomes on library…

  6. Strengthened IAEA Safeguards-Imagery Analysis: Geospatial Tools for Nonproliferation Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pabian, Frank V

    2012-08-14

    This slide presentation focuses on the growing role and importance of imagery analysis for IAEA safeguards applications and how commercial satellite imagery, together with the newly available geospatial tools, can be used to promote 'all-source synergy.' As additional sources of openly available information, satellite imagery in conjunction with the geospatial tools can be used to significantly augment and enhance existing information gathering techniques, procedures, and analyses in the remote detection and assessment of nonproliferation relevant activities, facilities, and programs. Foremost of the geospatial tools are the 'Digital Virtual Globes' (i.e., GoogleEarth, Virtual Earth, etc.) that are far better than previously used simple 2-D plan-view line drawings for visualization of known and suspected facilities of interest which can be critical to: (1) Site familiarization and true geospatial context awareness; (2) Pre-inspection planning; (3) Onsite orientation and navigation; (4) Post-inspection reporting; (5) Site monitoring over time for changes; (6) Verification of states' site declarations and for input to State Evaluation reports; and (7) A common basis for discussions among all interested parties (Member States). Additionally, as an 'open source', such virtual globes can also provide a new, essentially free, means to conduct broad area search for undeclared nuclear sites and activities - either alleged through open source leads; identified on internet BLOGS and WIKI Layers, with input from a 'free' cadre of global browsers and/or by knowledgeable local citizens (a.k.a.: 'crowdsourcing'), that can include ground photos and maps; or by other initiatives based on existing information and in-house country knowledge.
They also provide a means to acquire ground photography taken by locals, hobbyists, and tourists of the surrounding locales that can be useful in identifying and discriminating between relevant and non-relevant facilities and their associated infrastructure. The digital globes also provide highly accurate terrain mapping for better geospatial context and allow detailed 3-D perspectives of all sites or areas of interest. 3-D modeling software (i.e., Google's SketchUp, newly available in 2007) when used in conjunction with these digital globes can significantly enhance individual building characterization and visualization (including interiors), allowing for better assessments including walk-arounds or fly-arounds and perhaps better decision making on multiple levels (e.g., the best placement for International Atomic Energy Agency (IAEA) video monitoring cameras).

  7. Fast polyenergetic forward projection for image formation using OpenCL on a heterogeneous parallel computing platform.

    PubMed

    Zhou, Lili; Clifford Chao, K S; Chang, Jenghwa

    2012-11-01

    Simulated projection images of digital phantoms constructed from CT scans have been widely used for clinical and research applications but their quality and computation speed are not optimal for real-time comparison with the radiography acquired with an x-ray source of different energies. In this paper, the authors performed polyenergetic forward projections using open computing language (OpenCL) in a parallel computing ecosystem consisting of CPU and general purpose graphics processing unit (GPGPU) for fast and realistic image formation. The proposed polyenergetic forward projection uses a lookup table containing the NIST published mass attenuation coefficients (μ∕ρ) for different tissue types and photon energies ranging from 1 keV to 20 MeV. The CT images of interested sites are first segmented into different tissue types based on the CT numbers and converted to a three-dimensional attenuation phantom by linking each voxel to the corresponding tissue type in the lookup table. The x-ray source can be a radioisotope or an x-ray generator with a known spectrum described as weight w(n) for energy bin E(n). The Siddon method is used to compute the x-ray transmission line integral for E(n) and the x-ray fluence is the weighted sum of the exponential of line integral for all energy bins with added Poisson noise. To validate this method, a digital head and neck phantom constructed from the CT scan of a Rando head phantom was segmented into three (air, gray∕white matter, and bone) regions for calculating the polyenergetic projection images for the Mohan 4 MV energy spectrum. To accelerate the calculation, the authors partitioned the workloads using the task parallelism and data parallelism and scheduled them in a parallel computing ecosystem consisting of CPU and GPGPU (NVIDIA Tesla C2050) using OpenCL only. The authors explored the task overlapping strategy and the sequential method for generating the first and subsequent DRRs. 
A dispatcher was designed to drive the high-degree parallelism of the task overlapping strategy. Numerical experiments were conducted to compare the performance of the OpenCL∕GPGPU-based implementation with the CPU-based implementation. The projection images were similar to typical portal images obtained with a 4 or 6 MV x-ray source. For a phantom size of 512 × 512 × 223, the time for calculating the line integrals for a 512 × 512 image panel was 16.2 ms on GPGPU for one energy bin, in comparison to 8.83 s on CPU. The total computation time for generating one polyenergetic projection image of 512 × 512 was 0.3 s (141 s for CPU). The relative difference between the projection images obtained with the CPU-based and OpenCL∕GPGPU-based implementations was on the order of 10(-6), making the images virtually indistinguishable. The task overlapping strategy was 5.84 and 1.16 times faster than the sequential method for the first and subsequent digitally reconstructed radiographs (DRRs), respectively. The authors have successfully built digital phantoms using anatomic CT images and NIST μ∕ρ tables for simulating realistic polyenergetic projection images and optimized the processing speed with parallel computing using a GPGPU∕OpenCL-based implementation. The computation time was fast enough (0.3 s per projection image) for real-time IGRT (image-guided radiotherapy) applications.
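
    The forward model described above — a weighted sum over energy bins of the exponential of the Siddon line integral, with optional Poisson noise — can be sketched for a single ray as follows (function and argument names are ours, not from the paper's OpenCL kernels):

```python
import numpy as np

def polyenergetic_fluence(line_integrals, weights, photons=None, rng=None):
    """Detector fluence for one ray:
        fluence = sum_n w(n) * exp(-L(n)),
    where L(n) = sum_i mu(E_n, tissue_i) * l_i is the Siddon line integral
    for energy bin E_n and w(n) the source spectrum weight.
    If `photons` and `rng` are given, Poisson noise is added."""
    line_integrals = np.asarray(line_integrals, float)
    weights = np.asarray(weights, float)
    fluence = np.sum(weights * np.exp(-line_integrals))
    if photons is not None and rng is not None:
        fluence = rng.poisson(fluence * photons) / photons
    return fluence
```

    The GPU implementation evaluates this per-bin exponential for every detector pixel in parallel; the sketch above shows only the per-ray arithmetic.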

  8. Mobile service for open data visualization on geo-based images

    NASA Astrophysics Data System (ADS)

    Lee, Kiwon; Kim, Kwangseob; Kang, Sanggoo

    2015-12-01

    Since the early 2010s, governments in most countries have adopted and promoted open data policies and open data platforms. Korea is no exception: government and public organizations have operated publicly accessible open data portal systems since 2011, and the number and types of open data sets have been increasing every year. These trends extend naturally to mobile environments. The purpose of this study is to design and implement a mobile application service to visualize variously typed or formatted public open data together with geo-based images on the mobile web. Open data here covers downloadable data sets and openly accessible data application programming interfaces (APIs). Geo-based images are multi-sensor satellite images that are geo-referenced and matched with digital map sets. System components for the mobile service are fully based on open sources and open development environments, without any commercialized tools: PostgreSQL for the database management system, OTB for remote sensing image processing, GDAL for data conversion, GeoServer for the application server, OpenLayers for mobile web mapping, R for data analysis, and D3.js for web-based data graphic processing. The client-side mobile application was implemented using HTML5 for cross-browser and cross-platform support. The result demonstrates several advantages, such as linking open data with geo-based data, integrating open data and open source, and demonstrating mobile applications with open data. This approach is expected to be a cost-effective and process-efficient implementation strategy for intelligent earth observing data.

  9. An open source digital servo for atomic, molecular, and optical physics experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leibrandt, D. R., E-mail: david.leibrandt@nist.gov; Heidecker, J.

    2015-12-15

    We describe a general purpose digital servo optimized for feedback control of lasers in atomic, molecular, and optical physics experiments. The servo is capable of feedback bandwidths up to roughly 1 MHz (limited by the 320 ns total latency); loop filter shapes up to fifth order; multiple-input, multiple-output control; and automatic lock acquisition. The configuration of the servo is controlled via a graphical user interface, which also provides a rudimentary software oscilloscope and tools for measurement of system transfer functions. We illustrate the functionality of the digital servo by describing its use in two example scenarios: frequency control of the laser used to probe the narrow clock transition of {sup 27}Al{sup +} in an optical atomic clock, and length control of a cavity used for resonant frequency doubling of a laser.
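
    The loop filter at the heart of such a servo is a discrete-time filter applied to the error signal each sample. A toy sketch of the simplest case — a proportional-integral filter locking a first-order plant to a setpoint; the gains and plant model are illustrative, not taken from the instrument:

```python
class PIFilter:
    """Discrete proportional-integral loop filter, the first-order building
    block of the (up to fifth-order) loop filter shapes mentioned above."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, error):
        """One servo update: accumulate the integral term, return drive."""
        self.integral += self.ki * error * self.dt
        return self.kp * error + self.integral

def settle(setpoint=1.0, steps=2000):
    """Toy lock acquisition: servo a simple plant (y += gain * drive)
    toward the setpoint and return the final plant output."""
    servo, y = PIFilter(kp=0.5, ki=50.0, dt=1e-3), 0.0
    for _ in range(steps):
        y += 0.1 * servo.step(setpoint - y)
    return y
```

    In the real instrument this update runs in an FPGA at the ADC sample rate, which is what makes the 320 ns latency and ~1 MHz bandwidth possible.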

  10. An open source digital servo for atomic, molecular, and optical physics experiments.

    PubMed

    Leibrandt, D R; Heidecker, J

    2015-12-01

    We describe a general purpose digital servo optimized for feedback control of lasers in atomic, molecular, and optical physics experiments. The servo is capable of feedback bandwidths up to roughly 1 MHz (limited by the 320 ns total latency); loop filter shapes up to fifth order; multiple-input, multiple-output control; and automatic lock acquisition. The configuration of the servo is controlled via a graphical user interface, which also provides a rudimentary software oscilloscope and tools for measurement of system transfer functions. We illustrate the functionality of the digital servo by describing its use in two example scenarios: frequency control of the laser used to probe the narrow clock transition of (27)Al(+) in an optical atomic clock, and length control of a cavity used for resonant frequency doubling of a laser.

  11. An open source digital servo for atomic, molecular, and optical physics experiments

    NASA Astrophysics Data System (ADS)

    Leibrandt, D. R.; Heidecker, J.

    2015-12-01

    We describe a general purpose digital servo optimized for feedback control of lasers in atomic, molecular, and optical physics experiments. The servo is capable of feedback bandwidths up to roughly 1 MHz (limited by the 320 ns total latency); loop filter shapes up to fifth order; multiple-input, multiple-output control; and automatic lock acquisition. The configuration of the servo is controlled via a graphical user interface, which also provides a rudimentary software oscilloscope and tools for measurement of system transfer functions. We illustrate the functionality of the digital servo by describing its use in two example scenarios: frequency control of the laser used to probe the narrow clock transition of 27Al+ in an optical atomic clock, and length control of a cavity used for resonant frequency doubling of a laser.

  12. An open source digital servo for atomic, molecular, and optical physics experiments

    PubMed Central

    Leibrandt, D. R.; Heidecker, J.

    2016-01-01

    We describe a general purpose digital servo optimized for feedback control of lasers in atomic, molecular, and optical physics experiments. The servo is capable of feedback bandwidths up to roughly 1 MHz (limited by the 320 ns total latency); loop filter shapes up to fifth order; multiple-input, multiple-output control; and automatic lock acquisition. The configuration of the servo is controlled via a graphical user interface, which also provides a rudimentary software oscilloscope and tools for measurement of system transfer functions. We illustrate the functionality of the digital servo by describing its use in two example scenarios: frequency control of the laser used to probe the narrow clock transition of 27Al+ in an optical atomic clock, and length control of a cavity used for resonant frequency doubling of a laser. PMID:26724014

  13. DigitSeis: A New Digitization Software and its Application to the Harvard-Adam Dziewoński Observatory Collection

    NASA Astrophysics Data System (ADS)

    Bogiatzis, P.; Altoé, I. L.; Karamitrou, A.; Ishii, M.; Ishii, H.

    2015-12-01

    DigitSeis is a new open-source, interactive digitization software written in MATLAB that converts digital, raster images of analog seismograms to readily usable, discretized time series using image processing algorithms. DigitSeis automatically identifies and corrects for various geometrical distortions of seismogram images that are acquired through the original recording, storage, and scanning procedures. With human supervision, the software further identifies and classifies important features such as time marks and notes, corrects time-mark offsets from the main trace, and digitizes the combined trace with an analysis to obtain as accurate timing as possible. Although a large effort has been made to minimize the human input, DigitSeis provides interactive tools for challenging situations such as trace crossings and stains in the paper. The effectiveness of the software is demonstrated with the digitization of seismograms that are over half a century old from the Harvard-Adam Dziewoński observatory that is still in operation as a part of the Global Seismographic Network (station code HRV and network code IU). The spectral analysis of the digitized time series shows no spurious features that may be related to the occurrence of minute and hour marks. They also display signals associated with significant earthquakes, and a comparison of the spectrograms with modern recordings reveals similarities in the background noise.
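
    The core digitization step — recovering a time series from a raster trace — can be illustrated by taking the intensity-weighted centroid of ink pixels in each image column. This is a simplified sketch of the general idea, not DigitSeis's actual algorithm (which also handles time marks, trace crossings, and stains):

```python
import numpy as np

def trace_from_image(img):
    """Digitize a single seismogram trace from a raster image whose pixel
    values are ink intensity: for each column, return the intensity-weighted
    centroid row. Columns with no ink yield NaN, to be interpolated later."""
    img = np.asarray(img, float)
    rows = np.arange(img.shape[0])
    mass = img.sum(axis=0)                       # total ink per column
    with np.errstate(invalid="ignore", divide="ignore"):
        centroid = (img * rows[:, None]).sum(axis=0) / mass
    return np.where(mass > 0, centroid, np.nan)
```

    Mapping each column index to a time via the known drum speed, and each centroid row to ground displacement via the instrument gain, converts this pixel trace into the discretized time series the software produces.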

  14. DStat: A Versatile, Open-Source Potentiostat for Electroanalysis and Integration.

    PubMed

    Dryden, Michael D M; Wheeler, Aaron R

    2015-01-01

    Most electroanalytical techniques require the precise control of the potentials in an electrochemical cell using a potentiostat. Commercial potentiostats function as "black boxes," giving limited information about their circuitry and behaviour which can make development of new measurement techniques and integration with other instruments challenging. Recently, a number of lab-built potentiostats have emerged with various design goals including low manufacturing cost and field-portability, but notably lacking is an accessible potentiostat designed for general lab use, focusing on measurement quality combined with ease of use and versatility. To fill this gap, we introduce DStat (http://microfluidics.utoronto.ca/dstat), an open-source, general-purpose potentiostat for use alone or integrated with other instruments. DStat offers picoampere current measurement capabilities, a compact USB-powered design, and user-friendly cross-platform software. DStat is easy and inexpensive to build, may be modified freely, and achieves good performance at low current levels not accessible to other lab-built instruments. In head-to-head tests, DStat's voltammetric measurements are much more sensitive than those of "CheapStat" (a popular open-source potentiostat described previously), and are comparable to those of a compact commercial "black box" potentiostat. Likewise, in head-to-head tests, DStat's potentiometric precision is similar to that of a commercial pH meter. Most importantly, the versatility of DStat was demonstrated through integration with the open-source DropBot digital microfluidics platform. In sum, we propose that DStat is a valuable contribution to the "open source" movement in analytical science, which is allowing users to adapt their tools to their experiments rather than alter their experiments to be compatible with their tools.

  15. Web-based spatial analysis with the ILWIS open source GIS software and satellite images from GEONETCast

    NASA Astrophysics Data System (ADS)

    Lemmens, R.; Maathuis, B.; Mannaerts, C.; Foerster, T.; Schaeffer, B.; Wytzisk, A.

    2009-12-01

    This paper presents easily accessible, integrated web-based analysis of satellite images with plug-in based open source software. The paper is targeted at both users and developers of geospatial software. Guided by a use case scenario, we describe the ILWIS software and its toolbox for accessing satellite images through the GEONETCast broadcasting system. The last two decades have shown a major shift from stand-alone software systems to networked ones, often client/server applications using distributed geo-(web-)services. This allows organisations to combine their own data, without much effort, with remotely available data and processing functionality. Key to this integrated spatial data analysis is low-cost access to data from within user-friendly and flexible software. Web-based open source software solutions are often a powerful option for developing countries. The Integrated Land and Water Information System (ILWIS) is a PC-based GIS and remote sensing software package, comprising a complete set of image processing, spatial analysis, and digital mapping tools, and was developed as commercial software from the early nineties onwards. Recent project efforts have migrated ILWIS into modular, plug-in-based open source software and provide web-service support for OGC-based web mapping and processing. The core objective of the ILWIS open source project is to provide a maintainable framework for researchers and software developers to implement training components, scientific toolboxes, and (web-)services. The latest plug-ins have been developed for multi-criteria decision making, water resources analysis, and spatial statistics analysis. The development of this framework has been carried out since 2007 in the context of 52°North, an open initiative that advances the development of cutting-edge open source geospatial software under the GPL license.
GEONETCast, as part of the emerging Global Earth Observation System of Systems (GEOSS), puts essential environmental data at the fingertips of users around the globe. This user-friendly and low-cost information dissemination provides global information as a basis for decision-making in a number of critical areas, including public health, energy, agriculture, weather, water, climate, natural disasters, and ecosystems. GEONETCast makes satellite images available via Digital Video Broadcast (DVB) technology. An OGC WMS interface, together with plug-ins that convert GEONETCast data streams, allows an ILWIS user to integrate various distributed data sources with data stored locally on their machine. Our paper describes a use case in which ILWIS is used with GEONETCast satellite imagery for decision-making processes in Ghana. We also explain how the ILWIS software can be extended with additional functionality by means of plug-ins, and outline our plans to implement other OGC standards, such as WCS and WPS, in the same context. The latter, especially, can be seen as a major step forward in moving well-proven desktop-based processing functionality to the web. This enables the embedding of ILWIS functionality in Spatial Data Infrastructures, or even its execution in scalable, on-demand cloud computing environments.
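
    An OGC WMS GetMap request of the kind mentioned above is just a parameterized HTTP URL. A minimal sketch of building one; the endpoint and layer name are placeholders, not an actual GEONETCast service:

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, size=(512, 512), srs="EPSG:4326"):
    """Build an OGC WMS 1.1.1 GetMap request URL for a rendered map layer.
    bbox is (minx, miny, maxx, maxy) in the given SRS."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": layer, "STYLES": "",
        "SRS": srs, "BBOX": ",".join(map(str, bbox)),
        "WIDTH": size[0], "HEIGHT": size[1],
        "FORMAT": "image/png",
    }
    return base_url + "?" + urlencode(params)
```

    A plug-in can fetch such a URL and overlay the returned PNG on locally stored rasters, which is the data-integration pattern the paper describes.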

  16. DigitalHuman (DH): An Integrative Mathematical Model of Human Physiology

    NASA Technical Reports Server (NTRS)

    Hester, Robert L.; Summers, Richard L.; Iliescu, Radu; Esters, Joyee; Coleman, Thomas G.

    2010-01-01

    Mathematical models and simulation are important tools in discovering the key causal relationships governing physiological processes and improving medical intervention when physiological complexity is a central issue. We have developed a model of integrative human physiology called DigitalHuman (DH), consisting of ~5000 variables describing cardiovascular, renal, respiratory, endocrine, neural, and metabolic physiology. Users can view time-dependent solutions and interactively introduce perturbations by altering numerical parameters to investigate new hypotheses. All aspects of the model — the variables, parameters, quantitative relationships, and the mathematical equations describing the physiological processes — are written in open source, text-readable XML files. Model structure is based upon empirical data of physiological responses documented within the peer-reviewed literature. The model can be used to understand proposed physiological mechanisms and physiological interactions that may not otherwise be intuitively evident. Current uses of this model include analyses of the renal control of blood pressure, the central role of the liver in creating and maintaining insulin resistance, and the mechanisms causing orthostatic hypotension in astronauts. Additionally, the open source aspect of the modeling environment allows any investigator to add detailed descriptions of human physiology to test new concepts. The model accurately predicts both qualitative and, more importantly, quantitative changes in clinically and experimentally observed responses. DigitalHuman provides scientists a modeling environment to understand the complex interactions of integrative physiology. This research was supported by NIH HL 51971, NSF EPSCoR, and NASA.
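
    Storing the model as text-readable XML means any investigator can parse and modify it with standard tools. A minimal sketch of reading named numeric parameters from such a file; the tag and attribute names here are invented for illustration, since the abstract does not give DigitalHuman's actual schema:

```python
import xml.etree.ElementTree as ET

def load_parameters(xml_text):
    """Read a flat dictionary of named numeric parameters from an XML
    model description (hypothetical <parameter name=... value=...> tags)."""
    root = ET.fromstring(xml_text)
    return {p.get("name"): float(p.get("value")) for p in root.iter("parameter")}
```

    An investigator could load such a dictionary, perturb one value (e.g. a hypothetical renal conductance), and re-run the simulation to test a new hypothesis.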

  17. Digital geologic map of part of the Thompson Falls 1:100,000 quadrangle, Idaho

    USGS Publications Warehouse

    Lewis, Reed S.; Derkey, Pamela D.

    1999-01-01

    The geology of the Thompson Falls 1:100,000 quadrangle, Idaho was compiled by Reed S. Lewis in 1997 onto a 1:100,000-scale greenline mylar of the topographic base map for input into a geographic information system (GIS). The resulting digital geologic map GIS can be queried in many ways to produce a variety of geologic maps. Digital base map data files (topography, roads, towns, rivers and lakes, etc.) are not included: they may be obtained from a variety of commercial and government sources. This database is not meant to be used or displayed at any scale larger than 1:100,000 (e.g., 1:62,500 or 1:24,000). The map area is located in north Idaho. This open-file report describes the geologic map units, the methods used to convert the geologic map data into a digital format, the Arc/Info GIS file structures and relationships, and explains how to download the digital files from the U.S. Geological Survey public access World Wide Web site on the Internet.

  18. Implementation Of The Configurable Fault Tolerant System Experiment On NPSAT 1

    DTIC Science & Technology

    2016-03-01

    Master's thesis. …open-source microprocessor without interlocked pipeline stages (MIPS) based processor softcore, a cached memory structure capable of accessing double data rate type three and secure digital card memories, an interface to the main satellite bus, and XILINX's soft error mitigation softcore. The…

  19. The Image Gently pediatric digital radiography safety checklist: tools for improving pediatric radiography.

    PubMed

    John, Susan D; Moore, Quentin T; Herrmann, Tracy; Don, Steven; Powers, Kevin; Smith, Susan N; Morrison, Greg; Charkot, Ellen; Mills, Thalia T; Rutz, Lois; Goske, Marilyn J

    2013-10-01

    Transition from film-screen to digital radiography requires changes in radiographic technique and workflow processes to ensure that the minimum radiation exposure is used while maintaining diagnostic image quality. Checklists have been demonstrated to be useful tools for decreasing errors and improving safety in several areas, including commercial aviation and surgical procedures. The Image Gently campaign, through a competitive grant from the FDA, developed a checklist for technologists to use during the performance of digital radiography in pediatric patients. The checklist outlines the critical steps in digital radiography workflow, with an emphasis on steps that affect radiation exposure and image quality. The checklist and its accompanying implementation manual and practice quality improvement project are open source and downloadable at www.imagegently.org. The authors describe the process of developing and testing the checklist and offer suggestions for using the checklist to minimize radiation exposure to children during radiography. Copyright © 2013 American College of Radiology. All rights reserved.

  20. Digital Access to a Sky Century at Harvard: Initial Photometry and Astrometry

    NASA Astrophysics Data System (ADS)

    Laycock, S.; Tang, S.; Grindlay, J.; Los, E.; Simcoe, R.; Mink, D.

    2010-10-01

Digital Access to a Sky Century at Harvard (DASCH) is a project to digitize the collection of ~500,000 glass photographic plates held at Harvard College Observatory. The collection spans the time period from 1880 to 1985, during which every point on the sky was observed from 500 to 1000 times. In this paper, we describe the DASCH commissioning run, during which we developed the data-reduction pipeline, characterized the plates, and fine-tuned the digitizer's performance and operation. This initial run consisted of 500 plates taken from a variety of different plate series, all containing the open cluster Praesepe (M44). We report that accurate photometry at the 0.1 mag level is possible on the majority of plates, and demonstrate century-long light curves of various types of variable stars in and around M44. DASCH will generate a public online archive of the entire plate collection, including images, source catalogs, and light curves for nearly all astronomical objects brighter than about 17th magnitude.

  1. Concept of a digital aerial platform for conducting observation flights under the open skies treaty. (Polish Title: Koncepcja cyfrowej platformy lotniczej do realizacji misji obserwacyjnych w ramach traktatu o otwartych przestworzach)

    NASA Astrophysics Data System (ADS)

    Walczykowski, P.; Orych, A.

    2013-12-01

The Treaty on Open Skies, to which Poland has been a signatory from the very beginning, was signed in 1992 in Helsinki. The main principle of the Treaty is to increase the openness of military activities conducted by the States Parties and to verify compliance with disarmament agreements. Responsibilities under the Treaty are fulfilled by conducting and receiving a given number of observation flights over the territories of the Treaty signatories. Among the 34 countries currently taking an active part in the Treaty, only some own certified airplanes and observation sensors. Poland is among the countries that do not own their own platform and therefore fulfills Treaty requirements using the Ukrainian An-30b. Originally, the Treaty permitted only analogue sensors for the acquisition of imagery data. With the development of digital techniques, the need for digital imagery products grew. Digital photography is now used in almost all fields of study and everyday life. This has led to very rapid developments in digital sensor technologies, employing the newest and most innovative solutions. Digital imagery products have many advantages and have now almost fully replaced traditional film sensors. Digital technologies have given rise to a new era in Open Skies: the Open Skies Consultative Commission, having conducted many series of tests, signed a new Decision to the Treaty which allows digital aerial sensors to be used during observation flights. The main aim of this article is to develop a concept for choosing digital sensors and selecting an airplane, i.e. a digital aerial platform, that could be used by Poland for Open Skies purposes. A thorough analysis of airplanes currently used by the Polish Air Force was conducted in terms of their specifications and the possibility of their employment for Open Skies Treaty missions. Next, the latest aerial digital sensors offered by leading commercial manufacturers were analyzed in terms of the accordance of their specifications with the technical requirements of the Treaty.

  2. Supporting Indonesia's National Forest Monitoring System with LiDAR Observations

    NASA Astrophysics Data System (ADS)

    Hagen, S. C.

    2015-12-01

Scientists at Applied GeoSolutions, Jet Propulsion Laboratory, Winrock International, and the University of New Hampshire are working with the government of Indonesia to enhance the National Forest Monitoring System in Kalimantan, Indonesia. The establishment of a reliable, transparent, and comprehensive NFMS has been limited by a dearth of relevant data that are accurate, low-cost, and spatially resolved at subnational scales. In this NASA-funded project, we are developing, evaluating, and validating several critical components of a NFMS in Kalimantan, Indonesia, focusing on the use of LiDAR and radar imagery for improved carbon stock and forest degradation information. Applied GeoSolutions and the University of New Hampshire have developed an open source software package to process large amounts of LiDAR data quickly, easily, and accurately. The open source project is called lidar2dems and includes the classification of raw LAS point clouds and the creation of Digital Terrain Models (DTMs), Digital Surface Models (DSMs), and Canopy Height Models (CHMs). Preliminary estimates of forest structure and forest damage from logging derived from these data sets support the idea that comprehensive, well-documented, freely available software for processing LiDAR data can enable countries such as Indonesia to cost-effectively monitor their forests with high precision.
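The CHM step mentioned above reduces to a cell-wise grid difference between the surface and terrain models. A minimal sketch on toy grids in plain Python (the function name and nodata convention are invented for illustration; this is not the lidar2dems API):

```python
# Illustrative sketch (not the lidar2dems API): a canopy height model
# is the cell-wise difference between a digital surface model (canopy
# top, e.g. first returns) and a digital terrain model (ground returns).

def canopy_height_model(dsm, dtm, nodata=-9999.0):
    """Compute CHM = DSM - DTM on matching grids; clamp negatives to 0."""
    chm = []
    for dsm_row, dtm_row in zip(dsm, dtm):
        row = []
        for surface, ground in zip(dsm_row, dtm_row):
            if surface == nodata or ground == nodata:
                row.append(nodata)
            else:
                row.append(max(surface - ground, 0.0))
        chm.append(row)
    return chm

dsm = [[12.5, 15.0], [30.25, -9999.0]]  # canopy-top elevations (m)
dtm = [[10.0, 10.5], [11.25, 11.0]]     # bare-earth elevations (m)
print(canopy_height_model(dsm, dtm))    # [[2.5, 4.5], [19.0, -9999.0]]
```

Negative differences (surface below ground, usually classification noise) are clamped to zero, a common convention in CHM production.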

  3. Utilization of Open Source Technology to Create Cost-Effective Microscope Camera Systems for Teaching.

    PubMed

    Konduru, Anil Reddy; Yelikar, Balasaheb R; Sathyashree, K V; Kumar, Ankur

    2018-01-01

Open source technologies and mobile innovations have radically changed the way people interact with technology. These innovations and advancements have been used across various disciplines and already have a significant impact. Microscopy, with its focus on visually appealing, contrasting colors for better appreciation of morphology, forms the core of disciplines such as pathology, microbiology, and anatomy. Here, learning happens with the aid of multi-head microscopes and digital camera systems for teaching larger groups and for organizing interactive sessions for students or faculty of other departments. The cost of original equipment manufacturer (OEM) camera systems is a limiting factor in bringing this useful technology to all locations. To avoid this, we have used low-cost technologies such as the Raspberry Pi, Mobile High-Definition Link, and 3D printing for adapters to create portable camera systems. Adopting these open source technologies enabled us to connect any binocular or trinocular microscope to a projector or HD television at a fraction of the cost of the OEM camera systems, with comparable quality. These systems, in addition to being cost-effective, have also provided the added advantage of portability, thus providing the much-needed flexibility at various teaching locations.

  4. Synthesis of phylogeny and taxonomy into a comprehensive tree of life

    PubMed Central

    Hinchliff, Cody E.; Smith, Stephen A.; Allman, James F.; Burleigh, J. Gordon; Chaudhary, Ruchi; Coghill, Lyndon M.; Crandall, Keith A.; Deng, Jiabin; Drew, Bryan T.; Gazis, Romina; Gude, Karl; Hibbett, David S.; Katz, Laura A.; Laughinghouse, H. Dail; McTavish, Emily Jane; Midford, Peter E.; Owen, Christopher L.; Ree, Richard H.; Rees, Jonathan A.; Soltis, Douglas E.; Williams, Tiffani; Cranston, Karen A.

    2015-01-01

    Reconstructing the phylogenetic relationships that unite all lineages (the tree of life) is a grand challenge. The paucity of homologous character data across disparately related lineages currently renders direct phylogenetic inference untenable. To reconstruct a comprehensive tree of life, we therefore synthesized published phylogenies, together with taxonomic classifications for taxa never incorporated into a phylogeny. We present a draft tree containing 2.3 million tips—the Open Tree of Life. Realization of this tree required the assembly of two additional community resources: (i) a comprehensive global reference taxonomy and (ii) a database of published phylogenetic trees mapped to this taxonomy. Our open source framework facilitates community comment and contribution, enabling the tree to be continuously updated when new phylogenetic and taxonomic data become digitally available. Although data coverage and phylogenetic conflict across the Open Tree of Life illuminate gaps in both the underlying data available for phylogenetic reconstruction and the publication of trees as digital objects, the tree provides a compelling starting point for community contribution. This comprehensive tree will fuel fundamental research on the nature of biological diversity, ultimately providing up-to-date phylogenies for downstream applications in comparative biology, ecology, conservation biology, climate change, agriculture, and genomics. PMID:26385966

  5. Synthesis of phylogeny and taxonomy into a comprehensive tree of life.

    PubMed

    Hinchliff, Cody E; Smith, Stephen A; Allman, James F; Burleigh, J Gordon; Chaudhary, Ruchi; Coghill, Lyndon M; Crandall, Keith A; Deng, Jiabin; Drew, Bryan T; Gazis, Romina; Gude, Karl; Hibbett, David S; Katz, Laura A; Laughinghouse, H Dail; McTavish, Emily Jane; Midford, Peter E; Owen, Christopher L; Ree, Richard H; Rees, Jonathan A; Soltis, Douglas E; Williams, Tiffani; Cranston, Karen A

    2015-10-13

    Reconstructing the phylogenetic relationships that unite all lineages (the tree of life) is a grand challenge. The paucity of homologous character data across disparately related lineages currently renders direct phylogenetic inference untenable. To reconstruct a comprehensive tree of life, we therefore synthesized published phylogenies, together with taxonomic classifications for taxa never incorporated into a phylogeny. We present a draft tree containing 2.3 million tips-the Open Tree of Life. Realization of this tree required the assembly of two additional community resources: (i) a comprehensive global reference taxonomy and (ii) a database of published phylogenetic trees mapped to this taxonomy. Our open source framework facilitates community comment and contribution, enabling the tree to be continuously updated when new phylogenetic and taxonomic data become digitally available. Although data coverage and phylogenetic conflict across the Open Tree of Life illuminate gaps in both the underlying data available for phylogenetic reconstruction and the publication of trees as digital objects, the tree provides a compelling starting point for community contribution. This comprehensive tree will fuel fundamental research on the nature of biological diversity, ultimately providing up-to-date phylogenies for downstream applications in comparative biology, ecology, conservation biology, climate change, agriculture, and genomics.

  6. Database of historically documented springs and spring flow measurements in Texas

    USGS Publications Warehouse

    Heitmuller, Franklin T.; Reece, Brian D.

    2003-01-01

    Springs are naturally occurring features that convey excess ground water to the land surface; they represent a transition from ground water to surface water. Water issues through one opening, multiple openings, or numerous seeps in the rock or soil. The database of this report provides information about springs and spring flow in Texas including spring names, identification numbers, location, and, if available, water source and use. This database does not include every spring in Texas, but is limited to an aggregation of selected digital and hard-copy data of the U.S. Geological Survey (USGS), the Texas Water Development Board (TWDB), and Capitol Environmental Services.

  7. Fracture Systems - Digital Field Data Capture

    NASA Astrophysics Data System (ADS)

    Haslam, Richard

    2017-04-01

Fracture systems play a key role in subsurface resources and developments, including groundwater and nuclear waste repositories. There is increasing recognition of the need to record and quantify fracture systems to better understand the potential risks and opportunities. With the advent of smart phones and digital field geology, numerous systems have been designed for field data collection. Digital field data collection allows for rapid data collection and interpretation. However, many of the current systems have principally been designed to cover the full range of field mapping and data needs, making them large and complex, and many do not offer the tools necessary for the collection of fracture-specific data. A new multiplatform data recording app has been developed for the collection of field data on faults and joint/fracture systems, together with a relational database designed for storage and retrieval. The app has been developed to collect fault and joint/fracture data and is based on an open source platform. Data are captured in a form-based approach, including validity checks to ensure data are collected systematically. In addition to typical structural data collection, the International Society for Rock Mechanics' (ISRM) "Suggested Methods for the Quantitative Description of Discontinuities in Rock Masses" is included, allowing industry standards to be followed and opening the tools to industry as well as research. All data are uploaded automatically to a secure server, and users can view their data and open access data as required. Users can decide whether the data they produce should remain private or be open access. A series of automatic reports can be produced and/or the data downloaded. The database will hold a national archive, and data retrieval will be made through a web interface.
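The form-based validity checks described above can be sketched as follows. The field names, numeric ranges, and roughness categories are invented for illustration (loosely in the spirit of ISRM-style descriptors) and are not the app's actual schema:

```python
# Hypothetical sketch of form-style validity checks for a fracture
# record; field names and allowed values are illustrative only.

ALLOWED_ROUGHNESS = {"rough", "smooth", "slickensided"}

def validate_fracture_record(record):
    """Return a list of validation errors (empty list = valid record)."""
    errors = []
    dip = record.get("dip")
    if not isinstance(dip, (int, float)) or not 0 <= dip <= 90:
        errors.append("dip must be a number between 0 and 90 degrees")
    dip_dir = record.get("dip_direction")
    if not isinstance(dip_dir, (int, float)) or not 0 <= dip_dir < 360:
        errors.append("dip_direction must be in [0, 360) degrees")
    if record.get("roughness") not in ALLOWED_ROUGHNESS:
        errors.append("roughness must be one of %s" % sorted(ALLOWED_ROUGHNESS))
    return errors

good = {"dip": 65, "dip_direction": 120, "roughness": "rough"}
bad = {"dip": 95, "dip_direction": 120, "roughness": "wavy"}
print(validate_fracture_record(good))       # []
print(len(validate_fracture_record(bad)))   # 2
```

Running every record through such checks before upload is what makes the collected data systematic and comparable across users.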

  8. Open Data Infrastructures And The Future Of Science

    NASA Astrophysics Data System (ADS)

    Boulton, G. S.

    2016-12-01

    Open publication of the evidence (the data) supporting a scientific claim has been the bedrock on which the scientific advances of the modern era of science have been built. It is also of immense importance in confronting three challenges unleashed by the digital revolution. The first is the threat the digital data storm poses to the principle of "scientific self-correction", in which false concepts are weeded out because of a demonstrable failure in logic or in the replication of observations or experiments. Large and complex data volumes are difficult to make openly available in ways that make rigorous scrutiny possible. Secondly, linking and integrating data from different sources about the same phenomena have created profound new opportunities for understanding the Earth. If data are neither accessible nor useable, such opportunities cannot be seized. Thirdly, open access publication, open data and ubiquitous modern communications enhance the prospects for an era of "Open Science" in which science emerges from behind its laboratory doors to engage in co-production of knowledge with other stakeholders in addressing major contemporary challenges to human society, in particular the need for long term thinking about planetary sustainability. If the benefits of an open data regime are to be realised, only a small part of the challenge lies in providing "hard" infrastructure. The major challenges lie in the "soft" infrastructure of relationships between the components of national science systems, of analytic and software tools, of national and international standards and the normative principles adopted by scientists themselves. The principles that underlie these relationships, the responsibilities of key actors and the rules of the game needed to maximise national performance and facilitate international collaboration are set out in an International Accord on Open Data.

  9. Developing Critical L2 Digital Literacy through the Use of Computer-Based Internet-Hosted Learning Management Systems such as Moodle

    NASA Astrophysics Data System (ADS)

    Meurant, Robert C.

    Second Language (L2) Digital Literacy is of emerging importance within English as a Foreign Language (EFL) in Korea, and will evolve to become regarded as the most critical component of overall L2 English Literacy. Computer-based Internet-hosted Learning Management Systems (LMS), such as the popular open-source Moodle, are rapidly being adopted worldwide for distance education, and are also being applied to blended (hybrid) education. In EFL Education, they have a special potential: by setting the LMS to force English to be used exclusively throughout a course website, the meta-language can be made the target L2 language. Of necessity, students develop the ability to use English to navigate the Internet, access and contribute to online resources, and engage in computer-mediated communication. Through such pragmatic engagement with English, students significantly develop their L2 Digital Literacy.

  10. Proposal for internet-based Digital Dental Chart for personal dental identification in forensics.

    PubMed

    Hanaoka, Yoichi; Ueno, Asao; Tsuzuki, Tamiyuki; Kajiwara, Masahiro; Minaguchi, Kiyoshi; Sato, Yoshinobu

    2007-05-03

A dental chart is very useful as a standard source of evidence in the personal identification of bodies. However, the kind of dental chart available often varies, as a number of odontogram types have been developed in which the visual representation of dental conditions relies on hand-drawn records. We propose the Digital Dental Chart (DDC) as a new style of dental chart, especially for open investigations aimed at establishing the identity of unknown bodies. Each DDC is constructed using actual oral digital images and dental data, and is easy to upload to an Internet website. The DDC is a more useful forensic resource than the standard types of dental chart in current use, as it has several advantages, among which are its ability to carry a large volume of information and to reproduce dental conditions clearly, in detail, and on a cost-effective basis.

  11. Preliminary digital map of cryptocrystalline occurrences in northern Nevada

    USGS Publications Warehouse

    Moyer, Lorre A.

    1999-01-01

    The purpose was to identify potential cryptocrystalline material sources for tools used by indigenous people of the northern Nevada portion of the Great Basin. Cryptocrystalline occurrence data combed from the U.S. Geological Survey's Mineral Resources Data System (MRDS, 1995) were combined with sites described in Nevada rockhound guides and entered into a geographic information system (GIS). The map area encompasses northern Nevada (fig.1). This open-file report describes the methods used to convert cryptocrystalline occurrence data into a digital format, documents the file structures, and explains how to download the digital files from the U.S. Geological Survey's World Wide Web site. Uses of the spatial dataset include, but are not limited to, natural and cultural resource management, interdisciplinary activities, recreational rockhounding, and gold exploration. It is important to note that the accuracy of the spatial data varies widely, and for some purposes, field checks are advised.

  12. Managing an Open Access, Multi-Institutional, International Digital Library: The Digital Library of the Caribbean

    ERIC Educational Resources Information Center

    Wooldridge, Brooke; Taylor, Laurie; Sullivan, Mark

    2009-01-01

    Developing an Open Access, multi-institutional, multilingual, international digital library requires robust technological and institutional infrastructures that support both the needs of individual institutions alongside the needs of the growing partnership and ensure continuous communication and development of the shared vision for the digital…

  13. Strike Up the Score: Deriving Searchable and Playable Digital Formats from Sheet Music; Smart Objects and Open Archives; Building the Archives of the Future: Advanced in Preserving Electronic Records at the National Archives and Records Administration; From the Digitized to the Digital Library.

    ERIC Educational Resources Information Center

    Choudhury, G. Sayeed; DiLauro, Tim; Droettboom, Michael; Fujinaga, Ichiro; MacMillan, Karl; Nelson, Michael L.; Maly, Kurt; Thibodeau, Kenneth; Thaller, Manfred

    2001-01-01

    These articles describe the experiences of the Johns Hopkins University library in digitizing their collection of sheet music; motivation for buckets, Smart Object, Dumb Archive (SODA) and the Open Archives Initiative (OAI), and initial experiences using them in digital library (DL) testbeds; requirements for archival institutions, the National…

  14. An ontology based information system for the management of institutional repository's collections

    NASA Astrophysics Data System (ADS)

    Tsolakidis, A.; Kakoulidis, P.; Skourlas, C.

    2015-02-01

In this paper we discuss a simple methodological approach to creating and customizing institutional repositories for the domain of technological education. The use of the open source software platform DSpace is proposed to build the repository application and provide access to digital resources including research papers, dissertations, administrative documents, educational material, etc. The use of OWL ontologies is also proposed for indexing and accessing the various heterogeneous items stored in the repository. Customization and operation of a platform for the selection and use of terms, or parts of similar existing OWL ontologies, is also described. This platform could be based on the open source software Protégé, which supports OWL, is widely used, and also supports visualization, SPARQL, etc. The combined use of the OWL platform and the DSpace repository forms a basis for creating customized ontologies, accommodating the semantic metadata of items, and facilitating searching.

  15. An open-source laser electronics suite

    NASA Astrophysics Data System (ADS)

    Pisenti, Neal C.; Reschovsky, Benjamin J.; Barker, Daniel S.; Restelli, Alessandro; Campbell, Gretchen K.

    2016-05-01

We present an integrated set of open-source electronics for controlling external-cavity diode lasers and other instruments in the laboratory. The complete package includes a low-noise circuit for driving high-voltage piezoelectric actuators, an ultra-stable current controller based on a previously published design, and a high-performance, multi-channel temperature controller capable of driving thermo-electric coolers or resistive heaters. Each circuit (with the exception of the temperature controller) is designed to fit in a Eurocard rack equipped with a low-noise linear power supply capable of driving up to 5 A at +/- 15 V. A custom backplane allows signals to be shared between modules, and a digital communication bus makes the entire rack addressable by external control software over TCP/IP. The modular architecture makes it easy for additional circuits to be designed and integrated with existing electronics, providing a low-cost, customizable alternative to commercial systems without sacrificing performance.
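Making a rack addressable over TCP/IP typically follows a text-command pattern. A self-contained loopback sketch of that pattern (the `SET:CURRENT` command syntax and the echoing server are invented for illustration and are not the project's actual protocol):

```python
# Loopback sketch of a text-command-over-TCP pattern like the one the
# abstract describes; command syntax and replies are hypothetical.
import socket
import threading

def instrument_server(sock):
    """Answer one client: echo an OK line for each command received."""
    conn, _ = sock.accept()
    with conn:
        for line in conn.makefile("r"):
            conn.sendall(("OK " + line.strip() + "\n").encode())

server = socket.socket()
server.bind(("127.0.0.1", 0))          # ephemeral port on localhost
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=instrument_server, args=(server,), daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"SET:CURRENT 85.0\n")
    reply = client.makefile("r").readline().strip()
server.close()
print(reply)  # OK SET:CURRENT 85.0
```

Newline-delimited ASCII commands over a socket keep instruments scriptable from any language, which is the practical payoff of such a bus.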

  16. Parallelization of interpolation, solar radiation and water flow simulation modules in GRASS GIS using OpenMP

    NASA Astrophysics Data System (ADS)

    Hofierka, Jaroslav; Lacko, Michal; Zubal, Stanislav

    2017-10-01

    In this paper, we describe the parallelization of three complex and computationally intensive modules of GRASS GIS using the OpenMP application programming interface for multi-core computers. These include the v.surf.rst module for spatial interpolation, the r.sun module for solar radiation modeling and the r.sim.water module for water flow simulation. We briefly describe the functionality of the modules and parallelization approaches used in the modules. Our approach includes the analysis of the module's functionality, identification of source code segments suitable for parallelization and proper application of OpenMP parallelization code to create efficient threads processing the subtasks. We document the efficiency of the solutions using the airborne laser scanning data representing land surface in the test area and derived high-resolution digital terrain model grids. We discuss the performance speed-up and parallelization efficiency depending on the number of processor threads. The study showed a substantial increase in computation speeds on a standard multi-core computer while maintaining the accuracy of results in comparison to the output from original modules. The presented parallelization approach showed the simplicity and efficiency of the parallelization of open-source GRASS GIS modules using OpenMP, leading to an increased performance of this geospatial software on standard multi-core computers.
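The decomposition applied with OpenMP above (split an independent raster loop into chunks and process them concurrently) can be illustrated language-agnostically. This Python thread-pool sketch shows only the idea and is not the GRASS GIS C code; the per-row function is a stand-in for any cell-wise computation:

```python
# The paper parallelizes raster loops with OpenMP in C; this sketch
# illustrates the same decomposition -- independent rows dispatched to
# a worker pool -- using Python's standard-library executor.
from concurrent.futures import ThreadPoolExecutor
import math

def per_row(row):
    """Stand-in per-row raster computation (any cell-wise function)."""
    return [math.sqrt(v) for v in row]

def process_grid(grid, workers=4):
    """Map the row computation over the grid with a pool of threads."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(per_row, grid))

grid = [[float(r * 10 + c) for c in range(5)] for r in range(100)]
out = process_grid(grid)
assert out[4][4] == math.sqrt(44.0)  # identical to the serial result
```

As in the paper's modules, correctness hinges on the iterations being independent: each output row depends only on its own input row, so the parallel result matches the serial one exactly.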

  17. Mapping urban green open space in Bontang city using QGIS and cloud computing

    NASA Astrophysics Data System (ADS)

    Agus, F.; Ramadiani; Silalahi, W.; Armanda, A.; Kusnandar

    2018-04-01

Digital mapping techniques are now available freely and openly, so map-based application development is easier, faster, and cheaper. The rapid development of cloud computing geographic information systems means such systems can help meet the community's need for online geospatial information. Urban Green Open Space (GOS) provides great benefits as an oxygen supplier and carbon sink, and contributes to the comfort and beauty of city life. This study aims to propose a GIS cloud computing (CC) application platform for mapping the GOS of Bontang City. The GIS-CC platform uses free and open source base maps. The research used a survey method to collect GOS data obtained from the Bontang City Government, while the application was developed with Quantum GIS and cloud computing. The results section describes the existing GOS of Bontang City and the design of the GOS mapping application.

  18. "Digital Futures in Teacher Education": Exploring Open Approaches towards Digital Literacy

    ERIC Educational Resources Information Center

Gruszczynska, Anna; Merchant, Guy; Pountney, Richard

    2013-01-01

    This paper reports the findings of a project "Digital Futures in Teacher Education" (DeFT) undertaken as part of the third phase of the Joint Information Systems Committee (JISC) UK Open Educational Resources (OER) programme. It builds on previous work (Gruszczynska and Pountney, 2012, 2013) that has addressed attempts to embed OER…

  19. DStat: A Versatile, Open-Source Potentiostat for Electroanalysis and Integration

    PubMed Central

    Dryden, Michael D. M.; Wheeler, Aaron R.

    2015-01-01

    Most electroanalytical techniques require the precise control of the potentials in an electrochemical cell using a potentiostat. Commercial potentiostats function as “black boxes,” giving limited information about their circuitry and behaviour which can make development of new measurement techniques and integration with other instruments challenging. Recently, a number of lab-built potentiostats have emerged with various design goals including low manufacturing cost and field-portability, but notably lacking is an accessible potentiostat designed for general lab use, focusing on measurement quality combined with ease of use and versatility. To fill this gap, we introduce DStat (http://microfluidics.utoronto.ca/dstat), an open-source, general-purpose potentiostat for use alone or integrated with other instruments. DStat offers picoampere current measurement capabilities, a compact USB-powered design, and user-friendly cross-platform software. DStat is easy and inexpensive to build, may be modified freely, and achieves good performance at low current levels not accessible to other lab-built instruments. In head-to-head tests, DStat’s voltammetric measurements are much more sensitive than those of “CheapStat” (a popular open-source potentiostat described previously), and are comparable to those of a compact commercial “black box” potentiostat. Likewise, in head-to-head tests, DStat’s potentiometric precision is similar to that of a commercial pH meter. Most importantly, the versatility of DStat was demonstrated through integration with the open-source DropBot digital microfluidics platform. In sum, we propose that DStat is a valuable contribution to the “open source” movement in analytical science, which is allowing users to adapt their tools to their experiments rather than alter their experiments to be compatible with their tools. PMID:26510100

  20. Possibility of reconstruction of dental plaster cast from 3D digital study models

    PubMed Central

    2013-01-01

Objectives To compare traditional plaster casts, digital models, and 3D-printed copies of dental plaster casts according to various criteria; to determine whether 3D-printed copies obtained using the open source RepRap system can replace traditional plaster casts in dental practice; and to compare and contrast the qualities of two possible 3D printing options, the open source RepRap system and commercially available 3D printing. Design and settings A method comparison study on 10 dental plaster casts from the Orthodontic Department, Department of Stomatology, 2nd Medical Faculty, Charles University, Prague, Czech Republic. Material and methods Each of the 10 plaster casts was scanned with an inEos Blue scanner and then printed on a RepRap 3D printer [10 models] and a ProJet HD3000 3D printer [1 model]. Linear measurements between selected points on the dental arches of the upper and lower jaws on the plaster casts and their 3D copies were recorded and statistically analyzed. Results 3D-printed copies have many advantages over traditional plaster casts. The precision and accuracy of the RepRap 3D-printed copies of the plaster casts were confirmed by statistical analysis. Although commercially available 3D printing can reproduce more detail than the RepRap system, it is expensive, and for clinical purposes it can be replaced by the cheaper RepRap prints. Conclusions Scanning traditional plaster casts to obtain a digital model offers a pragmatic approach. The scans can subsequently be used as a template to print the casts as required. 3D printers can replace traditional plaster casts primarily due to their accuracy and price. PMID:23721330
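The statistical comparison of linear measurements can be sketched as a paired-difference analysis (mean bias and Bland-Altman-style limits of agreement). The measurement values below are invented for illustration, not the study's data:

```python
# Illustrative paired comparison: the same inter-point distances (mm)
# measured on plaster casts and on their 3D-printed copies.
# All numbers are invented; the method, not the data, is the point.
import statistics

plaster = [35.2, 41.8, 28.5, 33.0, 46.1]   # distances on plaster casts (mm)
printed = [35.0, 42.1, 28.4, 33.2, 45.9]   # same distances on 3D prints (mm)

diffs = [p - q for p, q in zip(plaster, printed)]
bias = statistics.mean(diffs)               # mean paired difference
sd = statistics.stdev(diffs)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

print(f"bias = {bias:+.3f} mm")
print(f"95% limits of agreement: {loa[0]:+.3f} to {loa[1]:+.3f} mm")
```

A bias near zero with narrow limits of agreement is the kind of evidence that supports the study's conclusion that the printed copies are accurate enough for clinical use.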

  1. Possible costs associated with investigating and mitigating geologic hazards in rural areas of western San Mateo County, California with a section on using the USGS website to determine the cost of developing property for residences in rural parts of San Mateo County, California

    USGS Publications Warehouse

    Brabb, Earl E.; Roberts, Sebastian; Cotton, William R.; Kropp, Alan L.; Wright, Robert H.; Zinn, Erik N.; Digital database by Roberts, Sebastian; Mills, Suzanne K.; Barnes, Jason B.; Marsolek, Joanna E.

    2000-01-01

    This publication consists of a digital map database on a geohazards web site, http://kaibab.wr.usgs.gov/geohazweb/intro.htm, this text, and 43 digital map images available for downloading at this site. The report is stored as several digital files, in ARC export (uncompressed) format for the database, and Postscript and PDF formats for the map images. Several of the source data layers for the images have already been released in other publications by the USGS and are available for downloading on the Internet. These source layers are not included in this digital database, but rather a reference is given for the web site where the data can be found in digital format. The exported ARC coverages and grids lie in UTM zone 10 projection. The pamphlet, which only describes the content and character of the digital map database, is included as Postscript, PDF, and ASCII text files and is also available on paper as USGS Open-File Report 00-127. The full versatility of the spatial database is realized by importing the ARC export files into ARC/INFO or an equivalent GIS. Other GIS packages, including MapInfo and ARCVIEW, can also use the ARC export files. The Postscript map image can be used for viewing or plotting in computer systems with sufficient capacity, and the considerably smaller PDF image files can be viewed or plotted in full or in part from Adobe ACROBAT software running on Macintosh, PC, or UNIX platforms.

  2. City model enrichment

    NASA Astrophysics Data System (ADS)

    Smart, Philip D.; Quinn, Jonathan A.; Jones, Christopher B.

    The combination of mobile communication technology with location and orientation aware digital cameras has introduced increasing interest in the exploitation of 3D city models for applications such as augmented reality and automated image captioning. The effectiveness of such applications is, at present, severely limited by the often poor quality of semantic annotation of the 3D models. In this paper, we show how freely available sources of georeferenced Web 2.0 information can be used for automated enrichment of 3D city models. Point referenced names of prominent buildings and landmarks mined from Wikipedia articles and from the OpenStreetMap digital map and GeoNames gazetteer have been matched to the 2D ground plan geometry of a 3D city model. In order to address the ambiguities that arise in the associations between these sources and the city model, we present procedures to merge potentially related buildings and implement fuzzy matching between reference points and building polygons. An experimental evaluation demonstrates the effectiveness of the presented methods.
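The matching step described above can be sketched in a few lines: a point-referenced name is first tested for containment against each building footprint and, failing that, fuzzily matched to the nearest footprint centroid within a tolerance. This is an illustrative reconstruction, not the authors' code; the `match_name_to_building` function, the footprint data and the tolerance value are assumptions.

```python
def point_in_polygon(pt, poly):
    """Ray-casting test: is point pt inside the closed polygon poly?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def centroid(poly):
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def match_name_to_building(pt, footprints, tolerance=50.0):
    """Return the footprint containing pt, else the one whose centroid
    lies nearest within `tolerance` map units (fuzzy match), else None."""
    for name, poly in footprints.items():
        if point_in_polygon(pt, poly):
            return name
    best, best_d = None, tolerance
    for name, poly in footprints.items():
        cx, cy = centroid(poly)
        d = ((pt[0] - cx) ** 2 + (pt[1] - cy) ** 2) ** 0.5
        if d < best_d:
            best, best_d = name, d
    return best
```

A point slightly outside a footprint (a common situation for Wikipedia coordinates) still matches it via the centroid fallback, which is the essence of the fuzzy step.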

  3. In Digital Age, Sunshine Laws Turn Hazy

    ERIC Educational Resources Information Center

    Fleming, Nora

    2013-01-01

    School board members are struggling to interpret laws that govern where and how they do business now that as many conversations take place digitally as they do face to face. As online and digital interactions increase, so too does public concern that officials have more opportunities to violate state open-meetings and open-records laws meant to…

  4. Direct measurement of the speed of sound using a microphone and a speaker

    NASA Astrophysics Data System (ADS)

    Gómez-Tejedor, José A.; Castro-Palacio, Juan C.; Monsoriu, Juan A.

    2014-05-01

    We present a simple and accurate experiment to obtain the speed of sound in air using a conventional speaker and a microphone connected to a computer. A free open source digital audio editor and recording computer software application allows determination of the time-of-flight of the wave for different distances, from which the speed of sound is calculated. The result is in very good agreement with the reported value in the literature.
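The analysis the abstract describes reduces to fitting distance against time-of-flight and reading the speed of sound off the slope. A minimal sketch, with synthetic measurements standing in for the recorded data:

```python
# Illustrative reconstruction of the analysis step: least-squares fit of
# distance vs. time-of-flight; the slope is the speed of sound. The
# sample data below are made up for demonstration.

def fit_slope(times, dists):
    """Least-squares slope of dists vs. times."""
    n = len(times)
    mt = sum(times) / n
    md = sum(dists) / n
    num = sum((t - mt) * (d - md) for t, d in zip(times, dists))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

# Synthetic measurements: distances in m, times in s, consistent with
# a speed of sound of about 343 m/s.
dists = [0.5, 1.0, 1.5, 2.0]
times = [d / 343.0 for d in dists]
v = fit_slope(times, dists)
```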

  5. JavaGenes Molecular Evolution

    NASA Technical Reports Server (NTRS)

    Lohn, Jason; Smith, David; Frank, Jeremy; Globus, Al; Crawford, James

    2007-01-01

    JavaGenes is a general-purpose, evolutionary software system written in Java. It implements several versions of a genetic algorithm, simulated annealing, stochastic hill climbing, and other search techniques. This software has been used to evolve molecules, atomic force field parameters, digital circuits, Earth Observing Satellite schedules, and antennas. This version differs from version 0.7.28 in that it includes the molecule evolution code and other improvements. Except for the antenna code, JavaGenes is available for NASA Open Source distribution.
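JavaGenes itself is written in Java; as a language-neutral illustration of one of the search techniques the abstract lists, here is a toy stochastic hill climber. The objective function, step size and iteration budget are invented for the example and bear no relation to the JavaGenes code.

```python
import random

def hill_climb(objective, start, step=0.1, iters=2000, seed=0):
    """Stochastic hill climbing: propose a random nearby candidate and
    keep it only if it improves the objective."""
    rng = random.Random(seed)
    x, fx = start, objective(start)
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = objective(cand)
        if fc > fx:          # accept only improvements
            x, fx = cand, fc
    return x, fx

# Maximize a simple concave objective with its optimum at x = 2.
best_x, best_f = hill_climb(lambda x: -(x - 2.0) ** 2, start=0.0)
```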

  6. Method and apparatus for enhanced sensitivity filmless medical x-ray imaging, including three-dimensional imaging

    DOEpatents

    Parker, S.

    1995-10-24

    A filmless X-ray imaging system includes at least one X-ray source, upper and lower collimators, and a solid-state detector array, and can provide three-dimensional imaging capability. The X-ray source plane is distance z1 above the upper collimator plane, distance z2 above the lower collimator plane, and distance z3 above the plane of the detector array. The object to be X-rayed is located between the upper and lower collimator planes. The upper and lower collimators and the detector array are moved horizontally with scanning velocities v1, v2, v3 proportional to z1, z2 and z3, respectively. The pattern and size of openings in the collimators, and between detector positions, is proportional such that similar triangles are always defined relative to the location of the X-ray source. X-rays that pass through openings in the upper collimator will always pass through corresponding and similar openings in the lower collimator, and thence to a corresponding detector in the underlying detector array. Substantially 100% of the X-rays irradiating the object (and neither absorbed nor scattered) pass through the lower collimator openings and are detected, which promotes enhanced sensitivity. A computer system coordinates repositioning of the collimators and detector array, and X-ray source locations. The computer system can store detector array output, and can associate a known X-ray source location with detector array output data, to provide three-dimensional imaging. Detector output may be viewed instantly, stored digitally, and/or transmitted electronically for image viewing at a remote site. 5 figs.
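The proportional-velocity constraint can be made concrete: if each lower plane moves with a velocity scaled by its distance below the source, v_i = v1 · z_i / z1, the similar-triangle alignment of collimator openings and detectors is preserved throughout the scan. A small sketch with illustrative numbers (not taken from the patent):

```python
def scanning_velocities(v1, z1, z2, z3):
    """Velocities for the lower collimator and detector array that keep
    them on similar triangles from the X-ray source: v_i = v1 * z_i / z1."""
    return v1, v1 * z2 / z1, v1 * z3 / z1

# Illustrative distances (arbitrary units) and upper-collimator speed.
v1, v2, v3 = scanning_velocities(v1=10.0, z1=100.0, z2=150.0, z3=200.0)
```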

  7. Method and apparatus for enhanced sensitivity filmless medical x-ray imaging, including three-dimensional imaging

    DOEpatents

    Parker, Sherwood

    1995-01-01

    A filmless X-ray imaging system includes at least one X-ray source, upper and lower collimators, and a solid-state detector array, and can provide three-dimensional imaging capability. The X-ray source plane is distance z1 above the upper collimator plane, distance z2 above the lower collimator plane, and distance z3 above the plane of the detector array. The object to be X-rayed is located between the upper and lower collimator planes. The upper and lower collimators and the detector array are moved horizontally with scanning velocities v1, v2, v3 proportional to z1, z2 and z3, respectively. The pattern and size of openings in the collimators, and between detector positions, is proportional such that similar triangles are always defined relative to the location of the X-ray source. X-rays that pass through openings in the upper collimator will always pass through corresponding and similar openings in the lower collimator, and thence to a corresponding detector in the underlying detector array. Substantially 100% of the X-rays irradiating the object (and neither absorbed nor scattered) pass through the lower collimator openings and are detected, which promotes enhanced sensitivity. A computer system coordinates repositioning of the collimators and detector array, and X-ray source locations. The computer system can store detector array output, and can associate a known X-ray source location with detector array output data, to provide three-dimensional imaging. Detector output may be viewed instantly, stored digitally, and/or transmitted electronically for image viewing at a remote site.

  8. Development of ultra-high temperature material characterization capabilities using digital image correlation analysis

    NASA Astrophysics Data System (ADS)

    Cline, Julia Elaine

    2011-12-01

    Ultra-high temperature deformation measurements are required to characterize the thermo-mechanical response of material systems for thermal protection systems for aerospace applications. The use of conventional surface-contacting strain measurement techniques is not practical at elevated temperatures. Technological advancements in digital imaging provide the impetus to measure full-field displacement and determine strain fields with sub-pixel accuracy by image processing. In this work, an Instron electromechanical axial testing machine with a custom-designed high temperature gripping mechanism is used to apply quasi-static tensile loads to graphite specimens heated to 2000°F (1093°C). Specimen heating via the Joule effect is achieved and maintained with a custom-designed temperature control system. Images are captured at monotonically increasing load levels throughout the test duration using an 18 megapixel Canon EOS Rebel T2i digital camera with a modified Schneider-Kreuznach telecentric lens and a combination of blue-light illumination and a narrow band-pass filter. Images are processed using an open-source Matlab-based digital image correlation (DIC) code. Validation of the source code is performed using Mathematica-generated images with specified known displacement fields in order to gain confidence in the software's tracking accuracy. Room temperature results are compared with extensometer readings. Ultra-high temperature strain measurements for graphite are obtained at low load levels, demonstrating the potential for non-contacting digital image correlation techniques to accurately determine full-field strain measurements at ultra-high temperature. Recommendations are given to improve the experimental set-up to achieve displacement field measurements accurate to 1/10 pixel and strain field accuracy of less than 2%.
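The core of a DIC code is locating each reference subset in the deformed image by maximizing a correlation score; the offset of the best match is the measured displacement. The sketch below does this at integer-pixel resolution with normalized cross-correlation. It is a toy stand-in, not the open-source Matlab code the thesis uses (which adds sub-pixel interpolation); the function names and image grids are invented.

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-length pixel lists."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def flatten(img, r0, c0, h, w):
    return [img[r][c] for r in range(r0, r0 + h) for c in range(c0, c0 + w)]

def track_subset(ref, deformed, r0, c0, h, w):
    """Return the (row, col) in `deformed` best matching the reference
    subset at (r0, c0); the offset from (r0, c0) is the displacement."""
    template = flatten(ref, r0, c0, h, w)
    best, best_pos = -2.0, (r0, c0)
    for r in range(len(deformed) - h + 1):
        for c in range(len(deformed[0]) - w + 1):
            score = ncc(template, flatten(deformed, r, c, h, w))
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```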

  9. HEPData: a repository for high energy physics data

    NASA Astrophysics Data System (ADS)

    Maguire, Eamonn; Heinrich, Lukas; Watt, Graeme

    2017-10-01

    The Durham High Energy Physics Database (HEPData) has been built up over the past four decades as a unique open-access repository for scattering data from experimental particle physics papers. It comprises data points underlying several thousand publications. Over the last two years, the HEPData software has been completely rewritten using modern computing technologies as an overlay on the Invenio v3 digital library framework. The software is open source with the new site available at https://hepdata.net now replacing the previous site at http://hepdata.cedar.ac.uk. In this write-up, we describe the development of the new site and explain some of the advantages it offers over the previous platform.

  10. Repositories for Deep, Dark, and Offline Data - Building Grey Literature Repositories and Discoverability

    NASA Astrophysics Data System (ADS)

    Keane, C. M.; Tahirkheli, S.

    2017-12-01

    Data repositories, especially in the geosciences, have been focused on the management of large quantities of born-digital data and facilitating its discovery and use. Unfortunately, born-digital data, even at its immense scale today, represents only the most recent data acquisitions, leaving a large proportion of the historical data record of the science "out in the cold." Additionally, the data record in the peer-reviewed literature, whether captured directly in the literature or through journal data archives, represents only a fraction of the reliable data collected in the geosciences. Federal and state agencies, state surveys, and private companies collect vast amounts of geoscience information and data that are not only reliable and robust, but often the only data representative of specific spatial and temporal conditions. Likewise, even some academic publications, such as senior theses, are unique sources of data, but generally have neither wide discoverability nor guarantees of longevity. As more of these 'grey' sources of information and data are born-digital, they become increasingly at risk of permanent loss, not to mention poor discoverability. Numerous studies have shown that grey literature across all disciplines, including the geosciences, disappears at a rate of about 8% per year. AGI has been working to develop systems to improve both the discoverability and the preservation of the geoscience grey literature by coupling several open source platforms from the information science community. We will detail the rationale, the technical and legal frameworks for these systems, and the long-term strategies for improving access, use, and stability of these critical data sources.

  11. ASK-LDT 2.0: A Web-Based Graphical Tool for Authoring Learning Designs

    ERIC Educational Resources Information Center

    Zervas, Panagiotis; Fragkos, Konstantinos; Sampson, Demetrios G.

    2013-01-01

    During the last decade, Open Educational Resources (OERs) have gained increased attention for their potential to support open access, sharing and reuse of digital educational resources. Therefore, a large amount of digital educational resources have become available worldwide through web-based open access repositories which are referred to as…

  12. Validation of a semi-automatic protocol for the assessment of the tear meniscus central area based on open-source software

    NASA Astrophysics Data System (ADS)

    Pena-Verdeal, Hugo; Garcia-Resua, Carlos; Yebra-Pimentel, Eva; Giraldez, Maria J.

    2017-08-01

    Purpose: Different lower tear meniscus parameters can be clinically assessed in dry eye diagnosis. The aim of this study was to propose and analyse the variability of a semi-automatic method for measuring the lower tear meniscus central area (TMCA) using open source software. Material and methods: In a group of 105 subjects, one video of the lower tear meniscus after fluorescein instillation was recorded by a digital camera attached to a slit-lamp. A short light beam (3x5 mm) with moderate illumination in the central portion of the meniscus (6 o'clock) was used. Images were extracted from each video by a masked observer. Using open source software based on Java (NIH ImageJ), a further observer measured the TMCA in the short-light-beam illuminated area, in a masked and randomized order, by two methods: (1) a manual method, in which the TMCA was measured by hand on each image; (2) a semi-automatic method, in which each TMCA image was converted to an 8-bit binary image, holes inside the resulting shape were filled, and the area of the isolated shape was obtained. Finally, the manual and semi-automatic measurements were compared. Results: A paired t-test showed no statistically significant difference between the results of the two techniques (p = 0.102). Pearson analysis showed a significant, near-perfect positive correlation between the techniques (r = 0.99; p < 0.001). Conclusions: This study presented a useful tool for objectively measuring the frontal central area of the meniscus in photographs with free open source software.
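The semi-automatic method lends itself to a compact sketch: threshold the image to binary, fill interior holes, and count the remaining pixels as the area. The study performed these steps in ImageJ; the Python below is an illustrative re-implementation under that reading, with an invented image grid and threshold.

```python
def fill_holes(mask):
    """Fill 0-regions not connected to the image border (interior holes)."""
    h, w = len(mask), len(mask[0])
    outside = [[False] * w for _ in range(h)]
    stack = [(r, c) for r in range(h) for c in range(w)
             if (r in (0, h - 1) or c in (0, w - 1)) and mask[r][c] == 0]
    for r, c in stack:
        outside[r][c] = True
    while stack:                       # flood fill the exterior zeros
        r, c = stack.pop()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and mask[nr][nc] == 0 \
                    and not outside[nr][nc]:
                outside[nr][nc] = True
                stack.append((nr, nc))
    return [[0 if outside[r][c] else 1 for c in range(w)] for r in range(h)]

def meniscus_area(image, threshold=128):
    """Threshold an 8-bit image, fill holes, and return the area in pixels."""
    mask = [[1 if px >= threshold else 0 for px in row] for row in image]
    filled = fill_holes(mask)
    return sum(sum(row) for row in filled)
```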

  13. A multi-purpose open-source triggering platform for magnetic resonance

    NASA Astrophysics Data System (ADS)

    Ruytenberg, T.; Webb, A. G.; Beenakker, J. W. M.

    2014-10-01

    Many MR scans need to be synchronised with external events such as the cardiac or respiratory cycles. For common physiological functions commercial trigger equipment exists, but for more experimental inputs no such equipment is available. This paper describes the design of a multi-purpose open-source trigger platform for MR systems. The heart of the system is an open-source Arduino Due microcontroller. This microcontroller samples an analogue input and digitally processes these data to determine the trigger. The output of the microcontroller is programmed to mimic a physiological signal, which is fed into the electrocardiogram (ECG) or pulse oximeter port of the MR scanner. The microcontroller is connected to a Bluetooth dongle that allows wireless monitoring and control outside the scanner room. The device can be programmed to generate a trigger based on various types of input. As one example, this paper describes how it can be used as an acoustic cardiac triggering unit. For this, a plastic stethoscope is connected to a microphone, which is used as an input for the system. This test setup was used to acquire retrospectively-triggered cardiac scans in ten volunteers. Analysis showed that this platform produces a reliable trigger (>99% of triggers are correct) with a small average variation of 8 ms between the exact trigger points.
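The digital processing the microcontroller performs can be illustrated as threshold-crossing detection with a refractory period, which avoids double-triggering on a single heart sound. This is a sketch of the idea in Python, not the Arduino firmware; the threshold, sample rate and synthetic signal are assumptions.

```python
def detect_triggers(samples, rate_hz, threshold, refractory_s=0.2):
    """Return times (s) of rising crossings above `threshold`, ignoring
    crossings within `refractory_s` of the previous accepted trigger."""
    triggers, last = [], None
    for i in range(1, len(samples)):
        if samples[i - 1] < threshold <= samples[i]:
            t = i / rate_hz
            if last is None or t - last >= refractory_s:
                triggers.append(t)
                last = t
    return triggers

# Synthetic 1 kHz signal with two "heart sounds" at 0.5 s and 1.5 s.
sig = [0.0] * 2000
for center in (500, 1500):
    for k in range(20):
        sig[center + k] = 1.0
trig = detect_triggers(sig, rate_hz=1000, threshold=0.5)
```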

  14. A multi-purpose open-source triggering platform for magnetic resonance.

    PubMed

    Ruytenberg, T; Webb, A G; Beenakker, J W M

    2014-10-01

    Many MR scans need to be synchronised with external events such as the cardiac or respiratory cycles. For common physiological functions commercial trigger equipment exists, but for more experimental inputs no such equipment is available. This paper describes the design of a multi-purpose open-source trigger platform for MR systems. The heart of the system is an open-source Arduino Due microcontroller. This microcontroller samples an analogue input and digitally processes these data to determine the trigger. The output of the microcontroller is programmed to mimic a physiological signal, which is fed into the electrocardiogram (ECG) or pulse oximeter port of the MR scanner. The microcontroller is connected to a Bluetooth dongle that allows wireless monitoring and control outside the scanner room. The device can be programmed to generate a trigger based on various types of input. As one example, this paper describes how it can be used as an acoustic cardiac triggering unit. For this, a plastic stethoscope is connected to a microphone, which is used as an input for the system. This test setup was used to acquire retrospectively-triggered cardiac scans in ten volunteers. Analysis showed that this platform produces a reliable trigger (>99% of triggers are correct) with a small average variation of 8 ms between the exact trigger points. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. CognitionMaster: an object-based image analysis framework

    PubMed Central

    2013-01-01

    Background Automated image analysis methods are becoming more and more important for extracting and quantifying image features in microscopy-based biomedical studies, and several commercial or open-source tools are available. However, most of the approaches rely on pixel-wise operations, a concept that has limitations when high-level object features and relationships between objects are studied and when user-interactivity on the object level is desired. Results In this paper we present an open-source software that facilitates the analysis of content features and object relationships by using objects, rather than individual pixels, as the basic processing unit. Our approach also enables users without programming knowledge to compose "analysis pipelines" that exploit the object-level approach. We demonstrate the design and use of example pipelines for immunohistochemistry-based cell proliferation quantification in breast cancer and for two-photon fluorescence microscopy data on bone-osteoclast interaction, which underline the advantages of the object-based concept. Conclusions We introduce an open source software system that offers object-based image analysis. The object-based concept allows for straightforward development of object-related interactive or fully automated image analysis solutions. The presented software may therefore serve as a basis for various applications in the field of digital image analysis. PMID:23445542
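The object-based concept can be illustrated in miniature: label the connected components of a binary mask as objects, then compute features (area, centroid) per object rather than per pixel. This sketch is not the CognitionMaster API; all function names and the mask are invented for illustration.

```python
def label_objects(mask):
    """4-connected component labeling; returns {label: [(r, c), ...]}."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    objects, next_label = {}, 1
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not labels[r][c]:
                stack, pixels = [(r, c)], []
                labels[r][c] = next_label
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] \
                                and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            stack.append((ny, nx))
                objects[next_label] = pixels
                next_label += 1
    return objects

def object_features(objects):
    """Per-object area and centroid -- object-level, not pixel-level."""
    return {
        lab: {"area": len(px),
              "centroid": (sum(p[0] for p in px) / len(px),
                           sum(p[1] for p in px) / len(px))}
        for lab, px in objects.items()
    }
```

Once pixels are grouped into objects, downstream steps (filtering by size, measuring neighbours, interactive selection) operate on the object dictionary instead of the raster.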

  16. The SciELO Brazilian Scientific Journal Gateway and Open Archives; Usability of Hypermedia Educational e-Books; Building Upon the MyLibrary Concept To Better Meet the Information Needs of College Students; Open Archives and UK Institutions; The Utah Digital Newspapers Project; Examples of Practical Digital Libraries.

    ERIC Educational Resources Information Center

    Marcondes, Carlos Henrique; Sayao, Luis Fernando; Diaz, Paloma; Gibbons, Susan; Pinfield, Stephen; Kenning, Arlitsch; Edge, Karen; Yapp, L.; Witten, Ian H.

    2003-01-01

    Includes six articles that focus on practical uses of technologies developed from digital library research in the areas of education and scholarship reflecting the international impact of digital library research initiatives. Includes the Scientific Electronic Library Online (SciELO) (Brazil); the National Science Foundation (NSF) (US); the Joint…

  17. Using open source data for flood risk mapping and management in Brazil

    NASA Astrophysics Data System (ADS)

    Whitley, Alison; Malloy, James; Chirouze, Manuel

    2013-04-01

    Worldwide, the frequency and severity of major natural disasters, particularly flooding, has increased. Concurrently, countries such as Brazil are experiencing rapid socio-economic development with growing and increasingly concentrated populations, particularly in urban areas. Hence, it is unsurprising that Brazil has experienced a number of major floods in the past 30 years, such as the January 2011 floods which killed 900 people and resulted in significant economic losses of approximately 1 billion US dollars. Understanding, mitigating and even preventing flood risk is a high priority. There is a demand for flood models in many developing economies worldwide for a range of uses including risk management, emergency planning and provision of insurance solutions. However, developing them can be expensive. With an increasing supply of freely available, open source data, the costs can be significantly reduced, making the tools required for natural hazard risk assessment more accessible. By presenting a flood model developed for eight urban areas of Brazil as part of a collaboration between JBA Risk Management and Guy Carpenter, we explore the value of open source data and demonstrate its usability in a business context within the insurance industry. We begin by detailing the open source data available and compare its suitability to commercially available equivalents for datasets including digital terrain models and river gauge records. We present flood simulation outputs in order to demonstrate the impact of the choice of dataset on the results obtained and its use in a business context. Via use of the 2D hydraulic model JFlow+, our examples also show how advanced modelling techniques can be used on relatively crude datasets to obtain robust and good quality results. In combination with accessible, standard-specification GPU technology and open source data, use of JFlow+ has enabled us to produce large-scale hazard maps suitable for business use and emergency planning, such as those we show for Brazil.
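As a toy contrast to full 2D hydraulics, the simplest flood-extent estimate from a digital terrain model is a connected spread from a source cell to every neighbouring cell below a given water level. This illustrates what open terrain data enables, not how JFlow+ works (JFlow+ solves the 2D hydraulic equations); the grid, source cell and level are invented.

```python
def flood_extent(dtm, source, level):
    """Return the set of (row, col) cells connected to `source` whose
    elevation lies below the water `level` (4-connected spreading)."""
    h, w = len(dtm), len(dtm[0])
    if dtm[source[0]][source[1]] >= level:
        return set()
    flooded, stack = {source}, [source]
    while stack:
        r, c = stack.pop()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in flooded \
                    and dtm[nr][nc] < level:
                flooded.add((nr, nc))
                stack.append((nr, nc))
    return flooded

# Tiny invented terrain grid (elevations in m) with a low channel.
dtm = [
    [5.0, 5.0, 5.0, 5.0],
    [5.0, 1.0, 1.5, 5.0],
    [5.0, 1.2, 4.0, 5.0],
    [5.0, 5.0, 5.0, 5.0],
]
cells = flood_extent(dtm, source=(1, 1), level=2.0)
```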

  18. Digital Badging at The Open University: Recognition for Informal Learning

    ERIC Educational Resources Information Center

    Law, Patrina

    2015-01-01

    Awarding badges to recognise achievement is not a new development. Digital badging now offers new ways to recognise learning and motivate learners, providing evidence of skills and achievements in a variety of formal and informal settings. Badged open courses (BOCs) were piloted in various forms by the Open University (OU) in 2013 to provide a…

  19. Digital Geology from field to 3D modelling and Google Earth virtual environment: methods and goals from the Furlo Gorge (Northern Apennines - Italy)

    NASA Astrophysics Data System (ADS)

    De Donatis, Mauro; Susini, Sara

    2014-05-01

    A new map of the Furlo Gorge was surveyed and elaborated digitally. In every step of the work we used digital tools such as mobile GIS and 3D modelling software. Phase 1: Starting in the lab, we planned the field project; base cartography, forms and the database were designed in the way we thought best for collecting and storing data in order to produce a digital n-dimensional map. Bedding attitudes, outcrop sketches and descriptions, stratigraphic logs, structural features and other information were collected and organised in a structured database using a rugged tablet PC, a GPS receiver, digital cameras and, later, an Android smartphone with some in-house developed survey apps. A new mobile GIS (BeeGIS) was developed starting from an open source GIS (uDig): a number of tools such as GPS connection, pen-drawing annotations, geonotes, a fieldbook, photo synchronization and geotagging were originally designed. Phase 2: After some months of digital field work, all the information was elaborated to draw a geologic map in a GIS environment. For this we used both commercial (ArcGIS) and open source (gvSIG, QGIS, uDig) software without major technical problems. Phase 3: When we reached the step of building a 3D model (using 3DMove), passing through the assisted drawing of cross-sections (2DMove), we discovered a number of problems in the interpretation of geological structures (thrusts, normal faults) and, moreover, in the interpretation of stratigraphic thicknesses and boundaries and their relationships with topography. Phase 4: Before an "on-armchair" redrawing of the map, we decided to go back to the field and check directly what was wrong. Two main advantages came from this: (1) the mistakes we found could be reinterpreted and corrected directly in the field, with all the digital tools we needed at hand; (2) previous interpretations could be stored in GIS layers, keeping a record of the previous work (including its mistakes).
Phase 5: A 3D model built with 3DMove is already almost self-consistent in showing the structural features of the study area. The work was not straightforward, but the result is more than satisfying, even if some limitations were not solved (i.e. visualisation of bedding attitudes). Geological maps are fundamental for knowledge transfer among experts but, combined with innovative digital methods from survey to 3D model, this knowledge can reach a much larger number of people, allowing cultural growth and the establishment of a wider awareness of the Earth and the environment.

  20. Virtual shelves in a digital library: a framework for access to networked information sources.

    PubMed

    Patrick, T B; Springer, G K; Mitchell, J A; Sievert, M E

    1995-01-01

    Develop a framework for collections-based access to networked information sources that addresses the problem of location-dependent access to information sources. This framework uses a metaphor of a virtual shelf. A virtual shelf is a general-purpose server that is dedicated to a particular information subject class. The identifier of one of these servers identifies its subject class. Location-independent call numbers are assigned to information sources. Call numbers are based on standard vocabulary codes. The call numbers are first mapped to the location-independent identifiers of virtual shelves. When access to an information resource is required, a location directory provides a second mapping of these location-independent server identifiers to actual network locations. The framework has been implemented in two different systems. One system is based on the Open Software Foundation/Distributed Computing Environment and the other is based on the World Wide Web. This framework applies in new ways traditional methods of library classification and cataloging. It is compatible with two traditional styles of selecting information: searching and browsing. Traditional methods may be combined with new paradigms of information searching that will be able to take advantage of the special properties of digital information. Cooperation between the library-information science community and the informatics community can provide a means for a continuing application of the knowledge and techniques of library science to the new problems of networked information sources.
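The two mappings the framework describes can be sketched as a pair of lookup tables: call numbers resolve to virtual-shelf identifiers, and a location directory resolves those to current network addresses, so only the directory changes when a server moves. All identifiers and addresses below are invented for illustration.

```python
# Call numbers (based on standard vocabulary codes) -> virtual shelves.
# This mapping is location-independent and stable.
CALL_TO_SHELF = {
    "C14.280": "shelf:cardiology",
    "C10.228": "shelf:neurology",
}

# Location directory: virtual shelves -> actual network locations.
# Only this table needs updating when a server is relocated.
SHELF_TO_LOCATION = {
    "shelf:cardiology": "http://host-a.example.org/cardio/",
    "shelf:neurology": "http://host-b.example.org/neuro/",
}

def resolve(call_number):
    """Resolve a call number to a network location via its virtual shelf."""
    shelf = CALL_TO_SHELF[call_number]
    return SHELF_TO_LOCATION[shelf]
```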

  1. Digital map of aquifer boundary for the High Plains aquifer in parts of Colorado, Kansas, Nebraska, New Mexico, Oklahoma, South Dakota, Texas, and Wyoming

    USGS Publications Warehouse

    Qi, Sharon

    2010-01-01

    This digital data set represents the extent of the High Plains aquifer in the central United States. The extent of the High Plains aquifer covers 174,000 square miles in eight states: Colorado, Kansas, Nebraska, New Mexico, Oklahoma, South Dakota, Texas, and Wyoming. This data set represents a compilation of information from digital and paper sources and personal communication. This boundary is an update to the boundary published in U.S. Geological Survey Professional Paper 1400-B, and this report supersedes Open-File Report 99-267. The purpose of this data set is to refine and update the extent of the High Plains aquifer based on currently available information. This data set represents a compilation of arcs from a variety of sources and scales that represent the 174,000 square-mile extent of the High Plains aquifer within the eight states. Where updated information was not available, the original boundary extent defined by OFR 99-267 was retained. The citations for the sources in each State are listed in the 00README.txt file. The boundary also contains internal polygons, or 'islands', that represent the areas within the aquifer boundary where the aquifer is not present due to erosion or non-deposition. The datasets that pertain to this report can be found on the U.S. Geological Survey's NSDI (National Spatial Data Infrastructure) Node; links are provided in the sidebar.

  2. Motmot, an open-source toolkit for realtime video acquisition and analysis.

    PubMed

    Straw, Andrew D; Dickinson, Michael H

    2009-07-22

    Video cameras sense passively from a distance, offer a rich information stream, and provide intuitively meaningful raw data. Camera-based imaging has thus proven critical for many advances in neuroscience and biology, with applications ranging from cellular imaging of fluorescent dyes to tracking of whole-animal behavior at ecologically relevant spatial scales. Here we present 'Motmot': an open-source software suite for acquiring, displaying, saving, and analyzing digital video in real-time. At the highest level, Motmot is written in the Python computer language. The large amounts of data produced by digital cameras are handled by low-level, optimized functions, usually written in C. This high-level/low-level partitioning and use of select external libraries allow Motmot, with only modest complexity, to perform well as a core technology for many high-performance imaging tasks. In its current form, Motmot allows for: (1) image acquisition from a variety of camera interfaces (package motmot.cam_iface), (2) the display of these images with minimal latency and computer resources using wxPython and OpenGL (package motmot.wxglvideo), (3) saving images with no compression in a single-pass, low-CPU-use format (package motmot.FlyMovieFormat), (4) a pluggable framework for custom analysis of images in realtime and (5) firmware for an inexpensive USB device to synchronize image acquisition across multiple cameras, with analog input, or with other hardware devices (package motmot.fview_ext_trig). These capabilities are brought together in a graphical user interface, called 'FView', allowing an end user to easily view and save digital video without writing any code. One plugin for FView, 'FlyTrax', which tracks the movement of fruit flies in real-time, is included with Motmot, and is described to illustrate the capabilities of FView. Motmot enables realtime image processing and display using the Python computer language. 
In addition to the provided complete applications, the architecture allows the user to write relatively simple plugins, which can accomplish a variety of computer vision tasks and be integrated within larger software systems. The software is available at http://code.astraw.com/projects/motmot.

  3. NASA's Big Earth Data Initiative Accomplishments

    NASA Technical Reports Server (NTRS)

    Klene, Stephan A.; Pauli, Elisheva; Pressley, Natalie N.; Cechini, Matthew F.; McInerney, Mark

    2017-01-01

    The goal of NASA's effort for BEDI is to improve the usability, discoverability, and accessibility of Earth Observation data in support of societal benefit areas. Accomplishments: In support of BEDI goals, datasets have been entered into the Common Metadata Repository (CMR), made available via the Open-source Project for a Network Data Access Protocol (OPeNDAP), have had a Digital Object Identifier (DOI) registered, and, to support fast visualization, many layers have been added to the Global Imagery Browse Services (GIBS).

  4. NASA's Big Earth Data Initiative Accomplishments

    NASA Astrophysics Data System (ADS)

    Klene, S. A.; Pauli, E.; Pressley, N. N.; Cechini, M. F.; McInerney, M.

    2017-12-01

    The goal of NASA's effort for BEDI is to improve the usability, discoverability, and accessibility of Earth Observation data in support of societal benefit areas. Accomplishments: In support of BEDI goals, datasets have been entered into the Common Metadata Repository (CMR), made available via the Open-source Project for a Network Data Access Protocol (OPeNDAP), have had a Digital Object Identifier (DOI) registered, and, to support fast visualization, many layers have been added to the Global Imagery Browse Services (GIBS).

  5. IRAF and STSDAS under the new ALPHA architecture

    NASA Technical Reports Server (NTRS)

    Zarate, N. R.

    1992-01-01

    Digital's next generation RISC architecture, known as ALPHA, presents many IRAF system portability questions and challenges to both site managers and end users. DEC promises to support the ULTRIX, VMS, and OSF/1 operating systems, which should allow IRAF to be ported to the new architecture at either the program executable level (using VEST), or at the source level, where IRAF can be tuned for greater performance. These notes highlight some of the details of porting IRAF to OpenVMS on the ALPHA architecture.

  6. The AAPT/ComPADRE Digital Library: Supporting Physics Education at All Levels

    NASA Astrophysics Data System (ADS)

    Mason, Bruce

    For more than a decade, the AAPT/ComPADRE Digital Library has been providing online resources, tools, and services that support broad communities of physics faculty and physics education researchers. This online library provides vetted resources for teachers and students, an environment for authors and developers to share their work, and collaboration tools for a diverse set of users. This talk will focus on the recent collaborations and developments being hosted on or developed with ComPADRE. Examples include PhysPort, which makes the tools and resources developed by physics education researchers more accessible; the Open Source Physics project, which expands the use of numerical modeling at all levels of physics education; and PICUP, a community for those promoting computation in the physics curriculum. NSF-0435336, 0532798, 0840768, 0937836.

  7. Digital Geologic Map of the Wallace 1:100,000 Quadrangle, Idaho

    USGS Publications Warehouse

    Lewis, Reed S.; Burmester, Russell F.; McFaddan, Mark D.; Derkey, Pamela D.; Oblad, Jon R.

    1999-01-01

    The geology of the Wallace 1:100,000 quadrangle, Idaho, was compiled by Reed S. Lewis in 1997, primarily from published materials: Foster's 1983 data, Harrison's unpublished mapping done from 1975 to 1985, Hietenan's 1963, 1967, 1968, and 1984 mapping, Hobbs and others' 1965 mapping, and Vance's 1981 mapping, supplemented by eight weeks of field mapping by Reed S. Lewis, Russell F. Burmester, and Mark D. McFaddan in 1997 and 1998. This geologic map information was inked onto a 1:100,000-scale greenline mylar of the topographic base map for input into a geographic information system (GIS). The resulting digital geologic map GIS can be queried in many ways to produce a variety of geologic maps. Digital base map data files (topography, roads, towns, rivers and lakes, etc.) are not included; they may be obtained from a variety of commercial and government sources. This database is not meant to be used or displayed at any scale larger than 1:100,000 (e.g., 1:62,500 or 1:24,000). The map area is located in north Idaho. The primary sources of map data are shown in figure 2 and additional sources are shown in figure 3. This open-file report describes the geologic map units, the methods used to convert the geologic map data into a digital format, and the Arc/Info GIS file structures and relationships, and explains how to download the digital files from the U.S. Geological Survey public access World Wide Web site on the Internet. Mapping and compilation were completed by the Idaho Geological Survey under contract with the U.S. Geological Survey (USGS) office in Spokane, Washington. The authors would like to acknowledge the help of the following field assistants: Josh Goodman, Yvonne Issak, Jeremy Johnson and Kevin Myer. Don Winston provided help with our ongoing study of Belt stratigraphy, and Tom Frost assisted with logistical problems and sample collection. Manuscript reviews by Steve Box, Tom Frost, and Brian White are greatly appreciated.
We wish to thank Karen S. Bolm of the USGS for reviewing the digital files.

  8. A GPU accelerated PDF transparency engine

    NASA Astrophysics Data System (ADS)

    Recker, John; Lin, I.-Jong; Tastl, Ingeborg

    2011-01-01

    As commercial printing presses become faster, cheaper, and more efficient, so too must the Raster Image Processors (RIPs) that prepare data for them to print. Digital press RIPs, however, are challenged on the one hand to keep pace with the ever-increasing print performance of the latest digital presses, and on the other to process increasingly complex documents with transparent layers and embedded ICC profiles. This paper explores the challenges encountered when implementing a GPU-accelerated driver for the open source Ghostscript Adobe PostScript and PDF language interpreter, targeted at accelerating PDF transparency for high-speed commercial presses. It further describes our solution, including an image memory manager for tiling input and output images and documents, a PDF-compatible multiple-image-layer blending engine, and a GPU-accelerated ICC v4 compatible color transformation engine. The result, we believe, is the foundation for a scalable, efficient, distributed RIP system that can meet current and future RIP requirements for a wide range of commercial digital presses.

  9. A multi-source dataset of urban life in the city of Milan and the Province of Trentino.

    PubMed

    Barlacchi, Gianni; De Nadai, Marco; Larcher, Roberto; Casella, Antonio; Chitic, Cristiana; Torrisi, Giovanni; Antonelli, Fabrizio; Vespignani, Alessandro; Pentland, Alex; Lepri, Bruno

    2015-01-01

    The study of socio-technical systems has been revolutionized by the unprecedented amount of digital records that are constantly being produced by human activities such as accessing Internet services, using mobile devices, and consuming energy and knowledge. In this paper, we describe the richest open multi-source dataset ever released on two geographical areas. The dataset is composed of telecommunications, weather, news, social networks and electricity data from the city of Milan and the Province of Trentino. The unique multi-source composition of the dataset makes it an ideal testbed for methodologies and approaches aimed at tackling a wide range of problems including energy consumption, mobility planning, tourist and migrant flows, urban structures and interactions, event detection, urban well-being and many others.

  10. VHDL implementation of feature-extraction algorithm for the PANDA electromagnetic calorimeter

    NASA Astrophysics Data System (ADS)

    Guliyev, E.; Kavatsyuk, M.; Lemmens, P. J. J.; Tambave, G.; Löhner, H.; Panda Collaboration

    2012-02-01

    A simple, efficient, and robust feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA spectrometer at FAIR, Darmstadt, is implemented in VHDL for a commercial 16-bit, 100 MHz sampling ADC. The source code is available as an open-source project and is adaptable to other projects and sampling ADCs. Best performance with different types of signal sources can be achieved through flexible parameter selection. The on-line data processing in the FPGA makes it possible to construct an almost dead-time-free data acquisition system, which has been successfully evaluated as a first step towards building a complete trigger-less readout chain. Prototype setups are studied to determine the dead time of the implemented algorithm, the rate of false triggering, timing performance, and event correlations.
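The essence of such feature extraction on a sampling ADC is to recover a timestamp and an amplitude from each digitized pulse. A minimal Python sketch of that idea, with baseline length, threshold, and trace values chosen for illustration (not the PANDA firmware's actual parameters or algorithm):

```python
def extract_features(samples, baseline_len=8, threshold=50):
    """Detect a pulse in a sampled ADC trace and return (time, amplitude).

    Estimate the baseline from the first `baseline_len` samples, take the
    index of the first threshold crossing as the timestamp, and the pulse
    maximum above baseline as an energy proxy. Returns None if no pulse.
    """
    baseline = sum(samples[:baseline_len]) / baseline_len
    crossing = None
    for i, s in enumerate(samples):
        if s - baseline > threshold:
            crossing = i
            break
    if crossing is None:
        return None
    amplitude = max(samples[crossing:]) - baseline
    return crossing, amplitude

# A synthetic trace: flat baseline at 100 ADC counts, then a pulse.
trace = [100] * 8 + [120, 200, 400, 380, 250, 150, 110, 100]
print(extract_features(trace))  # → (9, 300.0)
```

In the FPGA this runs continuously on the sample stream, which is what makes the dead-time-free acquisition described above possible.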

  11. A multi-source dataset of urban life in the city of Milan and the Province of Trentino

    NASA Astrophysics Data System (ADS)

    Barlacchi, Gianni; de Nadai, Marco; Larcher, Roberto; Casella, Antonio; Chitic, Cristiana; Torrisi, Giovanni; Antonelli, Fabrizio; Vespignani, Alessandro; Pentland, Alex; Lepri, Bruno

    2015-10-01

    The study of socio-technical systems has been revolutionized by the unprecedented amount of digital records that are constantly being produced by human activities such as accessing Internet services, using mobile devices, and consuming energy and knowledge. In this paper, we describe the richest open multi-source dataset ever released on two geographical areas. The dataset is composed of telecommunications, weather, news, social networks and electricity data from the city of Milan and the Province of Trentino. The unique multi-source composition of the dataset makes it an ideal testbed for methodologies and approaches aimed at tackling a wide range of problems including energy consumption, mobility planning, tourist and migrant flows, urban structures and interactions, event detection, urban well-being and many others.

  12. A multi-source dataset of urban life in the city of Milan and the Province of Trentino

    PubMed Central

    Barlacchi, Gianni; De Nadai, Marco; Larcher, Roberto; Casella, Antonio; Chitic, Cristiana; Torrisi, Giovanni; Antonelli, Fabrizio; Vespignani, Alessandro; Pentland, Alex; Lepri, Bruno

    2015-01-01

    The study of socio-technical systems has been revolutionized by the unprecedented amount of digital records that are constantly being produced by human activities such as accessing Internet services, using mobile devices, and consuming energy and knowledge. In this paper, we describe the richest open multi-source dataset ever released on two geographical areas. The dataset is composed of telecommunications, weather, news, social networks and electricity data from the city of Milan and the Province of Trentino. The unique multi-source composition of the dataset makes it an ideal testbed for methodologies and approaches aimed at tackling a wide range of problems including energy consumption, mobility planning, tourist and migrant flows, urban structures and interactions, event detection, urban well-being and many others. PMID:26528394

  13. Advantages and Disadvantages in Image Processing with Free Software in Radiology.

    PubMed

    Mujika, Katrin Muradas; Méndez, Juan Antonio Juanes; de Miguel, Andrés Framiñan

    2018-01-15

    Currently, there are sophisticated applications that make it possible to visualize medical images and even to manipulate them. These software applications are of great interest, both from a teaching and a radiological perspective. In addition, some of these applications are known as Free Open Source Software because they are free and their source code is freely available, so they can be easily obtained and run even on personal computers. Two examples of free open source software are OsiriX Lite® and 3D Slicer®. However, these free applications have limitations in their use. For the radiological field, manipulating and post-processing images is increasingly important; consequently, sophisticated computing tools that combine software and hardware to process medical images are needed. In radiology, graphic workstations allow their users to process, review, analyse, communicate and exchange multidimensional digital images acquired with different image-capturing radiological devices, basically CT (Computerised Tomography), MRI (Magnetic Resonance Imaging), PET (Positron Emission Tomography), etc. Nevertheless, the programs included in these workstations come at a high cost that depends on the software provider and is subject to its norms and requirements. With this study, we aim to present the advantages and disadvantages of these radiological image visualization systems in the advanced management of radiological studies. We compare the features of the VITREA2® and AW VolumeShare 5® radiology workstations with free open source software applications like OsiriX® and 3D Slicer®, with examples from specific studies.

  14. High Sensitive Scintillation Observations At Very Low Frequencies

    NASA Astrophysics Data System (ADS)

    Konovalenko, A. A.; Falkovich, I. S.; Kalinichenko, N. N.; Olyak, M. R.; Lecacheux, A.; Rosolen, C.; Bougeret, J.-L.; Rucker, H. O.; Tokarev, Yu.

    The observation of interplanetary scintillations of compact radio sources is a powerful method of solar wind diagnostics. This method has been developed mainly at decimeter and meter wavelengths. New possibilities are opening at extremely low frequencies (decameter waves), especially at large elongations. This approach is now being actively developed using the highly effective decameter antennas UTR-2, URAN, and the Nancay Decameter Array. A new class of back-end facilities, such as high-dynamic-range, high-resolution digital spectral processors, together with a dynamic-spectra-based analysis approach, gives us new opportunities to distinguish ionospheric from interplanetary scintillations and to observe a large number of radio sources with different angular sizes and elongations, even for rather weak objects.
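The strength of the intensity fluctuations observed this way is conventionally summarized by the scintillation index, the ratio of the standard deviation of intensity to its mean. A minimal helper, illustrative only and not code from the UTR-2/URAN processing chain:

```python
from math import sqrt

def scintillation_index(intensities):
    """Scintillation index m = sigma_I / <I>: the standard measure of
    intensity-fluctuation strength in interplanetary scintillation work."""
    n = len(intensities)
    mean = sum(intensities) / n
    var = sum((x - mean) ** 2 for x in intensities) / n
    return sqrt(var) / mean

# Intensity samples fluctuating +/-10% about the mean give m ~ 0.1.
print(round(scintillation_index([0.9, 1.1, 0.9, 1.1]), 3))  # → 0.1
```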

  15. Exploring TechQuests Through Open Source and Tools That Inspire Digital Natives

    NASA Astrophysics Data System (ADS)

    Hayden, K.; Ouyang, Y.; Kilb, D.; Taylor, N.; Krey, B.

    2008-12-01

    "There is little doubt that K-12 students need to understand and appreciate the Earth on which they live. They can achieve this understanding only if their teachers are well prepared". Dan Barstow, Director of Center for Earth and Space Science Education at TERC. The approach of San Diego County's Cyberinfrastructure Training, Education, Advancement, and Mentoring (SD Cyber-TEAM) project is to build understanding of Earth systems for middle school teachers and students through a collaborative that has engaged the scientific community in the use of cyber-based tools and environments for learning. The SD Cyber-TEAM has used Moodle, an open-source course management system with social networking tools, to engage digital-native students and their teachers in collaboration and sharing of ideas and research related to Earth science. Teachers participate in online professional dialog through chat, wikis, blogs, forums, journals and other tools, and choose the tools that best fit their classrooms. The use of Moodle during the Summer Cyber Academy developed a cyber-collaboratory environment where teaching strategies were discussed, supported, and actualized by participants. These experiences supported digital immigrants (teachers) in adapting teaching strategies using technologies that are most attractive and familiar to students (digital natives). A new study by the National School Boards Association and Grunwald Associates LLC indicated that "the online behaviors of U.S. teens and 'tweens shows that 96 percent of students with online access use social networking technologies, such as chatting, text messaging, blogging, and visiting online communities such as Facebook, MySpace, and Webkinz". While SD Cyber-TEAM teachers implement TechQuests in classrooms, they use these social networking elements to capture student interest and address the needs of digital natives.
Through the Moodle environment, teachers have explored a variety of learning objects called TechQuests, to support classroom instruction previously outlined through a textbook. Project classrooms have participated in videoconferences over high-speed networks and through satellite connections with experts in the field investigating scientific data found in the CA State Park of Anza Borrego. Other engaging tools include: An Interactive Epicenter Locator Tool developed through the project in collaboration with the Scripps Institution of Oceanography to engage students in the use of data to determine earthquake epicenters during hands on investigations, and a TechQuest activity where GoogleEarth allows students to explore geographic locations and scientific data.
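The Interactive Epicenter Locator exercise mentioned above boils down to finding the point most consistent with per-station distance estimates (as students derive from S-P arrival-time differences). A hedged sketch of that idea as a simple grid search; the station coordinates, distances, and function name are hypothetical, not the classroom tool's actual code:

```python
from math import hypot

def locate_epicenter(stations, xmax=10, ymax=10):
    """Grid-search an epicenter from per-station distance estimates.

    Each entry in `stations` is ((x, y), estimated_distance). We return
    the integer grid point minimizing the total distance misfit.
    """
    best, best_err = None, float("inf")
    for x in range(xmax + 1):
        for y in range(ymax + 1):
            err = sum(abs(hypot(x - sx, y - sy) - d)
                      for (sx, sy), d in stations)
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Three hypothetical stations with exact distances to an event at (4, 3).
stations = [((0, 0), 5.0), ((10, 0), hypot(6, 3)), ((0, 10), hypot(4, 7))]
print(locate_epicenter(stations))  # → (4, 3)
```

The hands-on version is the same triangulation students do with compasses on a paper map, just expressed as a misfit minimization.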

  16. Diagnostic pathology in 2012: development of digital pathology in an open access journal

    PubMed Central

    2013-01-01

    Herein we describe and interpret the digital world of diagnostic surgical pathology, taking Diagnostic Pathology, the leading open access journal in pathology, as an example. Virtual slide: http://www.diagnosticpathology.diagnomx.eu/vs/1944221953867351 PMID:23305209

  17. A novel design for sap flux data acquisition in large research plots using open source components

    NASA Astrophysics Data System (ADS)

    Hawthorne, D. A.; Oishi, A. C.

    2017-12-01

    Sap flux sensors are a widely-used tool for estimating in-situ, tree-level transpiration rates. These probes are installed in the stems of multiple trees within a study area and are typically left in place throughout the year. Sensors vary in their design and theory of operation, but all require electrical power for a heating element and produce at least one analog signal that must be digitized for storage. There are two topologies traditionally adopted to energize these sensors and gather the data from them. In one, a single data logger and power source are used. Dedicated cables radiate out from the logger to supply power to each of the probes and retrieve analog signals. In the other layout, a standalone data logger is located at each monitored tree. Batteries must then be distributed throughout the plot to service these loggers. We present a hybrid solution based on industrial control systems that employs a central data logger and battery, but co-locates digitizing hardware with the sensors at each tree. Each hardware node is able to communicate and share power over wire links with neighboring nodes. The resulting network provides a fault-tolerant path between the logger and each sensor. The approach is optimized to limit disturbance of the study plot, protect signal integrity, and enhance system reliability. This open-source implementation is built on the Arduino micro-controller system and employs the RS485 and Modbus communications protocols. It is supported by laptop-based management software coded in Python. The system is designed to be readily fabricated and programmed by non-experts. It works with a variety of sap flux measurement techniques and can interface with additional environmental sensors.
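Because the nodes exchange data over RS485 using Modbus, every frame on the wire carries a CRC-16/MODBUS checksum for integrity. The following is the standard, published Modbus CRC algorithm, shown as a sketch of what the node firmware and Python management software must both compute (it is not code from the system described):

```python
def modbus_crc(frame):
    """CRC-16/MODBUS: init 0xFFFF, reflected polynomial 0xA001.
    The result is appended to every Modbus RTU frame, low byte first."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

# Read-holding-registers request: unit 1, function 3, register 0, count 1.
request = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x01])
crc = modbus_crc(request)
frame = request + bytes([crc & 0xFF, crc >> 8])  # CRC low byte first
print(frame.hex())  # → 010300000001840a
```

A receiving node recomputes the CRC over the payload and discards the frame on mismatch, which is part of what makes the shared-bus network fault-tolerant.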

  18. A Stigmergy Collaboration Approach in the Open Source Software Developer Community

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Xiaohui; Pullum, Laura L; Treadwell, Jim N

    2009-01-01

    The communication model of some self-organized online communities is significantly different from that of traditional social-network-based communities. It is problematic to use social network analysis to analyze the collaboration structure and emergent behaviors of these communities because they lack peer-to-peer connections. Stigmergy theory provides an explanation of the collaboration model of these communities. In this research, we present a stigmergy approach for building an agent-based simulation of the collaboration model in the open source software (OSS) developer community. We used a group of actors who collaborate on OSS projects through forums as our frame of reference and investigated how the choices actors make in contributing their work to the projects determine the global status of the whole OSS project. In our simulation, forum posts serve as the digital pheromone, and a modified Pierre-Paul Grasse pheromone model is used to compute each developer agent's behavior-selection probability.
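Pheromone-driven behavior selection can be illustrated with a minimal roulette-wheel sketch. The evaporation rate, project names, and two-function structure below are assumptions for illustration only; the paper's modified Grasse model is richer than this:

```python
import random

def evaporate(pheromone, rate=0.1):
    """Decay each project's pheromone (forum-post intensity) per tick,
    so stale discussions lose influence over agent choices."""
    return {p: level * (1 - rate) for p, level in pheromone.items()}

def select_project(pheromone, rng):
    """Roulette-wheel choice: an agent picks a project with probability
    proportional to its current pheromone level."""
    total = sum(pheromone.values())
    r = rng.random() * total
    for project, level in pheromone.items():
        r -= level
        if r <= 0:
            return project
    return project  # guard against floating-point leftovers

pheromone = {"projA": 30.0, "projB": 60.0, "projC": 10.0}
rng = random.Random(42)
picks = [select_project(pheromone, rng) for _ in range(1000)]
counts = {p: picks.count(p) for p in pheromone}
print(counts["projB"] > counts["projA"] > counts["projC"])  # → True
```

Agents reinforce the pheromone of projects they post to, so heavily discussed projects attract yet more contributors until evaporation rebalances the field.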

  19. Apis - a Digital Inventory of Archaeological Heritage Based on Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Doneus, M.; Forwagner, U.; Liem, J.; Sevara, C.

    2017-08-01

    Heritage managers need dynamic spatial inventories of archaeological and cultural heritage that provide them with multipurpose tools to interactively understand information about archaeological heritage within its landscape context. Specifically, linking site information with the respective non-invasive prospection data is of increasing importance, as it allows the educated and knowledgeable heritage manager to assess the inherent uncertainties related to the use and interpretation of remote sensing data. APIS, the archaeological prospection information system of the Aerial Archive of the University of Vienna, is specifically designed to meet these needs. It provides storage and easy access to all data concerning aerial photographs and archaeological sites through a single GIS-based application. Furthermore, APIS has been developed in an open source environment, which allows it to be freely distributed and modified. This combination in a single open source system facilitates an easy workflow for data management, interpretation, storage, and retrieval. APIS and a sample dataset will be released free of charge under a Creative Commons license in the near future.

  20. An open-source, extensible system for laboratory timing and control

    NASA Astrophysics Data System (ADS)

    Gaskell, Peter E.; Thorn, Jeremy J.; Alba, Sequoia; Steck, Daniel A.

    2009-11-01

    We describe a simple system for timing and control, which provides control of analog, digital, and radio-frequency signals. Our system differs from most common laboratory setups in that it is open source, built from off-the-shelf components, synchronized to a common and accurate clock, and connected over an Ethernet network. A simple bus architecture facilitates creating new and specialized devices with only moderate experience in circuit design. Each device operates independently, requiring only an Ethernet network connection to the controlling computer, a clock signal, and a trigger signal. This makes the system highly robust and scalable. The devices can all be connected to a single external clock, allowing synchronous operation of a large number of devices for situations requiring precise timing of many parallel control and acquisition channels. Provided an accurate enough clock, these devices are capable of triggering events separated by one day with near-microsecond precision. We have achieved precisions of ~0.1 ppb (parts per 10^9) over 16 s.
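The quoted precision figures can be sanity-checked with a one-line calculation (an illustrative helper, not part of the system's software): a clock with fractional accuracy of 0.1 ppb accumulates about 1.6 ns of error over the 16 s measurement, and about 8.6 microseconds over a full day, consistent with triggering day-separated events at near-microsecond precision.

```python
def timing_error(accuracy_ppb, interval_s):
    """Worst-case timing error (in seconds) accumulated over an interval
    by a clock with the given fractional accuracy in parts per 10^9."""
    return accuracy_ppb * 1e-9 * interval_s

# ~1.6 ns over the 16 s measurement; ~8.6 us over one day (86400 s).
print(timing_error(0.1, 16), timing_error(0.1, 86400))
```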

  1. Tracking and Quantifying Developmental Processes in C. elegans Using Open-source Tools.

    PubMed

    Dutta, Priyanka; Lehmann, Christina; Odedra, Devang; Singh, Deepika; Pohl, Christian

    2015-12-16

    Quantitatively capturing developmental processes is crucial to derive mechanistic models and key to identify and describe mutant phenotypes. Here protocols are presented for preparing embryos and adult C. elegans animals for short- and long-term time-lapse microscopy and methods for tracking and quantification of developmental processes. The methods presented are all based on C. elegans strains available from the Caenorhabditis Genetics Center and on open-source software that can be easily implemented in any laboratory independently of the microscopy system used. A reconstruction of a 3D cell-shape model using the modelling software IMOD, manual tracking of fluorescently-labeled subcellular structures using the multi-purpose image analysis program Endrov, and an analysis of cortical contractile flow using PIVlab (Time-Resolved Digital Particle Image Velocimetry Tool for MATLAB) are shown. It is discussed how these methods can also be deployed to quantitatively capture other developmental processes in different models, e.g., cell tracking and lineage tracing, tracking of vesicle flow.

  2. The digital code driven autonomous synthesis of ibuprofen automated in a 3D-printer-based robot

    PubMed Central

    Kitson, Philip J; Glatzel, Stefan

    2016-01-01

    An automated synthesis robot was constructed by modifying an open source 3D printing platform. The resulting automated system was used to 3D print reaction vessels (reactionware) of differing internal volumes using polypropylene feedstock via a fused deposition modeling 3D printing approach, and subsequently to make use of these fabricated vessels to synthesize the nonsteroidal anti-inflammatory drug ibuprofen via a consecutive one-pot three-step approach. The synthesis of ibuprofen could be achieved on different scales simply by adjusting the parameters in the robot control software. The software for controlling the synthesis robot was written in the Python programming language and hard-coded for the synthesis of ibuprofen by the method described, opening possibilities for the sharing of validated synthetic ‘programs’ which can run on similar low-cost, user-constructed robotic platforms towards an ‘open-source’ regime in the area of chemical synthesis. PMID:28144350
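The notion of shareable, validated synthesis 'programs' can be sketched as a declarative step list executed by a runner. The step vocabulary and the hardware callback interface below are hypothetical, chosen only to illustrate the concept; they are not the paper's actual control software:

```python
def run_program(steps, hardware):
    """Execute a declarative synthesis 'program' step by step, dispatching
    each (operation, *args) tuple to the matching hardware callback and
    returning a log of what was performed."""
    log = []
    for op, *args in steps:
        hardware[op](*args)
        log.append((op, tuple(args)))
    return log

# A hypothetical three-step sequence in miniature, recorded to a list
# instead of driving real pumps, stirrers, and heaters.
actions = []
hardware = {
    "dispense": lambda reagent, ml: actions.append(f"dispense {ml} ml {reagent}"),
    "heat":     lambda c, minutes: actions.append(f"heat {c} C {minutes} min"),
    "stir":     lambda minutes: actions.append(f"stir {minutes} min"),
}
program = [("dispense", "reagent A", 5), ("heat", 60, 30), ("stir", 10)]
log = run_program(program, hardware)
print(actions)
```

Because the program is plain data, it can be published, validated once, and re-run on any platform exposing the same operations, which is the sharing model the authors point towards.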

  3. Digital geologic map of the Coeur d'Alene 1:100,000 quadrangle, Idaho and Montana

    USGS Publications Warehouse

    digital compilation by Munts, Steven R.

    2000-01-01

    Between 1961 and 1969, Alan Griggs and others conducted fieldwork to prepare a geologic map of the Spokane 1:250,000 quadrangle (Griggs, 1973). Their field observations were posted on paper copies of 15-minute quadrangle maps. In 1999, the USGS contracted with the Idaho Geological Survey to prepare a digital version of the Coeur d’Alene 1:100,000 quadrangle. To facilitate this work, the USGS obtained the field maps prepared by Griggs and others from the USGS Field Records Library in Denver, Colorado. The Idaho Geological Survey (IGS) digitized these maps and used them in its mapping program. The mapping focused on field checks to resolve problems in poorly known areas and in areas of disagreement between adjoining maps. The IGS is currently in the process of preparing a final digital spatial database for the Coeur d’Alene 1:100,000 quadrangle. However, there was an immediate need for a digital version of the geologic map of the Coeur d’Alene 1:100,000 quadrangle, and the data from the field sheets, along with several other sources, were assembled to produce this interim product: the digital geologic map of the Coeur d’Alene 1:100,000 quadrangle, Idaho and Montana. It was compiled from the preliminary digital files prepared by the Idaho Geological Survey, supplemented by data from Griggs (1973) and from digital databases by Bookstrom and others (1999) and Derkey and others (1996). The resulting digital geologic map (GIS) database can be queried in many ways to produce a variety of geologic maps. Digital base map data files (topography, roads, towns, rivers and lakes, etc.) are not included; they may be obtained from a variety of commercial and government sources. This database is not meant to be used or displayed at any scale larger than 1:100,000 (e.g., 1:62,500 or 1:24,000). The digital geologic map graphics (of00-135_map.pdf) that are provided are representations of the digital database. The map area is located in north Idaho.
This open-file report describes the geologic map units, the methods used to convert the geologic map data into a digital format, the ArcInfo GIS file structures and relationships, and explains how to download the digital files from the U.S. Geological Survey public access World Wide Web site on the Internet.
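The kind of attribute query such a GIS database supports can be shown in miniature: filtering map-unit records by field values to derive a themed map. The field names and units below are hypothetical placeholders, not the database's actual schema:

```python
def select_units(attribute_table, **criteria):
    """Return the rows of a geologic-map attribute table whose fields
    match all of the given criteria (a simple attribute query)."""
    return [row for row in attribute_table
            if all(row.get(k) == v for k, v in criteria.items())]

# A toy attribute table with hypothetical unit labels and fields.
units = [
    {"unit": "Yw", "lithology": "quartzite", "era": "Proterozoic"},
    {"unit": "Kg", "lithology": "granite",   "era": "Cretaceous"},
    {"unit": "Yb", "lithology": "siltite",   "era": "Proterozoic"},
]
print([row["unit"] for row in select_units(units, era="Proterozoic")])
```

A real GIS runs the same selection against polygon features and then symbolizes the matching polygons, which is how one database yields many derivative maps.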

  4. Massive stereo-based DTM production for Mars on cloud computers

    NASA Astrophysics Data System (ADS)

    Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Xiong, Si-Ting; Putri, A. R. D.; Walter, S. H. G.; Veitch-Michaelis, J.; Yershov, V.

    2018-05-01

    Digital Terrain Model (DTM) creation is essential to improving our understanding of the formation processes of the Martian surface. Although there have been previous demonstrations of open-source or commercial planetary 3D reconstruction software, planetary scientists are still struggling to create good quality DTMs that meet their science needs, especially when there is a requirement to produce a large number of high quality DTMs using "free" software. In this paper, we describe a new open source system to overcome many of these obstacles by demonstrating results in the context of issues found from experience with several planetary DTM pipelines. We introduce a new fully automated multi-resolution DTM processing chain for NASA Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) and High Resolution Imaging Science Experiment (HiRISE) stereo processing, called the Co-registration Ames Stereo Pipeline (ASP) Gotcha Optimised (CASP-GO), based on the open source NASA ASP. CASP-GO employs tie-point based multi-resolution image co-registration, and Gotcha sub-pixel refinement and densification. The CASP-GO pipeline is used to produce planet-wide CTX and HiRISE DTMs that guarantee global geo-referencing compliance with respect to the High Resolution Stereo Camera (HRSC), and thence to the Mars Orbiter Laser Altimeter (MOLA), providing refined stereo matching completeness and accuracy. All software and good quality products introduced in this paper are being made open-source to the planetary science community through collaboration with NASA Ames, the United States Geological Survey (USGS) and the Jet Propulsion Laboratory (JPL), Advanced Multi-Mission Operations System (AMMOS) Planetary Data System (PDS) Pipeline Service (APPS-PDS4), as well as being browseable and visualisable through the iMars web based Geographic Information System (webGIS) system.
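The co-registration idea at the heart of such a pipeline is to find the offset that maximizes agreement between two images. A toy 1-D correlation sketch of that idea follows; real pipelines such as CASP-GO work on 2-D images with tie points and sub-pixel (Gotcha) refinement, so this is only a conceptual stand-in:

```python
def best_shift(ref, img, max_shift=5):
    """Estimate the integer offset aligning `img` to `ref` by maximizing
    the mean product of overlapping samples (a 1-D correlation search)."""
    def corr(shift):
        pairs = [(ref[i], img[i + shift])
                 for i in range(len(ref))
                 if 0 <= i + shift < len(img)]
        return sum(a * b for a, b in pairs) / len(pairs)
    return max(range(-max_shift, max_shift + 1), key=corr)

# A synthetic 1-D "image" and a copy of it shifted left by two samples.
signal = [0, 0, 1, 5, 9, 5, 1, 0, 0, 0]
shifted = signal[2:] + [0, 0]
print(best_shift(signal, shifted))  # → -2
```

Once the integer offset is known, sub-pixel refinement (the role Gotcha plays in CASP-GO) interpolates around the correlation peak to achieve fractional-pixel alignment.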

  5. The NCAR Digital Asset Services Hub (DASH): Implementing Unified Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Stott, D.; Worley, S. J.; Hou, C. Y.; Nienhouse, E.

    2017-12-01

    The National Center for Atmospheric Research (NCAR) Directorate created the Data Stewardship Engineering Team (DSET) to plan and implement an integrated single entry point for uniform digital asset discovery and access across the organization in order to improve the efficiency of access, reduce costs, and establish the foundation for interoperability with other federated systems. This effort supports new policies included in federal funding mandates, NSF data management requirements, and journal citation recommendations. An inventory during the early planning stage identified diverse asset types across the organization that included publications, datasets, metadata, models, images, and software tools and code. The NCAR Digital Asset Services Hub (DASH) is being developed and phased in this year to improve the quality of users' experiences in finding and using these assets. DASH serves to provide engagement, training, search, and support through the following four nodes (see figure). DASH Metadata: DASH provides resources for creating and cataloging metadata in the NCAR Dialect, a subset of ISO 19115. NMDEdit, an editor based on a European open source application, has been configured for manual entry of NCAR metadata. CKAN, an open source data portal platform, harvests these XML records (along with records output directly from databases) from a Web Accessible Folder (WAF) on GitHub for validation. DASH Search: The NCAR Dialect metadata drives cross-organization search and discovery through CKAN, which provides the display interface for search results. DASH search will establish interoperability by facilitating metadata sharing with other federated systems. DASH Consulting: The DASH Data Curation & Stewardship Coordinator assists with Data Management (DM) Plan preparation and advises on Digital Object Identifiers. The coordinator arranges training sessions on the DASH metadata tools and DM planning, and provides one-on-one assistance as requested. 
    DASH Repository: A repository is under development for NCAR datasets currently not in existing lab-managed archives. The DASH repository will be under NCAR governance and meet Trustworthy Repositories Audit & Certification (TRAC) requirements. This poster will highlight the processes, lessons learned, and current status of the DASH effort at NCAR.
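The harvest-and-validate workflow operates on XML metadata records. A minimal sketch of building one such record with the standard library; the element names and example DOI here are simplified placeholders, not the actual NCAR Dialect / ISO 19115 schema:

```python
import xml.etree.ElementTree as ET

def make_record(title, doi):
    """Build a minimal discovery-metadata record as an XML string.
    Element names are illustrative placeholders only."""
    root = ET.Element("metadata")
    ET.SubElement(root, "title").text = title
    ET.SubElement(root, "identifier", type="DOI").text = doi
    return ET.tostring(root, encoding="unicode")

record = make_record("Example dataset", "10.0000/EXAMPLE")
print(record)
```

In the DASH workflow, records like this are written to a Web Accessible Folder, harvested by CKAN, validated against the dialect's schema, and then indexed for cross-organization search.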

  6. Open Educational Resources

    ERIC Educational Resources Information Center

    McShane, Michael Q.

    2017-01-01

    While digital products have made significant inroads into the educational resources market, textbooks and other print materials still command about 60 percent of sales. But whether print or digital, all of these commercial offerings now face threats from a burgeoning effort to promote "open" resources for education--that is, materials…

  7. The Role of Citizen Science in Risk Mitigation and Disaster Response: A Case Study of 2015 Nepalese Earthquake Using OpenStreetMap

    NASA Astrophysics Data System (ADS)

    Rieger, C.; Byrne, J. M.

    2015-12-01

Citizen science includes networks of ordinary people acting as sensors, observing and recording information for science. OpenStreetMap is one such sensor network, empowering citizens to collaboratively produce a global picture from free geographic information. The success of this open source software is extended by the development of freely usable open databases for the user community. Participating citizens do not require a high level of skill, and final results are processed by professionals following quality assurance protocols before map information is released. OpenStreetMap is in many cases not only the cheapest source of timely maps but often the only source, particularly in developing countries. The emergency response to the recent earthquake in Nepal illustrates the value of rapidly updated geographic information for emergency management, damage assessment, post-disaster response, and future risk mitigation. Local disaster conditions (landslides, road closings, bridge failures, etc.) were documented for local aid workers by citizen scientists working remotely. Satellites and drones provided digital imagery of the disaster zone, and OpenStreetMap participants shared the data from locations around the globe. For the Nepal earthquake, OpenStreetMap also provided volunteers on the ground through its Humanitarian OpenStreetMap Team (HOT), which contributed data to the disaster response via smartphones and laptops. This, combined with global citizen science efforts, provided immediately useful geographic maps to aid workers, including the Red Cross and the Canadian DART team, and to the Nepalese government. As of August 2014, almost 1.7 million users had contributed over 2.5 billion edits to the OpenStreetMap database. With the increasing use of smartphones and GPS-enabled devices and the growing participation in citizen science projects, such data gathering is proving an effective way to contribute as a global citizen. This paper describes the significance of citizen participation through OpenStreetMap in the case of the Nepal earthquake, both for disaster response and for future risk mitigation.

  8. Source-gated transistors for order-of-magnitude performance improvements in thin-film digital circuits

    NASA Astrophysics Data System (ADS)

    Sporea, R. A.; Trainor, M. J.; Young, N. D.; Shannon, J. M.; Silva, S. R. P.

    2014-03-01

Ultra-large-scale integrated (ULSI) circuits have benefited from successive refinements in device architecture, yielding enormous improvements in speed, power efficiency and areal density. In large-area electronics (LAE), however, the basic building block, the thin-film field-effect transistor (TFT), has largely remained static. Now, a device concept with fundamentally different operation, the source-gated transistor (SGT), opens the possibility of unprecedented functionality in future low-cost LAE. With its simple structure and operational characteristics of low saturation voltage, stability under electrical stress and large intrinsic gain, the SGT is ideally suited to LAE analog applications. Here, we show using measurements on polysilicon devices that these characteristics lead to substantial improvements in gain, noise margin, power-delay product and overall circuit robustness in digital SGT-based designs. These findings have far-reaching consequences, as LAE will form the technological basis for a variety of future developments in the biomedical, civil engineering, remote sensing and artificial skin areas, as well as in wearable and ubiquitous computing and lightweight applications for space exploration.

  9. Source-gated transistors for order-of-magnitude performance improvements in thin-film digital circuits

    PubMed Central

    Sporea, R. A.; Trainor, M. J.; Young, N. D.; Shannon, J. M.; Silva, S. R. P.

    2014-01-01

Ultra-large-scale integrated (ULSI) circuits have benefited from successive refinements in device architecture, yielding enormous improvements in speed, power efficiency and areal density. In large-area electronics (LAE), however, the basic building block, the thin-film field-effect transistor (TFT), has largely remained static. Now, a device concept with fundamentally different operation, the source-gated transistor (SGT), opens the possibility of unprecedented functionality in future low-cost LAE. With its simple structure and operational characteristics of low saturation voltage, stability under electrical stress and large intrinsic gain, the SGT is ideally suited to LAE analog applications. Here, we show using measurements on polysilicon devices that these characteristics lead to substantial improvements in gain, noise margin, power-delay product and overall circuit robustness in digital SGT-based designs. These findings have far-reaching consequences, as LAE will form the technological basis for a variety of future developments in the biomedical, civil engineering, remote sensing and artificial skin areas, as well as in wearable and ubiquitous computing and lightweight applications for space exploration. PMID:24599023

  10. A working environment for digital planetary data processing and mapping using ISIS and GRASS GIS

    USGS Publications Warehouse

    Frigeri, A.; Hare, T.; Neteler, M.; Coradini, A.; Federico, C.; Orosei, R.

    2011-01-01

Since the beginning of planetary exploration, mapping has been fundamental to summarizing observations returned by scientific missions. Sensor-based mapping has been used to highlight specific features of planetary surfaces by means of processing. Interpretative mapping uses instrumental observations to produce thematic maps that summarize the actual data under a specific theme. Geologic maps, for example, are thematic interpretative maps that focus on the representation of materials and processes and their relative timing. The technological advances of the last 30 years have allowed the development of specialized systems in which the mapping process can be carried out entirely in the digital domain. The spread of networked computers on a global scale has allowed the rapid propagation of software and digital data, so that every researcher can now access digital mapping facilities from a desktop. Efforts to keep planetary mission data accessible to the scientific community have led to the creation of standardized digital archives, which facilitate access to different datasets by software capable of processing these data from the raw level to the map-projected one. Geographic Information Systems (GIS) were developed to optimize the storage, analysis, and retrieval of spatially referenced Earth-based environmental geodata; over the last decade these programs have become popular in the planetary science community, and recent mission data have started to be distributed in formats compatible with these systems. Among the systems developed for the analysis of planetary and spatially referenced data, we have created a working environment combining two software suites that have similar characteristics in their modular design, development history, distribution policy and support system.
The first, the Integrated Software for Imagers and Spectrometers (ISIS), developed by the United States Geological Survey, represents the state of the art for processing planetary remote sensing data, from the raw unprocessed state to the map-projected product. The second, the Geographic Resources Analysis Support System (GRASS), is a Geographic Information System developed by an international team and one of the core projects promoted by the Open Source Geospatial Foundation (OSGeo). We have enabled the combined use of these software systems through the set-up of a common user interface, the unification of the cartographic reference system nomenclature, and the minimization of data conversion. Both software packages are distributed under free and open source licenses, as are the source code, scripts and configuration files presented hereafter. In this paper we describe the work done to merge these working environments into a common one, in which the user benefits from the functionality of both systems without the need to switch or transfer data from one software suite to the other. We then provide an example of its use in the handling of planetary data and the crafting of a digital geologic map. © 2010 Elsevier Ltd. All rights reserved.

  11. Free and open-source software application for the evaluation of coronary computed tomography angiography images.

    PubMed

    Hadlich, Marcelo Souza; Oliveira, Gláucia Maria Moraes; Feijóo, Raúl A; Azevedo, Clerio F; Tura, Bernardo Rangel; Ziemer, Paulo Gustavo Portela; Blanco, Pablo Javier; Pina, Gustavo; Meira, Márcio; Souza e Silva, Nelson Albuquerque de

    2012-10-01

Images used in medicine were standardized in 1993 with the DICOM (Digital Imaging and Communications in Medicine) standard. Many examinations use this standard, and it is increasingly necessary to design software applications capable of handling this type of image; however, such applications are not usually free and open-source, which hinders their adaptation to diverse interests. Our aim was to develop and validate a free and open-source software application capable of handling DICOM coronary computed tomography angiography images. We developed and tested the ImageLab software in the evaluation of 100 examinations randomly selected from a database. We carried out 600 readings divided between two observers using ImageLab and software sold with Philips Brilliance computed tomography scanners, evaluating coronary lesions and plaques in the left main coronary artery (LMCA) and the anterior descending artery (ADA). To evaluate intraobserver, interobserver and intersoftware agreement, we used simple agreement and kappa statistics. The agreement observed between the software applications was generally classified as substantial or almost perfect in most comparisons. The ImageLab software agreed with the Philips software in the evaluation of coronary computed tomography angiography examinations, especially in patients without lesions, with lesions < 50% in the LMCA and < 70% in the ADA. The agreement for lesions > 70% in the ADA was lower, but this is also observed when the anatomical reference standard is used.
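The kappa agreement statistic mentioned above can be illustrated with a short sketch. This is a generic Cohen's kappa for two raters grading the same items (e.g. lesion present/absent), not the authors' exact computation:

```python
def cohen_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters over the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the agreement expected by chance from each
    rater's marginal category frequencies.
    """
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    categories = set(ratings_a) | set(ratings_b)
    p_e = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)
```

On the commonly used Landis and Koch scale, kappa in 0.61-0.80 is "substantial" and 0.81-1.00 "almost perfect", the ranges the abstract reports.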

  12. Evaluation of DICOM viewer software for workflow integration in clinical trials

    NASA Astrophysics Data System (ADS)

    Haak, Daniel; Page, Charles E.; Kabino, Klaus; Deserno, Thomas M.

    2015-03-01

The Digital Imaging and Communications in Medicine (DICOM) protocol is today the leading standard for the capture, exchange and storage of image data in medical applications. A broad range of commercial, free, and open source software tools supporting a variety of DICOM functionality exists. However, unlike in hospital patient care, DICOM has not yet arrived in electronic data capture systems (EDCS) for clinical trials. Due to this missing integration, even the simple visualization of patients' image data in electronic case report forms (eCRFs) is impossible. Four increasing levels of integration of DICOM components into EDCS are conceivable, with each level raising the functionality but also the demands on interfaces. Hence, in this paper, a comprehensive evaluation of 27 DICOM viewer software projects is performed, investigating viewing functionality as well as interfaces for integration. Covering general, integration, and viewing requirements, the survey involves the criteria (i) license, (ii) support, (iii) platform, (iv) interfaces, (v) two-dimensional (2D) and (vi) three-dimensional (3D) image viewing functionality. Optimal viewers are suggested for applications in clinical trials for 3D imaging, hospital communication, and workflow. Focusing on open source solutions, the viewers ImageJ and MicroView are superior for 3D visualization, whereas GingkoCADx is advantageous for hospital integration. For workflow optimization in multi-centered clinical trials, we suggest the open source viewer Weasis; covering most use cases, an interconnection of the EDCS and PACS with Weasis is proposed.

  13. Increasing the value of geospatial informatics with open approaches for Big Data

    NASA Astrophysics Data System (ADS)

    Percivall, G.; Bermudez, L. E.

    2017-12-01

Open approaches to big data provide geoscientists with new capabilities to address problems of unmatched size and complexity. Consensus approaches for Big Geo Data have been addressed in multiple international workshops and testbeds organized by the Open Geospatial Consortium (OGC) in the past year. Participants came from government (NASA, ESA, USGS, NOAA, DOE); research (ORNL, NCSA, IU, JPL, CRIM, RENCI); industry (ESRI, Digital Globe, IBM, rasdaman); standards (JTC 1/NIST); and open source software communities. Results from the workshops and testbeds are documented in Testbed reports and a White Paper published by the OGC. The White Paper identifies the following use cases: Collection and Ingest (remote-sensed data processing; data stream processing); Prepare and Structure (SQL and NoSQL databases; data linking; feature identification); Analytics and Visualization (spatial-temporal analytics; machine learning; data exploration); and Modeling and Prediction (integrated environmental models; urban 4D models). Open implementations were developed in the Arctic Spatial Data Pilot using Discrete Global Grid Systems (DGGS) and in Testbeds using WPS and ESGF to publish climate predictions. Further development activities to advance open implementations of Big Geo Data include the following. Open cloud computing: avoid vendor lock-in through API interoperability and application portability. Open source extensions: implement geospatial data representations in projects from Apache, LocationTech, and OSGeo; investigate parallelization strategies for N-dimensional spatial data. Geospatial data representations: schemas to improve processing and analysis using geospatial concepts (features, coverages, DGGS); use geospatial encodings such as NetCDF and GeoPackage. Big linked geodata: apply linked data methods scaled to big geodata. Analysis-ready data: support "download as a last resort" and "analytics as a service", and promote elements common to "datacubes."

  14. Opportunities and Challenges with Digital Open Badges

    ERIC Educational Resources Information Center

    Farmer, Tadd; West, Richard E.

    2016-01-01

    With increasing interest in competency and outcome-based education, and the blending of formal and informal learning, there is increasing need for credentials to match these learning paradigms. In this article, the authors discuss the benefits, challenges, and potential future directions for open digital badges--one potential alternative…

  15. Virtual shelves in a digital library: a framework for access to networked information sources.

    PubMed Central

    Patrick, T B; Springer, G K; Mitchell, J A; Sievert, M E

    1995-01-01

OBJECTIVE: Develop a framework for collections-based access to networked information sources that addresses the problem of location-dependent access to information sources. DESIGN: The framework uses the metaphor of a virtual shelf: a general-purpose server dedicated to a particular information subject class, whose identifier identifies that subject class. Location-independent call numbers, based on standard vocabulary codes, are assigned to information sources. The call numbers are first mapped to the location-independent identifiers of virtual shelves. When access to an information resource is required, a location directory provides a second mapping of these location-independent server identifiers to actual network locations. RESULTS: The framework has been implemented in two different systems, one based on the Open Software Foundation Distributed Computing Environment and the other on the World Wide Web. CONCLUSIONS: The framework applies traditional methods of library classification and cataloging in new ways. It is compatible with the two traditional styles of selecting information, searching and browsing, and these may be combined with new paradigms of information searching that take advantage of the special properties of digital information. Cooperation between the library and information science community and the informatics community can provide a means for the continuing application of the knowledge and techniques of library science to the new problems of networked information sources. PMID:8581554
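The two mappings in this framework (call number to virtual shelf, then shelf identifier to current network location) can be sketched as a pair of lookups. The codes, shelf names and URLs below are hypothetical illustrations, not the authors' actual data:

```python
# Mapping 1: location-independent call numbers (standard vocabulary
# codes) to virtual-shelf identifiers. (All entries hypothetical.)
CALL_NUMBER_TO_SHELF = {
    "C14.280": "shelf:cardiology",
    "C04.557": "shelf:oncology",
}

# Mapping 2: the location directory, from shelf identifiers to the
# servers currently hosting them. Only this table changes when a
# resource moves, so call numbers stay location-independent.
LOCATION_DIRECTORY = {
    "shelf:cardiology": "http://srv1.example.edu/cardiology/",
    "shelf:oncology": "http://srv2.example.edu/oncology/",
}

def resolve(call_number):
    """Resolve a call number to a network location via its shelf."""
    shelf = CALL_NUMBER_TO_SHELF[call_number]
    return LOCATION_DIRECTORY[shelf]
```

Because clients hold only call numbers, relocating a shelf means updating one directory entry rather than every stored reference.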

  16. An integrated and open source GIS environmental management system for a protected area in the south of Portugal

    NASA Astrophysics Data System (ADS)

    Teodoro, A.; Duarte, L.; Sillero, N.; Gonçalves, J. A.; Fonte, J.; Gonçalves-Seco, L.; Pinheiro da Luz, L. M.; dos Santos Beja, N. M. R.

    2015-10-01

Herdade da Contenda (HC), located in Moura municipality, Beja district (Alentejo province), in the south of Portugal (southwestern Iberian Peninsula), is a national hunting area of 5270 ha. An integrated system covering the natural and cultural heritage resources would be very useful for the effective management of this area; it should include the physical characterization of the territory, nature conservation, and land use and land management themes, as well as cultural heritage resources. This paper presents a new tool for an integrated environmental management system for the HC, which produces maps within an open source GIS environment (QGIS). The application consists of a single button which opens a window composed of twelve menus: File, DRASTIC, Forest Fire Risk, Revised Universal Soil Loss Equation (RUSLE), Bioclimatic Index, Cultural Heritage, Fauna and Flora, Ortofoto, Normalized Difference Vegetation Index (NDVI), Digital Elevation Model (DEM), Land Use Land Cover (LULC), and Help. Several inputs are required to generate these maps, e.g. the DEM, geologic information, soil map, hydraulic conductivity information, LULC map, vulnerability and economic information, and NDVI. Buttons were also added to the toolbar to manipulate the information in the map canvas: Zoom in, Zoom out, Pan, Print/Layout and Clear. This integrated open source GIS environmental management system was developed for the HC area but could easily be adapted to other natural or protected areas. Despite the lack of data, the methodology presented fulfills the objectives.

  17. A reliable, low-cost picture archiving and communications system for small and medium veterinary practices built using open-source technology.

    PubMed

    Iotti, Bryan; Valazza, Alberto

    2014-10-01

Picture Archiving and Communications Systems (PACS) are among the most needed systems in a modern hospital. As an integral part of the Digital Imaging and Communications in Medicine (DICOM) standard, they are responsible for the secure storage and accessibility of diagnostic imaging data. These machines need to offer high performance, stability, and security while proving reliable and ergonomic in the day-to-day and long-term storage and retrieval of the data they safeguard. This paper reports the authors' experience in developing and installing a compact and low-cost solution based on open-source technologies at the Veterinary Teaching Hospital of the University of Torino, Italy, during the summer of 2012. The PACS server was built on low-cost x86-based hardware and uses an open source operating system derived from Oracle OpenSolaris (Oracle Corporation, Redwood City, CA, USA) to host the DCM4CHEE PACS DICOM server (DCM4CHEE, http://www.dcm4che.org). This solution features very high data security and an ergonomic interface providing easy access to a large amount of imaging data. The system has been in active use for almost 2 years and has proven to be a scalable, cost-effective solution for practices ranging from small to very large: different hardware combinations allow scaling to different deployments, while paravirtualization increases security and eases migrations and upgrades.

  18. Extracting data from figures with software was faster, with higher interrater reliability than manual extraction.

    PubMed

    Jelicic Kadic, Antonia; Vucic, Katarina; Dosenovic, Svjetlana; Sapunar, Damir; Puljak, Livia

    2016-06-01

To compare the speed and accuracy of graphical data extraction using manual estimation and open source software. Data points from eligible graphs/figures published in randomized controlled trials (RCTs) from 2009 to 2014 were extracted by two authors independently, both by manual estimation and with Plot Digitizer, an open source software package. Corresponding authors of each RCT were contacted up to four times via e-mail to obtain the exact numbers that were used to create the graphs. The accuracy of each method was compared against the source data from which the original graphs were produced. Software data extraction was significantly faster, reducing extraction time by 47%. Percent agreement between the two raters was 51% for manual and 53.5% for software data extraction. Percent agreement between the raters and the original data was 66% vs. 75% for the first rater and 69% vs. 73% for the second rater, for manual and software extraction, respectively. Data extraction from figures should be conducted with software, whereas manual estimation should be avoided: using software to extract data presented only in figures is faster and enables higher interrater reliability. Copyright © 2016 Elsevier Inc. All rights reserved.
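The core of graph-digitizing tools such as Plot Digitizer is a linear calibration from pixel coordinates to axis values, set by marking two known reference points per axis. The sketch below shows that general technique for linear axes (illustrative values; this is not Plot Digitizer's source code):

```python
def make_axis_calibration(pixel_1, value_1, pixel_2, value_2):
    """Return a function mapping a pixel coordinate on one linear
    axis to a data value, given two marked reference points."""
    scale = (value_2 - value_1) / (pixel_2 - pixel_1)
    return lambda pixel: value_1 + (pixel - pixel_1) * scale

# Calibrate each axis from two reference points (hypothetical plot).
x_of = make_axis_calibration(100, 0.0, 500, 10.0)  # x: pixels 100..500 -> 0..10
y_of = make_axis_calibration(400, 0.0, 80, 50.0)   # y increases upward on screen

def digitize(pixel_x, pixel_y):
    """Convert a clicked point on the plot image to data coordinates."""
    return x_of(pixel_x), y_of(pixel_y)
```

Log axes need the same mapping applied to the logarithm of the values, which is one reason interrater differences persist even with software.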

  19. SWARM: A 32 GHz Correlator and VLBI Beamformer for the Submillimeter Array

    NASA Astrophysics Data System (ADS)

    Primiani, Rurik A.; Young, Kenneth H.; Young, André; Patel, Nimesh; Wilson, Robert W.; Vertatschitsch, Laura; Chitwood, Billie B.; Srinivasan, Ranjani; MacMahon, David; Weintroub, Jonathan

    2016-03-01

A 32 GHz bandwidth, VLBI-capable correlator and phased array has been designed and deployed at the Smithsonian Astrophysical Observatory's Submillimeter Array (SMA). The SMA Wideband Astronomical ROACH2 Machine (SWARM) integrates two instruments: a correlator with 140 kHz spectral resolution across its full 32 GHz band, used for connected interferometric observations, and a phased array summer used when the SMA participates as a station in the Event Horizon Telescope (EHT) very long baseline interferometry (VLBI) array. For each SWARM quadrant, Reconfigurable Open Architecture Computing Hardware (ROACH2) units, shared as open source designs by the Collaboration for Astronomy Signal Processing and Electronics Research (CASPER), are equipped with a pair of ultra-fast analog-to-digital converters (ADCs), a field programmable gate array (FPGA) processor, and eight 10 Gigabit Ethernet (GbE) ports. A VLBI data recorder interface designated the SWARM digital back end, or SDBE, is implemented with a ninth ROACH2 per quadrant, feeding four Mark6 VLBI recorders with an aggregate recording rate of 64 Gbps. This paper describes the design and implementation of SWARM, as well as its deployment at the SMA, with reference to verification and science data.

  20. Climate Signals: An On-Line Digital Platform for Mapping Climate Change Impacts in Real Time

    NASA Astrophysics Data System (ADS)

    Cutting, H.

    2016-12-01

Climate Signals is an on-line digital platform for cataloging and mapping the impacts of climate change. The platform specifies and details the chains of connections between greenhouse gas emissions and individual climate events. Currently in open-beta release, it is designed to engage and serve the general public, news media, and policy-makers, particularly in real time during extreme climate events. Climate Signals consists of a curated relational database of events and their links to climate change, a mapping engine, and a gallery of climate change monitors offering real-time data. For each event in the database, an infographic engine provides a custom attribution "tree" that illustrates the connections to climate change. In addition, links to key contextual resources are aggregated and curated for each event, and all event records are fully annotated with detailed source citations and corresponding hyperlinks. The system of attribution used to link events to climate change in real time is detailed here. The open-beta release is offered for public user testing and engagement; launched in May 2016, the operation of this platform offers lessons for public engagement on climate change impacts.

  1. Precision global health in the digital age.

    PubMed

    Flahault, Antoine; Geissbuhler, Antoine; Guessous, Idris; Guérin, Philippe; Bolon, Isabelle; Salathé, Marcel; Escher, Gérard

    2017-04-19

Precision global health is an approach similar to precision medicine, which facilitates, through innovation and technology, better targeting of public health interventions on a global scale, for the purpose of maximising their effectiveness and relevance. Illustrative examples include: the use of remote sensing data to fight vector-borne diseases; large databases of genomic sequences of foodborne pathogens helping to identify the origins of outbreaks; social networks and internet search engines for tracking communicable diseases; cell phone data in humanitarian actions; and drones to deliver healthcare services in remote and secluded areas. Open science and data sharing platforms are proposed for fostering international research programmes under fair, ethical and respectful conditions. Innovative education, such as massive open online courses or serious games, can promote wider access to training in public health and improve health literacy. The world is moving towards learning healthcare systems. Professionals are equipped with data collection and decision support devices; they share information, which is complemented by external sources and analysed in real time using machine learning techniques, allowing the early detection of anomalies and, eventually, guiding appropriate public health interventions. This article shows how information-driven approaches, enabled by digital technologies, can help improve global health with greater equity.

  2. JCE Digital Library Grand Opening

    ERIC Educational Resources Information Center

    Journal of Chemical Education, 2004

    2004-01-01

The National Science, Technology, Engineering and Mathematics Education Digital Library (NSDL), inaugurated in December 2002, was developed to promote science education on a comprehensive scale. The Journal of Chemical Education (JCE) Digital Library, incorporated into the NSDL, contains its own collections of digital resources for chemistry…

  3. Toward uniform implementation of parametric map Digital Imaging and Communication in Medicine standard in multisite quantitative diffusion imaging studies.

    PubMed

    Malyarenko, Dariya; Fedorov, Andriy; Bell, Laura; Prah, Melissa; Hectors, Stefanie; Arlinghaus, Lori; Muzi, Mark; Solaiyappan, Meiyappan; Jacobs, Michael; Fung, Maggie; Shukla-Dave, Amita; McManus, Kevin; Boss, Michael; Taouli, Bachir; Yankeelov, Thomas E; Quarles, Christopher Chad; Schmainda, Kathleen; Chenevert, Thomas L; Newitt, David C

    2018-01-01

    This paper reports on results of a multisite collaborative project launched by the MRI subgroup of Quantitative Imaging Network to assess current capability and provide future guidelines for generating a standard parametric diffusion map Digital Imaging and Communication in Medicine (DICOM) in clinical trials that utilize quantitative diffusion-weighted imaging (DWI). Participating sites used a multivendor DWI DICOM dataset of a single phantom to generate parametric maps (PMs) of the apparent diffusion coefficient (ADC) based on two models. The results were evaluated for numerical consistency among models and true phantom ADC values, as well as for consistency of metadata with attributes required by the DICOM standards. This analysis identified missing metadata descriptive of the sources for detected numerical discrepancies among ADC models. Instead of the DICOM PM object, all sites stored ADC maps as DICOM MR objects, generally lacking designated attributes and coded terms for quantitative DWI modeling. Source-image reference, model parameters, ADC units and scale, deemed important for numerical consistency, were either missing or stored using nonstandard conventions. Guided by the identified limitations, the DICOM PM standard has been amended to include coded terms for the relevant diffusion models. Open-source software has been developed to support conversion of site-specific formats into the standard representation.
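For context, the most common parametric-map computation in quantitative DWI is the mono-exponential ADC model, ADC = ln(S0/Sb)/b, fit from the signal at two b-values. The sketch below illustrates only that standard model; the study's two specific models and its DICOM metadata handling are not reproduced here:

```python
import math

def adc_map(s0, sb, b_value, eps=1e-12):
    """Per-voxel apparent diffusion coefficient (mm^2/s) from the
    mono-exponential model S_b = S_0 * exp(-b * ADC).

    s0, sb: flat lists of voxel intensities at b=0 and b=b_value.
    Voxels with non-positive signal are set to 0.0, a common masking
    choice that avoids taking the log of non-positive values.
    """
    out = []
    for v0, vb in zip(s0, sb):
        if v0 <= eps or vb <= eps:
            out.append(0.0)
        else:
            out.append(math.log(v0 / vb) / b_value)
    return out
```

Exactly this kind of choice (model, units, scaling, masking) is what the amended DICOM Parametric Map attributes are meant to record alongside the pixel data.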

  4. Quantitative Analysis of Color Differences within High Contrast, Low Power Reversible Electrophoretic Displays

    DOE PAGES

    Giera, Brian; Bukosky, Scott; Lee, Elaine; ...

    2018-01-23

Here, quantitative color analysis is performed on videos of high contrast, low power reversible electrophoretic deposition (EPD)-based displays operated under different applied voltages. This analysis is implemented in open-source software, relies on a color differentiation metric, ΔE*00, derived from digital video, and provides an intuitive relationship between the operating conditions of the devices and their performance. Time-dependent ΔE*00 color analysis reveals color relaxation behavior, recoverability for different voltage sequences, and operating conditions that can lead to optimal performance.
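The ΔE*00 (CIEDE2000) metric refines the simpler ΔE*76 Euclidean distance in CIELAB. A full CIEDE2000 implementation is lengthy, so the sketch below shows the simpler ΔE*76 pipeline from sRGB video pixels to a color difference, as an illustration of the general approach rather than the paper's exact metric:

```python
import math

def srgb_to_lab(r, g, b):
    """Convert sRGB components in [0, 1] to CIELAB (D65 white point)."""
    def linear(c):                      # undo the sRGB gamma curve
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = linear(r), linear(g), linear(b)
    # sRGB (linear) -> CIE XYZ, D65
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    def f(t):                           # CIE Lab nonlinearity
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e76(rgb1, rgb2):
    """Euclidean CIELAB distance (the 1976 predecessor of ΔE*00)."""
    lab1, lab2 = srgb_to_lab(*rgb1), srgb_to_lab(*rgb2)
    return math.dist(lab1, lab2)
```

In the paper's setting, per-frame ΔE values against a reference state yield the time-dependent color-relaxation curves; CIEDE2000 adds perceptual weighting on top of this Lab-space distance.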

  5. Quantitative Analysis of Color Differences within High Contrast, Low Power Reversible Electrophoretic Displays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giera, Brian; Bukosky, Scott; Lee, Elaine

Here, quantitative color analysis is performed on videos of high contrast, low power reversible electrophoretic deposition (EPD)-based displays operated under different applied voltages. This analysis is implemented in open-source software, relies on a color differentiation metric, ΔE*00, derived from digital video, and provides an intuitive relationship between the operating conditions of the devices and their performance. Time-dependent ΔE*00 color analysis reveals color relaxation behavior, recoverability for different voltage sequences, and operating conditions that can lead to optimal performance.

  6. X-Ray Backscatter Imaging for Aerospace Applications

    NASA Astrophysics Data System (ADS)

    Shedlock, Daniel; Edwards, Talion; Toh, Chin

    2011-06-01

Scatter x-ray imaging (SXI) is a real-time, digital, x-ray backscatter imaging technique that allows radiographs to be taken from one side of an object. This x-ray backscatter imaging technique offers many advantages over conventional transmission radiography, including single-sided access and extremely low radiation fields compared to conventional open source industrial radiography. Example applications include the detection of corrosion, foreign object debris, water intrusion, cracking and impact damage, and leak detection, in a variety of materials such as aluminum, composites, honeycomb structures, and titanium.

  7. DSpace and customized controlled vocabularies

    NASA Astrophysics Data System (ADS)

    Skourlas, C.; Tsolakidis, A.; Kakoulidis, P.; Giannakopoulos, G.

    2015-02-01

    The open-source platform DSpace can be described as a repository application used to provide access to digital resources. DSpace is installed and used by more than 1000 organizations worldwide. A predefined taxonomy of keywords, called a controlled vocabulary, can be used for describing and accessing the information items stored in the repository. In this paper, we describe how users can create and customize their own vocabularies. Various heterogeneous items in the repository, such as research papers, videos, articles, and educational material, can be indexed to provide advanced search functionality using new controlled vocabularies.
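    For context, DSpace defines controlled vocabularies as XML files of nested `<node>` elements (typically placed under `config/controlled-vocabularies/`). The ids and labels below are hypothetical examples of the format, not an actual repository's taxonomy:

    ```xml
    <node id="materials" label="Educational Material">
      <isComposedBy>
        <node id="papers" label="Research Papers"/>
        <node id="videos" label="Lecture Videos"/>
        <node id="articles" label="Articles"/>
      </isComposedBy>
    </node>
    ```

    A vocabulary file like this can then be bound to a metadata field in DSpace's submission configuration so that depositors pick terms from the tree instead of typing free-text keywords.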

  8. Modeling the Anisotropic Resolution and Noise Properties of Digital Breast Tomosynthesis Image Reconstructions

    DTIC Science & Technology

    2012-01-01

    DM system with a detector field-of-view (FOV) of 24 × 30 cm and a source-to-image distance of 70 cm measured at the midpoint of the chest wall. In... DNCs in frequency space have an opening angle spanning approximately -7.5° to +7.5° for measurements made near the midpoint of the chest wall. At... conference abstract). 18. Ren B, Ruth C, Stein J, Smith A, Shaw I, Jing Z. Design and performance of the prototype full-field breast tomosynthesis system

  9. WorldWide Telescope: A Newly Open Source Astronomy Visualization System

    NASA Astrophysics Data System (ADS)

    Fay, Jonathan; Roberts, Douglas A.

    2016-01-01

    After eight years of development by Microsoft Research, WorldWide Telescope (WWT) was made an open-source project at the end of June 2015. WWT was motivated by the desire to put new surveys of objects, such as the Sloan Digital Sky Survey, in the context of the night sky. Development of WWT under Microsoft started with the creation of a Windows desktop client that is widely used in various education, outreach, and research projects. Using it, users can explore the data built into WWT as well as data that they load in. Beyond exploration, WWT can be used to create tours that present various datasets in a narrative format. In the past two years, the team developed a collection of web controls, including an HTML5 web client that contains much of the functionality of the Windows desktop client. The project under Microsoft has deep connections with several user communities, such as education through the WWT Ambassadors program (http://wwtambassadors.org/), and with planetariums and museums such as the Adler Planetarium. WWT can also support research, including the use of WWT to visualize the Bones of the Milky Way, and there are rich connections between WWT and the Astrophysics Data System (ADS, http://labs.adsabs.harvard.edu/adsabs/). One important new research connection is the use of WWT to create dynamic and potentially interactive supplements to journal articles, the first of which were created in 2015. WWT is now an open-source, community-led project. The source code is available on GitHub (https://github.com/WorldWideTelescope). There is significant developer documentation on the website (http://worldwidetelescope.org/Developers/), and an extensive developer workshop (http://wwtworkshops.org/?tribe_events=wwt-developer-workshop) took place in the fall of 2015. Now that WWT is open source, anyone with an interest in the project can be a contributor. As important as help with coding, the project needs people interested in documentation, testing, training, and other roles.

  10. MBAT: a scalable informatics system for unifying digital atlasing workflows.

    PubMed

    Lee, Daren; Ruffins, Seth; Ng, Queenie; Sane, Nikhil; Anderson, Steve; Toga, Arthur

    2010-12-22

    Digital atlases provide a common semantic and spatial coordinate system that can be leveraged to compare, contrast, and correlate data from disparate sources. As the quality and amount of biological data continue to advance and grow, searching, referencing, and comparing these data with a researcher's own data is essential. However, the integration process is cumbersome and time-consuming due to misaligned data, implicitly defined associations, and incompatible data sources. This work addresses these challenges by providing a unified and adaptable environment that accelerates the workflow to gather, align, and analyze the data. The MouseBIRN Atlasing Toolkit (MBAT) project was developed as a cross-platform, free, open-source application that unifies and accelerates the digital atlas workflow. A tiered plug-in architecture was designed for the neuroinformatics and genomics goals of the project to provide a modular and extensible design. MBAT provides the ability to use a single query to search and retrieve data from multiple data sources, align image data using the user's preferred registration method, composite data from multiple sources in a common space, and link relevant informatics information to the current view of the data or atlas. The workspaces leverage tool plug-ins to extend and allow future extensions of the basic workspace functionality. A wide variety of tool plug-ins were developed that integrate pre-existing as well as newly created technology into each workspace. Novel atlasing features were also developed, such as support for multiple label sets, dynamic selection and grouping of labels, and synchronized, context-driven display of ontological data. MBAT empowers researchers to discover correlations among disparate data by providing a unified environment for bringing together distributed reference resources, a user's image data, and biological atlases into the same spatial or semantic context.
Through its extensible tiered plug-in architecture, MBAT allows researchers to customize all platform components to quickly achieve personalized workflows.

  11. Regional Geologic Map of San Andreas and Related Faults in Carrizo Plain, Temblor, Caliente and La Panza Ranges and Vicinity, California; A Digital Database

    USGS Publications Warehouse

    Dibblee, T. W.; Digital database compiled by Graham, S. E.; Mahony, T.M.; Blissenbach, J.L.; Mariant, J.J.; Wentworth, C.M.

    1999-01-01

    This Open-File Report is a digital geologic map database. The report serves to introduce and describe the digital data. There is no paper map included in the Open-File Report. The report includes PostScript and PDF plot files that can be used to plot images of the geologic map sheet and explanation sheet. This digital map database is prepared from a previously published map by Dibblee (1973). The geologic map database delineates map units that are identified by general age, lithology, and clast size following the stratigraphic nomenclature of the U.S. Geological Survey. For descriptions of the units, their stratigraphic relations, and sources of geologic mapping, consult the explanation sheet (of99-14_4b.ps or of99-14_4d.pdf), or the original published paper map (Dibblee, 1973). The scale of the source map limits the spatial resolution (scale) of the database to 1:125,000 or smaller. For those interested in the geology of Carrizo Plain and vicinity who do not use an ARC/INFO compatible Geographic Information System (GIS), but would like to obtain a paper map and explanation, PDF and PostScript plot files containing map images of the data in the digital database, as well as PostScript and PDF plot files of the explanation sheet and explanatory text, have been included in the database package (please see the section 'Digital Plot Files', page 5). The PostScript plot files require a gzip utility to access them. For those without computer capability, we can provide users with the PostScript or PDF files on tape that can be taken to a vendor for plotting. Paper plots can also be ordered directly from the USGS (please see the section 'Obtaining Plots from USGS Open-File Services', page 5). The content and character of the database, methods of obtaining it, and processes of extracting the map database from the tar (tape archive) file are described herein. 
The map database itself, consisting of six ARC/INFO coverages, can be obtained over the Internet or by magnetic tape copy as described below. The database was compiled using ARC/INFO, a commercial Geographic Information System (Environmental Systems Research Institute, Redlands, California), with version 3.0 of the menu interface ALACARTE (Fitzgibbon and Wentworth, 1991, Fitzgibbon, 1991, Wentworth and Fitzgibbon, 1991). The ARC/INFO coverages are stored in uncompressed ARC export format (ARC/INFO version 7.x). All data files have been compressed, and may be uncompressed with gzip, which is available free of charge over the Internet via links from the USGS Public Domain Software page (http://edcwww.cr.usgs.gov/doc/edchome/ndcdb/public.html). ARC/INFO export files (files with the .e00 extension) can be converted into ARC/INFO coverages in ARC/INFO (see below) and can be read by some other Geographic Information Systems, such as MapInfo via ArcLink and ESRI's ArcView.

  12. The Possibilities and Limitations of Applying "Open Data" Principles in Schools

    ERIC Educational Resources Information Center

    Selwyn, Neil; Henderson, Michael; Chao, Shu-Hua

    2017-01-01

    Large quantities of data are now being generated, collated and processed within schools through computerised systems and other digital technologies. In response to growing concerns over the efficiency and equity of how these data are used, the concept of "open data" has emerged as a potential means of using digital technology to…

  13. Falcon: a highly flexible open-source software for closed-loop neuroscience.

    PubMed

    Ciliberti, Davide; Kloosterman, Fabian

    2017-08-01

    Closed-loop experiments provide unique insights into brain dynamics and function. To facilitate a wide range of closed-loop experiments, we created an open-source software platform that enables high-performance real-time processing of streaming experimental data. We wrote Falcon, a C++ multi-threaded software in which the user can load and execute an arbitrary processing graph. Each node of a Falcon graph is mapped to a single thread and nodes communicate with each other through thread-safe buffers. The framework allows for easy implementation of new processing nodes and data types. Falcon was tested both on a 32-core and a 4-core workstation. Streaming data was read from either a commercial acquisition system (Neuralynx) or the open-source Open Ephys hardware, while closed-loop TTL pulses were generated with a USB module for digital output. We characterized the round-trip latency of our Falcon-based closed-loop system, as well as the specific latency contribution of the software architecture, by testing processing graphs with up to 32 parallel pipelines and eight serial stages. We finally deployed Falcon in a task of real-time detection of population bursts recorded live from the hippocampus of a freely moving rat. On Neuralynx hardware, round-trip latency was well below 1 ms and stable for at least 1 h, while on Open Ephys hardware latencies were below 15 ms. The latency contribution of the software was below 0.5 ms. Round-trip and software latencies were similar on both 32- and 4-core workstations. Falcon was used successfully to detect population bursts online with ~40 ms average latency. Falcon is a novel open-source software for closed-loop neuroscience. It has sub-millisecond intrinsic latency and gives the experimenter direct control of CPU resources. 
We envisage Falcon to be a useful tool to the neuroscientific community for implementing a wide variety of closed-loop experiments, including those requiring use of complex data structures and real-time execution of computationally intensive algorithms, such as population neural decoding/encoding from large cell assemblies.
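    Falcon itself is multi-threaded C++, but the graph idea it describes (one thread per processing node, nodes connected by thread-safe buffers) can be sketched in Python with `queue.Queue`. The node names and threshold logic below are hypothetical illustrations of the pattern, not Falcon's actual API:

    ```python
    import queue
    import threading

    def source(out_q, data):
        # Source node: push samples downstream, then a sentinel to signal end.
        for x in data:
            out_q.put(x)
        out_q.put(None)

    def detector(in_q, out_q, threshold):
        # Processing node: forward only samples above a threshold,
        # loosely analogous to online event/burst detection.
        while True:
            x = in_q.get()
            if x is None:
                out_q.put(None)
                break
            if x > threshold:
                out_q.put(x)

    def run_graph(data, threshold):
        # Two thread-safe buffers link source -> detector -> sink (main thread).
        q1, q2 = queue.Queue(), queue.Queue()
        threads = [
            threading.Thread(target=source, args=(q1, data)),
            threading.Thread(target=detector, args=(q1, q2, threshold)),
        ]
        for t in threads:
            t.start()
        results = []
        while True:
            x = q2.get()
            if x is None:
                break
            results.append(x)
        for t in threads:
            t.join()
        return results

    print(run_graph([0.1, 0.9, 0.4, 1.2], 0.5))  # [0.9, 1.2]
    ```

    In a real-time system like Falcon the buffers would be lock-free or bounded and the nodes pinned to cores; the topology, however, is the same directed graph of communicating stages.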

  14. Falcon: a highly flexible open-source software for closed-loop neuroscience

    NASA Astrophysics Data System (ADS)

    Ciliberti, Davide; Kloosterman, Fabian

    2017-08-01

    Objective. Closed-loop experiments provide unique insights into brain dynamics and function. To facilitate a wide range of closed-loop experiments, we created an open-source software platform that enables high-performance real-time processing of streaming experimental data. Approach. We wrote Falcon, a C++ multi-threaded software in which the user can load and execute an arbitrary processing graph. Each node of a Falcon graph is mapped to a single thread and nodes communicate with each other through thread-safe buffers. The framework allows for easy implementation of new processing nodes and data types. Falcon was tested both on a 32-core and a 4-core workstation. Streaming data was read from either a commercial acquisition system (Neuralynx) or the open-source Open Ephys hardware, while closed-loop TTL pulses were generated with a USB module for digital output. We characterized the round-trip latency of our Falcon-based closed-loop system, as well as the specific latency contribution of the software architecture, by testing processing graphs with up to 32 parallel pipelines and eight serial stages. We finally deployed Falcon in a task of real-time detection of population bursts recorded live from the hippocampus of a freely moving rat. Main results. On Neuralynx hardware, round-trip latency was well below 1 ms and stable for at least 1 h, while on Open Ephys hardware latencies were below 15 ms. The latency contribution of the software was below 0.5 ms. Round-trip and software latencies were similar on both 32- and 4-core workstations. Falcon was used successfully to detect population bursts online with ~40 ms average latency. Significance. Falcon is a novel open-source software for closed-loop neuroscience. It has sub-millisecond intrinsic latency and gives the experimenter direct control of CPU resources. 
We envisage Falcon to be a useful tool to the neuroscientific community for implementing a wide variety of closed-loop experiments, including those requiring use of complex data structures and real-time execution of computationally intensive algorithms, such as population neural decoding/encoding from large cell assemblies.

  15. Floating-point system quantization errors in digital control systems

    NASA Technical Reports Server (NTRS)

    Phillips, C. L.; Vallely, D. P.

    1978-01-01

    This paper considers digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. A quantization error analysis technique is developed, and is implemented by a digital computer program that is based on a digital simulation of the system. The program can be integrated into existing digital simulations of a system.
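    Quantization-error analysis by digital simulation, as described above, amounts to running the same filter at two floating-point precisions and differencing the outputs. The first-order low-pass filter below is a hypothetical example of the approach, not the paper's controller:

    ```python
    import numpy as np

    def first_order_filter(x, a, dtype):
        """Hypothetical first-order low-pass filter y[n] = a*y[n-1] + (1-a)*x[n],
        run entirely in the given floating-point precision."""
        a = dtype(a)
        one = dtype(1.0)
        y = dtype(0.0)
        out = []
        for xn in x.astype(dtype):
            y = dtype(a * y + (one - a) * xn)
            out.append(float(y))
        return np.array(out)

    # Quantization error estimated by differencing a low-precision run
    # against a double-precision reference of the same simulation.
    x = np.sin(0.1 * np.arange(200))
    ref = first_order_filter(x, 0.95, np.float64)
    low = first_order_filter(x, 0.95, np.float16)
    err = np.abs(ref - low)
    print(err.max())  # worst-case quantization error over the run
    ```

    The same differencing idea extends to closed-loop simulations, where the feedback path lets small per-step rounding errors accumulate.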

  16. Mass Digitization of Books

    ERIC Educational Resources Information Center

    Coyle, Karen

    2006-01-01

    Mass digitization of the bound volumes that we generally call "books" has begun, and, thanks to the interest in Google and all that it does, it is getting widespread media attention. The Open Content Alliance (OCA), a library initiative formed after Google announced its library book digitization project, has brought library digitization projects…

  17. Making Connections with Digital Data

    ERIC Educational Resources Information Center

    Leonard, William; Bassett, Rick; Clinger, Alicia; Edmondson, Elizabeth; Horton, Robert

    2004-01-01

    State-of-the-art digital cameras open up enormous possibilities in the science classroom, especially when used as data collectors. Because most high school students are not fully formal thinkers, the digital camera can provide a much richer learning experience than traditional observation. Data taken through digital images can make the…

  18. Digital Geologic Map of the Rosalia 1:100,000 Quadrangle, Washington and Idaho: A Digital Database for the 1990 S.Z. Waggoner Map

    USGS Publications Warehouse

    Derkey, Pamela D.; Johnson, Bruce R.; Lackaff, Beatrice B.; Derkey, Robert E.

    1998-01-01

    The geologic map of the Rosalia 1:100,000-scale quadrangle was compiled in 1990 by S.Z. Waggoner of the Washington State Division of Geology and Earth Resources. These data were entered into a geographic information system (GIS) as part of a larger effort to create regional digital geology for the Pacific Northwest. The intent was to provide a digital geospatial database for a previously published black-and-white paper geologic map. This database can be queried in many ways to produce a variety of geologic maps. Digital base map data files are not included; they may be obtained from a variety of commercial and government sources. This database is not meant to be used or displayed at any scale larger than 1:100,000 (e.g., 1:62,500 or 1:24,000), as it has been somewhat generalized to fit the 1:100,000 scale. The map area is located in eastern Washington and extends across the state border into western Idaho. This open-file report describes the methods used to convert the geologic map data into a digital format, documents the file structures, and explains how to download the digital files from the U.S. Geological Survey public-access World Wide Web site on the Internet. We wish to thank J. Eric Schuster of the Washington Division of Geology and Earth Resources for providing the original stable-base mylar and the funding for it to be scanned. We also thank Dick Blank and Barry Moring of the U.S. Geological Survey for reviewing the manuscript and digital files, respectively.

  19. Development of a written music-recognition system using Java and open source technologies

    NASA Astrophysics Data System (ADS)

    Loibner, Gernot; Schwarzl, Andreas; Kovač, Matthias; Paulus, Dietmar; Pölzleitner, Wolfgang

    2005-10-01

    We report on the development of a software system to recognize and interpret printed music. The overall goal is to scan printed music sheets; analyze and recognize the notes, timing, and written text; and derive all the information necessary to play the music through the computer's MIDI sound system. This function is primarily useful for musicians who want to digitize printed music for editing purposes. A number of commercial systems offer such functionality. However, on testing these systems, we were surprised at how weak their pattern recognition components are. Although we submitted very clean and largely flawless scanned input, none of these systems was able to recognize, for example, all notes, staff lines, and systems. They all require a high degree of interaction, post-processing, and editing to obtain a decent digital version of the hard-copy material. In this paper we focus on the pattern recognition area. In a first approach we tested more or less standard methods of adaptive thresholding, blob detection, line detection, and corner detection to find the notes, staff lines, and candidate objects subject to OCR. Many of the objects in this type of material can be learned in a training phase. None of the commercial systems we examined offers the option to train special characters or unusual signatures. A second goal of this project is to use a modern software engineering platform. We were interested in how well Java and open source technologies are suited to pattern recognition and machine vision. The scanning of music served as a case study.
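    Of the standard methods mentioned, adaptive thresholding is the simplest to sketch: a pixel is marked as foreground when it is darker than the mean of its local neighborhood by some offset. The numpy version below (using a summed-area table for the box mean) is an illustrative sketch, not the authors' Java implementation, and the block size and offset are arbitrary:

    ```python
    import numpy as np

    def adaptive_threshold(img, block=15, offset=10):
        """Local-mean adaptive threshold: a pixel is foreground if it is
        darker than the mean of its (block x block) neighborhood minus an
        offset. The box mean is computed via a summed-area table."""
        img = img.astype(np.float64)
        pad = block // 2
        padded = np.pad(img, pad, mode="edge")
        # Summed-area table with a zero first row/column for easy window sums.
        sat = np.zeros((padded.shape[0] + 1, padded.shape[1] + 1))
        sat[1:, 1:] = padded.cumsum(0).cumsum(1)
        h, w = img.shape
        window = (sat[block:block + h, block:block + w]
                  - sat[block:block + h, :w]
                  - sat[:h, block:block + w]
                  + sat[:h, :w])
        local_mean = window / (block * block)
        return img < (local_mean - offset)

    # Hypothetical staff-line-like image: one dark row on a light background.
    page = np.full((40, 40), 220.0)
    page[10, :] = 30.0   # a "staff line"
    mask = adaptive_threshold(page)
    print(mask[10].all(), mask[0].any())
    ```

    Because the threshold follows the local background, this approach tolerates the uneven illumination common in scanned sheet music better than a single global threshold.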

  20. Goal Setting and Open Digital Badges in Higher Education

    ERIC Educational Resources Information Center

    Cheng, Zui; Watson, Sunnie Lee; Newby, Timothy James

    2018-01-01

    While Open Digital Badges (ODBs) has gained an increasing recognition as micro-credentials, many researchers foresee the role of ODBs as an innovative learning tool to enhance learning experiences beyond that of an alternative credential. However, little research has explored this topic. The purposes of this paper are to 1) argue that one way to…

  1. An Evaluation of the Informedia Digital Video Library System at the Open University.

    ERIC Educational Resources Information Center

    Kukulska-Hulme, Agnes; Van der Zwan, Robert; DiPaolo, Terry; Evers, Vanessa; Clarke, Sarah

    1999-01-01

    Reports on an Open University evaluation study of the Informedia Digital Video Library System developed at Carnegie Mellon University (CMU). Findings indicate that there is definite potential for using the system, provided that certain modifications can be made. Results also confirm findings of the Informedia team at CMU that the content of video…

  2. Open and Anonymous Peer Review in a Digital Online Environment Compared in Academic Writing Context

    ERIC Educational Resources Information Center

    Razi, Salim

    2016-01-01

    This study compares the impact of "open" and "anonymous" peer feedback as an adjunct to teacher-mediated feedback in a digital online environment utilising data gathered on an academic writing course at a Turkish university. Students were divided into two groups with similar writing proficiencies. Students peer reviewed papers…

  3. ToxicDocs (www.ToxicDocs.org): from history buried in stacks of paper to open, searchable archives online.

    PubMed

    Rosner, David; Markowitz, Gerald; Chowkwanyun, Merlin

    2018-02-01

    As a result of a legal mechanism called discovery, the authors accumulated millions of internal corporate and trade association documents related to the introduction of new products and chemicals into workplaces and commerce. What did these private entities discuss among themselves and with their experts? The plethora of documents, both a blessing and a curse, opened new sources and interesting questions about corporate and regulatory histories. But they also posed an almost insurmountable challenge to historians. Thus emerged ToxicDocs, possible only with a technological innovation known as "Big Data." That refers to the sheer volume of new digital data and to the computational power to analyze them. Users will be able to identify what firms knew (or did not know) about the dangers of toxic substances in their products-and when. The database opens many areas to inquiry including environmental studies, business history, government regulation, and public policy. ToxicDocs will remain a resource free and open to all, anywhere in the world.

  4. Using mid-range laser scanners to digitize cultural-heritage sites.

    PubMed

    Spring, Adam P; Peters, Caradoc; Minns, Tom

    2010-01-01

    Here, we explore new, more accessible ways of modeling 3D data sets that both professionals and amateurs can employ in areas such as architecture, forensics, geotechnics, cultural heritage, and even hobbyist modeling. To support our arguments, we present images from a recent case study in digital preservation of cultural heritage using a mid-range laser scanner. Our appreciation of the increasing variety of methods for capturing 3D spatial data inspired our research. Available methods include photogrammetry, airborne lidar, sonar, total stations (a combined electronic and optical survey instrument), and mid- and close-range scanning. They all can produce point clouds of varying density. In our case study, the point cloud produced by a mid-range scanner demonstrates how open source software can make modeling and disseminating data easier. Normally, researchers would model such data using expensive specialized software, and the data wouldn't extend beyond the laser-scanning community.

  5. High-resolution terahertz inline digital holography based on quantum cascade laser

    NASA Astrophysics Data System (ADS)

    Deng, Qinghua; Li, Weihua; Wang, Xuemin; Li, Zeyu; Huang, Haochong; Shen, Changle; Zhan, Zhiqiang; Zou, Ruijiao; Jiang, Tao; Wu, Weidong

    2017-11-01

    A key requirement for putting terahertz (THz) imaging systems into practical use is high resolution. Based on a self-developed THz quantum cascade laser (QCL), we demonstrate a THz inline digital holography imaging system with high lateral resolution. In our case, the lateral resolution of the holography imaging system is pushed to about 70 μm, which is close to the system's intrinsic resolution limit and, to the best of our knowledge, much smaller than what has been reported to date. This is attributed to a series of improvements, such as shortening the QCL wavelength, increasing Nx and Ny via the synthetic-aperture method, smoothing the source beam profile, and diminishing vibration from the cryorefrigeration device. A holography system with a resolution below 100 μm opens the door for many imaging experiments and will help move THz imaging systems toward practical applications.

  6. quanTLC, an online open-source solution for videodensitometric quantification.

    PubMed

    Fichou, Dimitri; Morlock, Gertrud E

    2018-07-27

    The image is the key feature of planar chromatography. Videodensitometry by digital image conversion is the fastest way to evaluate it. Instead of scanning single sample tracks one after another, only a few clicks are needed to convert all tracks at one go. A minimalistic software tool, termed quanTLC, was newly developed that allows the quantitative evaluation of samples in a few minutes. quanTLC is open-source, online, free of charge, intuitive to use, and tailored to planar chromatography; none of the nine existing software packages for image evaluation covers all of these aspects together. quanTLC supports common image file formats for chromatogram upload. All necessary steps are included, i.e., videodensitogram extraction, preprocessing, automatic peak integration, calibration, statistical data analysis, reporting, and data export. The default options for each step are suitable for most analyses while still being tunable if needed. A one-minute video was recorded to serve as a user manual. The software's capabilities are shown on the example of a lipophilic dye mixture separation. The quantitative results were verified by comparison with those obtained by commercial videodensitometry software and by opto-mechanical slit-scanning densitometry. The data can be exported at each step to be processed in further software if required. The code was released open-source so that it can be exploited even further. The software itself is usable online without installation and is directly accessible at http://shinyapps.ernaehrung.uni-giessen.de/quanTLC. Copyright © 2018 Elsevier B.V. All rights reserved.
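    The core videodensitogram-extraction step can be sketched as collapsing a track image into a 1-D signal and integrating bands of interest. This is a minimal illustration of the general idea with hypothetical data, not quanTLC's actual code:

    ```python
    import numpy as np

    def videodensitogram(track, white=255.0):
        """Collapse a grayscale track image (rows = migration distance,
        columns = track width) into a 1-D signal by averaging the inverted
        intensity across the track width. Darker bands give higher signal."""
        track = np.asarray(track, dtype=np.float64)
        return (white - track).mean(axis=1)

    def integrate_peak(signal, start, stop):
        """Area of a peak between two row indices (simple rectangle sum)."""
        return float(signal[start:stop + 1].sum())

    # Hypothetical track: uniform background with one dark band at rows 10-12.
    track = np.full((30, 8), 250.0)
    track[10:13, :] = 100.0
    signal = videodensitogram(track)
    area = integrate_peak(signal, 8, 15)
    print(area)  # -> 490.0
    ```

    Real software would additionally subtract a fitted baseline and detect peak bounds automatically before integrating; the calibration step then maps peak areas to analyte amounts.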

  7. Open-Source Radiation Exposure Extraction Engine (RE3) with Patient-Specific Outlier Detection.

    PubMed

    Weisenthal, Samuel J; Folio, Les; Kovacs, William; Seff, Ari; Derderian, Vana; Summers, Ronald M; Yao, Jianhua

    2016-08-01

    We present an open-source, picture archiving and communication system (PACS)-integrated radiation exposure extraction engine (RE3) that provides study-, series-, and slice-specific data for automated monitoring of computed tomography (CT) radiation exposure. RE3 was built using open-source components and integrates seamlessly with the PACS. RE3 calculations of dose-length product (DLP) from the Digital Imaging and Communications in Medicine (DICOM) headers showed high agreement (R² = 0.99) with the vendor dose pages. For study-specific outlier detection, RE3 constructs robust, automatically updating multivariable regression models to predict DLP in the context of patient gender and age, scan length, water-equivalent diameter (Dw), and scanned body volume (SBV). As proof of concept, the model was trained on 811 CT chest, abdomen + pelvis (CAP) exams and 29 outliers were detected. The continuous variables used in the outlier detection model were scan length (R² = 0.45), Dw (R² = 0.70), SBV (R² = 0.80), and age (R² = 0.01). The categorical variables were gender (male average 1182.7 ± 26.3 and female 1047.1 ± 26.9 mGy·cm) and pediatric status (pediatric average 710.7 ± 73.6 mGy·cm and adult 1134.5 ± 19.3 mGy·cm).
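    The study-specific outlier detection described above can be sketched as a least-squares DLP model plus a residual threshold. The covariates, the threshold k, and the synthetic data below are hypothetical illustrations of the approach, not RE3's actual model:

    ```python
    import numpy as np

    def fit_dlp_model(X, y):
        """Ordinary least squares with an intercept column. In an RE3-style
        model, X columns might be scan length, water-equivalent diameter,
        scanned body volume, and age (all hypothetical here)."""
        A = np.column_stack([np.ones(len(X)), X])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coef

    def flag_outliers(X, y, coef, k=3.0):
        """Flag studies whose DLP residual exceeds k residual std devs."""
        A = np.column_stack([np.ones(len(X)), X])
        resid = y - A @ coef
        return np.abs(resid) > k * resid.std()

    # Synthetic demo: DLP roughly linear in scan length, one gross outlier.
    rng = np.random.default_rng(0)
    scan_len = rng.uniform(20, 60, size=50)
    dlp = 15.0 * scan_len + 100.0 + rng.normal(0, 5, size=50)
    dlp[7] += 400.0  # injected protocol error
    X = scan_len.reshape(-1, 1)
    coef = fit_dlp_model(X, dlp)
    flags = flag_outliers(X, dlp, coef)
    print(flags.sum(), flags[7])
    ```

    The "automatically updating" aspect would correspond to refitting the model as new exams arrive, so the expected-DLP baseline tracks current protocols.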

  8. Scoping Study of Machine Learning Techniques for Visualization and Analysis of Multi-source Data in Nuclear Safeguards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Yonggang

    In the implementation of nuclear safeguards, many different techniques are used to monitor the operation of nuclear facilities and safeguard nuclear materials, ranging from radiation detectors, flow monitors, video surveillance, satellite imagers, and digital seals to open-source searches and reports of onsite inspections/verifications. Each technique measures one or more unique properties related to nuclear materials or operational processes. Because these data sets have no or only loose correlations, it could be beneficial to analyze them together to improve the effectiveness and efficiency of safeguards processes. Advanced visualization techniques and machine-learning-based multi-modality analysis could be effective tools in such integrated analysis. In this project, we will conduct a survey of existing visualization and analysis techniques for multi-source data and assess their potential value in nuclear safeguards.

  9. Direct Synthesis of Carbon Nanotube Field Emitters on Metal Substrate for Open-Type X-ray Source in Medical Imaging.

    PubMed

    Gupta, Amar Prasad; Park, Sangjun; Yeo, Seung Jun; Jung, Jaeik; Cho, Chonggil; Paik, Sang Hyun; Park, Hunkuk; Cho, Young Chul; Kim, Seung Hoon; Shin, Ji Hoon; Ahn, Jeung Sun; Ryu, Jehwang

    2017-07-29

    We report the design, fabrication and characterization of a carbon nanotube enabled open-type X-ray system for medical imaging. We directly grew the carbon nanotubes used as electron emitter for electron gun on a non-polished raw metallic rectangular-rounded substrate with an area of 0.1377 cm² through a plasma enhanced chemical vapor deposition system. The stable field emission properties with triode electrodes after electrical aging treatment showed an anode emission current of 0.63 mA at a gate field of 7.51 V/μm. The 4.5-inch cubic shape open type X-ray system was developed consisting of an X-ray aperture, a vacuum part, an anode high voltage part, and a field emission electron gun including three electrodes with focusing, gate and cathode electrodes. Using this system, we obtained high-resolution X-ray images accelerated at 42-70 kV voltage by digital switching control between emitter and ground electrode.

  10. Direct Synthesis of Carbon Nanotube Field Emitters on Metal Substrate for Open-Type X-ray Source in Medical Imaging

    PubMed Central

    Gupta, Amar Prasad; Park, Sangjun; Yeo, Seung Jun; Jung, Jaeik; Cho, Chonggil; Paik, Sang Hyun; Park, Hunkuk; Cho, Young Chul; Kim, Seung Hoon; Shin, Ji Hoon; Ahn, Jeung Sun; Ryu, Jehwang

    2017-01-01

    We report the design, fabrication and characterization of a carbon nanotube enabled open-type X-ray system for medical imaging. We directly grew the carbon nanotubes used as electron emitter for electron gun on a non-polished raw metallic rectangular-rounded substrate with an area of 0.1377 cm2 through a plasma enhanced chemical vapor deposition system. The stable field emission properties with triode electrodes after electrical aging treatment showed an anode emission current of 0.63 mA at a gate field of 7.51 V/μm. The 4.5-inch cubic shape open type X-ray system was developed consisting of an X-ray aperture, a vacuum part, an anode high voltage part, and a field emission electron gun including three electrodes with focusing, gate and cathode electrodes. Using this system, we obtained high-resolution X-ray images accelerated at 42–70 kV voltage by digital switching control between emitter and ground electrode. PMID:28773237

  11. The role of the Jotello F. Soga Library in the digital preservation of South African veterinary history.

    PubMed

    Breytenbach, Amelia; Lourens, Antoinette; Marsh, Susan

    2013-04-26

    The history of veterinary science in South Africa can only be appreciated, studied, researched and passed on to coming generations if historical sources are readily available. In most countries, material and sources with historical value are often difficult to locate, dispersed over a large area and not part of the conventional book and journal literature. The Faculty of Veterinary Science of the University of Pretoria and its library has access to a large collection of historical sources. The collection consists of photographs, photographic slides, documents, proceedings, posters, audio-visual material, postcards and other memorabilia. Other institutions in the country are also approached if relevant sources are identified in their collections. The University of Pretoria's institutional repository, UPSpace, was launched in 2006. This provided the Jotello F. Soga Library with the opportunity to fill the repository with relevant digitised collections of diverse heritage and learning resources that can contribute to the long-term preservation and accessibility of historical veterinary sources. These collections are available for use not only by historians and researchers in South Africa but also elsewhere in Africa and the rest of the world. Important historical collections such as the Arnold Theiler collection, the Jotello F. Soga collection and collections of the Onderstepoort Journal of Veterinary Research and the Journal of the South African Veterinary Association are highlighted. The benefits of an open access digital repository, the importance of collaboration across the veterinary community and other prerequisites for the sustainability of a digitisation project and the importance of metadata to enhance accessibility are covered.

  12. Evaluation of the OpenCL AES Kernel using the Intel FPGA SDK for OpenCL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Zheming; Yoshii, Kazutomo; Finkel, Hal

    The OpenCL standard is an open programming model for accelerating algorithms on heterogeneous computing systems. OpenCL extends the C-based programming language for developing portable codes on different platforms such as CPUs, graphics processing units (GPUs), digital signal processors (DSPs) and field-programmable gate arrays (FPGAs). The Intel FPGA SDK for OpenCL is a suite of tools that allows developers to abstract away the complex FPGA-based development flow for a high-level software development flow. Users can focus on the design of hardware-accelerated kernel functions in OpenCL and then direct the tools to generate the low-level FPGA implementations. The approach makes FPGA-based development more accessible to software users as the need for hybrid computing using CPUs and FPGAs increases. It can also significantly reduce the hardware development time, as users can evaluate different ideas in a high-level language without deep FPGA domain knowledge. In this report, we evaluate the performance of the AES kernel using the Intel FPGA SDK for OpenCL and a Nallatech 385A FPGA board. Compared to the M506 module, the board provides more hardware resources for a larger design exploration space. The kernel performance is measured with the compute kernel throughput, an upper bound to the FPGA throughput. The report presents the experimental results in detail. The Appendix lists the kernel source code.
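    The "compute kernel throughput" metric used in the report is simply the volume of data the kernel processes divided by the kernel execution time, measured without host-to-FPGA transfer overhead. A minimal sketch of that calculation (the example numbers are hypothetical, not results from the report):

```python
def compute_kernel_throughput(bytes_processed, kernel_time_s):
    """Compute-kernel throughput in GB/s: data processed by the kernel
    divided by kernel execution time.  Host<->FPGA transfers are
    excluded, so this is an upper bound on end-to-end throughput."""
    return bytes_processed / kernel_time_s / 1e9

# Hypothetical example: 256 MiB of AES blocks encrypted in 0.5 s.
gbps = compute_kernel_throughput(256 * 1024 * 1024, 0.5)
```

Because transfer time is excluded, comparing this figure across kernels only says how fast the datapath itself runs, not how fast a full application would.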

  13. A Digital View of History: Drawing and Discussing Models of Historical Concepts

    ERIC Educational Resources Information Center

    Manfra, Meghan McGlinn; Coven, Robert M.

    2011-01-01

    Digital history refers to "the study of the past using a variety of electronically reproduced primary source texts, images, and artifacts as well as the constructed narratives, accounts, or presentations that result from digital historical inquiry." Access to digitized primary sources can promote active instruction in historical thinking. A…

  14. Recognising Informal Elearning with Digital Badging: Evidence for a Sustainable Business Model

    ERIC Educational Resources Information Center

    Law, Patrina

    2015-01-01

    Digital badging as a trend in education is now recognised. It offers a way to reward and motivate, providing evidence of skills and achievements. Badged Open Courses (BOCs) were piloted by The Open University (OU) in 2013. The project built on research into the motivations and profiles of learners using free educational resources which the OU…

  15. Photon-Number-Resolving Transition-Edge Sensors for the Metrology of Quantum Light Sources

    NASA Astrophysics Data System (ADS)

    Schmidt, M.; von Helversen, M.; López, M.; Gericke, F.; Schlottmann, E.; Heindel, T.; Kück, S.; Reitzenstein, S.; Beyer, J.

    2018-05-01

    Low-temperature photon-number-resolving detectors allow for direct access to the photon number distribution of quantum light sources and can thus be exploited to explore the photon statistics of, e.g., solid-state-based non-classical light sources. In this work, we report on the setup and calibration of a detection system based on fiber-coupled tungsten transition-edge sensors (W-TESs). Our stand-alone system comprises two W-TESs, read out by two 2-stage-SQUID current sensors, operated in a compact detector unit that is integrated in an adiabatic demagnetization refrigerator. Fast low-noise analog amplifiers and digitizers are used for signal acquisition. The detection efficiency of the single-mode fiber-coupled detector system in the spectral region of interest (850-950 nm) is determined to be larger than 87 %. The presented detector system opens up new routes in the characterization of quantum light sources for quantum information, quantum-enhanced sensing and quantum metrology.
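    The role of the quoted detection efficiency can be made concrete with the standard binomial loss model of quantum optics: a detector of efficiency η sees each photon independently with probability η, which maps the true photon-number distribution onto the measured one. The sketch below illustrates that textbook model; it is not the calibration procedure of the paper.

```python
from math import comb

def apply_loss(p_true, eta):
    """Map a photon-number distribution p_true[n] through a lossy
    channel of efficiency eta: each of n photons survives independently
    with probability eta (binomial loss model)."""
    p_meas = [0.0] * len(p_true)
    for n, pn in enumerate(p_true):
        for k in range(n + 1):
            p_meas[k] += pn * comb(n, k) * eta**k * (1 - eta)**(n - k)
    return p_meas

# An ideal single-photon source seen through a detector of
# efficiency 0.87 registers one photon 87% of the time.
p = apply_loss([0.0, 1.0], 0.87)
```

Inverting this relation is what lets a calibrated TES system reconstruct the source's true photon statistics from measured click distributions.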

  16. Preliminary Geologic Map of the Topanga 7.5' Quadrangle, Southern California: A Digital Database

    USGS Publications Warehouse

    Yerkes, R.F.; Campbell, R.H.

    1995-01-01

    INTRODUCTION This Open-File report is a digital geologic map database. This pamphlet serves to introduce and describe the digital data. There is no paper map included in the Open-File report. This digital map database is compiled from previously published sources combined with some new mapping and modifications in nomenclature. The geologic map database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. For detailed descriptions of the units, their stratigraphic relations and sources of geologic mapping, consult Yerkes and Campbell (1994). More specific information about the units may be available in the original sources. The content and character of the database and methods of obtaining it are described herein. The geologic map database itself, consisting of three ARC coverages and one base layer, can be obtained over the Internet or by magnetic tape copy as described below. The processes of extracting the geologic map database from the tar file, and importing the ARC export coverages (procedure described herein), will result in the creation of an ARC workspace (directory) called 'topnga.' The database was compiled using ARC/INFO version 7.0.3, a commercial Geographic Information System (Environmental Systems Research Institute, Redlands, California), with version 3.0 of the menu interface ALACARTE (Fitzgibbon and Wentworth, 1991; Fitzgibbon, 1991; Wentworth and Fitzgibbon, 1991). It is stored in uncompressed ARC export format (ARC/INFO version 7.x) in a compressed UNIX tar (tape archive) file. The tar file was compressed with gzip, and may be uncompressed with gzip, which is available free of charge via the Internet from the gzip Home Page (http://w3.teaser.fr/~jlgailly/gzip). A tar utility is required to extract the database from the tar file. 
This utility is included in most UNIX systems, and can be obtained free of charge via the Internet from Internet Literacy's Common Internet File Formats Webpage (http://www.matisse.net/files/formats.html). ARC/INFO export files (files with the .e00 extension) can be converted into ARC/INFO coverages in ARC/INFO (see below) and can be read by some other Geographic Information Systems, such as MapInfo via ArcLink and ESRI's ArcView (version 1.0 for Windows 3.1 to 3.11 is available for free from ESRI's web site: http://www.esri.com). This release differs from the original release in three respects: 1. Different base layer - The original digital database included separates clipped out of the Los Angeles 1:100,000 sheet. This release includes a vectorized scan of a scale-stable negative of the Topanga 7.5 minute quadrangle. 2. Map projection - The files in the original release were in polyconic projection. The projection used in this release is state plane, which allows for the tiling of adjacent quadrangles. 3. File compression - The files in the original release were compressed with UNIX compression. The files in this release are compressed with gzip.
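    The two-step unpacking the pamphlet describes (gunzip, then a tar utility) can also be done in one call from Python, whose standard-library `tarfile` module reads gzip-compressed archives directly. The archive name below is illustrative, not the actual USGS filename.

```python
import tarfile

def extract_workspace(archive_path, dest="."):
    """Uncompress and extract a gzip-compressed tar file in one step.

    Opening with mode "r:gz" handles the gzip layer, so no separate
    gunzip step is needed.  For the database described above this
    would create the 'topnga' ARC workspace directory under dest.
    """
    with tarfile.open(archive_path, "r:gz") as tf:
        tf.extractall(dest)

# Illustrative usage (filename is hypothetical):
# extract_workspace("topnga.tar.gz")
```

This is equivalent to `gzip -d` followed by `tar xf`, which remains the route for users working directly on a UNIX shell.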

  17. Electronic textbooks as a professional resource after dental school.

    PubMed

    Bates, Michael L; Strother, Elizabeth A; Brunet, Darlene P; Gallo, John R

    2012-05-01

    In two previous studies of dental students' attitudes about the VitalSource Bookshelf, a digital library of dental textbooks, students expressed negative opinions about owning and reading electronic textbooks. With the assumption that dentists would find the digital textbooks useful for patient care, the authors surveyed recent graduates to determine if their attitude toward the VitalSource Bookshelf had changed. A brief survey was sent to 119 alumni from the classes of 2009 and 2010 of one U.S. dental school. Forty-seven (39.5 percent) completed the questionnaire. Eighteen respondents (48.3 percent) reported using the e-textbooks often or sometimes. The twenty-nine dentists who said they have not used the collection since graduation reported preferring print books or other online sources or having technical problems when downloading the books to a new computer. Only five respondents selected the VitalSource Bookshelf as a preferred source of professional information. Most of the respondents reported preferring to consult colleagues (37.8 percent), the Internet (20 percent), or hardcopy books (17.8 percent) for information. When asked in an open-ended question to state their opinion of the Bookshelf, nineteen (42.2 percent) responded positively, but almost one-third of these only liked the search feature. Six respondents reported that they never use the program. Twenty-two said they have had technical problems with the Bookshelf, including fifteen who have not been able to install it on a new computer. Many of them said they have not followed up with either the dental school or VitalSource support services to overcome this problem. Our study suggests that dentists, similar to dental students, dislike reading electronic textbooks, even with the advantage of searching a topic across more than sixty dental titles.

  18. Statistical physics of vaccination

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Bauch, Chris T.; Bhattacharyya, Samit; d'Onofrio, Alberto; Manfredi, Piero; Perc, Matjaž; Perra, Nicola; Salathé, Marcel; Zhao, Dawei

    2016-12-01

    Historically, infectious diseases caused considerable damage to human societies, and they continue to do so today. To help reduce their impact, mathematical models of disease transmission have been studied to help understand disease dynamics and inform prevention strategies. Vaccination-one of the most important preventive measures of modern times-is of great interest both theoretically and empirically. And in contrast to traditional approaches, recent research increasingly explores the pivotal implications of individual behavior and heterogeneous contact patterns in populations. Our report reviews the developmental arc of theoretical epidemiology with emphasis on vaccination, as it led from classical models assuming homogeneously mixing (mean-field) populations and ignoring human behavior, to recent models that account for behavioral feedback and/or population spatial/social structure. Many of the methods used originated in statistical physics, such as lattice and network models, and their associated analytical frameworks. Similarly, the feedback loop between vaccinating behavior and disease propagation forms a coupled nonlinear system with analogs in physics. We also review the new paradigm of digital epidemiology, wherein sources of digital data such as online social media are mined for high-resolution information on epidemiologically relevant individual behavior. Armed with the tools and concepts of statistical physics, and further assisted by new sources of digital data, models that capture nonlinear interactions between behavior and disease dynamics offer a novel way of modeling real-world phenomena, and can help improve health outcomes. We conclude the review by discussing open problems in the field and promising directions for future research.
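    The classical starting point the review describes, a homogeneously mixing (mean-field) population, can be illustrated with a minimal SIR model extended by a constant vaccination rate that moves susceptibles directly into the removed class. All parameter values below are illustrative, not taken from the review.

```python
def sir_vaccination(beta=0.3, gamma=0.1, v=0.002, s0=0.99, i0=0.01,
                    dt=0.1, steps=2000):
    """Mean-field SIR dynamics with vaccination, integrated by forward
    Euler.  beta: transmission rate, gamma: recovery rate, v: constant
    vaccination rate of susceptibles.  Returns the (S, I, R) trajectory.
    """
    s, i, r = s0, i0, 0.0
    traj = [(s, i, r)]
    for _ in range(steps):
        ds = -beta * s * i - v * s
        di = beta * s * i - gamma * i
        dr = gamma * i + v * s
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        traj.append((s, i, r))
    return traj

traj = sir_vaccination()
# The epidemic grows while beta*S > gamma, peaks, then dies out;
# S + I + R stays constant throughout.
```

The behavioral-feedback models the review surveys replace the fixed rate `v` with a function of perceived risk, which is what couples the disease dynamics to human behavior in a nonlinear loop.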

  19. OpenDrop: An Integrated Do-It-Yourself Platform for Personal Use of Biochips

    PubMed Central

    Alistar, Mirela; Gaudenz, Urs

    2017-01-01

    Biochips, or digital labs-on-chip, are developed with the purpose of being used by laboratory technicians or biologists in laboratories or clinics. In this article, we expand this vision with the goal of enabling everyone, regardless of their expertise, to use biochips for their own personal purposes. We developed OpenDrop, an integrated electromicrofluidic platform that allows users to develop and program their own bio-applications. We address the main challenges that users may encounter: accessibility, bio-protocol design and interaction with microfluidics. OpenDrop consists of a do-it-yourself biochip, an automated software tool with visual interface and a detailed technique for at-home operations of microfluidics. We report on two years of use of OpenDrop, released as an open-source platform. Our platform attracted a highly diverse user base with participants originating from maker communities, academia and industry. Our findings show that 47% of attempts to replicate OpenDrop were successful, the main challenge remaining the assembly of the device. In terms of usability, the users managed to operate their platforms at home and are working on designing their own bio-applications. Our work provides a step towards a future in which everyone will be able to create microfluidic devices for their personal applications, thereby democratizing parts of health care. PMID:28952524

  20. Fiber optic spectroscopic digital imaging sensor and method for flame properties monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zelepouga, Serguei A; Rue, David M; Saveliev, Alexei V

    2011-03-15

    A system for real-time monitoring of flame properties in combustors and gasifiers which includes an imaging fiber optic bundle having a light receiving end and a light output end and a spectroscopic imaging system operably connected with the light output end of the imaging fiber optic bundle. Focusing of the light received by the light receiving end of the imaging fiber optic bundle by a wall disposed between the light receiving end of the fiber optic bundle and a light source, which wall forms a pinhole opening aligned with the light receiving end.

  1. Tools, Techniques, and Applications: Normalizing the VR Paradigm

    NASA Technical Reports Server (NTRS)

    Duncan, Gaeme

    2008-01-01

    Oshynee's precision Learning Objective performance factor rubrics with associated behavioral anchors integrate with Thinking Worlds(TradeMark) to provide event data recording and dynamic prescriptive feedback. Thinking Worlds(TradeMark) provides SCORM parametric data for reporting within the game and within an overarching curriculum or workplace evaluation strategy. Open-sourced, browser-based digital dashboard reporting tools collect data from TW, LMS, LCMS, HR, and workplace metrics or control systems. The games may be delivered across the internet or in a range of networked and stand-alone methods using the delivery model(s) required by the host organization.

  2. Photogrammetric Measurements in Fixed Wing Uav Imagery

    NASA Astrophysics Data System (ADS)

    Gülch, E.

    2012-07-01

    Several flights have been undertaken with PAMS (Photogrammetric Aerial Mapping System) by Germap, Germany, which is briefly introduced. This system is based on the SmartPlane fixed-wing UAV and a CANON IXUS camera system. The plane is equipped with GPS and has an infrared sensor system to estimate attitude values. Software has been developed to link the PAMS output to a standard photogrammetric processing chain built on Trimble INPHO. The linking of the image files and image IDs and the handling of different cases with partly corrupted output have to be solved to generate an INPHO project file. Based on this project file, the software packages MATCH-AT, MATCH-T DSM, OrthoMaster and OrthoVista are applied for digital aerial triangulation, DTM/DSM generation and, finally, digital orthomosaic generation. The focus has been on investigating how to adapt the "usual" parameters of the digital aerial triangulation and other software to the UAV flight conditions, which show high overlaps, large kappa angles and a certain image blur in case of turbulence. It was found that the selected parameter setup shows quite stable behaviour and can be applied to other flights. A comparison is made to results from other open source multi-ray matching software to handle the described flight conditions. Flights over the same area at different times have been compared to each other. The major objective here was to see how far the results differ relative to each other without access to ground control data, which would have potential for applications with low requirements on absolute accuracy. The results show that influences of weather and illumination are visible. The "unusual" flight pattern, with large time differences between neighbouring strips, has an influence on the AT and DTM/DSM generation. The results obtained so far indicate problems in the stability of the camera calibration. 
This clearly requires the use of GCPs for all projects, independent of the application. The effort is estimated to be even higher than expected, as self-calibration will also be an issue in handling a possibly unstable camera calibration. To overcome some of the encountered problems with the very specific features of UAV flights, a software package, UAVision, was developed based on open source libraries to produce input data for bundle adjustment of UAV images by PAMS. The empirical test results show a considerable improvement in the matching of tie points. The results do, however, show that the open source bundle adjustment was not applicable to this type of imagery. This still leaves the possibility of using the improved tie point correspondences in the commercial AT package.

  3. SWARM: A Compact High Resolution Correlator and Wideband VLBI Phased Array Upgrade for SMA

    NASA Astrophysics Data System (ADS)

    Weintroub, Jonathan

    2014-06-01

    A new digital back end (DBE) is being commissioned on Mauna Kea. The “SMA Wideband Astronomical ROACH2 Machine”, or SWARM, processes a 4 GHz usable band in single polarization mode and is flexibly reconfigurable for 2 GHz full Stokes dual polarization. The hardware is based on the open source Reconfigurable Open Architecture Computing Hardware 2 (ROACH2) platform from the Collaboration for Astronomy Signal Processing and Electronics Research (CASPER). A 5 GSps quad-core analog-to-digital converter board uses a commercial chip from e2v installed on a CASPER-standard printed circuit board designed by Homin Jiang’s group at ASIAA. Two ADC channels are provided per ROACH2, each sampling a 2.3 GHz Nyquist band generated by a custom wideband block downconverter (BDC). The ROACH2 logic includes 16k-channel Polyphase Filterbank (F-engine) per input followed by a 10 GbE switch based corner-turn which feeds into correlator-accumulator logic (X-engines) co-located with the F-engines. This arrangement makes very effective use of a small amount of digital hardware (just 8 ROACH2s in 1U rack mount enclosures). The primary challenge now is to meet timing at full speed for a large and very complex FPGA bit code. Design of the VLBI phased sum and recorder interface logic is also in process. Our poster will describe the instrument design, with the focus on the particular challenges of ultra wideband signal processing. Early connected commissioning and science verification data will be presented.

  4. ExpertEyes: open-source, high-definition eyetracking.

    PubMed

    Parada, Francisco J; Wyatte, Dean; Yu, Chen; Akavipat, Ruj; Emerick, Brandi; Busey, Thomas

    2015-03-01

    ExpertEyes is a low-cost, open-source package of hardware and software that is designed to provide portable high-definition eyetracking. The project involves several technological innovations, including portability, high-definition video recording, and multiplatform software support. It was designed for challenging recording environments, and all processing is done offline to allow for optimization of parameter estimation. The pupil and corneal reflection are estimated using a novel forward eye model that simultaneously fits both the pupil and the corneal reflection with full ellipses, addressing a common situation in which the corneal reflection sits at the edge of the pupil and therefore breaks the contour of the ellipse. The accuracy and precision of the system are comparable to or better than what is available in commercial eyetracking systems, with a typical accuracy of less than 0.4° and best accuracy below 0.3°, and with a typical precision (SD method) around 0.3° and best precision below 0.2°. Part of the success of the system comes from a high-resolution eye image. The high image quality results from uncasing common digital camcorders and recording directly to SD cards, which avoids the limitations of the analog NTSC format. The software is freely downloadable, and complete hardware plans are available, along with sources for custom parts.
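    The two figures of merit quoted in the abstract have standard definitions in eyetracking: accuracy is the mean angular offset of gaze samples from a known target, and precision (SD method) is the dispersion of the samples about their own mean. The sketch below illustrates those definitions on hypothetical fixation data; it is not code from the ExpertEyes package.

```python
import math

def accuracy_and_precision(samples, target):
    """Accuracy: mean angular distance (deg) of gaze samples from the
    target.  Precision (SD method): standard deviation of the samples
    about their own mean, combining x and y components."""
    n = len(samples)
    accuracy = sum(math.hypot(x - target[0], y - target[1])
                   for x, y in samples) / n
    mx = sum(x for x, _ in samples) / n
    my = sum(y for _, y in samples) / n
    var = sum((x - mx) ** 2 + (y - my) ** 2 for x, y in samples) / n
    return accuracy, math.sqrt(var)

# Hypothetical fixation samples (deg) around a target at (0, 0):
acc, prec = accuracy_and_precision(
    [(0.3, 0.1), (0.25, -0.1), (0.35, 0.0)], (0, 0))
```

Note that a tracker can be precise but inaccurate (tight cluster, offset from the target), which is why both numbers are reported separately.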

  5. Digital Histories for the Digital Age: Collaborative Writing in Large Lecture Courses

    ERIC Educational Resources Information Center

    Soh, Leen-Kiat; Khandaker, Nobel; Thomas, William G.

    2013-01-01

    The digital environment has had an immense effect on American society, learning, and education: we have more sources available at our fingertips than any previous generation. Teaching and learning with these new sources, however, has been a challenging transition. Students are confronted with an ocean of digital objects and need skills to navigate…

  6. Digital curation and online resources: digital scanning of surgical tools at the royal college of physicians and surgeons of Glasgow for an open university learning resource.

    PubMed

    Earley, Kirsty; Livingstone, Daniel; Rea, Paul M

    2017-01-01

    Collection preservation is essential for the cultural status of any city. However, presenting a collection publicly risks damage. Recently this drawback has been overcome by digital curation. Described here is a method of digitisation using photogrammetry and virtual reality software. Items were selected from the Royal College of Physicians and Surgeons of Glasgow archives, and implemented into an online learning module for the Open University. Images were processed via Agisoft Photoscan, Autodesk Memento, and Garden Gnome Object 2VR. Although problems arose due to specularity, 2VR digital models were developed for online viewing. Future research must minimise the difficulty of digitising specular objects.

  7. A digital version of the 1970 U.S. Geological Survey topographic map of the San Francisco Bay region, three sheets, 1:125,000

    USGS Publications Warehouse

    Aitken, Douglas S.

    1997-01-01

    This Open-File report is a digital topographic map database. It contains a digital version of the 1970 U.S. Geological Survey topographic map of the San Francisco Bay Region (3 sheets), at a scale of 1:125,000. These ARC/INFO coverages are in vector format. The vectorization process has distorted characters representing letters and numbers, as well as some road and other symbols, making them difficult to read in some instances. This pamphlet serves to introduce and describe the digital data. There is no paper map included in the Open-File report. The content and character of the database and methods of obtaining it are described herein.

  8. Open Access to Research Articles and Data: Library Roles

    NASA Astrophysics Data System (ADS)

    Joseph, Heather

    2015-08-01

    Over the past decade, a handful of key developments have caused scholars and researchers to rethink not only the way they conduct their work, but also the way in which they communicate it to others. The advent of the Internet has provided unprecedented opportunities for immediate, cost-effective global connectivity, opening up new possibilities for collaboration and communication. This has resulted in scholarship increasingly being conducted in the online environment, and a vast amount of new digital information being generated and made widely available to those interested in using it. Additionally, the Internet is a dynamic environment, with new channels for producing and sharing information in a myriad of formats emerging frequently. In higher education, the momentum of the burgeoning movement towards "open" sharing of information of all kinds continues to gain traction. In particular, advancements in the areas of opening up access to articles and research data are increasingly visible. In both of these areas, academic and research libraries are playing important, central roles in promoting awareness of the potential costs and benefits of a more open research environment, as well as defining new roles for libraries in this digital environment. As this push for greater openness continues, these fronts are intersecting in interesting and potentially transformative ways. The Open Access and Open Data movements share fundamental philosophical commonalities that make collaboration a natural outcome. Both movements place a premium on reducing barriers to discovering and accessing pertinent digital information. Perhaps even more significantly, both explicitly recognize that enabling productive use of digital information is key to unlocking its full value. 
As a result of these shared priorities, there is a wide variety of common strategies that libraries can take to help advance research, presenting new opportunities for deeper collaboration. This talk will explore what is happening in these "open" movements from both a practical and policy standpoint, and how this might directly impact academia, the research community, and especially libraries.

  9. Realizing the increased potential of an open-system high-definition digital projector design

    NASA Astrophysics Data System (ADS)

    Daniels, Reginald

    1999-05-01

    Modern video projectors are becoming more compact and capable. Various display technologies are very competitive and are delivering higher-performance, more compact projectors to market at an ever quickening pace. However, end users are often left with the daunting task of integrating 'off-the-shelf' projectors into a previously existing system. As projectors become more digitally enhanced and digital projector technology matures, there will be a series of designs, each restricted by the state of the art at the time of manufacturing. In order to allow the most growth and performance for a given price, many design decisions will be made and revisited over a period of years or decades. A modular, open digital system design concept is indeed a major challenge for future high-definition digital displays for all applications.

  10. Digital questionnaire platform in the Danish Blood Donor Study.

    PubMed

    Burgdorf, K S; Felsted, N; Mikkelsen, S; Nielsen, M H; Thørner, L W; Pedersen, O B; Sørensen, E; Nielsen, K R; Bruun, M T; Werge, T; Erikstrup, C; Hansen, T; Ullum, H

    2016-10-01

    The Danish Blood Donor Study (DBDS) is a prospective, population-based study and biobank. Since 2010, 100,000 Danish blood donors have been included in the study. Prior to July 2015, all participating donors had to complete a paper-based questionnaire. Here we describe the establishment of a digital tablet-based questionnaire platform implemented in blood bank sites across Denmark. The digital questionnaire was developed using the open source survey software tool LimeSurvey. Participants access the questionnaire online over a standard SSL-encrypted HTTP connection using their personal civil registration numbers. The questionnaire is placed on a front-end web server, and a collection server retrieves the completed questionnaires. Data from blood samples, register data, genetic data and verification of signed informed consent are then transferred to and merged with the questionnaire data in the DBDS database. The digital platform enables personalized questionnaires, presenting only questions relevant to the specific donor by hiding unneeded follow-up questions based on the screening question results. New versions of questionnaires are immediately available at all blood collection facilities when new projects are initiated. The digital platform is a faster, more cost-effective and more flexible solution for collecting valid data from participating donors compared to paper-based questionnaires. The overall system can be used around the world with an Internet connection, but the required level of security depends on the sensitivity of the data to be collected. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
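    The personalization described above, showing a follow-up question only when its screening question was answered a certain way, amounts to conditional visibility rules. A minimal sketch of that logic follows; the question IDs and the `show_if` field are hypothetical illustrations, not LimeSurvey's actual data model.

```python
def visible_questions(questions, answers):
    """Return the IDs of questions a donor should see: a follow-up
    question is shown only when its screening condition is met.
    Each question may carry show_if = (question_id, required_answer)."""
    shown = []
    for q in questions:
        cond = q.get("show_if")
        if cond is None or answers.get(cond[0]) == cond[1]:
            shown.append(q["id"])
    return shown

# Hypothetical screening/follow-up pair:
questions = [
    {"id": "smoker"},
    {"id": "cigarettes_per_day", "show_if": ("smoker", "yes")},
    {"id": "exercise"},
]
ids = visible_questions(questions, {"smoker": "no"})
```

With the answer "no", the follow-up `cigarettes_per_day` is hidden and only the two unconditional questions remain, which is the behavior the platform uses to keep each donor's questionnaire short.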

  11. The production of digital and printed resources from multiple modalities using visualization and three-dimensional printing techniques.

    PubMed

    Shui, Wuyang; Zhou, Mingquan; Chen, Shi; Pan, Zhouxian; Deng, Qingqiong; Yao, Yong; Pan, Hui; He, Taiping; Wang, Xingce

    2017-01-01

    Virtual digital resources and printed models have become indispensable tools for medical training and surgical planning. Nevertheless, printed models of soft tissue organs are still challenging to reproduce. This study adopts open source packages and a low-cost desktop 3D printer to convert multiple modalities of medical images to digital resources (volume rendering images and digital models) and lifelike printed models, which are useful to enhance our understanding of the geometric structure and complex spatial nature of anatomical organs. Neuroimaging technologies such as CT, CTA, MRI, and TOF-MRA collect serial medical images. The procedures for producing digital resources can be divided into volume rendering and medical image reconstruction. To verify the accuracy of reconstruction, this study presents qualitative and quantitative assessments. Subsequently, digital models are archived as stereolithography format files and imported to the bundled software of the 3D printer. The printed models are produced using polylactide filament materials. We have successfully converted multiple modalities of medical images to digital resources and printed models for both hard organs (cranial base and tooth) and soft tissue organs (brain, blood vessels of the brain, the heart chambers and vessel lumen, and pituitary tumor). Multiple digital resources and printed models were provided to illustrate the anatomical relationship between organs and complicated surrounding structures. Three-dimensional printing (3DP) is a powerful tool to produce lifelike and tangible models. We present an available and cost-effective method for producing both digital resources and printed models. The choice of modality in medical images and the processing approach is important when reproducing soft tissue organs models. The accuracy of the printed model is determined by the quality of organ models and 3DP. 
With the ongoing improvement of printing techniques and the variety of materials available, 3DP will become an indispensable tool in medical training and surgical planning.
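    The study archives its digital models as stereolithography (STL) files before sending them to the printer's bundled software. As a concrete illustration of that interchange format (not the authors' pipeline), the sketch below writes a mesh of triangles to an ASCII STL file:

```python
def write_ascii_stl(path, triangles, name="model"):
    """Write triangles to an ASCII STL file.  Each triangle is three
    (x, y, z) vertices; the facet normal is left as (0, 0, 0), which
    most slicers recompute from the vertex winding order."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for tri in triangles:
            f.write("  facet normal 0 0 0\n    outer loop\n")
            for x, y, z in tri:
                f.write(f"      vertex {x} {y} {z}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# One-triangle example:
write_ascii_stl("demo.stl", [[(0, 0, 0), (1, 0, 0), (0, 1, 0)]])
```

Real reconstructions of organs contain hundreds of thousands of facets, which is why binary STL is usually preferred in practice; the ASCII form is shown here because it is human-readable.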

  12. Reproducible Operating Margins on a 72800-Device Digital Superconducting Chip (Open Access)

    DTIC Science & Technology

    2015-10-28

    superconductor digital logic. Keywords: flux trapping, yield, digital Superconductor digital technology offers fundamental advantages over conventional...trapping in the superconductor films can degrade or preclude correct circuit operation. Scaling superconductor technology is now possible due to recent...advances in circuit design embodied in reciprocal quantum logic (RQL) [2, 3] and recent advances in superconductor integrated circuit fabrication, which

  13. Construction of a Digital Learning Environment Based on Cloud Computing

    ERIC Educational Resources Information Center

    Ding, Jihong; Xiong, Caiping; Liu, Huazhong

    2015-01-01

    Constructing the digital learning environment for ubiquitous learning and asynchronous distributed learning has opened up immense amounts of concrete research. However, current digital learning environments do not fully fulfill the expectations on supporting interactive group learning, shared understanding and social construction of knowledge.…

  14. Outcasts on the Inside: Academics Reinventing Themselves Online

    ERIC Educational Resources Information Center

    Costa, Cristina

    2015-01-01

    Recent developments in digital scholarship point out that academic practices supported by technologies may not only be transformed through the obvious process of digitization, but also renovated through distributed knowledge networks that digital technologies enable, and the practices of openness that such networks develop. Yet, this apparent…

  15. 78 FR 43882 - Sunshine Act Meeting; Open Commission Meeting; Friday, July 19, 2013

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-22

    ... the delivery of video programming. 2 TITLE: Presentation on LEAD Recommendations and Digital Learning... Five Point Blueprint recommending a national initiative to expand digital learning in K-12 education... teachers at Kenmore are using digital technologies and broadband connectivity to expand learning...

  16. Evaluation of using digital gravity field models for zoning map creation

    NASA Astrophysics Data System (ADS)

    Loginov, Dmitry

    2018-05-01

    At present, digital cartographic models of geophysical fields are taking on special significance in geophysical mapping. One important direction for their application is the creation of zoning maps, which allow the morphology of a geophysical field to be taken into account in the automated choice of contour intervals. The purpose of this work is a comparative evaluation of various digital models in the creation of an integrated gravity field zoning map. For comparison, the digital model of the gravity field of Russia, created from an analog map at a scale of 1 : 2 500 000, and the open global gravity field model of the Earth, WGM2012, were chosen. As a result of the experimental work, four integrated gravity field zoning maps were obtained using raw and processed data from each gravity field model. The study demonstrates the possibility of using open data to create integrated zoning maps, provided that the noise component of the model is eliminated by processing in specialized software systems. In this case, for solving the problem of automated choice of contour intervals, open digital models are not inferior to the regional gravity field models created for individual countries. This allows integrated zoning maps to be created universally and independently, regardless of the level of detail of the digital cartographic model of geophysical fields.
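
    One simple way to make contour intervals follow the morphology of the field, as the entry above discusses, is to place class breaks at quantiles of the field values so each class covers roughly equal counts. This is an illustrative stand-in, not the authors' algorithm, and the anomaly values below are invented.

```python
# Illustrative sketch: equal-count (quantile) class breaks for a gravity-field
# grid, so break positions adapt to the distribution of field values.

def quantile_breaks(values, n_classes):
    """Return n_classes - 1 break values splitting `values` into classes
    of approximately equal count."""
    ordered = sorted(values)
    n = len(ordered)
    breaks = []
    for k in range(1, n_classes):
        idx = round(k * (n - 1) / n_classes)   # index of the k-th quantile
        breaks.append(ordered[idx])
    return breaks

# Synthetic anomaly values (mGal): dense cluster near zero, a few highs.
field = [-3, -2, -1, 0, 0, 1, 1, 2, 8, 15, 21, 40]
breaks = quantile_breaks(field, 4)
```

    With fixed-width intervals the dense cluster near zero would collapse into one class; the quantile breaks instead resolve it while still separating the sparse highs.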

  17. Low-cost motility tracking system (LOCOMOTIS) for time-lapse microscopy applications and cell visualisation.

    PubMed

    Lynch, Adam E; Triajianto, Junian; Routledge, Edwin

    2014-01-01

    Direct visualisation of cells for the purpose of studying their motility has typically required expensive microscopy equipment. However, recent advances in digital sensors mean that it is now possible to image cells for a fraction of the price of a standard microscope. Along with low-cost imaging there has also been a large increase in the availability of high quality, open-source analysis programs. In this study we describe the development and performance of an expandable cell motility system employing inexpensive, commercially available digital USB microscopes to image various cell types using time-lapse and perform tracking assays in proof-of-concept experiments. With this system we were able to measure and record three separate assays simultaneously on one personal computer using identical microscopes, and obtained tracking results comparable in quality to those from other studies that used standard, more expensive, equipment. The microscopes used in our system were capable of a maximum magnification of 413.6×. Although resolution was lower than that of a standard inverted microscope we found this difference to be indistinguishable at the magnification chosen for cell tracking experiments (206.8×). In preliminary cell culture experiments using our system, velocities (mean µm/min ± SE) of 0.81 ± 0.01 (Biomphalaria glabrata hemocytes on uncoated plates), 1.17 ± 0.004 (MDA-MB-231 breast cancer cells), 1.24 ± 0.006 (SC5 mouse Sertoli cells) and 2.21 ± 0.01 (B. glabrata hemocytes on Poly-L-Lysine coated plates), were measured and are consistent with previous reports. We believe that this system, coupled with open-source analysis software, demonstrates that higher throughput time-lapse imaging of cells for the purpose of studying motility can be an affordable option for all researchers.
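
    The entry above reports cell velocities as mean µm/min ± SE. A minimal sketch of that summary computation from tracked positions is shown below; the track coordinates, frame interval, and pixel scale are invented for illustration.

```python
# Sketch: per-step speeds from a time-lapse track, then mean +/- standard
# error of the mean (SEM), the statistic reported in the study.
import math

def track_speeds(track, dt_min, um_per_px):
    """Per-step speeds (um/min) from a list of (x, y) pixel positions."""
    speeds = []
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        dist_um = math.hypot(x1 - x0, y1 - y0) * um_per_px
        speeds.append(dist_um / dt_min)
    return speeds

def mean_sem(values):
    """Mean and standard error (sample std / sqrt(n))."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return mean, math.sqrt(var / n)

# Hypothetical track: one position every 5 min, 0.5 um per pixel.
track = [(0, 0), (4, 3), (8, 6), (10, 6)]
speeds = track_speeds(track, dt_min=5.0, um_per_px=0.5)
mean_v, sem_v = mean_sem(speeds)
```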

  18. Low-Cost Motility Tracking System (LOCOMOTIS) for Time-Lapse Microscopy Applications and Cell Visualisation

    PubMed Central

    Lynch, Adam E.; Triajianto, Junian; Routledge, Edwin

    2014-01-01

    Direct visualisation of cells for the purpose of studying their motility has typically required expensive microscopy equipment. However, recent advances in digital sensors mean that it is now possible to image cells for a fraction of the price of a standard microscope. Along with low-cost imaging there has also been a large increase in the availability of high quality, open-source analysis programs. In this study we describe the development and performance of an expandable cell motility system employing inexpensive, commercially available digital USB microscopes to image various cell types using time-lapse and perform tracking assays in proof-of-concept experiments. With this system we were able to measure and record three separate assays simultaneously on one personal computer using identical microscopes, and obtained tracking results comparable in quality to those from other studies that used standard, more expensive, equipment. The microscopes used in our system were capable of a maximum magnification of 413.6×. Although resolution was lower than that of a standard inverted microscope we found this difference to be indistinguishable at the magnification chosen for cell tracking experiments (206.8×). In preliminary cell culture experiments using our system, velocities (mean µm/min ± SE) of 0.81±0.01 (Biomphalaria glabrata hemocytes on uncoated plates), 1.17±0.004 (MDA-MB-231 breast cancer cells), 1.24±0.006 (SC5 mouse Sertoli cells) and 2.21±0.01 (B. glabrata hemocytes on Poly-L-Lysine coated plates), were measured and are consistent with previous reports. We believe that this system, coupled with open-source analysis software, demonstrates that higher throughput time-lapse imaging of cells for the purpose of studying motility can be an affordable option for all researchers. PMID:25121722

  19. An automated, open-source (NASA Ames Stereo Pipeline) workflow for mass production of high-resolution DEMs from commercial stereo satellite imagery: Application to mountain glacies in the contiguous US

    NASA Astrophysics Data System (ADS)

    Shean, D. E.; Arendt, A. A.; Whorton, E.; Riedel, J. L.; O'Neel, S.; Fountain, A. G.; Joughin, I. R.

    2016-12-01

    We adapted the open source NASA Ames Stereo Pipeline (ASP) to generate digital elevation models (DEMs) and orthoimages from very-high-resolution (VHR) commercial imagery of the Earth. These modifications include support for rigorous and rational polynomial coefficient (RPC) sensor models, sensor geometry correction, bundle adjustment, point cloud co-registration, and significant improvements to the ASP code base. We outline an automated processing workflow for 0.5 m GSD DigitalGlobe WorldView-1/2/3 and GeoEye-1 along-track and cross-track stereo image data. Output DEM products are posted at 2, 8, and 32 m with direct geolocation accuracy of <5.0 m CE90/LE90. An automated iterative closest-point (ICP) co-registration tool reduces absolute vertical and horizontal error to <0.5 m where appropriate ground-control data are available, with observed standard deviation of 0.1-0.5 m for overlapping, co-registered DEMs (n=14,17). While ASP can be used to process individual stereo pairs on a local workstation, the methods presented here were developed for large-scale batch processing in a high-performance computing environment. We have leveraged these resources to produce dense time series and regional mosaics for the Earth's ice sheets. We are now processing and analyzing all available 2008-2016 commercial stereo DEMs over glaciers and perennial snowfields in the contiguous US. We are using these records to study long-term, interannual, and seasonal volume change and glacier mass balance. This analysis will provide a new assessment of regional climate change, and will offer basin-scale analyses of snowpack evolution and snow/ice melt runoff for water resource applications.
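
    The co-registration step mentioned above reduces vertical error between overlapping DEMs. As a highly simplified toy stand-in for the ICP tool (not the ASP implementation), the sketch below removes the median elevation difference over stable, ice-free pixels; all values are invented.

```python
# Simplified DEM co-registration sketch: estimate the vertical bias between
# two overlapping DEMs as the median elevation difference over stable pixels
# (excluding glacier surfaces, which change between acquisitions), then
# subtract that bias.

def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def coregister_vertical(dem, reference, stable_mask):
    """Shift `dem` so its median offset to `reference` over stable pixels
    is zero. Inputs are equal-length flat lists; stable_mask is boolean."""
    diffs = [d - r for d, r, ok in zip(dem, reference, stable_mask) if ok]
    bias = median(diffs)
    return [d - bias for d in dem], bias

# Toy DEMs: the second carries a 2.0 m bias; the glacier pixel is excluded.
ref    = [100.0, 105.0, 110.0, 300.0]
dem    = [102.0, 107.0, 112.0, 295.0]
stable = [True,  True,  True,  False]   # last pixel is on the glacier
aligned, bias = coregister_vertical(dem, ref, stable)
```

    Using the median rather than the mean keeps a few bad matches or residual glacier pixels from skewing the estimated offset.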

  20. Feeding Experimentation Device (FED): A flexible open-source device for measuring feeding behavior.

    PubMed

    Nguyen, Katrina P; O'Neal, Timothy J; Bolonduro, Olurotimi A; White, Elecia; Kravitz, Alexxai V

    2016-07-15

    Measuring food intake in rodents is a conceptually simple yet labor-intensive and temporally-imprecise task. Most commonly, food is weighed manually, with an interval of hours or days between measurements. Commercial feeding monitors are excellent, but are costly and require specialized caging and equipment. We have developed the Feeding Experimentation Device (FED): a low-cost, open-source, home cage-compatible feeding system. FED utilizes an Arduino microcontroller and open-source software and hardware. FED dispenses a single food pellet into a food well where it is monitored by an infrared beam. When the mouse removes the pellet, FED logs the timestamp to a secure digital (SD) card and dispenses a new pellet into the well. Post-hoc analyses of pellet retrieval timestamps reveal high-resolution details about feeding behavior. FED is capable of accurately measuring food intake, identifying discrete trends during light and dark-cycle feeding. Additionally, we show the utility of FED for measuring increases in feeding resulting from optogenetic stimulation of agouti-related peptide neurons in the arcuate nucleus of the hypothalamus. With a cost of ∼$350 per device, FED is >10× cheaper than commercially available feeding systems. FED is also self-contained, battery powered, and designed to be placed in standard colony rack cages, allowing for monitoring of true home cage feeding behavior. Moreover, FED is highly adaptable and can be synchronized with emerging techniques in neuroscience, such as optogenetics, as we demonstrate here. FED allows for accurate, precise monitoring of feeding behavior in a home cage setting. Published by Elsevier B.V.
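
    FED logs one timestamp per pellet retrieval, and the abstract notes that post-hoc analysis of those timestamps reveals light- versus dark-cycle feeding trends. The sketch below shows what such an analysis might look like; the timestamps and the light-cycle boundaries are invented, and this is not the authors' analysis code.

```python
# Sketch: derive pellet counts per light/dark phase and inter-pellet
# intervals from FED-style retrieval timestamps.

def phase_of(hour, lights_on=7, lights_off=19):
    """Classify an hour-of-day as 'light' or 'dark' (12 h : 12 h cycle)."""
    return "light" if lights_on <= hour < lights_off else "dark"

def summarize(timestamps_h):
    """timestamps_h: sorted pellet-retrieval times in hours since midnight.
    Returns (per-phase counts, inter-pellet intervals in minutes)."""
    counts = {"light": 0, "dark": 0}
    for t in timestamps_h:
        counts[phase_of(int(t) % 24)] += 1
    intervals_min = [(b - a) * 60 for a, b in zip(timestamps_h, timestamps_h[1:])]
    return counts, intervals_min

# Hypothetical log: sparse light-cycle feeding, a dark-cycle feeding bout.
log = [6.5, 12.0, 19.25, 19.5, 19.75, 20.5]
counts, intervals = summarize(log)
```

    Short inter-pellet intervals cluster into discrete meals, which is the kind of high-resolution structure that manual daily weighing cannot resolve.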

  1. 76 FR 78866 - Exemption to Prohibition on Circumvention of Copyright Protection Systems for Access Control...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-20

    ... initiated a rulemaking proceeding in accordance with provisions added by the Digital Millennium Copyright... available in digital copies. Proponent: The Open Book Alliance. 2. Literary works, distributed electronically, that: (1) Contain digital rights management and/or other access controls which either prevent the...

  2. 76 FR 9378 - Meeting of National Council on the Humanities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-17

    ... Meetings (Open to the Public) Policy Discussion 9-10:30 a.m. Digital Humanities--Room 402 Education... Programs Before the Council 10:30 a.m. until Adjourned Digital Humanities--Room 402 Education Programs... and General Matters a. Digital Humanities b. Education Programs c. Federal/State Partnership d...

  3. Course Correction: Executive Summary

    ERIC Educational Resources Information Center

    Allen, Nicole

    2009-01-01

    Textbooks are an essential, but increasingly expensive part of obtaining a college degree. Digital textbooks are a promising way to lower costs for students. The digital format has the potential to cut production costs, increase options for students, and open up the market to more competition. Digital textbooks are now beginning to gain a more…

  4. Testing RISKGIS Platform with Students to Improve Learning and Teaching Skills

    NASA Astrophysics Data System (ADS)

    Olyazadeh, R.; Aye, Z. C.; Jaboyedoff, M.; Derron, M. H.

    2016-12-01

    Nowadays, open-source developments in the field of natural hazards and risk management are increasing rapidly. Governments, NGOs and research institutes are producing data for risk and disaster analysis, but few platforms are available to bring a real-life experience to students. This work focuses on the preliminary results of testing a WebGIS platform called RISKGIS with bachelor students at the University of Lausanne. The platform is built on a geospatial open-source technology called OpenGeo (Boundless). It can calculate the potential risk to buildings and helps students understand risk reduction, mitigation and decision-making scenarios. The center of Jomsom in Nepal, an area that may be affected by earthquake amplification, was selected for the first exercise. The shaking intensity map was designed by an expert based on the geological characteristics and DEM (Digital Elevation Model) of the area. All building data were extracted from OpenStreetMap using QGIS and adapted to the platform. A video tutorial was prepared to guide the students through the platform; 80 students tested the application online successfully, 40 of them participated in Moodle (a free open-source software package for educators) for online feedback and a quiz, and 30 of those completed both. We obtained interesting results for effectiveness, efficiency and satisfaction based on the System Usability Scale (SUS). The SUS score for this platform was 68.6 out of 100. The average quiz result was 9.39 out of 10, with 8 to 33 minutes taken to answer the quiz. There were several outliers for this duration: two students took 2 minutes and three students took 9 to 18 hours. Further exercises will be carried out with students by adding more advanced functions to the platform and improving the willingness to participate in this online learning platform.
This project is funded by Fonds d'innovation pédagogique de l'Université de Lausanne (FIP). We think this initial, ongoing platform can help both students and teachers to improve their skills in the field of risk and disaster management. Keywords: Risk and disaster management, GIS, Open-Source, Boundless, Moodle, Teaching and learning
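
    The usability result above (68.6 out of 100) is a System Usability Scale score. Standard SUS scoring is well defined: ten items rated 1 to 5, odd items contribute (rating − 1), even items contribute (5 − rating), and the sum is scaled by 2.5. The sketch below implements that formula; the example ratings are invented.

```python
# Standard System Usability Scale (SUS) scoring, 0-100.

def sus_score(ratings):
    """ratings: list of 10 responses, each 1..5, item 1 first.
    Odd items contribute (rating - 1), even items (5 - rating);
    the sum is multiplied by 2.5."""
    assert len(ratings) == 10 and all(1 <= r <= 5 for r in ratings)
    total = 0
    for i, r in enumerate(ratings, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# One hypothetical respondent:
score = sus_score([4, 2, 4, 2, 3, 2, 4, 1, 4, 2])
```

    A platform-level SUS score is then the mean of the per-respondent scores; 68 is commonly cited as the average benchmark, which puts the reported 68.6 right around typical usability.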

  5. The Development of Wireless Body Area Network for Motion Sensing Application

    NASA Astrophysics Data System (ADS)

    Puspitaningayu, P.; Widodo, A.; Yundra, E.; Ramadhany, F.; Arianto, L.; Habibie, D.

    2018-04-01

    The information era has driven society into a digitally-controlled lifestyle. Wireless body area networks (WBAN), as a specific scope of wireless sensor networks (WSN), are steadily growing into broader applications. Currently, people are able to monitor their medical parameters simply by using small electronic devices attached to their body and connected to the relevant authorities. On top of that, smartphones are now typically equipped with sensors such as an accelerometer, gyroscope, barometric pressure sensor, heart rate monitor, etc. This means that both the sensing and the signal processing can be performed by a single device. Moreover, Android, as the most popular open-source smartphone platform, opens far wider opportunities for new applications. This paper presents the development of a motion sensing application focused on analysing data from the accelerometer and gyroscope. Besides reading the sensors, the application can also convert the sensors' numerical values into graphs.
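
    As a small illustration of the kind of processing such a motion-sensing app performs (not the paper's implementation), the sketch below combines the three accelerometer axes into a single magnitude and subtracts gravity to obtain a simple motion-intensity signal; the sample readings are invented.

```python
# Sketch: accelerometer samples -> motion-intensity signal.
import math

G = 9.81  # standard gravity, m/s^2

def motion_intensity(samples):
    """samples: list of (ax, ay, az) accelerometer readings in m/s^2.
    Returns | |a| - g | per sample: near zero at rest, larger in motion.
    Using the magnitude makes the signal independent of device orientation."""
    return [abs(math.sqrt(ax * ax + ay * ay + az * az) - G)
            for ax, ay, az in samples]

# Device at rest (only gravity on z), then briefly shaken:
samples = [(0.0, 0.0, 9.81), (0.1, -0.1, 9.80), (3.0, 4.0, 9.81)]
intensity = motion_intensity(samples)
```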

  6. pvsR: An Open Source Interface to Big Data on the American Political Sphere.

    PubMed

    Matter, Ulrich; Stutzer, Alois

    2015-01-01

    Digital data from the political sphere is abundant, omnipresent, and more and more directly accessible through the Internet. Project Vote Smart (PVS) is a prominent example of this big public data and covers various aspects of U.S. politics in astonishing detail. Despite the vast potential of PVS' data for political science, economics, and sociology, it is hardly used in empirical research. The systematic compilation of semi-structured data can be complicated and time consuming as the data format is not designed for conventional scientific research. This paper presents a new tool that makes the data easily accessible to a broad scientific community. We provide the software called pvsR as an add-on to the R programming environment for statistical computing. This open source interface (OSI) serves as a direct link between a statistical analysis and the large PVS database. The free and open code is expected to substantially reduce the cost of research with PVS' new big public data in a vast variety of possible applications. We discuss its advantages vis-à-vis traditional methods of data generation as well as already existing interfaces. The validity of the library is documented based on an illustration involving female representation in local politics. In addition, pvsR facilitates the replication of research with PVS data at low costs, including the pre-processing of data. Similar OSIs are recommended for other big public databases.

  7. Uncovering Information Hidden in Web Archives: Glimpse at Web Analysis Building on Data Warehouses; Towards Continuous Web Archiving: First Results and an Agenda for the Future; The Open Video Digital Library; After Migration to an Electronic Journal Collection: Impact on Faculty and Doctoral Students; Who Is Reading On-Line Education Journals? Why? And What Are They Reading?; Report on eLibrary@UBC4: Research, Collaboration and the Digital Library - Visions for 2010.

    ERIC Educational Resources Information Center

    Rauber, Andreas; Bruckner, Robert M.; Aschenbrenner, Andreas; Witvoet, Oliver; Kaiser, Max; Masanes, Julien; Marchionini, Gary; Geisler, Gary; King, Donald W.; Montgomery, Carol Hansen; Rudner, Lawrence M.; Gellmann, Jennifer S.; Miller-Whitehead, Marie; Iverson, Lee

    2002-01-01

    These six articles discuss Web archives and Web analysis building on data warehouses; international efforts at continuous Web archiving; the Open Video Digital Library; electronic journal collections in academic libraries; online education journals; and an electronic library symposium at the University of British Columbia. (LRW)

  8. The Limitations of Access Alone: Moving Towards Open Processes in Education Technology

    ERIC Educational Resources Information Center

    Knox, Jeremy

    2013-01-01

    "Openness" has emerged as one of the foremost themes in education, within which an open education movement has enthusiastically embraced digital technologies as the central means of participation and inclusion. Open Educational Resources (OERs) and Massive Open Online Courses (MOOCs) have surfaced at the forefront of this development,…

  9. Data management strategies for multinational large-scale systems biology projects.

    PubMed

    Wruck, Wasco; Peuker, Martin; Regenbrecht, Christian R A

    2014-01-01

    Good accessibility of publicly funded research data is essential to secure an open scientific system and eventually becomes mandatory [Wellcome Trust will Penalise Scientists Who Don't Embrace Open Access. The Guardian 2012]. By the use of high-throughput methods in many research areas from physics to systems biology, large data collections are increasingly important as raw material for research. Here, we present strategies worked out by international and national institutions targeting open access to publicly funded research data via incentives or obligations to share data. Funding organizations such as the British Wellcome Trust therefore have developed data sharing policies and request commitment to data management and sharing in grant applications. Increased citation rates are a profound argument for sharing publication data. Pre-publication sharing might be rewarded by a data citation credit system via digital object identifiers (DOIs) which have initially been in use for data objects. Besides policies and incentives, good practice in data management is indispensable. However, appropriate systems for data management of large-scale projects for example in systems biology are hard to find. Here, we give an overview of a selection of open-source data management systems proved to be employed successfully in large-scale projects.

  10. Data management strategies for multinational large-scale systems biology projects

    PubMed Central

    Peuker, Martin; Regenbrecht, Christian R.A.

    2014-01-01

    Good accessibility of publicly funded research data is essential to secure an open scientific system and eventually becomes mandatory [Wellcome Trust will Penalise Scientists Who Don’t Embrace Open Access. The Guardian 2012]. By the use of high-throughput methods in many research areas from physics to systems biology, large data collections are increasingly important as raw material for research. Here, we present strategies worked out by international and national institutions targeting open access to publicly funded research data via incentives or obligations to share data. Funding organizations such as the British Wellcome Trust therefore have developed data sharing policies and request commitment to data management and sharing in grant applications. Increased citation rates are a profound argument for sharing publication data. Pre-publication sharing might be rewarded by a data citation credit system via digital object identifiers (DOIs) which have initially been in use for data objects. Besides policies and incentives, good practice in data management is indispensable. However, appropriate systems for data management of large-scale projects for example in systems biology are hard to find. Here, we give an overview of a selection of open-source data management systems proved to be employed successfully in large-scale projects. PMID:23047157

  11. Documenting Student Connectivity and Use of Digital Annotation Devices in Virginia Commonwealth University Connected Courses: An Assessment Toolkit for Digital Pedagogies in Higher Education

    ERIC Educational Resources Information Center

    Gogia, Laura Park

    2016-01-01

    Virginia Commonwealth University (VCU) is implementing a large scale exploration of digital pedagogies, including connected learning and open education, in an effort to promote digital fluency and integrative thinking among students. The purpose of this study was to develop a classroom assessment toolkit for faculty who wish to document student…

  12. Distributed Interoperable Metadata Registry; How Do Physicists Use an E-Print Archive? Implications for Institutional E-Print Services; A Framework for Building Open Digital Libraries; Implementing Digital Sanborn Maps for Ohio: OhioLINK and OPLIN Collaborative Project.

    ERIC Educational Resources Information Center

    Blanchi, Christophe; Petrone, Jason; Pinfield, Stephen; Suleman, Hussein; Fox, Edward A.; Bauer, Charly; Roddy, Carol Lynn

    2001-01-01

    Includes four articles that discuss a distributed architecture for managing metadata that promotes interoperability between digital libraries; the use of electronic print (e-print) by physicists; the development of digital libraries; and a collaborative project between two library consortia in Ohio to provide digital versions of Sanborn Fire…

  13. Semi-Automated Digital Image Analysis of Pick’s Disease and TDP-43 Proteinopathy

    PubMed Central

    Irwin, David J.; Byrne, Matthew D.; McMillan, Corey T.; Cooper, Felicia; Arnold, Steven E.; Lee, Edward B.; Van Deerlin, Vivianna M.; Xie, Sharon X.; Lee, Virginia M.-Y.; Grossman, Murray; Trojanowski, John Q.

    2015-01-01

    Digital image analysis of histology sections provides reliable, high-throughput methods for neuropathological studies but data is scant in frontotemporal lobar degeneration (FTLD), which has an added challenge of study due to morphologically diverse pathologies. Here, we describe a novel method of semi-automated digital image analysis in FTLD subtypes including: Pick’s disease (PiD, n=11) with tau-positive intracellular inclusions and neuropil threads, and TDP-43 pathology type C (FTLD-TDPC, n=10), defined by TDP-43-positive aggregates predominantly in large dystrophic neurites. To do this, we examined three FTLD-associated cortical regions: mid-frontal gyrus (MFG), superior temporal gyrus (STG) and anterior cingulate gyrus (ACG) by immunohistochemistry. We used a color deconvolution process to isolate signal from the chromogen and applied both object detection and intensity thresholding algorithms to quantify pathological burden. We found object-detection algorithms had good agreement with gold-standard manual quantification of tau- and TDP-43-positive inclusions. Our sampling method was reliable across three separate investigators and we obtained similar results in a pilot analysis using open-source software. Regional comparisons using these algorithms find differences in regional anatomic disease burden between PiD and FTLD-TDP not detected using traditional ordinal scale data, suggesting digital image analysis is a powerful tool for clinicopathological studies in morphologically diverse FTLD syndromes. PMID:26538548

  14. Semi-Automated Digital Image Analysis of Pick's Disease and TDP-43 Proteinopathy.

    PubMed

    Irwin, David J; Byrne, Matthew D; McMillan, Corey T; Cooper, Felicia; Arnold, Steven E; Lee, Edward B; Van Deerlin, Vivianna M; Xie, Sharon X; Lee, Virginia M-Y; Grossman, Murray; Trojanowski, John Q

    2016-01-01

    Digital image analysis of histology sections provides reliable, high-throughput methods for neuropathological studies but data is scant in frontotemporal lobar degeneration (FTLD), which has an added challenge of study due to morphologically diverse pathologies. Here, we describe a novel method of semi-automated digital image analysis in FTLD subtypes including: Pick's disease (PiD, n=11) with tau-positive intracellular inclusions and neuropil threads, and TDP-43 pathology type C (FTLD-TDPC, n=10), defined by TDP-43-positive aggregates predominantly in large dystrophic neurites. To do this, we examined three FTLD-associated cortical regions: mid-frontal gyrus (MFG), superior temporal gyrus (STG) and anterior cingulate gyrus (ACG) by immunohistochemistry. We used a color deconvolution process to isolate signal from the chromogen and applied both object detection and intensity thresholding algorithms to quantify pathological burden. We found object-detection algorithms had good agreement with gold-standard manual quantification of tau- and TDP-43-positive inclusions. Our sampling method was reliable across three separate investigators and we obtained similar results in a pilot analysis using open-source software. Regional comparisons using these algorithms find differences in regional anatomic disease burden between PiD and FTLD-TDP not detected using traditional ordinal scale data, suggesting digital image analysis is a powerful tool for clinicopathological studies in morphologically diverse FTLD syndromes. © The Author(s) 2015.
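
    The abstract above mentions "intensity thresholding algorithms to quantify pathological burden." A common version of this measure is the percentage of pixels whose staining intensity exceeds a threshold (percent area occupied). The toy patch and threshold below are invented, and this is not the authors' implementation.

```python
# Sketch: intensity-threshold burden quantification (% area occupied).

def percent_area_occupied(pixels, threshold):
    """pixels: flat list of stain intensities (0-255, higher = more stain).
    Returns the percentage of pixels at or above `threshold`."""
    positive = sum(1 for p in pixels if p >= threshold)
    return 100.0 * positive / len(pixels)

# 4x4 toy patch after color deconvolution: 4 of 16 pixels strongly stained.
patch = [ 10,  12,  8, 200,
          15, 220,  9,  11,
         190,  14, 13,  10,
          12,   9, 210, 16]
burden = percent_area_occupied(patch, threshold=100)
```

    Unlike ordinal severity scores, a continuous burden measure like this supports the regional statistical comparisons the study describes.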

  15. ESDORA: A Data Archive Infrastructure Using Digital Object Model and Open Source Frameworks

    NASA Astrophysics Data System (ADS)

    Shrestha, Biva; Pan, Jerry; Green, Jim; Palanisamy, Giriprakash; Wei, Yaxing; Lenhardt, W.; Cook, R. Bob; Wilson, B. E.; Leggott, M.

    2011-12-01

    There is an array of challenges associated with preserving, managing, and using contemporary scientific data. Large volume, multiple formats and data services, and the lack of a coherent mechanism for metadata/data management are some of the common issues across data centers. It is often difficult to preserve data history and lineage information, along with other descriptive metadata, hindering the true science value of the archived data products. In this project, we use a digital object abstraction architecture as the information/knowledge framework to address these challenges. We have used the following open-source frameworks: Fedora-Commons Repository, Drupal Content Management System, Islandora (Drupal module) and Apache Solr Search Engine. The system is an active archive infrastructure for Earth Science data resources, which includes ingestion, archiving, distribution, and discovery functionalities. We use an ingestion workflow to ingest the data and metadata, in which many different aspects of the data descriptions (including structured and non-structured metadata) are reviewed. The data and metadata are staged during the reviewing phase and published after multiple rounds of review. Each digital object is encoded in XML for long-term preservation of the content and the relations among the digital items. The software architecture provides a flexible, modularized framework for adding pluggable user-oriented functionality. Solr is used to enable word search as well as faceted search. A home-grown spatial search module is plugged in to allow users to make a spatial selection in a map view. An RDF semantic store within the Fedora-Commons Repository is used for storing information on data lineage, dissemination services, and text-based metadata. We use the semantic notion "isViewerFor" to register internally or externally referenced URLs, which are rendered within the same web browser when possible. 
With appropriate mapping of content into digital objects, many different data descriptions, including structured metadata, data history, and auditing trails, are captured and coupled with the data content. The semantic store provides a foundation for further uses, including a full-fledged Earth Science ontology for data interpretation or lineage tracking. Datasets from the NASA-sponsored Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC) as well as from the Synthesis Thematic Data Center (MAST-DC) are used in a testing deployment of the system. The testing deployment allows us to validate the features and benefits of the integrated system described here. Overall, we believe that the integrated system is valid, reusable data archive software that provides digital stewardship for Earth Science data content, now and in the future.

  16. Dem Generation from Close-Range Photogrammetry Using Extended Python Photogrammetry Toolbox

    NASA Astrophysics Data System (ADS)

    Belmonte, A. A.; Biong, M. M. P.; Macatulad, E. G.

    2017-10-01

    Digital elevation models (DEMs) are widely used raster data for different applications concerning terrain, such as flood modelling, viewshed analysis, mining, land development, and engineering design projects, to name a few. DEMs can be obtained through various methods, including topographic survey, LiDAR or photogrammetry, and internet sources. Terrestrial close-range photogrammetry is one of the alternative methods to produce DEMs through the processing of images using photogrammetry software. Powerful commercial photogrammetry packages can already produce high-accuracy DEMs, but at a corresponding cost. Although some of these packages offer free or demo trials, the trials limit the usable features and usage time. One alternative is the use of free and open-source software (FOSS), such as the Python Photogrammetry Toolbox (PPT), which provides an interface for performing photogrammetric processes implemented through Python scripts. For relatively small areas, such as in mining or construction excavation, a relatively inexpensive, fast and accurate method would be advantageous. In this study, PPT was used to generate 3D point cloud data from images of an open pit excavation. PPT was extended with an algorithm converting the generated point cloud data into a usable DEM.
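
    The final step described above, converting a point cloud into a usable DEM, is essentially a rasterization: points are binned into grid cells and each cell takes a summary elevation. The sketch below averages point elevations per cell; the cell size and sample points are invented, and this is not the authors' extension code.

```python
# Sketch: rasterize (x, y, z) points into a DEM by averaging z per grid cell.

def points_to_dem(points, cell):
    """points: iterable of (x, y, z); cell: cell size in ground units.
    Returns {(col, row): mean z} for every cell containing points."""
    sums, counts = {}, {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        sums[key] = sums.get(key, 0.0) + z
        counts[key] = counts.get(key, 0) + 1
    return {key: sums[key] / counts[key] for key in sums}

# Two points fall in one 1 m cell, one point in the next cell east:
cloud = [(0.2, 0.3, 10.0), (0.8, 0.6, 12.0), (1.5, 0.4, 9.0)]
dem = points_to_dem(cloud, cell=1.0)
```

    Production tools typically add interpolation for empty cells and outlier filtering; per-cell averaging is the minimal core of the conversion.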

  17. Open source tools for standardized privacy protection of medical images

    NASA Astrophysics Data System (ADS)

    Lien, Chung-Yueh; Onken, Michael; Eichelberg, Marco; Kao, Tsair; Hein, Andreas

    2011-03-01

In addition to the primary care context, medical images are often useful for research projects and community healthcare networks, so-called "secondary use". Patient privacy becomes an issue in such scenarios since the disclosure of personal health information (PHI) has to be prevented in a sharing environment. In general, most PHI should be completely removed from the images according to the respective privacy regulations, but some basic, non-identifying data is usually required for accurate image interpretation. Our objective is to utilize and enhance the relevant standardized specifications in order to provide reliable software implementations for de- and re-identification of medical images suitable for online and offline delivery. DICOM (Digital Imaging and Communications in Medicine) images are de-identified by replacing PHI-specific information with values still being reasonable for imaging diagnosis and patient indexing. In this paper, this approach is evaluated based on a prototype implementation built on top of the open-source framework DCMTK (DICOM Toolkit), utilizing standardized de- and re-identification mechanisms. A set of tools has been developed for DICOM de-identification that meets the privacy requirements of offline and online sharing environments and fully relies on standards-based methods.
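A minimal sketch of the de-identification idea (the attribute names and replacement rules here are illustrative assumptions, not the DCMTK-based implementation): PHI-bearing attributes are replaced with neutral or pseudonymous values, while attributes needed for diagnosis and indexing survive.

```python
import hashlib

# Replacement policy: plain values, or callables producing stable pseudonyms.
PHI_REPLACEMENTS = {
    "PatientName": "ANON",
    "PatientID": lambda v: "ID-" + hashlib.sha256(v.encode()).hexdigest()[:8],
    "PatientBirthDate": "19000101",
}

def deidentify(attributes):
    """attributes: dict of DICOM-like keyword -> value; returns a cleaned copy."""
    out = dict(attributes)
    for key, repl in PHI_REPLACEMENTS.items():
        if key in out:
            out[key] = repl(out[key]) if callable(repl) else repl
    return out

clean = deidentify({"PatientName": "Doe^John", "PatientID": "12345",
                    "Modality": "CT", "StudyDate": "20110301"})
# identity fields are replaced; Modality and StudyDate survive for interpretation
```

Hashing the patient ID keeps the pseudonym stable across studies, which is what makes re-identification by an authorized party possible.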

  18. PsyToolkit: a software package for programming psychological experiments using Linux.

    PubMed

    Stoet, Gijsbert

    2010-11-01

PsyToolkit is a set of software tools for programming psychological experiments on Linux computers. Given that PsyToolkit is freely available under the GNU General Public License, open source, and designed such that it can easily be modified and extended for individual needs, it is suitable not only for technically oriented Linux users, but also for students, researchers on small budgets, and universities in developing countries. The software includes a high-level scripting language, a library for the programming language C, and a questionnaire presenter. The software easily integrates with other open source tools, such as the statistical software package R. PsyToolkit is designed to work with external hardware (including IoLab and Cedrus response keyboards and two common digital input/output boards) and to support millisecond timing precision. Four in-depth examples explain the basic functionality of PsyToolkit. Example 1 demonstrates a stimulus-response compatibility experiment. Example 2 demonstrates a novel mouse-controlled visual search experiment. Example 3 shows how to control light emitting diodes using PsyToolkit, and Example 4 shows how to build a light-detection sensor. The last two examples explain the electronic hardware setup such that they can even be used with other software packages.

  19. An Open Source Software and Web-GIS Based Platform for Airborne SAR Remote Sensing Data Management, Distribution and Sharing

    NASA Astrophysics Data System (ADS)

    Changyong, Dou; Huadong, Guo; Chunming, Han; Ming, Liu

    2014-03-01

With more and more Earth observation data available to the community, how to manage and share these valuable remote sensing datasets has become an urgent issue. Web-based Geographical Information Systems (GIS) technology provides a convenient way for users in different locations to share and make use of the same dataset. In order to efficiently use the airborne Synthetic Aperture Radar (SAR) remote sensing data acquired by the Airborne Remote Sensing Center of the Institute of Remote Sensing and Digital Earth (RADI), Chinese Academy of Sciences (CAS), a Web-GIS based platform for airborne SAR data management, distribution and sharing was designed and developed. The major features of the system include a map-based navigation and search interface, full-resolution imagery overlaid on the map, and the exclusive use of Open Source Software (OSS) throughout the platform. The functions of the platform include browsing imagery on the map-based interface, ordering and downloading data online, and image dataset and user management. At present, the system is under testing at RADI and will enter regular operation soon.

  20. pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis.

    PubMed

    Giannakopoulos, Theodoros

    2015-01-01

Audio information plays a rather important role in the increasing digital content that is available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automation and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures including: feature extraction, classification of audio signals, supervised and unsupervised segmentation, and content visualization. pyAudioAnalysis is licensed under the Apache License and is available at GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of the implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has already been used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation, and health applications (e.g. monitoring eating habits). The feedback from these audio applications has led to practical enhancement of the library.
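The short-term analysis underlying feature extraction can be sketched without any library (a toy illustration, not pyAudioAnalysis's API): slide a window over the signal and compute one value per frame, e.g. energy and zero-crossing rate.

```python
def short_term_energy(signal, frame_len, hop):
    """Mean-square energy of each frame of a 1-D signal."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        feats.append(sum(x * x for x in frame) / frame_len)
    return feats

def zero_crossing_rate(signal, frame_len, hop):
    """Fraction of sign changes per frame, a cheap voicing/noise cue."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
        feats.append(crossings / (frame_len - 1))
    return feats

sig = [1, -1] * 8  # alternating signal: maximal zero-crossing rate
print(zero_crossing_rate(sig, 8, 8))  # → [1.0, 1.0]
```

Real feature extractors stack many such per-frame values (MFCCs, spectral centroid, etc.) into a feature matrix for the downstream classifier or segmenter.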

  1. Launching GUPPI: the Green Bank Ultimate Pulsar Processing Instrument

    NASA Astrophysics Data System (ADS)

    DuPlain, Ron; Ransom, Scott; Demorest, Paul; Brandt, Patrick; Ford, John; Shelton, Amy L.

    2008-08-01

    The National Radio Astronomy Observatory (NRAO) is launching the Green Bank Ultimate Pulsar Processing Instrument (GUPPI), a prototype flexible digital signal processor designed for pulsar observations with the Robert C. Byrd Green Bank Telescope (GBT). GUPPI uses field programmable gate array (FPGA) hardware and design tools developed by the Center for Astronomy Signal Processing and Electronics Research (CASPER) at the University of California, Berkeley. The NRAO has been concurrently developing GUPPI software and hardware using minimal software resources. The software handles instrument monitor and control, data acquisition, and hardware interfacing. GUPPI is currently an expert-only spectrometer, but supports future integration with the full GBT production system. The NRAO was able to take advantage of the unique flexibility of the CASPER FPGA hardware platform, develop hardware and software in parallel, and build a suite of software tools for monitoring, controlling, and acquiring data with a new instrument over a short timeline of just a few months. The NRAO interacts regularly with CASPER and its users, and GUPPI stands as an example of what reconfigurable computing and open-source development can do for radio astronomy. GUPPI is modular for portability, and the NRAO provides the results of development as an open-source resource.

  2. Implementation of a departmental picture archiving and communication system: a productivity and cost analysis.

    PubMed

    Macyszyn, Luke; Lega, Brad; Bohman, Leif-Erik; Latefi, Ahmad; Smith, Michelle J; Malhotra, Neil R; Welch, William; Grady, Sean M

    2013-09-01

Digital radiology enhances productivity and results in long-term cost savings. However, the viewing, storage, and sharing of outside imaging studies on compact discs at ambulatory offices and hospitals pose a number of unique challenges to a surgeon's efficiency and clinical workflow. The objective was to improve the efficiency and clinical workflow of an academic neurosurgical practice when evaluating patients with outside radiological studies. Open-source software and commercial hardware were used to design and implement a departmental picture archiving and communications system (PACS). The implementation of a departmental PACS significantly improved productivity and enhanced collaboration in a variety of clinical settings. Using published data on the rate of information technology problems associated with outside studies on compact discs, this system produced cost savings ranging from $6,250 to $33,600 and from $43,200 to $72,000 for two cohorts, urgent transfer and spine clinic patients, respectively, therefore justifying the costs of the system in less than a year. The implementation of a departmental PACS using open-source software is straightforward and cost-effective and results in significant gains in surgeon productivity when evaluating patients with outside imaging studies.

  3. pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis

    PubMed Central

    Giannakopoulos, Theodoros

    2015-01-01

Audio information plays a rather important role in the increasing digital content that is available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automation and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures including: feature extraction, classification of audio signals, supervised and unsupervised segmentation, and content visualization. pyAudioAnalysis is licensed under the Apache License and is available at GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of the implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has already been used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation, and health applications (e.g. monitoring eating habits). The feedback from these audio applications has led to practical enhancement of the library. PMID:26656189

  4. Colloquium: Digital Technologies--Help or Hindrance for the Humanities?

    ERIC Educational Resources Information Center

    Barker, Elton; Bissell, Chris; Hardwick, Lorna; Jones, Allan; Ridge, Mia; Wolffe, John

    2012-01-01

    This article offers reflections arising from a recent colloquium at the Open University on the implications of the development of digital humanities for research in arts disciplines, and also for their interactions with computing and technology. Particular issues explored include the ways in which the digital turn in humanities research is also a…

  5. Learner-Controlled Scaffolding Linked to Open-Ended Problems in a Digital Learning Environment

    ERIC Educational Resources Information Center

    Edson, Alden Jack

    2017-01-01

    This exploratory study reports on how students activated learner-controlled scaffolding and navigated through sequences of connected problems in a digital learning environment. A design experiment was completed to (re)design, iteratively develop, test, and evaluate a digital version of an instructional unit focusing on binomial distributions and…

  6. The Digital Learning Transition MOOC for Educators: Exploring a Scalable Approach to Professional Development

    ERIC Educational Resources Information Center

    Kleiman, Glenn M.; Wolf, Mary Ann; Frye, David

    2013-01-01

In conjunction with the relaunch of the Digital Learning Transition (DLT) Massive Open Online Course for Educators (MOOC-Ed) in September 2013, the Alliance and the Friday Institute released "The Digital Learning Transition MOOC for Educators: Exploring a Scalable Approach to Professional Development", a new paper that describes the…

  7. A Picture is Worth a Thousand Words

    ERIC Educational Resources Information Center

    Davison, Sarah

    2009-01-01

    Lions, tigers, and bears, oh my! Digital cameras, young inquisitive scientists, give it a try! In this project, students create an open-ended question for investigation, capture and record their observations--data--with digital cameras, and create a digital story to share their findings. The project follows a 5E learning cycle--Engage, Explore,…

  8. Brain-Based Teaching in the Digital Age

    ERIC Educational Resources Information Center

    Sprenger, Marilee

    2010-01-01

    In the digital age, your students have the ways, means, and speed to gather any information they want. But they need your guidance more than ever. Discover how digital technology is actually changing your students' brains. Learn why this creates new obstacles for teachers, but also opens up potential new pathways for learning. You will understand…

  9. Rethinking Workplace Learning in the Digital World: A Case Study of Open Badges

    ERIC Educational Resources Information Center

    Eaglen Bertrando, Sharen Linn

    2017-01-01

    The purpose of this collective case study was to explore digital badging in educational institutions as support for K-12 practitioners struggling to integrate technology into pedagogical practices. The researcher conducted a mixed-method study that captured perceptions about digital badges and follow-up interviews with selected badge users to…

  10. Standardized access, display, and retrieval of medical video

    NASA Astrophysics Data System (ADS)

    Bellaire, Gunter; Steines, Daniel; Graschew, Georgi; Thiel, Andreas; Bernarding, Johannes; Tolxdorff, Thomas; Schlag, Peter M.

    1999-05-01

The system presented here enhances documentation and data-secured, second-opinion facilities by integrating video sequences into DICOM 3.0. We present an implementation of a medical video server extended by a DICOM interface. Security mechanisms conforming with DICOM are integrated to enable secure internet access. Digital video documents of diagnostic and therapeutic procedures should be examined regarding the clip length and size necessary for a second opinion and manageable with today's hardware. Image sources relevant for this paper include the 3D laparoscope, 3D surgical microscope, 3D open surgery camera, synthetic video, and monoscopic endoscopes. The global DICOM video concept and three workplaces for distinct applications are described. Additionally, an approach is presented to analyze the motion of the endoscopic camera for future automatic video-cutting. Digital stereoscopic video sequences (DSVS) are especially in demand for surgery; therefore, DSVS are also integrated into the DICOM video concept. Results are presented describing the suitability of stereoscopic display techniques for the operating room.

  11. The use of social media and open data in promoting civic co-management: case of Jakarta

    NASA Astrophysics Data System (ADS)

    Widyanarko, Pritta A.

    2018-05-01

With its large population and high use of social media, Jakarta's residents willingly share information in the digital world. While cities are sometimes seen as data-scarce, this digital platform produces informal and scattered, but also valuable, data. One way to prepare for co-management during disaster situations is to extend these informal networks to include a channel between residents and government agencies. The platform PetaBencana.id crowd-sources these actual, on-the-ground observations from residents on social media and instant messaging, integrates the informal and formal disaster-related data, gives residents access to the same tool used by the government, and provides an interface that answers to residents' and the government's needs, thus making the information more useful in co-managing the city during disaster situations. More information-based decisions can be made by both residents and government through improved situational knowledge, resulting in better disaster response and resilience of the city.

  12. Digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Barkley, Solomon; Dimiduk, Thomas; Manoharan, Vinothan

Digital holographic microscopy is a 3D optical imaging technique with high temporal (~ms) and spatial (~10 nm) precision. However, its adoption as a characterization technique has been limited due to the inherent difficulty of recovering 3D data from the holograms. Successful analysis has traditionally required substantial knowledge about the sample being imaged (for example, the approximate positions of particles in the field of view), as well as expertise in scattering theory. To overcome the obstacles to widespread adoption of holographic microscopy, we developed HoloPy, an open-source Python package for analysis of holograms and scattering data. HoloPy uses Bayesian statistical methods to determine the geometry and properties of discrete scatterers from raw holograms. We demonstrate the use of HoloPy to measure the dynamics of colloidal particles at interfaces, to ascertain the structures of self-assembled colloidal particles, and to track freely swimming bacteria. The HoloPy codebase is thoroughly tested and well-documented to facilitate use by the broader experimental community. This research is supported by NSF Grant DMR-1306410 and NSERC.

  13. Arc-An OAI Service Provider for Digital Library Federation; Kepler-An OAI Data/Service Provider for the Individual; Information Objects and Rights Management: A Mediation-Based Approach to DRM Interoperability; Automated Name Authority Control and Enhanced Searching in the Levy Collection; Renardus Project Developments and the Wider Digital Library Context.

    ERIC Educational Resources Information Center

    Liu, Xiaoming; Maly, Kurt; Zubair, Mohammad; Nelson, Michael L.; Erickson, John S.; DiLauro, Tim; Choudhury, G. Sayeed; Patton, Mark; Warner, James W.; Brown, Elizabeth W.; Heery, Rachel; Carpenter, Leona; Day, Michael

    2001-01-01

    Includes five articles that discuss the OAI (Open Archive Initiative), an interface between data providers and service providers; information objects and digital rights management interoperability; digitizing library collections, including automated name authority control, metadata, and text searching engines; and building digital library services…

  14. Solving a Health Information Management Problem. An international success story.

    PubMed

    Hannan, Terry J

    2015-01-01

The management of health care delivery requires the availability of effective 'information management' tools based on e-technologies [eHealth]. In developed economies many of these 'tools' are readily available, whereas in Low and Middle Income Countries (LMIC) there is limited access to eHealth technologies; this has been defined as the "digital divide". This paper provides a short introduction to the fundamental understanding of what is meant by information management in health care and how it applies to all social economies. The core of the paper describes the successful implementation of appropriate information management tools in a resource-poor environment to manage the HIV/AIDS epidemic and other disease states in sub-Saharan Africa, and how the system has evolved to become the largest open source eHealth project in the world and the health information infrastructure for several national eHealth economies. The system is known as OpenMRS (www.openmrs.org). The continuing successful evolution of the OpenMRS project has permitted its key implementers to define core factors that are the foundations for successful eHealth projects.

  15. Review of digital holography reconstruction methods

    NASA Astrophysics Data System (ADS)

    Dovhaliuk, Rostyslav Yu.

    2018-01-01

The development of digital holography has opened new ways of non-destructively studying both transparent and opaque objects. In this paper, the digital hologram reconstruction process is investigated. The advantages and limitations of common wave-propagation methods are discussed. The details of a software implementation of digital hologram reconstruction methods are presented. Finally, the performance of each wave-propagation method is evaluated, and recommendations about possible use cases for each of them are given.
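One of the common wave-propagation methods in this family, the angular spectrum method, can be sketched as follows (a minimal NumPy illustration with assumed parameters, not the paper's implementation): the field is decomposed into plane waves by an FFT, each component is multiplied by its propagation phase, and an inverse FFT resynthesizes the field at distance z.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a square sampled complex field by distance z (units of dx)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                      # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2         # squared longitudinal freq.
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg >= 0, np.exp(1j * kz * z), 0.0)  # evanescent components removed
    return np.fft.ifft2(np.fft.fft2(field) * H)

# A uniform plane wave only acquires a global phase, so |out| stays ~1.
field = np.ones((64, 64), dtype=complex)
out = angular_spectrum_propagate(field, wavelength=0.5e-6, dx=1e-6, z=1e-3)
```

Other methods reviewed in such papers (Fresnel transform, convolution approach) differ mainly in how this transfer function is approximated and in their valid distance ranges.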

  16. State-of-the-Art in Open Courseware Initiatives Worldwide

    ERIC Educational Resources Information Center

    Vladoiu, Monica

    2011-01-01

    We survey here the state-of-the-art in open courseware initiatives worldwide. First, the MIT OpenCourseWare project is overviewed, as it has been the real starting point of the OCW movement. Usually, open courseware refers to a free and open digital publication of high quality university level educational materials that are organized as courses,…

  17. Knowledge and Processes That Predict Proficiency in Digital Literacy

    ERIC Educational Resources Information Center

    Bulger, Monica E.; Mayer, Richard E.; Metzger, Miriam J.

    2014-01-01

    Proficiency in digital literacy refers to the ability to read and write using online sources, and includes the ability to select sources relevant to the task, synthesize information into a coherent message, and communicate the message with an audience. The present study examines the determinants of digital literacy proficiency by asking 150…

  18. Echolocation signals of wild Atlantic spotted dolphin (Stenella frontalis)

    NASA Astrophysics Data System (ADS)

    Au, Whitlow W. L.; Herzing, Denise L.

    2003-01-01

An array of four hydrophones arranged in a symmetrical star configuration was used to measure the echolocation signals of the Atlantic spotted dolphin (Stenella frontalis) in the Bahamas. The spacing between the center hydrophone and the other hydrophones was 45.7 cm. A video camera was attached to the array, and a video tape recorder was time-synchronized with the computer used to digitize the acoustic signals. The echolocation signals had bi-modal frequency spectra with a low-frequency peak between 40 and 50 kHz and a high-frequency peak between 110 and 130 kHz. The low-frequency peak was dominant when the source level was low, and the high-frequency peak dominated when the source level was high. Peak-to-peak source levels as high as 210 dB re 1 μPa were measured. The source level varied in amplitude approximately as a function of the one-way transmission loss for signals traveling from the animals to the array. The characteristics of the signals were similar to those of captive Tursiops truncatus, Delphinapterus leucas and Pseudorca crassidens measured in open waters under controlled conditions.
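The back-calculation of a source level from a received level, using one-way spherical-spreading transmission loss TL = 20·log10(r), can be sketched as follows (the numeric values here are illustrative assumptions, not measurements from the study):

```python
import math

def transmission_loss_db(range_m):
    """One-way spherical-spreading loss in dB re 1 m: TL = 20*log10(r)."""
    return 20.0 * math.log10(range_m)

def source_level_db(received_level_db, range_m):
    """Back-calculated source level referenced to 1 m from the animal."""
    return received_level_db + transmission_loss_db(range_m)

print(source_level_db(170.0, 100.0))  # → 210.0
```

Real field measurements also correct for frequency-dependent absorption, which spherical spreading alone ignores.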

  19. Fused off-axis object illumination direct-to-digital holography with a plurality of illumination sources

    DOEpatents

    Price, Jeffery R.; Bingham, Philip R.

    2005-11-08

    Systems and methods are described for rapid acquisition of fused off-axis illumination direct-to-digital holography. A method of recording a plurality of off-axis object illuminated spatially heterodyne holograms, each of the off-axis object illuminated spatially heterodyne holograms including spatially heterodyne fringes for Fourier analysis, includes digitally recording, with a first illumination source of an interferometer, a first off-axis object illuminated spatially heterodyne hologram including spatially heterodyne fringes for Fourier analysis; and digitally recording, with a second illumination source of the interferometer, a second off-axis object illuminated spatially heterodyne hologram including spatially heterodyne fringes for Fourier analysis.

  20. A front-end readout Detector Board for the OpenPET electronics system

    NASA Astrophysics Data System (ADS)

    Choong, W.-S.; Abu-Nimeh, F.; Moses, W. W.; Peng, Q.; Vu, C. Q.; Wu, J.-Y.

    2015-08-01

We present a 16-channel front-end readout board for the OpenPET electronics system. A major task in developing a nuclear medical imaging system, such as a positron emission computed tomograph (PET) or a single-photon emission computed tomograph (SPECT), is the electronics system. While there are a wide variety of detector and camera design concepts, the relatively simple nature of the acquired data allows for a common set of electronics requirements that can be met by a flexible, scalable, and high-performance OpenPET electronics system. The analog signals from the different types of detectors used in medical imaging share similar characteristics, which allows for a common analog signal processing. The OpenPET electronics processes the analog signals with Detector Boards. Here we report on the development of a 16-channel Detector Board. Each signal is digitized by a continuously sampled analog-to-digital converter (ADC), which is processed by a field programmable gate array (FPGA) to extract pulse height information. A leading edge discriminator creates a timing edge that is "time stamped" by a time-to-digital converter (TDC) implemented inside the FPGA. This digital information from each channel is sent to an FPGA that services 16 analog channels, and then information from multiple channels is processed by this FPGA to perform logic for crystal lookup, DOI calculation, calibration, etc.
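The leading-edge time-stamping step can be illustrated in software (a toy sketch with assumed samples and threshold, not the FPGA firmware): find the first sample crossing the threshold, and interpolate between the straddling samples for sub-sample timing, mimicking what the discriminator/TDC pair does in hardware.

```python
def leading_edge_time(samples, threshold):
    """Return the interpolated crossing time in sample units, or None."""
    for i in range(1, len(samples)):
        if samples[i - 1] < threshold <= samples[i]:
            # linear interpolation between the two straddling samples
            frac = (threshold - samples[i - 1]) / (samples[i] - samples[i - 1])
            return (i - 1) + frac
    return None

pulse = [0.0, 0.1, 0.4, 0.9, 1.0, 0.7]    # digitized rising edge of a pulse
t = leading_edge_time(pulse, 0.5)          # crosses between samples 2 and 3
```

A fixed-threshold leading edge is simple but exhibits amplitude-dependent time walk, which is why such systems typically apply a walk correction downstream.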

  1. A front-end readout Detector Board for the OpenPET electronics system

    DOE PAGES

    Choong, W. -S.; Abu-Nimeh, F.; Moses, W. W.; ...

    2015-08-12

Here, we present a 16-channel front-end readout board for the OpenPET electronics system. A major task in developing a nuclear medical imaging system, such as a positron emission computed tomograph (PET) or a single-photon emission computed tomograph (SPECT), is the electronics system. While there are a wide variety of detector and camera design concepts, the relatively simple nature of the acquired data allows for a common set of electronics requirements that can be met by a flexible, scalable, and high-performance OpenPET electronics system. The analog signals from the different types of detectors used in medical imaging share similar characteristics, which allows for a common analog signal processing. The OpenPET electronics processes the analog signals with Detector Boards. Here we report on the development of a 16-channel Detector Board. Each signal is digitized by a continuously sampled analog-to-digital converter (ADC), which is processed by a field programmable gate array (FPGA) to extract pulse height information. A leading edge discriminator creates a timing edge that is "time stamped" by a time-to-digital converter (TDC) implemented inside the FPGA. In conclusion, this digital information from each channel is sent to an FPGA that services 16 analog channels, and then information from multiple channels is processed by this FPGA to perform logic for crystal lookup, DOI calculation, calibration, etc.

  2. An introduction to the special issue on Geoscience Papers of the Future

    NASA Astrophysics Data System (ADS)

    David, Cédric H.; Gil, Yolanda; Duffy, Christopher J.; Peckham, Scott D.; Venayagamoorthy, S. Karan

    2016-10-01

Advocates of enhanced quality for published scientific results are increasingly voicing the need for further transparency of data and software for scientific reproducibility. However, such advanced digital scholarship can appear perplexing to geoscientists who are attracted by the concept of open science yet wonder about the exact mechanics and implications of the associated efforts. This special issue of Earth and Space Science entitled "Geoscience Papers of the Future" includes a review of existing best practices for digital scholarship and bundles a set of example articles that share their digital research products and reflect on the process of opening their scientific approach in a common quest for reproducible science.

  3. From Excavations to Web: a GIS for Archaeology

    NASA Astrophysics Data System (ADS)

    D'Urso, M. G.; Corsi, E.; Nemeti, S.; Germani, M.

    2017-05-01

The study and protection of Cultural Heritage have in recent years undergone a revolution in both research tools and reference disciplines. The technological approach to the collection, organization and publication of archaeological data using GIS software has completely changed the essence of traditional methods of investigation, paving the way to the development of several application areas, up to Cultural Resource Management. A relatively recent sector of archaeological GIS development is dedicated to intra-site analyses aimed at recording, processing and displaying information obtained during excavations. The case study of the archaeological site located in the south-east of the San Pietro Vetere plateau in Aquino, in Southern Lazio, illustrates a procedure describing the complete digital workflow of an intra-site analysis of an archaeological dig. The implementation of the GIS project and its publication on the web, thanks to several software packages, particularly the FOSS (Free and Open Source Software) Quantum GIS, are an opportunity to reflect on the strengths and critical aspects of this particular application of GIS technology. For future research developments, the identification of a digital protocol for processing excavation data (from acquisition and cataloguing up to data insertion) is of fundamental importance, also in view of a possible future Open Project on medieval Aquino.

  4. Integration of an open interface PC scene generator using COTS DVI converter hardware

    NASA Astrophysics Data System (ADS)

    Nordland, Todd; Lyles, Patrick; Schultz, Bret

    2006-05-01

    Commercial-Off-The-Shelf (COTS) personal computer (PC) hardware is increasingly capable of computing high dynamic range (HDR) scenes for military sensor testing at high frame rates. New electro-optical and infrared (EO/IR) scene projectors feature electrical interfaces that can accept the DVI output of these PC systems. However, military Hardware-in-the-loop (HWIL) facilities such as those at the US Army Aviation and Missile Research Development and Engineering Center (AMRDEC) utilize a sizeable inventory of existing projection systems that were designed to use the Silicon Graphics Incorporated (SGI) digital video port (DVP, also known as DVP2 or DD02) interface. To mate the new DVI-based scene generation systems to these legacy projection systems, CG2 Inc., a Quantum3D Company (CG2), has developed a DVI-to-DVP converter called Delta DVP. This device takes progressive scan DVI input, converts it to digital parallel data, and combines and routes color components to derive a 16-bit wide luminance channel replicated on a DVP output interface. The HWIL Functional Area of AMRDEC has developed a suite of modular software to perform deterministic real-time, wave band-specific rendering of sensor scenes, leveraging the features of commodity graphics hardware and open source software. Together, these technologies enable sensor simulation and test facilities to integrate scene generation and projection components with diverse pedigrees.
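The 16-bit luminance channel derived from 8-bit color components can be illustrated with a simple packing scheme (the bit layout here is a hypothetical assumption for illustration, not CG2's actual Delta DVP mapping): the high and low bytes of the luminance word ride on two color channels of the DVI link and are recombined on the DVP side.

```python
def pack_luminance(lum16):
    """Split a 16-bit luminance word into (high, low) 8-bit channel values."""
    return (lum16 >> 8) & 0xFF, lum16 & 0xFF

def unpack_luminance(high, low):
    """Recombine the two 8-bit channel values on the receiving side."""
    return (high << 8) | low

high, low = pack_luminance(0xABCD)
print(hex(unpack_luminance(high, low)))  # → 0xabcd
```

Carrying HDR radiance this way is what lets an 8-bit-per-channel commodity link feed a 16-bit-wide projector interface without quantizing the scene to 8 bits.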

  5. Lessons learnt from a MOOC about social media for digital health literacy.

    PubMed

Atique, Suleman; Househ, Mowafa; Fernandez-Luque, Luis; Gabarron, Elia; Wan, Marian; Singh, Onkar; Traver Salcedo, Vicente; Li, Yu-Chuan Jack; Shabbir, Syed-Abdul

    2016-08-01

    Nowadays, the Internet and social media represent prime channels for health information seeking and peer support. However, benefits of health social media can be reduced by low digital health literacy. We designed a massive open online course (MOOC) course about health social media to increase the students' digital health literacy. In this course, we wanted to explore the difficulties confronted by the MOOC users in relation to accessing quality online health information and to propose methods to overcome the issues. An online survey was carried out to assess the students' digital health literacy. This survey was one of the activities for the enrolled learners in an online course entitled "Social Media in Health Care" on "FutureLearn", one of the popular MOOC platforms. The course was hosted by Taipei Medical University, Taiwan. Data from a total of 300 respondents were collected through the online survey from 14 December 2015 to 10 January 2016. Most participants (61%) considered finding online health information is easy or very easy, while 39% were unsure or found it difficult to retrieve online health information. Most (63%) were not sure about judging whether available information can be used for making health decisions. This study indicates a demand for more training to increase skills to improve the capability of health consumers to identify trustworthy, useful health information. More research to understand the health information seeking process will be crucial in identifying the skillsets that need to be further developed. MOOCs about digital health can be a great source of knowledge when it comes to studying patients' needs.

  6. Open-loop digital frequency multiplier

    NASA Technical Reports Server (NTRS)

    Moore, R. C.

    1977-01-01

    A monostable multivibrator is implemented using digital integrated circuits for an application where the multiplier constant is too large for a conventional phase-locked-loop integrated circuit. A 400 Hz clock is generated from a 1 Hz timing reference by means of a divide-by-N counter.
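
    The divide-by-N relationship behind this multiplier can be shown numerically. This is a software sketch of the arithmetic only (the actual device is a monostable-based hardware circuit): when the multiplier's output is divided by N and held in step with the reference, the output runs at N times the reference frequency.

    ```python
    def multiplier_output_hz(f_ref_hz: float, n: int) -> float:
        """Output frequency of a frequency multiplier whose feedback path
        divides the output by N and locks it to the reference."""
        return f_ref_hz * n

    def divide_by_n(edges, n):
        """Keep every n-th input edge: a software model of a divide-by-N counter."""
        return edges[::n]

    # A 400 Hz clock from a 1 Hz reference requires N = 400.
    f_out = multiplier_output_hz(1.0, 400)       # -> 400.0
    # Dividing one second of 400 Hz edges by 400 recovers one edge per second.
    edges = [i / 400 for i in range(400)]
    ref_edges = divide_by_n(edges, 400)          # one edge
    ```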

  7. Hybrid digital-analog coding with bandwidth expansion for correlated Gaussian sources under Rayleigh fading

    NASA Astrophysics Data System (ADS)

    Yahampath, Pradeepa

    2017-12-01

    Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal for a range of CSNRs. Analog transmission, however, is optimal at all CSNRs if the source and channel are memoryless and bandwidth matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. The analog part uses linear encoding to transmit the quantization error, which improves the performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals.
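
    The averaging the abstract describes can be written out explicitly. As an illustrative formulation (the symbols are assumed, not taken from the paper): under Rayleigh fading the received CSNR γ is exponentially distributed, and the hybrid encoder minimizes the end-to-end distortion averaged over that distribution,

    ```latex
    p(\gamma) = \frac{1}{\bar{\gamma}}\, e^{-\gamma/\bar{\gamma}}, \qquad
    \mathrm{AMMSE} = \int_0^{\infty} D(\gamma)\, p(\gamma)\, \mathrm{d}\gamma,
    ```

    where D(γ) denotes the mean square error achieved at CSNR γ and the minimization is taken jointly over the channel-code outage threshold and the analog/digital power split.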

  8. Syndrome source coding and its universal generalization

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1975-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome source coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome source coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A universal generalization of syndrome source coding is formulated which provides robustly effective, distortionless coding of source ensembles.
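
    The idea can be made concrete with the Hamming(7,4) code. This is my own minimal illustration, not the paper's construction: for a sparse binary source whose 7-bit blocks contain at most one 1, every block is a coset leader, so the 3-bit syndrome compresses each block losslessly (7 bits in, 3 bits out).

    ```python
    def syndrome(block):
        """3-bit syndrome of a 7-bit block under the Hamming(7,4) parity-check
        matrix whose j-th column is the binary representation of j+1."""
        s = 0
        for pos, bit in enumerate(block):
            if bit:
                s ^= pos + 1       # XOR the column indices of the set bits
        return s                    # integer 0..7 encodes the 3 syndrome bits

    def decompress(s):
        """Recover the minimum-weight (coset-leader) block with syndrome s."""
        block = [0] * 7
        if s:
            block[s - 1] = 1
        return block

    # Lossless for blocks of weight <= 1: the source block *is* the error pattern.
    for p in range(7):
        x = [0] * 7
        x[p] = 1
        assert decompress(syndrome(x)) == x
    assert decompress(syndrome([0] * 7)) == [0] * 7
    ```

    Denser blocks would decode to the wrong coset leader, which is where the rate/distortion trade-off in the abstract enters.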

  9. An Examination of Open and Technology Leadership in Managerial Practices of Education System

    ERIC Educational Resources Information Center

    Akcil, Umut; Aksal, Fahriye Altinay; Mukhametzyanova, Farida Sh.; Gazi, Zehra Altinay

    2017-01-01

    In order for a smooth and problem-free transformation to take place in a digitalizing education system, efficient management is needed. Thus, educational managers need to improve their skills and develop behaviors suitable for taking education systems into the digital age. Social networks enable leaders to become digital citizens by embracing and…

  10. Mashup Scheme Design of Map Tiles Using Lightweight Open Source Webgis Platform

    NASA Astrophysics Data System (ADS)

    Hu, T.; Fan, J.; He, H.; Qin, L.; Li, G.

    2018-04-01

    To address the difficulty of using existing commercial Geographic Information System platforms to integrate multi-source image data, this research proposes loading multi-source local tile data based on CesiumJS and examines the tile data organization mechanisms and spatial reference differences of the CesiumJS platform, as well as various tile data sources, such as Google Maps, Map World, and Bing Maps. Two types of tile loading schemes were designed for the mashup of tiles: a single-data-source loading scheme and a multi-data-source loading scheme. The digital map tiles used in this paper cover two different mainstream spatial references, the WGS84 coordinate system and the Web Mercator coordinate system. According to the experimental results, the single-data-source loading scheme and the multi-data-source loading scheme with the same spatial coordinate system showed favorable visualization effects; however, the multi-data-source loading scheme was prone to tile image deformation when loading multi-source tile data with different spatial references. The resulting method provides a low-cost and highly flexible solution for small and medium-scale GIS programs and has potential for practical application. The problem of deformation during the transition between different spatial references is an important topic for further research.
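
    The deformation the authors observe stems from the projection mismatch between the two references. A small sketch of the standard forward spherical Web Mercator transform (EPSG:3857, radius 6378137 m) shows why tiles gridded in WGS84 latitude/longitude cannot be overlaid directly on Web Mercator tiles:

    ```python
    import math

    EARTH_RADIUS = 6378137.0  # Web Mercator sphere radius, metres

    def wgs84_to_web_mercator(lon_deg: float, lat_deg: float):
        """Forward spherical Web Mercator (EPSG:3857) projection."""
        x = EARTH_RADIUS * math.radians(lon_deg)
        y = EARTH_RADIUS * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
        return x, y

    # The projection stretches north-south with latitude: equal latitude steps
    # map to unequal y steps, so a WGS84 tile drawn on a Mercator grid deforms.
    x0, y0 = wgs84_to_web_mercator(0.0, 0.0)    # origin maps to (0, 0)
    ```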

  11. Accuracy of open magnetic resonance imaging for guiding injection of the equine deep digital flexor tendon within the hoof.

    PubMed

    Groom, Lauren M; White, Nathaniel A; Adams, M Norris; Barrett, Jennifer G

    2017-11-01

    Lesions of the distal deep digital flexor tendon (DDFT) are frequently diagnosed using MRI in horses with foot pain. Intralesional injection of biologic therapeutics shows promise in tendon healing; however, accurate injection of distal deep digital flexor tendon lesions within the hoof is difficult. The aim of this experimental study was to evaluate accuracy of a technique for injection of the deep digital flexor tendon within the hoof using MRI-guidance, which could be performed in standing patients. We hypothesized that injection of the distal deep digital flexor tendon within the hoof could be accurately guided using open low-field MRI to target either the lateral or medial lobe at a specific location. Ten cadaver limbs were positioned in an open, low-field MRI unit. Each distal deep digital flexor tendon lobe was assigned to have a proximal (adjacent to the proximal aspect of the navicular bursa) or distal (adjacent to the navicular bone) injection. A titanium needle was inserted into each tendon lobe, guided by T1-weighted transverse images acquired simultaneously during injection. Colored dye was injected as a marker and postinjection MRI and gross sections were assessed. The success of injection as evaluated on gross section was 85% (70% proximal, 100% distal). The success of injection as evaluated by MRI was 65% (60% proximal, 70% distal). There was no significant difference between the success of injecting the medial versus lateral lobe. The major limitation of this study was the use of cadaver limbs with normal tendons. The authors conclude that injection of the distal deep digital flexor tendon within the hoof is possible using MRI guidance. © 2017 American College of Veterinary Radiology.

  12. Designing a Resource Evolution Support System for Open Knowledge Communities

    ERIC Educational Resources Information Center

    Yang, Xianmin; Yu, Shengquan

    2015-01-01

    The continuous generation and evolution of digital learning resources is important for promoting open learning and meeting the personalized needs of learners. In the Web 2.0 era, open and collaborative authoring is becoming a popular method by which to create vast personalized learning resources in open knowledge communities (OKCs). However, the…

  13. Educators Assess "Open Content" Movement

    ERIC Educational Resources Information Center

    Trotter, Andrew

    2009-01-01

    This article discusses the open-content movement in education. A small but growing movement of K-12 educators is latching on to educational resources that are "open," or free for others to use, change, and republish on web sites that promote sharing. The open-content movement is fueled partly by digital creation tools that make it easy…

  14. Lessons in modern digital field geology: Open source software, 3D techniques, and the new world of digital mapping

    NASA Astrophysics Data System (ADS)

    Pavlis, Terry; Hurtado, Jose; Langford, Richard; Serpa, Laura

    2014-05-01

    Although many geologists refuse to admit it, it is time to put paper-based geologic mapping into the historical archives and move to the full potential of digital mapping techniques. For our group, flat-map digital geologic mapping is now a routine operation in both research and instruction. Several software options are available, and basic proficiency with the software can be learned in a few hours of instruction and practice. The first practical field GIS software, ArcPad, remains a viable, stable option on Windows-based systems. However, the vendor seems to be moving away from ArcPad in favor of mobile software solutions that are difficult to implement without GIS specialists. Thus, we have pursued a second software option based on the open source program QGIS. Our QGIS system uses the same shapefile-centric data structure as our ArcPad system, including similar pop-up data entry forms and generic graphics for easy data management in the field. The advantage of QGIS is that the same software runs on virtually all common platforms except iOS, although the Android version remains unstable as of this writing. A third software option we are experimenting with for flat-map-based field work is Fieldmove, a derivative of the 3D-capable program Move developed by Midland Valley. Our initial experiments with Fieldmove are positive, particularly with the new, inexpensive (<300 euros) Windows tablets. However, the lack of flexibility in data structure makes for cumbersome workflows when trying to interface our existing shapefile-centric data structures to Move. Nonetheless, in spring 2014 we will experiment with full-3D immersion in the field using the full Move software package in combination with ground-based LiDAR and photogrammetry. One new workflow suggested by our initial experiments is that field geologists should consider using photogrammetry software to capture 3D visualizations of key outcrops. This process is now straightforward in several software packages, and it affords previously unheard-of potential for communicating the complexity of key exposures. For example, in studies of metamorphic structures we often search for days to find "Rosetta Stone" outcrops that display key geometric relationships. While conventional photographs rarely capture the essence of the field exposure, capturing a true 3D representation of the exposure with multiple photos from many orientations can solve this communication problem. As spatial databases evolve, these 3D models should be readily importable into the database.

  15. ROSA P : The National Transportation Library’s Repository and Open Science Access Portal

    DOT National Transportation Integrated Search

    2018-01-01

    The National Transportation Library (NTL) was founded as an all-digital repository of US DOT research reports, technical publications and data products. NTL's primary public offering is ROSA P, the Repository and Open Science Access Portal. An open...

  16. Open Badges for Education: What Are the Implications at the Intersection of Open Systems and Badging?

    ERIC Educational Resources Information Center

    Ahn, June; Pellicone, Anthony; Butler, Brian S.

    2014-01-01

    Badges have garnered great interest among scholars of digital media and learning. In addition, widespread initiatives such as Mozilla's Open Badge Framework expand the potential of badging into the realm of open education. In this paper, we explicate the concept of open badges. We highlight some of the ways that researchers have examined…

  17. Open source tracking and analysis of adult Drosophila locomotion in Buridan's paradigm with and without visual targets.

    PubMed

    Colomb, Julien; Reiter, Lutz; Blaszkiewicz, Jedrzej; Wessnitzer, Jan; Brembs, Bjoern

    2012-01-01

    Insects have been among the most widely used model systems for studying the control of locomotion by nervous systems. In Drosophila, we implemented a simple test for locomotion: in Buridan's paradigm, flies walk back and forth between two inaccessible visual targets [1]. To date, the lack of easily accessible tools for tracking the fly position and analyzing its trajectory has probably contributed to the slow acceptance of Buridan's paradigm. We present here a package of open source software designed to track a single animal walking in a homogeneous environment (Buritrack) and to analyze its trajectory. The Centroid Trajectory Analysis (CeTrAn) software is coded in the open source statistics project R. It extracts eleven metrics and includes correlation analyses and a Principal Components Analysis (PCA). It was designed to be easily customized to personal requirements. In combination with inexpensive hardware, these tools can readily be used for teaching and research purposes. We demonstrate the capabilities of our package by measuring the locomotor behavior of adult Drosophila melanogaster (whose wings were clipped), either in the presence or in the absence of visual targets, and comparing the latter to different computer-generated data. The analysis of the trajectories confirms that flies are centrophobic and shows that inaccessible visual targets can alter the orientation of the flies without changing their overall patterns of activity. Using computer-generated data, the analysis software was tested, and chance values for some metrics (as well as for their correlations) were set. Our results prompt the hypothesis that fixation behavior is observed only if negative phototaxis can overcome the propensity of the flies to avoid the center of the platform. Together with our companion paper, we provide new tools to promote Open Science as well as the collection and analysis of digital behavioral data.
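
    Two of the kinds of trajectory metrics such an analysis extracts can be sketched as follows. These are illustrative definitions of my own, not CeTrAn's actual eleven metrics (and CeTrAn itself is written in R, not Python):

    ```python
    import math

    def path_length(xy):
        """Total distance travelled along a trajectory of (x, y) points."""
        return sum(math.dist(a, b) for a, b in zip(xy, xy[1:]))

    def center_avoidance(xy, platform_radius):
        """Fraction of samples farther than half the platform radius from the
        center: a simple proxy for centrophobism."""
        outer = sum(1 for x, y in xy if math.hypot(x, y) > platform_radius / 2)
        return outer / len(xy)

    track = [(0.0, 0.0), (3.0, 4.0), (3.0, 8.0)]
    assert path_length(track) == 9.0               # 5 + 4
    assert center_avoidance(track, 8.0) == 2 / 3   # two of three points are outer
    ```

    Chance levels for metrics like these can be set, as in the paper, by running the same functions on computer-generated random walks.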

  18. Managing research and surveillance projects in real-time with a novel open-source eManagement tool designed for under-resourced countries.

    PubMed

    Steiner, Andreas; Hella, Jerry; Grüninger, Servan; Mhalu, Grace; Mhimbira, Francis; Cercamondi, Colin I; Doulla, Basra; Maire, Nicolas; Fenner, Lukas

    2016-09-01

    A software tool was developed to facilitate data entry and to monitor research projects in under-resourced countries in real time. The eManagement tool "odk_planner" is written in the scripting languages PHP and Python. The odk_planner is lightweight and uses minimal internet resources. It was designed to be used with the open source software Open Data Kit (ODK). Users can easily configure odk_planner to meet their needs, and the online interface displays data collected from ODK forms in a graphically informative way. The odk_planner also allows users to upload pictures and laboratory results and sends text messages automatically. User-defined access rights protect data and privacy. We present examples from four field applications in Tanzania successfully using the eManagement tool: 1) a clinical trial; 2) a longitudinal tuberculosis (TB) cohort study with a complex visit schedule, where it was used to graphically display missing case report forms, upload digitized X-rays, and send text message reminders to patients; 3) an intervention study to improve TB case detection, carried out at pharmacies: a tablet-based electronic referral system monitored referred patients and sent automated messages to remind pharmacy clients to visit a TB clinic; and 4) TB retreatment case monitoring designed to improve drug resistance surveillance: clinicians at four public TB clinics and lab technicians at the TB reference laboratory used a smartphone-based application that tracked sputum samples and collected clinical and laboratory data. The user-friendly, open source odk_planner is a simple but multi-functional, Web-based eManagement tool with add-ons that helps researchers conduct studies in under-resourced countries. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  19. Visualizing research collections in the National Transportation Library's digital repository : ROSA P.

    DOT National Transportation Integrated Search

    2017-01-01

    The National Transportation Library's (NTL) Repository and Open Science Portal (ROSA P) : is a digital library for transportation, including U. S. Department of Transportation : sponsored research results and technical publications, other documents a...

  20. A flexible, open, decentralized system for digital pathology networks.

    PubMed

    Schuler, Robert; Smith, David E; Kumaraguruparan, Gowri; Chervenak, Ann; Lewis, Anne D; Hyde, Dallas M; Kesselman, Carl

    2012-01-01

    High-resolution digital imaging is enabling digital archiving and sharing of digitized microscopy slides and new methods for digital pathology. Collaborative research centers, outsourced medical services, and multi-site organizations stand to benefit from sharing pathology data in a digital pathology network. Yet significant technological challenges remain due to the large size and volume of digitized whole slide images. While information systems do exist for managing local pathology laboratories, they tend to be oriented toward narrow clinical use cases or offer closed ecosystems around proprietary formats. Few solutions exist for networking digital pathology operations. Here we present a system architecture and implementation of a digital pathology network and share results from a production system that federates major research centers.

  1. A Flexible, Open, Decentralized System for Digital Pathology Networks

    PubMed Central

    SMITH, David E.; KUMARAGURUPARAN, Gowri; CHERVENAK, Ann; LEWIS, Anne D.; HYDE, Dallas M.; KESSELMAN, Carl

    2014-01-01

    High-resolution digital imaging is enabling digital archiving and sharing of digitized microscopy slides and new methods for digital pathology. Collaborative research centers, outsourced medical services, and multi-site organizations stand to benefit from sharing pathology data in a digital pathology network. Yet significant technological challenges remain due to the large size and volume of digitized whole slide images. While information systems do exist for managing local pathology laboratories, they tend to be oriented toward narrow clinical use cases or offer closed ecosystems around proprietary formats. Few solutions exist for networking digital pathology operations. Here we present a system architecture and implementation of a digital pathology network and share results from a production system that federates major research centers. PMID:22941985

  2. Digital technologies in support of flood resilience: A case study from Nepal

    NASA Astrophysics Data System (ADS)

    Liu, Wei; McCallum, Ian; See, Linda; Dugar, Sumit; Laso-Bayas, Juan-Carlos

    2016-04-01

    This paper presents ongoing efforts to support flood resilience in the Karnali basin in Nepal through the provision of different forms of digital technology. Flood Risk Geo-Wiki is an online visualization and crowdsourcing tool, which has been adapted to display flood risk maps at the global scale as well as information of relevance to planners and the community at the local level. Community-based flood risk maps, which have traditionally been drawn on paper, are being digitized and integrated with OpenStreetMap to provide better access to this collective knowledge base. Mobile phones, using the GeoODK (Geographical Open Data Kit) questionnaire builder, are being deployed to collect georeferenced information on flood risks and vulnerability, which can be used to validate flood models and design action plans and strategies for coping with future flood events. These types of digital technologies are simple to implement, yet together they can help support flood-prone communities.

  3. Micromechanical torsional digital-to-analog converter for open-loop angular positioning applications

    NASA Astrophysics Data System (ADS)

    Zhou, Guangya; Tay, Francis E. H.; Chau, Fook Siong; Zhao, Yi; Logeeswaran, VJ

    2004-05-01

    This paper reports a novel micromechanical torsional digital-to-analog converter (MTDAC), operated in open-loop with digitally controlled precise multi-level tilt angles. The MTDAC mechanism presented is analogous to that of an electrical binary-weighted-input digital-to-analog converter (DAC). It consists of a rigid tunable platform, an array of torsional microactuators, each operating in a two-state (on/off) mode, and a set of connection beams with binary-weighted torsional stiffnesses that connect the actuators to the platform. The feasibility of the proposed MTDAC mechanism was verified numerically by finite element simulations and experimentally with a commercial optical phase-shifting interferometric system. A prototype 2-bit MTDAC was implemented using the poly-MUMPS process achieving a full-scale output tilt angle of 1.92° with a rotation step of 0.64°. This mechanism can be configured for many promising applications, particularly in beam steering-based OXC switches.
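
    The binary-weighted principle maps directly onto the reported figures. A minimal sketch, assuming the tilt contributions add linearly as in an ideal electrical binary-weighted DAC (the real device realizes the weights mechanically, through the torsional stiffness of the connection beams):

    ```python
    def tilt_angle(bits, step_deg=0.64):
        """Tilt of an ideal n-bit binary-weighted mechanical DAC.

        bits: sequence of 0/1 values, most significant bit first.
        """
        code = 0
        for b in bits:
            code = (code << 1) | b   # interpret the bit pattern as an integer
        return code * step_deg

    # 2-bit device: four levels, 0.64 deg step, 1.92 deg full scale,
    # matching the prototype numbers in the abstract.
    levels = [tilt_angle([b1, b0]) for b1 in (0, 1) for b0 in (0, 1)]
    assert [round(v, 2) for v in levels] == [0.0, 0.64, 1.28, 1.92]
    ```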

  4. Land use mapping from CBERS-2 images with open source tools by applying different classification algorithms

    NASA Astrophysics Data System (ADS)

    Sanhouse-García, Antonio J.; Rangel-Peraza, Jesús Gabriel; Bustos-Terrones, Yaneth; García-Ferrer, Alfonso; Mesas-Carrascosa, Francisco J.

    2016-02-01

    Land cover classification is often based on differences between classes combined with great homogeneity within each class. This cover information is obtained through field work or by processing satellite images. Field work involves high costs; therefore, digital image processing techniques have become an important alternative for this task. However, in some developing countries, and particularly in the Casacoima municipality in Venezuela, geographic information systems are scarce owing to a lack of updated information and the high cost of software licenses. This research proposes a low-cost methodology to develop thematic mapping of local land use and coverage types in areas with scarce resources. Thematic mapping was developed from CBERS-2 images and spatial information available on the network using open source tools. Supervised classification was applied both per pixel and per region, using different classification algorithms and comparing them among themselves. Per-pixel classification was based on the Maxver (maximum likelihood) and Euclidean distance (minimum distance) algorithms, while per-region classification was based on the Bhattacharya algorithm. Satisfactory results were obtained from per-region classification, with an overall reliability of 83.93% and a kappa index of 0.81. The Maxver algorithm showed a reliability of 73.36% and a kappa index of 0.69, while Euclidean distance obtained 67.17% and 0.61 for reliability and kappa index, respectively. The proposed methodology proved very useful for cartographic processing and updating, which in turn supports the development of management and land-use plans. Hence, open source tools are an economically viable alternative not only for forestry organizations but for the general public, allowing projects to be developed in economically depressed and/or environmentally threatened areas.
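
    The minimum-distance (Euclidean) classifier used as one of the per-pixel methods can be sketched in a generic textbook form (this is my own illustration, not the authors' implementation, and the training data below are made up):

    ```python
    import math

    def class_means(samples):
        """samples: {class_label: list of feature vectors} -> per-class mean vector."""
        return {
            label: [sum(col) / len(vecs) for col in zip(*vecs)]
            for label, vecs in samples.items()
        }

    def classify(pixel, means):
        """Assign a pixel (feature vector) to the class with the nearest mean."""
        return min(means, key=lambda label: math.dist(pixel, means[label]))

    # Hypothetical two-band training signatures for two cover classes.
    training = {
        "water":  [[10.0, 20.0], [12.0, 22.0]],
        "forest": [[80.0, 90.0], [82.0, 88.0]],
    }
    means = class_means(training)
    label = classify([11.0, 21.0], means)   # -> "water"
    ```

    The maximum-likelihood (Maxver) variant replaces the plain Euclidean distance with a distance weighted by each class's covariance, which is why the two methods can rank pixels differently.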

  5. Streamlined, Inexpensive 3D Printing of the Brain and Skull.

    PubMed

    Naftulin, Jason S; Kimchi, Eyal Y; Cash, Sydney S

    2015-01-01

    Neuroimaging technologies such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) collect three-dimensional (3D) data that is typically viewed on two-dimensional (2D) screens. Actual 3D models, however, allow interaction with real objects such as implantable electrode grids, potentially improving patient-specific neurosurgical planning and personalized clinical education. Desktop 3D printers can now produce relatively inexpensive, good-quality prints. We describe our process for reliably generating life-sized 3D brain prints from MRIs and 3D skull prints from CTs. We have integrated a standardized, primarily open-source process for 3D printing brains and skulls. We describe how to convert clinical neuroimaging Digital Imaging and Communications in Medicine (DICOM) images to stereolithography (STL) files, a common 3D object file format that can be sent to 3D printing services. We additionally share how to convert these STL files to machine-instruction gcode files for reliable in-house printing on desktop, open-source 3D printers. We have successfully printed over 19 patient brain hemispheres from 7 patients on two different open-source desktop 3D printers. Each brain hemisphere costs approximately $3-4 in consumable plastic filament as described, and the total process takes 14-17 hours, almost all of which is unsupervised (preprocessing = 4-6 hr; printing = 9-11 hr; post-processing = <30 min). Printing a matching portion of a skull costs $1-5 in consumable plastic filament and takes less than 14 hr in total. We have developed a streamlined, cost-effective process for 3D printing brain and skull models. We surveyed healthcare providers and patients, who confirmed that rapid-prototyped patient-specific 3D models may help interdisciplinary surgical planning and patient education. The methods we describe can be applied for other clinical, research, and educational purposes.

  6. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  7. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  8. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  9. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  10. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  11. The GPlates Portal: Cloud-Based Interactive 3D Visualization of Global Geophysical and Geological Data in a Web Browser.

    PubMed

    Müller, R Dietmar; Qin, Xiaodong; Sandwell, David T; Dutkiewicz, Adriana; Williams, Simon E; Flament, Nicolas; Maus, Stefan; Seton, Maria

    2016-01-01

    The pace of scientific discovery is being transformed by the availability of 'big data' and open access, open source software tools. These innovations open up new avenues for how scientists communicate and share data and ideas with each other and with the general public. Here, we describe our efforts to bring to life our studies of the Earth system, both at present day and through deep geological time. The GPlates Portal (portal.gplates.org) is a gateway to a series of virtual globes based on the Cesium Javascript library. The portal allows fast interactive visualization of global geophysical and geological data sets, draped over digital terrain models. The globes use WebGL for hardware-accelerated graphics and are cross-platform and cross-browser compatible with complete camera control. The globes include a visualization of a high-resolution global digital elevation model and the vertical gradient of the global gravity field, highlighting small-scale seafloor fabric such as abyssal hills, fracture zones and seamounts in unprecedented detail. The portal also features globes portraying seafloor geology and a global data set of marine magnetic anomaly identifications. The portal is specifically designed to visualize models of the Earth through geological time. These space-time globes include tectonic reconstructions of the Earth's gravity and magnetic fields, and several models of long-wavelength surface dynamic topography through time, including the interactive plotting of vertical motion histories at selected locations. The globes put the on-the-fly visualization of massive data sets at the fingertips of end-users to stimulate teaching and learning and novel avenues of inquiry.

  12. The GPlates Portal: Cloud-Based Interactive 3D Visualization of Global Geophysical and Geological Data in a Web Browser

    PubMed Central

    Müller, R. Dietmar; Qin, Xiaodong; Sandwell, David T.; Dutkiewicz, Adriana; Williams, Simon E.; Flament, Nicolas; Maus, Stefan; Seton, Maria

    2016-01-01

    The pace of scientific discovery is being transformed by the availability of ‘big data’ and open access, open source software tools. These innovations open up new avenues for how scientists communicate and share data and ideas with each other and with the general public. Here, we describe our efforts to bring to life our studies of the Earth system, both at present day and through deep geological time. The GPlates Portal (portal.gplates.org) is a gateway to a series of virtual globes based on the Cesium Javascript library. The portal allows fast interactive visualization of global geophysical and geological data sets, draped over digital terrain models. The globes use WebGL for hardware-accelerated graphics and are cross-platform and cross-browser compatible with complete camera control. The globes include a visualization of a high-resolution global digital elevation model and the vertical gradient of the global gravity field, highlighting small-scale seafloor fabric such as abyssal hills, fracture zones and seamounts in unprecedented detail. The portal also features globes portraying seafloor geology and a global data set of marine magnetic anomaly identifications. The portal is specifically designed to visualize models of the Earth through geological time. These space-time globes include tectonic reconstructions of the Earth’s gravity and magnetic fields, and several models of long-wavelength surface dynamic topography through time, including the interactive plotting of vertical motion histories at selected locations. The globes put the on-the-fly visualization of massive data sets at the fingertips of end-users to stimulate teaching and learning and novel avenues of inquiry. PMID:26960151

  13. The Use of Digital Technologies across the Adult Life Span in Distance Education

    ERIC Educational Resources Information Center

    Jelfs, Anne; Richardson, John T. E.

    2013-01-01

    In June 2010, a survey was carried out to explore access to digital technology, attitudes to digital technology and approaches to studying across the adult life span in students taking courses with the UK Open University. In total, 7000 people were surveyed, of whom more than 4000 responded. Nearly all these students had access to a computer and…

  14. 77 FR 67688 - Advisory Committee on Reactor Safeguards (ACRS); Meeting of the ACRS Subcommittee on Digital I&C...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-13

    ... NUCLEAR REGULATORY COMMISSION Advisory Committee on Reactor Safeguards (ACRS); Meeting of the ACRS Subcommittee on Digital I&C; Notice of Meeting The ACRS Subcommittee on Digital I&C will hold a meeting on November 16, 2012, Room T-2B1, 11545 Rockville Pike, Rockville, Maryland. The entire meeting will be open...

  15. Ethics Issues of Digital Contents for Pre-Service Primary Teachers: A Gamification Experience for Self-Assessment with Socrative

    ERIC Educational Resources Information Center

    Pérez Garcias, Adolfina; Marín, Victoria I.

    2016-01-01

    The knowledge society has brought many possibilities for open education practices and, simultaneously, deep ethical challenges related to the use, sharing and reuse of digital content. In fact, even at university level, many undergraduate students do not respect the licences of digital resources. As part of the contents of a third-year educational…

  16. Lost Identity: The Assimilation of Digital Libraries into the Web

    ERIC Educational Resources Information Center

    Lagoze, Carl Jay

    2010-01-01

    The idea of Digital Libraries emerged in the early 1990s from a vision of a "library of the future", without walls and open 24 hours a day. These digital libraries would leverage the substantial investments of federal funding in the Internet and advanced computing for the benefit of the entire population. The world's knowledge would be a key press…

  17. Digital Distribution of Academic Journals and Its Impact on Scholarly Communication: Looking Back after 20 Years

    ERIC Educational Resources Information Center

    Solomon, David J.

    2013-01-01

    It has been approximately 20 years since distributing scholarly journals digitally became feasible. This article discusses the broad implications of the transition to digital distributed scholarship from a historical perspective and focuses on the development of open access (OA) and the various models for funding OA in the context of the roles…

  18. Digital Diversity: A Basic Tool with Lots of Uses

    ERIC Educational Resources Information Center

    Coy, Mary

    2006-01-01

    In this article the author relates how the digital camera has altered the way she teaches and the way her students learn. She also emphasizes the importance for teachers to have software that can edit, print, and incorporate photos. She cites several instances in which a digital camera can be used: (1) PowerPoint presentations; (2) Open house; (3)…

  19. Computer-assisted 3D kinematic analysis of all leg joints in walking insects.

    PubMed

    Bender, John A; Simpson, Elaine M; Ritzmann, Roy E

    2010-10-26

    High-speed video can provide fine-scaled analysis of animal behavior. However, extracting behavioral data from video sequences is a time-consuming, tedious, subjective task. These issues are exacerbated where accurate behavioral descriptions require analysis of multiple points in three dimensions. We describe a new computer program written to assist a user in simultaneously extracting three-dimensional kinematics of multiple points on each of an insect's six legs. Digital video of a walking cockroach was collected in grayscale at 500 fps from two synchronized, calibrated cameras. We improved the legs' visibility by painting white dots on the joints, similar to techniques used for digitizing human motion. Compared to manual digitization of 26 points on the legs over a single, 8-second bout of walking (or 106,496 individual 3D points), our software achieved approximately 90% of the accuracy with 10% of the labor. Our experimental design reduced the complexity of the tracking problem by tethering the insect and allowing it to walk in place on a lightly oiled glass surface, but in principle, the algorithms implemented are extensible to free walking. Our software is free and open-source, written in the free language Python and including a graphical user interface for configuration and control. We encourage collaborative enhancements to make this tool both better and widely utilized.
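
The geometric core of such multi-camera digitization — recovering a 3D joint position from matched 2D observations in two calibrated views — can be sketched with a standard direct linear transform (DLT) triangulation. The camera matrices and point below are illustrative toys, not the paper's published code:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Direct linear transform (DLT) triangulation of one 3D point.

    P1, P2 : 3x4 camera projection matrices of the two calibrated views
    x1, x2 : (u, v) image coordinates of the same joint marker in each view
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector with the smallest
    # singular value; dividing by the last component dehomogenizes it.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Toy setup: identity camera and a second camera shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))  # ≈ [0.5, 0.2, 4.0]
```

With noiseless correspondences the reconstruction is exact; in practice calibration residuals and digitization error make it approximate, which is why painted joint markers help.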

  20. An integrated photogrammetric and spatial database management system for producing fully structured data using aerial and remote sensing images.

    PubMed

    Ahmadi, Farshid Farnood; Ebadi, Hamid

    2009-01-01

    3D spatial data acquired from aerial and remote sensing images by photogrammetric techniques is one of the most accurate and economical data sources for GIS, map production, and spatial data updating. However, there are still many problems concerning the storage, structuring and appropriate management of spatial data obtained using these techniques. Given the capabilities of spatial database management systems (SDBMSs), direct integration of photogrammetric and spatial database management systems can save time and reduce the cost of producing and updating digital maps. This integration is accomplished by replacing digital maps with a single spatial database. Applying spatial databases overcomes the problem of managing spatial and attribute data in a coupled approach. This management approach is one of the main problems in GISs for using map products of photogrammetric workstations. These integrated systems also make it possible to provide structured spatial data, based on OGC (Open GIS Consortium) standards and topological relations between different feature classes, at the time of the feature digitizing process. In this paper, the integration of photogrammetric systems and SDBMSs is evaluated. Then, different levels of integration are described. Finally, the design, implementation and testing of a software package called Integrated Photogrammetric and Oracle Spatial Systems (IPOSS) are presented.

  1. Integration of Heterogenous Digital Surface Models

    NASA Astrophysics Data System (ADS)

    Boesch, R.; Ginzler, C.

    2011-08-01

    The application of extended digital surface models often reveals that, despite an acceptable global accuracy for a given dataset, the local accuracy of the model can vary over a wide range. For high resolution applications which cover the spatial extent of a whole country, this can be a major drawback. Within the Swiss National Forest Inventory (NFI), two digital surface models are available, one derived from LiDAR point data and the other from aerial images. Automatic photogrammetric image matching with ADS80 aerial infrared images with 25cm and 50cm resolution is used to generate a surface model (ADS-DSM) with 1m resolution covering the whole of Switzerland (approx. 41,000 km2). The spatially corresponding LiDAR dataset has a global point density of 0.5 points per m2 and is mainly used in applications as an interpolated grid with 2m resolution (LiDAR-DSM). Although both surface models seem to offer a comparable accuracy from a global view, local analysis shows significant differences. Both datasets have been acquired over several years. Concerning the LiDAR-DSM, different flight patterns and inconsistent quality control result in a significantly varying point density. The image acquisition of the ADS-DSM is also stretched over several years and the model generation is hampered by clouds, varying illumination and shadow effects. Nevertheless, many classification and feature extraction applications requiring high resolution data depend on the local accuracy of the used surface model; therefore precise knowledge of the local data quality is essential. The commercial photogrammetric software NGATE (part of SOCET SET) generates the image-based surface model (ADS-DSM) and also delivers a map with figures of merit (FOM) of the matching process for each calculated height pixel. The FOM-map contains matching codes such as high slope, excessive shift or low correlation. For the generation of the LiDAR-DSM only first- and last-pulse data were available. 
Therefore only the point distribution can be used to derive a local accuracy measure. For the calculation of a robust point distribution measure, a constrained triangulation of local points (within an area of 100m2) has been implemented using the Open Source project CGAL. The area of each triangle is a measure for the spatial distribution of raw points in this local area. Combining the FOM-map with the local evaluation of LiDAR points allows an appropriate local accuracy evaluation of both surface models. The currently implemented strategy ("partial replacement") uses the hypothesis, that the ADS-DSM is superior due to its better global accuracy of 1m. If the local analysis of the FOM-map within the 100m2 area shows significant matching errors, the corresponding area of the triangulated LiDAR points is analyzed. If the point density and distribution is sufficient, the LiDAR-DSM will be used in favor of the ADS-DSM at this location. If the local triangulation reflects low point density or the variance of triangle areas exceeds a threshold, the investigated location will be marked as NODATA area. In a future implementation ("anisotropic fusion") an anisotropic inverse distance weighting (IDW) will be used, which merges both surface models in the point data space by using FOM-map and local triangulation to derive a quality weight for each of the interpolation points. The "partial replacement" implementation and the "fusion" prototype for the anisotropic IDW make use of the Open Source projects CGAL (Computational Geometry Algorithms Library), GDAL (Geospatial Data Abstraction Library) and OpenCV (Open Source Computer Vision).
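
The local quality test described above — point density plus the spread of triangle areas in the local triangulation — can be sketched as follows. The thresholds and the hand-made triangulation are illustrative, not the NFI's production values (which rely on CGAL's constrained triangulation):

```python
import numpy as np

def triangle_area(a, b, c):
    """Shoelace area of a 2D triangle."""
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

def classify_cell(points, triangles, min_density=0.3, max_area_var=4.0, cell_area=100.0):
    """Decide whether the LiDAR points in one 100 m2 cell are a usable fallback.

    points    : Nx2 array of raw point positions in the cell
    triangles : index triples from a (here hand-made) triangulation
    Returns "LIDAR" if density and distribution suffice, else "NODATA".
    """
    if len(points) / cell_area < min_density:
        return "NODATA"                     # too few raw points
    areas = np.array([triangle_area(points[i], points[j], points[k])
                      for i, j, k in triangles])
    if areas.var() > max_area_var:
        return "NODATA"                     # uneven spatial distribution
    return "LIDAR"

cell = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], dtype=float)  # 4 points in 100 m2
tris = [(0, 1, 2), (1, 2, 3)]
print(classify_cell(cell, tris, min_density=0.03))  # → LIDAR (density relaxed for toy data)
print(classify_cell(cell, tris))                    # → NODATA (fails default density test)
```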

  2. KID Project: an internet-based digital video atlas of capsule endoscopy for research purposes.

    PubMed

    Koulaouzidis, Anastasios; Iakovidis, Dimitris K; Yung, Diana E; Rondonotti, Emanuele; Kopylov, Uri; Plevris, John N; Toth, Ervin; Eliakim, Abraham; Wurm Johansson, Gabrielle; Marlicz, Wojciech; Mavrogenis, Georgios; Nemeth, Artur; Thorlacius, Henrik; Tontini, Gian Eugenio

    2017-06-01

    Capsule endoscopy (CE) has revolutionized small-bowel (SB) investigation. Computational methods can enhance diagnostic yield (DY); however, incorporating machine learning algorithms (MLAs) into CE reading is difficult as large amounts of image annotations are required for training. Current databases lack graphic annotations of pathologies and cannot be used. A novel database, KID, aims to provide a reference for research and development of medical decision support systems (MDSS) for CE. Open-source software was used for the KID database. Clinicians contribute anonymized, annotated CE images and videos. Graphic annotations are supported by an open-access annotation tool (Ratsnake). We detail an experiment based on the KID database, examining differences in SB lesion measurement between human readers and an MLA. The Jaccard Index (JI) was used to evaluate similarity between annotations by the MLA and human readers. The MLA performed best in measuring lymphangiectasias with a JI of 81 ± 6 %. The other lesion types were: angioectasias (JI 64 ± 11 %), aphthae (JI 64 ± 8 %), chylous cysts (JI 70 ± 14 %), polypoid lesions (JI 75 ± 21 %), and ulcers (JI 56 ± 9 %). MLAs can perform as well as human readers in the measurement of SB angioectasias in white light (WL). Automated lesion measurement is therefore feasible. KID is currently the only open-source CE database developed specifically to aid development of MDSS. Our experiment demonstrates this potential.
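
The Jaccard index used in the experiment is straightforward to compute for two binary annotation masks; the toy masks below are invented for illustration:

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Jaccard index |A ∩ B| / |A ∪ B| of two binary annotation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

# Two overlapping 4x4-pixel square annotations on an 8x8 frame.
reader = np.zeros((8, 8), dtype=bool); reader[2:6, 2:6] = True
mla = np.zeros((8, 8), dtype=bool); mla[3:7, 3:7] = True
print(round(jaccard(reader, mla), 3))  # → 0.391  (9 shared pixels / 23 total)
```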

  3. pvsR: An Open Source Interface to Big Data on the American Political Sphere

    PubMed Central

    2015-01-01

    Digital data from the political sphere is abundant, omnipresent, and more and more directly accessible through the Internet. Project Vote Smart (PVS) is a prominent example of this big public data and covers various aspects of U.S. politics in astonishing detail. Despite the vast potential of PVS’ data for political science, economics, and sociology, it is hardly used in empirical research. The systematic compilation of semi-structured data can be complicated and time consuming as the data format is not designed for conventional scientific research. This paper presents a new tool that makes the data easily accessible to a broad scientific community. We provide the software called pvsR as an add-on to the R programming environment for statistical computing. This open source interface (OSI) serves as a direct link between a statistical analysis and the large PVS database. The free and open code is expected to substantially reduce the cost of research with PVS’ new big public data in a vast variety of possible applications. We discuss its advantages vis-à-vis traditional methods of data generation as well as already existing interfaces. The validity of the library is documented based on an illustration involving female representation in local politics. In addition, pvsR facilitates the replication of research with PVS data at low costs, including the pre-processing of data. Similar OSIs are recommended for other big public databases. PMID:26132154

  4. Digital Scholarship and Open Access

    ERIC Educational Resources Information Center

    Losoff, Barbara; Pence, Harry E.

    2010-01-01

    Open access publications provide scholars with unrestricted access to the "conversation" that is the basis for the advancement of knowledge. The large number of open access journals, archives, and depositories already in existence demonstrates the technical and economic viability of providing unrestricted access to the literature that is the…

  5. 40 CFR Table 3 to Subpart Wwww of... - Organic HAP Emissions Limits for Existing Open Molding Sources, New Open Molding Sources Emitting...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Existing Open Molding Sources, New Open Molding Sources Emitting Less Than 100 TPY of HAP, and New and... CATEGORIES National Emissions Standards for Hazardous Air Pollutants: Reinforced Plastic Composites... Existing Open Molding Sources, New Open Molding Sources Emitting Less Than 100 TPY of HAP, and New and...

  6. Real-time classification and sensor fusion with a spiking deep belief network.

    PubMed

    O'Connor, Peter; Neil, Daniel; Liu, Shih-Chii; Delbruck, Tobi; Pfeiffer, Michael

    2013-01-01

    Deep Belief Networks (DBNs) have recently shown impressive performance on a broad range of classification problems. Their generative properties allow better understanding of the performance, and provide a simpler solution for sensor fusion tasks. However, because of their inherent need for feedback and parallel update of large numbers of units, DBNs are expensive to implement on serial computers. This paper proposes a method based on the Siegert approximation for Integrate-and-Fire neurons to map an offline-trained DBN onto an efficient event-driven spiking neural network suitable for hardware implementation. The method is demonstrated in simulation and by a real-time implementation of a 3-layer network with 2694 neurons used for visual classification of MNIST handwritten digits with input from a 128 × 128 Dynamic Vision Sensor (DVS) silicon retina, and sensory-fusion using additional input from a 64-channel AER-EAR silicon cochlea. The system is implemented through the open-source software in the jAER project and runs in real-time on a laptop computer. It is demonstrated that the system can recognize digits in the presence of distractions, noise, scaling, translation and rotation, and that the degradation of recognition performance by using an event-based approach is less than 1%. Recognition is achieved in an average of 5.8 ms after the onset of the presentation of a digit. By cue integration from both silicon retina and cochlea outputs we show that the system can be biased to select the correct digit from otherwise ambiguous input.
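
The event-driven character of such a network can be illustrated with a single integrate-and-fire unit whose state is updated only when an input spike arrives, decaying the membrane analytically over the silent interval. This is a generic sketch with arbitrary constants, not the paper's Siegert-based DBN mapping:

```python
import math

class EventDrivenLIF:
    """Minimal leaky integrate-and-fire neuron updated only at input-spike times."""
    def __init__(self, tau=20.0, v_th=1.0):
        self.tau, self.v_th = tau, v_th
        self.v, self.t_last = 0.0, 0.0

    def spike_in(self, t, weight):
        # Decay the membrane analytically over the silent interval, then integrate.
        self.v *= math.exp(-(t - self.t_last) / self.tau)
        self.t_last = t
        self.v += weight
        if self.v >= self.v_th:
            self.v = 0.0          # reset
            return True           # output spike event
        return False

neuron = EventDrivenLIF()
out = [t for t in range(0, 100, 5) if neuron.spike_in(float(t), 0.4)]
print(out)  # → [15, 35, 55, 75, 95]
```

Because no computation happens between events, cost scales with spike traffic rather than with wall-clock time — the property that makes event-driven hardware implementations attractive.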

  7. TouchTerrain: A simple web-tool for creating 3D-printable topographic models

    NASA Astrophysics Data System (ADS)

    Hasiuk, Franciszek J.; Harding, Chris; Renner, Alex Raymond; Winer, Eliot

    2017-12-01

    An open-source web-application, TouchTerrain, was developed to simplify the production of 3D-printable terrain models. Direct Digital Manufacturing (DDM) using 3D printers can change how geoscientists, students, and stakeholders interact with 3D data, with the potential to improve geoscience communication and environmental literacy. No other manufacturing technology can convert digital data into tangible objects quickly at relatively low cost; however, the expertise necessary to produce a 3D-printed terrain model can be a substantial burden: knowledge of geographical information systems, computer aided design (CAD) software, and 3D printers may all be required. Furthermore, printing models larger than the build volume of a 3D printer can pose additional technical hurdles. The TouchTerrain web-application simplifies DDM for elevation data by generating digital 3D models customized for a specific 3D printer's capabilities. The only required user input is the selection of a region-of-interest using the provided web-application with a Google Maps-style interface. Publicly available digital elevation data is processed via the Google Earth Engine API. To allow the manufacture of 3D terrain models larger than a 3D printer's build volume, the selected area can be split into multiple tiles without third-party software. This application significantly reduces the time and effort required for a non-expert, such as an educator, to obtain 3D terrain models for use in class. The web application is deployed at http://touchterrain.geol.iastate.edu/.
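
The tiling step — cutting a selected DEM into pieces that each fit a printer's build volume — can be sketched with plain array splitting; the toy grid and tile counts are illustrative, not the web app's actual pipeline:

```python
import numpy as np

def split_dem(dem, n_rows, n_cols):
    """Split a DEM grid into n_rows x n_cols tiles (edge tiles may be smaller)."""
    return [tile
            for band in np.array_split(dem, n_rows, axis=0)
            for tile in np.array_split(band, n_cols, axis=1)]

dem = np.arange(36).reshape(6, 6)   # toy 6x6 elevation grid
tiles = split_dem(dem, 2, 3)
print(len(tiles), tiles[0].shape)   # → 6 (3, 2)
```

Each tile would then be extruded into a watertight mesh and exported (e.g. as STL) for printing; stacking the printed tiles recovers the full terrain.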

  8. MANTA--an open-source, high density electrophysiology recording suite for MATLAB.

    PubMed

    Englitz, B; David, S V; Sorenson, M D; Shamma, S A

    2013-01-01

    The distributed nature of nervous systems makes it necessary to record from a large number of sites in order to decipher the neural code, whether single cell, local field potential (LFP), micro-electrocorticograms (μECoG), electroencephalographic (EEG), magnetoencephalographic (MEG) or in vitro micro-electrode array (MEA) data are considered. High channel-count recordings also optimize the yield of a preparation and the efficiency of time invested by the researcher. Currently, data acquisition (DAQ) systems with high channel counts (>100) can be purchased from a limited number of companies at considerable prices. These systems are typically closed-source and thus prohibit custom extensions or improvements by end users. We have developed MANTA, an open-source MATLAB-based DAQ system, as an alternative to existing options. MANTA combines high channel counts (up to 1440 channels/PC), usage of analog or digital headstages, low per channel cost (<$90/channel), feature-rich display and filtering, a user-friendly interface, and a modular design permitting easy addition of new features. MANTA is licensed under the GPL and free of charge. The system has been tested by daily use in multiple setups for >1 year, recording reliably from 128 channels. It offers a growing list of features, including integrated spike sorting, PSTH and CSD display and fully customizable electrode array geometry (including 3D arrays), some of which are not available in commercial systems. MANTA runs on a typical PC and communicates via TCP/IP and can thus be easily integrated with existing stimulus generation/control systems in a lab at a fraction of the cost of commercial systems. With modern neuroscience developing rapidly, MANTA provides a flexible platform that can be rapidly adapted to the needs of new analyses and questions. Being open-source, the development of MANTA can outpace commercial solutions in functionality, while maintaining a low price-point.

  9. MANTA—an open-source, high density electrophysiology recording suite for MATLAB

    PubMed Central

    Englitz, B.; David, S. V.; Sorenson, M. D.; Shamma, S. A.

    2013-01-01

    The distributed nature of nervous systems makes it necessary to record from a large number of sites in order to decipher the neural code, whether single cell, local field potential (LFP), micro-electrocorticograms (μECoG), electroencephalographic (EEG), magnetoencephalographic (MEG) or in vitro micro-electrode array (MEA) data are considered. High channel-count recordings also optimize the yield of a preparation and the efficiency of time invested by the researcher. Currently, data acquisition (DAQ) systems with high channel counts (>100) can be purchased from a limited number of companies at considerable prices. These systems are typically closed-source and thus prohibit custom extensions or improvements by end users. We have developed MANTA, an open-source MATLAB-based DAQ system, as an alternative to existing options. MANTA combines high channel counts (up to 1440 channels/PC), usage of analog or digital headstages, low per channel cost (<$90/channel), feature-rich display and filtering, a user-friendly interface, and a modular design permitting easy addition of new features. MANTA is licensed under the GPL and free of charge. The system has been tested by daily use in multiple setups for >1 year, recording reliably from 128 channels. It offers a growing list of features, including integrated spike sorting, PSTH and CSD display and fully customizable electrode array geometry (including 3D arrays), some of which are not available in commercial systems. MANTA runs on a typical PC and communicates via TCP/IP and can thus be easily integrated with existing stimulus generation/control systems in a lab at a fraction of the cost of commercial systems. With modern neuroscience developing rapidly, MANTA provides a flexible platform that can be rapidly adapted to the needs of new analyses and questions. Being open-source, the development of MANTA can outpace commercial solutions in functionality, while maintaining a low price-point. PMID:23653593

  10. The SCEC/UseIT Intern Program: Creating Open-Source Visualization Software Using Diverse Resources

    NASA Astrophysics Data System (ADS)

    Francoeur, H.; Callaghan, S.; Perry, S.; Jordan, T.

    2004-12-01

    The Southern California Earthquake Center undergraduate IT intern program (SCEC UseIT) conducts IT research to benefit collaborative earth science research. Through this program, interns have developed real-time, interactive, 3D visualization software using open-source tools. Dubbed LA3D, a distribution of this software is now in use by the seismic community. LA3D enables the user to interactively view Southern California datasets and models of importance to earthquake scientists, such as faults, earthquakes, fault blocks, digital elevation models, and seismic hazard maps. LA3D is now being extended to support visualizations anywhere on the planet. The new software, called SCEC-VIDEO (Virtual Interactive Display of Earth Objects), makes use of a modular, plugin-based software architecture which supports easy development and integration of new data sets. Currently SCEC-VIDEO is in beta testing, with a full open-source release slated for the future. Both LA3D and SCEC-VIDEO were developed using a wide variety of software technologies. These, which included relational databases, web services, software management technologies, and 3D graphics in Java, were necessary to integrate the heterogeneous array of data sources which comprise our software. Currently the interns are working to integrate new technologies and larger data sets to increase software functionality and value. In addition, both LA3D and SCEC-VIDEO allow the user to script and create movies. Thus program interns with computer science backgrounds have been writing software, while interns with other interests, such as cinema, geology, and education, have been making movies that have proved of great use in scientific talks, media interviews, and education. In this way, SCEC UseIT incorporates a wide variety of scientific and human resources to create products of value to the scientific and outreach communities. The program plans to continue with its interdisciplinary approach, increasing the relevance of the software and expanding its use in the scientific community.
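
The modular, plugin-based architecture described above can be sketched with a minimal registry pattern (shown here in Python rather than the project's Java, with invented layer names):

```python
# A registry-style plugin architecture: each data-set layer registers itself
# under a name, and the viewer core looks layers up at run time without
# importing them directly.
PLUGINS = {}

def register(name):
    def decorator(cls):
        PLUGINS[name] = cls
        return cls
    return decorator

@register("faults")
class FaultLayer:
    def load(self):
        return "fault traces loaded"

@register("dem")
class DemLayer:
    def load(self):
        return "elevation model loaded"

layer = PLUGINS["faults"]()   # selected by name, e.g. from a config file
print(layer.load())           # → fault traces loaded
```

New data sets are added by writing and registering a new layer class; the viewer core never changes, which is what makes the integration of heterogeneous sources tractable.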

  11. Photocopy of photograph (digital image located in LBNL Photo Lab ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-005). March 2005. PASSAGEWAY UNDER SOUTHEAST QUADRANT, AIR DUCT OPENINGS, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA

  12. The Palladiolibrary Geo-Models AN Open 3d Archive to Manage and Visualize Information-Communication Resources about Palladio

    NASA Astrophysics Data System (ADS)

    Apollonio, F. I.; Baldissini, S.; Clini, P.; Gaiani, M.; Palestini, C.; Trevisan, C.

    2013-07-01

    The paper describes the objectives, methods, procedures and outcomes of the development of the digital archive of Palladio's works and documentation: the PALLADIOLibrary of the Centro Internazionale di Studi di Architettura Andrea Palladio di Vicenza (CISAAP). The core of the application consists of fifty-one reality-based 3D models usable and navigable within a system grounded on Google Earth. This information system, a collaboration of four universities each contributing specific skills, returns a comprehensive, structured and coherent semantic interpretation of the Palladian landscape through shapes realistically reconstructed from historical sources and surveys and treated for Google Earth with Ambient Occlusion techniques, overcoming the traditional display mode.

  13. Using game engine for 3D terrain visualisation of GIS data: A review

    NASA Astrophysics Data System (ADS)

    Che Mat, Ruzinoor; Shariff, Abdul Rashid Mohammed; Nasir Zulkifli, Abdul; Shafry Mohd Rahim, Mohd; Hafiz Mahayudin, Mohd

    2014-06-01

    This paper reviews the 3D terrain visualisation of GIS data using game engines that are available commercially as well as open source. 3D terrain visualisation is a technique used to visualise terrain information from GIS data such as a digital elevation model (DEM), triangular irregular network (TIN) and contours. Much research has been conducted to transform the 2D view of maps to 3D. There are several terrain visualisation software packages available for free, including Cesium, Hftool and Landserf. This review paper will help interested users to better understand the current state of the art in 3D terrain visualisation of GIS data using game engines.

  14. Reimagining the microscope in the 21(st) century using the scalable adaptive graphics environment.

    PubMed

    Mateevitsi, Victor; Patel, Tushar; Leigh, Jason; Levy, Bruce

    2015-01-01

    Whole-slide imaging (WSI), while technologically mature, remains in the early adopter phase of the technology adoption lifecycle. One reason for this current situation is that current methods of visualizing and using WSI closely follow long-existing workflows for glass slides. We set out to "reimagine" the digital microscope in the era of cloud computing by combining WSI with the rich collaborative environment of the Scalable Adaptive Graphics Environment (SAGE). SAGE is a cross-platform, open-source visualization and collaboration tool that enables users to access, display and share a variety of data-intensive information, in a variety of resolutions and formats, from multiple sources, on display walls of arbitrary size. A prototype of a WSI viewer app in the SAGE environment was created. While not full featured, it enabled the testing of our hypothesis that these technologies could be blended together to change the essential nature of how microscopic images are utilized for patient care, medical education, and research. Using the newly created WSI viewer app, demonstration scenarios were created in the patient care and medical education scenarios. This included a live demonstration of a pathology consultation at the International Academy of Digital Pathology meeting in Boston in November 2014. SAGE is well suited to display, manipulate and collaborate using WSIs, along with other images and data, for a variety of purposes. It goes beyond how glass slides and current WSI viewers are being used today, changing the nature of digital pathology in the process. A fully developed WSI viewer app within SAGE has the potential to encourage the wider adoption of WSI throughout pathology.

  15. Reimagining the microscope in the 21st century using the scalable adaptive graphics environment

    PubMed Central

    Mateevitsi, Victor; Patel, Tushar; Leigh, Jason; Levy, Bruce

    2015-01-01

    Background: Whole-slide imaging (WSI), while technologically mature, remains in the early adopter phase of the technology adoption lifecycle. One reason for this current situation is that current methods of visualizing and using WSI closely follow long-existing workflows for glass slides. We set out to “reimagine” the digital microscope in the era of cloud computing by combining WSI with the rich collaborative environment of the Scalable Adaptive Graphics Environment (SAGE). SAGE is a cross-platform, open-source visualization and collaboration tool that enables users to access, display and share a variety of data-intensive information, in a variety of resolutions and formats, from multiple sources, on display walls of arbitrary size. Methods: A prototype of a WSI viewer app in the SAGE environment was created. While not full featured, it enabled the testing of our hypothesis that these technologies could be blended together to change the essential nature of how microscopic images are utilized for patient care, medical education, and research. Results: Using the newly created WSI viewer app, demonstration scenarios were created in the patient care and medical education scenarios. This included a live demonstration of a pathology consultation at the International Academy of Digital Pathology meeting in Boston in November 2014. Conclusions: SAGE is well suited to display, manipulate and collaborate using WSIs, along with other images and data, for a variety of purposes. It goes beyond how glass slides and current WSI viewers are being used today, changing the nature of digital pathology in the process. A fully developed WSI viewer app within SAGE has the potential to encourage the wider adoption of WSI throughout pathology. PMID:26110092

  16. Blind Leak Detection for Closed Systems

    NASA Technical Reports Server (NTRS)

    Oelgoetz, Peter; Johnson, Ricky; Todd, Douglas; Russell, Samuel; Walker, James

    2003-01-01

    The current inspection technique for locating interstitial leaking in the Space Shuttle Main Engine nozzles is the application of a liquid leak check solution in the openings where the interstitial spaces between the tubing and the structural jacket vent out of the aft end of the nozzle, while its cooling tubes are pressurized to 25 psig with helium. When a leak is found, it is classified, and if the leak is severe enough the suspect tube is cut open so that a boroscope can be inserted to find the leak point. Since the boroscope can only cover a finite tube length, and since it is impossible to identify which tube (to the right or left of the identified interstitial) is leaking, many extra and undesired repairs have been made to fix just one leak. In certain instances when the interstitials are interlinked by poor braze bonding, many interstitials will show indications of leaking from a single source. What is desired is a technique that can identify the leak source so that a single repair can be performed. Dr. Samuel Russell and James Walker, both with NASA/MSFC, have developed a thermographic inspection system that supports this single-repair approach. They have teamed with Boeing/Rocketdyne to repackage the inspection processes to be suitable for full-scale Shuttle development and flight hardware and to implement the process at NASA centers. The methods and results presented address the thermographic identification of interstitial leaks in the Space Shuttle Main Engine nozzles. A highly sensitive digital infrared camera (capable of detecting a temperature difference of 0.025 °C) is used to record the cooling effects associated with a leak source, such as a crack or pinhole, hidden within the nozzle wall by observing the inner hot wall surface as the nozzle is pressurized. These images are enhanced by digitally subtracting a thermal reference image taken before pressurization. 
The method provides a non-intrusive way of locating the tube that is leaking and the exact leak source position to within a very small axial distance. Many of the factors that influence the inspectability of the nozzle are addressed; including pressure rate, peak pressure, gas type, ambient temperature and surface preparation. Other applications for this thermographic inspection system are the Reinforced-Carbon-Carbon (RCC) leading edge of the Space Shuttle orbiter and braze joint integrity.
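    The reference-subtraction step described in this record is simple to sketch. The following is a minimal illustration, not the flight inspection software; the array sizes and temperatures are hypothetical, and only the 0.025 C sensitivity threshold comes from the abstract:

```python
import numpy as np

def leak_map(pressurized, reference, delta_t=0.025):
    """Flag pixels that cooled by more than delta_t degrees C after
    pressurization (cooling indicates gas escaping at a leak)."""
    diff = np.asarray(pressurized, dtype=float) - np.asarray(reference, dtype=float)
    return diff < -delta_t  # True where cooling exceeds the camera sensitivity

# Hypothetical 3x3 thermal frames (degrees C)
ref = np.full((3, 3), 40.0)
press = ref.copy()
press[1, 1] -= 0.5  # simulated cooling at a pinhole leak
mask = leak_map(press, ref)
```

In practice the reference frame would be the image captured just before pressurization, as described above.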

  17. One way Doppler Extractor. Volume 2: Digital VCO technique

    NASA Technical Reports Server (NTRS)

    Nossen, E. J.; Starner, E. R.

    1974-01-01

    A feasibility analysis and trade-offs for a one-way Doppler extractor using digital VCO techniques are presented. The method of Doppler measurement involves the use of a digital phase lock loop; once this loop is locked to the incoming signal, the precise frequency and hence the Doppler component can be determined directly from the contents of the digital control register. The only serious error source is due to internally generated noise. Techniques are presented for minimizing this error source and achieving an accuracy of 0.01 Hz in a one second averaging period. A number of digitally controlled oscillators were analyzed from a performance and complexity point of view. The most promising technique uses an arithmetic synthesizer as a digital waveform generator.
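    The "arithmetic synthesizer" named above is what is now usually called a phase-accumulator NCO, whose output frequency follows directly from its control word; that is why the Doppler component can be read straight out of the control register once the loop locks. A minimal sketch of that relation (the clock rate, register width and nominal frequency below are hypothetical, not taken from the report):

```python
def nco_frequency(control_word, f_clock_hz, accumulator_bits):
    """Output frequency of a phase-accumulator (arithmetic-synthesizer)
    oscillator: f_out = M * f_clk / 2**N."""
    return control_word * f_clock_hz / (1 << accumulator_bits)

def doppler_shift(control_word, f_clock_hz, accumulator_bits, f_nominal_hz):
    """Doppler component read from the digital control register once the
    phase-locked loop has locked to the incoming carrier."""
    return nco_frequency(control_word, f_clock_hz, accumulator_bits) - f_nominal_hz
```

For example, with a 32-bit accumulator clocked at 1 MHz, a control word of 2**31 yields exactly half the clock frequency.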

  18. Closed Loop Experiment Manager (CLEM)-An Open and Inexpensive Solution for Multichannel Electrophysiological Recordings and Closed Loop Experiments.

    PubMed

    Hazan, Hananel; Ziv, Noam E

    2017-01-01

    There is a growing need for multichannel electrophysiological systems that record from and interact with neuronal systems in near real-time. Such systems are needed, for example, for closed-loop, multichannel electrophysiological/optogenetic experimentation in vivo and in a variety of other neuronal preparations, or for developing and testing neuro-prosthetic devices, to name a few. Furthermore, there is a need for such systems to be inexpensive, reliable, user-friendly, easy to set up, open and expandable, and to possess long life cycles in the face of rapidly changing computing environments. Finally, they should provide powerful, yet reasonably easy to implement, facilities for developing closed-loop protocols for interacting with neuronal systems. Here, we survey commercial and open source systems that address these needs to varying degrees. We then present our own solution, which we refer to as Closed Loop Experiments Manager (CLEM). CLEM is an open source, soft real-time, Microsoft Windows desktop application that is based on a single generic personal computer (PC) and an inexpensive, general-purpose data acquisition board. CLEM provides a fully functional, user-friendly graphical interface, possesses facilities for recording, presenting and logging electrophysiological data from up to 64 analog channels, and facilities for controlling external devices, such as stimulators, through digital and analog interfaces. Importantly, it includes facilities for running closed-loop protocols written in any programming language that can generate dynamic link libraries (DLLs). We describe the application, its architecture and facilities. We then demonstrate, using networks of cortical neurons growing on multielectrode arrays (MEAs), that despite its reliance on generic hardware, its performance is appropriate for flexible, closed-loop experimentation at the neuronal network level.

  19. Closed Loop Experiment Manager (CLEM)—An Open and Inexpensive Solution for Multichannel Electrophysiological Recordings and Closed Loop Experiments

    PubMed Central

    Hazan, Hananel; Ziv, Noam E.

    2017-01-01

    There is a growing need for multichannel electrophysiological systems that record from and interact with neuronal systems in near real-time. Such systems are needed, for example, for closed-loop, multichannel electrophysiological/optogenetic experimentation in vivo and in a variety of other neuronal preparations, or for developing and testing neuro-prosthetic devices, to name a few. Furthermore, there is a need for such systems to be inexpensive, reliable, user-friendly, easy to set up, open and expandable, and to possess long life cycles in the face of rapidly changing computing environments. Finally, they should provide powerful, yet reasonably easy to implement, facilities for developing closed-loop protocols for interacting with neuronal systems. Here, we survey commercial and open source systems that address these needs to varying degrees. We then present our own solution, which we refer to as Closed Loop Experiments Manager (CLEM). CLEM is an open source, soft real-time, Microsoft Windows desktop application that is based on a single generic personal computer (PC) and an inexpensive, general-purpose data acquisition board. CLEM provides a fully functional, user-friendly graphical interface, possesses facilities for recording, presenting and logging electrophysiological data from up to 64 analog channels, and facilities for controlling external devices, such as stimulators, through digital and analog interfaces. Importantly, it includes facilities for running closed-loop protocols written in any programming language that can generate dynamic link libraries (DLLs). We describe the application, its architecture and facilities. We then demonstrate, using networks of cortical neurons growing on multielectrode arrays (MEAs), that despite its reliance on generic hardware, its performance is appropriate for flexible, closed-loop experimentation at the neuronal network level. PMID:29093659

  20. University Unbound! Higher Education in the Age of "Free"

    ERIC Educational Resources Information Center

    Harney, John O.

    2012-01-01

    Innovators and entrepreneurs are using technologies to make freely available the things for which universities charge significant money. MOOCs (massive open online courses), free online courses, lecture podcasts, low-cost off-the-shelf general education courses, online tutorials, digital collections of open learning resources, open badges--all are…

  1. Open Access

    ERIC Educational Resources Information Center

    Suber, Peter

    2012-01-01

    The Internet lets us share perfect copies of our work with a worldwide audience at virtually no cost. We take advantage of this revolutionary opportunity when we make our work "open access": digital, online, free of charge, and free of most copyright and licensing restrictions. Open access is made possible by the Internet and copyright-holder…

  2. Half-lives of 221Fr, 217At, 213Bi, 213Po and 209Pb from the 225Ac decay series.

    PubMed

    Suliman, G; Pommé, S; Marouli, M; Van Ammel, R; Stroh, H; Jobbágy, V; Paepen, J; Dirican, A; Bruchertseifer, F; Apostolidis, C; Morgenstern, A

    2013-07-01

    The half-lives of (221)Fr, (217)At, (213)Bi, (213)Po, and (209)Pb were measured by means of an ion-implanted planar Si detector for alpha and beta particles emitted from weak (225)Ac sources or from recoil sources, which were placed in a quasi-2π counting geometry. Recoil sources were prepared by collecting atoms from an open (225)Ac source onto a glass substrate. The (221)Fr and (213)Bi half-lives were determined by following the alpha particle emission rate of recoil sources as a function of time. Similarly, the (209)Pb half-life was determined from the beta particle count rate. The shorter half-lives of (217)At and (213)Po were deduced from delayed coincidence measurements on weak (225)Ac sources using digital data acquisition in list mode. The resulting values, T1/2((221)Fr) = 4.806 (6) min, T1/2((217)At) = 32.8 (3) ms, T1/2((213)Bi) = 45.62 (6) min, T1/2((213)Po) = 3.708 (8) μs, and T1/2((209)Pb) = 3.232 (5) h, were in agreement only with the best literature data. Copyright © 2013 Elsevier Ltd. All rights reserved.
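    Determining a half-life by following a count rate over time, as done for (221)Fr, (213)Bi and (209)Pb in this record, amounts to fitting an exponential decay. A minimal sketch with synthetic, noise-free data (the published analysis additionally handles background, dead time and impurities):

```python
import numpy as np

def half_life(times, counts):
    """Least-squares fit of ln(counts) = ln(C0) - lambda * t,
    returning T1/2 = ln(2) / lambda."""
    decay_constant = -np.polyfit(times, np.log(counts), 1)[0]
    return np.log(2) / decay_constant

# Synthetic decay curve with T1/2 = 4.806 min (the Fr-221 value above)
t = np.linspace(0.0, 30.0, 61)                      # minutes
counts = 1e5 * np.exp(-np.log(2) / 4.806 * t)
```

On this idealized data the fit recovers the half-life essentially exactly; real count data would carry Poisson noise and a correspondingly larger uncertainty.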

  3. "Freely Ye Have Received, Freely Give" (Matthew 10: 8)--How Giving Away Religious Digital Books Influences the Print Sales of Those Books

    ERIC Educational Resources Information Center

    Hilton, John, III.

    2010-01-01

    Lack of access prevents many from benefiting from educational resources. Digital technologies now enable educational resources, such as books, to be openly available to those with access to the Internet. This study examined the financial viability of a religious publisher's putting free digital versions of eight of its books on the Internet. The…

  4. Developing Conceptual Framework for Revising Self-Learning Materials (SLMs) of the Open School (OS) of Bangladesh Open University (BOU) at a Digital Environment

    ERIC Educational Resources Information Center

    Yeasmin, Sabina; Murthy, C. R. K.

    2011-01-01

    Bangladesh Open University (BOU) has run school programs as part of its academic activities through open schooling since its inception. As of today, the Open School uses first-generation self-learning materials (SLMs) written, an era ago, following an in-house style and template. Every year, the concerned faculty member corrects texts before…

  5. Developing Conceptual Framework for Revising Self-Learning Materials (SLMs) of the Open School (OS) of Bangladesh Open University (BOU) at a Digital Environment

    ERIC Educational Resources Information Center

    Yeasmin, Sabina; Murthy, C. R. K.

    2012-01-01

    Bangladesh Open University (BOU) has run school programs as part of its academic activities through open schooling since its inception. As of today, the Open School uses first-generation self-learning materials (SLMs) written, an era ago, following an in-house style and template. Every year, the concerned faculty member corrects texts before…

  6. Evaluating NTU's OpenCourseWare Project with Google Analytics: User Characteristics, Course Preferences, and Usage Patterns

    ERIC Educational Resources Information Center

    Sheu, Feng-Ru; Shih, Meilun

    2017-01-01

    As freely adoptable digital resources, OpenCourseWare (OCW) have become a prominent form of Open Educational Resources (OER). More than 275 institutions in the worldwide OCW consortium have committed to creating free access open course materials. Despite the resources and efforts to create OCW worldwide, little understanding of its use exists.…

  7. Photocopy of photograph (digital image located in LBNL Photo Lab ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-009). March 2005. OPENINGS OF AIR DUCTS INTO PASSAGEWAY UNDER SOUTHEAST QUADRANT, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA

  8. Fiber-channel audio video standard for military and commercial aircraft product lines

    NASA Astrophysics Data System (ADS)

    Keller, Jack E.

    2002-08-01

    Fibre Channel is an emerging high-speed digital network technology that continues to make inroads into the avionics arena. The suitability of Fibre Channel for such applications is largely due to its flexibility in these key areas: Network topologies can be configured in point-to-point, arbitrated loop or switched fabric connections. The physical layer supports either copper or fiber optic implementations with a bit error rate of less than 10⁻¹². Multiple Classes of Service are available. Multiple Upper Level Protocols are supported. Multiple high-speed data rates offer open-ended growth paths, providing speed negotiation within a single network. Current speeds supported by commercially available hardware are 1 and 2 Gbps, providing effective data rates of 100 and 200 MBps respectively. Such networks lend themselves well to the transport of digital video and audio data. This paper summarizes an ANSI standard currently in the final approval cycle of the InterNational Committee for Information Technology Standards (INCITS). This standard defines a flexible mechanism whereby digital video, audio and ancillary data are systematically packaged for transport over a Fibre Channel network. The basic mechanism, called a container, houses audio and video content functionally grouped as elements of the container called objects. Featured in this paper is a specific container mapping called Simple Parametric Digital Video (SPDV), developed particularly to address digital video in avionics systems. SPDV provides pixel-based video with associated ancillary data, typically sourced by various sensors, to be processed and/or distributed in the cockpit for presentation via high-resolution displays. Also highlighted in this paper is a streamlined Upper Level Protocol (ULP) called Frame Header Control Procedure (FHCP), targeted for avionics systems where the functionality of a more complex ULP is not required.

  9. High spatial resolution mapping of folds and fractures using Unmanned Aerial Vehicle (UAV) photogrammetry

    NASA Astrophysics Data System (ADS)

    Cruden, A. R.; Vollgger, S.

    2016-12-01

    The emerging capability of UAV photogrammetry combines a simple and cost-effective method to acquire digital aerial images with advanced computer vision algorithms that compute spatial datasets from a sequence of overlapping digital photographs taken from various viewpoints. Depending on flight altitude and camera setup, sub-centimeter spatial resolution orthophotographs and textured dense point clouds can be achieved. Orientation data can be collected for detailed structural analysis by digitally mapping such high-resolution spatial datasets in a fraction of the time, and with higher fidelity, compared to traditional mapping techniques. Here we describe a photogrammetric workflow applied to a structural study of folds and fractures within alternating layers of sandstone and mudstone at a coastal outcrop in SE Australia. We surveyed this location using a downward-looking digital camera mounted on a commercially available multi-rotor UAV that autonomously followed waypoints at a set altitude and speed to ensure sufficient image overlap, minimum motion blur and an appropriate resolution. The use of surveyed ground control points allowed us to produce a geo-referenced 3D point cloud and an orthophotograph from hundreds of digital images at a spatial resolution < 10 mm per pixel, and cm-scale location accuracy. Orientation data of brittle and ductile structures were semi-automatically extracted from these high-resolution datasets using open-source software. This resulted in an extensive and statistically relevant orientation dataset that was used to 1) interpret the progressive development of folds and faults in the region, and 2) generate a 3D structural model that underlines the complex internal structure of the outcrop and quantifies spatial variations in fold geometries. Overall, our work highlights how UAV photogrammetry can contribute to new insights in structural analysis.
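    Extracting orientation data from a dense point cloud typically reduces to fitting planes to digitized surface patches and reading off their attitude. A minimal sketch of that geometry (this illustrates the general technique, not the specific open-source tool used in the study; z is assumed vertical):

```python
import numpy as np

def plane_normal(points):
    """Unit normal of the best-fit plane through an Nx3 point set,
    via SVD of the centered coordinates (total least squares)."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # Right singular vector with the smallest singular value is the
    # direction of least variance, i.e. the plane normal.
    return np.linalg.svd(centered)[2][-1]

def dip_angle(normal):
    """Dip of the plane in degrees from horizontal (z taken as up)."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    return np.degrees(np.arccos(abs(n[2])))

# Points lying exactly on the plane z = x, which dips at 45 degrees
patch = [(0, 0, 0), (1, 0, 1), (0, 1, 0), (1, 1, 1), (2, 0, 2)]
```

The same normal vector also yields dip direction from its horizontal components; repeating the fit over many digitized patches produces the kind of statistically relevant orientation dataset described above.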

  10. 3D-information fusion from very high resolution satellite sensors

    NASA Astrophysics Data System (ADS)

    Krauss, T.; d'Angelo, P.; Kuschk, G.; Tian, J.; Partovi, T.

    2015-04-01

    In this paper we show the pre-processing and potential for environmental applications of very high resolution (VHR) satellite stereo imagery such as that from WorldView-2 or Pléiades, with ground sampling distances (GSD) of half a metre to a metre. To process such data, first a dense digital surface model (DSM) has to be generated. Afterwards, from this a digital terrain model (DTM) representing the ground and a so-called normalized digital elevation model (nDEM) representing off-ground objects are derived. Combining these elevation-based data with a spectral classification allows detection and extraction of objects from the satellite scenes. Besides object extraction, the DSM and DTM can also be used directly for simulation and monitoring of environmental issues. Examples are the simulation of flooding, building-volume and population estimation, simulation of noise from roads, wave propagation for cellphones, wind and light for estimating renewable energy sources, 3D change detection, earthquake preparedness and crisis relief, urban development and sprawl of informal settlements, and much more. Also outside of urban areas, volume information brings literally a new dimension to Earth observation tasks such as volume estimation of forests and illegal logging, volume of (illegal) open-pit mining activities, estimation of flooding or tsunami risks, dike planning, etc. In this paper we present the pre-processing from the original level-1 satellite data to digital surface models (DSMs), corresponding VHR ortho images and derived digital terrain models (DTMs). From these components we show how monitoring and decision-fusion-based 3D change detection can be realized using different acquisitions. The results are analyzed and assessed to derive quality parameters for the presented method. Finally, the usability of 3D information fusion from VHR satellite imagery is discussed and evaluated.
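    The relation between the three elevation products named above is simply nDEM = DSM − DTM: subtracting the bare-ground terrain model from the surface model leaves the heights of off-ground objects. A minimal sketch, assuming both rasters are aligned on the same pixel grid (the grids below are hypothetical values in metres):

```python
import numpy as np

def normalized_dem(dsm, dtm):
    """Off-ground object heights: nDEM = DSM - DTM, clipped at zero
    so small DTM overestimates cannot produce negative heights."""
    return np.clip(np.asarray(dsm, dtype=float) - np.asarray(dtm, dtype=float),
                   0.0, None)

dsm = [[105.0, 100.0],
       [ 99.5, 103.0]]   # surface: building roofs, trees, ground
dtm = [[100.0, 100.0],
       [100.0, 100.0]]   # bare-earth terrain
ndem = normalized_dem(dsm, dtm)
```

Here the 105 m surface pixel over 100 m terrain yields a 5 m object height, while ground-level pixels go to zero.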

  11. Development and programming of Geophonino: A low cost Arduino-based seismic recorder for vertical geophones

    NASA Astrophysics Data System (ADS)

    Soler-Llorens, J. L.; Galiana-Merino, J. J.; Giner-Caturla, J.; Jauregui-Eslava, P.; Rosa-Cintas, S.; Rosa-Herranz, J.

    2016-09-01

    The commercial data acquisition systems used for seismic exploration are usually expensive equipment. In this work, a low cost data acquisition system (Geophonino) has been developed for recording seismic signals from a vertical geophone. The signal goes first through an instrumentation amplifier, INA155, which is suitable for low amplitude signals like seismic noise, and an anti-aliasing filter based on the MAX7404 switched-capacitor filter. After that, the amplified and filtered signal is digitized and processed by an Arduino Due and recorded on an SD memory card. Geophonino is configured for continuous recording, where the sampling frequency, the amplitude gain and the recording time are user-defined. The complete prototype is an open source and open hardware system. It has been tested by comparing the recorded signals with the ones obtained through different commercial data recording systems and different kinds of geophones. The obtained results show good correlation between the tested measurements, presenting Geophonino as a low-cost alternative system for seismic data recording.
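    Converting the raw samples stored by such a recorder into physical units is a small calculation. A hedged sketch: the 12-bit resolution and 3.3 V reference below match the Arduino Due ADC, but the amplifier gain and geophone sensitivity are hypothetical example values, not Geophonino's actual calibration:

```python
def counts_to_velocity(adc_counts, vref=3.3, bits=12, gain=100.0,
                       sensitivity_v_per_ms=28.8):
    """Convert raw ADC counts to ground velocity in m/s.

    volts = counts * Vref / (2**bits - 1); the geophone produces
    sensitivity_v_per_ms volts per m/s before amplification by `gain`.
    """
    volts = adc_counts * vref / ((1 << bits) - 1)
    return volts / (gain * sensitivity_v_per_ms)
```

A full-scale reading (4095 counts) then corresponds to 3.3 V at the ADC, i.e. 3.3 / (100 × 28.8) m/s of ground velocity under these example parameters.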

  12. TACIT: An open-source text analysis, crawling, and interpretation tool.

    PubMed

    Dehghani, Morteza; Johnson, Kate M; Garten, Justin; Boghrati, Reihane; Hoover, Joe; Balasubramanian, Vijayan; Singh, Anurag; Shankar, Yuvarani; Pulickal, Linda; Rajkumar, Aswin; Parmar, Niki Jitendra

    2017-04-01

    As human activity and interaction increasingly take place online, the digital residues of these activities provide a valuable window into a range of psychological and social processes. A great deal of progress has been made toward utilizing these opportunities; however, the complexity of managing and analyzing the quantities of data currently available has limited both the types of analysis used and the number of researchers able to make use of these data. Although fields such as computer science have developed a range of techniques and methods for handling these difficulties, making use of those tools has often required specialized knowledge and programming experience. The Text Analysis, Crawling, and Interpretation Tool (TACIT) is designed to bridge this gap by providing an intuitive tool and interface for making use of state-of-the-art methods in text analysis and large-scale data management. Furthermore, TACIT is implemented as an open, extensible, plugin-driven architecture, which will allow other researchers to extend and expand these capabilities as new methods become available.

  13. Improving Land Cover Mapping: a Mobile Application Based on ESA Sentinel 2 Imagery

    NASA Astrophysics Data System (ADS)

    Melis, M. T.; Dessì, F.; Loddo, P.; La Mantia, C.; Da Pelo, S.; Deflorio, A. M.; Ghiglieri, G.; Hailu, B. T.; Kalegele, K.; Mwasi, B. N.

    2018-04-01

    The increasing availability of satellite data is a real asset for the enhancement of environmental knowledge and land management. Possibilities to integrate different sources of geo-data are growing, and methodologies to create thematic databases are becoming very sophisticated. Moreover, access to internet services and, in particular, to web mapping services is well developed and widespread among both expert users and citizens. Web map services, like Google Maps or Open Street Maps, give access to updated optical imagery or topographic maps, but information on land cover/use is still not provided. Therefore, there are many shortcomings in the general use of, and access to, such maps by non-specialized users. This issue is particularly felt where digital (web) maps could form the basis for land use management, as they are more economical and accessible than paper maps. These conditions are well known in many African countries where, while internet access is becoming open to all, the local map agencies and their products are not widespread.

  14. Final Report for the Development of the NASA Technical Report Server (NTRS)

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.

    2005-01-01

    The author performed a variety of research, development and consulting tasks for NASA Langley Research Center in the area of digital libraries (DLs) and supporting technologies, such as the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). In particular, the development focused on the NASA Technical Report Server (NTRS) and its transition from a distributed searching model to one that uses the OAI-PMH. The Open Archives Initiative (OAI) is an international consortium focused on furthering the interoperability of DLs through the use of "metadata harvesting". The OAI-PMH version of NTRS went into public production on April 28, 2003. Since that time, it has been extremely well received. In addition to providing the NTRS user community with a higher level of service than the previous, distributed searching version of NTRS, it has provided more insight into how the user community uses NTRS in a variety of deployment scenarios. This report details the design, implementation and maintenance of the NTRS. Source code is included in the appendices.

  15. Open Source Radiation Hardened by Design Technology

    NASA Technical Reports Server (NTRS)

    Shuler, Robert

    2016-01-01

    The proposed technology allows use of the latest microcircuit technology with lowest power and fastest speed, with minimal delay and engineering costs, through new Radiation Hardened by Design (RHBD) techniques that do not require extensive process characterization, technique evaluation and re-design at each Moore's Law generation. The separation of critical node groups is explicitly parameterized so it can be increased as microcircuit technologies shrink. The technology will be open access to radiation tolerant circuit vendors. INNOVATION: This technology would enhance computation intensive applications such as autonomy, robotics, advanced sensor and tracking processes, as well as low power applications such as wireless sensor networks. OUTCOME / RESULTS: 1) Simulation analysis indicates feasibility. 2) Compact voting latch 65 nanometer test chip designed and submitted for fabrication (7/2016). INFUSION FOR SPACE / EARTH: This technology may be used in any digital integrated circuit in which a high level of resistance to Single Event Upsets is desired, and has the greatest benefit outside low earth orbit where cosmic rays are numerous.

  16. From GCode to STL: Reconstruct Models from 3D Printing as a Service

    NASA Astrophysics Data System (ADS)

    Baumann, Felix W.; Schuermann, Martin; Odefey, Ulrich; Pfeil, Markus

    2017-12-01

    The authors present a method to reverse engineer 3D printer specific machine instructions (GCode) into a point cloud representation and then an STL (Stereolithography) file format. GCode is a machine code that is used for 3D printing among other applications, such as CNC routers. Such code files contain instructions for the 3D printer to move and control its actuator; in the case of Fused Deposition Modeling (FDM), the printhead that extrudes semi-molten plastics. The reverse engineering method presented here is based on digital simulation of the extrusion process of FDM type 3D printing. The reconstructed models and point clouds do not account for hollow structures, such as holes or cavities. The implementation is performed in Python and relies on open source software and libraries, such as Matplotlib and OpenCV. The reconstruction is performed on the model’s extrusion boundary and considers mechanical imprecision. The complete reconstruction mechanism is available as a RESTful (Representational State Transfer) Web service.
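    The first step of such a reconstruction, recovering deposition positions from GCode, can be sketched in a few lines. A simplified illustration (absolute XYZ/E coordinates are assumed, and the sample GCode is hypothetical; the paper's actual pipeline also simulates the width of the deposited filament):

```python
def gcode_to_points(lines):
    """Collect an (x, y, z) point for every extruding G1 move,
    i.e. every linear move whose E (extruder) axis advances."""
    x = y = z = 0.0
    e_prev = 0.0
    points = []
    for line in lines:
        fields = line.split(";")[0].split()   # strip comments
        if not fields or fields[0] not in ("G0", "G1"):
            continue
        vals = {f[0]: float(f[1:]) for f in fields[1:] if f[0] in "XYZE"}
        x = vals.get("X", x); y = vals.get("Y", y); z = vals.get("Z", z)
        e = vals.get("E", e_prev)
        if fields[0] == "G1" and e > e_prev:  # material was extruded
            points.append((x, y, z))
        e_prev = e
    return points

sample = ["G1 Z0.2", "G1 X10 Y0 E1.0", "G0 X20", "G1 X20 Y10 E2.0 ; wall"]
pts = gcode_to_points(sample)
```

Travel moves (G0, or G1 without E advance) are skipped, so only the extrusion boundary contributes points, matching the approach described above.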

  17. OMERO and Bio-Formats 5: flexible access to large bioimaging datasets at scale

    NASA Astrophysics Data System (ADS)

    Moore, Josh; Linkert, Melissa; Blackburn, Colin; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gillen, Kenneth; Leigh, Roger; Li, Simon; Lindner, Dominik; Moore, William J.; Patterson, Andrew J.; Pindelski, Blazej; Ramalingam, Balaji; Rozbicki, Emil; Tarkowska, Aleksandra; Walczysko, Petr; Allan, Chris; Burel, Jean-Marie; Swedlow, Jason

    2015-03-01

    The Open Microscopy Environment (OME) has built and released Bio-Formats, a Java-based proprietary file format conversion tool, and OMERO, an enterprise data management platform, under open source licenses. In this report, we describe new versions of Bio-Formats and OMERO that are specifically designed to support large, multi-gigabyte or terabyte scale datasets that are routinely collected across most domains of biological and biomedical research. Bio-Formats reads image data directly from native proprietary formats, bypassing the need for conversion into a standard format. It implements the concept of a file set, a container that defines the contents of multi-dimensional data comprised of many files. OMERO uses Bio-Formats to read files natively, and provides a flexible access mechanism that supports several different storage and access strategies. These new capabilities of OMERO and Bio-Formats make them especially useful in imaging applications like digital pathology, high content screening and light sheet microscopy that routinely create large datasets that must be managed and analyzed.

  18. An Examination of Teachers' Ratings of Lesson Plans Using Digital Primary Sources

    ERIC Educational Resources Information Center

    Milman, Natalie B.; Bondie, Rhonda

    2012-01-01

    This mixed method study examined teachers' ratings of 37 field-tested social studies lesson plans that incorporated digital primary sources through a grant from the Library of Congress Teaching with Primary Sources program for K-12 teachers. Each lesson, available in an online teaching materials collection, was field-tested and reviewed by at…

  19. AWOB: A Collaborative Workbench for Astronomers

    NASA Astrophysics Data System (ADS)

    Kim, J. W.; Lemson, G.; Bulatovic, N.; Makarenko, V.; Vogler, A.; Voges, W.; Yao, Y.; Kiefl, R.; Koychev, S.

    2015-09-01

    We present the Astronomers Workbench (AWOB), a web-based collaboration and publication platform for a scientific project of any size, developed in collaboration between the Max Planck institutes of Astrophysics (MPA) and Extraterrestrial Physics (MPE) and the Max Planck Digital Library (MPDL). AWOB facilitates collaboration between geographically distributed astronomers working on a common project throughout its whole scientific life cycle. AWOB does so by making it very easy for scientists to set up and manage a collaborative workspace for individual projects, where data can be uploaded and shared. It supports inviting project collaborators, provides wikis, automated mailing lists, calendars and event notification, and has a built-in chat facility. It allows the definition and tracking of tasks within projects and supports easy creation of e-publications for the dissemination of data, images and other resources that cannot be added to submitted papers. AWOB extends the project concept to larger-scale consortia, within which it is possible to manage working groups and sub-projects. The existing AWOB instance has so far been limited to Max Planck members and their collaborators, but will be opened to the whole astronomical community. AWOB is an open-source project and its source code is available upon request. We intend to extend AWOB's functionality to other disciplines as well, and would greatly appreciate contributions from the community.

  20. Informatics in radiology: web-based preliminary reporting system for radiology residents with PACS integration.

    PubMed

    O'Connell, Timothy; Chang, Debra

    2012-01-01

    While on call, radiology residents review imaging studies and issue preliminary reports to referring clinicians. In the absence of an integrated reporting system at the training sites of the authors' institution, residents were typing and faxing preliminary reports. To partially automate the on-call resident workflow, a Web-based system for resident reporting was developed by using the free open-source xAMP Web application framework and an open-source DICOM (Digital Imaging and Communications in Medicine) software toolkit, with the goals of reducing errors and lowering barriers to education. This reporting system integrates with the picture archiving and communication system to display a worklist of studies. Patient data are automatically entered in the preliminary report to prevent identification errors and simplify the report creation process. When the final report for a resident's on-call study is available, the reporting system queries the report broker for the final report, and then displays the preliminary report side by side with the final report, thus simplifying the review process and encouraging review of all of the resident's reports. The xAMP Web application framework should be considered for development of radiology department informatics projects owing to its zero cost, minimal hardware requirements, ease of programming, and large support community.

  1. Back-island and open-ocean shorelines, and sand areas of Assateague Island, Maryland and Virginia, April 12, 1989, to September 5, 2013

    USGS Publications Warehouse

    Guy, Kristy K.

    2015-01-01

    This Data Series Report includes several open-ocean shorelines, back-island shorelines, back-island shoreline points, sand area polygons, and sand lines for Assateague Island that were extracted from natural-color orthoimagery (aerial photography) dated from April 12, 1989, to September 5, 2013. The images used were 0.3–2-meter (m)-resolution U.S. Geological Survey Digital Orthophoto Quarter Quads (DOQQ), U.S. Department of Agriculture National Agriculture Imagery Program (NAIP) images, and Virginia Geographic Information Network Virginia Base Map Program (VBMP) images courtesy of the Commonwealth of Virginia. The back-island shorelines were hand-digitized at the intersect of the apparent back-island shoreline and transects spaced at 20-m intervals. The open-ocean shorelines were hand-digitized at the approximate still water level, such as tide level, which was fit through the average position of waves and swash apparent on the beach. Hand-digitizing was done at a scale of approximately 1:2,000. The sand polygons were derived by using an image-processing unsupervised classification technique that separates images into classes. The classes were then visually categorized as either sand or not sand. Also included in this report are 20-m-spaced transect lines and the transect base lines.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choong, W. -S.; Abu-Nimeh, F.; Moses, W. W.

    Here, we present a 16-channel front-end readout board for the OpenPET electronics system. A major task in developing a nuclear medical imaging system, such as a positron emission tomograph (PET) or a single-photon emission computed tomograph (SPECT), is the electronics system. While there is a wide variety of detector and camera design concepts, the relatively simple nature of the acquired data allows for a common set of electronics requirements that can be met by a flexible, scalable, and high-performance OpenPET electronics system. The analog signals from the different types of detectors used in medical imaging share similar characteristics, which allows for common analog signal processing. The OpenPET electronics processes the analog signals with Detector Boards; here we report on the development of a 16-channel Detector Board. Each signal is digitized by a continuously sampled analog-to-digital converter (ADC) and processed by a field-programmable gate array (FPGA) to extract pulse-height information. A leading-edge discriminator creates a timing edge that is "time stamped" by a time-to-digital converter (TDC) implemented inside the FPGA. This digital information from each channel is sent to an FPGA that services 16 analog channels, and information from multiple channels is then processed by that FPGA to perform logic for crystal lookup, depth-of-interaction (DOI) calculation, calibration, etc.
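The leading-edge time stamp described above can be modeled in a few lines. This is an illustrative software sketch, not the OpenPET FPGA/TDC firmware; the function name, threshold, and sample values are hypothetical, and the sub-sample interpolation merely mimics the finer-than-clock resolution a hardware TDC provides:

```python
def leading_edge_time(samples, threshold, dt):
    """Return the interpolated time at which a pulse first crosses threshold.

    samples   -- ADC samples, one per clock tick
    threshold -- discriminator level
    dt        -- sampling period in seconds
    """
    for i in range(1, len(samples)):
        if samples[i - 1] < threshold <= samples[i]:
            # Linear interpolation between the two straddling samples,
            # standing in for the sub-clock resolution of an FPGA TDC.
            frac = (threshold - samples[i - 1]) / (samples[i] - samples[i - 1])
            return (i - 1 + frac) * dt
    return None  # pulse never crossed threshold

# A toy rising pulse sampled every 10 ns; the edge crosses 50 between samples 3 and 4.
pulse = [0, 0, 10, 40, 90, 100, 100]
t = leading_edge_time(pulse, threshold=50, dt=10e-9)
```

A real leading-edge discriminator is also subject to time walk with pulse amplitude, which this sketch ignores.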

  3. The Emergence of Open-Source Software in China

    ERIC Educational Resources Information Center

    Pan, Guohua; Bonk, Curtis J.

    2007-01-01

    The open-source software movement is gaining increasing momentum in China. Of the limited numbers of open-source software in China, "Red Flag Linux" stands out most strikingly, commanding 30 percent share of Chinese software market. Unlike the spontaneity of open-source movement in North America, open-source software development in…

  4. A Study of Clinically Related Open Source Software Projects

    PubMed Central

    Hogarth, Michael A.; Turner, Stuart

    2005-01-01

    Open source software development has recently gained significant interest due to several successful mainstream open source projects. This methodology has been proposed as being similarly viable and beneficial in the clinical application domain as well. However, the clinical software development venue differs significantly from the mainstream software venue. Existing clinical open source projects have not been well characterized nor formally studied so the ‘fit’ of open source in this domain is largely unknown. In order to better understand the open source movement in the clinical application domain, we undertook a study of existing open source clinical projects. In this study we sought to characterize and classify existing clinical open source projects and to determine metrics for their viability. This study revealed several findings which we believe could guide the healthcare community in its quest for successful open source clinical software projects. PMID:16779056

  5. The Open Microscopy Environment: open image informatics for the biological sciences

    NASA Astrophysics Data System (ADS)

    Blackburn, Colin; Allan, Chris; Besson, Sébastien; Burel, Jean-Marie; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gault, David; Gillen, Kenneth; Leigh, Roger; Leo, Simone; Li, Simon; Lindner, Dominik; Linkert, Melissa; Moore, Josh; Moore, William J.; Ramalingam, Balaji; Rozbicki, Emil; Rustici, Gabriella; Tarkowska, Aleksandra; Walczysko, Petr; Williams, Eleanor; Swedlow, Jason R.

    2016-07-01

    Despite significant advances in biological imaging and analysis, major informatics challenges remain unsolved: file formats are proprietary, storage and analysis facilities are lacking, as are standards for sharing image data and results. While the open FITS file format is ubiquitous in astronomy, astronomical imaging shares many challenges with biological imaging, including the need to share large image sets using secure, cross-platform APIs, and the need for scalable applications for processing and visualization. The Open Microscopy Environment (OME) is an open-source software framework developed to address these challenges. OME tools include: an open data model for multidimensional imaging (OME Data Model); an open file format (OME-TIFF) and library (Bio-Formats) enabling free access to images (5D+) written in more than 145 formats from many imaging domains, including FITS; and a data management server (OMERO). The Java-based OMERO client-server platform comprises an image metadata store, an image repository, visualization and analysis by remote access, allowing sharing and publishing of image data. OMERO provides a means to manage the data through a multi-platform API. OMERO's model-based architecture has enabled its extension into a range of imaging domains, including light and electron microscopy, high content screening, digital pathology and recently into applications using non-image data from clinical and genomic studies. This is made possible using the Bio-Formats library. The current release includes a single mechanism for accessing image data of all types, regardless of original file format, via Java, C/C++ and Python and a variety of applications and environments (e.g. ImageJ, Matlab and R).

  6. A New Database of Digitized Regional Seismic Waveforms from Nuclear Explosions in Eurasia

    NASA Astrophysics Data System (ADS)

    Sokolova, I. N.; Richards, P. G.; Kim, W. Y.; Mikhailova, N. N.

    2014-12-01

    Seismology is an observational science. Hence, the effort to understand details of seismic signals from underground nuclear explosions requires analysis of waveforms recorded from past nuclear explosions. Of principal interest are regional signals from explosions too small to be reliably identified via teleseismic recording. But the great majority of stations operated today, even those in networks for nuclear explosion monitoring, have never recorded explosion signals at regional distances, because most stations were installed long after the period when most underground nuclear explosions were conducted; and the few nuclear explosions since the early 1990s were mostly recorded only at teleseismic distances. We have therefore gathered thousands of nuclear explosion regional seismograms from more than 200 analog stations operated in the former Soviet Union. Most of them lie in a region stretching approximately 6000 km East-West and 2000 km North-South and including much of Central Asia. We have digitized them and created a modern digital database, including significant metadata. Much of this work has been done in Kazakhstan. Most of the explosions were underground, but several were conducted in the atmosphere. This presentation will characterize the content and overall quality of the new database for signals from nuclear explosions in Eurasia, which were conducted across substantial ranges of yield and shot-point depth, and under a great variety of different geological conditions. This work complements a 20-year collaborative effort which made the original digital recordings of the Borovoye Geophysical Observatory, Kazakhstan, openly available in a modern format (see http://www.ldeo.columbia.edu/res/pi/Monitoring/Data/). For purposes of characterizing explosive sources, it would be of assistance to have seismogram archives from explosions conducted in all regions including the Pacific, North Africa, and the United States (including the Aleutians).
Openly available seismogram archives for Eurasian explosions are in several respects now better than those for explosions conducted by the United States, France, and the UK, especially for the era from 1960 to about 1985. The opportunity to build and improve such archives will not last indefinitely.

  7. ClearedLeavesDB: an online database of cleared plant leaf images

    PubMed Central

    2014-01-01

    Background Leaf vein networks are critical to both the structure and function of leaves. A growing body of recent work has linked leaf vein network structure to the physiology, ecology and evolution of land plants. In the process, multiple institutions and individual researchers have assembled collections of cleared leaf specimens in which vascular bundles (veins) are rendered visible. In an effort to facilitate analysis and digitally preserve these specimens, high-resolution images are usually created, either of entire leaves or of magnified leaf subsections. In a few cases, collections of digital images of cleared leaves are available for use online. However, these collections do not share a common platform, nor is there a means to digitally archive cleared leaf images held by individual researchers (in addition to those held by institutions). Hence, there is a growing need for a digital archive that enables online viewing, sharing and disseminating of cleared leaf image collections held by both institutions and individual researchers. Description The Cleared Leaf Image Database (ClearedLeavesDB) is an online, web-based resource for a community of researchers to contribute, access and share cleared leaf images. ClearedLeavesDB leverages resources of large-scale, curated collections while enabling the aggregation of small-scale collections within the same online platform. ClearedLeavesDB is built on Drupal, an open source content management platform. It allows plant biologists to store leaf images online with corresponding metadata, share image collections with a user community and discuss images and collections via a common forum. We provide tools to upload processed images and results to the database via a web services client application that can be downloaded from the database. Conclusions We developed ClearedLeavesDB, a database focusing on cleared leaf images that combines interactions between users and data via an intuitive web interface. 
The web interface allows storage of large collections and integrates with leaf image analysis applications via an open application programming interface (API). The open API allows uploading of processed images and other trait data to the database, further enabling distribution and documentation of analyzed data within the community. The initial database is seeded with nearly 19,000 cleared leaf images representing over 40 GB of image data. Extensible storage and growth of the database is ensured by using the data storage resources of the iPlant Discovery Environment. ClearedLeavesDB can be accessed at http://clearedleavesdb.org. PMID:24678985

  8. ClearedLeavesDB: an online database of cleared plant leaf images.

    PubMed

    Das, Abhiram; Bucksch, Alexander; Price, Charles A; Weitz, Joshua S

    2014-03-28

    Leaf vein networks are critical to both the structure and function of leaves. A growing body of recent work has linked leaf vein network structure to the physiology, ecology and evolution of land plants. In the process, multiple institutions and individual researchers have assembled collections of cleared leaf specimens in which vascular bundles (veins) are rendered visible. In an effort to facilitate analysis and digitally preserve these specimens, high-resolution images are usually created, either of entire leaves or of magnified leaf subsections. In a few cases, collections of digital images of cleared leaves are available for use online. However, these collections do not share a common platform, nor is there a means to digitally archive cleared leaf images held by individual researchers (in addition to those held by institutions). Hence, there is a growing need for a digital archive that enables online viewing, sharing and disseminating of cleared leaf image collections held by both institutions and individual researchers. The Cleared Leaf Image Database (ClearedLeavesDB) is an online, web-based resource for a community of researchers to contribute, access and share cleared leaf images. ClearedLeavesDB leverages resources of large-scale, curated collections while enabling the aggregation of small-scale collections within the same online platform. ClearedLeavesDB is built on Drupal, an open source content management platform. It allows plant biologists to store leaf images online with corresponding metadata, share image collections with a user community and discuss images and collections via a common forum. We provide tools to upload processed images and results to the database via a web services client application that can be downloaded from the database. We developed ClearedLeavesDB, a database focusing on cleared leaf images that combines interactions between users and data via an intuitive web interface. 
The web interface allows storage of large collections and integrates with leaf image analysis applications via an open application programming interface (API). The open API allows uploading of processed images and other trait data to the database, further enabling distribution and documentation of analyzed data within the community. The initial database is seeded with nearly 19,000 cleared leaf images representing over 40 GB of image data. Extensible storage and growth of the database is ensured by using the data storage resources of the iPlant Discovery Environment. ClearedLeavesDB can be accessed at http://clearedleavesdb.org.

  9. New Open-Source Version of FLORIS Released | News | NREL

    Science.gov Websites

    January 26, 2018. National Renewable Energy Laboratory (NREL) researchers recently released an updated open-source version of FLORIS that has been simplified and documented. Because of the living, open-source nature of the newly updated utility, NREL…

  10. Photocopy of photograph (digital image located in LBNL Photo Lab ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-031). March 2005. MOUSE AT EAST TANGENT, WITH COVER OPEN, LOOKING TOWARD CENTER IGLOO, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA

  11. Towards the Crowdsourcing of Massive Smartphone Assisted-GPS Sensor Ground Observations for the Production of Digital Terrain Models

    PubMed Central

    Massad, Ido

    2018-01-01

    Digital Terrain Models (DTMs) used for the representation of the bare earth are produced from elevation data obtained using high-end mapping platforms and technologies. These require the handling of complex post-processing performed by authoritative and commercial mapping agencies. In this research, we aim to exploit user-generated data to produce DTMs by handling massive volumes of position and elevation data collected using ubiquitous smartphone devices equipped with Assisted-GPS sensors. As massive position and elevation data are collected passively and straightforwardly by pedestrians, cyclists, and drivers, they can be transformed into valuable topographic information. Specifically, in dense and concealed built and vegetated areas, where other technologies fail, handheld devices have an advantage. Still, Assisted-GPS measurements are not as accurate as high-end technologies, requiring pre- and post-processing of observations. We propose the development and implementation of a 2D Kalman filter and smoothing on the acquired crowdsourced observations for topographic representation production. When compared to an authoritative DTM, the results obtained are very promising, producing good elevation values. Today, open-source mapping infrastructures, such as OpenStreetMap, rely primarily on the global authoritative SRTM (Shuttle Radar Topography Mission), which shows similar accuracy but inferior resolution when compared to the results obtained in this research. Accordingly, our crowdsourced methodology has the capacity for reliable topographic representation production that is based on ubiquitous volunteered user-generated data. PMID:29562627
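The abstract proposes a 2D Kalman filter with smoothing but does not give its equations. As a minimal sketch of the filtering idea, here is a scalar (1D) Kalman filter for a locally constant elevation; all variances, readings, and the function name are hypothetical, and the paper's 2D spatial formulation and smoothing pass are not reproduced:

```python
def kalman_1d(observations, r, q, x0, p0):
    """Scalar Kalman filter assuming a locally constant elevation.

    r  -- measurement noise variance (e.g. A-GPS elevation error)
    q  -- process noise variance (terrain change between samples)
    x0, p0 -- initial state estimate and its variance
    """
    x, p = x0, p0
    estimates = []
    for z in observations:
        p = p + q                 # predict: uncertainty grows between samples
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update toward the new observation
        p = (1 - k) * p           # reduced posterior uncertainty
        estimates.append(x)
    return estimates

# Noisy elevation readings (metres) scattered around a true height of ~100 m:
obs = [103.0, 97.0, 101.0, 99.5, 100.5, 98.0, 102.0]
est = kalman_1d(obs, r=25.0, q=0.01, x0=obs[0], p0=25.0)
```

With a small process noise `q`, the filter behaves like a running average, pulling the estimate toward the true elevation as observations accumulate.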

  12. Knowledge Infrastructures and the Inscrutability of Openness in Education

    ERIC Educational Resources Information Center

    Edwards, Richard

    2015-01-01

    Openness has a long genealogy in education. Whether through the use of post, radio, television and digital technologies, extending learning opportunities to more and a wider range of people has been a significant aspect of educational history. Transcending barriers to learning has been promoted as the means of opening educational opportunities in…

  13. Studying Open versus Traditional Textbook Effects on Students' Course Performance: Confounds Abound

    ERIC Educational Resources Information Center

    Griggs, Richard A.; Jackson, Sherri L.

    2017-01-01

    To combat the high cost of textbooks, open (digitally free) textbooks have recently entered the textbook market. Griggs and Jackson (2017) reviewed the open introductory psychology textbooks presently available to provide interested teachers with essential information about these texts and how they compare with traditional (commercial)…

  14. Posthumanism and the MOOC: Opening the Subject of Digital Education

    ERIC Educational Resources Information Center

    Knox, Jeremy

    2016-01-01

    As the most prominent initiative in the open education movement, the Massive Open Online Course (MOOC) is often claimed to disrupt established educational models through the use of innovative technologies that overcome geographic and economic barriers to higher education. However, this paper suggests that the MOOC project, as a typical example of…

  15. Open Textbooks and Increased Student Access and Outcomes

    ERIC Educational Resources Information Center

    Feldstein, Andrew; Martin, Mirta; Hudson, Amy; Warren, Kiara; Hilton, John, III; Wiley, David

    2012-01-01

    This study reports findings from a year-long pilot study during which 991 students in 9 core courses in the Virginia State University School of Business replaced traditional textbooks with openly licensed books and other digital content. The university made a deliberate decision to use open textbooks that were copyrighted under the Creative…

  16. Creating an EPICS Based Test Stand Development System for a BPM Digitizer of the Linac Coherent Light Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2011-06-22

    The Linac Coherent Light Source (LCLS) is required to deliver a high quality electron beam for producing coherent X-rays. As a result, high resolution beam position monitoring is required. The Beam Position Monitor (BPM) digitizer acquires analog signals from the beam line and digitizes them to obtain beam position data. Although Matlab is currently being used to test the BPM digitizer's functions and capability, the Controls Department at SLAC prefers to use the Experimental Physics and Industrial Control System (EPICS). This paper discusses the transition to a system that provides similar, as well as enhanced, functionality for testing the digitizer compared with that offered by Matlab. Altogether, the improved test stand development system can perform mathematical and statistical calculations on the waveform signals acquired from the digitizer and compute the fast Fourier transform (FFT) of the signals. Finally, logging of meaningful data into files has been added.
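The FFT and statistical calculations the test stand performs can be illustrated on a toy waveform. This sketch uses a naive discrete Fourier transform rather than EPICS or Matlab tooling, and the 16-sample tone is hypothetical:

```python
import cmath
import math

def dft(samples):
    """Naive discrete Fourier transform (O(N^2)); fine for short test records."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * math.pi * f * t / n)
                for t in range(n))
            for f in range(n)]

def rms(samples):
    """Root-mean-square amplitude of a waveform."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# A pure tone at frequency bin 3 of a 16-sample record:
wave = [math.sin(2 * math.pi * 3 * t / 16) for t in range(16)]
spectrum = [abs(x) for x in dft(wave)]
peak_bin = max(range(len(spectrum)), key=spectrum.__getitem__)
```

A production system would use an FFT library rather than the O(N^2) transform above; the point is only what "compute the FFT and statistics of the acquired waveform" means operationally.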

  17. Tectonic Storytelling with Open Source and Digital Object Identifiers - a case study about Plate Tectonics and the Geopark Bergstraße-Odenwald

    NASA Astrophysics Data System (ADS)

    Löwe, Peter; Barmuta, Jan; Klump, Jens; Neumann, Janna; Plank, Margret

    2014-05-01

    The communication of advances in research to the common public for both education and decision making is an important aspect of scientific work. An even more crucial task is to gain recognition within the scientific community, which is judged by impact factor and citation counts. Recently, the latter concepts have been extended from textual publications to include data and software publications. This paper presents a case study for science communication and data citation. For this, tectonic models, Free and Open Source Software (FOSS), best practices for data citation and a multimedia online-portal for scientific content are combined. This approach creates mutual benefits for the stakeholders: Target audiences receive information on the latest research results, while the use of Digital Object Identifiers (DOI) increases the recognition and citation of underlying scientific data. This creates favourable conditions for every researcher as DOI names ensure citeability and long term availability of scientific research. In the developed application, the FOSS tool for tectonic modelling GPlates is used to visualise and manipulate plate-tectonic reconstructions and associated data through geological time. These capabilities are augmented by the Science on a Halfsphere project (SoaH) with a robust and intuitive visualisation hardware environment. The tectonic models used for science communication are provided by the AGH University of Science and Technology. They focus on the Silurian to Early Carboniferous evolution of Central Europe (Bohemian Massif) and were interpreted for the area of the Geopark Bergstraße Odenwald based on the GPlates/SoaH hardware- and software stack. As scientific story-telling is volatile by nature, recordings are a natural means of preservation for further use, reference and analysis. 
For this, the upcoming portal for audiovisual media of the German National Library of Science and Technology TIB is expected to become a critical service infrastructure. It allows complex search queries, including metadata such as DOI and media fragment identifiers (MFI), thereby linking data citation and science communication.

  18. Moocs - a Force to BE Reckoned with or a Temporary Phenomenon

    NASA Astrophysics Data System (ADS)

    Koenig, G.

    2015-05-01

    The digital revolution has dramatically changed our everyday life. The Internet has evolved into a key technology and an indispensable information source. The expansion of Internet usage beyond mere information storage to a learning and communication tool sets new standards for the development of educational concepts. Digital textbooks, multimedia tutorials, e-learning offerings using learning management systems and massive open online courses (MOOCs) demonstrate the development phases of new strategies for knowledge transfer. Initially starting in the USA, MOOC platforms such as Udacity, Coursera or edX gained enormous media attention because of their huge numbers of participants. Initially, this new teaching method was welcomed euphorically; the didactic preparation of courses is, however, viewed with scepticism, particularly in Europe. This paper will review the status of MOOCs, with a particular emphasis on Photogrammetry, Remote Sensing, and Geomatics. A selection of these 'Geo-MOOCs' will be presented. The consideration of these free online learning resources will include a commentary on quality and perceived effectiveness. Finally, it will be outlined whether MOOCs are reasonable and promising in our fields.

  19. Bathymetric map of the south part of Great Salt Lake, Utah, 2005

    USGS Publications Warehouse

    Baskin, Robert L.; Allen, David V.

    2005-01-01

    The U.S. Geological Survey, in cooperation with the Utah Department of Natural Resources, Division of Wildlife Resources, collected bathymetric data for the south part of Great Salt Lake during 2002–04 using a single beam, high-definition fathometer and real-time differential global positioning system. Approximately 7.6 million depth readings were collected along more than 1,050 miles of survey transects for construction of this map. Sound velocities were obtained in conjunction with the bathymetric data to provide time-of-travel corrections to the depth calculations. Data were processed with commercial hydrographic software and exported into geographic information system (GIS) software for mapping. Because of the shallow nature of the lake and the limitations of the instrumentation, contours above an altitude of 4,193 feet were digitized from existing USGS 1:24,000 source-scale digital line graph data. For additional information on methods used to derive the bathymetric contours for this map, please see Baskin, Robert L., 2005, Calculation of area and volume for the south part of Great Salt Lake, Utah, U.S. Geological Survey Open-File Report OFR–2005–1327.
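The time-of-travel correction mentioned above converts a measured echo delay and the local sound velocity into depth. A minimal sketch of the two-way travel-time relation follows; the survey's actual processing used commercial hydrographic software, and the velocity and delay values below are hypothetical (sound speed in Great Salt Lake brine differs from fresh water):

```python
def depth_from_echo(travel_time_s, sound_velocity_ms):
    """Two-way travel time to depth: the pulse travels down and back,
    so the one-way distance is half the total path."""
    return sound_velocity_ms * travel_time_s / 2.0

# Example: an 8.0 ms echo with an assumed sound velocity of 1500 m/s
d = depth_from_echo(0.008, 1500.0)   # -> 6.0 m
```

This is why measuring sound velocity alongside the soundings matters: a few percent error in velocity maps directly into a few percent error in every depth.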

  20. A metadata-aware application for remote scoring and exchange of tissue microarray images

    PubMed Central

    2013-01-01

    Background The use of tissue microarrays (TMA) and advances in digital scanning microscopy have enabled the collection of thousands of tissue images. There is a need for software tools to annotate, query and share this data amongst researchers in different physical locations. Results We have developed an open source web-based application for remote scoring of TMA images, which exploits the value of Microsoft Silverlight Deep Zoom to provide an intuitive interface for zooming and panning around digital images. We use and extend existing XML-based standards to ensure that the data collected can be archived and that our system is interoperable with other standards-compliant systems. Conclusion The application has been used for multi-centre scoring of TMA slides composed of tissues from several Phase III breast cancer trials and ten different studies participating in the International Breast Cancer Association Consortium (BCAC). The system has enabled researchers to simultaneously score large collections of TMA and export the standardised data to integrate with pathological and clinical outcome data, thereby facilitating biomarker discovery. PMID:23635078

  1. An Open Source Low-Cost Automatic System for Image-Based 3d Digitization

    NASA Astrophysics Data System (ADS)

    Menna, F.; Nocerino, E.; Morabito, D.; Farella, E. M.; Perini, M.; Remondino, F.

    2017-11-01

    3D digitization of heritage artefacts, reverse engineering of industrial components or rapid prototyping-driven design are key topics today. Indeed, millions of archaeological finds all over the world need to be surveyed in 3D, either to allow convenient investigations by researchers, or because they are inaccessible to visitors and scientists or, unfortunately, because they are seriously endangered by wars and terrorist attacks. On the other hand, in the case of industrial and design components there is often the need for deformation analyses or physical replicas starting from reality-based 3D digitisations. The paper is aligned with these needs and presents the realization of the ORION (arduinO Raspberry pI rOtating table for image based 3D reconstructioN) prototype system, with its hardware and software components, providing critical insights about its modular design. ORION is an image-based 3D reconstruction system based on automated photogrammetric acquisitions and processing. The system is being developed under a collaborative educational project between FBK Trento, the University of Trento and internship programs with high schools in the Trentino province (Italy).

  2. Evaluation of three methods for retrospective correction of vignetting on medical microscopy images utilizing two open source software tools.

    PubMed

    Babaloukas, Georgios; Tentolouris, Nicholas; Liatis, Stavros; Sklavounou, Alexandra; Perrea, Despoina

    2011-12-01

    Correction of vignetting on images obtained by a digital camera mounted on a microscope is essential before applying image analysis. The aim of this study is to evaluate three methods for retrospective correction of vignetting on medical microscopy images and compare them with a prospective correction method. One digital image from each of four different tissues was used, and a vignetting effect was applied to each of these images. The resulting vignetted image was replicated four times, and in each replica a different method for vignetting correction was applied with the Fiji and GIMP software tools. The highest peak signal-to-noise ratio from the comparison of each method to the original image was obtained from the prospective method in all tissues. The morphological filtering method provided the highest peak signal-to-noise ratio value amongst the retrospective methods. The prospective method is suggested as the method of choice for correction of vignetting; if it is not applicable, then morphological filtering may be suggested as the retrospective alternative. © 2011 The Authors Journal of Microscopy © 2011 Royal Microscopical Society.
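Peak signal-to-noise ratio, the comparison metric used in the study, can be sketched as follows. The toy pixel values are hypothetical, and the study's actual computation was done on full microscopy images rather than flat lists:

```python
import math

def psnr(original, corrected, max_value=255.0):
    """Peak signal-to-noise ratio between two equal-sized images
    (given here as flat lists of grey levels). Higher is better."""
    mse = sum((a - b) ** 2 for a, b in zip(original, corrected)) / len(original)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * math.log10(max_value ** 2 / mse)

# Toy 2x2 "images": the correction leaves a residual error of 5 grey levels per pixel.
ref = [100.0, 120.0, 140.0, 160.0]
out = [105.0, 115.0, 145.0, 155.0]
value = psnr(ref, out)
```

Comparing each corrected replica against the unvignetted original with this metric is what lets the study rank the correction methods numerically.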

  3. Out of the archaeologist's desk drawer: communicating archaeological data online

    NASA Astrophysics Data System (ADS)

    Abate, D.; David, M.

    2015-08-01

    During archaeological fieldwork a huge amount of data is collected, processed and elaborated for further studies and scientific publications. However, access to and communication of linked data, and the associated tools for interrogation, analysis and sharing, are often limited during the first stage of the archaeological research, mainly due to issues related to IPR. Information is often released months, if not years, after the fieldwork. Nowadays a great deal of archaeological data is `born digital' in the field or lab. This means databases, pictures and 3D models of finds and excavation contexts could be available for public communication and sharing. Researchers usually restrict access to their data to a small group of people. It follows that data sharing is not so widespread among archaeologists, and dissemination of research is still mostly based on traditional pre-digital means like scientific papers, journal articles and books. This project has implemented a web approach for sharing and communication purposes, exploiting mainly open source technologies which allow a high level of interactivity. The case study presented is the newly excavated Mithraeum in the Ostia Antica archaeological site, in the framework of the Ostia Marina Project.

  4. Google Sky: A Digital View of the Night Sky

    NASA Astrophysics Data System (ADS)

    Connolly, A.; Scranton, R.; Ornduff, T.

    2008-11-01

    From its inception, astronomy has been a visual science, from careful observations of the sky using the naked eye, to the use of telescopes and photographs to map the distribution of stars and galaxies, to the current era of digital cameras that can image the sky over many decades of the electromagnetic spectrum. Sky in Google Earth (http://earth.google.com) and Google Sky (http://www.google.com/sky) continue this tradition, providing an intuitive visual interface to some of the largest astronomical imaging surveys of the sky. By streaming multi-color imagery, catalogs and time-domain data, and by annotating interesting astronomical sources and events with placemarks, podcasts and videos, Sky provides a panchromatic view of the universe accessible to anyone with a computer. Beyond a simple exploration of the sky, Google Sky enables users to create and share content with others around the world. With an open interface available on Linux, Mac OS X and Windows, and translations of the content into over 20 different languages, we present Sky as the embodiment of a virtual telescope for discovering and sharing the excitement of astronomy and science as a whole.

  5. 3D Printing of Biomolecular Models for Research and Pedagogy

    PubMed Central

    Da Veiga Beltrame, Eduardo; Tyrwhitt-Drake, James; Roy, Ian; Shalaby, Raed; Suckale, Jakob; Pomeranz Krummel, Daniel

    2017-01-01

    The construction of physical three-dimensional (3D) models of biomolecules can uniquely contribute to the study of the structure-function relationship. 3D structures are most often perceived using the two-dimensional and exclusively visual medium of the computer screen. Converting digital 3D molecular data into real objects enables information to be perceived through an expanded range of human senses, including direct stereoscopic vision, touch, and interaction. Such tangible models facilitate new insights, enable hypothesis testing, and serve as psychological or sensory anchors for conceptual information about the functions of biomolecules. Recent advances in consumer 3D printing technology enable, for the first time, the cost-effective fabrication of high-quality and scientifically accurate models of biomolecules in a variety of molecular representations. However, the optimization of the virtual model and its printing parameters is difficult and time consuming without detailed guidance. Here, we provide a guide on the digital design and physical fabrication of biomolecule models for research and pedagogy using open source or low-cost software and low-cost 3D printers that use fused filament fabrication technology. PMID:28362403

  6. Floating-point system quantization errors in digital control systems

    NASA Technical Reports Server (NTRS)

    Phillips, C. L.

    1973-01-01

    The results are reported of research into the effects on system operation of signal quantization in a digital control system. The investigation considered digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. An error analysis technique is developed, and is implemented by a digital computer program that is based on a digital simulation of the system. As an output the program gives the programming form required for minimum system quantization errors (either maximum or rms errors), and the maximum and rms errors that appear in the system output for a given bit configuration. The program can be integrated into existing digital simulations of a system.
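    The error-analysis idea, comparing a reduced-precision simulation of the controller against a high-precision reference and reporting maximum and rms output errors, can be sketched as follows. This is an illustrative reconstruction, not the paper's program: the first-order low-pass filter, the use of NumPy `float16` as the short floating-point word length, and the random input signal are all assumptions.

    ```python
    import numpy as np

    def lowpass(x, a, dtype):
        """First-order IIR low-pass y[n] = (1-a)*x[n] + a*y[n-1],
        with the state and each stored result rounded to `dtype`
        to mimic a short floating-point word length."""
        y = np.zeros(len(x), dtype=dtype)
        xq = x.astype(dtype)
        for n in range(1, len(x)):
            y[n] = dtype((1 - a) * xq[n] + a * y[n - 1])
        return y.astype(np.float64)

    rng = np.random.default_rng(0)
    x = rng.standard_normal(2000)

    ref = lowpass(x, 0.9, np.float64)    # high-precision reference run
    quant = lowpass(x, 0.9, np.float16)  # 11-bit-significand controller

    err = quant - ref
    rms_error = np.sqrt(np.mean(err ** 2))
    max_error = np.max(np.abs(err))
    print(f"rms quantization error: {rms_error:.2e}, max: {max_error:.2e}")
    ```

    Running both simulations on the same input isolates the quantization error, since the only difference between the two runs is the word length of the arithmetic.
    
    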

  7. Digital vs. conventional full-arch implant impressions: a comparative study.

    PubMed

    Amin, Sarah; Weber, Hans Peter; Finkelman, Matthew; El Rafie, Khaled; Kudara, Yukio; Papaspyridakos, Panos

    2017-11-01

    To test whether or not digital full-arch implant impressions with two different intra-oral scanners (CEREC Omnicam and True Definition) have the same accuracy as conventional ones. The hypothesis was that the splinted open-tray impressions would be more accurate than digital full-arch impressions. A stone master cast representing an edentulous mandible using five internal connection implant analogs (Straumann Bone Level RC, Basel, Switzerland) was fabricated. The three median implants were parallel to each other, the far left implant had 10°, and the far right had 15° distal angulation. A splinted open-tray technique was used for the conventional polyether impressions (n = 10) for Group 1. Digital impressions (n = 10) were taken with two intra-oral optical scanners (CEREC Omnicam and 3M True Definition) after connecting polymer scan bodies to the master cast for groups 2 and 3. Master cast and conventional impression test casts were digitized with a high-resolution reference scanner (Activity 880 scanner; Smart Optics, Bochum, Germany) to obtain digital files. Standard tessellation language (STL) datasets from the three test groups of digital and conventional impressions were superimposed with the STL dataset from the master cast to assess the 3D deviations. Deviations were recorded as root-mean-square (RMS) error. To compare the master cast with conventional and digital impressions at the implant level, Welch's F-test was used together with the Games-Howell post hoc test. Group I had a mean value of 167.93 μm (SD 50.37); Group II (Omnicam) had a mean value of 46.41 μm (SD 7.34); Group III (True Definition) had a mean value of 19.32 μm (SD 2.77). Welch's F-test showed a significant difference between the groups (P < 0.001), and the Games-Howell test showed statistically significant 3D deviations for all three groups (P < 0.001). 
Full-arch digital implant impressions using the True Definition scanner and the Omnicam were significantly more accurate than the conventional impressions with the splinted open-tray technique. Additionally, the digital impressions with the True Definition scanner showed significantly smaller 3D deviations than those with the Omnicam. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
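    The root-mean-square deviation metric used to score the superimposed STL datasets can be illustrated with a minimal sketch. The point coordinates below are hypothetical and the best-fit alignment is assumed to have already been done; the study's actual computation operates on full superimposed scan meshes, not five points.

    ```python
    import numpy as np

    def rms_deviation(master, test):
        """Root-mean-square 3D deviation between corresponding points
        of a master-cast scan and a test (impression) scan, both given
        as N x 3 coordinate arrays in the same frame (i.e. already
        superimposed / best-fit aligned)."""
        d = np.linalg.norm(master - test, axis=1)   # per-point 3D distance
        return np.sqrt(np.mean(d ** 2))

    # hypothetical implant-platform points (mm): master cast vs. one scan
    master = np.array([[0, 0, 0], [10, 0, 0], [20, 2, 0],
                       [30, 0, 0], [40, -2, 0]], dtype=float)
    scan = master + np.array([[0.02, 0.00, 0.01],   # tens of microns of error
                              [0.01, -0.02, 0.00],
                              [0.03, 0.01, 0.02],
                              [-0.02, 0.00, 0.01],
                              [0.02, 0.02, -0.01]])

    print(f"RMS deviation: {rms_deviation(master, scan) * 1000:.1f} um")
    ```

    Reporting the deviation as a single RMS value, as the study does, makes casts from different impression techniques directly comparable.
    
    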

  8. The Value of Open Geographical Data - The Danish Case

    NASA Astrophysics Data System (ADS)

    Colding, T. S.; Folner, M.; Krarup, S.; Kongsbak, J.

    2013-12-01

    Good basic data for everyone is part of the common public-sector digitization strategy for 2011 to 2015. The vision is that basic data is to be the high-quality common foundation for public-sector administration: efficiently updated in one place, and used by everyone, including the private sector. Open basic data will benefit public-sector efficiency as well as innovation and value creation in Danish society in general. With basic data as a new digital raw material, commercial products can be developed and public information and services can be improved, providing for greater insight and stronger democracy. On the first of January 2013, Denmark released this digital raw material. As a general rule, all basic data is to be made freely available to all public authorities, private businesses, and individuals. This makes basic data a common digital resource, which can be exploited freely for commercial as well as non-commercial purposes. A positive business case helped convince Danish politicians to approve the basic data program. Once the initiatives have been fully implemented, the revenues for society are expected to be approx. DKK 800 million annually. Private-sector revenues will be up to DKK 500 million annually, and it is expected that e.g. the real estate, insurance, financial, and telecom sectors, as well as GPS (sat-nav) manufacturers, public companies and entrepreneurs, will be among those to benefit hugely from the initiatives. The financial gain for the private sector from open geographical data alone is expected to be approx. DKK 100 million annually. As part of the basic data program, the Danish Geodata Agency (Ministry of the Environment) gave free access to all topographic data, cadastral maps, and the Digital Elevation Model on Jan. 1st, 2013. The Danish Geodata Agency has decided to measure the effect of the open geographic data in the public sector (efficiency) and in the private sector (growth). 
The effect will be measured against reference data (a baseline analysis) from 2012. The reference data will cover statistics about who was using which dataset, for what purpose, and what the value of that use was. This presentation briefly introduces the process behind open geographical data in Denmark, including a presentation of the positive business case. It then focuses on the research design used for measuring the effect of open geographical data in Denmark. Finally, the preliminary responses to open geographical data in the private and public sectors will be presented.

  9. Data Collection for Mental Health Studies Through Digital Platforms: Requirements and Design of a Prototype

    PubMed Central

    Triana Hoyos, Ana Maria; Alakörkkö, Tuomas; Kaski, Kimmo; Saramäki, Jari; Isometsä, Erkki; Darst, Richard K

    2017-01-01

    Background: Mental and behavioral disorders are the main cause of disability worldwide. However, their diagnosis is challenging due to a lack of reliable biomarkers; current detection is based on structured clinical interviews, which can be biased by the patient’s recall ability, affective state, changes over time, and other factors. While digital platforms have been introduced as a possible solution to this complex problem, there is little evidence on the extent of their usability and usefulness. Therefore, more studies in which digital data are collected at larger scales are needed to gather scientific evidence on the capabilities of these platforms. Most existing platforms for digital psychiatry studies are designed as monolithic systems for a certain type of study; publications from these studies focus on their results rather than on the design features of the data collection platform. Inevitably, more tools and platforms will emerge in the near future to fulfill the need for digital data collection for psychiatry, yet little knowledge from existing digital platforms is currently available for future data collection platforms to build upon. Objective: The objective of this work was to identify the most important features for designing a digital platform for data collection for mental health studies, and to demonstrate a prototype platform that we built based on these design features. Methods: We worked in close multidisciplinary collaboration with psychiatrists, software developers, and data scientists, and identified the key features that could guarantee the short-term and long-term stability and usefulness of the platform, from the design stage through data collection to the analysis of collected data. Results: The key design features that we identified were flexibility of access control, flexibility of data sources, and first-order privacy protection. 
We also designed the prototype platform Non-Intrusive Individual Monitoring Architecture (Niima), in which we implemented these key design features. We described why each of these features is important for digital data collection for psychiatry, gave examples of projects where Niima was used or is going to be used in the future, and demonstrated how incorporating these design principles opens new possibilities for studies. Conclusions: The new methods of digital psychiatry are still immature and need further research. The design features we suggest are a first step toward designing platforms that can adapt to the upcoming requirements of digital psychiatry. PMID:28600276

  10. Open access: changing global science publishing.

    PubMed

    Gasparyan, Armen Yuri; Ayvazyan, Lilit; Kitas, George D

    2013-08-01

    The article reflects on open access as a strategy of changing the quality of science communication globally. Successful examples of open-access journals are presented to highlight implications of archiving in open digital repositories for the quality and citability of research output. Advantages and downsides of gold, green, and hybrid models of open access operating in diverse scientific environments are described. It is assumed that open access is a global trend which influences the workflow in scholarly journals, changing their quality, credibility, and indexability.

  11. The successes and challenges of open-source biopharmaceutical innovation.

    PubMed

    Allarakhia, Minna

    2014-05-01

    Increasingly, open-source-based alliances seek to provide broad access to data, research-based tools, preclinical samples and downstream compounds. The challenge is how to create value from open-source biopharmaceutical innovation. This value creation may occur via transparency and usage of data across the biopharmaceutical value chain as stakeholders move dynamically between open source and open innovation. In this article, several examples are used to trace the evolution of biopharmaceutical open-source initiatives. The article specifically discusses the technological challenges associated with the integration and standardization of big data; the human capacity development challenges associated with skill development around big data usage; and the challenges associated with data and material access and usage rights, particularly as the boundary between open source and open innovation becomes more fluid. It is the author's opinion that assessing when and how value creation will occur through open-source biopharmaceutical innovation is paramount. The key is to determine the metrics of value creation and the necessary technological, educational and legal frameworks to support the downstream outcomes of today's big-data-based open-source initiatives. A continued focus on early-stage value creation alone is not advisable. Instead, it would be more advisable to adopt an approach in which stakeholders transform open-source initiatives into open-source discovery, crowdsourcing and open product development partnerships on the same platform.

  12. VizieR Online Data Catalog: Herschel-PACS and -SPIRE spectroscopy of 70 objects (Green+, 2016)

    NASA Astrophysics Data System (ADS)

    Green, J. D.; Yang, Y.-L.; Evans, N. J., II; Karska, A.; Herczeg, G.; van Dishoeck, E. F.; Lee, J.-E.; Larson, R. L.; Bouwman, J.

    2016-10-01

    We present the CDF (COPS-DIGIT-FOOSH) archive, with Herschel spectroscopic observations of 70 objects (protostars, young stellar objects, and FU Orionis objects) from the "Dust, Ice, and Gas in Time" (DIGIT) Key Project, the "FU Orionis Objects Surveyed with Herschel" Open Time Program (FOOSH OT1), and the "CO in Protostars" Open Time Program (COPS OT2) Herschel programs. These have been delivered to the Herschel archive and are available. The full source list is shown in Table 1. The full DIGIT spectroscopic sample consists of 63 sources: 24 Herbig Ae/Be stars (intermediate mass sources with circumstellar disks), 9 T Tauri stars (low mass young stars with circumstellar disks), and 30 protostars (young stars with significant envelope emission) observed with Photodetector Array Camera and Spectrometer (PACS) spectroscopy. DIGIT also included an additional wTTS (weak-line T Tauri star) sample that was observed photometrically and delivered separately. The wTTS sample is fully described by Cieza et al. 2013ApJ...762..100C. The full DIGIT embedded protostellar sample consisted of 30 Class 0/I targets, drawn from previous studies, focusing on protostars with high-quality Spitzer-IRS 5-40μm spectroscopy (summarized by Lahuis et al. 2006 c2d Spectroscopy Explanatory Supplement; Pasadena, CA: Spitzer Science Center), and UV, optical, infrared, and submillimeter complementary data. These objects are selected from some of the nearest and best-studied molecular clouds: Taurus (140pc; 6 targets), Ophiuchus (125pc; 7 targets), Perseus (230-250pc; 7 targets), R Corona Australis (130pc; 3 targets), Serpens (429pc; 2 targets), Chamaeleon (178pc, 1 target), and 4 additional isolated cores. PACS is a 5*5 array of 9.4''*9.4'' spatial pixels (spaxels) covering the spectral range from 50 to 210μm with λ/Δλ~1000-3000, divided into four segments, covering λ~50-75, 70-105, 100-145, and 140-210μm. 
The PACS spatial resolution ranges from ~9'' at the shortest wavelengths (50μm) to ~18'' at the longest (210μm), corresponding to 1000-4500AU at the distances of most sources. The nominal pointing rms of the telescope is 2''. For the DIGIT embedded protostars sample we utilized the full range of PACS (50-210μm) in two linked, pointed, chop/nod rangescans: a blue scan covering 50-75 and 100-150μm (SED B2A+short R1); and a red scan covering 70-105 and 140-210μm (SED B2B+long R1). We used 6 and 4 range repetitions respectively, for integration times of 6853 and 9088s (a total of ~16000s per target for the entire 50-210μm scan). Excluding overhead, 50% of the integration time is spent on source and 50% on sky. Thus the effective on-source integration times are 3088 and 4180s, for the blue and red scans, respectively. The total on-source integration time to achieve the entire 50-210μm scan is then 7268s. Most (21 of 33) disk sources were observed with the same procedure as the embedded objects. The other 12 sources have only partial spectral coverage: 8 Herbig Ae/Be sources (HD35187, HD203024, HD245906, HD142666, HD144432, HD141569, HD98922, and HD150193) and 4 T Tauri sources (HT Lup, RU Lup, RY Lup, and RNO90) were observed using only the blue scans (i.e., achieving a wavelength coverage only from SED B2A+short R1, 100-150μm). 9 of these 12 sources (all except HD35187, HD203024, and HD245906) were observed in a further limited wavelength range (60-72+120-134μm; referred to as "forsterite only" scans for their focus on the 69μm forsterite dust feature). The FU Orionis Objects Surveyed with Herschel (FOOSH) program consisted of 21hrs of Herschel observing time: V1057Cyg, V1331Cyg, V1515Cyg, V1735Cyg, and FUOri were observed as part of FOOSH. 
For the FOOSH sample we again utilized the full range of PACS (50-210μm) in two linked, pointed, chop/nod rangescans: a blue scan covering 50-75 and 100-150μm (SED B2A+short R1); and a red scan covering 70-105 and 140-210μm (SED B2B+long R1). We used 6 and 4 range repetitions respectively, for integration times of 3530 and 4620s (a total of ~8000s per target and off-positions combined, for the entire 50-210μm scan; the on-source integration time is ~3000s). The telescope sky background was subtracted using two nod positions 6' from the source. The Spectral and Photometric Imaging REceiver (SPIRE; 194-670μm)/Fourier Transform Spectrometer (FTS) data were taken in a single pointing with sparse image sampling, high spectral resolution mode, over 1hr of integration time. The spectrum is divided into two orders covering the spectral ranges 194-325μm ("SSW"; Spectrograph Short Wavelengths) and 320-690μm ("SLW"; Spectrograph Long Wavelengths), with a resolution Δv of 1.44GHz and resolving power, λ/Δλ~300-800, increasing at shorter wavelengths. The sample of 31 COPS (CO in ProtoStars) protostars observed with SPIRE-FTS includes 25 sources from the DIGIT and 6 from the WISH (Water in Star-forming regions with Herschel, PI: E. van Dishoeck; van Dishoeck et al. 2011PASP..123..138V; see also Nisini et al. 2010A&A...518L.120N; Kristensen et al. 2012A&A...542A...8K; Karska et al. 2013A&A...552A.141K; Wampfler et al. 2013A&A...552A..56W) key programs. A nearly identical sample was observed in CO J=16→15 with HIFI (PI: L. Kristensen) and is presented in L. Kristensen et al. (2016, in preparation). This data set (COPS: SPIRE-FTS) is analyzed in a forthcoming paper (J. Green et al. 2016, in preparation). The SPIRE beamsize ranges from 17'' to 40'', equivalent to physical sizes of ~2000-10000AU at the distances of the COPS sources. 
The COPS SPIRE-FTS data were observed identically to the FOOSH SPIRE data, in a single pointing with sparse image sampling, high spectral resolution, in 1hr of integration time per source, with one exception: the IRS 44/46 data were observed in medium image sampling (e.g., complete spatial coverage within the inner 2 rings of spaxels), in 1.5hr, in order to better distinguish IRS44 (the comparatively brighter IR source; Green et al. 2013ApJ...770..123G, J. Green et al. 2016, in preparation) from IRS46. (2 data files).

  13. Digital teleprotection units; A technology overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, D.; Madge, R.

    1992-10-01

    Over the past several years, there have been major technological advances in the area of fibre-optic links and digital communication systems. This opens the possibility for digital teleprotection systems that are both faster and more reliable than current analogue ones. This paper presents a description of a generic Digital Teleprotection Unit (DTU), followed by a discussion of the technical characteristics of current commercial systems. A comparison is made between DTUs and their analogue counterparts in the area of transfer-trip delay. Finally, a direct transfer-trip system utilizing redundant DTUs is proposed.

  14. Telemetry Modernization with Open Architecture Software-Defined Radio Technology

    DTIC Science & Technology

    2016-01-01

    ...digital (A/D) convertors and separated into narrowband channels through digital down-conversion (DDC) techniques implemented in field-programmable gate arrays... [Remainder of the excerpt is block-diagram residue: a wideband tuner and A/D feed FPGA DDC filter channels 1 through n, which are recorded at an operations center.]

  15. Six Strategies for Digital Learning Success. White Paper

    ERIC Educational Resources Information Center

    Mehta, Samir; Downs, Holly

    2016-01-01

    Technology has revolutionized corporate learning and leadership development. The number of organizations that use learning management systems is higher than ever before, and thanks to massive open online courses (MOOCs), small private online courses (SPOCS), microlearning, nanolearning, and other new media learning platforms, digital learning and…

  16. Voss and Wetherbee open the hatch to the ISS

    NASA Image and Video Library

    2001-03-10

    STS102-E-5089 (10 March 2001) --- Astronauts James D. Wetherbee (top) and James S. Voss, STS-102 commander and mission specialist, respectively, open the hatch to the International Space Station. The photograph was recorded with a digital still camera.

  17. The technique for 3D printing patient-specific models for auricular reconstruction.

    PubMed

    Flores, Roberto L; Liss, Hannah; Raffaelli, Samuel; Humayun, Aiza; Khouri, Kimberly S; Coelho, Paulo G; Witek, Lukasz

    2017-06-01

    Currently, surgeons approach autogenous microtia repair by creating a two-dimensional (2D) tracing of the unaffected ear to approximate a three-dimensional (3D) construct, a difficult process. To address these shortcomings, this study introduces the fabrication of a patient-specific, sterilizable, 3D-printed auricular model for autogenous auricular reconstruction. A high-resolution 3D digital photograph was captured of the patient's unaffected ear and surrounding anatomic structures. The photograph was exported and uploaded into Amira for transformation into a digital (.stl) model, which was imported into Blender, an open-source software platform for digital modification of data. The unaffected auricle was digitally isolated and inverted to render a model for the contralateral side. The depths of the scapha, triangular fossa, and cymba were deepened to accentuate their contours. Extra relief was added to the helical root to further distinguish this structure. The ear was then digitally deconstructed and separated into its individual auricular components for reconstruction. The completed ear and its individual components were 3D printed using polylactic acid (PLA) filament and sterilized following manufacturer specifications. The sterilized models were brought to the operating room to be utilized by the surgeon. The models allowed for more accurate anatomic measurements compared to 2D tracings, which reduced the degree of estimation required by surgeons. Approximately 20 g of the PLA filament were utilized for the construction of these models, yielding a total material cost of approximately $1. Using the methodology detailed in this report, as well as departmentally available resources (3D digital photography and 3D printing), a sterilizable, patient-specific, and inexpensive 3D auricular model was fabricated to be used intraoperatively. This technique of printing customized-to-patient models for surgeons to use as 'guides' shows great promise. 
Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
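    The digital inversion step above, rendering a model of the contralateral side from the unaffected ear, amounts to mirroring a triangle mesh across the sagittal plane. A minimal sketch follows; the toy mesh and the choice of x = 0 as the mirror plane are assumptions (the study performed this step in Blender):

    ```python
    import numpy as np

    def mirror_mesh(vertices, faces):
        """Mirror a triangle mesh across the x = 0 (sagittal) plane.
        Negating x reverses mesh orientation, so each face's vertex
        order is also flipped to keep outward-facing normals."""
        mirrored_v = vertices * np.array([-1.0, 1.0, 1.0])
        mirrored_f = faces[:, ::-1]          # reverse winding order
        return mirrored_v, mirrored_f

    # hypothetical toy mesh: a single triangle on the +x side
    verts = np.array([[1.0, 0.0, 0.0], [2.0, 1.0, 0.0], [2.0, 0.0, 1.0]])
    faces = np.array([[0, 1, 2]])

    mv, mf = mirror_mesh(verts, faces)
    print(mv[:, 0])   # x-coordinates are now on the -x side
    ```

    Flipping the winding order matters in practice: slicing software for 3D printing expects consistently oriented surface normals.
    
    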

  18. Navigating Ethics in the Digital Age: Introducing Connected and Open Research Ethics (CORE), a Tool for Researchers and Institutional Review Boards

    PubMed Central

    Torous, John

    2017-01-01

    Research studies that leverage emerging technologies, such as passive sensing devices and mobile apps, have demonstrated encouraging potential with respect to favorably influencing the human condition. As a result, the nascent fields of mHealth and digital medicine have gained traction over the past decade as demonstrated in the United States by increased federal funding for research that cuts across a broad spectrum of health conditions. The existence of mHealth and digital medicine also introduced new ethical and regulatory challenges that both institutional review boards (IRBs) and researchers are struggling to navigate. In response, the Connected and Open Research Ethics (CORE) initiative was launched. The CORE initiative has employed a participatory research approach, whereby researchers and IRB affiliates are involved in identifying the priorities and functionality of a shared resource. The overarching goal of CORE is to develop dynamic and relevant ethical practices to guide mHealth and digital medicine research. In this Viewpoint paper, we describe the CORE initiative and call for readers to join the CORE Network and contribute to the bigger conversation on ethics in the digital age. PMID:28179216

  19. Open for Business

    ERIC Educational Resources Information Center

    Voyles, Bennett

    2007-01-01

    People know about the Sakai Project (open source course management system); they may even know about Kuali (open source financials). So, what is the next wave in open source software? This article discusses business intelligence (BI) systems. Though open source BI may still be only a rumor in most campus IT departments, some brave early adopters…

  20. Hydrographic Basins Analysis Using Digital Terrain Modelling

    NASA Astrophysics Data System (ADS)

    Mihaela, Pişleagă; -Minda Codruţa, Bădăluţă; Gabriel, Eleş; Daniela, Popescu

    2017-10-01

    The paper emphasizes the link between digital terrain modelling and studies of hydrographic basins, with a focus on the analysis of hydrological processes. With the evolution of computing techniques and software, digital terrain modelling has become increasingly widespread and has established itself as a basic concept in many areas, owing to its many advantages. At present, most digital terrain models are derived from three alternative sources: ground surveys, photogrammetric data capture, or digitized cartographic sources. A wide range of features may be extracted from digital terrain models, such as surfaces, specific points and landmarks, and linear features, as well as areal features such as drainage basins, hills, or hydrological basins. The paper highlights how appropriate software is used to prepare a digital terrain model, which is subsequently used to study hydrographic basins according to various geomorphological parameters. As a final goal, it shows how the link between digital terrain modelling and hydrographic basin studies can be used to optimize the correlation between the digital terrain model and hydrological processes, in order to obtain results as close as possible to the real field processes.

  1. Field-Programmable Gate Array-based fluxgate magnetometer with digital integration

    NASA Astrophysics Data System (ADS)

    Butta, Mattia; Janosek, Michal; Ripka, Pavel

    2010-05-01

    In this paper, a digital magnetometer based on a printed-circuit-board fluxgate is presented. The fluxgate is pulse-excited and the signal is extracted by gated integration. We investigate the possibility of performing the integration over very narrow gates (typically 500 ns) using digital techniques. The magnetometer is based on a field-programmable gate array (FPGA) card: we show the advantages and disadvantages of digitizing the fluxgate output voltage with an analog-to-digital converter on the FPGA card, as well as with an external digitizer. Due to the very narrow gate, it is shown that a magnetometer entirely based on an FPGA card is preferable, because it avoids noise due to trigger instability. Both open-loop and feedback operating modes are described and the achieved results are presented.
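    The digital gated integration described here, summing the digitizer samples that fall inside a narrow gate, can be sketched as follows. The sample rate, gate position, and pulse shape below are illustrative assumptions, not values from the paper:

    ```python
    import numpy as np

    def gated_integral(samples, fs, t_start, t_gate):
        """Digitally integrate a sampled fluxgate output over a narrow
        gate: sum the samples falling inside [t_start, t_start + t_gate]
        and scale by the sample period (rectangle rule)."""
        i0 = int(round(t_start * fs))
        i1 = int(round((t_start + t_gate) * fs))
        return np.sum(samples[i0:i1]) / fs

    fs = 100e6                                    # assumed 100 MS/s digitizer
    t = np.arange(0, 5e-6, 1 / fs)
    pulse = np.exp(-((t - 2e-6) / 200e-9) ** 2)   # toy output spike (V)

    # 500 ns gate centered on the pulse
    area = gated_integral(pulse, fs, t_start=1.75e-6, t_gate=500e-9)
    print(f"gate integral: {area:.3e} V*s")
    ```

    The narrower the gate relative to the sample period, the fewer samples contribute, which is why the abstract's point about digitizer trigger stability matters: a one-sample jitter in the gate position shifts a noticeable fraction of the integrated signal.
    
    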

  2. Development and testing of methodology for evaluating the performance of multi-input/multi-output digital control systems

    NASA Technical Reports Server (NTRS)

    Polotzky, Anthony S.; Wieseman, Carol; Hoadley, Sherwood Tiffany; Mukhopadhyay, Vivek

    1990-01-01

    The development of a controller performance evaluation (CPE) methodology for multi-input/multi-output (MIMO) digital control systems is described. The equations used to obtain the open-loop plant, controller transfer matrices, and return-difference matrices are given. Results of applying the CPE methodology to evaluate MIMO digital flutter suppression systems being tested on an active flexible wing wind-tunnel model are presented to demonstrate the CPE capability.
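    A return-difference matrix of the kind mentioned here is I + G(jω)K(jω), with G the open-loop plant and K the controller; its minimum singular value over frequency is a standard multivariable robustness measure. A minimal sketch of evaluating it over a frequency band follows (the 2 × 2 plant, controller gains, and frequency grid are invented for illustration and are not from the paper):

    ```python
    import numpy as np

    def min_sv_return_difference(G, K, omegas):
        """Evaluate the return-difference matrix I + G(jw)K(jw) at each
        frequency, where G and K are callables returning complex transfer
        matrices, and return the minimum singular value at each point."""
        out = []
        for w in omegas:
            gk = G(1j * w) @ K(1j * w)
            rd = np.eye(gk.shape[0]) + gk
            out.append(np.linalg.svd(rd, compute_uv=False)[-1])  # smallest sv
        return np.array(out)

    # hypothetical 2x2 plant and constant-gain controller
    G = lambda s: np.array([[1 / (s + 1), 0.1 / (s + 2)],
                            [0.0,         1 / (s + 3)]])
    K = lambda s: np.array([[0.5, 0.0],
                            [0.0, 0.5]])

    omegas = np.logspace(-1, 2, 50)
    sv = min_sv_return_difference(G, K, omegas)
    print(f"min singular value of I+GK over band: {sv.min():.3f}")
    ```

    In a CPE setting, the same evaluation can be run on transfer matrices identified from test data rather than on analytic models.
    
    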

  3. Real-time classification and sensor fusion with a spiking deep belief network

    PubMed Central

    O'Connor, Peter; Neil, Daniel; Liu, Shih-Chii; Delbruck, Tobi; Pfeiffer, Michael

    2013-01-01

    Deep Belief Networks (DBNs) have recently shown impressive performance on a broad range of classification problems. Their generative properties allow better understanding of the performance, and provide a simpler solution for sensor fusion tasks. However, because of their inherent need for feedback and parallel update of large numbers of units, DBNs are expensive to implement on serial computers. This paper proposes a method based on the Siegert approximation for Integrate-and-Fire neurons to map an offline-trained DBN onto an efficient event-driven spiking neural network suitable for hardware implementation. The method is demonstrated in simulation and by a real-time implementation of a 3-layer network with 2694 neurons used for visual classification of MNIST handwritten digits with input from a 128 × 128 Dynamic Vision Sensor (DVS) silicon retina, and sensory-fusion using additional input from a 64-channel AER-EAR silicon cochlea. The system is implemented through the open-source software in the jAER project and runs in real-time on a laptop computer. It is demonstrated that the system can recognize digits in the presence of distractions, noise, scaling, translation and rotation, and that the degradation of recognition performance by using an event-based approach is less than 1%. Recognition is achieved in an average of 5.8 ms after the onset of the presentation of a digit. By cue integration from both silicon retina and cochlea outputs we show that the system can be biased to select the correct digit from otherwise ambiguous input. PMID:24115919

  4. The Commercial Open Source Business Model

    NASA Astrophysics Data System (ADS)

    Riehle, Dirk

    Commercial open source software projects are open source software projects that are owned by a single firm that derives a direct and significant revenue stream from the software. Commercial open source at first glance represents an economic paradox: How can a firm earn money if it is making its product available for free as open source? This paper presents the core properties of commercial open source business models and discusses how they work. Using a commercial open source approach, firms can get to market faster with a superior product at lower cost than is possible for traditional competitors. The paper shows how these benefits accrue from an engaged and self-supporting user community. Lacking any prior comprehensive reference, this paper is based on an analysis of public statements by practitioners of commercial open source. It forges the various anecdotes into a coherent description of revenue generation strategies and relevant business functions.

  5. Issues and Challenges in Open and Distance e-Learning: Perspectives from the Philippines

    ERIC Educational Resources Information Center

    Arinto, Patricia Brazil

    2016-01-01

    Rapid advances in information and communications technology in the digital age have brought about significant changes in the practice of distance education (DE) worldwide. DE practitioners in the Philippines' open university have coined the term "open and distance e-learning" (ODeL) to refer to the new forms of DE, which are…

  6. Open Code - Open Content - Open Law. Building a Digital Commons

    DTIC Science & Technology

    1999-06-21

    ...keep porn away from kids. And while I’m all for defeating COPA or the CDA, or whatever “C” word they come up with the next time around, I am... completely baffled about the priorities. Sure, civil liberties will be compromised if COPA stands; sure, cyberspace will be different if porn is not available...

  7. Performance comparison of denoising filters for source camera identification

    NASA Astrophysics Data System (ADS)

    Cortiana, A.; Conotter, V.; Boato, G.; De Natale, F. G. B.

    2011-02-01

    Source identification for digital content is one of the main branches of digital image forensics. It relies on the extraction of the photo-response non-uniformity (PRNU) noise as a unique intrinsic fingerprint that efficiently characterizes the digital device which generated the content. Such noise is estimated as the difference between the content and its de-noised version obtained via denoising filter processing. This paper proposes a performance comparison of different denoising filters for source identification purposes. In particular, results achieved with a sophisticated 3D filter are presented and discussed with respect to state-of-the-art denoising filters previously employed in such a context.
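    The PRNU extraction described here, estimating the noise residual as the content minus its denoised version and then correlating it against a camera fingerprint, can be sketched as below. The 3 × 3 mean filter stands in for the sophisticated denoisers the paper compares, and the synthetic fingerprint and scene are assumptions for illustration only:

    ```python
    import numpy as np

    def box_denoise(img):
        """3x3 mean filter as a stand-in denoiser (real PRNU work uses
        wavelet or 3D collaborative filters; this is only illustrative)."""
        padded = np.pad(img, 1, mode='edge')
        out = np.zeros_like(img, dtype=float)
        h, w = img.shape
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        return out / 9.0

    def noise_residual(img):
        """PRNU-style residual: the content minus its denoised version."""
        return img - box_denoise(img)

    def fingerprint_correlation(residual, fingerprint):
        """Normalized correlation between a residual and a camera
        fingerprint; a high value suggests the same source device."""
        r = residual - residual.mean()
        f = fingerprint - fingerprint.mean()
        return float((r * f).sum() / np.sqrt((r ** 2).sum() * (f ** 2).sum()))

    rng = np.random.default_rng(1)
    fingerprint = rng.standard_normal((64, 64)) * 0.02     # synthetic PRNU
    yy, xx = np.mgrid[0:64, 0:64]
    scene = 100 + 40 * np.sin(xx / 10.0) * np.cos(yy / 12.0)  # smooth scene
    img = scene * (1 + fingerprint)            # multiplicative PRNU model

    corr = fingerprint_correlation(noise_residual(img), fingerprint)
    print(f"residual-fingerprint correlation: {corr:.2f}")
    ```

    The choice of denoiser is exactly what the paper benchmarks: the more scene content the filter leaves in the residual, the more it masks the PRNU signature and degrades identification.
    
    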

  8. KID Project: an internet-based digital video atlas of capsule endoscopy for research purposes

    PubMed Central

    Koulaouzidis, Anastasios; Iakovidis, Dimitris K.; Yung, Diana E.; Rondonotti, Emanuele; Kopylov, Uri; Plevris, John N.; Toth, Ervin; Eliakim, Abraham; Wurm Johansson, Gabrielle; Marlicz, Wojciech; Mavrogenis, Georgios; Nemeth, Artur; Thorlacius, Henrik; Tontini, Gian Eugenio

    2017-01-01

    Background and aims  Capsule endoscopy (CE) has revolutionized small-bowel (SB) investigation. Computational methods can enhance diagnostic yield (DY); however, incorporating machine learning algorithms (MLAs) into CE reading is difficult as large amounts of image annotations are required for training. Current databases lack graphic annotations of pathologies and cannot be used. A novel database, KID, aims to provide a reference for research and development of medical decision support systems (MDSS) for CE. Methods  Open-source software was used for the KID database. Clinicians contribute anonymized, annotated CE images and videos. Graphic annotations are supported by an open-access annotation tool (Ratsnake). We detail an experiment based on the KID database, examining differences in SB lesion measurement between human readers and an MLA. The Jaccard Index (JI) was used to evaluate similarity between annotations by the MLA and human readers. Results  The MLA performed best in measuring lymphangiectasias with a JI of 81 ± 6 %. The other lesion types were: angioectasias (JI 64 ± 11 %), aphthae (JI 64 ± 8 %), chylous cysts (JI 70 ± 14 %), polypoid lesions (JI 75 ± 21 %), and ulcers (JI 56 ± 9 %). Conclusion  MLAs can perform as well as human readers in the measurement of SB angioectasias in white light (WL). Automated lesion measurement is therefore feasible. KID is currently the only open-source CE database developed specifically to aid development of MDSS. Our experiment demonstrates this potential. PMID:28580415
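
    The Jaccard Index used above to score agreement between annotations is simple to state: the overlap of two binary masks divided by their union. A minimal sketch (the masks and sizes are invented for illustration):

    ```python
    import numpy as np

    def jaccard_index(mask_a, mask_b):
        """Jaccard Index of two binary annotations: |A ∩ B| / |A ∪ B|."""
        inter = np.logical_and(mask_a, mask_b).sum()
        union = np.logical_or(mask_a, mask_b).sum()
        return float(inter / union) if union else 1.0

    # A human reader's lesion outline and an algorithm's outline, as binary masks
    reader = np.zeros((10, 10), bool); reader[2:6, 2:6] = True  # 16 px
    algo   = np.zeros((10, 10), bool); algo[3:7, 3:7] = True    # 16 px, 9 px overlap
    print(jaccard_index(reader, algo))  # 9 / 23 ≈ 0.391
    ```

    A JI of 81 % for lymphangiectasias thus means the algorithm's outlines overlapped human outlines over roughly four fifths of their combined area.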

  9. Global hierarchical classification of deepwater and wetland environments from remote sensing products

    NASA Astrophysics Data System (ADS)

    Fluet-Chouinard, E.; Lehner, B.; Aires, F.; Prigent, C.; McIntyre, P. B.

    2017-12-01

    Global surface water maps have improved in spatial and temporal resolutions through various remote sensing methods: open water extents with compiled Landsat archives and inundation with topographically downscaled multi-sensor retrievals. These time-series capture variations through time of open water and inundation without discriminating between hydrographic features (e.g. lakes, reservoirs, river channels and wetland types) as other databases have done with static representations. Available data sources present the opportunity to generate a comprehensive map and typology of aquatic environments (deepwater and wetlands) that improves on earlier digitized inventories and maps. The challenge of classifying surface waters globally is to distinguish wetland types with meaningful characteristics or proxies (hydrology, water chemistry, soils, vegetation) while accommodating limitations of remote sensing data. We present a new wetland classification scheme designed for global application and produce a map of aquatic ecosystem types globally using state-of-the-art remote sensing products. Our classification scheme combines open water extent and expands it with downscaled multi-sensor inundation data to capture the maximal vegetated wetland extent. The hierarchical structure of the classification is modified from the Cowardin Systems (1979) developed for the USA. The first level classification is based on a combination of landscape positions and water source (e.g. lacustrine, riverine, palustrine, coastal and artificial) while the second level represents the hydrologic regime (e.g. perennial, seasonal, intermittent and waterlogged). Class-specific descriptors can further detail the wetland types with soils and vegetation cover. Our globally consistent nomenclature and top-down mapping allows for direct comparison across biogeographic regions and for upscaling biogeochemical fluxes as well as other landscape-level functions.

  10. An Intrinsically Digital Amplification Scheme for Hearing Aids

    NASA Astrophysics Data System (ADS)

    Blamey, Peter J.; Macfarlane, David S.; Steele, Brenton R.

    2005-12-01

    Results for linear amplification and wide-dynamic-range compression were compared with a new 64-channel digital amplification strategy in three separate studies. The new strategy addresses the requirements of the hearing aid user with efficient computations on an open-platform digital signal processor (DSP). The new amplification strategy is not modeled on prior analog strategies like compression and linear amplification, but uses statistical analysis of the signal to optimize the output dynamic range in each frequency band independently. Using the open-platform DSP processor also provided the opportunity for blind trial comparisons of the different processing schemes in BTE and ITE devices of a high commercial standard. The speech perception scores and questionnaire results show that it is possible to provide improved audibility for sound in many narrow frequency bands while simultaneously improving comfort, speech intelligibility in noise, and sound quality.
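
    The abstract describes fitting each band's observed level statistics into the listener's usable output range rather than applying a fixed compression law. One way such a per-band statistical mapping could look, purely as an illustrative sketch (the percentiles, ranges, and function names are assumptions, not the authors' algorithm):

    ```python
    import numpy as np

    def band_gains(band_levels_db, out_floor_db=40.0, out_ceiling_db=90.0):
        """For each band, map the bulk of the observed input level distribution
        (10th-90th percentile) linearly onto a target output range in dB.
        Returns one (slope, intercept) pair per band; slope < 1 compresses."""
        gains = []
        for levels in band_levels_db:  # one array of frame levels (dB) per band
            lo, hi = np.percentile(levels, [10, 90])
            slope = (out_ceiling_db - out_floor_db) / max(hi - lo, 1e-6)
            intercept = out_floor_db - slope * lo
            gains.append((slope, intercept))
        return gains

    # One band with observed frame levels of 50-80 dB SPL
    slope, intercept = band_gains([np.array([50.0, 60.0, 70.0, 80.0])])[0]
    print(round(slope * 53.0 + intercept, 1), round(slope * 77.0 + intercept, 1))  # -> 40.0 90.0
    ```

    The point of doing this independently in each of the 64 bands is that a band where the signal spans only a few dB gets a different mapping than a band with a wide level distribution.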

  11. Digital processing of RF signals from optical frequency combs

    NASA Astrophysics Data System (ADS)

    Cizek, Martin; Smid, Radek; Buchta, Zdeněk.; Mikel, Břetislav; Lazar, Josef; Cip, Ondřej

    2013-01-01

    The presented work is focused on digital processing of beat note signals from a femtosecond optical frequency comb. The levels of mixing products of single spectral components of the comb with CW laser sources are usually very low compared to the products of mixing all the comb components together. RF counters are therefore more likely to measure the frequency of the strongest spectral component than that of a weak beat note. The proposed experimental digital signal processing system solves this problem by analyzing the whole spectrum of the output RF signal and using software defined radio (SDR) algorithms. Our efforts concentrate on two main areas: firstly, using digital servo-loop techniques for locking free-running continuous laser sources to single components of the fs comb spectrum; secondly, experimenting with digital signal processing of the RF beat note spectrum produced by the f-2f technique used for assessing the offset and repetition frequencies of the comb, resulting in digital servo-loop stabilization of the fs comb. Software capable of computing and analyzing beat-note RF spectra using FFT and peak detection was developed. An SDR algorithm performing phase demodulation on the f-2f signal is used as the regulation error signal source for a digital phase-locked loop stabilizing the offset frequency of the fs comb.
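
    The core idea, analyzing the whole RF spectrum rather than letting a counter latch onto the strongest component, can be illustrated with a short simulation: FFT peak detection recovers a weak beat note sitting next to a strong mixing product. The frequencies, amplitudes, and sample rate here are invented for the demo:

    ```python
    import numpy as np

    fs = 1_000_000.0  # assumed sample rate, Hz
    n = 8192
    t = np.arange(n) / fs
    rng = np.random.default_rng(1)

    # Simulated RF output: a strong mixing product at 250 kHz swamps
    # a weak beat note at 180 kHz (the one a plain counter would miss)
    signal = (1.00 * np.sin(2 * np.pi * 250e3 * t)
            + 0.05 * np.sin(2 * np.pi * 180e3 * t)
            + 0.01 * rng.standard_normal(n))

    spectrum = np.abs(np.fft.rfft(signal * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1 / fs)

    # Peak detection: local maxima above a noise-floor threshold
    thresh = 5 * np.median(spectrum)
    peaks = [i for i in range(1, len(spectrum) - 1)
             if spectrum[i - 1] < spectrum[i] > spectrum[i + 1]
             and spectrum[i] > thresh]

    # Keep the two strongest peaks and report them in kHz
    top2 = sorted(sorted(peaks, key=lambda i: spectrum[i])[-2:])
    print([round(freqs[i] / 1e3) for i in top2])  # both tones recovered
    ```

    In the actual system the detected beat-note frequency would then feed the digital servo loop as its error signal.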

  12. Digital processing of signals from femtosecond combs

    NASA Astrophysics Data System (ADS)

    Čížek, Martin; Šmíd, Radek; Buchta, Zdeněk.; Mikel, Břetislav; Lazar, Josef; Číp, Ondrej

    2012-01-01

    The presented work is focused on digital processing of beat note signals from a femtosecond optical frequency comb. The levels of mixing products of single spectral components of the comb with CW laser sources are usually very low compared to the products of mixing all the comb components together. RF counters are therefore more likely to measure the frequency of the strongest spectral component than that of a weak beat note. The proposed experimental digital signal processing system solves this problem by analyzing the whole spectrum of the output RF signal and using software defined radio (SDR) algorithms. Our efforts concentrate on two main areas: firstly, we are experimenting with digital signal processing of the RF beat note spectrum produced by the f-2f technique and with fully digital servo-loop stabilization of the fs comb; secondly, we are using digital servo-loop techniques for locking free-running continuous laser sources to single components of the fs comb spectrum. Software capable of computing and analyzing beat-note RF spectra using FFT and peak detection was developed. An SDR algorithm performing phase demodulation on the f-2f signal is used as the regulation error signal source for a digital phase-locked loop stabilizing the offset and repetition frequencies of the fs comb.

  13. Dietary Digital Diaries: Documenting Adolescents' Obesogenic Environment

    ERIC Educational Resources Information Center

    Staiano, Amanda E.; Baker, Christina M.; Calvert, Sandra L.

    2012-01-01

    Obesogenic environments promote excessive caloric and fat intake. A total of 23 low-income, African American adolescents digitally photographed their lunchtime food environment at a school buffet during summer camp. Depicted food was coded for nutritional content on the platescape (own plate or others' plates) and the tablescape (open buffet).…

  14. Transaction Circles with Digital Texts as a Foundation for Democratic Practices

    ERIC Educational Resources Information Center

    Brown, Sally

    2015-01-01

    Transaction circles weave together elements of guided reading and literature circles in an open conversational structure that supports students as agentive learners. Discourse within these circles utilizing digital informational texts assists in the development of democratic practices even in a time when federal mandates limit curricula and…

  15. Digital Divide in Post-Primary Schools

    ERIC Educational Resources Information Center

    Marcus-Quinn, Ann; McGarr, Oliver

    2013-01-01

    This research study developed curricular specific open educational resources (OERs) for the teaching of poetry at Junior Certificate level in Irish post-primary schools. It aimed to capture the collaborative design and development process used in the development of the digital resources and describe and evaluate the implementation of the resources…

  16. Montessori Practices: Options for a Digital Age

    ERIC Educational Resources Information Center

    Powell, Mark

    2016-01-01

    Mark Powell's plea for an open-minded view on the full scope of technology that is compatible with Montessori education enriches Maria Montessori's clear modernism of welcoming science into her educational vision. Growing up digital can be intelligently managed so that "technology may offer an effective, adaptable, and easily available means…

  17. Fostering Historical Thinking with Digitized Primary Sources

    ERIC Educational Resources Information Center

    Tally, Bill; Goldenberg, Lauren B.

    2005-01-01

    This pilot study examined middle school and high school student performance on an online historical thinking assessment task. After their teachers received training in the use of digital historical archives, students from all groups engaged in historical thinking behaviors (e.g., observation, sourcing, inferencing, evidence, question-posing, and…

  18. The taxonomic name resolution service: an online tool for automated standardization of plant names

    PubMed Central

    2013-01-01

    Background The digitization of biodiversity data is leading to the widespread application of taxon names that are superfluous, ambiguous or incorrect, resulting in mismatched records and inflated species numbers. The ultimate consequences of misspelled names and bad taxonomy are erroneous scientific conclusions and faulty policy decisions. The lack of tools for correcting this ‘names problem’ has become a fundamental obstacle to integrating disparate data sources and advancing the progress of biodiversity science. Results The TNRS, or Taxonomic Name Resolution Service, is an online application for automated and user-supervised standardization of plant scientific names. The TNRS builds upon and extends existing open-source applications for name parsing and fuzzy matching. Names are standardized against multiple reference taxonomies, including the Missouri Botanical Garden's Tropicos database. Capable of processing thousands of names in a single operation, the TNRS parses and corrects misspelled names and authorities, standardizes variant spellings, and converts nomenclatural synonyms to accepted names. Family names can be included to increase match accuracy and resolve many types of homonyms. Partial matching of higher taxa combined with extraction of annotations, accession numbers and morphospecies allows the TNRS to standardize taxonomy across a broad range of active and legacy datasets. Conclusions We show how the TNRS can resolve many forms of taxonomic semantic heterogeneity, correct spelling errors and eliminate spurious names. As a result, the TNRS can aid the integration of disparate biological datasets. Although the TNRS was developed to aid in standardizing plant names, its underlying algorithms and design can be extended to all organisms and nomenclatural codes. The TNRS is accessible via a web interface at http://tnrs.iplantcollaborative.org/ and as a RESTful web service and application programming interface. Source code is available at https://github.com/iPlantCollaborativeOpenSource/TNRS/. PMID:23324024
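
    The fuzzy-matching core of a name-resolution step like this can be sketched with the standard library alone. This is not the TNRS algorithm: the reference list is a tiny invented stand-in for a real taxonomy such as Tropicos, and the similarity cutoff is an assumption:

    ```python
    import difflib

    # Tiny stand-in reference taxonomy (a real service matches against
    # databases of hundreds of thousands of accepted names)
    reference = ["Quercus robur", "Quercus rubra",
                 "Acer saccharum", "Pinus sylvestris"]

    def resolve(name, cutoff=0.8):
        """Fuzzy-match a possibly misspelled binomial against the reference
        list; return the best match above the cutoff, or None."""
        hits = difflib.get_close_matches(name, reference, n=1, cutoff=cutoff)
        return hits[0] if hits else None

    print(resolve("Quercus robor"))   # misspelling -> "Quercus robur"
    print(resolve("Acer sacharum"))   # -> "Acer saccharum"
    print(resolve("Betula pendula"))  # absent from reference -> None
    ```

    The real service layers name parsing, authority handling, and synonym conversion on top of this kind of matching, which is why it can also fold nomenclatural synonyms into accepted names.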

  19. Digital Forensics Using Local Signal Statistics

    ERIC Educational Resources Information Center

    Pan, Xunyu

    2011-01-01

    With the rapid growth of the Internet and the popularity of digital imaging devices, digital imagery has become our major information source. Meanwhile, the development of digital manipulation techniques employed by most image editing software brings new challenges to the credibility of photographic images as the definite records of events. We…

  20. 5D Modelling: An Efficient Approach for Creating Spatiotemporal Predictive 3D Maps of Large-Scale Cultural Resources

    NASA Astrophysics Data System (ADS)

    Doulamis, A.; Doulamis, N.; Ioannidis, C.; Chrysouli, C.; Grammalidis, N.; Dimitropoulos, K.; Potsiou, C.; Stathopoulou, E.-K.; Ioannides, M.

    2015-08-01

    Outdoor large-scale cultural sites are mostly sensitive to environmental, natural and human-made factors, implying an imminent need for a spatio-temporal assessment to identify regions of potential cultural interest (material degradation, structuring, conservation). On the other hand, quite different actors are involved in Cultural Heritage research (archaeologists, curators, conservators, simple users), each with diverse needs. All these statements advocate that 5D modelling (3D geometry plus time plus levels of details) is ideally required for preservation and assessment of outdoor large-scale cultural sites, which is currently implemented as a simple aggregation of 3D digital models at different times and levels of details. The main bottleneck of such an approach is its complexity, making 5D modelling impossible to validate in real-life conditions. In this paper, a cost-effective and affordable framework for 5D modelling is proposed based on a spatial-temporal dependent aggregation of 3D digital models, incorporating a predictive assessment procedure to indicate which regions (surfaces) of an object should be reconstructed at higher levels of details at the next time instances and which at lower ones. In this way, dynamic change history maps are created, indicating the spatial probability that a region will need further 3D modelling at forthcoming instances. Using these maps, a predictive assessment can be made; that is, surfaces can be localized within the objects where a high-accuracy reconstruction process needs to be activated at the forthcoming time instances. The proposed 5D Digital Cultural Heritage Model (5D-DCHM) is implemented using open interoperable standards based on the CityGML framework, which also allows the description of additional semantic metadata information. Visualization aspects are also supported to allow easy manipulation, interaction and representation of the 5D-DCHM geometry and the respective semantic information. The open-source 3DCityDB incorporating a PostgreSQL geo-database is used to manage and manipulate the 3D data and their semantics.
