The successes and challenges of open-source biopharmaceutical innovation.
Allarakhia, Minna
2014-05-01
Increasingly, open-source-based alliances seek to provide broad access to data, research-based tools, preclinical samples and downstream compounds. The challenge is how to create value from open-source biopharmaceutical innovation. This value creation may occur via transparency and usage of data across the biopharmaceutical value chain as stakeholders move dynamically between open source and open innovation. In this article, several examples are used to trace the evolution of biopharmaceutical open-source initiatives. The article specifically discusses the technological challenges associated with the integration and standardization of big data; the human capacity development challenges associated with skill development around big data usage; and the data-material access challenge associated with data and material access and usage rights, particularly as the boundary between open source and open innovation becomes more fluid. It is the author's opinion that assessing when and how value creation will occur through open-source biopharmaceutical innovation is paramount. The key is to determine the metrics of value creation and the necessary technological, educational and legal frameworks to support the downstream outcomes of open-source initiatives that are now based on big data. Rather than continuing to focus on early-stage value creation, stakeholders would be better advised to transform open-source initiatives into open-source discovery, crowdsourcing and open product development partnerships on the same platform.
Getting Open Source Software into Schools: Strategies and Challenges
ERIC Educational Resources Information Center
Hepburn, Gary; Buley, Jan
2006-01-01
In this article Gary Hepburn and Jan Buley outline different approaches to implementing open source software (OSS) in schools; they also address the challenges that open source advocates should anticipate as they try to convince educational leaders to adopt OSS. With regard to OSS implementation, they note that schools have a flexible range of…
Noninvasive Fetal ECG: the PhysioNet/Computing in Cardiology Challenge 2013.
Silva, Ikaro; Behar, Joachim; Sameni, Reza; Zhu, Tingting; Oster, Julien; Clifford, Gari D; Moody, George B
2013-03-01
The PhysioNet/CinC 2013 Challenge aimed to stimulate rapid development and improvement of software for estimating fetal heart rate (FHR), fetal interbeat intervals (FRR), and fetal QT intervals (FQT) from multichannel recordings made using electrodes placed on the mother's abdomen. For the challenge, five data collections from a variety of sources were used to compile a large standardized database, which was divided into training, open test, and hidden test subsets. Gold-standard fetal QRS and QT interval annotations were developed using a novel crowd-sourcing framework. In three challenge events, the organizers used the hidden test subset, which was not available for study by participants, to evaluate FHR, FRR, and FQT estimation. Two additional events required only user-submitted QRS annotations to evaluate FHR and FRR estimation accuracy using the open test subset available to participants. The challenge yielded a total of 91 open-source software entries submitted by 53 international teams. The best of these achieved average estimation errors of 187 bpm² for FHR, 20.9 ms for FRR, and 152.7 ms for FQT. The open data sets, scoring software, and open-source entries are available at PhysioNet for researchers interested in working on these problems.
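As a minimal illustration of the quantities being scored (this sketch is not part of the Challenge software, and the input times are invented), the following Python derives FHR and FRR estimates from a series of fetal QRS detection times; the bpm² unit above reflects a squared-error FHR score.

```python
# Illustrative sketch: FHR (bpm) and FRR (ms) from fetal QRS times in seconds.
import numpy as np

def fhr_and_frr(fqrs_times):
    """Return mean fetal heart rate (bpm) and fetal interbeat intervals (ms)."""
    t = np.asarray(fqrs_times, dtype=float)
    frr_ms = np.diff(t) * 1000.0   # fetal RR intervals in milliseconds
    fhr_bpm = 60000.0 / frr_ms     # instantaneous rate for each interval
    return fhr_bpm.mean(), frr_ms

# Example: a regular ~140 bpm fetal rhythm over five beats
times = [0.0, 0.4286, 0.8571, 1.2857, 1.7143]
mean_fhr, frr = fhr_and_frr(times)
print(round(mean_fhr, 1), frr.round(1))   # ~140.0 bpm, ~428.6 ms intervals
```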
Developing open-source codes for electromagnetic geophysics using industry support
NASA Astrophysics Data System (ADS)
Key, K.
2017-12-01
Funding for open-source software development in academia often takes the form of grants and fellowships awarded by government bodies and foundations where there is no conflict-of-interest between the funding entity and the free dissemination of the open-source software products. Conversely, funding for open-source projects in the geophysics industry presents challenges to conventional business models where proprietary licensing offers value that is not present in open-source software. Such proprietary constraints make it easier to convince companies to fund academic software development under exclusive software distribution agreements. A major challenge for obtaining commercial funding for open-source projects is to offer a value proposition that overcomes the criticism that such funding is a give-away to the competition. This work draws upon a decade of experience developing open-source electromagnetic geophysics software for the oil, gas and minerals exploration industry, and examines various approaches that have been effective for sustaining industry sponsorship.
Crowd Sourcing for Challenging Technical Problems and Business Model
NASA Technical Reports Server (NTRS)
Davis, Jeffrey R.; Richard, Elizabeth
2011-01-01
Crowd sourcing may be defined as the act of outsourcing tasks that are traditionally performed by an employee or contractor to an undefined, generally large group of people or community (a crowd) in the form of an open call. The open call may be issued by an organization wishing to find a solution to a particular problem or complete a task, or by an open innovation service provider on behalf of that organization. In 2008, the Space Life Sciences Directorate (SLSD), with the support of Wyle Integrated Science and Engineering, established and implemented pilot projects in open innovation (crowd sourcing) to determine if these new internet-based platforms could indeed find solutions to difficult technical challenges. These unsolved technical problems were converted to problem statements, also called "Challenges" or "Technical Needs" by the various open innovation service providers, and were then posted externally to seek solutions. In addition, an open call was issued internally to NASA employees Agency-wide (10 Field Centers and NASA HQ) using an open innovation service provider crowd sourcing platform, allowing each Center to post NASA challenges for the others to propose solutions. From 2008 to 2010, the SLSD issued 34 challenges, 14 externally and 20 internally. The 14 external problems or challenges were posted through three different vendors: InnoCentive, Yet2.com and TopCoder. The 20 internal challenges were conducted using the InnoCentive crowd sourcing platform designed for internal use by an organization. This platform was customized for NASA use and promoted as NASA@Work. The results were significant. Of the seven InnoCentive external challenges, two full and five partial awards were made in complex technical areas such as predicting solar flares and long-duration food packaging. Similarly, the TopCoder challenge yielded an optimization algorithm for designing a lunar medical kit. The Yet2.com challenges yielded many new industry and academic contacts in bone imaging, microbial detection and even the use of pharmaceuticals for radiation protection. The internal challenges through NASA@Work drew over 6000 participants across all NASA centers. Challenges conducted by each NASA center elicited ideas and solutions from several other NASA centers and demonstrated rapid and efficient participation from employees at multiple centers to contribute to problem solving. Finally, on January 19, 2011, the SLSD conducted a workshop on open collaboration and innovation strategies and best practices through the newly established NASA Human Health and Performance Center (NHHPC). Initial projects will be described, leading to a new business model for the SLSD.
Understanding How the "Open" of Open Source Software (OSS) Will Improve Global Health Security.
Hahn, Erin; Blazes, David; Lewis, Sheri
2016-01-01
Improving global health security will require bold action in all corners of the world, particularly in developing settings, where poverty often contributes to an increase in emerging infectious diseases. In order to mitigate the impact of emerging pandemic threats, enhanced disease surveillance is needed to improve early detection and rapid response to outbreaks. However, the technology to facilitate this surveillance is often unattainable because of high costs, software and hardware maintenance needs, limited technical competence among public health officials, and internet connectivity challenges experienced in the field. One potential solution is to leverage open source software, a concept that is unfortunately often misunderstood. This article describes the principles and characteristics of open source software and how it may be applied to solve global health security challenges.
The paper examines the quality assurance challenges associated with open path Fourier transform infrared (OPFTIR) measurements of large area pollution sources with plume reconstruction by computed tomography (CT) and how each challenge may be met. Traditionally, pollutant concent...
Scaling Agile Infrastructure to People
NASA Astrophysics Data System (ADS)
Jones, B.; McCance, G.; Traylen, S.; Barrientos Arias, N.
2015-12-01
When CERN migrated its infrastructure away from homegrown fabric management tools to emerging industry-standard open-source solutions, the immediate technical challenges and motivation were clear. The move to a multi-site Cloud Computing model meant that the tool chains growing around this ecosystem would be a good choice; the challenge was to leverage them. The use of open-source tools brings challenges beyond merely how to deploy them. Homegrown software, for all the deficiencies identified at the outset of the project, has the benefit of growing with the organization. This paper examines the challenges of adapting open-source tools to the needs of the organization, particularly in the areas of multi-group development and security. Additionally, the increase in scale of the plant required changes to how Change Management was organized and managed. Continuous Integration techniques are used to manage the rate of change across multiple groups, and the tools and workflow for this will be examined.
Open Data, Open Source and Open Standards in chemistry: The Blue Obelisk five years on
2011-01-01
Background The Blue Obelisk movement was established in 2005 as a response to the lack of Open Data, Open Standards and Open Source (ODOSOS) in chemistry. It aims to make it easier to carry out chemistry research by promoting interoperability between chemistry software, encouraging cooperation between Open Source developers, and developing community resources and Open Standards. Results This contribution looks back on the work carried out by the Blue Obelisk in the past 5 years and surveys progress and remaining challenges in the areas of Open Data, Open Standards, and Open Source in chemistry. Conclusions We show that the Blue Obelisk has been very successful in bringing together researchers and developers with common interests in ODOSOS, leading to development of many useful resources freely available to the chemistry community. PMID:21999342
ERIC Educational Resources Information Center
Lakhan, Shaheen E.; Jhunjhunwala, Kavita
2008-01-01
Educational institutions have rushed to put their academic resources and services online, bringing the global community onto a common platform and awakening the interest of investors. Despite continuing technical challenges, online education shows great promise. Open source software offers one approach to addressing the technical problems in…
Adopting Open Source Software to Address Software Risks during the Scientific Data Life Cycle
NASA Astrophysics Data System (ADS)
Vinay, S.; Downs, R. R.
2012-12-01
Software enables the creation, management, storage, distribution, discovery, and use of scientific data throughout the data life cycle. However, the capabilities offered by software also present risks for the stewardship of scientific data, since future access to digital data is dependent on the use of software. From operating systems to applications for analyzing data, the dependence of data on software presents challenges for the stewardship of scientific data. Adopting open source software provides opportunities to address some of the proprietary risks of data dependence on software. For example, in some cases, open source software can be deployed to avoid licensing restrictions for using, modifying, and transferring proprietary software. The availability of the source code of open source software also enables the inclusion of modifications, which may be contributed by various community members who are addressing similar issues. Likewise, an active community that is maintaining open source software can be a valuable source of help, providing an opportunity to collaborate to address common issues facing adopters. As part of the effort to meet the challenges of software dependence for scientific data stewardship, risks from software dependence have been identified that arise at various stages of the data life cycle. The identification of these risks should enable the development of plans for mitigating software dependencies, where applicable, using open source software, and improve understanding of software dependency risks for scientific data and how they can be reduced during the data life cycle.
Characterizing highly dynamic, transient, and vertically lofted emissions from open area sources poses unique measurement challenges. This study developed and applied a multipollutant sensor and integrated sampler system for use on mobile applications including tethered balloons ...
OpenSim: open-source software to create and analyze dynamic simulations of movement.
Delp, Scott L; Anderson, Frank C; Arnold, Allison S; Loan, Peter; Habib, Ayman; John, Chand T; Guendelman, Eran; Thelen, Darryl G
2007-11-01
Dynamic simulations of movement allow one to study neuromuscular coordination, analyze athletic performance, and estimate internal loading of the musculoskeletal system. Simulations can also be used to identify the sources of pathological movement and establish a scientific basis for treatment planning. We have developed a freely available, open-source software system (OpenSim) that lets users develop models of musculoskeletal structures and create dynamic simulations of a wide variety of movements. We are using this system to simulate the dynamics of individuals with pathological gait and to explore the biomechanical effects of treatments. OpenSim provides a platform on which the biomechanics community can build a library of simulations that can be exchanged, tested, analyzed, and improved through a multi-institutional collaboration. Developing software that enables a concerted effort from many investigators poses technical and sociological challenges. Meeting those challenges will accelerate the discovery of principles that govern movement control and improve treatments for individuals with movement pathologies.
Tools for open geospatial science
NASA Astrophysics Data System (ADS)
Petras, V.; Petrasova, A.; Mitasova, H.
2017-12-01
Open science uses open source to deal with reproducibility challenges in data and computational sciences. However, just using open source software or making the code public does not make the research reproducible. Moreover, the scientists face the challenge of learning new unfamiliar tools and workflows. In this contribution, we will look at a graduate-level course syllabus covering several software tools which make validation and reuse by a wider professional community possible. For the novices in the open science arena, we will look at how scripting languages such as Python and Bash help us reproduce research (starting with our own work). Jupyter Notebook will be introduced as a code editor, data exploration tool, and a lab notebook. We will see how Git helps us not to get lost in revisions and how Docker is used to wrap all the parts together using a single text file so that figures for a scientific paper or a technical report can be generated with a single command. We will look at examples of software and publications in the geospatial domain which use these tools and principles. Scientific contributions to GRASS GIS, a powerful open source desktop GIS and geoprocessing backend, will serve as an example of why and how to publish new algorithms and tools as part of a bigger open source project.
Building CHAOS: An Operating System for Livermore Linux Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garlick, J E; Dunlap, C M
2003-02-21
The Livermore Computing (LC) Linux Integration and Development Project (the Linux Project) produces and supports the Clustered High Availability Operating System (CHAOS), a cluster operating environment based on Red Hat Linux. Each CHAOS release begins with a set of requirements and ends with a formally tested, packaged, and documented release suitable for use on LC's production Linux clusters. One characteristic of CHAOS is that component software packages come from different sources under varying degrees of project control. Some are developed by the Linux Project, some are developed by other LC projects, some are external open source projects, and some are commercial software packages. A challenge to the Linux Project is to adhere to release schedules and testing disciplines in a diverse, highly decentralized development environment. Communication channels are maintained for externally developed packages in order to obtain support, influence development decisions, and coordinate/understand release schedules. The Linux Project embraces open source by releasing locally developed packages under open source license, by collaborating with open source projects where mutually beneficial, and by preferring open source over proprietary software. Project members generally use open source development tools. The Linux Project requires system administrators and developers to work together to resolve problems that arise in production. This tight coupling of production and development is a key strategy for making a product that directly addresses LC's production requirements. It is another challenge to balance support and development activities in such a way that one does not overwhelm the other.
Online Textbooks Deliver Timely, Real-World Content
ERIC Educational Resources Information Center
Seidel, Kim
2009-01-01
Faced with the challenge of keeping up with the rapidly changing field of information systems, author and teacher John Gallaugher opted to write an open source textbook with a new online company, Flat World Knowledge (FWK). Gallaugher's open source textbook, "Information Systems: A Manager's Guide to Harnessing Technology", has an expected…
High-performance semiconductor quantum-dot single-photon sources
NASA Astrophysics Data System (ADS)
Senellart, Pascale; Solomon, Glenn; White, Andrew
2017-11-01
Single photons are a fundamental element of most quantum optical technologies. The ideal single-photon source is an on-demand, deterministic, single-photon source delivering light pulses in a well-defined polarization and spatiotemporal mode, and containing exactly one photon. In addition, for many applications, there is a quantum advantage if the single photons are indistinguishable in all their degrees of freedom. Single-photon sources based on parametric down-conversion are currently used, and while excellent in many ways, scaling to large quantum optical systems remains challenging. In 2000, semiconductor quantum dots were shown to emit single photons, opening a path towards integrated single-photon sources. Here, we review the progress achieved in the past few years, and discuss remaining challenges. The latest quantum dot-based single-photon sources are edging closer to the ideal single-photon source, and have opened new possibilities for quantum technologies.
Entering the Matrix: The Challenge of Regulating Radical Leveling Technologies
2015-12-01
…"Open Source Horizon: 3D Printing," LinuxInsider, May 28, 2014, http://www.linuxinsider.com/story/80519.html; Nozomi Hayase, "Blockchain Revolution: Open Source Democracy for the 99%," openDemocracy UK, August 4, 2014, https://www.opendemocracy.net/ourkingdom/nozomi-hayase/blockchain-revolution-open… The funds raised on Kickstarter: $500,000. A private technology watchdog, the ETC Group, notified the US Department of Agriculture about…
Technology collaboration by means of an open source government
NASA Astrophysics Data System (ADS)
Berardi, Steven M.
2009-05-01
The idea of open source software originally began in the early 1980s, but it never gained widespread support until recently, largely due to the explosive growth of the Internet. Only the Internet has made this kind of concept possible, bringing together millions of software developers from around the world to pool their knowledge. The tremendous success of open source software has prompted many corporations to adopt the culture of open source and thus share information they previously held secret. The government, and specifically the Department of Defense (DoD), could also benefit from adopting an open source culture. In acquiring satellite systems, the DoD often builds walls between program offices, but installing doors between programs can promote collaboration and information sharing. This paper addresses the challenges and consequences of adopting an open source culture to facilitate technology collaboration for DoD space acquisitions. DISCLAIMER: The views presented here are the views of the author, and do not represent the views of the United States Government, United States Air Force, or the Missile Defense Agency.
Managing multicentre clinical trials with open source.
Raptis, Dimitri Aristotle; Mettler, Tobias; Fischer, Michael Alexander; Patak, Michael; Lesurtel, Mickael; Eshmuminov, Dilmurodjon; de Rougemont, Olivier; Graf, Rolf; Clavien, Pierre-Alain; Breitenstein, Stefan
2014-03-01
Multicentre clinical trials are challenged by high administrative burden, data management pitfalls and costs. This leads to reduced enthusiasm and commitment of the physicians involved and thus to a reluctance to conduct multicentre clinical trials. The purpose of this study was to develop a web-based open source platform to support multicentre clinical trials. Using a design science research approach, we developed a web-based, multicentre clinical trial management system on Drupal, an open source software platform distributed under the terms of the General Public License. The system was evaluated by user testing, has successfully supported several completed and ongoing clinical trials, and is available for free download. Open source clinical trial management systems are capable of supporting multicentre clinical trials by enhancing efficiency, quality of data management and collaboration.
ERIC Educational Resources Information Center
Roman, Harry T.
2011-01-01
Street intersections are a source of accidents--for both automobiles and pedestrians. This article presents an intersection challenge that allows students to explore some possible ways to change the traditional intersection. In this challenge, teachers open up the boundaries and allow students to redesign their world. The first step is to help…
ERIC Educational Resources Information Center
Thankachan, Briju; Moore, David Richard
2017-01-01
The use of Free and Open Source Software (FOSS), a subset of Information and Communication Technology (ICT), can reduce the cost of purchasing software. Despite the benefit in the initial purchase price of software, deploying software incurs a total cost that goes beyond the initial purchase price. Total cost is a silent issue of FOSS and can only…
Open Listening: Creative Evolution in Early Childhood Settings
ERIC Educational Resources Information Center
Davies, Bronwyn
2011-01-01
This article sketches out a philosophy and practice of open listening, linking open listening to Bergson's (1998) concept of creative evolution. I draw on examples of small children at play from a variety of sources, including Reggio-Emilia-inspired preschools in Sweden. The article offers a challenge to early childhood educators to listen and to…
Challenges of the Open Source Component Marketplace in the Industry
NASA Astrophysics Data System (ADS)
Ayala, Claudia; Hauge, Øyvind; Conradi, Reidar; Franch, Xavier; Li, Jingyue; Velle, Ketil Sandanger
The reuse of Open Source Software components available on the Internet is playing a major role in the development of Component Based Software Systems. Nevertheless, the special nature of the OSS marketplace has taken the “classical” concept of software reuse based on centralized repositories to a completely different arena based on massive reuse over the Internet. In this paper we provide an overview of the current state of the OSS marketplace, and report preliminary findings about how companies interact with this marketplace to reuse OSS components. The data was gathered from interviews in software companies in Spain and Norway. Based on these results we identify some challenges aimed at improving the industrial reuse of OSS components.
pyLIMA : The first open source microlensing modeling software
NASA Astrophysics Data System (ADS)
Bachelet, Etienne; Street, Rachel; Bozza, Valerio
2018-01-01
Microlensing is highly sensitive to planets beyond the snowline, distributed along the line of sight towards the Galactic Bulge. The WFIRST-AFTA mission should detect about 3000 of these planets and significantly improve our knowledge of planet formation and statistics, complementing results found by transit and radial velocity methods. However, the modeling of microlensing events is challenging in several respects, leading to a highly time-consuming analysis. After a brief summary of these challenges, I will present pyLIMA, the first open source microlensing modeling software. The goals of this software are to be flexible, powerful and user friendly. This presentation will focus on various cases and early results.
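As context for the modeling task pyLIMA addresses, the sketch below evaluates the standard point-source point-lens (PSPL) magnification curve (Paczynski 1986). This is textbook microlensing physics, not pyLIMA's actual API; the event parameters are invented for illustration.

```python
# Minimal PSPL light-curve sketch: magnification vs. time for a lens with
# impact parameter u0, peak time t0, and Einstein crossing time tE.
import numpy as np

def pspl_magnification(t, t0, u0, tE):
    """A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)), with
    u(t) = sqrt(u0^2 + ((t - t0) / tE)^2)."""
    u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

t = np.linspace(-30, 30, 601)                     # days relative to peak
A = pspl_magnification(t, t0=0.0, u0=0.1, tE=20.0)
print(round(A.max(), 2))                          # ~10 for u0 = 0.1
```

Real analyses must also fit blending, finite-source effects, and parallax, which is part of why the modeling is so time-consuming.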
Data Mining Meets HCI: Making Sense of Large Graphs
2012-07-01
…our graph algorithms won the Open Source Software World Challenge, Silver Award. We have released Pegasus as free, open-source software… METIS [77], spectral clustering [108], and the parameter-free "Cross-associations" (CA) [26]; Belief Propagation can also be used for clustering… A number of tools have been developed to support "landscape" views of information, including WebBook and WebForager [23], which use a book metaphor…
Open source tools for large-scale neuroscience.
Freeman, Jeremy
2015-06-01
New technologies for monitoring and manipulating the nervous system promise exciting biology but pose challenges for analysis and computation. Solutions can be found in the form of modern approaches to distributed computing, machine learning, and interactive visualization. But embracing these new technologies will require a cultural shift: away from independent efforts and proprietary methods and toward an open source and collaborative neuroscience.
GODAN Local Farming Challenge 2017 - Encourage Geo-Innovation Solutions for Zero Hunger
NASA Astrophysics Data System (ADS)
Anand, Suchith; Hogan, Patrick; Brovelli, Maria; Schaap, Ben; Musker, Ruthie; Laperrière, André
2017-04-01
The initial ideas for Open Geospatial Science [1] were presented nearly a decade ago. They build upon the proposition of Open Science, which argues that scientific knowledge develops more rapidly and productively if openly shared (as early as is practical in the discovery process). The key ingredients that make Open Geospatial Science possible are enshrined in Open Principles: open source geospatial software, open data, open standards, open educational resources, and open access to research publications. OpenCitySmart [2] is an initiative of Geo for All [3] that aims to develop a suite of tools for city-related infrastructure management (utilities, traffic, services, etc.). Its purpose will be to continually refine and add functionality that not only streamlines operational efficiency but also considers the need for sustainability and quality of urban life. OpenCitySmart employs open solutions to build richer tools that empower organisations and individuals to utilize spatial and non-spatial data alike. This will create opportunities for innovation both globally and locally. As the population of cities grows, the concern of food security will shift from rural to urban areas. Currently, nearly 800 million people, in every corner of the globe, struggle with debilitating hunger and malnutrition. That is one in every nine people, the majority being women and children. The Global Open Data for Agriculture and Nutrition (GODAN) initiative [4] supports the proactive sharing of open data to make information about agriculture and nutrition available, accessible and usable to deal with the urgent challenge of ensuring world food security. A core principle behind GODAN is that a solution to Zero Hunger lies within existing, but often unavailable, agriculture and nutrition data. Through an online survey, GODAN found that the most needed data type across its 430+ partner network was geospatial data. Through the GODAN Europa Challenge we want to bring together researchers and students to work collaboratively on innovative ideas to create change using agriculture and nutrition data. The Europa Challenge is a world challenge, though we use the wisdom of Europe's INSPIRE Directive to guide project development. The Europa Challenge asks the world's best and brightest to deliver solutions serving city needs. With support from the NASA Europa Challenge [5], GODAN is launching a Local Farming Challenge. We welcome students to create innovative ideas that will help tackle the challenges of local farming in growing cities, using some aspect of the OpenCitySmart design and NASA's open source virtual globe technology, WebWorldWind. Ideas may include ways of optimally linking local farming communities directly with potential customers, tools for visualising spatio-temporal aspects of local farming, tools for helping to reduce wastage (for example, by linking with local food banks), and any number of solutions supporting our goal of Zero Hunger. 1. http://www.mdpi.com/journal/ijgi/special_issues/science-applications 2. http://www.geoforall.org/ 3. https://wiki.osgeo.org/wiki/Opencitysmart 4. http://www.godan.info 5. http://eurochallenge.como.polimi.it
NASA Astrophysics Data System (ADS)
Kearns, E. J.
2017-12-01
NOAA's Big Data Project is conducting an experiment in the collaborative distribution of open government data to non-governmental cloud-based systems. Through Cooperative Research and Development Agreements signed in 2015 between NOAA and Amazon Web Services, Google Cloud Platform, IBM, Microsoft Azure, and the Open Commons Consortium, NOAA is distributing open government data to a wide community of potential users. There are a number of significant advantages related to the use of open data on commercial cloud platforms, but through this experiment NOAA is also discovering significant challenges for those stewarding and maintaining NOAA's data resources in support of users in the wider open data ecosystem. Among the challenges that will be discussed are: the need to provide effective interpretation of the data content to enable their use by data scientists from other expert communities; effective maintenance of Collaborators' open data stores through coordinated publication of new data and new versions of older data; the provenance and verification of open data as authentic NOAA-sourced data across multiple management boundaries and analytical tools; and keeping pace with the accelerating expectations of users with regard to improved quality control, data latency, availability, and discoverability. Suggested strategies to address these challenges will also be described.
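One common approach to the provenance challenge noted above (verifying that cloud-hosted files are authentic NOAA-sourced data) is checksum verification against a publisher-supplied manifest. The sketch below is generic and hypothetical; the manifest format and file names are invented for illustration, not taken from NOAA's actual distribution.

```python
# Hypothetical sketch: verify downloaded files against a SHA-256 manifest
# containing "filename<TAB>hexdigest" lines. The manifest format and file
# names are invented for illustration.
import hashlib

def sha256_of(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):   # stream the file in 1 MiB chunks
            h.update(block)
    return h.hexdigest()

def verify(manifest_path):
    with open(manifest_path) as m:
        for line in m:
            name, expected = line.strip().split("\t")
            status = "OK" if sha256_of(name) == expected else "MISMATCH"
            print(f"{name}: {status}")

# verify("noaa_manifest.txt")   # run against downloaded granules
```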
Numerical Analysis of the Cavity Flow subjected to Passive Controls Techniques
NASA Astrophysics Data System (ADS)
Melih Guleren, Kursad; Turk, Seyfettin; Mirza Demircan, Osman; Demir, Oguzhan
2018-03-01
Open-source flow solvers are becoming increasingly popular for the analysis of challenging flow problems in aeronautical and mechanical engineering applications. They are offered under the GNU General Public License and can be run, examined, shared and modified according to users' requirements. SU2 and OpenFOAM are the two most popular open-source solvers in the Computational Fluid Dynamics (CFD) community. In the present study, several passive control methods for high-speed cavity flows are numerically simulated using these open-source flow solvers along with one commercial flow solver, ANSYS/Fluent. The results are compared with the available experimental data. SU2 is seen to predict the mean streamwise velocity satisfactorily, but not the turbulent kinetic energy or the overall averaged sound pressure level (OASPL), whereas OpenFOAM predicts all these parameters at nearly the same levels as ANSYS/Fluent.
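For readers unfamiliar with the OASPL metric used in this comparison, the sketch below shows the standard computation: overall sound pressure level from the root-mean-square pressure fluctuation relative to the 20 µPa reference. The pressure trace here is synthetic, not data from the study.

```python
# OASPL in dB from a pressure-fluctuation time series in pascals.
import numpy as np

P_REF = 20e-6  # reference pressure, 20 micropascals

def oaspl_db(p):
    p = np.asarray(p, dtype=float)
    p_rms = np.sqrt(np.mean((p - p.mean()) ** 2))  # RMS of the fluctuation
    return 20.0 * np.log10(p_rms / P_REF)

t = np.linspace(0, 0.1, 10000)
p = 200.0 * np.sin(2 * np.pi * 500 * t)   # synthetic 200 Pa tone
print(round(oaspl_db(p), 1))              # ~137.0 dB
```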
M2Lite: An Open-source, Light-weight, Pluggable and Fast Proteome Discoverer MSF to mzIdentML Tool.
Aiyetan, Paul; Zhang, Bai; Chen, Lily; Zhang, Zhen; Zhang, Hui
2014-04-28
Proteome Discoverer is one of many tools used for protein database search and peptide-to-spectrum assignment in mass spectrometry-based proteomics. However, the inadequacy of conversion tools makes it challenging to compare and integrate its results with those of other analytical tools. Here we present M2Lite, an open-source, light-weight, easily pluggable and fast conversion tool. M2Lite converts Proteome Discoverer derived MSF files to the proteomics community-defined standard, the mzIdentML file format. M2Lite's source code is available as open-source at https://bitbucket.org/paiyetan/m2lite/src and its compiled binaries and documentation can be freely downloaded at https://bitbucket.org/paiyetan/m2lite/downloads.
Embracing Open Source for NASA's Earth Science Data Systems
NASA Technical Reports Server (NTRS)
Baynes, Katie; Pilone, Dan; Boller, Ryan; Meyer, David; Murphy, Kevin
2017-01-01
The overarching purpose of NASA's Earth Science program is to develop a scientific understanding of Earth as a system. Scientific knowledge is most robust and actionable when resulting from transparent, traceable, and reproducible methods. Reproducibility includes open access to the data as well as the software used to arrive at results. Additionally, software that is custom-developed for NASA should be open to the greatest degree possible, to enable re-use across Federal agencies, reduce overall costs to the government, remove barriers to innovation, and promote consistency through the use of uniform standards. Finally, Open Source Software (OSS) practices facilitate collaboration between agencies and the private sector. To best meet these ends, NASA's Earth Science Division promotes the full and open sharing of not only all data, metadata, products, information, documentation, models, images, and research results but also the source code used to generate, manipulate and analyze them. This talk focuses on the challenges of open sourcing NASA-developed software within ESD and the growing pains associated with establishing policies, running the gamut from tracking issues and properly documenting build processes to engaging the open source community, maintaining internal compliance, and accepting contributions from external sources. This talk also covers the adoption of existing open source technologies and standards to enhance our custom solutions and our contributions back to the community. Finally, we will introduce the most recent OSS contributions from the NASA Earth Science program and promote these projects for wider community review and adoption.
The Privacy and Security Implications of Open Data in Healthcare.
Kobayashi, Shinji; Kane, Thomas B; Paton, Chris
2018-04-22
The International Medical Informatics Association (IMIA) Open Source Working Group (OSWG) initiated a group discussion to discuss current privacy and security issues in the open data movement in the healthcare domain from the perspective of the OSWG membership. Working group members independently reviewed the recent academic and grey literature and sampled a number of current large-scale open data projects to inform the working group discussion. This paper presents an overview of open data repositories and a series of short case reports to highlight relevant issues present in the recent literature concerning the adoption of open approaches to sharing healthcare datasets. Important themes that emerged included data standardisation, the inter-connected nature of the open source and open data movements, and how publishing open data can impact on the ethics, security, and privacy of informatics projects. The open data and open source movements in healthcare share many common philosophies and approaches including developing international collaborations across multiple organisations and domains of expertise. Both movements aim to reduce the costs of advancing scientific research and improving healthcare provision for people around the world by adopting open intellectual property licence agreements and codes of practice. Implications of the increased adoption of open data in healthcare include the need to balance the security and privacy challenges of opening data sources with the potential benefits of open data for improving research and healthcare delivery.
Common Approach to Geoprocessing of Uav Data across Application Domains
NASA Astrophysics Data System (ADS)
Percivall, G. S.; Reichardt, M.; Taylor, T.
2015-08-01
UAVs are a disruptive technology bringing new geographic data and information to many application domains. UASs are similar to other geographic imagery systems, so existing frameworks are applicable. But the diversity of UAV platforms, along with the diversity of available sensors, presents challenges in the processing and creation of geospatial products. Efficient processing and dissemination of the data is achieved using software and systems that implement open standards. The challenges identified point to the need both to use existing standards and to extend them. Results from the use of the OGC Sensor Web Enablement set of standards are presented. Next steps in the progress of UAVs and UASs may follow the path of open data, open source and open standards.
ERIC Educational Resources Information Center
Roman, Harry T.
2011-01-01
It is important to let students see the value of mathematics in design--and how mathematics lends perspective to problem solving. In this article, the author describes a water-service challenge which enables students to design a water utility system that uses surface runoff into an open reservoir as the potable water source. This challenge…
Building an Open Source Framework for Integrated Catchment Modeling
NASA Astrophysics Data System (ADS)
Jagers, B.; Meijers, E.; Villars, M.
2015-12-01
In order to develop effective strategies and associated policies for environmental management, we need to understand the dynamics of the natural system as a whole and the human role therein. This understanding is gained by comparing our mental model of the world with observations from the field. However, to properly understand the system we should look at the dynamics of water, sediments, water quality, and ecology throughout the whole system, from catchment to coast, both at the surface and in the subsurface. Numerical models are indispensable in helping us understand the interactions of the overall system, but we need to be able to update and adjust them to improve our understanding and test our hypotheses. To support researchers around the world with this challenging task, we began a few years ago to develop a new open source modeling environment, DeltaShell, that integrates distributed hydrological models with 1D, 2D, and 3D hydraulic models, including generic components for tracking sediment, water quality, and ecological quantities throughout the hydrological cycle. The open source approach, combined with a modular design based on open standards that allows for easy adjustment and expansion as demands and knowledge grow, provides an ideal starting point for addressing challenging integrated environmental questions.
Ammonia losses and nitrogen partitioning at the southern High Plains open lot dairy
USDA-ARS?s Scientific Manuscript database
Animal agriculture is a significant source of ammonia (NH3). Cattle excrete most ingested nitrogen (N); most urinary N is converted to NH3, volatilized and lost to the atmosphere. Open lot dairies on the southern High Plains are a growing industry and face environmental challenges as well as reporti...
Maintaining Quality and Confidence in Open-Source, Evolving Software: Lessons Learned with PFLOTRAN
NASA Astrophysics Data System (ADS)
Frederick, J. M.; Hammond, G. E.
2017-12-01
Software evolution in an open-source framework poses a major challenge to a geoscientific simulator, but when properly managed, the pay-off can be enormous for both the developers and the community at large. Developers must juggle implementing new scientific process models, adopting increasingly efficient numerical methods and programming paradigms, and changing funding sources (or a total lack of funding), while also ensuring that legacy code remains functional and reported bugs are fixed in a timely manner. With robust software engineering and a plan for long-term maintenance, a simulator can evolve over time, incorporating and leveraging many advances in the computational and domain sciences. In this positive light, what practices in software engineering and code maintenance can be employed within open-source development to maximize the positive aspects of software evolution and community contributions while minimizing its negative side effects? This presentation discusses steps taken in the development of PFLOTRAN (www.pflotran.org), an open source, massively parallel subsurface simulator for multiphase, multicomponent, and multiscale reactive flow and transport processes in porous media. As PFLOTRAN's user base and development team continue to grow, it has become increasingly important to implement strategies which ensure sustainable software development while maintaining software quality and community confidence. In this presentation, we will share our experiences and "lessons learned" within the context of our open-source development framework and community engagement efforts. Topics discussed will include how we've leveraged standard software engineering principles, such as coding standards, version control, and automated testing, as well as the unique advantages of object-oriented design in process model coupling, to ensure software quality and confidence. We will also be prepared to discuss the major challenges faced by most open-source software teams, such as on-boarding new developers or one-time contributions, dealing with competitors or lookie-loos, and other downsides of complete transparency, as well as our approach to community engagement, including a user group email list, hosting short courses and workshops for new users, and maintaining a website. SAND2017-8174A
2014-09-01
…2.1.3 Challenges in the Model: there are inherent challenges with any model that implements… open source middleware originally maintained by Willow Garage [36] and now managed by the Open Source Robotics Foundation [37]. It provides a framework for…
OSG-GEM: Gene Expression Matrix Construction Using the Open Science Grid
Poehlman, William L.; Rynge, Mats; Branton, Chris; Balamurugan, D.; Feltus, Frank A.
2016-01-01
High-throughput DNA sequencing technology has revolutionized the study of gene expression while introducing significant computational challenges for biologists. These computational challenges include access to sufficient computer hardware and functional data processing workflows. Both these challenges are addressed with our scalable, open-source Pegasus workflow for processing high-throughput DNA sequence datasets into a gene expression matrix (GEM) using computational resources available to U.S.-based researchers on the Open Science Grid (OSG). We describe the usage of the workflow (OSG-GEM), discuss workflow design, inspect performance data, and assess accuracy in mapping paired-end sequencing reads to a reference genome. A target OSG-GEM user is proficient with the Linux command line and possesses basic bioinformatics experience. The user may run this workflow directly on the OSG or adapt it to novel computing environments. PMID:27499617
An Open-Source Standard T-Wave Alternans Detector for Benchmarking.
Khaustov, A; Nemati, S; Clifford, Gd
2008-09-14
We describe an open source algorithm suite for T-Wave Alternans (TWA) detection and quantification. The software consists of Matlab implementations of the widely used Spectral Method and Modified Moving Average (MMA) method, with libraries to read both WFDB and ASCII data under Windows and Linux. The software suite can run in batch mode or with a provided graphical user interface to aid waveform exploration. It was calibrated using an open source TWA model, described in a partner paper [1] by Clifford and Sameni. For the PhysioNet/CinC Challenge 2008 we obtained a score of 0.881 for the Spectral Method and 0.400 for the MMA method. However, our objective was not to provide the best TWA detector, but rather a basis for detailed discussion of algorithms.
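As a rough illustration of what the Spectral Method computes, the following sketch follows the common textbook description: stack aligned T-wave segments, take the FFT across beats, and compare the 0.5 cycles/beat bin against a noise band. It is a simplified stand-in, not the suite's Matlab code; details such as windowing and beat alignment are omitted.

```python
# Simplified Spectral Method for T-wave alternans on synthetic data.
import numpy as np

def twa_spectral(beats, noise_band=(0.33, 0.48)):
    """beats: (n_beats, n_samples) array of aligned T-wave segments (uV).
    Returns (alternans voltage estimate in uV, k-score)."""
    beats = np.asarray(beats, dtype=float)
    n = beats.shape[0]
    spec = np.abs(np.fft.rfft(beats, axis=0) / n) ** 2   # power per beat-frequency
    freqs = np.fft.rfftfreq(n, d=1.0)                    # cycles per beat
    agg = spec.mean(axis=1)                              # average over samples
    alt = agg[np.argmin(np.abs(freqs - 0.5))]            # power at 0.5 cycles/beat
    band = (freqs >= noise_band[0]) & (freqs <= noise_band[1])
    mu, sigma = agg[band].mean(), agg[band].std()
    v_alt = np.sqrt(max(alt - mu, 0.0))
    k = (alt - mu) / sigma if sigma > 0 else float("inf")
    return v_alt, k

# Synthetic test: 128 beats with a 25 uV ABAB alternans plus noise.
rng = np.random.default_rng(0)
beats = rng.normal(0.0, 10.0, (128, 50))
beats += 25.0 * (-1.0) ** np.arange(128)[:, None]
print(twa_spectral(beats))   # v_alt near 25 uV, large k-score
```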
Early Alert of Academically At-Risk Students: An Open Source Analytics Initiative
ERIC Educational Resources Information Center
Jayaprakash, Sandeep M.; Moody, Erik W.; Lauría, Eitel J. M.; Regan, James R.; Baron, Joshua D.
2014-01-01
The Open Academic Analytics Initiative (OAAI) is a collaborative, multi-year grant program aimed at researching issues related to the scaling up of learning analytics technologies and solutions across all of higher education. The paper describes the goals and objectives of the OAAI, depicts the process and challenges of collecting, organizing and…
Turning a remotely controllable observatory into a fully autonomous system
NASA Astrophysics Data System (ADS)
Swindell, Scott; Johnson, Chris; Gabor, Paul; Zareba, Grzegorz; Kubánek, Petr; Prouza, Michael
2014-08-01
We describe the complex process needed to turn an existing, old, operational observatory - the Steward Observatory's 61" Kuiper Telescope - into a fully autonomous system that observes without a human observer. For this purpose, we employed RTS2 [1], an open-source, Linux-based observatory control system, together with other open-source programs and tools (GNU compilers, the Python language for scripting, JQuery UI for the Web user interface). This presentation provides a guide, with time estimates, for newcomers to the field taking on such challenging tasks as fully autonomous observatory operations.
Review on open source operating systems for internet of things
NASA Astrophysics Data System (ADS)
Wang, Zhengmin; Li, Wei; Dong, Huiliang
2017-08-01
The Internet of Things (IoT) is an environment in which everyday devices become smart and connected. The IoT is growing rapidly; it is an integrated system of uniquely identifiable communicating devices which exchange information in a connected network to provide extensive services. IoT devices have very limited memory, computational power, and power supply, and traditional operating systems (OS) cannot meet their needs. In this paper, we therefore analyze the challenges of IoT OSs and survey applicable open source OSs.
NASA Astrophysics Data System (ADS)
Sturmberg, Björn C. P.; Dossou, Kokou B.; Lawrence, Felix J.; Poulton, Christopher G.; McPhedran, Ross C.; Martijn de Sterke, C.; Botten, Lindsay C.
2016-05-01
We describe EMUstack, an open-source implementation of the Scattering Matrix Method (SMM) for solving field problems in layered media. The fields inside nanostructured layers are described in terms of Bloch modes that are found using the Finite Element Method (FEM). Direct access to these modes allows the physical intuition of thin film optics to be extended to complex structures. The combination of the SMM and the FEM makes EMUstack ideally suited for studying lossy, high-index contrast structures, which challenge conventional SMMs.
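The layer-stacking step at the heart of any Scattering Matrix Method is the Redheffer star product. The numpy sketch below shows that operation in generic 2x2 block form; it illustrates the method itself and is not EMUstack's API, and the toy interface values are invented.

```python
# Redheffer star product: combine two layer scattering matrices, each given
# as blocks (S11, S12, S21, S22) relating incoming to outgoing amplitudes.
import numpy as np

def redheffer_star(SA, SB):
    A11, A12, A21, A22 = SA
    B11, B12, B21, B22 = SB
    I = np.eye(A11.shape[0], dtype=complex)
    inv1 = np.linalg.inv(I - B11 @ A22)
    inv2 = np.linalg.inv(I - A22 @ B11)
    return (A11 + A12 @ inv1 @ B11 @ A21,   # total reflection, one side
            A12 @ inv1 @ B12,               # transmission through the stack
            B21 @ inv2 @ A21,               # transmission, other direction
            B22 + B21 @ inv2 @ A22 @ B12)   # total reflection, other side

# Toy check: stack two identical lossless single-mode interfaces.
r, t = 0.2j, np.sqrt(0.96)                  # |r|^2 + |t|^2 = 1
S = tuple(np.array([[v]], dtype=complex) for v in (r, t, t, r))
S11, S12, _, _ = redheffer_star(S, S)
print(abs(S11)**2 + abs(S12)**2)            # ~1.0: energy is conserved
```

Formulating the stack this way, rather than multiplying transfer matrices, is what keeps the method numerically stable for lossy, high-index-contrast layers.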
Isolated Open Rotor Noise Prediction Assessment Using the F31A31 Historical Blade Set
NASA Technical Reports Server (NTRS)
Nark, Douglas M.; Jones, William T.; Boyd, D. Douglas, Jr.; Zawodny, Nikolas S.
2016-01-01
In an effort to address next-generation fuel efficiency and environmental impact concerns for aviation, open rotor propulsion systems have received renewed interest. However, maintaining high propulsive efficiency while simultaneously meeting noise goals has been one of the challenges in making open rotor propulsion a viable option. Improvements in prediction tools and design methodologies have opened the design space for next generation open rotor designs that satisfy these challenging objectives. As such, validation of aerodynamic and acoustic prediction tools has been an important aspect of open rotor research efforts. This paper describes validation efforts of a combined computational fluid dynamics and Ffowcs Williams and Hawkings equation methodology for open rotor aeroacoustic modeling. Performance and acoustic predictions were made for a benchmark open rotor blade set and compared with measurements over a range of rotor speeds and observer angles. Overall, the results indicate that the computational approach is acceptable for assessing low-noise open rotor designs. Additionally, this approach may be used to provide realistic incident source fields for acoustic shielding/scattering studies on various aircraft configurations.
An open access database for the evaluation of heart sound algorithms.
Liu, Chengyu; Springer, David; Li, Qiao; Moody, Benjamin; Juan, Ricardo Abad; Chorro, Francisco J; Castells, Francisco; Roig, José Millet; Silva, Ikaro; Johnson, Alistair E W; Syed, Zeeshan; Schmidt, Samuel E; Papadaniil, Chrysa D; Hadjileontiadis, Leontios; Naseri, Hosein; Moukadem, Ali; Dieterlen, Alain; Brandt, Christian; Tang, Hong; Samieinasab, Maryam; Samieinasab, Mohammad Reza; Sameni, Reza; Mark, Roger G; Clifford, Gari D
2016-12-01
In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and has been reported to have the potential value to detect pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected from a variety of clinical or nonclinical (such as in-home visits) environments and equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand corrected annotations for different heart sound states, the scoring mechanism, and associated open source code are provided. In addition, several potential benefits from the public heart sound database are discussed.
Open source modular ptosis crutch for the treatment of myasthenia gravis.
Saidi, Trust; Sivarasu, Sudesh; Douglas, Tania S
2018-02-01
Pharmacologic treatment of Myasthenia Gravis presents challenges due to poor tolerability in some patients. Conventional ptosis crutches have limitations such as interference with blinking which causes ocular surface drying, and frequent irritation of the eyes. To address this problem, a modular and adjustable ptosis crutch for elevating the upper eyelid in Myasthenia Gravis patients has been proposed as a non-surgical and low-cost solution. Areas covered: This paper reviews the literature on the challenges in the treatment of Myasthenia Gravis globally and focuses on a modular and adjustable ptosis crutch that has been developed by the Medical Device Laboratory at the University of Cape Town. Expert commentary: The new medical device has potential as a simple, effective and unobtrusive solution to elevate the drooping upper eyelid(s) above the visual axis without the need for medication and surgery. Access to the technology is provided through an open source platform which makes it available globally. Open access provides opportunities for further open innovation to address the current limitations of the device, ultimately for the benefit not only of people suffering from Myasthenia Gravis but also of those with ptosis from other aetiologies.
Web Server Security on Open Source Environments
NASA Astrophysics Data System (ADS)
Gkoutzelis, Dimitrios X.; Sardis, Manolis S.
Administering critical resources has never been more difficult than it is today. In a changing world of software innovation where major changes occur on a daily basis, it is crucial for webmasters and server administrators to shield their data against an unknown arsenal of attacks in the hands of attackers. Until recently this kind of defense was a privilege of the few; under-budgeted, low-cost solutions left defenders vulnerable to ever-evolving attack methods. Luckily, the digital revolution of the past decade left its mark, changing the way we approach security forever: open source infrastructure today covers all the prerequisites for a secure web environment in a way we could never have imagined fifteen years ago. Online security of large corporations, military and government bodies is more and more handled by open source applications, thus driving the technological trend of the 21st century of adopting open solutions to e-commerce and privacy issues. This paper describes substantial security precautions for facing privacy and authentication issues in a totally open source web environment. Our goal is to state the best-known problems in data handling and to propose the most appealing techniques for facing these challenges through an open solution.
Numerical Simulation of Liquids Draining From a Tank Using OpenFOAM
NASA Astrophysics Data System (ADS)
Sakri, Fadhilah Mohd; Sukri Mat Ali, Mohamed; Zaki Shaikh Salim, Sheikh Ahmad; Muhamad, Sallehuddin
2017-08-01
Accurate simulation of liquid draining is a challenging task. It involves two-phase flow, i.e. liquid and air. In this study, draining a liquid from a cylindrical tank is numerically simulated using OpenFOAM. OpenFOAM is an open source CFD package that has become increasingly popular in both academia and industry. Comparisons with theory and with previously published data confirmed that OpenFOAM is able to simulate liquid draining very well. This is done using the gas-liquid interface solver available in the standard library of OpenFOAM. Additionally, this study was also able to explain the flow physics of the draining tank.
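The theoretical benchmark for a draining tank is the quasi-steady Torricelli solution; as a worked check (with illustrative dimensions, not the paper's), the drain time follows from integrating dh/dt = -(a/A)·sqrt(2gh):

```python
import math

def drain_time(A, a, h0, g=9.81):
    """Torricelli/quasi-steady estimate: dh/dt = -(a/A) * sqrt(2*g*h)
    integrates to t = (A/a) * sqrt(2*h0/g)."""
    return (A / a) * math.sqrt(2 * h0 / g)

# Illustrative numbers only: 0.5 m diameter tank, 20 mm orifice, 1 m head.
A = math.pi * 0.25 ** 2   # tank cross-section (m^2)
a = math.pi * 0.01 ** 2   # orifice area (m^2)
print(f"theoretical drain time: {drain_time(A, a, 1.0):.1f} s")
```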
Gichoya, Judy W; Kohli, Marc; Ivange, Larry; Schmidt, Teri S; Purkayastha, Saptarshi
2018-05-10
Open-source development can provide a platform for innovation by seeking feedback from community members as well as providing tools and infrastructure to test new standards. Vendors of proprietary systems may delay adoption of new standards until there are sufficient incentives, such as legal mandates or financial incentives, to encourage/mandate adoption. Moreover, open-source systems in healthcare have been widely adopted in low- and middle-income countries and can be used to bridge gaps that exist in global health radiology. Since 2011, the authors, along with a community of open-source contributors, have worked on developing an open-source radiology information system (RIS) across two communities, OpenMRS and LibreHealth. The main purpose of the RIS is to implement core radiology workflows, on which others can build and test new radiology standards. This work has resulted in three major releases of the system, with current architectural changes driven by changing technology, development of new standards in health and imaging informatics, and changing user needs. At their core, both these communities are focused on building general-purpose EHR systems, but based on user contributions from the fringes, we have been able to create an innovative system that has been used by hospitals and clinics in four different countries. We provide an overview of the history of the LibreHealth RIS, the architecture of the system, an overview of standards integration, the challenges of developing an open-source product, and future directions. Our goal is to attract more participation and involvement to further develop the LibreHealth RIS into an Enterprise Imaging System that can be used in other clinical imaging domains, including pathology and dermatology.
NASA's Earth Imagery Service as Open Source Software
NASA Astrophysics Data System (ADS)
De Cesare, C.; Alarcon, C.; Huang, T.; Roberts, J. T.; Rodriguez, J.; Cechini, M. F.; Boller, R. A.; Baynes, K.
2016-12-01
The NASA Global Imagery Browse Services (GIBS) is a software system that provides access to an archive of historical and near-real-time Earth imagery from NASA-supported satellite instruments. The imagery itself is open data, and is accessible via standards such as the Open Geospatial Consortium (OGC)'s Web Map Tile Service (WMTS) protocol. GIBS includes three core software projects: The Imagery Exchange (TIE), OnEarth, and the Meta Raster Format (MRF) project. These projects are developed using a variety of open source software, including: Apache HTTPD, GDAL, Mapserver, Grails, Zookeeper, Eclipse, Maven, git, and Apache Commons. TIE has recently been released for open source, and is now available on GitHub. OnEarth, MRF, and their sub-projects have been on GitHub since 2014, and the MRF project in particular receives many external contributions from the community. Our software has been successful beyond the scope of GIBS: the PO.DAAC State of the Ocean and COVERAGE visualization projects reuse components from OnEarth. The MRF source code has recently been incorporated into GDAL, which is a core library in many widely-used GIS applications such as QGIS and GeoServer. This presentation will describe the challenges faced in incorporating open software and open data into GIBS, and also showcase GIBS as a platform on which scientists and the general public can build their own applications.
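Because the imagery is served over plain WMTS, fetching a tile needs nothing beyond an HTTP request; a minimal sketch with the `requests` library follows. The URL follows GIBS's publicly documented REST pattern, but treat the specific layer, date, and tile indices as illustrative assumptions.

```python
import requests

# Documented GIBS WMTS REST pattern (values here are illustrative):
# /wmts/{projection}/best/{layer}/default/{date}/{tile_matrix_set}/{z}/{row}/{col}.jpg
url = ("https://gibs.earthdata.nasa.gov/wmts/epsg4326/best/"
       "MODIS_Terra_CorrectedReflectance_TrueColor/default/"
       "2016-12-01/250m/6/13/36.jpg")

resp = requests.get(url, timeout=30)
resp.raise_for_status()
with open("gibs_tile.jpg", "wb") as f:
    f.write(resp.content)   # one JPEG map tile, ready for any WMTS client
```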
How to Perform a Literature Review with Free and Open Source Software
ERIC Educational Resources Information Center
Pearce, Joshua M.
2018-01-01
As it provides a firm foundation for advancing knowledge, a solid literature review is a critical feature of any academic investigation. Yet, there are several challenges in performing literature reviews including: (1) lack of access to the literature because of costs, (2) fracturing of the literature into many sources, lack of access and…
National Synchrotron Light Source II
Hill, John; Dooryhee, Eric; Wilkins, Stuart; Miller, Lisa; Chu, Yong
2018-01-16
NSLS-II is a synchrotron light source helping researchers explore solutions to the grand energy challenges faced by the nation, and open up new regimes of scientific discovery that will pave the way to discoveries in physics, chemistry, and biology, advances that will ultimately enhance national security and help drive the development of abundant, safe, and clean energy technologies.
psiTurk: An open-source framework for conducting replicable behavioral experiments online.
Gureckis, Todd M; Martin, Jay; McDonnell, John; Rich, Alexander S; Markant, Doug; Coenen, Anna; Halpern, David; Hamrick, Jessica B; Chan, Patricia
2016-09-01
Online data collection has begun to revolutionize the behavioral sciences. However, conducting carefully controlled behavioral experiments online introduces a number of new technical and scientific challenges. The project described in this paper, psiTurk, is an open-source platform which helps researchers develop experiment designs which can be conducted over the Internet. The tool primarily interfaces with Amazon's Mechanical Turk, a popular crowd-sourcing labor market. This paper describes the basic architecture of the system and introduces new users to the overall goals. psiTurk aims to reduce the technical hurdles for researchers developing online experiments while improving the transparency and collaborative nature of the behavioral sciences.
NASA Astrophysics Data System (ADS)
Darch, Peter T.; Sands, Ashley E.
2016-06-01
Sky surveys, such as the Sloan Digital Sky Survey (SDSS) and the Large Synoptic Survey Telescope (LSST), generate data on an unprecedented scale. While many scientific projects span a few years from conception to completion, sky surveys are typically on the scale of decades. This paper focuses on critical challenges arising from long timescales, and how sky surveys address these challenges. We present findings from a study of LSST, comprising interviews (n=58) and observation. Conceived in the 1990s, the LSST Corporation was formed in 2003, and construction began in 2014. LSST will commence data collection operations in 2022 for ten years. One challenge arising from this long timescale is uncertainty about future needs of the astronomers who will use these data many years hence. Sources of uncertainty include scientific questions to be posed, astronomical phenomena to be studied, and tools and practices these astronomers will have at their disposal. These uncertainties are magnified by the rapid technological and scientific developments anticipated between now and the start of LSST operations. LSST is implementing a range of strategies to address these challenges. Some strategies involve delaying resolution of uncertainty, placing this resolution in the hands of future data users. Other strategies aim to reduce uncertainty by shaping astronomers’ data analysis practices so that these practices will integrate well with LSST once operations begin. One approach that exemplifies both types of strategy is the decision to make LSST data management software open source, even now as it is being developed. This policy will enable future data users to adapt this software to evolving needs. In addition, LSST intends for astronomers to start using this software well in advance of 2022, thereby embedding LSST software and data analysis approaches in the practices of astronomers. These findings strengthen arguments for making the software supporting sky surveys available as open source. Such arguments usually focus on reuse potential of software, and enhancing replicability of analyses. In this case, however, open source software also promises to mitigate the critical challenge of anticipating the needs of future data users.
Dessì, Alessia; Pani, Danilo; Raffo, Luigi
2014-08-01
Non-invasive fetal electrocardiography is still an open research issue. The recent publication of an annotated dataset on Physionet providing four-channel non-invasive abdominal ECG traces promoted an international challenge on the topic. Starting from that dataset, an algorithm for the identification of the fetal QRS complexes from a reduced number of electrodes and without any a priori information about the electrode positioning was developed, placing among the top ten best-performing open-source algorithms presented at the challenge. In this paper, an improved version of that algorithm is presented and evaluated using the same challenge metrics. It is mainly based on the subtraction of the maternal QRS complexes in every lead, obtained by synchronized averaging of morphologically similar complexes, the filtering of the maternal P and T waves, and the enhancement of the fetal QRS through independent component analysis (ICA) applied on the processed signals before a final fetal QRS detection stage. The RR time series of both the mother and the fetus are analyzed to enhance pseudoperiodicity, with the aim of correcting wrong annotations. The algorithm was designed and extensively evaluated on the open dataset A (N = 75), and finally evaluated on datasets B (N = 100) and C (N = 272) to obtain mean scores over data not used during the algorithm development. Compared to the results achieved by the previous version of the algorithm, the current version would rank 5th and 4th in the final rankings for events 1 and 2, reserved for the open-source challenge entries, taking into account both official and unofficial entrants. On dataset A, the algorithm achieves 0.982 median sensitivity and 0.976 median positive predictivity.
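As a sketch of the maternal-QRS cancellation step described above, the routine below subtracts a synchronized-average template at known maternal R-peak positions. It is a generic illustration under simplifying assumptions (one fixed template per lead, a peak list `m_peaks` already detected), not the authors' implementation.

```python
import numpy as np

def subtract_maternal(ecg, m_peaks, fs, half_win=0.1):
    """Cancel maternal QRS complexes in one abdominal lead by subtracting
    a synchronized average (template) at every detected maternal R peak."""
    w = int(half_win * fs)
    segs = [ecg[p - w:p + w] for p in m_peaks if w <= p < len(ecg) - w]
    template = np.mean(segs, axis=0)            # averaged maternal beat
    residual = ecg.copy()
    for p in m_peaks:
        if w <= p < len(ecg) - w:
            residual[p - w:p + w] -= template   # leaves fetal ECG + noise
    return residual
```

ICA across the residual leads would then enhance the fetal QRS ahead of the final detection stage, as the abstract describes.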
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Basements in climates 6 and 7 can account for a significant fraction of a home's total heat loss when fully conditioned. These foundations are also a source of moisture, with convection in open block cavities redistributing water from the base of the wall, usually during the heating season. Even when block cavities are capped, the cold foundation concrete can act as a moisture source for wood rim joist components that are in contact with the wall. As below-grade basements are increasingly retrofitted for habitable space, cold foundation walls pose increased challenges for moisture durability, energy use, and occupant comfort. To address this challenge, the NorthernSTAR Building America Partnership evaluated a retrofit insulation strategy for foundations that is designed for use with open-core concrete block foundation walls. The three main goals were to improve moisture control, improve occupant comfort, and reduce heat loss.
Ezra Tsur, Elishai
2017-01-01
Databases are imperative for research in bioinformatics and computational biology. Current challenges in database design include data heterogeneity and context-dependent interconnections between data entities. These challenges drove the development of unified data interfaces and specialized databases. The curation of specialized databases is an ever-growing challenge due to the introduction of new data sources and the emergence of new relational connections between established datasets. Here, an open-source framework for the curation of specialized databases is proposed. The framework supports user-designed models of data encapsulation, object persistency and structured interfaces to local and external data sources such as MalaCards, Biomodels and the National Centre for Biotechnology Information (NCBI) databases. The proposed framework was implemented using Java as the development environment, EclipseLink as the data persistency agent and Apache Derby as the database manager. Syntactic analysis was based on the J3D, jsoup, Apache Commons and w3c.dom open libraries. Finally, the construction of a specialized database for aneurysm-associated vascular diseases is demonstrated. This database contains 3-dimensional geometries of aneurysms, patients' clinical information, articles, biological models, related diseases and our recently published model of aneurysms' risk of rupture. The framework is available at: http://nbel-lab.com.
Open Source GIS based integrated watershed management
NASA Astrophysics Data System (ADS)
Byrne, J. M.; Lindsay, J.; Berg, A. A.
2013-12-01
Optimal land and water management to address future and current resource stresses and allocation challenges requires the development of state-of-the-art geomatics and hydrological modelling tools. Future hydrological modelling tools should be high-resolution and process-based, with real-time capability to assess changing resource issues critical to short-, medium- and long-term environmental management. The objective here is to merge two renowned, well-published resource modeling programs to create an open-source toolbox for integrated land and water management applications. This work will facilitate a greatly increased efficiency in land and water resource security, management and planning. Following an 'open-source' philosophy, the tools will be computer platform independent with source code freely available, maximizing knowledge transfer and the global value of the proposed research. The envisioned set of water resource management tools will be housed within 'Whitebox Geospatial Analysis Tools'. Whitebox is an open-source geographical information system (GIS) developed by Dr. John Lindsay at the University of Guelph. The emphasis of the Whitebox project has been to develop a user-friendly interface for advanced spatial analysis in environmental applications. The plugin architecture of the software is ideal for the tight integration of spatially distributed models and spatial analysis algorithms such as those contained within the GENESYS suite. Open-source development extends knowledge and technology transfer to a broad range of end-users and builds Canadian capability to address complex resource management problems with better tools and expertise for managers in Canada and around the world. GENESYS (Generate Earth Systems Science input) is an innovative, efficient, high-resolution hydro- and agro-meteorological model for complex terrain watersheds developed under the direction of Dr. James Byrne. GENESYS is an outstanding research and applications tool to address challenging resource management issues in industry, government and nongovernmental agencies. Current research and analysis tools were developed to manage meteorological, climatological, and land and water resource data efficiently at high resolution in space and time. The deliverable for this work is a Whitebox-GENESYS open-source resource management capacity with routines for GIS-based watershed management including water in agriculture and food production. We are adding urban water management routines through GENESYS in 2013-15 with an engineering PhD candidate. Both Whitebox-GAT and GENESYS are already well-established tools. The proposed research will combine these products to create an open-source geomatics-based water resource management tool that is revolutionary in both capacity and availability to a wide array of Canadian and global users.
ERIC Educational Resources Information Center
Schonfeld, Roger C.; Fenton, Eileen Gifford
2005-01-01
Without question, the ongoing transition from print to electronic periodicals has challenged librarians to rethink their strategies. While some effects of this change have been immediately apparent--greater breadth of material, easier access, exposure to new sources, publisher package deals, and open access--the broader outcomes on library…
NASA Astrophysics Data System (ADS)
Weatherill, Graeme; Garcia, Julio; Poggi, Valerio; Chen, Yen-Shin; Pagani, Marco
2016-04-01
The Global Earthquake Model (GEM) has, since its inception in 2009, made many contributions to the practice of seismic hazard modeling in different regions of the globe. The OpenQuake-engine (hereafter referred to simply as OpenQuake), GEM's open-source software for calculation of earthquake hazard and risk, has found application in many countries, spanning a diversity of tectonic environments. GEM itself has produced a database of national and regional seismic hazard models, harmonizing into OpenQuake's own definition the varied seismogenic sources found therein. The characterization of active faults in probabilistic seismic hazard analysis (PSHA) is at the centre of this process, motivating many of the developments in OpenQuake and presenting hazard modellers with the challenge of reconciling seismological, geological and geodetic information for the different regions of the world. Faced with these challenges, and from the experience gained in the process of harmonizing existing models of seismic hazard, four critical issues are addressed. The challenge GEM has faced in the development of software is how to define a representation of an active fault (both in terms of geometry and earthquake behaviour) that is sufficiently flexible to adapt to different tectonic conditions and levels of data completeness. By exploring the different fault typologies supported by OpenQuake we illustrate how seismic hazard calculations can, and do, take into account complexities such as geometrical irregularity of faults in the prediction of ground motion, highlighting some of the potential pitfalls and inconsistencies that can arise. This exploration leads to the second main challenge in active fault modeling, what elements of the fault source model impact most upon the hazard at a site, and when does this matter? Through a series of sensitivity studies we show how different configurations of fault geometry, and the corresponding characterisation of near-fault phenomena (including hanging wall and directivity effects) within modern ground motion prediction equations, can have an influence on the seismic hazard at a site. Yet we also illustrate the conditions under which these effects may be partially tempered when considering the full uncertainty in rupture behaviour within the fault system. The third challenge is the development of efficient means for representing both aleatory and epistemic uncertainties from active fault models in PSHA. In implementing state-of-the-art seismic hazard models into OpenQuake, such as those recently undertaken in California and Japan, new modeling techniques are needed that redefine how we treat interdependence of ruptures within the model (such as mutual exclusivity), and the propagation of uncertainties emerging from geology. Finally, we illustrate how OpenQuake, and GEM's additional toolkits for model preparation, can be applied to address long-standing issues in active fault modeling in PSHA. These include constraining the seismogenic coupling of a fault and the partitioning of seismic moment between the active fault surfaces and the surrounding seismogenic crust. We illustrate some of the possible roles that geodesy can play in the process, but highlight where this may introduce new uncertainties and potential biases into the seismic hazard process, and how these can be addressed.
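For orientation, the snippet below shows how a simple fault of the kind discussed above is assembled with OpenQuake's hazardlib. The call signature reflects the hazardlib API as the author recalls it, and all coordinates, depths, and the dip value are purely illustrative assumptions, not data from any GEM model.

```python
from openquake.hazardlib.geo import Line, Point
from openquake.hazardlib.geo.surface import SimpleFaultSurface

# Illustrative surface trace: two vertices along strike (lon, lat degrees).
trace = Line([Point(30.0, 40.0), Point(30.3, 40.2)])

surface = SimpleFaultSurface.from_fault_data(
    fault_trace=trace,
    upper_seismogenic_depth=1.0,   # km
    lower_seismogenic_depth=15.0,  # km
    dip=60.0,                      # degrees
    mesh_spacing=1.0,              # km grid used in ground-motion calcs
)
print(surface.get_area())          # rupture area in km^2
```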
Open Data Infrastructures And The Future Of Science
NASA Astrophysics Data System (ADS)
Boulton, G. S.
2016-12-01
Open publication of the evidence (the data) supporting a scientific claim has been the bedrock on which the scientific advances of the modern era of science have been built. It is also of immense importance in confronting three challenges unleashed by the digital revolution. The first is the threat the digital data storm poses to the principle of "scientific self-correction", in which false concepts are weeded out because of a demonstrable failure in logic or in the replication of observations or experiments. Large and complex data volumes are difficult to make openly available in ways that make rigorous scrutiny possible. Secondly, linking and integrating data from different sources about the same phenomena have created profound new opportunities for understanding the Earth. If data are neither accessible nor useable, such opportunities cannot be seized. Thirdly, open access publication, open data and ubiquitous modern communications enhance the prospects for an era of "Open Science" in which science emerges from behind its laboratory doors to engage in co-production of knowledge with other stakeholders in addressing major contemporary challenges to human society, in particular the need for long term thinking about planetary sustainability. If the benefits of an open data regime are to be realised, only a small part of the challenge lies in providing "hard" infrastructure. The major challenges lie in the "soft" infrastructure of relationships between the components of national science systems, of analytic and software tools, of national and international standards and the normative principles adopted by scientists themselves. The principles that underlie these relationships, the responsibilities of key actors and the rules of the game needed to maximise national performance and facilitate international collaboration are set out in an International Accord on Open Data.
A Scalable, Open Source Platform for Data Processing, Archiving and Dissemination
2016-01-01
Object Oriented Data Technology (OODT) big data toolkit developed by NASA and the Work-flow INstance Generation and Selection (WINGS) scientific work… to several challenge big data problems and demonstrated the utility of OODT-WINGS in addressing them. Specific demonstrated analyses address i… Keywords: open source software, Apache, Object Oriented Data Technology, OODT, semantic workflows, WINGS, big data, workflow management.
pyLIMA : an open source microlensing software
NASA Astrophysics Data System (ADS)
Bachelet, Etienne
2017-01-01
Planetary microlensing is a unique tool to detect cold planets around low-mass stars, and it is approaching a watershed in discoveries as near-future missions incorporate dedicated surveys. NASA and ESA have decided to complement WFIRST-AFTA and Euclid with microlensing programs to enrich our statistics about this planetary population. Of the many challenges inherent in these missions, the data analysis is of primary importance, yet it is often perceived as a time-consuming, complex and daunting barrier to participation in the field. We present the first open source modeling software for conducting a microlensing analysis. This software is written in Python and uses existing packages as much as possible.
Shipping Science Worldwide with Open Source Containers
NASA Astrophysics Data System (ADS)
Molineaux, J. P.; McLaughlin, B. D.; Pilone, D.; Plofchan, P. G.; Murphy, K. J.
2014-12-01
Scientific applications often present difficult web-hosting needs. Their compute- and data-intensive nature, as well as an increasing need for high availability and distribution, combine to create a challenging set of hosting requirements. In the past year, advancements in container-based virtualization and related tooling have offered new lightweight and flexible ways to accommodate diverse applications with all the isolation and portability benefits of traditional virtualization. This session will introduce and demonstrate an open-source, single-interface Platform-as-a-Service (PaaS) that empowers application developers to seamlessly leverage geographically distributed, public and private compute resources to achieve highly available, performant hosting for scientific applications.
Data Visualization Challenges and Opportunities in User-Oriented Application Development
NASA Astrophysics Data System (ADS)
Pilone, D.; Quinn, P.; Mitchell, A. E.; Baynes, K.; Shum, D.
2015-12-01
This talk introduces the audience to some of the very real challenges associated with visualizing data from disparate data sources as encountered during the development of real world applications. In addition to the fundamental challenges of dealing with the data and imagery, this talk discusses usability problems encountered while trying to provide interactive and user-friendly visualization tools. At the end of this talk the audience will be aware of some of the pitfalls of data visualization along with tools and techniques to help mitigate them. There are many sources of variable resolution visualizations of science data available to application developers including NASA's Global Imagery Browse Services (GIBS), however integrating and leveraging visualizations in modern applications faces a number of challenges, including: - Varying visualized Earth "tile sizes" resulting in challenges merging disparate sources - Multiple visualization frameworks and toolkits with varying strengths and weaknesses - Global composite imagery vs. imagery matching EOSDIS granule distribution - Challenges visualizing geographically overlapping data with different temporal bounds - User interaction with overlapping or collocated data - Complex data boundaries and shapes combined with multi-orbit data and polar projections - Discovering the availability of visualizations and the specific parameters, color palettes, and configurations used to produce them In addition to discussing the challenges and approaches involved in visualizing disparate data, we will discuss solutions and components we'll be making available as open source to encourage reuse and accelerate application development.
Issues of Food Chain Security and Case Studies in the Czech Army
NASA Astrophysics Data System (ADS)
Komar, Ales; Vasicka, Pavlina
The food supply system is a fundamentally and extremely open complex system. The global challenge is acknowledged and must be considered because food is an important source of existence and can be used as a desirable terrorist vehicle. Raw materials and food face both intentional and accidental contamination. The manifestation of global challenges, the aspiration for sustainable development and the appearance of terrorism create a new paradigm for threats to food safety and defence management.
CAGE IIIA Distributed Simulation Design Methodology
2014-05-01
…Implementing Defence Experimentation (GUIDEx). The key challenges for this methodology are understanding how to design it, define the…operation, and make it available in the other nation's simulations. The challenge for the CAGE campaign of experiments is to continue to build upon this…
Web catalog of oceanographic data using GeoNetwork
NASA Astrophysics Data System (ADS)
Marinova, Veselka; Stefanov, Asen
2017-04-01
Most of the data collected, analyzed and used by the Bulgarian oceanographic data center (BgODC) from scientific cruises, Argo floats, ferry boxes and real-time operating systems are spatially oriented and need to be displayed on a map. The challenge is to make spatial information more accessible to users, decision makers and scientists. In order to meet this challenge, BgODC concentrates its efforts on improving dynamic and standardized access to its geospatial data, as well as to those from various related organizations and institutions. BgODC is currently implementing a project to create a geospatial portal for distributing metadata and for searching, exchanging and harvesting spatial data. There are many open source software solutions able to create such a spatial data infrastructure (SDI). Finally, the GeoNetwork open source catalog was chosen, as it is already widespread. This software is a free, effective and "cheap" solution for implementing an SDI at the organization level. It is platform independent and runs under many operating systems. Filling of the catalog goes through these practical steps: • managing and storing data reliably within an MS SQL spatial database; • registering maps and data of various formats and sources in GeoServer (a popular open source geospatial server bundled with GeoNetwork); • adding metadata and publishing geospatial data in the GeoNetwork desktop. GeoServer and GeoNetwork are based on Java, so they require installation of a servlet engine such as Tomcat. The experience gained from the use of GeoNetwork Open Source confirms that the catalog meets the requirements for data management and is flexible enough to customize. Building the catalog facilitates sustainable data exchange between end users. The catalog is a big step towards implementation of the INSPIRE directive due to the availability of many features necessary for producing "INSPIRE compliant" metadata records. The catalog now contains all available GIS data provided by BgODC for Internet access. Searching data within the catalog is based upon geographic extent, theme type and free-text search.
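Once published, such a catalog is also searchable programmatically through GeoNetwork's CSW endpoint; a minimal sketch using the OWSLib library follows. The endpoint URL is a placeholder for a typical GeoNetwork deployment, and the calls follow OWSLib's documented CSW interface.

```python
from owslib.csw import CatalogueServiceWeb

# Placeholder endpoint: a typical GeoNetwork CSW service URL.
csw = CatalogueServiceWeb("http://example.org/geonetwork/srv/eng/csw")

# Retrieve the first ten catalog records and print their titles.
csw.getrecords2(maxrecords=10)
for rid, rec in csw.records.items():
    print(rid, rec.title)
```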
Managing and Integrating Open Environmental Data - Technological Requirements and Challenges
NASA Astrophysics Data System (ADS)
Devaraju, Anusuriya; Kunkel, Ralf; Jirka, Simon
2014-05-01
Understanding environmental conditions and trends requires information. This information is usually generated from sensor observations. Today, several infrastructures (e.g., GEOSS, EarthScope, NEON, NETLAKE, OOI, TERENO, WASCAL, and PEER-EurAqua) have been deployed to promote full and open exchange of environmental data. Standards for interfaces as well as data models/formats (OGC, CUAHSI, INSPIRE, SEE Grid, ISO) and open source tools have been developed to support seamless data exchange between various domains and organizations. In spite of this growing interest, it remains a challenge to manage and integrate open environmental data on the fly due to the distributed and heterogeneous nature of the data. Intuitive tools and standardized interfaces are vital to hide the technical complexity of underlying data management infrastructures. Meaningful descriptions of raw sensor data are necessary to achieve interoperability among different sources. As raw sensor data sets usually go through several layers of summarization and aggregation, the metadata and quality measures associated with them should be captured. Further processing of sensor data sets requires that they be made compatible with existing environmental models. We need data policies and management plans on how to handle and publish open sensor data coming from different institutions. Clearly, better management and usability of open environmental data is crucial, not only to gather large amounts of data, but also to cater for various aspects such as data integration, privacy and trust, uncertainty, quality control, visualization, and data management policies. The proposed talk presents several key findings from our recent work in terms of requirements, ongoing developments and technical challenges concerning these aspects. This includes two workshops on open observation data and supporting tools, as well as the long-term environmental monitoring initiatives TERENO and TERENO-MED. Workshop details: Spin the Sensor Web: Sensor Web Workshop 2013, Muenster, 21st-22nd November 2013 (http://52north.org/news/spin-the-sensor-web-sensor-web-workshop-2013); Special Session on Management of Open Environmental Observation Data - MOEOD 2014, Lisbon, 8th January 2014 (http://www.sensornets.org/MOEOD.aspx?y=2014). Monitoring networks: TERENO: http://teodoor.icg.kfa-juelich.de/; TERENO-MED: http://www.tereno-med.net/
Open-Source as a strategy for operational software - the case of Enki
NASA Astrophysics Data System (ADS)
Kolberg, Sjur; Bruland, Oddbjørn
2014-05-01
Since 2002, SINTEF Energy has been developing what is now known as the Enki modelling system. This development has been financed by Norway's largest hydropower producer, Statkraft, motivated by a desire for distributed hydrological models in operational use. As the owner of the source code, Statkraft has recently decided on Open Source as a strategy for further development, and for migration from an R&D context to operational use. A cooperation project is currently being carried out between SINTEF Energy, seven large Norwegian hydropower producers including Statkraft, three universities and one software company. Of course, the most immediate task is that of software maturing. A more important challenge, however, is one of gaining experience within the operational hydropower industry. A transition from lumped to distributed models is likely to also require revision of measurement programs, calibration strategy, and use of GIS and modern data sources like weather radar and satellite imagery. On the other hand, map-based visualisations enable a richer information exchange between hydrologic forecasters and power market traders. The operating context of a distributed hydrology model within hydropower planning is far from settled. Being both a modelling framework and a library of plugin routines to build models from, Enki supports the flexibility needed in this situation. Recent development has separated the core from the user interface, paving the way for a scripting API, cross-platform compilation, and front-end programs serving different degrees of flexibility, robustness and security. The open source strategy invites anyone to use Enki and to develop and contribute new modules. Once tested, the same modules are available for the operational versions of the program. A core challenge is to offer rigid testing procedures and mechanisms to reject routines in an operational setting, without limiting experimentation with new modules. The Open Source strategy also has implications for building and maintaining competence around the source code and the advanced hydrological and statistical routines in Enki. Originally developed by hydrologists, the Enki code is now approaching a state where maintenance requires a background in professional software development. Without the advantage of proprietary source code, both hydrologic improvements and software maintenance depend on donations or development support on a case-by-case basis, a situation well known within the open source community. It remains to be seen whether these mechanisms suffice to keep Enki at the maintenance level required by the hydropower sector. Enki is available from www.opensource-enki.org.
Addressing Challenges in the Acquisition of Secure Software Systems With Open Architectures
2012-04-30
…software (CSS) and open source software (OSS). Federal government acquisition policy, as well as many leading enterprise IT centers, now encourage the use…
Rong, Y; Padron, A V; Hagerty, K J; Nelson, N; Chi, S; Keyhani, N O; Katz, J; Datta, S P A; Gomes, C; McLamore, E S
2018-04-30
Impedimetric biosensors for measuring small molecules based on weak/transient interactions between bioreceptors and target analytes are a challenge for detection electronics, particularly in field studies or in the analysis of complex matrices. Protein-ligand binding sensors have enormous potential for biosensing, but achieving accuracy in complex solutions is a major challenge. There is a need for simple post hoc analytical tools that are not computationally expensive, yet provide near real-time feedback on data derived from impedance spectra. Here, we show the use of a simple, open source support vector machine learning algorithm for analyzing impedimetric data in lieu of using equivalent circuit analysis. We demonstrate two different protein-based biosensors to show that the tool can be used for various applications. We conclude with a mobile phone-based demonstration focused on the measurement of acetone, an important biomarker related to the onset of diabetic ketoacidosis. In all conditions tested, the open source classifier was capable of performing as well as, or better than, the equivalent circuit analysis for characterizing weak/transient interactions between a model ligand (acetone) and a small chemosensory protein derived from the tsetse fly. In addition, the tool has a low computational requirement, facilitating use for mobile acquisition systems such as mobile phones. The protocol is deployed through Jupyter notebook (an open source computing environment available for mobile phone, tablet or computer use) and the code was written in Python. For each of the applications, we provide step-by-step instructions in English, Spanish, Mandarin and Portuguese to facilitate widespread use. All codes were based on scikit-learn, an open source software machine learning library in the Python language, and were processed in Jupyter notebook, an open-source web application for Python. The tool can easily be integrated with the mobile biosensor equipment for rapid detection, facilitating use by a broad range of impedimetric biosensor users. This post hoc analysis tool can serve as a launchpad for the convergence of nanobiosensors in planetary health monitoring applications based on mobile phone hardware.
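In the same spirit as the classifier described (a scikit-learn SVM standing in for equivalent-circuit fitting), here is a minimal sketch: each feature vector holds impedance magnitudes at several frequencies, and the label marks analyte present/absent. The data are synthetic stand-ins, not the authors' measurements.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in: 200 spectra x 20 frequencies; class 1 shifts impedance.
X = rng.normal(size=(200, 20))
y = rng.integers(0, 2, size=200)
X[y == 1] += 0.8

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```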
NASA Astrophysics Data System (ADS)
Fulker, D. W.; Gallagher, J. H. R.
2015-12-01
OPeNDAP's Hyrax data server is an open-source framework fostering interoperability via easily-deployed Web services. Compatible with solutions listed in the (PA001) session description—federation, rigid standards and brokering/mediation—the framework can support tight or loose coupling, even with dependence on community-contributed software. Hyrax is a Web-services framework with a middleware-like design and a handler-style architecture that together reduce the interoperability challenge (for N datatypes and M user contexts) to an O(N+M) problem, similar to brokering. Combined with an open-source ethos, this reduction makes Hyrax a community tool for gaining interoperability. E.g., in its response to the Big Earth Data Initiative (BEDI), NASA references OPeNDAP-based interoperability. Assuming its suitability, the question becomes: how sustainable is OPeNDAP, a small not-for-profit that produces open-source software, i.e., one with no software sales? In other words, if geoscience interoperability depends on OPeNDAP and similar organizations, are those entities in turn sustainable? Jim Collins (in Good to Great) highlights three questions that successful companies can answer (paraphrased here): What is your passion? Where is your world-class excellence? What drives your economic engine? We attempt to shed light on OPeNDAP sustainability by examining these. Passion: OPeNDAP has a focused passion for improving the effectiveness of scientific data sharing and use, as deeply-cooperative community endeavors. Excellence: OPeNDAP has few peers in remote, scientific data access. Skills include computer science with experience in data science, (operational, secure) Web services, and software design (for servers and clients, where the latter vary from Web pages to standalone apps and end-user programs). Economic Engine: OPeNDAP is an engineering services organization more than a product company, despite software being key to OPeNDAP's reputation. In essence, provision of engineering expertise, via contracts and grants, is the economic engine. Hence sustainability, as needed to address global grand challenges in geoscience, depends on agencies' and others' abilities and willingness to offer grants and let contracts for continually upgrading open-source software from OPeNDAP and others.
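The O(N+M) claim is worth unpacking: N format handlers each normalize their datatype into one common model, and M response writers consume that model, so a new datatype or client costs one handler rather than a row or column of N×M pairwise converters. A toy sketch of the pattern (all names invented for illustration):

```python
# N readers normalize each datatype into one common in-memory model...
readers = {
    "netcdf": lambda src: {"var": "t2m", "values": [280.1, 281.4]},
    "hdf5":   lambda src: {"var": "t2m", "values": [279.8, 280.9]},
}
# ...and M writers render that model for each user context.
writers = {
    "json": lambda model: str(model),
    "csv":  lambda model: "\n".join(map(str, model["values"])),
}

def serve(datatype, fmt, src):
    """N + M handlers cover all N x M (datatype, format) combinations."""
    return writers[fmt](readers[datatype](src))

print(serve("netcdf", "csv", "granule.nc"))  # toy readers ignore `src`
```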
NASA Astrophysics Data System (ADS)
Zhou, Xiaochi; Aurell, Johanna; Mitchell, William; Tabor, Dennis; Gullett, Brian
2017-04-01
Characterizing highly dynamic, transient, and vertically lofted emissions from open area sources poses unique measurement challenges. This study developed and applied a multipollutant sensor and time-integrated sampler system for use on mobile applications such as vehicles, tethered balloons (aerostats) and unmanned aerial vehicles (UAVs) to determine emission factors. The system is particularly applicable to open area sources, such as forest fires, due to its light weight (3.5 kg), compact size (6.75 L), and internal power supply. The sensor system, termed "Kolibri", consists of sensors measuring CO2 and CO, and samplers for particulate matter (PM) and volatile organic compounds (VOCs). The Kolibri is controlled by a microcontroller which can record and transfer data in real time through a radio module. Selection of the sensors was based on laboratory testing for accuracy, response delay and recovery, cross-sensitivity, and precision. The Kolibri was compared against rack-mounted continuous emissions monitoring system (CEMs) and another mobile sampling instrument (the "Flyer") that has been used in over ten open area pollutant sampling events. Our results showed that the time series of CO, CO2, and PM2.5 concentrations measured by the Kolibri agreed well with those from the CEMs and the Flyer, with a laboratory-tested percentage error of 4.9%, 3%, and 5.8%, respectively. The VOC emission factors obtained using the Kolibri were consistent with existing literature values that relate concentration to modified combustion efficiency. The potential effect of rotor downwash on particle sampling was investigated in an indoor laboratory and the preliminary results suggested that its influence is minimal. Field application of the Kolibri sampling open detonation plumes indicated that the CO and CO2 sensors responded dynamically and their concentrations co-varied with emission transients. The Kolibri system can be applied to various challenging open area scenarios such as fires, lagoons, flares, and landfills.
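The modified combustion efficiency mentioned above is a simple molar ratio, and carbon-balance emission factors follow from it; the sketch below uses the standard definitions from the open-burning literature. The fuel carbon fraction and the example enhancements are assumed illustrative values, not the study's data.

```python
def mce(d_co2, d_co):
    """Modified combustion efficiency from background-corrected molar
    (or ppm) enhancements: MCE = dCO2 / (dCO2 + dCO)."""
    return d_co2 / (d_co2 + d_co)

def ef_carbon_balance(d_x, mw_x, d_co2, d_co, fuel_c=0.5):
    """Carbon-balance emission factor (g of species X per kg of fuel),
    counting CO2 and CO as the dominant carbon carriers."""
    carbon_total = d_co2 + d_co                # molar carbon enhancement
    return 1000.0 * fuel_c * (mw_x / 12.011) * (d_x / carbon_total)

print(f"MCE = {mce(380.0, 20.0):.2f}")         # 0.95: mostly flaming
print(f"EF  = {ef_carbon_balance(1.5, 58.08, 380.0, 20.0):.2f} g/kg")  # acetone
```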
Instrumentino: An Open-Source Software for Scientific Instruments.
Koenka, Israel Joel; Sáiz, Jorge; Hauser, Peter C
2015-01-01
Scientists often need to build dedicated computer-controlled experimental systems. For this purpose, it is becoming common to employ open-source microcontroller platforms, such as the Arduino. These boards and associated integrated software development environments provide affordable yet powerful solutions for the implementation of hardware control of transducers and acquisition of signals from detectors and sensors. It is, however, a challenge to write programs that allow interactive use of such arrangements from a personal computer. This task is particularly complex if some of the included hardware components are connected directly to the computer and not via the microcontroller. A graphical user interface framework, Instrumentino, was therefore developed to allow the creation of control programs for complex systems with minimal programming effort. By writing a single code file, a powerful custom user interface is generated, which enables the automatic running of elaborate operation sequences and observation of acquired experimental data in real time. The framework, which is written in Python, allows extension by users, and is made available as an open source project.
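Instrumentino's own API is not reproduced here; the sketch below only illustrates the serial-link pattern the framework automates, namely interactive control of an Arduino-class board from a PC. The port name and command strings are hypothetical.

```python
import serial  # pyserial

# Hypothetical port and command protocol; Instrumentino generates this
# plumbing (plus a full GUI) from a single declarative description file.
with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as board:
    board.write(b"SET_PIN 9 128\n")      # e.g. drive a transducer output
    board.write(b"READ_ADC 0\n")         # e.g. poll a sensor channel
    reading = board.readline().decode().strip()
    print("sensor reading:", reading)
```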
Design Science Methodology Applied to a Chemical Surveillance Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Zhuanyi; Han, Kyungsik; Charles-Smith, Lauren E.
Public health surveillance systems gain significant benefits from integrating existing early incident detection systems, supported by closed data sources, with open source data. However, identifying potential alerting incidents relies on finding accurate, reliable sources and presenting the high volume of data in a way that increases analysts' work efficiency; a challenge for any system that leverages open source data. In this paper, we present the design concept and the applied design science research methodology of ChemVeillance, a chemical analyst surveillance system. Our work portrays a system design and approach that translates theoretical methodology into practice, creating a powerful surveillance system built for specific use cases. Researchers, designers, developers, and related professionals in the health surveillance community can build upon the principles and methodology described here to enhance and broaden current surveillance systems, leading to improved situational awareness based on a robust integrated early warning system.
Free and open-source automated 3-D microscope.
Wijnen, Bas; Petersen, Emily E; Hunt, Emily J; Pearce, Joshua M
2016-11-01
Open-source technology has not only facilitated the expansion of the greater research community, but by lowering costs it has encouraged innovation and customizable design. The field of automated microscopy has continued to be a challenge in accessibility due to the expense and the inflexible, noninterchangeable stages. This paper presents a low-cost, open-source microscope 3-D stage. A RepRap 3-D printer was converted to an optical microscope equipped with a customized, 3-D printed holder for a USB microscope. Precision measurements were determined to have an average error of 10 μm at the maximum speed and 27 μm at the minimum recorded speed. Accuracy tests yielded an error of 0.15%. The machine is a true 3-D stage and thus able to operate with USB microscopes or conventional desktop microscopes. It is larger than all commercial alternatives, and is thus capable of high-depth images over unprecedented areas and complex geometries. The repeatability is below that of commercial 2-D microscope stages, but testing shows that it is adequate for the majority of scientific applications. The open-source microscope stage costs 3-9% of the price of the closest proprietary commercial stages. This extreme affordability vastly improves accessibility for 3-D microscopy throughout the world.
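Since the stage is a converted RepRap, it can be driven with ordinary G-code over a serial link while frames are grabbed from the USB microscope. The sketch below is a generic illustration of that pattern, not the paper's control software; the port, feed rate, step size, and the assumption of RepRap-style firmware are all illustrative.

```python
import time
import cv2      # OpenCV, for the USB microscope
import serial   # pyserial, for the printer electronics

cam = cv2.VideoCapture(0)                      # USB microscope
with serial.Serial("/dev/ttyUSB0", 115200, timeout=2) as printer:
    time.sleep(2)                              # let the firmware reset
    printer.write(b"G28\n")                    # home all axes
    for i in range(5):                         # raster one row, 0.1 mm steps
        printer.write(f"G1 X{i * 0.1:.1f} F300\n".encode())
        time.sleep(1)                          # crude settle delay
        ok, frame = cam.read()
        if ok:
            cv2.imwrite(f"tile_{i}.png", frame)
cam.release()
```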
CosmoQuest: A Cyber-Infrastructure for Crowdsourcing Planetary Surface Mapping and More
NASA Astrophysics Data System (ADS)
Gay, P.; Lehan, C.; Moore, J.; Bracey, G.; Gugliucci, N.
2014-04-01
The design and implementation of programs to crowdsource science presents a unique set of challenges to system architects, programmers, and designers. The CosmoQuest Citizen Science Builder (CSB) is an open source platform designed to take advantage of crowd computing and open source platforms to solve crowdsourcing problems in Planetary Science. CSB combines a clean user interface with a powerful back end to allow the quick design and deployment of citizen science sites that meet the needs of both the random Joe Public, and the detail driven Albert Professional. In this talk, the software will be overviewed, and the results of usability testing and accuracy testing with both citizen and professional scientists will be discussed.
Open Source Virtual Worlds and Low Cost Sensors for Physical Rehab of Patients with Chronic Diseases
NASA Astrophysics Data System (ADS)
Romero, Salvador J.; Fernandez-Luque, Luis; Sevillano, José L.; Vognild, Lars
For patients with chronic diseases, exercise is a key part of rehabilitation and helps them deal better with their illness. Some of them do rehabilitation at home with telemedicine systems. However, keeping to an exercise program is challenging and many abandon the rehabilitation. We postulate that information technologies for socializing and serious games can encourage patients to keep doing physical exercise and rehabilitation. In this paper we present Virtual Valley, a low-cost telemedicine system for home exercising, based on open source virtual worlds and utilizing popular low-cost motion controllers (e.g. the Wii Remote) and medical sensors. Virtual Valley allows patients to socialize, learn, and play group-based serious games while exercising.
Shenoy, Shailesh M
2016-07-01
A challenge in any imaging laboratory, especially one that uses modern techniques, is to achieve a sustainable and productive balance between using open source and commercial software to perform quantitative image acquisition, analysis and visualization. In addition to considering the expense of software licensing, one must consider factors such as the quality and usefulness of the software's support, training and documentation. Also, one must consider the reproducibility with which multiple people generate results using the same software to perform the same analysis, how one may distribute their methods to the community using the software and the potential for achieving automation to improve productivity.
NASA Astrophysics Data System (ADS)
Xing, Fangyuan; Wang, Honghuan; Yin, Hongxi; Li, Ming; Luo, Shenzi; Wu, Chenguang
2016-02-01
With the extensive application of cloud computing and data centres, as well as constantly emerging services, big data with bursty traffic characteristics has brought huge challenges to optical networks. Consequently, the software defined optical network (SDON), which combines optical networks with the software defined network (SDN) concept, has attracted much attention. In this paper, an OpenFlow-enabled optical node for use in optical cross-connects (OXCs) and reconfigurable optical add/drop multiplexers (ROADMs) is proposed. An open source OpenFlow controller is extended with routing strategies. In addition, an experimental platform based on the OpenFlow protocol for software defined optical networks is designed. The feasibility and availability of the OpenFlow-enabled optical nodes and the extended OpenFlow controller are validated by connectivity tests, protection switching and load balancing experiments on this test platform.
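The paper does not name the controller it extends; as one concrete example of extending an open-source OpenFlow controller in Python, here is a minimal Ryu application that floods packets, the usual starting point before custom routing strategies are layered on. Ryu is the author's choice for illustration, not necessarily the controller used in the paper.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class FloodController(app_manager.RyuApp):
    """Minimal extension point: replace the flood action with a routing
    strategy (e.g. wavelength/port selection) for the optical use case."""
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in(self, ev):
        msg, dp = ev.msg, ev.msg.datapath
        parser, ofp = dp.ofproto_parser, dp.ofproto
        # Simplification: assumes the switch buffered the packet.
        actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
        dp.send_msg(parser.OFPPacketOut(
            datapath=dp, buffer_id=msg.buffer_id,
            in_port=msg.match["in_port"], actions=actions))
```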
Pennington, Jeffrey W; Ruth, Byron; Italia, Michael J; Miller, Jeffrey; Wrazien, Stacey; Loutrel, Jennifer G; Crenshaw, E Bryan; White, Peter S
2014-01-01
Biomedical researchers share a common challenge of making complex data understandable and accessible as they seek inherent relationships between attributes in disparate data types. Data discovery in this context is limited by a lack of query systems that efficiently show relationships between individual variables, but without the need to navigate underlying data models. We have addressed this need by developing Harvest, an open-source framework of modular components, and using it for the rapid development and deployment of custom data discovery software applications. Harvest incorporates visualizations of highly dimensional data in a web-based interface that promotes rapid exploration and export of any type of biomedical information, without exposing researchers to underlying data models. We evaluated Harvest with two cases: clinical data from pediatric cardiology and demonstration data from the OpenMRS project. Harvest's architecture and public open-source code offer a set of rapid application development tools to build data discovery applications for domain-specific biomedical data repositories. All resources, including the OpenMRS demonstration, can be found at http://harvest.research.chop.edu.
Open heart surgery in Nigeria; a work in progress.
Falase, Bode; Sanusi, Michael; Majekodunmi, Adetinuwe; Animasahun, Barakat; Ajose, Ifeoluwa; Idowu, Ariyo; Oke, Adewale
2013-01-12
There has been limited success in establishing Open Heart Surgery programmes in Nigeria despite the high prevalence of structural heart disease and the large number of Nigerian patients that travel abroad for Open Heart Surgery. The challenges and constraints to the development of Open Heart Surgery in Nigeria need to be identified and overcome. The aim of this study is to review the experience with Open Heart Surgery at the Lagos State University Teaching Hospital and highlight the challenges encountered in developing this programme. This is a retrospective study of patients that underwent Open Heart Surgery in our institution. The source of data was a prospectively maintained database. Extracted data included patient demographics, indication for surgery, euroscore, cardiopulmonary bypass time, cross clamp time, complications and patient outcome. 51 Open Heart Surgery procedures were done between August 2004 and December 2011. There were 21 males and 30 females. Mean age was 29 ± 15.6 years. The mean euroscore was 3.8 ± 2.1. The procedures done were Mitral Valve Replacement in 15 patients (29.4%), Atrial Septal Defect Repair in 14 patients (27.5%), Ventricular Septal Defect Repair in 8 patients (15.7%), Aortic Valve Replacement in 5 patients (9.8%), excision of Left Atrial Myxoma in 2 patients (3.9%), Coronary Artery Bypass Grafting in 2 patients (3.9%), Bidirectional Glenn Shunts in 2 patients (3.9%), Tetralogy of Fallot repair in 2 patients (3.9%) and Mitral Valve Repair in 1 patient (2%). There were 9 mortalities (17.6%) in this series. Challenges encountered included the low volume of cases done, an unstable working environment, limited number of trained staff, difficulty in obtaining laboratory support, limited financial support and difficulty in moving away from the Cardiac Mission Model. The Open Heart Surgery program in our institution is still being developed but the identified challenges need to be overcome if this program is to be sustained. Similar challenges will need to be overcome by other cardiac stakeholders if other OHS programs are to be developed and sustained in Nigeria.
JHelioviewer: Open-Source Software for Discovery and Image Access in the Petabyte Age (Invited)
NASA Astrophysics Data System (ADS)
Mueller, D.; Dimitoglou, G.; Langenberg, M.; Pagel, S.; Dau, A.; Nuhn, M.; Garcia Ortiz, J. P.; Dietert, H.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.
2010-12-01
The unprecedented torrent of data returned by the Solar Dynamics Observatory is both a blessing and a barrier: a blessing for making available data with significantly higher spatial and temporal resolution, but a barrier for scientists to access, browse and analyze them. With such a staggering data volume, the data are bound to be accessible only from a few repositories, and users will have to deal with data sets that are effectively immobile and impractical to download. From a scientist's perspective this poses three challenges: accessing, browsing and finding interesting data while avoiding the proverbial search for a needle in a haystack. To address these challenges, we have developed JHelioviewer, open-source visualization software that lets users browse large data volumes both as still images and movies. We did so by deploying an efficient image encoding, storage, and dissemination solution based on the JPEG 2000 standard. This solution enables users to access remote images at different resolution levels as a single data stream. Users can view, manipulate, pan, zoom, and overlay JPEG 2000 compressed data quickly, without severe network bandwidth penalties. Besides viewing data, the browser provides third-party metadata and event catalog integration to quickly locate data of interest, as well as an interface to the Virtual Solar Observatory to download science-quality data. As part of the Helioviewer Project, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and provides an extensible and customizable open-source platform for the scientific community.
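The multi-resolution access that JPEG 2000 enables can be illustrated outside JHelioviewer itself. Below is a minimal sketch using glymur, an open-source Python wrapper around OpenJPEG that is not part of the Helioviewer Project; the file name is a hypothetical placeholder, and JHelioviewer streams equivalent data remotely via JPIP rather than reading local files.

```python
# Sketch: multi-resolution access to a JPEG 2000 image with glymur.
# "aia_171.jp2" is a hypothetical local file.
import glymur

jp2 = glymur.Jp2k("aia_171.jp2")
full = jp2[:]            # decode at full resolution
quarter = jp2[::4, ::4]  # decode a lower-resolution level, far cheaper
print(full.shape, quarter.shape)
```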
When nutrients impact estuarine water quality, scientists and managers instinctively focus on quantifying and controlling land-based sources. However, in Greenwich Bay, RI, the estuary opens onto a larger and more intensively fertilized coastal water body (Narragansett Bay). Prev...
Factors Influencing F/OSS Cloud Computing Software Product Success: A Quantitative Study
ERIC Educational Resources Information Center
Letort, D. Brian
2012-01-01
Cloud Computing introduces a new business operational model that allows an organization to shift information technology consumption from traditional capital expenditure to operational expenditure. This shift introduces challenges from both the adoption and creation vantage. This study evaluates factors that influence Free/Open Source Software…
Open Collaboration: A Problem Solving Strategy That Is Redefining NASA's Innovative Spirit
NASA Technical Reports Server (NTRS)
Rando, Cynthia M.; Fogarty, Jennifer A.; Richard, Elizabeth E.; Davis, Jeffrey R.
2011-01-01
In 2010, NASA's Space Life Sciences Directorate announced successful results from pilot experiments with open innovation methodologies, specifically the use of internet-based external crowdsourcing platforms to solve challenging problems in human health and performance related to the future of spaceflight. The follow-up to this success was an internal crowdsourcing pilot program entitled NASA@work, which was supported by the InnoCentive@work software platform. The objective of the NASA@work pilot was to connect the collective knowledge of individuals from all areas within the NASA organization via a private web-based environment. The platform provided a venue for NASA Challenge Owners, those looking for solutions or new ideas, to pose challenges to internal solvers, those within NASA with the skill and desire to create solutions. The pilot was launched in 57 days, a record for InnoCentive and NASA, and ran for three months with a total of 20 challenges posted Agency-wide. The NASA@work pilot attracted over 6000 participants throughout NASA, with a total of 183 contributing solvers for the 20 challenges posted. At the time of the pilot's closure, solvers had provided viable solutions and ideas for 17 of the 20 posted challenges. The solver community described the pilot as a barrier-breaking activity, conveying that there was satisfaction in helping co-workers, that it was "fun" to think about problems outside normal work boundaries, and that it was nice to learn what challenges others were facing across the agency. The results and the feedback from the solver community have demonstrated the power and utility of an internal collaboration tool such as NASA@work.
The Open Microscopy Environment: open image informatics for the biological sciences
NASA Astrophysics Data System (ADS)
Blackburn, Colin; Allan, Chris; Besson, Sébastien; Burel, Jean-Marie; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gault, David; Gillen, Kenneth; Leigh, Roger; Leo, Simone; Li, Simon; Lindner, Dominik; Linkert, Melissa; Moore, Josh; Moore, William J.; Ramalingam, Balaji; Rozbicki, Emil; Rustici, Gabriella; Tarkowska, Aleksandra; Walczysko, Petr; Williams, Eleanor; Swedlow, Jason R.
2016-07-01
Despite significant advances in biological imaging and analysis, major informatics challenges remain unsolved: file formats are proprietary, storage and analysis facilities are lacking, as are standards for sharing image data and results. While the open FITS file format is ubiquitous in astronomy, astronomical imaging shares many challenges with biological imaging, including the need to share large image sets using secure, cross-platform APIs, and the need for scalable applications for processing and visualization. The Open Microscopy Environment (OME) is an open-source software framework developed to address these challenges. OME tools include: an open data model for multidimensional imaging (OME Data Model); an open file format (OME-TIFF) and library (Bio-Formats) enabling free access to images (5D+) written in more than 145 formats from many imaging domains, including FITS; and a data management server (OMERO). The Java-based OMERO client-server platform comprises an image metadata store, an image repository, visualization and analysis by remote access, allowing sharing and publishing of image data. OMERO provides a means to manage the data through a multi-platform API. OMERO's model-based architecture has enabled its extension into a range of imaging domains, including light and electron microscopy, high content screening, digital pathology and recently into applications using non-image data from clinical and genomic studies. This is made possible using the Bio-Formats library. The current release includes a single mechanism for accessing image data of all types, regardless of original file format, via Java, C/C++ and Python and a variety of applications and environments (e.g. ImageJ, Matlab and R).
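As a concrete illustration of the open formats described above, the OME-XML metadata embedded in an OME-TIFF can be read with generic open-source tools. The sketch below uses the Python tifffile library, an independent reader rather than an official OME tool, and the file name is a placeholder.

```python
# Sketch: inspect the OME-XML metadata and pixel data of an OME-TIFF.
import tifffile

with tifffile.TiffFile("cells.ome.tif") as tif:  # hypothetical file
    print(tif.ome_metadata[:200])    # OME-XML describing the 5D+ image
    data = tif.series[0].asarray()   # pixel data as a NumPy array
    print(data.shape)
```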
Building a Snow Data Management System using Open Source Software (and IDL)
NASA Astrophysics Data System (ADS)
Goodale, C. E.; Mattmann, C. A.; Ramirez, P.; Hart, A. F.; Painter, T.; Zimdars, P. A.; Bryant, A.; Brodzik, M.; Skiles, M.; Seidel, F. C.; Rittger, K. E.
2012-12-01
At NASA's Jet Propulsion Laboratory, free and open source software is used every day to support a wide range of projects, from planetary to climate to research and development. In this abstract I will discuss the key role that open source software has played in building a robust science data processing pipeline for snow hydrology research, and how the system is also able to leverage programs written in IDL, making JPL's Snow Data System a hybrid of open source and proprietary software. Main points:
- The design of the Snow Data System (illustrating how the collection of sub-systems is combined to create a complete data processing pipeline)
- The challenges of moving from a single algorithm on a laptop to running hundreds of parallel algorithms on a cluster of servers (lessons learned): code changes, software-license-related challenges, and storage requirements
- System evolution (from data archiving, to data processing, to data on a map, to near-real-time products and maps)
- Road map for the next 6 months (including how easily we re-used the snowDS code base to support the Airborne Snow Observatory mission)
Software in use and their licenses:
- IDL - used for pre- and post-processing of data; licensed under a proprietary software license held by Exelis
- Apache OODT - used for data management and workflow processing; licensed under the Apache License Version 2
- GDAL - geospatial data processing library, currently used for data re-projection; licensed under the X/MIT license
- GeoServer - WMS server; licensed under the General Public License Version 2.0
- Leaflet.js - JavaScript web mapping library; licensed under the Berkeley Software Distribution license
- Python - glue code and miscellaneous data processing support; licensed under the Python Software Foundation License
- Perl - script wrapper for running the SCAG algorithm; licensed under the General Public License Version 3
- PHP - front-end web application programming; licensed under the PHP License Version 3.01
Moody, George B; Mark, Roger G; Goldberger, Ary L
2011-01-01
PhysioNet provides free web access to over 50 collections of recorded physiologic signals and time series, and related open-source software, in support of basic, clinical, and applied research in medicine, physiology, public health, biomedical engineering and computing, and medical instrument design and evaluation. Its three components (PhysioBank, the archive of signals; PhysioToolkit, the software library; and PhysioNetWorks, the virtual laboratory for collaborative development of future PhysioBank data collections and PhysioToolkit software components) connect researchers and students who need physiologic signals and relevant software with researchers who have data and software to share. PhysioNet's annual open engineering challenges stimulate rapid progress on unsolved or poorly solved questions of basic or clinical interest, by focusing attention on achievable solutions that can be evaluated and compared objectively using freely available reference data.
Combining Open-Source Packages for Planetary Exploration
NASA Astrophysics Data System (ADS)
Schmidt, Albrecht; Grieger, Björn; Völk, Stefan
2015-04-01
The science planning of the ESA Rosetta mission has presented challenges which were addressed by combining various open-source software packages, such as the SPICE toolkit, the Python language and the Web graphics library three.js. The challenge was to compute certain parameters from a pool of trajectories and (possible) attitudes to describe the behaviour of the spacecraft. To do this declaratively and efficiently, a C library was implemented that makes the SPICE toolkit's geometrical computations callable from the Python language and processes as much data as possible during one subroutine call. To minimise the lines of code one has to write, special care was taken to ensure that the bindings were idiomatic and thus integrated well into the Python language and ecosystem. When done well, this greatly simplifies the structure of the code and facilitates testing for correctness by automatic test suites and visual inspections. For rapid visualisation and confirmation of correctness of results, the geometries were visualised with the three.js library, a popular JavaScript library for displaying three-dimensional graphics in a Web browser. Programmatically, this was achieved by generating data files from SPICE sources that were included into templated HTML and displayed by a browser, thus made easily accessible to interested parties at large. As feedback came in and new ideas were to be explored, the authors benefited greatly from the design of the Python-to-SPICE library, which allowed algorithms to be expressed concisely and communicated more easily. In summary, by combining several well-established open-source tools, we were able to put together a flexible computation and visualisation environment that helped communicate and build confidence in planning ideas.
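The authors' Python-to-SPICE C library is not published in this abstract; as a hedged illustration of the same pattern, the open-source spiceypy binding exposes equivalent SPICE calls from Python. The meta-kernel name below is hypothetical, and the computation shown (spacecraft position relative to the comet) is just one example of the kind of geometric parameter discussed.

```python
# Sketch: a typical SPICE geometry computation from Python via spiceypy.
import spiceypy as spice

spice.furnsh("rosetta_meta_kernel.tm")    # hypothetical meta-kernel listing SPK/LSK kernels
et = spice.str2et("2015-04-01T00:00:00")  # UTC string -> ephemeris time
# Rosetta's position relative to comet 67P in the J2000 frame, no aberration correction
pos, light_time = spice.spkpos("ROSETTA", et, "J2000", "NONE", "CHURYUMOV-GERASIMENKO")
print(pos, light_time)
```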
Analyzing huge pathology images with open source software.
Deroulers, Christophe; Ameisen, David; Badoual, Mathilde; Gerin, Chloé; Granier, Alexandre; Lartaud, Marc
2013-06-06
Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are nowadays broadly available and because the quantitative study of these images yields new insights in systems biology. However, such virtual slides pose a technical challenge, since the images often occupy several gigabytes and cannot be fully opened in a computer's memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail to handle them, and the others require expensive hardware while still being prohibitively slow. We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Our open source software enables dealing with huge images with standard software on average computers. The tools are cross-platform, independent of proprietary libraries and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre doing image analysis of many slides on a computer cluster. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5955513929846272.
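For readers who want to experiment with tiled access to whole-slide images, a related open-source route (distinct from the NDPITools and LargeTIFFTools described here) is the OpenSlide library, which reads several vendor formats, including Hamamatsu NDPI in recent versions, without loading the whole image into RAM; the file name below is a placeholder.

```python
# Sketch: read one tile of a multi-gigabyte slide without opening it fully.
import openslide

slide = openslide.OpenSlide("sample.ndpi")         # hypothetical slide file
print(slide.dimensions, slide.level_count)         # full size and pyramid depth
tile = slide.read_region((0, 0), 2, (1024, 1024))  # (location, level, size) -> PIL image
tile.convert("RGB").save("tile.jpg")
```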
Conversion of HSPF Legacy Model to a Platform-Independent, Open-Source Language
NASA Astrophysics Data System (ADS)
Heaphy, R. T.; Burke, M. P.; Love, J. T.
2015-12-01
Since its initial development over 30 years ago, the Hydrologic Simulation Program - FORTRAN (HSPF) model has been used worldwide to support water quality planning and management. In the United States, HSPF receives widespread endorsement as a regulatory tool at all levels of government and is a core component of the EPA's Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) system, which was developed to support nationwide Total Maximum Daily Load (TMDL) analysis. However, the model's legacy code and data management systems are limited in their ability to integrate with modern software and hardware and to leverage parallel computing, which has left voids in optimization, pre-, and post-processing tools. Advances in technology and in our scientific understanding of environmental processes over the last 30 years mandate that HSPF be upgraded so it can evolve and continue to be a premier tool for water resource planners. This work aims to mitigate the challenges currently facing HSPF through two primary tasks: (1) converting the code to a modern, widely accepted, open-source, high-performance computing (HPC) language; and (2) converting model input and output files to a modern, widely accepted, open-source data model, library, and binary file format. Python was chosen as the new language for the code conversion. It is an interpreted, object-oriented language with dynamic semantics that has become one of the most popular open-source languages. While Python code execution can be slow compared to compiled, statically typed programming languages such as C and FORTRAN, integrating Numba (a just-in-time specializing compiler) has allowed this challenge to be overcome. For the legacy model data management conversion, HDF5 was chosen to store the model input and output. The code conversion for HSPF's hydrologic and hydraulic modules has been completed. The converted code has been tested against HSPF's suite of "test" runs and has shown good agreement and similar execution times when using the Numba compiler. Continued verification of the converted code against more complex legacy applications, and improvement of execution times by incorporating an intelligent network change detection tool, is currently underway, and preliminary results will be presented.
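The abstract's two tasks, Numba-compiled Python for the compute kernels and HDF5 for model input and output, can be sketched generically. The reservoir routine below is a stand-in illustration, not HSPF's actual hydrology code, and the file and dataset names are hypothetical.

```python
# Sketch: a Numba-compiled time-stepping kernel whose output is stored in HDF5.
import numpy as np
import h5py
from numba import njit

@njit  # just-in-time compiled; the loop runs at near-native speed
def linear_reservoir(inflow, k):
    storage = 0.0
    outflow = np.empty_like(inflow)
    for i in range(inflow.size):
        storage += inflow[i]
        outflow[i] = k * storage
        storage -= outflow[i]
    return outflow

inflow = np.random.rand(8760)            # one year of hourly forcing (synthetic)
q = linear_reservoir(inflow, 0.05)
with h5py.File("results.h5", "w") as f:  # HDF5 in place of legacy WDM files
    f.create_dataset("outflow", data=q)
```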
Langley, Shaun A.; Messina, Joseph P.
2011-01-01
The past decade has seen an explosion in the availability of spatial data not only for researchers, but the public alike. As the quantity of data increases, the ability to effectively navigate and understand the data becomes more challenging. Here we detail a conceptual model for a spatially explicit database management system that addresses the issues raised with the growing data management problem. We demonstrate utility with a case study in disease ecology: to develop a multi-scale predictive model of African Trypanosomiasis in Kenya. International collaborations and varying technical expertise necessitate a modular open-source software solution. Finally, we address three recurring problems with data management: scalability, reliability, and security. PMID:21686072
Voet, T; Devolder, P; Pynoo, B; Vercruysse, J; Duyck, P
2007-11-01
This paper shares the insights we gained while designing, building, and running an indexing solution for a large set of radiological reports and images in a production environment for more than 3 years. Several technical challenges were encountered and solved in the course of this project. One hundred four million words in 1.8 million radiological reports from 1989 to the present were indexed and became instantaneously searchable in a user-friendly fashion; the median query duration is only 31 ms. Currently, our highly tuned index holds 332,088 unique words in four languages. The indexing system is feature-rich, language-independent and allows complex queries. For research and training purposes it certainly is a valuable and convenient addition to our radiology informatics toolbox. Extended use of open-source technology dramatically reduced both implementation time and cost. All software we developed in the indexing project has been made available to the open-source community under an unrestricted Berkeley Software Distribution-style license.
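The abstract does not name the index engine used (the era and feature set suggest a Lucene-style inverted index). As a hedged, minimal sketch of the same idea in Python, the open-source Whoosh library can index and query free-text reports; the schema and sample report below are illustrative only.

```python
# Sketch: build a tiny full-text index of radiology reports and query it.
import os
from whoosh.index import create_in
from whoosh.fields import Schema, ID, TEXT
from whoosh.qparser import QueryParser

schema = Schema(report_id=ID(stored=True), body=TEXT)
os.makedirs("report_index", exist_ok=True)
ix = create_in("report_index", schema)
with ix.writer() as writer:  # commits automatically on exit
    writer.add_document(report_id="1", body="No focal lesion in the left lung.")
with ix.searcher() as searcher:
    query = QueryParser("body", schema).parse("lung")
    print([hit["report_id"] for hit in searcher.search(query)])
```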
Siano, Gabriel G; Montemurro, Milagros; Alcaráz, Mirta R; Goicoechea, Héctor C
2017-10-17
Higher-order data generation implies some automation challenges, which are mainly related to the hidden programming languages and electronic details of the equipment. When techniques and/or equipment hyphenation are the key to obtaining higher-order data, the required simultaneous control of them demands funds for new hardware, software, and licenses, in addition to very skilled operators. In this work, we present Design of Inputs-Outputs with Sikuli (DIOS), a free and open-source program that provides a general framework for the design of automated experimental procedures without prior knowledge of programming or electronics. Basically, instruments and devices are considered as nodes in a network, and every node is associated with both physical and virtual inputs and outputs. Virtual components, such as graphical user interfaces (GUIs) of equipment, are handled by means of the image recognition tools provided by the Sikuli scripting language, while their physical counterparts are handled using an adapted open-source three-dimensional (3D) printer. Two previously reported experiments from our research group, related to fluorescence matrices derived from kinetics and high-performance liquid chromatography, were adapted to be carried out in a more automated fashion. Satisfactory results, in terms of analytical performance, were obtained. Similarly, the advantages of open-source tool assistance could be appreciated, mainly in terms of less operator intervention and cost savings.
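To give a flavour of the image-recognition scripting that DIOS builds on, the snippet below is written in SikuliX's Python (Jython) dialect. The screenshot file names and the instrument GUI are hypothetical, and this is a generic SikuliX sketch rather than DIOS code itself.

```python
# Sketch: drive an instrument GUI by matching screenshots (SikuliX / Jython).
# Each .png is a small screen capture of the widget to find and act on.
click("acquire_button.png")      # press the acquisition button
wait("spectrum_ready.png", 120)  # block until the "ready" indicator appears
type("run_042")                  # name the run in the focused text field
click("export_csv.png")          # export the fluorescence matrix
```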
SET: a pupil detection method using sinusoidal approximation
Javadi, Amir-Homayoun; Hakimi, Zahra; Barati, Morteza; Walsh, Vincent; Tcheang, Lili
2015-01-01
Mobile eye-tracking in external environments remains challenging, despite recent advances in eye-tracking software and hardware engineering. Many current methods fail to deal with the vast range of outdoor lighting conditions and the speed at which these can change. This confines experiments to artificial environments where conditions must be tightly controlled. Additionally, the emergence of low-cost eye tracking devices calls for the development of analysis tools that enable non-technical researchers to process the output of their images. We have developed a fast and accurate method (known as “SET”) that is suitable even for natural environments with uncontrolled, dynamic and even extreme lighting conditions. We compared the performance of SET with that of two open-source alternatives by processing two collections of eye images: images of natural outdoor scenes with extreme lighting variations (“Natural”); and images of less challenging indoor scenes (“CASIA-Iris-Thousand”). We show that SET excelled in outdoor conditions and was faster, without significant loss of accuracy, indoors. SET offers a low-cost eye-tracking solution, delivering high performance even in challenging outdoor environments. It is offered through an open-source MATLAB toolkit as well as a dynamic-link library (“DLL”), which can be imported into many programming languages including C# and Visual Basic in Windows OS (www.eyegoeyetracker.co.uk). PMID:25914641
Magare, Steve; Monda, Jonathan; Kamau, Onesmus; Houston, Stuart; Fraser, Hamish; Powell, John; English, Mike; Paton, Chris
2018-01-01
Background The Kenyan government, working with international partners and local organizations, has developed an eHealth strategy, specified standards, and guidelines for electronic health record adoption in public hospitals and implemented two major health information technology projects: District Health Information Software Version 2, for collating national health care indicators, and a rollout of the KenyaEMR and International Quality Care Health Management Information Systems, for managing 600 HIV clinics across the country. Following these projects, a modified version of the Open Medical Record System electronic health record was specified and developed to fulfill the clinical and administrative requirements of health care facilities operated by devolved counties in Kenya and to automate the process of collating health care indicators and entering them into the District Health Information Software Version 2 system.
Objective We aimed to present a descriptive case study of the implementation of an open source electronic health record system in public health care facilities in Kenya.
Methods We conducted a landscape review of existing literature concerning eHealth policies and electronic health record development in Kenya. Following initial discussions with the Ministry of Health, the World Health Organization, and implementing partners, we conducted a series of visits to implementing sites to conduct semistructured individual interviews and group discussions with stakeholders to produce a historical case study of the implementation.
Results This case study describes how consultants based in Kenya, working with developers in India and project stakeholders, implemented the new system into several public hospitals in a county in rural Kenya. The implementation process included upgrading the hospital information technology infrastructure, training users, and attempting to garner administrative and clinical buy-in for adoption of the system. The initial deployment was ultimately scaled back due to a complex mix of sociotechnical and administrative issues. Learning from these early challenges, the system is now being redesigned and prepared for deployment in 6 new counties across Kenya.
Conclusions Implementing electronic health record systems is a challenging process in high-income settings. In low-income settings, such as Kenya, open source software may offer some respite from the high costs of software licensing, but the familiar challenges of clinical and administrative buy-in, the need to adequately train users, and the need for the provision of ongoing technical support are common across the North-South divide. Strategies such as creating local support teams, using local development resources, ensuring end user buy-in, and rolling out in smaller facilities before larger hospitals are being incorporated into the project. These are positive developments to help maintain momentum as the project continues. Further integration with existing open source communities could help ongoing development and implementations of the project. We hope this case study will provide some lessons and guidance for other challenging implementations of electronic health record systems as they continue across Africa. PMID:29669709
SensorWeb Hub infrastructure for open access to scientific research data
NASA Astrophysics Data System (ADS)
de Filippis, Tiziana; Rocchi, Leandro; Rapisardi, Elena
2015-04-01
The sharing of research data is a new challenge for the scientific community, which may benefit from a large amount of information to address environmental and sustainability issues in agriculture and urban contexts. A prerequisite for meeting this challenge is an infrastructure that ensures access, management and preservation of data, and technical support for coordinated, harmonious data management that, in the framework of Open Data Policies, encourages reuse and collaboration. The neogeography and citizens-as-sensors approaches highlight that new data sources need a new set of tools and practices to collect, validate, categorize and access "crowdsourced" data, which complement the data sets produced in the scientific field, thus "feeding" the overall data available for analysis and research. As the scientific community embraces collaboration, sharing, access and re-use under an open innovation approach, it must redesign and reshape its data management processes: the technological and cultural innovations enabled by web 2.0 technologies point to a scenario where the sharing of structured, interoperable data becomes the unavoidable building block of a new paradigm of scientific research. In this perspective, the Institute of Biometeorology, CNR, whose aim is to contribute to the sharing and development of research data, has developed the SensorWebHub (SWH) infrastructure to support the scientific activities carried out in several research projects at national and international level. It is designed to manage both mobile and fixed open source meteorological and environmental sensors, in order to integrate the existing agro-meteorological and urban monitoring networks. The proposed architecture uses open source tools to ensure sustainability in the development and deployment of web applications with geographic features and custom analyses, as requested by the different research projects. The SWH components are organized in a typical client-server architecture and interact from the sensing process through to the presentation of results to end-users. The web application enables users to view and analyse the data stored in the GeoDB. The interface follows standard Internet browser specifications, allowing visualization of collected data in different formats (tabular, chart and geographic map). The services for the dissemination of geo-referenced information adopt the OGC specifications. SWH is a bottom-up collaborative initiative to share real-time research data and pave the way for an open innovation approach in scientific research. To date, this framework has been used for several WebGIS applications and web apps for environmental monitoring at different temporal and spatial scales.
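Because SWH adopts OGC specifications for disseminating geo-referenced data, clients can retrieve observations with plain HTTP. The sketch below issues a generic OGC SOS 2.0 GetObservation request with the Python requests library; the endpoint URL and offering identifier are hypothetical, as the abstract does not publish SWH's actual service addresses.

```python
# Sketch: query an OGC Sensor Observation Service for air-temperature data.
import requests

params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "urn:swh:offering:weather",  # hypothetical offering id
    "observedProperty": "air_temperature",
}
resp = requests.get("https://example.org/sos", params=params, timeout=30)
print(resp.status_code, resp.headers.get("Content-Type"))
```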
Implementing a Dynamic Database-Driven Course Using LAMP
ERIC Educational Resources Information Center
Laverty, Joseph Packy; Wood, David; Turchek, John
2011-01-01
This paper documents the formulation of a database driven open source architecture web development course. The design of a web-based curriculum faces many challenges: a) relative emphasis of client and server-side technologies, b) choice of a server-side language, and c) the cost and efficient delivery of a dynamic web development, database-driven…
Ammonia losses from a southern high plains dairy during summer
USDA-ARS?s Scientific Manuscript database
Animal agriculture is a significant source of ammonia (NH3). Cattle excrete a large amount of nitrogen (N); most urinary N is converted to NH3, volatilized and lost to the atmosphere. Open lot dairies on the southern High Plains are a growing industry and face environmental challenges including repo...
ERIC Educational Resources Information Center
Islam, Tofazzal
2011-01-01
This paper examines how this mega-university offers increasing access to cost-effective, equitable and flexible higher education by analyzing data from primary and secondary sources, identifies challenges impacting the continued growth of enrollment in distance education, and outlines opportunities for increasing access to higher education through…
OpenDrop: An Integrated Do-It-Yourself Platform for Personal Use of Biochips
Alistar, Mirela; Gaudenz, Urs
2017-01-01
Biochips, or digital labs-on-chip, are developed with the purpose of being used by laboratory technicians or biologists in laboratories or clinics. In this article, we expand this vision with the goal of enabling everyone, regardless of their expertise, to use biochips for their own personal purposes. We developed OpenDrop, an integrated electromicrofluidic platform that allows users to develop and program their own bio-applications. We address the main challenges that users may encounter: accessibility, bio-protocol design and interaction with microfluidics. OpenDrop consists of a do-it-yourself biochip, an automated software tool with a visual interface and a detailed technique for at-home operation of microfluidics. We report on two years of use of OpenDrop, released as an open-source platform. Our platform attracted a highly diverse user base with participants originating from maker communities, academia and industry. Our findings show that 47% of attempts to replicate OpenDrop were successful, the main challenge remaining the assembly of the device. In terms of usability, the users managed to operate their platforms at home and are working on designing their own bio-applications. Our work provides a step towards a future in which everyone will be able to create microfluidic devices for their personal applications, thereby democratizing parts of health care. PMID:28952524
OpenARC: Extensible OpenACC Compiler Framework for Directive-Based Accelerator Programming Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Seyong; Vetter, Jeffrey S
2014-01-01
Directive-based accelerator programming models such as OpenACC have arisen as an alternative solution for programming emerging Scalable Heterogeneous Computing (SHC) platforms. However, the increased complexity of SHC systems incurs several challenges in terms of portability and productivity. This paper presents an open-source OpenACC compiler, called OpenARC, which serves as an extensible research framework to address those issues in directive-based accelerator programming. This paper explains important design strategies and key compiler transformation techniques needed to implement the reference OpenACC compiler. Moreover, this paper demonstrates the efficacy of OpenARC as a research framework for directive-based programming study, by proposing and implementing OpenACC extensions in the OpenARC framework to 1) support hybrid programming of unified and separate memory and 2) exploit architecture-specific features in an abstract manner. Porting thirteen standard OpenACC programs and three extended OpenACC programs to CUDA GPUs shows that OpenARC performs similarly to a commercial OpenACC compiler, while serving as a high-level research framework.
Crawling The Web for Libre: Selecting, Integrating, Extending and Releasing Open Source Software
NASA Astrophysics Data System (ADS)
Truslove, I.; Duerr, R. E.; Wilcox, H.; Savoie, M.; Lopez, L.; Brandt, M.
2012-12-01
Libre is a project developed by the National Snow and Ice Data Center (NSIDC). Libre is devoted to liberating science data from its traditional constraints of publication, location, and findability. Libre embraces and builds on the notion of making knowledge freely available, and both Creative Commons licensed content and Open Source Software are crucial building blocks for, as well as required deliverable outcomes of the project. One important aspect of the Libre project is to discover cryospheric data published on the internet without prior knowledge of the location or even existence of that data. Inspired by well-known search engines and their underlying web crawling technologies, Libre has explored tools and technologies required to build a search engine tailored to allow users to easily discover geospatial data related to the polar regions. After careful consideration, the Libre team decided to base its web crawling work on the Apache Nutch project (http://nutch.apache.org). Nutch is "an open source web-search software project" written in Java, with good documentation, a significant user base, and an active development community. Nutch was installed and configured to search for the types of data of interest, and the team created plugins to customize the default Nutch behavior to better find and categorize these data feeds. This presentation recounts the Libre team's experiences selecting, using, and extending Nutch, and working with the Nutch user and developer community. We will outline the technical and organizational challenges faced in order to release the project's software as Open Source, and detail the steps actually taken. We distill these experiences into a set of heuristics and recommendations for using, contributing to, and releasing Open Source Software.
DStat: A Versatile, Open-Source Potentiostat for Electroanalysis and Integration.
Dryden, Michael D M; Wheeler, Aaron R
2015-01-01
Most electroanalytical techniques require the precise control of the potentials in an electrochemical cell using a potentiostat. Commercial potentiostats function as "black boxes," giving limited information about their circuitry and behaviour, which can make development of new measurement techniques and integration with other instruments challenging. Recently, a number of lab-built potentiostats have emerged with various design goals, including low manufacturing cost and field-portability, but notably lacking is an accessible potentiostat designed for general lab use, focusing on measurement quality combined with ease of use and versatility. To fill this gap, we introduce DStat (http://microfluidics.utoronto.ca/dstat), an open-source, general-purpose potentiostat for use alone or integrated with other instruments. DStat offers picoampere current measurement capabilities, a compact USB-powered design, and user-friendly cross-platform software. DStat is easy and inexpensive to build, may be modified freely, and achieves good performance at low current levels not accessible to other lab-built instruments. In head-to-head tests, DStat's voltammetric measurements are much more sensitive than those of "CheapStat" (a popular open-source potentiostat described previously), and are comparable to those of a compact commercial "black box" potentiostat. Likewise, in head-to-head tests, DStat's potentiometric precision is similar to that of a commercial pH meter. Most importantly, the versatility of DStat was demonstrated through integration with the open-source DropBot digital microfluidics platform. In sum, we propose that DStat is a valuable contribution to the "open source" movement in analytical science, which is allowing users to adapt their tools to their experiments rather than alter their experiments to be compatible with their tools.
NASA Astrophysics Data System (ADS)
Javed, Kamran; Gouriveau, Rafael; Zerhouni, Noureddine
2017-09-01
Integrating prognostics into a real application requires a certain maturity level, and for this reason there is a lack of success stories about the development of complete Prognostics and Health Management systems. In fact, the maturity of prognostics is closely linked to data and domain-specific entities like modeling. Basically, the prognostics task aims at predicting the degradation of engineering assets. In practice, however, impending failures cannot be predicted precisely, which calls for a thorough understanding of the different sources of uncertainty that affect prognostics. Therefore, different aspects crucial to the prognostics framework, from monitoring data to the remaining useful life of equipment, need to be addressed. To this aim, the paper contributes to the state of the art and taxonomy of prognostics approaches and their application perspectives. In addition, factors for prognostics approach selection are identified, and new case studies at component and system levels are discussed. Moreover, open challenges toward maturity of prognostics under uncertainty are highlighted, and a scheme for an efficient prognostics approach is presented. Finally, the existing challenges for verification and validation of prognostics at different technology readiness levels are discussed with respect to open challenges.
VIVO Open Source Software: Connecting Facilities to Promote Discovery and Further Research.
NASA Astrophysics Data System (ADS)
Gross, M. B.; Rowan, L. R.; Mayernik, M. S.; Daniels, M. D.; Stott, D.; Allison, J.; Maull, K. E.; Krafft, D. B.; Khan, H.
2016-12-01
EarthCollab (http://earthcube.org/group/earthcollab), a National Science Foundation (NSF) EarthCube Building Block project, has adapted an open source semantic web application, VIVO, for use within the earth science domain. EarthCollab is a partnership between UNAVCO, an NSF facility supporting research through geodetic services, the Earth Observing Laboratory (EOL) at the National Center for Atmospheric Research (NCAR), and Cornell University, where VIVO was created to highlight the scholarly output of researchers at universities. Two public sites have been released: Connect UNAVCO (connect.unavco.org) and Arctic Data Connects (vivo.eol.ucar.edu). The core VIVO software and ontology have been extended to work better with concepts necessary for capturing work within UNAVCO's and EOL's province, such as principal investigators for continuous GPS/GNSS stations at UNAVCO and keywords describing cruise datasets at EOL. The sites increase discoverability of large and diverse data archives by linking data with people, research, and field projects. Disambiguation is a major challenge when using VIVO and open data, where "anyone can say anything about anything." Concepts and controlled vocabularies help to build consistent and easily searchable connections within VIVO. We use aspects of subject heading services such as FAST and LOC, as well as AGU and GSA fields of research and subject areas, to reveal connections, especially with VIVO instances at other institutions. VIVO works effectively with persistent IDs, and the projects strive to utilize publication and data DOIs, ORCIDs for people, and ISNI and GRID for organizations. ORCID, an open source project, is very useful for disambiguation and, unlike other identifier systems for people developed by publishers, makes public data available via an API. VIVO utilizes Solr and Freemarker, which are open source search engine and templating technologies, respectively. Additionally, a handful of popular open source libraries and applications are being used in the project, such as D3.js, jQuery, Leaflet, and Elasticsearch. Our implementation of these open source projects within VIVO is available for adaptation by other institutions using VIVO via GitHub (git.io/vG9AJ).
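Since VIVO stores its data as RDF, connections such as person-to-dataset links can be retrieved with SPARQL. The sketch below uses the Python SPARQLWrapper library against a hypothetical endpoint path (the public Connect UNAVCO site may not expose one at this URL); foaf:Person and rdfs:label are standard terms used by the VIVO ontology stack.

```python
# Sketch: list a few people from a VIVO instance via SPARQL.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://connect.unavco.org/sparql")  # assumed endpoint
sparql.setQuery("""
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?person ?name
    WHERE { ?person a foaf:Person ; rdfs:label ?name }
    LIMIT 5
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["name"]["value"])
```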
BioImageXD: an open, general-purpose and high-throughput image-processing platform.
Kankaanpää, Pasi; Paavolainen, Lassi; Tiitta, Silja; Karjalainen, Mikko; Päivärinne, Joacim; Nieminen, Jonna; Marjomäki, Varpu; Heino, Jyrki; White, Daniel J
2012-06-28
BioImageXD puts open-source computer science tools for three-dimensional visualization and analysis into the hands of all researchers, through a user-friendly graphical interface tuned to the needs of biologists. BioImageXD has no restrictive licenses or undisclosed algorithms and enables publication of precise, reproducible and modifiable workflows. It allows simple construction of processing pipelines and should enable biologists to perform challenging analyses of complex processes. We demonstrate its performance in a study of integrin clustering in response to selected inhibitors.
Laajala, Teemu D; Murtojärvi, Mika; Virkki, Arho; Aittokallio, Tero
2018-06-15
Prognostic models are widely used in clinical decision-making, such as risk stratification and tailoring treatment strategies, with the aim of improving patient outcomes while reducing overall healthcare costs. While prognostic models have been adopted into clinical use, benchmarking their performance has been difficult due to a lack of open clinical datasets. The recent DREAM 9.5 Prostate Cancer Challenge carried out an extensive benchmarking of prognostic models for metastatic Castration-Resistant Prostate Cancer (mCRPC), based on multiple cohorts of open clinical trial data. We make available an open-source implementation of the top-performing model, ePCR, along with an extended toolbox for its further re-use and development, and demonstrate how best to apply the implemented model to real-world data cohorts of advanced prostate cancer patients. The open-source R-package ePCR and its reference documentation are available at the Central R Archive Network (CRAN): https://CRAN.R-project.org/package=ePCR. The R-vignette provides step-by-step examples of ePCR usage. Supplementary data are available at Bioinformatics online.
Lai, Chen-Yen; Chien, Chih-Chun
2016-01-01
While batteries offer an electronic source and sink for electronic devices, atomic analogues of source and sink and their theoretical descriptions have been a challenge in cold-atom systems. Here we consider dynamically emerged local potentials as controllable sources and sinks for bosonic atoms. Although a sink potential can collect bosons in equilibrium, indicating its usefulness in the adiabatic limit, sudden switching of the potential exhibits low effectiveness in pushing bosons into it. This is due to conservation of energy and particle number in isolated systems such as cold atoms. By varying the potential depth and interaction strength, the systems can further exhibit averse response, where a deeper emerged potential attracts fewer bosonic atoms into it. To explore possibilities for improving the effectiveness, we investigate what types of system-environment coupling can help bring bosons into a dynamically emerged sink, and a Lindblad operator corresponding to local cooling is found to serve the purpose. PMID:27849034
NASA Astrophysics Data System (ADS)
Baum, R. L.; Coe, J. A.; Godt, J.; Kean, J. W.
2014-12-01
Heavy rainfall during 9 - 13 September 2013 induced about 1100 debris flows in the foothills and mountains of the northern Colorado Front Range. Eye-witness accounts and fire-department records indicate that the greatest landslide activity coincided with the heaviest rainfall on September 12 - 13. Antecedent soil moisture was relatively low, particularly at elevations below 2250 m where many of the debris flows occurred, based on 45 - 125 mm of summer precipitation and the absence of rainfall for about 2 weeks before the storm. Mapping from post-event imagery and field observations indicated that most debris flows initiated as small, shallow landslides. These landslides typically formed in colluvium that consisted of angular clasts in a sandy or silty matrix, depending on the nature of the parent bedrock. Weathered bedrock was partially exposed in the basal surfaces of many of the shallow source areas at depths ranging from 0.2 to 5 m, and source areas commonly occupied less than 500 m2. Although 49% of the source areas occurred in swales and 3% in channels, where convergent flow might have contributed to pore-pressure build-up during the rainfall, 48% of the source areas occurred on open slopes. Upslope contributing areas of most landslides (58%) were small (< 1000 m2), and 78% of the slides occurred on south-facing slopes (90° ≤ aspect ≤ 270°). These observations pose challenges for modeling initiation of the debris flows. Effects of variable soil depth and properties, vegetation, and rainfall must be examined to explain the dominance of debris flows on south-facing slopes. Accounting for the small sizes and mixed swale and open-slope settings of source areas demands new approaches for resolving soil-depth and physical-property variability. The low-moisture initial conditions require consideration of unsaturated-zone effects. Ongoing fieldwork and computational modeling are aimed at addressing these challenges related to initiation of the September 2013 debris flows.
An efficient approach to the deployment of complex open source information systems
Cong, Truong Van Chi; Groeneveld, Eildert
2011-01-01
Complex open source information systems are usually implemented as component-based software to inherit the available functionality of existing software packages developed by third parties. Consequently, the deployment of these systems not only requires the installation of the operating system, the application framework and the configuration of services, but also needs to resolve the dependencies among components. The problem becomes more challenging when the application must be installed and used on different platforms such as Linux and Windows. To address this, an efficient approach using virtualization technology is suggested and discussed in this paper. The approach has been applied in our project to deploy a web-based integrated information system in molecular genetics labs. It is a low-cost solution that benefits both software developers and end-users. PMID:22102770
Do Open Source LMSs Support Personalization? A Comparative Evaluation
NASA Astrophysics Data System (ADS)
Kerkiri, Tania; Paleologou, Angela-Maria
A number of parameters that support LMSs' capabilities for content personalization are presented and substantiated. These parameters constitute critical criteria for an exhaustive investigation of the personalization capabilities of the most popular open source LMSs. Results are comparatively shown and commented upon, thus highlighting a course of conduct for the implementation of new personalization methodologies for these LMSs, aligned with their existing infrastructure, to maintain support of the numerous educational institutions entrusting a major part of their curricula to them. Meanwhile, new capabilities arise from a more efficient description of the existing resources (especially when organized into widely available repositories) that lead to qualitatively advanced learner-oriented courses which would ideally meet the challenge of combining personification of demand and personalization of thematic content at once.
Embracing Open Software Development in Solar Physics
NASA Astrophysics Data System (ADS)
Hughitt, V. K.; Ireland, J.; Christe, S.; Mueller, D.
2012-12-01
We discuss two ongoing software projects in solar physics that have adopted best practices of the open source software community. The first, the Helioviewer Project, is a powerful data visualization tool which includes online and Java interfaces inspired by Google Maps (tm). This effort allows users to find solar features and events of interest, and download the corresponding data. Having found data of interest, the user now has to analyze it. The dominant solar data analysis platform is an open-source library called SolarSoft (SSW). Although SSW itself is open-source, the programming language used is IDL, a proprietary language with licensing costs that are prohibitive for many institutions and individuals. SSW is composed of a collection of related scripts written by missions and individuals for solar data processing and analysis, without any consistent data structures or common interfaces. Further, at the time when SSW was initially developed, many of the best software development practices of today (mirrored and distributed version control, unit testing, continuous integration, etc.) were not standard, and have not since been adopted. The challenges inherent in developing SolarSoft led to a second software project known as SunPy. SunPy is an open-source Python-based library which seeks to create a unified solar data analysis environment, including a number of core datatypes such as Maps, Lightcurves, and Spectra which have consistent interfaces and behaviors. By taking advantage of the large and sophisticated body of scientific software already available in Python (e.g. SciPy, NumPy, Matplotlib), and by adopting many of the best practices refined in open-source software development, SunPy has been able to develop at a very rapid pace while still ensuring a high level of reliability. The Helioviewer Project and SunPy represent two pioneering technologies in solar physics - simple yet flexible data visualization and a powerful, new data analysis environment. We discuss the development of both these efforts and how they are beginning to influence the solar physics community.
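A short example of the unified datatypes mentioned above: SunPy's Map wraps image data and metadata behind one interface, regardless of instrument. This sketch assumes the optional sample-data module is installed (it downloads a small archive on first use).

```python
# Sketch: load a sample AIA image into SunPy's unified Map datatype.
import sunpy.map
import sunpy.data.sample  # fetches small sample files on first import

aia = sunpy.map.Map(sunpy.data.sample.AIA_171_IMAGE)
print(aia.date, aia.instrument)  # metadata via a consistent interface
aia.peek()                       # quick-look plot in solar coordinates
```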
Open Source Digital Image Management Took Us from Raging Rivers to Quiet Waters
ERIC Educational Resources Information Center
Dunlap, Isaac Hunter
2005-01-01
In this article, the author describes his experience when Kathy Nichols contacted him seeking suggestions, recommendations--really anything he could think of--that might help her seize control of a bewildering and rapidly surging torrent of digital image files. The challenges that Nichols described interested him, and he felt they might be…
Books from the Past: An E-Books Project at Culturenet Cymru
ERIC Educational Resources Information Center
Haarhoff, Leith
2005-01-01
Purpose: To describe the open-source solution developed by Culturenet Cymru, for the Welsh Books Council, for presenting digitised books and other printed material online. Design/methodology/approach: The challenges faced in the implementation of a pilot e-book collection of nine out-of-print books is described. Findings: The adoption of a number…
Open source machine-learning algorithms for the prediction of optimal cancer drug therapies.
Huang, Cai; Mezencev, Roman; McDonald, John F; Vannberg, Fredrik
2017-01-01
Precision medicine is a rapidly growing area of modern medical science, and open source machine-learning codes promise to be a critical component for the successful development of standardized and automated analysis of patient data. One important goal of precision cancer medicine is the accurate prediction of optimal drug therapies from the genomic profiles of individual patient tumors. We introduce here an open source software platform that employs a highly versatile support vector machine (SVM) algorithm combined with a standard recursive feature elimination (RFE) approach to predict personalized drug responses from gene expression profiles. Drug-specific models were built using gene expression and drug response data from the National Cancer Institute panel of 60 human cancer cell lines (NCI-60). The models are highly accurate in predicting the drug responsiveness of a variety of cancer cell lines, including those comprising the recent NCI-DREAM Challenge. We demonstrate that predictive accuracy is optimized when the learning dataset utilizes all probe-set expression values from a diversity of cancer cell types without pre-filtering for genes generally considered to be "drivers" of cancer onset/progression. Application of our models to publicly available ovarian cancer (OC) patient gene expression datasets generated predictions consistent with observed responses previously reported in the literature. By making our algorithm "open source", we hope to facilitate its testing in a variety of cancer types and contexts, leading to community-driven improvements and refinements in subsequent applications.
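The SVM-plus-RFE pipeline the paper describes maps directly onto scikit-learn primitives. The sketch below is a generic, hedged reconstruction on synthetic data, not the authors' released code; real use would substitute NCI-60 expression profiles and measured drug responses.

```python
# Sketch: recursive feature elimination around a linear SVM regressor.
import numpy as np
from sklearn.svm import SVR
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))  # e.g. 60 cell lines x 500 probe sets (synthetic)
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=60)  # synthetic drug response

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# Drop 10% of the weakest features per iteration until 50 remain
selector = RFE(SVR(kernel="linear"), n_features_to_select=50, step=0.1)
selector.fit(X_tr, y_tr)
print("held-out R^2:", selector.score(X_te, y_te))
```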
Bagayoko, Cheick-Oumar; Dufour, Jean-Charles; Chaacho, Saad; Bouhaddou, Omar; Fieschi, Marius
2010-04-16
We are currently witnessing a significant increase in use of Open Source tools in the field of health. Our study aims to research the potential of these software packages for developing countries. Our experiment was conducted at the Centre Hospitalier Mere Enfant in Mali. After reviewing several Open Source tools in the field of hospital information systems, Mediboard software was chosen for our study. To ensure the completeness of Mediboard in relation to the functionality required for a hospital information system, its features were compared to those of a well-defined comprehensive record management tool set up at the University Hospital "La Timone" of Marseilles in France. It was then installed on two Linux servers: a first server for testing and validation of different modules, and a second one for the deployed full implementation. After several months of use, we have evaluated the usability aspects of the system, including feedback from end-users through a questionnaire. Initial results showed the potential of Open Source in the field of health IT for developing countries like Mali. Five main modules have been fully implemented: patient administrative and medical records, management of hospital activities, tracking of practitioners' activities, infrastructure management and the billing system. This last component of the system has been fully developed by the local Mali team. The evaluation showed that the system is broadly accepted by all the users who participated in the study. 77% of the participants found the system useful; 85% found it easy; 100% of them believe the system increases the reliability of data. The same proportion encourages the continuation of the experiment and its expansion throughout the hospital. In light of the results, we can conclude that the objective of our study was reached. However, it is important to take into account the recommendations and the challenges discussed here to avoid several potential pitfalls specific to the context of Africa. Our future work will target the full integration of the billing module in Mediboard and an expanded implementation throughout the hospital.
2010-01-01
Background We are currently witnessing a significant increase in use of Open Source tools in the field of health. Our study aims to research the potential of these software packages for developing countries. Our experiment was conducted at the Centre Hospitalier Mere Enfant in Mali. Methods After reviewing several Open Source tools in the field of hospital information systems, Mediboard software was chosen for our study. To ensure the completeness of Mediboard in relation to the functionality required for a hospital information system, its features were compared to those of a well-defined comprehensive record management tool set up at the University Hospital "La Timone" of Marseilles in France. It was then installed on two Linux servers: a first server for testing and validation of different modules, and a second one for the deployed full implementation. After several months of use, we have evaluated the usability aspects of the system including feedback from end-users through a questionnaire. Results Initial results showed the potential of Open Source in the field of health IT for developing countries like Mali. Five main modules have been fully implemented: patient administrative and medical records management of hospital activities, tracking of practitioners' activities, infrastructure management and the billing system. This last component of the system has been fully developed by the local Mali team. The evaluation showed that the system is broadly accepted by all the users who participated in the study. 77% of the participants found the system useful; 85% found it easy; 100% of them believe the system increases the reliability of data. The same proportion encourages the continuation of the experiment and its expansion throughout the hospital. Conclusions In light of the results, we can conclude that the objective of our study was reached. However, it is important to take into account the recommendations and the challenges discussed here to avoid several potential pitfalls specific to the context of Africa. Our future work will target the full integration of the billing module in Mediboard and an expanded implementation throughout the hospital. PMID:20398366
Is Multitask Deep Learning Practical for Pharma?
Ramsundar, Bharath; Liu, Bowen; Wu, Zhenqin; Verras, Andreas; Tudor, Matthew; Sheridan, Robert P; Pande, Vijay
2017-08-28
Multitask deep learning has emerged as a powerful tool for computational drug discovery. However, despite a number of preliminary studies, multitask deep networks have yet to be widely deployed in the pharmaceutical and biotech industries. This lack of acceptance stems from both software difficulties and a lack of understanding of the robustness of multitask deep networks. Our work aims to resolve both of these barriers to adoption. We introduce a high-quality open-source implementation of multitask deep networks as part of the DeepChem open-source platform. Our implementation enables simple Python scripts to construct, fit, and evaluate sophisticated deep models. We use our implementation to analyze the performance of multitask deep networks and related deep models on four collections of pharmaceutical data (three of which have not previously been analyzed in the literature). We split these data sets into train/valid/test using time and neighbor splits to test multitask deep learning performance under challenging conditions. Our results demonstrate that multitask deep networks are surprisingly robust and can offer strong improvement over random forests. Our analysis and open-source implementation in DeepChem provide an argument that multitask deep networks are ready for widespread use in commercial drug discovery.
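The "simple Python scripts" described might look roughly like the sketch below. It assumes a DeepChem-style API (class and argument names vary across DeepChem versions), and the fingerprint features and assay labels are random placeholders.

```python
# Hedged sketch of a multitask model in the DeepChem style; data are random
# placeholders and API names are assumptions that may differ by version.
import numpy as np
import deepchem as dc

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1024))         # e.g. 1024-bit fingerprints (placeholder)
y = rng.integers(0, 2, size=(500, 12))   # 12 related assay endpoints (placeholder)
dataset = dc.data.NumpyDataset(X, y)

# One shared representation with twelve output heads -- the essence of
# multitask learning: related tasks regularize each other.
model = dc.models.MultitaskClassifier(n_tasks=12, n_features=1024,
                                      layer_sizes=[1000])
model.fit(dataset, nb_epoch=10)
metric = dc.metrics.Metric(dc.metrics.roc_auc_score, np.mean)
print(model.evaluate(dataset, [metric]))
```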
Leveraging human oversight and intervention in large-scale parallel processing of open-source data
NASA Astrophysics Data System (ADS)
Casini, Enrico; Suri, Niranjan; Bradshaw, Jeffrey M.
2015-05-01
The popularity of cloud computing along with the increased availability of cheap storage have led to the need to process and transform large volumes of open-source data in parallel. One way to handle such extensive volumes of information properly is to take advantage of distributed computing frameworks like MapReduce. Unfortunately, an entirely automated approach that excludes human intervention is often unpredictable and error prone. Highly accurate data processing and decision-making can be achieved by supporting an automatic process through human collaboration, in a variety of environments such as warfare, cyber security and threat monitoring. Although this mutual participation seems easily exploitable, human-machine collaboration in the field of data analysis presents several challenges. First, due to the asynchronous nature of human intervention, it is necessary to verify that once a correction is made, all the necessary reprocessing is carried out down the chain. Second, the amount of reprocessing often needs to be minimized to make the best use of limited resources. To meet these strict requirements, this paper introduces improvements to an innovative approach for human-machine collaboration in the processing of large amounts of open-source data in parallel.
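The first challenge, propagating a human correction through all dependent processing, amounts to a reachability walk over the stage dependency graph; re-running only the reachable stages also addresses the second challenge of minimizing reprocessing. The sketch below illustrates that idea and is not the authors' system.

```python
# Illustrative sketch: after a human corrects one stage's output, re-run
# only the stages downstream of it in the dependency graph.
from collections import defaultdict, deque

downstream = defaultdict(list)            # stage -> stages consuming its output

def add_dependency(src, dst):
    downstream[src].append(dst)

def stages_to_rerun(corrected_stage):
    """Breadth-first walk from the corrected stage; everything reached is stale."""
    seen, queue = set(), deque([corrected_stage])
    while queue:
        for nxt in downstream[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

add_dependency("extract", "entity-resolution")    # toy three-stage pipeline
add_dependency("entity-resolution", "report")
print(stages_to_rerun("extract"))                 # {'entity-resolution', 'report'}
```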
Search Analytics: Automated Learning, Analysis, and Search with Open Source
NASA Astrophysics Data System (ADS)
Hundman, K.; Mattmann, C. A.; Hyon, J.; Ramirez, P.
2016-12-01
The sheer volume of unstructured scientific data makes comprehensive human analysis impossible, resulting in missed opportunities to identify relationships, trends, gaps, and outliers. As the open source community continues to grow, tools like Apache Tika, Apache Solr, Stanford's DeepDive, and Data-Driven Documents (D3) can help address this challenge. With a focus on journal publications and conference abstracts, often in the form of PDF and Microsoft Office documents, we've initiated an exploratory NASA Advanced Concepts project aiming to use the aforementioned open source text analytics tools to build a data-driven justification for the HyspIRI Decadal Survey mission. We call this capability Search Analytics, and it fuses and augments these open source tools to enable the automatic discovery and extraction of salient information. In the case of HyspIRI, a hyperspectral infrared imager mission, key findings resulted from the extractions and visualizations of relationships from thousands of unstructured scientific documents. The relationships include links between satellites (e.g. Landsat 8), domain-specific measurements (e.g. spectral coverage) and subjects (e.g. invasive species). Using the above open source tools, Search Analytics mined and characterized a corpus of information that would be infeasible for a human to process. More broadly, Search Analytics offers insights into various scientific and commercial applications enabled through missions and instrumentation with specific technical capabilities. For example, the following phrases were extracted in close proximity within a publication: "In this study, hyperspectral images…with high spatial resolution (1 m) were analyzed to detect cutleaf teasel in two areas. …Classification of cutleaf teasel reached a user's accuracy of 82 to 84%." Without reading a single paper, we can use Search Analytics to automatically identify that a 1 m spatial resolution provides a cutleaf teasel detection user's accuracy of 82-84%, which could have tangible, direct downstream implications for crop protection. Automatically assimilating this information expedites and supplements human analysis, and, ultimately, Search Analytics and its foundation of open source tools will result in more efficient scientific investment and research.
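A hedged sketch of the Tika-to-Solr portion of such a pipeline: Apache Tika extracts text and metadata from a PDF, and the result is posted to a local Solr index. The Solr URL, core name ("docs"), and input file are illustrative assumptions, not the project's configuration.

```python
# Hypothetical extract-and-index step using the tika package and Solr's
# JSON document endpoint; names and URLs are assumptions, not the project's.
import requests
from tika import parser    # pip install tika; launches a local Tika server

parsed = parser.from_file("hyspiri_abstract.pdf")    # hypothetical input file
doc = {
    "id": "hyspiri_abstract.pdf",
    "title": (parsed.get("metadata") or {}).get("title", ""),
    "content": parsed.get("content") or "",
}
requests.post("http://localhost:8983/solr/docs/update/json/docs?commit=true",
              json=doc, timeout=30)
```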
Muinga, Naomi; Magare, Steve; Monda, Jonathan; Kamau, Onesmus; Houston, Stuart; Fraser, Hamish; Powell, John; English, Mike; Paton, Chris
2018-04-18
The Kenyan government, working with international partners and local organizations, has developed an eHealth strategy, specified standards and guidelines for electronic health record adoption in public hospitals, and implemented two major health information technology projects: District Health Information Software Version 2, for collating national health care indicators, and a rollout of the KenyaEMR and International Quality Care Health Management Information Systems, for managing 600 HIV clinics across the country. Following these projects, a modified version of the Open Medical Record System electronic health record was specified and developed to fulfill the clinical and administrative requirements of health care facilities operated by devolved counties in Kenya and to automate the process of collating health care indicators and entering them into the District Health Information Software Version 2 system. We aimed to present a descriptive case study of the implementation of an open source electronic health record system in public health care facilities in Kenya. We conducted a landscape review of existing literature concerning eHealth policies and electronic health record development in Kenya. Following initial discussions with the Ministry of Health, the World Health Organization, and implementing partners, we conducted a series of visits to implementing sites to conduct semistructured individual interviews and group discussions with stakeholders to produce a historical case study of the implementation. This case study describes how consultants based in Kenya, working with developers in India and project stakeholders, implemented the new system into several public hospitals in a county in rural Kenya. The implementation process included upgrading the hospital information technology infrastructure, training users, and attempting to garner administrative and clinical buy-in for adoption of the system. The initial deployment was ultimately scaled back due to a complex mix of sociotechnical and administrative issues. Learning from these early challenges, the system is now being redesigned and prepared for deployment in 6 new counties across Kenya. Implementing electronic health record systems is a challenging process in high-income settings. In low-income settings, such as Kenya, open source software may offer some respite from the high costs of software licensing, but the familiar challenges of clinical and administrative buy-in, the need to adequately train users, and the need for the provision of ongoing technical support are common across the North-South divide. Strategies such as creating local support teams, using local development resources, ensuring end user buy-in, and rolling out in smaller facilities before larger hospitals are being incorporated into the project. These are positive developments to help maintain momentum as the project continues. Further integration with existing open source communities could help ongoing development and implementations of the project. We hope this case study will provide some lessons and guidance for other challenging implementations of electronic health record systems as they continue across Africa. ©Naomi Muinga, Steve Magare, Jonathan Monda, Onesmus Kamau, Stuart Houston, Hamish Fraser, John Powell, Mike English, Chris Paton. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 18.04.2018.
DStat: A Versatile, Open-Source Potentiostat for Electroanalysis and Integration
Dryden, Michael D. M.; Wheeler, Aaron R.
2015-01-01
Most electroanalytical techniques require the precise control of the potentials in an electrochemical cell using a potentiostat. Commercial potentiostats function as “black boxes,” giving limited information about their circuitry and behaviour which can make development of new measurement techniques and integration with other instruments challenging. Recently, a number of lab-built potentiostats have emerged with various design goals including low manufacturing cost and field-portability, but notably lacking is an accessible potentiostat designed for general lab use, focusing on measurement quality combined with ease of use and versatility. To fill this gap, we introduce DStat (http://microfluidics.utoronto.ca/dstat), an open-source, general-purpose potentiostat for use alone or integrated with other instruments. DStat offers picoampere current measurement capabilities, a compact USB-powered design, and user-friendly cross-platform software. DStat is easy and inexpensive to build, may be modified freely, and achieves good performance at low current levels not accessible to other lab-built instruments. In head-to-head tests, DStat’s voltammetric measurements are much more sensitive than those of “CheapStat” (a popular open-source potentiostat described previously), and are comparable to those of a compact commercial “black box” potentiostat. Likewise, in head-to-head tests, DStat’s potentiometric precision is similar to that of a commercial pH meter. Most importantly, the versatility of DStat was demonstrated through integration with the open-source DropBot digital microfluidics platform. In sum, we propose that DStat is a valuable contribution to the “open source” movement in analytical science, which is allowing users to adapt their tools to their experiments rather than alter their experiments to be compatible with their tools. PMID:26510100
JHelioviewer: Open-Source Software for Discovery and Image Access in the Petabyte Age
NASA Astrophysics Data System (ADS)
Mueller, D.; Dimitoglou, G.; Garcia Ortiz, J.; Langenberg, M.; Nuhn, M.; Dau, A.; Pagel, S.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.
2011-12-01
The unprecedented torrent of data returned by the Solar Dynamics Observatory is both a blessing and a barrier: a blessing for making available data with significantly higher spatial and temporal resolution, but a barrier for scientists to access, browse and analyze them. With such a staggering data volume, the data are accessible only from a few repositories, and users have to deal with data sets that are effectively immobile and practically difficult to download. From a scientist's perspective this poses three challenges: accessing, browsing and finding interesting data while avoiding the proverbial search for a needle in a haystack. To address these challenges, we have developed JHelioviewer, an open-source visualization software that lets users browse large data volumes both as still images and movies. We did so by deploying an efficient image encoding, storage, and dissemination solution using the JPEG 2000 standard. This solution enables users to access remote images at different resolution levels as a single data stream. Users can view, manipulate, pan, zoom, and overlay JPEG 2000 compressed data quickly, without severe network bandwidth penalties. Besides viewing data, the browser provides third-party metadata and event catalog integration to quickly locate data of interest, as well as an interface to the Virtual Solar Observatory to download science-quality data. As part of the ESA/NASA Helioviewer Project, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and provides an extensible and customizable open-source platform for the scientific community. In addition, the easy-to-use graphical user interface enables the general public and educators to access, enjoy and reuse data from space missions without barriers.
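The key JPEG 2000 property the project exploits, decoding a reduced-resolution view without fetching or decoding the full image, can be illustrated with the open-source glymur library; the file name below is a placeholder, not Helioviewer data.

```python
# Hedged sketch of multi-resolution JPEG 2000 access (glymur), illustrating
# the property JHelioviewer builds on; "aia_171.jp2" is a hypothetical file.
import glymur

jp2 = glymur.Jp2k("aia_171.jp2")   # hypothetical solar JPEG 2000 image
thumbnail = jp2[::8, ::8]          # decode only a 1/8-resolution view
full = jp2[:]                      # full-resolution decode when needed
print(thumbnail.shape, full.shape)
```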
Rahman, Mahabubur; Watabe, Hiroshi
2018-05-01
Molecular imaging serves as an important tool for researchers and clinicians to visualize and investigate complex biochemical phenomena using specialized instruments; these instruments are either used individually or in combination with targeted imaging agents to obtain images related to specific diseases with high sensitivity, specificity, and signal-to-noise ratios. However, molecular imaging, which is a multidisciplinary research field, faces several challenges, including the integration of imaging informatics with bioinformatics and medical informatics, the requirement for reliable and robust image analysis algorithms, effective quality control of imaging facilities, and challenges related to individualized disease mapping, data sharing, software architecture, and knowledge management. As a cost-effective and open-source approach to addressing these challenges, we developed a flexible, transparent, and secure infrastructure, named MIRA (Molecular Imaging Repository and Analysis), built primarily with the Python programming language and a MySQL relational database system deployed on a Linux server. MIRA is designed with a centralized image archiving infrastructure and information database so that a multicenter collaborative informatics platform can be built. The capability of dealing with metadata, normalizing image file formats, and storing and viewing different types of documents and multimedia files makes MIRA considerably flexible. With features like logging, auditing, commenting, sharing, and searching, MIRA is useful as an Electronic Laboratory Notebook for effective knowledge management. In addition, the centralized approach allows on-the-fly access to all of MIRA's features remotely through any web browser. Furthermore, the open-source approach provides the opportunity for sustainable continued development. MIRA offers an infrastructure that can be used as a cross-boundary collaborative molecular imaging research platform to accelerate advances in cancer diagnosis and therapeutics.
Innovative Retrofit Insulation Strategies for Concrete Masonry Foundations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huelman, P.; Goldberg, L.; Jacobson, R.
Basements in climate zones 6 and 7 can account for a significant fraction of a home's total heat loss when fully conditioned. Such foundations are also a source of moisture, with convection in open block cavities redistributing water from the base of the wall, particularly during the heating season. Even when block cavities are capped, the cold foundation concrete can act as a moisture source for wood rim joist components that are in contact with it. Because below-grade basements are increasingly used for habitable space, cold foundation walls pose challenges for moisture control, energy use, and occupant comfort.
ERIC Educational Resources Information Center
Butts, Stefani A.; Kayukwa, Annette; Langlie, Jake; Rodriguez, Violeta J.; Alcaide, Maria L.; Chitalu, Ndashi; Weiss, Stephen M.; Jones, Deborah L.
2018-01-01
In sub-Saharan Africa, young women are at the highest risk of HIV infection. Comprehensive sexuality education and open parent-child communication about sex have been shown to mitigate risky sexual practices associated with HIV. This study aimed to identify sources of HIV prevention knowledge among young women aged 10-14 years and community-based…
Executing SPARQL Queries over the Web of Linked Data
NASA Astrophysics Data System (ADS)
Hartig, Olaf; Bizer, Christian; Freytag, Johann-Christoph
The Web of Linked Data forms a single, globally distributed dataspace. Due to the openness of this dataspace, it is not possible to know in advance all data sources that might be relevant for query answering. This openness poses a new challenge that is not addressed by traditional research on federated query processing. In this paper we present an approach to execute SPARQL queries over the Web of Linked Data. The main idea of our approach is to discover data that might be relevant for answering a query during the query execution itself. This discovery is driven by following RDF links between data sources based on URIs in the query and in partial results. The URIs are resolved over the HTTP protocol into RDF data which is continuously added to the queried dataset. This paper describes concepts and algorithms to implement our approach using an iterator-based pipeline. We introduce a formalization of the pipelining approach and show that classical iterators may cause blocking due to the latency of HTTP requests. To avoid blocking, we propose an extension of the iterator paradigm. The evaluation of our approach shows its strengths as well as the challenges that remain.
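The core idea, discovering data during execution by dereferencing URIs found in partial results, can be sketched with rdflib; this illustrates link traversal in general, not the authors' iterator pipeline, and the seed URI is simply a well-known Linked Data example.

```python
# Hedged sketch of link-traversal query execution with rdflib: dereference
# URIs from partial results over HTTP and grow the queried dataset.
from rdflib import Graph, URIRef

g = Graph()
seed = URIRef("http://dbpedia.org/resource/Berlin")  # example Linked Data URI
g.parse(seed)                       # resolve the URI, add the returned triples

# Follow a few RDF links found in partial results (list() avoids mutating
# the graph while iterating). Each g.parse() is an HTTP request whose
# latency can block classical iterators -- the problem the paper addresses.
for obj in list(g.objects(seed, None))[:5]:
    if isinstance(obj, URIRef):
        try:
            g.parse(obj)
        except Exception:
            pass                    # unreachable or non-RDF sources are skipped
print(len(g), "triples after one round of link traversal")
```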
Wang, Hongzhi; Yushkevich, Paul A.
2013-01-01
Label fusion based multi-atlas segmentation has proven to be one of the most competitive techniques for medical image segmentation. This technique transfers segmentations from expert-labeled images, called atlases, to a novel image using deformable image registration. Errors produced by label transfer are further reduced by label fusion, which combines the results produced by all atlases into a consensus solution. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity is a simple and highly effective label fusion technique. However, one limitation of most weighted voting methods is that the weights are computed independently for each atlas, without taking into account the fact that different atlases may produce similar label errors. To address this problem, we recently developed the joint label fusion technique and the corrective learning technique, which won first place in the 2012 MICCAI Multi-Atlas Labeling Challenge and was one of the top performers in the 2013 MICCAI Segmentation: Algorithms, Theory and Applications (SATA) challenge. To make our techniques more accessible to the scientific research community, we describe an Insight Toolkit (ITK)-based open source implementation of our label fusion methods. Our implementation extends our methods to work with multi-modality imaging data and is more suitable for segmentation problems with multiple labels. We demonstrate the usage of our tools by applying them to the 2012 MICCAI Multi-Atlas Labeling Challenge brain image dataset and the 2013 SATA challenge canine leg image dataset. We report the best results on these two datasets so far. PMID:24319427
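For readers unfamiliar with the technique, the textbook form of spatially weighted voting (the label-fusion family the toolkit implements) can be written in a few lines of numpy; this sketch is not the ITK code itself and the toy arrays are placeholders.

```python
# Hedged numpy sketch of spatially weighted voting for label fusion.
import numpy as np

def weighted_voting(atlas_labels, atlas_weights, n_labels):
    """atlas_labels: (n_atlases, *shape) transferred labels per atlas.
    atlas_weights: (n_atlases, *shape) spatially varying weights, e.g.
    derived from atlas-target intensity similarity."""
    votes = np.zeros((n_labels,) + atlas_labels.shape[1:])
    for lab in range(n_labels):
        votes[lab] = ((atlas_labels == lab) * atlas_weights).sum(axis=0)
    return votes.argmax(axis=0)        # consensus segmentation

labels = np.random.randint(0, 3, size=(4, 16, 16))    # 4 atlases, toy image
weights = np.random.rand(4, 16, 16)                   # toy similarity weights
print(weighted_voting(labels, weights, n_labels=3).shape)
```

Joint label fusion extends this scheme by choosing the weights jointly across atlases, so that atlases making correlated label errors are down-weighted rather than counted repeatedly.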
NASA World Wind: Infrastructure for Spatial Data
NASA Technical Reports Server (NTRS)
Hogan, Patrick
2011-01-01
The world has great need for analysis of Earth observation data, be it for climate change, carbon monitoring, disaster response, national defense or simply local resource management. To best provide for spatial and time-dependent information analysis, the world benefits from an open standards and open source infrastructure for spatial data. In the spirit of NASA's motto "for the benefit of all", NASA invites the world community to collaboratively advance this core technology. The World Wind infrastructure for spatial data both unites the world community and challenges it to develop innovative solutions for analyzing spatial data, while also allowing absolute command and control over any respective information exchange medium.
«Interventional Neuroradiology: a Neuroscience sub-specialty?»
Rodesch, Georges; Picard, Luc; Berenstein, Alex; Biondi, Alessandra; Bracard, Serge; Choi, In Sup; Feng, Ling; Hyogo, Toshio; LeFeuvre, David; Leonardi, Marco; Mayer, Thomas; Miyashi, Shigeru; Muto, Mario; Piske, Ronie; Pongpech, Sirintara; Reul, Jurgen; Söderman, Michael; Suh, Dae Chul; Tampieri, Donatella; Taylor, Allan; Terbrugge, Karel; Valavanis, Anton; van den Berg, René
2013-01-01
Summary Interventional Neuroradiology (INR) is not bound by the classical limits of a speciality, and is not restricted by standard formats of teaching and education. Open and naturally linked towards neurosciences, INR has become a unique source of novel ideas for research, development and progress allowing new and improved approaches to challenging pathologies resulting in better anatomo-clinical results. Opening INR to Neurosciences is the best way to keep it alive and growing. Anchored in Neuroradiology, at the crossroad of neurosciences, INR will further participate to progress and innovation as it has often been in the past. PMID:24355160
A Tale of Two Observing Systems: Interoperability in the World of Microsoft Windows
NASA Astrophysics Data System (ADS)
Babin, B. L.; Hu, L.
2008-12-01
Louisiana Universities Marine Consortium's (LUMCON) and Dauphin Island Sea Lab's (DISL) Environmental Monitoring Systems provide a unified coastal ocean observing system. These two systems are mirrored to maintain autonomy while offering an integrated data sharing environment. Both systems collect data via Campbell Scientific data loggers, store the data in Microsoft SQL servers, and disseminate the data in real-time on the World Wide Web via Microsoft Internet Information Servers and Active Server Pages (ASP). The utilization of Microsoft Windows technologies presented many challenges to these observing systems as open source tools for interoperability grow. The current open source tools often require the installation of additional software. In order to make data available through common standards formats, "home grown" software has been developed. One example of this is the development of software to generate XML files for transmission to the National Data Buoy Center (NDBC). OOSTethys partners develop, test and implement easy-to-use, open-source, OGC-compliant software, and have created a working prototype of networked, semantically interoperable, real-time data systems. Partnering with OOSTethys, we are developing a cookbook to implement OGC web services. The implementation will be written in ASP, will run in a Microsoft operating system environment, and will serve data via Sensor Observation Services (SOS). This cookbook will give observing systems running Microsoft Windows the tools to easily participate in the Open Geospatial Consortium (OGC) Oceans Interoperability Experiment (OCEANS IE).
Harvest: a web-based biomedical data discovery and reporting application development platform.
Italia, Michael J; Pennington, Jeffrey W; Ruth, Byron; Wrazien, Stacey; Loutrel, Jennifer G; Crenshaw, E Bryan; Miller, Jeffrey; White, Peter S
2013-01-01
Biomedical researchers share a common challenge of making complex data understandable and accessible. This need is increasingly acute as investigators seek opportunities for discovery amidst an exponential growth in the volume and complexity of laboratory and clinical data. To address this need, we developed Harvest, an open source framework that provides a set of modular components to aid the rapid development and deployment of custom data discovery software applications. Harvest incorporates visual representations of multidimensional data types in an intuitive, web-based interface that promotes a real-time, iterative approach to exploring complex clinical and experimental data. The Harvest architecture capitalizes on standards-based, open source technologies to address multiple functional needs critical to a research and development environment, including domain-specific data modeling, abstraction of complex data models, and a customizable web client.
Ising Processing Units: Potential and Challenges for Discrete Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coffrin, Carleton James; Nagarajan, Harsha; Bent, Russell Whitford
The recent emergence of novel computational devices, such as adiabatic quantum computers, CMOS annealers, and optical parametric oscillators, presents new opportunities for hybrid-optimization algorithms that leverage these kinds of specialized hardware. In this work, we propose the idea of an Ising processing unit as a computational abstraction for these emerging tools. Challenges involved in using and benchmarking these devices are presented, and open-source software tools are proposed to address some of these challenges. The proposed benchmarking tools and methodology are demonstrated by conducting a baseline study of established solution methods on a D-Wave 2X adiabatic quantum computer, one example of a commercially available Ising processing unit.
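For readers unfamiliar with the abstraction, an Ising processing unit minimizes the energy of an Ising model. The toy sketch below evaluates that energy and solves a three-spin instance by exhaustive search; it merely stands in for the specialized hardware and is not the proposed benchmarking tooling.

```python
# Toy Ising instance: E(s) = sum_i h_i*s_i + sum_(i,j) J_ij*s_i*s_j,
# with spins s_i in {-1, +1}; field and coupling values are made up.
import itertools

h = {0: 0.5, 1: -0.3, 2: 0.1}                  # local fields (toy values)
J = {(0, 1): -1.0, (1, 2): 0.8, (0, 2): 0.2}   # couplings (toy values)

def energy(s):
    return (sum(h[i] * s[i] for i in h)
            + sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()))

# Exhaustive search is fine for 3 spins; real IPUs tackle thousands.
best = min((dict(zip(h, bits))
            for bits in itertools.product((-1, 1), repeat=len(h))),
           key=energy)
print(best, energy(best))
```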
Atluri, Sravya; Frehlich, Matthew; Mei, Ye; Garcia Dominguez, Luis; Rogasch, Nigel C.; Wong, Willy; Daskalakis, Zafiris J.; Farzan, Faranak
2016-01-01
Concurrent recording of electroencephalography (EEG) during transcranial magnetic stimulation (TMS) is an emerging and powerful tool for studying brain health and function. Despite a growing interest in adaptation of TMS-EEG across neuroscience disciplines, its widespread utility is limited by signal processing challenges. These challenges arise due to the nature of TMS and the sensitivity of EEG to artifacts that often mask TMS-evoked potentials (TEPs). With an increase in the complexity of data processing methods and a growing interest in multi-site data integration, analysis of TMS-EEG data requires the development of a standardized method to recover TEPs from various sources of artifacts. This article introduces TMSEEG, an open-source MATLAB application comprised of multiple algorithms organized to facilitate a step-by-step procedure for TMS-EEG signal processing. Using a modular design and interactive graphical user interface (GUI), this toolbox aims to streamline TMS-EEG signal processing for both novice and experienced users. Specifically, TMSEEG provides: (i) targeted removal of TMS-induced and general EEG artifacts; (ii) a step-by-step modular workflow with flexibility to modify existing algorithms and add customized algorithms; (iii) a comprehensive display and quantification of artifacts; (iv) quality control check points with visual feedback of TEPs throughout the data processing workflow; and (v) capability to label and store a database of artifacts. In addition to these features, the software architecture of TMSEEG ensures minimal user effort in initial setup and configuration of parameters for each processing step. This is partly accomplished through a close integration with EEGLAB, a widely used open-source toolbox for EEG signal processing. In this article, we introduce TMSEEG, validate its features and demonstrate its application in extracting TEPs across several single- and multi-pulse TMS protocols. As the first open-source GUI-based pipeline for TMS-EEG signal processing, this toolbox intends to promote the widespread utility and standardization of an emerging technology in brain research. PMID:27774054
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthew Ellis; Derek Gaston; Benoit Forget
In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two-dimensional 17x17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to unstructured finite element meshes used in MOOSE. The two-dimensional PWR fuel assembly case also demonstrates that for a simplified model the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.
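The Functional Expansion Tally idea can be sketched briefly: the Monte Carlo code tallies Legendre coefficients of a spatial distribution, and a smooth reconstruction is then evaluated wherever the finite-element mesh needs it. The coefficients below are placeholders, not OpenMC output.

```python
# Hedged sketch of reconstructing a distribution from Legendre FET coefficients.
import numpy as np
from numpy.polynomial.legendre import legval

a = np.array([1.0, 0.0, -0.25, 0.0, 0.05])   # tallied coefficients (placeholder)

# Orthogonality normalization (2n+1)/2 on [-1, 1] so the series sums to f(x).
scaled = a * (2 * np.arange(len(a)) + 1) / 2.0

x = np.linspace(-1, 1, 201)      # e.g. axial positions of mesh nodes
profile = legval(x, scaled)      # smooth pin power profile on the mesh
print(profile.max())
```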
Girstmair, Johannes; Zakrzewski, Anne; Lapraz, François; Handberg-Thorsager, Mette; Tomancak, Pavel; Pitrone, Peter Gabriel; Simpson, Fraser; Telford, Maximilian J
2016-06-30
Selective plane illumination microscopy (SPIM, a type of light-sheet microscopy) involves focusing a thin sheet of laser light through a specimen at right angles to the objective lens. As only the thin section of the specimen at the focal plane of the lens is illuminated, out-of-focus light is naturally absent and toxicity due to light (phototoxicity) is greatly reduced, enabling longer-term live imaging. OpenSPIM is an open access platform (Pitrone et al. 2013 and OpenSPIM.org) created to give new users step-by-step instructions on building a basic configuration of a SPIM microscope, which can in principle be adapted and upgraded to each laboratory's own requirements and budget. Here we describe our own experience with the process of designing, building, configuring and using an OpenSPIM for our research into the early development of the polyclad flatworm Maritigrella crozieri - a non-model animal. Our OpenSPIM builds on the standard design with the addition of two-colour laser illumination for simultaneous detection of two probes/molecules and dual-sided illumination, which provides more even signal intensity across a specimen. Our OpenSPIM provides high-resolution 3D images and time-lapse recordings, and we demonstrate the use of two-colour lasers and the benefits of two-colour dual-sided imaging. We used our microscope to study the development of the embryo of the polyclad flatworm M. crozieri. The capabilities of our microscope are demonstrated by our ability to record the stereotypical spiral cleavage pattern of M. crozieri with high-speed multi-view time-lapse imaging. 3D and 4D (3D + time) reconstruction of early development from these data is possible using image registration and deconvolution tools provided as part of the open source Fiji platform. We discuss our findings on the pros and cons of a self-built microscope. We conclude that home-built microscopes, such as an OpenSPIM, together with the available open source software, such as MicroManager and Fiji, make SPIM accessible to anyone interested in having continuous access to their own light-sheet microscope. However, building an OpenSPIM is not without challenges, and an open access microscope is a worthwhile, if significant, investment of time and money. Multi-view 4D microscopy is more challenging than we had expected. We hope that our experience gained during this project will help future OpenSPIM users with similar ambitions.
Blattmann, Peter; Heusel, Moritz; Aebersold, Ruedi
2016-01-01
SWATH-MS is an acquisition and analysis technique of targeted proteomics that enables measuring several thousand proteins with high reproducibility and accuracy across many samples. OpenSWATH is popular open-source software for peptide identification and quantification from SWATH-MS data. For downstream statistical and quantitative analysis there exist different tools such as MSstats, mapDIA and aLFQ. However, the transfer of data from OpenSWATH to the downstream statistical tools is currently technically challenging. Here we introduce the R/Bioconductor package SWATH2stats, which allows convenient processing of the data into a format directly readable by the downstream analysis tools. In addition, SWATH2stats allows annotation, analyzing the variation and the reproducibility of the measurements, FDR estimation, and advanced filtering before submitting the processed data to downstream tools. These functionalities are important to quickly analyze the quality of the SWATH-MS data. Hence, SWATH2stats is a new open-source tool that summarizes several practical functionalities for analyzing, processing, and converting SWATH-MS data and thus facilitates the efficient analysis of large-scale SWATH/DIA datasets.
Leveraging Open Standards and Technologies to Enhance Community Access to Earth Science Lidar Data
NASA Astrophysics Data System (ADS)
Crosby, C. J.; Nandigam, V.; Krishnan, S.; Cowart, C.; Baru, C.; Arrowsmith, R.
2011-12-01
Lidar (Light Detection and Ranging) data, collected from space, airborne and terrestrial platforms, have emerged as an invaluable tool for a variety of Earth science applications ranging from ice sheet monitoring to modeling of earth surface processes. However, lidar data present a unique suite of challenges from the perspective of building cyberinfrastructure systems that enable the scientific community to access these valuable research datasets. Lidar data are typically characterized by millions to billions of individual measurements of x,y,z position plus attributes; these "raw" data are also often accompanied by derived raster products and are frequently terabytes in size. As a relatively new and rapidly evolving data collection technology, relevant open data standards and software projects are immature compared to those for other remote sensing platforms. The NSF-funded OpenTopography Facility project has developed an online lidar data access and processing system that co-locates data with on-demand processing tools to enable users to access both raw point cloud data and custom derived products and visualizations. OpenTopography is built on a Service Oriented Architecture (SOA) in which applications and data resources are deployed as standards-compliant (XML and SOAP) Web services with the open source Opal Toolkit. To develop the underlying applications for data access, filtering and conversion, and various processing tasks, OpenTopography has heavily leveraged existing open source software efforts for both lidar and raster data. Operating on the de facto LAS binary point cloud format (maintained by ASPRS), the open source libLAS and LASlib libraries provide OpenTopography's data ingestion, query and translation capabilities. Similarly, raster data manipulation is performed through a suite of services built on the Geospatial Data Abstraction Library (GDAL). OpenTopography has also developed its own algorithm for high-performance gridding of lidar point cloud data, Points2Grid, and has released the code as an open source project. An emerging conversation that the lidar community and OpenTopography are actively engaged in is the need for open, community-supported standards and metadata for both full waveform and terrestrial (waveform and discrete return) lidar data. Further, given the immature nature of many lidar data archives and limited online access to public domain data, there is an opportunity to develop interoperable data catalogs based on an open standard such as the OGC CSW specification to facilitate discovery of and access to Earth science oriented lidar data.
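The gridding step can be illustrated with a short, hedged sketch in the spirit of Points2Grid (the project's high-performance code is separate): read a LAS tile with the open-source laspy library and average the z values falling in each cell. The file name and cell size are placeholders.

```python
# Hedged sketch of lidar point-cloud gridding; "tile.las" is a placeholder.
import numpy as np
import laspy                       # open-source LAS reader (laspy >= 2.0 API)

las = laspy.read("tile.las")
x, y, z = np.asarray(las.x), np.asarray(las.y), np.asarray(las.z)

res = 1.0                                      # 1 m grid cells
xi = ((x - x.min()) / res).astype(int)
yi = ((y - y.min()) / res).astype(int)
shape = (yi.max() + 1, xi.max() + 1)

sums, counts = np.zeros(shape), np.zeros(shape)
np.add.at(sums, (yi, xi), z)                   # accumulate z per cell
np.add.at(counts, (yi, xi), 1)
dem = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
print(dem.shape)
```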
A Shared Infrastructure for Federated Search Across Distributed Scientific Metadata Catalogs
NASA Astrophysics Data System (ADS)
Reed, S. A.; Truslove, I.; Billingsley, B. W.; Grauch, A.; Harper, D.; Kovarik, J.; Lopez, L.; Liu, M.; Brandt, M.
2013-12-01
The vast amount of science metadata can be overwhelming and highly complex. Comprehensive analysis and sharing of metadata is difficult since institutions often publish to their own repositories. There are many disjoint standards used for publishing scientific data, making it difficult to discover and share information from different sources. Services that publish metadata catalogs often have different protocols, formats, and semantics. The research community is limited by the exclusivity of separate metadata catalogs, and thus it is desirable to have federated search interfaces capable of unified search queries across multiple sources. Aggregation of metadata catalogs also enables users to critique metadata more rigorously. With these motivations in mind, the National Snow and Ice Data Center (NSIDC) and the Advanced Cooperative Arctic Data and Information Service (ACADIS) implemented two search interfaces for the community. Both the NSIDC Search and the ACADIS Arctic Data Explorer (ADE) use a common infrastructure, which keeps maintenance costs low. The search clients are designed to make OpenSearch requests against Solr, an Open Source search platform. Solr applies indexes to specific fields of the metadata, which in this instance optimizes queries containing keywords, spatial bounds and temporal ranges. NSIDC metadata is reused by both search interfaces, but the ADE also brokers additional sources. Users can quickly find relevant metadata with minimal effort, which ultimately lowers research costs. This presentation will highlight the reuse of data and code between NSIDC and ACADIS, discuss challenges and milestones for each project, and identify the creation and use of Open Source libraries.
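A hedged sketch of the kind of Solr request such a search client issues is shown below; the host, collection name, and field names are illustrative assumptions, not the NSIDC or ADE schema.

```python
# Hypothetical keyword + temporal-filter query against a local Solr index.
import requests

params = {
    "q": "sea ice extent",                                   # keyword query
    "fq": ["temporal_start:[2000-01-01T00:00:00Z TO *]"],    # temporal filter
    "wt": "json",
    "rows": 10,
}
resp = requests.get("http://localhost:8983/solr/metadata/select",
                    params=params, timeout=30)
for doc in resp.json()["response"]["docs"]:
    print(doc.get("title"))
```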
NASA Astrophysics Data System (ADS)
Friedrich, J.; Kressig, A.; Van Groenou, S.; McCormick, C.
2017-12-01
Challenge The lack of transparent, accessible, and centralized power sector data inhibits research on the impact of the global power sector. Information gaps for citizens, analysts, and decision makers worldwide create barriers to sustainable development efforts. The need for transparent, accessible, and centralized information is especially important to enhance the commitments outlined in the recently adopted Paris Agreement and Sustainable Development Goals. Offer Power Watch will address this challenge by creating a comprehensive, open-source platform on the world's power systems. The platform hosts data on 85% of global installed electrical capacity; for each power plant it will include data points on installed capacity, fuel type, annual generation, and commissioning year, with more characteristics such as emissions, particulate matter, and annual water demand added over time. Most of the data is reported from national-level sources, but annual generation and other operational characteristics are estimated via machine learning models and remotely sensed data when not officially reported. In addition, Power Watch plans to provide a suite of tools that address specific decision-maker needs, such as water risk assessments and air pollution modeling. Impact Through open data, the platform and its tools will allow researchers to do more analysis of power sector impacts and perform energy modeling. It will help catalyze accountability among policy makers, businesses, and investors and will inform and drive the transition to a clean energy future while reaching development targets.
Certification of COTS Software in NASA Human Rated Flight Systems
NASA Technical Reports Server (NTRS)
Goforth, Andre
2012-01-01
Adoption of commercial off-the-shelf (COTS) products in safety critical systems has been seen as a promising acquisition strategy to improve mission affordability, and yet has come with significant barriers and challenges. Attempts to integrate COTS software components into NASA human rated flight systems have been, for the most part, complicated by the verification and validation (V&V) requirements necessary for flight certification per NASA's own standards. For software that is from COTS sources and, in general, from 3rd party sources, whether commercial, government, modified or open source, the expectation is that it meets the same certification criteria as software built in-house, and that it does so as if it were built in-house. The latter is a critical and hidden issue. This paper examines the longstanding barriers and challenges in the use of 3rd party software in safety critical systems and covers recent efforts to use COTS software in NASA's Multi-Purpose Crew Vehicle (MPCV) project. It identifies some core artifacts without which the use of COTS and 3rd party software is, for all practical purposes, a nonstarter for affordable and timely insertion into flight critical systems. The paper covers the first use in a NASA flight critical system of COTS software with prior FAA certification heritage, which was shown to meet the RTCA-DO-178B standard, and how this certification may, in some cases, be leveraged to allow the use of analysis in lieu of testing. Finally, the paper proposes the establishment of an open source forum for the development of safety critical 3rd party software.
Nolden, Marco; Zelzer, Sascha; Seitel, Alexander; Wald, Diana; Müller, Michael; Franz, Alfred M; Maleike, Daniel; Fangerau, Markus; Baumhauer, Matthias; Maier-Hein, Lena; Maier-Hein, Klaus H; Meinzer, Hans-Peter; Wolf, Ivo
2013-07-01
The Medical Imaging Interaction Toolkit (MITK) has been available as open-source software for almost 10 years now. In this period the requirements of software systems in the medical image processing domain have become increasingly complex. The aim of this paper is to show how MITK evolved into a software system that is able to cover all steps of a clinical workflow including data retrieval, image analysis, diagnosis, treatment planning, intervention support, and treatment control. MITK provides modularization and extensibility on different levels. In addition to the original toolkit, a module system, micro services for small, system-wide features, a service-oriented architecture based on the Open Services Gateway initiative (OSGi) standard, and an extensible and configurable application framework allow MITK to be used, extended and deployed as needed. A refined software process was implemented to deliver high-quality software, ease the fulfillment of regulatory requirements, and enable teamwork in mixed-competence teams. MITK has been applied by a worldwide community and integrated into a variety of solutions, either at the toolkit level or as an application framework with custom extensions. The MITK Workbench has been released as a highly extensible and customizable end-user application. Optional support for tool tracking, image-guided therapy, diffusion imaging as well as various external packages (e.g. CTK, DCMTK, OpenCV, SOFA, Python) is available. MITK has also been used in several FDA/CE-certified applications, which demonstrates the high-quality software and rigorous development process. MITK provides a versatile platform with a high degree of modularization and interoperability and is well suited to meet the challenging tasks of today's and tomorrow's clinically motivated research.
Ratnam, Joseline; Zdrazil, Barbara; Digles, Daniela; Cuadrado-Rodriguez, Emiliano; Neefs, Jean-Marc; Tipney, Hannah; Siebes, Ronald; Waagmeester, Andra; Bradley, Glyn; Chau, Chau Han; Richter, Lars; Brea, Jose; Evelo, Chris T.; Jacoby, Edgar; Senger, Stefan; Loza, Maria Isabel; Ecker, Gerhard F.; Chichester, Christine
2014-01-01
Integration of open access, curated, high-quality information from multiple disciplines in the Life and Biomedical Sciences provides a holistic understanding of the domain. Additionally, the effective linking of diverse data sources can unearth hidden relationships and guide potential research strategies. However, given the lack of consistency between descriptors and identifiers used in different resources and the absence of a simple mechanism to link them, gathering and combining relevant, comprehensive information from diverse databases remains a challenge. The Open Pharmacological Concepts Triple Store (Open PHACTS) is an Innovative Medicines Initiative project that uses semantic web technology approaches to enable scientists to easily access and process data from multiple sources to solve real-world drug discovery problems. The project draws together sources of publicly-available pharmacological, physicochemical and biomolecular data, represents it in a stable infrastructure and provides well-defined information exploration and retrieval methods. Here, we highlight the utility of this platform in conjunction with workflow tools to solve pharmacological research questions that require interoperability between target, compound, and pathway data. Use cases presented herein cover 1) the comprehensive identification of chemical matter for a dopamine receptor drug discovery program 2) the identification of compounds active against all targets in the Epidermal growth factor receptor (ErbB) signaling pathway that have a relevance to disease and 3) the evaluation of established targets in the Vitamin D metabolism pathway to aid novel Vitamin D analogue design. The example workflows presented illustrate how the Open PHACTS Discovery Platform can be used to exploit existing knowledge and generate new hypotheses in the process of drug discovery. PMID:25522365
The State of Open Source Electronic Health Record Projects: A Software Anthropology Study
2017-01-01
Background Electronic health records (EHR) are a key tool in managing and storing patients’ information. Currently, there are over 50 open source EHR systems available. Functionality and usability are important factors for determining the success of any system. These factors are often a direct reflection of the domain knowledge and developers’ motivations. However, few published studies have focused on the characteristics of free and open source software (F/OSS) EHR systems and none to date have discussed the motivation, knowledge background, and demographic characteristics of the developers involved in open source EHR projects. Objective This study analyzed the characteristics of prevailing F/OSS EHR systems and aimed to provide an understanding of the motivation, knowledge background, and characteristics of the developers. Methods This study identified F/OSS EHR projects on SourceForge and other websites from May to July 2014. Projects were classified and characterized by license type, downloads, programming languages, spoken languages, project age, development status, supporting materials, top downloads by country, and whether they were “certified” EHRs. Health care F/OSS developers were also surveyed using an online survey. Results At the time of the assessment, we uncovered 54 open source EHR projects, but only four of them had been successfully certified under the Office of the National Coordinator for Health Information Technology (ONC Health IT) Certification Program. In the majority of cases, the open source EHR software was downloaded by users in the United States (64.07%, 148,666/232,034), underscoring that there is a significant interest in EHR open source applications in the United States. A survey of EHR open source developers was conducted and a total of 103 developers responded to the online questionnaire. The majority of EHR F/OSS developers (65.3%, 66/101) are participating in F/OSS projects as part of a paid activity and only 25.7% (26/101) of EHR F/OSS developers are, or have been, health care providers in their careers. In addition, 45% (45/99) of developers do not work in the health care field. Conclusion The research presented in this study highlights some challenges that may be hindering the future of health care F/OSS. A minority of developers have been health care professionals, and only 55% (54/99) work in the health care field. This undoubtedly limits the ability of functional design of F/OSS EHR systems from being a competitive advantage over prevailing commercial EHR systems. Open source software seems to be a significant interest to many; however, given that only four F/OSS EHR systems are ONC-certified, this interest is unlikely to yield significant adoption of these systems in the United States. Although the Health Information Technology for Economic and Clinical Health (HITECH) act was responsible for a substantial infusion of capital into the EHR marketplace, the lack of a corporate entity in most F/OSS EHR projects translates to a marginal capacity to market the respective F/OSS system and to navigate certification. This likely has further disadvantaged F/OSS EHR adoption in the United States. PMID:28235750
Opening of the South China Sea and Upwelling of the Hainan Plume
NASA Astrophysics Data System (ADS)
Yu, Mengming; Yan, Yi; Huang, Chi-Yue; Zhang, Xinchang; Tian, Zhixian; Chen, Wen-Huang; Santosh, M.
2018-03-01
Opening of the South China Sea and upwelling of the Hainan Plume are among the most challenging issues related to the tectonic evolution of East Asia. However, when and how the Hainan Plume affected the opening of the South China Sea remains unclear. Here we investigate the geochemical and isotopic features of the 25 Ma mid-ocean ridge basalt (MORB) in the Kenting Mélange, southern Taiwan, 16 Ma MORB drilled by the IODP Expedition 349, and 9 Ma ocean island basalt-type dredged seamount basalt. The 25 Ma MORBs reveal a less metasomatic depleted MORB mantle-like source. In contrast, the Miocene samples record progressive mantle enrichment and possibly signal the contribution of the Hainan Plume. We speculate that MORBs of the South China Sea which could have recorded plume-ridge source mixing perhaps appear since 23.8 Ma. On the contrary, the Paleocene-Eocene ocean island basalt-type intraplate volcanism of the South China continental margin is correlated to decompression melting of a passively upwelling fertile asthenosphere due to continental rifting.
The Cancer Imaging Archive (TCIA): maintaining and operating a public information repository.
Clark, Kenneth; Vendt, Bruce; Smith, Kirk; Freymann, John; Kirby, Justin; Koppel, Paul; Moore, Stephen; Phillips, Stanley; Maffitt, David; Pringle, Michael; Tarbox, Lawrence; Prior, Fred
2013-12-01
The National Institutes of Health have placed significant emphasis on sharing of research data to support secondary research. Investigators have been encouraged to publish their clinical and imaging data as part of fulfilling their grant obligations. Realizing it was not sufficient to merely ask investigators to publish their collection of imaging and clinical data, the National Cancer Institute (NCI) created the open source National Biomedical Image Archive software package as a mechanism for centralized hosting of cancer related imaging. NCI has contracted with Washington University in Saint Louis to create The Cancer Imaging Archive (TCIA)-an open-source, open-access information resource to support research, development, and educational initiatives utilizing advanced medical imaging of cancer. In its first year of operation, TCIA accumulated 23 collections (3.3 million images). Operating and maintaining a high-availability image archive is a complex challenge involving varied archive-specific resources and driven by the needs of both image submitters and image consumers. Quality archives of any type (traditional library, PubMed, refereed journals) require management and customer service. This paper describes the management tasks and user support model for TCIA.
Standards-Based Procedural Phenotyping: The Arden Syntax on i2b2.
Mate, Sebastian; Castellanos, Ixchel; Ganslandt, Thomas; Prokosch, Hans-Ulrich; Kraus, Stefan
2017-01-01
Phenotyping, or the identification of patient cohorts, is a recurring challenge in medical informatics. While there are open source tools such as i2b2 that address this problem by providing user-friendly querying interfaces, these platforms lack the semantic expressiveness to model complex phenotyping algorithms. The Arden Syntax provides procedural programming language constructs designed specifically for medical decision support and knowledge transfer. In this work, we investigate how language constructs of the Arden Syntax can be used for generic phenotyping. We implemented a prototypical tool to integrate i2b2 with an open source Arden execution environment. To demonstrate the applicability of our approach, we used the tool together with an Arden-based phenotyping algorithm to derive statistics about ICU-acquired hypernatremia. Finally, we discuss how the combination of i2b2's user-friendly cohort pre-selection and Arden's procedural expressiveness could benefit phenotyping.
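To make the procedural-phenotyping idea concrete, here is a minimal sketch in Python rather than Arden Syntax. It is not the authors' implementation: the Observation type and its fields are hypothetical, and only the LOINC code for serum sodium and the >145 mmol/L hypernatremia cutoff follow standard clinical convention.

```python
# Hypothetical sketch of procedural phenotyping over an i2b2-style,
# pre-selected cohort. The data structures are illustrative; only the
# 145 mmol/L hypernatremia cutoff is a standard clinical convention.
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class Observation:
    patient_id: str
    concept: str          # e.g. "LOINC:2951-2" (serum sodium)
    value: float          # mmol/L
    timestamp: datetime

def icu_acquired_hypernatremia(obs: List[Observation],
                               threshold: float = 145.0) -> bool:
    """True if sodium was normal at the first ICU measurement but
    later exceeded the threshold, i.e. hypernatremia was ICU-acquired."""
    sodium = sorted((o for o in obs if o.concept == "LOINC:2951-2"),
                    key=lambda o: o.timestamp)
    if not sodium or sodium[0].value > threshold:
        return False  # no sodium measured, or already hypernatremic
    return any(o.value > threshold for o in sodium[1:])
```

The point mirrors the record's argument: cohort pre-selection can stay declarative (i2b2), while the temporal "normal at admission, elevated later" logic is easiest to express procedurally.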
NASA Astrophysics Data System (ADS)
Morin, Paul; Porter, Claire; Cloutier, Michael; Howat, Ian; Noh, Myoung-Jong; Willis, Michael; Kramer, William; Bauer, Greg; Bates, Brian; Williamson, Cathleen
2017-04-01
Surface topography is among the most fundamental data sets for geosciences, essential for disciplines ranging from glaciology to geodynamics. Two new projects are using sub-meter, commercial imagery licensed by the National Geospatial-Intelligence Agency and open source photogrammetry software to produce a time-tagged 2m posting elevation model of the Arctic and an 8m posting reference elevation model for the Antarctic. When complete, this publicly available data will be at higher resolution than any elevation models that cover the entirety of the Western United States. These two polar projects are made possible by three equally important factors: 1) open-source photogrammetry software, 2) petascale computing, and 3) sub-meter imagery licensed to the United States Government. Our talk will detail the technical challenges of using automated photogrammetry software; the rapid workflow evolution to allow DEM production; the task of deploying the workflow on one of the world's largest supercomputers; the trials of moving massive amounts of data; and the management challenges the team needed to solve in order to meet deadlines. Finally, we will discuss the implications of this type of collaboration for future multi-team use of leadership-class systems such as Blue Waters, and for further elevation mapping.
Moutsatsos, Ioannis K; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J; Jenkins, Jeremy L; Holway, Nicholas; Tallarico, John; Parker, Christian N
2017-03-01
High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an "off-the-shelf," open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community.
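As a rough sketch of the integration pattern this record describes, a Jenkins build step can drive CellProfiler headlessly from Python. The -c/-r/-p/-i/-o flags are CellProfiler's documented batch-mode options; the pipeline file and paths below are hypothetical.

```python
# Sketch of a Jenkins-CI build step that runs a CellProfiler pipeline
# headlessly on a batch of HCS images. Paths and the pipeline file are
# hypothetical; the -c/-r/-p/-i/-o flags are CellProfiler's documented
# batch-mode options.
import subprocess
import sys

def run_pipeline(pipeline: str, image_dir: str, out_dir: str) -> None:
    cmd = [
        "cellprofiler",
        "-c",              # headless (no GUI)
        "-r",              # run the pipeline immediately
        "-p", pipeline,    # .cppipe pipeline exported from the client
        "-i", image_dir,   # input images staged by the Jenkins job
        "-o", out_dir,     # measurements collected as build artifacts
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        # A non-zero exit fails the Jenkins build and surfaces the log.
        sys.exit(result.stderr)

if __name__ == "__main__":
    run_pipeline("pipelines/nuclei_count.cppipe",
                 "/data/hcs/plate_042", "results/plate_042")
```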
Advancing Innovation Through Collaboration: Implementation of the NASA Space Life Sciences Strategy
NASA Technical Reports Server (NTRS)
Davis, Jeffrey R.; Richard, Elizabeth E.
2010-01-01
On October 18, 2010, the NASA Human Health and Performance Center (NHHPC) was opened to enable collaboration among government, academic and industry members. Membership rapidly grew to 90 members (http://nhhpc.nasa.gov) and members began identifying collaborative projects as detailed in this article. In addition, a first workshop in open collaboration and innovation was conducted on January 19, 2011 by the NHHPC, resulting in additional challenges and projects for further development. This first workshop was a result of the SLSD successes in running open innovation challenges over the past two years. In 2008, the NASA Johnson Space Center, Space Life Sciences Directorate (SLSD) began pilot projects in open innovation (crowdsourcing) to determine if these new internet-based platforms could indeed find solutions to difficult technical problems. From 2008 to 2010, the SLSD issued 34 challenges, 14 externally and 20 internally. The 14 external challenges were conducted through three different vendors: InnoCentive, Yet2.com and TopCoder. The 20 internal challenges were conducted using the InnoCentive platform, customized to NASA use, and promoted as NASA@Work. The results from the 34 challenges involved not only technical solutions that were reported previously at the 61st IAC, but also the formation of new collaborative relationships. For example, the TopCoder pilot was expanded by the NASA Space Operations Mission Directorate to the NASA Tournament Lab in collaboration with Harvard Business School and TopCoder. Building on these initial successes, the January 2011 NHHPC workshop, and ongoing NHHPC member discussions, several important collaborations have been developed: (1) Space Act Agreement between NASA and GE for collaborative projects; (2) NASA and academia for a Visual Impairment / Intracranial Hypertension summit (February 2011); (3) NASA and the DoD through the Defense Venture Catalyst Initiative (DeVenCI) for a technical needs workshop (June 2011); (4) NASA and the San Diego Zoo for a joint challenge in biomimicry; (5) NASA and the FAA Center of Excellence for Commercial Space Flight for five collaborative projects; (6) NASA and ESA for a Space Medicine Workshop (July 2011); (7) NASA and Tufts University for an education pilot; (8) establishment of long-term contracts (August 2011) to enable future challenges; and (9) establishment of a new Center of Excellence for Collaborative Innovation (July 2011) for all federal agencies in the US.
Code of Federal Regulations, 2010 CFR
2010-07-01
National Emissions Standards for Hazardous Air Pollutants: Reinforced Plastic Composites - Existing Open Molding Sources, New Open Molding Sources Emitting Less Than 100 TPY of HAP, and New and ...
Open Source Cybersecurity for the 21st Century
2013-03-01
would eventually develop into Transmission Control Protocol/Internet Protocol (TCP/IP) based on the following ground rules: Each distinct... The internet's design permits any device conforming to modern-day protocol standards (TCP/IP being the most prevalent) to communicate across the... factors, coupled with the internet's global reach and low cost of entry, are what make securing the cyber domain one of the most complex challenges we
Air Force Space Situational Awareness
2018-03-13
knowledge sharing vital to innovation. CyberWorx deliberately reaches across specialties to bring diverse perspectives to a problem in a non-threatening... design sprint to lay out a fast, viable path forward for the AF to enable better experimentation and unity of efforts toward the future. Participants in... in industry and even in their personal lives. CyberWorx was asked to address the challenges of incorporating non-traditional (open source, academic
Crowd-Sourced Help with Emergent Knowledge for Optimized Formal Verification (CHEKOFV)
2016-03-01
up game Binary Fission, which was deployed during Phase Two of CHEKOFV. Xylem: The Code of Plants is a casual game for players using mobile... there are the design and engineering challenges of building a game infrastructure that integrates verification technology with crowd participation... the backend processes that annotate the originating software. Allowing players to construct their own equations opened up the flexibility to receive
2014-11-05
usable simulations. This procedure was to be tested using real-world data collected from open-source venues. The final system would support rapid... assess social change. Construct is an agent-based dynamic-network simulation system designed to allow the user to assess the spread of information and... protest or violence. Technical Challenges Addressed. Re-use: Most agent-based simulations (ABM) in use today are one-off. In contrast, we
Open Access to research data - final perspectives from the RECODE project
NASA Astrophysics Data System (ADS)
Bigagli, Lorenzo; Sondervan, Jeroen
2015-04-01
Many networks, initiatives, and communities are addressing the key barriers to Open Access to data in scientific research. These organizations are typically heterogeneous and fragmented by discipline, location, sector (publishers, academics, data centers, etc.), as well as by other features. Besides, they often work in isolation, or with limited contacts with one another. The Policy RECommendations for Open Access to Research Data in Europe (RECODE) project, which will conclude in the first half of 2015, has scoped and addressed the challenges related to Open Access, dissemination and preservation of scientific data, leveraging the existing networks, initiatives, and communities. The overall objective of RECODE was to identify a series of targeted and over-arching policy recommendations for Open Access to European research data based on existing good practice. RECODE has undertaken a review of the existing state of the art and examined five case studies in different scientific disciplines: particle physics and astrophysics, clinical research, medicine and technical physiology (bioengineering), humanities (archaeology), and environmental sciences (Earth Observation). In particular for the latter discipline, GEOSS has been an optimal test bed for investigating the importance of technical and multidisciplinary interoperability, and what the challenges are in sharing and providing Open Access to research data from a variety of sources, and in a variety of formats. RECODE has identified five main technological and infrastructural challenges: • Heterogeneity - relates to interoperability, usability, accessibility, discoverability; • Sustainability - relates to obsolescence, curation, updates/upgrades, persistence, preservation; • Volume - also related to Big Data, which is somehow implied by Open Data; in our context, it relates to discoverability, accessibility (indexing), bandwidth, storage, scalability, energy footprint; • Quality - relates to completeness, description (metadata), usability, data (peer) review; • Security - relates to the technical aspects of policy enforcement, such as the AAA-protocol for authentication, authorization and auditing/accounting, privacy issues, etc. RECODE has also focused on the identification of stakeholder values relevant to Open Access to research data, as well as on policy, legal, and institutional aspects. All these issues are of immediate relevance for the whole scientific ecosystem, including researchers, as data producers/users, as well as publishers and libraries, as means for data dissemination and management.
MetaLIMS, a simple open-source laboratory information management system for small metagenomic labs.
Heinle, Cassie Elizabeth; Gaultier, Nicolas Paul Eugène; Miller, Dana; Purbojati, Rikky Wenang; Lauro, Federico M
2017-06-01
As the cost of sequencing continues to fall, smaller groups increasingly initiate and manage larger sequencing projects and take on the complexity of data storage for high volumes of samples. This has created a need for low-cost laboratory information management systems (LIMS) that contain flexible fields to accommodate the unique nature of individual labs. Many labs do not have a dedicated information technology position, so LIMS must also be easy to set up and maintain with minimal technical proficiency. MetaLIMS is a free and open-source web-based application available via GitHub. The focus of MetaLIMS is to store sample metadata prior to sequencing and analysis pipelines. Initially designed for environmental metagenomics labs, in addition to storing generic sample collection information and DNA/RNA processing information, the user can also add fields specific to the user's lab. MetaLIMS can also produce a basic sequencing submission form compatible with the proprietary Clarity LIMS system used by some sequencing facilities. To help ease the technical burden associated with web deployment, MetaLIMS supports the use of commercial web hosting combined with MetaLIMS bash scripts for ease of setup. MetaLIMS overcomes key challenges common in LIMS by giving labs access to a low-cost and open-source tool that also has the flexibility to meet individual lab needs and an option for easy deployment. By making the web application open source and hosting it on GitHub, we hope to encourage the community to build upon MetaLIMS, making it more robust and tailored to the needs of more researchers.
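The "flexible fields" requirement above is commonly met with an entity-attribute-value side table next to a fixed sample table. The sketch below shows that pattern in Python with SQLite; it is one plausible way to get MetaLIMS-style flexibility, not MetaLIMS's actual schema, and all table and field names are illustrative.

```python
# Sketch of the "flexible fields" idea: generic sample metadata in one
# table, lab-specific fields in a key-value (EAV) side table. This is
# a common LIMS pattern, not code taken from the MetaLIMS repository.
import sqlite3

con = sqlite3.connect("lims.db")
con.executescript("""
CREATE TABLE IF NOT EXISTS sample (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    collected_on TEXT,        -- ISO 8601 date
    site TEXT
);
CREATE TABLE IF NOT EXISTS sample_field (
    sample_id INTEGER REFERENCES sample(id),
    field TEXT NOT NULL,      -- lab-defined, e.g. 'salinity_psu'
    value TEXT,
    PRIMARY KEY (sample_id, field)
);
""")
cur = con.execute(
    "INSERT INTO sample (name, collected_on, site) VALUES (?, ?, ?)",
    ("AW-001", "2016-11-02", "air handling unit 3"))
con.execute("INSERT INTO sample_field VALUES (?, ?, ?)",
            (cur.lastrowid, "filter_pore_um", "0.2"))  # lab-specific
con.commit()
```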
The Generation Challenge Programme Platform: Semantic Standards and Workbench for Crop Science
Bruskiewich, Richard; Senger, Martin; Davenport, Guy; Ruiz, Manuel; Rouard, Mathieu; Hazekamp, Tom; Takeya, Masaru; Doi, Koji; Satoh, Kouji; Costa, Marcos; Simon, Reinhard; Balaji, Jayashree; Akintunde, Akinnola; Mauleon, Ramil; Wanchana, Samart; Shah, Trushar; Anacleto, Mylah; Portugal, Arllet; Ulat, Victor Jun; Thongjuea, Supat; Braak, Kyle; Ritter, Sebastian; Dereeper, Alexis; Skofic, Milko; Rojas, Edwin; Martins, Natalia; Pappas, Georgios; Alamban, Ryan; Almodiel, Roque; Barboza, Lord Hendrix; Detras, Jeffrey; Manansala, Kevin; Mendoza, Michael Jonathan; Morales, Jeffrey; Peralta, Barry; Valerio, Rowena; Zhang, Yi; Gregorio, Sergio; Hermocilla, Joseph; Echavez, Michael; Yap, Jan Michael; Farmer, Andrew; Schiltz, Gary; Lee, Jennifer; Casstevens, Terry; Jaiswal, Pankaj; Meintjes, Ayton; Wilkinson, Mark; Good, Benjamin; Wagner, James; Morris, Jane; Marshall, David; Collins, Anthony; Kikuchi, Shoshi; Metz, Thomas; McLaren, Graham; van Hintum, Theo
2008-01-01
The Generation Challenge Programme (GCP) is a global crop research consortium directed toward crop improvement through the application of comparative biology and genetic resources characterization to plant breeding. A key consortium research activity is the development of a GCP crop bioinformatics platform to support GCP research. This platform includes the following: (i) shared, public platform-independent domain models, ontology, and data formats to enable interoperability of data and analysis flows within the platform; (ii) web service and registry technologies to identify, share, and integrate information across diverse, globally dispersed data sources, as well as to access high-performance computational (HPC) facilities for computationally intensive, high-throughput analyses of project data; (iii) platform-specific middleware reference implementations of the domain model integrating a suite of public (largely open-access/-source) databases and software tools into a workbench to facilitate biodiversity analysis, comparative analysis of crop genomic data, and plant breeding decision making. PMID:18483570
Building cell models and simulations from microscope images.
Murphy, Robert F
2016-03-01
The use of fluorescence microscopy has undergone a major revolution over the past twenty years, both with the development of dramatic new technologies and with the widespread adoption of image analysis and machine learning methods. Many open source software tools provide the ability to use these methods in a wide range of studies, and many molecular and cellular phenotypes can now be automatically distinguished. This article presents the next major challenge in microscopy automation, the creation of accurate models of cell organization directly from images, and reviews the progress that has been made towards this challenge.
TOPSAN: a dynamic web database for structural genomics.
Ellrott, Kyle; Zmasek, Christian M; Weekes, Dana; Sri Krishna, S; Bakolitsa, Constantina; Godzik, Adam; Wooley, John
2011-01-01
The Open Protein Structure Annotation Network (TOPSAN) is a web-based collaboration platform for exploring and annotating structures determined by structural genomics efforts. Characterization of those structures presents a challenge since the majority of the proteins themselves have not yet been characterized. Responding to this challenge, the TOPSAN platform facilitates collaborative annotation and investigation via a user-friendly web-based interface pre-populated with automatically generated information. Semantic web technologies expand and enrich TOPSAN's content through links to larger sets of related databases, and thus, enable data integration from disparate sources and data mining via conventional query languages. TOPSAN can be found at http://www.topsan.org.
New challenges and opportunities for industrial biotechnology.
Chen, Guo-Qiang
2012-08-20
Industrial biotechnology has not developed as fast as expected due to several challenges, including the emergence of alternative energy sources, especially shale gas, natural gas hydrate and oil sands. The weaknesses of microbial or enzymatic processes compared with chemical processing also make industrial biotech products less competitive than chemical ones. However, many opportunities remain if industrial biotech processes can be made to resemble chemical ones. By taking advantage of molecular biology and synthetic biology methods, as well as changing process patterns, we can develop bioprocesses that are as competitive as chemical ones, including minimized cells and open, continuous fermentation processes.
The State of Open Source Electronic Health Record Projects: A Software Anthropology Study.
Alsaffar, Mona; Yellowlees, Peter; Odor, Alberto; Hogarth, Michael
2017-02-24
Electronic health records (EHR) are a key tool in managing and storing patients' information. Currently, there are over 50 open source EHR systems available. Functionality and usability are important factors for determining the success of any system. These factors are often a direct reflection of the domain knowledge and developers' motivations. However, few published studies have focused on the characteristics of free and open source software (F/OSS) EHR systems, and none to date have discussed the motivation, knowledge background, and demographic characteristics of the developers involved in open source EHR projects. This study analyzed the characteristics of prevailing F/OSS EHR systems and aimed to provide an understanding of the motivation, knowledge background, and characteristics of the developers. This study identified F/OSS EHR projects on SourceForge and other websites from May to July 2014. Projects were classified and characterized by license type, downloads, programming languages, spoken languages, project age, development status, supporting materials, top downloads by country, and whether they were "certified" EHRs. Health care F/OSS developers were also surveyed using an online survey. At the time of the assessment, we uncovered 54 open source EHR projects, but only four of them had been successfully certified under the Office of the National Coordinator for Health Information Technology (ONC Health IT) Certification Program. In the majority of cases, the open source EHR software was downloaded by users in the United States (64.07%, 148,666/232,034), underscoring that there is a significant interest in EHR open source applications in the United States. A survey of EHR open source developers was conducted and a total of 103 developers responded to the online questionnaire. The majority of EHR F/OSS developers (65.3%, 66/101) are participating in F/OSS projects as part of a paid activity and only 25.7% (26/101) of EHR F/OSS developers are, or have been, health care providers in their careers. In addition, 45% (45/99) of developers do not work in the health care field. The research presented in this study highlights some challenges that may be hindering the future of health care F/OSS. A minority of developers have been health care professionals, and only 55% (54/99) work in the health care field. This undoubtedly prevents the functional design of F/OSS EHR systems from becoming a competitive advantage over prevailing commercial EHR systems. Open source software seems to be of significant interest to many; however, given that only four F/OSS EHR systems are ONC-certified, this interest is unlikely to yield significant adoption of these systems in the United States. Although the Health Information Technology for Economic and Clinical Health (HITECH) Act was responsible for a substantial infusion of capital into the EHR marketplace, the lack of a corporate entity in most F/OSS EHR projects translates to a marginal capacity to market the respective F/OSS system and to navigate certification. This likely has further disadvantaged F/OSS EHR adoption in the United States.
Open source and healthcare in Europe - time to put leading edge ideas into practice.
Murray, Peter J; Wright, Graham; Karopka, Thomas; Betts, Helen; Orel, Andrej
2009-01-01
Free/Libre and Open Source Software (FLOSS) is a process of software development, a method of licensing and a philosophy. Although FLOSS plays a significant role in several market areas, the impact in the health care arena is still limited. FLOSS is promoted as one of the most effective means for overcoming fragmentation in the health care sector and providing a basis for more efficient, timely and cost effective health care provision. The 2008 European Federation for Medical Informatics (EFMI) Special Topic Conference (STC) explored a range of current and future issues related to FLOSS in healthcare (FLOSS-HC). In particular, there was a focus on health records, ubiquitous computing, knowledge sharing, and current and future applications. Discussions resulted in a list of main barriers and challenges for use of FLOSS-HC. Based on the outputs of this event, the 2004 Open Steps events and subsequent workshops at OSEHC2009 and Med-e-Tel 2009, a four-step strategy has been proposed for FLOSS-HC: 1) a FLOSS-HC inventory; 2) a FLOSS-HC collaboration platform, use case database and knowledge base; 3) a worldwide FLOSS-HC network; and 4) FLOSS-HC dissemination activities. The workshop will further refine this strategy and elaborate avenues for FLOSS-HC from scientific, business and end-user perspectives. To gain acceptance by different stakeholders in the health care industry, different activities have to be conducted in collaboration. The workshop will focus on the scientific challenges in developing methodologies and criteria to support FLOSS-HC in becoming a viable alternative to commercial and proprietary software development and deployment.
The Emergence of Open-Source Software in China
ERIC Educational Resources Information Center
Pan, Guohua; Bonk, Curtis J.
2007-01-01
The open-source software movement is gaining increasing momentum in China. Of the limited numbers of open-source software in China, "Red Flag Linux" stands out most strikingly, commanding 30 percent share of Chinese software market. Unlike the spontaneity of open-source movement in North America, open-source software development in…
A Study of Clinically Related Open Source Software Projects
Hogarth, Michael A.; Turner, Stuart
2005-01-01
Open source software development has recently gained significant interest due to several successful mainstream open source projects. This methodology has been proposed as being similarly viable and beneficial in the clinical application domain as well. However, the clinical software development venue differs significantly from the mainstream software venue. Existing clinical open source projects have not been well characterized or formally studied, so the 'fit' of open source in this domain is largely unknown. In order to better understand the open source movement in the clinical application domain, we undertook a study of existing open source clinical projects. In this study we sought to characterize and classify existing clinical open source projects and to determine metrics for their viability. This study revealed several findings which we believe could guide the healthcare community in its quest for successful open source clinical software projects. PMID:16779056
ERIC Educational Resources Information Center
Krishnamurthy, M.
2008-01-01
Purpose: The purpose of this paper is to describe the open access and open source movement in the digital library world. Design/methodology/approach: A review of key developments in the open access and open source movement is provided. Findings: Open source software and open access to research findings are of great use to scholars in developing…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, Gregory S.; Nickless, William K.; Thiede, David R.
Enterprise level cyber security requires the deployment, operation, and monitoring of many sensors across geographically dispersed sites. Communicating with the sensors to gather data and control behavior is a challenging task when the number of sensors is rapidly growing. This paper describes the system requirements, design, and implementation of T3, the third generation of our transport software that performs this task. T3 relies on open source software and open Internet standards. Data is encoded in MIME format messages and transported via NNTP, which provides scalability. OpenSSL and public key cryptography are used to secure the data. Robustness and ease of development are increased by defining an internal cryptographic API, implemented by modules in C, Perl, and Python. We are currently using T3 in a production environment. It is freely available to download and use for other projects.
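The transport pattern described here (sensor data wrapped as MIME messages and posted over NNTP) can be sketched with Python's standard library. The server, newsgroup, and sender below are hypothetical; T3 additionally signs and encrypts messages with OpenSSL public-key cryptography, which this sketch omits. Note that nntplib shipped in the standard library through Python 3.12 and was removed in 3.13.

```python
# Sketch of the T3 transport pattern: sensor data wrapped in a MIME
# message and posted over NNTP. Server, group, and payload are
# hypothetical; the real system also signs/encrypts with OpenSSL,
# omitted here for brevity.
import io
import nntplib
from email.mime.application import MIMEApplication

def post_sensor_data(payload: bytes, sensor_id: str) -> None:
    msg = MIMEApplication(payload, _subtype="octet-stream")
    msg["From"] = f"{sensor_id}@sensors.example.org"   # hypothetical
    msg["Newsgroups"] = "example.sensors.netflow"       # hypothetical
    msg["Subject"] = f"sensor data from {sensor_id}"
    # NNTP gives store-and-forward scalability: every site's collector
    # can pull from the news spool instead of polling each sensor.
    with nntplib.NNTP("news.example.org") as server:    # hypothetical
        server.post(io.BytesIO(msg.as_bytes()))

post_sensor_data(b"...captured records...", "site42-sensor7")
```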
Integrated Multidisciplinary Optimization Objects
NASA Technical Reports Server (NTRS)
Alston, Katherine
2014-01-01
OpenMDAO is an open-source MDAO framework. It is used to develop an integrated analysis and design environment for engineering challenges. This Phase II project integrated additional modules and design tools into OpenMDAO to perform discipline-specific analysis across multiple flight regimes at varying levels of fidelity. It also showcased a refined system architecture that allows the system to be less customized to a specific configuration (i.e., system and configuration separation). By delivering a capable and validated MDAO system along with a set of example applications to be used as a template for future users, this work greatly expands NASA's high-fidelity, physics-based MDAO capabilities and enables the design of revolutionary vehicles in a cost-effective manner. This proposed work complements M4 Engineering's expertise in developing modeling and simulation toolsets that solve relevant subsonic, supersonic, and hypersonic demonstration applications.
New Open-Source Version of FLORIS Released
2018-01-26
National Renewable Energy Laboratory (NREL) researchers recently released an updated open-source ... simplified and documented. Because of the living, open-source nature of the newly updated utility, NREL
The Mock LISA Data Challenges: History, Status, Prospects
NASA Technical Reports Server (NTRS)
Vallisneri, Michele; Babak, Stas; Baker, John; Benacquista, Matt; Cornish, Neil; Crowder, Jeff; Cutler, Curt; Larson, Shane; Littenberg, Tyson; Porter, Edward;
2007-01-01
This slide presentation reviews the importance of the Mock LISA Data Challenges (MLDC). The Laser Interferometer Space Antenna (LISA) is a gravitational wave (GW) observatory that will return data such that data analysis is integral to the measurement concept. Further rationales for the MLDC are to kickstart the development of a LISA data-analysis computational infrastructure, and to encourage, track, and compare progress in LISA data-analysis development in the open community. The MLDCs are a coordinated, voluntary effort in the GW community that periodically issues datasets with synthetic noise and GW signals from sources of undisclosed parameters and increasing difficulty. Challenge participants return parameter estimates and descriptions of their search methods. Some of the challenges and the resulting entries are reviewed. The aim is to show that LISA data analysis is possible and to develop new techniques, drawing on multiple international teams for the development of LISA core analysis tools.
NASA Astrophysics Data System (ADS)
Hund, S. V.; Johnson, M. S.; Steyn, D. G.; Keddie, T.; Morillas, L.
2015-12-01
Water supply is highly disputed in the tropics of northwestern Costa Rica, where rainfall exhibits high seasonal variability and long annual dry seasons. Water shortages are common during the dry season, and water conflicts emerge between domestic water users, intensively irrigated agriculture, the tourism industry, and ecological flows. Climate change may further increase the variability of precipitation and the risk for droughts, and pose challenges for small rural agricultural communities experiencing water stress. To adapt to seasonal droughts and improve resilience of communities to future changes, it is essential to increase understanding of interactions between components of the coupled hydrological-social system. Yet hydrological monitoring and data on water use within developing countries of the humid tropics are limited. To address these challenges and contribute to extended monitoring networks, low-cost and open-source monitoring platforms were developed based on Arduino microelectronic boards and software, and combined with hydrological sensors to monitor river stage and groundwater levels in two watersheds of Guanacaste, Costa Rica. Hydrologic monitoring stations are located in remote locations and powered by solar panels. Monitoring efforts were made possible through collaboration with local rural communities, and complemented with a mix of digitized water extraction data and community water use narratives to increase understanding of water use and challenges. We will present the development of the Arduino logging system, results on water supply in relation to water use for both the wet and dry seasons, and discuss these results within a socio-hydrological system context.
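On the host side, a station like the ones described above typically streams readings over a serial link. The sketch below (Python with the pyserial package) logs comma-separated stage readings to CSV; the port, baud rate, and record format are assumptions, not details taken from this abstract.

```python
# Host-side sketch for an Arduino-style hydrologic logger: read
# comma-separated "timestamp,stage_cm" lines over serial and append
# them to a CSV. Port, baud rate, and line format are assumptions.
import serial  # pip install pyserial

PORT, BAUD = "/dev/ttyUSB0", 9600  # assumed wiring

with serial.Serial(PORT, BAUD, timeout=10) as link, \
        open("river_stage.csv", "a") as log:
    while True:
        line = link.readline().decode("ascii", errors="replace").strip()
        if not line:
            continue  # read timed out; keep waiting
        timestamp, stage_cm = line.split(",")  # assumed record format
        log.write(f"{timestamp},{stage_cm}\n")
        log.flush()  # survive power loss at remote, solar-powered sites
```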
SemMat: Federated Semantic Services Platform for Open materials Science and Engineering
2017-01-01
identified the following two important tasks to remedy the data heterogeneity challenge and promote data integration: (1) creating the semantic... sourced from the structural and bio-materials domains. For structural materials data, we reviewed and used MIL-HDBK-5J [11] and MIL-HDBK-17. Furthermore... documents about composite materials provided by our domain expert. Based on the suggestions given by domain experts in bio-materials, the following
2011-09-01
solutions to address these important challenges. The Air Force is seeking innovative architectures to process and store massive data sets in a flexible... Google Earth, the Video LAN Client (VLC) media player, and the Environmental Systems Research Institute Corporation's (ESRI) ArcGIS product - to... Earth, Quantum GIS, VLC Media Player, NASA WorldWind, ESRI ArcGIS and many others. Open source GIS and media visualization software can also be
2002-06-01
... alternative sources of power and energy efficiency. The Public Utility Regulatory Policies Act of 1978 (PURPA) was enacted, in part, to augment electric... requirements. More significantly, by opening wholesale power markets to nonutility producers of electricity, PURPA laid the groundwork for increased competition
Fehr, M
2014-09-01
Business opportunities in the household waste sector in emerging economies still revolve around the activities of bulk collection and tipping with an open material balance. This research, conducted in Brazil, pursued the objective of shifting opportunities from tipping to reverse logistics in order to close the balance. To do this, it illustrated how specific knowledge of sorted waste composition and reverse logistics operations can be used to determine realistic temporal and quantitative landfill diversion targets in an emerging economy context. Experimentation constructed and confirmed the recycling trilogy that consists of source separation, collection infrastructure and reverse logistics. The study on source separation demonstrated the vital difference between raw and sorted waste compositions. Raw waste contained 70% biodegradable and 30% inert matter. Source separation produced 47% biodegradable, 20% inert and 33% mixed material. The study on collection infrastructure developed the necessary receiving facilities. The study on reverse logistics identified private operators capable of collecting and processing all separated inert items. Recycling activities for biodegradable material were scarce and erratic. Only farmers would take the material as animal feed. No composting initiatives existed. The management challenge was identified as stimulating these activities in order to complete the trilogy and divert the 47% source-separated biodegradable discards from the landfills.
Dinov, Ivo D
2016-01-01
Managing, processing and understanding big healthcare data is challenging, costly and demanding. Without a robust fundamental theory for representation, analysis and inference, a roadmap for uniform handling and analyzing of such complex data remains elusive. In this article, we outline various big data challenges, opportunities, modeling methods and software techniques for blending complex healthcare data, advanced analytic tools, and distributed scientific computing. Using imaging, genetic and healthcare data we provide examples of processing heterogeneous datasets using distributed cloud services, automated and semi-automated classification techniques, and open-science protocols. Despite substantial advances, new innovative technologies need to be developed that enhance, scale and optimize the management and processing of large, complex and heterogeneous data. Stakeholder investments in data acquisition, research and development, computational infrastructure and education will be critical to realize the huge potential of big data, to reap the expected information benefits and to build lasting knowledge assets. Multi-faceted proprietary, open-source, and community developments will be essential to enable broad, reliable, sustainable and efficient data-driven discovery and analytics. Big data will affect every sector of the economy and their hallmark will be 'team science'.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mishra, P; Varian Medical Systems, Palo Alto, CA; Lewis, J
2014-06-15
Purpose: To address the challenges of creating delivery trajectories and imaging sequences with TrueBeam Developer Mode, a new open-source graphical XML builder, Veritas, has been developed, tested and made freely available. Veritas eliminates most of the need to understand the underlying schema and write XML scripts by providing a graphical menu for each control point specifying the state of 30 mechanical/dose axes. All capabilities of Developer Mode are accessible in Veritas. Methods: Veritas was designed using QT Designer, a 'what-you-see-is-what-you-get' (WYSIWYG) tool for building graphical user interfaces (GUI). Different components of the GUI are integrated using QT's signals and slots mechanism. Functionalities are added using PySide, an open source, cross-platform Python binding for the QT framework. The XML code generated is immediately visible, making it an interactive learning tool. A user starts from an anonymized DICOM file or XML example and introduces delivery modifications, or begins their experiment from scratch, then uses the GUI to modify control points as desired. The software automatically generates XML plans following the appropriate schema. Results: Veritas was tested by generating and delivering two XML plans at Brigham and Women's Hospital. The first example was created to irradiate the letter 'B' with a narrow MV beam using dynamic couch movements. The second was created to acquire 4D CBCT projections for four minutes. The delivery of the letter 'B' was observed using a 2D array of ionization chambers. Both deliveries were generated quickly in Veritas by non-expert Developer Mode users. Conclusion: We introduced a new open source tool, Veritas, for generating XML plans (delivery trajectories and imaging sequences). Veritas makes Developer Mode more accessible by reducing the learning curve for quick translation of research ideas into XML plans. Veritas is an open source initiative, creating the possibility for future developments and collaboration with other researchers. I am an employee of Varian Medical Systems.
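To give a flavor of what a graphical XML builder like Veritas automates, the sketch below assembles a two-control-point document with Python's standard library. The element and attribute names are purely illustrative stand-ins; the actual TrueBeam Developer Mode schema is exactly what Veritas spares its users from hand-writing.

```python
# Illustrative only: build a tiny control-point XML document in the
# spirit of Veritas' output. Element names (Beam, ControlPoint, Mu,
# CouchLat) are hypothetical stand-ins, not the real TrueBeam schema.
import xml.etree.ElementTree as ET

beam = ET.Element("Beam", name="demo-trajectory")
for mu, couch_lat_cm in [(0.0, -2.0), (10.0, 2.0)]:
    cp = ET.SubElement(beam, "ControlPoint")
    ET.SubElement(cp, "Mu").text = str(mu)                # cumulative MU
    ET.SubElement(cp, "CouchLat").text = str(couch_lat_cm)  # couch sweep

ET.ElementTree(beam).write("demo_plan.xml",
                           xml_declaration=True, encoding="utf-8")
```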
Mid-infrared coincidence measurements based on intracavity frequency conversion
NASA Astrophysics Data System (ADS)
Piccione, S.; Mancinelli, M.; Trenti, A.; Fontana, G.; Dam, J.; Tidemand-Lichtenberg, P.; Pedersen, C.; Pavesi, L.
2018-02-01
In recent years, the mid-infrared (MIR) spectral region has attracted the attention of many areas of science and technology, opening the way to important applications such as molecular imaging, remote sensing, free-space communication and environmental monitoring. However, the development of new light sources, such as quantum cascade lasers, was not followed by an adequate improvement in MIR detection systems able to meet the current challenges. Here we demonstrate the single-photon counting capability of a new detection system, based on efficient up-converter modules, by proving the correlated nature of twin-photon pairs at about 3.1 μm, opening the way to the extension of quantum optics experiments into the MIR.
Enabling a new Paradigm to Address Big Data and Open Science Challenges
NASA Astrophysics Data System (ADS)
Ramamurthy, Mohan; Fisher, Ward
2017-04-01
Data are not only the lifeblood of the geosciences but they have become the currency of the modern world in science and society. Rapid advances in computing, communications, and observational technologies, along with concomitant advances in high-resolution modeling, ensemble and coupled-systems predictions of the Earth system, are revolutionizing nearly every aspect of our field. Modern data volumes from high-resolution ensemble prediction/projection/simulation systems and next-generation remote-sensing systems like hyper-spectral satellite sensors and phased-array radars are staggering. For example, CMIP efforts alone will generate many petabytes of climate projection data for use in assessments of climate change. And NOAA's National Climatic Data Center projects that it will archive over 350 petabytes by 2030. For researchers and educators, this deluge and the increasing complexity of data bring challenges along with the opportunities for discovery and scientific breakthroughs. The potential for big data to transform the geosciences is enormous, but realizing the next frontier depends on effectively managing, analyzing, and exploiting these heterogeneous data sources, extracting knowledge and useful information from heterogeneous data sources in ways that were previously impossible, to enable discoveries and gain new insights. At the same time, there is a growing focus on the area of "Reproducibility or Replicability in Science" that has implications for Open Science. The advent of cloud computing has opened new avenues for addressing both big data and Open Science challenges and accelerating scientific discoveries. However, to successfully leverage the enormous potential of cloud technologies, the data providers and the scientific communities will need to develop new paradigms to enable next-generation workflows and transform the conduct of science. Making data readily available is a necessary but not a sufficient condition. Data providers also need to give scientists an ecosystem that includes data, tools, workflows and other services needed to perform analytics, integration, interpretation, and synthesis - all in the same environment or platform. Instead of moving data to processing systems near users, as is the tradition, the cloud permits one to bring processing, computing, analysis and visualization to data - so-called data-proximate workbench capabilities, also known as server-side processing. In this talk, I will present the ongoing work at Unidata to facilitate a new paradigm for doing science by offering a suite of tools, resources, and platforms to leverage cloud technologies for addressing both big data and Open Science/reproducibility challenges. That work includes the development and deployment of new protocols for data access and server-side operations, Docker container images of key applications, JupyterHub Python notebook tools, and cloud-based analysis and visualization capability via the CloudIDV tool, to enable reproducible workflows and effective use of the accessed data.
Rogasch, Nigel C; Sullivan, Caley; Thomson, Richard H; Rose, Nathan S; Bailey, Neil W; Fitzgerald, Paul B; Farzan, Faranak; Hernandez-Pavon, Julio C
2017-02-15
The concurrent use of transcranial magnetic stimulation with electroencephalography (TMS-EEG) is growing in popularity as a method for assessing various cortical properties such as excitability, oscillations and connectivity. However, this combination of methods is technically challenging, resulting in artifacts both during recording and following typical EEG analysis methods, which can distort the underlying neural signal. In this article, we review the causes of artifacts in EEG recordings resulting from TMS, as well as artifacts introduced during analysis (e.g. as the result of filtering over high-frequency, large amplitude artifacts). We then discuss methods for removing artifacts, and ways of designing pipelines to minimise analysis-related artifacts. Finally, we introduce the TMS-EEG signal analyser (TESA), an open-source extension for EEGLAB, which includes functions that are specific for TMS-EEG analysis, such as removing and interpolating the TMS pulse artifact, removing and minimising TMS-evoked muscle activity, and analysing TMS-evoked potentials. The aims of TESA are to provide users with easy access to current TMS-EEG analysis methods and to encourage direct comparisons of these methods and pipelines. It is hoped that providing open-source functions will aid in both improving and standardising analysis across the field of TMS-EEG research.
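The first TESA-specific step mentioned above, removing and interpolating the TMS pulse artifact, reduces to excising a short window around the pulse and bridging the gap before any filtering. A generic sketch in Python follows (TESA itself is MATLAB/EEGLAB code); the -2 to +10 ms window is a typical choice, not a universal constant.

```python
# Generic sketch of the first TMS-EEG cleaning step: cut out the large
# TMS pulse artifact and linearly interpolate across the gap before
# any filtering, so the filter never rings against the artifact edge.
import numpy as np

def remove_pulse_artifact(epochs: np.ndarray, times_ms: np.ndarray,
                          win=(-2.0, 10.0)) -> np.ndarray:
    """epochs: (n_trials, n_channels, n_samples); times_ms: (n_samples,).
    win is the excised window around the TMS pulse at t = 0 ms."""
    cleaned = epochs.copy()
    gap = (times_ms >= win[0]) & (times_ms <= win[1])
    keep = ~gap
    for trial in cleaned:
        for ch in trial:
            # Bridge the excised window from the surrounding samples.
            ch[gap] = np.interp(times_ms[gap], times_ms[keep], ch[keep])
    return cleaned
```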
Wake Flow Simulation of a Vertical Axis Wind Turbine Under the Influence of Wind Shear
NASA Astrophysics Data System (ADS)
Mendoza, Victor; Goude, Anders
2017-05-01
The current trend of the wind energy industry aims for large-scale turbines installed in wind farms. This brings a renewed interest in vertical axis wind turbines (VAWTs), since they have several advantages over the traditional Horizontal Axis Wind Turbines (HAWTs) for mitigating the new challenges. However, operating VAWTs are characterized by complex aerodynamic phenomena, presenting considerable challenges for modeling tools. An accurate and reliable simulation tool for predicting the interaction between the obtained wake of an operating VAWT and the flow in atmospheric open sites is fundamental for optimizing the design and location of wind energy facility projects. The present work studies the wake produced by a VAWT and how it is affected by the surface roughness of the terrain, without considering the effects of the ambient turbulence intensity. This study was carried out using an actuator line model (ALM), implemented using the open-source CFD library OpenFOAM to solve the governing equations and compute the resulting flow fields. An operational H-shaped VAWT model was tested, for which experimental activity has been performed at an open site north of Uppsala, Sweden. Different terrains with similar inflow velocities have been evaluated. Simulated velocity and vorticity of representative sections have been analyzed. Numerical results were validated using normal force measurements, showing reasonable agreement.
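For readers unfamiliar with the ALM: the standard formulation samples blade-element lift and drag along each actuator line and projects the resulting force onto the flow solver's grid through a Gaussian kernel (the widely used Sorensen-Shen regularization; the abstract does not state which exact kernel this study adopted):

```latex
% Standard ALM body-force regularization: the force F sampled at an
% actuator point is smeared over nearby cells, where d is the distance
% from the point and \varepsilon the Gaussian smoothing width.
f_\varepsilon(d) = \frac{\mathbf{F}}{\varepsilon^{3}\pi^{3/2}}
  \exp\!\left[-\left(\frac{d}{\varepsilon}\right)^{2}\right]
```

The smoothing width trades accuracy against stability: a small width concentrates the force realistically but demands a fine grid, while a large width smears the blade loading into the wake.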
The Experiment Factory: Standardizing Behavioral Experiments.
Sochat, Vanessa V; Eisenberg, Ian W; Enkavi, A Zeynep; Li, Jamie; Bissett, Patrick G; Poldrack, Russell A
2016-01-01
The administration of behavioral and experimental paradigms for psychology research is hindered by lack of a coordinated effort to develop and deploy standardized paradigms. While several frameworks (Mason and Suri, 2011; McDonnell et al., 2012; de Leeuw, 2015; Lange et al., 2015) have provided infrastructure and methods for individual research groups to develop paradigms, missing is a coordinated effort to develop paradigms linked with a system to easily deploy them. This disorganization leads to redundancy in development, divergent implementations of conceptually identical tasks, disorganized and error-prone code lacking documentation, and difficulty in replication. The ongoing reproducibility crisis in psychology and neuroscience research (Baker, 2015; Open Science Collaboration, 2015) highlights the urgency of this challenge: reproducible research in behavioral psychology is conditional on deployment of equivalent experiments. A large, accessible repository of experiments for researchers to develop collaboratively is most efficiently accomplished through an open source framework. Here we present the Experiment Factory, an open source framework for the development and deployment of web-based experiments. The modular infrastructure includes experiments, virtual machines for local or cloud deployment, and an application to drive these components and provide developers with functions and tools for further extension. We release this infrastructure with a deployment (http://www.expfactory.org) that researchers are currently using to run a set of over 80 standardized web-based experiments on Amazon Mechanical Turk. By providing open source tools for both deployment and development, this novel infrastructure holds promise to bring reproducibility to the administration of experiments, and accelerate scientific progress by providing a shared community resource of psychological paradigms.
Exploring the single-cell RNA-seq analysis landscape with the scRNA-tools database.
Zappia, Luke; Phipson, Belinda; Oshlack, Alicia
2018-06-25
As single-cell RNA-sequencing (scRNA-seq) datasets have become more widespread, the number of tools designed to analyse these data has dramatically increased. Navigating the vast sea of tools now available is becoming increasingly challenging for researchers. In order to better facilitate selection of appropriate analysis tools, we have created the scRNA-tools database (www.scRNA-tools.org) to catalogue and curate analysis tools as they become available. Our database collects a range of information on each scRNA-seq analysis tool and categorises them according to the analysis tasks they perform. Exploration of this database gives insights into the areas of rapid development of analysis methods for scRNA-seq data. We see that many tools perform tasks specific to scRNA-seq analysis, particularly clustering and ordering of cells. We also find that the scRNA-seq community embraces an open-source and open-science approach, with most tools available under open-source licenses and preprints being extensively used as a means to describe methods. The scRNA-tools database provides a valuable resource for researchers embarking on scRNA-seq analysis and records the growth of the field over time.
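Assuming the catalogue can be exported as a CSV with one row per tool and a semicolon-separated Categories column (a hypothetical format, not the site's documented export), tallying which analysis tasks are most covered is a few lines of Python:

```python
# Sketch: count how many catalogued tools claim each analysis task.
# The file name and the 'Categories' column layout are assumptions;
# adjust them to the actual export format of the database.
from collections import Counter
import csv

counts: Counter = Counter()
with open("scRNA-tools.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        for cat in row["Categories"].split(";"):
            counts[cat.strip()] += 1  # e.g. 'Clustering', 'Ordering'

for cat, n in counts.most_common(10):
    print(f"{cat:25s} {n}")
```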
NASA Astrophysics Data System (ADS)
Brovelli, M. A.; Oxoli, D.; Zurbarán, M. A.
2016-06-01
During the past years, Web 2.0 technologies have driven the emergence of platforms where users can share data related to their activities, which in some cases are then publicly released with open licenses. Popular categories for this include community platforms where users can upload GPS tracks collected during slow travel activities (e.g. hiking, biking and horse riding) and platforms where users share their geolocated photos. However, due to the high heterogeneity of the information available on the Web, the sole use of these user-generated contents makes it an ambitious challenge to understand slow mobility flows as well as to detect the most visited locations in a region. Exploiting the available data on community sharing websites allows to collect near real-time open data streams and enables rigorous spatial-temporal analysis. This work presents an approach for collecting, unifying and analysing pointwise geolocated open data available from different sources with the aim of identifying the main locations and destinations of slow mobility activities. For this purpose, we collected pointwise open data from the Wikiloc platform, Twitter, Flickr and Foursquare. The analysis was confined to the data uploaded in Lombardy Region (Northern Italy) - corresponding to millions of data points. Collected data were processed using Free and Open Source Software (FOSS) and organized into a suitable database. This allowed us to run statistical analyses on data distribution in both time and space, enabling the detection of users' slow mobility preferences as well as places of interest at a regional scale.
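A sketch of the unification step this abstract describes: each platform's native point record is mapped onto one common (source, user, lon, lat, timestamp) schema before loading into the analysis database. The Twitter and Flickr field names below follow their classic REST APIs but should be treated as illustrative simplifications.

```python
# Minimal sketch of the unification step: map each platform's native
# point record onto a common (source, user, lon, lat, timestamp)
# schema before spatial-temporal analysis. Field names are simplified
# renderings of the classic Twitter/Flickr payloads.
from typing import Dict

def normalize(source: str, raw: Dict) -> Dict:
    if source == "twitter":
        # GeoJSON stores coordinates in (lon, lat) order.
        lon, lat = raw["coordinates"]["coordinates"]
        return dict(source=source, user=raw["user"]["id_str"],
                    lon=lon, lat=lat, timestamp=raw["created_at"])
    if source == "flickr":
        return dict(source=source, user=raw["owner"],
                    lon=float(raw["longitude"]),
                    lat=float(raw["latitude"]),
                    timestamp=raw["datetaken"])
    raise ValueError(f"no mapping for {source}")
```

Once every source emits the same five fields, the per-platform quirks stay isolated in one function and the downstream spatial analysis can treat all points uniformly.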
ERIC Educational Resources Information Center
Voyles, Bennett
2007-01-01
People know about the Sakai Project (open source course management system); they may even know about Kuali (open source financials). So, what is the next wave in open source software? This article discusses business intelligence (BI) systems. Though open source BI may still be only a rumor in most campus IT departments, some brave early adopters…
The Commercial Open Source Business Model
NASA Astrophysics Data System (ADS)
Riehle, Dirk
Commercial open source software projects are open source software projects that are owned by a single firm that derives a direct and significant revenue stream from the software. Commercial open source at first glance represents an economic paradox: How can a firm earn money if it is making its product available for free as open source? This paper presents the core properties of commercial open source business models and discusses how they work. Using a commercial open source approach, firms can get to market faster with a superior product at lower cost than is possible for traditional competitors. The paper shows how these benefits accrue from an engaged and self-supporting user community. Lacking any prior comprehensive reference, this paper is based on an analysis of public statements by practitioners of commercial open source. It forges the various anecdotes into a coherent description of revenue generation strategies and relevant business functions.
Special population planner 4 : an open source release.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuiper, J.; Metz, W.; Tanzman, E.
2008-01-01
Emergencies like Hurricane Katrina and the recent California wildfires underscore the critical need to meet the complex challenge of planning for individuals with special needs and for institutionalized special populations. People with special needs and special populations often have difficulty responding to emergencies or taking protective actions, and emergency responders may be unaware of their existence and situations during a crisis. Special Population Planner (SPP) is an ArcGIS-based emergency planning system released as an open source product. SPP provides for easy production of maps, reports, and analyses to develop and revise emergency response plans. It includes tools to manage a voluntary registry of data for people with special needs, integrated links to plans and documents, tools for response planning and analysis, preformatted reports and maps, and data on locations of special populations, facility and resource characteristics, and contacts. The system can be readily adapted for new settings without programming and is broadly applicable. Full documentation and a demonstration database are included in the release.
Hawkeye and AMOS: visualizing and assessing the quality of genome assemblies
Schatz, Michael C.; Phillippy, Adam M.; Sommer, Daniel D.; Delcher, Arthur L.; Puiu, Daniela; Narzisi, Giuseppe; Salzberg, Steven L.; Pop, Mihai
2013-01-01
Since its launch in 2004, the open-source AMOS project has released several innovative DNA sequence analysis applications including: Hawkeye, a visual analytics tool for inspecting the structure of genome assemblies; the Assembly Forensics and FRCurve pipelines for systematically evaluating the quality of a genome assembly; and AMOScmp, the first comparative genome assembler. These applications have been used to assemble and analyze dozens of genomes ranging in complexity from simple microbial species through mammalian genomes. Recent efforts have been focused on enhancing support for new data characteristics brought on by second- and now third-generation sequencing. This review describes the major components of AMOS in light of these challenges, with an emphasis on methods for assessing assembly quality and the visual analytics capabilities of Hawkeye. These interactive graphical aspects are essential for navigating and understanding the complexities of a genome assembly, from the overall genome structure down to individual bases. Hawkeye and AMOS are available open source at http://amos.sourceforge.net. PMID:22199379
A Bayesian Machine Learning Model for Estimating Building Occupancy from Open Source Data
Stewart, Robert N.; Urban, Marie L.; Duchscherer, Samantha E.; ...
2016-01-01
Understanding building occupancy is critical to a wide array of applications including natural hazards loss analysis, green building technologies, and population distribution modeling. Due to the expense of directly monitoring buildings, scientists rely in addition on a wide and disparate array of ancillary and open source information including subject matter expertise, survey data, and remote sensing information. These data are fused using data harmonization methods, which refer to a loose collection of formal and informal techniques for fusing data together to create viable content for building occupancy estimation. In this paper, we add to the current state of the art by introducing the Population Data Tables (PDT), a Bayesian model and informatics system for systematically arranging data and harmonization techniques into a consistent, transparent, knowledge learning framework that retains in the final estimation the uncertainty emerging from data, expert judgment, and model parameterization. PDT probabilistically estimates ambient occupancy in units of people/1000 ft² for over 50 building types at the national and sub-national level with the goal of providing global coverage. The challenge of global coverage led to the development of an interdisciplinary geospatial informatics system tool that provides the framework for capturing, storing, and managing open source data, handling subject matter expertise, carrying out Bayesian analytics as well as visualizing and exporting occupancy estimation results. We present the PDT project, situate the work within the larger community, and report on the progress of this multi-year project.
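PDT's actual Bayesian machinery is not specified in the abstract; as a toy illustration of the general idea of combining an expert prior with survey data while retaining uncertainty, consider a conjugate normal update for occupancy density (all numbers are hypothetical):

    # Toy conjugate-normal update: fuse an expert prior on occupancy density
    # (people per 1000 ft^2) with a small survey sample; the posterior keeps
    # an explicit uncertainty. All numbers are invented for illustration.
    prior_mean, prior_var = 2.0, 0.5 ** 2   # expert judgment for one building type
    obs = [1.6, 2.3, 1.9, 2.1]              # surveyed densities
    obs_var = 0.4 ** 2                      # assumed measurement variance

    n = len(obs)
    post_prec = 1 / prior_var + n / obs_var
    post_mean = (prior_mean / prior_var + sum(obs) / obs_var) / post_prec
    post_var = 1 / post_prec
    print(f"posterior: {post_mean:.2f} +/- {post_var ** 0.5:.2f} people/1000 ft^2")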
New challenges and opportunities for industrial biotechnology
2012-01-01
Industrial biotechnology has not developed as fast as expected, owing to several challenges, including the emergence of alternative energy sources such as shale gas, natural gas hydrate and oil sands. The weaknesses of microbial and enzymatic processes compared with chemical processing also make industrial biotech products less competitive with their chemical counterparts. However, many opportunities remain if industrial biotech processes can be operated more like chemical ones. By taking advantage of molecular biology and synthetic biology methods, as well as changing process patterns, we can develop bioprocesses that are as competitive as chemical ones, including minimized cells and open, continuous fermentation processes. PMID:22905695
Visualization, documentation, analysis, and communication of large scale gene regulatory networks
Longabaugh, William J.R.; Davidson, Eric H.; Bolouri, Hamid
2009-01-01
Summary Genetic regulatory networks (GRNs) are complex, large-scale, and spatially and temporally distributed. These characteristics impose challenging demands on computational GRN modeling tools, and there is a need for custom modeling tools. In this paper, we report on our ongoing development of BioTapestry, an open source, freely available computational tool designed specifically for GRN modeling. We also outline our future development plans, and give some examples of current applications of BioTapestry. PMID:18757046
Strategic Landpower and a Resurgent Russia: An Operational Approach to Deterrence
2016-05-01
The CBCC uses open source data to compute platform combat values, then groups them into units.
Gabbard, Joseph L.; Shukla, Maulik; Sobral, Bruno
2010-01-01
Systems biology and infectious disease (host-pathogen-environment) research and development is becoming increasingly dependent on integrating data from diverse and dynamic sources. Maintaining integrated resources over long periods of time presents distinct challenges. This paper describes experiences and lessons learned from integrating data in two five-year projects focused on pathosystems biology: the Pathosystems Resource Integration Center (PATRIC, http://patric.vbi.vt.edu/), with a goal of developing bioinformatics resources for the research and countermeasures development communities based on genomics data, and the Resource Center for Biodefense Proteomics Research (RCBPR, http://www.proteomicsresource.org/), with a goal of developing resources based on experiment data, such as microarray and proteomics data, from diverse sources and technologies. Some challenges include integrating genomic sequence and experiment data, data synchronization, data quality control, and usability engineering. We present examples of a variety of data integration problems drawn from our experiences with PATRIC and RCBPR, as well as open research questions related to long-term sustainability, and describe the next steps to meeting these challenges. Novel contributions of this work include (1) an approach for addressing discrepancies between experiment results and interpreted results and (2) expanding the range of data integration techniques to include usability engineering at the presentation level. PMID:20491070
Schäfer, Christian; Schmidt, Alexander H; Sauter, Jürgen
2017-05-30
Knowledge of HLA haplotypes is helpful in many settings, such as disease association studies, population genetics, and hematopoietic stem cell transplantation. Regarding the recruitment of unrelated hematopoietic stem cell donors, HLA haplotype frequencies of specific populations are used to optimize both donor searches for individual patients and strategic donor registry planning. However, the estimation of haplotype frequencies from HLA genotyping data is challenged by the large amount of genotype data, the complex HLA nomenclature, and the heterogeneous and ambiguous nature of typing records. To meet these challenges, we have developed the open-source software Hapl-o-Mat. It estimates haplotype frequencies from population data including an arbitrary number of loci using an expectation-maximization algorithm. Its key features are the processing of different HLA typing resolutions within a given population sample and the handling of ambiguities recorded via multiple allele codes or genotype list strings. Implemented in C++, Hapl-o-Mat facilitates efficient haplotype frequency estimation from large amounts of genotype data. We demonstrate its accuracy and performance on the basis of artificial and real genotype data. Hapl-o-Mat is a versatile and efficient software for HLA haplotype frequency estimation. Its capability of processing various forms of HLA genotype data allows for straightforward haplotype frequency estimation from typing records usually found in stem cell donor registries.
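To make the expectation-maximization idea concrete, here is a minimal sketch of EM for haplotype frequencies from phase-ambiguous genotypes (this is the textbook algorithm, not Hapl-o-Mat's C++ implementation; the two-locus haplotype labels are invented):

    # EM for haplotype frequencies: each individual is represented by the
    # list of (hap_a, hap_b) pairs consistent with its unphased genotype.
    from collections import defaultdict

    def em_haplotype_freqs(genotype_expansions, n_iter=100):
        haps = {h for pairs in genotype_expansions for a, b in pairs for h in (a, b)}
        freq = {h: 1.0 / len(haps) for h in haps}      # uniform start
        for _ in range(n_iter):
            counts = defaultdict(float)
            for pairs in genotype_expansions:
                # E-step: posterior weight of each compatible haplotype pair
                # under Hardy-Weinberg proportions.
                w = [freq[a] * freq[b] * (2 if a != b else 1) for a, b in pairs]
                total = sum(w)
                for (a, b), wi in zip(pairs, w):
                    counts[a] += wi / total
                    counts[b] += wi / total
            # M-step: renormalise expected haplotype counts into frequencies.
            n = sum(counts.values())
            freq = {h: c / n for h, c in counts.items()}
        return freq

    # One double-heterozygote (phase-ambiguous) and one resolved individual:
    sample = [[("AB", "ab"), ("Ab", "aB")], [("AB", "AB")]]
    print(em_haplotype_freqs(sample))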
NASA Astrophysics Data System (ADS)
Udell, C.; Selker, J. S.
2017-12-01
The increasing availability and functionality of Open-Source software and hardware along with 3D printing, low-cost electronics, and proliferation of open-access resources for learning rapid prototyping are contributing to fundamental transformations and new technologies in environmental sensing. These tools invite reevaluation of time-tested methodologies and devices toward more efficient, reusable, and inexpensive alternatives. Building upon Open-Source design facilitates community engagement and invites a Do-It-Together (DIT) collaborative framework for research where solutions to complex problems may be crowd-sourced. However, barriers persist that prevent researchers from taking advantage of the capabilities afforded by open-source software, hardware, and rapid prototyping. Some of these include: requisite technical skillsets, knowledge of equipment capabilities, identifying inexpensive sources for materials, money, space, and time. A university MAKER space staffed by engineering students to assist researchers is one proposed solution to overcome many of these obstacles. This presentation investigates the unique capabilities the USDA-funded Openly Published Environmental Sensing (OPEnS) Lab affords researchers, within Oregon State and internationally, and the unique functions these types of initiatives support at the intersection of MAKER spaces, Open-Source academic research, and open-access dissemination.
Open-source software: not quite endsville.
Stahl, Matthew T
2005-02-01
Open-source software will never achieve ubiquity. There are environments in which it simply does not flourish. By its nature, open-source development requires free exchange of ideas, community involvement, and the efforts of talented and dedicated individuals. However, pressures can come from several sources that prevent this from happening. In addition, openness and complex licensing issues invite misuse and abuse. Care must be taken to avoid the pitfalls of open-source software.
Developing an Open Source Option for NASA Software
NASA Technical Reports Server (NTRS)
Moran, Patrick J.; Parks, John W. (Technical Monitor)
2003-01-01
We present arguments in favor of developing an Open Source option for NASA software; in particular we discuss how Open Source is compatible with NASA's mission. We compare and contrast several of the leading Open Source licenses, and propose one - the Mozilla license - for use by NASA. We also address some of the related issues for NASA with respect to Open Source. In particular, we discuss some of the elements in the External Release of NASA Software document (NPG 2210.1A) that will likely have to be changed in order to make Open Source a reality within the agency.
Teaching Introductory Astronomy "Open and Out" & Looking Forward to the 2017 Solar Eclipse
NASA Astrophysics Data System (ADS)
Chu, I.-Wen Mike; Cronkhite, Jeff
2016-01-01
We present a new effort in teaching introductory astronomy that addresses the specific challenges facing small colleges, including limited resources, changing generational behavior and new technological trends. The approach adopts open source solutions into the developmental learning materials, aiming for standardization and wide-scale applicability. In addition, we utilize events and resources outside the classroom in the learning. Examples of the development include laboratory exercises based on the planetarium software Stellarium and remediation exercises using Khan Academy instructional videos. As the eventual goal is to move toward greater student autonomy, the cycles of improvement necessarily require student feedback in an entirely different instructional style based on egalitarian dialogues. We highlight a laboratory exercise on Earth-Moon distance estimation using parallax during the upcoming 2017 solar eclipse to illustrate the "open and out" philosophy. Achievements, limitations and some diagnostics of the current effort are also presented.
NASA Technical Reports Server (NTRS)
Davis, Jeffrey R.; Richard, Elizabeth E.
2011-01-01
On October 18, 2010, the NASA Human Health and Performance Center (NHHPC) was opened to enable collaboration among government, academic and industry members. Membership rapidly grew to 60 members (http://nhhpc.nasa.gov) and members began identifying collaborative projects as detailed below. In addition, a first workshop in open collaboration and innovation was conducted on January 19, 2011, by the NHHPC, resulting in additional challenges and projects for further development. This first workshop was a result of the successes of the Space Life Sciences Directorate (SLSD) in running open innovation challenges over the past two years. In 2008, the NASA Johnson Space Center SLSD began pilot projects in open innovation (crowdsourcing) to determine if these new internet-based platforms could indeed find solutions to difficult technical problems. From 2008 to 2010, the SLSD issued 34 challenges, 14 externally and 20 internally. The 14 external challenges were conducted through three different vendors: InnoCentive, Yet2.com and TopCoder. The 20 internal challenges were conducted using the InnoCentive platform, customized to NASA use, and promoted as NASA@Work. The results from the 34 challenges involved not only technical solutions that were reported previously at the 61st IAC, but also the formation of new collaborative relationships. For example, the TopCoder pilot was expanded by the NASA Space Operations Mission Directorate to the NASA Tournament Lab in collaboration with Harvard Business School and TopCoder. Building on these initial successes, the NHHPC workshop of January 2011, and ongoing NHHPC member discussions, several important collaborations are in development: a Space Act Agreement between NASA and GE for collaborative projects, NASA and academia for a Visual Impairment / Intracranial Hypertension summit (February 2011), NASA and the DoD through the Defense Venture Catalyst Initiative (DeVenCI) for a technical needs workshop (June 2011), NASA and the San Diego Zoo in biomimicry, NASA and the FAA Center of Excellence for Commercial Space Flight for collaborative projects, NASA and the FDA concerning automatic external defibrillators, and NASA and Tufts University for an education pilot. These and other collaborations will be detailed in the paper, demonstrating that a government-sponsored convening entity (the NHHPC) can facilitate industry, academic, and non-profit collaborations for products of mutual benefit.
NASA Astrophysics Data System (ADS)
Giebel, B. M.; Riemer, D. D.; Swart, P. K.
2008-12-01
Determining δ13C values for reduced hydrocarbons in atmospheric samples is emerging as an important area of interest in isotopic analytical chemistry. The importance of stable isotopic data stems from its usefulness in differentiating between multiple sources and in assessing changing source structure and source strength in a constantly changing environment. Though much stable isotopic work is available on CH4 and other VOCs, particularly NMHCs, few studies have focused on oxygenated volatile organic compounds (OVOCs) such as methanol, ethanol, acetone, and propanal. Both anthropogenic and biogenic sources exist for these OVOCs, and their role in atmospheric chemistry is important. The OVOCs of interest here are found in very low concentrations in ambient air (low ppbv to high pptv) and thus pose unique challenges for analysis by GC-C-IRMS. To address these challenges, a Hewlett Packard 6890 gas chromatograph interfaced with a Europa Scientific Geo 20-20 IRMS was modified to accept ambient atmospheric samples. To sharpen peak shape, all dead volume within the system was minimized, starting with the addition of a fused silica combustion tube (0.25 mm i.d.) containing Cu, Pt, or Ni wires (0.1 mm dia.). To assist water removal from the sample stream before delivery to the IRMS, a small-volume Nafion dryer (0.20 mm i.d.) and a water trap submersed in a dry-ice/acetone slurry were tested individually. Deactivated fused silica (0.1 mm i.d.) joins the custom-designed open split to the ion source and effectively decreases dead volume while maintaining chromatographic separation and the desired source pressure. To decrease the variability of the instrumentation, and to increase the total amount of carbon at the ion source, the total carrier gas flow is reduced to 0.7 mL/min. Reference gas addition is manually facilitated by a six-port rotary valve upstream of the open split, which delivers diluted CO2 reference gas (0.1% CO2 in He) directly to the ion source while maintaining continuous flow conditions from the gas chromatograph. Experimental results of initial biogenic source sampling will be presented and future directions will be discussed.
Open-Source Data and the Study of Homicide.
Parkin, William S; Gruenewald, Jeff
2015-07-20
To date, no discussion has taken place in the social sciences as to the appropriateness of using open-source data to augment, or replace, official data sources in homicide research. The purpose of this article is to examine whether open-source data have the potential to be used as a valid and reliable data source in testing theory and studying homicide. Official and open-source homicide data were collected as a case study in a single jurisdiction over a 1-year period. The data sets were compared to determine whether open sources could recreate the population of homicides and variable responses collected in official data. Open-source data were able to replicate the population of homicides identified in the official data. Moreover, for every variable measured, the open sources captured as much, or more, of the information presented in the official data. Variables not available in official data, but potentially useful for testing theory, were also identified in open sources. The results of the case study show that open-source data are potentially as effective as official data in identifying individual- and situational-level characteristics, provide access to variables not found in official homicide data, and offer geographic data that can be used to link macro-level characteristics to homicide events.
EEGNET: An Open Source Tool for Analyzing and Visualizing M/EEG Connectome
Hassan, Mahmoud; Shamas, Mohamad; Khalil, Mohamad; El Falou, Wassim; Wendling, Fabrice
2015-01-01
The brain is a large-scale complex network often referred to as the “connectome”. Exploring the dynamic behavior of the connectome is a challenging issue, as both excellent time and space resolution are required. In this context, Magneto/Electroencephalography (M/EEG) are effective neuroimaging techniques allowing for analysis of the dynamics of functional brain networks at the scalp level and/or at reconstructed sources. However, a tool that can cover all the processing steps of identifying brain networks from M/EEG data has been missing. In this paper, we report a novel software package, called EEGNET, running under MATLAB (MathWorks, Inc.) and allowing for analysis and visualization of functional brain networks from M/EEG recordings. EEGNET is developed to analyze networks either at the level of scalp electrodes or at the level of reconstructed cortical sources. It includes i) basic steps in preprocessing M/EEG signals, ii) the solution of the inverse problem to localize/reconstruct the cortical sources, iii) the computation of functional connectivity among signals collected at surface electrodes and/or time courses of reconstructed sources, and iv) the computation of network measures based on graph theory analysis. EEGNET is the only tool that combines M/EEG functional connectivity analysis with the computation of network measures derived from graph theory. The first version of EEGNET is easy to use, flexible and user friendly. EEGNET is an open source tool and can be freely downloaded from this webpage: https://sites.google.com/site/eegnetworks/. PMID:26379232
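A minimal sketch of the two final steps named above, estimating functional connectivity and deriving graph measures (not EEGNET itself, which runs under MATLAB; the signals here are random placeholders):

    # Correlation-based connectivity plus simple graph-theory measures.
    import numpy as np

    rng = np.random.default_rng(0)
    signals = rng.standard_normal((19, 1000))   # 19 channels x 1000 samples

    conn = np.abs(np.corrcoef(signals))         # connectivity: |Pearson r|
    np.fill_diagonal(conn, 0)

    adj = conn > 0.1                            # binarise with a threshold
    degree = adj.sum(axis=1)                    # node degree per channel
    density = adj.sum() / (adj.size - len(adj)) # fraction of possible edges
    print(degree, f"network density = {density:.2f}")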
Development of a data entry auditing protocol and quality assurance for a tissue bank database.
Khushi, Matloob; Carpenter, Jane E; Balleine, Rosemary L; Clarke, Christine L
2012-03-01
Human transcription error is an acknowledged risk when extracting information from paper records for entry into a database. For a tissue bank, it is critical that accurate data are provided to researchers with approved access to tissue bank material. The challenges of tissue bank data collection include manual extraction of data from complex medical reports that are accessed from a number of sources and that differ in style and layout. As a quality assurance measure, the Breast Cancer Tissue Bank (http://www.abctb.org.au) has implemented an auditing protocol and, in order to efficiently execute the process, has developed an open source database plug-in tool (eAuditor) to assist in auditing of data held in our tissue bank database. Using eAuditor, we have identified that human entry errors range from 0.01% when entering donor's clinical follow-up details, to 0.53% when entering pathological details, highlighting the importance of an audit protocol tool such as eAuditor in a tissue bank database. eAuditor was developed and tested on the Caisis open source clinical-research database; however, it can be integrated in other databases where similar functionality is required.
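A toy version of the kind of audit described above (the actual tool is a database plug-in): re-keyed source values are compared field by field with what sits in the database, yielding a discrepancy rate per field. Records and field names are invented for illustration:

    # Compare database records against independently re-keyed audit records.
    db =    [{"id": 1, "grade": "2", "er_status": "pos"},
             {"id": 2, "grade": "3", "er_status": "neg"}]
    audit = [{"id": 1, "grade": "2", "er_status": "pos"},
             {"id": 2, "grade": "2", "er_status": "neg"}]

    for field in ["grade", "er_status"]:
        mismatches = sum(d[field] != a[field] for d, a in zip(db, audit))
        print(f"{field}: {mismatches / len(db):.2%} discrepancy rate")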
OpenComet: An automated tool for comet assay image analysis
Gyori, Benjamin M.; Venkatachalam, Gireedhar; Thiagarajan, P.S.; Hsu, David; Clement, Marie-Veronique
2014-01-01
Reactive species such as free radicals are constantly generated in vivo and DNA is the most important target of oxidative stress. Oxidative DNA damage is used as a predictive biomarker to monitor the risk of development of many diseases. The comet assay is widely used for measuring oxidative DNA damage at a single cell level. The analysis of comet assay output images, however, poses considerable challenges. Commercial software is costly and restrictive, while free software generally requires laborious manual tagging of cells. This paper presents OpenComet, an open-source software tool providing automated analysis of comet assay images. It uses a novel and robust method for finding comets based on geometric shape attributes and segmenting the comet heads through image intensity profile analysis. Due to automation, OpenComet is more accurate, less prone to human bias, and faster than manual analysis. A live analysis functionality also allows users to analyze images captured directly from a microscope. We have validated OpenComet on both alkaline and neutral comet assay images as well as sample images from existing software packages. Our results show that OpenComet achieves high accuracy with significantly reduced analysis time. PMID:24624335
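A much-simplified sketch of one idea named in the abstract, segmenting the comet head from an intensity profile (OpenComet's full method also uses geometric shape attributes; the profile and threshold fraction here are invented):

    # Locate the comet head in a 1-D intensity profile by thresholding
    # relative to the peak, then estimate the fraction of DNA in the tail.
    import numpy as np

    profile = np.array([1, 2, 8, 20, 32, 30, 18, 9, 6, 5, 4, 3, 2, 1], float)
    head_frac = 0.5                         # head = region above 50% of peak

    head = np.flatnonzero(profile >= head_frac * profile.max())
    head_start, head_end = head.min(), head.max()
    tail_dna = profile[head_end + 1:].sum()
    pct_tail = 100 * tail_dna / profile.sum()
    print(f"head columns {head_start}-{head_end}, % DNA in tail = {pct_tail:.1f}")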
Gloyd, Stephen; Wagenaar, Bradley H; Woelk, Godfrey B; Kalibala, Samuel
2016-01-01
HIV programme data from routine health information systems (RHIS) and personal health information (PHI) provide ample opportunities for secondary data analysis. However, these data also pose unique challenges for use in health system monitoring and in process and impact evaluations. Our analyses focused on retrospective case reviews of four of the HIV-related studies published in this JIAS supplement. We identify specific opportunities and challenges with respect to the secondary analysis of RHIS and PHI data. Challenges in working with both HIV-related RHIS and PHI included missing, inconsistent and implausible data; rapidly changing indicators; systematic differences in the utilization of services; and patient linkages over time and across different data sources. Specific challenges among RHIS data included numerous registries and indicators, inconsistent data entry, gaps in data transmission, duplicate registry of information, numerator-denominator incompatibility and infrequent use of data for decision-making. Challenges specific to PHI included the time burden for busy providers, the culture of lax charting, overflowing archives for paper charts and infrequent chart review. Many of the challenges that undermine effective use of RHIS and PHI data for analyses are related to the processes and context of collecting the data, excessive data requirements, lack of knowledge of the purpose of data and the limited use of data among those generating the data. Recommendations include simplifying data sources, analysis and reporting; conducting systematic data quality audits; enhancing the use of data for decision-making; promoting routine chart review linked with simple patient tracking systems; and encouraging open access to RHIS and PHI data for increased use.
Using a pseudo-thermal light source to teach spatial coherence
NASA Astrophysics Data System (ADS)
Pieper, K.; Bergmann, A.; Dengler, R.; Rockstuhl, C.
2018-07-01
Teaching students spatial coherence constitutes a challenge. On the one hand, discussing it theoretically requires considerable mathematical sophistication. On the other hand, discussing it experimentally is hardly possible, as coherence usually cannot be directly observed. To solve this problem, we show, by studying the contrast of interference patterns of a double slit, that speckles of a pseudo-thermal light source, consisting of a laser and a rotating diffuser disc, are equivalent to the spatial extent of coherent areas of a thermal light source. Coherent areas are spatial regions within which light can be considered coherent. The unique advantage of such a pseudo-thermal light source is the opportunity to directly observe the spatial extent of the coherent areas. This renders the phenomenon perceptible and accessible by various experiments, as described in this contribution. This opens modern paths to teach spatial coherence to students with a notably reduced level of abstraction.
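As a quantitative anchor for the double-slit contrast argument (a standard textbook estimate via the van Cittert-Zernike theorem, not a result derived in this abstract), the fringe visibility for an incoherent slit source of width a at distance z is:

    % Fringe visibility of a double slit with separation s, illuminated by
    % an incoherent slit source of width a at distance z (wavelength \lambda):
    V(s) = \left|\,\mathrm{sinc}\!\left(\frac{a s}{\lambda z}\right)\right|,
    \qquad \mathrm{sinc}(x) = \frac{\sin(\pi x)}{\pi x},
    % so the transverse coherence width (first zero of V) is roughly
    w_c \approx \frac{\lambda z}{a}.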
Fostering successful scientific software communities
NASA Astrophysics Data System (ADS)
Bangerth, W.; Heister, T.; Hwang, L.; Kellogg, L. H.
2016-12-01
Developing sustainable open source software packages for the sciences appears at first to be primarily a technical challenge: How can one create stable and robust algorithms, appropriate software designs, sufficient documentation, quality assurance strategies such as continuous integration and test suites, or backward compatibility approaches that yield high-quality software usable not only by the authors, but also the broader community of scientists? However, our experience from almost two decades of leading the development of the deal.II software library (http://www.dealii.org, a widely-used finite element package) and the ASPECT code (http://aspect.dealii.org, used to simulate convection in the Earth's mantle) has taught us that technical aspects are not the most difficult ones in scientific open source software. Rather, it is the social challenge of building and maintaining a community of users and developers interested in answering questions on user forums, contributing code, and jointly finding solutions to common technical and non-technical challenges. These problems are posed in an environment where project leaders typically have no resources to reward the majority of contributors, where very few people are specifically paid for the work they do on the project, and with frequent turnover of contributors as project members rotate into and out of jobs. In particular, much software work is done by graduate students who may become fluent enough in a software only a year or two before they leave academia. We will discuss strategies we have found do and do not work in maintaining and growing communities around the scientific software projects we lead. Specifically, we will discuss the management style necessary to keep contributors engaged, ways to give credit where credit is due, and structuring documentation to decrease reliance on forums and thereby allow user communities to grow without straining those who answer questions.
ADMS State of the Industry and Gap Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agalgaonkar, Yashodhan P.; Marinovici, Maria C.; Vadari, Subramanian V.
2016-03-31
An advanced distribution management system (ADMS) is a platform for optimized distribution system operational management. This platform comprises distribution management system (DMS) applications, supervisory control and data acquisition (SCADA), an outage management system (OMS), and a distributed energy resource management system (DERMS). One of the primary objectives of this work is to study and analyze several ADMS component and auxiliary systems. All the important component and auxiliary systems, SCADA, GISs, DMSs, AMRs/AMIs, OMSs, and DERMS, are discussed in this report. Their current-generation technologies are analyzed, and their integration with (or evolution toward) an ADMS technology is discussed. An ADMS technology state-of-the-art and gap analysis is also presented. Two technical gaps are observed. The integration challenge between the component operational systems is the single largest challenge for ADMS design and deployment. Another significant challenge concerns essential ADMS applications, for instance fault location, isolation, and service restoration (FLISR) and volt-var optimization (VVO): there are relatively few ADMS application developers because ADMS software platforms are not open source. A further critical gap, while not technical in nature compared with the two above, is still important to consider: the data models currently residing in utility GIS systems are either incomplete or inaccurate or both. These data are essential for planning and operations because they are typically one of the primary sources from which power system models are created. To achieve the full potential of ADMS, the ability to execute accurate power flow solutions is an important prerequisite. These critical gaps are hindering wider utility adoption of ADMS technology. The development of an open architecture platform can eliminate many of these barriers and also aid seamless integration of distribution utility legacy systems with an ADMS.
ERIC Educational Resources Information Center
Kapor, Mitchell
2005-01-01
Open source software projects involve the production of goods, but in software projects, the "goods" consist of information. The open source model is an alternative to the conventional centralized, command-and-control way in which things are usually made. In contrast, open source projects are genuinely decentralized and transparent. Transparent…
Ardal, Christine; Alstadsæter, Annette; Røttingen, John-Arne
2011-09-28
Innovation through an open source model has proven to be successful for software development. This success has led many to speculate if open source can be applied to other industries with similar success. We attempt to provide an understanding of open source software development characteristics for researchers, business leaders and government officials who may be interested in utilizing open source innovation in other contexts and with an emphasis on drug discovery. A systematic review was performed by searching relevant, multidisciplinary databases to extract empirical research regarding the common characteristics and barriers of initiating and maintaining an open source software development project. Common characteristics to open source software development pertinent to open source drug discovery were extracted. The characteristics were then grouped into the areas of participant attraction, management of volunteers, control mechanisms, legal framework and physical constraints. Lastly, their applicability to drug discovery was examined. We believe that the open source model is viable for drug discovery, although it is unlikely that it will exactly follow the form used in software development. Hybrids will likely develop that suit the unique characteristics of drug discovery. We suggest potential motivations for organizations to join an open source drug discovery project. We also examine specific differences between software and medicines, specifically how the need for laboratories and physical goods will impact the model as well as the effect of patents.
Open Source Paradigm: A Synopsis of The Cathedral and the Bazaar for Health and Social Care.
Benson, Tim
2016-07-04
Open source software (OSS) is becoming more fashionable in health and social care, although the ideas are not new. However progress has been slower than many had expected. The purpose is to summarise the Free/Libre Open Source Software (FLOSS) paradigm in terms of what it is, how it impacts users and software engineers and how it can work as a business model in health and social care sectors. Much of this paper is a synopsis of Eric Raymond's seminal book The Cathedral and the Bazaar, which was the first comprehensive description of the open source ecosystem, set out in three long essays. Direct quotes from the book are used liberally, without reference to specific passages. The first part contrasts open and closed source approaches to software development and support. The second part describes the culture and practices of the open source movement. The third part considers business models. A key benefit of open source is that users can access and collaborate on improving the software if they wish. Closed source code may be regarded as a strategic business risk that that may be unacceptable if there is an open source alternative. The sharing culture of the open source movement fits well with that of health and social care.
The National Cancer Institute’s Clinical Proteomic Tumor Analysis Consortium (CPTAC) is pleased to announce the opening of the leaderboard to its Proteogenomics Computational DREAM Challenge. The leaderboard remains open for submissions from September 25, 2017 through October 8, 2017, with the Challenge expected to run until November 17, 2017.
Weather forecasting with open source software
NASA Astrophysics Data System (ADS)
Rautenhaus, Marc; Dörnbrack, Andreas
2013-04-01
To forecast the weather situation during aircraft-based atmospheric field campaigns, we employ a tool chain of existing and self-developed open source software tools and open standards. Of particular value are the Python programming language with its extension libraries NumPy, SciPy, PyQt4, Matplotlib and the basemap toolkit, the NetCDF standard with the Climate and Forecast (CF) Metadata conventions, and the Open Geospatial Consortium Web Map Service standard. These open source libraries and open standards helped to implement the "Mission Support System", a Web Map Service based tool to support weather forecasting and flight planning during field campaigns. The tool has been implemented in Python and has also been released as open source (Rautenhaus et al., Geosci. Model Dev., 5, 55-71, 2012). In this presentation we discuss the usage of free and open source software for weather forecasting in the context of research flight planning, and highlight how the field campaign work benefits from using open source tools and open standards.
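A minimal sketch of the kind of tool chain described above, reading a CF-compliant NetCDF forecast field and rendering it with Matplotlib (the file name "forecast.nc" and the variable names "t2m", "lat" and "lon" are illustrative assumptions, not taken from the Mission Support System):

    # Read one forecast field from a CF/NetCDF file and plot it.
    import matplotlib.pyplot as plt
    import netCDF4

    with netCDF4.Dataset("forecast.nc") as nc:
        lat = nc.variables["lat"][:]
        lon = nc.variables["lon"][:]
        t2m = nc.variables["t2m"][0, :, :]   # first forecast step

    plt.contourf(lon, lat, t2m, levels=20)
    plt.colorbar(label="2 m temperature (K)")
    plt.xlabel("Longitude"); plt.ylabel("Latitude")
    plt.savefig("t2m_forecast.png", dpi=150)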
Open Source Software Development
2011-01-01
NASA Technical Reports Server (NTRS)
Laymon, Charles A,; Kress, Martin P.; McCracken, Jeff E.; Spehn, Stephen L.; Tanner, Steve
2011-01-01
The Arctic Collaborative Environment (ACE) project is a new international partnership for information sharing to meet the challenges of addressing Arctic environmental issues. The goal of ACE is to create an open source, web-based, multi-national monitoring, analysis, and visualization decision-support system for Arctic environmental assessment, management, and sustainability. This paper will describe the concept, system architecture, and data products that are being developed and disseminated among partners and independent users through remote access.
Database Search Engines: Paradigms, Challenges and Solutions.
Verheggen, Kenneth; Martens, Lennart; Berven, Frode S; Barsnes, Harald; Vaudel, Marc
2016-01-01
The first step in identifying proteins from mass spectrometry based shotgun proteomics data is to infer peptides from tandem mass spectra, a task generally achieved using database search engines. In this chapter, the basic principles of database search engines are introduced with a focus on open source software, and the use of database search engines is demonstrated using the freely available SearchGUI interface. This chapter also discusses how to tackle general issues related to sequence database searching and shows how to minimize their impact.
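A toy illustration of what a database search engine scores: theoretical b- and y-ion m/z values for a candidate peptide, to be matched against an observed tandem mass spectrum (standard monoisotopic residue masses; singly charged ions assumed; this is the textbook scheme, not SearchGUI's implementation):

    # Theoretical singly charged b/y fragment ions for a candidate peptide.
    RES = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
           "V": 99.06841, "L": 113.08406, "K": 128.09496, "E": 129.04259}
    PROTON, WATER = 1.00728, 18.01056

    def fragment_ions(peptide):
        """Return b- and y-ion m/z lists for a peptide (charge 1+)."""
        masses = [RES[aa] for aa in peptide]
        b = [sum(masses[:i]) + PROTON for i in range(1, len(masses))]
        y = [sum(masses[i:]) + WATER + PROTON for i in range(1, len(masses))]
        return b, y

    b_ions, y_ions = fragment_ions("PEPK")
    print(b_ions, y_ions)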
IRAF and STSDAS under the new ALPHA architecture
NASA Technical Reports Server (NTRS)
Zarate, N. R.
1992-01-01
Digital's next generation RISC architecture, known as ALPHA, presents many IRAF system portability questions and challenges to both site managers and end users. DEC promises to support the ULTRIX, VMS, and OSF/1 operating systems, which should allow IRAF to be ported to the new architecture at either the program executable level (using VEST), or at the source level, where IRAF can be tuned for greater performance. These notes highlight some of the details of porting IRAF to OpenVMS on the ALPHA architecture.
Reference software implementation for GIFTS ground data processing
NASA Astrophysics Data System (ADS)
Garcia, R. K.; Howell, H. B.; Knuteson, R. O.; Martin, G. D.; Olson, E. R.; Smuga-Otto, M. J.
2006-08-01
Future satellite weather instruments such as high spectral resolution imaging interferometers pose a challenge to the atmospheric science and software development communities due to the immense data volumes they will generate. An open-source, scalable reference software implementation demonstrating the calibration of radiance products from an imaging interferometer, the Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), is presented. This paper covers essential design principles laid out in summary system diagrams, lessons learned during implementation, and preliminary test results from the GIFTS Information Processing System (GIPS) prototype.
Current and emerging laser sensors for greenhouse gas sensing and leak detection
NASA Astrophysics Data System (ADS)
Frish, Michael B.
2014-05-01
To reduce atmospheric accumulation of the greenhouse gases methane and carbon dioxide, networks of continuously operating sensors that monitor and map their sources are desirable. In this paper, we discuss advances in laser-based open-path leak detectors, as well as technical and economic challenges inhibiting widespread sensor deployment for "ubiquitous monitoring". We describe permanently-installed, wireless, solar-powered sensors that overcome previous installation and maintenance difficulties while providing autonomous real-time leak reporting without false alarms.
Increasing Flight Software Reuse with OpenSatKit
NASA Technical Reports Server (NTRS)
McComas, David C.
2018-01-01
In January 2015 the NASA Goddard Space Flight Center (GSFC) released the Core Flight System (cFS) as open source under the NASA Open Source Agreement (NOSA) license. The cFS is based on flight software (FSW) developed for 12 spacecraft spanning nearly two decades of effort, and it can provide about a third of the FSW functionality for a low-earth orbiting scientific spacecraft. The cFS is a FSW framework that is portable, configurable, and extendable using a product line deployment model. However, the components are maintained separately, so the user must configure, integrate, and deploy them as a cohesive functional system. This can be very challenging, especially for organizations such as universities building cubesats that have minimal experience developing FSW. Supporting universities was one of the primary motivators for releasing the cFS under NOSA. This paper describes the OpenSatKit that was developed to address the cFS deployment challenges and to serve as a cFS training platform for new users. It provides a fully functional out-of-the-box software system that includes NASA's cFS, Ball Aerospace's command and control system COSMOS, and a NASA dynamic simulator called 42. The kit is freely available since all of the components have been released as open source. The kit runs on a Linux platform and includes 8 cFS applications, several kit-specific applications, and built-in demos illustrating how to use key application features. It also includes the software necessary to port the cFS to a Raspberry Pi and instructions for configuring COSMOS to communicate with the target. All of the demos and test scripts can be rerun unchanged with the cFS running on the Raspberry Pi. The cFS uses a 3-tiered layered architecture including a platform abstraction layer, a Core Flight Executive (cFE) middle layer, and an application layer. Similar to smart phones, the cFS application layer is the key architectural feature for users to extend the FSW functionality to meet their mission-specific requirements. The platform abstraction layer and the cFE layer go a step further than smart phones by providing a platform-agnostic Application Programming Interface (API) that allows applications to run unchanged on different platforms. OpenSatKit can serve two significant architectural roles that will further help the adoption of the cFS and help create a community of users that can share assets. First, the kit is being enhanced to automate the integration of applications with the goal of creating a virtual cFS "App Store". Second, a platform certification test suite can be developed that would allow users to verify the port of the cFS to a new platform. This paper will describe the current state of these efforts and future plans.
Open-source hardware for medical devices.
Niezen, Gerrit; Eslambolchilar, Parisa; Thimbleby, Harold
2016-04-01
Open-source hardware is hardware whose design is made publicly available so anyone can study, modify, distribute, make and sell the design or the hardware based on that design. Some open-source hardware projects can potentially be used as active medical devices. The open-source approach offers a unique combination of advantages, including reducing costs and faster innovation. This article compares 10 open-source healthcare projects in terms of how easy it is to obtain the required components and build the device. PMID:27158528
The case for open-source software in drug discovery.
DeLano, Warren L
2005-02-01
Widespread adoption of open-source software for network infrastructure, web servers, code development, and operating systems leads one to ask how far it can go. Will "open source" spread broadly, or will it be restricted to niches frequented by hopeful hobbyists and midnight hackers? Here we identify reasons for the success of open-source software and predict how consumers in drug discovery will benefit from new open-source products that address their needs with increased flexibility and in ways complementary to proprietary options.
Open Source Bayesian Models. 1. Application to ADME/Tox and Drug Discovery Datasets.
Clark, Alex M; Dole, Krishna; Coulon-Spektor, Anna; McNutt, Andrew; Grass, George; Freundlich, Joel S; Reynolds, Robert C; Ekins, Sean
2015-06-22
On the order of hundreds of absorption, distribution, metabolism, excretion, and toxicity (ADME/Tox) models have been described in the literature in the past decade, which are more often than not inaccessible to anyone but their authors. Public accessibility is also an issue with computational models for bioactivity, and the ability to share such models still remains a major challenge limiting drug discovery. We describe the creation of a reference implementation of a Bayesian model-building software module, which we have released as an open source component that is now included in the Chemistry Development Kit (CDK) project, as well as implemented in the CDD Vault and in several mobile apps. We use this implementation to build an array of Bayesian models for ADME/Tox, in vitro and in vivo bioactivity, and other physicochemical properties. We show that these models possess cross-validated receiver operating characteristic curve values comparable to those generated previously in prior publications using alternative tools. We have now described how the implementation of Bayesian models with FCFP6 descriptors generated in the CDD Vault enables the rapid production of robust machine learning models from public data or the user's own datasets. The current study sets the stage for generating models in proprietary software (such as CDD) and exporting these models in a format that could be run in open source software using CDK components. This work also demonstrates that we can enable biocomputation across distributed private or public datasets to enhance drug discovery. PMID:25994950
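A sketch of the same modelling idea using scikit-learn's Bernoulli naive Bayes on binary fingerprint bits (this is not the CDK implementation described above, and random bit vectors stand in for FCFP6 fingerprints):

    # Bayesian classifier on binary fingerprint bits with cross-validated
    # ROC AUC scoring; placeholder data, so the score will hover near 0.5.
    import numpy as np
    from sklearn.naive_bayes import BernoulliNB
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    X = rng.integers(0, 2, size=(200, 512))   # 200 compounds x 512 bits
    y = rng.integers(0, 2, size=200)          # active / inactive labels

    model = BernoulliNB(alpha=1.0)            # Laplace smoothing
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"5-fold cross-validated ROC AUC: {scores.mean():.2f}")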
NASA Astrophysics Data System (ADS)
Hicks, S. D.; Aufdenkampe, A. K.; Horsburgh, J. S.; Arscott, D. B.; Muenz, T.; Bressler, D. W.
2016-12-01
The explosion in DIY open-source hardware and software has resulted in the development of affordable and accessible technologies, like drones and weather stations, that can greatly assist the general public in monitoring environmental health and its degradation. It is widely recognized that educating and supporting audiences in pursuit of STEM literacy and the application of emerging technologies is a challenge for the future of citizen science and for preparing high school graduates to be actively engaged in environmental stewardship. It is also clear that detecting environmental change and degradation over time and space will be greatly enhanced by expanded use of networked, remote monitoring technologies by watershed organizations and citizen scientists, if data collection and reporting are properly carried out and curated. However, there are few focused efforts to link citizen scientists and school programs with these emerging tools. We have started a multi-year program to develop hardware and teaching materials for training students and citizen scientists in the use of open source hardware for environmental monitoring. Scientists and educators around the world have started building their own dataloggers and devices using a variety of boards based on open source electronics. This new hardware is now providing researchers with an inexpensive alternative to commercial data logging and transmission hardware. We will present a variety of hardware solutions using the Arduino-compatible EnviroDIY Mayfly board (http://envirodiy.org/mayfly) that can be used to build and deploy a rugged environmental monitoring station using a wide variety of sensors and options, giving users a fully customizable device for making measurements almost anywhere. A database and visualization system is being developed that will allow users to view and manage the data their devices are collecting. We will also present our plan for developing curricula and leading workshops for various school programs and citizen scientist groups to teach them how to build, deploy, and maintain their own environmental monitoring stations.
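A small sketch of one data-management task implied above, scanning a logger's time series for transmission gaps (not part of the EnviroDIY toolchain; the timestamps and 15-minute interval are invented):

    # Flag gaps in a datalogger's reporting interval.
    from datetime import datetime, timedelta

    stamps = [datetime(2016, 7, 1, 0, 0) + timedelta(minutes=15 * i)
              for i in (0, 1, 2, 5, 6)]       # two missing 15-min readings
    expected = timedelta(minutes=15)

    for prev, cur in zip(stamps, stamps[1:]):
        if cur - prev > expected:
            missed = (cur - prev) // expected - 1
            print(f"gap: {prev} -> {cur} ({missed} readings missed)")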
The Confluence of GIS, Cloud and Open Source, Enabling Big Raster Data Applications
NASA Astrophysics Data System (ADS)
Plesea, L.; Emmart, C. B.; Boller, R. A.; Becker, P.; Baynes, K.
2016-12-01
The rapid evolution of available cloud services is profoundly changing the way applications are developed and used. Massive object stores, service scalability, and continuous integration are some of the most important cloud technology advances that directly influence science applications and GIS. At the same time, more and more scientists are using GIS platforms in their day-to-day research. Yet with new opportunities there are always some challenges. Given the large amount of data commonly required in science applications, usually large raster datasets, connectivity is one of the biggest problems. Connectivity has two aspects: the limited bandwidth and latency of the communication link due to the geographical location of the resources, and the interoperability and intrinsic efficiency of the interface protocol used to connect. NASA and Esri are actively helping each other and collaborating on a few open source projects, aiming to provide some of the core technology components that directly address these GIS-enabled data connectivity problems. Last year Esri contributed LERC, a very fast and efficient compression algorithm, to the GDAL/MRF format, which is itself a NASA/Esri collaboration project. The MRF raster format has cloud-aware features that make it possible to build high performance web services on cloud platforms, as some of the Esri projects demonstrate. Currently, another NASA open source project, the high performance OnEarth WMTS server, is being refactored and enhanced to better integrate with MRF, GDAL and Esri software. Taken together, GDAL, MRF and OnEarth form the core of an open source CloudGIS toolkit that is already showing results. Since it is well integrated with GDAL, which is the most common interoperability component of GIS applications, this approach should improve the connectivity and performance of many science and GIS applications in the cloud.
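As an illustration of the MRF/LERC piece of this toolkit, the following sketch converts a GeoTIFF to a tiled MRF with LERC compression through the GDAL Java bindings. It is a sketch under stated assumptions: a GDAL build that includes the MRF driver, and creation option names taken from the MRF driver documentation (verify against your GDAL version); file names are illustrative.

import java.util.Vector;

import org.gdal.gdal.Dataset;
import org.gdal.gdal.Driver;
import org.gdal.gdal.gdal;

public class TiffToMrf {
    public static void main(String[] args) {
        gdal.AllRegister();

        Dataset src = gdal.Open("input.tif");

        // MRF creation options: LERC compression and a tile size suited to
        // HTTP range requests against cloud object stores (option names
        // assumed from the GDAL MRF driver documentation).
        Vector<String> options = new Vector<>();
        options.add("COMPRESS=LERC");
        options.add("BLOCKSIZE=512");

        Driver mrf = gdal.GetDriverByName("MRF");
        Dataset dst = mrf.CreateCopy("output.mrf", src, 0, options);

        dst.delete(); // flush and close
        src.delete();
    }
}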
Wenig, Philip; Odermatt, Juergen
2010-07-30
Today, data evaluation has become a bottleneck in chromatographic science. Analytical instruments equipped with automated samplers yield large amounts of measurement data, which need to be verified and analyzed. Since nearly every GC/MS instrument vendor offers its own data format and software tools, the consequences are problems with data exchange and a lack of comparability between analytical results. To address this situation, a number of commercial and non-profit software applications have been developed. These applications provide functionality to import and analyze several data formats, but have shortcomings in terms of the transparency of the implemented analytical algorithms and/or are restricted to a specific computer platform. This work describes a native approach to handling chromatographic data files. The approach can be extended in its functionality, for example with facilities to detect baselines; to detect, integrate and identify peaks; and to compare mass spectra, as well as the ability to internationalize the application. Additionally, filters can be applied to the chromatographic data to enhance its quality, for example to remove background and noise. Extended operations such as undo and redo are supported. OpenChrom is a software application to edit and analyze mass spectrometric chromatographic data. It is extensible in many different ways, depending on the demands of the users or the analytical procedures and algorithms, and it offers a customizable graphical user interface. The software is independent of the operating system because it is built on the Eclipse Rich Client Platform (RCP) and written in Java. OpenChrom is released under the Eclipse Public License 1.0 (EPL). There are no license constraints regarding extensions: they can be published under open source as well as proprietary licenses. OpenChrom is available free of charge at http://www.openchrom.net.
Choosing Open Source ERP Systems: What Reasons Are There For Doing So?
NASA Astrophysics Data System (ADS)
Johansson, Björn; Sudzina, Frantisek
Enterprise resource planning (ERP) systems attract considerable attention, and open source software does as well. The question is whether, and if so when, open source ERP systems will take off. The paper describes the status of open source ERP systems. Based on a literature review of ERP system selection criteria in Web of Science articles, it discusses the reported reasons for choosing open source or proprietary ERP systems. Last but not least, the article presents some conclusions that could act as input for future research. The paper aims to build a foundation for the basic question: what are the reasons for an organization to adopt open source ERP systems?
ProteinTracker: an application for managing protein production and purification
2012-01-01
Background Laboratories that produce protein reagents for research and development face the challenge of deciding whether to track batch-related data using simple file-based storage mechanisms (e.g. spreadsheets and notebooks), or commit the time and effort to install, configure and maintain a more complex laboratory information management system (LIMS). Managing reagent data stored in files is challenging because files are often copied, moved, and reformatted. Furthermore, there is no simple way to query the data if/when questions arise. Commercial LIMS often include additional modules that may be paid for but not actually used, and often require software expertise to truly customize them for a given environment. Findings This web application allows small to medium-sized protein production groups to track data related to plasmid DNA, conditioned media samples (supes), cell lines used for expression, and purified protein information, including method of purification and quality control results. In addition, a request system was added that includes a means of prioritizing requests to help manage the high demand on protein production resources at most organizations. ProteinTracker makes extensive use of existing open-source libraries and is designed to track essential data related to the production and purification of proteins. Conclusions ProteinTracker is an open-source web-based application that provides organizations with the ability to track key data involved in the production and purification of proteins and may be modified to meet the specific needs of an organization. The source code and database setup script can be downloaded from http://sourceforge.net/projects/proteintracker. This site also contains installation instructions and a user guide. A demonstration version of the application can be viewed at http://www.proteintracker.org. PMID:22574679
Experimental Definition and Validation of Protein Coding Transcripts in Chlamydomonas reinhardtii
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kourosh Salehi-Ashtiani; Jason A. Papin
Algal fuel sources promise unsurpassed yields in a carbon neutral manner that minimizes resource competition between agriculture and fuel crops. Many challenges must be addressed before algal biofuels can be accepted as a component of the fossil fuel replacement strategy. One significant challenge is that the cost of algal fuel production must become competitive with existing fuel alternatives. Algal biofuel production presents the opportunity to fine-tune microbial metabolic machinery for an optimal blend of biomass constituents and desired fuel molecules. Genome-scale model-driven algal metabolic design promises to facilitate both goals by directing the utilization of metabolites in the complex, interconnected metabolic networks to optimize production of the compounds of interest. Using Chlamydomonas reinhardtii as a model, we developed a systems-level methodology bridging metabolic network reconstruction with annotation and experimental verification of enzyme-encoding open reading frames. We reconstructed a genome-scale metabolic network for this alga and devised a novel light-modeling approach that enables quantitative growth prediction for a given light source, resolving wavelength and photon flux. We experimentally verified transcripts accounted for in the network and physiologically validated model function through simulation and generation of new experimental growth data, providing high confidence in network contents and predictive applications. The network offers insight into algal metabolism and potential for genetic engineering and efficient light source design, providing a pioneering resource for studying light-driven metabolism and quantitative systems biology. Our approach of generating a predictive metabolic model integrated with cloned open reading frames provides a cost-effective platform for generating metabolic engineering resources. While the generated resources are specific to algal systems, the approach we have developed is not specific to algae and can be readily expanded to other microbial systems as well as higher plants and animals.
NASA Astrophysics Data System (ADS)
Schuetze, C.; Sauer, U.; Dietrich, P.
2015-12-01
Reliable detection and assessment of near-surface CO2 emissions from natural or anthropogenic sources require the application of various monitoring tools at different spatial scales. In particular, optical remote sensing tools for atmospheric monitoring have the potential to measure CO2 emissions integrally over larger scales (> 10,000 m2). Within the framework of the MONACO project ("Monitoring approach for geological CO2 storage sites using a hierarchical observation concept"), an integrative hierarchical monitoring concept was developed and validated at different field sites with the aim of establishing a modular observation strategy that includes investigations in the shallow subsurface, at ground surface level and in the lower atmospheric boundary layer. The main aims of the atmospheric monitoring using optical remote sensing were to observe gas dispersion into the near-surface atmosphere, to determine maximum concentration values, and to identify the main challenges associated with monitoring extended emission sources with the proposed methodological setup under typical environmental conditions. The presentation will give an overview of several case studies using the integrative approach of Open-Path Fourier Transform Infrared spectroscopy (OP FTIR) in combination with in situ measurements. As a main result, the method was validated as a possible approach for continuous monitoring of the atmospheric composition, in terms of integral determination of GHG concentrations, and for identifying target areas that need to be investigated in more detail. In particular, data interpretation should closely consider the micrometeorological conditions. Technical aspects concerning robust equipment, experimental setup and fast data processing algorithms have to be taken into account for the enhanced automation of atmospheric monitoring.
Behind Linus's Law: Investigating Peer Review Processes in Open Source
ERIC Educational Resources Information Center
Wang, Jing
2013-01-01
Open source software has revolutionized the way people develop software, organize collaborative work, and innovate. The numerous open source software systems that have been created and adopted over the past decade are influential and vital in all aspects of work and daily life. The understanding of open source software development can enhance its…
ERIC Educational Resources Information Center
Kisworo, Marsudi Wahyu
2016-01-01
Information and Communication Technology (ICT)-supported learning using a free and open source platform draws little attention, as open source initiatives have focused on secondary or tertiary education. This study investigates the possibilities of ICT-supported learning using an open source platform for primary education. The data of this study is taken…
An Analysis of Open Source Security Software Products Downloads
ERIC Educational Resources Information Center
Barta, Brian J.
2014-01-01
Despite the continued demand for open source security software, a gap in the identification of success factors related to the success of open source security software persists. There are no studies that accurately assess the extent of this persistent gap, particularly with respect to the strength of the relationships of open source software…
Research on OpenStack of open source cloud computing in colleges and universities’ computer room
NASA Astrophysics Data System (ADS)
Wang, Lei; Zhang, Dandan
2017-06-01
In recent years, cloud computing technology has developed rapidly, especially open source cloud computing. Open source cloud computing has attracted a large number of user groups through the advantages of open source code and low cost, and has now reached large-scale promotion and application. In this paper, we first briefly introduce the main functions and architecture of the open source OpenStack cloud computing toolkit, and then discuss in depth the core problems of computer labs in colleges and universities. Building on this analysis, we describe the specific application and deployment of OpenStack in a university computer room. The experimental results show that OpenStack can efficiently and conveniently deploy a cloud for a university computer room, with stable performance and good functional value.
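As a small illustration of how such a deployment might be driven programmatically, the sketch below boots one lab workstation instance against an OpenStack cloud using the third-party openstack4j SDK (not part of OpenStack itself, and not named in the abstract); the endpoint, credentials, domain, and flavor/image IDs are all placeholders.

import org.openstack4j.api.Builders;
import org.openstack4j.api.OSClient.OSClientV3;
import org.openstack4j.model.common.Identifier;
import org.openstack4j.model.compute.Server;
import org.openstack4j.model.compute.ServerCreate;
import org.openstack4j.openstack.OSFactory;

public class LabVmProvisioner {
    public static void main(String[] args) {
        // Authenticate against Keystone v3; all values are placeholders.
        OSClientV3 os = OSFactory.builderV3()
                .endpoint("http://controller:5000/v3")
                .credentials("admin", "secret", Identifier.byName("Default"))
                .scopeToProject(Identifier.byName("lab"), Identifier.byName("Default"))
                .authenticate();

        // Boot one workstation image per student seat.
        ServerCreate vm = Builders.server()
                .name("lab-seat-01")
                .flavor("FLAVOR-ID")   // placeholder flavor ID
                .image("IMAGE-ID")     // placeholder image ID
                .build();
        Server server = os.compute().servers().boot(vm);
        System.out.println("Started: " + server.getId());
    }
}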
KAMO: towards automated data processing for microcrystals.
Yamashita, Keitaro; Hirata, Kunio; Yamamoto, Masaki
2018-05-01
In protein microcrystallography, radiation damage often hampers complete and high-resolution data collection from a single crystal, even under cryogenic conditions. One promising solution is to collect small wedges of data (5-10°) separately from multiple crystals. The data from these crystals can then be merged into a complete reflection-intensity set. However, data processing of multiple small-wedge data sets is challenging. Here, a new open-source data-processing pipeline, KAMO, which utilizes existing programs, including the XDS and CCP4 packages, has been developed to automate whole data-processing tasks in the case of multiple small-wedge data sets. Firstly, KAMO processes individual data sets and collates those indexed with equivalent unit-cell parameters. The space group is then chosen and any indexing ambiguity is resolved. Finally, clustering is performed, followed by merging with outlier rejections, and a report is subsequently created. Using synthetic and several real-world data sets collected from hundreds of crystals, it was demonstrated that merged structure-factor amplitudes can be obtained in a largely automated manner using KAMO, which greatly facilitated the structure analyses of challenging targets that only produced microcrystals.
2011-01-01
Background Innovation through an open source model has proven to be successful for software development. This success has led many to speculate if open source can be applied to other industries with similar success. We attempt to provide an understanding of open source software development characteristics for researchers, business leaders and government officials who may be interested in utilizing open source innovation in other contexts, with an emphasis on drug discovery. Methods A systematic review was performed by searching relevant, multidisciplinary databases to extract empirical research regarding the common characteristics and barriers of initiating and maintaining an open source software development project. Results Common characteristics of open source software development pertinent to open source drug discovery were extracted. The characteristics were then grouped into the areas of participant attraction, management of volunteers, control mechanisms, legal framework and physical constraints. Lastly, their applicability to drug discovery was examined. Conclusions We believe that the open source model is viable for drug discovery, although it is unlikely that it will exactly follow the form used in software development. Hybrids will likely develop that suit the unique characteristics of drug discovery. We suggest potential motivations for organizations to join an open source drug discovery project. We also examine specific differences between software and medicines, specifically how the need for laboratories and physical goods will impact the model as well as the effect of patents. PMID:21955914
The 2017 Bioinformatics Open Source Conference (BOSC)
Harris, Nomi L.; Cock, Peter J.A.; Chapman, Brad; Fields, Christopher J.; Hokamp, Karsten; Lapp, Hilmar; Munoz-Torres, Monica; Tzovaras, Bastian Greshake; Wiencko, Heather
2017-01-01
The Bioinformatics Open Source Conference (BOSC) is a meeting organized by the Open Bioinformatics Foundation (OBF), a non-profit group dedicated to promoting the practice and philosophy of Open Source software development and Open Science within the biological research community. The 18th annual BOSC ( http://www.open-bio.org/wiki/BOSC_2017) took place in Prague, Czech Republic in July 2017. The conference brought together nearly 250 bioinformatics researchers, developers and users of open source software to interact and share ideas about standards, bioinformatics software development, open and reproducible science, and this year’s theme, open data. As in previous years, the conference was preceded by a two-day collaborative coding event open to the bioinformatics community, called the OBF Codefest. PMID:29118973
Crowdsourcing for Challenging Technical Problems - It Works!
NASA Technical Reports Server (NTRS)
Davis, Jeffrey R.
2011-01-01
The NASA Johnson Space Center Space Life Sciences Directorate (SLSD) and Wyle Integrated Science and Engineering (Wyle) will conduct a one-day business cluster at the 62nd IAC so that IAC attendees will understand the benefits of open innovation (crowdsourcing), review successful results of conducting technical challenges in various open innovation projects, and learn how an organization can effectively deploy these new problem solving tools to innovate more efficiently and effectively. Results from both the SLSD open innovation pilot program and the open innovation workshop conducted by the NASA Human Health and Performance Center will be discussed. NHHPC members will be recruited to participate in the business cluster (see membership at http://nhhpc.nasa.gov), as will IAF members. Crowdsourcing may be defined as the act of outsourcing tasks that are traditionally performed by an employee or contractor to an undefined, generally large group of people or community (a crowd) in the form of an open call. The open call may be issued by the organization wishing to find a solution to a particular problem or complete a task, or by an open innovation service provider on behalf of that organization. In 2008, the SLSD, with the support of Wyle, established and implemented pilot projects in open innovation (crowdsourcing) to determine if these new internet-based platforms could indeed find solutions to difficult technical challenges. These unsolved technical problems were converted to problem statements, called challenges by some open innovation service providers, and were then posted externally to seek solutions. In addition, an open call was issued internally to NASA employees Agency-wide (11 Field Centers and NASA HQ) using an open innovation service provider crowdsourcing platform, allowing NASA challenges from each Center to be posted for the others to propose solutions. From 2008 to 2010, the SLSD issued 34 challenges, 14 externally and 20 internally. The 14 external problems or challenges were posted through three different vendors: InnoCentive, yet2.com and TopCoder. The 20 internal challenges were conducted using the InnoCentive crowdsourcing platform designed for use internal to an organization, customized for NASA use and promoted as NASA@Work. The results were significant. Of the seven InnoCentive external challenges, two full and five partial awards were made in complex technical areas such as predicting solar flares and long-duration food packaging.
The Efficient Utilization of Open Source Information
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baty, Samuel R.
These are a set of slides on the efficient utilization of open source information. Open source information consists of a vast set of information from a variety of sources. Not only does the quantity of open source information pose a problem, the quality of such information can hinder efforts. To show this, two case studies are mentioned: Iran and North Korea, in order to see how open source information can be utilized. The huge breadth and depth of open source information can complicate an analysis, especially because open information has no guarantee of accuracy. Open source information can provide key insights either directly or indirectly: looking at supporting factors (flow of scientists, products and waste from mines, government budgets, etc.); direct factors (statements, tests, deployments). Fundamentally, it is the independent verification of information that allows for a more complete picture to be formed. Overlapping sources allow for more precise bounds on times, weights, temperatures, yields or other issues of interest in order to determine capability. Ultimately, a "good" answer almost never comes from an individual, but rather requires the utilization of a wide range of skill sets held by a team of people.
Radioactive source security: the cultural challenges.
Englefield, Chris
2015-04-01
Radioactive source security is an essential part of radiation protection. Sources can be abandoned, lost or stolen. If they are stolen, they could be used to cause deliberate harm, and the risks are varied and significant. There is a need for a global security protection system and enhanced capability to achieve this. The establishment of radioactive source security requires 'cultural exchanges'. These exchanges include collaboration between: radiation protection specialists and security specialists; the nuclear industry and users of radioactive sources; training providers and regulators/users. This collaboration will facilitate knowledge and experience exchange for the various stakeholder groups, beyond what is already provided. This will promote best practice in both physical and information security and heighten security awareness generally. Only if all groups involved are prepared to open their minds to listen to, and learn from, each other will a suitable global level of control be achieved.
Survey of MapReduce frame operation in bioinformatics.
Zou, Quan; Li, Xu-Bin; Jiang, Wen-Rui; Lin, Zi-Yu; Li, Gui-Lin; Chen, Ke
2014-07-01
Bioinformatics is challenged by the fact that traditional analysis tools have difficulty in processing large-scale data from high-throughput sequencing. The open source Apache Hadoop project, which adopts the MapReduce framework and a distributed file system, has recently given bioinformatics researchers an opportunity to achieve scalable, efficient and reliable computing performance on Linux clusters and on cloud computing services. In this article, we present MapReduce framework-based applications that can be employed in next-generation sequencing and other biological domains. In addition, we discuss the challenges faced by this field and future work on parallel computing in bioinformatics.
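A minimal sketch of the MapReduce pattern the survey describes is shown below: counting k-mer occurrences across sequencing reads with the standard Hadoop Java API. The input handling is deliberately naive (it keeps any line that looks like a nucleotide sequence); a production pipeline would use a proper FASTQ record reader, and class and path names are illustrative.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class KmerCount {
    public static class KmerMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
        private static final int K = 8;
        private static final LongWritable ONE = new LongWritable(1);

        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws IOException, InterruptedException {
            String line = value.toString().trim();
            if (!line.matches("[ACGTN]+")) return; // keep sequence lines only
            for (int i = 0; i + K <= line.length(); i++) {
                ctx.write(new Text(line.substring(i, i + K)), ONE);
            }
        }
    }

    public static class SumReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
        @Override
        protected void reduce(Text kmer, Iterable<LongWritable> counts, Context ctx)
                throws IOException, InterruptedException {
            long sum = 0;
            for (LongWritable c : counts) sum += c.get();
            ctx.write(kmer, new LongWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "kmer-count");
        job.setJarByClass(KmerCount.class);
        job.setMapperClass(KmerMapper.class);
        job.setCombinerClass(SumReducer.class); // local aggregation before the shuffle
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}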
MFIX simulation of NETL/PSRI challenge problem of circulating fluidized bed
Li, Tingwen; Dietiker, Jean-François; Shahnam, Mehrdad
2012-12-01
In this paper, numerical simulations of the NETL/PSRI challenge problem of a circulating fluidized bed (CFB) using the open-source code Multiphase Flow with Interphase eXchange (MFIX) are reported. Two rounds of simulation results are reported, including the first-round blind test and the second-round modeling refinement. Three-dimensional high fidelity simulations are conducted to model a 12-inch diameter pilot-scale CFB riser. Detailed comparisons between numerical results and experimental data are made with respect to the axial pressure gradient profile and radial profiles of solids velocity and solids mass flux along different radial directions at various elevations, for operating conditions covering different fluidization regimes. Overall, the numerical results show that CFD can predict the complex gas–solids flow behavior in the CFB riser reasonably well. In addition, lessons learnt from modeling this challenge problem are presented.
Advancing global marine biogeography research with open-source GIS software and cloud-computing
Fujioka, Ei; Vanden Berghe, Edward; Donnelly, Ben; Castillo, Julio; Cleary, Jesse; Holmes, Chris; McKnight, Sean; Halpin, Patrick
2012-01-01
Across many scientific domains, the ability to aggregate disparate datasets enables more meaningful global analyses. Within marine biology, the Census of Marine Life served as the catalyst for such a global data aggregation effort. Under the Census framework, the Ocean Biogeographic Information System (OBIS) was established to coordinate an unprecedented aggregation of global marine biogeography data. The OBIS data system now contains 31.3 million observations, freely accessible through a geospatial portal. The challenges of storing, querying, disseminating, and mapping a global data collection of this complexity and magnitude are significant. In the face of declining performance and expanding feature requests, a redevelopment of the OBIS data system was undertaken. Following an Open Source philosophy, the OBIS technology stack was rebuilt using PostgreSQL, PostGIS, GeoServer and OpenLayers. This approach has markedly improved the performance and online user experience while maintaining a standards-compliant and interoperable framework. Due to the distributed nature of the project and increasing needs for storage, scalability and deployment flexibility, the entire hardware and software stack was built on a Cloud Computing environment. The flexibility of the platform, combined with the power of the application stack, enabled rapid re-development of the OBIS infrastructure, and ensured complete standards-compliance.
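The kind of spatial query such a PostgreSQL/PostGIS stack serves can be expressed in a few lines of JDBC; the sketch below ranks species by occurrence count inside a bounding box. The occurrences table, its columns, and the connection details are hypothetical stand-ins, since the abstract does not describe the OBIS schema.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ObisStyleQuery {
    public static void main(String[] args) throws Exception {
        // Hypothetical PostGIS table of occurrence records:
        //   occurrences(species text, geom geometry(Point, 4326))
        String sql = "SELECT species, COUNT(*) AS n "
                + "FROM occurrences "
                + "WHERE ST_Intersects(geom, ST_MakeEnvelope(?, ?, ?, ?, 4326)) "
                + "GROUP BY species ORDER BY n DESC LIMIT 10";

        try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/biogeo", "user", "password");
             PreparedStatement stmt = conn.prepareStatement(sql)) {
            // Bounding box in lon/lat (xmin, ymin, xmax, ymax); illustrative values.
            stmt.setDouble(1, 117.0);
            stmt.setDouble(2, -10.0);
            stmt.setDouble(3, 160.0);
            stmt.setDouble(4, 10.0);
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("species") + "\t" + rs.getLong("n"));
                }
            }
        }
    }
}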
The 2015 Bioinformatics Open Source Conference (BOSC 2015).
Harris, Nomi L; Cock, Peter J A; Lapp, Hilmar; Chapman, Brad; Davey, Rob; Fields, Christopher; Hokamp, Karsten; Munoz-Torres, Monica
2016-02-01
The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included "Data Science;" "Standards and Interoperability;" "Open Science and Reproducibility;" "Translational Bioinformatics;" "Visualization;" and "Bioinformatics Open Source Project Updates". In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled "Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community," that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule.
Nurturing reliable and robust open-source scientific software
NASA Astrophysics Data System (ADS)
Uieda, L.; Wessel, P.
2017-12-01
Scientific results are increasingly the product of software. The reproducibility and validity of published results cannot be ensured without access to the source code of the software used to produce them. Therefore, the code itself is a fundamental part of the methodology and must be published along with the results. With such a reliance on software, it is troubling that most scientists do not receive formal training in software development. Tools such as version control, continuous integration, and automated testing are routinely used in industry to ensure the correctness and robustness of software. However, many scientists do not even know of their existence (although efforts like Software Carpentry are having an impact on this issue; software-carpentry.org). Publishing the source code is only the first step in creating an open-source project. For a project to grow it must provide documentation, participation guidelines, and a welcoming environment for new contributors. Expanding the project community is often more challenging than the technical aspects of software development. Maintainers must invest time to enforce the rules of the project and to onboard new members, which can be difficult to justify in the context of the "publish or perish" mentality. This problem will continue as long as software contributions are not recognized as valid scholarship by hiring and tenure committees. Furthermore, there are still unsolved problems in providing attribution for software contributions. Many journals and metrics of academic productivity do not recognize citations to sources other than traditional publications. Thus, some authors choose to publish an article about the software and use it as a citation marker. One issue with this approach is that updating the reference to include new contributors involves writing and publishing a new article. A better approach would be to cite a permanent archive of individual versions of the source code in services such as Zenodo (zenodo.org). However, citations to these sources are not always recognized when computing citation metrics. In summary, the widespread development of reliable and robust open-source software relies on the creation of formal training programs in software development best practices and the recognition of software as a valid form of scholarship.
Open Source, Openness, and Higher Education
ERIC Educational Resources Information Center
Wiley, David
2006-01-01
In this article David Wiley provides an overview of how the general expansion of open source software has affected the world of education in particular. In doing so, Wiley not only addresses the development of open source software applications for teachers and administrators, he also discusses how the fundamental philosophy of the open source…
The Emergence of Open-Source Software in North America
ERIC Educational Resources Information Center
Pan, Guohua; Bonk, Curtis J.
2007-01-01
Unlike conventional models of software development, the open source model is based on the collaborative efforts of users who are also co-developers of the software. Interest in open source software has grown exponentially in recent years. A "Google" search for the phrase open source in early 2005 returned 28.8 million webpage hits, while…
ACToR Chemical Structure processing using Open Source ...
ACToR (Aggregated Computational Toxicology Resource) is a centralized database repository developed by the National Center for Computational Toxicology (NCCT) at the U.S. Environmental Protection Agency (EPA). Free and open source tools were used to compile toxicity data from over 1,950 public sources. ACToR contains chemical structure information and toxicological data for over 558,000 unique chemicals. The database primarily includes data from NCCT research programs, in vivo toxicity data from ToxRef, human exposure data from ExpoCast, high-throughput screening data from ToxCast and high quality chemical structure information from the EPA DSSTox program. The DSSTox database is a chemical structure inventory for the NCCT programs and currently has about 16,000 unique structures. Included are also data from PubChem, ChemSpider, USDA, FDA, NIH and several other public data sources. ACToR has been a resource to various international and national research groups. Most of our recent efforts on ACToR are focused on improving the structural identifiers and physicochemical properties of the chemicals in the database. Organizing this huge collection of data and improving the chemical structure quality of the database has posed some major challenges. Workflows have been developed to process structures, calculate chemical properties and identify relationships between CAS numbers. The structure processing workflow integrates web services (PubChem and NIH NCI Cactus) to d
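As an example of the web-service integration mentioned above, the sketch below resolves a CAS number to a SMILES string via the NIH NCI Cactus Chemical Identifier Resolver. The URL pattern follows the public Cactus documentation, and the example identifier is illustrative only; this is a sketch, not the ACToR workflow itself.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class CactusResolver {
    public static void main(String[] args) throws Exception {
        // Resolve an identifier (here a CAS number) to SMILES via the
        // NCI Cactus Chemical Identifier Resolver web service.
        String cas = "50-00-0"; // formaldehyde, for illustration
        String url = "https://cactus.nci.nih.gov/chemical/structure/"
                + URLEncoder.encode(cas, StandardCharsets.UTF_8) + "/smiles";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(cas + " -> " + response.body());
    }
}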
OpenFDA: an innovative platform providing access to a wealth of FDA's publicly available data.
Kass-Hout, Taha A; Xu, Zhiheng; Mohebbi, Matthew; Nelsen, Hans; Baker, Adam; Levine, Jonathan; Johanson, Elaine; Bright, Roselie A
2016-05-01
The objective of openFDA is to facilitate access and use of big important Food and Drug Administration public datasets by developers, researchers, and the public through harmonization of data across disparate FDA datasets provided via application programming interfaces (APIs). Using cutting-edge technologies deployed on FDA's new public cloud computing infrastructure, openFDA provides open data for easier, faster (over 300 requests per second per process), and better access to FDA datasets; open source code and documentation shared on GitHub for open community contributions of examples, apps and ideas; and infrastructure that can be adopted for other public health big data challenges. Since its launch on June 2, 2014, openFDA has developed four APIs for drug and device adverse events, recall information for all FDA-regulated products, and drug labeling. There have been more than 20 million API calls (more than half from outside the United States), 6000 registered users, 20,000 connected Internet Protocol addresses, and dozens of new software (mobile or web) apps developed. A case study demonstrates a use of openFDA data to understand an apparent association of a drug with an adverse event. With easier and faster access to these datasets, consumers worldwide can learn more about FDA-regulated products.
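To give a flavor of the API, the sketch below asks openFDA for the most frequently reported adverse reactions for a drug, using the documented search/count/limit query parameters of the drug/event endpoint; the drug name is illustrative, and responses are raw JSON.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OpenFdaQuery {
    public static void main(String[] args) throws Exception {
        // Count the most frequently reported adverse reactions for a drug.
        String url = "https://api.fda.gov/drug/event.json"
                + "?search=patient.drug.medicinalproduct:%22ASPIRIN%22"
                + "&count=patient.reaction.reactionmeddrapt.exact"
                + "&limit=10";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.body()); // JSON: top reactions with counts
    }
}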
de Souza, Andrea; Bittker, Joshua; Lahr, David; Brudz, Steve; Chatwin, Simon; Oprea, Tudor I.; Waller, Anna; Yang, Jeremy; Southall, Noel; Guha, Rajarshi; Schurer, Stephan; Vempati, Uma; Southern, Mark R.; Dawson, Eric S.; Clemons, Paul A.; Chung, Thomas D.Y.
2015-01-01
Recent industry-academic partnerships involve collaboration across disciplines, locations, and organizations using publicly funded “open-access” and proprietary commercial data sources. These require effective integration of chemical and biological information from diverse data sources, presenting key informatics, personnel, and organizational challenges. BARD (BioAssay Research Database) was conceived to address these challenges and to serve as a community-wide resource and intuitive web portal for public-sector chemical biology data. Its initial focus is to enable scientists to more effectively use the NIH Roadmap Molecular Libraries Program (MLP) data generated from 3-year pilot and 6-year production phases of the Molecular Libraries Probe Production Centers Network (MLPCN), currently in its final year. BARD evolves the current data standards through structured assay and result annotations that leverage the BioAssay Ontology (BAO) and other industry-standard ontologies, and a core hierarchy of assay definition terms and data standards defined specifically for small-molecule assay data. We have initially focused on migrating the highest-value MLP data into BARD and bringing it up to this new standard. We review the technical and organizational challenges overcome by the inter-disciplinary BARD team, veterans of public and private sector data-integration projects, collaborating to describe (functional specifications), design (technical specifications), and implement this next-generation software solution. PMID:24441647
Open Genetic Code: on open source in the life sciences.
Deibel, Eric
2014-01-01
The introduction of open source in the life sciences is increasingly being suggested as an alternative to patenting. This is an alternative, however, that takes its shape at the intersection of the life sciences and informatics. Numerous examples can be identified wherein open source in the life sciences refers to access, sharing and collaboration as informatic practices. This includes open source as an experimental model and as a more sophisticated approach to genetic engineering. The first section discusses the greater flexibility with regard to patenting and its relationship to the introduction of open source in the life sciences. The main argument is that the ownership of knowledge in the life sciences should be reconsidered in the context of the centrality of DNA in informatic formats. This is illustrated by discussing a range of examples of open source models. The second part focuses on open source in synthetic biology as exemplary for the re-materialization of information into food, energy, medicine and so forth. The paper ends by raising the question of whether another kind of alternative might be possible: one that looks at open source as a model for an alternative to the commodification of life, understood as an attempt to comprehensively remove the restrictions on the usage of DNA in any of its formats.
The Open Source Teaching Project (OSTP): Research Note.
ERIC Educational Resources Information Center
Hirst, Tony
The Open Source Teaching Project (OSTP) is an attempt to apply a variant of the successful open source software approach to the development of educational materials. Open source software is software licensed in such a way as to allow anyone the right to modify and use it. From such a simple premise, a whole industry has arisen, most notably in the…
Free for All: Open Source Software
ERIC Educational Resources Information Center
Schneider, Karen
2008-01-01
Open source software has become a catchword in libraryland. Yet many remain unclear about open source's benefits--or even what it is. So what is open source software (OSS)? It's software that is free in every sense of the word: free to download, free to use, and free to view or modify. Most OSS is distributed on the Web and one doesn't need to…
Rosner, David; Markowitz, Gerald; Chowkwanyun, Merlin
2018-02-01
As a result of a legal mechanism called discovery, the authors accumulated millions of internal corporate and trade association documents related to the introduction of new products and chemicals into workplaces and commerce. What did these private entities discuss among themselves and with their experts? The plethora of documents, both a blessing and a curse, opened new sources and interesting questions about corporate and regulatory histories. But they also posed an almost insurmountable challenge to historians. Thus emerged ToxicDocs, possible only with a technological innovation known as "Big Data." That refers to the sheer volume of new digital data and to the computational power to analyze them. Users will be able to identify what firms knew (or did not know) about the dangers of toxic substances in their products, and when. The database opens many areas to inquiry including environmental studies, business history, government regulation, and public policy. ToxicDocs will remain a resource free and open to all, anywhere in the world.
Temporal Variability in the Deglutition Literature
Molfenter, Sonja M.; Steele, Catriona M.
2013-01-01
A literature review was conducted on temporal measures of swallowing in healthy individuals with the purpose of determining the degree of variability present in such measures within the literature. A total of 46 studies that met inclusion criteria were reviewed. The definitions and descriptive statistics for all reported temporal parameters were compiled for meta-analysis. In total, 119 different temporal parameters were found in the literature. The three most-frequently occurring durational measures were: UES opening, laryngeal closure and hyoid movement. The three most-frequently occurring interval measures were: stage transition duration, pharyngeal transit time and duration from laryngeal closure to UES opening. Subtle variations in operational definitions across studies were noted, making the comparison of data challenging. Analysis of forest plots compiling descriptive statistical data (means and 95% confidence intervals) across studies revealed differing degrees of variability across durations and intervals. Two parameters (UES opening duration and the laryngeal-closure-to-UES-opening interval) demonstrated the least variability, reflected by small ranges for mean values and tight confidence intervals. Trends emerged for factors of bolus size and participant age for some variables. Other potential sources of variability are discussed. PMID:22366761
Reflections on the role of open source in health information system interoperability.
Sfakianakis, S; Chronaki, C E; Chiarugi, F; Conforti, F; Katehakis, D G
2007-01-01
This paper reflects on the role of open source in health information system interoperability. Open source is a driving force in computer science research and the development of information systems. It facilitates the sharing of information and ideas, enables evolutionary development and open collaborative testing of code, and broadens the adoption of interoperability standards. In health care, information systems have been developed largely ad hoc following proprietary specifications and customized design. However, the wide deployment of integrated services such as Electronic Health Records (EHRs) over regional health information networks (RHINs) relies on interoperability of the underlying information systems and medical devices. This reflection is built on the experiences of the PICNIC project that developed shared software infrastructure components in open source for RHINs and the OpenECG network that offers open source components to lower the implementation cost of interoperability standards such as SCP-ECG in electrocardiography. Open source components implementing standards and a community providing feedback from real-world use are key enablers of health care information system interoperability. Investing in open source is investing in interoperability and a vital aspect of a long-term strategy towards comprehensive health services and clinical research.
Open Standards, Open Source, and Open Innovation: Harnessing the Benefits of Openness
ERIC Educational Resources Information Center
Committee for Economic Development, 2006
2006-01-01
Digitization of information and the Internet have profoundly expanded the capacity for openness. This report details the benefits of openness in three areas--open standards, open-source software, and open innovation--and examines the major issues in the debate over whether openness should be encouraged or not. The report explains each of these…
ERIC Educational Resources Information Center
Kaatrakoski, Heli; Littlejohn, Allison; Hood, Nina
2017-01-01
Open education, including the use of open educational resources (OER) and the adoption of open education practice, has the potential to challenge educators to change their practice in fundamental ways. This paper forms part of a larger study focusing on higher education educators' learning from and through their engagement with OER. The first part…
NASA Astrophysics Data System (ADS)
Anderson, T.; Jencso, K. G.; Hoylman, Z. H.; Hu, J.
2015-12-01
Characterizing the mechanisms that lead to differences in forest ecosystem productivity across complex terrain remains a challenge. This difficulty can be partially attributed to the cost of installing networks of proprietary data loggers that monitor differences in the biophysical factors contributing to tree growth. Here, we describe the development and initial application of a network of open source data loggers. These data loggers are based on the Arduino platform, but were refined into a custom printed circuit board (PCB). This reduced the cost and complexity of the data loggers, which made them cheap to reproduce and reliable enough to withstand the harsh environmental conditions experienced in ecohydrology studies. We demonstrate the utility of these loggers for high frequency, spatially-distributed measurements of sap-flux, stem growth, relative humidity, temperature, and soil water content across 36 landscape positions in the Lubrecht Experimental Forest, MT, USA. This new data logging technology made it possible to develop a spatially distributed monitoring network within the constraints of our research budget and may provide new insights into factors affecting forest productivity across complex terrain.
NASA Astrophysics Data System (ADS)
Fadakar Alghalandis, Younes
2017-05-01
The rapidly growing field of discrete fracture network engineering (DFNE) has already attracted many talents from diverse disciplines in academia and industry around the world to challenge difficult problems related to mining, geothermal, civil, oil and gas, water and many other projects. Although there are a few commercial software packages capable of providing some useful functionality fundamental to DFNE, their cost, closed-code (black box) distribution and hence limited programmability and tractability encouraged us to respond to this rising demand with a new solution. This paper introduces an open source comprehensive software package for stochastic modeling of fracture networks in two and three dimensions in a discrete formulation. Functionality includes geometric modeling (e.g., complex polygonal fracture faces, and utilizing directional statistics), simulation, characterization (e.g., intersection, clustering and connectivity analyses) and applications (e.g., fluid flow). The package is written entirely in the MATLAB scripting language. Significant effort has been made to bring maximum flexibility to the functions, so that problems in both two and three dimensions can be solved in an easy and unified way that is suitable for beginners as well as advanced and experienced users.
Tessera: Open source software for accelerated data science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sego, Landon H.; Hafen, Ryan P.; Director, Hannah M.
2014-06-30
Extracting useful, actionable information from data can be a formidable challenge for the safeguards, nonproliferation, and arms control verification communities. Data scientists are often on the "front-lines" of making sense of complex and large datasets. They require flexible tools that make it easy to rapidly reformat large datasets, interactively explore and visualize data, develop statistical algorithms, and validate their approaches—and they need to perform these activities with minimal lines of code. Existing commercial software solutions often lack extensibility and the flexibility required to address the nuances of the demanding and dynamic environments where data scientists work. To address this need, Pacific Northwest National Laboratory developed Tessera, an open source software suite designed to enable data scientists to interactively perform their craft at the terabyte scale. Tessera automatically manages the complicated tasks of distributed storage and computation, empowering data scientists to do what they do best: tackling critical research and mission objectives by deriving insight from data. We illustrate the use of Tessera with an example analysis of computer network data.
Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.
2010-01-01
Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475
CellProfiler Tracer: exploring and validating high-throughput, time-lapse microscopy image data.
Bray, Mark-Anthony; Carpenter, Anne E
2015-11-04
Time-lapse analysis of cellular images is an important and growing need in biology. Algorithms for cell tracking are widely available; what researchers have been missing is a single open-source software package to visualize standard tracking output (from software like CellProfiler) in a way that allows convenient assessment of track quality, especially for researchers tuning tracking parameters for high-content time-lapse experiments. This makes quality assessment and algorithm adjustment a substantial challenge, particularly when dealing with hundreds of time-lapse movies collected in a high-throughput manner. We present CellProfiler Tracer, a free and open-source tool that complements the object tracking functionality of the CellProfiler biological image analysis package. Tracer allows multi-parametric morphological data to be visualized on object tracks, providing visualizations that have already been validated within the scientific community for time-lapse experiments, and combining them with simple graph-based measures for highlighting possible tracking artifacts. CellProfiler Tracer is a useful, free tool for inspection and quality control of object tracking data, available from http://www.cellprofiler.org/tracer/.
Urban, Trinity; Ziegler, Erik; Lewis, Rob; Hafey, Chris; Sadow, Cheryl; Van den Abbeele, Annick D; Harris, Gordon J
2017-11-01
Oncology clinical trials have become increasingly dependent upon image-based surrogate endpoints for determining patient eligibility and treatment efficacy. As therapeutics have evolved and multiplied in number, the tumor metrics criteria used to characterize therapeutic response have become progressively more varied and complex. The growing intricacies of image-based response evaluation, together with rising expectations for rapid and consistent results reporting, make it difficult for site radiologists to adequately address local and multicenter imaging demands. These challenges demonstrate the need for advanced cancer imaging informatics tools that can help ensure protocol-compliant image evaluation while simultaneously promoting reviewer efficiency. LesionTracker is a quantitative imaging package optimized for oncology clinical trial workflows. The goal of the project is to create an open source zero-footprint viewer for image analysis that is designed to be extensible as well as capable of being integrated into third-party systems for advanced imaging tools and clinical trials informatics platforms. Cancer Res; 77(21); e119-22.
Integrating personalized medical test contents with XML and XSL-FO.
Toddenroth, Dennis; Dugas, Martin; Frankewitsch, Thomas
2011-03-01
In 2004 the adoption of a modular curriculum at the medical faculty in Muenster led to the introduction of centralized examinations based on multiple-choice questions (MCQs). We report on how the organizational challenges of realizing faculty-wide personalized tests were addressed by implementing a specialized software module that automatically generates test sheets from individual test registrations and MCQ contents. The key steps of the presented method for preparing personalized test sheets are (1) the compilation of relevant item contents and graphical media from a relational database with database queries, (2) the creation of Extensible Markup Language (XML) intermediates, and (3) the transformation into paginated documents. Using an open source print formatter, the software module consistently produced high-quality test sheets, while the blending of vectorized textual contents and pixel graphics resulted in efficient output file sizes. The module also randomized item sequences individually to prevent illicit collusion. The automatic generation of personalized MCQ test sheets is feasible using freely available open source software libraries, and can be efficiently deployed on a faculty-wide scale.
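The three-step pipeline described above (database query, XML intermediate, XSL-FO rendering) can be sketched in a few lines of Python. The element names, the sheet.xsl stylesheet, and the choice of Apache FOP as the open source print formatter are illustrative assumptions; the Muenster module's actual schema and formatter are not specified here.

    # Sketch of the test-sheet pipeline: build an XML intermediate per
    # student, then render it to PDF with an XSL-FO formatter (Apache FOP
    # and the element/file names here are illustrative assumptions).
    import random
    import subprocess
    import xml.etree.ElementTree as ET

    def build_test_xml(student, items, path="test.xml"):
        root = ET.Element("test", attrib={"student": student})
        # Individual randomization of item sequences, as described above.
        for pos, question in enumerate(random.sample(items, len(items)), 1):
            item = ET.SubElement(root, "item", attrib={"pos": str(pos)})
            item.text = question
        ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)
        return path

    xml_path = build_test_xml("12345", ["Question A ...", "Question B ..."])
    # Transformation into a paginated PDF via an XSL-FO stylesheet;
    # assumes Apache FOP is on the PATH and sheet.xsl exists.
    subprocess.run(["fop", "-xml", xml_path, "-xsl", "sheet.xsl", "-pdf", "sheet.pdf"],
                   check=True)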
The 2016 Bioinformatics Open Source Conference (BOSC).
Harris, Nomi L; Cock, Peter J A; Chapman, Brad; Fields, Christopher J; Hokamp, Karsten; Lapp, Hilmar; Muñoz-Torres, Monica; Wiencko, Heather
2016-01-01
Message from the ISCB: The Bioinformatics Open Source Conference (BOSC) is a yearly meeting organized by the Open Bioinformatics Foundation (OBF), a non-profit group dedicated to promoting the practice and philosophy of Open Source software development and Open Science within the biological research community. BOSC has been run since 2000 as a two-day Special Interest Group (SIG) before the annual ISMB conference. The 17th annual BOSC (http://www.open-bio.org/wiki/BOSC_2016) took place in Orlando, Florida in July 2016. As in previous years, the conference was preceded by a two-day collaborative coding event open to the bioinformatics community. The conference brought together nearly 100 bioinformatics researchers, developers and users of open source software to interact and share ideas about standards, bioinformatics software development, and open and reproducible science. PMID:27781083
ERIC Educational Resources Information Center
Villano, Matt
2006-01-01
This article presents an interview with Jim Hirsch, an associate superintendent for technology at Plano Independent School District in Plano, Texas. Hirsch serves as a liaison for the open technologies committee of the Consortium for School Networking. In this interview, he shares his opinion on the significance of open source in K-12.
ProtVista: visualization of protein sequence annotations.
Watkins, Xavier; Garcia, Leyla J; Pundir, Sangya; Martin, Maria J
2017-07-01
ProtVista is a comprehensive visualization tool for the graphical representation of protein sequence features in the UniProt Knowledgebase, experimental proteomics and variation public datasets. The complexity and relationships in this wealth of data pose a challenge in interpretation. Integrative visualization approaches such as provided by ProtVista are thus essential for researchers to understand the data and, for instance, discover patterns affecting function and disease associations. ProtVista is a JavaScript component released as an open source project under the Apache 2 License. Documentation and source code are available at http://ebi-uniprot.github.io/ProtVista/ . martin@ebi.ac.uk. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
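ProtVista itself is a JavaScript component, but the annotations it renders can be fetched programmatically. The sketch below assumes the EBI Proteins API "features" endpoint and its JSON field names from general knowledge of the service, not from the paper, so treat the URL and keys as assumptions.

    # Fetch the sequence features a viewer like ProtVista displays.
    # The EBI Proteins API endpoint and JSON keys are assumptions from
    # general knowledge of the service, not taken from the paper.
    import requests

    def fetch_features(accession):
        url = f"https://www.ebi.ac.uk/proteins/api/features/{accession}"
        resp = requests.get(url, headers={"Accept": "application/json"})
        resp.raise_for_status()
        return resp.json()

    data = fetch_features("P05067")  # amyloid-beta precursor protein
    for feat in data.get("features", [])[:5]:
        print(feat.get("type"), feat.get("begin"), feat.get("end"))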
EMISSIONS OF ORGANIC AIR TOXICS FROM OPEN ...
A detailed literature search was performed to collect and collate available data reporting emissions of toxic organic substances into the air from open burning sources. Availability of data varied according to the source and the class of air toxics of interest. Volatile organic compound (VOC) and polycyclic aromatic hydrocarbon (PAH) data were available for many of the sources. Data on semivolatile organic compounds (SVOCs) that are not PAHs were available for several sources. Carbonyl and polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofuran (PCDD/F) data were available for only a few sources. There were several sources for which no emissions data were available at all. Several observations were made including: 1) Biomass open burning sources typically emitted less VOCs than open burning sources with anthropogenic fuels on a mass emitted per mass burned basis, particularly those where polymers were concerned; 2) Biomass open burning sources typically emitted less SVOCs and PAHs than anthropogenic sources on a mass emitted per mass burned basis. Burning pools of crude oil and diesel fuel produced significant amounts of PAHs relative to other types of open burning. PAH emissions were highest when combustion of polymers was taking place; and 3) Based on very limited data, biomass open burning sources typically produced higher levels of carbonyls than anthropogenic sources on a mass emitted per mass burned basis, probably due to oxygenated structures r
The New Landscape of Ethics and Integrity in Scholarly Publishing
NASA Astrophysics Data System (ADS)
Hanson, B.
2016-12-01
Scholarly peer-reviewed publications serve five major functions: They (i) have served as the primary, useful archive of scientific progress for hundreds of years; (ii) have been one principal way that scientists, and more recently departments and institutions, are evaluated; (iii) trigger and are the source of much communication about science to the public; (iv) have been primary revenue sources for scientific societies and companies; and (v) more recently play a critical and codified role in legal and regulatory decisions and advice to governments. Recent dynamics in science as well as in society, including the growth of online communication and new revenue sources, are greatly influencing and altering the first four core functions in particular. These changes in turn are posing important new challenges to the ethics and integrity of scholarly publishing, and thus science, in ways that are not widely or fully appreciated. For example, the expansion of electronic publishing has raised a number of new challenges for publishers with respect to their responsibility for curating scientific knowledge and even preserving the basic integrity of a manuscript. Many challenges are related to new or expanded financial conflicts of interest tied to the use of metrics such as the Journal Impact Factor, the expansion of alternate business models such as open access and advertising, and the fact that publishers are increasingly involved in framing communication around papers they are publishing. Solutions pose new responsibilities for scientists, publishers, and scientific societies, especially around transparency in their operations.
ExpertEyes: open-source, high-definition eyetracking.
Parada, Francisco J; Wyatte, Dean; Yu, Chen; Akavipat, Ruj; Emerick, Brandi; Busey, Thomas
2015-03-01
ExpertEyes is a low-cost, open-source package of hardware and software that is designed to provide portable high-definition eyetracking. The project involves several technological innovations, including portability, high-definition video recording, and multiplatform software support. It was designed for challenging recording environments, and all processing is done offline to allow for optimization of parameter estimation. The pupil and corneal reflection are estimated using a novel forward eye model that simultaneously fits both the pupil and the corneal reflection with full ellipses, addressing a common situation in which the corneal reflection sits at the edge of the pupil and therefore breaks the contour of the ellipse. The accuracy and precision of the system are comparable to or better than what is available in commercial eyetracking systems, with a typical accuracy of less than 0.4° and best accuracy below 0.3°, and with a typical precision (SD method) around 0.3° and best precision below 0.2°. Part of the success of the system comes from a high-resolution eye image. The high image quality results from uncasing common digital camcorders and recording directly to SD cards, which avoids the limitations of the analog NTSC format. The software is freely downloadable, and complete hardware plans are available, along with sources for custom parts.
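The central geometric step, fitting a full ellipse to the pupil even when the corneal reflection breaks its contour, can be approximated with generic tools. The sketch below uses OpenCV's threshold/contour/ellipse routines as a crude stand-in; it is not the authors' simultaneous forward eye model, and the threshold value is a guess.

    # Naive pupil ellipse fit with OpenCV, a crude stand-in for the
    # paper's forward eye model (which fits pupil and corneal
    # reflection simultaneously). Threshold value is a guess.
    import cv2

    def fit_pupil_ellipse(gray_image):
        # The pupil is assumed to be the darkest large blob.
        _, mask = cv2.threshold(gray_image, 40, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        if not contours:
            return None
        pupil = max(contours, key=cv2.contourArea)
        if len(pupil) < 5:  # cv2.fitEllipse needs at least 5 points
            return None
        (cx, cy), (major, minor), angle = cv2.fitEllipse(pupil)
        return cx, cy, major, minor, angle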
A new software for deformation source optimization, the Bayesian Earthquake Analysis Tool (BEAT)
NASA Astrophysics Data System (ADS)
Vasyura-Bathke, H.; Dutta, R.; Jonsson, S.; Mai, P. M.
2017-12-01
Modern studies of crustal deformation and the related source estimation, including magmatic and tectonic sources, increasingly use non-linear optimization strategies to estimate geometric and/or kinematic source parameters, often considering geodetic and seismic data jointly. Bayesian inference is increasingly being used for estimating posterior distributions of deformation source model parameters, given measured/estimated/assumed data and model uncertainties. For instance, some studies consider uncertainties of a layered medium and propagate these into source parameter uncertainties, while others use informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed to efficiently explore the high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational burden of these methods is high and estimation codes are rarely made available along with the published results. Even if the codes are accessible, it is usually challenging to assemble them into a single optimization framework as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results has become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in deformation source estimations, we undertook the effort of developing BEAT, a python package that comprises all the above-mentioned features in one single programming environment. The package builds on the pyrocko seismological toolbox (www.pyrocko.org), and uses the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat), and we encourage and solicit contributions to the project. Here, we present our strategy for developing BEAT and show application examples, especially the effect of including the model prediction uncertainty of the velocity model in subsequent source optimizations: a full moment tensor, a Mogi source, and a moderate strike-slip earthquake.
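To give a flavor of the Bayesian machinery BEAT wraps, the sketch below runs a toy Mogi point-source inversion directly in pymc3, the sampler library BEAT builds on. The half-space forward formula is the textbook approximation, and the station geometry, priors, and noise level are invented for illustration; BEAT's actual problem setup is far richer.

    # Toy Bayesian inversion of a Mogi (point pressure) source with
    # pymc3, the sampler library BEAT builds on. Geometry, priors, and
    # noise are invented; this is not BEAT's actual problem definition.
    import numpy as np
    import pymc3 as pm

    r = np.linspace(500.0, 10000.0, 40)      # station distances [m]
    nu = 0.25                                # Poisson ratio
    true_d, true_dV = 3000.0, 1e6            # depth [m], volume change [m^3]
    uz_obs = ((1 - nu) / np.pi) * true_dV * true_d / (r**2 + true_d**2) ** 1.5
    uz_obs += np.random.normal(0.0, 1e-4, size=r.size)  # synthetic noise

    with pm.Model():
        depth = pm.Uniform("depth", 500.0, 10000.0)
        dV = pm.Uniform("dV", 1e4, 1e8)
        uz = ((1 - nu) / np.pi) * dV * depth / (r**2 + depth**2) ** 1.5
        pm.Normal("obs", mu=uz, sd=1e-4, observed=uz_obs)
        trace = pm.sample(1000, tune=1000, cores=1)

    print(trace["depth"].mean(), trace["dV"].mean())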
Open-Source 3D-Printable Optics Equipment
Zhang, Chenlong; Anzalone, Nicholas C.; Faria, Rodrigo P.; Pearce, Joshua M.
2013-01-01
Just as the power of the open-source design paradigm has driven down the cost of software to the point that it is accessible to most people, the rise of open-source hardware is poised to drive down the cost of doing experimental science to expand access to everyone. To assist in this aim, this paper introduces a library of open-source 3-D-printable optics components. This library operates as a flexible, low-cost public-domain tool set for developing both research and teaching optics hardware. First, the use of parametric open-source designs using an open-source computer aided design package is described to customize the optics hardware for any application. Second, details are provided on the use of open-source 3-D printers (additive layer manufacturing) to fabricate the primary mechanical components, which are then combined to construct complex optics-related devices. Third, the use of an open-source electronics prototyping platform is illustrated as a control system for optical experimental apparatuses. This study demonstrates an open-source optical library, which significantly reduces the costs associated with much optical equipment, while also enabling relatively easily adapted customizable designs. The cost reductions in general are over 97%, with some components representing only 1% of the current commercial investment for optical products of similar function. The results of this study make it clear that this method of scientific hardware development enables a much broader audience to participate in optical experimentation, both as research and teaching platforms, than previous proprietary methods. PMID:23544104
Sampling emissions from open area sources, particularly sources of open burning, is difficult due to fast dilution of emissions and safety concerns for personnel. Representative emission samples can be difficult to obtain with flaming and explosive sources since personnel safety ...
Planetary-scale surface water detection from space
NASA Astrophysics Data System (ADS)
Donchyts, G.; Baart, F.; Winsemius, H.; Gorelick, N.
2017-12-01
Accurate, efficient and high-resolution methods of surface water detection are needed for better water management. Datasets on surface water extent and dynamics are crucial for a better understanding of natural and human-made processes, and as input data for hydrological and hydraulic models. In spite of considerable progress in the harmonization of freely available satellite data, producing accurate and efficient higher-level surface water data products remains very challenging. This presentation will provide an overview of existing methods for surface water extent and change detection from multitemporal and multi-sensor satellite imagery. An algorithm to detect surface water changes from multi-temporal satellite imagery will be demonstrated, as well as its open-source implementation (http://aqua-monitor.deltares.nl). This algorithm was used to estimate global surface water changes at high spatial resolution. These changes are driven by climate change, land reclamation, reservoir construction and decommissioning, erosion and accretion, and many other processes. This presentation will demonstrate how open satellite data and open platforms such as Google Earth Engine have helped with this research.
The Visible Human Data Sets (VHD) and Insight Toolkit (ITk): Experiments in Open Source Software
Ackerman, Michael J.; Yoo, Terry S.
2003-01-01
From its inception in 1989, the Visible Human Project was designed as an experiment in open source software. In 1994 and 1995 the male and female Visible Human data sets were released by the National Library of Medicine (NLM) as open source data sets. In 2002 the NLM released the first version of the Insight Toolkit (ITk) as open source software. PMID:14728278
NASA Astrophysics Data System (ADS)
Prashad, L. C.; Christensen, P. R.; Fink, J. H.; Anwar, S.; Dickenshied, S.; Engle, E.; Noss, D.
2010-12-01
Our society currently is facing a number of major environmental challenges, most notably the threat of climate change. A multifaceted, interdisciplinary approach involving physical and social scientists, engineers and decision-makers is critical to adequately address these complex issues. To best facilitate this interdisciplinary approach, data and models at various scales - from local to global - must be quickly and easily shared between disciplines to effectively understand environmental phenomena and human-environmental interactions. When data are acquired and studied on different scales and within different disciplines, researchers and practitioners may not be able to easily learn from each other's results. For example, climate change models are often developed at a global scale, while strategies that address human vulnerability to climate change and mitigation/adaptation strategies are often assessed on a local level. Linkages between urban heat island phenomena and global climate change may be better understood with increased data flow amongst researchers and those making policy decisions. In these cases it would be useful to have a single platform to share, visualize, and analyze numerical model and satellite/airborne remote sensing data together with social, environmental, and economic data between researchers and practitioners. The Arizona State University 100 Cities Project and Mars Space Flight Facility are developing the open source application J-Earth, with the goal of providing this single platform that facilitates data sharing, visualization, and analysis between researchers and applied practitioners around environmental and other sustainability challenges. This application is being designed for user communities including physical and social scientists, NASA researchers, non-governmental organizations, and decision-makers to share and analyze data at multiple scales. We are initially focusing on urban heat island and urban ecology studies, with data and users from local to global levels. J-Earth is a Geographic Information System (GIS) that provides analytical tools for visualizing high-resolution and hyperspectral remote sensing imagery along with numeric and vector data. J-Earth is part of the JMARS (Java Mission-planning and Analysis for Remote Sensing) suite of tools, which were first created to target NASA instruments on Mars and Lunar missions. Data can currently be incorporated in J-Earth at a scale of over 32,000 pixels per degree. Among other GIS functions, users can analyze trends along a transect line, or across vector regions, over multiple stacked numerical data layers and export their results. Open source tools like J-Earth are not only generally free or low-cost to users but provide the opportunity for users to contribute direction, functionality, and data standards to these projects. The flexible nature of open source projects often facilitates the incorporation of unique and emerging data sources, such as mobile phone data, sensor networks, crowdsourced inputs, and social networking. The J-Earth team plans to incorporate data sources such as these with the feedback and participation of the user community.
Manabe, Sho; Morimoto, Chie; Hamano, Yuya; Fujimoto, Shuntaro; Tamaki, Keiji
2017-01-01
In criminal investigations, forensic scientists need to evaluate DNA mixtures. The estimation of the number of contributors and evaluation of the contribution of a person of interest (POI) from these samples are challenging. In this study, we developed a new open-source software “Kongoh” for interpreting DNA mixtures based on a quantitative continuous model. The model uses quantitative information of peak heights in the DNA profile and considers the effect of artifacts and allelic drop-out. By using this software, the likelihoods of 1–4 persons’ contributions are calculated, and the optimal number of contributors is automatically determined; this differs from other open-source software. Therefore, we can eliminate the need to manually determine the number of contributors before the analysis. Kongoh also considers allele- or locus-specific effects of biological parameters based on the experimental data. We then validated Kongoh by calculating the likelihood ratio (LR) of a POI’s contribution in true contributors and non-contributors by using 2–4 person mixtures analyzed through a 15 short tandem repeat typing system. Most LR values obtained from Kongoh during true-contributor testing strongly supported the POI’s contribution even for small amounts or degraded DNA samples. Kongoh correctly rejected a false hypothesis in the non-contributor testing, generated reproducible LR values, and demonstrated higher accuracy of the estimated number of contributors than other software based on the quantitative continuous model. Therefore, Kongoh is useful in accurately interpreting DNA evidence like mixtures and small amounts or degraded DNA samples. PMID:29149210
Gimli: open source and high-performance biomedical name recognition
2013-01-01
Background Automatic recognition of biomedical names is an essential task in biomedical information extraction, presenting several complex and unsolved challenges. In recent years, various solutions have been implemented to tackle this problem. However, limitations regarding system characteristics, customization and usability still hinder their wider application outside text mining research. Results We present Gimli, an open-source, state-of-the-art tool for automatic recognition of biomedical names. Gimli includes an extended set of implemented and user-selectable features, such as orthographic, morphological, linguistic-based, conjunctions and dictionary-based. A simple and fast method to combine different trained models is also provided. Gimli achieves an F-measure of 87.17% on GENETAG and 72.23% on the JNLPBA corpus, significantly outperforming existing open-source solutions. Conclusions Gimli is an off-the-shelf, ready-to-use tool for named-entity recognition, providing trained and optimized models for recognition of biomedical entities from scientific text. It can be used as a command line tool, offering full functionality, including training of new models and customization of the feature set and model parameters through a configuration file. Advanced users can integrate Gimli in their text mining workflows through the provided library, and extend or adapt its functionalities. Based on the underlying system characteristics and functionality, both for final users and developers, and on the reported performance results, we believe that Gimli is a state-of-the-art solution for biomedical NER, contributing to faster and better research in the field. Gimli is freely available at http://bioinformatics.ua.pt/gimli. PMID:23413997
Imaging spectroscopy of solar radio burst fine structures.
Kontar, E P; Yu, S; Kuznetsov, A A; Emslie, A G; Alcock, B; Jeffrey, N L S; Melnik, V N; Bian, N H; Subramanian, P
2017-11-15
Solar radio observations provide a unique diagnostic of the outer solar atmosphere. However, the inhomogeneous turbulent corona strongly affects the propagation of the emitted radio waves, so decoupling the intrinsic properties of the emitting source from the effects of radio wave propagation has long been a major challenge in solar physics. Here we report quantitative spatial and frequency characterization of solar radio burst fine structures observed with the Low Frequency Array, an instrument with high time resolution that also permits imaging at scales much shorter than those corresponding to radio wave propagation in the corona. The observations demonstrate that radio wave propagation effects, and not the properties of the intrinsic emission source, dominate the observed spatial characteristics of radio burst images. These results permit more accurate estimates of source brightness temperatures, and open opportunities for quantitative study of the mechanisms that create the turbulent coronal medium through which the emitted radiation propagates.
An innovative use of instant messaging technology to support a library's single-service point.
Horne, Andrea S; Ragon, Bart; Wilson, Daniel T
2012-01-01
A library service model that provides reference and instructional services by summoning reference librarians from a single service point is described. The system utilizes Libraryh3lp, an open-source, multioperator instant messaging system. The selection and refinement of this solution and the technical challenges encountered are explored, as are the design of public services around this technology, usage of the system, and best practices. This service model, while a major cultural and procedural change at first, is now a routine aspect of customer service for this library.
Tandem Mass Spectrum Sequencing: An Alternative to Database Search Engines in Shotgun Proteomics.
Muth, Thilo; Rapp, Erdmann; Berven, Frode S; Barsnes, Harald; Vaudel, Marc
2016-01-01
Protein identification via database searches has become the gold standard in mass spectrometry based shotgun proteomics. However, as the quality of tandem mass spectra improves, direct mass spectrum sequencing gains interest as a database-independent alternative. In this chapter, the general principle of this so-called de novo sequencing is introduced along with pitfalls and challenges of the technique. The main tools available are presented with a focus on user friendly open source software which can be directly applied in everyday proteomic workflows.
The very low angle detector for high-energy inelastic neutron scattering on the VESUVIO spectrometer
NASA Astrophysics Data System (ADS)
Perelli Cippo, E.; Gorini, G.; Tardocchi, M.; Pietropaolo, A.; Andreani, C.; Senesi, R.; Rhodes, N. J.; Schooneveld, E. M.
2008-05-01
The Very Low Angle Detector (VLAD) bank has been installed on the VESUVIO spectrometer at the ISIS spallation neutron source. The new device allows for high-energy inelastic neutron scattering measurements, at energies above 1 eV, maintaining the wave vector transfer lower than 10 Å⁻¹. This opens a still unexplored region of the kinematical (q, ω) space, enabling new and challenging experimental investigations in condensed matter. This paper describes the main instrumental features of the VLAD device, including instrument design, detector response, and calibration procedure.
Big Data Analytics in Medicine and Healthcare.
Ristevski, Blagoj; Chen, Ming
2018-05-10
This paper surveys big data, highlighting big data analytics in medicine and healthcare. The big data characteristics of value, volume, velocity, variety, veracity and variability are described. Big data analytics in medicine and healthcare covers the integration and analysis of large amounts of complex heterogeneous data, such as various omics data (genomics, epigenomics, transcriptomics, proteomics, metabolomics, interactomics, pharmacogenomics, diseasomics), biomedical data and electronic health records data. We underline the challenging issues of big data privacy and security. Regarding the big data characteristics, some directions for using suitable and promising open-source distributed data processing software platforms are given.
OpenMx: An Open Source Extended Structural Equation Modeling Framework
ERIC Educational Resources Information Center
Boker, Steven; Neale, Michael; Maes, Hermine; Wilde, Michael; Spiegel, Michael; Brick, Timothy; Spies, Jeffrey; Estabrook, Ryne; Kenny, Sarah; Bates, Timothy; Mehta, Paras; Fox, John
2011-01-01
OpenMx is free, full-featured, open source, structural equation modeling (SEM) software. OpenMx runs within the "R" statistical programming environment on Windows, Mac OS-X, and Linux computers. The rationale for developing OpenMx is discussed along with the philosophy behind the user interface. The OpenMx data structures are…
NASA Astrophysics Data System (ADS)
Stewart, R.; Piburn, J.; Sorokine, A.; Myers, A.; Moehl, J.; White, D.
2015-07-01
The application of spatiotemporal (ST) analytics to integrated data from major sources such as the World Bank, United Nations, and dozens of others holds tremendous potential for shedding new light on the evolution of cultural, health, economic, and geopolitical landscapes on a global level. Realizing this potential first requires an ST data model that addresses challenges in properly merging data from multiple authors, with evolving ontological perspectives, semantical differences, and changing attributes, as well as content that is textual, numeric, categorical, and hierarchical. Equally challenging is the development of analytical and visualization approaches that provide a serious exploration of this integrated data while remaining accessible to practitioners with varied backgrounds. The WSTAMP project at Oak Ridge National Laboratory has yielded two major results in addressing these challenges: 1) development of the WSTAMP database, a significant advance in ST data modeling that integrates 10,000+ attributes covering over 200 nation states spanning over 50 years from over 30 major sources and 2) a novel online ST exploratory and analysis tool providing an array of modern statistical and visualization techniques for analyzing these data temporally, spatially, and spatiotemporally under a standard analytic workflow. We discuss the status of this work and report on major findings.
A Framework for an Open Source Geospatial Certification Model
NASA Astrophysics Data System (ADS)
Khan, T. U. R.; Davis, P.; Behr, F.-J.
2016-06-01
The geospatial industry is forecast to grow enormously in the forthcoming years, with an extended need for a well-educated workforce. Hence ongoing education and training play an important role in professional life. In parallel, Open Source solutions, open data proliferation, and the use of open standards have gained increasing significance in the geospatial and IT arenas as well as in political discussion and legislation. Based on the Memorandum of Understanding between the International Cartographic Association, the OSGeo Foundation, and ISPRS, this development led to the implementation of the ICA-OSGeo-Lab initiative with its mission "Making geospatial education and opportunities accessible to all". Discussions in this initiative and the growth and maturity of geospatial Open Source software initiated the idea to develop a framework for a worldwide applicable Open Source certification approach. Generic and geospatial certification approaches are already offered by numerous organisations, e.g., the GIS Certification Institute, GeoAcademy, and ASPRS, and by software vendors such as Esri, Oracle, and RedHat. They focus on different fields of expertise and differ in their levels and modes of examination, which are offered for a wide range of fees. The development of the certification framework presented here is based on the analysis of diverse bodies of knowledge, e.g., the NCGIA Core Curriculum, the URISA Body Of Knowledge, the USGIF Essential Body Of Knowledge, the "Geographic Information: Need to Know" (currently under development), and the Geospatial Technology Competency Model (GTCM). The latter provides a US-oriented list of the knowledge, skills, and abilities required of workers in the geospatial technology industry, and essentially influenced the framework of certification. In addition to the theoretical analysis of existing resources, the geospatial community was involved in two ways. An online survey about the relevance of Open Source was performed and evaluated with 105 respondents worldwide, and 15 interviews (face-to-face or by telephone) with experts in different countries provided additional insights into Open Source usage and certification. The findings led to the development of a certification framework of three main categories with eleven sub-categories in total: "Certified Open Source Geospatial Data Associate / Professional", "Certified Open Source Geospatial Analyst Remote Sensing & GIS", "Certified Open Source Geospatial Cartographer", "Certified Open Source Geospatial Expert", "Certified Open Source Geospatial Associate Developer / Professional Developer", and "Certified Open Source Geospatial Architect". Each certification is described by pre-conditions, scope and objectives, course content, recommended software packages, target group, expected benefits, and the methods of examination. Examinations can be complemented by proof of professional career paths and achievements, which require a peer evaluation. After a couple of years a recertification is required. The concept seeks accreditation by the OSGeo Foundation (and other bodies) and international support by a group of geospatial scientific institutions to achieve wide international acceptance for this Open Source geospatial certification model. A business case for Open Source certification and a corresponding SWOT model are examined to support the goals of the Geo-For-All initiative of the ICA-OSGeo pact.
Turning Participatory Microbiome Research into Usable Data: Lessons from the American Gut Project
Debelius, Justine W.; Vázquez-Baeza, Yoshiki; McDonald, Daniel; Xu, Zhenjiang; Wolfe, Elaine; Knight, Rob
2016-01-01
The role of the human microbiome is the subject of continued investigation resulting in increased understanding. However, current microbiome research has only scratched the surface of the variety of healthy microbiomes. Public participation in science through crowdsourcing and crowdfunding microbiome research provides a novel opportunity for both participants and investigators. However, turning participatory science into publishable data can be challenging. Clear communication with the participant base and among researchers can ameliorate some challenges. Three major aspects need to be considered: recruitment and ongoing interaction, sample collection, and data analysis. Usable data can be maximized through diligent participant interaction, careful survey design, and maintaining an open source pipeline. While participatory science will complement rather than replace traditional avenues, it presents new opportunities for studies in the microbiome and beyond. PMID:27047589
Photon-in photon-out hard X-ray spectroscopy at the Linac Coherent Light Source
Alonso-Mori, Roberto; Sokaras, Dimosthenis; Zhu, Diling; ...
2015-04-15
X-ray free-electron lasers (FELs) have opened unprecedented possibilities to study the structure and dynamics of matter at an atomic level and ultra-fast timescale. Many of the techniques routinely used at storage ring facilities are being adapted for experiments conducted at FELs. In order to take full advantage of these new sources several challenges have to be overcome. They are related to the very different source characteristics and the resulting impact on sample delivery, X-ray optics, X-ray detection and data acquisition. Here it is described how photon-in photon-out hard X-ray spectroscopy techniques can be applied to study the electronic structure and dynamics of transition metal systems with ultra-bright and ultra-short FEL X-ray pulses. In particular, some of the experimental details that differ from synchrotron-based setups are discussed and illustrated by recent measurements performed at the Linac Coherent Light Source.
NASA Astrophysics Data System (ADS)
Cassan, Arnaud
2017-07-01
The exoplanet detection rate from gravitational microlensing has grown significantly in recent years thanks to a great enhancement of resources and improved observational strategy. Current observatories include ground-based wide-field and/or robotic world-wide networks of telescopes, as well as space-based observatories such as the Spitzer and Kepler/K2 satellites. This results in a large quantity of data to be processed and analysed, which is a challenge for modelling codes because of the complexity of the parameter space to be explored and the intensive computations required to evaluate the models. In this work, I present a method that computes the quadrupole and hexadecapole approximations of the finite-source magnification more efficiently than previously available codes, with routines about six times and four times faster, respectively. The quadrupole takes just about twice the time of a point-source evaluation, which advocates for generalizing its use to large portions of the light curves. The corresponding routines are available as open-source python codes.
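For context, the quantity being approximated is the finite-source magnification: the point-source magnification averaged over the source disc. The sketch below implements the standard point-source formula and a brute-force uniform-disc average as a slow reference; it is not the paper's fast quadrupole/hexadecapole routines, and the grid resolution is an arbitrary choice.

    # Point-source microlensing magnification and a brute-force
    # finite-source average over a uniform disc of radius rho
    # (slow reference only, not the paper's optimized routines).
    import numpy as np

    def A_point(u):
        """Paczynski point-source magnification."""
        return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

    def A_finite(u, rho, n=200):
        """Average A_point over a uniform source disc of radius rho."""
        rr = np.sqrt(np.linspace(0.0, 1.0, n)) * rho  # ~uniform in area
        th = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
        R, T = np.meshgrid(rr, th)
        # Lens-to-point distance across the disc (law of cosines).
        u_grid = np.sqrt(u**2 + R**2 + 2 * u * R * np.cos(T))
        return A_point(u_grid).mean()

    print(A_point(0.1), A_finite(0.1, rho=0.05))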
Data Mashups: Potential Contribution to Decision Support on Climate Change and Health
Fleming, Lora E.; Haines, Andy; Golding, Brian; Kessel, Anthony; Cichowska, Anna; Sabel, Clive E.; Depledge, Michael H.; Sarran, Christophe; Osborne, Nicholas J.; Whitmore, Ceri; Cocksedge, Nicola; Bloomfield, Daniel
2014-01-01
Linking environmental, socioeconomic and health datasets provides new insights into the potential associations between climate change and human health and wellbeing, and underpins the development of decision support tools that will promote resilience to climate change, and thus enable more effective adaptation. This paper outlines the challenges and opportunities presented by advances in data collection, storage, analysis, and access, particularly focusing on “data mashups”. These data mashups are integrations of different types and sources of data, frequently using open application programming interfaces and data sources, to produce enriched results that were not necessarily the original reason for assembling the raw source data. As an illustration of this potential, this paper describes a recently funded initiative to create such a facility in the UK for use in decision support around climate change and health, and provides examples of suitable sources of data and the purposes to which they can be directed, particularly for policy makers and public health decision makers. PMID:24499879
Data mashups: potential contribution to decision support on climate change and health.
Fleming, Lora E; Haines, Andy; Golding, Brian; Kessel, Anthony; Cichowska, Anna; Sabel, Clive E; Depledge, Michael H; Sarran, Christophe; Osborne, Nicholas J; Whitmore, Ceri; Cocksedge, Nicola; Bloomfield, Daniel
2014-02-04
Linking environmental, socioeconomic and health datasets provides new insights into the potential associations between climate change and human health and wellbeing, and underpins the development of decision support tools that will promote resilience to climate change, and thus enable more effective adaptation. This paper outlines the challenges and opportunities presented by advances in data collection, storage, analysis, and access, particularly focusing on "data mashups". These data mashups are integrations of different types and sources of data, frequently using open application programming interfaces and data sources, to produce enriched results that were not necessarily the original reason for assembling the raw source data. As an illustration of this potential, this paper describes a recently funded initiative to create such a facility in the UK for use in decision support around climate change and health, and provides examples of suitable sources of data and the purposes to which they can be directed, particularly for policy makers and public health decision makers.
ERIC Educational Resources Information Center
Guhlin, Miguel
2007-01-01
Open source has continued to evolve and in the past three years the development of a graphical user interface has made it increasingly accessible and viable for end users without special training. Open source relies to a great extent on the free software movement. In this context, the term free refers not to cost, but to the freedom users have to…
GeNN: a code generation framework for accelerated brain simulations
Yavuz, Esin; Turner, James; Nowotny, Thomas
2016-01-01
Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance based Hodgkin-Huxley neurons but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/. PMID:26740369
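The code-generation idea behind GeNN, emitting specialized source text from a declarative model description instead of interpreting the model at runtime, can be caricatured in a few lines. The template, model dictionary, and emitted C-like kernel below are toy inventions; GeNN's real generator targets CUDA, and its actual interfaces are not shown.

    # Toy illustration of the code-generation idea: turn a declarative
    # model description into specialized source text. GeNN's real
    # generator targets CUDA; this template and API are inventions.
    MODEL = {"name": "izh_pop", "n": 1000000, "dt": 0.1}

    KERNEL_TEMPLATE = """\
    // auto-generated update kernel for population '{name}' ({n} neurons)
    void update_{name}(float *v, float *u) {{
        for (int i = 0; i < {n}; ++i) {{
            v[i] += {dt}f * (0.04f * v[i] * v[i] + 5.0f * v[i] + 140.0f - u[i]);
        }}
    }}
    """

    def generate(model):
        return KERNEL_TEMPLATE.format(**model)

    with open(f"{MODEL['name']}_update.c", "w") as f:
        f.write(generate(MODEL))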
Increasing Flight Software Reuse with OpenSatKit
NASA Technical Reports Server (NTRS)
McComas, David
2018-01-01
In January 2015 the NASA Goddard Space Flight Center (GSFC) released the Core Flight System (cFS) as open source under the NASA Open Source Agreement (NOSA) license. The cFS is based on flight software (FSW) developed for 12 spacecraft spanning nearly two decades of effort and it can provide about a third of the FSW functionality for a low-earth orbiting scientific spacecraft. The cFS is a FSW framework that is portable, configurable, and extendable using a product line deployment model. However, the components are maintained separately so the user must configure, integrate, and deploy them as a cohesive functional system. This can be very challenging especially for organizations such as universities building cubesats that have minimal experience developing FSW. Supporting universities was one of the primary motivators for releasing the cFS under NOSA. This paper describes the OpenSatKit that was developed to address the cFS deployment challenges and to serve as a cFS training platform for new users. It provides a fully functional out-of-the-box software system that includes NASA's cFS, Ball Aerospace's command and control system COSMOS, and a NASA dynamic simulator called 42. The kit is freely available since all of the components have been released as open source. The kit runs on a Linux platform, includes 8 cFS applications, several kit-specific applications, and built-in demos illustrating how to use key application features. It also includes the software necessary to port the cFS to a Raspberry Pi and instructions for configuring COSMOS to communicate with the target. All of the demos and test scripts can be rerun unchanged with the cFS running on the Raspberry Pi. The cFS uses a 3-tiered layered architecture including a platform abstraction layer, a Core Flight Executive (cFE) middle layer, and an application layer. Similar to smart phones, the cFS application layer is the key architectural feature for users to extend the FSW functionality to meet their mission-specific requirements. The platform abstraction layer and the cFE layers go a step further than smart phones by providing a platform-agnostic Application Programmer Interface (API) that allows applications to run unchanged on different platforms. OpenSatKit can serve two significant architectural roles that will further help the adoption of the cFS and help create a community of users that can share assets. First, the kit is being enhanced to automate the integration of applications with the goal of creating a virtual cFS 'App Store'. Second, a platform certification test suite can be developed that would allow users to verify the port of the cFS to a new platform. This paper will describe the current state of these efforts and future plans.
SolTrace | Concentrating Solar Power | NREL
SolTrace is available as an NREL packaged distribution or as source code from the SolTrace open source project website. The code uses Monte-Carlo ray-tracing methodology. With the release of the SolTrace open source project, the software has adopted…
When Free Isn't Free: The Realities of Running Open Source in School
ERIC Educational Resources Information Center
Derringer, Pam
2009-01-01
Despite the last few years' growth in awareness of open-source software in schools and the potential savings it represents, its widespread adoption is still hampered. Randy Orwin, technology director of the Bainbridge Island School District in Washington State and a strong open-source advocate, cautions that installing an open-source…
78 FR 19799 - United States Mint Kids' Baseball Coin Design Challenge
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-02
... DEPARTMENT OF THE TREASURY United States Mint United States Mint Kids' Baseball Coin Design Challenge ACTION: Notification of the Opening of the United States Mint Kids' Baseball Coin Design Challenge on April 11, 2013. SUMMARY: The United States Mint announces the opening of a national kids' baseball...
NASA Astrophysics Data System (ADS)
Burba, George; Anderson, Tyler; Biraud, Sebastien; Caulton, Dana; von Fischer, Joe; Gioli, Beniamino; Hanson, Chad; Ham, Jay; Kohnert, Katrin; Larmanou, Eric; Levy, Peter; Polidori, Andrea; Pikelnaya, Olga; Sachs, Torsten; Serafimovich, Andrei; Zaldei, Alessandro; Zondlo, Mark; Zulueta, Rommel
2017-04-01
Methane plays a critical role in the radiation balance, chemistry of the atmosphere, and air quality. The major anthropogenic sources of methane include oil and gas development sites, natural gas distribution networks, landfill emissions, and agricultural production. The majority of oil and gas and urban methane emission occurs via variable-rate point sources or diffused spots in topographically challenging terrains (e.g., street tunnels, elevated locations at water treatment plants, vents, etc.). Locating and measuring such methane emissions is challenging when using traditional micrometeorological techniques, and requires development of novel approaches. Landfill methane emissions traditionally assessed at monthly or longer time intervals are subject to large uncertainties because of the snapshot nature of the measurements and the barometric pumping phenomenon. The majority of agricultural and natural methane production occurs in areas with little infrastructure or easily available grid power (e.g., rice fields, arctic and boreal wetlands, tropical mangroves, etc.). A lightweight, high-speed, high-resolution, open-path technology was recently developed for eddy covariance measurements of methane flux, with power consumption 30-150 times below other available technologies. It was designed to run on solar panels or a small generator and be placed in the middle of the methane-producing ecosystem without a need for grid power. Lately, this instrumentation has been utilized increasingly more frequently outside of the traditional use on stationary flux towers. These novel approaches include measurements from various moving platforms, such as cars, aircraft, and ships. Projects included mapping of concentrations and vertical profiles, leak detection and quantification, mobile emission detection from natural gas-powered cars, soil methane flux surveys, etc. This presentation will describe the latest state of the key projects utilizing the novel lightweight low-power high-resolution open-path technology, and will highlight several novel approaches where such instrumentation was used in mobile deployments in urban, agricultural and natural environments by academic institutions, regulatory agencies and industry.
Mobile Measurements of Methane Using High-Speed Open-Path Technology
NASA Astrophysics Data System (ADS)
Burba, G. G.; Anderson, T.; Ediger, K.; von Fischer, J.; Gioli, B.; Ham, J. M.; Hupp, J. R.; Kohnert, K.; Levy, P. E.; Polidori, A.; Pikelnaya, O.; Price, E.; Sachs, T.; Serafimovich, A.; Zondlo, M. A.; Zulueta, R. C.
2016-12-01
Methane plays a critical role in the radiation balance, chemistry of the atmosphere, and air quality. The major anthropogenic sources of CH4 include oil and gas development sites, natural gas distribution networks, landfill emissions, and agricultural production. The majority of oil and gas and urban CH4 emission occurs via variable-rate point sources or diffused spots in topographically challenging terrains (e.g., street tunnels, elevated locations at water treatment plants, vents, etc.). Locating and measuring such CH4 emissions is challenging when using traditional micrometeorological techniques, and requires development of novel approaches. Landfill CH4 emissions traditionally assessed at monthly or longer time intervals are subject to large uncertainties because of the snapshot nature of the measurements and the barometric pumping phenomenon. The majority of agricultural and natural CH4 production occurs in areas with little infrastructure or easily available grid power (e.g., rice fields, arctic and boreal wetlands, tropical mangroves, etc.). A lightweight, high-speed, high-resolution, open-path technology was recently developed for eddy covariance measurements of CH4 flux, with power consumption 30-150 times below other available technologies. It was designed to run on solar panels or a small generator and be placed in the middle of the methane-producing ecosystem without a need for grid power. Lately, this instrumentation has been utilized increasingly more frequently outside of the traditional use on stationary flux towers. These novel approaches include measurements from various moving platforms, such as cars, aircraft, and ships. Projects included mapping of concentrations and vertical profiles, leak detection and quantification, mobile emission detection from natural gas-powered cars, soil CH4 flux surveys, etc. This presentation will describe key projects utilizing the novel lightweight low-power high-resolution open-path technology, and will highlight several novel approaches where such instrumentation was used in mobile deployments in urban, agricultural and natural environments by academic institutions, regulatory agencies and industry.
OMPC: an Open-Source MATLAB®-to-Python Compiler
Jurica, Peter; van Leeuwen, Cees
2008-01-01
Free access to scientific information facilitates scientific progress. Open-access scientific journals are a first step in this direction; a further step is to make auxiliary and supplementary materials that accompany scientific publications, such as methodological procedures and data-analysis tools, open and accessible to the scientific community. To this purpose it is instrumental to establish a software base, which will grow toward a comprehensive free and open-source language of technical and scientific computing. Endeavors in this direction are met with an important obstacle. MATLAB®, the predominant computation tool in many fields of research, is a closed-source commercial product. To facilitate the transition to an open computation platform, we propose Open-source MATLAB®-to-Python Compiler (OMPC), a platform that uses syntax adaptation and emulation to allow transparent import of existing MATLAB® functions into Python programs. The imported MATLAB® modules will run independently of MATLAB®, relying on Python's numerical and scientific libraries. Python offers a stable and mature open source platform that, in many respects, surpasses commonly used, expensive commercial closed source packages. The proposed software will therefore facilitate the transparent transition towards a free and general open-source lingua franca for scientific computation, while enabling access to the existing methods and algorithms of technical computing already available in MATLAB®. OMPC is available at http://ompc.juricap.com. PMID:19225577
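As a rough illustration of the workflow the abstract describes, the sketch below contrasts a MATLAB function with a hand-written Python counterpart of the kind of module an OMPC-style translation would produce. The function name is hypothetical, and OMPC's real output wraps MATLAB semantics (e.g. 1-based indexing) far more faithfully than this simplification.

```python
# MATLAB source (mysum.m):
#   function s = mysum(v)
#     s = sum(v);
#
# OMPC aims to let such a file be imported into Python and run on
# top of numpy. A rough, hand-written equivalent of the translated
# module, to convey the idea:
import numpy as np

def mysum(v):
    """Python counterpart of the MATLAB function above."""
    return np.asarray(v).sum()

print(mysum([1, 2, 3]))  # 6
```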
Nutrition considerations for open-water swimming.
Shaw, Gregory; Koivisto, Anu; Gerrard, David; Burke, Louise M
2014-08-01
Open-water swimming (OWS) is a rapidly developing discipline. Events of 5-25 km are featured at FINA World Championships, and the international circuit includes races of 5-88 km. The Olympic OWS event, introduced in 2008, is contested over 10 km. Differing venues present changing environmental conditions, including water and ambient temperatures, humidity, solar radiation, and unpredictable tides. Furthermore, the duration of most OWS events (1-6 hr) creates unique physiological challenges to thermoregulation, hydration status, and muscle fuel stores. Current nutrition recommendations for open-water training and competition are either an extension of recommendations from pool swimming or are extrapolated from other athletic populations with similar physiological requirements. Competition nutrition should focus on optimizing prerace hydration and glycogen stores. Although swimmers should rely on self-supplied fuel and fluid sources for shorter events, for races of 10 km or greater, fluid and fuel replacement can occur from feeding pontoons when tactically appropriate. Over the longer races, feeding pontoons should be used to achieve desirable targets of up to 90 g/hr of carbohydrates from multitransportable sources. Exposure to variable water and ambient temperatures will play a significant role in determining race nutrition strategies. For example, in extreme environments, thermoregulation may be assisted by manipulating the temperature of the ingested fluids. Swimmers are encouraged to work with nutrition experts to develop effective and efficient strategies that enhance performance through appropriate in-competition nutrition.
NASA Astrophysics Data System (ADS)
Amme, J.; Pleßmann, G.; Bühler, J.; Hülk, L.; Kötter, E.; Schwaegerl, P.
2018-02-01
The increasing integration of renewable energy into the electricity supply system creates new challenges for distribution grids. The planning and operation of distribution systems requires appropriate grid models that consider the heterogeneity of existing grids. In this paper, we describe a novel method to generate synthetic medium-voltage (MV) grids, which we applied in our DIstribution Network GeneratOr (DINGO). DINGO is open-source software and uses freely available data. Medium-voltage grid topologies are synthesized based on location and electricity demand in defined demand areas. For this purpose, we use GIS data containing demand areas with high-resolution spatial data on physical properties, land use, energy, and demography. The grid topology is treated as a capacitated vehicle routing problem (CVRP) combined with a local search metaheuristics. We also consider the current planning principles for MV distribution networks, paying special attention to line congestion and voltage limit violations. In the modelling process, we included power flow calculations for validation. The resulting grid model datasets contain 3608 synthetic MV grids in high resolution, covering all of Germany and taking local characteristics into account. We compared the modelled networks with real network data. In terms of number of transformers and total cable length, we conclude that the method presented in this paper generates realistic grids that could be used to implement a cost-optimised electrical energy system.
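DINGO's topology synthesis is formulated as a capacitated vehicle routing problem. As an illustration of that problem class only (not DINGO's actual algorithm, which pairs CVRP construction with local search metaheuristics and grid-specific constraints), the following Python sketch implements the classic Clarke-Wright savings heuristic on toy demand nodes.

```python
import math
from itertools import combinations

def clarke_wright(depot, nodes, demand, capacity):
    """Toy Clarke-Wright savings heuristic for a capacitated VRP.

    depot: (x, y); nodes: {id: (x, y)}; demand: {id: load}.
    Returns routes as lists of node ids (depot implicit at both ends).
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    route_of = {i: i for i in nodes}        # node id -> route key
    routes = {i: [i] for i in nodes}        # route key -> ordered nodes
    loads = {i: demand[i] for i in nodes}   # route key -> total demand

    # savings of serving i and j on one route instead of two
    savings = sorted(
        ((dist(depot, nodes[i]) + dist(depot, nodes[j])
          - dist(nodes[i], nodes[j]), i, j)
         for i, j in combinations(nodes, 2)),
        reverse=True)

    for _, i, j in savings:
        ki, kj = route_of[i], route_of[j]
        if ki == kj or loads[ki] + loads[kj] > capacity:
            continue
        ri, rj = routes[ki], routes[kj]
        if ri[-1] == i and rj[0] == j:      # join tail of ri to head of rj
            merged = ri + rj
        elif rj[-1] == j and ri[0] == i:    # or tail of rj to head of ri
            merged = rj + ri
        else:
            continue                        # i or j is interior: skip
        routes[ki], loads[ki] = merged, loads[ki] + loads[kj]
        for n in rj:
            route_of[n] = ki
        del routes[kj], loads[kj]

    return list(routes.values())

# toy example: a substation at the origin feeding five demand nodes
nodes = {1: (0, 2), 2: (1, 2), 3: (2, 1), 4: (2, -1), 5: (0, -2)}
demand = {1: 4, 2: 4, 3: 3, 4: 5, 5: 2}
print(clarke_wright((0, 0), nodes, demand, capacity=10))
```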
A small, lightweight multipollutant sensor system for ground ...
Characterizing highly dynamic, transient, and vertically lofted emissions from open area sources poses unique measurement challenges. This study developed and applied a multipollutant sensor and integrated sampler system for use on mobile applications including tethered balloons (aerostats) and unmanned aerial vehicles (UAVs). The system is particularly applicable to open area sources, such as forest fires, due to its light weight (3.5 kg), compact size (6.75 L), and internal power supply. The sensor system, termed “Kolibri”, consists of sensors measuring CO2 and CO, and samplers for particulate matter (PM) and volatile organic compounds (VOCs). The Kolibri is controlled by a microcontroller which can record and transfer data in real time through a radio module. Selection of the sensors was based on laboratory testing for accuracy, response delay and recovery, cross-sensitivity, and precision. The Kolibri was compared against rack-mounted continuous emission monitoring systems (CEMs) and another mobile sampling instrument (the “Flyer”) that has been used in over ten open area pollutant sampling events. Our results showed that the time series of CO, CO2, and PM2.5 concentrations measured by the Kolibri agreed well with those from the CEMs and the Flyer, with a laboratory-tested percentage error of 4.9%, 3%, and 5.8%, respectively. The VOC emission factors obtained using the Kolibri were consistent with existing literature values that relate concentration
ERIC Educational Resources Information Center
Villano, Matt
2006-01-01
Increasingly, colleges and universities are turning to open source as a way to meet their technology infrastructure and application needs. Open source has changed life for visionary CIOs and their campus communities nationwide. The author discusses what these technologists see as the benefits--and the considerations.
Seid-Karbasi, Puya; Ye, Xin C; Zhang, Allen W; Gladish, Nicole; Cheng, Suzanne Y S; Rothe, Katharina; Pilsworth, Jessica A; Kang, Min A; Doolittle, Natalie; Jiang, Xiaoyan; Stirling, Peter C; Wasserman, Wyeth W
2017-03-01
Student creation of educational materials has the capacity both to enhance learning and to decrease costs. Three successive honors-style classes of undergraduate students in a cancer genetics class worked with a new software system, CuboCube, to create an e-textbook. CuboCube is an open-source learning materials creation system designed to facilitate e-textbook development, with an ultimate goal of improving the social learning experience for students. Equipped with crowdsourcing capabilities, CuboCube provides intuitive tools for nontechnical and technical authors alike to create content together in a structured manner. The process of e-textbook development revealed both strengths and challenges of the approach, which can inform future efforts. Both the CuboCube platform and the Cancer Genetics E-textbook are freely available to the community.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-14
... contracts before commercial sources in the open market. The proposed rule amends FAR 8.002 as follows: The... requirements for supplies and services from commercial sources in the open market. The proposed FAR 8.004 would... subpart 8.6). (b) Commercial sources (including educational and non-profit institutions) in the open...
de Souza, Andrea; Bittker, Joshua A; Lahr, David L; Brudz, Steve; Chatwin, Simon; Oprea, Tudor I; Waller, Anna; Yang, Jeremy J; Southall, Noel; Guha, Rajarshi; Schürer, Stephan C; Vempati, Uma D; Southern, Mark R; Dawson, Eric S; Clemons, Paul A; Chung, Thomas D Y
2014-06-01
Recent industry-academic partnerships involve collaboration among disciplines, locations, and organizations using publicly funded "open-access" and proprietary commercial data sources. These require the effective integration of chemical and biological information from diverse data sources, which presents key informatics, personnel, and organizational challenges. The BioAssay Research Database (BARD) was conceived to address these challenges and serve as a community-wide resource and intuitive web portal for public-sector chemical-biology data. Its initial focus is to enable scientists to more effectively use the National Institutes of Health Roadmap Molecular Libraries Program (MLP) data generated from the 3-year pilot and 6-year production phases of the Molecular Libraries Probe Production Centers Network (MLPCN), which is currently in its final year. BARD evolves the current data standards through structured assay and result annotations that leverage BioAssay Ontology and other industry-standard ontologies, and a core hierarchy of assay definition terms and data standards defined specifically for small-molecule assay data. We initially focused on migrating the highest-value MLP data into BARD and bringing it up to this new standard. We review the technical and organizational challenges overcome by the interdisciplinary BARD team, veterans of public- and private-sector data-integration projects, who are collaborating to describe (functional specifications), design (technical specifications), and implement this next-generation software solution. © 2014 Society for Laboratory Automation and Screening.
Evans, Nicholas G; Selgelid, Michael J
2015-08-01
In this article, we raise ethical concerns about the potential misuse of open-source biology (OSB): biological research and development that progresses through an organisational model of radical openness, deskilling, and innovation. We compare this organisational structure to that of the open-source software model, and detail salient ethical implications of this model. We demonstrate that OSB, in virtue of its commitment to openness, may be resistant to governance attempts.
Using R to implement spatial analysis in open source environment
NASA Astrophysics Data System (ADS)
Shao, Yixi; Chen, Dong; Zhao, Bo
2007-06-01
R is an open source (GPL) language and environment for spatial analysis, statistical computing and graphics, which provides a wide variety of statistical and graphical techniques and is highly extensible. In the Open Source environment it plays an important role in spatial analysis. Implementing spatial analysis in the Open Source environment, which we call Open Source geocomputation, means using the R data analysis language integrated with GRASS GIS and MySQL or PostgreSQL. This paper explains the architecture of the Open Source GIS environment and emphasizes the role R plays in spatial analysis. Furthermore, an apt illustration of the functions of R is given in this paper through the project of constructing CZPGIS (Cheng Zhou Population GIS), supported by the Changzhou Government, China. In this project we use R to implement geostatistics in the Open Source GIS environment to evaluate the spatial correlation of land price and estimate it by kriging interpolation. We also use R integrated with MapServer and PHP to show how R and other Open Source software cooperate with each other in a WebGIS environment, which demonstrates the advantages of using R to implement spatial analysis in the Open Source GIS environment. Finally, we point out that the packages for spatial analysis in R are still scattered and that limited memory remains a bottleneck when a large number of clients connect at the same time. Further work is therefore to organize the extensive packages, or to design normative packages, and to make R cooperate better with commercial software such as ArcIMS. We also look forward to developing packages for land price evaluation.
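The kriging interpolation mentioned above can be illustrated compactly. The sketch below implements ordinary kriging in Python with an assumed exponential variogram and made-up land-price samples; the CZPGIS project itself worked in R with its geostatistics tooling, so this is a conceptual stand-in, not the project's code.

```python
import numpy as np

def exp_variogram(h, nugget=0.0, sill=1.0, rng=500.0):
    """Exponential variogram model (parameters assumed, not fitted)."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng))

def ordinary_kriging(xy, z, xy0, **vario):
    """Predict z at location xy0 from samples (xy, z) by ordinary kriging."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    # kriging system: [[Gamma, 1], [1^T, 0]] @ [w, mu] = [gamma0, 1]
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d, **vario)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = exp_variogram(np.linalg.norm(xy - xy0, axis=1), **vario)
    w = np.linalg.solve(A, b)
    return float(w[:n] @ z)          # weighted average of samples

# toy land-price samples (coordinates in metres, price in index units)
xy = np.array([[0, 0], [100, 0], [0, 100], [100, 100.0]])
z = np.array([10.0, 12.0, 11.0, 15.0])
print(ordinary_kriging(xy, z, np.array([50.0, 50.0]), sill=4.0, rng=300.0))
```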
NASA Astrophysics Data System (ADS)
Zelt, C. A.
2017-12-01
Earth science attempts to understand how the earth works. This research often depends on software for modeling, processing, inverting or imaging. Freely sharing open-source software is essential to prevent reinventing the wheel and allows software to be improved and applied in ways the original author may never have envisioned. For young scientists, releasing software can increase their name recognition when applying for jobs and funding, and create opportunities for collaborations when scientists who collect data want the software's creator to be involved in their project. However, we frequently hear scientists say that software is a tool, not science. Creating software that implements a new or better way of earth modeling or geophysical processing, inverting or imaging should be viewed as earth science. Creating software for things like data visualization, format conversion, storage, or transmission, or programming to enhance computational performance, may be viewed as computer science. The former, ideally with an application to real data, can be published in earth science journals, the latter possibly in computer science journals. Citations in either case should accurately reflect the impact of the software on the community. Funding agencies need to support more software development and open-source releasing, and the community should give more high-profile awards for developing impactful open-source software. Funding support and community recognition for software development can have far reaching benefits when the software is used in foreseen and unforeseen ways, potentially for years after the original investment in the software development. For funding, an open-source release that is well documented should be required, with example input and output files. Appropriate funding will provide the incentive and time to release user-friendly software, and minimize the need for others to duplicate the effort. All funded software should be available through a single web site, ideally maintained by someone in a funded position. Perhaps the biggest challenge is the reality that researchers who use software, as opposed to those who develop it, are more attractive university hires because they are more likely to be "big picture" scientists that publish in the highest profile journals, although sometimes the two go together.
GABBs: Cyberinfrastructure for Self-Service Geospatial Data Exploration, Computation, and Sharing
NASA Astrophysics Data System (ADS)
Song, C. X.; Zhao, L.; Biehl, L. L.; Merwade, V.; Villoria, N.
2016-12-01
Geospatial data are present everywhere today with the proliferation of location-aware computing devices. This is especially true in the scientific community where large amounts of data are driving research and education activities in many domains. Collaboration over geospatial data, for example, in modeling, data analysis and visualization, must still overcome the barriers of specialized software and expertise among other challenges. In addressing these needs, the Geospatial data Analysis Building Blocks (GABBs) project aims at building geospatial modeling, data analysis and visualization capabilities in an open source web platform, HUBzero. Funded by NSF's Data Infrastructure Building Blocks initiative, GABBs is creating a geospatial data architecture that integrates spatial data management, mapping and visualization, and interfaces in the HUBzero platform for scientific collaborations. The geo-rendering enabled Rappture toolkit, a generic Python mapping library, geospatial data exploration and publication tools, and an integrated online geospatial data management solution are among the software building blocks from the project. The GABBs software will be available through Amazon's AWS Marketplace VM images and as open source. Hosting services are also available to the user community. The outcome of the project will enable researchers and educators to self-manage their scientific data, rapidly create GIS-enabled tools, share geospatial data and tools on the web, and build dynamic workflows connecting data and tools, all without requiring significant software development skills, GIS expertise or IT administrative privileges. This presentation will describe the GABBs architecture, toolkits and libraries, and showcase the scientific use cases that utilize GABBs capabilities, as well as the challenges and solutions for GABBs to interoperate with other cyberinfrastructure platforms.
A light-driven artificial flytrap
Wani, Owies M.; Zeng, Hao; Priimagi, Arri
2017-01-01
The sophistication, complexity and intelligence of biological systems is a continuous source of inspiration for mankind. Mimicking the natural intelligence to devise tiny systems that are capable of self-regulated, autonomous action to, for example, distinguish different targets, remains among the grand challenges in biomimetic micro-robotics. Herein, we demonstrate an autonomous soft device, a light-driven flytrap, that uses optical feedback to trigger photomechanical actuation. The design is based on light-responsive liquid-crystal elastomer, fabricated onto the tip of an optical fibre, which acts as a power source and serves as a contactless probe that senses the environment. Mimicking natural flytraps, this artificial flytrap is capable of autonomous closure and object recognition. It enables self-regulated actuation within the fibre-sized architecture, thus opening up avenues towards soft, autonomous small-scale devices. PMID:28534872
Innovative Ways of Visualising Meta Data in 4D Using Open Source Libraries
NASA Astrophysics Data System (ADS)
Balhar, Jakub; Valach, Pavel; Veselka, Jonas; Voumard, Yann
2016-08-01
More and more data are being measured by different Earth Observation satellites around the world. The ever-increasing amount of these data presents new challenges and opportunities for their visualization. In this paper we propose how to visualize the amount, distribution and structure of the data in a transparent way that also takes the time dimension into account. Our approach provides a global overview as well as detailed regional information about the distribution of products from EO missions. We focus on introducing our mobile-friendly and easy-to-use web mapping application for 4D visualization of the data. We also present a Java application that can read and process the data from various data sources.
Ciobanu, O
2009-01-01
The objective of this study was to obtain three-dimensional (3D) images and to perform biomechanical simulations starting from DICOM images obtained by computed tomography (CT). Open source software was used to prepare digitized 2D images of tissue sections and to create 3D reconstructions from the segmented structures. Finally, the 3D images were used in open source software to perform biomechanical simulations. This study demonstrates the applicability and feasibility of currently available open source software for 3D reconstruction and biomechanical simulation. The use of open source software may improve the efficiency of investments in imaging technologies and in CAD/CAM technologies for implant and prosthesis fabrication, which otherwise require expensive specialized software.
Portable open-path chemical sensor using a quantum cascade laser
NASA Astrophysics Data System (ADS)
Corrigan, Paul; Lwin, Maung; Huntley, Reuven; Chhabra, Amandeep; Moshary, Fred; Gross, Barry; Ahmed, Samir
2009-05-01
Remote sensing of enemy installations or their movements by trace gas detection is a critical but challenging military objective. Open path measurements over ranges of a few meters to many kilometers with sensitivity in the parts per million or billion regime are crucial in anticipating the presence of a threat. Previous approaches to detect ground level chemical plumes, explosive constituents, or combustion have relied on low-resolution, short range Fourier transform infrared spectrometer (FTIR), or low-sensitivity near-infrared differential optical absorption spectroscopy (DOAS). As mid-infrared quantum cascade laser (QCL) sources have improved in cost and performance, systems based on QCL's that can be tailored to monitor multiple chemical species in real time are becoming a viable alternative. We present the design of a portable, high-resolution, multi-kilometer open path trace gas sensor based on QCL technology. Using a tunable (1045-1047cm-1) QCL, a modeled atmosphere and link-budget analysis with commercial component specifications, we show that with this approach, accuracy in parts per billion ozone or ammonia can be obtained in seconds at path lengths up to 10 km. We have assembled an open-path QCL sensor based on this theoretical approach at City College of New York, and we present preliminary results demonstrating the potential of QCLs in open-path sensing applications.
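Open-path retrievals of this kind rest on the Beer-Lambert law: a path-averaged concentration follows from the measured optical depth, the absorption cross-section and the path length. A back-of-envelope Python sketch with illustrative numbers, not the authors' link-budget model:

```python
import math

def path_avg_ppb(I0, I, sigma_cm2, L_m, n_air=2.5e19):
    """Path-averaged mixing ratio (ppb) from the Beer-Lambert law.

    I/I0     : received vs. emitted (or off-line) intensity
    sigma_cm2: absorption cross-section at the laser line (cm^2)
    L_m      : optical path length (m); n_air: air density (molec cm^-3)
    All values here are illustrative, not a real link-budget analysis.
    """
    tau = -math.log(I / I0)                  # optical depth
    N = tau / (sigma_cm2 * L_m * 100.0)      # absorber molecules per cm^3
    return 1e9 * N / n_air

# e.g. 0.5% absorption over a 5 km path with sigma = 1e-19 cm^2
print(f"{path_avg_ppb(1.0, 0.995, 1e-19, 5000.0):.1f} ppb")  # ~4 ppb
```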
openBIS: a flexible framework for managing and analyzing complex data in biology research
2011-01-01
Background Modern data generation techniques used in distributed systems biology research projects often create datasets of enormous size and diversity. We argue that in order to overcome the challenge of managing those large quantitative datasets and maximise the biological information extracted from them, a sound information system is required. Ease of integration with data analysis pipelines and other computational tools is a key requirement for it. Results We have developed openBIS, an open source software framework for constructing user-friendly, scalable and powerful information systems for data and metadata acquired in biological experiments. openBIS enables users to collect, integrate, share, publish data and to connect to data processing pipelines. This framework can be extended and has been customized for different data types acquired by a range of technologies. Conclusions openBIS is currently being used by several SystemsX.ch and EU projects applying mass spectrometric measurements of metabolites and proteins, High Content Screening, or Next Generation Sequencing technologies. The attributes that make it interesting to a large research community involved in systems biology projects include versatility, simplicity in deployment, scalability to very large data, flexibility to handle any biological data type and extensibility to the needs of any research domain. PMID:22151573
Boulos, Maged N Kamel; Honda, Kiyoshi
2006-01-01
Open Source Web GIS software systems have reached a stage of maturity, sophistication, robustness and stability, and usability and user friendliness rivalling that of commercial, proprietary GIS and Web GIS server products. The Open Source Web GIS community is also actively embracing OGC (Open Geospatial Consortium) standards, including WMS (Web Map Service). WMS enables the creation of Web maps that have layers coming from multiple different remote servers/sources. In this article we present one easy to implement Web GIS server solution that is based on the Open Source University of Minnesota (UMN) MapServer. By following the accompanying step-by-step tutorial instructions, interested readers running mainstream Microsoft® Windows machines and with no prior technical experience in Web GIS or Internet map servers will be able to publish their own health maps on the Web and add to those maps additional layers retrieved from remote WMS servers. The 'digital Asia' and 2004 Indian Ocean tsunami experiences in using free Open Source Web GIS software are also briefly described. PMID:16420699
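The WMS interface at the heart of this kind of solution is driven by plain HTTP GetMap requests. A minimal Python sketch that assembles one; the endpoint and layer names are placeholders, while the parameters themselves are standard WMS 1.1.1:

```python
from urllib.parse import urlencode

# Build an OGC WMS 1.1.1 GetMap request. The MapServer endpoint and
# layer names below are hypothetical; the parameter set is standard.
endpoint = "http://example.org/cgi-bin/mapserv?map=/maps/health.map"
params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "districts,clinics",   # comma-separated layer list
    "STYLES": "",
    "SRS": "EPSG:4326",              # geographic lat/lon
    "BBOX": "95.0,-11.0,141.0,6.0",  # minx,miny,maxx,maxy
    "WIDTH": "800",
    "HEIGHT": "400",
    "FORMAT": "image/png",
    "TRANSPARENT": "TRUE",
}
print(endpoint + "&" + urlencode(params))
```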
Rapid development of medical imaging tools with open-source libraries.
Caban, Jesus J; Joshi, Alark; Nagy, Paul
2007-11-01
Rapid prototyping is an important element in researching new imaging analysis techniques and developing custom medical applications. In the last ten years, the open source community and the number of open source libraries and freely available frameworks for biomedical research have grown significantly. What they offer are now considered standards in medical image analysis, computer-aided diagnosis, and medical visualization. A cursory review of the peer-reviewed literature in imaging informatics (indeed, in almost any information technology-dependent scientific discipline) indicates the current reliance on open source libraries to accelerate development and validation of processes and techniques. In this survey paper, we review and compare a few of the most successful open source libraries and frameworks for medical application development. Our dual intentions are to provide evidence that these approaches already constitute a vital and essential part of medical image analysis, diagnosis, and visualization and to motivate the reader to use open source libraries and software for rapid prototyping of medical applications and tools.
Open-Source RTOS Space Qualification: An RTEMS Case Study
NASA Technical Reports Server (NTRS)
Zemerick, Scott
2017-01-01
NASA space-qualification of reusable off-the-shelf real-time operating systems (RTOSs) remains elusive due to several factors notably (1) The diverse nature of RTOSs utilized across NASA, (2) No single NASA space-qualification criteria, lack of verification and validation (V&V) analysis, or test beds, and (3) different RTOS heritages, specifically open-source RTOSs and closed vendor-provided RTOSs. As a leader in simulation test beds, the NASA IV&V Program is poised to help jump-start and lead the space-qualification effort of the open source Real-Time Executive for Multiprocessor Systems (RTEMS) RTOS. RTEMS, as a case-study, can be utilized as an example of how to qualify all RTOSs, particularly the reusable non-commercial (open-source) ones that are gaining usage and popularity across NASA. Qualification will improve the overall safety and mission assurance of RTOSs for NASA-agency wide usage. NASA's involvement in space-qualification of an open-source RTOS such as RTEMS will drive the RTOS industry toward a more qualified and mature open-source RTOS product.
JHelioviewer. Time-dependent 3D visualisation of solar and heliospheric data
NASA Astrophysics Data System (ADS)
Müller, D.; Nicula, B.; Felix, S.; Verstringe, F.; Bourgoignie, B.; Csillaghy, A.; Berghmans, D.; Jiggens, P.; García-Ortiz, J. P.; Ireland, J.; Zahniy, S.; Fleck, B.
2017-09-01
Context. Solar observatories are providing the world-wide community with a wealth of data, covering wide time ranges (e.g. Solar and Heliospheric Observatory, SOHO), multiple viewpoints (Solar TErrestrial RElations Observatory, STEREO), and returning large amounts of data (Solar Dynamics Observatory, SDO). In particular, the large volume of SDO data presents challenges; the data are available only from a few repositories, and full-disk, full-cadence data for reasonable durations of scientific interest are difficult to download, due to their size and the download rates available to most users. From a scientist's perspective this poses three problems: accessing, browsing, and finding interesting data as efficiently as possible. Aims: To address these challenges, we have developed JHelioviewer, a visualisation tool for solar data based on the JPEG 2000 compression standard and part of the open source ESA/NASA Helioviewer Project. Since the first release of JHelioviewer in 2009, the scientific functionality of the software has been extended significantly, and the objective of this paper is to highlight these improvements. Methods: The JPEG 2000 standard offers useful new features that facilitate the dissemination and analysis of high-resolution image data and offers a solution to the challenge of efficiently browsing petabyte-scale image archives. The JHelioviewer software is open source, platform independent, and extendable via a plug-in architecture. Results: With JHelioviewer, users can visualise the Sun for any time period between September 1991 and today; they can perform basic image processing in real time, track features on the Sun, and interactively overlay magnetic field extrapolations. The software integrates solar event data and a timeline display. Once an interesting event has been identified, science quality data can be accessed for in-depth analysis. As a first step towards supporting science planning of the upcoming Solar Orbiter mission, JHelioviewer offers a virtual camera model that enables users to set the vantage point to the location of a spacecraft or celestial body at any given time.
NASA Astrophysics Data System (ADS)
Arias, Carolina; Brovelli, Maria Antonia; Moreno, Rafael
2015-04-01
We are in an age when water resources are increasingly scarce and the impacts of human activities on them are ubiquitous. These problems do not respect administrative or political boundaries, and they must be addressed by integrating information from multiple sources at multiple spatial and temporal scales. Communication, coordination and data sharing are critical for addressing the water conservation and management issues of the 21st century. However, different countries, provinces, local authorities and agencies dealing with water resources have diverse organizational, socio-cultural, economic, environmental and information technology (IT) contexts that raise challenges to the creation of information systems capable of integrating and distributing information across their areas of responsibility in an efficient and timely manner. Tight and disparate financial resources, and dissimilar IT infrastructures (data, hardware, software and personnel expertise) further complicate the creation of these systems. There is a pressing need for distributed interoperable water information systems that are user friendly, easily accessible and capable of managing and sharing large volumes of spatial and non-spatial data. In a distributed system, data and processes are created and maintained in different locations, each with competitive advantages to carry out specific activities. Open Data (data that can be freely distributed) is available in the water domain, and it should be further promoted across countries and organizations. Compliance with Open Specifications for data collection, storage and distribution is the first step toward the creation of systems that are capable of interacting and exchanging data in a seamless (interoperable) way. The features of Free and Open Source Software (FOSS) offer low access costs that facilitate the scalability and long-term viability of information systems. The World Wide Web (the Web) will be the platform of choice to deploy and access these systems. Geospatial capabilities for mapping, visualization, and spatial analysis will be important components of this new generation of Web-based interoperable information systems in the water domain. The purpose of this presentation is to increase the awareness of scientists, IT personnel and agency managers about the advantages offered by the combined use of Open Data, Open Specifications for geospatial and water-related data collection, storage and sharing, as well as mature FOSS projects for the creation of interoperable Web-based information systems in the water domain. A case study is used to illustrate how these principles and technologies can be integrated to create a system with the previously mentioned characteristics for managing and responding to flood events.
ERIC Educational Resources Information Center
Armbruster, Chris
2008-01-01
Open source, open content and open access are set to fundamentally alter the conditions of knowledge production and distribution. Open source, open content and open access are also the most tangible result of the shift towards e-science and digital networking. Yet, widespread misperceptions exist about the impact of this shift on knowledge…
Learning from hackers: open-source clinical trials.
Dunn, Adam G; Day, Richard O; Mandl, Kenneth D; Coiera, Enrico
2012-05-02
Open sharing of clinical trial data has been proposed as a way to address the gap between the production of clinical evidence and the decision-making of physicians. A similar gap was addressed in the software industry by their open-source software movement. Here, we examine how the social and technical principles of the movement can guide the growth of an open-source clinical trial community.
Benchmarking and Evaluating Unified Memory for OpenMP GPU Offloading
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mishra, Alok; Li, Lingda; Kong, Martin
Here, the latest OpenMP standard offers automatic device offloading capabilities which facilitate GPU programming. Despite this, there remain many challenges. One of these is the unified memory feature introduced in recent GPUs. GPUs in current and future HPC systems have enhanced support for unified memory space. In such systems, CPU and GPU can access each other's memory transparently, that is, the data movement is managed automatically by the underlying system software and hardware. Memory oversubscription is also possible in these systems. However, there is a significant lack of knowledge about how this mechanism will perform, and how programmers should use it. We have modified several benchmark codes in the Rodinia benchmark suite to study the behavior of OpenMP accelerator extensions and have used them to explore the impact of unified memory in an OpenMP context. We also modified the open source LLVM compiler to allow OpenMP programs to exploit unified memory. The results of our evaluation reveal that, while the performance of unified memory is comparable with that of normal GPU offloading for benchmarks with little data reuse, it suffers from significant overhead when GPU memory is oversubscribed for benchmarks with large amounts of data reuse. Based on these results, we provide several guidelines for programmers to achieve better performance with unified memory.
Big Geo Data Management: AN Exploration with Social Media and Telecommunications Open Data
NASA Astrophysics Data System (ADS)
Arias Munoz, C.; Brovelli, M. A.; Corti, S.; Zamboni, G.
2016-06-01
The term Big Data has recently been used to define big, highly varied, complex data sets, which are created and updated at a high speed and require faster processing, namely, a reduced time to filter and analyse relevant data. These data are also increasingly becoming Open Data (data that can be freely distributed), made public by governments, agencies, private enterprises and others. There are at least two issues that can obstruct the availability and use of Open Big Datasets: Firstly, the gathering and geoprocessing of these datasets are very computationally intensive; hence, it is necessary to integrate high-performance solutions, preferably internet based, to achieve the goals. Secondly, the problems of heterogeneity and inconsistency in geospatial data are well known and affect the data integration process, but they are particularly problematic for Big Geo Data. Therefore, Big Geo Data integration will be one of the most challenging issues to solve. With these applications, we demonstrate that it is possible to provide processed Big Geo Data to common users, using open geospatial standards and technologies. NoSQL databases like MongoDB and frameworks like RASDAMAN could offer different functionalities that facilitate working with larger volumes and more heterogeneous geospatial data sources.
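As one concrete example of the NoSQL functionality referred to above, the following pymongo sketch stores GeoJSON points under a 2dsphere index and runs a proximity query. The connection string, database, collection and field names are assumed for illustration only.

```python
from pymongo import MongoClient, GEOSPHERE

# Connection string and names are placeholders for illustration.
client = MongoClient("mongodb://localhost:27017")
tweets = client["biggeo"]["tweets"]

# Store features as GeoJSON and index them on a spherical index.
tweets.insert_one({
    "text": "flooding near the river",
    "loc": {"type": "Point", "coordinates": [9.19, 45.46]},  # lon, lat
})
tweets.create_index([("loc", GEOSPHERE)])

# Find documents within 5 km of a point ($maxDistance is in metres).
cursor = tweets.find({"loc": {
    "$near": {
        "$geometry": {"type": "Point", "coordinates": [9.19, 45.464]},
        "$maxDistance": 5000,
    }}})
for doc in cursor:
    print(doc["text"])
```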
Ammonia losses and nitrogen partitioning at a southern High Plains open lot dairy
NASA Astrophysics Data System (ADS)
Todd, Richard W.; Cole, N. Andy; Hagevoort, G. Robert; Casey, Kenneth D.; Auvermann, Brent W.
2015-06-01
Animal agriculture is a significant source of ammonia (NH3). Cattle excrete most ingested nitrogen (N); most urinary N is converted to NH3, volatilized and lost to the atmosphere. Open lot dairies on the southern High Plains are a growing industry and face environmental challenges as well as reporting requirements for NH3 emissions. We quantified NH3 emissions from the open lot and wastewater lagoons of a commercial New Mexico dairy during a nine-day summer campaign. The 3500-cow dairy consisted of open lot, manure-surfaced corrals (22.5 ha area). Lactating cows comprised 80% of the herd. A flush system using recycled wastewater intermittently removed manure from feeding alleys to three lagoons (1.8 ha area). Open path lasers measured atmospheric NH3 concentration, sonic anemometers characterized turbulence, and inverse dispersion analysis was used to quantify emissions. Ammonia fluxes (15-min) averaged 56 and 37 μg m-2 s-1 at the open lot and lagoons, respectively. Ammonia emission rate averaged 1061 kg d-1 at the open lot and 59 kg d-1 at the lagoons; 95% of NH3 was emitted from the open lot. The per capita emission rate of NH3 was 304 g cow-1 d-1 from the open lot (41% of N intake) and 17 g cow-1 d-1 from lagoons (2% of N intake). Daily N input at the dairy was 2139 kg d-1, with 43, 36, 19 and 2% of the N partitioned to NH3 emission, manure/lagoons, milk, and cows, respectively.
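The reported per-capita rate and nitrogen partitioning can be sanity-checked from the abstract's own figures, using the standard 14/17 nitrogen mass fraction of NH3; a quick arithmetic sketch:

```python
# Sanity-check the reported NH3 budget with figures from the abstract.
herd = 3500                      # cows
nh3_openlot = 1061e3             # g NH3 per day emitted from the open lot
n_intake_total = 2139e3          # g N per day entering the whole dairy

per_capita = nh3_openlot / herd              # g NH3 cow-1 d-1
n_as_nh3 = per_capita * 14.0 / 17.0          # N mass fraction of NH3
n_intake_cow = n_intake_total / herd         # g N cow-1 d-1

print(f"per-capita NH3: {per_capita:.0f} g/d")                  # ~303
print(f"share of N intake: {100 * n_as_nh3 / n_intake_cow:.0f}%")  # ~41%
```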
Evaluation and selection of open-source EMR software packages based on integrated AHP and TOPSIS.
Zaidan, A A; Zaidan, B B; Al-Haiqi, Ahmed; Kiah, M L M; Hussain, Muzammil; Abdulnabi, Mohamed
2015-02-01
Evaluating and selecting software packages that meet the requirements of an organization are difficult aspects of the software engineering process. Selecting the wrong open-source EMR software package can be costly and may adversely affect business processes and the functioning of the organization. This study aims to evaluate and select open-source EMR software packages based on multi-criteria decision-making. A hands-on study was performed and a set of open-source EMR software packages were implemented locally on separate virtual machines to examine the systems more closely. Several measures were specified as the evaluation basis, and the systems were ranked based on a set of metric outcomes using the integrated Analytic Hierarchy Process (AHP) and TOPSIS. The experimental results showed that GNUmed and OpenEMR achieved better ranking scores than the other open-source EMR software packages. Copyright © 2014 Elsevier Inc. All rights reserved.
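TOPSIS ranks alternatives by their relative closeness to an ideal solution. A compact numpy sketch of the method, with made-up criteria and weights rather than the study's actual data:

```python
import numpy as np

def topsis(X, w, benefit):
    """Rank alternatives with TOPSIS.

    X       : (m alternatives x n criteria) decision matrix
    w       : criteria weights summing to 1 (e.g. derived via AHP)
    benefit : True where larger is better, False where smaller is
    Returns closeness scores in [0, 1]; higher is better.
    """
    X = np.asarray(X, dtype=float)
    V = X / np.linalg.norm(X, axis=0) * w        # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - ideal, axis=1)   # distance to ideal point
    d_worst = np.linalg.norm(V - worst, axis=1)  # distance to anti-ideal
    return d_worst / (d_best + d_worst)

# toy data: 3 EMR packages scored on usability, features, setup cost
X = [[7, 8, 3], [9, 6, 4], [6, 9, 2]]
w = np.array([0.5, 0.3, 0.2])                    # e.g. AHP-derived weights
scores = topsis(X, w, benefit=np.array([True, True, False]))
print(scores, scores.argsort()[::-1])            # best-to-worst ranking
```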
A robust scientific workflow for assessing fire danger levels using open-source software
NASA Astrophysics Data System (ADS)
Vitolo, Claudia; Di Giuseppe, Francesca; Smith, Paul
2017-04-01
Modelling forest fires is theoretically and computationally challenging because it involves the use of a wide variety of information, in large volumes and affected by high uncertainties. In-situ observations of wildfire, for instance, are highly sparse and need to be complemented by remotely sensed data measuring biomass burning to achieve homogeneous coverage at global scale. Fire models use weather reanalysis products to measure energy release and rate of spread but can only assess the potential predictability of fire danger as the actual ignition is due to human behaviour and, therefore, very unpredictable. Lastly, fire forecasting systems rely on weather forecasts to extend the advance warning but are currently calibrated using fire danger thresholds that are defined at global scale and do not take into account the spatial variability of fuel availability. As a consequence, uncertainties sharply increase cascading from the observational to the modelling stage and they might be further inflated by non-reproducible analyses. Although uncertainties in observations will only decrease with technological advances over the next decades, the other uncertainties (i.e. generated during modelling and post-processing) can already be addressed by developing transparent and reproducible analysis workflows, even more if implemented within open-source initiatives. This is because reproducible workflows aim to streamline the processing task as they present ready-made solutions to handle and manipulate complex and heterogeneous datasets. Also, opening the code to the scrutiny of other experts increases the chances to implement more robust solutions and avoids duplication of efforts. In this work we present our contribution to the forest fire modelling community: an open-source tool called "caliver" for the calibration and verification of forest fire model results. This tool is developed in the R programming language and publicly available under an open license. We will present the caliver R package, illustrate the main functionalities and show the results of our preliminary experiments calculating fire danger thresholds for various regions on Earth. We will compare these with the existing global thresholds and, lastly, demonstrate how these newly-calculated regional thresholds can lead to improved calibration of fire forecast models in an operational setting.
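Conceptually, region-specific danger classes of the kind discussed here can be derived as percentiles of the historical fire danger index distribution over a region. A minimal numpy sketch, assuming illustrative percentile choices (caliver itself is an R package, and its defaults may differ):

```python
import numpy as np

def danger_thresholds(fwi_history, percentiles=(50, 75, 90, 98)):
    """Derive danger-class thresholds as percentiles of a historical
    fire danger index sample (e.g. daily FWI values over a region).
    The percentile choices here are illustrative only.
    """
    labels = ("moderate", "high", "very high", "extreme")
    return dict(zip(labels, np.percentile(fwi_history, percentiles)))

rng = np.random.default_rng(1)
fwi = rng.gamma(shape=2.0, scale=6.0, size=20 * 365)  # synthetic record
print(danger_thresholds(fwi))
```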
NASA Astrophysics Data System (ADS)
Swetnam, T. L.; Pelletier, J. D.; Merchant, N.; Callahan, N.; Lyons, E.
2015-12-01
Earth science is making rapid advances through effective utilization of large-scale data repositories such as aerial LiDAR and access to NSF-funded cyberinfrastructures (e.g. the OpenTopography.org data portal, iPlant Collaborative, and XSEDE). Scaling analysis tasks that are traditionally developed using desktops, laptops or computing clusters to effectively leverage national and regional scale cyberinfrastructure poses unique challenges and barriers to adoption. To address some of these challenges, in Fall 2014 'Applied Cyberinfrastructure Concepts', a project-based learning course (ISTA 420/520) at the University of Arizona, focused on developing scalable models of 'Effective Energy and Mass Transfer' (EEMT, MJ m-2 yr-1) for use by the NSF Critical Zone Observatories (CZO) project. EEMT is a quantitative measure of the flux of available energy to the critical zone, and its computation involves inputs that have broad applicability (e.g. solar insolation). The course comprised 25 students with varying levels of computational skill and no prior domain background in the geosciences, who collaborated with domain experts to develop the scalable workflow. The original workflow, relying on the open-source QGIS platform on a laptop, was scaled to effectively utilize cloud environments (OpenStack), UA campus HPC systems, iRODS, and other XSEDE and OSG resources. The project utilizes public data, e.g. DEMs produced by OpenTopography.org and climate data from Daymet, which are processed using GDAL, GRASS and SAGA and the Makeflow and Work Queue task management software packages. Students were placed into collaborative groups to develop the separate aspects of the project. They were allowed to change teams, alter workflows, and design and develop novel code. The students were able to identify all necessary dependencies, recompile source onto the target execution platforms, and demonstrate a functional workflow, which was further improved upon by one of the group leaders over Spring 2015. All of the code, documentation and workflow description are currently available on GitHub, and a public data portal is in development. We present a case study of how students reacted to the challenge of a real science problem, their interactions with end-users, what went right, and what could be done better in the future.
The validity of open-source data when assessing jail suicides.
Thomas, Amanda L; Scott, Jacqueline; Mellow, Jeff
2018-05-09
The Bureau of Justice Statistics' Deaths in Custody Reporting Program is the primary source for jail suicide research, though the data is restricted from general dissemination. This study is the first to examine whether jail suicide data obtained from publicly available sources can help inform our understanding of this serious public health problem. Of the 304 suicides that were reported through the DCRP in 2009, roughly 56 percent (N = 170) of those suicides were identified through the open-source search protocol. Each of the sources was assessed based on how much information was collected on the incident and the types of variables available. A descriptive analysis was then conducted on the variables that were present in both data sources. The four variables present in each data source were: (1) demographic characteristics of the victim, (2) the location of occurrence within the facility, (3) the location of occurrence by state, and (4) the size of the facility. Findings demonstrate that the prevalence and correlates of jail suicides are extremely similar in both open-source and official data. However, for almost every variable measured, open-source data captured as much information as official data did, if not more. Further, variables not found in official data were identified in the open-source database, thus allowing researchers to have a more nuanced understanding of the situational characteristics of the event. This research provides support for the argument in favor of including open-source data in jail suicide research as it illustrates how open-source data can be used to provide additional information not originally found in official data. In sum, this research is vital in terms of possible suicide prevention, which may be directly linked to being able to manipulate environmental factors.
The HYPE Open Source Community
NASA Astrophysics Data System (ADS)
Strömbäck, L.; Pers, C.; Isberg, K.; Nyström, K.; Arheimer, B.
2013-12-01
The Hydrological Predictions for the Environment (HYPE) model is a dynamic, semi-distributed, process-based, integrated catchment model. It uses well-known hydrological and nutrient transport concepts and can be applied for both small- and large-scale assessments of water resources and status. In the model, the landscape is divided into classes according to soil type, vegetation and altitude. The soil representation is stratified and can be divided into up to three layers. Water and substances are routed through the same flow paths and storages (snow, soil, groundwater, streams, rivers, lakes), considering turnover and transformation on the way towards the sea. HYPE has been successfully used in many hydrological applications at SMHI. For Europe, we currently have three different models: S-HYPE for Sweden, BALT-HYPE for the Baltic Sea, and E-HYPE for the whole of Europe. These models simulate hydrological conditions and nutrients for their respective areas and are used for characterization, forecasts, and scenario analyses. Model data can be downloaded from hypeweb.smhi.se. In addition, we provide models for the Arctic region, the Arab (Middle East and Northern Africa) region, India, the Niger River basin, and the La Plata Basin. This demonstrates the applicability of the HYPE model for large-scale modeling in different regions of the world. An important goal of our work is to make our data and tools available as open data and services. To this end we created the HYPE Open Source Community (OSC), which makes the source code of HYPE available to anyone interested in its further development. The HYPE OSC (hype.sourceforge.net) is an open source initiative under the Lesser GNU Public License taken by SMHI to strengthen international collaboration in hydrological modeling and hydrological data production. The hypothesis is that more brains and more testing will result in better models and better code. The code is transparent and can be changed and learnt from. New versions of the main code are delivered frequently. HYPE OSC is open to everyone interested in hydrology, hydrological modeling and code development - e.g. scientists, authorities, and consultancies. By joining the HYPE OSC you get access to a state-of-the-art operational hydrological model. The HYPE source code is designed to efficiently handle large-scale modeling for forecast, hindcast and climate applications. The code is under constant development to improve the hydrological processes, efficiency and readability. At the beginning of 2013 we released a version with new and better modularization based on hydrological processes, which makes the code easier for a new user to understand and further develop. An important challenge in this process is to produce code that is easy for anyone to understand and work with, while maintaining the properties that make it efficient enough for large-scale applications. Input from the HYPE Open Source Community is an important source of future improvements to the HYPE model. By joining the community you become an active part of the development, get access to the latest features and can influence future versions of the model.
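At its simplest, a conceptual catchment model of this family chains storages whose outflow is proportional to their content. The toy single-bucket sketch below is a didactic stand-in under that assumption, not HYPE's actual multi-layer process descriptions:

```python
def linear_reservoir(precip, k=0.2, et=1.0, s0=10.0):
    """Toy daily water balance with one linear storage.

    dS/dt = P - ET - Q with outflow Q = k * S. A didactic stand-in
    for the chained, stratified storages HYPE actually simulates.
    precip: daily precipitation (mm); et: constant daily ET (mm).
    """
    s, flows = s0, []
    for p in precip:
        q = k * s                       # storage-proportional outflow
        s = max(s + p - et - q, 0.0)    # update storage, no negatives
        flows.append(q)
    return flows

print(linear_reservoir([0, 12, 5, 0, 0, 20, 3], k=0.25))
```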
Kochanska, Grazyna; Kim, Sanghag; Nordling, Jamie Koenig
2013-01-01
The need for research on potential moderators of personality–parenting links has been repeatedly emphasized, yet few studies have examined how varying stressful or challenging circumstances may influence such links. We studied 186 diverse, low-income mother–toddler dyads. Mothers described themselves in terms of Big Five traits, were observed in lengthy interactions with their children, and provided parenting reports. Ecological adversity, assessed as a cumulative index of known risk factors, and the child’s difficulty observed as negative affect and defiance in interactions with mothers were posited as sources of parenting challenge. Mothers high in Neuroticism reported more power assertion. Some personality–parenting relations emerged only under challenging conditions. For mothers raising difficult children, higher Extraversion was linked to increased observed power assertion, but higher Conscientiousness was linked to decreased reported power assertion. There were no such relations for mothers of easy children. By contrast, some relations emerged only in the absence of challenge. Agreeableness was associated with more positive parenting for mothers who lived under conditions of low ecological adversity, and with less reported power for those who had easy children, and Openness was linked to more positive parenting for mothers of easy children. Those traits were unrelated to parenting under challenging conditions. PMID:23066882
Intellectual Property, Digital Technology and the Developing World
NASA Astrophysics Data System (ADS)
Pupillo, Lorenzo Maria
This chapter provides an overview of how the converging ICTs are challenging the traditional off-line copyright doctrine and suggests how developing countries should approach issues such as copyright in the digital world, software (Protection, Open Source, Reverse Engineering), and data base protection. The balance of the chapter is organized into three sections. After the introduction, the second section explains how digital technology is dramatically changing the entertainment industry, what are the major challenges to the industry, and what are the approaches that the economic literature suggest to face the structural changes that the digital revolution is bringing forward. Starting from the assumption that IPRs frameworks need to be customized to the countries’ development needs, the third section makes recommendations on how developing countries should use copyright to support access to information and to creative industries.
More than Meets the Eye - Infrared Cameras in Open-Ended University Thermodynamics Labs
NASA Astrophysics Data System (ADS)
Melander, Emil; Haglund, Jesper; Weiszflog, Matthias; Andersson, Staffan
2016-12-01
Educational research has found that students have challenges understanding thermal science. Undergraduate physics students have difficulties differentiating basic thermal concepts, such as heat, temperature, and internal energy. Engineering students have been found to have difficulties grasping surface emissivity as a thermal material property. One potential source of students' challenges with thermal science is the lack of opportunity to visualize energy transfer in intuitive ways with traditional measurement equipment. Thermodynamics laboratories have typically depended on point measures of temperature by use of thermometers (detecting heat conduction) or pyrometers (detecting heat radiation). In contrast, thermal imaging by means of an infrared (IR) camera provides a real-time, holistic image. Here we provide some background on IR cameras and their uses in education, and summarize five qualitative investigations that we have used in our courses.
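The emissivity concept highlighted above enters through the Stefan-Boltzmann law, P = εσAT⁴, which also shows why a low-emissivity surface confounds naive IR camera readings; a one-function Python sketch with illustrative surface values:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m-2 K-4

def radiated_power(emissivity, area_m2, T_kelvin):
    """Thermal radiation from a grey surface: P = eps * sigma * A * T^4."""
    return emissivity * SIGMA * area_m2 * T_kelvin**4

# A 1 m^2 surface at 300 K: polished metal vs. matte paint.
# An IR camera assuming eps ~ 0.95 would badly misjudge the metal.
print(radiated_power(0.05, 1.0, 300.0))  # ~23 W  (low-emissivity metal)
print(radiated_power(0.95, 1.0, 300.0))  # ~436 W (high-emissivity paint)
```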
Privacy preserving processing of genomic data: A survey.
Akgün, Mete; Bayrak, A Osman; Ozer, Bugra; Sağıroğlu, M Şamil
2015-08-01
Recently, the rapid advance in genome sequencing technology has led to production of huge amounts of sensitive genomic data. However, a serious privacy challenge arises with the increasing number of genetic tests, as genomic data is the ultimate source of identity for humans. Lately, privacy threats and possible solutions regarding undesired access to genomic data have been discussed; however, it is challenging to apply the proposed solutions to real-life problems due to the complex nature of security definitions. In this review, we have categorized pre-existing problems and corresponding solutions in a more understandable and convenient way. Additionally, we have also included the open privacy problems that come with each genomic data processing procedure. We believe our classification of genome-associated privacy problems will pave the way for linking real-life problems with previously proposed methods. Copyright © 2015 Elsevier Inc. All rights reserved.
Open source tools and toolkits for bioinformatics: significance, and where are we?
Stajich, Jason E; Lapp, Hilmar
2006-09-01
This review summarizes important work in open-source bioinformatics software that has occurred over the past couple of years. The survey is intended to illustrate how programs and toolkits whose source code has been developed or released under an Open Source license have changed informatics-heavy areas of life science research. Rather than creating a comprehensive list of all tools developed over the last 2-3 years, we use a few selected projects encompassing toolkit libraries, analysis tools, data analysis environments and interoperability standards to show how freely available and modifiable open-source software can serve as the foundation for building important applications, analysis workflows and resources.
Open Source 2010: Reflections on 2007
ERIC Educational Resources Information Center
Wheeler, Brad
2007-01-01
Colleges and universities and commercial firms have demonstrated great progress in realizing the vision proffered for "Open Source 2007," and 2010 will mark even greater progress. Although much work remains in refining open source for higher education applications, the signals are now clear: the collaborative development of software can provide…
Development and Use of an Open-Source, User-Friendly Package to Simulate Voltammetry Experiments
ERIC Educational Resources Information Center
Wang, Shuo; Wang, Jing; Gao, Yanjing
2017-01-01
An open-source electrochemistry simulation package has been developed that simulates the electrode processes of four reaction mechanisms and two typical electroanalysis techniques: cyclic voltammetry and chronoamperometry. Unlike other open-source simulation software, this package balances the features with ease of learning and implementation and…
Creating Open Source Conversation
ERIC Educational Resources Information Center
Sheehan, Kate
2009-01-01
Darien Library, where the author serves as head of knowledge and learning services, launched a new website on September 1, 2008. The website is built with Drupal, an open source content management system (CMS). In this article, the author describes how she and her colleagues overhauled the library's website to provide an open source content…
Integrating an Automatic Judge into an Open Source LMS
ERIC Educational Resources Information Center
Georgouli, Katerina; Guerreiro, Pedro
2011-01-01
This paper presents the successful integration of the evaluation engine of Mooshak into the open source learning management system Claroline. Mooshak is an open source online automatic judge that has been used for international and national programming competitions. Although it was originally designed for programming competitions, Mooshak has also…
76 FR 75875 - Defense Federal Acquisition Regulation Supplement; Open Source Software Public Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-05
... Regulation Supplement; Open Source Software Public Meeting AGENCY: Defense Acquisition Regulations System... initiate a dialogue with industry regarding the use of open source software in DoD contracts. DATES: Public... be held in the General Services Administration (GSA), Central Office Auditorium, 1800 F Street NW...
The open-source movement: an introduction for forestry professionals
Patrick Proctor; Paul C. Van Deusen; Linda S. Heath; Jeffrey H. Gove
2005-01-01
In recent years, the open-source movement has yielded a generous and powerful suite of software and utilities that rivals those developed by many commercial software companies. Open-source programs are available for many scientific needs: operating systems, databases, statistical analysis, Geographic Information System applications, and object-oriented programming....
Open Source Software Development and Lotka's Law: Bibliometric Patterns in Programming.
ERIC Educational Resources Information Center
Newby, Gregory B.; Greenberg, Jane; Jones, Paul
2003-01-01
Applies Lotka's Law to metadata on open source software development. Authoring patterns found in software development productivity are found to be comparable to prior studies of Lotka's Law for scientific and scholarly publishing, and offer promise in predicting aggregate behavior of open source developers. (Author/LRW)
Conceptualization and validation of an open-source closed-loop deep brain stimulation system in rat.
Wu, Hemmings; Ghekiere, Hartwin; Beeckmans, Dorien; Tambuyzer, Tim; van Kuyck, Kris; Aerts, Jean-Marie; Nuttin, Bart
2015-04-21
Conventional deep brain stimulation (DBS) applies constant electrical stimulation to specific brain regions to treat neurological disorders. Closed-loop DBS with real-time feedback has gained attention in recent years after proving clinically more effective than conventional DBS for controlling pathological symptoms. Here we demonstrate the conceptualization and validation of a closed-loop DBS system built from open-source hardware. We used hippocampal theta oscillations as system input, and electrical stimulation in the mesencephalic reticular formation (mRt) as controller output. It is well documented that hippocampal theta oscillations are highly related to locomotion, while electrical stimulation in the mRt induces freezing. We used an Arduino open-source microcontroller between the input and output sources, which allowed us to use hippocampal local field potentials (LFPs) to steer electrical stimulation in the mRt. Our results showed that closed-loop DBS significantly suppressed locomotion compared to no stimulation, and required on average only 56% of the stimulation used in open-loop DBS to reach similar effects. The main advantages of open-source hardware include wide selection and availability, high customizability, and affordability. Our open-source closed-loop DBS system is effective, and warrants further research using open-source hardware for closed-loop neuromodulation.
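The closed-loop logic itself is compact enough to sketch. Below is a minimal, hypothetical illustration in Python of the threshold-gated control described above (illustrative only: the published system ran on an Arduino, and the sampling rate, theta band, and threshold here are assumed placeholders, not the authors' parameters):

```python
# Sketch of threshold-gated closed-loop control: estimate theta-band
# power in a sliding LFP window and gate stimulation accordingly.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000            # sampling rate in Hz (assumed)
THETA_BAND = (4, 8)  # approximate rodent theta band in Hz (assumed)
THRESHOLD = 50.0     # power threshold, arbitrary units (must be tuned)

b, a = butter(4, [f / (FS / 2) for f in THETA_BAND], btype="band")

def stimulate(lfp_window: np.ndarray) -> bool:
    """Return True when theta power in the window exceeds the threshold."""
    theta = filtfilt(b, a, lfp_window)
    return float(np.mean(theta ** 2)) > THRESHOLD

# Example: one 1-second window streamed from an acquisition loop
window = np.random.default_rng(0).normal(size=FS)
print("stim on" if stimulate(window) else "stim off")
```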
ERIC Educational Resources Information Center
Guhlin, Miguel
2007-01-01
A switch to free open source software can minimize cost and allow funding to be diverted to equipment and other programs. For instance, the OpenOffice suite is an alternative to expensive basic application programs offered by major vendors. Many such programs on the market offer features seldom used in education but for which educators must pay.…
A New Framework for Massive Open Online Courses (MOOCs)
ERIC Educational Resources Information Center
Schoenack, Lindsie
2013-01-01
The challenges that massive open online courses (MOOCs) bring to the learning arena spur adult educators to improve delivery. A framework for a new type of MOOC is presented to address some of the challenges presented by earlier models. This new MOOC, called a mesoMOOC, can bridge several challenges that hinder current effective delivery of MOOCs…
Transforming Education Research Through Open Video Data Sharing.
Gilmore, Rick O; Adolph, Karen E; Millman, David S; Gordon, Andrew
2016-01-01
Open data sharing promises to accelerate the pace of discovery in the developmental and learning sciences, but significant technical, policy, and cultural barriers have limited its adoption. As a result, most research on learning and development remains shrouded in a culture of isolation. Data sharing is the rare exception (Gilmore, 2016). Many researchers who study teaching and learning in classroom, laboratory, museum, and home contexts use video as a primary source of raw research data. Unlike other measures, video captures the complexity, richness, and diversity of behavior. Moreover, because video is self-documenting, it presents significant potential for reuse. However, the potential for reuse goes largely unrealized because videos are rarely shared. Research videos contain information about participants' identities, making the materials challenging to share. The large size of video files, diversity of formats, and incompatible software tools pose technical challenges. The Databrary (databrary.org) digital library enables researchers who study learning and development to store, share, stream, and annotate videos. In this article, we describe how Databrary has overcome barriers to sharing research videos and associated data and metadata. Databrary has developed solutions for respecting participants' privacy; for storing, streaming, and sharing videos; and for managing videos and associated metadata. The Databrary experience suggests ways that videos and other identifiable data collected in the context of educational research might be shared. Open data sharing enabled by Databrary can serve as a catalyst for a truly multidisciplinary science of learning.
Tuti, Timothy; Bitok, Michael; Paton, Chris; Makone, Boniface; Malla, Lucas; Muinga, Naomi; Gathara, David; English, Mike
2016-01-01
Objective: To share approaches and innovations adopted to deliver a relatively inexpensive clinical data management (CDM) framework within a low-income setting that aims to deliver quality pediatric data useful for supporting research, strengthening the information culture and informing improvement efforts in local clinical practice. Materials and methods: The authors implemented a CDM framework to support a Clinical Information Network (CIN) using Research Electronic Data Capture (REDCap), a noncommercial software solution designed for rapid development and deployment of electronic data capture tools. It was used for collection of standardized data from case records of multiple hospitals’ pediatric wards. R, an open-source statistical language, was used for data quality enhancement, analysis, and report generation for the hospitals. Results: In the first year of CIN, the authors developed innovative solutions to support the implementation of a secure, rapid pediatric data collection system spanning 14 hospital sites with stringent data quality checks. Data have been collated on over 37 000 admission episodes, with considerable improvement in clinical documentation of admissions observed. Using meta-programming techniques in R, coupled with the branching logic, randomization, data lookup, and Application Programming Interface (API) features offered by REDCap, CDM tasks were configured and automated to ensure quality data were delivered for clinical improvement and research use. Conclusion: A low-cost, clinically focused but geographically dispersed quality CDM framework in a long-term, multi-site, real-world context can be achieved and sustained, and challenges can be overcome through thoughtful design and implementation of open-source tools for handling data and supporting research. PMID:26063746
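Record capture in such a framework rests on REDCap's HTTP API. A minimal sketch of a record export in Python, under stated assumptions (the endpoint URL, token, and field names are hypothetical placeholders; the CIN team performed its downstream processing in R):

```python
# Sketch: export records from a REDCap project over its HTTP API.
import requests

REDCAP_URL = "https://redcap.example.org/api/"  # hypothetical endpoint
API_TOKEN = "0123456789ABCDEF"                  # hypothetical token

payload = {
    "token": API_TOKEN,
    "content": "record",   # export records
    "format": "json",      # return JSON
    "type": "flat",        # one row per record
    "fields": "record_id,admission_date,diagnosis",  # hypothetical fields
}

response = requests.post(REDCAP_URL, data=payload, timeout=30)
response.raise_for_status()
records = response.json()
print(f"fetched {len(records)} admission records")
```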
Open-source mobile digital platform for clinical trial data collection in low-resource settings.
van Dam, Joris; Omondi Onyango, Kevin; Midamba, Brian; Groosman, Nele; Hooper, Norman; Spector, Jonathan; Pillai, Goonaseelan Colin; Ogutu, Bernhards
2017-02-01
Governments, universities and pan-African research networks are building durable infrastructure and capabilities for biomedical research in Africa. This offers the opportunity to adopt from the outset innovative approaches and technologies that would be challenging to retrofit into fully established research infrastructures such as those regularly found in high-income countries. In this context we piloted the use of a novel mobile digital health platform, designed specifically for low-resource environments, to support high-quality data collection in a clinical research study. Our primary aim was to assess the feasibility of using a mobile digital platform for clinical trial data collection in a low-resource setting. Secondarily, we sought to explore the potential benefits of such an approach. The investigative site was a research institute in Nairobi, Kenya. We integrated an open-source platform for mobile data collection commonly used in the developing world with an open-source, standard platform for electronic data capture in clinical trials. The integration was developed using common data standards (Clinical Data Interchange Standards Consortium (CDISC) Operational Data Model), maximising the potential to extend the approach to other platforms. The system was deployed in a pharmacokinetic study involving healthy human volunteers. The electronic data collection platform successfully supported conduct of the study. Multidisciplinary users reported high levels of satisfaction with the mobile application and highlighted substantial advantages when compared with traditional paper record systems. The new system also demonstrated a potential for expediting data quality review. This pilot study demonstrated the feasibility of using a mobile digital platform for clinical research data collection in low-resource settings. Sustainable scientific capabilities and infrastructure are essential to attract and support clinical research studies. Since many research structures in Africa are being developed anew, stakeholders should consider implementing innovative technologies and approaches.
SubductionGenerator: A program to build three-dimensional plate configurations
NASA Astrophysics Data System (ADS)
Jadamec, M. A.; Kreylos, O.; Billen, M. I.; Turcotte, D. L.; Knepley, M.
2016-12-01
Geologic, geochemical, and geophysical data from subduction zones indicate that a two-dimensional paradigm for plate tectonic boundaries is no longer adequate to explain the observations. Many open source software packages exist to simulate the viscous flow of the Earth, such as the dynamics of subduction. However, there are few open source programs that generate the three-dimensional model input. We present an open source software program, SubductionGenerator, that constructs the three-dimensional initial thermal structure and plate boundary structure. A 3D model mesh and tectonic configuration are constructed based on a user-specified model domain, slab surface, seafloor age grid file, and shear zone surface. The initial 3D thermal structure for the plates and mantle within the model domain is then constructed using a series of libraries within the code that use a half-space cooling model, plate cooling model, and smoothing functions. The code maps the initial 3D thermal structure and the 3D plate interface onto the mesh nodes using a series of libraries, including a k-d tree to increase efficiency. In this way, complicated geometries and multiple plates with variable thickness can be built onto a multi-resolution finite element mesh with a 3D thermal structure and 3D isotropic shear zones oriented at any angle with respect to the grid. SubductionGenerator is aimed at model set-ups more representative of the Earth, which can be particularly challenging to construct. Examples include subduction zones where the physical attributes vary in space, such as slab dip and temperature, and overriding plate temperature and thickness. Thus, the program can be used to construct initial tectonic configurations for triple junctions and plate boundary corners.
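The half-space cooling model mentioned above is compact enough to sketch. A generic Python illustration with typical parameter values (this is the textbook formula, not SubductionGenerator's actual code):

```python
# Half-space cooling: T(z, t) = Ts + (Tm - Ts) * erf(z / (2 * sqrt(kappa * t)))
import numpy as np
from scipy.special import erf

T_SURFACE = 273.0  # surface temperature in K (assumed)
T_MANTLE = 1623.0  # mantle temperature in K (assumed)
KAPPA = 1e-6       # thermal diffusivity in m^2/s (typical value)

def halfspace_temperature(depth_m: float, age_myr: float) -> float:
    """Temperature at depth (m) for oceanic lithosphere of a given age (Myr)."""
    t_sec = age_myr * 3.15576e13  # Myr -> seconds
    return T_SURFACE + (T_MANTLE - T_SURFACE) * erf(
        depth_m / (2.0 * np.sqrt(KAPPA * t_sec))
    )

# e.g. temperature 50 km deep in 60-Myr-old oceanic lithosphere
print(f"{halfspace_temperature(50e3, 60.0):.0f} K")
```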
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bevins, N; Vanderhoek, M; Lang, S
2014-06-15
Purpose: Medical display monitor calibration and quality control present challenges to medical physicists. The purpose of this work is to demonstrate and share experiences with an open source package that allows for both initial monitor setup and routine performance evaluation. Methods: A software package, pacsDisplay, has been developed over the last decade to aid in the calibration of all monitors within the radiology group in our health system. The software is used to calibrate monitors to follow the DICOM Grayscale Standard Display Function (GSDF) via lookup tables installed on the workstation. Additional functionality facilitates periodic evaluations of both primary and secondary medical monitors to ensure satisfactory performance. This software is installed on all radiology workstations, and can also be run as a stand-alone tool from a USB disk. Recently, a database has been developed to store and centralize the monitor performance data and to provide long-term trends for compliance with internal standards and various accrediting organizations. Results: Implementation and utilization of pacsDisplay has resulted in improved monitor performance across the health system. Monitor testing is now performed at regular intervals and the software is being used across multiple imaging modalities. Monitor performance characteristics such as maximum and minimum luminance, ambient luminance and illuminance, color tracking, and GSDF conformity are loaded into a centralized database for system performance comparisons. Compliance reports for organizations such as MQSA, ACR, and TJC are generated automatically and stored in the same database. Conclusion: An open source software solution has simplified and improved the standardization of displays within our health system. This work serves as an example method for calibrating and testing monitors within an enterprise health system.
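The GSDF at the heart of such calibration maps a just-noticeable-difference (JND) index to luminance. A Python sketch of the function with coefficients as transcribed from DICOM PS3.14 (an illustration, not pacsDisplay's code; verify the coefficients against the standard before any clinical use):

```python
# DICOM Grayscale Standard Display Function: luminance vs. JND index.
import math

A, B, C = -1.3011877, -2.5840191e-2, 8.0242636e-2
D, E, F = -1.0320229e-1, 1.3646699e-1, 2.8745620e-2
G, H, K = -2.5468404e-2, -3.1978977e-3, 1.2992634e-4
M = 1.3635334e-3

def gsdf_luminance(j: float) -> float:
    """Luminance in cd/m^2 at JND index j (valid for 1 <= j <= 1023)."""
    u = math.log(j)
    num = A + C * u + E * u**2 + G * u**3 + M * u**4
    den = 1 + B * u + D * u**2 + F * u**3 + H * u**4 + K * u**5
    return 10.0 ** (num / den)

# A GSDF-conformant display spaces its gray levels so that each step
# covers an equal number of JNDs between min and max luminance.
print(f"L(1) = {gsdf_luminance(1):.4f} cd/m^2")
print(f"L(1023) = {gsdf_luminance(1023):.0f} cd/m^2")
```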
DataMed - an open source discovery index for finding biomedical datasets.
Chen, Xiaoling; Gururaj, Anupama E; Ozyurt, Burak; Liu, Ruiling; Soysal, Ergin; Cohen, Trevor; Tiryaki, Firat; Li, Yueling; Zong, Nansu; Jiang, Min; Rogith, Deevakar; Salimi, Mandana; Kim, Hyeon-Eui; Rocca-Serra, Philippe; Gonzalez-Beltran, Alejandra; Farcas, Claudiu; Johnson, Todd; Margolis, Ron; Alter, George; Sansone, Susanna-Assunta; Fore, Ian M; Ohno-Machado, Lucila; Grethe, Jeffrey S; Xu, Hua
2018-01-13
Finding relevant datasets is important for promoting data reuse in the biomedical domain, but it is challenging given the volume and complexity of biomedical data. Here we describe the development of an open source biomedical data discovery system called DataMed, with the goal of promoting the building of additional data indexes in the biomedical domain. DataMed, which can efficiently index and search diverse types of biomedical datasets across repositories, is developed through the National Institutes of Health-funded biomedical and healthCAre Data Discovery Index Ecosystem (bioCADDIE) consortium. It consists of 2 main components: (1) a data ingestion pipeline that collects and transforms original metadata information to a unified metadata model, called DatA Tag Suite (DATS), and (2) a search engine that finds relevant datasets based on user-entered queries. In addition to describing its architecture and techniques, we evaluated individual components within DataMed, including the accuracy of the ingestion pipeline, the prevalence of the DATS model across repositories, and the overall performance of the dataset retrieval engine. Our manual review shows that the ingestion pipeline could achieve an accuracy of 90% and core elements of DATS had varied frequency across repositories. On a manually curated benchmark dataset, the DataMed search engine achieved an inferred average precision of 0.2033 and a precision at 10 (P@10, the proportion of relevant results among the top 10 search results) of 0.6022, by implementing advanced natural language processing and terminology services. Currently, we have made the DataMed system publicly available as an open source package for the biomedical community. © The Author 2018. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
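For reference, the P@10 metric quoted above is simple to compute. A generic sketch (not bioCADDIE's scoring code; the ranking and relevance judgments are made up):

```python
# Precision at k: fraction of the top-k retrieved datasets that are relevant.
def precision_at_k(ranked_ids, relevant_ids, k=10):
    top_k = ranked_ids[:k]
    return sum(1 for doc_id in top_k if doc_id in relevant_ids) / k

# Hypothetical query where 6 of the top 10 results are relevant -> 0.6
ranked = [f"ds{i}" for i in range(1, 11)]
relevant = {"ds1", "ds2", "ds4", "ds6", "ds7", "ds9"}
print(precision_at_k(ranked, relevant))
```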
Image Subtraction Reduction of Open Clusters M35 & NGC 2158 in the K2 Campaign 0 Super Stamps
NASA Astrophysics Data System (ADS)
Soares-Furtado, M.; Hartman, J. D.; Bakos, G. Á.; Huang, C. X.; Penev, K.; Bhatti, W.
2017-04-01
We observed the open clusters M35 and NGC 2158 during the initial K2 campaign (C0). Reducing these data to high-precision photometric timeseries is challenging due to the wide point-spread function (PSF) and the blending of stellar light in such dense regions. We developed an image-subtraction-based K2 reduction pipeline that is applicable to both crowded and sparse stellar fields. We applied our pipeline to the data-rich C0 K2 super stamp, containing the two open clusters, as well as to the neighboring postage stamps. In this paper, we present our image subtraction reduction pipeline and demonstrate that this technique achieves ultra-high photometric precision for sources in the C0 super stamp. We extract the raw light curves of 3960 stars taken from the UCAC4 and EPIC catalogs and de-trend them for systematic effects. We compare our photometric results with the prior reductions published in the literature. For de-trended, TFA-corrected sources in the 12–12.25 Kp magnitude range, we achieve a best 6.5-hour window running rms of 35 ppm, falling to 100 ppm for fainter stars in the 14–14.25 Kp magnitude range. For stars with Kp > 14, our de-trended and 6.5-hour binned light curves achieve the highest photometric precision. Moreover, all our TFA-corrected sources have higher precision on all timescales investigated. This work represents the first published image subtraction analysis of a K2 super stamp. This method will be particularly useful for analyzing the Galactic bulge observations carried out during K2 campaign 9. The raw light curves and the final results of our de-trending processes are publicly available at http://k2.hatsurveys.org/archive/.
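The precision statistic quoted, the rms of the de-trended light curve inside a sliding 6.5-hour window, can be sketched generically (an illustration of the statistic, not the pipeline's implementation; the cadence is the nominal K2 long-cadence spacing):

```python
# 6.5-hour running rms of a de-trended relative flux series (in ppm).
import numpy as np

CADENCE_MIN = 29.4                      # K2 long cadence, minutes
WINDOW = round(6.5 * 60 / CADENCE_MIN)  # points per 6.5-hour window

def running_rms(flux_ppm: np.ndarray) -> np.ndarray:
    """rms inside each sliding window along the light curve."""
    return np.array([
        np.sqrt(np.mean(flux_ppm[i:i + WINDOW] ** 2))
        for i in range(len(flux_ppm) - WINDOW + 1)
    ])

# Synthetic light curve with 100-ppm white noise
lc = np.random.default_rng(1).normal(scale=100.0, size=3000)
print(f"best window rms: {running_rms(lc).min():.0f} ppm")
```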
Sustaining Open Source Communities through Hackathons - An Example from the ASPECT Community
NASA Astrophysics Data System (ADS)
Heister, T.; Hwang, L.; Bangerth, W.; Kellogg, L. H.
2016-12-01
The ecosystem surrounding a successful scientific open source software package combines both social and technical aspects. Much thought has been given to the technology side of writing sustainable software for large infrastructure projects and software libraries, but less to building the human capacity to perpetuate scientific software used in computational modeling. One effective format for building capacity is regular multi-day hackathons. Scientific hackathons bring together a group of science domain users and scientific software contributors to make progress on a specific software package. Innovation comes through the chance to work with established and new collaborations. Especially in the domain sciences with small communities, hackathons give geographically distributed scientists an opportunity to connect face-to-face. They foster lively discussions amongst scientists with different expertise, promote new collaborations, and increase transparency in both the technical and scientific aspects of code development. ASPECT is an open-source, parallel, extensible finite-element code for simulating thermal convection that began development in 2011 under the Computational Infrastructure for Geodynamics. ASPECT hackathons for the past 3 years have grown the number of authors to >50, training new code maintainers in the process. Hackathons begin with leaders establishing project-specific conventions for development, demonstrating the workflow for code contributions, and reviewing relevant technical skills. Each hackathon expands the developer community. Over 20 scientists add >6,000 lines of code during the >1 week event. Participants grow comfortable contributing to the repository and over half continue to contribute afterwards. A high return rate of participants ensures continuity and stability of the group as well as mentoring for novice members. We hope to build other software communities on this model, but anticipate that each will bring its own unique challenges.
NASA Astrophysics Data System (ADS)
Trindade, B. C.; Reed, P. M.
2017-12-01
Growing access to computing power at reduced cost in recent years has promoted the rapid development and application of multi-objective water supply portfolio planning. As this trend continues, there is a pressing need for flexible risk-based simulation frameworks and improved algorithm benchmarking for emerging classes of water supply planning and management problems. This work contributes the Water Utilities Management and Planning (WUMP) model: a generalizable and open source simulation framework designed to capture how water utilities can minimize operational and financial risks by regionally coordinating planning and management choices, i.e., making more efficient and coordinated use of restrictions, water transfers and financial hedging combined with possible construction of new infrastructure. We introduce the WUMP simulation framework as part of a new multi-objective benchmark problem for the planning and management of regionally integrated water utility companies. In this problem, a group of fictitious water utilities seek to balance the use of these reliability-driven actions (e.g., restrictions, water transfers and infrastructure pathways) against their inherent financial risks. Several traits make this an ideal benchmark problem, namely the presence of (1) strong non-linearities and discontinuities in the Pareto front caused by the step-wise nature of the decision-making formulation and by the abrupt addition of storage through infrastructure construction, (2) noise due to the stochastic nature of the streamflows and water demands, and (3) non-separability resulting from the cooperative formulation of the problem, in which decisions made by one stakeholder may substantially impact others. Both the open source WUMP simulation framework and its demonstration in a challenging benchmarking example hold value for promoting broader advances in urban water supply portfolio planning for regions confronting change.
Coalescent: an open-source and scalable framework for exact calculations in coalescent theory
2012-01-01
Background: Currently, there is no open-source, cross-platform and scalable framework for coalescent analysis in population genetics, nor a scalable GUI-based user application. Such a framework and application would not only drive the creation of more complex and realistic models but also make them truly accessible. Results: As a first attempt, we built a framework and user application for the domain of exact calculations in coalescent analysis. The framework provides an API with the concepts of model, data, statistic, phylogeny, gene tree and recursion. Infinite-alleles and infinite-sites models are considered. It defines pluggable computations such as counting and listing all the ancestral configurations and genealogies and computing the exact probability of data. It can visualize a gene tree, trace and visualize the internals of the recursion algorithm for further improvement, and dynamically attach a number of output processors. The user application defines jobs in a plug-in-like manner so that they can be activated, deactivated, installed or uninstalled on demand. Multiple jobs can be run and their inputs edited. Job inputs are persisted across restarts and running jobs can be cancelled where applicable. Conclusions: Coalescent theory plays an increasingly important role in analysing molecular population genetic data. The models involved are mathematically difficult and computationally challenging. An open-source, scalable framework that lets users immediately take advantage of the progress made by others will enable exploration of yet more difficult and realistic models. As models become more complex and mathematically less tractable, the need for an integrated computational approach is obvious. Object-oriented designs, though they carry upfront costs, are practical now and can provide such an integrated approach. PMID:23033878
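As a concrete instance of the exact calculations the framework targets, the infinite-alleles model admits a closed form, the Ewens sampling formula. A standalone Python sketch (an illustration of the mathematics, not the framework's API):

```python
# Ewens sampling formula: exact probability of an allele configuration
# under the infinite-alleles model with mutation parameter theta.
from math import factorial

def ewens_probability(allele_counts: dict, theta: float) -> float:
    """allele_counts[j] = number of allele types seen in exactly j copies."""
    n = sum(j * a for j, a in allele_counts.items())  # sample size
    rising = 1.0
    for i in range(n):  # theta * (theta + 1) * ... * (theta + n - 1)
        rising *= theta + i
    prob = factorial(n) / rising
    for j, a in allele_counts.items():
        prob *= theta**a / (j**a * factorial(a))
    return prob

# Sample of n = 4 genes: two singleton alleles plus one allele seen twice
print(ewens_probability({1: 2, 2: 1}, theta=1.0))  # 0.25
```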
Digital data collection in paleoanthropology.
Reed, Denné; Barr, W Andrew; Mcpherron, Shannon P; Bobe, René; Geraads, Denis; Wynn, Jonathan G; Alemseged, Zeresenay
2015-01-01
Understanding patterns of human evolution across space and time requires synthesizing data collected by independent research teams, and this effort is part of a larger trend to develop cyber infrastructure and e-science initiatives. At present, paleoanthropology cannot easily answer basic questions about the total number of fossils and artifacts that have been discovered, or exactly how those items were collected. In this paper, we examine the methodological challenges to data integration, with the hope that mitigating the technical obstacles will further promote data sharing. At a minimum, data integration efforts must document what data exist and how the data were collected (discovery), after which we can begin standardizing data collection practices with the aim of achieving combined analyses (synthesis). This paper outlines a digital data collection system for paleoanthropology. We review the relevant data management principles for a general audience and supplement this with technical details drawn from over 15 years of paleontological and archeological field experience in Africa and Europe. The system outlined here emphasizes free open-source software (FOSS) solutions that work on multiple computer platforms; it builds on recent advances in open-source geospatial software and mobile computing. © 2015 Wiley Periodicals, Inc.
Gerhard, Stephan; Daducci, Alessandro; Lemkaddem, Alia; Meuli, Reto; Thiran, Jean-Philippe; Hagmann, Patric
2011-01-01
Advanced neuroinformatics tools are required for methods of connectome mapping, analysis, and visualization. The inherent multi-modality of connectome datasets poses new challenges for data organization, integration, and sharing. We have designed and implemented the Connectome Viewer Toolkit - a set of free and extensible open source neuroimaging tools written in Python. The key components of the toolkit are as follows: (1) The Connectome File Format is an XML-based container format to standardize multi-modal data integration and structured metadata annotation. (2) The Connectome File Format Library enables management and sharing of connectome files. (3) The Connectome Viewer is an integrated research and development environment for visualization and analysis of multi-modal connectome data. The Connectome Viewer's plugin architecture supports extensions with network analysis packages and an interactive scripting shell, to enable easy development and community contributions. Integration with tools from the scientific Python community allows the leveraging of numerous existing libraries for powerful connectome data mining, exploration, and comparison. We demonstrate the applicability of the Connectome Viewer Toolkit using Diffusion MRI datasets processed by the Connectome Mapper. The Connectome Viewer Toolkit is available from http://www.cmtk.org/
NASA Astrophysics Data System (ADS)
Gunawardena, N.; Pardyjak, E. R.; Stoll, R.; Khadka, A.
2018-02-01
Over the last decade there has been a proliferation of low-cost sensor networks that enable highly distributed sensor deployments in environmental applications. The technology is easily accessible and rapidly advancing due to the use of open-source microcontrollers. While this trend is extremely exciting, and the technology provides unprecedented spatial coverage, these sensors and associated microcontroller systems have not been well evaluated in the literature. Given the large number of new deployments and proposed research efforts using these technologies, it is necessary to quantify the overall instrument and microcontroller performance for specific applications. In this paper, an Arduino-based weather station system is presented in detail. These low-cost energy-budget measurement stations, or LEMS, have now been deployed for continuous measurements as part of several different field campaigns, which are described herein. The LEMS are low-cost, flexible, and simple to maintain. In addition to presenting the technical details of the LEMS, its errors are quantified in laboratory and field settings. A simple artificial neural network-based radiation-error correction scheme is also presented. Finally, challenges and possible improvements to microcontroller-based atmospheric sensing systems are discussed.
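The flavor of such a radiation-error correction can be conveyed with a small regression example. The sketch below trains on synthetic data and uses an assumed feature set (shortwave radiation and wind speed); it is not the authors' trained network:

```python
# Learn the solar-heating bias of a low-cost temperature sensor from
# radiation and wind speed, then subtract the predicted bias.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n = 2000
solar = rng.uniform(0, 1000, n)        # shortwave radiation, W/m^2
wind = rng.uniform(0.1, 10, n)         # wind speed, m/s
bias = 0.01 * solar / np.sqrt(wind)    # synthetic heating error, deg C
bias += rng.normal(scale=0.1, size=n)  # measurement noise

X = np.column_stack([solar, wind])
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, bias)

raw_temp = 25.3  # raw sensor reading, deg C
predicted_bias = model.predict([[800.0, 2.0]])[0]
print(f"corrected temperature: {raw_temp - predicted_bias:.2f} deg C")
```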
Big Software for SmallSats: Adapting CFS to CubeSat Missions
NASA Technical Reports Server (NTRS)
Cudmore, Alan P.; Crum, Gary; Sheikh, Salman; Marshall, James
2015-01-01
Expanding capabilities and mission objectives for SmallSats and CubeSats is driving the need for reliable, reusable, and robust flight software. While missions are becoming more complicated and the scientific goals more ambitious, the level of acceptable risk has decreased. Design challenges are further compounded by budget and schedule constraints that have not kept pace. NASA's Core Flight Software System (cFS) is an open source solution which enables teams to build flagship satellite level flight software within a CubeSat schedule and budget. NASA originally developed cFS to reduce mission and schedule risk for flagship satellite missions by increasing code reuse and reliability. The Lunar Reconnaissance Orbiter, which launched in 2009, was the first of a growing list of Class B rated missions to use cFS. Large parts of cFS are now open source, which has spurred adoption outside of NASA. This paper reports on the experiences of two teams using cFS for current CubeSat missions. The performance overheads of cFS are quantified, and the reusability of code between missions is discussed. The analysis shows that cFS is well suited to use on CubeSats and demonstrates the portability and modularity of cFS code.
Nektar++: An open-source spectral/ hp element framework
NASA Astrophysics Data System (ADS)
Cantwell, C. D.; Moxey, D.; Comerford, A.; Bolis, A.; Rocco, G.; Mengaldo, G.; De Grazia, D.; Yakovlev, S.; Lombard, J.-E.; Ekelschot, D.; Jordi, B.; Xu, H.; Mohamied, Y.; Eskilsson, C.; Nelson, B.; Vos, P.; Biotto, C.; Kirby, R. M.; Sherwin, S. J.
2015-07-01
Nektar++ is an open-source software framework designed to support the development of high-performance scalable solvers for partial differential equations using the spectral/ hp element method. High-order methods are gaining prominence in several engineering and biomedical applications due to their improved accuracy over low-order techniques at reduced computational cost for a given number of degrees of freedom. However, their proliferation is often limited by their complexity, which makes these methods challenging to implement and use. Nektar++ is an initiative to overcome this limitation by encapsulating the mathematical complexities of the underlying method within an efficient C++ framework, making the techniques more accessible to the broader scientific and industrial communities. The software supports a variety of discretisation techniques and implementation strategies, supporting methods research as well as application-focused computation, and the multi-layered structure of the framework allows the user to embrace as much or as little of the complexity as they need. The libraries capture the mathematical constructs of spectral/ hp element methods, while the associated collection of pre-written PDE solvers provides out-of-the-box application-level functionality and a template for users who wish to develop solutions for addressing questions in their own scientific domains.
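In standard spectral/hp notation (generic to the method rather than specific to Nektar++'s interfaces), the discrete solution combines a mesh of elements (h-refinement) with polynomial modes up to order P on each element (p-refinement):

```latex
u^{\delta}(\mathbf{x}) = \sum_{e=1}^{N_{el}} \sum_{n=0}^{P} \hat{u}^{e}_{n}\,\phi_{n}\!\left(\chi_{e}^{-1}(\mathbf{x})\right),
```

where $\chi_{e}$ maps the reference element onto element $e$; for smooth solutions, raising $P$ on a fixed mesh yields exponential convergence, which is the accuracy advantage over low-order techniques noted above.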
PharmacoGx: an R package for analysis of large pharmacogenomic datasets.
Smirnov, Petr; Safikhani, Zhaleh; El-Hachem, Nehme; Wang, Dong; She, Adrian; Olsen, Catharina; Freeman, Mark; Selby, Heather; Gendoo, Deena M A; Grossmann, Patrick; Beck, Andrew H; Aerts, Hugo J W L; Lupien, Mathieu; Goldenberg, Anna; Haibe-Kains, Benjamin
2016-04-15
Pharmacogenomics holds great promise for the development of biomarkers of drug response and the design of new therapeutic options, which are key challenges in precision medicine. However, such data are scattered and lack standards for efficient access and analysis, consequently preventing the realization of the full potential of pharmacogenomics. To address these issues, we implemented PharmacoGx, an easy-to-use, open source package for integrative analysis of multiple pharmacogenomic datasets. We demonstrate the utility of our package in comparing large drug sensitivity datasets, such as the Genomics of Drug Sensitivity in Cancer and the Cancer Cell Line Encyclopedia. Moreover, we show how to use our package to easily perform Connectivity Map analysis. With increasing availability of drug-related data, our package will open new avenues of research for meta-analysis of pharmacogenomic data. PharmacoGx is implemented in R and can be easily installed on any system. The package is available from CRAN and its source code is available from GitHub. bhaibeka@uhnresearch.ca or benjamin.haibe.kains@utoronto.ca Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Metadata management for high content screening in OMERO.
Li, Simon; Besson, Sébastien; Blackburn, Colin; Carroll, Mark; Ferguson, Richard K; Flynn, Helen; Gillen, Kenneth; Leigh, Roger; Lindner, Dominik; Linkert, Melissa; Moore, William J; Ramalingam, Balaji; Rozbicki, Emil; Rustici, Gabriella; Tarkowska, Aleksandra; Walczysko, Petr; Williams, Eleanor; Allan, Chris; Burel, Jean-Marie; Moore, Josh; Swedlow, Jason R
2016-03-01
High content screening (HCS) experiments create a classic data management challenge: multiple, large sets of heterogeneous structured and unstructured data that must be integrated and linked to produce a set of "final" results. These different data include images, reagents, protocols, analytic output, and phenotypes, all of which must be stored, linked and made accessible for users, scientists, collaborators and where appropriate the wider community. The OME Consortium has built several open source tools for managing, linking and sharing these different types of data. The OME Data Model is a metadata specification that supports the image data and metadata recorded in HCS experiments. Bio-Formats is a Java library that reads recorded image data and metadata and includes support for several HCS screening systems. OMERO is an enterprise data management application that integrates image data, experimental and analytic metadata and makes them accessible for visualization, mining, sharing and downstream analysis. We discuss how Bio-Formats and OMERO handle these different data types, and how they can be used to integrate, link and share HCS experiments in facilities and public data repositories. OME specifications and software are open source and are available at https://www.openmicroscopy.org. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Open source libraries and frameworks for biological data visualisation: a guide for developers.
Wang, Rui; Perez-Riverol, Yasset; Hermjakob, Henning; Vizcaíno, Juan Antonio
2015-04-01
Recent advances in high-throughput experimental techniques have led to an exponential increase in both the size and the complexity of the data sets commonly studied in biology. Data visualisation is increasingly used as the key to unlock this data, going from hypothesis generation to model evaluation and tool implementation. It is becoming more and more the heart of bioinformatics workflows, enabling scientists to reason and communicate more effectively. In parallel, there has been a corresponding trend towards the development of related software, which has triggered the maturation of different visualisation libraries and frameworks. For bioinformaticians, scientific programmers and software developers, the main challenge is to pick out the most fitting one(s) to create clear, meaningful and integrated data visualisation for their particular use cases. In this review, we introduce a collection of open source or free to use libraries and frameworks for creating data visualisation, covering the generation of a wide variety of charts and graphs. We will focus on software written in Java, JavaScript or Python. We truly believe this software offers the potential to turn tedious data into exciting visual stories. © 2014 The Authors. PROTEOMICS published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Open source integrated modeling environment Delta Shell
NASA Astrophysics Data System (ADS)
Donchyts, G.; Baart, F.; Jagers, B.; van Putten, H.
2012-04-01
In the last decade, integrated modelling has become a very popular topic in environmental modelling, since it helps solve problems that are difficult to address with a single model. However, managing the complexity of integrated models and minimizing the time required for their setup remain challenging tasks. The integrated modelling environment Delta Shell simplifies these tasks. The software components of Delta Shell are easy to reuse separately from each other or as part of an integrated environment that can run in command-line or graphical user interface mode. Most components of Delta Shell are developed using the C# programming language and include libraries used to define, save and visualize various scientific data structures as well as coupled model configurations. Here we present two examples showing how Delta Shell simplifies the process of setting up integrated models, from both the end-user and developer perspectives. The first example shows the coupling of a rainfall-runoff model, a river flow model and a run-time control model. The second example shows how a coastal morphological database integrates with the coastal morphological model (XBeach) and a custom nourishment designer. Delta Shell is also available as open-source software, released under the LGPL license and accessible via http://oss.deltares.nl.
Wu, Zhenqin; Ramsundar, Bharath; Feinberg, Evan N.; Gomes, Joseph; Geniesse, Caleb; Pappu, Aneesh S.; Leswing, Karl
2017-01-01
Molecular machine learning has been maturing rapidly over the last few years. Improved methods and the presence of larger datasets have enabled machine learning algorithms to make increasingly accurate predictions about molecular properties. However, algorithmic progress has been limited due to the lack of a standard benchmark to compare the efficacy of proposed methods; most new algorithms are benchmarked on different datasets, making it challenging to gauge the quality of proposed methods. This work introduces MoleculeNet, a large scale benchmark for molecular machine learning. MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open source library). MoleculeNet benchmarks demonstrate that learnable representations are powerful tools for molecular machine learning and broadly offer the best performance. However, this result comes with caveats. Learnable representations still struggle to deal with complex tasks under data scarcity and highly imbalanced classification. For quantum mechanical and biophysical datasets, the use of physics-aware featurizations can be more important than the choice of particular learning algorithm. PMID:29629118
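Since the benchmark ships as part of DeepChem, loading a MoleculeNet dataset takes only a few lines. A sketch against the dc.molnet interface (the exact signature varies across DeepChem releases, so treat this as indicative rather than authoritative):

```python
# Load the Tox21 benchmark with ECFP featurization and inspect the splits.
import deepchem as dc

tasks, datasets, transformers = dc.molnet.load_tox21(featurizer="ECFP")
train, valid, test = datasets

print(f"{len(tasks)} toxicity prediction tasks")
print(f"train/valid/test sizes: "
      f"{train.X.shape[0]}/{valid.X.shape[0]}/{test.X.shape[0]}")
```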
How NASA is Building a Petabyte Scale Geospatial Archive in the Cloud
NASA Technical Reports Server (NTRS)
Pilone, Dan; Quinn, Patrick; Jazayeri, Alireza; Baynes, Kathleen; Murphy, Kevin J.
2018-01-01
NASA's Earth Observing System Data and Information System (EOSDIS) is working towards a vision of a cloud-based, highly flexible ingest, archive, management, and distribution system for its ever-growing and evolving data holdings. This free and open source system, Cumulus, is emerging from its prototyping stages and is poised to make a huge impact on how NASA manages and disseminates its Earth science data. This talk outlines the motivation for this work, presents the achievements and hurdles of the past 18 months, and charts a course for the future expansion of Cumulus. We explore not just the technical but also the socio-technical challenges that we face in evolving a system of this magnitude into the cloud. The NASA EOSDIS archive is currently at nearly 30 PB and will grow to over 300 PB in the coming years. We've presented progress on this effort at AWS re:Invent and the American Geophysical Union (AGU) Fall Meeting in 2017 and hope to have the opportunity to share with FOSS4G attendees information on the availability of the open-sourced software and how NASA intends to make its Earth observing geospatial data available for free to the public in the cloud.
Seithe, Mirko; Morina, Jeronim; Glöckner, Andreas
2016-12-01
The increased interest in complex-interactive behavior, on the one hand, and in the cognitive and affective processes underlying behavior, on the other, poses a challenge for researchers in psychology and behavioral economics. Research often necessitates that participants strategically interact with each other in dyads or groups. At the same time, to investigate the underlying cognitive and affective processes in a fine-grained manner, not only choices but also other variables such as decision time, information search, and pupil dilation should be recorded. The Bonn eXperimental System (BoXS) introduced in this article is an open-source platform that allows interactive as well as non-interactive experiments to be conducted while recording process measures very efficiently, and it is completely browser-based. In the current version, BoXS has been extended in particular to enable interactive eye-tracking and mouse-tracking experiments. One core advantage of BoXS is its simplicity. Using BoXS does not require prior installation for either experimenters or participants, which allows studies to be run outside the laboratory and over the internet. Learning to program for BoXS is easy even for researchers without previous programming experience.
Macyszyn, Luke; Lega, Brad; Bohman, Leif-Erik; Latefi, Ahmad; Smith, Michelle J; Malhotra, Neil R; Welch, William; Grady, Sean M
2013-09-01
Digital radiology enhances productivity and results in long-term cost savings. However, the viewing, storage, and sharing of outside imaging studies on compact discs at ambulatory offices and hospitals pose a number of unique challenges to a surgeon's efficiency and clinical workflow. To improve the efficiency and clinical workflow of an academic neurosurgical practice when evaluating patients with outside radiological studies, open-source software and commercial hardware were used to design and implement a departmental picture archiving and communications system (PACS). The implementation of a departmental PACS significantly improved productivity and enhanced collaboration in a variety of clinical settings. Using published data on the rate of information technology problems associated with outside studies on compact discs, this system produced cost savings ranging from $6,250 to $33,600 and from $43,200 to $72,000 for two cohorts, urgent transfer and spine clinic patients, respectively, justifying the costs of the system in less than a year. The implementation of a departmental PACS using open-source software is straightforward and cost-effective and results in significant gains in surgeon productivity when evaluating patients with outside imaging studies.
Open-source software platform for medical image segmentation applications
NASA Astrophysics Data System (ADS)
Namías, R.; D'Amato, J. P.; del Fresno, M.
2017-11-01
Segmenting 2D and 3D images is a crucial and challenging problem in medical image analysis. Although several image segmentation algorithms have been proposed for different applications, no universal method currently exists. Moreover, their use is usually limited when detection of complex and multiple adjacent objects of interest is needed. In addition, the continually increasing volume of medical imaging scans requires more efficient segmentation software design and highly usable applications. In this context, we present an extension of our previous segmentation framework which allows the combination of existing explicit deformable models in an efficient and transparent way, handling simultaneously different segmentation strategies and interacting with a graphical user interface (GUI). We present the object-oriented design and the general architecture, which consists of two layers: the GUI at the top layer, and the processing core filters at the bottom layer. We apply the framework to segmenting different real-case medical image scenarios on publicly available datasets, including bladder and prostate segmentation from 2D MRI, and heart segmentation in 3D CT. Our experiments on these concrete problems show that this framework facilitates complex and multi-object segmentation goals while providing a fast-prototyping open-source segmentation tool.
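The two-layer, pluggable design described above can be caricatured in a few lines. All class names below are hypothetical stand-ins, not the authors' framework:

```python
# Hypothetical sketch of a two-layer segmentation framework: a thin
# session object (what the GUI drives) over pluggable strategy filters.
from abc import ABC, abstractmethod

class SegmentationStrategy(ABC):
    """Bottom layer: one deformable-model variant per subclass."""
    @abstractmethod
    def segment(self, image, seed):
        ...

class LevelSetStrategy(SegmentationStrategy):
    def segment(self, image, seed):
        return {"method": "level-set", "seed": seed}       # stand-in result

class ActiveContourStrategy(SegmentationStrategy):
    def segment(self, image, seed):
        return {"method": "active-contour", "seed": seed}  # stand-in result

class SegmentationSession:
    """Top layer: runs several strategies on one image simultaneously."""
    def __init__(self, strategies):
        self.strategies = strategies

    def run(self, image, seeds):
        return [s.segment(image, seed)
                for s, seed in zip(self.strategies, seeds)]

session = SegmentationSession([LevelSetStrategy(), ActiveContourStrategy()])
print(session.run(image=None, seeds=[(10, 12), (40, 33)]))
```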
Bhardwaj, Anshu; Scaria, Vinod; Raghava, Gajendra Pal Singh; Lynn, Andrew Michael; Chandra, Nagasuma; Banerjee, Sulagna; Raghunandanan, Muthukurussi V; Pandey, Vikas; Taneja, Bhupesh; Yadav, Jyoti; Dash, Debasis; Bhattacharya, Jaijit; Misra, Amit; Kumar, Anil; Ramachandran, Srinivasan; Thomas, Zakir; Brahmachari, Samir K
2011-09-01
It is increasingly recognized that the traditional closed-door, market-driven approach to drug discovery may not be the best-suited model for diseases of the developing world such as tuberculosis and malaria, because most patients suffering from these diseases have poor paying capacity. To ensure that new drugs are created for patients suffering from these diseases, it is necessary to formulate an alternative paradigm for the drug discovery process. The current model, constrained in its scope for collaboration and for confidential sharing of resources, hampers the opportunity to bring in expertise from diverse fields and hinders the possibility of lowering the cost of drug discovery. The Open Source Drug Discovery project initiated by the Council of Scientific and Industrial Research, India, has adopted an open source model to power wide participation across geographical borders. Open Source Drug Discovery emphasizes integrative science through collaboration, open sharing, multi-faceted approaches, and accruing benefits from advances on different fronts of new drug discovery. Because the open source model is based on community participation, it has the potential to sustain continuous development by generating a storehouse of alternatives in the continued pursuit of new drug discovery. Since the inventions are community generated, the new chemical entities developed by Open Source Drug Discovery will be taken up for clinical trials in a non-exclusive manner by multiple participating companies, with majority funding from Open Source Drug Discovery. This will ensure the availability of drugs through a lower-cost, community-driven drug discovery process for diseases afflicting people with poor paying capacity. Hopefully, what Linux and the World Wide Web have done for information technology, Open Source Drug Discovery will do for drug discovery. Copyright © 2011 Elsevier Ltd. All rights reserved.
State-of-the-practice and lessons learned on implementing open data and open source policies.
DOT National Transportation Integrated Search
2012-05-01
This report describes the current government, academic, and private sector practices associated with open data and open source application development. These practices are identified, along with their potential uses within the ITS Program's Data Capture and M...
Your Personal Analysis Toolkit - An Open Source Solution
NASA Astrophysics Data System (ADS)
Mitchell, T.
2009-12-01
Open source software is commonly known for its web browsers, word processors and programming languages. However, there is a vast array of open source software focused on geographic information management and geospatial application building in general. For geo-professionals, easy access to tools for our jobs is crucial. Open source software provides the opportunity to add a tool to your tool belt and carry it with you for your entire career - with no license fees, a supportive community and the opportunity to test, adopt and upgrade at your own pace. OSGeo is a US-registered non-profit representing more than a dozen mature geospatial data management applications and programming resources. Tools cover areas such as desktop GIS, web-based mapping frameworks, metadata cataloging, spatial database analysis, image processing and more. Learn about some of these tools as they apply to AGU members, as well as how you can join OSGeo and its members in getting the job done with powerful open source tools. If you haven't heard of OSSIM, MapServer, OpenLayers, PostGIS, GRASS GIS or the many other projects under our umbrella - then you need to hear this talk. Invest in yourself - use open source!
All-source Information Management and Integration for Improved Collective Intelligence Production
2011-06-01
Intelligence (ELINT) • Open Source Intelligence (OSINT) • Technical Intelligence (TECHINT) These intelligence disciplines produce... intelligence, measurement and signature intelligence, signals intelligence, and open-source data, in the production of intelligence. All-source intelligence... (All-Source Information Integration and Management) R&D Project 3 All-Source Intelligence
ERIC Educational Resources Information Center
Gil-Jaurena, Inés; Domínguez, Daniel
2018-01-01
This article analyses the challenges teachers face when entering a digital and open online environment in higher education. Massive open online courses (MOOCs) have become a popular phenomenon, making online learning more visible in the educational agenda; therefore, it is appropriate to analyse their expansion and diversification to help inform…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maingi, Rajesh; Zinkle, Steven J.; Foster, Mark S.
2015-05-01
The realization of controlled thermonuclear fusion as an energy source would transform society, providing a nearly limitless energy source with renewable fuel. Under the auspices of the U.S. Department of Energy, the Fusion Energy Sciences (FES) program management recently launched a series of technical workshops to “seek community engagement and input for future program planning activities” in the targeted areas of (1) Integrated Simulation for Magnetic Fusion Energy Sciences, (2) Control of Transients, (3) Plasma Science Frontiers, and (4) Plasma-Materials Interactions aka Plasma-Materials Interface (PMI). Over the past decade, a number of strategic planning activities [1-6] have highlighted PMI and plasma-facing components as a major knowledge gap, which should be a priority for fusion research towards ITER and future demonstration fusion energy systems. There is a strong international consensus that new PMI solutions are required in order for fusion to advance beyond ITER. The goal of the 2015 PMI community workshop was to review recent innovations and improvements in understanding the challenging PMI issues, identify high-priority scientific challenges in PMI, and to discuss potential options to address those challenges. The community response to the PMI research assessment was enthusiastic, with over 80 participants involved in the open workshop held at Princeton Plasma Physics Laboratory on May 4-7, 2015. The workshop provided a useful forum for the scientific community to review progress in scientific understanding achieved during the past decade, and to openly discuss high-priority unresolved research questions. One of the key outcomes of the workshop was a focused set of community-initiated Priority Research Directions (PRDs) for PMI. Five PRDs were identified, labeled A-E, which represent community consensus on the most urgent near-term PMI scientific issues. For each PRD, an assessment was made of the scientific challenges, as well as a set of actions to address those challenges. No prioritization was attempted amongst these five PRDs. We note that ITER, an international collaborative project to substantially extend fusion science and technology, is implicitly a driver and beneficiary of the research described in these PRDs; specific ITER issues are discussed in the background and PRD chapters. For succinctness, we describe these PRDs directly below; a brief introduction to magnetic fusion and the workshop process/timeline is given in Chapter I, and panelists are listed in the Appendix.
A dynamic model of stress and sustained attention
NASA Technical Reports Server (NTRS)
Hancock, P. A.; Warm, Joel S.
1989-01-01
Arguments are presented that an integrated view of stress and performance must consider tasks demanding sustained attention as a primary source of cognitive stress. A dynamic model is developed on the basis of the concept of adaptability, in both physiological and psychological terms, that addresses the effects of stress on vigilance and, potentially, a wide variety of attention-demanding performance tasks. The model provides insight into the failure of an operator under the driving influences of stress and opens a number of potential avenues through which solutions to the complex challenge of stress and performance might be posed.
How far can we go in hydrological modelling without any knowledge of runoff formation processes?
NASA Astrophysics Data System (ADS)
Ayzel, Georgy
2016-04-01
Hydrological modelling has been a challenging scientific issue for the last 50 years and will likely remain so, owing to the high complexity of runoff formation processes at different spatio-temporal scales. An enormous number of modelling-related papers are submitted to top-ranked journals every year, but in this race to publish, authors pay increasing attention to the models and data themselves rather than to the underlying watershed processes. The community's effort to share free and open-source models, together with the high availability of hydrometeorological data sources, has shifted the paradigm of hydrological science in a technically oriented direction. In developing countries this shift is even clearer, owing to the absence of field studies and the obligatory requirement of practical significance for government-funded research. As a result, the discipline of hydrological modelling is closer to the aim of achieving a high Nash-Sutcliffe efficiency (NSE) than to understanding watershed processes. The lumped physically based land-surface model SWAP (Soil Water - Atmosphere - Plants) and the SCE-UA technique (Shuffled Complex Evolution method developed at The University of Arizona) for robust model parameter search were used to model runoff for 323 MOPEX watersheds. No special data analysis or expert-knowledge-based decisions were performed. The median NSE is 0.652, and 90% of the watersheds have an efficiency above 0.5. Thus, satisfactory modelling results were obtained without any information about the particular features of each watershed. To reinforce this conclusion, we built a conceptual rainfall-runoff model based on decision trees and adaptive boosting machine-learning algorithms for one small watershed in the USA; again, no special data analysis or feature engineering was performed. The obtained results demonstrate strong predictive power for both the learning and testing periods (NSE > 0.95). The way we obtained these results is clear and direct: we used open-source physically based and conceptual models coupled with open-access data. However, these results do not make a significant contribution to understanding hydrological-cycle processes. Not hydrological modelling itself, but the reason why and for what we do it, is the most challenging issue for future research.
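For reference, the Nash-Sutcliffe efficiency used as the criterion above can be computed in a few lines; this is the standard definition, with variable names of our choosing:

    import numpy as np

    def nse(observed, simulated):
        # NSE = 1 for a perfect fit, 0 for a model no better than the
        # mean of the observations, negative below that.
        observed = np.asarray(observed, dtype=float)
        simulated = np.asarray(simulated, dtype=float)
        return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
            (observed - observed.mean()) ** 2)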
Open source EMR software: profiling, insights and hands-on analysis.
Kiah, M L M; Haiqi, Ahmed; Zaidan, B B; Zaidan, A A
2014-11-01
The use of open source software in health informatics is increasingly advocated by authors in the literature. Although there is no clear evidence of the superiority of the current open source applications in the healthcare field, the number of available open source applications online is growing and they are gaining greater prominence. This repertoire of open source options is of great value for any future planner interested in adopting an electronic medical/health record system, whether selecting an existent application or building a new one. The following questions arise. How do the available open source options compare to each other with respect to functionality, usability and security? Can an implementer of an open source application find sufficient support both as a user and as a developer, and to what extent? Does the available literature provide adequate answers to such questions? This review attempts to shed some light on these aspects. The objective of this study is to provide more comprehensive guidance from an implementer perspective toward the available alternatives of open source healthcare software, particularly in the field of electronic medical/health records. The design of this study is twofold. In the first part, we profile the published literature on a sample of existent and active open source software in the healthcare area. The purpose of this part is to provide a summary of the available guides and studies relative to the sampled systems, and to identify any gaps in the published literature with respect to our research questions. In the second part, we investigate those alternative systems relative to a set of metrics, by actually installing the software and reporting a hands-on experience of the installation process, usability, as well as other factors. The literature covers many aspects of open source software implementation and utilization in healthcare practice. Roughly, those aspects could be distilled into a basic taxonomy, making the literature landscape more comprehensible. Nevertheless, the surveyed articles fall short of fulfilling the targeted objective of providing clear reference to potential implementers. The hands-on study contributed a more detailed comparative guide relative to our set of assessment measures. Overall, no system seems to satisfy an industry-standard measure, particularly in security and interoperability. The systems, as software applications, feel similar from a usability perspective and share a common set of functionality, though they vary considerably in community support and activity. More detailed analysis of popular open source software can benefit the potential implementers of electronic health/medical records systems. The number of examined systems and the measures by which to compare them vary across studies, but rewarding insights are nevertheless starting to emerge. Our work is one step toward that goal. Our overall conclusion is that open source options in the medical field are still far behind the highly acknowledged open source products in other domains, e.g. operating systems market share. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Open Source Library Management Systems: A Multidimensional Evaluation
ERIC Educational Resources Information Center
Balnaves, Edmund
2008-01-01
Open source library management systems have improved steadily in the last five years. They now present a credible option for small to medium libraries and library networks. An approach to their evaluation is proposed that takes account of three additional dimensions that only open source can offer: the developer and support community, the source…
Open Source as Appropriate Technology for Global Education
ERIC Educational Resources Information Center
Carmichael, Patrick; Honour, Leslie
2002-01-01
Economic arguments for the adoption of "open source" software in business have been widely discussed. In this paper we draw on personal experience in the UK, South Africa and Southeast Asia to forward compelling reasons why open source software should be considered as an appropriate and affordable alternative to the currently prevailing…
Government Technology Acquisition Policy: The Case of Proprietary versus Open Source Software
ERIC Educational Resources Information Center
Hemphill, Thomas A.
2005-01-01
This article begins by explaining the concepts of proprietary and open source software technology, which are now competing in the marketplace. A review of recent individual and cooperative technology development and public policy advocacy efforts, by both proponents of open source software and advocates of proprietary software, subsequently…
Open Source Communities in Technical Writing: Local Exigence, Global Extensibility
ERIC Educational Resources Information Center
Conner, Trey; Gresham, Morgan; McCracken, Jill
2011-01-01
By offering open-source software (OSS)-based networks as an affordable technology alternative, we partnered with a nonprofit community organization. In this article, we narrate the client-based experiences of this partnership, highlighting the ways in which OSS and open-source culture (OSC) transformed our students' and our own expectations of…
2015-06-01
ground.aspx?p=1; Texas Tech Security Group, "Automated Open Source Intelligence (OSINT) Using APIs," RaiderSec, Sunday 30 December 2012, http://raidersec.blogspot.com/2012/12/automated-open-source
Open-Source Unionism: New Workers, New Strategies
ERIC Educational Resources Information Center
Schmid, Julie M.
2004-01-01
In "Open-Source Unionism: Beyond Exclusive Collective Bargaining," published in fall 2002 in the journal Working USA, labor scholars Richard B. Freeman and Joel Rogers use the term "open-source unionism" to describe a form of unionization that uses Web technology to organize in hard-to-unionize workplaces. Rather than depend on the traditional…
Perceptions of Open Source versus Commercial Software: Is Higher Education Still on the Fence?
ERIC Educational Resources Information Center
van Rooij, Shahron Williams
2007-01-01
This exploratory study investigated the perceptions of technology and academic decision-makers about open source benefits and risks versus commercial software applications. The study also explored reactions to a concept for outsourcing campus-wide deployment and maintenance of open source. Data collected from telephone interviews were analyzed,…
Open Source for Knowledge and Learning Management: Strategies beyond Tools
ERIC Educational Resources Information Center
Lytras, Miltiadis, Ed.; Naeve, Ambjorn, Ed.
2007-01-01
In the last years, knowledge and learning management have made a significant impact on the IT research community. "Open Source for Knowledge and Learning Management: Strategies Beyond Tools" presents learning and knowledge management from a point of view where the basic tools and applications are provided by open source technologies.…
Open-Source Learning Management Systems: A Predictive Model for Higher Education
ERIC Educational Resources Information Center
van Rooij, S. Williams
2012-01-01
The present study investigated the role of pedagogical, technical, and institutional profile factors in an institution of higher education's decision to select an open-source learning management system (LMS). Drawing on the results of previous research that measured patterns of deployment of open-source software (OSS) in US higher education and…
ERIC Educational Resources Information Center
Rodriguez-Sanchez, M. C.; Torrado-Carvajal, Angel; Vaquero, Joaquin; Borromeo, Susana; Hernandez-Tamames, Juan A.
2016-01-01
This paper presents a case study analyzing the advantages and disadvantages of using project-based learning (PBL) combined with collaborative learning (CL) and industry best practices, integrated with information communication technologies, open-source software, and open-source hardware tools, in a specialized microcontroller and embedded systems…
Brokered virtual hubs for facilitating access and use of geospatial Open Data
NASA Astrophysics Data System (ADS)
Mazzetti, Paolo; Latre, Miguel; Kamali, Nargess; Brumana, Raffaella; Braumann, Stefan; Nativi, Stefano
2016-04-01
Open Data is a major trend in the current information technology landscape, often publicised as one of the pillars of the information society in the near future. In particular, geospatial Open Data also have huge potential for the Earth sciences, through the enablement of innovative applications and services integrating heterogeneous information. However, open does not mean usable. As was recognized at the very beginning of the Web revolution, many different degrees of openness exist: from simple sharing in a proprietary format to advanced sharing in standard formats including semantic information. Therefore, to fully unleash the potential of geospatial Open Data, advanced infrastructures are needed to increase the degree of data openness, enhancing usability. In October 2014, the ENERGIC OD (European NEtwork for Redistributing Geospatial Information to user Communities - Open Data) project, funded by the European Union under the Competitiveness and Innovation framework Programme (CIP), started. In response to the EU call, the general objective of the project is to "facilitate the use of open (freely available) geographic data from different sources for the creation of innovative applications and services through the creation of Virtual Hubs". The ENERGIC OD Virtual Hubs aim to facilitate the use of geospatial Open Data by lowering and possibly removing the main barriers that hamper geo-information (GI) usage by end-users and application developers. Data and service heterogeneity is recognized as one of the major barriers to Open Data (re-)use: it forces end-users and developers to spend considerable effort accessing different infrastructures and harmonizing datasets. Such heterogeneity cannot be completely removed through the adoption of standard specifications for service interfaces, metadata and data models, since different infrastructures adopt different standards to answer specific challenges and address specific use-cases. Thus, beyond a certain extent, heterogeneity is irreducible, especially in interdisciplinary contexts. ENERGIC OD Virtual Hubs address heterogeneity by adopting a mediation and brokering approach: specific components (brokers) are dedicated to harmonizing service interfaces, metadata and data models, enabling seamless discovery of, and access to, heterogeneous infrastructures and datasets. As an innovation project, ENERGIC OD integrates several existing technologies to implement Virtual Hubs as single points of access to geospatial datasets provided by new or existing platforms and infrastructures, including INSPIRE-compliant systems and Copernicus services. A first version of the ENERGIC OD brokers has been implemented based on the GI-Suite Brokering Framework developed by CNR-IIA, complemented with other tools under integration and development. It already enables mediated discovery and harmonized access to different geospatial Open Data sources, and is accessible to users as Software-as-a-Service through a browser. Moreover, open APIs and a Javascript library are available for application developers. Six ENERGIC OD Virtual Hubs have been deployed so far: one at regional level (the Berlin metropolitan area) and five at national level (in France, Germany, Italy, Poland and Spain). Each Virtual Hub manager decided the deployment strategy (local infrastructure or commercial Infrastructure-as-a-Service cloud) and the list of connected Open Data sources.
The ENERGIC OD Virtual Hubs are under test and validation through the development of ten different mobile and Web applications.
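The brokering idea described above can be sketched as follows; this is an illustrative Python outline under our own naming, not the actual GI-Suite Brokering Framework API:

    from abc import ABC, abstractmethod

    class SourceAdapter(ABC):
        @abstractmethod
        def discover(self, query: str) -> list:
            """Return harmonized records: {'title', 'bbox', 'access_url'}."""

    class CswAdapter(SourceAdapter):
        def discover(self, query):
            # ...call a CSW catalogue here, then map its metadata schema...
            return [{"title": f"CSW hit for {query}", "bbox": None, "access_url": "..."}]

    class InspireAdapter(SourceAdapter):
        def discover(self, query):
            # ...call an INSPIRE discovery service and map fields likewise...
            return [{"title": f"INSPIRE hit for {query}", "bbox": None, "access_url": "..."}]

    class Broker:
        def __init__(self, adapters):
            self.adapters = adapters

        def discover(self, query):
            results = []
            for a in self.adapters:      # one query fans out to all sources
                results.extend(a.discover(query))
            return results               # single harmonized result list

    hits = Broker([CswAdapter(), InspireAdapter()]).discover("land cover")

The point of the pattern is that each per-source adapter absorbs one source's heterogeneity, so clients only ever see the harmonized record model.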
Coats, Andrew J Stewart; Nijjer, Sukhjinder S; Francis, Darrel P
2014-10-20
Ticagrelor, a potent antiplatelet agent, has been shown to be beneficial in patients with acute coronary syndromes in a randomised controlled trial published in a highly ranked peer-reviewed journal. Accordingly, it has entered guidelines and has been approved for clinical use by regulatory authorities. However, there remains controversy regarding aspects of the PLATO trial that are not immediately apparent from the peer-reviewed publications. A number of publications have sought to highlight potential discrepancies, using data available in publicly published documents from the US Food and Drug Administration (FDA), leading to disagreement regarding the value of open science and data sharing. We reflect upon the potential sources of bias present even in rigorously performed randomised controlled trials, upon whether peer review can establish the presence of bias, and upon the need to constantly challenge and question even accepted data. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
pyQms enables universal and accurate quantification of mass spectrometry data.
Leufken, Johannes; Niehues, Anna; Sarin, L Peter; Wessel, Florian; Hippler, Michael; Leidel, Sebastian A; Fufezan, Christian
2017-10-01
Quantitative mass spectrometry (MS) is a key technique in many research areas (1), including proteomics, metabolomics, glycomics, and lipidomics. Because all of the corresponding molecules can be described by chemical formulas, universal quantification tools are highly desirable. Here, we present pyQms, an open-source software package for accurate quantification of all types of molecules measurable by MS. pyQms uses isotope pattern matching, which offers an accurate quality assessment of all quantifications and the ability to directly incorporate mass spectrometer accuracy. Due to its universal design, pyQms is applicable to every research field, labeling strategy, and acquisition technique. This offers researchers ultimate flexibility to design experiments employing innovative and hitherto unexplored labeling strategies. Importantly, pyQms performs very well in accurately quantifying partially labeled proteomes at large scale and high throughput, the most challenging task for a quantification algorithm. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.
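As a rough illustration of isotope pattern matching (a simplified stand-in for pyQms's actual scoring, with invented function and variable names), one can match measured peaks to a theoretical envelope within a ppm tolerance and compare intensity shapes:

    import numpy as np

    def match_isotope_pattern(theoretical, measured, ppm_tol=5.0):
        # theoretical/measured: arrays of (m/z, intensity) pairs.
        # Returns a cosine similarity in [0, 1] between the theoretical
        # envelope and the matched measured intensities.
        theo = np.asarray(theoretical, dtype=float)
        meas = np.asarray(measured, dtype=float)
        matched = np.zeros(len(theo))
        for i, (mz, _) in enumerate(theo):
            tol = mz * ppm_tol * 1e-6                # ppm window around each peak
            near = np.abs(meas[:, 0] - mz) <= tol
            if near.any():
                matched[i] = meas[near, 1].max()     # take the strongest match
        a, b = theo[:, 1], matched
        if b.sum() == 0:
            return 0.0
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Toy pattern: a peptide-like isotope envelope and a noisy measurement
    theo = [(500.25, 1.0), (500.75, 0.6), (501.25, 0.2)]
    meas = [(500.2501, 9800.0), (500.7502, 6100.0), (501.2504, 1900.0)]
    print(match_isotope_pattern(theo, meas))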
NASA Technical Reports Server (NTRS)
Cappelli, Daniele; Mansour, Nagi N.
2012-01-01
Separation can be seen in most aerodynamic flows, but accurate prediction of separated flows is still a challenging problem for computational fluid dynamics (CFD) tools. The behavior of several Reynolds Averaged Navier-Stokes (RANS) models in predicting the separated flow over a wall-mounted hump is studied. The strengths and weaknesses of the most popular RANS models (Spalart-Allmaras, k-epsilon, k-omega, k-omega-SST) are evaluated using the open source software OpenFOAM. The hump flow modeled in this work has been documented in the 2004 CFD Validation Workshop on Synthetic Jets and Turbulent Separation Control. Only the baseline case is treated; the slot flow control cases are not considered in this paper. Particular attention is given to predicting the size of the recirculation bubble, the position of the reattachment point, and the velocity profiles downstream of the hump.
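For readers unfamiliar with OpenFOAM, the RANS closure in such a study is selected through a case dictionary; a typical constant/turbulenceProperties file choosing the k-omega-SST model looks roughly like this (exact keywords vary between OpenFOAM versions):

    simulationType  RAS;

    RAS
    {
        RASModel        kOmegaSST;   // or SpalartAllmaras, kEpsilon, kOmega
        turbulence      on;
        printCoeffs     on;
    }

Switching between the four models compared in the paper is then a one-line change, with the mesh and boundary conditions held fixed.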
E-learning and blended learning in textile engineering education: a closed feedback loop approach
NASA Astrophysics Data System (ADS)
Charitopoulos, A.; Vassiliadis, S.; Rangoussi, M.; Koulouriotis, D.
2017-10-01
E-learning has gained a significant role in formal education and in professional training, thanks to the flexibility it offers in the time and location of educational events. Pure e-learning scenarios are mostly limited either to Open University-type higher education institutions or to graduate-level or professional degrees; blended learning scenarios are progressively becoming popular thanks to their balanced approach. The aim of the present work is to propose approaches that exploit e-learning and blended-learning scenarios for Textile Engineering education programmes, especially multi-institutional ones. The “E-Team” European MSc degree programme organized by AUTEX is used as a case study. The proposed solution is based on (i) a free and open-source e-learning platform (moodle) and (ii) blended-learning educational scenarios. Educational challenges addressed include student engagement, the handling of student error and failure, and the promotion and support of collaborative learning.
A low-cost programmable pulse generator for physiology and behavior
Sanders, Joshua I.; Kepecs, Adam
2014-01-01
Precisely timed experimental manipulations of the brain and its sensory environment are often employed to reveal principles of brain function. While complex and reliable pulse trains for temporal stimulus control can be generated with commercial instruments, contemporary options remain expensive and proprietary. We have developed Pulse Pal, an open source device that allows users to create and trigger software-defined trains of voltage pulses with high temporal precision. Here we describe Pulse Pal’s circuitry and firmware, and characterize its precision and reliability. In addition, we supply online documentation with instructions for assembling, testing and installing Pulse Pal. While the device can be operated as a stand-alone instrument, we also provide application programming interfaces in several programming languages. As an inexpensive, flexible and open solution for temporal control, we anticipate that Pulse Pal will be used to address a wide range of instrumentation timing challenges in neuroscience research. PMID:25566051
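Host-side control of such a device typically follows a simple serial command pattern; the Python sketch below is purely illustrative (the port name, opcode, and message layout are placeholders, not the documented Pulse Pal protocol, for which readers should consult the project's published APIs):

    import serial  # pyserial

    PORT = "/dev/ttyACM0"            # placeholder serial port name
    OP_SET_PHASE1_DURATION = 0x10    # hypothetical opcode, for illustration only

    with serial.Serial(PORT, 115200, timeout=1) as dev:
        # Program one channel: a hypothetical fixed-width little-endian
        # message of [opcode, channel, value-in-microseconds].
        duration_us = 500
        msg = bytes([OP_SET_PHASE1_DURATION, 1]) + duration_us.to_bytes(4, "little")
        dev.write(msg)
        ack = dev.read(1)            # the device would acknowledge each command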
Open source software integrated into data services of Japanese planetary explorations
NASA Astrophysics Data System (ADS)
Yamamoto, Y.; Ishihara, Y.; Otake, H.; Imai, K.; Masuda, K.
2015-12-01
Scientific data obtained by Japanese scientific satellites and lunar and planetary explorations are archived in DARTS (Data ARchives and Transmission System). DARTS provides the data through simple methods such as HTTP directory listing for long-term preservation, while also aiming to provide rich web applications for ease of access, built with modern web technologies on open source software. This presentation showcases the use of open source software throughout our services. KADIAS is a web-based application to search, analyze, and obtain scientific data measured by SELENE (Kaguya), a Japanese lunar orbiter. KADIAS uses OpenLayers to display maps distributed from a Web Map Service (WMS). As the WMS server, the open source software MapServer is adopted. KAGUYA 3D GIS (KAGUYA 3D Moon NAVI) provides a virtual globe for SELENE's data; the main purpose of this application is public outreach, and it was developed with the NASA World Wind Java SDK. C3 (Cross-Cutting Comparisons) is a tool to compare data from various observations and simulations; it uses Highcharts to draw graphs in web browsers. FLOW is a tool to simulate the field of view of an instrument onboard a spacecraft. This tool itself is open source software developed by JAXA/ISAS under the BSD 3-Clause License. The SPICE Toolkit is essential to compile FLOW; it is also open source software, developed by NASA/JPL, and its website distributes data for many spacecraft. Nowadays, open source software is an indispensable tool for integrating DARTS services.
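Services like the MapServer WMS behind KADIAS speak the standard OGC WMS protocol, so any client can request a rendered map image; a minimal Python example (the endpoint and layer name below are hypothetical, not the actual DARTS service) is:

    import requests

    WMS_URL = "https://example.org/cgi-bin/mapserv"   # hypothetical endpoint

    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": "moon_basemap",          # hypothetical layer name
        "STYLES": "",
        "SRS": "EPSG:4326",                # lon/lat coordinate reference
        "BBOX": "-180,-90,180,90",
        "WIDTH": "1024", "HEIGHT": "512",
        "FORMAT": "image/png",
    }
    response = requests.get(WMS_URL, params=params, timeout=30)
    with open("map.png", "wb") as f:
        f.write(response.content)          # rendered map tile from the server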
Open Source Modeling and Optimization Tools for Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peles, S.
The existing tools and software used for planning and analysis in California are either expensive, difficult to use, or not generally accessible to a large number of participants. These limitations restrict the availability of participants for larger scale energy and grid studies in the state. The proposed initiative would build upon federal and state investments in open source software, and create and improve open source tools for use in state planning and analysis activities. Computational analysis and simulation frameworks in development at national labs and universities can be brought forward to complement existing tools. An open source platform would provide a path for novel techniques and strategies to be brought into the larger community and reviewed by a broad set of stakeholders.
Limitations of Phased Array Beamforming in Open Rotor Noise Source Imaging
NASA Technical Reports Server (NTRS)
Horvath, Csaba; Envia, Edmane; Podboy, Gary G.
2013-01-01
Phased array beamforming results of the F31/A31 historical baseline counter-rotating open rotor blade set were investigated for measurement data taken on the NASA Counter-Rotating Open Rotor Propulsion Rig in the 9- by 15-Foot Low-Speed Wind Tunnel of NASA Glenn Research Center as well as data produced using the LINPROP open rotor tone noise code. The planar microphone array was positioned broadside and parallel to the axis of the open rotor, roughly 2.3 rotor diameters away. The results provide insight as to why the apparent noise sources of the blade passing frequency tones and interaction tones appear at their nominal Mach radii instead of at the actual noise sources, even if those locations are not on the blades. Contour maps corresponding to the sound fields produced by the radiating sound waves, taken from the simulations, are used to illustrate how the interaction patterns of circumferential spinning modes of rotating coherent noise sources interact with the phased array, often giving misleading results, as the apparent sources do not always show where the actual noise sources are located. This suggests that a more sophisticated source model would be required to accurately locate the sources of each tone. The results of this study also have implications with regard to the shielding of open rotor sources by airframe empennages.
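For orientation, conventional (delay-and-sum) beamforming of the kind underlying such source maps aligns each microphone signal by its propagation delay from a candidate focus point and sums coherently; a compact sketch, with variable names of our choosing, is:

    import numpy as np

    def delay_and_sum(signals, mic_xyz, focus_xyz, fs, c=343.0):
        # signals: (n_mics, n_samples) time series from the planar array
        # mic_xyz: (n_mics, 3) microphone positions [m]
        # focus_xyz: (3,) candidate source position [m]
        # fs: sampling rate [Hz]; c: speed of sound [m/s]
        dists = np.linalg.norm(mic_xyz - focus_xyz, axis=1)
        delays = (dists - dists.min()) / c           # relative propagation delays
        shifts = np.round(delays * fs).astype(int)   # nearest-sample alignment
        n = signals.shape[1] - shifts.max()
        aligned = np.stack([s[k:k + n] for s, k in zip(signals, shifts)])
        output = aligned.mean(axis=0)                # coherent sum across mics
        return np.mean(output ** 2)                  # beamformer power at this point

Scanning the focus point over a grid and plotting the returned power yields beamforming maps like those discussed above; the paper's point is that for rotating coherent sources such maps can place the apparent source away from the true one.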
My care pathways - creating open innovation in healthcare.
Lundberg, Nina; Koch, Sabine; Hägglund, Maria; Bolin, Peter; Davoody, Nadia; Eltes, Johan; Jarlman, Olof; Perlich, Anja; Vimarlund, Vivian; Winsnes, Casper
2013-01-01
In this paper we describe initial results from the Swedish innovation project "My Care Pathways", which envisions enabling citizens to track their own health by providing them with online access to their historical, current and prospective future events. We describe an information infrastructure and its base services, as well as the use of this solution as an open source platform for open innovation in healthcare, facilitating the development of end-user e-services for citizens. We have technically implemented the information infrastructure in close collaboration with decision makers in three Swedish health care regions, with system vendors, and with national eHealth projects. Close collaboration between heterogeneous actors made implementation in real practice possible. However, a number of challenges, mainly related to legal and business issues, persist when implementing our results. Future work should therefore target the development of business models for the sustainable provision of end-user e-services in a public health care system such as the Swedish one. A legal analysis of the development of third-party (non-healthcare-based) personal health data e-services should also be done.
Finak, Greg; Frelinger, Jacob; Jiang, Wenxin; Newell, Evan W.; Ramey, John; Davis, Mark M.; Kalams, Spyros A.; De Rosa, Stephen C.; Gottardo, Raphael
2014-01-01
Flow cytometry is used increasingly in clinical research for cancer, immunology and vaccines. Technological advances in cytometry instrumentation are increasing the size and dimensionality of data sets, posing a challenge for traditional data management and analysis. Automated analysis methods, despite a general consensus of their importance to the future of the field, have been slow to gain widespread adoption. Here we present OpenCyto, a new BioConductor infrastructure and data analysis framework designed to lower the barrier of entry to automated flow data analysis algorithms by addressing key areas that we believe have held back wider adoption of automated approaches. OpenCyto supports end-to-end data analysis that is robust and reproducible while generating results that are easy to interpret. We have improved the existing, widely used core BioConductor flow cytometry infrastructure by allowing analysis to scale in a memory efficient manner to the large flow data sets that arise in clinical trials, and integrating domain-specific knowledge as part of the pipeline through the hierarchical relationships among cell populations. Pipelines are defined through a text-based csv file, limiting the need to write data-specific code, and are data agnostic to simplify repetitive analysis for core facilities. We demonstrate how to analyze two large cytometry data sets: an intracellular cytokine staining (ICS) data set from a published HIV vaccine trial focused on detecting rare, antigen-specific T-cell populations, where we identify a new subset of CD8 T-cells with a vaccine-regimen specific response that could not be identified through manual analysis, and a CyTOF T-cell phenotyping data set where a large staining panel and many cell populations are a challenge for traditional analysis. The substantial improvements to the core BioConductor flow cytometry packages give OpenCyto the potential for wide adoption. It can rapidly leverage new developments in computational cytometry and facilitate reproducible analysis in a unified environment. PMID:25167361
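As an illustration of the csv-driven approach, a hierarchical gating template might look like the following (columns simplified for exposition; consult the OpenCyto documentation for the exact template schema):

    alias,parent,dims,gating_method
    nonDebris,root,"FSC-A,SSC-A",flowClust
    singlets,nonDebris,"FSC-A,FSC-H",singletGate
    lymph,singlets,"FSC-A,SSC-A",flowClust
    cd3,lymph,CD3,mindensity
    cd8,cd3,CD8,mindensity

Each row defines one cell population, its parent in the gating hierarchy, the channels it is gated on, and the automated gating method to apply, which is what makes the pipeline data agnostic.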
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, Robert N.; Urban, Marie L.; Duchscherer, Samantha E.
Understanding building occupancy is critical to a wide array of applications including natural hazards loss analysis, green building technologies, and population distribution modeling. Due to the expense of directly monitoring buildings, scientists rely in addition on a wide and disparate array of ancillary and open source information including subject matter expertise, survey data, and remote sensing information. These data are fused using data harmonization methods, which refer to a loose collection of formal and informal techniques for fusing data together to create viable content for building occupancy estimation. In this paper, we add to the current state of the art by introducing the Population Data Tables (PDT), a Bayesian model and informatics system for systematically arranging data and harmonization techniques into a consistent, transparent, knowledge learning framework that retains in the final estimation uncertainty emerging from data, expert judgment, and model parameterization. PDT probabilistically estimates ambient occupancy in units of people/1000 ft² for over 50 building types at the national and sub-national level with the goal of providing global coverage. The challenge of global coverage led to the development of an interdisciplinary geospatial informatics system tool that provides the framework for capturing, storing, and managing open source data, handling subject matter expertise, carrying out Bayesian analytics as well as visualizing and exporting occupancy estimation results. We present the PDT project, situate the work within the larger community, and report on the progress of this multi-year project.
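The core harmonization step, fusing a prior with several uncertain occupancy estimates, can be illustrated with conjugate normal-normal updating; all numbers and source names below are invented for exposition and are not PDT's actual inputs or model:

    # Hypothetical independent estimates of ambient occupancy
    # (people/1000 ft^2) for one building type, each with a stated
    # uncertainty; values are illustrative only.
    sources = {
        "survey":       (2.8, 0.6),   # (mean, std dev)
        "expert":       (3.5, 1.0),
        "remote_sense": (2.2, 1.2),
    }
    prior_mean, prior_std = 3.0, 2.0  # weak prior from a global default

    # Conjugate normal-normal updating: combine by precision weighting.
    precisions = [1 / prior_std**2] + [1 / s**2 for _, s in sources.values()]
    means = [prior_mean] + [m for m, _ in sources.values()]
    post_prec = sum(precisions)
    post_mean = sum(p * m for p, m in zip(precisions, means)) / post_prec
    post_std = post_prec ** -0.5
    print(f"posterior: {post_mean:.2f} +/- {post_std:.2f} people/1000 ft^2")

The posterior carries forward both the fused estimate and its remaining uncertainty, which is the property the abstract emphasizes.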
Develop Direct Geo-referencing System Based on Open Source Software and Hardware Platform
NASA Astrophysics Data System (ADS)
Liu, H. S.; Liao, H. M.
2015-08-01
A direct geo-referencing system uses remote sensing technology to rapidly capture images, GPS tracks, and camera positions. These data allow the construction of large volumes of images with geographic coordinates, so that users can take measurements directly on the images. In order to properly calculate positioning, all the sensor signals must be synchronized. Traditional aerial photography uses a Position and Orientation System (POS) to integrate images, coordinates, and camera positions. However, POS hardware is very expensive, and users cannot use the results immediately because the position information is not embedded in the images. For reasons of economy and efficiency, this study aims to develop a direct geo-referencing system based on an open source software and hardware platform. After using an Arduino microcontroller board to integrate the signals, positioning can be calculated with the open source software OpenCV. Finally, we use the open source panorama browser Panini and integrate everything into the open source GIS software Quantum GIS. In this way, a complete data collection and processing system can be constructed.
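The positioning step with OpenCV typically reduces to a perspective-n-point solve: given pixel locations of points with known world coordinates and the camera intrinsics, cv2.solvePnP recovers the camera pose. A minimal sketch with illustrative numbers (all coordinates and intrinsics below are made up):

    import numpy as np
    import cv2

    # Hypothetical data: pixel locations of ground control points in one
    # frame and their known world coordinates (e.g. from GPS tracks).
    image_pts = np.array([[320, 240], [410, 250], [400, 330], [300, 320]],
                         dtype=np.float64)
    world_pts = np.array([[0, 0, 0], [10, 0, 0], [10, 10, 0], [0, 10, 0]],
                         dtype=np.float64)

    # Intrinsics from a prior camera calibration (illustrative values).
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros(5)  # assume lens distortion already corrected

    ok, rvec, tvec = cv2.solvePnP(world_pts, image_pts, K, dist)
    R, _ = cv2.Rodrigues(rvec)       # rotation vector -> rotation matrix
    camera_pos = -R.T @ tvec         # camera position in world coordinates
    print(ok, camera_pos.ravel())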
Development of an Open Source, Air-Deployable Weather Station
NASA Astrophysics Data System (ADS)
Krejci, A.; Lopez Alcala, J. M.; Nelke, M.; Wagner, J.; Udell, C.; Higgins, C. W.; Selker, J. S.
2017-12-01
We created a packaged weather station intended to be deployed in the air on tethered systems. The device incorporates lightweight sensors and parts and runs for up to 24 hours on lithium polymer batteries, allowing the entire package to be supported by a thin fiber. Because the fiber does not provide a stable platform, pitch and roll are determined using an embedded inertial measurement unit, in addition to the typical weather parameters (e.g. temperature, pressure, humidity, wind speed, and wind direction). All designs are open-sourced, including electronics, CAD drawings, and descriptions of assembly, and can be found on the OPEnS lab website at http://www.open-sensing.org/lowcost-weather-station/. The Openly Published Environmental Sensing Lab (OPEnS: Open-Sensing.org) expands the possibilities of scientific observation of our Earth, transforming the technology, methods, and culture by combining open-source development and cutting-edge technology. New OPEnS labs are now being established in India, France, Switzerland, the Netherlands, and Ghana.
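When the package hangs quasi-statically from the fiber, pitch and roll can be recovered from the IMU's accelerometer using gravity as the reference vector; one common convention is:

    import math

    def pitch_roll_from_accel(ax, ay, az):
        # Valid only when the package is quasi-static; under swinging,
        # a gyro-fused orientation estimate should be used instead.
        pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
        roll = math.atan2(ay, az)
        return pitch, roll

    # Example: level package, gravity on z -> pitch = roll = 0
    print(pitch_roll_from_accel(0.0, 0.0, 9.81))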
Software for Real-Time Analysis of Subsonic Test Shot Accuracy
2014-03-01
used the C++ programming language, the Open Source Computer Vision (OpenCV®) software library, and Microsoft Windows® Application Programming... video for comparison through OpenCV image analysis tools. Based on the comparison, the software then computed the coordinates of each shot relative to... DWB researchers wanted to use the Open Source Computer Vision (OpenCV) software library for capturing and analyzing frames of video. OpenCV contains
Open source electronic health records and chronic disease management.
Goldwater, Jason C; Kwon, Nancy J; Nathanson, Ashley; Muckle, Alison E; Brown, Alexa; Cornejo, Kerri
2014-02-01
To study and report on the use of open source electronic health records (EHR) to assist with chronic care management within safety net medical settings, such as community health centers (CHC). The study was conducted by NORC at the University of Chicago from April to September 2010. The NORC team undertook a comprehensive environmental scan, including a literature review, a dozen key informant interviews using a semistructured protocol, and a series of site visits to CHCs that currently use an open source EHR. Two of the sites chosen by NORC were actively using an open source EHR to assist in the redesign of their care delivery system to support more effective chronic disease management. This included incorporating the chronic care model into a CHC and using the EHR to help facilitate its elements, such as care teams for patients, in addition to maintaining health records for indigent populations, such as tuberculosis status for homeless patients. The ability to modify the open-source EHR to adapt to the CHC environment, and to leverage the ecosystem of providers and users to assist in this process, provided significant advantages in chronic care management. Improvements in diabetes management and hypertension control, and increases in tuberculosis vaccinations, were facilitated by the use of these open source systems. The flexibility and adaptability of open source EHR demonstrated its utility and viability in the provision of necessary and needed chronic disease care among populations served by CHCs.
What an open source clinical trial community can learn from hackers
Dunn, Adam G.; Day, Richard O.; Mandl, Kenneth D.; Coiera, Enrico
2014-01-01
Open sharing of clinical trial data has been proposed as a way to address the gap between the production of clinical evidence and the decision-making of physicians. Since a similar gap has already been addressed in the software industry by the open source software movement, we examine how the social and technical principles of the movement can be used to guide the growth of an open source clinical trial community. PMID:22553248
2014-01-01
Background: Providing scalable clinical decision support (CDS) across institutions that use different electronic health record (EHR) systems has been a challenge for medical informatics researchers. The lack of commonly shared EHR models and terminology bindings has been recognised as a major barrier to sharing CDS content among different organisations. The openEHR Guideline Definition Language (GDL) expresses CDS content based on openEHR archetypes and can support any clinical terminologies or natural languages. Our aim was to explore in an experimental setting the practicability of GDL and its underlying archetype formalism, and to report on the artefacts produced by this new technological approach in this particular experiment. We modelled and automatically executed compliance-checking rules from clinical practice guidelines for acute stroke care. Methods: We extracted rules from the European clinical practice guidelines as well as from treatment contraindications for acute stroke care and represented them using GDL. We then executed the rules retrospectively on 49 mock patient cases to check the cases' compliance with the guidelines, and manually validated the execution results. We used openEHR archetypes, GDL rules, the openEHR reference information model, reference terminologies and the Data Archetype Definition Language. We utilised the open-source GDL Editor for authoring GDL rules, the international archetype repository for reusing archetypes, the open-source Ocean Archetype Editor for authoring or modifying archetypes and the CDS Workbench for executing GDL rules on patient data. Results: We successfully represented clinical rules covering 14 of the 19 contraindications for thrombolysis and other aspects of acute stroke care with 80 GDL rules. These rules are based on 14 reused international archetypes (one of which was modified), 2 newly created archetypes and 51 terminology bindings (to three terminologies). Our manual compliance checks for the 49 mock patients completely matched the automated compliance results. Conclusions: Shareable guideline knowledge for use in automated retrospective checking of guideline compliance may be achievable using GDL. Whether the same GDL rules can be used for at-the-point-of-care CDS remains unknown. PMID:24886468
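GDL expresses such rules against openEHR archetypes; purely to illustrate the retrospective execution logic (the rule thresholds and field names below are invented for exposition, not taken from the stroke guideline rules in the study), compliance checking reduces to evaluating predicates per patient case:

    from dataclasses import dataclass

    @dataclass
    class PatientCase:
        age: int
        onset_to_treatment_min: int
        on_anticoagulants: bool

    def thrombolysis_contraindicated(case: PatientCase) -> list:
        # Return the list of fired (hypothetical) contraindication rules.
        fired = []
        if case.onset_to_treatment_min > 270:
            fired.append("treatment window exceeded")
        if case.on_anticoagulants:
            fired.append("current anticoagulant use")
        return fired

    cases = [PatientCase(71, 180, False), PatientCase(64, 300, True)]
    for c in cases:
        rules = thrombolysis_contraindicated(c)
        print("compliant" if not rules else f"flags: {rules}")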
NASA Astrophysics Data System (ADS)
Schwietzke, S.; Sherwood, O.; Michel, S. E.; Bruhwiler, L.; Dlugokencky, E. J.; Tans, P. P.
2017-12-01
Methane isotopic data have increasingly been used in recent studies to help constrain global atmospheric methane sources and sinks. The added scientific contributions to this field include (i) careful comparisons and merging of atmospheric isotope measurement datasets to increase spatial coverage, (ii) in-depth analyses of observed isotopic spatial gradients and seasonal patterns, and (iii) improved datasets of isotopic source signatures. Different interpretations have been made regarding the utility of the isotopic data on the diagnosis of methane sources and sinks. Some studies have found isotopic evidence of a largely microbial source causing the renewed growth in global atmospheric methane since 2007, and underestimated global fossil fuel methane emissions compared to most previous studies. However, other studies have challenged these conclusions by pointing out substantial spatial variability in isotopic source signatures as well as open questions in atmospheric sinks and biomass burning trends. This presentation will review and contrast the main arguments and evidence for the different conclusions. The analysis will distinguish among the different research objectives including (i) global methane budget source attribution in steady-state, (ii) source attribution of recent global methane trends, and (iii) identifying specific methane sources in individual plumes during field campaigns. Additional comparisons of model experiments with atmospheric measurements and updates on isotopic source signature data will complement the analysis.
Wonder and the clinical encounter.
Evans, H M
2012-04-01
In terms of intervening in embodied experience, medical treatment is wonder-full in its ambition and its metaphysical presumption; yet, wonder's role in clinical medicine has received little philosophical attention. In this paper, I propose, to doctors and others in routine clinical life, the value of an openness to wonder and to the sense of wonder. Key to this is the identity of the central ethical challenges facing most clinicians, which is not the high-tech drama of the popular conceptions of medical ethics but, rather, the routine of patients' undramatic but unremitting demands for the clinician's time and respectful attention. Wonder (conceived as an intense and transfiguring attentiveness) is a ubiquitous ethical source, an alternative to the more familiar respect for rational autonomy, a source of renewal galvanizing diagnostic imagination, and a timely recalling of the embodied agency of both patient and clinician.
Spiritual well-being in long-term colorectal cancer survivors with ostomies.
Bulkley, Joanna; McMullen, Carmit K; Hornbrook, Mark C; Grant, Marcia; Altschuler, Andrea; Wendel, Christopher S; Krouse, Robert S
2013-11-01
Spiritual well-being (SpWB) is integral to health-related quality of life. The challenges of colorectal cancer (CRC) and subsequent bodily changes can affect SpWB. We analyzed the SpWB of CRC survivors with ostomies. Two hundred eighty-three long-term (≥ 5 years) CRC survivors with permanent ostomies completed the modified City of Hope Quality of Life-Ostomy (mCOH-QOL-O) questionnaire. An open-ended question elicited respondents' greatest challenge in living with an ostomy. We used content analysis to identify SpWB responses and develop themes. We analyzed responses on the three-item SpWB sub-scale. Open-ended responses from 52% of participants contained SpWB content. Fifteen unique SpWB themes were identified. Sixty percent of individuals expressed positive themes such as "positive attitude", "I am fortunate", "appreciate life more", and "strength through religious faith". Negative themes, expressed by only 29% of respondents, included "struggling to cope", "not feeling 'normal'", and "loss". Fifty-five percent of respondents expressed ambivalent themes including "learning acceptance", "an ostomy is the price for survival", "reason to be around despite suffering", and "continuing to cope despite challenges". The majority (64%) had a high SpWB sub-scale score. Although CRC survivors with ostomies infrequently mentioned negative SpWB themes as a major challenge, ambivalent themes were common. SpWB themes were often mentioned as a source of resilience or part of the struggle to adapt to an altered body after cancer surgery. Interventions to improve the quality of life of cancer survivors should contain program elements designed to address SpWB that support personal meaning, inner peace, interconnectedness, and belonging. Copyright © 2013 John Wiley & Sons, Ltd.
ERIC Educational Resources Information Center
Ozdamli, Fezile
2007-01-01
Distance education is becoming more important in universities and schools. The aim of this research is to evaluate the existing Open Source Learning Management Systems according to administration tools and curriculum design. For this, seventy-two Open Source Learning Management Systems were subjected to a general evaluation. After…
ERIC Educational Resources Information Center
Samuels, Ruth Gallegos; Griffy, Henry
2012-01-01
This article discusses best practices for evaluating open source software for use in library projects, based on the authors' experience evaluating electronic publishing solutions. First, it presents a brief review of the literature, emphasizing the need to evaluate open source solutions carefully in order to minimize Total Cost of Ownership. Next,…
ERIC Educational Resources Information Center
Vlas, Radu Eduard
2012-01-01
Open source projects do have requirements; they are, however, mostly informal, text descriptions found in requests, forums, and other correspondence. Understanding such requirements provides insight into the nature of open source projects. Unfortunately, manual analysis of natural language requirements is time-consuming, and for large projects,…
ERIC Educational Resources Information Center
O'Connor, Eileen A.
2015-01-01
Opening with the history, recent advances, and emerging ways to use avatar-based virtual reality, an instructor who has used virtual environments since 2007 shares how these environments bring more options to community building, teaching, and education. With the open-source movement, where the source code for virtual environments was made…
ERIC Educational Resources Information Center
Wen, Wen
2012-01-01
While open source software (OSS) emphasizes open access to the source code and avoids the use of formal appropriability mechanisms, there has been little understanding of how the existence and exercise of formal intellectual property rights (IPR) such as patents influence the direction of OSS innovation. This dissertation seeks to bridge this gap…
Migrations of the Mind: The Emergence of Open Source Education
ERIC Educational Resources Information Center
Glassman, Michael; Bartholomew, Mitchell; Jones, Travis
2011-01-01
The authors describe an Open Source approach to education. They define Open Source Education (OSE) as a teaching and learning framework where the use and presentation of information is non-hierarchical, malleable, and subject to the needs and contributions of students as they become "co-owners" of the course. The course transforms itself into an…
ERIC Educational Resources Information Center
Waters, John K.
2010-01-01
Open source software is poised to make a profound impact on K-12 education. For years industry experts have been predicting the widespread adoption of open source tools by K-12 school districts. They're about to be proved right. The impact may not yet have been profound, but it's fair to say that some open source systems and non-proprietary…
7 Questions to Ask Open Source Vendors
ERIC Educational Resources Information Center
Raths, David
2012-01-01
With their budgets under increasing pressure, many campus IT directors are considering open source projects for the first time. On the face of it, the savings can be significant. Commercial emergency-planning software can cost upward of six figures, for example, whereas the open source Kuali Ready might run as little as $15,000 per year when…
ERIC Educational Resources Information Center
Heric, Matthew; Carter, Jenn
2011-01-01
Cognitive readiness (CR) and performance for operational time-critical environments are continuing points of focus for military and academic communities. In response to this need, we designed an open source interactive CR assessment application as a highly adaptive and efficient open source testing administration and analysis tool. It is capable…
Prahm, Cosima; Eckstein, Korbinian; Ortiz-Catalan, Max; Dorffner, Georg; Kaniusas, Eugenijus; Aszmann, Oskar C
2016-08-31
Controlling a myoelectric prosthesis for upper limbs is increasingly challenging for the user as more electrodes and joints become available. Motion classification based on pattern recognition with a multi-electrode array allows multiple joints to be controlled simultaneously. Previous pattern recognition studies are difficult to compare, because individual research groups use their own data sets. To resolve this shortcoming and to facilitate comparisons, open access data sets were analysed using components of the BioPatRec and Netlab pattern recognition models. Performances of the artificial neural networks, linear models, and training program components were compared. Evaluation took place within the BioPatRec environment, a Matlab-based open source platform that provides feature extraction, processing and motion classification algorithms for prosthetic control. The algorithms were applied to myoelectric signals for individual and simultaneous classification of movements, with the aim of finding the best performing algorithm and network model. Evaluation criteria included classification accuracy and training time. Results in both the linear and the artificial neural network models demonstrated that Netlab's implementation using the scaled conjugate gradient training algorithm reached significantly higher accuracies than BioPatRec. It is concluded that the best movement classification performance would be achieved through integrating Netlab training algorithms in the BioPatRec environment so that future prosthesis training can be shortened and control made more reliable. Netlab was therefore included in the newest release of BioPatRec (v4.0).
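The comparison above hinges on the choice of training algorithm for motion classification. As a rough, language-shifted illustration (BioPatRec and Netlab are MATLAB toolkits), the sketch below trains a small neural network on placeholder EMG features with scikit-learn and reports the two evaluation criteria named in the abstract, accuracy and training time. The 'lbfgs' solver stands in for Netlab's scaled conjugate gradient, which scikit-learn does not provide, and all shapes, feature counts, and labels are assumptions.

```python
# Illustrative analogue of the BioPatRec/Netlab comparison; not the actual study code.
import time
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_windows, n_features, n_motions = 600, 32, 6    # e.g. 8 electrodes x 4 features (assumed)
X = rng.normal(size=(n_windows, n_features))     # placeholder EMG feature windows
y = rng.integers(0, n_motions, size=n_windows)   # placeholder motion labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(20,), solver="lbfgs", max_iter=500)
t0 = time.time()
clf.fit(X_tr, y_tr)                              # train on the feature windows
print(f"training time: {time.time() - t0:.2f}s, "
      f"accuracy: {clf.score(X_te, y_te):.2%}")  # the two criteria compared in the paper
```

With real EMG features the same loop, repeated per algorithm and per data set, reproduces the kind of head-to-head benchmark the abstract describes.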
Toward Knowledge Systems for Sustainability Science
NASA Astrophysics Data System (ADS)
Zaks, D. P.; Jahn, M.
2011-12-01
Managing ecosystems for the outcomes of agricultural productivity and resilience will require fundamentally different knowledge management systems. In the industrial paradigm of the 20th century, land was considered an open, unconstrained system managed for maximum yield. While dramatic increases in yield occurred in some crops and locations, unintended but often foreseeable consequences emerged. While productivity remains a key objective, we must develop analytic systems that can identify better management options for the full range of monetized and non-monetized inputs, outputs and outcomes that are captured in the following framing question: How much valued service (e.g. food, materials, energy) can we draw from a landscape while maintaining adequate levels of other valued or necessary services (e.g. biodiversity, water, climate regulation, cultural services), including the long-term productivity of the land? This question is placed within our contemporary framing of valued services, but structured to illuminate the shifts required to achieve long-term sufficiency and planetary resilience. This framing also highlights the need for fundamentally new knowledge systems, including information management infrastructures, that effectively support decision-making on landscapes. The purpose of this initiative by authors from diverse fields across government and academic science is to call attention to the need for a vision and investment in sustainability science for landscape management. Substantially enhanced capabilities are needed to compare and integrate information from diverse sources, collected over time, that links the choices made to meet our needs from landscapes to both short- and long-term consequences. To further the goal of an information infrastructure for sustainability science, three distinct but interlocking domains are best distinguished: 1) a domain of data, information and knowledge assets; 2) a domain that houses relevant models and tools in a curated space; and 3) a domain that includes decision support tools and systems tailored to frame particular trade-offs, which may focus on inputs or outputs and may range in scale from local to global. An information infrastructure for sustainability science is best built and maintained as a modular, open source, open standard, open access, open content platform. We have defined the scope of this challenge, managing choices within agroecosystems, recognizing that any decision on a landscape involves multidimensional tradeoffs. A cohesive, coherent and targeted approach toward an integrated knowledge management infrastructure for sustainability science applied to land management is essential to move more rapidly toward sustainable, productive, and resilient landscapes.
Bambus 2: scaffolding metagenomes.
Koren, Sergey; Treangen, Todd J; Pop, Mihai
2011-11-01
Sequencing projects increasingly target samples from non-clonal sources. In particular, metagenomics has enabled scientists to begin to characterize the structure of microbial communities. The software tools developed for assembling and analyzing sequencing data for clonal organisms are, however, unable to adequately process data derived from non-clonal sources. We present a new scaffolder, Bambus 2, to address some of the challenges encountered when analyzing metagenomes. Our approach relies on a combination of a novel method for detecting genomic repeats and algorithms that analyze assembly graphs to identify biologically meaningful genomic variants. We compare our software to current assemblers using simulated and real data. We demonstrate that the repeat detection algorithms have higher sensitivity than current approaches without sacrificing specificity. In metagenomic datasets, the scaffolder avoids false joins between distantly related organisms while obtaining long-range contiguity. Bambus 2 represents a first step toward automated metagenomic assembly. Bambus 2 is open source and available from http://amos.sf.net. Contact: mpop@umiacs.umd.edu. Supplementary data are available at Bioinformatics online.
Bambus 2: scaffolding metagenomes
Koren, Sergey; Treangen, Todd J.; Pop, Mihai
2011-01-01
Motivation: Sequencing projects increasingly target samples from non-clonal sources. In particular, metagenomics has enabled scientists to begin to characterize the structure of microbial communities. The software tools developed for assembling and analyzing sequencing data for clonal organisms are, however, unable to adequately process data derived from non-clonal sources. Results: We present a new scaffolder, Bambus 2, to address some of the challenges encountered when analyzing metagenomes. Our approach relies on a combination of a novel method for detecting genomic repeats and algorithms that analyze assembly graphs to identify biologically meaningful genomic variants. We compare our software to current assemblers using simulated and real data. We demonstrate that the repeat detection algorithms have higher sensitivity than current approaches without sacrificing specificity. In metagenomic datasets, the scaffolder avoids false joins between distantly related organisms while obtaining long-range contiguity. Bambus 2 represents a first step toward automated metagenomic assembly. Availability: Bambus 2 is open source and available from http://amos.sf.net. Contact: mpop@umiacs.umd.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21926123
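The abstract does not spell out the repeat-detection algorithm, but a common coverage-based heuristic conveys the flavor: contigs whose read depth greatly exceeds the assembly-wide median are likely collapsed repeats and should not anchor scaffold joins. A minimal sketch with invented depths follows; this is a generic heuristic, not Bambus 2's actual method.

```python
# Hedged sketch of coverage-based repeat flagging, in the spirit of (but not
# identical to) the repeat detection step described for Bambus 2.
from statistics import median

def flag_repeats(contig_depths, fold=2.5):
    """contig_depths: dict of contig id -> mean read depth (assumed input)."""
    m = median(contig_depths.values())
    return {cid for cid, d in contig_depths.items() if d > fold * m}

depths = {"ctg1": 31.0, "ctg2": 29.5, "ctg3": 118.2, "ctg4": 33.1}
print(flag_repeats(depths))  # {'ctg3'} at the default 2.5x-median cutoff
```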
Open science initiatives: challenges for public health promotion.
Holzmeyer, Cheryl
2018-03-07
While academic open access, open data and open science initiatives have proliferated in recent years, facilitating new research resources for health promotion, open initiatives are not one-size-fits-all. Health research particularly illustrates how open initiatives may serve various interests and ends. Open initiatives not only foster new pathways of research access; they also discipline research in new ways, especially when associated with new regimes of research use and peer review, while participating in innovation ecosystems that often perpetuate existing systemic biases toward commercial biomedicine. Currently, many open initiatives are more oriented toward biomedical research paradigms than paradigms associated with public health promotion, such as social determinants of health research. Moreover, open initiatives too often dovetail with, rather than challenge, neoliberal policy paradigms. Such initiatives are unlikely to transform existing health research landscapes and redress health inequities. In this context, attunement to social determinants of health research and community-based local knowledge is vital to orient open initiatives toward public health promotion and health equity. Such an approach calls for discourses, norms and innovation ecosystems that contest neoliberal policy frameworks and foster upstream interventions to promote health, beyond biomedical paradigms. This analysis highlights challenges and possibilities for leveraging open initiatives on behalf of a wider range of health research stakeholders, while emphasizing public health promotion, health equity and social justice as benchmarks of transformation.
SCALEUS: Semantic Web Services Integration for Biomedical Applications.
Sernadela, Pedro; González-Castro, Lorena; Oliveira, José Luís
2017-04-01
In recent years, we have witnessed an explosion of biological data resulting largely from the demands of life science research. The vast majority of these data are freely available via diverse bioinformatics platforms, including relational databases and conventional keyword search applications. This type of approach has achieved great results in the last few years, but proved unfeasible when information needs to be combined or shared among different and scattered sources. During recent years, many of these data distribution challenges have been solved with the adoption of the semantic web. Despite the evident benefits of this technology, its adoption introduced new challenges related to the migration process from existing systems to the semantic level. To facilitate this transition, we have developed Scaleus, a semantic web migration tool that can be deployed on top of traditional systems in order to bring knowledge, inference rules, and query federation to the existing data. Targeted at the biomedical domain, this web-based platform offers, in a single package, straightforward data integration and semantic web services that help developers and researchers in the creation of new semantically enhanced information systems. SCALEUS is available as open source at http://bioinformatics-ua.github.io/scaleus/.
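For readers unfamiliar with how such a semantic web service is consumed, the sketch below issues a SPARQL query from Python using the standard SPARQLWrapper client. The endpoint URL, graph contents, and predicates are hypothetical illustrations, not values taken from the SCALEUS documentation.

```python
# Querying a (hypothetical) Scaleus-style SPARQL endpoint from Python.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://localhost:8080/scaleus/sparql/demo")  # assumed URL
endpoint.setQuery("""
    SELECT ?gene ?disease
    WHERE { ?assoc <http://example.org/hasGene> ?gene ;
                   <http://example.org/hasDisease> ?disease . }
    LIMIT 10
""")
endpoint.setReturnFormat(JSON)
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["gene"]["value"], "->", row["disease"]["value"])
```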
Open source IPSEC software in manned and unmanned space missions
NASA Astrophysics Data System (ADS)
Edwards, Jacob
Network security is a major topic of research because cyber attackers pose a threat to national security. Securing ground-space communications for NASA missions is important because attackers could endanger mission success and human lives. This thesis describes how an open source IPsec software package was used to create a secure and reliable channel for ground-space communications. A cost efficient, reproducible hardware testbed was also created to simulate ground-space communications. The testbed enables simulation of low-bandwidth and high latency communications links to experiment how the open source IPsec software reacts to these network constraints. Test cases were built that allowed for validation of the testbed and the open source IPsec software. The test cases also simulate using an IPsec connection from mission control ground routers to points of interest in outer space. Tested open source IPsec software did not meet all the requirements. Software changes were suggested to meet requirements.
Upon the Shoulders of Giants: Open-Source Hardware and Software in Analytical Chemistry.
Dryden, Michael D M; Fobel, Ryan; Fobel, Christian; Wheeler, Aaron R
2017-04-18
Isaac Newton famously observed that "if I have seen further it is by standing on the shoulders of giants." We propose that this sentiment is a powerful motivation for the "open-source" movement in scientific research, in which creators provide everything needed to replicate a given project online, as well as providing explicit permission for users to use, improve, and share it with others. Here, we write to introduce analytical chemists who are new to the open-source movement to best practices and concepts in this area and to survey the state of open-source research in analytical chemistry. We conclude by considering two examples of open-source projects from our own research group, with the hope that a description of the process, motivations, and results will provide a convincing argument about the benefits that this movement brings to both creators and users.
Open-Source 3-D Platform for Low-Cost Scientific Instrument Ecosystem.
Zhang, C; Wijnen, B; Pearce, J M
2016-08-01
The combination of open-source software and hardware provides technically feasible methods to create low-cost, highly customized scientific research equipment. Open-source 3-D printers have proven useful for fabricating scientific tools. Here the capabilities of an open-source 3-D printer are expanded to become a highly flexible scientific platform. An automated low-cost 3-D motion control platform is presented that has the capacity to perform scientific applications, including (1) 3-D printing of scientific hardware; (2) laboratory auto-stirring, measuring, and probing; (3) automated fluid handling; and (4) shaking and mixing. The open-source 3-D platform not only facilitates routine research while radically reducing the cost, but also inspires the creation of a diverse array of custom instruments that can be shared and replicated digitally throughout the world to drive down the cost of research and education further. © 2016 Society for Laboratory Automation and Screening.
NASA Astrophysics Data System (ADS)
Pinner, J. W., IV
2016-02-01
Data from shipboard oceanographic sensors come in various formats, and collection typically requires multiple data acquisition software packages running on multiple workstations throughout the vessel. Technicians must then corral all or a subset of the resulting data files so that they may be used by shipboard scientists. On many vessels the process of corralling files into a single cruise data package may change from cruise to cruise or even from technician to technician. It is these inconsistencies in the final cruise data packages that pose the greatest challenge when attempting to automate the process of cataloging cruise data for submission to data archives. A second challenge with the management of shipboard data is ensuring its quality. Problems with sensors may go unnoticed simply because the technician/scientist was unaware the data from a sensor was absent, invalid, or out of range. The Open Vessel Data Management project (OpenVDM) is a ship-wide data management solution developed to address these issues. In the past three years OpenVDM has successfully demonstrated its ability to adapt to the needs of vessels with different capabilities/missions while delivering a consistent cruise data package to scientists and adhering to the recommendations and best practices set forth by third-party data management groups such as R2R. In the last year OpenVDM has implemented a plugin architecture for monitoring data quality. This allows vessel operators to develop custom data quality tests tailored to their vessel's unique raw datasets. Data quality tests are performed in near-real-time and the results are readily available within a web interface. This plugin architecture allows third-party data quality workgroups like SAMOS to migrate their data quality tests to the vessel and provide immediate determination of data quality. OpenVDM is currently operating aboard three vessels. The R/V Endeavor, operated by the University of Rhode Island, is a regional-class UNOLS research vessel operating under the traditional NSF, P.I.-driven model. The E/V Nautilus, operated by the Ocean Exploration Trust, specializes in ROV-based, telepresence-enabled oceanographic research. The R/V Falkor, operated by the Schmidt Ocean Institute, is an ocean research platform focusing on cutting-edge technology development.
OpenSesame: an open-source, graphical experiment builder for the social sciences.
Mathôt, Sebastiaan; Schreij, Daniel; Theeuwes, Jan
2012-06-01
In the present article, we introduce OpenSesame, a graphical experiment builder for the social sciences. OpenSesame is free, open-source, and cross-platform. It features a comprehensive and intuitive graphical user interface and supports Python scripting for complex tasks. Additional functionality, such as support for eyetrackers, input devices, and video playback, is available through plug-ins. OpenSesame can be used in combination with existing software for creating experiments.
Building a Snow Data System on the Apache OODT Open Technology Stack
NASA Astrophysics Data System (ADS)
Goodale, C. E.; Painter, T. H.; Mattmann, C. A.; Hart, A. F.; Ramirez, P.; Zimdars, P.; Bryant, A. C.; Snow Data System Team
2011-12-01
Snow cover and its melt dominate regional climate and hydrology in many of the world's mountainous regions. One-sixth of Earth's population depends on snow- or glacier-melt for water resources. Operationally, seasonal forecasts of snowmelt-generated streamflow are leveraged through empirical relations based on past snowmelt periods. These historical data show that climate is changing, but the changes reduce the reliability of the empirical relations. Therefore, optimal future management of snowmelt-derived water resources will require explicit physical models driven by remotely sensed snow property data. Toward this goal, the Snow Optics Laboratory at the Jet Propulsion Laboratory has initiated a near real-time processing pipeline to generate and publish post-processed snow data products within a few hours of satellite acquisition. To solve this challenge, a scientific data management and processing system was required, and the JPL team leveraged an open-source project called Object Oriented Data Technology (OODT). OODT was developed within NASA's Jet Propulsion Laboratory over the last 10 years. OODT has supported various scientific data management and processing projects, providing solutions in the Earth, planetary, and medical science fields. It became apparent that the project needed to be opened to a larger audience to foster and promote growth and adoption. OODT was open-sourced at the Apache Software Foundation in November 2010 and has a growing community of users and committers that are constantly improving the software. Leveraging OODT, the JPL Snow Data System (SnowDS) team was able to install and configure a core Data Management System (DMS) that downloads MODIS raw data files and archives the products in a local repository for post-processing. The team has since built an online data portal and an algorithm-processing pipeline using the Apache OODT software as the foundation. We will present the working SnowDS system with its core remote sensing components: the MODIS Snow Covered Area and Grain size model (MODSCAG) and the MODIS Dust Radiative Forcing in Snow (MOD-DRFS). These products will be delivered in near real time to water managers and the broader cryosphere and climate community beginning in winter 2012. We will then present the challenges and opportunities we see in the future as SnowDS matures and contributions are made back to the OODT project.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Barton
2014-06-30
Peta-scale computing environments pose significant challenges for both system and application developers, and addressing them requires more than simply scaling up existing tera-scale solutions. Performance analysis tools play an important role in gaining this understanding, but previous monolithic tools with fixed feature sets have not sufficed. Instead, this project worked on the design, implementation, and evaluation of a general, flexible tool infrastructure supporting the construction of performance tools as "pipelines" of high-quality tool building blocks. These tool building blocks provide common performance tool functionality, and are designed for scalability, lightweight data acquisition and analysis, and interoperability. For this project, we built on Open|SpeedShop, a modular and extensible open source performance analysis tool set. The design and implementation of such a general and reusable infrastructure targeted at petascale systems required us to address several challenging research issues. All components needed to be designed for scale, a task made more difficult by the need to provide general modules. The infrastructure needed to support online data aggregation to cope with the large amounts of performance and debugging data. We needed to be able to map any combination of tool components to each target architecture. And we needed to design interoperable tool APIs and workflows that were concrete enough to support the required functionality, yet provide the necessary flexibility to address a wide range of tools. A major result of this project is the ability to use this scalable infrastructure to quickly create tools that match a machine architecture and a performance problem that needs to be understood. Another benefit is the ability for application engineers to use the highly scalable, interoperable version of Open|SpeedShop, which is reassembled from the tool building blocks into a flexible, multi-user set of tools. This set of tools is targeted at Office of Science Leadership Class computer systems and selected Office of Science application codes. We describe the contributions made by the team at the University of Wisconsin. The project built on the efforts in Open|SpeedShop funded by DOE/NNSA and the DOE/NNSA Tri-Lab community, extended Open|SpeedShop to the Office of Science Leadership Class Computing Facilities, and addressed new challenges found on these cutting-edge systems. Work done under this project at Wisconsin can be divided into two categories: new algorithms and techniques for debugging, and foundational infrastructure work on our Dyninst binary analysis and instrumentation toolkits and MRNet scalability infrastructure.
Simulation of partially coherent light propagation using parallel computing devices
NASA Astrophysics Data System (ADS)
Magalhães, Tiago C.; Rebordão, José M.
2017-08-01
Light acquires or loses coherence, and coherence is one of the few optical observables. Spectra can be derived from coherence functions, and understanding any interferometric experiment also relies upon coherence functions. Beyond the two limiting cases (full coherence or incoherence), the coherence of light is always partial and it changes with propagation. We have implemented a code to compute the propagation of partially coherent light from the source plane to the observation plane using parallel computing devices (PCDs). In this paper, we restrict propagation to free space only. To this end, we used the Open Computing Language (OpenCL) and the open-source toolkit PyOpenCL, which gives access to OpenCL parallel computation through Python. To test our code, we chose two coherence source models: an incoherent source and a Gaussian Schell-model source. In the former case, we considered two different source shapes: circular and rectangular. The results were compared to the theoretical values. Our implemented code allows one to choose between the PyOpenCL implementation and a standard one, i.e. using the CPU only. To test the computation time for each implementation (PyOpenCL and standard), we used several computer systems with different CPUs and GPUs. We used powers of two for the dimensions of the cross-spectral density matrix (e.g. 32⁴, 64⁴) and a significant speed increase is observed in the PyOpenCL implementation when compared to the standard one. This can be an important tool for studying new source models.
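A minimal PyOpenCL round-trip, in the spirit of the approach described above, looks like the sketch below. The kernel body is a placeholder (a simple element-wise scaling) standing in for one step of a cross-spectral density propagation, and the array sizes are assumptions.

```python
# Minimal PyOpenCL sketch: run an element-wise kernel on an OpenCL device.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
a = np.random.rand(64**2).astype(np.float32)  # flattened 64x64 field (assumed size)

prg = cl.Program(ctx, """
__kernel void scale(__global const float *a, __global float *out, const float k) {
    int i = get_global_id(0);
    out[i] = k * a[i];          /* placeholder for a propagation kernel */
}
""").build()

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)
prg.scale(queue, a.shape, None, a_buf, out_buf, np.float32(2.0))

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)     # copy the result back to the host
print(np.allclose(out, 2.0 * a))         # True
```

Timing the same workload with and without the device queue is essentially how the CPU-only versus PyOpenCL comparison in the paper is set up.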
An Open Source Simulation System
NASA Technical Reports Server (NTRS)
Slack, Thomas
2005-01-01
An investigation into the current state of the art of open source real-time programming practices. This document covers which technologies are available, how easy it is to obtain, configure, and use them, and some performance measures taken on the different systems. A matrix of vendors and their products is included as part of this investigation, but this is not an exhaustive list, and represents only a snapshot in time in a field that is changing rapidly. Specifically, three approaches are investigated: 1. Completely open source on generic hardware, downloaded from the net. 2. Open source packaged by a vendor and provided as a free evaluation copy. 3. Proprietary hardware with pre-loaded, proprietary, source-available software provided by the vendor for our evaluation.
Development of an information retrieval tool for biomedical patents.
Alves, Tiago; Rodrigues, Rúben; Costa, Hugo; Rocha, Miguel
2018-06-01
The volume of biomedical literature has been increasing in recent years. Patent documents have followed this trend, being important sources of biomedical knowledge, technical details and curated data, which are put together along the granting process. The field of biomedical text mining (BioTM) has been creating solutions for the problems posed by the unstructured nature of natural language, which makes the search for information a challenging task. Several BioTM techniques can be applied to patents. Among them, Information Retrieval (IR) includes processes where relevant data are obtained from collections of documents. In this work, the main goal was to build a patent pipeline addressing IR tasks over patent repositories to make these documents amenable to BioTM tasks. The pipeline was developed within @Note2, an open-source computational framework for BioTM, adding a number of modules to the core libraries, including patent metadata and full-text retrieval, PDF-to-text conversion and optical character recognition. User interfaces were also developed for the main operations, materialized in a new @Note2 plug-in. The integration of these tools in @Note2 opens opportunities to run BioTM tools over patent texts, including tasks from Information Extraction, such as Named Entity Recognition or Relation Extraction. We demonstrated the pipeline's main functions with a case study, using an available benchmark dataset from the BioCreative challenges. We also show the use of the plug-in with a user query related to the production of vanillin. This work makes all the relevant content from patents available to the scientific community, drastically decreasing the time required for this task, and provides graphical interfaces to ease the use of these tools. Copyright © 2018 Elsevier B.V. All rights reserved.
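The retrieval step of such a pipeline can be approximated generically: rank patent texts against a query like the vanillin example above using TF-IDF and cosine similarity. The sketch below uses scikit-learn rather than @Note2's own components, and the patent identifiers and texts are invented for illustration.

```python
# Generic TF-IDF retrieval over a toy patent collection; not @Note2 code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

patents = {
    "WO-hypothetical-1": "biosynthesis of vanillin from ferulic acid in bacteria",
    "WO-hypothetical-2": "catalytic cracking of heavy petroleum fractions",
    "WO-hypothetical-3": "microbial production of vanillin in engineered yeast",
}
vec = TfidfVectorizer(stop_words="english")
doc_matrix = vec.fit_transform(patents.values())
query_vec = vec.transform(["production of vanillin"])

scores = cosine_similarity(query_vec, doc_matrix).ravel()
for pid, s in sorted(zip(patents, scores), key=lambda t: -t[1]):
    print(f"{pid}: {s:.3f}")   # vanillin patents rank above the unrelated one
```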
ERIC Educational Resources Information Center
Godwin, Stephen; McAndrew, Patrick; Santos, Andreia
2008-01-01
Web-enabled technology is now being applied on a large scale. In this paper we look at open access provision of teaching and learning leading to many users with varying patterns and motivations for use. This has provided us with a research challenge to find methods that help us understand and explain such initiatives. We describe ways to model the…
A clinic compatible, open source electrophysiology system.
Hermiz, John; Rogers, Nick; Kaestner, Erik; Ganji, Mehran; Cleary, Dan; Snider, Joseph; Barba, David; Dayeh, Shadi; Halgren, Eric; Gilja, Vikash
2016-08-01
Open source electrophysiology (ephys) recording systems have several advantages over commercial systems, such as customization and affordability, enabling more researchers to conduct ephys experiments. Notable open source ephys systems include Open-Ephys, NeuroRighter, and more recently Willow, all of which have high channel count (64+), scalability, and advanced software to develop on top of. However, little work has been done to build an open source ephys system that is clinic compatible, particularly in the operating room where acute human electrocorticography (ECoG) research is performed. We developed an affordable (< $10,000) and open system for research purposes that features power isolation for patient safety, compact and water-resistant enclosures, and 256 recording channels sampled at up to 20 ksamples/sec, 16-bit. The system was validated by recording ECoG with a high-density, thin-film device for an acute, awake craniotomy study at the UC San Diego Thornton Hospital Operating Room.
Freeing Worldview's development process: Open source everything!
NASA Astrophysics Data System (ADS)
Gunnoe, T.
2016-12-01
Freeing your code and your project are important steps for creating an inviting environment for collaboration, with the added side effect of keeping a good relationship with your users. NASA Worldview's codebase was released with the open source NOSA (NASA Open Source Agreement) license in 2014, but this is only the first step. We also have to free our ideas, empower our users by involving them in the development process, and open channels that lead to the creation of a community project. There are many highly successful examples of Free and Open Source Software (FOSS) projects of which we can take note: the Linux kernel, Debian, GNOME, etc. These projects owe much of their success to having a passionate mix of developers/users with a great community and a common goal in mind. This presentation will describe the scope of this openness and how Worldview plans to move forward with a more community-inclusive approach.
OpenFLUID: an open-source software environment for modelling fluxes in landscapes
NASA Astrophysics Data System (ADS)
Fabre, Jean-Christophe; Rabotin, Michaël; Crevoisier, David; Libres, Aline; Dagès, Cécile; Moussa, Roger; Lagacherie, Philippe; Raclot, Damien; Voltz, Marc
2013-04-01
Integrative landscape functioning has become a common concept in environmental management. Landscapes are complex systems where many processes interact in time and space. In agro-ecosystems, these processes are mainly physical processes, including hydrological processes, biological processes and human activities. Modelling such systems requires an interdisciplinary approach, coupling models coming from different disciplines, developed by different teams. In order to support collaborative work involving many models coupled in time and space for integrative simulations, an open software modelling platform is a relevant answer. OpenFLUID is an open source software platform for modelling landscape functioning, mainly focused on spatial fluxes. It provides an advanced object-oriented architecture allowing one to i) couple models developed de novo or from existing source code, which are dynamically plugged into the platform, ii) represent landscapes as hierarchical graphs, taking into account multiple scales, spatial heterogeneities and the connectivity of landscape objects, iii) run and explore simulations in many ways: using the OpenFLUID user interfaces (command line interface, graphical user interface), or using external applications such as GNU R through the provided ROpenFLUID package. OpenFLUID is developed in C++ and relies on open source libraries only (Boost, libXML2, GLib/GTK, OGR/GDAL, …). For modelers and developers, OpenFLUID provides a dedicated environment for model development, based on an open source toolchain including the Eclipse editor, the GCC compiler and the CMake build system. OpenFLUID is distributed under the GPLv3 open source license, with a special exception allowing existing models licensed under any license to be plugged in. It is clearly in the spirit of sharing knowledge and favouring collaboration in a community of modelers. OpenFLUID has been involved in many research applications, such as modelling of hydrological network transfer, diagnosis and prediction of water quality taking into account human activities, study of the effect of spatial organization on hydrological fluxes, and modelling of surface-subsurface water exchanges. At the LISAH research unit, OpenFLUID is the supporting development platform of the MHYDAS model, a distributed model for agrosystems (Moussa et al., 2002, Hydrological Processes, 16, 393-412). OpenFLUID web site: http://www.openfluid-project.org
Interim Open Source Software (OSS) Policy
This interim Policy establishes a framework to implement the requirements of the Office of Management and Budget's (OMB) Federal Source Code Policy to achieve efficiency, transparency and innovation through reusable and open source software.
NASA Astrophysics Data System (ADS)
Cannata, Massimiliano; Ratnayake, Rangajeewa; Antonovic, Milan; Strigaro, Daniele
2017-04-01
Environmental monitoring systems in low-income countries are often in decline, outdated or missing, with the consequence that availability of and access to information that is vital for coping with and mitigating natural hazards is very scarce. Non-conventional monitoring systems based on open technologies may constitute a viable solution for creating low-cost and sustainable monitoring systems that can be fully developed, deployed and maintained at the local level, without lock-in dependencies on copyrights or patents or high replacement costs. The 4onse research project, funded under the Research for Development program of the Swiss National Science Foundation and the Swiss Office for Development and Cooperation, proposes a complete monitoring system that integrates Free & Open Source Software, Open Hardware, Open Data, and Open Standards. After its engineering, it will be tested in the Deduru Oya catchment (Sri Lanka) to evaluate the system and develop a water management information system to optimize the regulation of artificial basin levels and mitigate flash floods. One objective is to better understand, scientifically, the strengths, critical issues and applicability of the system in terms of data quality, system durability, management costs, performance and sustainability. Results, challenges and experiences from the first six months of the project will be presented, with particular focus on synergy-building activities and advances in the data collection and dissemination system.
Open Source Molecular Modeling
Pirhadi, Somayeh; Sunseri, Jocelyn; Koes, David Ryan
2016-01-01
The success of molecular modeling and computational chemistry efforts are, by definition, dependent on quality software applications. Open source software development provides many advantages to users of modeling applications, not the least of which is that the software is free and completely extendable. In this review we categorize, enumerate, and describe available open source software packages for molecular modeling and computational chemistry. PMID:27631126
ERIC Educational Resources Information Center
Long, Ju
2009-01-01
Open Source Software (OSS) is a major force in today's Information Technology (IT) landscape. Companies are increasingly using OSS in mission-critical applications. The transparency of the OSS technology itself with openly available source codes makes it ideal for students to participate in the OSS project development. OSS can provide unique…
Open Source Initiative Powers Real-Time Data Streams
NASA Technical Reports Server (NTRS)
2014-01-01
Under an SBIR contract with Dryden Flight Research Center, Creare Inc. developed a data collection tool called the Ring Buffered Network Bus. The technology has now been released under an open source license and is hosted by the Open Source DataTurbine Initiative. DataTurbine allows anyone to stream live data from sensors, labs, cameras, ocean buoys, cell phones, and more.
ERIC Educational Resources Information Center
Dunlap, Joanna C.; Wilson, Brent G.; Young, David L.
This paper describes how Open Source philosophy, a movement that has developed in opposition to the proprietary software industry, has influenced educational practice in the pursuit of scholarly freedom and authentic learning activities for students and educators. This paper provides a brief overview of the Open Source movement, and describes…
ERIC Educational Resources Information Center
van Rooij, Shahron Williams
2009-01-01
Higher Education institutions in the United States are considering Open Source software applications such as the Moodle and Sakai course management systems and the Kuali financial system to build integrated learning environments that serve both academic and administrative needs. Open Source is presumed to be more flexible and less costly than…
ERIC Educational Resources Information Center
Daniels, Daniel B., III
2014-01-01
There is a lack of literature linking end-user behavior to the availability of open-source intelligence (OSINT). Most OSINT literature has been focused on the use and assessment of open-source intelligence, not the proliferation of personally or organizationally identifiable information (PII/OII). Additionally, information security studies have…
Looking toward the Future: A Case Study of Open Source Software in the Humanities
ERIC Educational Resources Information Center
Quamen, Harvey
2006-01-01
In this article Harvey Quamen examines how the philosophy of open source software might be of particular benefit to humanities scholars in the near future--particularly for academic journals with limited financial resources. To this end he provides a case study in which he describes his use of open source technology (MySQL database software and…
Pang, Chao; van Enckevort, David; de Haan, Mark; Kelpin, Fleur; Jetten, Jonathan; Hendriksen, Dennis; de Boer, Tommy; Charbon, Bart; Winder, Erwin; van der Velde, K Joeri; Doiron, Dany; Fortier, Isabel; Hillege, Hans; Swertz, Morris A
2016-07-15
While the size and number of biobanks, patient registries and other data collections are increasing, biomedical researchers still often need to pool data for statistical power, a task that requires time-intensive retrospective integration. To address this challenge, we developed MOLGENIS/connect, a semi-automatic system to find, match and pool data from different sources. The system shortlists relevant source attributes from thousands of candidates using ontology-based query expansion to overcome variations in terminology. Then it generates algorithms that transform source attributes to a common target DataSchema. These include unit conversion, categorical value matching and complex conversion patterns (e.g. calculation of BMI). In comparison to human experts, MOLGENIS/connect was able to auto-generate 27% of the algorithms perfectly, with an additional 46% needing only minor editing, representing a reduction in the human effort and expertise needed to pool data. Source code, binaries and documentation are available as open-source under LGPLv3 from http://github.com/molgenis/molgenis and www.molgenis.org/connect. Contact: m.a.swertz@rug.nl. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Pang, Chao; van Enckevort, David; de Haan, Mark; Kelpin, Fleur; Jetten, Jonathan; Hendriksen, Dennis; de Boer, Tommy; Charbon, Bart; Winder, Erwin; van der Velde, K. Joeri; Doiron, Dany; Fortier, Isabel; Hillege, Hans
2016-01-01
Motivation: While the size and number of biobanks, patient registries and other data collections are increasing, biomedical researchers still often need to pool data for statistical power, a task that requires time-intensive retrospective integration. Results: To address this challenge, we developed MOLGENIS/connect, a semi-automatic system to find, match and pool data from different sources. The system shortlists relevant source attributes from thousands of candidates using ontology-based query expansion to overcome variations in terminology. Then it generates algorithms that transform source attributes to a common target DataSchema. These include unit conversion, categorical value matching and complex conversion patterns (e.g. calculation of BMI). In comparison to human experts, MOLGENIS/connect was able to auto-generate 27% of the algorithms perfectly, with an additional 46% needing only minor editing, representing a reduction in the human effort and expertise needed to pool data. Availability and Implementation: Source code, binaries and documentation are available as open-source under LGPLv3 from http://github.com/molgenis/molgenis and www.molgenis.org/connect. Contact: m.a.swertz@rug.nl Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153686
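The transformation algorithms described in this record are simple enough to sketch by hand. The example below illustrates the three pattern types named in the abstract (unit conversion, categorical value matching, and a derived value such as BMI) with invented attribute names; MOLGENIS/connect generates such mappings automatically, so this is only an illustration of their shape.

```python
# Hand-written illustration of a source-to-target attribute transformation;
# attribute names are invented, not from the MOLGENIS DataSchema.
def to_target(record):
    height_m = record["height_cm"] / 100.0               # unit conversion
    smoker = {"y": True, "n": False}[record["smokes"]]    # categorical matching
    bmi = record["weight_kg"] / height_m ** 2             # derived attribute (BMI)
    return {"height": height_m, "current_smoker": smoker, "bmi": round(bmi, 1)}

print(to_target({"height_cm": 172, "weight_kg": 68.5, "smokes": "n"}))
# {'height': 1.72, 'current_smoker': False, 'bmi': 23.2}
```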
Preparing a scientific manuscript in Linux: Today's possibilities and limitations.
Tchantchaleishvili, Vakhtang; Schmitto, Jan D
2011-10-22
An increasing number of scientists are enthusiastic about using free, open source software for their research purposes. The authors' specific goal was to examine whether a Linux-based operating system with open source software packages would allow the preparation of a submission-ready scientific manuscript without the need to use proprietary software. Preparation and editing of scientific manuscripts is possible using Linux and open source software. This letter to the editor describes the key steps for preparation of a publication-ready scientific manuscript in a Linux-based operating system, and discusses the necessary software components. This manuscript was created using Linux and open source programs for Linux.
Open Source Service Agent (OSSA) in the intelligence community's Open Source Architecture
NASA Technical Reports Server (NTRS)
Fiene, Bruce F.
1994-01-01
The Community Open Source Program Office (COSPO) has developed an architecture for the intelligence community's new Open Source Information System (OSIS). The architecture is a multi-phased program featuring connectivity, interoperability, and functionality. OSIS is based on a distributed architecture concept. The system is designed to function as a virtual entity. OSIS will be a restricted (non-public), user configured network employing Internet communications. Privacy and authentication will be provided through firewall protection. Connection to OSIS can be made through any server on the Internet or through dial-up modems provided the appropriate firewall authentication system is installed on the client.
Exploring the Role of Value Networks for Software Innovation
NASA Astrophysics Data System (ADS)
Morgan, Lorraine; Conboy, Kieran
This paper describes a research-in-progress that aims to explore the applicability and implications of open innovation practices in two firms - one that employs agile development methods and another that utilizes open source software. The open innovation paradigm has a lot in common with open source and agile development methodologies. A particular strength of agile approaches is that they move away from 'introverted' development, involving only the development personnel, and intimately involves the customer in all areas of software creation, supposedly leading to the development of a more innovative and hence more valuable information system. Open source software (OSS) development also shares two key elements of the open innovation model, namely the collaborative development of the technology and shared rights to the use of the technology. However, one shortfall with agile development in particular is the narrow focus on a single customer representative. In response to this, we argue that current thinking regarding innovation needs to be extended to include multiple stakeholders both across and outside the organization. Additionally, for firms utilizing open source, it has been found that their position in a network of potential complementors determines the amount of superior value they create for their customers. Thus, this paper aims to get a better understanding of the applicability and implications of open innovation practices in firms that employ open source and agile development methodologies. In particular, a conceptual framework is derived for further testing.
Design and Deployment of a General Purpose, Open Source LoRa to Wi-Fi Hub and Data Logger
NASA Astrophysics Data System (ADS)
DeBell, T. C.; Udell, C.; Kwon, M.; Selker, J. S.; Lopez Alcala, J. M.
2017-12-01
Methods and technologies facilitating internet connectivity and near-real-time status updates for in situ environmental sensor data are of increasing interest in Earth science. However, open source, do-it-yourself technologies that enable plug-and-play functionality for web-connected sensors and devices remain largely inaccessible for typical researchers in our community. The Openly Published Environmental Sensing Lab at Oregon State University (OPEnS Lab) constructed an open source 900 MHz Long Range Radio (LoRa) receiver hub with an SD card data logger, Ethernet and Wi-Fi shield, and 3D-printed enclosure that dynamically uploads transmissions from multiple wirelessly connected environmental sensing devices. Data transmissions may be received from devices up to 20 km away. The hub time-stamps all transmissions, saves them to the SD card, and uploads them to a Google Drive spreadsheet to be accessed in near-real-time by researchers and geovisualization applications (such as ArcGIS) for access, visualization, and analysis. This research expands the possibilities of scientific observation of our Earth, transforming the technology, methods, and culture by combining open-source development and cutting-edge technology. This poster details our methods and evaluates the application of 3D printing, the Arduino Integrated Development Environment (IDE), Adafruit's open-hardware Feather development boards, and the WIZNET5500 Ethernet shield in designing this open-source, general purpose LoRa to Wi-Fi data logger.
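On the receiving side, the hub's core loop is conceptually simple: read a LoRa packet, time-stamp it, and log it. The Python sketch below stands in for the hub's Arduino firmware, assuming packets are forwarded over a serial link; the port name, baud rate, and packet format are assumptions, and the Google Drive upload step is omitted.

```python
# Hedged sketch of a hub logging loop: timestamp incoming LoRa packets and
# append them to a local CSV (standing in for the SD-card log).
import csv
import datetime
import serial  # pyserial

with serial.Serial("/dev/ttyUSB0", 115200, timeout=10) as port, \
        open("lora_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        line = port.readline().decode(errors="replace").strip()
        if line:  # one forwarded LoRa packet per line (assumed format)
            writer.writerow([datetime.datetime.utcnow().isoformat(), line])
            f.flush()  # keep the log current for near-real-time readers
```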
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mark Schanfein
2013-06-01
Undeclared nuclear facilities unequivocally remain the most difficult safeguards challenge facing the International Atomic Energy Agency (IAEA). Recent cases of undeclared facilities revealed in Iran and Syria, which are NPT signatory States, show both the difficulty and the seriousness of this threat to nonproliferation. In the case of undeclared nuclear facilities, the most effective deterrent against proliferation is the application of Wide-Area Environmental Sampling (WAES); however, WAES is currently cost-prohibitive. As with any threat, the most effective countering strategy is a multifaceted approach. Some of the approaches applied by the IAEA include: open source analysis, satellite imagery, on-site environmental sampling, complementary access under the Additional Protocol (where in force), traditional safeguards inspections, and information provided by member States. These approaches, naturally, are focused on specific States. Are there other opportunities not currently within the IAEA purview to assess States that may provide another opportunity to detect clandestine facilities? In this paper, the author will make the case that the IAEA Department of Safeguards should explore the area of worldwide marine radioactivity studies as one possible opportunity. One such study was released by the IAEA Marine Environment Laboratory in January 2005. This technical document focused on 90Sr, 137Cs, and 239/240Pu. It is clearly a challenging area because of the many sources of anthropogenic radionuclides in the world's oceans and seas, including nuclear weapons testing, reprocessing, accidents, waste dumping, and industrial and medical radioisotopes, whose distributions change based on oceanographic, geochemical, and biological processes, and their sources. It is additionally challenging where multiple States share oceans, seas, and rivers. But with the application of modern science, historical sampling to establish baselines, and a focus on the most relevant radionuclides, the potential is there to support this challenging IAEA safeguards mission.
Marcelo, A; Adejumo, A; Luna, D
2011-01-01
Describe the issues surrounding health informatics in developing countries and the challenges faced by practitioners in building internal capacity. From these issues, the authors propose cost-effective strategies that can fast track health informatics development in these low to medium income countries (LMICs). The authors conducted a review of literature and consulted key opinion leaders who have experience with health informatics implementations around the world. Despite geographic and cultural differences, many LMICs share similar challenges and opportunities in developing health informatics. Partnerships, standards, and inter-operability are well known components of successful informatics programs. Establishing partnerships can be comprised of formal inter-institutional collaborations on training and research, collaborative open source software development, and effective use of social networking. Lacking legacy systems, LMICs can discuss standards and inter-operability more openly and have greater potential for success. Lastly, since cellphones are pervasive in developing countries, they can be leveraged as access points for delivering and documenting health services in remote under-served areas. Mobile health or mHealth gives LMICs a unique opportunity to leapfrog through most issues that have plagued health informatics in developed countries. By employing this proposed roadmap, LMICs can now develop capacity for health informatics using appropriate and cost-effective technologies.
NASA Astrophysics Data System (ADS)
Santos, Olga C.; Saneiro, Mar; Boticario, Jesus G.; Rodriguez-Sanchez, M. C.
2016-01-01
This work explores the benefits of supporting learners affectively in a context-aware learning situation. It addresses a new challenge in the related literature: providing affective educational recommendations that take advantage of ambient intelligence and are delivered through actuators available in the environment, thus going beyond previous approaches, which provided computer-based recommendations that present some text or tell the learner aloud what to do. To address this open issue, we have applied the TORMES elicitation methodology, which has been used to investigate the potential of ambient intelligence for making more interactive recommendations in an emotionally challenging scenario (i.e. preparing for the oral examination of a second language learning course). The Arduino open source electronics prototyping platform is used both to sense changes in the learners' affective state and to deliver the recommendation in a more interactive way through different complementary sensory communication channels (sight, hearing, touch), in line with universal design. An Ambient Intelligence Context-aware Affective Recommender Platform (AICARP) has been built to support the whole experience, which represents progress in the state of the art. In particular, we have come up with what is most likely the first interactive context-aware affective educational recommendation. The value of this contribution lies in discussing the methodological and practical issues involved.
Leider, Jonathon P; Resnick, Beth; Bishai, David; Scutchfield, F Douglas
2018-04-01
The United States has a complex governmental public health system. Agencies at the federal, state, and local levels all contribute to the protection and promotion of the population's health. Whether the modern public health system is well situated to deliver essential public health services, however, is an open question. In part, its readiness relates to how agencies are funded and to what ends. A mix of Federalism, home rule, and happenstance has contributed to a siloed funding system in the United States, whereby health agencies are given particular dollars for particular tasks. Little discretionary funding remains. Furthermore, tracking how much is spent, by whom, and on what is notoriously challenging. This review both outlines the challenges associated with estimating public health spending and explains the known sources of funding that are used to estimate and demonstrate the value of public health spending.
Psychophysics and Neuronal Bases of Sound Localization in Humans
Ahveninen, Jyrki; Kopco, Norbert; Jääskeläinen, Iiro P.
2013-01-01
Localization of sound sources is a considerable computational challenge for the human brain. Whereas the visual system can process basic spatial information in parallel, the auditory system lacks a straightforward correspondence between external spatial locations and sensory receptive fields. Consequently, the question of how different acoustic features supporting spatial hearing are represented in the central nervous system is still open. Functional neuroimaging studies in humans have provided evidence for a posterior auditory “where” pathway that encompasses non-primary auditory cortex areas, including the planum temporale (PT) and posterior superior temporal gyrus (STG), which are strongly activated by horizontal sound direction changes, distance changes, and movement. However, these areas are also activated by a wide variety of other stimulus features, posing a challenge for the interpretation that the underlying areas are purely spatial. This review discusses behavioral and neuroimaging studies on sound localization, and some of the competing models of representation of auditory space in humans. PMID:23886698
Chen, Yuanfang; Lee, Gyu Myoung; Shu, Lei; Crespi, Noel
2016-02-06
The development of an efficient and cost-effective solution to solve a complex problem (e.g., dynamic detection of toxic gases) is an important research issue in the industrial applications of the Internet of Things (IoT). An industrial intelligent ecosystem enables the collection of massive data from the various devices (e.g., sensor-embedded wireless devices) dynamically collaborating with humans. Effectively collaborative analytics based on the collected massive data from humans and devices is quite essential to improve the efficiency of industrial production/service. In this study, we propose a collaborative sensing intelligence (CSI) framework, combining collaborative intelligence and industrial sensing intelligence. The proposed CSI facilitates the cooperativity of analytics with integrating massive spatio-temporal data from different sources and time points. To deploy the CSI for achieving intelligent and efficient industrial production/service, the key challenges and open issues are discussed, as well.
Peckner, Ryan; Myers, Samuel A; Jacome, Alvaro Sebastian Vaca; Egertson, Jarrett D; Abelin, Jennifer G; MacCoss, Michael J; Carr, Steven A; Jaffe, Jacob D
2018-05-01
Mass spectrometry with data-independent acquisition (DIA) is a promising method to improve the comprehensiveness and reproducibility of targeted and discovery proteomics, in theory by systematically measuring all peptide precursors in a biological sample. However, the analytical challenges involved in discriminating between peptides with similar sequences in convoluted spectra have limited its applicability in important cases, such as the detection of single-nucleotide polymorphisms (SNPs) and alternative site localizations in phosphoproteomics data. We report Specter (https://github.com/rpeckner-broad/Specter), an open-source software tool that uses linear algebra to deconvolute DIA mixture spectra directly through comparison to a spectral library, thus circumventing the problems associated with typical fragment-correlation-based approaches. We validate the sensitivity of Specter and its performance relative to that of other methods, and show that Specter is able to successfully analyze cases involving highly similar peptides that are typically challenging for DIA analysis methods.
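The linear-algebra idea behind Specter can be illustrated in a few lines: treat the library spectra as columns of a matrix and solve for non-negative mixture coefficients. The matrices below are invented toy data, not real DIA spectra, and non-negative least squares is used as a generic stand-in for Specter's actual solver.

```python
# Toy spectral deconvolution: mixture ≈ library @ coefficients, coefficients >= 0.
import numpy as np
from scipy.optimize import nnls

# columns = library spectra (fragment-ion intensity patterns for 3 peptides)
library = np.array([[1.0, 0.0, 0.2],
                    [0.5, 1.0, 0.0],
                    [0.0, 0.3, 1.0],
                    [0.2, 0.0, 0.5]])
true_amounts = np.array([2.0, 0.0, 1.5])
mixture = library @ true_amounts          # simulated convoluted DIA spectrum

coeffs, residual = nnls(library, mixture) # solve the non-negative least-squares problem
print(np.round(coeffs, 3))                # recovers ~[2.0, 0.0, 1.5]
```

Even this toy shows why the library-matrix view sidesteps fragment-correlation heuristics: peptides sharing fragments (overlapping rows) are disentangled jointly rather than pairwise.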
Chen, Yuanfang; Lee, Gyu Myoung; Shu, Lei; Crespi, Noel
2016-01-01
The development of an efficient and cost-effective solution to solve a complex problem (e.g., dynamic detection of toxic gases) is an important research issue in the industrial applications of the Internet of Things (IoT). An industrial intelligent ecosystem enables the collection of massive data from the various devices (e.g., sensor-embedded wireless devices) dynamically collaborating with humans. Effectively collaborative analytics based on the collected massive data from humans and devices is quite essential to improve the efficiency of industrial production/service. In this study, we propose a collaborative sensing intelligence (CSI) framework, combining collaborative intelligence and industrial sensing intelligence. The proposed CSI facilitates the cooperativity of analytics with integrating massive spatio-temporal data from different sources and time points. To deploy the CSI for achieving intelligent and efficient industrial production/service, the key challenges and open issues are discussed, as well. PMID:26861345
Virtual Screening with AutoDock: Theory and Practice
Cosconati, Sandro; Forli, Stefano; Perryman, Alex L.; Harris, Rodney; Goodsell, David S.; Olson, Arthur J.
2011-01-01
Importance to the field: Virtual screening is a computer-based technique for identifying promising compounds to bind to a target molecule of known structure. Given the rapidly increasing number of protein and nucleic acid structures, virtual screening continues to grow as an effective method for the discovery of new inhibitors and drug molecules. Areas covered in this review: We describe virtual screening methods that are available in the AutoDock suite of programs, and several of our successes in using AutoDock virtual screening in pharmaceutical lead discovery. What the reader will gain: A general overview of the challenges of virtual screening is presented, along with the tools available in the AutoDock suite of programs for addressing these challenges. Take home message: Virtual screening is an effective tool for the discovery of compounds for use as leads in drug discovery, and the free, open source program AutoDock is an effective tool for virtual screening. PMID:21532931
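As a hedged sketch of how such a screen is typically driven in practice, the loop below shells out to AutoDock Vina (a related command-line tool in the AutoDock family) once per ligand. The receptor file, search-box coordinates, and directory layout are assumptions for illustration, not values from the review.

```python
# Hypothetical virtual screening driver: dock every prepared ligand with Vina.
import subprocess
from pathlib import Path

ligands = sorted(Path("ligands").glob("*.pdbqt"))  # assumed pre-prepared ligand files
Path("docked").mkdir(exist_ok=True)
for lig in ligands:
    out = Path("docked") / lig.name
    subprocess.run(
        ["vina", "--receptor", "target.pdbqt", "--ligand", str(lig),
         "--center_x", "11.5", "--center_y", "90.0", "--center_z", "57.5",
         "--size_x", "22", "--size_y", "24", "--size_z", "28",
         "--out", str(out)],
        check=True,  # stop on a failed docking run
    )
# Binding scores can then be parsed from the output files to rank the library.
```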