Sample records for big science projects

  1. Big Science and the Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Giudice, Gian Francesco

    2012-03-01

    The Large Hadron Collider (LHC), the particle accelerator operating at CERN, is probably the most complex and ambitious scientific project ever accomplished by humanity. The sheer size of the enterprise, in terms of financial and human resources, naturally raises the question of whether society should support such costly basic-research programs. I address this question by first reviewing the process that led to the emergence of Big Science and the role of large projects in the development of science and technology. I then compare the methodologies of Small and Big Science, emphasizing their mutual linkage. Finally, after examining the cost of Big Science projects, I highlight several general aspects of their beneficial implications for society.

  2. Semantic Web technologies for the big data in life sciences.

    PubMed

    Wu, Hongyan; Yamaguchi, Atsuko

    2014-08-01

    The life sciences are entering an era of big data, driven by breakthroughs in science and technology, and big data-related projects and activities are proliferating worldwide. Life sciences data generated by new technologies continue to grow rapidly, not only in size but also in variety and complexity. For big data to have a major influence in the life sciences, comprehensive data analysis across multiple data sources, and even across disciplines, is indispensable. The increasing volume of data and its heterogeneous, complex variety are the two principal issues discussed in life science informatics. The ever-evolving next-generation Web, characterized as the Semantic Web, is an extension of the current Web that aims to provide information not only for humans but also for computers to semantically process large-scale data. This paper presents a survey of big data in the life sciences, big data-related projects, and Semantic Web technologies. It introduces the main Semantic Web technologies and their current state, and provides a detailed analysis of how these technologies address the heterogeneous variety of life sciences big data. The paper helps readers understand the role of Semantic Web technologies in the big data era and how they provide a promising solution for big data in the life sciences.

  3. Big physics quartet win government backing

    NASA Astrophysics Data System (ADS)

    Banks, Michael

    2014-09-01

    Four major physics-based projects are among 10 to have been selected by Japan’s Ministry of Education, Culture, Sports, Science and Technology for funding in the coming decade as part of its “roadmap” of big-science projects.

  4. The Human Genome Project: big science transforms biology and medicine.

    PubMed

    Hood, Leroy; Rowen, Lee

    2013-01-01

    The Human Genome Project has transformed biology through its integrated big science approach to deciphering a reference human genome sequence along with the complete sequences of key model organisms. The project exemplifies the power, necessity and success of large, integrated, cross-disciplinary efforts - so-called 'big science' - directed towards complex major objectives. In this article, we discuss the ways in which this ambitious endeavor led to the development of novel technologies and analytical tools, and how it brought the expertise of engineers, computer scientists and mathematicians together with biologists. It established an open approach to data sharing and open-source software, thereby making the data resulting from the project accessible to all. The genome sequences of microbes, plants and animals have revolutionized many fields of science, including microbiology, virology, infectious disease and plant biology. Moreover, deeper knowledge of human sequence variation has begun to alter the practice of medicine. The Human Genome Project has inspired subsequent large-scale data acquisition initiatives such as the International HapMap Project, 1000 Genomes, and The Cancer Genome Atlas, as well as the recently announced Human Brain Project and the emerging Human Proteome Project.

  5. Big Data Science Education: A Case Study of a Project-Focused Introductory Course

    ERIC Educational Resources Information Center

    Saltz, Jeffrey; Heckman, Robert

    2015-01-01

    This paper reports on a case study of a project-focused introduction to big data science course. The pedagogy of the course leveraged boundary theory, where students were positioned to be at the boundary between a client's desire to understand their data and the academic class. The results of the case study demonstrate that using live clients…

  6. Detection and Characterisation of Meteors as a Big Data Citizen Science project

    NASA Astrophysics Data System (ADS)

    Gritsevich, M.

    2017-12-01

    Out of around 50,000 meteorites currently known to science, the atmospheric passage was recorded instrumentally in only 30 cases with sufficient detail to derive their atmospheric trajectories and pre-impact heliocentric orbits. Similarly, while observations of meteors add thousands of new entries per month to existing databases, it is extremely rare that they lead to a meteorite recovery. Meteor studies thus represent an excellent example of a Big Data citizen science project, where progress in the field largely depends on the prompt identification and characterisation of meteor events as well as on extensive and valuable contributions by amateur observers. Over the last couple of decades, technological advancements in observational techniques have yielded drastic improvements in the quality, quantity and diversity of meteor data, while even more ambitious instruments are about to become operational. This empowers meteor science to broaden its experimental and theoretical horizons and to pursue more advanced scientific goals. We review some of the developments that push meteor science into the Big Data era, which requires more complex methodological approaches through interdisciplinary collaborations with other branches of physics and computer science. We argue that meteor science should become an integral part of large surveys in astronomy, aeronomy and space physics, and tackle the complexity of the micro-physics of meteor plasma and its interaction with the atmosphere. The recent increase in interest in meteor science triggered by the Chelyabinsk fireball helps in building the case for technologically and logistically more ambitious meteor projects. This requires developing new methodological approaches in meteor research, with Big Data science and close collaboration between citizen science, geoscience and astronomy as critical elements. We discuss possibilities for improvements and promote an opportunity for collaboration in meteor science within the currently established EU COST BigSkyEarth (http://bigskyearth.eu/) network.

  7. Big questions, big science: meeting the challenges of global ecology.

    PubMed

    Schimel, David; Keller, Michael

    2015-04-01

    Ecologists are increasingly tackling questions that require significant infrastructure, large experiments, networks of observations, and complex data and computation. Key hypotheses in ecology increasingly require more investment, and larger data sets, than can be collected by a single investigator's or a single group of investigators' labs, sustained for longer than a typical grant. Large-scale projects are expensive, so their scientific return on investment has to justify the opportunity cost: the science foregone because resources were expended on a large project rather than on a number of individual projects. In addition, their management must be accountable and efficient in the use of significant resources, requiring formal systems engineering and project management to mitigate the risk of failure. Mapping the scientific method into formal project management requires both scientists able to work in that context and a project implementation team sensitive to the unique requirements of ecology. Sponsoring agencies, under external and internal pressures, are often pushed towards counterproductive project management, but a scientific community aware of and experienced in large-project science can mitigate these tendencies. For big ecology to result in great science, ecologists must become informed, aware and engaged in the advocacy and governance of large ecological projects.

  8. Big Science! Big Problems?

    ERIC Educational Resources Information Center

    Beigel, Allan

    1991-01-01

    Lessons learned by the University of Arizona through participation in two major scientific projects, construction of an astronomical observatory and a super cyclotron, are discussed. Four criteria for institutional participation in such projects are outlined, including consistency with institutional mission, adequate resources, leadership, and…

  9. Meteor Observations as Big Data Citizen Science

    NASA Astrophysics Data System (ADS)

    Gritsevich, M.; Vinkovic, D.; Schwarz, G.; Nina, A.; Koschny, D.; Lyytinen, E.

    2016-12-01

    Meteor science represents an excellent example of a citizen science project, where progress in the field has been largely determined by amateur observations. Over the last couple of decades, technological advancements in observational techniques have yielded drastic improvements in the quality, quantity and diversity of meteor data, while even more ambitious instruments are about to become operational. This empowers meteor science to broaden its experimental and theoretical horizons and to pursue more advanced scientific goals. We review some of the developments that push meteor science into the Big Data era, which requires more complex methodological approaches through interdisciplinary collaborations with other branches of physics and computer science. We argue that meteor science should become an integral part of large surveys in astronomy, aeronomy and space physics, and tackle the complexity of the micro-physics of meteor plasma and its interaction with the atmosphere. The recent increase in interest in meteor science triggered by the Chelyabinsk fireball helps in building the case for technologically and logistically more ambitious meteor projects. This requires developing new methodological approaches in meteor research, with Big Data science and close collaboration between citizen science, geoscience and astronomy as critical elements. We discuss possibilities for improvements and promote an opportunity for collaboration in meteor science within the currently established BigSkyEarth (http://bigskyearth.eu/) network.

  10. The Human Genome Project: big science transforms biology and medicine

    PubMed Central

    2013-01-01

    The Human Genome Project has transformed biology through its integrated big science approach to deciphering a reference human genome sequence along with the complete sequences of key model organisms. The project exemplifies the power, necessity and success of large, integrated, cross-disciplinary efforts - so-called ‘big science’ - directed towards complex major objectives. In this article, we discuss the ways in which this ambitious endeavor led to the development of novel technologies and analytical tools, and how it brought the expertise of engineers, computer scientists and mathematicians together with biologists. It established an open approach to data sharing and open-source software, thereby making the data resulting from the project accessible to all. The genome sequences of microbes, plants and animals have revolutionized many fields of science, including microbiology, virology, infectious disease and plant biology. Moreover, deeper knowledge of human sequence variation has begun to alter the practice of medicine. The Human Genome Project has inspired subsequent large-scale data acquisition initiatives such as the International HapMap Project, 1000 Genomes, and The Cancer Genome Atlas, as well as the recently announced Human Brain Project and the emerging Human Proteome Project. PMID:24040834

  11. A Demonstration of Big Data Technology for Data Intensive Earth Science (Invited)

    NASA Astrophysics Data System (ADS)

    Kuo, K.; Clune, T.; Ramachandran, R.; Rushing, J.; Fekete, G.; Lin, A.; Doan, K.; Oloso, A. O.; Duffy, D.

    2013-12-01

    Big Data technologies exhibit great potential to change the way we conduct scientific investigations, especially the analysis of voluminous and diverse data sets. Obviously, not all Big Data technologies are applicable to all aspects of scientific data analysis. Our NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) project, Automated Event Service (AES), pioneers the exploration of Big Data technologies for data-intensive Earth science. Since Earth science data are largely stored and manipulated in the form of multidimensional arrays, the project first evaluates the array performance of several candidate Big Data technologies, including MapReduce (Hadoop), SciDB, and a custom-built Polaris system, which have one important feature in common: a shared-nothing architecture. The evaluation finds SciDB to be the most promising. In this presentation, we demonstrate SciDB using a couple of use cases, each operating on a distinct data set in a regular latitude-longitude grid. The first use case is the discovery and identification of blizzards using NASA's Modern-Era Retrospective analysis for Research and Applications (MERRA) data sets. The other finds diurnal signals over an 8-year period by correlating parameters retrieved from SSM/I data from three different instruments with different equator crossing times. In addition, the AES project is also developing a collaborative component to enable the sharing of event queries and results. Preliminary capabilities will be presented as well.
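    The array-centric event detection described in this record can be illustrated with a minimal sketch. This is a hypothetical illustration in plain Python (the AES project itself evaluated SciDB and Hadoop over reanalysis data); the grid values, variable name, and threshold below are invented for the example:

```python
# Hypothetical sketch: scan a regular latitude-longitude grid for cells
# whose value exceeds a threshold, the basic shape of array-style event
# discovery (e.g., flagging candidate blizzard cells in a snowfall field).

def detect_events(grid, threshold):
    """Return (lat_index, lon_index) pairs for cells exceeding threshold."""
    return [(i, j)
            for i, row in enumerate(grid)
            for j, value in enumerate(row)
            if value > threshold]

# Toy 3x4 "snowfall rate" grid (values invented for illustration)
snow = [
    [0.1, 0.2, 5.0, 0.0],
    [0.0, 6.2, 7.1, 0.3],
    [0.2, 0.1, 0.0, 0.4],
]
cells = detect_events(snow, threshold=5.0)
print(cells)  # [(1, 1), (1, 2)]
```

    In a shared-nothing array database such as SciDB, the same threshold scan would be expressed as a declarative filter over array chunks distributed across nodes rather than an explicit Python loop.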

  12. The New Big Science: What's New, What's Not, and What's the Difference

    NASA Astrophysics Data System (ADS)

    Westfall, Catherine

    2016-03-01

    This talk will start with a brief recap of the development of the "Big Science" epitomized by high energy physics, that is, the science that flourished after WWII based on accelerators, teams, and price tags that grew ever larger. I will then explain the transformation that started in the 1980s and culminated in the 1990s when the Cold War ended and the next big machine needed to advance high energy physics, the multi-billion dollar Superconducting Super Collider (SSC), was cancelled. I will go on to outline the curious series of events that ushered in the New Big Science, a form of research well suited to a post-Cold War environment that valued practical rather than esoteric projects. To show the impact of the New Big Science I will describe how decisions were "set into concrete" during the development of experimental equipment at the Thomas Jefferson National Accelerator Facility in Newport News, Virginia.

  13. NASA's Big Data Task Force

    NASA Astrophysics Data System (ADS)

    Holmes, C. P.; Kinter, J. L.; Beebe, R. F.; Feigelson, E.; Hurlburt, N. E.; Mentzel, C.; Smith, G.; Tino, C.; Walker, R. J.

    2017-12-01

    Two years ago NASA established the Ad Hoc Big Data Task Force (BDTF - https://science.nasa.gov/science-committee/subcommittees/big-data-task-force), an advisory working group within the NASA Advisory Council system. The scope of the Task Force included all NASA Big Data programs, projects, missions, and activities. The Task Force focused on topics such as the existing and planned evolution of NASA's science data cyber-infrastructure, which supports broad access to data repositories for NASA Science Mission Directorate missions; best practices within NASA, other Federal agencies, private industry and research institutions; and Federal initiatives related to big data and data access. The BDTF has completed its two-year term and produced several recommendations plus four white papers for NASA's Science Mission Directorate. This presentation will discuss the activities and results of the Task Force, including summaries of key points from its focused study topics. The paper serves as an introduction to the papers following in this ESSI session.

  14. The community-driven BiG CZ software system for integration and analysis of bio- and geoscience data in the critical zone

    NASA Astrophysics Data System (ADS)

    Aufdenkampe, A. K.; Mayorga, E.; Horsburgh, J. S.; Lehnert, K. A.; Zaslavsky, I.; Valentine, D. W., Jr.; Richard, S. M.; Cheetham, R.; Meyer, F.; Henry, C.; Berg-Cross, G.; Packman, A. I.; Aronson, E. L.

    2014-12-01

    Here we present prototypes of a new scientific software system designed around the new Observations Data Model version 2.0 (ODM2, https://github.com/UCHIC/ODM2) to substantially enhance the integration of Biological and Geological (BiG) data for Critical Zone (CZ) science. The CZ science community takes as its charge the effort to integrate theory, models and data from the multitude of disciplines collectively studying processes on the Earth's surface. Its central scientific challenge is to develop a "grand unifying theory" of the critical zone through a theory-model-data fusion approach. The key missing piece is a cyberinfrastructure for seamless 4D visual exploration of the integrated knowledge (data, model outputs and interpolations) from all the bio- and geoscience disciplines relevant to critical zone structure and function, similar to today's ability to easily explore historical satellite imagery and photographs of the Earth's surface using Google Earth. This project takes the first "BiG" steps toward answering that need. The overall goal of this project is to co-develop with the CZ science and broader community, including natural resource managers and stakeholders, a web-based integration and visualization environment for joint analysis of cross-scale bio- and geoscience processes in the critical zone (BiG CZ), spanning experimental and observational designs.
    We will: (1) engage the CZ and broader community to co-develop and deploy the BiG CZ software stack; (2) develop the BiG CZ Portal web application for intuitive, high-performance map-based discovery, visualization, access and publication of data by scientists, resource managers, educators and the general public; (3) develop the BiG CZ Toolbox to enable cyber-savvy CZ scientists to access BiG CZ Application Programming Interfaces (APIs); and (4) develop the BiG CZ Central software stack to bridge data systems developed for multiple critical zone domains into a single metadata catalog. The entire BiG CZ software system is being developed in public repositories as a modular suite of open source software projects. It will be built around the new Observations Data Model Version 2.0 (ODM2), which has been developed by members of the BiG CZ project team, with community input, under separate funding.

  15. Big Biology: Supersizing Science During the Emergence of the 21st Century

    PubMed Central

    Vermeulen, Niki

    2017-01-01

    Is biology the youngest member of the Big Science family? Increased collaboration in biological research became the subject of heated discussion in the wake of the Human Genome Project, but debates and reflections have mostly remained polemical, showing limited appreciation for the diversity and explanatory power of the concept of Big Science. At the same time, scholars of science and technology studies have avoided the term Big Science in their descriptions of the changing research landscape. This interdisciplinary article combines a conceptual analysis of Big Science with diverse data and ideas from a multi-method study of several large research projects in biology. The aim is to develop an empirically grounded, nuanced and analytically useful understanding of Big Biology, moving beyond the normative debates with their simple dichotomies and rhetorical positions. While the concept of Big Science can be seen as a fashion in science policy, by now perhaps even an old-fashioned one, I argue that its analytical use draws our attention to the expansion of collaboration in the life sciences. The analysis of Big Biology reveals differences from Big Physics and other forms of Big Science, namely in the patterns of research organization, the technologies used, and the societal contexts in which it operates. Reflections on Big Science, Big Biology and their relation to knowledge production can thus place recent claims about fundamental changes in life science research in historical context. PMID:27215209

  16. Bigfoot Field Manual

    NASA Astrophysics Data System (ADS)

    Campbell, J. L.; Burrows, S.; Gower, S. T.; Cohen, W. B.

    1999-09-01

    The BigFoot Project is funded by the Earth Science Enterprise to collect and organize data to be used in the EOS Validation Program. The data collected by the BigFoot Project are unique in being ground-based observations coincident with satellite overpasses. In addition to collecting data, the BigFoot Project will develop and test new algorithms for scaling point measurements to the same spatial scales as the EOS satellite products. This BigFoot Field Manual will be used to achieve completeness and consistency of data collected at four initial BigFoot sites and at future sites that may collect similar validation data. Therefore, validation datasets submitted to the ORNL DAAC that have been compiled in a manner consistent with the field manual will be especially valuable in the validation program.

  17. Reviews Book: Extended Project Student Guide Book: My Inventions Book: ASE Guide to Research in Science Education Classroom Video: The Science of Starlight Software: SPARKvue Book: The Geek Manifesto Ebook: A Big Ball of Fire Apps

    NASA Astrophysics Data System (ADS)

    2014-05-01

    WE RECOMMEND
    Level 3 Extended Project Student Guide: A non-specialist, generally useful and nicely put together guide to project work.
    ASE Guide to Research in Science Education: Few words wasted in this handy introduction and reference.
    The Science of Starlight: Slow but steady DVD covers useful ground.
    SPARKvue: Impressive software now available as an app.
    WORTH A LOOK
    My Inventions and Other Writings: Science, engineering, autobiography, visions and psychic phenomena mixed in a strange but revealing concoction.
    The Geek Manifesto: Why Science Matters: More enthusiasm than science, but a good motivator and interesting.
    A Big Ball of Fire: Your questions about the Sun answered: Free iTunes download made by and for students goes down well.
    APPS
    Collider visualises LHC experiments ... Science Museum app enhances school trips ... useful information for the Cambridge Science Festival.

  18. Rethinking Big Science. Modest, mezzo, grand science and the development of the Bevalac, 1971-1993.

    PubMed

    Westfall, Catherine

    2003-03-01

    Historians of science have tended to focus exclusively on scale in investigations of large-scale research, perhaps because it has been easy to assume that comprehending a phenomenon dubbed "Big Science" hinges on an understanding of bigness. A close look at Lawrence Berkeley Laboratory's Bevalac, a medium-scale "mezzo science" project formed by uniting two preexisting machines--the modest SuperHILAC and the grand Bevatron--shows what can be gained by overcoming this preoccupation with bigness. The Bevalac story reveals how interconnections, connections, and disconnections ultimately led to the development of a new kind of science that transformed the landscape of large-scale research in the United States. Important lessons in historiography also emerge: the value of framing discussions in terms of networks, the necessity of constantly expanding and refining methodology, and the importance of avoiding the rhetoric of participants and instead finding words to tell our own stories.

  19. Bringing the Tools of Big Science to Bear on Local Environmental Challenges

    ERIC Educational Resources Information Center

    Bronson, Scott; Jones, Keith W.; Brown, Maria

    2013-01-01

    We describe an interactive collaborative environmental education project that makes advanced laboratory facilities at Brookhaven National Laboratory accessible for one-year or multi-year science projects for the high school level. Cyber-enabled Environmental Science (CEES) utilizes web conferencing software to bring multi-disciplinary,…

  20. Big Outcrops and Big Ideas in Earth Science K-8 Professional Development

    NASA Astrophysics Data System (ADS)

    Baldwin, K. A.; Cooper, C. M.; Cavagnetto, A.; Morrison, J.; Adesope, O.

    2014-12-01

    Washington State has recently adopted the Next Generation Science Standards (NGSS), and state leaders are now working toward supporting teachers' implementation of the new standards and the pedagogical practices that support them. This poster describes one such professional development (PD) effort. The Enhancing Understanding of Concepts and Processes of Science (EUCAPS) project serves 31 K-8 in-service teachers in two southeast Washington school districts. In year two of this three-year PD project, in-service teachers explored the Earth sciences and pedagogical approaches such as the Science Writing Heuristic, concept mapping, and activities that emphasized the epistemic nature of science. The goals of the EUCAPS PD project are to deepen in-service teachers' understanding of big ideas in science and to support them as they transition to the NGSS. Teachers used concept maps to document their knowledge of Earth science processes before and after visiting a local field site in Lewiston, Idaho. In the context of immersive inquiries, teachers collected field-based evidence to support their claims about the geological history of the field site. Teachers presented their claims and evidence to their peers in the form of a story about the local geologic history. This poster will present an overview of the PD as well as provide examples of teachers' work and alignment with the NGSS.

  1. The New Big Science at the NSLS

    NASA Astrophysics Data System (ADS)

    Crease, Robert

    2016-03-01

    The term "New Big Science" refers to a phase shift in the kind of large-scale science that was carried out throughout the U.S. National Laboratory system, when large-scale materials science accelerators rather than high-energy physics accelerators became marquee projects at most major basic research laboratories in the post-Cold War era, accompanied by important changes in the character and culture of the research ecosystem at these laboratories. This talk explores some aspects of this phase shift at BNL's National Synchrotron Light Source.

  2. Managing a big ground-based astronomy project: the Thirty Meter Telescope (TMT) project

    NASA Astrophysics Data System (ADS)

    Sanders, Gary H.

    2008-07-01

    TMT is a big science project and its scale is greater than previous ground-based optical/infrared telescope projects. This paper will describe the ideal "linear" project and how the TMT project departs from that ideal. The paper will describe the needed adaptations to successfully manage real world complexities. The progression from science requirements to a reference design, the development of a product-oriented Work Breakdown Structure (WBS) and an organization that parallels the WBS, the implementation of system engineering, requirements definition and the progression through Conceptual Design to Preliminary Design will be summarized. The development of a detailed cost estimate structured by the WBS, and the methodology of risk analysis to estimate contingency fund requirements will be summarized. Designing the project schedule defines the construction plan and, together with the cost model, provides the basis for executing the project guided by an earned value performance measurement system.

  3. Bigfoot Field Manual, Version 2.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, J.L.; Burrows, S.; Gower, S.T.

    1999-09-01

    The BigFoot Project is funded by the Earth Science Enterprise to collect and organize data to be used in the National Aeronautics and Space Administration's Earth Observing System (EOS) Validation Program. The data collected by the BigFoot Project are unique in being ground-based observations coincident with satellite overpasses. In addition to collecting data, the BigFoot Project will develop and test new algorithms for scaling point measurements to the same spatial scales as the EOS satellite products. This BigFoot Field Manual will be used to achieve completeness and consistency of data collected at four initial BigFoot sites and at future sites that may collect similar validation data. Therefore, validation datasets submitted to the Oak Ridge National Laboratory Distributed Active Archive Center that have been compiled in a manner consistent with the field manual will be especially valuable in the validation program.

  4. Big Data and Data Science in Critical Care.

    PubMed

    Sanchez-Pinto, L Nelson; Luo, Yuan; Churpek, Matthew M

    2018-05-09

    The digitalization of the health-care system has resulted in a deluge of clinical Big Data and has prompted the rapid growth of data science in medicine. Data science, the field of study dedicated to the principled extraction of knowledge from complex data, is particularly relevant in the critical care setting. The availability of large amounts of data in the ICU, the need for better evidence-based care, and the complexity of critical illness make the use of data science techniques and data-driven research particularly appealing to intensivists. Despite the increasing number of studies and publications in the field, thus far there have been few examples of data science projects that have resulted in successful implementations of data-driven systems in the ICU. However, given the expected growth in the field, intensivists should be familiar with the opportunities and challenges of Big Data and data science. The present article reviews the definitions, types of algorithms, applications, challenges, and future of Big Data and data science in critical care.

  5. Mash-up of techniques between data crawling/transfer, data preservation/stewardship and data processing/visualization technologies on a science cloud system designed for Earth and space science: a report of successful operation and science projects of the NICT Science Cloud

    NASA Astrophysics Data System (ADS)

    Murata, K. T.

    2014-12-01

    Data-intensive or data-centric science is 4th paradigm after observational and/or experimental science (1st paradigm), theoretical science (2nd paradigm) and numerical science (3rd paradigm). Science cloud is an infrastructure for 4th science methodology. The NICT science cloud is designed for big data sciences of Earth, space and other sciences based on modern informatics and information technologies [1]. Data flow on the cloud is through the following three techniques; (1) data crawling and transfer, (2) data preservation and stewardship, and (3) data processing and visualization. Original tools and applications of these techniques have been designed and implemented. We mash up these tools and applications on the NICT Science Cloud to build up customized systems for each project. In this paper, we discuss science data processing through these three steps. For big data science, data file deployment on a distributed storage system should be well designed in order to save storage cost and transfer time. We developed a high-bandwidth virtual remote storage system (HbVRS) and data crawling tool, NICTY/DLA and Wide-area Observation Network Monitoring (WONM) system, respectively. Data files are saved on the cloud storage system according to both data preservation policy and data processing plan. The storage system is developed via distributed file system middle-ware (Gfarm: GRID datafarm). It is effective since disaster recovery (DR) and parallel data processing are carried out simultaneously without moving these big data from storage to storage. Data files are managed on our Web application, WSDBank (World Science Data Bank). The big-data on the cloud are processed via Pwrake, which is a workflow tool with high-bandwidth of I/O. There are several visualization tools on the cloud; VirtualAurora for magnetosphere and ionosphere, VDVGE for google Earth, STICKER for urban environment data and STARStouch for multi-disciplinary data. 
There are 30 projects running on the NICT Science Cloud for Earth and space science; in 2013, 56 refereed papers were published. Finally, we introduce a couple of successful Earth and space science results obtained with these three techniques on the NICT Science Cloud. [1] http://sc-web.nict.go.jp
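The three-step data flow described above (crawl/transfer, preserve with integrity checks, process) can be sketched as a minimal single-machine pipeline. This is an illustrative sketch only: the function names, file layout, and checksum catalog are hypothetical stand-ins, not the actual NICTY/DLA, Gfarm, or Pwrake APIs.

```python
import hashlib
import shutil
from pathlib import Path

def crawl(source_dir: Path, staging: Path) -> list:
    """Step 1: 'crawl' new data files from an observation site into staging."""
    staging.mkdir(parents=True, exist_ok=True)
    fetched = []
    for f in sorted(source_dir.glob("*.dat")):
        dest = staging / f.name
        shutil.copy2(f, dest)
        fetched.append(dest)
    return fetched

def preserve(files, archive: Path) -> dict:
    """Step 2: place files on (simulated) preservation storage, recording a
    checksum per file so integrity can be verified after replication or
    disaster recovery."""
    archive.mkdir(parents=True, exist_ok=True)
    catalog = {}
    for f in files:
        digest = hashlib.sha256(f.read_bytes()).hexdigest()
        shutil.copy2(f, archive / f.name)
        catalog[f.name] = digest
    return catalog

def process(archive: Path) -> int:
    """Step 3: a trivial 'processing' pass -- count total records on storage."""
    return sum(len(p.read_bytes().splitlines()) for p in archive.glob("*.dat"))
```

A real deployment would replace the local copies with network crawling and placement on distributed storage, but the same crawl/preserve/process separation applies.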

  6. Mount Sharp Inside Gale Crater, Mars

    NASA Image and Video Library

    2012-03-28

    Curiosity, the big rover of NASA's Mars Science Laboratory mission, will land in August 2012 near the foot of a mountain inside Gale Crater. The mission's project science group is calling the mountain Mount Sharp.

  7. Taking a 'Big Data' approach to data quality in a citizen science project.

    PubMed

    Kelling, Steve; Fink, Daniel; La Sorte, Frank A; Johnston, Alison; Bruns, Nicholas E; Hochachka, Wesley M

    2015-11-01

    Data from well-designed experiments provide the strongest evidence of causation in biodiversity studies. However, for many species the collection of such data cannot be scaled to the spatial and temporal extents required to understand patterns at the population level. Only citizen science projects can gather sufficient quantities of data, but data collected by volunteers are inherently noisy and heterogeneous. Here we describe a 'Big Data' approach to improving data quality in eBird, a global citizen science project that gathers bird observations. First, eBird's data submission design ensures that all data meet high standards of completeness and accuracy. Second, we take a 'sensor calibration' approach to measuring individual variation in eBird participants' ability to detect and identify birds. Third, we use species distribution models to fill in data gaps. Finally, we provide examples of novel analyses exploring population-level patterns in bird distributions.
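The 'sensor calibration' step, which treats each volunteer as a noisy sensor whose detection ability can be estimated from the checklists they submit, can be sketched roughly as follows. The data layout and scoring rule here are hypothetical simplifications for illustration, not eBird's actual statistical models.

```python
from collections import defaultdict

def detection_rates(checklists):
    """Estimate each observer's detection ability as the average fraction
    of the overall species pool they report per complete checklist.

    checklists: iterable of (observer_id, set_of_species_reported) pairs.
    """
    checklists = list(checklists)
    # Reference pool: every species reported by anyone in this dataset.
    pool = set().union(*(species for _, species in checklists))
    per_observer = defaultdict(list)
    for observer, species in checklists:
        per_observer[observer].append(len(species) / len(pool))
    return {obs: sum(f) / len(f) for obs, f in per_observer.items()}
```

A downstream species distribution model could then down-weight observations from low-scoring observers, which is the spirit of the calibration approach described in the abstract.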

  8. Database Resources of the BIG Data Center in 2018

    PubMed Central

    Xu, Xingjian; Hao, Lili; Zhu, Junwei; Tang, Bixia; Zhou, Qing; Song, Fuhai; Chen, Tingting; Zhang, Sisi; Dong, Lili; Lan, Li; Wang, Yanqing; Sang, Jian; Hao, Lili; Liang, Fang; Cao, Jiabao; Liu, Fang; Liu, Lin; Wang, Fan; Ma, Yingke; Xu, Xingjian; Zhang, Lijuan; Chen, Meili; Tian, Dongmei; Li, Cuiping; Dong, Lili; Du, Zhenglin; Yuan, Na; Zeng, Jingyao; Zhang, Zhewen; Wang, Jinyue; Shi, Shuo; Zhang, Yadong; Pan, Mengyu; Tang, Bixia; Zou, Dong; Song, Shuhui; Sang, Jian; Xia, Lin; Wang, Zhennan; Li, Man; Cao, Jiabao; Niu, Guangyi; Zhang, Yang; Sheng, Xin; Lu, Mingming; Wang, Qi; Xiao, Jingfa; Zou, Dong; Wang, Fan; Hao, Lili; Liang, Fang; Li, Mengwei; Sun, Shixiang; Zou, Dong; Li, Rujiao; Yu, Chunlei; Wang, Guangyu; Sang, Jian; Liu, Lin; Li, Mengwei; Li, Man; Niu, Guangyi; Cao, Jiabao; Sun, Shixiang; Xia, Lin; Yin, Hongyan; Zou, Dong; Xu, Xingjian; Ma, Lina; Chen, Huanxin; Sun, Yubin; Yu, Lei; Zhai, Shuang; Sun, Mingyuan; Zhang, Zhang; Zhao, Wenming; Xiao, Jingfa; Bao, Yiming; Song, Shuhui; Hao, Lili; Li, Rujiao; Ma, Lina; Sang, Jian; Wang, Yanqing; Tang, Bixia; Zou, Dong; Wang, Fan

    2018-01-01

    Abstract The BIG Data Center at Beijing Institute of Genomics (BIG) of the Chinese Academy of Sciences provides freely open access to a suite of database resources in support of worldwide research activities in both academia and industry. With the vast amounts of omics data generated at ever-greater scales and rates, the BIG Data Center is continually expanding, updating and enriching its core database resources through big-data integration and value-added curation, including BioCode (a repository archiving bioinformatics tool codes), BioProject (a biological project library), BioSample (a biological sample library), Genome Sequence Archive (GSA, a data repository for archiving raw sequence reads), Genome Warehouse (GWH, a centralized resource housing genome-scale data), Genome Variation Map (GVM, a public repository of genome variations), Gene Expression Nebulas (GEN, a database of gene expression profiles based on RNA-Seq data), Methylation Bank (MethBank, an integrated databank of DNA methylomes), and Science Wikis (a series of biological knowledge wikis for community annotations). In addition, three featured web services are provided, viz., BIG Search (search as a service; a scalable inter-domain text search engine), BIG SSO (single sign-on as a service; a user access control system to gain access to multiple independent systems with a single ID and password) and Gsub (submission as a service; a unified submission service for all relevant resources). All of these resources are publicly accessible through the home page of the BIG Data Center at http://bigd.big.ac.cn. PMID:29036542

  9. LLNL's Big Science Capabilities Help Spur Over $796 Billion in U.S. Economic Activity Sequencing the Human Genome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, Jeffrey S.

    LLNL’s successful history of taking on big science projects spans beyond national security and has helped create billions of dollars per year in new economic activity. One example is LLNL’s role in helping sequence the human genome. Over $796 billion in new economic activity in over half a dozen fields has been documented since LLNL successfully completed this Grand Challenge.

  10. Earth Science Capability Demonstration Project

    NASA Technical Reports Server (NTRS)

    Cobleigh, Brent

    2006-01-01

    A viewgraph presentation reviewing the Earth Science Capability Demonstration Project is shown. The contents include: 1) ESCD Project; 2) Available Flight Assets; 3) Ikhana Procurement; 4) GCS Layout; 5) Baseline Predator B Architecture; 6) Ikhana Architecture; 7) UAV Capability Assessment; 8) The Big Picture; 9) NASA/NOAA UAV Demo (5/05 to 9/05); 10) NASA/USFS Western States Fire Mission (8/06); and 11) Suborbital Telepresence.

  11. Database Resources of the BIG Data Center in 2018.

    PubMed

    2018-01-04

    The BIG Data Center at Beijing Institute of Genomics (BIG) of the Chinese Academy of Sciences provides freely open access to a suite of database resources in support of worldwide research activities in both academia and industry. With the vast amounts of omics data generated at ever-greater scales and rates, the BIG Data Center is continually expanding, updating and enriching its core database resources through big-data integration and value-added curation, including BioCode (a repository archiving bioinformatics tool codes), BioProject (a biological project library), BioSample (a biological sample library), Genome Sequence Archive (GSA, a data repository for archiving raw sequence reads), Genome Warehouse (GWH, a centralized resource housing genome-scale data), Genome Variation Map (GVM, a public repository of genome variations), Gene Expression Nebulas (GEN, a database of gene expression profiles based on RNA-Seq data), Methylation Bank (MethBank, an integrated databank of DNA methylomes), and Science Wikis (a series of biological knowledge wikis for community annotations). In addition, three featured web services are provided, viz., BIG Search (search as a service; a scalable inter-domain text search engine), BIG SSO (single sign-on as a service; a user access control system to gain access to multiple independent systems with a single ID and password) and Gsub (submission as a service; a unified submission service for all relevant resources). All of these resources are publicly accessible through the home page of the BIG Data Center at http://bigd.big.ac.cn. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. Evolution of the Air Toxics under the Big Sky Program

    ERIC Educational Resources Information Center

    Marra, Nancy; Vanek, Diana; Hester, Carolyn; Holian, Andrij; Ward, Tony; Adams, Earle; Knuth, Randy

    2011-01-01

    As a yearlong exploration of air quality and its relation to respiratory health, the "Air Toxics Under the Big Sky" program offers opportunities for students to learn and apply science process skills through self-designed inquiry-based research projects conducted within their communities. The program follows a systematic scope and sequence…

  13. From darwin to the census of marine life: marine biology as big science.

    PubMed

    Vermeulen, Niki

    2013-01-01

    With the development of the Human Genome Project, a heated debate emerged on biology becoming 'big science'. However, biology already has a long tradition of collaboration, as natural historians were part of the first collective scientific efforts: exploring the variety of life on earth. Such mappings of life still continue today, and if field biology is gradually becoming an important subject of studies into big science, research into life in the world's oceans is not taken into account yet. This paper therefore explores marine biology as big science, presenting the historical development of marine research towards the international 'Census of Marine Life' (CoML) making an inventory of life in the world's oceans. Discussing various aspects of collaboration--including size, internationalisation, research practice, technological developments, application, and public communication--I will ask if CoML still resembles traditional collaborations to collect life. While showing both continuity and change, I will argue that marine biology is a form of natural history: a specific way of working together in biology that has transformed substantially in interaction with recent developments in the life sciences and society. As a result, the paper does not only give an overview of transformations towards large scale research in marine biology, but also shines a new light on big biology, suggesting new ways to deepen the understanding of collaboration in the life sciences by distinguishing between different 'collective ways of knowing'.

  14. Forget the hype or reality. Big data presents new opportunities in Earth Science.

    NASA Astrophysics Data System (ADS)

    Lee, T. J.

    2015-12-01

    Earth science is arguably one of the most mature science disciplines, constantly acquiring, curating, and utilizing a large volume of highly diverse data. We dealt with big data before "big data" was a term. For example, while developing the EOS program in the 1980s, the EOS Data and Information System (EOSDIS) was built to manage the vast amount of data acquired by the EOS fleet of satellites, and EOSDIS has remained a shining example of a modern science data system over the past two decades. With the explosion of the internet, the rise of social media, and the spread of sensors everywhere, the big data era has brought new challenges. Google developed its search algorithm and a distributed data management system, and the open source communities quickly followed, developing the Hadoop file system to facilitate MapReduce workloads. The internet continues to generate tens of petabytes of data every day, and there is a significant shortage of algorithms and knowledgeable manpower to mine it. In response, the federal government created big data programs that fund research and development projects and training programs to tackle these new challenges. Meanwhile, compared with the internet data explosion, the Earth science big data problem has become quite small. Nevertheless, the big data era presents an opportunity for Earth science to evolve. We have learned about MapReduce algorithms, in-memory data mining, machine learning, graph analysis, and semantic web technologies. How do we apply these new technologies to our discipline and bring the hype down to Earth? In this talk, I will discuss how we might apply some of the big data technologies to our discipline and solve many of our challenging problems. More importantly, I will propose a new Earth science data system architecture to enable new types of scientific inquiry.
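The MapReduce pattern mentioned above can be illustrated with a minimal single-machine sketch; the gridded-temperature example is hypothetical, and frameworks such as Hadoop distribute exactly these map, shuffle (sort-by-key), and reduce stages across a cluster.

```python
from itertools import groupby
from operator import itemgetter

def map_phase(records):
    """Map: emit (key, value) pairs -- here (grid_cell, temperature)."""
    for cell, temp in records:
        yield cell, temp

def reduce_phase(pairs):
    """Shuffle (sort by key), then reduce: average the values per key."""
    out = {}
    for cell, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        temps = [t for _, t in group]
        out[cell] = sum(temps) / len(temps)
    return out
```

The sort-before-group step is the single-machine analogue of the distributed shuffle: it guarantees every value for a given key reaches the same reducer.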

  15. The Storyboard's Big Picture

    NASA Technical Reports Server (NTRS)

    Malloy, Cheryl A.; Cooley, William

    2003-01-01

    At Science Applications International Corporation (SAIC), Cape Canaveral Office, we're using a project management tool that facilitates team communication, keeps our project team focused, streamlines work and identifies potential issues. What did it cost us to install the tool? Almost nothing.

  16. Air Toxics under the Big Sky: A Real-World Investigation to Engage High School Science Students

    ERIC Educational Resources Information Center

    Adams, Earle; Smith, Garon; Ward, Tony J.; Vanek, Diana; Marra, Nancy; Jones, David; Henthorn, Melissa; Striebel, Jim

    2008-01-01

    This paper describes a problem-based chemistry education model in which students perform scientific research on a local, environmentally relevant problem. The project is a collaboration between The University of Montana and local high schools in and around Missoula, Montana. "Air Toxics under the Big Sky" involves high school students in collecting…

  17. samiDB: A Prototype Data Archive for Big Science Exploration

    NASA Astrophysics Data System (ADS)

    Konstantopoulos, I. S.; Green, A. W.; Cortese, L.; Foster, C.; Scott, N.

    2015-04-01

    samiDB is an archive, database, and query engine to serve the spectra, spectral hypercubes, and high-level science products that make up the SAMI Galaxy Survey. Based on the versatile Hierarchical Data Format (HDF5), samiDB does not depend on relational database structures and hence lightens the setup and maintenance load imposed on science teams by metadata tables. The code, written in Python, covers the ingestion, querying, and exporting of data as well as the automatic setup of an HTML schema browser. samiDB serves as a maintenance-light data archive for Big Science and can be adopted and adapted by science teams that lack the means to hire professional archivists to set up the data back end for their projects.
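The archive design can be illustrated with a toy in-memory stand-in for the HDF5 hierarchy that samiDB actually uses (via h5py): datasets live at slash-separated paths, carry attribute dictionaries, and queries filter on those attributes rather than on relational tables. The paths and attribute names below are hypothetical, not the real SAMI schema.

```python
class ToyArchive:
    """A toy hierarchical archive: datasets live at '/'-separated paths,
    each carrying an attribute dict, mimicking HDF5 groups and attrs."""

    def __init__(self):
        self._store = {}

    def ingest(self, path, data, **attrs):
        """Store a dataset with its metadata attributes."""
        self._store[path] = {"data": data, "attrs": attrs}

    def query(self, prefix="/", **attrs):
        """Return paths under `prefix` whose attributes match all criteria,
        with no relational metadata tables involved."""
        return sorted(
            p for p, entry in self._store.items()
            if p.startswith(prefix)
            and all(entry["attrs"].get(k) == v for k, v in attrs.items())
        )
```

In the real system the same roles are played by h5py groups, datasets, and attributes inside a single HDF5 file, which is what keeps setup and maintenance light.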

  18. Next Generation Workload Management and Analysis System for Big Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De, Kaushik

    We report on the activities and accomplishments of a four-year project (a three-year grant followed by a one-year no cost extension) to develop a next generation workload management system for Big Data. The new system is based on the highly successful PanDA software developed for High Energy Physics (HEP) in 2005. PanDA is used by the ATLAS experiment at the Large Hadron Collider (LHC), and the AMS experiment at the space station. The program of work described here was carried out by two teams of developers working collaboratively at Brookhaven National Laboratory (BNL) and the University of Texas at Arlington (UTA). These teams worked closely with the original PanDA team; for the sake of clarity, the work of the next generation team is referred to as the BigPanDA project. Their work has led to the adoption of BigPanDA by the COMPASS experiment at CERN, and many other experiments and science projects worldwide.

  19. From ecological records to big data: the invention of global biodiversity.

    PubMed

    Devictor, Vincent; Bensaude-Vincent, Bernadette

    2016-12-01

    This paper is a critical assessment of the epistemological impact of the systematic quantification of nature, through the accumulation of big datasets, on the practice and orientation of ecological science. We examine the contents of big databases and argue that they are not just accumulated information; records are translated into digital data in a process that changes their meanings. To better understand what is at stake in this 'datafication' process, we explore the context for the emergence and quantification of biodiversity in the 1980s, along with the concept of the global environment. In tracing the origin and development of the Global Biodiversity Information Facility (GBIF), we describe big data biodiversity projects as a techno-political construction dedicated to monitoring a new object: global biodiversity. We argue that biodiversity big data became a powerful driver behind the invention of the concept of the global environment, and a way to embed ecological science in the political agenda.

  20. In response to an open invitation for comments on AAAS project 2061's Benchmark books on science. Part 1: documentation of serious errors in cell biology.

    PubMed

    Ling, Gilbert

    2006-01-01

    Project 2061 was founded by the American Association for the Advancement of Science (AAAS) to improve secondary school science education. An in-depth study of ten 9th- to 12th-grade biology textbooks led to the verdict that none conveyed "Big Ideas" that would give coherence and meaning to the profusion of lavishly illustrated isolated details. However, neither the Project report itself nor the Benchmark books put out earlier by the Project carry what deserves the designation of "Big Ideas." Worse, in the two earliest-published Benchmark books, the basic unit of all life forms--the living cell--is described as a soup enclosed by a cell membrane that determines what can enter or leave the cell. This is astonishing, since extensive experimental evidence unequivocally disproved this idea 60 years ago. The "new" version of the membrane theory brought in to replace the discredited (sieve) version--the pump model, currently taught as established truth in all high-school and college biology textbooks--was also unequivocally disproved 40 years ago. This comment is written partly in response to Benchmark's gracious open invitation for ideas to improve the books and, through them, US secondary school science education.

  1. Mapping Stars with TI-83.

    ERIC Educational Resources Information Center

    Felsager, Bjorn

    2001-01-01

    Describes a mathematics and science project designed to help students gain some familiarity with constellations and trigonometry by using the TI-83 calculator as a tool. Specific constellations such as the Big Dipper (Plough) and other sets of stars are located using stereographic projection and graphed using scatterplots. (MM)

  2. 78 FR 37591 - Making the Most of Big Data: Request for Information

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-21

    ... expanded collaboration between the public and private sectors. SUPPLEMENTARY INFORMATION: Overview: Aiming... collaboration between the public and private sectors. The Administration is particularly interested in projects... innovation projects across the country. Later this year, the Office of Science and Technology Policy (OSTP...

  3. WHK Interns Win Big at Frederick County Science Fair | Poster

    Cancer.gov

    Three Werner H. Kirsten student interns claimed awards at the 35th Annual Frederick County Science and Engineering Fair—and got a shot at the national competition—for imaginative projects that reached out to the rings of Saturn and down to the details of advanced cancer diagnostics.

  4. How Big is Earth?

    NASA Astrophysics Data System (ADS)

    Thurber, Bonnie B.

    2015-08-01

    "How Big is Earth?" celebrates the Year of Light. Using only the sunlight striking the Earth and a wooden dowel, students meet each other and then measure the circumference of the Earth. Eratosthenes did it over 2,000 years ago. In Cosmos, Carl Sagan shared the process by which Eratosthenes measured the angle of the shadow cast at local noon when sunlight strikes a stick positioned perpendicular to the ground. By comparing his measurement to another made a known distance away, Eratosthenes was able to calculate the circumference of the Earth. "How Big is Earth?" provides an online learning environment where students do science the same way Eratosthenes did. A notable precedent was The Eratosthenes Project, conducted in 2005 as part of the World Year of Physics; in fact, we will be drawing on the teacher's guide developed by that project. "How Big Is Earth?" expands on the Eratosthenes Project through the online learning environment provided by the iCollaboratory, www.icollaboratory.org, where teachers and students from Sweden, China, Nepal, Russia, Morocco, and the United States collaborate, share data, and reflect on their learning of science and astronomy, sharing their information and brainstorming solutions in a discussion forum. There is an ongoing database of student measurements and another database collecting data on both teacher and student learning from surveys, discussions, and self-reflection done online. We will share our research about the kinds of learning that take place only in global collaborations. The entrance address for the iCollaboratory is http://www.icollaboratory.org.
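The measurement described above reduces to a one-line proportion: the difference in shadow angles between two sites on the same meridian is the fraction of a full 360-degree circle their separation spans. A sketch using Eratosthenes' classical figures (about 7.2 degrees over roughly 800 km, with modern units assumed for illustration):

```python
def circumference_km(angle_diff_deg: float, distance_km: float) -> float:
    """Circumference from the shadow-angle difference between two sites
    separated by distance_km along a north-south line."""
    return distance_km * 360.0 / angle_diff_deg

# Classical Syene-Alexandria figures: ~7.2 degrees over ~800 km,
# giving roughly 40,000 km (the modern value is about 40,075 km).
estimate = circumference_km(7.2, 800.0)
```

Students in the project compare pairs of their own stick-and-shadow measurements in exactly this way.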

  5. The Truth about Wolves.

    ERIC Educational Resources Information Center

    Mannesto, Jean

    2002-01-01

    Reports on a project by a reading teacher that combines reading and science while debunking the myth of the big, bad wolf. Related activities include making animal tracks, writing, and measurement activities. (DDR)

  6. From Darwin to the Census of Marine Life: Marine Biology as Big Science

    PubMed Central

    Vermeulen, Niki

    2013-01-01

    With the development of the Human Genome Project, a heated debate emerged on biology becoming ‘big science’. However, biology already has a long tradition of collaboration, as natural historians were part of the first collective scientific efforts: exploring the variety of life on earth. Such mappings of life still continue today, and if field biology is gradually becoming an important subject of studies into big science, research into life in the world's oceans is not taken into account yet. This paper therefore explores marine biology as big science, presenting the historical development of marine research towards the international ‘Census of Marine Life’ (CoML) making an inventory of life in the world's oceans. Discussing various aspects of collaboration – including size, internationalisation, research practice, technological developments, application, and public communication – I will ask if CoML still resembles traditional collaborations to collect life. While showing both continuity and change, I will argue that marine biology is a form of natural history: a specific way of working together in biology that has transformed substantially in interaction with recent developments in the life sciences and society. As a result, the paper does not only give an overview of transformations towards large scale research in marine biology, but also shines a new light on big biology, suggesting new ways to deepen the understanding of collaboration in the life sciences by distinguishing between different ‘collective ways of knowing’. PMID:23342119

  7. The PACA Project: Convergence of Scientific Research, Social Media and Citizen Science in the Era of Astronomical Big Data

    NASA Astrophysics Data System (ADS)

    Yanamandra-Fisher, Padma A.

    2015-08-01

    The Pro-Am Collaborative Astronomy (PACA) project promotes and supports professional-amateur astronomer collaboration in scientific research via social media, and has been implemented in several comet observing campaigns. In 2014, two comet observing campaigns involving pro-am collaborations were initiated: (1) C/2013 A1 (Siding Spring) and (2) 67P/Churyumov-Gerasimenko (CG), the target of the ESA Rosetta mission. The evolving need for individually customized observing campaigns has shaped the evolution of The PACA Project, which currently focuses on comets: supporting observing campaigns for current comets, legacy data, and historical comets; interconnecting with social media; and maintaining a set of shareable documents addressing observational strategies, consistent data standards, and data access, use, and storage, aligned with the needs of professional observers in the era of astronomical big data. The empowerment of amateur astronomers vis-à-vis their partnerships with professional scientists creates a new demographic of data scientists, enabling citizen science with the integrated data from both the professional and amateur communities. While PACA identifies a consistent collaborative approach to pro-am collaborations, given the volume of data generated for each campaign, new ways of rapid data analysis, mining, access and storage are needed. Several interesting results emerged from the synergistic inclusion of both social media and amateur astronomers. The PACA Project is expanding to include pro-am collaborations on other solar system objects, allow for immersive outreach, and include various types of astronomical communities, ranging from individuals to astronomical societies and telescope networks. Enabling citizen science research in the era of astronomical big data is a challenge that requires innovative approaches and the integration of professional and amateur astronomers with data scientists; some examples of recent projects will be highlighted.

  8. Business and Science - Big Data, Big Picture

    NASA Astrophysics Data System (ADS)

    Rosati, A.

    2013-12-01

    Data Science is more than the creation, manipulation, and transformation of data. It is more than Big Data. The business world seems to have a hold on the term 'data science' and, for now, it defines what the term means. But business is very different from science. In this talk, I address how large datasets, Big Data, and data science are conceptually different in the business and science worlds. I focus on the types of questions each realm asks, the data needed, and the consequences of findings. Gone are the days of datasets being created or collected to serve only one purpose or project. The trick with data reuse is to become familiar enough with a dataset to be able to combine it with other data and extract accurate results. As a Data Curator for the Advanced Cooperative Arctic Data and Information Service (ACADIS), my specialty is communication. Our team enables Arctic science by ensuring datasets are well documented and can be understood by reusers. Previously, I served as a data community liaison for the North American Regional Climate Change Assessment Program (NARCCAP), where again my specialty was communicating complex instructions and ideas to a broad audience of data users. Before entering the science world, I was an entrepreneur. I have a bachelor's degree in economics and a master's degree in environmental social science, and I am currently pursuing a Ph.D. in Geography. Because my background embraces both the business and science worlds, I would like to share my perspectives on data, data reuse, data documentation, and the presentation or communication of findings. My experience shows that each can inform and support the other.

  9. Use of a metadata documentation and search tool for large data volumes: The NGEE arctic example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devarakonda, Ranjeet; Hook, Leslie A; Killeffer, Terri S

    The Online Metadata Editor (OME) is a web-based tool that helps document scientific data in a well-structured, popular scientific metadata format. In this paper, we discuss this newest tool that Oak Ridge National Laboratory (ORNL) has developed to generate, edit, and manage metadata, and how it is helping data-intensive science centers and projects, such as the U.S. Department of Energy's Next Generation Ecosystem Experiments (NGEE) in the Arctic, to prepare metadata and make their big data produce big science and lead to new discoveries.

  10. The Potential Improvement of Team-Working Skills in Biomedical and Natural Science Students Using a Problem-Based Learning Approach

    ERIC Educational Resources Information Center

    Nowrouzian, Forough L.; Farewell, Anne

    2013-01-01

    Teamwork has become an integral part of most organisations today, and it is clearly important in Science and other disciplines. In Science, research teams increase in size while the number of single-authored papers and patents declines. Teamwork in the laboratory sciences permits projects that are too big or complex for one individual to tackle.…

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shankar, Arjun

    Computer scientist Arjun Shankar is director of the Compute and Data Environment for Science (CADES), ORNL’s multidisciplinary big data computing center. CADES offers computing, networking and data analytics to facilitate workflows for both ORNL and external research projects.

  12. Icarus Investigations: A Model for Engaging Citizen Scientists to Solve Solar Big Data Challenges

    NASA Astrophysics Data System (ADS)

    Winter, H. D., III; Loftus, K.

    2017-12-01

    Solar data is growing at an exponential rate. NASA's Atmospheric Imaging Assembly (AIA) has produced a data volume of over 6 petabytes to date, and that volume is growing. The initial suite of instruments on DKIST is expected to generate approximately 25 TB of data per day, with bursts up to 50 TB. Making sense of this deluge of solar data is as formidable a task as collecting it. New techniques and new ways of thinking are needed in order to optimize the value of this immense amount of data. While machine learning algorithms are a natural tool for sifting through Big Data, those tools need to be carefully constructed and trained in order to provide meaningful results. Trained volunteers are needed to provide a large volume of initial classifications in order to properly train machine learning algorithms. To retain a highly trained pool of volunteers to teach machine learning algorithms, we propose to host an ever-changing array of solar-based citizen science projects under a single collaborative project banner: Icarus Investigations. Icarus Investigations would build and retain a dedicated user base within Zooniverse, the most popular citizen science website, with over a million registered users. Volunteers will become increasingly comfortable with solar images and solar features of interest as they work on projects that focus on a wide array of solar phenomena. Under a unified framework, new solar citizen science projects submitted to Icarus Investigations will build on the successes, and learn from the missteps, of their predecessors. In this talk we discuss the importance and benefits of engaging the public in citizen science projects and call for collaborators on future citizen science projects. We will also demonstrate the initial Icarus Investigations project, The Where of the Flare. This demonstration will allow us to highlight the workflow of an Icarus Investigations citizen science project with a concrete example.

  13. From Engineering Science to Big Science: The NACA and NASA Collier Trophy Research Project Winners

    NASA Technical Reports Server (NTRS)

    Mack, Pamela E. (Editor)

    1998-01-01

    The chapters of this book discuss a series of case studies of notable technological projects carried out at least in part by the NACA and NASA. The case studies chosen are those projects that won the National Aeronautic Association's (NAA) Collier Trophy for "the greatest achievement in aviation in America, the value of which has been thoroughly demonstrated by use during the preceding year." Looking back on the whole series of projects we can examine both what successes were seen as important at various times, and how the goals and organization of these notable projects changed over time.

  14. Managing the Big Data Avalanche in Astronomy - Data Mining the Galaxy Zoo Classification Database

    NASA Astrophysics Data System (ADS)

    Borne, Kirk D.

    2014-01-01

    We will summarize a variety of data mining experiments that have been applied to the Galaxy Zoo database of galaxy classifications, which were provided by volunteer citizen scientists. The goal of these exercises is to learn new and improved classification rules for diverse populations of galaxies, which can then be applied to much larger sky surveys of the future, such as the LSST (Large Synoptic Survey Telescope), which is proposed to obtain detailed photometric data for approximately 20 billion galaxies. The massive Big Data that astronomy projects will generate in the future demand greater application of data mining and data science algorithms, as well as greater training of astronomy students in the skills of data mining and data science. The project described here has involved several graduate and undergraduate research assistants at George Mason University.
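A basic first step in mining such a classification database is aggregating many volunteers' votes on each galaxy into consensus labels that can then train or validate automated classifiers. A minimal sketch, with a hypothetical vote format rather than the actual Galaxy Zoo schema:

```python
from collections import Counter

def consensus_labels(votes, min_agreement=0.5):
    """votes: list of (galaxy_id, label) pairs from many volunteers.

    Returns {galaxy_id: label} for galaxies where the plurality label
    wins more than `min_agreement` of the votes; ambiguous galaxies
    are dropped rather than mislabeled."""
    by_galaxy = {}
    for galaxy, label in votes:
        by_galaxy.setdefault(galaxy, []).append(label)
    result = {}
    for galaxy, labels in by_galaxy.items():
        label, count = Counter(labels).most_common(1)[0]
        if count / len(labels) > min_agreement:
            result[galaxy] = label
    return result
```

Dropping low-agreement objects keeps the training set clean at the cost of coverage, a trade-off any vote-aggregation scheme has to make before the labels are fed to a learning algorithm.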

  15. Big Science, Team Science, and Open Science for Neuroscience.

    PubMed

    Koch, Christof; Jones, Allan

    2016-11-02

    The Allen Institute for Brain Science is a non-profit private institution dedicated to basic brain science, with an internal organization more commonly found in large physics projects: large teams generating complete, accurate and permanent resources for the mouse and human brain. It can also be viewed as an experiment in the sociology of neuroscience. We here describe some of the singular differences from more academic, PI-focused institutions. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Think Big! The Human Condition Project

    ERIC Educational Resources Information Center

    Metcalfe, Gareth

    2014-01-01

    How can educators provide children with a genuine experience of carrying out an extended scientific investigation? And can teachers change the perception of what it means to be a scientist? These were key questions that lay behind "The Human Condition" project, an initiative funded by the Primary Science Teaching Trust to explore a new…

  17. Wideband linear-to-circular polarization conversion realized by a transmissive anisotropic metasurface

    NASA Astrophysics Data System (ADS)

    Lin, Bao-Qin; Guo, Jian-Xin; Huang, Bai-Gang; Fang, Lin-Bo; Chu, Peng; Liu, Xiang-Wen

    2018-05-01

    Abstract not available. Project supported by the National Natural Science Foundation of China (Grant No. 61471387) and the Research Center for Internet of Things and Big Data Technology of Xijing University, China.

  18. Understanding forest ecology from the landscape to the project level

    Treesearch

    Ward McCaughey

    2007-01-01

    Several researchers in the Forestry Sciences Laboratory have been actively involved in BEMRP since its inception in the early 1990s. The recent research on the Trapper Bunkhouse Land Stewardship Project began in 2004. In ecosystem management, sometimes we need to look at the big picture, or the landscape scale, and sometimes we need to work on a more local, or project-...

  19. Big Ideas in Volcanology-a new way to teach and think about the subject?

    NASA Astrophysics Data System (ADS)

    Rose, W. I.

    2011-12-01

    As intense work with identifying and presenting earth science to middle school science teachers in the MiTEP project advances, I have realized that tools used to connect with teachers and students of earth science in general, and especially to promote higher levels of learning, should be advantageous in graduate teaching as well. In my last of 40 years of teaching graduate volcanology, I have finally organized the class around ideas based on Earth Science Literacy Principles and on common misconceptions. As such, I propose and fully explore the twelve "big ideas" of volcanology at the rate of one per week. This curricular organization highlights the ideas in volcanology that have major impact beyond volcanology itself and explores the roots and global ramifications of these ideas. Together they show how volcanology interfaces with the science world and the "real" world, or how volcanologists interface with "real" people. In addition to big ideas, we explore difficult and misunderstood concepts and the public misconceptions associated with each. The new organization, with its focus on understanding relevant and far-reaching concepts and hypotheses, provides a refreshing context for advanced learning. It is planned to be the basis for an interactive website.

  20. Data-driven medicinal chemistry in the era of big data.

    PubMed

    Lusher, Scott J; McGuire, Ross; van Schaik, René C; Nicholson, C David; de Vlieg, Jacob

    2014-07-01

    Science, and the way we undertake research, is changing. The increasing rate of data generation across all scientific disciplines is providing incredible opportunities for data-driven research, with the potential to transform our current practices. The exploitation of so-called 'big data' will enable us to undertake research projects never previously possible but should also stimulate a re-evaluation of all our data practices. Data-driven medicinal chemistry approaches have the potential to improve decision making in drug discovery projects, providing that all researchers embrace the role of 'data scientist' and uncover the meaningful relationships and patterns in available data. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Findability of Federal Research Data

    NASA Astrophysics Data System (ADS)

    Hourcle, J. A.

    2013-12-01

    Although many of the federal agencies have been providing access to scientific research data for years, if not decades, the findability of the data has been quite lacking. Many discipline-wide efforts have been made in the big science communities, such as PDS for planetary science and the VOs in night-time astronomy and heliophysics, but there is a lack of a single entry point for someone looking for data. The science.gov website contains links to many of these big-science search systems, but doesn't differentiate between links to science-quality data and websites or browse products, making it more difficult to search specifically for data. The data.gov website is a useful repository for PIs of small science projects to stash their data, particularly as it allows interested parties to interact with tabular data. Unfortunately, as each group thinks of their data differently, much of what's now in the system is a mess: collections of data are tracked as individual records with no relationships between them. Big science projects also get tracked as single records, potentially with only a single record for missions with multiple instruments and significantly different data series. We present recommendations on how to improve the findability of federal research data on data.gov, based on years of working on the Virtual Solar Observatory and within the science informatics community.

  2. Motivation of synthesis, with an example on groundwater quality sustainability

    NASA Astrophysics Data System (ADS)

    Fogg, G. E.; Labolle, E. M.

    2007-12-01

    Synthesis of ideas and theories from disparate disciplines is necessary for addressing the major problems faced by society. Such integration happens neither via edict nor via lofty declarations of what is needed or what is best. It happens mainly through two mechanisms: limited scope collaborations (e.g., ~2-3 investigators) in which the researchers believe deeply in their need for each other's expertise and much larger scope collaborations driven by the 'big idea.' Perhaps the strongest motivation for broad, effective synthesis is the 'big idea' that is sufficiently important and inspiring to marshal the appropriate collaborative efforts. Examples include the Manhattan Project, the quest for cancer cures, predicting effects of climate change, and groundwater quality sustainability. The latter is posed as an example of a 'big idea' that would potentially unify research efforts in both the sciences and social sciences toward a common, pressing objective.

  3. Modern data science for analytical chemical data - A comprehensive review.

    PubMed

    Szymańska, Ewa

    2018-10-22

    Efficient and reliable analysis of chemical analytical data is a great challenge due to the increase in data size, variety and velocity. New methodologies, approaches and methods are being proposed not only by chemometrics but also by other data science communities to extract relevant information from big datasets and provide their value to different applications. Beyond the common goal of big data analysis, different perspectives on and terms for big data are being discussed in the scientific literature and public media. The aim of this comprehensive review is to present common trends in the analysis of chemical analytical data across different data science fields, together with their data type-specific and generic challenges. Firstly, common data science terms used in different data science fields are summarized and discussed. Secondly, systematic methodologies to plan and run big data analysis projects are presented together with their steps. Moreover, different analysis aspects such as assessing data quality, selecting data pre-processing strategies, data visualization and model validation are considered in more detail. Finally, an overview of standard and new data analysis methods is provided and their suitability for big analytical chemical datasets is briefly discussed. Copyright © 2018 Elsevier B.V. All rights reserved.
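
    One of the pre-processing strategies surveyed in reviews like the one above can be made concrete with a small example. A hedged sketch of column-wise autoscaling (mean-centering followed by unit-variance scaling), a standard chemometrics step that keeps variables measured on very different scales from dominating a model; the measurement values are invented:

    ```python
    import math

    def autoscale(columns):
        """Column-wise autoscaling: mean-centre each variable, then scale it
        to unit (sample) variance, so all variables contribute comparably."""
        scaled = []
        for col in columns:
            n = len(col)
            mean = sum(col) / n
            # Sample variance; guard against constant columns (zero spread).
            var = sum((x - mean) ** 2 for x in col) / (n - 1)
            sd = math.sqrt(var) if var > 0 else 1.0
            scaled.append([(x - mean) / sd for x in col])
        return scaled

    # Two hypothetical measurement channels on very different scales end up
    # on the same footing after autoscaling.
    print(autoscale([[1.0, 2.0, 3.0], [100.0, 200.0, 300.0]]))
    # [[-1.0, 0.0, 1.0], [-1.0, 0.0, 1.0]]
    ```

    In practice the scaling parameters are estimated on training data only and reapplied to test data, which is one of the model-validation pitfalls such reviews discuss.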

  4. Big Data as catalyst for change in Astronomy Libraries - Indian Scenario

    NASA Astrophysics Data System (ADS)

    Birdie, Christina

    2015-08-01

    Research in Astronomy fosters exciting missions and encourages libraries to engage themselves in big-budget astronomy programs, which are the flagship projects for most astronomers. The scholarly communication resulting from analyzing Big Data has led to new opportunities for Astronomy librarians to become involved in the management of publications more intelligently. In India, astronomers have committed to participation in the mega 'TMT' (Thirty Meter Telescope) project, an international partnership science program between Caltech, the University of California, Canada, Japan, China and India. Participation in the TMT project will provide Indian astronomers an opportunity to carry out frontline research in astronomy. Within India, three major astronomy research institutes, namely the Indian Institute of Astrophysics (IIA), the Inter-University Centre for Astronomy & Astrophysics (IUCAA), and the Aryabhatta Research Institute of Observational Sciences (ARIES), are stakeholders in this program, along with the Indian Government as venture capitalist. This study will examine the potential publishing pattern of those astronomers and technologists within India, with special focus on those three institutes. Indications of already existing collaborations among them, expertise in instrument building, software development skills and cutting-edge research capability can be derived from analyzing their publications over the last ten years. An attempt will also be made to examine the in-house technical reports, newsletters, conference presentations etc. from these three institutes, with a view to highlighting the hidden potential skills and possible collaborations among Indian astronomers expressed in the grey literature. The incentive to make the astronomy libraries network stronger within India may evolve from the findings and future requirements.
As this project is deemed to be a national project with financial support from the science & technology department of the Government of India, the participating libraries have an excellent opportunity to showcase their capabilities and make an impact at the national level.

  5. The Art of Science

    NASA Astrophysics Data System (ADS)

    Vaidya, Ashwin; Munakata, Mika

    2014-03-01

    The Art of Science project at Montclair State University strives to communicate the creativity inherent in the sciences to students and the general public alike. The project uses connections between the arts and sciences to show the underlying unity and interdependence of the two. The project is planned as one big `performance' bringing together the two disciplines around the theme of sustainability. In the first phase, physics students learned about and built human-powered generators including hand cranks and bicycle units. In the second phase, using the generators to power video cameras, art students worked with a visiting artist to make short films on the subject of sustainability, science, and art. The generators and films were showcased at an annual university Physics and Art exhibition which was open to the university and local community. In the final phase, to be conducted, K12 teachers will learn about the project through a professional development workshop and will be encouraged to adapt the experiment for their own classrooms. The last phase will also combine the university and K12 projects for an exhibition to be displayed on Earth Day, 2014. Project funded by the APS Outreach Grant.

  6. Near-Space Science: A Ballooning Project to Engage Students with Space beyond the Big Screen

    ERIC Educational Resources Information Center

    Hike, Nina; Beck-Winchatz, Bernhard

    2015-01-01

    Many students probably know something about space from playing computer games or watching movies and TV shows. Teachers can expose them to the real thing by launching their experiments into near space on a weather balloon. This article describes how to use high-altitude ballooning (HAB) as a culminating project to a chemistry unit on experimental…

  7. We Must Invest in Applied Knowledge of Computational Neurosciences and Neuroinformatics as an Important Future in Malaysia: The Malaysian Brain Mapping Project

    PubMed Central

    Sumari, Putra; Idris, Zamzuri; Abdullah, Jafri Malin

    2017-01-01

    The Academy of Sciences Malaysia and the Malaysian Industry-Government group for High Technology has been working hard to project the future of big data and neurotechnology usage up to the year 2050. On the 19 September 2016, the International Brain Initiative was announced by US Under Secretary of State Thomas Shannon at a meeting that accompanied the United Nations’ General Assembly in New York City. This initiative was seen as an important effort but deemed costly for developing countries. At a concurrent meeting hosted by the US National Science Foundation at Rockefeller University, numerous countries discussed this massive project, which would require genuine collaboration between investigators in the realms of neuroethics. Malaysia’s readiness to embark on using big data in the field of brain, mind and neurosciences is to prepare for the 4th Industrial Revolution which is an important investment for the country’s future. The development of new strategies has also been encouraged by the involvement of the Society of Brain Mapping and Therapeutics, USA and the International Neuroinformatics Coordinating Facility. PMID:28381924

  8. We Must Invest in Applied Knowledge of Computational Neurosciences and Neuroinformatics as an Important Future in Malaysia: The Malaysian Brain Mapping Project.

    PubMed

    Sumari, Putra; Idris, Zamzuri; Abdullah, Jafri Malin

    2017-03-01

    The Academy of Sciences Malaysia and the Malaysian Industry-Government group for High Technology has been working hard to project the future of big data and neurotechnology usage up to the year 2050. On the 19 September 2016, the International Brain Initiative was announced by US Under Secretary of State Thomas Shannon at a meeting that accompanied the United Nations' General Assembly in New York City. This initiative was seen as an important effort but deemed costly for developing countries. At a concurrent meeting hosted by the US National Science Foundation at Rockefeller University, numerous countries discussed this massive project, which would require genuine collaboration between investigators in the realms of neuroethics. Malaysia's readiness to embark on using big data in the field of brain, mind and neurosciences is to prepare for the 4th Industrial Revolution which is an important investment for the country's future. The development of new strategies has also been encouraged by the involvement of the Society of Brain Mapping and Therapeutics, USA and the International Neuroinformatics Coordinating Facility.

  9. "But Aren't Diesel Engines Just for Big, Smelly Trucks?" An Interdisciplinary Curriculum Project for High School Chemistry Students

    ERIC Educational Resources Information Center

    Zoellner, Brian P.; Chant, Richard H.; Wood, Kelly

    2014-01-01

    In a collaboration between the University of North Florida College of Education and Human Services and Sandalwood High School in Duval County, Florida, social studies and science education professors and a science teacher worked together to develop student understanding about the limited use of diesel-fueled cars in the United States when compared…

  10. PANGAEA® - Data Publisher for Earth & Environmental Science - Research data enters scholarly communication and big data analysis

    NASA Astrophysics Data System (ADS)

    Diepenbroek, Michael; Schindler, Uwe; Riedel, Morris; Huber, Robert

    2014-05-01

    The ICSU World Data Center PANGAEA is an information system for acquisition, processing, long-term storage, and publication of geo-referenced data related to earth science fields. Storing more than 350,000 data sets from all fields of geosciences, it is among the largest archives for observational earth science data. Standards-conformant interfaces (ISO, OGC, W3C, OAI) enable access from a variety of data and information portals, among them the search engine of PANGAEA itself (www.pangaea.de) and e.g. GBIF. All data sets in PANGAEA are citable, fully documented, and can be referenced via persistent identifiers (Digital Object Identifiers - DOIs) - a premise for data publication. Together with other ICSU World Data Centers (www.icsu-wds.org) and the Technical Information Library in Germany (TIB), PANGAEA had a share in the implementation of a DOI-based registry for scientific data, which by now is supported by a worldwide consortium of libraries (www.datacite.org). A further milestone was building up strong co-operations with science publishers such as Elsevier, Springer, Wiley, AGU, Nature and others. A common web service allows supplementary data in PANGAEA to be referenced directly from an article's abstract page (e.g. Science Direct). The next step with science publishers is to further integrate the editorial process for the publication of supplementary data with the publication procedures on the journal side. Data-centric research efforts such as environmental modelling or big data analysis approaches represent new challenges for PANGAEA. Integrated data warehouse technologies are used for highly efficient retrievals and compilations of time slices or surface data matrixes for any measurement parameters out of the whole data continuum. Further, new and emerging big data approaches are currently being investigated within PANGAEA, e.g. to evaluate their usability for quality control or data clustering.
PANGAEA is operated as a joint long-term facility by MARUM at the University of Bremen and the Alfred Wegener Institute for Polar and Marine Research (AWI). More than 80% of the funding results from project data management and the implementation of spatial data infrastructures (more than 160 international and national projects over the last 15 years - www.pangaea.de/projects).

  11. Small Bodies, Big Concepts: Engaging Teachers and Their Students in Visual Analysis of Comets and Asteroids

    NASA Astrophysics Data System (ADS)

    Cobb, W. H.; Buxner, S.; Lebofsky, L. A.; Ristvey, J.; Weeks, S.; Zolensky, M.

    2011-12-01

    Small Bodies, Big Concepts is a multi-disciplinary, professional development project that engages 5th-8th grade teachers in high-end planetary science using a research-based pedagogical framework, Designing Effective Science Instruction (DESI). In addition to developing sound background knowledge with a focus on visual analysis, teachers' awareness of the process of learning new content is heightened, and they use that experience to deepen their science teaching practice. Culling from NASA E/PO educational materials, activities are sequenced to enhance conceptual understanding of big ideas in space science: what do we know, how do we know it, why do we care? Helping teachers develop a picture of the history and evolution of our understanding of the solar system, and homing in on the place of comets and asteroids in helping us answer old questions and discover new ones, teachers see the power and excitement underlying planetary science as a human endeavor. Research indicates that science inquiry is powerful in the classroom, and mission scientists are real-life models of science inquiry in action. Using guest scientist facilitators from the Planetary Science Institute, NASA Johnson Space Center, Lockheed Martin, and NASA E/PO professionals from McREL and NASA AESP, teachers practice framing scientific questions, using current visual data, and adapting NASA E/PO activities related to current exploration of asteroids and comets in our Solar System. Cross-curricular elements included examining research-based strategies for enhancing English language learners' ability to engage in higher-order questions and a professional astronomy artist's insight into how visual analysis requires not just our eyes engaged, but our brains: comparing, synthesizing, questioning, evaluating, and wondering. This summer we pilot tested the SBBC curriculum with thirteen 5th-10th grade teachers, modeling a variety of instructional approaches over eight days.
Each teacher developed lesson plans that incorporate DESI strategies with new space science content to implement during the coming year in their classrooms. Initial evaluation of the workshop showed that teachers left with an increased understanding of small bodies in the solar system, current exploration, and ways to integrate this exploration into their current curriculum. We will reconvene the teachers in the spring of 2012 to share their implementation experiences. The professional development is a year-long effort, supported both online and through future face-to-face workshops. Next summer, a field test of the project will be implemented after evaluation data inform the best steps for improvement. The result of the project will be a model for implementing professional development that integrates research-based instructional strategies and science findings from NASA missions to improve teacher practice. Small Bodies, BIG Concepts is based upon work supported by the National Aeronautics and Space Administration (NASA) under Grant/Contract/Agreement No. 09-EPOESS09-0044 issued through the Science Mission Directorate.

  12. Controversies in the Hydrosphere: an iBook exploring current global water issues for middle school classrooms

    NASA Astrophysics Data System (ADS)

    Dufoe, A.; Guertin, L. A.

    2012-12-01

    This project looks to help teachers utilize iPad technology in their classrooms as an instructional tool for Earth system science and connections to the Big Ideas in Earth Science. The project is part of Penn State University's National Science Foundation (NSF) Targeted Math Science Partnership grant, with one goal of the grant to help current middle school teachers across Pennsylvania engage students with significant and complex questions of Earth science. The free Apple software iBooks Author was used to create an electronic book for the iPad, focusing on a variety of controversial issues impacting the hydrosphere. The iBook includes image slideshows, embedded videos, interactive images and quizzes, and critical thinking questions along Bloom's Taxonomic Scale of Learning Objectives. Outlined in the introductory iBook chapters are the Big Ideas of Earth System Science and an overview of Earth's spheres. Since the book targets the hydrosphere, each subsequent chapter focuses on specific water issues, including glacial melts, aquifer depletion, coastal oil pollution, marine debris, and fresh-water chemical contamination. Each chapter is presented in a case study format that highlights the history of the issue, the development and current status of the issue, and some solutions that have been generated. The next section includes critical thinking questions in an open-ended discussion format that focus on the Big Ideas, proposing solutions for rectifying the situation, and/or assignments specifically targeting an idea presented in the case study chapter. Short, comprehensive multiple-choice quizzes are also in each chapter. Throughout the iBook, students are free to watch videos, explore the content and form their own opinions. As a result, this iBook fulfills the grant objective by engaging teachers and students with an innovative technological presentation that incorporates Earth system science with current case studies regarding global water issues.

  13. Enabling a new Paradigm to Address Big Data and Open Science Challenges

    NASA Astrophysics Data System (ADS)

    Ramamurthy, Mohan; Fisher, Ward

    2017-04-01

    Data are not only the lifeblood of the geosciences but have also become the currency of the modern world in science and society. Rapid advances in computing, communications, and observational technologies - along with concomitant advances in high-resolution modeling and ensemble and coupled-systems predictions of the Earth system - are revolutionizing nearly every aspect of our field. Modern data volumes from high-resolution ensemble prediction/projection/simulation systems and next-generation remote-sensing systems like hyper-spectral satellite sensors and phased-array radars are staggering. For example, CMIP efforts alone will generate many petabytes of climate projection data for use in assessments of climate change. And NOAA's National Climatic Data Center projects that it will archive over 350 petabytes by 2030. For researchers and educators, this deluge and the increasing complexity of data bring challenges along with opportunities for discovery and scientific breakthroughs. The potential for big data to transform the geosciences is enormous, but realizing the next frontier depends on effectively managing, analyzing, and exploiting these heterogeneous data sources, extracting knowledge and useful information in ways that were previously impossible, to enable discoveries and gain new insights. At the same time, there is a growing focus on "Reproducibility or Replicability in Science" that has implications for Open Science. The advent of cloud computing has opened new avenues for addressing both big data and Open Science challenges and for accelerating scientific discoveries. However, successfully leveraging the enormous potential of cloud technologies will require data providers and the scientific communities to develop new paradigms that enable next-generation workflows and transform the conduct of science. Making data readily available is a necessary but not a sufficient condition.
Data providers also need to give scientists an ecosystem that includes the data, tools, workflows and other services needed to perform analytics, integration, interpretation, and synthesis - all in the same environment or platform. Instead of moving data to processing systems near users, as is the tradition, the cloud permits one to bring processing, computing, analysis and visualization to the data - so-called data-proximate workbench capabilities, also known as server-side processing. In this talk, I will present the ongoing work at Unidata to facilitate a new paradigm for doing science by offering a suite of tools, resources, and platforms that leverage cloud technologies to address both big data and Open Science/reproducibility challenges. That work includes the development and deployment of new protocols for data access and server-side operations, Docker container images of key applications, JupyterHub Python notebook tools, and cloud-based analysis and visualization capability via the CloudIDV tool to enable reproducible workflows and effective use of the accessed data.
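
    The server-side processing idea described above can be illustrated by the kind of request it relies on: instead of downloading a whole granule, the client asks the server for just the slice it needs. A hypothetical sketch that builds an OPeNDAP-style constraint expression (the endpoint URL and variable name are invented; real servers and their exact URL forms vary):

    ```python
    def subset_url(base, variable, ranges):
        """Build an OPeNDAP-style constraint expression so the server returns
        only the requested hyperslab instead of the full data granule.

        `ranges` is a list of (start, stride, stop) index triples, one per
        dimension of `variable`.
        """
        constraint = "".join(
            "[%d:%d:%d]" % (start, stride, stop) for start, stride, stop in ranges
        )
        return "%s.dods?%s%s" % (base, variable, constraint)

    # Hypothetical endpoint and variable: one time step, every 4th grid point.
    url = subset_url(
        "https://example.org/opendap/model_output.nc",
        "temperature",
        [(0, 1, 0), (0, 4, 719), (0, 4, 1439)],
    )
    print(url)
    # https://example.org/opendap/model_output.nc.dods?temperature[0:1:0][0:4:719][0:4:1439]
    ```

    The point of the sketch is the data-proximate trade-off: the subsetting work runs next to the data, and only the reduced result crosses the network.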

  14. Bright Lights: Big Experiments! A public engagement activity for international year of light

    NASA Astrophysics Data System (ADS)

    Downie, Jonathan; Morton, Jonathan A. S.; McCoustra, Martin R. S.

    2017-01-01

    The Bright Lights: Big Experiments! public engagement project enabled Scottish S2 high school students to prepare a short, 5 min video, using their own words and in their own style, to present a scientific experiment on the theme of light to their contemporaries via YouTube. This paper describes the various experiments that we chose to deliver and our experiences in delivering them to our partner schools. The results of pre- and post-activity surveys of both the pupils and teachers are presented in an effort to understand the impact of the project on the students, staff and their schools. The quality of the final video product is shown to be a key factor, increasing the pupils' likelihood of pursuing science courses and participating in further science engagement activities. Analysis of the evaluation methods used indicates the need for more sensitive tools to provide further insight into the impact of this type of engagement activity.

  15. A Tour of Big Data, Open Source Data Management Technologies from the Apache Software Foundation

    NASA Astrophysics Data System (ADS)

    Mattmann, C. A.

    2012-12-01

    The Apache Software Foundation, a non-profit foundation charged with disseminating open source software for the public good, provides a suite of data management technologies for distributed archiving, data ingestion, data dissemination, processing, triage and a host of other functionalities that are becoming critical in the Big Data regime. Apache is the world's largest open source software organization, boasting over 3000 developers from around the world contributing to some of the most pervasive technologies in use today, from the HTTPD web server that powers a majority of Internet web sites to the Hadoop technology that is now projected to be over a $1B industry. Apache data management technologies are emerging as de facto off-the-shelf components for searching, distributing, processing and archiving key science data sets, from the geophysical, space and planetary domains all the way to biomedicine. In this talk, I will give a virtual tour of the Apache Software Foundation, its meritocracy and governance structure, and its key big data technologies that organizations can take advantage of today to save cost, schedule, and resources in implementing their Big Data needs. I'll illustrate the Apache technologies in the context of several national priority projects, including the U.S. National Climate Assessment (NCA) and the International Square Kilometre Array (SKA) project, which are stretching the boundaries of volume, velocity, complexity, and other key Big Data dimensions.

  16. IN13B-1660: Analytics and Visualization Pipelines for Big Data on the NASA Earth Exchange (NEX) and OpenNEX

    NASA Technical Reports Server (NTRS)

    Chaudhary, Aashish; Votava, Petr; Nemani, Ramakrishna R.; Michaelis, Andrew; Kotfila, Chris

    2016-01-01

    We are developing capabilities for an integrated petabyte-scale Earth science collaborative analysis and visualization environment. The ultimate goal is to deploy this environment within the NASA Earth Exchange (NEX) and OpenNEX in order to enhance existing science data production pipelines in both high-performance computing (HPC) and cloud environments. Bridging HPC and cloud is a fairly new concept under active research, and this system significantly enhances the ability of the scientific community to accelerate analysis and visualization of Earth science data from NASA missions, model outputs and other sources. We have developed a web-based system that seamlessly interfaces with both HPC and cloud environments, providing tools that enable science teams to develop and deploy large-scale analysis, visualization and QA pipelines of both the production process and the data products, and enable sharing results with the community. Our project is developed in several stages, each addressing a separate challenge: workflow integration, parallel execution in either cloud or HPC environments, and big-data analytics or visualization. This work benefits a number of existing and upcoming projects supported by NEX, such as the Web Enabled Landsat Data (WELD) project, where we are developing a new QA pipeline for the 25 PB system.

  17. Value Added: History of Physics in a ``Science, Technology, and Society'' General Education Undergraduate Course

    NASA Astrophysics Data System (ADS)

    Neuenschwander, Dwight

    2016-03-01

    In thirty years of teaching a capstone "Science, Technology, and Society" course to undergraduate students of all majors, I have found that, upon entering STS, most students regard the Manhattan Project as about as remote as the Civil War; few can describe the difference between nuclear and large non-nuclear weapons. With a similar lack of awareness, many students seem to think the Big Bang was dreamed up by science sorcerers. One might suppose that a basic mental picture of weapons that held entire populations hostage should be part of informed citizenship. One might also suppose that questions about origins, as they are put to nature through evidence-based reasoning, should be integral to a culture's identity. Over the years I have found the history of physics to be an effective tool for bringing such subjects to life for STS students. Upon hearing some of the history behind (for example) nuclear weapons and big bang cosmology, these students can better imagine themselves called upon to help in a Manhattan Project, or see themselves sleuthing about in a forensic science like cosmology. In this talk I share sample student responses to our class discussions on nuclear weapons and on cosmology. The history of physics is too engaging to be appreciated only by physicists.

  18. Analytics and Visualization Pipelines for Big Data on the NASA Earth Exchange (NEX) and OpenNEX

    NASA Astrophysics Data System (ADS)

    Chaudhary, A.; Votava, P.; Nemani, R. R.; Michaelis, A.; Kotfila, C.

    2016-12-01

    We are developing capabilities for an integrated petabyte-scale Earth science collaborative analysis and visualization environment. The ultimate goal is to deploy this environment within the NASA Earth Exchange (NEX) and OpenNEX in order to enhance existing science data production pipelines in both high-performance computing (HPC) and cloud environments. Bridging HPC and cloud is a fairly new concept under active research, and this system significantly enhances the ability of the scientific community to accelerate analysis and visualization of Earth science data from NASA missions, model outputs and other sources. We have developed a web-based system that seamlessly interfaces with both HPC and cloud environments, providing tools that enable science teams to develop and deploy large-scale analysis, visualization and QA pipelines covering both the production process and the data products, and to share results with the community. Our project is developed in several stages, each addressing a separate challenge: workflow integration; parallel execution in either cloud or HPC environments; and big-data analytics and visualization. This work benefits a number of existing and upcoming projects supported by NEX, such as the Web Enabled Landsat Data (WELD) project, for which we are developing a new QA pipeline for the 25 PB system.

  19. The EarthServer project: Exploiting Identity Federations, Science Gateways and Social and Mobile Clients for Big Earth Data Analysis

    NASA Astrophysics Data System (ADS)

    Barbera, Roberto; Bruno, Riccardo; Calanducci, Antonio; Messina, Antonio; Pappalardo, Marco; Passaro, Gianluca

    2013-04-01

    The EarthServer project (www.earthserver.eu), funded by the European Commission under its Seventh Framework Programme, aims at establishing open access and ad-hoc analytics on extreme-size Earth Science data, based on and extending leading-edge Array Database technology. The core idea is to use database query languages as the client/server interface to achieve barrier-free "mix & match" access to multi-source, any-size, multi-dimensional space-time data, in short: "Big Earth Data Analytics", based on the open standards of the Open Geospatial Consortium Web Coverage Processing Service (OGC WCPS) and the W3C XQuery. EarthServer combines both, thereby achieving a tight data/metadata integration. Further, the rasdaman Array Database System (www.rasdaman.com) is being extended with additional space-time coverage data types. On the server side, highly effective optimizations, such as parallel and distributed query processing, ensure scalability to Exabyte volumes. Six Lighthouse Applications are being established in EarthServer, each of which poses distinct challenges for Earth Data Analytics: Cryospheric Science, Airborne Science, Atmospheric Science, Geology, Oceanography, and Planetary Science. Altogether, they cover all Earth Science domains; the Planetary Science use case has been added to challenge concepts and standards in non-standard environments. In addition, EarthLook (maintained by Jacobs University) showcases the use of OGC standards in 1D through 5D use cases. In this contribution we will report on the first applications integrated in the EarthServer Science Gateway and on the clients for mobile appliances developed to access them. We will also show how federated and social identity services can allow Big Earth Data providers to expose their data in a distributed environment while keeping strict and fine-grained control over user authentication and authorisation.
The degree to which the EarthServer implementation fulfils the recommendations made in the recent TERENA Study on AAA Platforms For Scientific Resources in Europe (https://confluence.terena.org/display/aaastudy/AAA+Study+Home+Page) will also be assessed.
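    As an illustration of the "database query language as client/server interface" idea, a WCPS request is a small declarative query shipped to the server. The sketch below assembles one in Python; the coverage name, coordinates and axis labels are assumptions modeled on published rasdaman examples and are not validated against a live EarthServer endpoint:

```python
def build_wcps_query(coverage, lat, lon, t_start, t_end):
    # Assemble a WCPS query that slices a space-time coverage at one point,
    # extracts a time series, and asks the server to encode it as CSV.
    return (
        f'for c in ({coverage}) '
        f'return encode(c[Lat({lat}), Long({lon}), '
        f'ansi("{t_start}":"{t_end}")], "csv")'
    )

query = build_wcps_query("AvgLandTemp", 53.08, 8.80, "2014-01", "2014-12")
# A client would send this string to a WCPS endpoint over HTTP; the server
# plans and executes the query close to the data, returning only the result.
```

    This is the sense in which the approach is "barrier-free": the client never downloads the coverage, only the few bytes the query yields.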

  20. Power to the People: Addressing Big Data Challenges in Neuroscience by Creating a New Cadre of Citizen Neuroscientists.

    PubMed

    Roskams, Jane; Popović, Zoran

    2016-11-02

    Global neuroscience projects are producing big data at an unprecedented rate that informatics and artificial intelligence (AI) analytics simply cannot handle. Online games like Foldit, Eterna, and Eyewire, and now a new neuroscience game, Mozak, are fueling a people-powered research science (PPRS) revolution, creating a global community of "new experts" that over time synergize with computational efforts to accelerate scientific progress, empowering us to use our collective cerebral talents to drive our understanding of our brain. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. KNMI DataLab experiences in serving data-driven innovations

    NASA Astrophysics Data System (ADS)

    Noteboom, Jan Willem; Sluiter, Raymond

    2016-04-01

    Climate change research and innovations in weather forecasting rely more and more on (Big) data. Besides increasing data volumes from traditional sources (such as observation networks, radars and satellites), the use of open data, crowd-sourced data and the Internet of Things (IoT) is emerging. To deploy these data sources optimally in our services and products, KNMI has established a DataLab to serve data-driven innovations in collaboration with public and private sector partners. Big data management, data integration, data analytics including machine learning, and data visualization techniques play an important role in the DataLab. Cross-domain data-driven innovations that arise from public-private collaborative projects and research programmes can be explored, prototyped and/or piloted by the KNMI DataLab. Furthermore, advice can be requested on (Big) data techniques and data sources. In support of collaborative (Big) data science activities, scalable environments are offered with facilities for data integration, data analysis and visualization. In addition, data science expertise is provided directly or from a pool of internal and external experts. At the EGU conference, we present experiences gained and best practices in operating the KNMI DataLab to optimally serve data-driven innovations for weather and climate applications.

  2. Creation of a Geo Big Data Outreach and Training Collaboratory for the Wildfire Community

    NASA Astrophysics Data System (ADS)

    Altintas, I.; Sale, J.; Block, J.; Cowart, C.; Crawl, D.

    2015-12-01

    A major challenge for the geoscience community is the training and education of the current and next generation of big data geoscientists. In wildfire research, there are an increasing number of tools, middleware and techniques available for data science related to wildfires. The necessary computing infrastructures are often within reach, and most of the software tools for big data are freely available. What has been lacking is a transparent platform and training program to produce data science experts who can use these integrated tools effectively. Having scientists well versed in big data technologies for geoscience applications is of critical importance to the future of research and knowledge advancement. To address this critical need, we are developing learning modules that teach process-based thinking to capture the value of end-to-end systems of reusable blocks of knowledge, and that integrate the tools and technologies used in big data analysis in an intuitive manner. WIFIRE is an end-to-end cyberinfrastructure for dynamic data-driven simulation, prediction and visualization of wildfire behavior. To this end, we are openly extending an environment we have built for "big data training" (biobigdata.ucsd.edu), and similar MOOC-based approaches, to the wildfire community. We are building an environment that includes training modules for distributed platforms and systems, Big Data concepts, and scalable workflow tools, along with other basics of data science including data management, reproducibility and sharing of results. We also plan to provide teaching modules with analytical and dynamic data-driven wildfire behavior modeling case studies, which address not only the needs of standards-based K-12 science education but also those of a well-educated and informed citizenry. Another part of our outreach mission is to educate our community on all aspects of wildfire research.
One of the most successful ways of accomplishing this is through high school and undergraduate student internships. Students have worked closely with WIFIRE researchers on various projects including the development of statistical models of wildfire ignition probabilities for southern California, and the development of a smartphone app for crowd-sourced wildfire reporting through social networks such as Twitter and Facebook.

  3. Diverse Grains in Mars Sandstone Target Big Arm

    NASA Image and Video Library

    2015-07-01

    This view of a sandstone target called "Big Arm" covers an area about 1.3 inches (33 millimeters) wide in detail that shows differing shapes and colors of sand grains in the stone. Three separate images taken by the Mars Hand Lens Imager (MAHLI) camera on NASA's Curiosity Mars rover, at different focus settings, were combined into this focus-merge view. The Big Arm target on lower Mount Sharp is at a location near "Marias Pass" where a mudstone bedrock is in contact with overlying sandstone bedrock. MAHLI recorded the component images on May 29, 2015, during the 999th Martian day, or sol, of Curiosity's work on Mars. The rounded shape of some grains visible here suggests they traveled long distances before becoming part of the sediment that later hardened into sandstone. Other grains are more angular and may have originated closer to the rock's current location. Lighter and darker grains may have different compositions. MAHLI was built by Malin Space Science Systems, San Diego. NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Science Laboratory Project for the NASA Science Mission Directorate, Washington. http://photojournal.jpl.nasa.gov/catalog/PIA19677

  4. Authentic Research Experience and "Big Data" Analysis in the Classroom: Maize Response to Abiotic Stress

    ERIC Educational Resources Information Center

    Makarevitch, Irina; Frechette, Cameo; Wiatros, Natalia

    2015-01-01

    Integration of inquiry-based approaches into curriculum is transforming the way science is taught and studied in undergraduate classrooms. Incorporating quantitative reasoning and mathematical skills into authentic biology undergraduate research projects has been shown to benefit students in developing various skills necessary for future…

  5. Big Science and Big Big Science

    ERIC Educational Resources Information Center

    Marshall, Steve

    2012-01-01

    In his introduction to the science shows feature in "Primary Science" 115, Ian B. Dunne asks the question "Why have science shows?" He lists a host of very sound reasons, starting with because "science is fun", so why not engage and entertain, inspire, grab attention and encourage them to learn? He goes on to state that: "Even in today's…

  6. Nursing Needs Big Data and Big Data Needs Nursing.

    PubMed

    Brennan, Patricia Flatley; Bakken, Suzanne

    2015-09-01

    Contemporary big data initiatives in health care will benefit from greater integration with nursing science and nursing practice; in turn, nursing science and nursing practice have much to gain from data science initiatives. Big data arises secondary to scholarly inquiry (e.g., -omics) and to everyday observations like cardiac flow sensors or Twitter feeds. Emerging data science methods ensure that these data can be leveraged to improve patient care. Big data encompasses data that exceed human comprehension, that exist at a volume unmanageable by standard computer systems, that arrive at a velocity not under the control of the investigator, and that possess a level of imprecision not found in traditional inquiry. Data science methods are emerging to manage and gain insights from big data. The primary methods included investigation of emerging federal big data initiatives and exploration of exemplars from nursing informatics research to benchmark where nursing is already poised to participate in the big data revolution. We provide observations and reflections on experiences in the emerging big data initiatives. Existing approaches to large data set analysis provide a necessary but not sufficient foundation for nursing to participate in the big data revolution. Nursing's Social Policy Statement guides a principled, ethical perspective on big data and data science. There are implications for basic and advanced practice clinical nurses in practice, for the nurse scientist who collaborates with data scientists, and for the nurse data scientist. Big data and data science have the potential to provide greater richness in understanding patient phenomena and in tailoring interventional strategies that are personalized to the patient. © 2015 Sigma Theta Tau International.

  7. Big data - a 21st century science Maginot Line? No-boundary thinking: shifting from the big data paradigm.

    PubMed

    Huang, Xiuzhen; Jennings, Steven F; Bruce, Barry; Buchan, Alison; Cai, Liming; Chen, Pengyin; Cramer, Carole L; Guan, Weihua; Hilgert, Uwe Kk; Jiang, Hongmei; Li, Zenglu; McClure, Gail; McMullen, Donald F; Nanduri, Bindu; Perkins, Andy; Rekepalli, Bhanu; Salem, Saeed; Specker, Jennifer; Walker, Karl; Wunsch, Donald; Xiong, Donghai; Zhang, Shuzhong; Zhang, Yu; Zhao, Zhongming; Moore, Jason H

    2015-01-01

    Whether your interests lie in scientific arenas, the corporate world, or in government, you have certainly heard the praises of big data: big data will give you new insights, allow you to become more efficient, and/or solve your problems. While big data has had some outstanding successes, many are now beginning to see that it is not the silver bullet it has been touted to be. Here our main concern is the overall impact of big data: its current manifestation is constructing a Maginot Line in 21st-century science. Big data is no longer simply "lots of data" as a phenomenon; the big data paradigm is putting the spirit of the Maginot Line into lots of data. Big data is disconnecting researchers from science challenges. We propose No-Boundary Thinking (NBT): applying no-boundary thinking to problem definition in order to address science challenges.

  8. A big data approach for climate change indicators processing in the CLIP-C project

    NASA Astrophysics Data System (ADS)

    D'Anca, Alessandro; Conte, Laura; Palazzo, Cosimo; Fiore, Sandro; Aloisio, Giovanni

    2016-04-01

    Defining and implementing processing chains with multiple (e.g. tens or hundreds of) data analytics operators can be a real challenge in many practical scientific use cases, such as climate change indicators. This is usually done via scripts (e.g. bash) on the client side, and it requires climate scientists to implement and replicate workflow-like control logic (which may be error-prone) in their scripts, alongside the expected application-level part. Moreover, the large amount of data and the strong I/O demand pose additional challenges related to performance. In this regard, production-level tools for climate data analysis are mostly sequential, and there is a lack of big data analytics solutions implementing fine-grain data parallelism or adopting stronger parallel I/O strategies, data locality, workflow optimization, etc. High-level solutions leveraging workflow-enabled big data analytics frameworks for eScience could help scientists define and implement the workflows related to their experiments through a more declarative, efficient and powerful approach. This talk will start by introducing the main needs and challenges regarding big data analytics workflow management for eScience, and will then provide insights into the implementation of several real use cases related to climate change indicators on large datasets produced in the context of the CLIP-C project, an EU FP7 project aiming to provide access to climate information of direct relevance to a wide variety of users, from scientists to policy makers and private-sector decision makers. All the proposed use cases have been implemented using the Ophidia big data analytics framework. The software stack includes an internal workflow management system, which coordinates, orchestrates, and optimises the execution of multiple scientific data analytics and visualization tasks.
Real-time monitoring of workflow execution is also supported through a graphical user interface. To address the challenges of the use cases, the implemented data analytics workflows include parallel data analysis, metadata management, virtual file system tasks, map generation, rolling of datasets, and import/export of datasets in NetCDF format. The use cases have been implemented on an 8-node HPC cluster (16 cores/node), part of the Athena system available at the CMCC Supercomputing Centre. Benchmark results will also be presented during the talk.
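    As a toy example of the kind of climate change indicator such workflows compute per grid cell, the "tropical nights" index counts days whose daily minimum temperature exceeds 20 °C. A minimal serial sketch in plain Python (not the Ophidia implementation; the values are invented):

```python
def tropical_nights(daily_tmin_celsius, threshold=20.0):
    # Count days whose minimum temperature strictly exceeds the threshold;
    # a workflow engine would fan this out over every grid cell in parallel.
    return sum(1 for t in daily_tmin_celsius if t > threshold)

# One cell's daily minimum temperatures over five days (invented values).
series = [18.5, 20.1, 22.4, 19.9, 21.0]
count = tropical_nights(series)  # 3 days exceed 20.0 degC
```

    The fine-grain data parallelism the abstract calls for comes from the fact that each grid cell's count is independent, so the per-cell kernel above can be distributed freely.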

  9. Most science spared big budget bite

    NASA Astrophysics Data System (ADS)

    Richman, Barbara T.

    Most science budgets emerged unscathed from President Ronald Reagan's fiscal 1983 budget proposal. Total funding for research and development came out slightly ahead of inflation, as did funding for basic research (Eos, February 16, p. 162). The National Science Foundation (NSF) edged past the projected 7.3% inflation rate for 1982, and the National Aeronautics and Space Administration (NASA) budget is to be increased by 10.6%. However, the U.S. Geological Survey (USGS) is budgeted for a 4.2% increase in funding, and the National Oceanic and Atmospheric Administration (NOAA) will take an 8.3% cut.

  10. Big data, open science and the brain: lessons learned from genomics.

    PubMed

    Choudhury, Suparna; Fishman, Jennifer R; McGowan, Michelle L; Juengst, Eric T

    2014-01-01

    The BRAIN Initiative aims to break new ground in the scale and speed of data collection in neuroscience, requiring tools to handle data on the order of yottabytes (10^24 bytes). The scale, investment and organization of it are being compared to the Human Genome Project (HGP), which has exemplified "big science" for biology. In line with the trend towards Big Data in genomic research, the promise of the BRAIN Initiative, as well as the European Human Brain Project, rests on the possibility of amassing vast quantities of data to model the complex interactions between the brain and behavior and to inform the diagnosis and prevention of neurological disorders and psychiatric disease. Advocates of this "data-driven" paradigm in neuroscience argue that harnessing the large quantities of data generated across laboratories worldwide has numerous methodological, ethical and economic advantages, but it requires the neuroscience community to adopt a culture of data sharing and open access in order to benefit from them. In this article, we examine the rationale for data sharing among advocates and briefly exemplify these in terms of new "open neuroscience" projects. Then, drawing on the frequently invoked model of data sharing in genomics, we go on to demonstrate the complexities of data sharing, shedding light on the sociological and ethical challenges within the realms of institutions, researchers and participants, namely dilemmas around public/private interests in data, (lack of) motivation to share in the academic community, and potential loss of participant anonymity. Our paper serves to highlight some foreseeable tensions around data sharing relevant to the emergent "open neuroscience" movement.

  11. Big Data Science Cafés: High School Students Experiencing Real Research with Scientists

    NASA Astrophysics Data System (ADS)

    Walker, C. E.; Pompea, S. M.

    2017-12-01

    The Education and Public Outreach group at the National Optical Astronomy Observatory has designed an outside-of-school education program to excite the interest of talented youth in future projects like the Large Synoptic Survey Telescope (LSST) and the NOAO (archival) Data Lab, their data approaches, and key science projects. Originally funded by the LSST Corporation, the program cultivates talented youth to enter STEM disciplines and serves as a model to disseminate to the 40+ institutions involved in LSST. One Saturday a month during the academic year, high school students have the opportunity to interact with expert astronomers who work with large astronomical data sets in their scientific work. Students learn about killer asteroids, the birth and death of stars, colliding galaxies, the structure of the universe, gravitational waves, dark energy, dark matter, and more. The format for the Saturday science cafés has been a short presentation, discussion (plus food), a computer lab activity and more discussion. They last about 2.5 hours and have been planned by a group of interested local high school students, an undergraduate student coordinator, the presenting astronomers, the program director and an evaluator. High school youth leaders help ensure an enjoyable and successful program for fellow students. They help their fellow students with the activities and help evaluate how well each science café went. Their remarks shape the next science café and improve the program. The experience offers youth leaders ownership of the program, opportunities to take on responsibilities and learn leadership and communication skills, and fosters their continued interests in STEM. The prototype Big Data Science Academy was implemented successfully in spring 2017 and engaged almost 40 teens from greater Tucson in the fundamentals of astronomy concepts and research. As with any first implementation there were bumps.
However, staff, scientists, and student leaders all stepped up to make the program a success. The project achieved many of its goals with a relatively small budget, providing value not only to the student leaders and student attendees, but to the scientists and staff as well. Staff learned what worked and what needed more fine-tuning to successfully launch and run a big data academy for teens in the years that follow.

  12. Geologic map of Big Bend National Park, Texas

    USGS Publications Warehouse

    Turner, Kenzie J.; Berry, Margaret E.; Page, William R.; Lehman, Thomas M.; Bohannon, Robert G.; Scott, Robert B.; Miggins, Daniel P.; Budahn, James R.; Cooper, Roger W.; Drenth, Benjamin J.; Anderson, Eric D.; Williams, Van S.

    2011-01-01

    The purpose of this map is to provide the National Park Service and the public with an updated digital geologic map of Big Bend National Park (BBNP). The geologic map report of Maxwell and others (1967) provides a fully comprehensive account of the important volcanic, structural, geomorphological, and paleontological features that define BBNP. However, the map is on a geographically distorted planimetric base and lacks topography, which has caused difficulty in conducting GIS-based data analyses and georeferencing the many geologic features investigated and depicted on the map. In addition, the map is outdated, excluding significant data from numerous studies that have been carried out since its publication more than 40 years ago. This report includes a modern digital geologic map that can be utilized with standard GIS applications to aid BBNP researchers in geologic data analysis, natural resource and ecosystem management, monitoring, assessment, inventory activities, and educational and recreational uses. The digital map incorporates new data, many revisions, and greater detail than the original map. Although some geologic issues remain unresolved for BBNP, the updated map serves as a foundation for addressing those issues. Funding for the Big Bend National Park geologic map was provided by the United States Geological Survey (USGS) National Cooperative Geologic Mapping Program and the National Park Service. The Big Bend mapping project was administered by staff in the USGS Geology and Environmental Change Science Center, Denver, Colo. Members of the USGS Mineral and Environmental Resources Science Center completed investigations in parallel with the geologic mapping project. Results of these investigations addressed some significant current issues in BBNP and the U.S.-Mexico border region, including contaminants and human health, ecosystems, and water resources. 
Funding for the high-resolution aeromagnetic survey in BBNP, and associated data analyses and interpretation, was from the USGS Crustal Geophysics and Geochemistry Science Center. Mapping contributed from university professors and students was mostly funded by independent sources, including academic institutions, private industry, and other agencies.

  13. Abraham Pais Prize for History of Physics Lecture: Big, Bigger, Too Big? From Los Alamos to Fermilab and the SSC

    NASA Astrophysics Data System (ADS)

    Hoddeson, Lillian

    2012-03-01

    The modern era of big science emerged during World War II. Oppenheimer's Los Alamos laboratory offered the quintessential model of a government-funded, mission-oriented facility directed by a strong charismatic leader. The postwar beneficiaries of this model included the increasingly ambitious large laboratories that participated in particle physics, in particular Brookhaven, SLAC, and Fermilab. They carried the big science they practiced into a new realm where experiments eventually became as large and costly as entire laboratories had been. Meanwhile the available funding grew more limited, causing the physics research to be concentrated into fewer and bigger experiments that appeared never to end. The next phase in American high-energy physics was the Superconducting Super Collider, the most costly pure physics project ever attempted. The SSC's termination was a tragedy for American science, but for historians it offers an opportunity to understand what made the success of earlier large high-energy physics laboratories possible, and what made the continuation of the SSC impossible. The most obvious reason for the SSC's failure was its enormous and escalating budget, which Congress would no longer support. Other factors need to be recognized, however: no leader could be found with directing skills as strong as those of Wilson, Panofsky, Lederman, or Richter; the scale of the project subjected it to uncomfortable public and Congressional scrutiny; and the DOE's enforcement of management procedures typical of the military-industrial complex, which clashed with those of the scientific community, led to the alienation and withdrawal of many of the most creative scientists, and to both the perception and the reality of poor management. These factors, exacerbated by negative pressure from scientists in other fields and a post-Cold War climate in which physicists had little of their earlier cultural prestige, discouraged efforts to gain international support.
They made the SSC crucially different from its predecessors and sealed its doom.

  14. Energy Systems Integration Newsletter - December 2016 | Energy Systems

    Science.gov Websites

    Snippets from the newsletter: NREL is participating in a new National Science Foundation big data project for smart grids; research covers control features of modern wind turbine generators; Frew presented research on wholesale market design; previous work was cited in numerous European studies, and ERGIS was mentioned.

  15. Integration of bio- and geoscience data with the ODM2 standards and software ecosystem for the CZOData and BiG CZ Data projects

    NASA Astrophysics Data System (ADS)

    Aufdenkampe, A. K.; Mayorga, E.; Horsburgh, J. S.; Lehnert, K. A.; Zaslavsky, I.

    2015-12-01

    We have developed a family of solutions to the challenges of integrating diverse data from the biological and geological (BiG) disciplines for Critical Zone (CZ) science. These standards and software solutions have been developed around the new Observations Data Model version 2.0 (ODM2, http://ODM2.org), which was designed as a profile of the Open Geospatial Consortium's (OGC) Observations and Measurements (O&M) standard. The ODM2 standards and software ecosystem has at its core an information model that balances specificity with flexibility, to powerfully and equally serve the needs of multiple dataset types, from multivariate sensor-generated time series to geochemical measurements of specimen hierarchies to multi-dimensional spectral data to biodiversity observations. ODM2 has been adopted as the information model guiding the next generation of cyberinfrastructure development for the Interdisciplinary Earth Data Alliance (http://www.iedadata.org/) and the CUAHSI Water Data Center (https://www.cuahsi.org/wdc). Here we present several components of the ODM2 standards and software ecosystem that were developed specifically to help CZ scientists and their data managers share and manage data through the national Critical Zone Observatory data integration project (CZOData, http://criticalzone.org/national/data/) and the bio integration with geo for critical zone science data project (BiG CZ Data, http://bigcz.org/). These include the ODM2 Controlled Vocabulary system (http://vocabulary.odm2.org), the YAML Observation Data Archive & exchange (YODA) File Format (https://github.com/ODM2/YODA-File) and the BiG CZ Toolbox, which will combine easy-to-install ODM2 databases (https://github.com/ODM2/ODM2) with a variety of graphical software packages for data management, such as ODMTools (https://github.com/ODM2/ODMToolsPython) and the ODM2 Streaming Data Loader (https://github.com/ODM2/ODM2StreamingDataLoader).
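    To give a flavor of the kind of observation record the ODM2 information model organizes (result metadata describing what, where and how, plus the result's values), here is a hedged sketch; the field names are illustrative only, not the actual ODM2 schema, and JSON stands in for the YAML used by YODA files:

```python
import json

# A minimal time-series observation, loosely following ODM2's separation
# of a result's metadata from its values (all field names are invented).
observation = {
    "result": {
        "sampling_feature": "stream site RB-01",  # hypothetical site code
        "variable": "waterTemperature",           # assumed controlled-vocabulary term
        "unit": "degC",
        "result_type": "timeSeriesCoverage",
    },
    "values": [
        {"datetime": "2015-06-01T00:00:00Z", "value": 14.2},
        {"datetime": "2015-06-01T01:00:00Z", "value": 13.9},
    ],
}

# Serialize for exchange between tools (YODA uses YAML; JSON keeps this
# sketch inside the standard library).
payload = json.dumps(observation, indent=2)
```

    The value of such a shared shape is that a sensor time series, a specimen measurement, or a biodiversity count can all ride the same metadata scaffolding and be loaded by the same tools.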

  16. Solar-Terrestrial and Astronomical Research Network (STAR-Network) - A Meaningful Practice of New Cyberinfrastructure on Space Science

    NASA Astrophysics Data System (ADS)

    Hu, X.; Zou, Z.

    2017-12-01

    For the next decades, a comprehensive big data application environment is the dominant direction of cyberinfrastructure development for space science. To make the concept of such a big cyberinfrastructure (e.g. Digital Space) a reality, several capabilities should be the focus of integration: the science data system, the digital space engine, big data applications (tools and models) and the IT infrastructure. In the past few years, the CAS Chinese Space Science Data Center (CSSDC) has made a helpful attempt in this direction. A cloud-enabled virtual research platform for space science, called the Solar-Terrestrial and Astronomical Research Network (STAR-Network), has been developed to serve the full lifecycle of space science missions and research activities. It integrates a wide range of disciplinary and interdisciplinary resources to provide science-problem-oriented data retrieval and query services, collaborative mission demonstration services, mission operation support, space weather computing and analysis services, and other self-service functions. The platform is supported by persistent infrastructure, including cloud storage, cloud computing, supercomputing and so on. Different varieties of resources are interconnected: science data can be displayed in the browser by visualization tools, data analysis tools and physical models can be driven by the applicable science data, and computing results can be saved to the cloud, for example. So far, STAR-Network has served a series of space science missions in China, involving the Strategic Pioneer Program on Space Science (this program has launched space science satellites such as DAMPE, HXMT and QUESS, and more satellites will be launched around 2020) and the Meridian Space Weather Monitor Project. Scientists have obtained new findings using the science data from these missions, with STAR-Network's contribution.
We are confident that STAR-Network is an exciting practice of new cyberinfrastructure architecture on space science.

  17. Big Machines and Big Science: 80 Years of Accelerators at Stanford

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loew, Gregory

    2008-12-16

    Longtime SLAC physicist Greg Loew will present a trip through SLAC's origins, highlighting its scientific achievements, and provide a glimpse of the lab's future in 'Big Machines and Big Science: 80 Years of Accelerators at Stanford.'

  18. Research funding. Big names or big ideas: do peer-review panels select the best science proposals?

    PubMed

    Li, Danielle; Agha, Leila

    2015-04-24

    This paper examines the success of peer-review panels in predicting the future quality of proposed research. We construct new data to track publication, citation, and patenting outcomes associated with more than 130,000 research project (R01) grants funded by the U.S. National Institutes of Health from 1980 to 2008. We find that better peer-review scores are consistently associated with better research outcomes and that this relationship persists even when we include detailed controls for an investigator's publication history, grant history, institutional affiliations, career stage, and degree types. A one-standard-deviation worse peer-review score among awarded grants is associated with 15% fewer citations, 7% fewer publications, 19% fewer high-impact publications, and 14% fewer follow-on patents. Copyright © 2015, American Association for the Advancement of Science.
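The abstract's per-standard-deviation effect sizes can be turned into a rough back-of-the-envelope estimate. The sketch below is illustrative only: the baseline of 100 citations and the assumption that the percentage reduction compounds multiplicatively across standard deviations are ours, not the paper's.

```python
# Effect sizes reported in the abstract (per one SD worse peer-review score).
effects = {"citations": 0.15, "publications": 0.07,
           "high_impact_pubs": 0.19, "patents": 0.14}

def expected_outcome(baseline, sd_worse, pct_drop):
    """Expected outcome after `sd_worse` standard deviations of worse
    peer-review score, compounding the per-SD reduction multiplicatively
    (a modeling assumption, not the paper's specification)."""
    return baseline * (1 - pct_drop) ** sd_worse

# E.g., a grant expected to draw 100 citations at the mean score:
print(round(expected_outcome(100, 1, effects["citations"]), 1))  # 85.0
print(round(expected_outcome(100, 2, effects["citations"]), 1))  # 72.2
```

The point of the exercise is only to show the magnitude involved: even a one-SD score difference among funded grants corresponds to a sizable expected gap in outcomes.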

  19. ALCF Data Science Program: Productive Data-centric Supercomputing

    NASA Astrophysics Data System (ADS)

    Romero, Nichols; Vishwanath, Venkatram

    The ALCF Data Science Program (ADSP) is targeted at big data science problems that require leadership computing resources. The goal of the program is to explore and improve a variety of computational methods that will enable data-driven discoveries across all scientific disciplines. The projects will focus on data science techniques covering a wide area of discovery including but not limited to uncertainty quantification, statistics, machine learning, deep learning, databases, pattern recognition, image processing, graph analytics, data mining, real-time data analysis, and complex and interactive workflows. Project teams will be among the first to access Theta, ALCF's forthcoming 8.5-petaflops Intel/Cray system. The program will transition to the 200-petaflops Aurora supercomputing system when it becomes available. In 2016, four projects were selected to kick off the ADSP. The selected projects span experimental and computational sciences and range from modeling the brain to discovering new materials for solar-powered windows to simulating collision events at the Large Hadron Collider (LHC). The program will have a regular call for proposals, with the next call expected in Spring 2017. http://www.alcf.anl.gov/alcf-data-science-program This research used resources of the ALCF, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357.

  20. Observatories, think tanks, and community models in the hydrologic and environmental sciences: How does it affect me?

    NASA Astrophysics Data System (ADS)

    Torgersen, Thomas

    2006-06-01

    Multiple issues in hydrologic and environmental sciences are now squarely in the public focus and require both government and scientific study. Two facts also emerge: (1) The new approach being touted publicly for advancing the hydrologic and environmental sciences is the establishment of community-operated "big science" (observatories, think tanks, community models, and data repositories). (2) There have been important changes in the business of science over the last 20 years that make it important for the hydrologic and environmental sciences to demonstrate the "value" of public investment in hydrological and environmental science. Given that community-operated big science (observatories, think tanks, community models, and data repositories) could become operational, I argue that such big science should not mean a reduction in the importance of single-investigator science. Rather, specific linkages between the large-scale, team-built, community-operated big science and the single investigator should provide context data, observatory data, and systems models for a continuing stream of hypotheses by discipline-based, specialized research and a strong rationale for continued, single-PI ("discovery-based") research. I also argue that big science can be managed to provide a better means of demonstrating the value of public investment in the hydrologic and environmental sciences. Decisions regarding policy will still be political, but big science could provide an integration of the best scientific understanding as a guide for the best policy.

  1. Big Science for Growing Minds: Constructivist Classrooms for Young Thinkers. Early Childhood Education Series

    ERIC Educational Resources Information Center

    Brooks, Jacqueline Grennon

    2011-01-01

    Strong evidence from recent brain research shows that the intentional teaching of science is crucial in early childhood. "Big Science for Growing Minds" describes a groundbreaking curriculum that invites readers to rethink science education through a set of unifying concepts or "big ideas." Using an integrated learning approach, the author shows…

  2. Earth Observing System Data and Information System (EOSDIS) Overview

    NASA Technical Reports Server (NTRS)

    Klene, Stephan

    2016-01-01

    The National Aeronautics and Space Administration (NASA) acquires and distributes an abundance of Earth science data on a daily basis to a diverse user community worldwide. The NASA Big Earth Data Initiative (BEDI) is an effort to make the acquired science data more discoverable, accessible, and usable. This presentation will provide a brief introduction to the Earth Observing System Data and Information System (EOSDIS) project and the nature of advances that have been made by BEDI to other Federal Users.

  3. From big data analysis in the cloud to robotic pot drumming: tales from the Met Office Informatics Lab

    NASA Astrophysics Data System (ADS)

    Robinson, Niall; Tomlinson, Jacob; Prudden, Rachel; Hilson, Alex; Arribas, Alberto

    2017-04-01

    The Met Office Informatics Lab is a small multidisciplinary team which sits between science, technology and design. Our mission is simply "to make Met Office data useful" - a deliberately broad objective. Our prototypes often trial cutting edge technologies, and so far have included projects such as virtual reality data visualisation in the web browser, bots and natural language interfaces, and artificially intelligent weather warnings. In this talk we focus on our latest project, Jade, a big data analysis platform in the cloud. It is a powerful, flexible and simple to use implementation which makes extensive use of technologies such as Jupyter, Dask, containerisation, Infrastructure as Code, and auto-scaling. Crucially, Jade is flexible enough to be used for a diverse set of applications: it can present weather forecast information to meteorologists and allow climate scientists to analyse big data sets, but it is also effective for analysing non-geospatial data. As well as making data useful, the Informatics Lab also trials new working practises. In this presentation, we will talk about our experience of making a group like the Lab successful.

  4. Toward a manifesto for the 'public understanding of big data'.

    PubMed

    Michael, Mike; Lupton, Deborah

    2016-01-01

    In this article, we sketch a 'manifesto' for the 'public understanding of big data'. On the one hand, this entails such public understanding of science and public engagement with science and technology-tinged questions as follows: How, when and where are people exposed to, or do they engage with, big data? Who are regarded as big data's trustworthy sources, or credible commentators and critics? What are the mechanisms by which big data systems are opened to public scrutiny? On the other hand, big data generate many challenges for public understanding of science and public engagement with science and technology: How do we address publics that are simultaneously the informant, the informed and the information of big data? What counts as understanding of, or engagement with, big data, when big data themselves are multiplying, fluid and recursive? As part of our manifesto, we propose a range of empirical, conceptual and methodological exhortations. We also provide Appendix 1 that outlines three novel methods for addressing some of the issues raised in the article. © The Author(s) 2015.

  5. Application of Ontologies for Big Earth Data

    NASA Astrophysics Data System (ADS)

    Huang, T.; Chang, G.; Armstrong, E. M.; Boening, C.

    2014-12-01

    Connected data is smarter data! Earth Science research infrastructure must do more than just support temporal, geospatial discovery of satellite data. As the Earth Science data archives continue to expand across NASA data centers, the research communities are demanding smarter data services. A successful research infrastructure must be able to present researchers the complete picture, that is, datasets with linked citations, related interdisciplinary data, imagery, current events, social media discussions, and scientific data tools that are relevant to the particular dataset. The popular Semantic Web for Earth and Environmental Terminology (SWEET) is a collection of ontologies and concepts designed to improve discovery and application of Earth Science data. The SWEET ontologies collection was initially developed to capture the relationships between keywords in the NASA Global Change Master Directory (GCMD). Over the years this popular ontologies collection has expanded to cover over 200 ontologies and 6000 concepts to enable scalable classification of Earth system science and space science concepts. This presentation discusses semantic web technologies as the enabling technology for data-intensive science. We will discuss the application of the SWEET ontologies as a critical component in knowledge-driven research infrastructure for some recent projects, which include the DARPA Ontological System for Context Artifact and Resources (OSCAR), 2013 NASA ACCESS Virtual Quality Screening Service (VQSS), and the 2013 NASA Sea Level Change Portal (SLCP) projects. The presentation will also discuss the benefits of using semantic web technologies in developing research infrastructure for Big Earth Science Data in an attempt to "accommodate all domains and provide the necessary glue for information to be cross-linked, correlated, and discovered in a semantically rich manner." 
[1] Savas Parastatidis: A platform for all that we know: creating a knowledge-driven research infrastructure. The Fourth Paradigm, 2009, pp. 165-172.
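The core mechanism behind this kind of ontology-driven discovery is the subclass (is-a) link between concepts. The sketch below is a minimal, dependency-free illustration of walking those links; the concept names are hypothetical and are not drawn from the actual SWEET vocabulary.

```python
# Minimal sketch: how an ontology's subclass links support smarter search.
# Concept names are hypothetical; the real SWEET collection covers over
# 200 ontologies and 6000 concepts.
subclass_of = {
    "SeaSurfaceTemperature": "OceanProperty",
    "ChlorophyllConcentration": "OceanProperty",
    "OceanProperty": "PhysicalProperty",
}

def ancestors(concept):
    """Walk subclass links upward (the transitive closure of is-a)."""
    chain = []
    while concept in subclass_of:
        concept = subclass_of[concept]
        chain.append(concept)
    return chain

# A query for "PhysicalProperty" can now also surface an SST dataset,
# because the ontology links the two concepts:
print(ancestors("SeaSurfaceTemperature"))  # ['OceanProperty', 'PhysicalProperty']
```

In a production system these links would live in RDF/OWL and be traversed with SPARQL property paths rather than a Python dict, but the cross-linking idea is the same.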

  6. Air Toxics Under the Big Sky: Examining the Effectiveness of Authentic Scientific Research on High School Students' Science Skills and Interest.

    PubMed

    Ward, Tony J; Delaloye, Naomi; Adams, Earle Raymond; Ware, Desirae; Vanek, Diana; Knuth, Randy; Hester, Carolyn Laurie; Marra, Nancy Noel; Holian, Andrij

    2016-01-01

    Air Toxics Under the Big Sky is an environmental science outreach/education program that incorporates the Next Generation Science Standards (NGSS) 8 Practices with the goal of promoting knowledge and understanding of authentic scientific research in high school classrooms through air quality research. A quasi-experimental design was used in order to understand: 1) how the program affects student understanding of scientific inquiry and research and 2) how the open inquiry learning opportunities provided by the program increase student interest in science as a career path. Treatment students received instruction related to air pollution (airborne particulate matter), associated health concerns, and training on how to operate air quality testing equipment. They then participated in a yearlong scientific research project in which they developed and tested hypotheses through research of their own design regarding the sources and concentrations of air pollution in their homes and communities. Results from an external evaluation revealed that treatment students developed a deeper understanding of scientific research than did comparison students, as measured by their ability to generate good hypotheses and research designs, and equally expressed an increased interest in pursuing a career in science. These results emphasize the value of and need for authentic science learning opportunities in the modern science classroom.

  7. Air Toxics Under the Big Sky: examining the effectiveness of authentic scientific research on high school students' science skills and interest

    NASA Astrophysics Data System (ADS)

    Ward, Tony J.; Delaloye, Naomi; Adams, Earle Raymond; Ware, Desirae; Vanek, Diana; Knuth, Randy; Hester, Carolyn Laurie; Marra, Nancy Noel; Holian, Andrij

    2016-04-01

    Air Toxics Under the Big Sky is an environmental science outreach/education program that incorporates the Next Generation Science Standards (NGSS) 8 Practices with the goal of promoting knowledge and understanding of authentic scientific research in high school classrooms through air quality research. This research explored: (1) how the program affects student understanding of scientific inquiry and research and (2) how the open-inquiry learning opportunities provided by the program increase student interest in science as a career path. Treatment students received instruction related to air pollution (airborne particulate matter), associated health concerns, and training on how to operate air quality testing equipment. They then participated in a yearlong scientific research project in which they developed and tested hypotheses through research of their own design regarding the sources and concentrations of air pollution in their homes and communities. Results from an external evaluation revealed that treatment students developed a deeper understanding of scientific research than did comparison students, as measured by their ability to generate good hypotheses and research designs, and equally expressed an increased interest in pursuing a career in science. These results emphasize the value of and need for authentic science learning opportunities in the modern science classroom.

  8. Air Toxics Under the Big Sky: Examining the Effectiveness of Authentic Scientific Research on High School Students’ Science Skills and Interest

    PubMed Central

    Delaloye, Naomi; Adams, Earle Raymond; Ware, Desirae; Vanek, Diana; Knuth, Randy; Hester, Carolyn Laurie; Marra, Nancy Noel; Holian, Andrij

    2016-01-01

    Air Toxics Under the Big Sky is an environmental science outreach/education program that incorporates the Next Generation Science Standards (NGSS) 8 Practices with the goal of promoting knowledge and understanding of authentic scientific research in high school classrooms through air quality research. A quasi-experimental design was used in order to understand: 1) how the program affects student understanding of scientific inquiry and research and 2) how the open inquiry learning opportunities provided by the program increase student interest in science as a career path. Treatment students received instruction related to air pollution (airborne particulate matter), associated health concerns, and training on how to operate air quality testing equipment. They then participated in a yearlong scientific research project in which they developed and tested hypotheses through research of their own design regarding the sources and concentrations of air pollution in their homes and communities. Results from an external evaluation revealed that treatment students developed a deeper understanding of scientific research than did comparison students, as measured by their ability to generate good hypotheses and research designs, and equally expressed an increased interest in pursuing a career in science. These results emphasize the value of and need for authentic science learning opportunities in the modern science classroom. PMID:28286375

  9. Global Oscillation Network Group

    NASA Astrophysics Data System (ADS)

    Murdin, P.

    2000-11-01

    The Global Oscillation Network Group (GONG) is an international, community-based project, operated by the NATIONAL SOLAR OBSERVATORY for the US National Science Foundation, to conduct a detailed study of the internal structure and dynamics of the Sun over an 11 year solar cycle using helioseismology. 10 242 velocity images are obtained by a six-station network located at Big Bear Solar Observato...

  10. Mario Bunge: Physicist and Philosopher

    NASA Astrophysics Data System (ADS)

    Matthews, Michael R.

    Mario Bunge was born in Argentina in the final year of the First World War. He learnt atomic physics and quantum mechanics from an Austrian refugee who had been a student of Heisenberg. Additionally, he taught himself modern philosophy in an environment that was a philosophical backwater. He was the first South American philosopher of science to be trained in science. His publications in physics, philosophy, psychology, sociology and the foundations of biology are staggering in number, and include a massive 8-volume Treatise on Basic Philosophy. The unifying thread of his scholarship is the constant and vigorous advancement of the Enlightenment Project, and criticism of cultural and academic movements that deny or devalue the core planks of the project: namely its naturalism, the search for truth, the universality of science, rationality, and respect for individuals. At a time when specialisation is widely decried, and its deleterious effects on science, philosophy of science, educational research and science teaching are recognised, it is salutary to see the fruits of one person's pursuit of the "Big" scientific and philosophical picture.

  11. Personalized medicine beyond genomics: alternative futures in big data-proteomics, environtome and the social proteome.

    PubMed

    Özdemir, Vural; Dove, Edward S; Gürsoy, Ulvi K; Şardaş, Semra; Yıldırım, Arif; Yılmaz, Şenay Görücü; Ömer Barlas, I; Güngör, Kıvanç; Mete, Alper; Srivastava, Sanjeeva

    2017-01-01

    No field in science and medicine today remains untouched by Big Data, and psychiatry is no exception. Proteomics is a Big Data technology and a next-generation biomarker, supporting novel system diagnostics and therapeutics in psychiatry. Proteomics technology is, in fact, much older than genomics and dates to the 1970s, well before the launch of the international Human Genome Project. While the genome has long been framed as the master or "elite" executive molecule in cell biology, the proteome by contrast is humble. Yet the proteome is critical for life: it ensures the daily functioning of cells and whole organisms. In short, proteins are the blue-collar workers of biology, the down-to-earth molecules that we cannot live without. Since 2010, proteomics has found renewed meaning and international attention with the launch of the Human Proteome Project and the growing interest in Big Data technologies such as proteomics. This article presents an interdisciplinary technology foresight analysis and conceptualizes the terms "environtome" and "social proteome". We define "environtome" as the entire complement of elements external to the human host, from microbiome, ambient temperature and weather conditions to government innovation policies, stock market dynamics, human values, political power and social norms that collectively shape the human host spatially and temporally. The "social proteome" is the subset of the environtome that influences the transition of proteomics technology to innovative applications in society. The social proteome encompasses, for example, new reimbursement schemes and business innovation models for proteomics diagnostics that depart from "once-in-a-lifetime" genotypic tests, and the anticipated hype attendant to context- and time-sensitive proteomics tests. 
Building on the "nesting principle" for governance of complex systems as discussed by Elinor Ostrom, we propose here a 3-tiered organizational architecture for Big Data science such as proteomics. The proposed nested governance structure comprises (a) scientists, (b) ethicists, and (c) scholars in the nascent field of "ethics-of-ethics", and aims to cultivate a robust social proteome for personalized medicine. Ostrom often noted that such nested governance designs offer assurance that political power embedded in innovation processes is distributed evenly and is not concentrated disproportionately in a single overbearing stakeholder or person. We agree with this assessment and conclude by underscoring the synergistic value of social and biological proteomes to realize the full potential of proteomics science for personalized medicine in psychiatry in the present era of Big Data.

  12. White House announces “big data” initiative

    NASA Astrophysics Data System (ADS)

    Showstack, Randy

    2012-04-01

    The world is now generating zettabytes (10 to the 21st power, or a billion trillion bytes) of information every year, according to John Holdren, director of the White House Office of Science and Technology Policy. With data volumes growing exponentially from a variety of sources such as computers running large-scale models, scientific instruments including telescopes and particle accelerators, and even online retail transactions, a key challenge is to better manage and utilize the data. The Big Data Research and Development Initiative, launched by the White House at a 29 March briefing, initially includes six federal departments and agencies providing more than $200 million in new commitments to improve tools and techniques for better accessing, organizing, and using data for scientific advances. The agencies and departments include the National Science Foundation (NSF), Department of Energy, U.S. Geological Survey (USGS), National Institutes of Health (NIH), Department of Defense, and Defense Advanced Research Projects Agency.

  13. Facilitymetrics for Big Ocean Science: Towards Improved Measurement of Scientific Impact

    NASA Astrophysics Data System (ADS)

    Juniper, K.; Owens, D.; Moran, K.; Pirenne, B.; Hallonsten, O.; Matthews, K.

    2016-12-01

    Cabled ocean observatories are examples of "Big Science" facilities requiring significant public investments for installation and ongoing maintenance. Large observatory networks in Canada and the United States, for example, have been established after extensive up-front planning and hundreds of millions of dollars in start-up costs. As such, they are analogous to particle accelerators and astronomical observatories, which may often be required to compete for public funding in an environment of ever-tightening national science budget allocations. Additionally, the globalization of Big Science compels these facilities to respond to increasing demands for demonstrable productivity, excellence and competitiveness. How should public expenditures on "Big Science" facilities be evaluated and justified in terms of benefits to the countries that invest in them? Published literature counts are one quantitative measure often highlighted in the annual reports of large science facilities. But, as recent research has demonstrated, publication counts can lead to distorted characterizations of scientific impact, inviting evaluators to calculate scientific outputs in terms of costs per publication—a ratio that can be simplistically misconstrued to conclude Big Science is wildly expensive. Other commonly promoted measurements of Big Science facilities include technical reliability (a.k.a. uptime), provision of training opportunities for Highly Qualified Personnel, generation of commercialization opportunities, and so forth. "Facilitymetrics" is a new empirical focus for scientometrical studies, which has been applied to the evaluation and comparison of synchrotron facilities. This paper extends that quantitative and qualitative examination to a broader inter-disciplinary comparison of Big Science facilities in the ocean science realm to established facilities in the fields of astronomy and particle physics.

  14. Facilitymetrics for Big Ocean Science: Towards Improved Measurement of Scientific Impact

    NASA Astrophysics Data System (ADS)

    Juniper, K.; Owens, D.; Moran, K.; Pirenne, B.; Hallonsten, O.; Matthews, K.

    2016-02-01

    Cabled ocean observatories are examples of "Big Science" facilities requiring significant public investments for installation and ongoing maintenance. Large observatory networks in Canada and the United States, for example, have been established after extensive up-front planning and hundreds of millions of dollars in start-up costs. As such, they are analogous to particle accelerators and astronomical observatories, which may often be required to compete for public funding in an environment of ever-tightening national science budget allocations. Additionally, the globalization of Big Science compels these facilities to respond to increasing demands for demonstrable productivity, excellence and competitiveness. How should public expenditures on "Big Science" facilities be evaluated and justified in terms of benefits to the countries that invest in them? Published literature counts are one quantitative measure often highlighted in the annual reports of large science facilities. But, as recent research has demonstrated, publication counts can lead to distorted characterizations of scientific impact, inviting evaluators to calculate scientific outputs in terms of costs per publication—a ratio that can be simplistically misconstrued to conclude Big Science is wildly expensive. Other commonly promoted measurements of Big Science facilities include technical reliability (a.k.a. uptime), provision of training opportunities for Highly Qualified Personnel, generation of commercialization opportunities, and so forth. "Facilitymetrics" is a new empirical focus for scientometrical studies, which has been applied to the evaluation and comparison of synchrotron facilities. This paper extends that quantitative and qualitative examination to a broader inter-disciplinary comparison of Big Science facilities in the ocean science realm to established facilities in the fields of astronomy and particle physics.

  15. Urgent Call for Nursing Big Data.

    PubMed

    Delaney, Connie W

    2016-01-01

    The purpose of this panel is to expand internationally a National Action Plan for sharable and comparable nursing data for quality improvement and big data science. There is an urgent need to ensure that nursing has sharable and comparable data for quality improvement and big data science. A national collaborative, Nursing Knowledge and Big Data Science, includes multi-stakeholder groups focused on a National Action Plan toward implementing and using sharable and comparable nursing big data. Panelists will share accomplishments and future plans with an eye toward international collaboration. This presentation is suitable for any audience attending the NI2016 conference.

  16. Big data need big theory too

    PubMed Central

    Dougherty, Edward R.; Highfield, Roger R.

    2016-01-01

    The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their ‘depth’ and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote ‘blind’ big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare. This article is part of the themed issue ‘Multiscale modelling at the physics–chemistry–biology interface’. PMID:27698035

  17. Big data need big theory too.

    PubMed

    Coveney, Peter V; Dougherty, Edward R; Highfield, Roger R

    2016-11-13

    The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their 'depth' and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote 'blind' big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2015 The Authors.

  18. Answering the big questions in neuroscience: DoD's experimental research wing takes on massive, high-risk projects.

    PubMed

    Mertz, Leslie

    2012-01-01

    When the Defense Advanced Research Projects Agency (DARPA) asks research questions, it goes big. This is, after all, the same agency that put together teams of scientists and engineers to find a way to connect the world's computers and, in doing so, developed the precursor to the Internet. DARPA, the experimental research wing of the U.S. Department of Defense, funds the types of research queries that scientists and engineers dream of tackling. Unlike a traditional granting agency that conservatively metes out its funding, and only to projects with a good chance of success, DARPA puts its money on massive, multi-institutional projects that have no guarantees but enormous potential. In the 1990s, DARPA began its biological and medical science research to improve the safety, health, and well-being of military personnel, according to DARPA program manager and Army Colonel Geoffrey Ling, Ph.D., M.D. More recently, DARPA has entered the realm of neuroscience and neurotechnology. Its focus with these projects is on its prime customer, the U.S. Department of Defense, but Ling acknowledged that technologies developed in its programs "certainly have potential to cascade into civilian uses."

  19. The Disappearing Fourth Wall: John Marburger, Science Policy, and the SSC

    NASA Astrophysics Data System (ADS)

    Crease, Robert

    2015-04-01

    John H. Marburger (1941-2011) was a skilled science administrator who had a fresh and unique approach to science policy and science leadership. His posthumously published book Science Policy up Close contains recollections of key science policy episodes in which he participated or observed closely. One was the administration of the Superconducting Super Collider (SSC); Marburger was Chairman of the Universities Research Association, the group charged with managing the SSC, from 1988 to 1994. Many accounts of the SSC saga attribute its demise to a combination of transitory factors: poor management, rising cost estimates, the collapse of the Soviet Union and thus of the Cold War threat, complaints by ``small science'' that the SSC's ``big science'' was consuming their budget, Congress's desire to cut spending, unwarranted contract regulations imposed by the Department of Energy (DOE) in response to environmental lapses at nuclear weapons laboratories, and so forth. Marburger tells a subtler story whose implications for science policy are more significant and far-reaching. The story involves changes in the attitude of the government towards large scientific projects that reach back to management reforms introduced by the administrations of Presidents Johnson, Nixon, and Carter in the 1960s and 1970s. This experience impressed Marburger with the inevitability of public oversight of large scientific projects, and with the need for planners of such projects to establish and make public a cost and schedule tracking system that would model the project's progress and expenditures.

  20. A Big Data Guide to Understanding Climate Change: The Case for Theory-Guided Data Science.

    PubMed

    Faghmous, James H; Kumar, Vipin

    2014-09-01

    Global climate change and its impact on human life has become one of our era's greatest challenges. Despite the urgency, data science has had little impact on furthering our understanding of our planet in spite of the abundance of climate data. This stands in stark contrast to other fields such as advertising or electronic commerce where big data has been a great success story. This discrepancy stems from the complex nature of climate data as well as the scientific questions climate science brings forth. This article introduces a data science audience to the challenges and opportunities to mine large climate datasets, with an emphasis on the nuanced difference between mining climate data and traditional big data approaches. We focus on data, methods, and application challenges that must be addressed in order for big data to fulfill their promise with regard to climate science applications. More importantly, we highlight research showing that solely relying on traditional big data techniques results in dubious findings, and we instead propose a theory-guided data science paradigm that uses scientific theory to constrain both the big data techniques as well as the results-interpretation process to extract accurate insight from large climate data.
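
    The theory-guided paradigm the abstract advocates can be sketched minimally (pure Python, fabricated data, not the authors' method): a physical constraint, here that a non-negative quantity such as precipitation must never be predicted below zero, is added as a penalty to an otherwise purely data-driven fit, steering the search away from physically impossible models.

```python
def fit_line(xs, ys, lam, check_x):
    """Grid-search a line minimizing MSE + lam * mean negative-prediction penalty.

    lam = 0 reproduces a purely data-driven fit; lam > 0 adds the
    theory-guided term penalizing physically impossible (negative) predictions.
    """
    best, best_loss = None, float("inf")
    for si in range(-100, 201):        # slope in [-1.00, 2.00], step 0.01
        for ii in range(-100, 101):    # intercept in [-1.00, 1.00], step 0.01
            s, c = si / 100, ii / 100
            mse = sum((s * x + c - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
            viol = sum(max(0.0, -(s * x + c)) for x in check_x) / len(check_x)
            loss = mse + lam * viol
            if loss < best_loss:
                best, best_loss = (s, c), loss
    return best

# Fabricated non-negative observations (e.g. precipitation vs. some driver).
xs, ys = [0, 1, 2, 3], [0.1, 0.0, 1.0, 2.0]
check_x = [i / 2 for i in range(7)]    # points where predictions are checked

s0, c0 = fit_line(xs, ys, lam=0.0, check_x=check_x)    # data-only fit
s1, c1 = fit_line(xs, ys, lam=100.0, check_x=check_x)  # theory-guided fit

min_pred_plain = min(s0 * x + c0 for x in check_x)   # dips below zero
min_pred_guided = min(s1 * x + c1 for x in check_x)  # respects the constraint
```

The data-only fit happily predicts negative precipitation near x = 0; the constrained fit sacrifices a little training error to stay physically plausible, which is the essence of constraining "big data techniques" with scientific theory.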

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKinney, M.J.; Jenkins, S.

    Project JEM (Jarvis Enhancement of Males) is a pre-college program directed toward stimulating disadvantaged, talented African American males in grades four, five, and six to attend college and major in mathematics, science, computer science, or related technical areas needed by the US Department of Energy. Twenty young African American male students were recruited from Gladewater Independent School District (ISD), Longview ISD, Hawkins ISD, Tyler ISD, Winona ISD and Big Sandy ISD. Students enrolled in the program range from ages 10 to 13 and are in grades four, five and six. Student participants in the 1997 Project JEM Program attended Saturday Academy sessions and a four-week intensive, summer residential program. The information here provides a synopsis of the activities that were conducted through each program component.

  2. Earth science big data at users' fingertips: the EarthServer Science Gateway Mobile

    NASA Astrophysics Data System (ADS)

    Barbera, Roberto; Bruno, Riccardo; Calanducci, Antonio; Fargetta, Marco; Pappalardo, Marco; Rundo, Francesco

    2014-05-01

    The EarthServer project (www.earthserver.eu), funded by the European Commission under its Seventh Framework Program, aims at establishing open access and ad-hoc analytics on extreme-size Earth Science data, based on and extending leading-edge Array Database technology. The core idea is to use database query languages as client/server interface to achieve barrier-free "mix & match" access to multi-source, any-size, multi-dimensional space-time data -- in short: "Big Earth Data Analytics" - based on the open standards of the Open Geospatial Consortium Web Coverage Processing Service (OGC WCPS) and the W3C XQuery. EarthServer combines both, thereby achieving a tight data/metadata integration. Further, the rasdaman Array Database System (www.rasdaman.com) is extended with further space-time coverage data types. On server side, highly effective optimizations - such as parallel and distributed query processing - ensure scalability to Exabyte volumes. In this contribution we will report on the EarthServer Science Gateway Mobile, an app for both iOS and Android-based devices that allows users to seamlessly access some of the EarthServer applications using SAML-based federated authentication and fine-grained authorisation mechanisms.

  3. The Lessons Oscar Taught Us: Data Science and Media & Entertainment.

    PubMed

    Gold, Michael; McClarren, Ryan; Gaughan, Conor

    2013-06-01

    Farsite Group, a data science firm based in Columbus, Ohio, launched a highly visible campaign in early 2013 to use predictive analytics to forecast the winners of the 85th Annual Academy Awards. The initiative was fun and exciting for the millions of Oscar viewers, but it also illustrated how data science could be further deployed in the media and entertainment industries. This article explores the current and potential use cases for big data and predictive analytics in those industries. It further discusses how the Farsite Forecast was built, as well as how the model was iterated, how the projections performed, and what lessons were learned in the process.

  4. Energy and scientific communication

    NASA Astrophysics Data System (ADS)

    De Sanctis, E.

    2013-06-01

    Energy communication is a paradigmatic case of scientific communication. It is particularly important today, when the world is confronted with a number of immediate, urgent problems. Science communication has become a real duty and a big challenge for scientists. It serves to create and foster a climate of reciprocal knowledge and trust between science and society, and to establish a good level of interest and enthusiasm for research. For an effective communication it is important to establish an open dialogue with the audience, and a close collaboration among scientists and science communicators. An international collaboration in energy communication is appropriate to better support international and interdisciplinary research and projects.

  5. From big data to deep insight in developmental science.

    PubMed

    Gilmore, Rick O

    2016-01-01

    The use of the term 'big data' has grown substantially over the past several decades and is now widespread. In this review, I ask what makes data 'big' and what implications the size, density, or complexity of datasets have for the science of human development. A survey of existing datasets illustrates how existing large, complex, multilevel, and multimeasure data can reveal the complexities of developmental processes. At the same time, significant technical, policy, ethics, transparency, cultural, and conceptual issues associated with the use of big data must be addressed. Most big developmental science data are currently hard to find and cumbersome to access, the field lacks a culture of data sharing, and there is no consensus about who owns or should control research data. But, these barriers are dissolving. Developmental researchers are finding new ways to collect, manage, store, share, and enable others to reuse data. This promises a future in which big data can lead to deeper insights about some of the most profound questions in behavioral science. © 2016 The Authors. WIREs Cognitive Science published by Wiley Periodicals, Inc.

  6. Engaging High School Youth in Paleobiology Research

    NASA Astrophysics Data System (ADS)

    Saltzman, J.; Heim, N. A.; Payne, J.

    2013-12-01

    The chasm between classroom science and scientific research is bridged by the History of Life Internships at Stanford University. Nineteen interns recorded more than 25,500 linear body size measurements of fossil echinoderms and ostracods spanning more than 11,000 species. The interns were selected from a large pool of applicants, and well-established relationships with local teachers at schools serving underrepresented groups in STEM fields were leveraged to ensure a diverse mix of applicants. The lead investigator has been hosting interns in his research group for seven years, in the process measuring over 36,000 foraminifera species as well as representatives from many other fossil groups. We (faculty member, researcher, and educators) all find engaging youth in novel research projects very valuable. We are able to create an environment where high school students can make genuine contributions to important and unsolved scientific problems, not only through data collection but also through original data analysis. Science often involves long intervals of data collection, which can be tedious, and big questions often require big datasets. Body size evolution is ideally suited to this type of program, as the data collection process requires substantial person-power but not deep technical expertise or expensive equipment. Students are therefore able to engage in the full scientific process, posing previously unanswered questions regarding the evolution of animal size, compiling relevant data, and then analyzing the data in order to test their hypotheses. Some of the projects students developed were truly creative and fun to see come together. Communicating is a critical step in science yet is often lost in the science classroom. The interns submitted seven abstracts to this meeting for the youth session entitled Bright STaRS based on their research projects.
To round out the experience, students also learn about the broad field of earth sciences through traditional lectures, active learning exercises, discussions of primary and secondary literature, guest speakers, lab tours and field trips (including to the UC Museum of Paleontology, Hayward fault, fossiliferous Pliocene outcrops, and tidepools). We will use a survey to assess the impact of the History of Life Internships on participant attitudes toward science and careers in science.

  7. Perspectives on Policy and the Value of Nursing Science in a Big Data Era.

    PubMed

    Gephart, Sheila M; Davis, Mary; Shea, Kimberly

    2018-01-01

    As data volume explodes, nurse scientists grapple with ways to adapt to the big data movement without jeopardizing the epistemic values and theoretical focus that sustain the authority and unity of nursing's body of knowledge. In this article, the authors describe big data and emphasize ways that nursing science brings value to its study. Collective nursing voices that call for more nursing engagement in the big data era are answered with ways to adapt and integrate theoretical and domain expertise from nursing into data science.

  8. Big Data Analytics for Disaster Preparedness and Response of Mobile Communication Infrastructure during Natural Hazards

    NASA Astrophysics Data System (ADS)

    Zhong, L.; Takano, K.; Ji, Y.; Yamada, S.

    2015-12-01

    The disruption of telecommunications is one of the most critical failures during natural hazards. With the rapid expansion of mobile communications, mobile communication infrastructure plays a fundamental role in disaster response and recovery activities; its disruption leads to loss of life and property through information delays and errors. Disaster preparedness and response for the mobile communication infrastructure itself is therefore essential. In past disasters, the disruption of mobile communication networks has usually been caused by network congestion and subsequent long-term power outages. Reducing this disruption requires knowledge of communication demands during disasters, and big data analytics provides a promising way to predict those demands by analyzing the large volume of operational data from mobile users in a large-scale mobile network. Under the US-Japan collaborative project 'Big Data and Disaster Research (BDD)', supported by the Japan Science and Technology Agency (JST) and the National Science Foundation (NSF), we investigate the application of big data techniques to the disaster preparedness and response of mobile communication infrastructure. Specifically, we exploit the large volume of operational information about mobile users to predict communication needs at different times and locations. By incorporating other data, such as the shaking distribution of an estimated major earthquake and the power outage map, we can predict the locations of stranded people whose safety cannot be confirmed, or who cannot call for help, because of network disruption. This result could further help network operators assess the vulnerability of their infrastructure and make suitable decisions for disaster preparedness and response. 
    In this presentation, we introduce results obtained from big data analytics of statistical information about mobile users and discuss their implications.
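
    The kind of demand prediction described above can be caricatured in a few lines (pure Python, fabricated log records and field names, not the project's actual data model): per-cell, per-hour connection counts from historical logs are averaged into a baseline profile, and cells where an assumed disaster-time surge would exceed capacity are flagged for preparedness planning.

```python
from collections import defaultdict

# Fabricated operational log: (cell_id, hour_of_day, connection_count) over two days.
logs = [
    ("cell-A", 8, 120), ("cell-A", 8, 140), ("cell-A", 18, 300), ("cell-A", 18, 340),
    ("cell-B", 8, 60),  ("cell-B", 8, 80),  ("cell-B", 18, 90),  ("cell-B", 18, 110),
]

# Baseline demand: mean count per (cell, hour) slot.
totals, counts = defaultdict(int), defaultdict(int)
for cell, hour, n in logs:
    totals[(cell, hour)] += n
    counts[(cell, hour)] += 1
baseline = {k: totals[k] / counts[k] for k in totals}

# Flag slots where a disaster-time surge (assumed 3x baseline) exceeds capacity.
capacity = {"cell-A": 500, "cell-B": 500}
at_risk = sorted(k for k, v in baseline.items() if 3 * v > capacity[k[0]])
```

A real system would replace the fixed 3x surge factor with demand models conditioned on shaking intensity and power-outage maps, as the abstract describes, but the aggregation-then-threshold shape is the same.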

  9. Computing Platforms for Big Biological Data Analytics: Perspectives and Challenges.

    PubMed

    Yin, Zekun; Lan, Haidong; Tan, Guangming; Lu, Mian; Vasilakos, Athanasios V; Liu, Weiguo

    2017-01-01

    The last decade has witnessed an explosion in the amount of available biological sequence data, due to the rapid progress of high-throughput sequencing projects. However, the amount of biological data is becoming so great that traditional data analysis platforms and methods can no longer meet the need to rapidly perform data analysis tasks in life sciences. As a result, both biologists and computer scientists are facing the challenge of gaining a profound insight into the deepest biological functions from big biological data. This in turn requires massive computational resources. Therefore, high performance computing (HPC) platforms are highly needed as well as efficient and scalable algorithms that can take advantage of these platforms. In this paper, we survey the state-of-the-art HPC platforms for big biological data analytics. We first list the characteristics of big biological data and popular computing platforms. Then we provide a taxonomy of different biological data analysis applications and a survey of the way they have been mapped onto various computing platforms. After that, we present a case study to compare the efficiency of different computing platforms for handling the classical biological sequence alignment problem. Finally, we discuss the open issues in big biological data analytics.
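
    The "classical biological sequence alignment problem" used as the paper's case study is, at its core, a dynamic program. A minimal Needleman-Wunsch global alignment scorer (pure Python, standard textbook scoring, not the paper's code) looks like this:

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score via dynamic programming.

    dp[i][j] holds the best score for aligning a[:i] with b[:j].
    """
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * gap          # a[:i] aligned entirely against gaps
    for j in range(1, n + 1):
        dp[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag,
                           dp[i - 1][j] + gap,   # gap in b
                           dp[i][j - 1] + gap)   # gap in a
    return dp[m][n]

score = nw_score("GCATGCU", "GATTACA")  # classic textbook example pair
```

The O(mn) table fill is exactly the kernel that the surveyed HPC platforms accelerate, via anti-diagonal wavefronts, SIMD, or GPUs, once sequences number in the millions.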

  10. Nursing Knowledge: Big Data Science-Implications for Nurse Leaders.

    PubMed

    Westra, Bonnie L; Clancy, Thomas R; Sensmeier, Joyce; Warren, Judith J; Weaver, Charlotte; Delaney, Connie W

    2015-01-01

    The integration of Big Data from electronic health records and other information systems within and across health care enterprises provides an opportunity to develop actionable predictive models that can increase the confidence of nursing leaders' decisions to improve patient outcomes and safety and control costs. As health care shifts to the community, mobile health applications add to the Big Data available. There is an evolving national action plan that includes nursing data in Big Data science, spearheaded by the University of Minnesota School of Nursing. For the past 3 years, diverse stakeholders from practice, industry, education, research, and professional organizations have collaborated through the "Nursing Knowledge: Big Data Science" conferences to create and act on recommendations for inclusion of nursing data, integrated with patient-generated, interprofessional, and contextual data. It is critical for nursing leaders to understand the value of Big Data science and the ways to standardize data and workflow processes so that cutting-edge analytic methods can be used to control costs and improve patient quality and safety.

  11. The Great War as a Crucial Point in the History of Russian Science and Technology.

    PubMed

    Saprykin, Dmitry L

    2016-01-01

    The paper is devoted to one of the most important and, at the same time, relatively unexplored phases in the history of Russian science and technology. The Great War coincided with the beginning of a heyday in science, engineering education, and technology in Russia. It was precisely the time in which Russia's era of "Big Science" was emerging. Many Russian and Soviet technical projects and scientific schools were rooted in the time of the Great War. The "engineerization" of science and a "physical-technical" way of thinking had already begun before the war. But it was precisely the war which encouraged a large proportion of the Russian academic community to take part in industrial projects. Academics also played a significant role in developing concepts and implementing strategic plans during the Great War. This article also discusses how the organization of science and the academic community was transformed during, and after, the Great War. And it looks at the impact that war had on Russia's participation in the international scientific community.

  12. The Promise and Potential Perils of Big Data for Advancing Symptom Management Research in Populations at Risk for Health Disparities.

    PubMed

    Bakken, Suzanne; Reame, Nancy

    2016-01-01

    Symptom management research is a core area of nursing science and one of the priorities for the National Institute of Nursing Research, which specifically focuses on understanding the biological and behavioral aspects of symptoms such as pain and fatigue, with the goal of developing new knowledge and new strategies for improving patient health and quality of life. The types and volume of data related to the symptom experience, symptom management strategies, and outcomes are increasingly accessible for research. Traditional data streams are now complemented by consumer-generated (i.e., quantified self) and "omic" data streams. Thus, the data available for symptom science can be considered big data. The purposes of this chapter are to (a) briefly summarize the current drivers for the use of big data in research; (b) describe the promise of big data and associated data science methods for advancing symptom management research; (c) explicate the potential perils of big data and data science from the perspective of the ethical principles of autonomy, beneficence, and justice; and (d) illustrate strategies for balancing the promise and the perils of big data through a case study of a community at high risk for health disparities. Big data and associated data science methods offer the promise of multidimensional data sources and new methods to address significant research gaps in symptom management. If nurse scientists wish to apply big data and data science methods to advance symptom management research and promote health equity, they must carefully consider both the promise and perils.

  13. Earth Science Data Fusion with Event Building Approach

    NASA Technical Reports Server (NTRS)

    Lukashin, C.; Bartle, Ar.; Callaway, E.; Gyurjyan, V.; Mancilla, S.; Oyarzun, R.; Vakhnin, A.

    2015-01-01

    Objectives of the NASA Information And Data System (NAIADS) project are to develop a prototype of a conceptually new middleware framework to modernize and significantly improve efficiency of the Earth Science data fusion, big data processing and analytics. The key components of the NAIADS include: Service Oriented Architecture (SOA) multi-lingual framework, multi-sensor coincident data Predictor, fast into-memory data Staging, multi-sensor data-Event Builder, complete data-Event streaming (a work flow with minimized IO), on-line data processing control and analytics services. The NAIADS project is leveraging CLARA framework, developed in Jefferson Lab, and integrated with the ZeroMQ messaging library. The science services are prototyped and incorporated into the system. Merging the SCIAMACHY Level-1 observations and MODIS/Terra Level-2 (Clouds and Aerosols) data products, and ECMWF reanalysis will be used for NAIADS demonstration and performance tests in compute Cloud and Cluster environments.

  14. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; Buncic, P.; De, K.; Jha, S.; Maeno, T.; Mount, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Porter, R. J.; Read, K. F.; Vaniachine, A.; Wells, J. C.; Wenaus, T.

    2015-05-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, O(10^3) users, and ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled ‘Next Generation Workload Management and Analysis System for Big Data’ (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system.
We will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.

  15. Big Data in Plant Science: Resources and Data Mining Tools for Plant Genomics and Proteomics.

    PubMed

    Popescu, George V; Noutsos, Christos; Popescu, Sorina C

    2016-01-01

    In modern plant biology, progress is increasingly defined by the scientists' ability to gather and analyze data sets of high volume and complexity, otherwise known as "big data". Arguably, the largest increase in the volume of plant data sets over the last decade is a consequence of the application of the next-generation sequencing and mass-spectrometry technologies to the study of experimental model and crop plants. The increase in quantity and complexity of biological data brings challenges, mostly associated with data acquisition, processing, and sharing within the scientific community. Nonetheless, big data in plant science create unique opportunities in advancing our understanding of complex biological processes at an unprecedented level of accuracy, and establish a base for plant systems biology. In this chapter, we summarize the major drivers of big data in plant science and big data initiatives in life sciences with a focus on the scope and impact of iPlant, a representative cyberinfrastructure platform for plant science.

  16. From Big Data to Knowledge in the Social Sciences.

    PubMed

    Hesse, Bradford W; Moser, Richard P; Riley, William T

    2015-05-01

    One of the challenges associated with high-volume, diverse datasets is whether synthesis of open data streams can translate into actionable knowledge. Recognizing that challenge and other issues related to these types of data, the National Institutes of Health developed the Big Data to Knowledge or BD2K initiative. The concept of translating "big data to knowledge" is important to the social and behavioral sciences in several respects. First, a general shift to data-intensive science will exert an influence on all scientific disciplines, but particularly on the behavioral and social sciences given the wealth of behavior and related constructs captured by big data sources. Second, science is itself a social enterprise; by applying principles from the social sciences to the conduct of research, it should be possible to ameliorate some of the systemic problems that plague the scientific enterprise in the age of big data. We explore the feasibility of recalibrating the basic mechanisms of the scientific enterprise so that they are more transparent and cumulative; more integrative and cohesive; and more rapid, relevant, and responsive.

  17. From Big Data to Knowledge in the Social Sciences

    PubMed Central

    Hesse, Bradford W.; Moser, Richard P.; Riley, William T.

    2015-01-01

    One of the challenges associated with high-volume, diverse datasets is whether synthesis of open data streams can translate into actionable knowledge. Recognizing that challenge and other issues related to these types of data, the National Institutes of Health developed the Big Data to Knowledge or BD2K initiative. The concept of translating “big data to knowledge” is important to the social and behavioral sciences in several respects. First, a general shift to data-intensive science will exert an influence on all scientific disciplines, but particularly on the behavioral and social sciences given the wealth of behavior and related constructs captured by big data sources. Second, science is itself a social enterprise; by applying principles from the social sciences to the conduct of research, it should be possible to ameliorate some of the systemic problems that plague the scientific enterprise in the age of big data. We explore the feasibility of recalibrating the basic mechanisms of the scientific enterprise so that they are more transparent and cumulative; more integrative and cohesive; and more rapid, relevant, and responsive. PMID:26294799

  18. The 'end of AIDS' project: Mobilising evidence, bureaucracy, and big data for a final biomedical triumph over AIDS.

    PubMed

    Leclerc-Madlala, Suzanne; Broomhall, Lorie; Fieno, John

    2017-12-04

    Efforts are currently underway by major orchestrators and funders of the global AIDS response to realise the vision of achieving an end to AIDS by 2030. Unlike previous efforts to provide policy guidance or to encourage 'best practice' approaches for combatting AIDS, the end of AIDS project involves the promotion of a clear set of targets, tools, and interventions for a final biomedical solution to the epidemic. In this paper, we examine the bureaucratic procedures of one major AIDS funder that helped to foster a common vision and mission amongst a global AIDS community with widely divergent views on how best to address the epidemic. We focus on the methods, movements, and materials that are central to the project of ending AIDS, including those related to biomedical forms of evidence and big data science. We argue that these approaches have limitations and social scientists need to pay close attention to the end of AIDS project, particularly in contexts where clinical interventions might transform clinical outcomes, but where the social, economic, and cultural determinants of HIV and AIDS remain largely intact and increasingly obscured.

  19. A Big Data Guide to Understanding Climate Change: The Case for Theory-Guided Data Science

    PubMed Central

    Kumar, Vipin

    2014-01-01

    Global climate change and its impact on human life has become one of our era's greatest challenges. Despite the urgency, data science has had little impact on furthering our understanding of our planet in spite of the abundance of climate data. This stands in stark contrast to other fields such as advertising or electronic commerce where big data has been a great success story. This discrepancy stems from the complex nature of climate data as well as the scientific questions climate science brings forth. This article introduces a data science audience to the challenges and opportunities to mine large climate datasets, with an emphasis on the nuanced difference between mining climate data and traditional big data approaches. We focus on data, methods, and application challenges that must be addressed in order for big data to fulfill their promise with regard to climate science applications. More importantly, we highlight research showing that solely relying on traditional big data techniques results in dubious findings, and we instead propose a theory-guided data science paradigm that uses scientific theory to constrain both the big data techniques as well as the results-interpretation process to extract accurate insight from large climate data. PMID:25276499

  20. Presenting the 'Big Ideas' of Science: Earth Science Examples.

    ERIC Educational Resources Information Center

    King, Chris

    2001-01-01

    Details an 'explanatory Earth story' on plate tectonics to show how such a 'story' can be developed in an earth science context. Presents five other stories in outline form. Explains the use of these stories as vehicles to present the big ideas of science. (DDR)

  1. Big data science: A literature review of nursing research exemplars.

    PubMed

    Westra, Bonnie L; Sylvia, Martha; Weinfurter, Elizabeth F; Pruinelli, Lisiane; Park, Jung In; Dodd, Dianna; Keenan, Gail M; Senk, Patricia; Richesson, Rachel L; Baukner, Vicki; Cruz, Christopher; Gao, Grace; Whittenburg, Luann; Delaney, Connie W

    Big data and cutting-edge analytic methods in nursing research challenge nurse scientists to extend the data sources and analytic methods used for discovering and translating knowledge. The purpose of this study was to identify, analyze, and synthesize exemplars of big data nursing research applied to practice and disseminated in key nursing informatics, general biomedical informatics, and nursing research journals. The design was a literature review of studies published between 2009 and 2015. There were 650 journal articles identified in 17 key nursing informatics, general biomedical informatics, and nursing research journals in the Web of Science database. After screening for inclusion and exclusion criteria, 17 studies published in 18 articles were identified as big data nursing research applied to practice. Nurses clearly are beginning to conduct big data research applied to practice. These studies represent multiple data sources and settings. Although numerous analytic methods were used, the fundamental issue of defining which types of analyses are consistent with big data analytic methods remains. There are needs to increase the visibility of big data and data science research conducted by nurse scientists, to further examine the use of state-of-the-science data analytics, and to continue expanding the availability and use of a variety of scientific, governmental, and industry data resources. A major implication of this literature review is the question of whether nursing faculty and PhD programs are preparing future scientists for big data and data science. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Space research - At a crossroads

    NASA Technical Reports Server (NTRS)

    Mcdonald, Frank B.

    1987-01-01

    Efforts which must be expended if U.S. space research is to regain vitality in the next few years are discussed. Small-scale programs are the cornerstone for big science projects, giving both researchers and students a chance to practice the development of space missions and hardware and identify promising goals for larger projects. Small projects can be carried aloft by balloons, sounding rockets, the Shuttle and ELVs. It is recommended that NASA continue the development of remote sensing systems, and join with other government agencies to fund space-based materials science, space biology and medical research. Increased international cooperation in space projects is necessary for affording moderate to large scale missions, for political reasons, and to maximize available space resources. Finally, the establishment and funding of long-range goals in space, particularly the development of the infrastructure and technologies for the exploration and colonization of the planets, must be viewed as the normal outgrowth of the capabilities being developed for LEO operations.

  3. 'Big data' in pharmaceutical science: challenges and opportunities.

    PubMed

    Dossetter, Al G; Ecker, Gerhard; Laverty, Hugh; Overington, John

    2014-05-01

    Future Medicinal Chemistry invited a selection of experts to express their views on the current impact of big data in drug discovery and design, as well as speculate on future developments in the field. The topics discussed include the challenges of implementing big data technologies, maintaining the quality and privacy of data sets, and how the industry will need to adapt to welcome the big data era. Their enlightening responses provide a snapshot of the many and varied contributions being made by big data to the advancement of pharmaceutical science.

  4. Big Data Analytics Methodology in the Financial Industry

    ERIC Educational Resources Information Center

    Lawler, James; Joseph, Anthony

    2017-01-01

    Firms in industry continue to be attracted by the benefits of Big Data Analytics. The benefits of Big Data Analytics projects may not be as evident as frequently indicated in the literature. The authors of the study evaluate factors in a customized methodology that may increase the benefits of Big Data Analytics projects. Evaluating firms in the…

  5. Machine Learning in the Big Data Era: Are We There Yet?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sukumar, Sreenivas Rangan

    In this paper, we discuss the machine learning challenges of the Big Data era. We observe that recent innovations in being able to collect, access, organize, integrate, and query massive amounts of data from a wide variety of data sources have brought statistical machine learning under more scrutiny and evaluation for gleaning insights from the data than ever before. In that context, we pose and debate the question: are machine learning algorithms scaling with the ability to store and compute? If yes, how? If not, why not? We survey recent developments in the state of the art to discuss emerging and outstanding challenges in the design and implementation of machine learning algorithms at scale. We leverage experience from real-world Big Data knowledge discovery projects across domains of national security and healthcare to suggest our efforts be focused along the following axes: (i) the data science challenge: designing scalable and flexible computational architectures for machine learning (beyond just data retrieval); (ii) the science of data challenge: the ability to understand characteristics of data before applying machine learning algorithms and tools; and (iii) the scalable predictive functions challenge: the ability to construct, learn, and infer with increasing sample size, dimensionality, and categories of labels. We conclude with a discussion of opportunities and directions for future research.

  6. A brief philosophical encounter with science and medicine.

    PubMed

    Karbasizadeh, Amir Ehsan

    2013-08-01

    We show a lot of respect for science today. To back up our claims, we tend to appeal to scientific methods. It seems that we all agree that these methods are effective for gaining the truth. We can ask why science has its special status as a supplier of knowledge about our external world and our bodies. Of course, one should not always trust what scientists say. Nonetheless, epistemological justification of scientific claims is really a big project for philosophers of science. Philosophers of science are interested in knowing how science proves what it does claim and why it gives us good reasons to take these claims seriously. These questions are epistemological questions. Epistemology is a branch of philosophy which deals with knowledge claims and justification. Besides epistemological questions, metaphysical and ethical issues in science are worthy of philosophical scrutiny. This paper gives a short survey of these intellectually demanding issues.

  7. Big Data in Science and Healthcare: A Review of Recent Literature and Perspectives

    PubMed Central

    Miron-Shatz, T.; Lau, A. Y. S.; Paton, C.

    2014-01-01

    Summary Objectives As technology continues to evolve and rise in various industries, such as healthcare, science, education, and gaming, a sophisticated concept known as Big Data is surfacing. The concept of analytics aims to understand data. We set out to portray and discuss perspectives of the evolving use of Big Data in science and healthcare and to examine some of the opportunities and challenges. Methods A literature review was conducted to highlight the implications associated with the use of Big Data in scientific research and healthcare innovations, both on a large and small scale. Results Scientists and health-care providers may learn from one another when it comes to understanding the value of Big Data and analytics. Small data, derived by patients and consumers, also requires analytics to become actionable. Connectivism provides a framework for the use of Big Data and analytics in the areas of science and healthcare. This theory assists individuals to recognize and synthesize how human connections are driving the increase in data. Despite the volume and velocity of Big Data, it is truly about technology connecting humans and assisting them to construct knowledge in new ways. Concluding Thoughts The concept of Big Data and associated analytics are to be taken seriously when approaching the use of vast volumes of both structured and unstructured data in science and health-care. Future exploration of issues surrounding data privacy, confidentiality, and education are needed. A greater focus on data from social media, the quantified self-movement, and the application of analytics to “small data” would also be useful. PMID:25123717

  8. Big Data in Science and Healthcare: A Review of Recent Literature and Perspectives. Contribution of the IMIA Social Media Working Group.

    PubMed

    Hansen, M M; Miron-Shatz, T; Lau, A Y S; Paton, C

    2014-08-15

    As technology continues to evolve and rise in various industries, such as healthcare, science, education, and gaming, a sophisticated concept known as Big Data is surfacing. The concept of analytics aims to understand data. We set out to portray and discuss perspectives of the evolving use of Big Data in science and healthcare and to examine some of the opportunities and challenges. A literature review was conducted to highlight the implications associated with the use of Big Data in scientific research and healthcare innovations, both on a large and small scale. Scientists and health-care providers may learn from one another when it comes to understanding the value of Big Data and analytics. Small data, derived by patients and consumers, also requires analytics to become actionable. Connectivism provides a framework for the use of Big Data and analytics in the areas of science and healthcare. This theory assists individuals to recognize and synthesize how human connections are driving the increase in data. Despite the volume and velocity of Big Data, it is truly about technology connecting humans and assisting them to construct knowledge in new ways. Concluding Thoughts: The concept of Big Data and associated analytics are to be taken seriously when approaching the use of vast volumes of both structured and unstructured data in science and health-care. Future exploration of issues surrounding data privacy, confidentiality, and education are needed. A greater focus on data from social media, the quantified self-movement, and the application of analytics to "small data" would also be useful.

  9. The Ethics of Big Data and Nursing Science.

    PubMed

    Milton, Constance L

    2017-10-01

    Big data is a scientific, social, and technological trend referring to the process and size of datasets available for analysis. Ethical implications arise as healthcare disciplines, including nursing, struggle over questions of informed consent, privacy, ownership of data, and its possible use in epistemology. The author offers straight-thinking possibilities for the use of big data in nursing science.

  10. Under One Big Sky: Elementary Pre-Service Teachers Use Inquiry to Learn about the Moon, Construct Knowledge, and Teach Elementary Students around the World via the Internet

    ERIC Educational Resources Information Center

    Lee, Luann Christensen

    2011-01-01

    This study examined the content knowledge and pedagogical content knowledge (PCK) constructed by a group of 24 pre-service elementary teacher participants as they learned about the moon's phases, inquiry learning, and use of the Internet message boards as a teaching tool as a part of their science teaching methods course. The MOON Project (More…

  11. 1989–1990 AGU Congressional Fellow report

    NASA Astrophysics Data System (ADS)

    Frank, Barbara J.

    Describing the last 3 months on the Subcommittee on International Scientific Cooperation of the U.S. House of Representatives Committee on Science, Space, and Technology is no easy task. I have learned a great deal about many issues and about the workings of Congress; yet this knowledge has not been gained in a necessarily straightforward or logical manner. Although my status on the Subcommittee is that of a Fellow, in effect I am expected to function as a regular staff member. I immediately became involved in the preparation of two hearings, the first on science and technology initiatives for Poland and Hungary, and the second on the Human Genome Project. At these hearings, I learned firsthand about important aspects of science-related issues that concern Congress, namely: intellectual property rights; U.S. competitiveness in the science and technology arena with other countries, Japan in particular; and big science versus small science funding.

  12. Biosecurity in the age of Big Data: a conversation with the FBI

    PubMed Central

    Kozminski, Keith G.

    2015-01-01

    New scientific frontiers and emerging technologies within the life sciences pose many global challenges to society. Big Data is a premier example, especially with respect to individual, national, and international security. Here a Special Agent of the Federal Bureau of Investigation discusses the security implications of Big Data and the need for security in the life sciences. PMID:26543195

  13. The role of administrative data in the big data revolution in social science research.

    PubMed

    Connelly, Roxanne; Playford, Christopher J; Gayle, Vernon; Dibben, Chris

    2016-09-01

    The term big data is currently a buzzword in social science; however, its precise meaning is ambiguous. In this paper we focus on administrative data, which is a distinctive form of big data. Exciting new opportunities for social science research will be afforded by new administrative data resources, but these are currently underappreciated by the research community. The central aim of this paper is to discuss the challenges associated with administrative data. We emphasise that it is critical for researchers to carefully consider how administrative data has been produced. We conclude that administrative datasets have the potential to contribute to the development of high-quality and impactful social science research, and should not be overlooked in the emerging field of big data. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Big Data in Health: a Literature Review from the Year 2005.

    PubMed

    de la Torre Díez, Isabel; Cosgaya, Héctor Merino; Garcia-Zapirain, Begoña; López-Coronado, Miguel

    2016-09-01

    The information stored in healthcare systems has increased over the last ten years, leading it to be considered Big Data. There is a wealth of health information ready to be analysed. However, the sheer volume raises a challenge for traditional methods. The aim of this article is to conduct a cutting-edge study on Big Data in healthcare from 2005 to the present. This literature review will help researchers to know how Big Data has developed in the health industry and open up new avenues for research. Information searches have been made on various scientific databases such as Pubmed, Science Direct, Scopus and Web of Science for Big Data in healthcare. The search criteria were "Big Data" and "health" with a date range from 2005 to the present. A total of 9724 articles were found on the databases. 9515 articles were discarded as duplicates or for not having a title of interest to the study. 209 articles were read, with the resulting decision that 46 were useful for this study. 52.6% of the articles used were found in Science Direct, 23.7% in Pubmed, 22.1% through Scopus and the remaining 2.6% through the Web of Science. Big Data has undergone extremely high growth since 2011 and its use is becoming compulsory in developed nations and in an increasing number of developing nations. Big Data is a step forward and a cost reducer for public and private healthcare.

  15. Negotiating the Inclusion of Nanoscience Content and Technology in Science Curriculum: An Examination of Secondary Teachers' Thinking in a Professional Development Project

    NASA Astrophysics Data System (ADS)

    Wells, Jennifer Gayle

    The Next Generation Science Standards represent a significant challenge for K--12 school reform in the United States in the science, technology, engineering and mathematics (STEM) disciplines (NSTA, 2012). One important difference between the National Science Education Standards (NRC, 1996) and the Next Generation Science Standards (Achieve, 2013) is the more extensive inclusion of nanoscale science and technology. Teacher professional development (PD) is a key vehicle for implementing this STEM education reform effort (NRC, 2012; Smith, 2001). The context of this dissertation study is Project Nanoscience and Nanotechnology Outreach (NANO), a secondary-level professional development program that provides teachers with a summer workshop, academic-year coaching, and the opportunity to borrow a table-top Phenom scanning electron microscope and a research-grade optical microscope for use in their classrooms. This design-based descriptive case study examined the thinking of secondary teachers in the 2012 Project NANO cohort as they negotiated the inclusion of novel science concepts and technology into the secondary science curriculum. Teachers in the Project NANO 2012 summer workshop developed a two-week, inquiry-based unit of instruction drawing upon one or more of nine big ideas in nanoscale science and technology as defined by Stevens, Sutherland, and Krajcik (2011). This research examined teacher participants' metastrategic thinking (Zohar, 2006), which they used to inform their pedagogical content knowledge (Shulman, 1987), by focusing on the content knowledge teachers chose to frame their lessons, their rationales for those choices, and the teaching strategies they chose to employ in their Project NANO unit of instruction. The study documents teachers' various entry points on a learning progression as they negotiated the inclusion of nanoscale science and technology into the curriculum for the first time. Implications and recommendations for teacher professional development are offered.

  16. Partnering for science: proceedings of the USGS Workshop on Citizen Science

    USGS Publications Warehouse

    Hines, Megan; Benson, Abigail; Govoni, David; Masaki, Derek; Poore, Barbara; Simpson, Annie; Tessler, Steven

    2013-01-01

    What U.S. Geological Survey (USGS) programs use citizen science? How can projects be best designed while meeting policy requirements? What are the most effective volunteer recruitment methods? What data should be collected to ensure validation and how should data be stored? What standard protocols are most easily used by volunteers? Can data from multiple projects be integrated to support new research or existing science questions? To help answer these and other questions, the USGS Community of Data Integration (CDI) supported the development of the Citizen Science Working Group (CSWG) in August 2011 and funded the working group’s proposal to hold a USGS Citizen Science Workshop in fiscal year 2012. The stated goals for our workshop were to raise awareness of programs and projects in the USGS that incorporate citizen science, create a community of practice for the sharing of knowledge and experiences, provide a forum to discuss the challenges of—and opportunities for—incorporating citizen science into USGS projects, and educate and support scientists and managers whose projects may benefit from public participation in science. To meet these goals, the workshop brought together 50 attendees (see appendix A for participant details) representing the USGS, partners, and external citizen science practitioners from diverse backgrounds (including scientists, managers, project coordinators, and technical developers, for example) to discuss these topics at the Denver Federal Center in Colorado on September 11–12, 2012. Over two and a half days, attendees participated in four major plenary sessions (Citizen Science Policy and Challenges, Engaging the Public in Scientific Research, Data Collection and Management, and Technology and Tools) comprising 25 invited presentations, each followed by structured discussions designed to address both prepared and ad hoc "big questions." A number of important community support and infrastructure needs were identified from the sessions and discussions, and a subteam was formed to draft a strategic vision statement to guide and prioritize future USGS efforts to support the citizen science community. Attendees also brainstormed proposal ideas for the fiscal year 2013 CDI request for proposals: one possible venue to support the execution of the vision.

  17. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    DOE PAGES

    Klimentov, A.; Buncic, P.; De, K.; ...

    2015-05-22

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, and O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. Finally, we present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.

  18. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klimentov, A.; Buncic, P.; De, K.

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, and O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. Finally, we present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.

  19. Data science, learning, and applications to biomedical and health sciences.

    PubMed

    Adam, Nabil R; Wieder, Robert; Ghosh, Debopriya

    2017-01-01

    The last decade has seen an unprecedented increase in the volume and variety of electronic data related to research and development, health records, and patient self-tracking, collectively referred to as Big Data. Properly harnessed, Big Data can provide insights and drive discovery that will accelerate biomedical advances, improve patient outcomes, and reduce costs. However, the considerable potential of Big Data remains unrealized owing to obstacles including a limited ability to standardize and consolidate data and challenges in sharing data, among a variety of sources, providers, and facilities. Here, we discuss some of these challenges and potential solutions, as well as initiatives that are already underway to take advantage of Big Data. © 2017 New York Academy of Sciences.

  20. Can Any Good Thing Come out of Nazareth? (John 1:46) 1999 George C. Pimentel Award, sponsored by Union Carbide Corporation

    NASA Astrophysics Data System (ADS)

    Orna, Mary Virginia

    1999-09-01

    We are in the era of Big Science, which also means big institutions where the Big Science is done. However, higher education in the United States is unique in that parallel to the array of big institutions is a system of small liberal arts and sciences colleges where students receive the personal attention and faculty contact that is often not possible at larger institutions. While these smaller institutions are limited in resources and finances, studies have shown that they contribute a disproportionately higher number of leaders across a spectrum of disciplines, including chemistry. This address summarizes my personal odyssey and the reasons for the award. In it, I emphasize the advantages enjoyed by liberal arts and sciences students and faculty that enable them to overcome the view that great things can only be done in large, cosmopolitan settings.

  1. NASA Astrophysics EPO Resources For Engaging Girls in Science

    NASA Astrophysics Data System (ADS)

    Sharma, M.; Mendoza, D.; Smith, D.; Hasan, H.

    2011-09-01

    A new collaboration among the NASA Science Mission Directorate (SMD) Astrophysics EPO community aims to engage, through the library setting, girls who do not self-select as being interested in science. The collaboration seeks to (i) improve how girls view themselves as someone who knows about, uses, and sometimes contributes to science, and (ii) increase the capacity of EPO practitioners and librarians (both school and public) to engage girls in science. As part of this collaboration, we are collating the research on audience needs and best practices, along with SMD EPO resources, activities, and projects that focus on or can be recast toward engaging girls in science. This ASP article highlights several available resources and individual projects, such as: (i) Afterschool Universe, an out-of-school hands-on astronomy curriculum targeted at middle school students and an approved Great Science for Girls curriculum; (ii) Big Explosions and Strong Gravity, a Girl Scout patch-earning event for middle-school-aged girls to learn astronomy through hands-on activities and interaction with actual astronomers; and (iii) the JWST-NIRCAM Train the Trainer workshops and activities for Girl Scouts of USA leaders; etc. The NASA Astrophysics EPO community welcomes the broader EPO community to discuss with us how best to engage non-science-attentive girls in science, technology, engineering, and mathematics (STEM), and to explore further collaborations on this theme.

  2. Advanced Research Projects Agency - Energy (ARPA-E): Background, Status, and Selected Issues for Congress

    DTIC Science & Technology

    2009-04-29

    in 2007. It effectively began operation in February 2008 when its first director, Lisa Porter, began to manage the organization. IARPA is considered... Personal Communication with Lisa Porter, Director, IARPA, January 23, 2009. Sally Adde, “Q&A With: IARPA Director Lisa Porter,” IEEE... (continued) 109-39 (Washington: GPO, 2006). John M. Broder and Matthew L. Wald, “Big Science Role Is Seen in Global Warming Cure,” New

  3. Multicore: Fallout from a Computing Evolution

    ScienceCinema

    Yelick, Kathy [Director, NERSC]

    2017-12-09

    July 22, 2008 Berkeley Lab lecture: Parallel computing used to be reserved for big science and engineering projects, but in two years that's all changed. Even laptops and hand-helds use parallel processors. Unfortunately, the software hasn't kept pace. Kathy Yelick, Director of the National Energy Research Scientific Computing Center at Berkeley Lab, describes the resulting chaos and the computing community's efforts to develop exciting applications that take advantage of tens or hundreds of processors on a single chip.

  4. Publications - GMC 411 | Alaska Division of Geological & Geophysical

    Science.gov Websites

    Whole-rock and trace element analyses of two amphibolite outcrop samples from the 2001 DGGS Salcha River-Pogo project, Big Delta Quadrangle, Alaska. Authors: Newberry, R.J. Quadrangle(s): Big Delta. Bibliographic Reference: Newberry, R.J., 2012, Whole-rock and trace element analyses of two amphibolite outcrop samples from the 2001 DGGS Salcha River-Pogo project, Big Delta Quadrangle.

  5. Consortium biology in immunology: the perspective from the Immunological Genome Project.

    PubMed

    Benoist, Christophe; Lanier, Lewis; Merad, Miriam; Mathis, Diane

    2012-10-01

    Although the field has a long collaborative tradition, immunology has made less use than genetics of 'consortium biology', wherein groups of investigators together tackle large integrated questions or problems. However, immunology is naturally suited to large-scale integrative and systems-level approaches, owing to the multicellular and adaptive nature of the cells it encompasses. Here, we discuss the value and drawbacks of this organization of research, in the context of the long-running 'big science' debate, and consider the opportunities that may exist for the immunology community. We position this analysis in light of our own experience, both positive and negative, as participants of the Immunological Genome Project.

  6. Open Research Challenges with Big Data - A Data Scientist's Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sukumar, Sreenivas R

    In this paper, we discuss data-driven discovery challenges of the Big Data era. We observe that recent innovations in being able to collect, access, organize, integrate, and query massive amounts of data from a wide variety of data sources have brought statistical data mining and machine learning under more scrutiny and evaluation for gleaning insights from the data than ever before. In that context, we pose and debate the question: are data mining algorithms scaling with the ability to store and compute? If yes, how? If not, why not? We survey recent developments in the state of the art to discuss emerging and outstanding challenges in the design and implementation of machine learning algorithms at scale. We leverage experience from real-world Big Data knowledge discovery projects across domains of national security, healthcare, and manufacturing to suggest our efforts be focused along the following axes: (i) the data science challenge: designing scalable and flexible computational architectures for machine learning (beyond just data retrieval); (ii) the science of data challenge: the ability to understand characteristics of data before applying machine learning algorithms and tools; and (iii) the scalable predictive functions challenge: the ability to construct, learn, and infer with increasing sample size, dimensionality, and categories of labels. We conclude with a discussion of opportunities and directions for future research.

  7. Success in large high-technology projects: What really works?

    NASA Astrophysics Data System (ADS)

    Crosby, P.

    2014-08-01

    Despite a plethora of tools, technologies and management systems, successful execution of big science and engineering projects remains problematic. The sheer scale of globally funded projects such as the Large Hadron Collider and the Square Kilometre Array telescope means that lack of project success can impact both on national budgets, and collaborative reputations. In this paper, I explore data from contemporary literature alongside field research from several current high-technology projects in Europe and Australia, and reveal common `pressure points' that are shown to be key influencers of project control and success. I discuss how mega-science projects sit between being merely complicated and chaotic, and explain the importance of understanding multiple dimensions of project complexity. Project manager/leader traits are briefly discussed, including capability to govern and control such enterprises. Project structures are examined, including the challenge of collaborations. I show that early attention to building project resilience, curbing optimism, and risk alertness can help prepare large high-tech projects against threats, and why project managers need to understand aspects of `the silent power of time'. Mission assurance is advanced as a critical success function, alongside the deployment of task forces and new combinations of contingency plans. I argue for increased project control through industrial-style project reviews, and show how post-project reviews are an under-used, yet invaluable avenue of personal and organisational improvement. Lastly, I discuss the avoidance of project amnesia through effective capture of project knowledge, and transfer of lessons-learned to subsequent programs and projects.

  8. Frontier Scientists' project probes audience science interests with website, social media, TV broadcast, game, and pop-up book

    NASA Astrophysics Data System (ADS)

    O'Connell, E. A.

    2017-12-01

    The Frontier Scientists National Science Foundation project titled Science in Alaska: Using Multimedia to Support Science Education produced research products in several formats: short and long videos, blogs, social media, a computer game, and a pop-up book. These formats reached distinctly different audiences. Internet users, public TV viewers, gamers, schools, and parents and young children were drawn to Frontier Scientists' research in direct and indirect ways. The analytics (our big data) derived from this media broadcast have given us insight into what works, what doesn't, and what the next steps are. We have evidence for what is needed to present science as an interesting, vital, and necessary component of the general public's daily information diet, and as an important tool for scientists to publicize research and to thrive in their careers. Collaborations with scientists at several universities, USGS, Native organizations, tourism organizations, and Alaska museums promoted the accuracy of videos and increased viewing. For example, Erin Marbarger at the Anchorage Museum edited the pop-up book titled The Adventures of Apun the Arctic Fox and provided Spark!Lab to test parents' and children's interest in it. Even without a marketing budget, Frontier Scientists' minimal publicity during the three-year project still drew an audience. Frontier Scientists was awarded Best Website 2016 by the Alaska Press Club, and won a number of awards for short videos and TV programs.

  9. Remotely Operated Vehicles (ROVs) Provide a "Big Data Progression"

    NASA Astrophysics Data System (ADS)

    Oostra, D.; Sanghera, S. S.; Mangosing, D. C., Jr.; Lewis, P. M., Jr.; Chambers, L. H.

    2015-12-01

    This year, science and technology teams at the NASA Langley Science Directorate were challenged with creating an API-based web application using RockBlock Mobile sensors mounted on a zero pressure high-altitude balloon. The system tracks and collects meteorological data parameters and visualizes this data in near real time, using a MEAN development stack to create an HTML5 based tool that can send commands to the vehicle, parse incoming data, and perform other functions to store and serve data to other devices. NASA developers and science educators working on this project saw an opportunity to use this emerging technology to address a gap identified in science education between middle and high school curricula. As students learn about data analysis in elementary and middle school, they are taught to collect data from in situ sources. In high school, students are then asked to work with remotely sensed data, without always having the experience or understanding of how that data is collected. We believe that using ROVs to create a "big data progression" for students will not only enhance their ability to understand how remote satellite data is collected, but will also provide the outlet for younger students to expand their interest in science and data prior to entering high school. In this presentation, we will share and discuss our experiences with ROVs, APIs and data viz applications, with a focus on the next steps for developing this emerging capability.

  10. An Initial Analysis of Learning Styles Exhibited by High School Science Students

    NASA Astrophysics Data System (ADS)

    Donelson, Frederick; Bensel, H.; Miller, D.; Seebode, S.; Ciardi, D. R.; Howell, S. B.

    2014-01-01

    Educational research magazines are filled with information on learning styles and how they affect the learning process, but few studies have been conducted to specifically look at learning styles exhibited by high school science students. This project attempted to obtain a general “snapshot” of learning styles found in the high school science classroom, and then compare that to one derived from a subgroup of highly motivated science students involved in a NITARP student team. Control students (N=54) from elective science courses at four high schools (urban, suburban, and rural) were administered the Felder Learning Style (FLS) assessment and rated on Likert scales in four learning constructs: Active/Reflective, Sensing/Intuitive, Visual/Verbal, and Sequential/Global. NITARP student team members (N=7) were given the FLS before project work began, and then re-tested approximately three months later, after project work concluded. Chi-square analysis showed no clear significant difference between the general group and the NITARP group (p = .52). Both groups tended to be very visual and sequential, but more reflective than active. The results suggest several concerns that science teachers may need to address: (1) Research shows best-practice science classes are often hands-on, yet a majority of students are more reflective than active; (2) Big ideas tend to be better understood by global students, but a majority are more sequential; (3) Since a majority of students are visual, information given verbally may not be very effective. Further research is indicated for these areas of discontinuity. This research was conducted as part of the NASA/IPAC Training in Archival Research Project (NITARP) and was funded by NASA Astrophysics Data Program and Archive Outreach funds.

  11. Big Data Science: Opportunities and Challenges to Address Minority Health and Health Disparities in the 21st Century

    PubMed Central

    Zhang, Xinzhi; Pérez-Stable, Eliseo J.; Bourne, Philip E.; Peprah, Emmanuel; Duru, O. Kenrik; Breen, Nancy; Berrigan, David; Wood, Fred; Jackson, James S.; Wong, David W.S.; Denny, Joshua

    2017-01-01

    Addressing minority health and health disparities has been a missing piece of the puzzle in Big Data science. This article focuses on three priority opportunities that Big Data science may offer to the reduction of health and health care disparities. One opportunity is to incorporate standardized information on demographic and social determinants in electronic health records in order to target ways to improve quality of care for the most disadvantaged populations over time. A second opportunity is to enhance public health surveillance by linking geographical variables and social determinants of health for geographically defined populations to clinical data and health outcomes. Third and most importantly, Big Data science may lead to a better understanding of the etiology of health disparities and understanding of minority health in order to guide intervention development. However, the promise of Big Data needs to be considered in light of significant challenges that threaten to widen health disparities. Care must be taken to incorporate diverse populations to realize the potential benefits. Specific recommendations include investing in data collection on small sample populations, building a diverse workforce pipeline for data science, actively seeking to reduce digital divides, developing novel ways to assure digital data privacy for small populations, and promoting widespread data sharing to benefit under-resourced minority-serving institutions and minority researchers. With deliberate efforts, Big Data presents a dramatic opportunity for reducing health disparities but without active engagement, it risks further widening them. PMID:28439179

  13. Data Discovery of Big and Diverse Climate Change Datasets - Options, Practices and Challenges

    NASA Astrophysics Data System (ADS)

    Palanisamy, G.; Boden, T.; McCord, R. A.; Frame, M. T.

    2013-12-01

    Developing data search tools is a very common, but often confusing, task for most data-intensive scientific projects. These search interfaces need to be continually improved to handle the ever-increasing diversity and volume of data collections. Many aspects determine the type of search tool a project needs to provide to its user community. These include: the number of datasets, the amount and consistency of discovery metadata, ancillary information such as the availability of quality information and provenance, and the availability of similar datasets from other distributed sources. The Environmental Data Science and Systems (EDSS) group within the Environmental Science Division at the Oak Ridge National Laboratory has a long history of successfully managing diverse and big observational datasets for various scientific programs via data centers such as DOE's Atmospheric Radiation Measurement Program (ARM), DOE's Carbon Dioxide Information and Analysis Center (CDIAC), USGS's Core Science Analytics and Synthesis (CSAS) metadata Clearinghouse, and NASA's Distributed Active Archive Center (ORNL DAAC). This talk will showcase some of the recent developments for improving data discovery within these centers. The DOE ARM program recently developed a data discovery tool which allows users to search and discover over 4,000 observational datasets. These datasets are key to research efforts related to global climate change. The ARM discovery tool features many new functions such as filtered and faceted search logic, multi-pass data selection, filtering data based on data quality, graphical views of data quality and availability, direct access to data quality reports, and data plots. The ARM Archive also provides discovery metadata to broader metadata clearinghouses such as ESGF, IASOA, and GOS. In addition to the new interface, ARM is also currently working on providing DOI metadata records to publishers such as Thomson Reuters and Elsevier.
The ARM program also provides a standards-based online metadata editor (OME) for PIs to submit their data to the ARM Data Archive. The USGS CSAS metadata Clearinghouse aggregates metadata records from several USGS projects and other partner organizations. The Clearinghouse allows users to search and discover over 100,000 biological and ecological datasets from a single web portal. The Clearinghouse has also enabled new data discovery functions such as enhanced geo-spatial searches based on land and ocean classifications, metadata completeness rankings, data linkage via digital object identifiers (DOIs), and semantically enhanced keyword searches. The Clearinghouse is also currently working on a dashboard which allows data providers to view various statistics, such as the number of their records accessed via the Clearinghouse, the most popular keywords, a metadata quality report, and a DOI creation service. The Clearinghouse also publishes metadata records to broader portals such as NSF DataONE and Data.gov. The author will also present how these capabilities are being reused by recent and upcoming data centers such as DOE's NGEE-Arctic project. References: [1] Devarakonda, R., Palanisamy, G., Wilson, B. E., & Green, J. M. (2010). Mercury: reusable metadata management, data discovery and access system. Earth Science Informatics, 3(1-2), 87-94. [2] Devarakonda, R., Shrestha, B., Palanisamy, G., Hook, L., Killeffer, T., Krassovski, M., ... & Frame, M. (2014, October). OME: Tool for generating and managing metadata to handle BigData. In BigData Conference (pp. 8-10).

  14. Passeport pour les deux infinis: an educational project in French

    NASA Astrophysics Data System (ADS)

    Arnaud, Nicolas; Descotes-Genon, Sébastien; Kerhoas-Cavata, Sophie; Paul, Jacques; Robert-Esil, Jean-Luc; Royole-Degieux, Perrine

    2016-04-01

    Passeport pour les deux infinis ("Passport for the two infinities", Pass2i for short) is a French educational project aiming at promoting the physics of the infinitely small (particle physics) and of the infinitely big (cosmology and astrophysics) to high-school teachers and students. It has been managed since 2009 by a small team of outreach experts (physicists and engineers) from the CNRS and the CEA. The Pass2i cornerstone is a reversible book - where each side explores one of the two infinities - which is given for free to high-school science teachers who request it, thanks to the support of French funding agencies. The Pass2i non-profit association wants to be a bridge between science and education: training sessions are organized for teachers, and educational resources are created and made available for download on the Pass2i website (http://www.passeport2i.fr).

  15. What's the Big Deal? Collection Evaluation at the National Level

    ERIC Educational Resources Information Center

    Jurczyk, Eva; Jacobs, Pamela

    2014-01-01

    This article discusses a project undertaken to assess the journals in a Big Deal package by applying a weighted value algorithm measuring quality, utility, and value of individual titles. Carried out by a national library consortium in Canada, the project confirmed the value of the Big Deal package while providing a quantitative approach for…

  16. Big data in forensic science and medicine.

    PubMed

    Lefèvre, Thomas

    2018-07-01

    In less than a decade, big data in medicine has become quite a phenomenon, and many biomedical disciplines have weighed in on the topic. Perspectives and debates are flourishing, while a consensual definition of big data is still lacking. The 3Vs paradigm is frequently evoked to define the big data principles and stands for Volume, Variety and Velocity. Even according to this paradigm, genuine big data studies are still scarce in medicine and may not meet all expectations. On the one hand, techniques usually presented as specific to big data, such as machine learning, are supposed to support the ambition of personalized, predictive and preventive medicine. These techniques are mostly far from new; the most ancient are more than 50 years old. On the other hand, several issues closely related to the properties of big data and inherited from other scientific fields, such as artificial intelligence, are often underestimated if not ignored. Besides, a few papers temper the almost unanimous big data enthusiasm and are worth attention, since they delineate what is at stake. In this context, forensic science is still awaiting its position papers, as well as a comprehensive outline of what kind of contribution big data could bring to the field. The present situation calls for definitions and actions to rationally guide research and practice in big data. It is an opportunity for grounding a truly interdisciplinary approach in forensic science and medicine that is mainly based on evidence. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  17. Live Storybook Outcomes of Pilot Multidisciplinary Elementary Earth Science Collaborative Project

    NASA Astrophysics Data System (ADS)

    Soeffing, C.; Pierson, R.

    2017-12-01

    Anchoring phenomena leading to student-led investigations are key to applying the NGSS standards in the classroom. This project employs the GLOBE elementary storybook, Discoveries at Willow Creek, as an inspiration and operational framework for a collaborative pilot project engaging 4th grade students in asking questions, collecting relevant data, and using analytical tools to document and understand natural phenomena. The Institute of Global Environmental Strategies (IGES), a GLOBE Partner; the Outdoor Campus, an informal outdoor learning facility managed by South Dakota Game, Fish and Parks; the University of Sioux Falls; and All City Elementary, Sioux Falls are collaborating partners in this project. The Discoveries at Willow Creek storyline introduces young students to the scientific process, and models how they can apply science and engineering practices (SEPs) to discover and understand the Earth system in which they live. One innovation associated with this project is the formal engagement of elementary students in a global citizen science program (for all ages), GLOBE Observer, and engaging them in data collection using GLOBE Observer's Cloud and Mosquito Habitat Mapper apps. As modeled by the fictional students from Willow Creek, the 4th grade students will identify their 3 study sites at the Outdoor Campus, keep a journal, and record observations. The students will repeat their investigations at the Outdoor Campus to document and track change over time. Students will be introduced to "big data" in a manageable way, as they see their observations populate GLOBE's map-based data visualization.
Our research design recognizes the comfort and familiarity of literacy activities in the elementary classroom for students and teachers alike, and postulates that connecting a science education project to an engaging storybook text will contribute to successful implementation and measurable learning outcomes. We will report on the Fall 2017 pilot metrics of success, along with a discussion of multi-partner collaborations, project scale-up, and sustainability.

  18. Nano-Bio-Genesis: tracing the rise of nanotechnology and nanobiotechnology as 'big science'

    PubMed Central

    Kulkarni, Rajan P

    2007-01-01

    Nanotechnology research has lately been of intense interest because of its perceived potential for many diverse fields of science. Nanotechnology's tools have found application in diverse fields, from biology to device physics. By the 1990s, there was a concerted effort in the United States to develop a national initiative to promote such research. The success of this effort led to a significant influx of resources and interest in nanotechnology and nanobiotechnology and to the establishment of centralized research programs and facilities. Further government initiatives (at federal, state, and local levels) have firmly cemented these disciplines as 'big science,' with efforts increasingly concentrated at select laboratories and centers. In many respects, these trends mirror certain changes in academic science over the past twenty years, with a greater emphasis on applied science and research that can be more directly utilized for commercial applications. We also compare the National Nanotechnology Initiative and its successors to the Human Genome Project, another large-scale, government-funded initiative. These precedents made shifts in nanotechnology easier for researchers to accept, as they followed trends already established within most fields of science. Finally, these trends are examined in the design of technologies for detection and treatment of cancer, through the Alliance for Nanotechnology in Cancer initiative of the National Cancer Institute. Federal funding of these nanotechnology initiatives has allowed for expansion into diverse fields and has provided the impetus for broadening the scope of research in several of them, especially biomedicine, though the ultimate utility and impact of all these efforts remain to be seen. PMID:17629932

  19. Machine learning for Big Data analytics in plants.

    PubMed

    Ma, Chuang; Zhang, Hao Helen; Wang, Xiangfeng

    2014-12-01

    Rapid advances in high-throughput genomic technology have enabled biology to enter the era of 'Big Data' (large datasets). The plant science community not only needs to build its own Big-Data-compatible parallel computing and data management infrastructures, but also to seek novel analytical paradigms to extract information from the overwhelming amounts of data. Machine learning offers promising computational and analytical solutions for the integrative analysis of large, heterogeneous and unstructured datasets on the Big-Data scale, and is gradually gaining popularity in biology. This review introduces the basic concepts and procedures of machine-learning applications and envisages how machine learning could interface with Big Data technology to facilitate basic research and biotechnology in the plant sciences. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. ["Big data" - large data, a lot of knowledge?].

    PubMed

    Hothorn, Torsten

    2015-01-28

    For several years now, the term Big Data has described technologies for extracting knowledge from data. Applications of Big Data and their consequences are also increasingly discussed in the mass media. Because medicine is an empirical science, we discuss the meaning of Big Data and its potential for future medical research.

  1. Taming the Data Deluge to Unravel the Mysteries of the Universe

    NASA Astrophysics Data System (ADS)

    Johnston-Hollitt, M.

    2017-04-01

    Modern Astrophysics is one of the most data intensive research fields in the world and is driving many of the required innovations in the "big data" space. Foremost in astronomy in terms of data generation is radio astronomy, and in the last decade an increase in global interest and investment in the field has led to a large number of new or upgraded facilities which are each currently generating petabytes of data per annum. The peak of this so-called `radio renaissance' will be the Square Kilometre Array (SKA) - a global observatory designed to uncover the mysteries of the Universe. The SKA will create the highest resolution, fastest frame rate movie of the evolving Universe ever and, in doing so, will generate 160 terabytes of data a second, or close to 5 zettabytes of data per annum. Furthermore, due to the extreme faintness of extraterrestrial radio signals, the telescope elements for the SKA must be located in radio-quiet parts of the world with very low population density. Thus the project aims to build the most data intensive scientific experiment ever, in some of the most remote places on Earth. Generating and serving scientific data products of this scale to a global community of researchers from remote locations is just the first of the "big data" challenges the project faces. Coordination of a global network of tiered data resources will be required, along with software tools to exploit the vast sea of results generated. In fact, to fully realize the enormous scientific potential of this project, we will need not only better data distribution and coordination mechanisms, but also improved algorithms, artificial intelligence and ontologies to extract knowledge in an automated way at a scale not yet attempted in science. In this keynote I will present an overview of the SKA project, outline the "big data" challenges the project faces and discuss some of the approaches we are taking to tame the astronomical data deluge we face.

  2. "Air Toxics under the Big Sky": Examining the Effectiveness of Authentic Scientific Research on High School Students' Science Skills and Interest

    ERIC Educational Resources Information Center

    Ward, Tony J.; Delaloye, Naomi; Adams, Earle Raymond; Ware, Desirae; Vanek, Diana; Knuth, Randy; Hester, Carolyn Laurie; Marra, Nancy Noel; Holian, Andrij

    2016-01-01

    "Air Toxics Under the Big Sky" is an environmental science outreach/education program that incorporates the Next Generation Science Standards (NGSS) 8 Practices with the goal of promoting knowledge and understanding of authentic scientific research in high school classrooms through air quality research. This research explored: (1)…

  3. Multicore: Fallout From a Computing Evolution (LBNL Summer Lecture Series)

    ScienceCinema

    Yelick, Kathy [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC)

    2018-05-07

    Summer Lecture Series 2008: Parallel computing used to be reserved for big science and engineering projects, but in two years that's all changed. Even laptops and hand-helds use parallel processors. Unfortunately, the software hasn't kept pace. Kathy Yelick, Director of the National Energy Research Scientific Computing Center at Berkeley Lab, describes the resulting chaos and the computing community's efforts to develop exciting applications that take advantage of tens or hundreds of processors on a single chip.

  4. Neuroscience thinks big (and collaboratively).

    PubMed

    Kandel, Eric R; Markram, Henry; Matthews, Paul M; Yuste, Rafael; Koch, Christof

    2013-09-01

    Despite cash-strapped times for research, several ambitious collaborative neuroscience projects have attracted large amounts of funding and media attention. In Europe, the Human Brain Project aims to develop a large-scale computer simulation of the brain, whereas in the United States, the Brain Activity Map is working towards establishing a functional connectome of the entire brain, and the Allen Institute for Brain Science has embarked upon a 10-year project to understand the mouse visual cortex (the MindScope project). US President Barack Obama's announcement of the BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies Initiative) in April 2013 highlights the political commitment to neuroscience and is expected to further foster interdisciplinary collaborations, accelerate the development of new technologies and thus fuel much needed medical advances. In this Viewpoint article, five prominent neuroscientists explain the aims of the projects and how they are addressing some of the questions (and criticisms) that have arisen.

  5. What Will the Neighbors Think? Building Large-Scale Science Projects Around the World

    ScienceCinema

    Jones, Craig; Mrotzek, Christian; Toge, Nobu; Sarno, Doug

    2017-12-22

    Public participation is an essential ingredient for turning the International Linear Collider into a reality. Wherever the proposed particle accelerator is sited in the world, its neighbors -- in any country -- will have something to say about hosting a 35-kilometer-long collider in their backyards. When it comes to building large-scale physics projects, almost every laboratory has a story to tell. Three case studies from Japan, Germany and the US will be presented to examine how community relations are handled in different parts of the world. How do particle physics laboratories interact with their local communities? How do neighbors react to building large-scale projects in each region? How can the lessons learned from past experiences help in building the next big project? These and other questions will be discussed to engage the audience in an active dialogue about how a large-scale project like the ILC can be a good neighbor.

  6. From big data to deep insight in developmental science

    PubMed Central

    2016-01-01

    The use of the term ‘big data’ has grown substantially over the past several decades and is now widespread. In this review, I ask what makes data ‘big’ and what implications the size, density, or complexity of datasets have for the science of human development. A survey of existing datasets illustrates how existing large, complex, multilevel, and multimeasure data can reveal the complexities of developmental processes. At the same time, significant technical, policy, ethics, transparency, cultural, and conceptual issues associated with the use of big data must be addressed. Most big developmental science data are currently hard to find and cumbersome to access, the field lacks a culture of data sharing, and there is no consensus about who owns or should control research data. But, these barriers are dissolving. Developmental researchers are finding new ways to collect, manage, store, share, and enable others to reuse data. This promises a future in which big data can lead to deeper insights about some of the most profound questions in behavioral science. WIREs Cogn Sci 2016, 7:112–126. doi: 10.1002/wcs.1379 For further resources related to this article, please visit the WIREs website. PMID:26805777

  7. Reference Data Layers for Earth and Environmental Science: History, Frameworks, Science Needs, Approaches, and New Technologies

    NASA Astrophysics Data System (ADS)

    Lenhardt, W. C.

    2015-12-01

    The Global Mapping Project, Web-enabled Landsat Data (WELD), the International Satellite Land Surface Climatology Project (ISLSCP), hydrology, solid earth dynamics, sedimentary geology, climate modeling, integrated assessments, and so on all have needs for, or have worked to develop, consistently integrated data layers for Earth and environmental science. This paper will present an overview of an abstract notion of data layers of this type, which we refer to as reference data layers for Earth and environmental science, highlight some historical examples, and delve into new approaches. The concept of reference data layers in this context combines data availability, cyberinfrastructure and data science, as well as domain science drivers. We argue that current advances in cyberinfrastructure, such as iPython notebooks and integrated science processing environments such as iPlant's Discovery Environment, coupled with vast arrays of new data sources, warrant another look at how to create, maintain, and provide reference data layers. The goal is to provide a context for understanding scientists' needs for reference data layers to conduct their research. In addition to the topics described above, this presentation will also outline some of the challenges and present some ideas for new approaches to addressing these needs. Promoting the idea of reference data layers is relevant to a number of existing related activities such as EarthCube, RDA, ESIP, the nascent NSF Regional Big Data Innovation Hubs, and others.

  8. The Big Shift: How the University of Houston Libraries Moved Everything

    ERIC Educational Resources Information Center

    Sharpe, Paul A.

    2012-01-01

    More than a project, The Big Shift is an epic library tale for the ages. What starts as a simple collection shift grows into a major space-planning project, a lesson in catalog maintenance, and thanks to Hurricane Ike, a disaster recovery effort. The Big Shift tells the story of how the University of Houston Libraries handled the numerous…

  9. Big Data breaking barriers - first steps on a long trail

    NASA Astrophysics Data System (ADS)

    Schade, S.

    2015-04-01

    Most data sets and streams have a geospatial component. Some people even claim that about 80% of all data is related to location. In the era of Big Data this number might even be an underestimate, as data sets interrelate and initially non-spatial data becomes indirectly geo-referenced. The optimal treatment of Big Data thus requires advanced methods and technologies for handling the geospatial aspects of data storage, processing, pattern recognition, prediction, visualisation and exploration. On the one hand, our work draws on the earth and environmental sciences for existing interoperability standards and for the foundational data structures, algorithms and software that are required to meet these geospatial information handling tasks. On the other hand, we are concerned with the arising need to combine human analysis capacities (intelligence augmentation) with machine power (artificial intelligence). This paper provides an overview of the emerging landscape and outlines our (Digital Earth) vision for addressing the upcoming issues. We particularly call for the projection and re-use of existing environmental, earth observation and remote sensing expertise in other sectors, i.e. for breaking down the barriers between these silos by investigating integrated applications.

  10. Nursing Management Minimum Data Set: Cost-Effective Tool To Demonstrate the Value of Nurse Staffing in the Big Data Science Era.

    PubMed

    Pruinelli, Lisiane; Delaney, Connie W; Garciannie, Amy; Caspers, Barbara; Westra, Bonnie L

    2016-01-01

    There is a growing body of evidence on the relationship of nurse staffing to patient, nurse, and financial outcomes. With the advent of big data science and the development of big data analytics in nursing, data science with the reuse of big data is emerging as a timely and cost-effective approach to demonstrating nursing value. The Nursing Management Minimum Data Set (NMMDS) provides standard administrative data elements, definitions, and codes to measure the context in which care is delivered and, consequently, the value of nursing. The integration of the NMMDS elements into the current health system provides evidence for nursing leaders to measure and manage decisions, leading to better patient, staffing, and financial outcomes. It also enables the reuse of data for clinical scholarship and research.

  11. Technical Challenges and Opportunities of Centralizing Space Science Mission Operations (SSMO) at NASA Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Ido, Haisam; Burns, Rich

    2015-01-01

    The NASA Goddard Space Science Mission Operations project (SSMO) is performing a technical cost-benefit analysis for centralizing and consolidating operations of a diverse set of missions into a unified and integrated technical infrastructure. The presentation will focus on the notion of normalizing spacecraft operations processes, workflows, and tools. It will also show the processes of creating a standardized open architecture, along with common security models and implementations, interfaces, services, automation, notifications, alerts, logging, publish/subscribe, and middleware capabilities. The presentation will also discuss how to leverage traditional capabilities along with virtualization, cloud computing services, control groups and containers, and possibly Big Data concepts.

  12. News Conference: The Big Bangor Day Meeting Lecture: Charterhouse plays host to a physics day Festival: Science on Stage festival 2013 arrives in Poland Event: Scottish Physics Teachers' Summer School Meeting: Researchers and educators meet at Lund University Conference: Exeter marks the spot Recognition: European Physical Society uncovers an historic site Education: Initial teacher education undergoes big changes Forthcoming events

    NASA Astrophysics Data System (ADS)

    2013-09-01

    Conference: The Big Bangor Day Meeting Lecture: Charterhouse plays host to a physics day Festival: Science on Stage festival 2013 arrives in Poland Event: Scottish Physics Teachers' Summer School Meeting: Researchers and educators meet at Lund University Conference: Exeter marks the spot Recognition: European Physical Society uncovers an historic site Education: Initial teacher education undergoes big changes Forthcoming events

  13. Next Generation Astronomical Data Processing using Big Data Technologies from the Apache Software Foundation

    NASA Astrophysics Data System (ADS)

    Mattmann, Chris

    2014-04-01

    In this era of exascale instruments for astronomy, we must naturally develop next-generation capabilities for the unprecedented data volume and velocity that will arrive from these ground-based sensors and observatories. Integrating scientific algorithms stewarded by scientific groups unobtrusively and rapidly; intelligently selecting data movement technologies; making use of cloud computing for storage and processing; and automatically extracting text, metadata and science from any type of file are all needed capabilities in this exciting time. Our group at NASA JPL has promoted the use of open source data management technologies available from the Apache Software Foundation (ASF) in pursuit of constructing next generation data management and processing systems for astronomical instruments including the Expanded Very Large Array (EVLA) in Socorro, NM and the Atacama Large Millimetre/Submillimetre Array (ALMA), as well as for the KAT-7 project led by SKA South Africa as a precursor to the full MeerKAT telescope. In addition, we are currently funded by the National Science Foundation in the US to work with MIT Haystack Observatory and the University of Cambridge in the UK to construct a Radio Array of Portable Interferometric Devices (RAPID) that will undoubtedly draw from the rich technology advances underway. NASA JPL is investing in a strategic initiative for Big Data that is pulling in these capabilities and technologies for astronomical instruments and also for Earth science remote sensing. In this talk I will describe the collaborative efforts underway and point to solutions in open source from the Apache Software Foundation that can be deployed and used today and that are already bringing benefits to our teams and projects. I will describe how others can take advantage of our experience and point towards future applications and contributions of these tools.

  14. Assessing Conceptual Understanding via Literacy-Infused, Inquiry-Based Science among Middle School English Learners and Economically-Challenged Students

    ERIC Educational Resources Information Center

    Lara-Alecio, Rafael; Irby, Beverly J.; Tong, Fuhui; Guerrero, Cindy; Koch, Janice; Sutton-Jones, Kara L.

    2018-01-01

    The overarching purpose of our study was to compare performances of treatment and control condition students who completed a literacy-infused, inquiry-based science intervention through sixth grade as measured by a big idea assessment tool which we refer to as the Big Ideas in Science Assessment (BISA). First, we determine the concurrent validity…

  15. Big-Data-Driven Stem Cell Science and Tissue Engineering: Vision and Unique Opportunities.

    PubMed

    Del Sol, Antonio; Thiesen, Hans J; Imitola, Jaime; Carazo Salas, Rafael E

    2017-02-02

    Achieving the promises of stem cell science to generate precise disease models and designer cell samples for personalized therapeutics will require harnessing pheno-genotypic cell-level data quantitatively and predictively in the lab and clinic. Those requirements could be met by developing a Big-Data-driven stem cell science strategy and community. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Clinton Administration announces FY 2001 budget request

    NASA Astrophysics Data System (ADS)

    Showstack, Randy

    Blessed with a strong U.S. economy, the Clinton Administration on February 7 released a fiscal year 2001 federal budget request totaling a whopping $1,835 billion. Most of the funding request is slated for big-ticket items including Social Security, defense spending, Medicaid, Medicare, and paying down the federal debt. However, within the 19% of the budget that funds non-defense discretionary programs, science agencies receive fairly healthy increases. The National Science Foundation (NSF) budget request would increase NSF funding by 17.3%, or $675 million, and bring the total budget request to $4.6 billion. This includes significant increases for several initiatives: biocomplexity in the environment, information technology research, nanoscale science and engineering, and the 21st century workforce. Among the major Earth science projects are the launch of the EarthScope initiative, which includes USArray and the San Andreas Fault Observatory at Depth (SAFOD), and the National Ecological Observatory Network (NEON).

  17. [Empowerment of women in difficult life situations: the BIG project].

    PubMed

    Rütten, A; Röger, U; Abu-Omar, K; Frahsa, A

    2008-12-01

    BIG is a project for the promotion of physical activity among women in difficult life situations. Following the main health promotion principles of the WHO, the project aims to enable, or empower, the women to take control of the determinants of their health. A comprehensive participatory approach was applied, and the women were included in planning, implementing and evaluating the project. To measure the effects of BIG on the empowerment of participating women, qualitative semi-structured interviews were conducted with 15 women participating in BIG. Data were analyzed using qualitative content analysis. Results showed empowerment at the individual level, as the women gained various competencies and perceived self-efficacy. These effects were supported by empowerment processes at the organizational and community levels, where the women gained control over their life situations and over the policies influencing them. Therefore, the participatory approach of BIG is a key success factor in promoting the empowerment of women in difficult life situations.

  18. The BIG Data Center: from deposition to integration to translation

    PubMed Central

    2017-01-01

    Biological data are generated at unprecedentedly exponential rates, posing considerable challenges in big data deposition, integration and translation. The BIG Data Center, established at Beijing Institute of Genomics (BIG), Chinese Academy of Sciences, provides a suite of database resources, including (i) Genome Sequence Archive, a data repository specialized for archiving raw sequence reads, (ii) Gene Expression Nebulas, a data portal of gene expression profiles based entirely on RNA-Seq data, (iii) Genome Variation Map, a comprehensive collection of genome variations for featured species, (iv) Genome Warehouse, a centralized resource housing genome-scale data with particular focus on economically important animals and plants, (v) Methylation Bank, an integrated database of whole-genome single-base resolution methylomes and (vi) Science Wikis, a central access point for biological wikis developed for community annotations. The BIG Data Center is dedicated to constructing and maintaining biological databases through big data integration and value-added curation, conducting basic research to translate big data into big knowledge and providing freely open access to a variety of data resources in support of worldwide research activities in both academia and industry. All of these resources are publicly available and can be found at http://bigd.big.ac.cn. PMID:27899658

  19. Small Particles, Big Science: The International LBNF/DUNE Project

    ScienceCinema

    None

    2018-06-25

    Neutrinos are the most abundant matter particles in the universe, yet very little is known about them. This animation shows how the Department of Energy’s Long-Baseline Neutrino Facility will power the Deep Underground Neutrino Experiment to help scientists understand the role neutrinos play in the universe. DUNE will also look for the birth of neutron stars and black holes by catching neutrinos from exploding stars. More than 800 scientists from 150 institutions in 27 countries, including Armenia, Belgium, Brazil, Bulgaria, Canada, Colombia, Czech Republic, Finland, France, Greece, India, Iran, Italy, Japan, Madagascar, Mexico, Netherlands, Peru, Poland, Romania, Russia, Spain, Switzerland, Turkey, Ukraine, the United Kingdom and the USA, are working on the LBNF/DUNE project.

  20. European Neutrons from Parasitic Research to Global Strategy: Realizing Plans for a Transnational European Spallation Source in the Wake of the Cold War

    NASA Astrophysics Data System (ADS)

    Kaiserfeld, Thomas

    2016-03-01

    Studies of Big Science focused early on instrumentation and scientific co-operation in large organizations, later taking into account symbolic values and specific research styles, and more recently also the relevance of commercial interests and economic development as well as the assimilation of research traditions. In line with these transformed practices, this presentation will analyse how an organization with the purpose of realizing a Big Science facility, the European Spallation Source, has successfully managed to present the project as relevant to different national and international policy-makers, to the community of European neutron researchers, and to different industrial interests. All this has been achieved in a research-policy environment that has been subject to drastic transformations, from calls to engage researchers from the former eastern bloc in the early 1990s, via competition with American and Asian researchers around the turn of the millennium, to intensified demands for business applications. During this process there has also been fierce competition between different potential sites in the U.K., Germany, Spain, Hungary and Sweden, not once, but twice. The project has in addition been plagued by withdrawals of key actors as well as challenging problems in the field of spallation-source construction. Nevertheless, the European Spallation Source has survived from the early 1990s until today, and is now initiating the construction process at Lund in southern Sweden. This presentation will analyse the different measures taken and arguments raised by the European Spallation Source project in order to realize the facility, and in particular the facility's different designs as responses to external demands and threats.

  1. Mario Bunge, Systematic Philosophy and Science Education: An Introduction

    NASA Astrophysics Data System (ADS)

    Matthews, Michael R.

    2012-10-01

    Mario Bunge was born in Argentina in 1919 and is now in his mid-90s. He studied atomic physics and quantum mechanics with Guido Beck (1903-1988), an Austrian refugee and student of Heisenberg. Additionally, he studied modern philosophy in an environment that was a philosophical backwater, becoming the first South American philosopher of science to be trained in science. His publications in physics, philosophy, psychology, sociology and the foundations of biology are staggering in number, and include a massive 8-volume Treatise on Philosophy. The unifying thread of his scholarship is the constant and vigorous advancement of the Enlightenment Project, and criticism of cultural and academic movements that deny or devalue the core planks of the project: namely its naturalism, the search for truth, the universality of science, the value of rationality, and respect for individuals. At a time when specialisation is widely decried, and its deleterious effects on science, philosophy of science, educational research and science teaching are recognised, and at a time when `grand narratives' are thought both undesirable and impossible—it is salutary to appraise the fruits of one person's pursuit of the `Big' scientific and philosophical picture or grand narrative. In doing so, this special issue brings together philosophers, physicists, biologists, sociologists, logicians, cognitive scientists, economists and mathematicians to examine facets of Mario Bunge's systematic philosophy and to appraise its contribution to important issues in current philosophy and, by implication, education.

  2. Challenges for Data Archival Centers in Evolving Environmental Sciences

    NASA Astrophysics Data System (ADS)

    Wei, Y.; Cook, R. B.; Gu, L.; Santhana Vannan, S. K.; Beaty, T.

    2015-12-01

    Environmental science has entered into a big data era as enormous data about the Earth environment are continuously collected through field and airborne missions, remote sensing observations, model simulations, sensor networks, etc. An open-access and open-management data infrastructure for data-intensive science is a major grand challenge in global environmental research (BERAC, 2010). Such an infrastructure, as exemplified by EOSDIS, GEOSS, and NSF EarthCube, will support the complete lifecycle of environmental data and ensure that data flow smoothly among the different phases of collection, preservation, integration, and analysis. Data archival centers, as the data integration units closest to data providers, serve as the driving force compiling and integrating heterogeneous environmental data into this global infrastructure. This presentation discusses the interoperability challenges and practices of the geosciences from the perspective of data archival centers, based on the operational experiences of the NASA-sponsored Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC) and related environmental data management activities. Specifically, we will discuss the challenges of 1) encouraging and helping scientists to more actively share data with the broader scientific community, so that valuable environmental data, especially the dark data collected by individual scientists in small independent projects, can be shared and integrated into the infrastructure to tackle big science questions; 2) curating heterogeneous multi-disciplinary data, focusing on the key aspects of identification, format, metadata, data quality, and semantics to make them ready to be plugged into a global data infrastructure, highlighting data curation practices at the ORNL DAAC for global campaigns such as BOREAS, LBA, and SAFARI 2000; and 3) enhancing the capabilities to more effectively and efficiently expose and deliver "big" environmental data to a broad range of users and systems. 
Experiences and challenges with integrating large data sets via the ORNL DAAC's data discovery and delivery Web services will be discussed.

  3. Putting Carbon in its Place: What You Can Do (LBNL Science at the Theater)

    ScienceCinema

    Walker, Iain; Regnier, Cindy [LBNL, Environmental Energy Technologies Division; Miller, Jeff; Masanet, Eric

    2018-06-28

    Science at the Theater: Berkeley Lab scientists reveal the latest research on how to reduce your carbon footprint at home, work, and when you shop. Learn how even small choices can have a big impact. Iain Walker's research focuses on optimizing the energy use and comfort of buildings. He's a staff scientist in the Energy Performance of Buildings Group, which is part of Berkeley Lab's Environmental Energy Technologies Division. He's also executive editor of Home Energy Magazine. Cindy Regnier is a Project Manager in the Environmental Energy Technologies Division at Berkeley Lab. She has over 13 years of mechanical engineering design experience, with a focus on low-energy buildings. Her projects have included several LEED Platinum buildings and the design of a 200,000 sf carbon neutral, net-zero energy science museum in San Francisco. Eric Masanet is Acting Deputy Leader of the International Energy Studies Group at Berkeley Lab. His research focuses on life-cycle assessments and energy efficiency analysis. He holds a joint research appointment in the Institute of Transportation Studies at UC Berkeley.

  4. No ``explosion'' in Big Bang cosmology: teaching kids the truth of what cosmologists really know

    NASA Astrophysics Data System (ADS)

    Gangui, Alejandro

    2011-06-01

    Common wisdom says that cosmologists are smart: they have developed a theory that can explain the ``origin of the universe''. Every time an astro-related, heavily funded ``big-science'' project comes to the media, naturally the question arises: will science -through this or that experiment- explain the origin of the cosmos? Can this be done with the LHC, for example? Will this dream machine create other universes? Of course, the very words we employ in cosmology reinforce this misconception: so Big Bang must be associated with an ``explosion'', even if a ``peculiar'' one, as it took place nowhere (there was presumably no space before the beginning) and happened virtually in no time (supposedly, space-time was created on this peculiar -singular- event). Right, the issue sounds confusing. Let us imagine what kids may get out of all this. We have recently presented a series of brief astronomy and cosmology books aimed at helping both kids and their teachers in these and other arcane subjects, all introduced with carefully chosen words and images that young children can understand. In particular, Volume Four deals with the Big Bang and emphasizes the notion of ``evolution'' as opposed to the -wrong- notion of ``origin'' behind the scientific model. We then explain some of the pillars of Big Bang cosmology: the expansion of space that drags away distant galaxies, as seen in the redshift of their emitted light; the build-up of light elements in a cooling bath of radiation, as explained by primordial nucleosynthesis; and the existence and main features of the ubiquitous cosmic microwave background radiation, where theory and observations agree to a highly satisfactory degree. Of course, one cannot attempt to answer the ``origins'' question when it is well known that all theories so far break down close to this origin (if there was actually an origin). 
It is through observations, analyses, lively discussions and recognition of the basic limitations of current theories and ideas, that we are led to try and reconstruct the past and predict the future evolution of our universe. Just that. Sound science turns out to be much more attractive when we tell the truth of what we really know.

  5. Remote Sensing Data Analytics for Planetary Science with PlanetServer/EarthServer

    NASA Astrophysics Data System (ADS)

    Rossi, Angelo Pio; Figuera, Ramiro Marco; Flahaut, Jessica; Martinot, Melissa; Misev, Dimitar; Baumann, Peter; Pham Huu, Bang; Besse, Sebastien

    2016-04-01

    Planetary Science datasets, beyond the change in the last two decades from physical volumes to internet-accessible archives, still face the problem of large-scale processing and analytics (e.g. Rossi et al., 2014, Gaddis and Hare, 2015). PlanetServer, the Planetary Science Data Service of the EC-funded EarthServer-2 project (#654367), tackles the planetary Big Data analytics problem with an array database approach (Baumann et al., 2014). It is developed to serve a large amount of calibrated, map-projected planetary data online, mainly through the Open Geospatial Consortium (OGC) Web Coverage Processing Service (WCPS) (e.g. Rossi et al., 2014; Oosthoek et al., 2013; Cantini et al., 2014). The focus of the H2020 evolution of PlanetServer is still on complex multidimensional data, particularly hyperspectral imaging and topographic cubes and imagery. In addition to hyperspectral and topographic data from Mars (Rossi et al., 2014), WCPS is applied to diverse datasets on the Moon, as well as Mercury. Other Solar System bodies will progressively become available. Derived parameters such as summary products and indices can be produced through WCPS queries, as can derived imagery colour combination products, dynamically generated and accessed also through the OGC Web Coverage Service (WCS). Scientific questions translated into queries can be posed to a large number of individual coverages (data products), locally, regionally or globally. The new PlanetServer system uses the open-source NASA WorldWind (e.g. Hogan, 2011) virtual globe as its visualisation engine, and the array database Rasdaman Community Edition as its core server component. Analytical tools and client components of relevance for multiple communities and disciplines are shared across services such as the Earth Observation and Marine Data Services of EarthServer. The Planetary Science Data Service of EarthServer is accessible on http://planetserver.eu. 
All of its code base will be available on GitHub at https://github.com/planetserver. References: Baumann, P., et al. (2015) Big Data Analytics for Earth Sciences: the EarthServer approach, International Journal of Digital Earth, doi: 10.1080/17538947.2014.1003106. Cantini, F. et al. (2014) Geophys. Res. Abs., Vol. 16, #EGU2014-3784. Gaddis, L., and T. Hare (2015), Status of tools and data for planetary research, Eos, 96, doi: 10.1029/2015EO041125. Hogan, P., 2011. NASA World Wind: Infrastructure for Spatial Data. Technical report. Proceedings of the 2nd International Conference on Computing for Geospatial Research & Applications, ACM. Oosthoek, J.H.P., et al. (2013) Advances in Space Research. doi: 10.1016/j.asr.2013.07.002. Rossi, A. P., et al. (2014) PlanetServer/EarthServer: Big Data analytics in Planetary Science. Geophysical Research Abstracts, Vol. 16, #EGU2014-5149.
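
    As the abstract describes, scientific questions are translated into WCPS queries posed against server-side coverages, for instance to compute band ratios or spectral indices over hyperspectral cubes. As a minimal sketch of what composing such a query can look like, using hypothetical coverage and band names (not actual PlanetServer identifiers):

```python
# Sketch only: build an OGC WCPS query string of the kind an array-database
# service such as PlanetServer can evaluate server-side. The coverage and
# band names below are hypothetical placeholders for illustration.

def build_band_ratio_query(coverage: str, band_a: str, band_b: str) -> str:
    """Return a WCPS query computing a per-pixel ratio of two spectral
    bands of a coverage, encoded as PNG for display in a web client."""
    return (
        f'for c in ({coverage}) '
        f'return encode((float) c.{band_a} / c.{band_b}, "png")'
    )

if __name__ == "__main__":
    query = build_band_ratio_query("mars_crism_cube", "band_233", "band_78")
    print(query)
    # The resulting string would then be submitted to the service's WCPS
    # endpoint, e.g. as a parameter of an HTTP GET request.
```

    Evaluating the expression on the server, rather than downloading the full cube, is what makes this approach suitable for the large map-projected archives the record describes.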

  6. Examining the Big-Fish-Little-Pond Effect on Students' Self-Concept of Learning Science in Taiwan Based on the TIMSS Databases

    ERIC Educational Resources Information Center

    Liou, Pey-Yan

    2014-01-01

    The purpose of this study is to examine the relationship between student self-concept and achievement in science in Taiwan based on the big-fish-little-pond effect (BFLPE) model using the Trends in International Mathematics and Science Study (TIMSS) 2003 and 2007 databases. Hierarchical linear modeling was used to examine the effects of the…

  7. A Proposed Concentration Curriculum Design for Big Data Analytics for Information Systems Students

    ERIC Educational Resources Information Center

    Molluzzo, John C.; Lawler, James P.

    2015-01-01

    Big Data is becoming a critical component of the Information Systems curriculum. Educators are gradually enhancing the concentration curriculum for Big Data in schools of computer science and information systems. This paper proposes a creative curriculum design for Big Data Analytics for a program at a major metropolitan university. The design…

  8. Quality and impact assessment in new geoscience communication : future perspectives through digital communication and Big Data exploration techniques

    NASA Astrophysics Data System (ADS)

    Vicari, Rosa; Schertzer, Daniel; Deutsch, Jean-Claude; Moilleron, Regis

    2015-04-01

    Since the 1990s, climate and environmental science communication has gradually become a priority of policy programmes, a consolidated subject of training and education, and a developed, greatly expanded field of professional practice. In contrast to this rapid evolution, however, there is presumably a deficit of research and reflection on objective tools for assessing the quality and impact of communication activities. Ensuring the quality of communication in the field of science has become more and more challenging because the role of traditional mediators (e.g. well-reputed newspapers or broadcasters, science museums), once considered quality guarantors, has now become marginal. Today, a new generation of communication professionals tend to be employed by research institutes in response to stronger demands to develop accountable research projects, to increase transparency and trust, and to support the dissemination and implementation of research findings. This research aims to understand how communication strategies addressed to the general public can optimise the impact of research findings in hydrology for resilient cities. The research will greatly benefit from the development of automated analysis of unstructured Big Data, which allows the exploration of huge amounts of digital communication data: blogs, social network postings, public speeches, press releases, publications, articles... Furthermore, these techniques facilitate the crossing of socio-economic and physical-environmental data, possibly leading to the identification of existing correlations. The case studies correspond to those of several research projects under the umbrella of the Chair "Hydrology for resilient cities", aimed at developing and testing new solutions in urban hydrology that will contribute to the resilience of our cities to extreme weather. 
This research was initiated in the framework of the Interreg IVB project RAINGAIN and pursued in the project Blue Green Dream of the EU KIC Climate and in worldwide collaborations (e.g. TOMACS). These projects involve awareness-raising and capacity-building activities aimed at stimulating cooperation between scientists, professionals (e.g. water managers, urban planners) and beneficiaries (e.g. concerned citizens, policy makers). They give credence to the fact that the key question is not whether geoscientists can act as communicators, but how to develop synergies among the various actors of geoscience communication through an enlargement of their scientific practices, rather than a detrimental reduction of them.

  9. Perspective: Materials informatics and big data: Realization of the "fourth paradigm" of science in materials science

    NASA Astrophysics Data System (ADS)

    Agrawal, Ankit; Choudhary, Alok

    2016-05-01

    Our ability to collect "big data" has greatly surpassed our capability to analyze it, underscoring the emergence of the fourth paradigm of science, which is data-driven discovery. The need for data informatics is also emphasized by the Materials Genome Initiative (MGI), further boosting the emerging field of materials informatics. In this article, we look at how data-driven techniques are playing a big role in deciphering processing-structure-property-performance relationships in materials, with illustrative examples of both forward models (property prediction) and inverse models (materials discovery). Such analytics can significantly reduce time-to-insight and accelerate cost-effective materials discovery, which is the goal of MGI.

  10. To See the Unseen: A History of Planetary Radar Astronomy

    NASA Technical Reports Server (NTRS)

    Butrica, Andrew J.

    1996-01-01

    This book relates the history of planetary radar astronomy from its origins in radar to the present day and, secondarily, brings to light that history as a case of 'Big Equipment but not Big Science'. Chapter One sketches the emergence of radar astronomy as an ongoing scientific activity at Jodrell Bank, where radar research revealed that meteors were part of the solar system. The chief Big Science driving early radar astronomy experiments was ionospheric research. Chapter Two links the Cold War and the Space Race to the first radar experiments attempted on planetary targets, while recounting the initial achievements of planetary radar, namely, the refinement of the astronomical unit and the rotational rate and direction of Venus. Chapter Three discusses early attempts to organize radar astronomy and the efforts at MIT's Lincoln Laboratory, in conjunction with Harvard radio astronomers, to acquire antenna time unfettered by military priorities. Here, the chief Big Science influencing the development of planetary radar astronomy was radio astronomy. Chapter Four spotlights the evolution of planetary radar astronomy at the Jet Propulsion Laboratory, a NASA facility, at Cornell University's Arecibo Observatory, and at Jodrell Bank. A congeries of funding from the military, the National Science Foundation, and finally NASA marked that evolution, which culminated in planetary radar astronomy finding a single Big Science patron, NASA. Chapter Five analyzes planetary radar astronomy as a science using the theoretical framework provided by philosopher of science Thomas Kuhn. Chapter Six explores the shift in planetary radar astronomy beginning in the 1970s that resulted from its financial and institutional relationship with NASA Big Science. Chapter Seven addresses the Magellan mission and its relation to the evolution of planetary radar astronomy from a ground-based to a space-based activity. 
Chapters Eight and Nine discuss the research carried out at ground-based facilities by this transformed planetary radar astronomy, as well as the upgrading of the Arecibo and Goldstone radars. A technical essay appended to this book provides an overview of planetary radar techniques, especially range-Doppler mapping.

  11. Science versus (?) Art: Human Perception of Other Worlds

    NASA Astrophysics Data System (ADS)

    Hartmann, William K.

    1998-09-01

    At the time of the Renaissance, science and art were mixed together as a way to understand the human relation to the larger cosmos. Leonardo da Vinci exemplifies this approach. In modern times, the two have become separate, and even antagonistic, "two cultures." Scientists have increasingly been satisfied to present quantitative measures of phenomena, without ever asking what the measures mean in human terms. Examples include the nature of the lunar surface, asteroid colors and brightness of the Io aurora, as will be discussed. However, in presenting the "big picture" to the public, and even to other working scientists, it is useful to revisit the Renaissance paradigm. Artists are increasingly working with scientists to translate the understanding of other worlds to the public, and this creates many opportunities for education projects in schools, and for careers in public outreach and science journalism.

  12. Earth Science Data Analysis in the Era of Big Data

    NASA Technical Reports Server (NTRS)

    Kuo, K.-S.; Clune, T. L.; Ramachandran, R.

    2014-01-01

    Anyone with even a cursory interest in information technology cannot help but recognize that "Big Data" is one of the most fashionable catchphrases of late. From accurate voice and facial recognition, language translation, and airfare prediction and comparison, to monitoring the real-time spread of flu, Big Data techniques have been applied to many seemingly intractable problems with spectacular successes. They appear to be a rewarding way to approach many currently unsolved problems. Few fields of research can claim a longer history with problems involving voluminous data than Earth science. The problems we are facing today with our Earth's future are more complex and carry potentially graver consequences than the examples given above. How has our climate changed? Besides natural variations, what is causing these changes? What are the processes involved and through what mechanisms are these connected? How will they impact life as we know it? In attempts to answer these questions, we have resorted to observations and numerical simulations with ever-finer resolutions, which continue to feed the "data deluge." Plausibly, many Earth scientists are wondering: How will Big Data technologies benefit Earth science research? As an example from the global water cycle, one subdomain among many in Earth science, how would these technologies accelerate the analysis of decades of global precipitation to ascertain the changes in its characteristics, to validate these changes in predictive climate models, and to infer the implications of these changes to ecosystems, economies, and public health? Earth science researchers need a viable way to harness the power of Big Data technologies to analyze large volumes and varieties of data with velocity and veracity.
Beyond providing speedy data analysis capabilities, Big Data technologies can also play a crucial, albeit indirect, role in boosting scientific productivity by facilitating effective collaboration within an analysis environment. To illustrate the effects of combining a Big Data technology with an effective means of collaboration, we relate the (fictitious) experience of an early-career Earth science researcher a few years beyond the present, interlaced and contrasted with reminiscences of its recent past (i.e., the present).

  13. Who Owns Educational Theory? Big Data, Algorithms and the Expert Power of Education Data Science

    ERIC Educational Resources Information Center

    Williamson, Ben

    2017-01-01

    "Education data science" is an emerging methodological field which possesses the algorithm-driven technologies required to generate insights and knowledge from educational big data. This article consists of an analysis of the Lytics Lab, Stanford University's laboratory for research and development in learning analytics, and the Center…

  14. Big Data: Philosophy, Emergence, Crowdledge, and Science Education

    ERIC Educational Resources Information Center

    dos Santos, Renato P.

    2015-01-01

    Big Data has already passed out of hype and is now a field that deserves serious academic investigation, and natural scientists should also become familiar with Analytics. On the other hand, there is little empirical evidence that any science taught in school is helping people to lead happier, more prosperous, or more politically well-informed lives. In…

  15. Discourse, Power, and Knowledge in the Management of "Big Science": The Production of Consensus in a Nuclear Fusion Research Laboratory.

    ERIC Educational Resources Information Center

    Kinsella, William J.

    1999-01-01

    Extends a Foucauldian view of power/knowledge to the archetypical knowledge-intensive organization, the scientific research laboratory. Describes the discursive production of power/knowledge at the "big science" laboratory conducting nuclear fusion research and illuminates a critical incident in which the fusion research…

  16. Big Images and Big Ideas!

    ERIC Educational Resources Information Center

    McCullagh, John; Greenwood, Julian

    2011-01-01

    In this digital age, is primary science being left behind? Computer microscopes provide opportunities to transform science lessons into highly exciting learning experiences and to shift enquiry and discovery back into the hands of the children. A class of 5- and 6-year-olds was just one group of children involved in the Digitally Resourced…

  17. A Guided Inquiry on Hubble Plots and the Big Bang

    ERIC Educational Resources Information Center

    Forringer, Ted

    2014-01-01

    In our science for non-science majors course "21st Century Physics," we investigate modern "Hubble plots" (plots of velocity versus distance for deep space objects) in order to discuss the Big Bang, dark matter, and dark energy. There are two potential challenges that our students face when encountering these topics for the…
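    The Hubble-plot exercise described above can be sketched in a few lines: on a plot of recession velocity versus distance, the slope is the Hubble constant. The data below are synthetic, and the assumed value H0 = 70 km/s/Mpc and the noise level are illustrative choices, not figures taken from the article:

```python
import numpy as np

# Build a toy "Hubble plot" -- recession velocity versus distance --
# and recover the Hubble constant from the slope.
rng = np.random.default_rng(42)

H0 = 70.0                                  # assumed true value, km/s/Mpc
distances = rng.uniform(10, 500, 50)       # galaxy distances in Mpc
velocities = H0 * distances + rng.normal(0, 300, 50)  # v = H0*d + scatter

# Least-squares fit of a line through the origin:
# H0_est = sum(v*d) / sum(d*d)
H0_est = np.sum(velocities * distances) / np.sum(distances**2)
print(f"Fitted Hubble constant: {H0_est:.1f} km/s/Mpc")
```

Because the fit is constrained through the origin, the recovered slope lands close to the assumed 70 km/s/Mpc despite the added scatter, which is the qualitative point students take from such plots.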

  18. On gestation periods of creative work: an interface of Doig's art and science.

    PubMed

    Erren, Thomas C

    2010-01-01

    This article is meant for, but not confined to, younger scientists who may have a series of ideas, hypotheses and projects--be they small or big--and might grapple with the objective to pursue and complete at least some, and preferably most, work in due course. And yet, the very generation, development and completion of numerous projects take gestation periods which can be long and painful. Importantly, this simple but important truth is valid for any creative process, be it in the sciences or in the arts. With reference to luminaries like Max Perutz and George Wald, more general interfaces between science and the arts are identified. With reference to how some of Peter Doig's paintings evolve over long times and to how John Eccles and Isaac Newton worked, extended gestation periods as a key similarity of creative work by both artists and scientists are exemplified and vindicated. It is concluded that long gestation periods of creative work should be viewed as the expectation rather than the exception. Importantly, the evolutionary and somewhat intuitive commitment to several projects over the same, and often extended, periods of time can be a recipe for revolutionary results fostered by the required variation and diversity of thinking and cross-fertilization of--seemingly--unrelated themes and fields.

  19. Documenting genomics: Applying archival theory to preserving the records of the Human Genome Project.

    PubMed

    Shaw, Jennifer

    2016-02-01

    The Human Genome Archive Project (HGAP) aimed to preserve the documentary heritage of the UK's contribution to the Human Genome Project (HGP) by using archival theory to develop a suitable methodology for capturing the results of modern, collaborative science. After assessing past projects and different archival theories, the HGAP used an approach based on the theory of documentation strategy to try to capture the records of a scientific project that had an influence beyond the purely scientific sphere. The HGAP was an archival survey that ran for two years. It led to ninety scientists being contacted and has, so far, led to six collections being deposited in the Wellcome Library, with additional collections being deposited in other UK repositories. In applying documentation strategy the HGAP was attempting to move away from traditional archival approaches to science, which have generally focused on retired Nobel Prize winners. It has been partially successful in this aim, having managed to secure collections from people who are not 'big names', but who made an important contribution to the HGP. However, the attempt to redress the gender imbalance in scientific collections and to improve record-keeping in scientific organisations has continued to be difficult to achieve. Copyright © 2015 The Author. Published by Elsevier Ltd. All rights reserved.

  20. Documenting genomics: Applying archival theory to preserving the records of the Human Genome Project

    PubMed Central

    Shaw, Jennifer

    2016-01-01

    The Human Genome Archive Project (HGAP) aimed to preserve the documentary heritage of the UK's contribution to the Human Genome Project (HGP) by using archival theory to develop a suitable methodology for capturing the results of modern, collaborative science. After assessing past projects and different archival theories, the HGAP used an approach based on the theory of documentation strategy to try to capture the records of a scientific project that had an influence beyond the purely scientific sphere. The HGAP was an archival survey that ran for two years. It led to ninety scientists being contacted and has, so far, led to six collections being deposited in the Wellcome Library, with additional collections being deposited in other UK repositories. In applying documentation strategy the HGAP was attempting to move away from traditional archival approaches to science, which have generally focused on retired Nobel Prize winners. It has been partially successful in this aim, having managed to secure collections from people who are not ‘big names’, but who made an important contribution to the HGP. However, the attempt to redress the gender imbalance in scientific collections and to improve record-keeping in scientific organisations has continued to be difficult to achieve. PMID:26388555

  1. Big data and visual analytics in anaesthesia and health care.

    PubMed

    Simpao, A F; Ahumada, L M; Rehman, M A

    2015-09-01

    Advances in computer technology, patient monitoring systems, and electronic health record systems have enabled rapid accumulation of patient data in electronic form (i.e. big data). Organizations such as the Anesthesia Quality Institute and Multicenter Perioperative Outcomes Group have spearheaded large-scale efforts to collect anaesthesia big data for outcomes research and quality improvement. Analytics--the systematic use of data combined with quantitative and qualitative analysis to make decisions--can be applied to big data for quality and performance improvements, such as predictive risk assessment, clinical decision support, and resource management. Visual analytics is the science of analytical reasoning facilitated by interactive visual interfaces, and it can facilitate performance of cognitive activities involving big data. Ongoing integration of big data and analytics within anaesthesia and health care will increase demand for anaesthesia professionals who are well versed in both the medical and the information sciences. © The Author 2015. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  2. The BIG Data Center: from deposition to integration to translation.

    PubMed

    2017-01-04

    Biological data are generated at unprecedentedly exponential rates, posing considerable challenges in big data deposition, integration and translation. The BIG Data Center, established at Beijing Institute of Genomics (BIG), Chinese Academy of Sciences, provides a suite of database resources, including (i) Genome Sequence Archive, a data repository specialized for archiving raw sequence reads, (ii) Gene Expression Nebulas, a data portal of gene expression profiles based entirely on RNA-Seq data, (iii) Genome Variation Map, a comprehensive collection of genome variations for featured species, (iv) Genome Warehouse, a centralized resource housing genome-scale data with particular focus on economically important animals and plants, (v) Methylation Bank, an integrated database of whole-genome single-base resolution methylomes and (vi) Science Wikis, a central access point for biological wikis developed for community annotations. The BIG Data Center is dedicated to constructing and maintaining biological databases through big data integration and value-added curation, conducting basic research to translate big data into big knowledge and providing freely open access to a variety of data resources in support of worldwide research activities in both academia and industry. All of these resources are publicly available and can be found at http://bigd.big.ac.cn. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  3. Wide Strip Casting Technology of Magnesium Alloys

    NASA Astrophysics Data System (ADS)

    Park, W.-J.; Kim, J. J.; Kim, I. J.; Choo, D.

    Extensive investigations relating to the production of high performance and low cost magnesium sheet by strip casting have been performed for the application to automotive parts and electronic devices. Research on magnesium sheet production technology was started in 2004 by the Research Institute of Industrial Science and Technology (RIST) with the support of Pohang Iron and Steel Company (POSCO). POSCO has completed the world's first plant to manufacture magnesium coil. A further big project, to develop wide strip casting technology for automotive applications of magnesium sheet, was subsequently launched.

  4. AirMSPI PODEX BigSur Terrain Images

    Atmospheric Science Data Center

    2013-12-13

    Browse images from the PODEX 2013 campaign: Big Sur target (Big Sur, California), 02/03/2013, terrain-projected. For more information, see the Data Product Specifications (DPS).

  5. Riding the Hype Wave: Evaluating new AI Techniques for their Applicability in Earth Science

    NASA Astrophysics Data System (ADS)

    Ramachandran, R.; Zhang, J.; Maskey, M.; Lee, T. J.

    2016-12-01

    Every few years a new technology rides the hype wave generated by the computer science community. Converts to this new technology who surface from both the science community and the informatics community promulgate that it can radically improve or even change the existing scientific process. Recent examples of new technology following in the footsteps of "big data" now include deep learning algorithms and knowledge graphs. Deep learning algorithms mimic the human brain and process information through multiple stages of transformation and representation. These algorithms are able to learn complex functions that map pixels directly to outputs without relying on human-crafted features and solve some of the complex classification problems that exist in science. Similarly, knowledge graphs aggregate information around defined topics that enable users to resolve their query without having to navigate and assemble information manually. Knowledge graphs could potentially be used in scientific research to assist in hypothesis formulation, testing, and review. The challenge for the Earth science research community is to evaluate these new technologies by asking the right questions and considering what-if scenarios. What is this new technology enabling/providing that is innovative and different? Can one justify the adoption costs with respect to the research returns? Since nothing comes for free, utilizing a new technology entails adoption costs that may outweigh the benefits. Furthermore, these technologies may require significant computing infrastructure in order to be utilized effectively. Results from two different projects will be presented along with lessons learned from testing these technologies. The first project primarily evaluates deep learning techniques for different applications of image retrieval within Earth science while the second project builds a prototype knowledge graph constructed for Hurricane science.
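    As a minimal illustration of the knowledge-graph idea discussed above (the entities, relations, and helper functions here are invented for the sketch, not taken from the project's prototype): facts stored as subject-predicate-object triples can be traversed to resolve a query, rather than a user having to navigate and assemble the information manually.

```python
# A toy knowledge graph as a list of (subject, predicate, object) triples.
triples = [
    ("Hurricane Katrina", "is_a", "tropical cyclone"),
    ("Hurricane Katrina", "occurred_in", "2005"),
    ("Hurricane Katrina", "affected", "Gulf Coast"),
    ("tropical cyclone", "studied_by", "hurricane science"),
]

def query(subject, predicate):
    """Return all objects linked to `subject` by `predicate`."""
    return [o for s, p, o in triples if s == subject and p == predicate]

def related_fields(entity):
    """Follow is_a links one hop, then collect studied_by facts."""
    return [f for cls in query(entity, "is_a")
              for f in query(cls, "studied_by")]

print(related_fields("Hurricane Katrina"))  # -> ['hurricane science']
```

A production system would use an RDF store and a query language such as SPARQL instead of list scans, but the aggregate-then-traverse pattern is the same.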

  6. The concept lens diagram: a new mechanism for presenting biochemistry content in terms of "big ideas".

    PubMed

    Rowland, Susan L; Smith, Christopher A; Gillam, Elizabeth M A; Wright, Tony

    2011-07-01

    A strong, recent movement in tertiary education is the development of conceptual, or "big idea" teaching. The emphasis in course design is now on promoting key understandings, core competencies, and an understanding of connections between different fields. In biochemistry teaching, this radical shift from the content-based tradition is being driven by the "omics" information explosion; we can no longer teach all the information we have available. Biochemistry is a core, enabling discipline for much of modern scientific research, and biochemistry teaching is in urgent need of a method for delivery of conceptual frameworks. In this project, we aimed to define the key concepts in biochemistry. We find that the key concepts we defined map well onto the core science concepts recommended by the Vision and Change project. We developed a new method to present biochemistry through the lenses of these concepts. This new method challenged the way we thought about biochemistry as teachers. It also stimulated the majority of the students to think more deeply about biochemistry and to make links between biochemistry and material in other courses. This method is applicable to the full spectrum of content usually taught in biochemistry. Copyright © 2011 Wiley Periodicals, Inc.

  7. KnowEnG: a knowledge engine for genomics.

    PubMed

    Sinha, Saurabh; Song, Jun; Weinshilboum, Richard; Jongeneel, Victor; Han, Jiawei

    2015-11-01

    We describe here the vision, motivations, and research plans of the National Institutes of Health Center for Excellence in Big Data Computing at the University of Illinois, Urbana-Champaign. The Center is organized around the construction of "Knowledge Engine for Genomics" (KnowEnG), an E-science framework for genomics where biomedical scientists will have access to powerful methods of data mining, network mining, and machine learning to extract knowledge out of genomics data. The scientist will come to KnowEnG with their own data sets in the form of spreadsheets and ask KnowEnG to analyze those data sets in the light of a massive knowledge base of community data sets called the "Knowledge Network" that will be at the heart of the system. The Center is undertaking discovery projects aimed at testing the utility of KnowEnG for transforming big data to knowledge. These projects span a broad range of biological enquiry, from pharmacogenomics (in collaboration with Mayo Clinic) to transcriptomics of human behavior. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  8. Communicating the Nature of Science through "The Big Bang Theory": Evidence from a Focus Group Study

    ERIC Educational Resources Information Center

    Li, Rashel; Orthia, Lindy A.

    2016-01-01

    In this paper, we discuss a little-studied means of communicating about or teaching the nature of science (NOS)--through fiction television. We report some results of focus group research which suggest that the American sitcom "The Big Bang Theory" (2007-present), whose main characters are mostly working scientists, has influenced…

  9. A Big Bang Lab

    ERIC Educational Resources Information Center

    Scheider, Walter

    2005-01-01

    The February 2005 issue of The Science Teacher (TST) reminded everyone that by learning how scientists study stars, students gain an understanding of how science measures things that can not be set up in lab, either because they are too big, too far away, or happened in a very distant past. The authors of "How Far are the Stars?" show how the…

  10. Where do the Field Plots Belong? A Multiple-Constraint Sampling Design for the BigFoot Project

    NASA Astrophysics Data System (ADS)

    Kennedy, R. E.; Cohen, W. B.; Kirschbaum, A. A.; Gower, S. T.

    2002-12-01

    A key component of a MODIS validation project is effective characterization of biophysical measures on the ground. Fine-grain ecological field measurements must be placed strategically to capture variability at the scale of the MODIS imagery. Here we describe the BigFoot project's revised sampling scheme, designed to simultaneously meet three important goals: capture landscape variability, avoid spatial autocorrelation between field plots, and minimize time and expense of field sampling. A stochastic process places plots in clumped constellations to reduce field sampling costs, while minimizing spatial autocorrelation. This stochastic process is repeated, creating several hundred realizations of plot constellations. Each constellation is scored and ranked according to its ability to match landscape variability in several Landsat-based spectral indices, and its ability to minimize field sampling costs. We show how this approach has recently been used to place sample plots at the BigFoot project's two newest study areas, one in a desert system and one in a tundra system. We also contrast this sampling approach to that already used at the four prior BigFoot project sites.
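    A minimal sketch of the multiple-constraint idea described above, with a toy landscape and invented scoring weights (none of the numbers or names come from the BigFoot project): many random clumped plot constellations are generated, each realization is scored on how well it captures landscape variability and on a crude travel-cost proxy, and the best-ranked one is kept.

```python
import numpy as np

rng = np.random.default_rng(0)
landscape = rng.normal(0.5, 0.15, (100, 100))   # toy spectral-index image

def make_constellation(n_clumps=4, plots_per_clump=5):
    # Clumped plot placement: random clump centers, small offsets within each.
    centers = rng.integers(10, 90, (n_clumps, 2))
    offsets = rng.integers(-5, 6, (n_clumps, plots_per_clump, 2))
    return (centers[:, None, :] + offsets).reshape(-1, 2)

def score(plots):
    vals = landscape[plots[:, 0], plots[:, 1]]
    # (a) capture landscape variability: plot-level spread should match image spread
    variability_gap = abs(vals.std() - landscape.std())
    # (b) minimize sampling effort: crude travel cost between successive plots
    cost = np.linalg.norm(np.diff(plots, axis=0), axis=1).sum()
    return variability_gap + 0.001 * cost    # illustrative weighting

# Repeat the stochastic placement, then rank the realizations.
candidates = [make_constellation() for _ in range(300)]
best = min(candidates, key=score)
print(f"best score: {score(best):.3f}, plots: {len(best)}")
```

The real design additionally penalizes spatial autocorrelation between plots and scores against several Landsat-based indices at once; both would enter `score` as further weighted terms.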

  11. Water Exploration: An Online High School Water Resource Education Program

    NASA Astrophysics Data System (ADS)

    Ellins, K. K.; McCall, L. R.; Amos, S.; McGowan, R. F.; Mote, A.; Negrito, K.; Paloski, B.; Ryan, C.; Cameron, B.

    2010-12-01

    The Institute for Geophysics at The University of Texas at Austin and 4empowerment.com, a Texas-based for-profit educational enterprise, teamed up with the Texas Water Development Board to develop and implement a Web-based water resources education program for Texas high school students. The program, Water Exploration, uses a project-based learning approach called the Legacy Cycle model to permit students to conduct research and build an understanding about water science and critical water-related issues, using the Internet and computer technology. The three Legacy Cycle modules in the Water Exploration curriculum are: Water Basics, Water-Earth Dynamics and People Need Water. Within each Legacy Cycle there are three different challenges, or instructional modules, laid out as projects with clearly stated goals for students to carry out. Each challenge addresses themes that map to the water-related “Big Ideas” and supporting concepts found in the new Earth Science Literacy Principles: The Big Ideas and Supporting Concepts of Earth Science. As students work through a challenge they follow a series of steps, each of which is associated (i.e., linked online) with a manageable number of corresponding, high quality, research-based learning activities and Internet resources, including scholarly articles, cyber tools, and visualizations intended to enhance understanding of the concepts presented. The culmination of each challenge is a set of “Go Public” products that are the students’ answers to the challenge and which serve as the final assessment for the challenge. The “Go Public” products are posted to a collaborative workspace on the Internet as the “legacy” of the students’ work, thereby allowing subsequent groups of students who take the challenge to add new products. Twenty-two science educators have been trained on the implementation of the Water Exploration curriculum.
A graduate student pursuing a master’s degree in science education through The University of Texas’ UTEACH program is conducting research to track the teachers’ implementation of Water Exploration and assess their comfort with cyber-education through classroom observations, students and teacher surveys, and evaluation of students’ “Go Public” products.

  12. Big Data and Perioperative Nursing.

    PubMed

    Westra, Bonnie L; Peterson, Jessica J

    2016-10-01

    Big data are large volumes of digital data that can be collected from disparate sources and are challenging to analyze. These data are often described with the five "Vs": volume, velocity, variety, veracity, and value. Perioperative nurses contribute to big data through documentation in the electronic health record during routine surgical care, and these data have implications for clinical decision making, administrative decisions, quality improvement, and big data science. This article explores methods to improve the quality of perioperative nursing data and provides examples of how these data can be combined with broader nursing data for quality improvement. We also discuss a national action plan for nursing knowledge and big data science and how perioperative nurses can engage in collaborative actions to transform health care. Standardized perioperative nursing data have the potential to affect care far beyond the original patient. Copyright © 2016 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Iain; Regnier, Cindy

    Science at the Theater: Berkeley Lab scientists reveal the latest research on how to reduce your carbon footprint at home, work, and when you shop. Learn how even small choices can have a big impact. Iain Walker's research focuses on optimizing the energy use and comfort of buildings. He's a staff scientist in the Energy Performance of Buildings Group, which is part of Berkeley Lab's Environmental Energy Technologies Division. He's also executive editor of Home Energy Magazine. Cindy Regnier is a Project Manager in the Environmental Energy Technologies Division at Berkeley Lab. She has over 13 years of mechanical engineering design experience, with a focus on low-energy buildings. Her projects have included several LEED Platinum buildings and the design of a 200,000 sf carbon neutral, net-zero energy science museum in San Francisco. Eric Masanet is Acting Deputy Leader of the International Energy Studies Group at Berkeley Lab. His research focuses on life-cycle assessments and energy efficiency analysis. He holds a joint research appointment in the Institute of Transportation Studies at UC Berkeley.

  14. ARSENIC REMOVAL FROM DRINKING WATER BY IRON REMOVAL USEPA DEMONSTRATION PROJECT AT BIG SAUK LAKE MOBILE HOME PARK IN SAUK CENTRE, MN. SIX MONTH EVALUATION REPORT

    EPA Science Inventory

    This report documents the activities performed and the results obtained from the first six months of the arsenic removal treatment technology demonstration project at the Big Sauk Lake Mobile Home Park (BSLMHP) in Sauk Centre, MN. The objectives of the project are to evaluate the...

  15. Annual Research Review: Discovery science strategies in studies of the pathophysiology of child and adolescent psychiatric disorders--promises and limitations.

    PubMed

    Zhao, Yihong; Castellanos, F Xavier

    2016-03-01

    Psychiatric science remains descriptive, with a categorical nosology intended to enhance interobserver reliability. Increased awareness of the mismatch between categorical classifications and the complexity of biological systems drives the search for novel frameworks including discovery science in Big Data. In this review, we provide an overview of incipient approaches, primarily focused on classically categorical diagnoses such as schizophrenia (SZ), autism spectrum disorder (ASD), and attention-deficit/hyperactivity disorder (ADHD), but also reference convincing, if focal, advances in cancer biology, to describe the challenges of Big Data and discovery science, and outline approaches being formulated to overcome existing obstacles. A paradigm shift from categorical diagnoses to a domain/structure-based nosology and from linear causal chains to complex causal network models of brain-behavior relationship is ongoing. This (r)evolution involves appreciating the complexity, dimensionality, and heterogeneity of neuropsychiatric data collected from multiple sources ('broad' data) along with data obtained at multiple levels of analysis, ranging from genes to molecules, cells, circuits, and behaviors ('deep' data). Both of these types of Big Data landscapes require the use and development of robust and powerful informatics and statistical approaches. Thus, we describe Big Data analysis pipelines and the promise and potential limitations in using Big Data approaches to study psychiatric disorders. We highlight key resources available for psychopathological studies and call for the application and development of Big Data approaches to dissect the causes and mechanisms of neuropsychiatric disorders and identify corresponding biomarkers for early diagnosis. © 2016 Association for Child and Adolescent Mental Health.

  16. Annual Research Review: Discovery science strategies in studies of the pathophysiology of child and adolescent psychiatric disorders: promises and limitations

    PubMed Central

    Zhao, Yihong; Castellanos, F. Xavier

    2015-01-01

    Background and Scope: Psychiatric science remains descriptive, with a categorical nosology intended to enhance inter-observer reliability. Increased awareness of the mismatch between categorical classifications and the complexity of biological systems drives the search for novel frameworks including discovery science in Big Data. In this review, we provide an overview of incipient approaches, primarily focused on classically categorical diagnoses such as schizophrenia (SZ), autism spectrum disorder (ASD) and attention-deficit/hyperactivity disorder (ADHD), but also reference convincing, if focal, advances in cancer biology, to describe the challenges of Big Data and discovery science, and outline approaches being formulated to overcome existing obstacles. Findings: A paradigm shift from categorical diagnoses to a domain/structure-based nosology and from linear causal chains to complex causal network models of brain-behavior relationship is ongoing. This (r)evolution involves appreciating the complexity, dimensionality and heterogeneity of neuropsychiatric data collected from multiple sources (“broad” data) along with data obtained at multiple levels of analysis, ranging from genes to molecules, cells, circuits and behaviors (“deep” data). Both of these types of Big Data landscapes require the use and development of robust and powerful informatics and statistical approaches. Thus, we describe Big Data analysis pipelines and the promise and potential limitations in using Big Data approaches to study psychiatric disorders. Conclusion: We highlight key resources available for psychopathological studies and call for the application and development of Big Data approaches to dissect the causes and mechanisms of neuropsychiatric disorders and identify corresponding biomarkers for early diagnosis. PMID:26732133

  17. A Big Data Analytics Methodology Program in the Health Sector

    ERIC Educational Resources Information Center

    Lawler, James; Joseph, Anthony; Howell-Barber, H.

    2016-01-01

    The benefits of Big Data Analytics are cited frequently in the literature. However, the difficulties of implementing Big Data Analytics can limit the number of organizational projects. In this study, the authors evaluate business, procedural and technical factors in the implementation of Big Data Analytics, applying a methodology program. Focusing…

  18. Integrating Climate Change Science and Sustainability in Environmental Science, Sociology, Philosophy and Business Courses.

    NASA Astrophysics Data System (ADS)

    Boudrias, M. A.; Cantzler, J.; Croom, S.; Huston, C.; Woods, M.

    2015-12-01

    Courses on sustainability can be taught from multiple perspectives, with some focused on specific areas (environmental, socio-cultural, economic, ethics) and others taking a more integrated approach across areas of sustainability and academic disciplines. In conjunction with the Climate Change Education Program efforts to enhance climate change literacy, innovative approaches, resources and communication strategies developed by Climate Education Partners were used in two distinct ways to integrate climate change science and impacts into undergraduate and graduate level courses. At the graduate level, the first lecture in the MBA program in Sustainable Supply Chain Management is entirely dedicated to climate change science, local and global impacts, and discussions about key messages to communicate to the business community. Basic science concepts are integrated with discussions about mitigation and adaptation focused on business leaders. The concepts learned are then applied to the students' semester-long business plan project. At the undergraduate level, a new model of comprehensive integration across disciplines was implemented in Spring 2015 across three courses on sustainability, each with a specific lens: Natural Science, Sociology and Philosophy. All three courses used climate change as the 'big picture' framing concept and had similar learning objectives, creating a framework in which lens-specific topics, focusing on depth in a discipline, were balanced with integrated exercises across disciplines providing breadth and possibilities for integration. The comprehensive integration project was the creation of a climate action plan for the university, with each team focused on a key area of action (water, energy, transportation, etc.) and each team built with at least one member from each class, ensuring a natural science, sociological and philosophical perspective. The final project was presented orally to all three classes, and an integrated paper included all three perspectives. The best projects are being compiled so they can be shared with the University of San Diego's planning committee.

  19. BIGCHEM: Challenges and Opportunities for Big Data Analysis in Chemistry.

    PubMed

    Tetko, Igor V; Engkvist, Ola; Koch, Uwe; Reymond, Jean-Louis; Chen, Hongming

    2016-12-01

    The increasing volume of biomedical data in chemistry and life sciences requires the development of new methods and approaches for their handling. Here, we briefly discuss some challenges and opportunities of this fast-growing area of research with a focus on those to be addressed within the BIGCHEM project. The article starts with a brief description of some available resources for "Big Data" in chemistry and a discussion of the importance of data quality. We then discuss challenges with visualization of millions of compounds by combining chemical and biological data, the expectations from mining the "Big Data" using advanced machine-learning methods, and their applications in polypharmacology prediction and target de-convolution in phenotypic screening. We show that the efficient exploration of billions of molecules requires the development of smart strategies. We also address the issue of secure information sharing without disclosing chemical structures, which is critical to enable bi-party or multi-party data sharing. Data sharing is important in the context of the recent trend of "open innovation" in the pharmaceutical industry, which has led not only to more information sharing between academics and the pharma industry but also to so-called "precompetitive" collaboration between pharma companies. At the end we highlight the importance of education in "Big Data" for further progress of this area. © 2016 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  20. pvsR: An Open Source Interface to Big Data on the American Political Sphere.

    PubMed

    Matter, Ulrich; Stutzer, Alois

    2015-01-01

    Digital data from the political sphere is abundant, omnipresent, and more and more directly accessible through the Internet. Project Vote Smart (PVS) is a prominent example of this big public data and covers various aspects of U.S. politics in astonishing detail. Despite the vast potential of PVS' data for political science, economics, and sociology, it is hardly used in empirical research. The systematic compilation of semi-structured data can be complicated and time consuming as the data format is not designed for conventional scientific research. This paper presents a new tool that makes the data easily accessible to a broad scientific community. We provide the software called pvsR as an add-on to the R programming environment for statistical computing. This open source interface (OSI) serves as a direct link between a statistical analysis and the large PVS database. The free and open code is expected to substantially reduce the cost of research with PVS' new big public data in a vast variety of possible applications. We discuss its advantages vis-à-vis traditional methods of data generation as well as already existing interfaces. The validity of the library is documented based on an illustration involving female representation in local politics. In addition, pvsR facilitates the replication of research with PVS data at low costs, including the pre-processing of data. Similar OSIs are recommended for other big public databases.
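    pvsR itself is an R package, but the open-source-interface pattern the record describes can be sketched in Python: compose a query URL for the web API, then flatten the semi-structured response into analysis-ready columns. The endpoint, method name, and field names below are invented for illustration and are not PVS's actual schema.

```python
from urllib.parse import urlencode

API_BASE = "http://api.example.org/"  # placeholder, not the real PVS endpoint


def build_query_url(method, **params):
    """Compose a REST query URL for one hypothetical API method."""
    return API_BASE + method + "?" + urlencode(sorted(params.items()))


def flatten(record, prefix=""):
    """Flatten a nested, semi-structured record into flat columns
    suitable for loading into a statistical data frame."""
    flat = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, name + "."))
        else:
            flat[name] = value
    return flat


url = build_query_url("Officials.getByState", state="CA", office="governor")
row = flatten({"candidate": {"name": "Jane Doe", "office": "Governor"},
               "year": 2014})
```

    The pre-processing step (`flatten`) is where most of the cost savings described in the abstract would come from: the researcher receives rectangular data rather than hand-parsing nested responses.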

  1. BIGCHEM: Challenges and Opportunities for Big Data Analysis in Chemistry

    PubMed Central

    Engkvist, Ola; Koch, Uwe; Reymond, Jean‐Louis; Chen, Hongming

    2016-01-01

    Abstract The increasing volume of biomedical data in chemistry and life sciences requires the development of new methods and approaches for their handling. Here, we briefly discuss some challenges and opportunities of this fast-growing area of research with a focus on those to be addressed within the BIGCHEM project. The article starts with a brief description of some available resources for “Big Data” in chemistry and a discussion of the importance of data quality. We then discuss challenges with visualization of millions of compounds by combining chemical and biological data, the expectations from mining the “Big Data” using advanced machine‐learning methods, and their applications in polypharmacology prediction and target de‐convolution in phenotypic screening. We show that the efficient exploration of billions of molecules requires the development of smart strategies. We also address the issue of secure information sharing without disclosing chemical structures, which is critical to enable bi‐party or multi‐party data sharing. Data sharing is important in the context of the recent trend of “open innovation” in the pharmaceutical industry, which has led not only to more information sharing between academics and the pharma industry but also to so-called “precompetitive” collaboration between pharma companies. At the end we highlight the importance of education in “Big Data” for further progress of this area. PMID:27464907

  2. Higher-order Traits and Happiness in the Workplace: The Importance of Occupational Project Scale for the Evaluation of Characteristic Adaptations.

    PubMed

    Buruk, Pelin; Şimşek, Ömer Faruk; Kocayörük, Ercan

    2017-01-01

    This study attempts to explain the relationship between job satisfaction and the Big Two, Stability and Plasticity, which are the higher-order traits of the Big Five. Occupational Project, a narrative construct, was considered a mediator variable in this relationship. Occupational Project consists of affective and cognitive evaluations of an individual's work life as a project in terms of the completed (past), the ongoing (present) and the prospective (future) parts. The survey method was applied to a sample of 253 participants. The results supported the proposed model, in which Occupational Project mediated the relationship between the Big Two and both job satisfaction and affect in the workplace. Discussion is focused on applying Occupational Project as a practical tool for management. Consideration of an employee's Occupational Project could provide management with a means to question, understand, intervene with and redefine the narrative quality of his/her occupational project that influences job satisfaction.

  3. 76 FR 6775 - Combined Notice of Filings #1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-08

    ... Services Corporation; Fowler Ridge II Wind Farm LLC. Description: Notice of Non-Material Change in Status... Open Access Transmission Tariff, a Small Generator Interconnection Agreement Facilities Maintenance..., Inc.; Atlantic Renewable Projects II LLC; Barton Windpower LLC; Big Horn Wind Project LLC; Big Horn II...

  4. "Big" versus "little" science: comparative analysis of program projects and individual research grants.

    PubMed

    Baumeister, A A; Bacharach, V R; Baumeister, A A

    1997-11-01

    Controversy about the amount and nature of funding for mental retardation research has persisted since the creation of NICHD. An issue that has aroused considerable debate, within the mental retardation research community as well as beyond, is the distribution of funds between large group research grants, such as the program project (PO1), and the individual grant (RO1). Currently, within the Mental Retardation and Developmental Disabilities Branch, more money is allocated to the PO1 mechanism than the RO1. We compared the two types of grants, focusing on success rates, productivity, costs, impact, publication practices, and outcomes, and conducted a comparative analysis of biomedical and behavioral research. Other related issues were considered, including review processes and cost-effectiveness.

  5. Small Particles, Big Science: The International LBNF/DUNE Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Neutrinos are the most abundant matter particles in the universe, yet very little is known about them. This animation shows how the Department of Energy’s Long-Baseline Neutrino Facility will power the Deep Underground Neutrino Experiment to help scientists understand the role neutrinos play in the universe. DUNE will also look for the birth of neutron stars and black holes by catching neutrinos from exploding stars. More than 800 scientists from 150 institutions in 27 countries are working on the LBNF/DUNE project, including Armenia, Belgium, Brazil, Bulgaria, Canada, Colombia, Czech Republic, Finland, France, Greece, India, Iran, Italy, Japan, Madagascar, Mexico, Netherlands, Peru, Poland, Romania, Russia, Spain, Switzerland, Turkey, Ukraine, United Kingdom, USA.

  6. Big Data: You Are Adding to . . . and Using It

    ERIC Educational Resources Information Center

    Makela, Carole J.

    2016-01-01

    "Big data" prompts a whole lexicon of terms--data flow; analytics; data mining; data science; smart you name it (cars, houses, cities, wearables, etc.); algorithms; learning analytics; predictive analytics; data aggregation; data dashboards; digital tracks; and big data brokers. New terms are being coined frequently. Are we paying…

  7. BigBWA: approaching the Burrows-Wheeler aligner to Big Data technologies.

    PubMed

    Abuín, José M; Pichel, Juan C; Pena, Tomás F; Amigo, Jorge

    2015-12-15

    BigBWA is a new tool that uses the Big Data technology Hadoop to boost the performance of the Burrows-Wheeler aligner (BWA). Important reductions in the execution times were observed when using this tool. In addition, BigBWA is fault tolerant and it does not require any modification of the original BWA source code. BigBWA is available at the project GitHub repository: https://github.com/citiususc/BigBWA. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
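    The record does not show BigBWA's invocation, but the pattern it relies on, partitioning the input reads and running an unmodified aligner on each chunk in parallel, can be sketched as follows. `toy_align` is a stand-in for calling the real BWA binary, and the chunking scheme is an assumption for illustration only.

```python
from concurrent.futures import ThreadPoolExecutor


def split_reads(fastq_records, n_chunks):
    """Partition reads into roughly equal chunks (the 'map' inputs)."""
    chunks = [[] for _ in range(n_chunks)]
    for i, rec in enumerate(fastq_records):
        chunks[i % n_chunks].append(rec)
    return chunks


def toy_align(chunk):
    """Stand-in for one per-chunk aligner invocation; in BigBWA this would
    be the unmodified BWA binary run by a Hadoop task."""
    return [f"{read_id}\taligned" for read_id, _seq in chunk]


def parallel_align(fastq_records, n_workers=4):
    chunks = split_reads(fastq_records, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = pool.map(toy_align, chunks)
    # 'Reduce' step: concatenate the per-chunk alignment outputs.
    return [line for part in results for line in part]


reads = [(f"read{i}", "ACGT" * 25) for i in range(8)]
alignments = parallel_align(reads)
```

    Because each chunk is aligned independently, a failed task can simply be re-run on its chunk, which is the fault tolerance the abstract mentions, and the aligner's source code never needs to change.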

  8. Data issues in the life sciences.

    PubMed

    Thessen, Anne E; Patterson, David J

    2011-01-01

    We review technical and sociological issues facing the Life Sciences as they transform into more data-centric disciplines - the "Big New Biology". Three major challenges are: 1) lack of comprehensive standards; 2) lack of incentives for individual scientists to share data; 3) lack of appropriate infrastructure and support. Technological advances with standards, bandwidth, distributed computing, exemplar successes, and a strong presence in the emerging world of Linked Open Data are sufficient to conclude that technical issues will be overcome in the foreseeable future. While the community is motivated to have a shared open infrastructure and data pool, and is pressured by funding agencies to move in this direction, the sociological issues determine progress. Major sociological issues include our lack of understanding of the heterogeneous data cultures within the Life Sciences, and the impediments to progress include a lack of incentives to build appropriate infrastructures into projects and institutions or to encourage scientists to make data openly available.

  9. Data issues in the life sciences

    PubMed Central

    Thessen, Anne E.; Patterson, David J.

    2011-01-01

    Abstract We review technical and sociological issues facing the Life Sciences as they transform into more data-centric disciplines - the “Big New Biology”. Three major challenges are: 1) lack of comprehensive standards; 2) lack of incentives for individual scientists to share data; 3) lack of appropriate infrastructure and support. Technological advances with standards, bandwidth, distributed computing, exemplar successes, and a strong presence in the emerging world of Linked Open Data are sufficient to conclude that technical issues will be overcome in the foreseeable future. While the community is motivated to have a shared open infrastructure and data pool, and is pressured by funding agencies to move in this direction, the sociological issues determine progress. Major sociological issues include our lack of understanding of the heterogeneous data cultures within the Life Sciences, and the impediments to progress include a lack of incentives to build appropriate infrastructures into projects and institutions or to encourage scientists to make data openly available. PMID:22207805

  10. Big data and clinicians: a review on the state of the science.

    PubMed

    Wang, Weiqi; Krishnan, Eswar

    2014-01-17

    In the past few decades, medically related data collection saw a huge increase, referred to as big data. These huge datasets bring challenges in storage, processing, and analysis. In clinical medicine, big data is expected to play an important role in identifying causality of patient symptoms, in predicting hazards of disease incidence or recurrence, and in improving primary-care quality. The objective of this review was to provide an overview of the features of clinical big data, describe a few commonly employed computational algorithms, statistical methods, and software toolkits for data manipulation and analysis, and discuss the challenges and limitations in this realm. We conducted a literature review to identify studies on big data in medicine, especially clinical medicine. We used different combinations of keywords to search PubMed, Science Direct, Web of Knowledge, and Google Scholar for literature of interest from the past 10 years. This paper reviewed studies that analyzed clinical big data and discussed issues related to storage and analysis of this type of data. Big data is becoming a common feature of biological and clinical studies. Researchers who use clinical big data face multiple challenges, and the data itself has limitations. It is imperative that methodologies for data analysis keep pace with our ability to collect and store data.

  11. Arsenic Removal from Drinking Water by Iron Removal - U.S. EPA Demonstration Project at Big Sauk Lake Mobile Home Park in Sauk Centre, MN Final Performance Evaluation Report

    EPA Science Inventory

    This report documents the activities performed and the results obtained from the one-year arsenic removal treatment technology demonstration project at the Big Sauk Lake Mobile Home Park (BSLMHP) in Sauk Centre, MN. The objectives of the project are to evaluate (1) the effective...

  12. [Big data from clinical routine].

    PubMed

    Mansmann, U

    2018-04-01

    Over the past 100 years, evidence-based medicine has undergone several fundamental changes. Through the field of physiology, medical doctors were introduced to the natural sciences. Since the late 1940s, randomized and epidemiological studies have come to provide the evidence for medical practice, which led to the emergence of clinical epidemiology as a new field in the medical sciences. Within the past few years, big data has become the driving force behind the vision of a comprehensive set of health-related data which tracks individual healthcare histories and, consequently, those of large populations. The aim of this article is to discuss the implications of data-driven medicine and to examine how it can find a place within clinical care. The EU-wide discussion on the development of data-driven medicine is presented. The following features and suggested actions were identified: harmonizing data formats, data processing and analysis, data exchange, related legal frameworks and ethical challenges. For the effective development of data-driven medicine, pilot projects need to be conducted to allow for open and transparent discussion on the advantages and challenges. The Federal Ministry of Education and Research ("Bundesministerium für Bildung und Forschung," BMBF) Arthromark project is an important example. Another example is the Medical Informatics Initiative of the BMBF. The digital revolution affects clinical practice. Data can be generated and stored in quantities that are almost unimaginable. It is possible to take advantage of this for the development of a learning healthcare system if the principles of medical evidence generation are integrated into innovative IT infrastructures and processes.

  13. BIG SIOUX RIVER DRAINAGE BASIN INFORMATION OUTREACH PROJECT

    EPA Science Inventory

    The main goal of the proposed project is to raise public awareness about the importance of protecting the Big Sioux River drainage basin. To accomplish this goal, the City and its partnering agencies are seeking to expand and improve public accessibility to a wide variety of r...

  14. Teleconferencing Technology Facilitates Collaboration. Spotlight Feature

    ERIC Educational Resources Information Center

    Dopke-Wilson, MariRae

    2006-01-01

    Big, comprehensive projects involving multiple teachers, components, and electronic media can daunt the most ambitious educator. But for Library Media Specialist Bonnie French, big projects are no problem! A pioneer SOS database contributor, Bonnie can be aptly dubbed the "queen of collaboration." In this article, the author discusses how Bonnie…

  15. Scientific Datasets: Discovery and Aggregation for Semantic Interpretation.

    NASA Astrophysics Data System (ADS)

    Lopez, L. A.; Scott, S.; Khalsa, S. J. S.; Duerr, R.

    2015-12-01

    One of the biggest challenges that interdisciplinary researchers face is finding suitable datasets in order to advance their science; this problem remains consistent across multiple disciplines. A surprising number of scientists, when asked what tool they use for data discovery, reply "Google", which is an acceptable solution in some cases, but not even Google can find (or cares to compile) all the data that is relevant for science, and particularly the geosciences. If a dataset is not discoverable through a well-known search provider, it will remain dark data to the scientific world. For the past year, BCube, an EarthCube Building Block project, has been developing, testing and deploying a technology stack capable of data discovery at web scale using the ultimate dataset: the Internet. This stack has two principal components, a web-scale crawling infrastructure and a semantic aggregator. The web crawler is a modified version of Apache Nutch (the originator of Hadoop and other big data technologies) that has been improved and tailored for data and data-service discovery. The second component is semantic aggregation, carried out by a Python-based workflow that extracts valuable metadata and stores it in the form of triples through the use of semantic technologies. While implementing the BCube stack we have run into several challenges, such as a) scaling the project to cover big portions of the Internet at a reasonable cost, b) making sense of very diverse and non-homogeneous data, and c) extracting facts about these datasets using semantic technologies in order to make them usable for the geosciences community. Despite all these challenges we have proven that we can discover and characterize data that otherwise would have remained in the dark corners of the Internet. Having all this data indexed and 'triplelized' will enable scientists to access a trove of information relevant to their work in a more natural way. An important characteristic of the BCube stack is that all the code we have developed is open sourced and available to anyone who wants to experiment and collaborate with the project at: http://github.com/b-cube/
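    The semantic aggregation step described in this record, storing extracted metadata as subject-predicate-object triples, can be illustrated with a minimal in-memory triple store. The predicate names and dataset URIs below are illustrative only, not BCube's actual vocabulary.

```python
# A tiny triple store: each fact is a (subject, predicate, object) tuple,
# queryable by pattern matching, as in an RDF graph.
triples = set()


def add_metadata(dataset_uri, metadata):
    """Record each metadata field of a dataset as one triple."""
    for predicate, obj in metadata.items():
        triples.add((dataset_uri, predicate, obj))


def query(subject=None, predicate=None, obj=None):
    """Match triples against an optional pattern (None acts as a wildcard)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]


add_metadata("http://example.org/ds/42",
             {"dc:title": "Sea ice extent", "dc:format": "NetCDF"})
add_metadata("http://example.org/ds/43",
             {"dc:title": "Glacier mass balance", "dc:format": "CSV"})

# Find every dataset stored as NetCDF:
netcdf = query(predicate="dc:format", obj="NetCDF")
```

    A production workflow would emit real RDF and use a SPARQL-capable store, but the pattern-matching query above is the essential operation that makes 'triplelized' metadata searchable.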

  16. The faces of Big Science.

    PubMed

    Schatz, Gottfried

    2014-06-01

    Fifty years ago, academic science was a calling with few regulations or financial rewards. Today, it is a huge enterprise confronted by a plethora of bureaucratic and political controls. This change was not triggered by specific events or decisions but reflects the explosive 'knee' in the exponential growth that science has sustained during the past three-and-a-half centuries. Coming to terms with the demands and benefits of 'Big Science' is a major challenge for today's scientific generation. Since its foundation 50 years ago, the European Molecular Biology Organization (EMBO) has been of invaluable help in meeting this challenge.

  17. New to Teaching: Small Changes Can Produce Big Results!

    ERIC Educational Resources Information Center

    Shenton, Megan

    2017-01-01

    In this article, Megan Shenton, a final-year trainee teacher at Nottingham Trent University, describes using "The Big Question" in her science teaching in a move away from objectives. The Big Question is an innovative pedagogical choice, where instead of implementing a learning objective, a question is posed at the start of the session…

  18. How Does National Scientific Funding Support Emerging Interdisciplinary Research: A Comparison Study of Big Data Research in the US and China

    PubMed Central

    Huang, Ying; Zhang, Yi; Youtie, Jan; Porter, Alan L.; Wang, Xuefeng

    2016-01-01

    How do funding agencies ramp up their capabilities to support research in a rapidly emerging area? This paper addresses this question through a comparison of research proposals awarded by the US National Science Foundation (NSF) and the National Natural Science Foundation of China (NSFC) in the field of Big Data. Big data is characterized by its size and difficulties in capturing, curating, managing and processing it in reasonable periods of time. Although Big Data has its legacy in longstanding information technology research, the field grew very rapidly over a short period. We find that the extent of interdisciplinarity is a key aspect in how these funding agencies address the rise of Big Data. Our results show that both agencies have been able to marshal funding to support Big Data research in multiple areas, but the NSF relies to a greater extent on multi-program funding from different fields. We discuss how these interdisciplinary approaches reflect the research hot-spots and innovation pathways in these two countries. PMID:27219466

  19. Advanced Research and Data Methods in Women's Health: Big Data Analytics, Adaptive Studies, and the Road Ahead.

    PubMed

    Macedonia, Christian R; Johnson, Clark T; Rajapakse, Indika

    2017-02-01

    Technical advances in science have had broad implications in reproductive and women's health care. Recent innovations in population-level data collection and storage have made available an unprecedented amount of data for analysis while computational technology has evolved to permit processing of data previously thought too dense to study. "Big data" is a term used to describe data that are a combination of dramatically greater volume, complexity, and scale. The number of variables in typical big data research can readily be in the thousands, challenging the limits of traditional research methodologies. Regardless of what it is called, advanced data methods, predictive analytics, or big data, this unprecedented revolution in scientific exploration has the potential to dramatically assist research in obstetrics and gynecology broadly across subject matter. Before implementation of big data research methodologies, however, potential researchers and reviewers should be aware of strengths, strategies, study design methods, and potential pitfalls. Examination of big data research examples contained in this article provides insight into the potential and the limitations of this data science revolution and practical pathways for its useful implementation.

  20. How Does National Scientific Funding Support Emerging Interdisciplinary Research: A Comparison Study of Big Data Research in the US and China.

    PubMed

    Huang, Ying; Zhang, Yi; Youtie, Jan; Porter, Alan L; Wang, Xuefeng

    2016-01-01

    How do funding agencies ramp up their capabilities to support research in a rapidly emerging area? This paper addresses this question through a comparison of research proposals awarded by the US National Science Foundation (NSF) and the National Natural Science Foundation of China (NSFC) in the field of Big Data. Big data is characterized by its size and difficulties in capturing, curating, managing and processing it in reasonable periods of time. Although Big Data has its legacy in longstanding information technology research, the field grew very rapidly over a short period. We find that the extent of interdisciplinarity is a key aspect in how these funding agencies address the rise of Big Data. Our results show that both agencies have been able to marshal funding to support Big Data research in multiple areas, but the NSF relies to a greater extent on multi-program funding from different fields. We discuss how these interdisciplinary approaches reflect the research hot-spots and innovation pathways in these two countries.

  1. Climate Data Analytics Workflow Management

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Lee, S.; Pan, L.; Mattmann, C. A.; Lee, T. J.

    2016-12-01

    In this project we aim to pave a novel path toward a sustainable building block for Earth science big data analytics and knowledge sharing. By closely studying how Earth scientists conduct data analytics research in their daily work, we have developed a provenance model to record their activities, and a technology to automatically generate workflows for scientists from that provenance. On top of this, we have built the prototype of a data-centric provenance repository and established a PDSW (People, Data, Service, Workflow) knowledge network to support workflow recommendation. To ensure the scalability and performance of the expected recommendation system, we have leveraged the Apache OODT system technology. The community-approved, metrics-based performance evaluation web service will allow a user to select a metric from a list of several community-approved metrics and to evaluate model performance using that metric as well as the reference dataset. This service will facilitate the use of reference datasets that are generated in support of model-data intercomparison projects such as Obs4MIPs and Ana4MIPs. The data-centric repository infrastructure will allow us to capture richer provenance to further facilitate knowledge sharing and scientific collaboration in the Earth science community. This project is part of the Apache incubator CMDA project.
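    The provenance-to-workflow idea in the record above can be sketched as a log of analysis steps from which an ordered workflow is regenerated. The field names (person, data, service, output) follow the PDSW naming in the abstract, but the schema itself is an assumption for illustration.

```python
from dataclasses import dataclass


@dataclass
class ProvenanceEntry:
    person: str   # the "P" in the PDSW network: who ran the step
    data: str     # "D": input dataset
    service: str  # "S": analysis service invoked
    output: str   # the product, which may feed the next step


log = []


def record(person, data, service, output):
    """Log one analysis activity as a provenance entry."""
    log.append(ProvenanceEntry(person, data, service, output))
    return output


def to_workflow(entries):
    """Regenerate an ordered workflow (service, input, output) from provenance."""
    return [(e.service, e.data, e.output) for e in entries]


# A scientist regrids a reference dataset, then computes anomalies;
# the workflow is recovered automatically from the logged provenance.
step1 = record("alice", "obs4mips/tas", "regrid", "tas_1deg")
step2 = record("alice", step1, "anomaly", "tas_anom")
wf = to_workflow(log)
```

    Chaining each step's output into the next step's input is what lets a repository like the one described here replay or recommend whole workflows rather than isolated service calls.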

  2. Mercury Project

    NASA Image and Video Library

    1959-09-01

    An Atlas launch vehicle carrying the Big Joe capsule leaves its launching pad on a 2,000-mile ballistic flight to an altitude of 100 miles. The Big Joe capsule is a boilerplate model of the manned orbital capsule under NASA's Project Mercury. The capsule was recovered and studied for the effects of re-entry heat and other flight stresses.

  3. 77 FR 49775 - Beaverhead-Deerlodge National Forest, Wisdom and Wise River Ranger Districts; Montana; North and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-17

    ... of the North and West Big Hole Allotment Management Plans (NWBH AMP's) project is the updating of eleven domestic livestock grazing management plans. The scope of the project is limited to the specific... River Ranger Districts; Montana; North and West Big Hole Allotment Management Plans AGENCY: Forest...

  4. Balancing Benefits and Risks of Immortal Data: Participants' Views of Open Consent in the Personal Genome Project.

    PubMed

    Zarate, Oscar A; Brody, Julia Green; Brown, Phil; Ramirez-Andreotta, Mónica D; Perovich, Laura; Matz, Jacob

    2016-01-01

    An individual's health, genetic, or environmental-exposure data, placed in an online repository, creates a valuable shared resource that can accelerate biomedical research and even open opportunities for crowd-sourcing discoveries by members of the public. But these data become "immortalized" in ways that may create lasting risk as well as benefit. Once shared on the Internet, the data are difficult or impossible to redact, and identities may be revealed by a process called data linkage, in which online data sets are matched to each other. Reidentification (re-ID), the process of associating an individual's name with data that were considered deidentified, poses risks such as insurance or employment discrimination, social stigma, and breach of the promises often made in informed-consent documents. At the same time, re-ID poses risks to researchers and indeed to the future of science, should re-ID end up undermining the trust and participation of potential research participants. The ethical challenges of online data sharing are heightened as so-called big data becomes an increasingly important research tool and driver of new research structures. Big data is shifting research to include large numbers of researchers and institutions as well as large numbers of participants providing diverse types of data, so the participants' consent relationship is no longer with a person or even a research institution. In addition, consent is further transformed because big data analysis often begins with descriptive inquiry and generation of a hypothesis, and the research questions cannot be clearly defined at the outset and may be unforeseeable over the long term. In this article, we consider how expanded data sharing poses new challenges, illustrated by genomics and the transition to new models of consent. We draw on the experiences of participants in an open data platform, the Personal Genome Project, to allow study participants to contribute their voices to inform ethical consent practices and protocol reviews for big-data research. © 2015 The Hastings Center.

  5. The Role of Big Data in the Social Sciences

    ERIC Educational Resources Information Center

    Ovadia, Steven

    2013-01-01

Big Data is an increasingly popular term across scholarly and popular literature but lacks a formal definition (Lohr 2012). This is beneficial in that it keeps the term flexible. For librarians, Big Data represents a few important ideas. One is the idea of balancing accessibility with privacy. Librarians tend to want information to be as open…

  6. The Whole Shebang: How Science Produced the Big Bang Model.

    ERIC Educational Resources Information Center

    Ferris, Timothy

    2002-01-01

    Offers an account of the accumulation of evidence that has led scientists to have confidence in the big bang theory of the creation of the universe. Discusses the early work of Ptolemy, Copernicus, Kepler, Galileo, and Newton, noting the rise of astrophysics, and highlighting the birth of the big bang model (the cosmic microwave background theory…

  7. Extragalactic astronomy: The universe beyond our galaxy

    NASA Technical Reports Server (NTRS)

    Jacobs, K. C.

    1976-01-01

This single-topic brochure is for high school physical science teachers to use in introducing students to extragalactic astronomy. The material is presented in three parts: the fundamental content of extragalactic astronomy; modern discoveries delineated in greater detail; and a summary of the earlier discussions within the structure of the Big-Bang Theory of evolution. Each of the three sections is followed by student exercises (activities, laboratory projects, and questions and answers). The unit closes with a glossary which explains unfamiliar terms used in the text and a collection of teacher aids (literature references and audiovisual materials for use in further study).

  8. Agreement on FY 1990 budget plan

    NASA Astrophysics Data System (ADS)

The Bush administration has reached agreement with congressional leaders over a thumbnail version of the Fiscal Year 1990 budget. The plan contains few details but could have implications for NASA's Space Station Freedom and other big science projects. Overall, the budget agreement would achieve Gramm-Rudman-Hollings targets for budget deficit reduction without raising taxes, mostly through accounting manipulation and unspecified cuts in social programs. But a supplemental bill that calls for $1.2 billion in new spending for FY 1989 is expected to go to the House floor soon. That measure would violate the new agreement and add to the deficit.

  9. Big Sib Students' Perceptions of the Educational Environment at the School of Medical Sciences, Universiti Sains Malaysia, using Dundee Ready Educational Environment Measure (DREEM) Inventory.

    PubMed

    Arzuman, Hafiza; Yusoff, Muhamad Saiful Bahri; Chit, Som Phong

    2010-07-01

A cross-sectional descriptive study was conducted among Big Sib students to explore their perceptions of the educational environment at the School of Medical Sciences, Universiti Sains Malaysia (USM), and its weak areas, using the Dundee Ready Educational Environment Measure (DREEM) inventory. The DREEM inventory is a validated global instrument for measuring educational environments in undergraduate medical and health professional education. The English version of the DREEM inventory was administered to all Year 2 Big Sib students (n = 67) at a regular Big Sib session. The purpose of the study as well as confidentiality and ethical issues were explained to the students before the questionnaire was administered. The response rate was 62.7% (42 out of 67 students). The overall DREEM score was 117.9/200 (SD 14.6). The DREEM indicated that the Big Sib students' perception of the educational environment of the medical school was more positive than negative. Nevertheless, the study also revealed some problem areas within the educational environment. This pilot study revealed that Big Sib students perceived a positive learning environment at the School of Medical Sciences, USM. It also identified some low-scoring areas that require further exploration to pinpoint the exact problems. The relatively small study population, selected from a particular group of students, was the major limitation of the study. This small sample size also means that the study findings cannot be generalised.

  10. Making a Big Bang on the small screen

    NASA Astrophysics Data System (ADS)

    Thomas, Nick

    2010-01-01

    While the quality of some TV sitcoms can leave viewers feeling cheated out of 30 minutes of their lives, audiences and critics are raving about the science-themed US comedy The Big Bang Theory. First shown on the CBS network in 2007, the series focuses on two brilliant postdoc physicists, Leonard and Sheldon, who are totally absorbed by science. Adhering to the stereotype, they also share a fanatical interest in science fiction, video-gaming and comic books, but unfortunately lack the social skills required to connect with their 20-something nonacademic contemporaries.

  11. Climbing the Slope of Enlightenment during NASA's Arctic Boreal Vulnerability Experiment

    NASA Astrophysics Data System (ADS)

    Griffith, P. C.; Hoy, E.; Duffy, D.; McInerney, M.

    2015-12-01

    The Arctic Boreal Vulnerability Experiment (ABoVE) is a new field campaign sponsored by NASA's Terrestrial Ecology Program and designed to improve understanding of the vulnerability and resilience of Arctic and boreal social-ecological systems to environmental change (http://above.nasa.gov). ABoVE is integrating field-based studies, modeling, and data from airborne and satellite remote sensing. The NASA Center for Climate Simulation (NCCS) has partnered with the NASA Carbon Cycle and Ecosystems Office (CCEO) to create a high performance science cloud for this field campaign. The ABoVE Science Cloud combines high performance computing with emerging technologies and data management with tools for analyzing and processing geographic information to create an environment specifically designed for large-scale modeling, analysis of remote sensing data, copious disk storage for "big data" with integrated data management, and integration of core variables from in-situ networks. The ABoVE Science Cloud is a collaboration that is accelerating the pace of new Arctic science for researchers participating in the field campaign. Specific examples of the utilization of the ABoVE Science Cloud by several funded projects will be presented.

  12. Big Data and Clinicians: A Review on the State of the Science

    PubMed Central

    Wang, Weiqi

    2014-01-01

Background In the past few decades, medically related data collection has increased enormously; these huge datasets are referred to as big data. They bring challenges in storage, processing, and analysis. In clinical medicine, big data is expected to play an important role in identifying causality of patient symptoms, in predicting hazards of disease incidence or recurrence, and in improving primary-care quality. Objective The objective of this review was to provide an overview of the features of clinical big data, describe a few commonly employed computational algorithms, statistical methods, and software toolkits for data manipulation and analysis, and discuss the challenges and limitations in this realm. Methods We conducted a literature review to identify studies on big data in medicine, especially clinical medicine. We used different combinations of keywords to search PubMed, Science Direct, Web of Knowledge, and Google Scholar for literature of interest from the past 10 years. Results This paper reviewed studies that analyzed clinical big data and discussed issues related to storage and analysis of this type of data. Conclusions Big data is becoming a common feature of biological and clinical studies. Researchers who use clinical big data face multiple challenges, and the data itself has limitations. It is imperative that methodologies for data analysis keep pace with our ability to collect and store data. PMID:25600256

  13. Significance of genome-wide association studies in molecular anthropology.

    PubMed

    Gupta, Vipin; Khadgawat, Rajesh; Sachdeva, Mohinder Pal

    2009-12-01

The successful advent of genome-wide approaches in association studies has raised human geneticists' hopes of solving the genetic maze of complex traits, especially complex disorders. This approach, which applies cutting-edge technology and is supported by big science projects (like the Human Genome Project and, even more importantly, the International HapMap Project) and various important databases (the SNP database, the CNV database, etc.), has had unprecedented success in rapidly uncovering many of the genetic determinants of complex disorders. The application of this approach to the genetics of classical anthropological variables like height, skin color, and eye color, and to other genome diversity projects, has certainly expanded the horizons of molecular anthropology. Therefore, in this article we propose a genome-wide association approach for molecular anthropological studies, drawing lessons from the exemplary study of the Wellcome Trust Case Control Consortium. We also highlight the importance and uniqueness of Indian population groups in facilitating study design and in finding optimal solutions to other challenges of genome-wide association research.

  14. Software Architecture for Big Data Systems

    DTIC Science & Technology

    2014-03-27

Presentation slides from "Software Architecture: Trends and New Directions" (#SEIswArch), © 2014 Carnegie Mellon University, addressing what big data means from a software-architecture perspective.

  15. Big data processing in the cloud - Challenges and platforms

    NASA Astrophysics Data System (ADS)

    Zhelev, Svetoslav; Rozeva, Anna

    2017-12-01

Choosing the appropriate architecture and technologies for a big data project is a difficult task, which requires extensive knowledge of both the problem domain and the big data landscape. The paper analyzes the main big data architectures and the most widely implemented technologies used for processing and persisting big data. Clouds provide dynamic resource scaling, which makes them a natural fit for big data applications. Basic cloud computing service models are presented. Two architectures for processing big data are discussed: the Lambda and Kappa architectures. Technologies for big data persistence are presented and analyzed. Stream processing, as the most important aspect and the most difficult to manage, is outlined. The paper highlights the main advantages of the cloud as well as potential problems.
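The core idea distinguishing the Kappa architecture from Lambda can be shown in a toy sketch, independent of any particular streaming platform: everything is a stream, a materialized view is a fold over the event log, and "reprocessing" means replaying the log through a new version of the fold. The event shapes and function names below are invented for illustration.

```python
def build_view(event_log, update):
    """Fold the immutable event log into a queryable view (a dict here)."""
    view = {}
    for event in event_log:
        update(view, event)
    return view

def count_by_user(view, event):
    """v1 logic: count all events per user."""
    view[event["user"]] = view.get(event["user"], 0) + 1

log = [
    {"user": "alice", "action": "click"},
    {"user": "bob",   "action": "click"},
    {"user": "alice", "action": "purchase"},
]

view_v1 = build_view(log, count_by_user)       # {'alice': 2, 'bob': 1}

# A changed requirement (count only purchases) is met by replaying the
# same log through a new fold -- no separate batch layer, unlike Lambda.
def count_purchases(view, event):
    if event["action"] == "purchase":
        view[event["user"]] = view.get(event["user"], 0) + 1

view_v2 = build_view(log, count_purchases)     # {'alice': 1}
```

In a real deployment the log would live in a replayable broker such as Kafka and the fold in a stream processor, but the replay-to-reprocess principle is the same.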

  16. Psycho-informatics: Big Data shaping modern psychometrics.

    PubMed

    Markowetz, Alexander; Błaszkiewicz, Konrad; Montag, Christian; Switala, Christina; Schlaepfer, Thomas E

    2014-04-01

For the first time in history, it is possible to study human behavior on a great scale and in fine detail simultaneously. Online services and ubiquitous computational devices, such as smartphones and modern cars, record our everyday activity. The resulting Big Data offers unprecedented opportunities for tracking and analyzing behavior. This paper examines the applicability and impact of Big Data technologies in the context of psychometrics, for both research and clinical applications. It first outlines the state of the art, including the severe shortcomings with respect to the quality and quantity of the resulting data. It then presents a technological vision comprising (i) numerous data sources such as mobile devices and sensors, (ii) a central data store, and (iii) an analytical platform employing techniques from data mining and machine learning. To further illustrate the benefits of the proposed methodologies, the paper then outlines two current projects that log and analyze smartphone usage: one study attempts to quantify the severity of major depression dynamically; the other investigates (mobile) Internet addiction. Finally, the paper addresses some of the ethical issues inherent to Big Data technologies. In summary, the proposed approach is about to induce the single biggest methodological shift since the beginning of psychology and psychiatry. The resulting range of applications will dramatically shape the daily routines of researchers and medical practitioners alike. Indeed, transferring techniques from computer science to psychiatry and psychology is about to establish Psycho-Informatics, an entire research direction of its own. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. pvsR: An Open Source Interface to Big Data on the American Political Sphere

    PubMed Central

    2015-01-01

    Digital data from the political sphere is abundant, omnipresent, and more and more directly accessible through the Internet. Project Vote Smart (PVS) is a prominent example of this big public data and covers various aspects of U.S. politics in astonishing detail. Despite the vast potential of PVS’ data for political science, economics, and sociology, it is hardly used in empirical research. The systematic compilation of semi-structured data can be complicated and time consuming as the data format is not designed for conventional scientific research. This paper presents a new tool that makes the data easily accessible to a broad scientific community. We provide the software called pvsR as an add-on to the R programming environment for statistical computing. This open source interface (OSI) serves as a direct link between a statistical analysis and the large PVS database. The free and open code is expected to substantially reduce the cost of research with PVS’ new big public data in a vast variety of possible applications. We discuss its advantages vis-à-vis traditional methods of data generation as well as already existing interfaces. The validity of the library is documented based on an illustration involving female representation in local politics. In addition, pvsR facilitates the replication of research with PVS data at low costs, including the pre-processing of data. Similar OSIs are recommended for other big public databases. PMID:26132154

  18. MiTEP's Collaborative Field Course Design Process Based on Earth Science Literacy Principles

    NASA Astrophysics Data System (ADS)

    Engelmann, C. A.; Rose, W. I.; Huntoon, J. E.; Klawiter, M. F.; Hungwe, K.

    2010-12-01

Michigan Technological University has developed a collaborative process for designing summer field courses for teachers as part of its National Science Foundation-funded Math Science Partnership program, called the Michigan Teacher Excellence Program (MiTEP). This design process was implemented and then piloted during two two-week courses: Earth Science Institute I (ESI I) and Earth Science Institute II (ESI II). Participants consisted of a small group of Michigan urban science teachers who are members of the MiTEP program. The Earth Science Literacy Principles (ESLP) served as the framework for course design, in conjunction with input from participating MiTEP teachers as well as research on common teacher and student misconceptions in Earth Science. Research on the Earth Science misconception component, aligned to the ESLP, is more fully addressed in GSA Abstracts with Programs Vol. 42, No. 5, “Recognizing Earth Science Misconceptions and Reconstructing Knowledge through Conceptual-Change-Teaching”. The ESLP were released to the public in January 2009 by the Earth Science Literacy Organizing Committee and can be found at http://www.earthscienceliteracy.org/index.html. Each of the first nine days of both Institutes focused on one of the nine ESLP Big Ideas; the tenth day emphasized integration of concepts across all of the ESLP Big Ideas. Throughout each day, Michigan Tech graduate student facilitators and professors from Michigan Tech and Grand Valley State University consistently focused teaching and learning on the day's Big Idea. Many Earth Science experts from Michigan Tech and Grand Valley State University joined the MiTEP teachers in the field or on campus, giving presentations on the latest research in their area related to that Big Idea. Field sites were chosen for their unique geological features as well as for the “sense of place” each site provided. 
Preliminary research findings indicate that this collaborative design process, piloted as ESI I and ESI II, was successful in improving MiTEP teachers' understanding of Earth Science content and that the ESLP framework was helpful. Ultimately, a small sample of student scores will be used to examine the impact on student learning in MiTEP teachers' classrooms.

  19. Data Mining Citizen Science Results

    NASA Astrophysics Data System (ADS)

    Borne, K. D.

    2012-12-01

Scientific discovery from big data is enabled through multiple channels, including data mining (through the application of machine learning algorithms) and human computation (commonly implemented through citizen science tasks). We describe new data mining experiments performed on the outputs of citizen science activities. Discovering patterns, trends, and anomalies in data is among the powerful contributions of citizen science. Scientific algorithms that can subsequently re-discover the same types of patterns, trends, and anomalies in automatic data processing pipelines will ultimately result from translating those human algorithms into computer algorithms, which can then be applied to much larger data collections. Scientific discovery from big data is thus greatly amplified through the marriage of data mining with citizen science.
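One common first step in mining citizen science results is turning many volunteers' judgments about the same object into a single label that an automated pipeline can train against. A minimal majority-vote sketch follows; the object names, labels, and the 60% consensus threshold are invented for illustration and are not from the abstract.

```python
from collections import Counter

def aggregate_votes(classifications, min_agreement=0.6):
    """Map object id -> (majority label, agreement fraction).

    Objects whose top label falls below the consensus threshold are
    dropped rather than passed on as noisy training labels.
    """
    by_object = {}
    for obj_id, label in classifications:
        by_object.setdefault(obj_id, []).append(label)
    labels = {}
    for obj_id, votes in by_object.items():
        label, n = Counter(votes).most_common(1)[0]
        agreement = n / len(votes)
        if agreement >= min_agreement:
            labels[obj_id] = (label, agreement)
    return labels

votes = [
    ("galaxy-1", "spiral"), ("galaxy-1", "spiral"), ("galaxy-1", "elliptical"),
    ("galaxy-2", "spiral"), ("galaxy-2", "elliptical"),
]
consensus = aggregate_votes(votes)   # galaxy-2 dropped: no 60% consensus
```

Real projects weight votes by volunteer skill rather than counting them equally, but the aggregation step has this general shape.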

  20. Developing a framework for digital objects in the Big Data to Knowledge (BD2K) commons: Report from the Commons Framework Pilots workshop.

    PubMed

    Jagodnik, Kathleen M; Koplev, Simon; Jenkins, Sherry L; Ohno-Machado, Lucila; Paten, Benedict; Schurer, Stephan C; Dumontier, Michel; Verborgh, Ruben; Bui, Alex; Ping, Peipei; McKenna, Neil J; Madduri, Ravi; Pillai, Ajay; Ma'ayan, Avi

    2017-07-01

    The volume and diversity of data in biomedical research have been rapidly increasing in recent years. While such data hold significant promise for accelerating discovery, their use entails many challenges including: the need for adequate computational infrastructure, secure processes for data sharing and access, tools that allow researchers to find and integrate diverse datasets, and standardized methods of analysis. These are just some elements of a complex ecosystem that needs to be built to support the rapid accumulation of these data. The NIH Big Data to Knowledge (BD2K) initiative aims to facilitate digitally enabled biomedical research. Within the BD2K framework, the Commons initiative is intended to establish a virtual environment that will facilitate the use, interoperability, and discoverability of shared digital objects used for research. The BD2K Commons Framework Pilots Working Group (CFPWG) was established to clarify goals and work on pilot projects that address existing gaps toward realizing the vision of the BD2K Commons. This report reviews highlights from a two-day meeting involving the BD2K CFPWG to provide insights on trends and considerations in advancing Big Data science for biomedical research in the United States. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Challenges and potential solutions for big data implementations in developing countries.

    PubMed

    Luna, D; Mayan, J C; García, M J; Almerares, A A; Househ, M

    2014-08-15

The volume of data, the velocity with which they are generated, and their variety and lack of structure hinder their use. This creates the need to change the way information is captured, stored, processed, and analyzed, leading to the paradigm shift called Big Data. This article describes the challenges and possible solutions for developing countries implementing Big Data projects in the health sector. A non-systematic review of the literature was performed in PubMed and Google Scholar using the keywords "big data", "developing countries", "data mining", "health information systems", and "computing methodologies", followed by a thematic review of the selected articles. There are challenges in implementing any Big Data program, including the exponential growth of data, special infrastructure needs, the need for a trained workforce, the need to agree on interoperability standards, privacy and security issues, and the need to include people, processes, and policies to ensure adoption. Developing countries have particular characteristics that hinder further development of these projects. The advent of Big Data promises great opportunities for the healthcare field. In this article, we describe the challenges developing countries face and enumerate options for achieving successful implementations of Big Data programs.

  2. Alliance for Computational Science Collaboration HBCU Partnership at Fisk University. Final Report 2001

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, W. E.

    2004-08-16

Computational Science plays a big role in research and development in mathematics, science, engineering, and biomedical disciplines. The Alliance for Computational Science Collaboration (ACSC) has the goal of training African-American and other minority scientists in the computational science field for eventual employment with the Department of Energy (DOE). The involvement of Historically Black Colleges and Universities (HBCU) in the Alliance provides avenues for producing future DOE African-American scientists. Fisk University has been participating in this program through grants from the DOE. The DOE grant supported computational science activities at Fisk University. The research areas included energy-related projects, distributed computing, visualization of scientific systems, and biomedical computing. Students' involvement in computational science research included undergraduate summer research at Oak Ridge National Lab, on-campus research involving the participation of undergraduates, participation of undergraduates and faculty members in workshops, and mentoring of students. These activities enhanced research and education in computational science, thereby adding to Fisk University's spectrum of research and educational capabilities. Among the successes of the computational science activities is the acceptance of three undergraduate students to graduate schools with full scholarships beginning fall 2002 (one to a master's degree program and two to doctoral degree programs).

  3. Big Data on the Big Screen

    NASA Image and Video Library

    2013-10-17

The center of the Milky Way galaxy, imaged by NASA's Spitzer Space Telescope, is displayed on a quarter-of-a-billion-pixel, high-definition, 23-foot-wide (7-meter) LCD science visualization screen at NASA Ames Research Center.

  4. [Social change and sciences in the 20th century].

    PubMed

    Garamvölgyi, J

    1995-12-05

The symbiotic interdependence of state, economy, and science is one of the most significant structural characteristics of the 20th century. This development results from inherent scientific as well as social processes and needs, and it was favoured by the two World Wars, culminating in the Cold War. This led to new structures: institutions of large-scale research, think tanks, and the military-industrial complex. Big government, big business, and big science depend on one another. Parallel to the new way of thinking in physics (Einstein, Bohr, and others), finally accomplished by the revolution in cybernetics (Wiener), the traditional borders between disciplines have been overcome. The production of new knowledge is now of primary importance. Today, information proves to be one of the strategic resources that determine prosperity, power, and prestige as well as success in economic and political markets.

  5. A Hybrid Evaluation System Framework (Shell & Web) with Standardized Access to Climate Model Data and Verification Tools for a Clear Climate Science Infrastructure on Big Data High Performance Computers

    NASA Astrophysics Data System (ADS)

    Kadow, C.; Illing, S.; Kunst, O.; Cubasch, U.

    2014-12-01

The project 'Integrated Data and Evaluation System for Decadal Scale Prediction' (INTEGRATION), part of the German decadal prediction project MiKlip, develops a central evaluation system. The fully operational hybrid system features HPC shell access and a user-friendly web interface. It employs one common system with a variety of verification tools and validation data from different projects inside and outside of MiKlip. The evaluation system is located at the German Climate Computing Centre (DKRZ) and has direct access to the bulk of its ESGF node, including millions of climate model data sets, e.g. from CMIP5 and CORDEX. The database is organized by the international CMOR standard, using the meta information of the self-describing model, reanalysis, and observational data sets. Apache Solr is used for indexing the different data projects into one common search environment. This metadata system, with its advanced but easy-to-use search tool, supports users, developers, and their tools in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools to the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitating the provision and use of tools and climate data automatically increases the number of scientists working with the data sets and identifying discrepancies. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a MySQL database. Configurations and results of the tools can be shared among scientists via the shell or the web system. Plugged-in tools therefore automatically gain transparency and reproducibility. 
Furthermore, when the configuration of a newly started evaluation tool matches a previous run, the system suggests reusing the results already produced by other users, saving CPU time, I/O, and disk space. This study presents the different techniques and advantages of such a hybrid evaluation system making use of a Big Data HPC in climate science. website: www-miklip.dkrz.de visitor-login: guest password: miklip
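The result-reuse idea described in this record can be sketched in a few lines: fingerprint the exact tool configuration, and if an identical analysis was already run, return the stored result instead of recomputing. The function names and in-memory store below are invented; the actual system persists run histories in a MySQL database.

```python
import hashlib
import json

_history = {}   # config fingerprint -> stored result (stand-in for the run database)

def fingerprint(tool, config):
    """Canonical, order-independent hash of a tool invocation."""
    payload = json.dumps({"tool": tool, "config": config}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def run_tool(tool, config, compute):
    """Run an analysis, or reuse a previous identical run.

    Returns (result, reused) so callers can see when CPU time,
    I/O, and disk space were saved.
    """
    key = fingerprint(tool, config)
    if key in _history:
        return _history[key], True
    result = compute(config)
    _history[key] = result
    return result, False

result1, reused1 = run_tool("skill_score", {"model": "baseline1"}, lambda c: "computed")
result2, reused2 = run_tool("skill_score", {"model": "baseline1"}, lambda c: "computed")
```

Hashing a canonical serialization (rather than comparing raw config files) makes the match robust to key ordering, which is what lets independently written configurations collide usefully.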

  6. A Hybrid Evaluation System Framework (Shell & Web) with Standardized Access to Climate Model Data and Verification Tools for a Clear Climate Science Infrastructure on Big Data High Performance Computers

    NASA Astrophysics Data System (ADS)

    Kadow, Christopher; Illing, Sebastian; Kunst, Oliver; Ulbrich, Uwe; Cubasch, Ulrich

    2015-04-01

The project 'Integrated Data and Evaluation System for Decadal Scale Prediction' (INTEGRATION), part of the German decadal prediction project MiKlip, develops a central evaluation system. The fully operational hybrid system features HPC shell access and a user-friendly web interface. It employs one common system with a variety of verification tools and validation data from different projects inside and outside of MiKlip. The evaluation system is located at the German Climate Computing Centre (DKRZ) and has direct access to the bulk of its ESGF node, including millions of climate model data sets, e.g. from CMIP5 and CORDEX. The database is organized by the international CMOR standard, using the meta information of the self-describing model, reanalysis, and observational data sets. Apache Solr is used for indexing the different data projects into one common search environment. This metadata system, with its advanced but easy-to-use search tool, supports users, developers, and their tools in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools to the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitating the provision and use of tools and climate data automatically increases the number of scientists working with the data sets and identifying discrepancies. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a MySQL database. Configurations and results of the tools can be shared among scientists via the shell or the web system. Plugged-in tools therefore automatically gain transparency and reproducibility. 
Furthermore, when the configuration of a newly started evaluation tool matches a previous run, the system suggests reusing the results already produced by other users, saving CPU time, I/O, and disk space. This study presents the different techniques and advantages of such a hybrid evaluation system making use of a Big Data HPC in climate science. website: www-miklip.dkrz.de visitor-login: click on "Guest"

  7. Biosecurity in the age of Big Data: a conversation with the FBI.

    PubMed

    You, Edward; Kozminski, Keith G

    2015-11-05

    New scientific frontiers and emerging technologies within the life sciences pose many global challenges to society. Big Data is a premier example, especially with respect to individual, national, and international security. Here a Special Agent of the Federal Bureau of Investigation discusses the security implications of Big Data and the need for security in the life sciences. © 2015 Kozminski. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  8. Towards Big Earth Data Analytics: The EarthServer Approach

    NASA Astrophysics Data System (ADS)

    Baumann, Peter

    2013-04-01

Big Data in the Earth sciences, the Tera- to Exabyte archives, is mostly made up of coverage data, whereby the term "coverage", according to ISO and OGC, is defined as the digital representation of some space-time varying phenomenon. Common examples include 1-D sensor timeseries, 2-D remote sensing imagery, 3-D x/y/t image timeseries and x/y/z geology data, and 4-D x/y/z/t atmosphere and ocean data. Analytics on such data requires on-demand processing of sometimes significant complexity, such as computing the Fourier transform of satellite images. As network bandwidth limits prohibit the transfer of such Big Data, it is indispensable to devise protocols allowing clients to task flexible and fast processing on the server. The EarthServer initiative, funded by EU FP7 eInfrastructures, unites 11 partners from computer and earth sciences to establish Big Earth Data Analytics. One key ingredient is flexibility for users to ask for what they want, not impeded and complicated by system internals. The EarthServer answer to this is to use high-level query languages; these have proven tremendously successful on tabular and XML data, and we extend them with a central geo data structure, multi-dimensional arrays. A second key ingredient is scalability. Without any doubt, scalability ultimately can only be achieved through parallelization. In the past, parallelizing code has been done at compile time and usually with manual intervention. The EarthServer approach is to perform a semantics-based dynamic distribution of query fragments based on network optimization and further criteria. The EarthServer platform is built around rasdaman, an Array DBMS enabling efficient storage and retrieval of any-size, any-type multi-dimensional raster data. 
In the project, rasdaman is being extended with several functionality and scalability features, including: support for irregular grids and general meshes; in-situ retrieval (evaluation of database queries on existing archive structures, avoiding data import and, hence, duplication); the aforementioned distributed query processing. Additionally, Web clients for multi-dimensional data visualization are being established. Client/server interfaces are strictly based on OGC and W3C standards, in particular the Web Coverage Processing Service (WCPS) which defines a high-level raster query language. We present the EarthServer project with its vision and approaches, relate it to the current state of standardization, and demonstrate it by way of large-scale data centers and their services using rasdaman.
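    The server-side processing model described above can be illustrated with a small sketch: the client composes a WCPS-style query string so that only the derived product, not the raw array, crosses the network. This is a minimal Python illustration; the coverage name AvgLandTemp and the example endpoint are hypothetical, not part of any actual EarthServer deployment.

```python
# Sketch: build a WCPS 1.0-style query for server-side raster processing.
# The coverage name and the commented endpoint are hypothetical examples.

def wcps_query(coverage: str, time_slice: str, fmt: str = "image/tiff") -> str:
    """Compose a query that subsets one time slice and rescales its values,
    leaving all heavy array processing to the server."""
    return (
        f'for c in ({coverage}) '
        f'return encode(c[ansi("{time_slice}")] * 1.8 + 32, "{fmt}")'
    )

query = wcps_query("AvgLandTemp", "2010-01")
print(query)
# A client would then submit this string via an OGC WCS ProcessCoverages
# request, e.g. urllib.request.urlopen("https://example.org/rasdaman/ows?...")
```

Because the query, not the data, travels to the server, the same request works whether the coverage is megabytes or petabytes in size.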

  9. "small problems, Big Trouble": An Art and Science Collaborative Exhibition Reflecting Seemingly small problems Leading to Big Threats

    NASA Astrophysics Data System (ADS)

    Waller, J. L.; Brey, J. A.

    2014-12-01

    "small problems, Big Trouble" (spBT) is an exhibition of artist Judith Waller's paintings accompanied by text panels written by Earth scientist Dr. James A. Brey and several science researchers and educators. The text panels' message is as much the focus of the show as the art--true interdisciplinarity! Waller and Brey's history of art and earth science collaborations includes the successful exhibition "Layers: Places in Peril". New in spBT is an extended collaboration with other scientists to create awareness of geoscience and other subjects (e.g., soil, parasites, dust, pollutants, invasive species, carbon, ground water contaminants, solar wind) that are small in scale but pose significant threats. The paintings are the size of a mirror, a symbol suggesting that the problems depicted are ones we increasingly need to face, noting our collective reflections of shared current and future reality. Naturalistic rendering and abstract form in the art help reach a broad audience, including those familiar with art and those familiar with science. The goal is that gallery visitors gain greater appreciation and understanding of both—and of the sober content of the show as a whole. "small problems, Big Trouble" premieres in Wisconsin in April 2015. As in previous collaborations, Waller and Brey actively utilize art and science (specifically geoscience) as an educational vehicle for active student learning. Planned are interdisciplinary university and area high school activities linked through spBT. The exhibition in a public gallery offers a means to enhance community awareness of and action on scientific issues through art's power to engage people on an emotional level. This AGU presentation includes a description of past Waller and Brey activities: incorporating art and earth science in lab and studio classrooms, producing gallery and museum exhibitions and delivering workshops and other presentations. 
They also describe how walking the paths of several past earth science disasters continues to inspire new chapters in their "Layers: Places in Peril" exhibit! A slide show includes images of paintings for "small problems, Big Trouble". Brey and Waller will lead a discussion on their process of incorporating broader collaboration with geoscientists and others in an educational art exhibition.

  10. How big is too big or how many partners are needed to build a large project which still can be managed successfully?

    NASA Astrophysics Data System (ADS)

    Henkel, Daniela; Eisenhauer, Anton

    2017-04-01

    During the last decades, the number of large research projects has increased, and with it the requirement for multidisciplinary, multisectoral collaboration. Such complex, large-scale projects demand new competencies to form, manage, and use large, diverse teams as a competitive advantage. For complex projects the effort is magnified: multiple large international research consortia involving academic and non-academic partners, including big industries, NGOs, and private and public bodies, all with cultural differences, individually discrepant expectations of teamwork, and differences in the collaboration between national and multi-national administrations and research organisations, challenge the organisation and management of such multi-partner research consortia. How many partners are needed to establish and conduct collaboration with a multidisciplinary and multisectoral approach? How much personnel effort and what kinds of management techniques are required for such projects? This presentation identifies advantages and challenges of large research projects based on the experiences made in the context of an Innovative Training Network (ITN) project within the Marie Skłodowska-Curie Actions of the European HORIZON 2020 program. Possible strategies are discussed to circumvent and avoid conflicts already at the beginning of the project.

  11. 77 FR 17007 - Kootenai National Forest, Cabinet Ranger District, Montana Pilgrim Timber Sale Project

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-23

    ... fire use. Big game forage would be enhanced through use of prescribed fire to rejuvenate and increase... open road density in areas managed for big game summer range. Subsequent analyses of potential... for big game species, notably elk, deer, and bears. Generally, these areas are on southerly aspects...

  12. Insights into big sagebrush seedling storage practices

    Treesearch

    Emily C. Overton; Jeremiah R. Pinto; Anthony S. Davis

    2013-01-01

    Big sagebrush (Artemisia tridentata Nutt. [Asteraceae]) is an essential component of shrub-steppe ecosystems in the Great Basin of the US, where degradation due to altered fire regimes, invasive species, and land use changes have led to increased interest in the production of high-quality big sagebrush seedlings for conservation and restoration projects. Seedling...

  13. [Big Data and Public Health - Results of the Working Group 1 of the Forum Future Public Health, Berlin 2016].

    PubMed

    Moebus, Susanne; Kuhn, Joseph; Hoffmann, Wolfgang

    2017-11-01

    Big Data is a diffuse term, which can be described as an approach to linking gigantic and often unstructured data sets. Big Data is used in many corporate areas. For Public Health (PH), however, Big Data is not a well-developed topic. In this article, Big Data is explained according to the intention of use, information efficiency, prediction and clustering. Using the example of application in science, patient care, equal opportunities and smart cities, typical challenges and open questions of Big Data for PH are outlined. In addition to the inevitable use of Big Data, networking is necessary, especially with knowledge-carriers and decision-makers from politics and health care practice. © Georg Thieme Verlag KG Stuttgart · New York.

  14. Hellsgate Big Game Winter Range Wildlife Mitigation Project : Annual Report 2008.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitney, Richard P.; Berger, Matthew T.; Rushing, Samuel

    The Hellsgate Big Game Winter Range Wildlife Mitigation Project (Hellsgate Project) was proposed by the Confederated Tribes of the Colville Reservation (CTCR) as partial mitigation for hydropower's share of the wildlife losses resulting from Chief Joseph and Grand Coulee Dams. At present, the Hellsgate Project protects and manages 57,418 acres (approximately 90 square miles) for the biological requirements of managed wildlife species; most are located on or near the Columbia River (Lake Rufus Woods and Lake Roosevelt) and surrounded by Tribal land. To date we have acquired about 34,597 habitat units (HUs) towards a total of 35,819 HUs lost from original inundation due to hydropower development. In addition to the remaining 1,237 HUs left unmitigated, 600 HUs from the Washington Department of Fish and Wildlife that were traded to the Colville Tribes and 10 secure nesting islands are also yet to be mitigated. This annual report for 2008 describes the management activities of the Hellsgate Big Game Winter Range Wildlife Mitigation Project (Hellsgate Project) during the past year.

  15. Reviews

    NASA Astrophysics Data System (ADS)

    2004-01-01

    BOOK REVIEWS (99) Complete A-Z Physics Handbook Science Magic in the Kitchen The Science of Cooking Science Experiments You Can Eat WEB WATCH (101) These journal themes are pasta joke Microwave oven Web links CD REVIEW (104) Electricity and Magnetism, KS3 Big Science Comics

  16. Technical challenges for big data in biomedicine and health: data sources, infrastructure, and analytics.

    PubMed

    Peek, N; Holmes, J H; Sun, J

    2014-08-15

    To review technical and methodological challenges for big data research in biomedicine and health. We discuss sources of big datasets, survey infrastructures for big data storage and big data processing, and describe the main challenges that arise when analyzing big data. The life and biomedical sciences are massively contributing to the big data revolution through secondary use of data that were collected during routine care and through new data sources such as social media. Efficient processing of big datasets is typically achieved by distributing computation over a cluster of computers. Data analysts should be aware of pitfalls related to big data such as bias in routine care data and the risk of false-positive findings in high-dimensional datasets. The major challenge for the near future is to transform the analytical methods used in the biomedical and health domain to fit the distributed storage and processing model that is required to handle big data, while ensuring confidentiality of the data being analyzed.

  17. Neoliberal science, Chinese style: Making and managing the 'obesity epidemic'.

    PubMed

    Greenhalgh, Susan

    2016-08-01

    Science and Technology Studies has seen a growing interest in the commercialization of science. In this article, I track the role of corporations in the construction of the obesity epidemic, deemed one of the major public health threats of the century. Focusing on China, a rising superpower in the midst of rampant, state-directed neoliberalization, I unravel the process, mechanisms, and broad effects of the corporate invention of an obesity epidemic. Largely hidden from view, Western firms were central actors at every stage in the creation, definition, and governmental management of obesity as a Chinese disease. Two industry-funded global health entities and the exploitation of personal ties enabled actors to nudge the development of obesity science and policy along lines beneficial to large firms, while obscuring the nudging. From Big Pharma to Big Food and Big Soda, transnational companies have been profiting from the 'epidemic of Chinese obesity', while doing little to effectively treat or prevent it. The China case suggests how obesity might have been constituted an 'epidemic threat' in other parts of the world and underscores the need for global frameworks to guide the study of neoliberal science and policymaking.

  18. Unraveling the Complexities of Life Sciences Data.

    PubMed

    Higdon, Roger; Haynes, Winston; Stanberry, Larissa; Stewart, Elizabeth; Yandl, Gregory; Howard, Chris; Broomall, William; Kolker, Natali; Kolker, Eugene

    2013-03-01

    The life sciences have entered into the realm of big data and data-enabled science, where data can either empower or overwhelm. These data bring the challenges of the 5 Vs of big data: volume, veracity, velocity, variety, and value. Both independently and through our involvement with DELSA Global (Data-Enabled Life Sciences Alliance, DELSAglobal.org), the Kolker Lab ( kolkerlab.org ) is creating partnerships that identify data challenges and solve community needs. We specialize in solutions to complex biological data challenges, as exemplified by the community resource of MOPED (Model Organism Protein Expression Database, MOPED.proteinspire.org ) and the analysis pipeline of SPIRE (Systematic Protein Investigative Research Environment, PROTEINSPIRE.org ). Our collaborative work extends into the computationally intensive tasks of analysis and visualization of millions of protein sequences through innovative implementations of sequence alignment algorithms and creation of the Protein Sequence Universe tool (PSU). Pushing into the future together with our collaborators, our lab is pursuing integration of multi-omics data and exploration of biological pathways, as well as assigning function to proteins and porting solutions to the cloud. Big data have come to the life sciences; discovering the knowledge in the data will bring breakthroughs and benefits.

  19. Big Data: An Opportunity for Collaboration with Computer Scientists on Data-Driven Science

    NASA Astrophysics Data System (ADS)

    Baru, C.

    2014-12-01

    Big data technologies are evolving rapidly, driven by the need to manage ever increasing amounts of historical data; process relentless streams of human and machine-generated data; and integrate data of heterogeneous structure from extremely heterogeneous sources of information. Big data is inherently an application-driven problem. Developing the right technologies requires an understanding of the applications domain. However, an intriguing aspect of this phenomenon is that the availability of the data itself enables new applications not previously conceived of! In this talk, we will discuss how the big data phenomenon creates an imperative for collaboration among domain scientists (in this case, geoscientists) and computer scientists. Domain scientists provide the application requirements as well as insights about the data involved, while computer scientists help assess whether problems can be solved with currently available technologies or require adaptation of existing technologies and/or development of new technologies. The synergy can create vibrant collaborations potentially leading to new science insights as well as development of new data technologies and systems. The area of interface between geosciences and computer science, also referred to as geoinformatics, is, we believe, a fertile area for interdisciplinary research.

  20. Climate Analytics-As-a-Service (CAaaS), Advanced Information Systems, and Services to Accelerate the Climate Sciences.

    NASA Astrophysics Data System (ADS)

    McInerney, M.; Schnase, J. L.; Duffy, D.; Tamkin, G.; Nadeau, D.; Strong, S.; Thompson, J. H.; Sinno, S.; Lazar, D.

    2014-12-01

    The climate sciences represent a big data domain that is experiencing unprecedented growth. In our efforts to address the big data challenges of climate science, we are moving toward a notion of Climate Analytics-as-a-Service (CAaaS). We focus on analytics because it is the knowledge gained from our interactions with big data that ultimately produces societal benefits. We focus on CAaaS because we believe it provides a useful way of thinking about the problem: a specialization of the concept of business process-as-a-service, which is an evolving extension of IaaS, PaaS, and SaaS enabled by cloud computing. Within this framework, cloud computing plays an important role; however, we see it as only one element in a constellation of capabilities that are essential to delivering climate analytics-as-a-service. These elements are essential because in the aggregate they lead to generativity, a capacity for self-assembly that we feel is the key to solving many of the big data challenges in this domain. This poster will highlight specific examples of CAaaS using climate reanalysis data, high-performance cloud computing, MapReduce, and the Climate Data Services API.
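    The map-reduce style of processing mentioned in this abstract can be sketched in miniature. The following is a minimal, single-machine Python illustration of the pattern (map, shuffle, reduce) on invented yearly temperature records; it is not the poster's actual service, whose data and API are not shown here.

```python
# Minimal map-reduce sketch: per-year averages from (year, value) records.
# Illustrates the pattern only; the data values are invented.
from collections import defaultdict

records = [(2000, 14.0), (2000, 14.5), (2001, 14.5), (2001, 15.0)]

# Map: emit (key, value) pairs -- here the records already have that shape.
mapped = [(year, temp) for year, temp in records]

# Shuffle: group all values that share a key.
groups = defaultdict(list)
for year, temp in mapped:
    groups[year].append(temp)

# Reduce: aggregate each group independently -- this independence is what
# lets a cluster process the groups in parallel.
averages = {year: sum(vals) / len(vals) for year, vals in groups.items()}
print(averages)  # {2000: 14.25, 2001: 14.75}
```

Because each reduce step touches only its own group, the same structure scales from this toy list to distributed runs over large reanalysis archives.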

  1. Frontier Fields: Bringing the Distant Universe into View

    NASA Astrophysics Data System (ADS)

    Eisenhamer, Bonnie; Lawton, Brandon L.; Summers, Frank; Ryer, Holly

    2014-06-01

    The Frontier Fields is a multi-cycle program of six deep-field observations of strong-lensing galaxy clusters that will be taken in parallel with six deep “blank fields.” The three-year collaborative program centers on observations from NASA’s Great Observatories, which will team up to look deeper into the universe than ever before, and potentially uncover galaxies that are as much as 100 times fainter than what the telescopes can typically see. Because of the unprecedented views of the universe that will be achieved, the Frontier Fields science program is ideal for informing audiences about scientific advances and topics in STEM. For example, the program provides an opportunity to look back on the history of deep field observations and how they changed (and continue to change) astronomy, while exploring the ways astronomers approach big science problems. As a result, the Space Telescope Science Institute’s Office of Public Outreach has initiated an education and public outreach (E/PO) project to follow the progress of the Frontier Fields program - providing a behind-the-scenes perspective of this observing initiative. This poster will highlight the goals of the Frontier Fields E/PO project and the cost-effective approach being used to bring the program’s results to both the public and educational audiences.

  2. Understanding life together: A brief history of collaboration in biology

    PubMed Central

    Vermeulen, Niki; Parker, John N.; Penders, Bart

    2013-01-01

    The history of science shows a shift from single-investigator ‘little science’ to increasingly large, expensive, multinational, interdisciplinary and interdependent ‘big science’. In physics and allied fields this shift has been well documented, but the rise of collaboration in the life sciences and its effect on scientific work and knowledge has received little attention. Research in biology exhibits different historical trajectories and organisation of collaboration in field and laboratory – differences still visible in contemporary collaborations such as the Census of Marine Life and the Human Genome Project. We employ these case studies as strategic exemplars, supplemented with existing research on collaboration in biology, to expose the different motives, organisational forms and social dynamics underpinning contemporary large-scale collaborations in biology and their relations to historical patterns of collaboration in the life sciences. We find that the interaction between research subject, research approach and research organisation influences collaboration patterns and the work of scientists. PMID:23578694

  3. Measuring adolescent science motivation

    NASA Astrophysics Data System (ADS)

    Schumm, Maximiliane F.; Bogner, Franz X.

    2016-02-01

    To monitor science motivation, 232 tenth graders of the college preparatory level ('Gymnasium') completed the Science Motivation Questionnaire II (SMQ-II). Additionally, personality data were collected using a 10-item version of the Big Five Inventory. A subsequent exploratory factor analysis based on the eigenvalue-greater-than-one criterion extracted a loading pattern which, in principle, followed the SMQ-II frame. Two items were dropped due to inappropriate loadings. The remaining SMQ-II seems to provide a consistent scale matching the findings in the literature. Nevertheless, possible shortcomings of the scale are also discussed. Data showed a higher perceived self-determination in girls, which seems to be compensated by their lower self-efficacy beliefs, leading to equality of females and males in overall science motivation scores. Additionally, the Big Five personality traits and science motivation components show little relationship.

  4. Beyond Einstein: from the Big Bang to black holes

    NASA Astrophysics Data System (ADS)

    White, Nicholas E.; Diaz, Alphonso V.

    2004-01-01

    How did the Universe begin? Does time have a beginning and an end? Does space have edges? Einstein's theory of relativity replied to these ancient questions with three startling predictions: that the Universe is expanding from a Big Bang; that black holes so distort space and time that time stops at their edges; and that a dark energy could be pulling space apart, sending galaxies forever beyond the edge of the visible Universe. Observations confirm these remarkable predictions, the last confirmed only four years ago. Yet Einstein's legacy is incomplete. His theory raises - but cannot answer - three profound questions: What powered the Big Bang? What happens to space, time and matter at the edge of a black hole? And what is the mysterious dark energy pulling the Universe apart? The Beyond Einstein program within NASA's Office of Space Science aims to answer these questions, employing a series of missions linked by powerful new technologies and complementary approaches to shared science goals. The program also serves as a potent force with which to enhance science education and science literacy.

  5. ``Big Bang" for NASA's Buck: Nearly Three Years of EUVE Mission Operations at UCB

    NASA Astrophysics Data System (ADS)

    Stroozas, B. A.; Nevitt, R.; McDonald, K. E.; Cullison, J.; Malina, R. F.

    1999-12-01

    After over seven years in orbit, NASA's Extreme Ultraviolet Explorer (EUVE) satellite continues to perform flawlessly and with no significant loss of science capabilities. EUVE continues to produce important and exciting science results and, with reentry not expected until 2003-2004, many more such discoveries await. In the nearly three years since the outsourcing of EUVE from NASA's Goddard Space Flight Center, the small EUVE operations team at the University of California at Berkeley (UCB) has successfully conducted all aspects of the EUVE mission -- from satellite operations, science and mission planning, and data processing, delivery, and archival, to software support, systems administration, science management, and overall mission direction. This paper discusses UCB's continued focus on automation and streamlining, in all aspects of the Project, as the means to maximize EUVE's overall scientific productivity while minimizing costs. Multitasking, non-traditional work roles, and risk management have led to expanded observing capabilities while achieving significant cost reductions and maintaining the mission's historical 99% data return. This work was funded under NASA Cooperative Agreement NCC5-138.

  6. Improving the Interoperability and Usability of NASA Earth Observation Data

    NASA Astrophysics Data System (ADS)

    Walter, J.; Berrick, S. W.; Murphy, K. J.; Mitchell, A. E.; Tilmes, C.

    2014-12-01

    NASA's Earth Science Data and Information System Project (ESDIS) is charged with managing, maintaining, and evolving NASA's Earth Observing System Data and Information System (EOSDIS) and is responsible for processing, archiving, and distributing NASA Earth Science data. The system supports a multitude of missions and serves diverse science research and other user communities. While NASA has made, and continues to make, great strides in the discoverability and accessibility of its earth observation data holdings, issues associated with data interoperability and usability still present significant challenges to realizing the full scientific and societal benefits of these data. This concern has been articulated by multiple government agencies, both U.S. and international, as well as other non-governmental organizations around the world. Among these is the White House Office of Science and Technology Policy who, in response, has launched the Big Earth Data Initiative and the Climate Data Initiative to address these concerns for U.S. government agencies. This presentation will describe NASA's approach for addressing data interoperability and usability issues with our earth observation data.

  7. MIT CSAIL and Lincoln Laboratory Task Force Report

    DTIC Science & Technology

    2016-08-01

    projects have been very diverse, spanning several areas of CSAIL concentration, including robotics, big data analytics, wireless communications, computing architectures and...to machine learning systems and algorithms, such as recommender systems, and "Big Data" analytics. Advanced computing architectures broadly refer to

  8. The big sur ecoregion sudden oak death adaptive management project: ecological monitoring

    Treesearch

    Allison C. Wickland; Kerri M. Frangioso; David M. Rizzo; Ross K. Meentemeyer

    2008-01-01

    The Big Sur area is one of the most ecologically diverse regions in California. Land preservation efforts are well established in Big Sur, including numerous preserves, state parks and the Los Padres National Forest. However, there are still many conservation threats that cut across these areas including exotic species (plants, animals, and pathogens) and alterations...

  9. 76 FR 7867 - Proposed Collection; Comment Request; Cancer Biomedical Informatics Grid® (caBIG®) Support...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-11

    ... proposed projects to be submitted to the Office of Management and Budget (OMB) for review and approval... Information Technology (CBIIT) launched the enterprise phase of the caBIG® initiative in early 2007... resources available through the caBIG® Enterprise Support Network (ESN), including the caBIG®...

  10. An Archaeological Reconnaissance Survey of the Proposed Channel Realignment Area at Big Stone-Whetstone Flood Control Project, Big Stone and Lac Qui Parle Counties, Minnesota.

    DTIC Science & Technology

    1980-08-01

    terms of local collections. Charles Hanson: Mr. Hanson is a local collector who has a sizeable prehistoric collection from the island in Artichoke Lake... Artichoke Lake is located approximately 17 miles northeast of Ortonville and does not pertain to this project. He did indicate that he has done some

  11. What Difference Does Quantity Make? On the Epistemology of Big Data in Biology

    PubMed Central

    Leonelli, Sabina

    2015-01-01

    Is big data science a whole new way of doing research? And what difference does data quantity make to knowledge production strategies and their outputs? I argue that the novelty of big data science does not lie in the sheer quantity of data involved, but rather in (1) the prominence and status acquired by data as commodity and recognised output, both within and outside of the scientific community; and (2) the methods, infrastructures, technologies, skills and knowledge developed to handle data. These developments generate the impression that data-intensive research is a new mode of doing science, with its own epistemology and norms. To assess this claim, one needs to consider the ways in which data are actually disseminated and used to generate knowledge. Accordingly, this paper reviews the development of sophisticated ways to disseminate, integrate and re-use data acquired on model organisms over the last three decades of work in experimental biology. I focus on online databases as prominent infrastructures set up to organise and interpret such data; and examine the wealth and diversity of expertise, resources and conceptual scaffolding that such databases draw upon. This illuminates some of the conditions under which big data need to be curated to support processes of discovery across biological subfields, which in turn highlights the difficulties caused by the lack of adequate curation for the vast majority of data in the life sciences. In closing, I reflect on the difference that data quantity is making to contemporary biology, the methodological and epistemic challenges of identifying and analyzing data given these developments, and the opportunities and worries associated with big data discourse and methods. PMID:25729586

  12. What Difference Does Quantity Make? On the Epistemology of Big Data in Biology.

    PubMed

    Leonelli, Sabina

    2014-06-01

    Is big data science a whole new way of doing research? And what difference does data quantity make to knowledge production strategies and their outputs? I argue that the novelty of big data science does not lie in the sheer quantity of data involved, but rather in (1) the prominence and status acquired by data as commodity and recognised output, both within and outside of the scientific community; and (2) the methods, infrastructures, technologies, skills and knowledge developed to handle data. These developments generate the impression that data-intensive research is a new mode of doing science, with its own epistemology and norms. To assess this claim, one needs to consider the ways in which data are actually disseminated and used to generate knowledge. Accordingly, this paper reviews the development of sophisticated ways to disseminate, integrate and re-use data acquired on model organisms over the last three decades of work in experimental biology. I focus on online databases as prominent infrastructures set up to organise and interpret such data; and examine the wealth and diversity of expertise, resources and conceptual scaffolding that such databases draw upon. This illuminates some of the conditions under which big data need to be curated to support processes of discovery across biological subfields, which in turn highlights the difficulties caused by the lack of adequate curation for the vast majority of data in the life sciences. In closing, I reflect on the difference that data quantity is making to contemporary biology, the methodological and epistemic challenges of identifying and analyzing data given these developments, and the opportunities and worries associated with big data discourse and methods.

  13. Big data in psychology: Introduction to the special issue.

    PubMed

    Harlow, Lisa L; Oswald, Frederick L

    2016-12-01

    The introduction to this special issue on psychological research involving big data summarizes the highlights of 10 articles that address a number of important and inspiring perspectives, issues, and applications. Four common themes that emerge in the articles with respect to psychological research conducted in the area of big data are mentioned, including: (a) The benefits of collaboration across disciplines, such as those in the social sciences, applied statistics, and computer science. Doing so assists in grounding big data research in sound theory and practice, as well as in affording effective data retrieval and analysis. (b) Availability of large data sets on Facebook, Twitter, and other social media sites that provide a psychological window into the attitudes and behaviors of a broad spectrum of the population. (c) Identifying, addressing, and being sensitive to ethical considerations when analyzing large data sets gained from public or private sources. (d) The unavoidable necessity of validating predictive models in big data by applying a model developed on 1 dataset to a separate set of data or hold-out sample. Translational abstracts that summarize the articles in very clear and understandable terms are included in Appendix A, and a glossary of terms relevant to big data research discussed in the articles is presented in Appendix B. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
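    Theme (d) above, hold-out validation, can be made concrete with a short sketch. This is a minimal pure-Python illustration on synthetic data, not a reproduction of any analysis in the special issue: a one-parameter model is fit on a training split and then judged only on data it never saw.

```python
# Hold-out validation sketch: fit on one split, evaluate on the other.
import random

random.seed(0)
# Synthetic data: y = 2x + Gaussian noise (illustrative only).
data = [(x, 2 * x + random.gauss(0, 0.5)) for x in range(100)]
random.shuffle(data)
train, holdout = data[:80], data[80:]

# Fit a no-intercept least-squares slope on the training split only.
slope = sum(x * y for x, y in train) / sum(x * x for x, _ in train)

# Judge the model on the hold-out sample it never influenced.
mse = sum((y - slope * x) ** 2 for x, y in holdout) / len(holdout)
print(f"slope ~ {slope:.2f}, hold-out MSE ~ {mse:.2f}")
```

A model that fits its training split well but degrades sharply on the hold-out sample is overfitting, which is precisely the false-positive risk the special-issue articles flag for large, high-dimensional datasets.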

  14. DEVELOPING THE TRANSDISCIPLINARY AGING RESEARCH AGENDA: NEW DEVELOPMENTS IN BIG DATA.

    PubMed

    Callaghan, Christian William

    2017-07-19

    In light of dramatic advances in big data analytics and the application of these advances in certain scientific fields, new potentialities exist for breakthroughs in aging research. Translating these new potentialities into research outcomes for aging populations, however, remains a challenge, as the underlying technologies which have enabled exponential increases in 'big data' have not yet enabled a commensurate era of 'big knowledge,' or similarly exponential increases in biomedical breakthroughs. Debates also reveal differences in the literature, with some arguing that big data analytics heralds a new era associated with the 'end of theory', one that makes the scientific method obsolete, where correlation supersedes causation and science can advance without theory and hypothesis testing. On the other hand, others argue that theory cannot be subordinate to data, no matter how comprehensive data coverage can ultimately become. Given these two tensions, namely between exponential increases in data absent exponential increases in biomedical research outputs, and between the promise of comprehensive data coverage and data-driven inductive versus theory-driven deductive modes of enquiry, this paper seeks to provide a critical review of certain theory and literature that offers useful perspectives on certain developments in big data analytics and their theoretical implications for aging research. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  15. Big Data in Psychology: Introduction to Special Issue

    PubMed Central

    Harlow, Lisa L.; Oswald, Frederick L.

    2016-01-01

    The introduction to this special issue on psychological research involving big data summarizes the highlights of 10 articles that address a number of important and inspiring perspectives, issues, and applications. Four common themes that emerge in the articles with respect to psychological research conducted in the area of big data are mentioned, including: 1. The benefits of collaboration across disciplines, such as those in the social sciences, applied statistics, and computer science. Doing so assists in grounding big data research in sound theory and practice, as well as in affording effective data retrieval and analysis. 2. Availability of large datasets on Facebook, Twitter, and other social media sites that provide a psychological window into the attitudes and behaviors of a broad spectrum of the population. 3. Identifying, addressing, and being sensitive to ethical considerations when analyzing large datasets gained from public or private sources. 4. The unavoidable necessity of validating predictive models in big data by applying a model developed on one dataset to a separate set of data or hold-out sample. Translational abstracts that summarize the articles in very clear and understandable terms are included in Appendix A, and a glossary of terms relevant to big data research discussed in the articles is presented in Appendix B. PMID:27918177
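    The hold-out validation highlighted in theme 4 can be sketched in plain Python. The toy threshold "model" and the synthetic data below are illustrative stand-ins, not any method from the special issue:

```python
import random

def holdout_validate(data, fit, score, test_frac=0.3, seed=42):
    """Split data, fit on the training part, score on the hold-out part."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    test, train = shuffled[:n_test], shuffled[n_test:]
    model = fit(train)
    return score(model, test)

# Hypothetical toy model: predict label 1 when x exceeds the training mean.
def fit(train):
    xs = [x for x, _ in train]
    return sum(xs) / len(xs)          # the "model" is just a threshold

def score(threshold, test):
    hits = sum((x > threshold) == (y == 1) for x, y in test)
    return hits / len(test)           # accuracy on data the model never saw

data = [(x, 1 if x > 5 else 0) for x in range(11)]   # made-up dataset
acc = holdout_validate(data, fit, score)
```

    The point is the separation, not the model: `acc` is computed only on records excluded from fitting, which is the safeguard against overfitting the articles call for.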

  16. Rethinking big data: A review on the data quality and usage issues

    NASA Astrophysics Data System (ADS)

    Liu, Jianzheng; Li, Jie; Li, Weifeng; Wu, Jiansheng

    2016-05-01

    The recent explosion of big data publications has well documented the rise of big data and its ongoing prevalence. Different types of 'big data' have emerged and have greatly enriched spatial information sciences and related fields in terms of breadth and granularity. Studies that were difficult to conduct in the past due to data availability can now be carried out. However, big data brings many 'big errors' in data quality and data usage, and cannot be used as a substitute for sound research design and solid theories. We identify and summarize the problems faced by current big data studies with regard to data collection, processing and analysis: inauthentic data collection; information incompleteness and noise; unrepresentativeness; consistency and reliability; and ethical issues. Cases from empirical studies are provided as evidence for each problem. We propose that big data research should closely follow good scientific practice to provide reliable and scientific 'stories', as well as explore and develop techniques and methods to mitigate or rectify the 'big errors' brought by big data.

  17. AirMSPI PODEX Big Sur Ellipsoid Images

    Atmospheric Science Data Center

    2013-12-11

    Browse images from the PODEX 2013 campaign: Big Sur target, 02/03/2013, ellipsoid-projected. For more information, see the Data Product Specifications (DPS).

  18. 'Big data', Hadoop and cloud computing in genomics.

    PubMed

    O'Driscoll, Aisling; Daugelaite, Jurate; Sleator, Roy D

    2013-10-01

    Since the completion of the Human Genome Project at the turn of the century, there has been an unprecedented proliferation of genomic sequence data. A consequence of this is that the medical discoveries of the future will largely depend on our ability to process and analyse large genomic data sets, which continue to expand as the cost of sequencing decreases. Herein, we provide an overview of cloud computing and big data technologies, and discuss how such expertise can be used to deal with biology's big data sets. In particular, big data technologies such as the Apache Hadoop project, which provides distributed and parallelised data processing and analysis of petabyte (PB) scale data sets, will be discussed, together with an overview of the current usage of Hadoop within the bioinformatics community. Copyright © 2013 Elsevier Inc. All rights reserved.
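    The distributed processing pattern that Hadoop parallelizes can be illustrated with a minimal single-process sketch of the map, shuffle, and reduce phases, here counting k-mers in made-up sequencing reads (a pure-Python analogy, not Hadoop's actual API):

```python
from collections import defaultdict
from itertools import chain

def mapper(read, k=3):
    # Map phase: emit (k-mer, 1) pairs for one sequencing read.
    return [(read[i:i + k], 1) for i in range(len(read) - k + 1)]

def shuffle(pairs):
    # Shuffle phase: group emitted values by key, as Hadoop does
    # between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reducer(key, values):
    # Reduce phase: sum the counts for one k-mer.
    return key, sum(values)

reads = ["GATTACA", "TTACAGA"]        # hypothetical reads
pairs = chain.from_iterable(mapper(r) for r in reads)
counts = dict(reducer(k, v) for k, v in shuffle(pairs).items())
# counts["TTA"] == 2: the 3-mer appears once in each read
```

    In Hadoop the same three phases run across many machines, with the framework handling the shuffle over the network; the per-key independence of the reducer is what makes petabyte-scale parallelism possible.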

  19. Image Mosaicking Approach for a Double-Camera System in the GaoFen2 Optical Remote Sensing Satellite Based on the Big Virtual Camera.

    PubMed

    Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng

    2017-06-20

    The linear array push broom imaging mode is widely used for high resolution optical satellites (HROS). Using double-cameras attached by a high-rigidity support along with push broom imaging is one method to enlarge the field of view while ensuring high resolution. High accuracy image mosaicking is the key factor of the geometrical quality of complete stitched satellite imagery. This paper proposes a high accuracy image mosaicking approach based on the big virtual camera (BVC) in the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system using forward-projection and backward-projection to obtain the corresponding single virtual image. After an on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The paper subtly uses the concept of the big virtual camera to obtain a stitched image and the corresponding high accuracy rational function model (RFM) for concurrent post processing. Experiments verified that the proposed method can achieve seamless mosaicking while maintaining the geometric accuracy.
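    The forward/backward projection step the abstract describes can be sketched in one dimension. The affine pixel-to-ground mappings and the offset/scale values below are hypothetical stand-ins for the rigorous imaging model:

```python
# Toy 1-D sketch of the re-projection step: each real detector maps a pixel
# index to a ground coordinate (forward projection), and the big virtual
# camera inverts its own model to place that ground point on the virtual
# detector (backward projection). Offsets/scales are made-up values.

def make_camera(offset, scale):
    forward = lambda pixel: offset + scale * pixel          # pixel -> ground
    backward = lambda ground: (ground - offset) / scale     # ground -> pixel
    return forward, backward

cam1_fwd, _ = make_camera(offset=0.0, scale=1.0)    # first TDI-CCD strip
cam2_fwd, _ = make_camera(offset=95.0, scale=1.0)   # overlapping second strip
_, virt_bwd = make_camera(offset=0.0, scale=1.0)    # big virtual camera

# Re-project each strip's pixels onto the common virtual detector plane;
# the strips land at consistent coordinates, so stitching is a placement.
strip1 = [virt_bwd(cam1_fwd(p)) for p in range(100)]   # virtual cols 0..99
strip2 = [virt_bwd(cam2_fwd(p)) for p in range(100)]   # virtual cols 95..194
```

    The real method additionally resamples image intensities and uses on-orbit calibrated, non-affine models, but the round trip through ground coordinates into one shared virtual geometry is the core idea.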

  20. Challenges and Potential Solutions for Big Data Implementations in Developing Countries

    PubMed Central

    Mayan, J.C.; García, M.J.; Almerares, A.A.; Househ, M.

    2014-01-01

    Summary Background The volume of data, the velocity with which they are generated, and their variety and lack of structure hinder their use. This creates the need to change the way information is captured, stored, processed, and analyzed, leading to the paradigm shift called Big Data. Objectives To describe the challenges and possible solutions for developing countries when implementing Big Data projects in the health sector. Methods A non-systematic review of the literature was performed in PubMed and Google Scholar. The following keywords were used: “big data”, “developing countries”, “data mining”, “health information systems”, and “computing methodologies”. A thematic review of selected articles was performed. Results There are challenges when implementing any Big Data program including exponential growth of data, special infrastructure needs, need for a trained workforce, need to agree on interoperability standards, privacy and security issues, and the need to include people, processes, and policies to ensure their adoption. Developing countries have particular characteristics that hinder further development of these projects. Conclusions The advent of Big Data promises great opportunities for the healthcare field. In this article, we attempt to describe the challenges developing countries would face and enumerate the options to be used to achieve successful implementations of Big Data programs. PMID:25123719

  1. Disproof of Big Bang's Foundational Expansion Redshift Assumption Overthrows the Big Bang and Its No-Center Universe and Is Replaced by a Spherically Symmetric Model with Nearby Center with the 2.73 K CMR Explained by Vacuum Gravity and Doppler Effects

    NASA Astrophysics Data System (ADS)

    Gentry, Robert

    2015-04-01

    Big bang theory holds that its central expansion redshift assumption quickly reduced the theorized radiation flash to ~10^10 K, and then over 13.8 billion years reduced it further to the present 2.73 K CMR. Weinberg claims this 2.73 K value agrees with big bang theory so well that "...we can be sure that this radiation was indeed left over from a time about a million years after the 'big bang.'" (TF3M, p180, 1993 ed.) Actually his conclusion is all based on big bang's in-flight wavelength expansion being a valid physical process. In fact all his surmising is nothing but science fiction because our disproof of GR-induced in-flight wavelength expansion [1] definitely proves the 2.73 K CMR could never have been the wavelength-expanded relic of any radiation, much less the presumed big bang's. This disproof of big bang's premier prediction is a death blow to the big bang, as it is also to the idea that the redshifts in Hubble's redshift relation are expansion shifts; this negates Friedmann's everywhere-the-same, no-center universe concept and proves it does have a nearby Center, a place which can be identified in Psalm 103:19 and in Revelation 20:11 as the location of God's eternal throne. Widely published (Science, Nature, ARNS) evidence of Earth's fiat creation will also be presented. The research is supported by the God of Creation. This paper [1] is in for publication.

  2. EarthServer: a Summary of Achievements in Technology, Services, and Standards

    NASA Astrophysics Data System (ADS)

    Baumann, Peter

    2015-04-01

    Big Data in the Earth sciences, the tera- to exabyte archives, mostly consist of coverage data, defined by ISO and OGC as the digital representation of some space-time varying phenomenon. Common examples include 1-D sensor timeseries, 2-D remote sensing imagery, 3-D x/y/t image timeseries and x/y/z geology data, and 4-D x/y/z/t atmosphere and ocean data. Analytics on such data requires on-demand processing of sometimes significant complexity, such as computing the Fourier transform of satellite images. As network bandwidth limits prohibit the transfer of such Big Data, it is indispensable to devise protocols allowing clients to task flexible and fast processing on the server. The transatlantic EarthServer initiative, running from 2011 through 2014, united 11 partners to establish Big Earth Data Analytics. A key ingredient has been flexibility for users to ask whatever they want, not impeded and complicated by system internals. The EarthServer answer is to use high-level, standards-based query languages which unify data and metadata search in a simple yet powerful way. A second key ingredient is scalability, which ultimately can only be achieved through parallelization. In the past, parallelizing code has been done at compile time, usually with manual intervention. The EarthServer approach is to perform a semantics-based dynamic distribution of query fragments based on network optimization and further criteria. The EarthServer platform comprises rasdaman, the pioneer and leading Array DBMS built for any-size multi-dimensional raster data, extended with support for irregular grids and general meshes; in-situ retrieval (evaluation of database queries on existing archive structures, avoiding data import and, hence, duplication); and the aforementioned distributed query processing. Additionally, Web clients for multi-dimensional data visualization are being established. Client/server interfaces are strictly based on OGC and W3C standards, in particular the Web Coverage Processing Service (WCPS), which defines a high-level coverage query language. Reviewers have attested that "With no doubt the project has been shaping the Big Earth Data landscape through the standardization activities within OGC, ISO and beyond". We present the project approach, its outcomes and impact on standardization and Big Data technology, and vistas for the future.
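    A flavour of the WCPS query language can be sketched as a client-side request; the coverage name `AvgLandTemp` and the endpoint URL below are hypothetical placeholders, not actual EarthServer deployments:

```python
# Sketch of assembling a WCPS ProcessCoverages request (WCS Processing
# Extension style). Coverage id and server URL are made-up placeholders.
from urllib.parse import urlencode

query = (
    'for $c in (AvgLandTemp) '                  # hypothetical coverage id
    'return encode($c[ansi("2014-07")], "png")' # slice one time step, as PNG
)

params = urlencode({
    "service": "WCS",
    "version": "2.0.1",
    "request": "ProcessCoverages",
    "query": query,
})
url = "https://example.org/rasdaman/ows?" + params   # placeholder endpoint
```

    The query expresses both the data selection (the time slice) and the processing (encoding) declaratively, which is what lets a server like rasdaman optimize and distribute the evaluation instead of shipping raw arrays to the client.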

  3. Opportunities and challenges of big data for the social sciences: The case of genomic data.

    PubMed

    Liu, Hexuan; Guo, Guang

    2016-09-01

    In this paper, we draw attention to one unique and valuable source of big data, genomic data, by demonstrating the opportunities they provide to social scientists. We discuss different types of large-scale genomic data and recent advances in statistical methods and computational infrastructure used to address challenges in managing and analyzing such data. We highlight how these data and methods can be used to benefit social science research. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Focal Plant Observations as a Standardised Method for Pollinator Monitoring: Opportunities and Limitations for Mass Participation Citizen Science.

    PubMed

    Roy, Helen E; Baxter, Elizabeth; Saunders, Aoine; Pocock, Michael J O

    2016-01-01

    Recently there has been increasing focus on monitoring pollinating insects, due to concerns about their declines, and interest in the role of volunteers in monitoring pollinators, particularly bumblebees, via citizen science. The Big Bumblebee Discovery was a one-year citizen science project run by a partnership of EDF Energy, the British Science Association and the Centre for Ecology & Hydrology which sought to assess the influence of the landscape at multiple scales on the diversity and abundance of bumblebees. Timed counts of bumblebees (Bombus spp.; identified to six colour groups) visiting focal plants of lavender (Lavandula spp.) were carried out by about 13 000 primary school children (7-11 years old) from over 4000 schools across the UK. 3948 reports were received, totalling 26 868 bumblebees. We found that while the wider landscape type had no significant effect on reported bumblebee abundance, the local proximity to flowers had a significant effect (fewer bumblebees where other flowers were reported to be >5 m away from the focal plant). However, the rate of mis-identification, revealed by photographs uploaded by participants and a photo-based quiz, was high. Our citizen science results support recent research on the importance of local floral resources on pollinator abundance. Timed counts of insects visiting a lure plant are potentially an effective approach for standardised pollinator monitoring, engaging a large number of participants with a simple protocol. However, the relatively high rate of mis-identifications (compared to reports from previous pollinator citizen science projects) highlights the importance of investing in resources to train volunteers. Also, to be a scientifically valid method for enquiry, citizen science data need to be of sufficiently high quality, so receiving supporting evidence (such as photographs) would allow this to be tested and records to be verified.

  5. Commentary: Leveraging discovery science to advance child and adolescent psychiatric research--a commentary on Zhao and Castellanos 2016.

    PubMed

    Mennes, Maarten

    2016-03-01

    'Big Data' and 'Population Imaging' are becoming integral parts of inspiring research aimed at delineating the biological underpinnings of psychiatric disorders. The scientific strategies currently associated with big data and population imaging are typically embedded in so-called discovery science, thereby pointing to the hypothesis-generating rather than hypothesis-testing nature of discovery science. In this issue, Yihong Zhao and F. Xavier Castellanos provide a compelling overview of strategies for discovery science aimed at progressing our understanding of neuropsychiatric disorders. In particular, they focus on efforts in genetic and neuroimaging research, which, together with extended behavioural testing, form the main pillars of psychopathology research. © 2016 Association for Child and Adolescent Mental Health.

  6. Not Just for Big Dogs: the NSF Career Program from AN Undergraduate College Perspective

    NASA Astrophysics Data System (ADS)

    Harpp, K. S.

    2011-12-01

    Relatively few NSF CAREER grants are awarded to faculty at undergraduate colleges, leading to a perception that the program is geared for major research institutions. The goal of this presentation is to dispel this misconception by describing a CAREER grant at a small, liberal arts institution. Because high quality instruction is the primary mission of undergraduate colleges, the career development plan for this proposal was designed to use research as a teaching tool. Instead of distinct sets of objectives for the research and education components, the proposal's research and teaching plans were integrated across the curriculum to maximize opportunities for undergraduate engagement. The driving philosophy was that students learn science by doing it. The proposal plan therefore created opportunities for students to be involved in hands-on, research-driven projects from their first through senior years. The other guiding principle was that students become engaged in science when they experience its real life applications. Stage 1 of the project provided mechanisms to draw students into science in two ways. The first was development of an inquiry-based curriculum for introductory classes, emphasizing practical applications and hands-on learning. The goal was to energize, generate confidence, and provide momentum for early science students to pursue advanced courses. The second mechanism was the development of a science outreach program for area K-9 schools, designed and implemented by undergraduates, an alternative path for students to discover science. Stages 2 and 3 consisted of increasingly advanced project-based courses, with in-depth training in research skills. The courses were designed along chemical, geological, and environmental themes, to capture the most student interest. 
The students planned their projects within a set of constraints designed to lead them to fundamental concepts and centered on questions of importance to the local community, thereby reinforcing the accessibility and relevance of science. The final stage was independent research with the PI on a focused research question, the equivalent of the research plan in most CAREER proposals. The overarching research objectives had to satisfy 2 criteria: a) questions had to be accessible and compelling (e.g., investigating the origin of volcanic islands in the Galapagos); and b) the project had to be divisible into tractable units for students, yet substantive enough for presentation at national meetings. Together, the projects ultimately addressed the PI's major research questions. The impacts of this grant were far-reaching. First, it supported a multi-year research project for the PI, which ultimately led to publications and successful proposals. More than 25 undergraduates carried out research projects, most presenting at national conferences. The outreach component engaged over 60 undergraduates; at least 20 have pursued science-teaching careers and another 25 have gone on to science graduate studies. The undergraduates brought hands-on science to more than 15,000 school children. Less obviously, the grant provided leverage for the PI to expand projects beyond their initial scope, involving more students and establishing on-going collaboration with colleagues at research institutions that have continued beyond the life of the grant.

  7. [Structural Change, Contextuality, and Transfer in Health Promotion--Sustainable Implementation of the BIG Project].

    PubMed

    Rütten, A; Frahsa, A; Rosenhäger, N; Wolff, A

    2015-09-01

    The BIG approach aims at promoting physical activity and health among socially disadvantaged women. BIG has been developed and sustainably implemented in Erlangen/Bavaria. Subsequently, it has been transferred to other communities and states in Germany. Crucial factors for sustainability and transfer in BIG are (1) lifestyle and policy analysis, (2) assets approach, (3) empowerment of target group, (4) enabling of policy-makers and professionals. © Georg Thieme Verlag KG Stuttgart · New York.

  8. Big Data & Datamining: Using APIs to computationally determine who follows space science, & what do they care about?

    NASA Astrophysics Data System (ADS)

    Gay, Pamela L.; Bakerman, Maya; Graziano, Nancy; Murph, Susan; Reiheld, Alison; CosmoQuest

    2017-10-01

    In today's connected world, scientists and space science projects are turning to social media outlets like Twitter to share achievements, request aid, and discuss the issues of our profession. Maintaining these disparate feeds requires time and resources that are already in short supply. To justify these efforts, we must examine the data to determine: are we speaking to our intended audiences; are our varied efforts needed; and what types of messages achieve the greatest interactions. The software used to support this project is available on GitHub. Previously, it has been unclear whether our day-to-day social media efforts have been merely preaching to one homogeneous choir from which we have all drawn our audiences, or whether our individual efforts have been able to reach into different communities to multiply our impact. In this preliminary study, we examine the social media audiences of several space science Twitter feeds that relate to: podcasting; professional societies; individual programs; and individuals. This study directly measures the overlap in audiences and the diversity of interests held by these audiences. Through statistical analysis, we can discern whether these audiences are all drawn from one single population, or whether we are sampling different base populations with different feeds. The data generated in this project allow us to look beyond how our audiences interact with space science, with the added benefit of revealing their other interests. These interests are reflected by the non-space-science accounts they follow on Twitter. This information will allow us to effectively recruit new people from space-science-adjacent interests. After applying large data analytics and statistics to social media interactions, we can model online communications, audience population types, and the causal relationships between how we tweet and how our audiences interact. With this knowledge, we are then able to institute reliable communications and effective interactions with our target audience. This work is supported through NASA cooperative agreement NNX17AD20A.
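    One way to quantify the audience overlap such a study measures is a Jaccard index over follower-ID sets; the follower sets below are made-up stand-ins for data retrieved via the Twitter API:

```python
# Hedged sketch: given follower-ID sets for two feeds, the Jaccard index
# gives the share of the combined audience that follows both. The IDs here
# are hypothetical, not real Twitter account IDs.

def jaccard(a, b):
    """Jaccard similarity of two sets, 0.0 when both are empty."""
    return len(a & b) / len(a | b) if a | b else 0.0

podcast_followers = {101, 102, 103, 104}        # hypothetical feed audience
society_followers = {103, 104, 105, 106, 107}   # hypothetical feed audience

overlap = jaccard(podcast_followers, society_followers)   # 2 shared of 7 total
```

    Values near 1 would indicate one homogeneous choir; values near 0 would suggest the feeds reach distinct base populations.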

  9. Force Projection, Strategic Agility and the Big Meltdown

    DTIC Science & Technology

    2001-05-18

    Due to global warming, the polar icepack which... The polar icecap which covers the Arctic Ocean is melting. It is a well-known scientific fact. Global warming is the generally... operational factors and functions, as applicable. ... During this and the last century, researchers have...

  10. Picture of the Week: Making the (reactive) case for explosives science

    Science.gov Websites


  11. Streaming Swarm of Nano Space Probes for Modern Analytical Methods Applied to Planetary Science

    NASA Astrophysics Data System (ADS)

    Vizi, P. G.; Horvath, A. F.; Berczi, Sz.

    2017-11-01

    Streaming swarms make it possible to collect data over large fields at one time. The whole streaming fleet can behave like one big organization and can be realized as a planetary mission solution with stream-type analytical methods.

  12. Opening the Black Box: Understanding the Science Behind Big Data and Predictive Analytics.

    PubMed

    Hofer, Ira S; Halperin, Eran; Cannesson, Maxime

    2018-05-25

    Big data, smart data, predictive analytics, and other similar terms are ubiquitous in the lay and scientific literature. However, despite the frequency of usage, these terms are often poorly understood, and evidence of their disruption to clinical care is hard to find. This article aims to address these issues by first defining and elucidating the term big data, exploring the ways in which modern medical data, both inside and outside the electronic medical record, meet the established definitions of big data. We then define the term smart data and discuss the transformations necessary to make big data into smart data. Finally, we examine the ways in which this transition from big to smart data will affect what we do in research, retrospective work, and ultimately patient care.

  13. What Role for Law, Human Rights, and Bioethics in an Age of Big Data, Consortia Science, and Consortia Ethics? The Importance of Trustworthiness.

    PubMed

    Dove, Edward S; Özdemir, Vural

    2015-09-01

    The global bioeconomy is generating new paradigm-shifting practices of knowledge co-production, such as collective innovation; large-scale, data-driven global consortia science (Big Science); and consortia ethics (Big Ethics). These bioeconomic and sociotechnical practices can be forces for progressive social change, but they can also raise predicaments at the interface of law, human rights, and bioethics. In this article, we examine one such double-edged practice: the growing, multivariate exploitation of Big Data in the health sector, particularly by the private sector. Commercial exploitation of health data for knowledge-based products is a key aspect of the bioeconomy and is also a topic of concern among publics around the world. It is exacerbated in the current age of globally interconnected consortia science and consortia ethics, which is characterized by accumulating epistemic proximity, diminished academic independence, "extreme centrism", and conflicted/competing interests among innovation actors. Extreme centrism is of particular importance as a new ideology emerging from consortia science and consortia ethics; this relates to invariably taking a middle-of-the-road populist stance, even in the event of human rights breaches, so as to sustain the populist support needed for consortia building and collective innovation. What role do law, human rights, and bioethics, separate and together, have to play in addressing these predicaments and opportunities in early 21st century science and society? One answer we propose is an intertwined ethico-legal normative construct, namely trustworthiness. By considering trustworthiness as a central pillar at the intersection of law, human rights, and bioethics, we enable others to trust us, which in turn allows different actors (both nonprofit and for-profit) to operate more justly in consortia science and ethics, as well as to access and responsibly use health data for public benefit.

  14. What Role for Law, Human Rights, and Bioethics in an Age of Big Data, Consortia Science, and Consortia Ethics? The Importance of Trustworthiness

    PubMed Central

    Dove, Edward S.; Özdemir, Vural

    2015-01-01

    The global bioeconomy is generating new paradigm-shifting practices of knowledge co-production, such as collective innovation; large-scale, data-driven global consortia science (Big Science); and consortia ethics (Big Ethics). These bioeconomic and sociotechnical practices can be forces for progressive social change, but they can also raise predicaments at the interface of law, human rights, and bioethics. In this article, we examine one such double-edged practice: the growing, multivariate exploitation of Big Data in the health sector, particularly by the private sector. Commercial exploitation of health data for knowledge-based products is a key aspect of the bioeconomy and is also a topic of concern among publics around the world. It is exacerbated in the current age of globally interconnected consortia science and consortia ethics, which is characterized by accumulating epistemic proximity, diminished academic independence, “extreme centrism”, and conflicted/competing interests among innovation actors. Extreme centrism is of particular importance as a new ideology emerging from consortia science and consortia ethics; this relates to invariably taking a middle-of-the-road populist stance, even in the event of human rights breaches, so as to sustain the populist support needed for consortia building and collective innovation. What role do law, human rights, and bioethics—separate and together—have to play in addressing these predicaments and opportunities in early 21st century science and society? One answer we propose is an intertwined ethico-legal normative construct, namely trustworthiness. By considering trustworthiness as a central pillar at the intersection of law, human rights, and bioethics, we enable others to trust us, which in turn allows different actors (both nonprofit and for-profit) to operate more justly in consortia science and ethics, as well as to access and responsibly use health data for public benefit. PMID:26345196

  15. A New Approach to Data Publication in Ocean Sciences

    NASA Astrophysics Data System (ADS)

    Lowry, Roy; Urban, Ed; Pissierssens, Peter

    2009-12-01

    Data are collected from ocean sciences activities that range from a single investigator working in a laboratory to large teams of scientists cooperating on big, multinational, global ocean research projects. What these activities have in common is that all result in data, some of which are used as the basis for publications in peer-reviewed journals. However, two major problems regarding data remain. First, many data valuable for understanding ocean physics, chemistry, geology, biology, and how the oceans operate in the Earth system are never archived or made accessible to other scientists. Data underlying traditional journal articles are often difficult to obtain. Second, when scientists do contribute data to databases, their data become freely available, with little acknowledgment and no contribution to their career advancement. To address these problems, stronger ties must be made between data repositories and academic journals, and a “digital backbone” needs to be created for data related to journal publications.

  16. Methodological challenges and analytic opportunities for modeling and interpreting Big Healthcare Data.

    PubMed

    Dinov, Ivo D

    2016-01-01

    Managing, processing and understanding big healthcare data is challenging, costly and demanding. Without a robust fundamental theory for representation, analysis and inference, a roadmap for uniform handling and analyzing of such complex data remains elusive. In this article, we outline various big data challenges, opportunities, modeling methods and software techniques for blending complex healthcare data, advanced analytic tools, and distributed scientific computing. Using imaging, genetic and healthcare data we provide examples of processing heterogeneous datasets using distributed cloud services, automated and semi-automated classification techniques, and open-science protocols. Despite substantial advances, new innovative technologies need to be developed that enhance, scale and optimize the management and processing of large, complex and heterogeneous data. Stakeholder investments in data acquisition, research and development, computational infrastructure and education will be critical to realize the huge potential of big data, to reap the expected information benefits and to build lasting knowledge assets. Multi-faceted proprietary, open-source, and community developments will be essential to enable broad, reliable, sustainable and efficient data-driven discovery and analytics. Big data will affect every sector of the economy and its hallmark will be 'team science'.

  17. Beyond Einstein: From the Big Bang to Black Holes

    NASA Astrophysics Data System (ADS)

    White, N.

    Beyond Einstein is a science-driven program of missions, education and outreach, and technology, to address three questions: What powered the Big Bang? What happens to space, time, and matter at the edge of a Black Hole? What is the mysterious Dark Energy pulling the universe apart? To address the science objectives, Beyond Einstein contains several interlinked elements. The strategic missions Constellation-X and LISA primarily investigate the nature of black holes. Constellation-X is a spectroscopic observatory that uses X-ray emitting atoms as clocks to follow the fate of matter falling into black holes. LISA, the first space-based gravitational wave observatory, uses gravitational waves to measure the dynamic structure of space and time around black holes. Moderate-sized probes are fully competed, peer-reviewed missions (300M-450M) launched every 3-5 years to address the focused science goals: 1) Determine the nature of the Dark Energy that dominates the universe, 2) Search for the signature of the beginning of the Big Bang in the microwave background, and 3) Take a census of Black Holes of all sizes and ages in the universe. The final element is a Technology Program to enable ultimate Vision Missions (after 2015) to directly detect gravitational waves echoing from the beginning of the Big Bang, and to directly image matter near the event horizon of a Black Hole. An associated Education and Public Outreach Program will inspire the next generation of scientists, and support national science standards and benchmarks.

  18. Big agronomic data validates an oxymoron: Sustainable intensification under climate change

    USDA-ARS?s Scientific Manuscript database

    Crop science is increasingly embracing big data to reconcile the apparent rift between intensification of food production and sustainability of a steadily stressed production base. A strategy based on long-term agroecosystem research and modeling simulation of crops, crop rotations and cropping sys...

  19. The International Big History Association

    ERIC Educational Resources Information Center

    Duffy, Michael; Duffy, D'Neil

    2013-01-01

    IBHA, the International Big History Association, was organized in 2010 and "promotes the unified, interdisciplinary study and teaching of history of the Cosmos, Earth, Life, and Humanity." This is the vision that Montessori embraced long before the discoveries of modern science fleshed out the story of the evolving universe. "Big…

  20. Hawaii

    Atmospheric Science Data Center

    2014-05-15

    article title: Big Island, Hawaii. Multi-angle Imaging SpectroRadiometer (MISR) images of the Big Island of Hawaii, April - June 2000. The images have been rotated so that ... NASA's Goddard Space Flight Center, Greenbelt, MD. The MISR data were obtained from the NASA Langley Research Center Atmospheric Science Data Center.

  1. Translating Big Data into Big Climate Ideas: Communicating Future Climate Scenarios to Increase Interdisciplinary Engagement.

    EPA Science Inventory

    Climate change has emerged as the significant environmental challenge of the 21st century. Therefore, understanding our changing world has forced researchers from many different fields of science to join together to tackle complicated research questions. The climate change resear...

  2. Teaching Teachers: Bringing First-Rate Science to the Elementary Classroom. An NSTA Press Journals Collection.

    ERIC Educational Resources Information Center

    Smith, Betty, Ed.

    This document presents a collection of papers published in the "Teaching Teachers" column in the elementary-level journal, "Science and Children." Contents include: (1) "Science is Part of the Big Picture: Teachers Become Science Learners" (Anita Greenwood); (2) "Reaching the Reluctant Science Teacher: Learning How To Teach Inquiry-Based Science"…

  3. Bridging the knowledge gap between Big Data producers and consumers

    NASA Astrophysics Data System (ADS)

    Peng, G. S.; Worley, S. J.

    2015-12-01

    Most weather data is produced, disseminated and consumed by expert users in large national operational centers or laboratories. Data 'ages' off their systems in days or weeks. While archives exist, would-be users often lack the credentials necessary to obtain an account to access or search their contents. Moreover, operational centers and many national archives lack the mandate and the resources to serve non-expert users. The National Center for Atmospheric Research (NCAR) Research Data Archive (RDA), rda.ucar.edu, was created over 40 years ago to collect data for NCAR's internal Big Science projects such as the NCEP/NCAR Reanalysis Project. Over time, the data holdings have grown to 1.8+ Petabytes spanning 600+ datasets. The user base has also grown; in 2014, we served 1.1 Petabytes of data to over 11,000 unique users. The RDA works with national centers, such as NCEP, ECMWF and JMA, to make their data available to worldwide audiences and mutually support data access at the production source. We have become not just an open-access data center, but also a data education center. Each dataset archived at the RDA is assigned to a data specialist (DS) who curates the data. If a user has a question not answered in the dataset information web pages prepared by the DS, they can call or email a skilled DS for further clarification. The RDA's diverse staff, with academic training in meteorology, oceanography, engineering (electrical, civil, ocean and database), mathematics, physics, chemistry and information science, means we likely have someone who "speaks your language." Erroneous data assumptions are the Achilles heel of Big Data. It doesn't matter how much data you crunch if the data is not what you think it is. Data discovery is another difficult Big Data problem; one can only solve problems with data if one can find the right data. Metadata, both machine- and human-generated, underpin the RDA data search tools.
The RDA has stepped in to fill the gap between data producers and users.
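
The metadata-driven discovery the RDA abstract describes can be illustrated with a minimal inverted-index sketch. The dataset IDs and descriptions below are invented placeholders, not actual RDA holdings.

```python
# Minimal sketch of metadata-backed data discovery: an inverted index
# mapping terms to dataset IDs. Dataset names/descriptions are invented.
from collections import defaultdict

datasets = {
    "ds083.2": "NCEP FNL operational model global tropospheric analyses",
    "ds090.0": "NCEP/NCAR Reanalysis Project monthly mean pressure-level data",
    "ds277.0": "NOAA optimum interpolation sea surface temperature analysis",
}

# Build the index: lowercase term -> set of dataset IDs containing it.
index = defaultdict(set)
for ds_id, desc in datasets.items():
    for term in desc.lower().split():
        index[term].add(ds_id)

def search(query):
    """Return dataset IDs whose metadata contains every query term."""
    terms = query.lower().split()
    hits = [index.get(t, set()) for t in terms]
    return sorted(set.intersection(*hits)) if hits else []
```

Real search tools add stemming, synonym expansion, and ranking, but the intersection-of-postings idea is the same.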

  4. “Big Data” for breast cancer: where to look and what you will find

    PubMed Central

    Clare, Susan E; Shaw, Pamela L

    2016-01-01

    Accessing the massive amount of breast cancer data that are currently publicly available may seem daunting to the brand new graduate student embarking on his/her first project, or even to the seasoned lab leader who may wish to explore a new avenue of investigation. In this review, we provide an overview of data resources focusing on high-throughput data and on cancer-related data resources. Although not intended as an exhaustive list, the information included in this review will provide a jumping-off point with descriptions of and links to the various data resources of interest. The review is divided into six sections: (1) compendia of data resources; (2) biomolecular repository "Hubs"; (3) a list of cancer-related data resources, which provides information on the contents of each resource and whether the resource enables upload and analysis of investigator-provided data; (4) a list of seminal publications containing specific breast cancer data, e.g., publications from METABRIC, Sanger, TCGA; (5) a list of journals focused on data science that include cancer-related "Big Data"; and (6) miscellaneous resources. PMID:28164152

  5. Using the Big Six Research Process. The Coconut Crab from Guam and Other Stories: Writing Myths, Fables, and Tall Tales.

    ERIC Educational Resources Information Center

    Jansen, Barbara A.; Culpepper, Susan N.

    1996-01-01

    Using the Big Six research process, students at Live Oak Elementary (Round Rock, TX) supplemented information from traditional print and electronic sources with e-mail exchanges around the world to complete a library research collaborative project culminating in an original folk tale. Describes the Big Six process and how it was applied. (PEN)

  6. The eastern states exposition: an exploration of Big E tourist expenditures

    Treesearch

    Robert S. Bristow; Heather Cantillon

    2001-01-01

    The purpose of this paper is to prepare a visitor economic expenditure study for the 1999 Eastern States Exposition, better known as the Big E. The study was executed as part of a class project in Recreation Geography offered in the Fall 1999 semester at Westfield State College. The students undertook an economic expenditure study at the Big E by studying tourism...

  7. Characterizing the changes in teaching practice during first semester implementation of an argument-based inquiry approach in a middle school science classroom

    NASA Astrophysics Data System (ADS)

    Pinney, Brian Robert John

    The purpose of this study was to characterize ways in which teaching practice in a classroom undergoing first-semester implementation of an argument-based inquiry approach changes in whole-class discussion. Because argument is explicitly called for in the Next Generation Science Standards and is currently a rare practice in teaching, many teachers will have to transform their teaching practice to include this feature. Most studies on Argument-Based Inquiry (ABI) agree that development of argument does not come easily and is only acquired through practice. Few studies have examined the ways in which teaching practice changes in relation to the big idea or disciplinary core idea (NGSS), the development of dialogue, and/or the development of argument during first-semester implementation of an argument-based inquiry approach. To explore these areas, this study posed three primary research questions: (1) How does a teacher in his first semester of Science Writing Heuristic professional development make use of the "big idea"?, (1a) Is the indicated big idea consistent with NGSS core concepts?, (2) How did the dialogue in whole-class discussion change during the first semester of argument-based inquiry professional development?, and (3) How did the argument in whole-class discussion change during the first semester of argument-based inquiry professional development? This semester-long study, which took place in a middle school in a rural Midwestern city, was grounded in interactive constructivism and utilized a qualitative design to identify the ways in which the teacher utilized big ideas and how dialogue and argumentative dialogue developed over time. The purposefully selected teacher in this study provided a unique situation: he was in his first semester of professional development using the Science Writing Heuristic approach to argument-based inquiry, with 19 students who had two prior years' experience with ABI.
Multiple sources of data were collected, including classroom video with transcripts, a teacher interview, researcher field notes, student journals, teacher lesson plans from previous years, and a student questionnaire. Data analysis used a basic qualitative approach. The results showed (1) only the first time period had a true big idea, while the other two units contained topics, (2) each unit made similar use of the given big idea, though its role in the class was reduced after the opening activity, (3) the types of teacher questions shifted toward students explaining their comprehension of ideas, and more students were involved in discussing each idea, for more turns of talk, than in earlier time periods, (4) understanding of science term definitions became more prominent later in the semester, with more stating of science terms occurring earlier in the semester, (5) no significant changes were seen in the use of argument or of claims and evidence throughout the study. The findings have informed theory and practice about science argumentation, the practice of whole-class dialogue, and the understanding of practice along four aspects: (1) an apparent lack of understanding about big ideas and how to utilize them as the central organizing feature of a unit, (2) the independent development of dialogue and argument, (3) an apparent lack of understanding about the structure of argument and the use of basic terminology with argument and big ideas, and (4) challenges of ABI implementation. This study provides insight into the importance of prolonged and persistent professional development with ABI in teaching practice.

  8. ESIP's Earth Science Knowledge Graph (ESKG) Testbed Project: An Automatic Approach to Building Interdisciplinary Earth Science Knowledge Graphs to Improve Data Discovery

    NASA Astrophysics Data System (ADS)

    McGibbney, L. J.; Jiang, Y.; Burgess, A. B.

    2017-12-01

    Big Earth observation data have been produced, archived and made available online, but discovering the right data in a manner that precisely and efficiently satisfies user needs presents a significant challenge to the Earth Science (ES) community. An emerging trend in the information retrieval community is to utilize knowledge graphs to assist users in quickly finding desired information across knowledge sources. This is particularly prevalent within the fields of social media and complex multimodal information processing, to name but a few; however, building a domain-specific knowledge graph is labour-intensive and hard to keep up-to-date. In this work, we report our progress on the Earth Science Knowledge Graph (ESKG) project, an ESIP-funded testbed project which provides an automatic approach to building a dynamic knowledge graph for ES to improve interdisciplinary data discovery by leveraging implicit, latent knowledge present across several U.S. Federal Agencies, e.g. NASA, NOAA and USGS. ESKG strengthens ties between observations and user communities by: 1) developing a knowledge graph derived from various sources, e.g. Web pages, Web services, etc., via natural language processing and knowledge extraction techniques; 2) allowing users to traverse, explore, query, reason over and navigate ES data via knowledge graph interaction. ESKG has the potential to revolutionize the way in which ES communities interact with ES data in the open world through the entity, spatial and temporal linkages and characteristics that make it up. This project advances the ESIP collaboration areas of Discovery and Semantic Technologies by putting graph information right at our fingertips in an interactive, modern manner and by reducing the effort of constructing ontologies.
To demonstrate the ESKG concept, we will demonstrate use of our framework across NASA JPL's PO.DAAC, NOAA's Earth Observation Requirements Evaluation System (EORES) and various USGS systems.
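
As a toy illustration of the entity-linkage idea behind a knowledge graph (not the ESKG pipeline itself, which relies on NLP and knowledge-extraction tooling), one can weight graph edges by how often two entities co-occur in text. The sentences and entity list below are invented.

```python
# Toy co-occurrence graph: edge weight = number of sentences in which
# two entities appear together. Entities and sentences are invented.
from itertools import combinations
from collections import Counter

entities = {"NASA", "NOAA", "USGS", "PO.DAAC", "sea surface temperature"}

sentences = [
    "NASA distributes sea surface temperature products through PO.DAAC",
    "NOAA and NASA both archive sea surface temperature observations",
    "USGS focuses on land and water resources",
]

edges = Counter()
for s in sentences:
    # Find which known entities appear in this sentence (substring match).
    found = sorted(e for e in entities if e in s)
    for a, b in combinations(found, 2):
        edges[(a, b)] += 1
```

A production system would replace the substring match with named-entity recognition and relation extraction, but the resulting weighted graph supports the same traverse/query/explore interactions the abstract describes.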

  9. Climate Data Service in the FP7 EarthServer Project

    NASA Astrophysics Data System (ADS)

    Mantovani, Simone; Natali, Stefano; Barboni, Damiano; Grazia Veratelli, Maria

    2013-04-01

    EarthServer is a European Framework Programme project that aims at developing and demonstrating the usability of open standards (OGC and W3C) in the management of multi-source, any-size, multi-dimensional spatio-temporal data - in short: "Big Earth Data Analytics". In order to demonstrate the feasibility of the approach, six thematic Lighthouse Applications (Cryospheric Science, Airborne Science, Atmospheric/Climate Science, Geology, Oceanography, and Planetary Science), each with 100+ TB, are implemented. The scope of the Atmospheric/Climate lighthouse application (Climate Data Service) is to implement a system containing global to regional 2D / 3D / 4D datasets retrieved from satellite observations, numerical modelling and in-situ observations. Data contained in the Climate Data Service comprise atmospheric profiles of temperature / humidity, aerosol content, AOT, and cloud properties provided by entities such as the European Centre for Medium-Range Weather Forecasts (ECMWF), the Austrian Meteorological Service (Zentralanstalt für Meteorologie und Geodynamik - ZAMG), the Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA), and the Swedish Meteorological and Hydrological Institute (Sveriges Meteorologiska och Hydrologiska Institut - SMHI). Through an easy-to-use web application, the system permits users to browse the loaded data, visualize their temporal evolution at a specific point through 2D graphs of a single field, compare different fields at the same point (e.g. temperatures from different models and satellite observations), and visualize maps of specific fields superimposed on high-resolution background maps. All data access and display operations are performed by means of OGC standard operations, namely WMS, WCS and WCPS.
The EarthServer project has just started the second year of a 3-year development plan: at present the system contains subsets of the final database, with the scope of demonstrating I/O modules and visualization tools. At the end of the project all datasets will be available to users.
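
The OGC access pattern the abstract names (WMS, WCS, WCPS) can be sketched with a standard WMS 1.3.0 GetMap request. The endpoint and layer name below are placeholders, not the actual EarthServer services; note that WMS 1.3.0 with EPSG:4326 orders the BBOX as latitude first.

```python
# Sketch of building a standard WMS 1.3.0 GetMap request URL.
# Endpoint and layer are hypothetical placeholders.
from urllib.parse import urlencode

def wms_getmap_url(endpoint, layer, bbox, size=(800, 400), time=None):
    """bbox is (min_lat, min_lon, max_lat, max_lon) per WMS 1.3.0/EPSG:4326."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": "image/png",
    }
    if time is not None:
        params["TIME"] = time  # optional temporal slice (WMS dimension)
    return endpoint + "?" + urlencode(params)

url = wms_getmap_url("https://example.org/wms", "temperature_2m",
                     (-90, -180, 90, 180), time="2012-06-01")
```

Fetching `url` with any HTTP client returns a rendered map image; WCS and WCPS requests follow the same key-value-pair construction with their own parameter sets.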

  10. Challenges of Big Data in Educational Assessment

    ERIC Educational Resources Information Center

    Gibson, David C.; Webb, Mary; Ifenthaler, Dirk

    2015-01-01

    This paper briefly discusses four measurement challenges of data science or "big data" in educational assessments that are enabled by technology: 1. Dealing with change over time via time-based data. 2. How a digital performance space's relationships interact with learner actions, communications and products. 3. How layers of…

  11. Expanding Evidence Approaches for Learning in a Digital World

    ERIC Educational Resources Information Center

    Means, Barbara; Anderson, Kea

    2013-01-01

    This report describes how big data and an evidence framework can align across five contexts of educational improvement. It explains that before working with big data, there is an important prerequisite: the proposed innovation should align with deeper learning objectives and should incorporate sound learning sciences principles. New curriculum…

  12. BIG: a large-scale data integration tool for renal physiology.

    PubMed

    Zhao, Yue; Yang, Chin-Rang; Raghuram, Viswanathan; Parulekar, Jaya; Knepper, Mark A

    2016-10-01

    Due to recent advances in high-throughput techniques, we and others have generated multiple proteomic and transcriptomic databases to describe and quantify gene expression, protein abundance, or cellular signaling on the scale of the whole genome/proteome in kidney cells. The existence of so much data from diverse sources raises the following question: "How can researchers find information efficiently for a given gene product over all of these data sets without searching each data set individually?" This is the type of problem that has motivated the "Big-Data" revolution in Data Science, which has driven progress in fields such as marketing. Here we present an online Big-Data tool called BIG (Biological Information Gatherer) that allows users to submit a single online query to obtain all relevant information from all indexed databases. BIG is accessible at http://big.nhlbi.nih.gov/.
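
The single-query idea behind BIG can be sketched as a fan-out over several indexed sources, merging whatever each one knows about a gene. The source names and contents below are invented for illustration and are not the actual BIG databases.

```python
# Toy federated lookup: one query gathers records from every indexed
# source. Gene symbols and record contents are invented placeholders.
proteomic = {"AQP2": {"abundance": "high in collecting duct"}}
transcriptomic = {"AQP2": {"tpm": 152.0}, "AVPR2": {"tpm": 38.5}}
phosphoproteomic = {"AVPR2": {"sites": ["S362"]}}

sources = {
    "proteomic": proteomic,
    "transcriptomic": transcriptomic,
    "phosphoproteomic": phosphoproteomic,
}

def gather(gene):
    """Return every record mentioning `gene`, keyed by source name."""
    return {name: db[gene] for name, db in sources.items() if gene in db}
```

The real tool indexes many genome-scale datasets behind a web front end, but the user-facing contract is the same: one identifier in, all matching records out.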

  13. Developing Student Science and Information Literacy through Contributions to the Society of Exploration Geophysicists (SEG) Wiki

    NASA Astrophysics Data System (ADS)

    Guertin, L. A.; Farley, I.; Geary, A.

    2016-12-01

    Introductory-level Earth science courses provide the opportunity for science and non-science majors to expand discipline-specific content knowledge while enhancing skill sets applicable to all disciplines. The outcomes of the student work can then benefit the education and outreach efforts of an international organization - in this case, a wiki devoted exclusively to the geosciences, managed by the Society of Exploration Geophysicists (SEG). The course Environment Earth at Penn State Brandywine is a general education science course with the overarching goal for students to understand, communicate examples of, and make informed decisions relating to big ideas and fundamental concepts of Earth science. To help accomplish this goal, students carry out a semester-long digital engaged scholarship project that benefits the users of the SEG Wiki (http://wiki.seg.org/). To begin developing students' literacy and their ability to read, interpret, and evaluate sources of scientific news, the first assignment requires students to write an annotated bibliography on a specific topic that serves as the foundation for a new SEG Wiki article. Once students have collected and summarized information from reliable sources, they learn how writing for a wiki differs from writing a term paper and begin drafting their wiki page. Students peer review each other's work for content and clarity before publishing it on the SEG Wiki. Students respond positively to this project, reporting a better understanding of and respect for the authors of online wiki pages, as well as overall satisfaction in knowing their work will benefit others. Links to student-generated pages and instructional materials can be found at: http://sites.psu.edu/segwiki/.

  14. Precision Nutrition 4.0: A Big Data and Ethics Foresight Analysis--Convergence of Agrigenomics, Nutrigenomics, Nutriproteomics, and Nutrimetabolomics.

    PubMed

    Özdemir, Vural; Kolker, Eugene

    2016-02-01

    Nutrition is central to sustenance of good health, not to mention its role as a cultural object that brings together or draws lines among societies. Undoubtedly, understanding the future paths of nutrition science in the current era of Big Data remains firmly on science, technology, and innovation strategy agendas around the world. Nutrigenomics, the confluence of nutrition science with genomics, brought about a new focus on and legitimacy for "variability science" (i.e., the study of mechanisms of person-to-person and population differences in response to food, and the ways in which food variably impacts the host, for example, nutrient-related disease outcomes). Societal expectations, both public and private, and claims over genomics-guided and individually-tailored precision diets continue to proliferate. While the prospects of nutrition science, and nutrigenomics in particular, are established, there is a need to integrate the efforts in four Big Data domains that are naturally allied--agrigenomics, nutrigenomics, nutriproteomics, and nutrimetabolomics--that address complementary variability questions pertaining to individual differences in response to food-related environmental exposures. The joint use of these four omics knowledge domains, coined as Precision Nutrition 4.0 here, has sadly not been realized to date, but the potentials for such integrated knowledge innovation are enormous. Future personalized nutrition practices would benefit from a seamless planning of life sciences funding, research, and practice agendas from "farm to clinic to supermarket to society," and from "genome to proteome to metabolome." Hence, this innovation foresight analysis explains the already existing potentials waiting to be realized, and suggests ways forward for innovation in both technology and ethics foresight frames on precision nutrition. 
We propose the creation of a new Precision Nutrition Evidence Barometer for periodic, independent, and ongoing retrieval, screening, and aggregation of the relevant life sciences data. For innovation in Big Data ethics oversight, we suggest "nested governance," wherein the processes of knowledge production are made transparent in the continuum from life sciences and social sciences to humanities, and where each innovation actor reports to another accountability and transparency layer: scientists to ethicists, and ethicists to scholars in the emerging field of ethics-of-ethics. Such nested innovation ecosystems offer safety against innovation blind spots, calibrate visible/invisible power differences in the cultures of science or ethics, and ultimately reduce the risk of a gap between "paper values" (what people say) and "real values" (what innovation actors actually do). We are optimistic that the convergence of nutrigenomics with nutriproteomics, nutrimetabolomics, and agrigenomics can build a robust, sustainable, and trustworthy Precision Nutrition 4.0 agenda, as articulated in this Big Data and ethics foresight analysis.

  15. Laser micro-structuring of surfaces for applications in materials and biomedical science

    NASA Astrophysics Data System (ADS)

    Sarzyński, Antoni; Marczak, Jan; Strzelec, Marek; Rycyk, Antoni; CzyŻ, Krzysztof; Chmielewska, Danuta

    2016-12-01

    Laser radiation is used, among other applications, for the surface treatment of various materials. At the Institute of Optoelectronics, under the direction of the late Professor Jan Marczak, a number of works in the field of laser materials processing were carried out. Among them, the flagship work of Professor Jan Marczak deserves special recognition: the implementation in Poland of the laser cleaning method for artworks. Another big project involved the direct method of laser interference lithography. These two projects have already been widely discussed at many national and international scientific conferences, and they will also be discussed at SLT2016. In addition to these two projects, many other works have been carried out in the Laboratory of Laser Applications, some of which will be presented separately at the SLT2016 Conference. These include laser decorating of ceramics and glass (three projects completed in cooperation with the Institute of Ceramics and Building Materials), interference structuring of medical implants (together with the Warsaw University of Technology), testing the adhesion of thin layers (a project implemented together with IFTR PAS), structuring of DLC layers for growing endothelial cells (together with IMMS PAS), engraving of glass for microfluidic applications, metal marking, sapphire cutting and, finally, the production of microsieves for separating blood cells.

  16. Mapping Our Genes: The Genome Projects: How Big, How Fast

    DOE R&D Accomplishments Database

    1988-04-01

    For the past 2 years, scientific and technical journals in biology and medicine have extensively covered a debate about whether and how to determine the function and order of human genes on human chromosomes and when to determine the sequence of molecular building blocks that comprise DNA in those chromosomes. In 1987, these issues rose to become part of the public agenda. The debate involves science, technology, and politics. Congress is responsible for "writing the rules" of what various federal agencies do and for funding their work. This report surveys the points made so far in the debate, focusing on those that most directly influence the policy options facing the US Congress. Congressional interest focused on how to assess the rationales for conducting human genome projects, how to fund human genome projects (at what level and through which mechanisms), how to coordinate the scientific and technical programs of the several federal agencies and private interests already supporting various genome projects, and how to strike a balance regarding the impact of genome projects on international scientific cooperation and international economic competition in biotechnology. The Office of Technology Assessment (OTA) prepared this report with the assistance of several hundred experts throughout the world.

  17. On the visualization of water-related big data: extracting insights from drought proxies' datasets

    NASA Astrophysics Data System (ADS)

    Diaz, Vitali; Corzo, Gerald; van Lanen, Henny A. J.; Solomatine, Dimitri

    2017-04-01

    Big data is a growing area of science from which hydroinformatics can benefit greatly. There have been a number of important developments in the area of data science aimed at the analysis of large datasets. Such datasets related to water include measurements, simulations, reanalyses, scenario analyses and proxies. By convention, information contained in these databases refers to a specific time and space (i.e., longitude/latitude). This work is motivated by the need to extract insights from large water-related datasets, i.e., to transform large amounts of data into useful information that helps to better understand water-related phenomena, particularly drought. In this context, data visualization, part of data science, involves techniques to create and communicate data by encoding it as visual graphical objects; these may help to better understand data and detect trends. Based on existing methods of data analysis and visualization, this work aims to develop tools for visualizing large water-related datasets. These tools take advantage of existing data visualization libraries to produce a group of graphs that includes both polar area diagrams (PADs) and radar charts (RDs). In both graphs, time steps are represented by the polar angles and the percentages of area in drought by the radii. For illustration, three large datasets of drought proxies are chosen to identify trends, prone areas and the spatio-temporal variability of drought in a set of case studies. The datasets are (1) SPI-TS2p1 (1901-2002, 11.7 GB), (2) SPI-PRECL0p5 (1948-2016, 7.91 GB) and (3) SPEI-baseV2.3 (1901-2013, 15.3 GB). All of them are on a monthly basis and with a spatial resolution of 0.5 degrees. The first two were retrieved from the repository of the International Research Institute for Climate and Society (IRI). They are included in the Analyses Standardized Precipitation Index (SPI) project (iridl.ldeo.columbia.edu/SOURCES/.IRI/.Analyses/.SPI/).
The third dataset was retrieved from the Standardized Precipitation Evapotranspiration Index (SPEI) Monitor (digital.csic.es/handle/10261/128892). PADs were found suitable for identifying the spatio-temporal variability and prone areas of drought. Drought trends were visually detected using both PADs and RDs. A similar approach can be followed to include other types of graphs for the analysis of water-related big data. Key words: Big data, data visualization, drought, SPI, SPEI
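
The quantity mapped to the radius in the polar diagrams described above, the percentage of area in drought at each time step, can be sketched as a threshold count over an SPI field. The drought threshold of -1.0 and the tiny per-month grids below are illustrative assumptions, not values from the paper.

```python
# Sketch: percent of grid cells in drought (SPI below a threshold) per
# time step. The 6-cell monthly "grids" are invented for illustration.
DROUGHT_THRESHOLD = -1.0  # assumed moderate-drought cutoff

def percent_area_in_drought(spi_fields, threshold=DROUGHT_THRESHOLD):
    """spi_fields: list of per-time-step lists of SPI values (one per cell)."""
    out = []
    for field in spi_fields:
        cells = [v for v in field if v is not None]  # skip missing cells
        n_drought = sum(1 for v in cells if v < threshold)
        out.append(100.0 * n_drought / len(cells))
    return out

monthly_spi = [
    [0.3, -1.2, -0.5, -1.8, 0.9, -1.1],  # 3 of 6 cells below -1.0
    [0.1, -0.4, 0.6, -0.9, 1.2, 0.0],    # no cells below -1.0
]
# percent_area_in_drought(monthly_spi) -> [50.0, 0.0]
```

Plotting these percentages against month-of-year on a polar axis yields the PAD/RD views the abstract describes; a weighted variant would scale each cell by its true geographic area before counting.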

  18. Curiosity Self-Portrait at Big Sky Drilling Site

    NASA Image and Video Library

    2015-10-13

    This self-portrait of NASA's Curiosity Mars rover shows the vehicle at the "Big Sky" site, where its drill collected the mission's fifth taste of Mount Sharp. The scene combines dozens of images taken during the 1,126th Martian day, or sol, of Curiosity's work on Mars (Oct. 6, 2015, PDT) by the Mars Hand Lens Imager (MAHLI) camera at the end of the rover's robotic arm. The rock drilled at this site is sandstone in the Stimson geological unit inside Gale Crater. The location is on cross-bedded sandstone in which the cross bedding is more evident in views from when the rover was approaching the area, such as PIA19818. The view is centered toward the west-northwest. It does not include the rover's robotic arm, though the shadow of the arm is visible on the ground. Wrist motions and turret rotations on the arm allowed MAHLI to acquire the mosaic's component images. The arm was positioned out of the shot in the images, or portions of images, that were used in this mosaic. This process was used previously in acquiring and assembling Curiosity self-portraits taken at sample-collection sites "Rocknest" (PIA16468), "John Klein" (PIA16937) and "Windjana" (PIA18390). This portrait of the rover was designed to show the Chemistry and Camera (ChemCam) instrument atop the rover appearing level. This causes the horizon to appear to tilt toward the left, but in reality it is fairly flat. For scale, the rover's wheels are 20 inches (50 centimeters) in diameter and about 16 inches (40 centimeters) wide. The drilled hole in the rock, appearing grey near the lower left corner of the image, is 0.63 inch (1.6 centimeters) in diameter. MAHLI was built by Malin Space Science Systems, San Diego. NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Science Laboratory Project for the NASA Science Mission Directorate, Washington. JPL designed and built the project's Curiosity rover.
http://photojournal.jpl.nasa.gov/catalog/PIA19920

  19. One year on VESPA, a community-driven Virtual Observatory in Planetary Science

    NASA Astrophysics Data System (ADS)

    Erard, S.; Cecconi, B.; Le Sidaner, P.; Rossi, A. P.; Capria, M. T.; Schmitt, B.; Andre, N.; Vandaele, A. C.; Scherf, M.; Hueso, R.; Maattanen, A. E.; Thuillot, W.; Achilleos, N.; Marmo, C.; Santolik, O.; Benson, K.

    2016-12-01

    The Europlanet H2020 program started on 1/9/2015 for 4 years. It includes an activity to adapt Virtual Observatory (VO) techniques to Planetary Science data, called VESPA. The objective is to facilitate searches in big archives as well as sparse databases, to provide simple data access and on-line visualization, and to allow small data providers to make their data available in an interoperable environment with minimum effort. The VESPA system, based on a prototype developed in a previous program [1], has been hugely improved during the first year of Europlanet H2020: the infrastructure has been upgraded to describe data in many fields more accurately; the main user search interface (http://vespa.obspm.fr) has been redesigned to provide more flexibility; alternative ways to access Planetary Science data services from VO tools are being implemented in addition to receiving data from the main interface; VO tools are being improved to handle specificities of Solar System data, e.g. measurements in reflected light, coordinate systems, etc. Existing data services have been updated, and new ones have been designed. The global objective (50 data services) has already been exceeded, with 54 services open or being finalized. A procedure to install data services has been documented, and hands-on sessions are organized twice a year at EGU and EPSC; this is intended to favour the installation of services by individual research teams, e.g. to distribute derived data related to a published study. In addition, regular discussions are held with big data providers, starting with space agencies (IPDA). Common projects with ESA and NASA's PDS have been engaged, which should lead to a connection between PDS4 and EPN-TAP.
In parallel, a Solar System Interest Group has been established in IVOA; the goal here is to adapt existing astronomy standards to Planetary Science. Future steps will include the development of a connection between the VO world and GIS tools, and integration of Heliophysics, planetary plasma and mineral spectroscopy data. The Europlanet 2020 Research Infrastructure project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 654208. [1] Erard et al 2014, Astronomy & Computing 7-8, 71-80. http://arxiv.org/abs/1407.4886

  20. Progress on VESPA, a community-driven Virtual Observatory in Planetary Science

    NASA Astrophysics Data System (ADS)

    Erard, S.; Cecconi, B.; Le Sidaner, P.; Rossi, A. P.; Capria, M. T.; Schmitt, B.; Genot, V. N.; André, N.; Vandaele, A. C.; Scherf, M.; Hueso, R.; Maattanen, A. E.; Carry, B.; Achilleos, N.; Marmo, C.; Santolik, O.; Benson, K.; Fernique, P.

    2017-12-01

    The Europlanet H2020 program started on 1 September 2015 for four years. It includes an activity to adapt Virtual Observatory (VO) techniques to Planetary Science data called VESPA. The objective is to facilitate searches in big archives as well as sparse databases, to provide simple data access and on-line visualization, and to allow small data providers to make their data available in an interoperable environment with minimum effort. The VESPA system, based on a prototype developed in a previous program [1], has been substantially improved during the first two years of Europlanet H2020: the infrastructure has been upgraded to describe data in many fields more accurately; the main user search interface (http://vespa.obspm.fr) has been redesigned to provide more flexibility; alternative ways to access Planetary Science data services from VO tools have been implemented; VO tools are being improved to handle specificities of Solar System data, e.g. measurements in reflected light, coordinate systems, etc. Current steps include the development of a connection between the VO world and GIS tools, and integration of Heliophysics, planetary plasma, and mineral spectroscopy data to support the analysis of observations. Existing data services have been updated, and new ones have been designed. The overall target has already been exceeded, with 34 services open and 20 more being finalized. A procedure to install data services has been documented, and hands-on sessions are organized twice a year at EGU and EPSC; this is intended to favour the installation of services by individual research teams, e.g. to distribute derived data related to a published study. In addition, regular discussions are held with big data providers, starting with space agencies (IPDA). Common projects with ESA and NASA's PDS have been engaged, with the goal of connecting PDS4 and EPN-TAP. 
In parallel, a Solar System Interest Group has recently been started in IVOA; the goal here is to adapt existing astronomy standards to Planetary Science. The Europlanet 2020 Research Infrastructure project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 654208. [1] Erard et al 2014, Astronomy & Computing 7-8, 71-80. http://arxiv.org/abs/1407.4886
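The data-access model the abstract describes rests on the IVOA Table Access Protocol (TAP), with EPN-TAP defining a standard `epn_core` table of granules. The sketch below, a minimal illustration rather than VESPA's own tooling, shows how a synchronous TAP request URL for such a service can be assembled; the base URL is a placeholder, and the column names follow the EPN-TAP specification.

```python
# Sketch: building a synchronous TAP request against an EPN-TAP service.
# The base URL below is illustrative; real endpoints are listed in the
# VESPA portal (http://vespa.obspm.fr).
from urllib.parse import urlencode

def tap_sync_url(base_url, adql):
    """Return the URL of a synchronous TAP query (REQUEST=doQuery)."""
    params = {
        "REQUEST": "doQuery",
        "LANG": "ADQL",
        "FORMAT": "votable",
        "QUERY": adql,
    }
    return base_url.rstrip("/") + "/sync?" + urlencode(params)

# EPN-TAP services expose an 'epn_core' table whose standard columns
# include granule_uid and target_name.
adql = ("SELECT TOP 10 granule_uid, target_name "
        "FROM epn_core WHERE target_name = 'Mars'")
url = tap_sync_url("http://example.org/tap", adql)
```

Fetching `url` with any HTTP client would return a VOTable of matching granules; generic VO tools (TOPCAT, Aladin) issue equivalent requests internally.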

  1. Environmental Data-Driven Inquiry and Exploration (EDDIE)- Water Focused Modules for interacting with Big Hydrologic Data

    NASA Astrophysics Data System (ADS)

    Meixner, T.; Gougis, R.; O'Reilly, C.; Klug, J.; Richardson, D.; Castendyk, D.; Carey, C.; Bader, N.; Stomberg, J.; Soule, D. C.

    2016-12-01

    High-frequency sensor data are driving a shift in the Earth and environmental sciences. The availability of high-frequency data creates an engagement opportunity for undergraduate students in primary research by using large, long-term, sensor-based data directly in the scientific curriculum. Project EDDIE (Environmental Data-Driven Inquiry & Exploration) has developed flexible classroom activity modules designed to meet a series of pedagogical goals that include (1) developing skills required to manipulate large datasets at different scales to conduct inquiry-based investigations; (2) developing students' reasoning about statistical variation; and (3) fostering accurate student conceptions about the nature of environmental science. The modules cover a wide range of topics, including lake physics and metabolism, stream discharge, water quality, soil respiration, seismology, and climate change. In this presentation we will focus on a sequence of modules of particular interest to hydrologists: stream discharge, water quality, and nutrient loading. Assessment results show that our modules are effective at making students more comfortable analyzing data, improving their understanding of statistical concepts, and strengthening their data analysis skills. This project is funded by an NSF TUES grant (NSF DEB 1245707).

  2. Extending the mind: a review of ethnographies of neuroscience practice.

    PubMed

    Mahfoud, Tara

    2014-01-01

    This paper reviews ethnographies of neuroscience laboratories in the United States and Europe, organizing them into three main sections: (1) descriptions of the capabilities and limitations of technologies used in neuroimaging laboratories to map "activity" or "function" onto structural models of the brain; (2) discussions of the "distributed" or "extended" mind in neuroscience practice; and (3) the implications of neuroscience research and the power of brain images outside the laboratory. I will try to show the importance of ethnographic work in such settings, and place this body of ethnographic work within its historical framework: such ethnographies largely emerged within the Decade of the Brain, as announced by former President of the United States George H. W. Bush in 1990. The main argument is that neuroscience research and the context within which it is taking place has changed since the 1990s, specifically with the launch of "big science" projects such as the Human Brain Project (HBP) in the European Union and the BRAIN initiative in the United States. There is an opportunity for more research into the institutional and politico-economic context within which neuroscience research is taking place, and for continued engagement between the social and biological sciences.

  3. Telecom Big Data for Urban Transport Analysis - a Case Study of Split-Dalmatia County in Croatia

    NASA Astrophysics Data System (ADS)

    Baučić, M.; Jajac, N.; Bućan, M.

    2017-09-01

    Today, big data has become widely available, and new technologies are being developed for big data storage architectures and big data analytics. An ongoing challenge is how to incorporate big data into GIS applications supporting various domains. The International Transport Forum explains how the arrival of big data and real-time data, together with new data processing algorithms, leads to new insights and operational improvements in transport. Based on telecom customer data, the Study of Tourist Movement and Traffic in Split-Dalmatia County in Croatia was carried out as part of the "IPA Adriatic CBC//N.0086/INTERMODAL" project. This paper briefly explains the big data used in the study and the results of the study. Furthermore, this paper investigates the main considerations when using telecom customer big data: data privacy and data quality. The paper concludes with a GIS visualisation and proposes further uses of the big data employed in the study.

  4. Vertical landscraping, a big regionalism for Dubai.

    PubMed

    Wilson, Matthew

    2010-01-01

    Dubai's ecologic and economic complications are exacerbated by six years of accelerated expansion, a fixed top-down approach to urbanism and the construction of iconic single-phase mega-projects. With recent construction delays, project cancellations and growing landscape issues, Dubai's tower typologies have been unresponsive to changing environmental, socio-cultural and economic patterns (BBC, 2009; Gillet, 2009; Lewis, 2009). In this essay, a theory of "Big Regionalism" guides an argument for an economically and ecologically linked tower typology called the Condenser. This phased "box-to-tower" typology is part of a greater Landscape Urbanist strategy called Vertical Landscraping. Within this strategy, the Condenser's role is to densify the city, facilitating the creation of ecologic voids that order the urban region. Delineating "Big Regional" principles, the Condenser provides a time-based, global-local urban growth approach that weaves Bigness into a series of urban-regional, economic and ecological relationships, builds upon the environmental performance of the city's regional architecture and planning, promotes a continuity of Dubai's urban history, and responds to its landscape issues while condensing development. These speculations permit consideration of the overlooked opportunities embedded within Dubai's mega-projects and their long-term impact on the urban morphology.

  5. Big muddy: can a chemical flood breathe new life into a tired old giant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1978-06-01

    A 9-year, $35.5-million tertiary recovery project has begun in the Big Muddy Field in Wyoming. It will evaluate a chemical flooding process employing an aqueous surfactant slug followed by polymer. (DLC)

  6. Assessing the Use of Tactical Clouds to Enhance Warfighter Effectiveness

    DTIC Science & Technology

    2014-04-01

    operating while compromised environment” (attackers with access to communications network); Big Data – Since 9/11 the amount of surveillance data ...unanalyzed. The big data problem is unlikely to improve as it is projected that sensor data volume could potentially be measured in yottabytes (10^24...www.forbes.com/sites/techonomy/2012/03/12/military-intelligence-redefined-big-data-in-the-battlefield/ 5 Data Analysis Challenges [Reference 5] DRDC-RDDC

  7. Project EDDIE: Improving Big Data skills in the classroom

    NASA Astrophysics Data System (ADS)

    Soule, D. C.; Bader, N.; Carey, C.; Castendyk, D.; Fuller, R.; Gibson, C.; Gougis, R.; Klug, J.; Meixner, T.; Nave, L. E.; O'Reilly, C.; Richardson, D.; Stomberg, J.

    2015-12-01

    High-frequency sensor-based datasets are driving a paradigm shift in the study of environmental processes. The online availability of high-frequency data creates an opportunity to engage undergraduate students in primary research by using large, long-term, sensor-based datasets in science courses. Project EDDIE (Environmental Data-Driven Inquiry & Exploration) is developing flexible classroom activity modules designed to (1) improve quantitative and reasoning skills; (2) develop the ability to engage in scientific discourse and argument; and (3) increase students' engagement in science. A team of interdisciplinary faculty from private and public research universities and undergraduate institutions have developed these modules to meet a series of pedagogical goals that include (1) developing skills required to manipulate large datasets at different scales to conduct inquiry-based investigations; (2) developing students' reasoning about statistical variation; and (3) fostering accurate student conceptions about the nature of environmental science. The modules cover a wide range of topics, including lake physics and metabolism, stream discharge, water quality, soil respiration, seismology, and climate change. Assessment data from questionnaires and recordings collected during the 2014-2015 academic year show that our modules are effective at making students more comfortable analyzing data. Continued development is focused on improving student learning outcomes with statistical concepts like variation, randomness, and sampling, and fostering scientific discourse during module engagement. In the coming year, an increased sample size will expand our assessment opportunities to comparison groups in upper-division courses and allow for evaluation of module-specific conceptual knowledge learned. This project is funded by an NSF TUES grant (NSF DEB 1245707).

  8. Assessment of Provisional MODIS-derived Surfaces Related to the Global Carbon Cycle

    NASA Astrophysics Data System (ADS)

    Cohen, W. B.; Maiersperger, T. K.; Turner, D. P.; Gower, S. T.; Kennedy, R. E.; Running, S. W.

    2002-12-01

    The global carbon cycle is one of the most important foci of an emerging global biosphere monitoring system. A key component of such a system is the MODIS sensor, onboard the Terra satellite platform. Biosphere monitoring requires an integrated program of satellite observations, Earth-system models, and in situ data. Related to the carbon cycle, MODIS science teams routinely develop a variety of global surfaces such as land cover, leaf area index, and net primary production using MODIS data and functional algorithms. The quality of these surfaces must be evaluated to determine their effectiveness for global biosphere monitoring. A project called BigFoot (http://www.fsl.orst.edu/larse/bigfoot/) is an organized effort across nine biomes to assess the quality of the abovementioned surfaces: (1) Arctic tundra; (2) boreal evergreen needle-leaved forest; temperate (3) cropland, (4) grassland, (5) evergreen needle-leaved forest, and (6) deciduous broad-leaved forest; desert (7) grassland and (8) shrubland; and (9) tropical evergreen broad-leaved forest. Each biome is represented by a site that has an eddy-covariance flux tower that measures water vapor and CO2 fluxes. Flux tower footprints are relatively small, approximately 1 km². BigFoot characterizes 25 km² around each tower, using field data, Landsat ETM+ image data, and ecosystem process models. Our innovative field sampling design incorporates a nested spatial series to facilitate geostatistical analyses, samples the ecological variability at a site, and is logistically efficient. Field data are used both to develop site-specific algorithms for mapping/modeling the variables of interest and to characterize the errors in derived BigFoot surfaces. Direct comparisons of BigFoot- and MODIS-derived surfaces are made to help understand the sources of error in MODIS-derived surfaces and to facilitate improvements to MODIS algorithms. Results from four BigFoot sites will be presented.
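The "nested spatial series" mentioned above can be pictured as sample points placed at several lag distances around a centre, so that pairs of samples span a range of separations for variogram estimation. The sketch below is purely illustrative of that idea; it is not the BigFoot field protocol, and the lag distances and point counts are hypothetical.

```python
# Illustrative sketch of a nested spatial sampling series for geostatistics:
# a centre point plus rings of points at increasing lag distances, so sample
# pairs cover short and long separations for variogram analysis.
# Lag distances and counts here are hypothetical, not the BigFoot design.
import math

def nested_series(center, lags, points_per_lag=4):
    """Return sample coordinates: the centre plus points at each lag distance."""
    cx, cy = center
    samples = [center]
    for lag in lags:                      # e.g. 25 m, 100 m, 500 m spacings
        for k in range(points_per_lag):
            angle = 2 * math.pi * k / points_per_lag
            samples.append((cx + lag * math.cos(angle),
                            cy + lag * math.sin(angle)))
    return samples

pts = nested_series((0.0, 0.0), lags=[25.0, 100.0, 500.0])
# 1 centre point + 4 points at each of 3 lags = 13 sample locations
```

Distances between all point pairs then populate the variogram bins from tens of metres up to the kilometre scale of a flux-tower footprint.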

  9. AstroCloud, a Cyber-Infrastructure for Astronomy Research: Architecture

    NASA Astrophysics Data System (ADS)

    Xiao, J.; Yu, C.; Cui, C.; He, B.; Li, C.; Fan, D.; Hong, Z.; Yin, S.; Wang, C.; Cao, Z.; Fan, Y.; Li, S.; Mi, L.; Wan, W.; Wang, J.; Zhang, H.

    2015-09-01

    AstroCloud is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). The ultimate goal of this project is to provide a comprehensive end-to-end astronomy research environment in which several independent systems seamlessly collaborate to support the full lifecycle of modern observational astronomy based on big data, from proposal submission, to data archiving, data release, and in-situ data analysis and processing. In this paper, the architecture and key designs of the AstroCloud platform are introduced, including data access middleware, the access control and security framework, the extensible proposal workflow, and the system integration mechanism.

  10. The space race and biodefense: lessons from NASA about big science and the role of medical informatics.

    PubMed

    Wagner, Michael M

    2002-01-01

    The events that followed the launch of Sputnik on Oct 4, 1957, provide a metaphor for the events that are following the first bioterroristic case of pulmonary anthrax in the United States. This paper uses that metaphor to elucidate the nature of the task ahead and to suggest questions such as, Can the goals of the biodefense effort be formulated as concisely and concretely as the goal of the space program? Can we measure success in biodefense as we did for the space project? What are the existing resources that are the equivalents of propulsion systems and rocket engineers that can be applied to the problems of biodefense?

  11. INDIGO-DataCloud solutions for Earth Sciences

    NASA Astrophysics Data System (ADS)

    Aguilar Gómez, Fernando; de Lucas, Jesús Marco; Fiore, Sandro; Monna, Stephen; Chen, Yin

    2017-04-01

    INDIGO-DataCloud (https://www.indigo-datacloud.eu/) is a European Commission funded project aiming to develop a data and computing platform targeting scientific communities, deployable on multiple hardware platforms and provisioned over hybrid (private or public) e-infrastructures. The development of INDIGO solutions covers the different layers in cloud computing (IaaS, PaaS, SaaS), and provides tools to exploit resources like HPC or GPGPUs. INDIGO is oriented to support European scientific research communities, which are well represented in the project. Twelve different Case Studies have been analyzed in detail from different fields: Biological & Medical sciences, Social sciences & Humanities, Environmental and Earth sciences, and Physics & Astrophysics. INDIGO-DataCloud provides solutions to emerging challenges in Earth Science like: -Enabling an easy deployment of community services at different cloud sites. Many Earth Science research infrastructures involve distributed observation stations across countries, and also have distributed data centers to support the corresponding data acquisition and curation. There is a need to easily deploy new data center services as the research infrastructure continues to expand. As an example: LifeWatch (ESFRI, Ecosystems and Biodiversity) uses INDIGO solutions to manage the deployment of services to perform complex hydrodynamics and water quality modelling over a Cloud Computing environment, predicting algae blooms, using Docker technology: TOSCA requirement description, Docker repository, Orchestrator for deployment, AAI (AuthN, AuthZ) and OneData (Distributed Storage System). -Supporting Big Data Analysis. Nowadays, many Earth Science research communities produce large amounts of data and are challenged by the difficulties of processing and analysing it. 
A climate model intercomparison data analysis case study for the European Network for Earth System Modelling (ENES) community has been set up, based on the Ophidia big data analysis framework and the Kepler workflow management system. Such services normally involve a large and distributed set of data and computing resources. In this regard, this case study exploits the INDIGO PaaS for a flexible and dynamic allocation of resources at the infrastructural level. -Providing Distributed Data Storage Solutions. In order to allow scientific communities to perform heavy computation on huge datasets, INDIGO provides global data access solutions allowing researchers to access data in a distributed environment regardless of its location, and also to publish and share their research results with public or closed communities. INDIGO solutions that support access to distributed data storage (OneData) are being tested on EMSO infrastructure (Ocean Sciences and Geohazards) data. Another aspect of interest for the EMSO community is efficient data processing by exploiting INDIGO services like the PaaS Orchestrator. Further, for HPC exploitation, a new solution named udocker has been implemented, enabling users to execute Docker containers on supercomputers without requiring administration privileges. This presentation will overview INDIGO solutions that are interesting and useful for Earth science communities and will show how they can be applied to other Case Studies.

  12. Lab-in-a-box @ school: Exciting hands-on experiments in soft matter physics

    NASA Astrophysics Data System (ADS)

    Jacobs, Karin; Brinkmann, Martin; Müller, Frank

    2015-03-01

    Soft materials like liquids and polymers are part of everyday life, yet at school, this topic is rarely touched. Within the priority program SPP 1064 'Nano- and Microfluidics' of the German Science Foundation, we designed an outreach project that allows pupils (age 14 to 18) to perform hands-on experiments (www.labinabox.de). The experiments allow them e.g. to feel viscosity and viscoelasticity, experience surface tension or see structure formation. We call the modus operandi 'subjective experiments' to contrast them with the scientifically objective experiments, which pupils often describe as being boring. Over a dozen different experiments under the topic 'physics of fluids' are collected in a big box that travels to the school. Three other topical boxes are available: 'physics of light', 'physics of liquid crystals', and 'physics of adhesion and friction'. Each experiment can be performed by 1-3 pupils within 10-20 min. That way, each pupil can perform 6 to 8 different small experiments within one topic. 'Subjective experiments' especially catch the attention of girls without disadvantaging boys. Both are fascinated by the hands-on physics experience and are therefore eager to also perform 'boring' objective experiments. Moreover, before/after polls reveal that their interest in physics has greatly increased. The project can easily be taken over and/or adapted to other topics in the natural sciences. Financial support of the German Science Foundation DFG is acknowledged.

  13. Aquatic Sciences and Its Appeal for Expeditionary Research Science Education

    NASA Astrophysics Data System (ADS)

    Aguilar, C.; Cuhel, R. L.

    2016-02-01

    Our multi-program team studies aim to develop specific "hard" and "soft" STEM skills that integrate, literally, both disciplinary and socio-economic aspects of students' lives, to include peer mentoring, advisement, enabling, and professional mentorship, as well as honestly productive, career-developing hands-on research. Specifically, we use interdependent, multidisciplinary research experiences; development and honing of a specific disciplinary skill (you have to have something TO network); use of that skill in a team to produce a big-picture product; and interaction with varied, often outside professionals, so that students finish with self-confidence and a marketable skill. In a given year our umbrella projects involve linked aquatic science disciplines: Analytical Chemistry; Geology; Geochemistry; Microbiology; Engineering (Remotely Operated Vehicles); and recently Policy (scientist-public engagement). We especially use expeditionary research activities aboard our research vessel in Lake Michigan, during which (a dozen at a time, from multiple programs) students: Experience ocean-scale research cruise activities; Apply a learned skill in real time to characterize a large lake; Participate in interdisciplinary teamwork; Learn interactions among biology, chemistry, geology, optics, physics for diverse aquatic habitats; and, importantly, Experience leadership as "Chief Scientist-for-a-station". These team efforts achieve beneficial outcomes: Develop self-confidence in application of skills; Enable expression of leadership capabilities; Provide opportunity to assess "love of big water"; Produce an invaluable long-term dataset for the studied region (our benefit); and they are Often voted as a top influence for career decisions. 
These collectively have led to some positive outcomes for "historical" undergraduate participants - more than half in STEM graduate programs, only a few not still involved in a STEM career at some level, or involved as for example a lawyer in environmental policy.

  14. Improving Science Literacy and Earth Science Awareness Through an Intensive Summer Research Experience in Paleobiology

    NASA Astrophysics Data System (ADS)

    Heim, N. A.; Saltzman, J.; Payne, J.

    2014-12-01

    The chasm between classroom science and scientific research is bridged in the History of Life Internships at Stanford University. The primary foci of the internships are collection of new scientific data and original scientific research. While traditional high school science courses focus on learning content and laboratory skills, students are rarely engaged in real scientific research. Even in experiential learning environments, students investigate phenomena with known outcomes under idealized conditions. In the History of Life Internships, high school youth worked full time during the summers of 2013 and 2014 to collect body size data on fossil Echinoderms and Ostracods, measuring more than 20,000 species in total. These data are contributed to the larger research efforts in the Stanford Paleobiology Lab, but they also serve as a source of data for interns to conduct their own scientific research. Over the course of eight weeks, interns learn about previous research on body size evolution, collect data, develop their own hypotheses, test their hypotheses, and communicate their results to their peers and the larger scientific community: the 2014 interns have submitted eight abstracts to this meeting for the youth session entitled Bright STaRS where they will present their research findings. Based on a post-internship survey, students in the 2013 History of Life cohort had more positive attitudes towards science and had a better understanding of how to conduct scientific research compared to interns in the Earth Sciences General Internship Program, where interns typically do not complete their own research project from start to finish. In 2014, we implemented both pre- and post-internship surveys to determine if these positive attitudes were developed over the course of the internship. Conducting novel research inspires both the students and instructors. 
Scientific data collection often involves many hours of repetitive work, but answering big questions typically requires big datasets. Our team of 20 used calipers and data-rich compendia of fossil species to collect copious amounts of data. Our interns experienced the joys, frustrations, tedium and excitement of being scientists and discovering something new about the natural world for the first time.

  15. The Northeast Climate Science Center

    NASA Astrophysics Data System (ADS)

    Ratnaswamy, M. J.; Palmer, R. N.; Morelli, T.; Staudinger, M.; Holland, A. R.

    2013-12-01

    The Department of Interior Northeast Climate Science Center (NE CSC) is part of a federal network of eight Climate Science Centers created to provide scientific information, tools, and techniques that managers and other parties interested in land, water, wildlife and cultural resources can use to anticipate, monitor, and adapt to climate change. Recognizing the critical threats, unique climate challenges, and expansive and diverse nature of the northeast region, the University of Massachusetts Amherst, College of Menominee Nation, Columbia University, Marine Biological Laboratory, University of Minnesota, University of Missouri Columbia, and University of Wisconsin-Madison have formed a consortium to host the NE CSC. This partnership with the U.S. Geological Survey climate science center network provides wide-reaching expertise, resources, and established professional collaborations in both climate science and natural and cultural resources management. This interdisciplinary approach is needed for successfully meeting the regional needs for climate impact assessment, adaptive management, education, and stakeholder outreach throughout the northeast region. Thus, the NE CSC conducts research, both through its general funds and its annual competitive award process, that responds to the needs of natural resource management partners that exist, in part or whole, within the NE CSC bounds. This domain includes the North Atlantic, Upper Midwest and Great Lakes, Eastern Tallgrass and Big Rivers, and Appalachian Landscape Conservation Cooperatives (LCCs), among other management stakeholders. 
For example, researchers are developing techniques to monitor tree range dynamics as affected by natural disturbances which can enable adaptation of projected climate impacts; conducting a Designing Sustainable Landscapes project to assess the capability of current and potential future landscapes in the Northeast to provide integral ecosystems and suitable habitat for a suite of representative species and provide guidance for strategic habitat conservation; studying the effects of changes in the frequency and magnitude of drought and stream temperature on brook trout habitats, spatial distribution and population persistence; and conducting assessments of northeastern regional climate projections and high-resolution downscaling.

  16. Getting Open Source Right for Big Data Analytics: Software Sharing, Governance, Collaboration and Most of All, Fun!

    NASA Astrophysics Data System (ADS)

    Mattmann, C. A.

    2013-12-01

    A wave of open source big data analytic infrastructure is currently shaping government, the private sector, and academia. Projects are consuming, adapting, and contributing back to various ecosystems of software, e.g., the Apache Hadoop project and its ecosystem of related efforts, including Hive, HBase, Pig, Oozie, Ambari, Knox, Tez and YARN, to name a few; the Berkeley AMPLab stack, which includes Spark, Shark, Mesos, Tachyon, BlinkDB, MLbase, and other emerging efforts; MapR and its related stack of technologies; and offerings from commercial companies building products around these tools, e.g., Hortonworks Data Platform (HDP), Cloudera's CDH project, etc. Though the technologies all offer different capabilities, including low-latency/in-memory support versus record-oriented file I/O, high availability, and support for the MapReduce programming paradigm or other dataflow/workflow constructs, there is a common thread that binds these products: they are all released under an open source license, e.g., Apache 2.0, MIT, BSD, GPL/LGPL, etc.; all thrive in various ecosystems, such as Apache or the Berkeley AMPLab; all are developed collaboratively; and all provide plug-in architecture models and methodologies for allowing others to contribute and participate via various community models. This talk will cover the open source and governance aspects of the aforementioned Big Data ecosystems and point out the differences, subtleties, and implications of those differences. The discussion will proceed by example, using several national deployments and Big Data initiatives stemming from the Administration, including DARPA's XDATA program; NASA's CMAC program; and NSF's EarthCube and geosciences Big Data projects. Lessons learned from these efforts in terms of the open source aspects of these technologies will help guide the AGU community in their use, deployment and understanding.
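The MapReduce paradigm referenced above reduces to three phases: map records to key-value pairs, shuffle (group) by key, and reduce each group to a result. The following single-process sketch illustrates the paradigm with a word count; it is not Hadoop API code, which distributes the same phases across a cluster.

```python
# Minimal single-process sketch of the MapReduce paradigm (word count).
# Hadoop runs the same map/shuffle/reduce phases distributed over many nodes.
from collections import defaultdict

def map_phase(records):
    # map: emit (key, value) pairs -- here, (word, 1) for each word
    for line in records:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    # shuffle: group all emitted values by key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # reduce: fold each key's values into a single result
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["big data", "big analytics"])))
# counts == {"big": 2, "data": 1, "analytics": 1}
```

The separation of phases is what lets frameworks parallelize the map and reduce steps independently and move only the shuffled intermediate pairs across the network.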

  17. Open source software projects of the caBIG In Vivo Imaging Workspace Software special interest group.

    PubMed

    Prior, Fred W; Erickson, Bradley J; Tarbox, Lawrence

    2007-11-01

    The Cancer Bioinformatics Grid (caBIG) program was created by the National Cancer Institute to facilitate sharing of IT infrastructure, data, and applications among the National Cancer Institute-sponsored cancer research centers. The program was launched in February 2004 and now links more than 50 cancer centers. In April 2005, the In Vivo Imaging Workspace was added to promote the use of imaging in cancer clinical trials. At the inaugural meeting, four special interest groups (SIGs) were established. The Software SIG was charged with identifying projects that focus on open-source software for image visualization and analysis. To date, two projects have been defined by the Software SIG. The eXtensible Imaging Platform project has produced a rapid application development environment that researchers may use to create targeted workflows customized for specific research projects. The Algorithm Validation Tools project will provide a set of tools and data structures to capture measurement information and associated metadata needed to allow a gold standard to be defined for a given database, against which change-analysis algorithms can be tested. Through these and future efforts, the caBIG In Vivo Imaging Workspace Software SIG endeavors to advance imaging informatics and provide new open-source software tools to advance cancer research.

  18. Measuring Adolescent Science Motivation

    ERIC Educational Resources Information Center

    Schumm, Maximiliane F.; Bogner, Franz X.

    2016-01-01

    To monitor science motivation, 232 tenth graders of the college preparatory level ("Gymnasium") completed the Science Motivation Questionnaire II (SMQ-II). Additionally, personality data were collected using a 10-item version of the Big Five Inventory. A subsequent exploratory factor analysis based on the eigenvalue-greater-than-one…

  19. Breaking BAD: A Data Serving Vision for Big Active Data

    PubMed Central

    Carey, Michael J.; Jacobs, Steven; Tsotras, Vassilis J.

    2017-01-01

    Virtually all of today’s Big Data systems are passive in nature. Here we describe a project to shift Big Data platforms from passive to active. We detail a vision for a scalable system that can continuously and reliably capture Big Data to enable timely and automatic delivery of new information to a large pool of interested users as well as supporting analyses of historical information. We are currently building a Big Active Data (BAD) system by extending an existing scalable open-source BDMS (AsterixDB) in this active direction. This first paper zooms in on the Data Serving piece of the BAD puzzle, including its key concepts and user model. PMID:29034377

  20. The Terra Data Fusion Project: An Update

    NASA Astrophysics Data System (ADS)

    Di Girolamo, L.; Bansal, S.; Butler, M.; Fu, D.; Gao, Y.; Lee, H. J.; Liu, Y.; Lo, Y. L.; Raila, D.; Turner, K.; Towns, J.; Wang, S. W.; Yang, K.; Zhao, G.

    2017-12-01

    Terra is the flagship of NASA's Earth Observing System. Launched in 1999, Terra's five instruments continue to gather data that enable scientists to address fundamental Earth science questions. By design, the strength of the Terra mission has always been rooted in its five instruments and the ability to fuse the instrument data together for obtaining greater quality of information for Earth Science compared to individual instruments alone. As the data volume grows and the central Earth Science questions move towards problems requiring decadal-scale data records, the need for data fusion and the ability for scientists to perform large-scale analytics with long records have never been greater. The challenge is particularly acute for Terra, given its growing volume of data (> 1 petabyte), the storage of different instrument data at different archive centers, the different file formats and projection systems employed for different instrument data, and the inadequate cyberinfrastructure for scientists to access and process whole-mission fusion data (including Level 1 data). Sharing newly derived Terra products with the rest of the world also poses challenges. As such, the Terra Data Fusion Project aims to resolve two long-standing problems: 1) How do we efficiently generate and deliver Terra data fusion products? 2) How do we facilitate the use of Terra data fusion products by the community in generating new products and knowledge through national computing facilities, and disseminate these new products and knowledge through national data sharing services? Here, we will provide an update on significant progress made in addressing these problems by working with NASA and leveraging national facilities managed by the National Center for Supercomputing Applications (NCSA). 
The problems that we faced in deriving and delivering Terra L1B2 basic, reprojected and cloud-element fusion products, such as data transfer, data fusion, processing on different computer architectures, science, and sharing, will be presented with quantitative specifics. Results from several science-specific drivers for Terra fusion products will also be presented. We demonstrate that the Terra Data Fusion Project itself provides an excellent use-case for the community addressing Big Data and cyberinfrastructure problems.

  1. Technology for Mining the Big Data of MOOCs

    ERIC Educational Resources Information Center

    O'Reilly, Una-May; Veeramachaneni, Kalyan

    2014-01-01

    Because MOOCs bring big data to the forefront, they confront learning science with technology challenges. We describe an agenda for developing technology that enables MOOC analytics. Such an agenda needs to efficiently address the detailed, low level, high volume nature of MOOC data. It also needs to help exploit the data's capacity to reveal, in…

  2. Big Ideas at the Center for Innovation in Education at Thomas College

    ERIC Educational Resources Information Center

    Prawat, Ted

    2016-01-01

    Schools and teachers are looking for innovative ways to teach the "big ideas" emerging in the core curricula, especially in the STEAM fields (science, technology, engineering, arts, and math). As a result, learning environments that support digital learning and educational technology on various platforms and devices are taking on…

  3. The Big Bang: UK Young Scientists' and Engineers' Fair 2010

    ERIC Educational Resources Information Center

    Allison, Simon

    2010-01-01

    The Big Bang: UK Young Scientists' and Engineers' Fair is an annual three-day event designed to promote science, technology, engineering and maths (STEM) careers to young people aged 7-19 through experiential learning. It is supported by stakeholders from business and industry, government and the community, and brings together people from various…

  4. Harnessing the power of big data: infusing the scientific method with machine learning to transform ecology

    USDA-ARS?s Scientific Manuscript database

    Most efforts to harness the power of big data for ecology and environmental sciences focus on data and metadata sharing, standardization, and accuracy. However, many scientists have not accepted the data deluge as an integral part of their research because the current scientific method is not scalab...

  5. Combustion Science for Cleaner Fuels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed, Musahid

    2014-10-17

    Musahid Ahmed discusses how he and his team use the Advanced Light Source (ALS) to study combustion chemistry at our '8 Big Ideas' Science at the Theater event on October 8th, 2014, in Oakland, California.

  6. Combustion Science for Cleaner Fuels

    ScienceCinema

    Ahmed, Musahid

    2018-01-16

    Musahid Ahmed discusses how he and his team use the Advanced Light Source (ALS) to study combustion chemistry at our '8 Big Ideas' Science at the Theater event on October 8th, 2014, in Oakland, California.

  7. Neuroblastoma, a Paradigm for Big Data Science in Pediatric Oncology.

    PubMed

    Salazar, Brittany M; Balczewski, Emily A; Ung, Choong Yong; Zhu, Shizhen

    2016-12-27

    Pediatric cancers rarely exhibit recurrent mutational events when compared to most adult cancers. This poses a challenge in understanding how cancers initiate, progress, and metastasize in early childhood. Also, because few driver mutations have been detected, it is difficult to benchmark key genes for drug development. In this review, we use neuroblastoma, a pediatric solid tumor of neural crest origin, as a paradigm for exploring "big data" applications in pediatric oncology. Computational strategies derived from big data science, namely network- and machine learning-based modeling and drug repositioning, hold the promise of shedding new light on the molecular mechanisms driving neuroblastoma pathogenesis and identifying potential therapeutics to combat this devastating disease. These strategies integrate robust data input from genomic and transcriptomic studies, clinical data, and in vivo and in vitro experimental models specific to neuroblastoma and other types of cancers that closely mimic its biological characteristics. We discuss contexts in which "big data" and computational approaches, especially network-based modeling, may advance neuroblastoma research, describe currently available data and resources, and propose future models of strategic data collection and analyses for neuroblastoma and other related diseases.

  8. The dynamics of big data and human rights: the case of scientific research.

    PubMed

    Vayena, Effy; Tasioulas, John

    2016-12-28

    In this paper, we address the complex relationship between big data and human rights. Because this is a vast terrain, we restrict our focus in two main ways. First, we concentrate on big data applications in scientific research, mostly health-related research. And, second, we concentrate on two human rights: the familiar right to privacy and the less well-known right to science. Our contention is that human rights interact in potentially complex ways with big data, not only constraining it, but also enabling it in various ways; and that such rights are dynamic in character, rather than fixed once and for all, changing in their implications over time in line with changes in the context we inhabit, and also as they interact among themselves in jointly responding to the opportunities and risks thrown up by a changing world. Understanding this dynamic interaction of human rights is crucial for formulating an ethic tailored to the realities-the new capabilities and risks-of the rapidly evolving digital environment.This article is part of the themed issue 'The ethical impact of data science'. © 2016 The Author(s).

  9. BIG: a large-scale data integration tool for renal physiology

    PubMed Central

    Zhao, Yue; Yang, Chin-Rang; Raghuram, Viswanathan; Parulekar, Jaya

    2016-01-01

    Due to recent advances in high-throughput techniques, we and others have generated multiple proteomic and transcriptomic databases to describe and quantify gene expression, protein abundance, or cellular signaling on the scale of the whole genome/proteome in kidney cells. The existence of so much data from diverse sources raises the following question: “How can researchers find information efficiently for a given gene product over all of these data sets without searching each data set individually?” This is the type of problem that has motivated the “Big-Data” revolution in Data Science, which has driven progress in fields such as marketing. Here we present an online Big-Data tool called BIG (Biological Information Gatherer) that allows users to submit a single online query to obtain all relevant information from all indexed databases. BIG is accessible at http://big.nhlbi.nih.gov/. PMID:27279488
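    The single-query model described above can be sketched as a simple fan-out over several indexed data sets. The sources, values, and structure below are hypothetical illustrations, not BIG's actual schema or API:

```python
# Toy indexed data sets, keyed by gene symbol (values are made up).
proteome = {"AQP2": {"abundance": 8.1}}
transcriptome = {"AQP2": {"tpm": 542.0}}
phospho = {"AQP2": {"sites": ["S256", "S261"]}}

def gather(gene, sources):
    """Return every record for `gene` across all indexed sources,
    so the user issues one query instead of searching each set."""
    return {name: db[gene] for name, db in sources.items() if gene in db}

hits = gather("AQP2", {"proteome": proteome,
                       "transcriptome": transcriptome,
                       "phospho": phospho})
```

    A production gatherer would index each source ahead of time and merge heterogeneous record formats, but the fan-out-and-merge shape is the same.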

  10. Commentary: Epidemiology in the era of big data.

    PubMed

    Mooney, Stephen J; Westreich, Daniel J; El-Sayed, Abdulrahman M

    2015-05-01

    Big Data has increasingly been promoted as a revolutionary development in the future of science, including epidemiology. However, the definition and implications of Big Data for epidemiology remain unclear. We here provide a working definition of Big Data predicated on the so-called "three V's": variety, volume, and velocity. From this definition, we argue that Big Data has evolutionary and revolutionary implications for identifying and intervening on the determinants of population health. We suggest that as more sources of diverse data become publicly available, the ability to combine and refine these data to yield valid answers to epidemiologic questions will be invaluable. We conclude that while epidemiology as practiced today will continue to be practiced in the Big Data future, a component of our field's future value lies in integrating subject matter knowledge with increased technical savvy. Our training programs and our visions for future public health interventions should reflect this future.

  11. Epidemiology in the Era of Big Data

    PubMed Central

    Mooney, Stephen J; Westreich, Daniel J; El-Sayed, Abdulrahman M

    2015-01-01

    Big Data has increasingly been promoted as a revolutionary development in the future of science, including epidemiology. However, the definition and implications of Big Data for epidemiology remain unclear. We here provide a working definition of Big Data predicated on the so-called ‘3 Vs’: variety, volume, and velocity. From this definition, we argue that Big Data has evolutionary and revolutionary implications for identifying and intervening on the determinants of population health. We suggest that as more sources of diverse data become publicly available, the ability to combine and refine these data to yield valid answers to epidemiologic questions will be invaluable. We conclude that, while epidemiology as practiced today will continue to be practiced in the Big Data future, a component of our field’s future value lies in integrating subject matter knowledge with increased technical savvy. Our training programs and our visions for future public health interventions should reflect this future. PMID:25756221

  12. Art, science, and immersion: data-driven experiences

    NASA Astrophysics Data System (ADS)

    West, Ruth G.; Monroe, Laura; Ford Morie, Jacquelyn; Aguilera, Julieta

    2013-03-01

    This panel and dialog-paper explores the potentials at the intersection of art, science, immersion and highly dimensional, "big" data to create new forms of engagement, insight and cultural forms. We will address questions such as: "What kinds of research questions can be identified at the intersection of art + science + immersive environments that can't be expressed otherwise?" "How is art+science+immersion distinct from state-of-the art visualization?" "What does working with immersive environments and visualization offer that other approaches don't or can't?" "Where does immersion fall short?" We will also explore current trends in the application of immersion for gaming, scientific data, entertainment, simulation, social media and other new forms of big data. We ask what expressive, arts-based approaches can contribute to these forms in the broad cultural landscape of immersive technologies.

  13. Second BRITE-Constellation Science Conference: Small satellites—big science, Proceedings of the Polish Astronomical Society volume 5

    NASA Astrophysics Data System (ADS)

    Zwintz, Konstanze; Poretti, Ennio

    2017-09-01

    In 2016 the BRITE-Constellation mission had been operational for more than two years. At that time, several hundred bright stars of various types had been observed successfully in the two BRITE filters and astonishing new discoveries had been made. Therefore, the time was ripe to host the Second BRITE-Constellation Science Conference, "Small satellites - big science", from August 22 to 26, 2016, in the beautiful Madonnensaal of the University of Innsbruck, Austria. With this conference, we brought together the scientific community interested in BRITE-Constellation, provided an update on the status of the mission, presented and discussed the latest scientific results, shared our experiences with the data, illustrated successful collaborations between professional and amateur ground-based observers and BRITE scientists, and explored new ideas for future BRITE-Constellation observations.

  14. Data management by using R: big data clinical research series.

    PubMed

    Zhang, Zhongheng

    2015-11-01

    Electronic medical record (EMR) systems have been widely adopted in clinical practice. By replacing traditional handwritten records, the EMR makes big data clinical research feasible. The most important feature of big data research is its real-world setting. Furthermore, big data research can provide information on all aspects of healthcare. However, big data research requires skills in data management, which are often lacking in the medical education curriculum. This greatly hinders doctors from testing their clinical hypotheses using EMRs. To bridge this gap, a series of articles introducing data management techniques is put forward to guide clinicians into big data clinical research. The present educational article first introduces some basic knowledge of the R language, followed by data management skills for creating new variables, recoding variables, and renaming variables. These are very basic skills and may be used in every big data research project.
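    The article's examples are in R; purely as an illustrative analogue, the same three skills (creating, recoding, and renaming variables) look like this in Python with pandas. The toy EMR columns and thresholds are hypothetical:

```python
import pandas as pd

# Toy EMR extract; column names and values are illustrative only.
emr = pd.DataFrame({
    "age": [34, 71, 58],
    "sbp": [118, 165, 142],  # systolic blood pressure, mmHg
})

# Creating a new variable: flag patients with elevated blood pressure.
emr["hypertensive"] = emr["sbp"] >= 140

# Recoding a variable: bin continuous age into groups.
emr["age_group"] = pd.cut(emr["age"], bins=[0, 50, 120],
                          labels=["<=50", ">50"])

# Renaming a variable to something more descriptive.
emr = emr.rename(columns={"sbp": "systolic_bp"})
```

    In R the equivalents would be ordinary assignment, `cut()`, and `names()<-` or `dplyr::rename()`.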

  15. Research Data Alliance: Understanding Big Data Analytics Applications in Earth Science

    NASA Astrophysics Data System (ADS)

    Riedel, Morris; Ramachandran, Rahul; Baumann, Peter

    2014-05-01

    The Research Data Alliance (RDA) enables data to be shared across barriers through focused working groups and interest groups formed of experts from around the world, drawn from academia, industry, and government. Its Big Data Analytics (BDA) interest group seeks to develop community-based recommendations on feasible data analytics approaches to address the scientific community's needs in utilizing large quantities of data. BDA seeks to analyze different scientific domain applications (e.g., Earth science use cases) and their potential use of various big data analytics techniques. These techniques range from hardware deployment models to various algorithms (e.g., machine learning algorithms such as support vector machines for classification). A systematic classification of feasible combinations of analysis algorithms, analytical tools, data and resource characteristics, and scientific queries will be covered in these recommendations. This contribution will outline initial parts of such a classification and recommendations in the specific context of the Earth sciences. The lessons learned and experiences presented are based on a survey of use cases, along with detailed insights into a few of them.

  16. Research Data Alliance: Understanding Big Data Analytics Applications in Earth Science

    NASA Technical Reports Server (NTRS)

    Riedel, Morris; Ramachandran, Rahul; Baumann, Peter

    2014-01-01

    The Research Data Alliance (RDA) enables data to be shared across barriers through focused working groups and interest groups formed of experts from around the world, drawn from academia, industry, and government. Its Big Data Analytics (BDA) interest group seeks to develop community-based recommendations on feasible data analytics approaches to address the scientific community's needs in utilizing large quantities of data. BDA seeks to analyze different scientific domain applications (e.g., Earth science use cases) and their potential use of various big data analytics techniques. These techniques range from hardware deployment models to various algorithms (e.g., machine learning algorithms such as support vector machines for classification). A systematic classification of feasible combinations of analysis algorithms, analytical tools, data and resource characteristics, and scientific queries will be covered in these recommendations. This contribution will outline initial parts of such a classification and recommendations in the specific context of the Earth sciences. The lessons learned and experiences presented are based on a survey of use cases, along with detailed insights into a few of them.

  17. How to Visualize and Communicate Challenges in Climate and Environmental Sciences?

    NASA Astrophysics Data System (ADS)

    Vicari, R.; Schertzer, D. J. M.; Deutsch, J. C.

    2014-12-01

    The challenges of climate and environmental sciences call for a renewed dialogue with a large spectrum of stakeholders, ranging from the general public to specialists. This requires better use of sophisticated visualization techniques, both to forward information and to follow the corresponding flow of information. A particular case of interest is the question of resilience to extreme weather events, which also relies on increasing awareness among urban communities. This research looks at the development of techniques for exploring unstructured Big Data. Indeed, access to information on environmental and climate sciences has increased hugely in variety and quantity, as a consequence of several factors, among others the development of public relations by research institutes and the pervasive role of digital media (Bucchi 2013; Trench 2008). We are left with vast amounts of information from blogs, social network postings, public speeches, press releases, articles, etc. It is now possible to explore and visualize the patterns followed by digital information with the support of automated analysis tools. On the other hand, these techniques can provide important insights into how different techniques of visual communication affect urban resilience to extreme weather. The selected case studies correspond to several research projects under the umbrella of the Chair "Hydrology for resilient cities", which aims to develop and test new solutions in urban hydrology that will contribute to the resilience of our cities to extreme weather. These research projects, ranging from regional projects (e.g., RadX@IdF) and European projects (e.g., Blue Green Dream and RainGain) to worldwide collaborations (e.g., TOMACS), include awareness-raising and capacity-building activities aimed at fostering cooperation between scientists, professionals, and beneficiaries.
    This presentation will explore how visualization techniques can be used in the above-mentioned projects to support outreach activities, as well as to illustrate the impact of digital communication on urban resilience.

  18. Zooming in on Landing Site

    NASA Technical Reports Server (NTRS)

    2008-01-01

    [figure removed for brevity, see original site] Click on the image for movie of Zooming in on Landing Site

    This animation zooms in on the area on Mars where NASA's Phoenix Mars Lander will touchdown on May 25, 2008. The image was taken by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter.

    The first shot shows the spacecraft's landing ellipse in green, the area where Phoenix has a high probability of landing. It then zooms in to show the region's arctic terrain. This polar landscape is relatively free of rocks, with only about 1 to 2 rocks 1.5 meters (4.9 feet) or larger in an area about as big as two football fields.

    NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Reconnaissance Orbiter for NASA's Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, is the prime contractor for the project and built the spacecraft. The High Resolution Imaging Science Experiment is operated by the University of Arizona, Tucson, and the instrument was built by Ball Aerospace & Technologies Corp., Boulder, Colo.

  19. A Hybrid Cloud Computing Service for Earth Sciences

    NASA Astrophysics Data System (ADS)

    Yang, C. P.

    2016-12-01

    Cloud computing is becoming the norm for providing computing capabilities to advance the Earth sciences, including big Earth data management, processing, analytics, model simulations, and many other aspects. A hybrid spatiotemporal cloud computing service has been built at the George Mason NSF Spatiotemporal Innovation Center to meet these demands. This paper will report on several aspects of the service: 1) the hardware includes 500 computing servers and close to 2 PB of storage, as well as connections to the XSEDE Jetstream and Caltech experimental cloud computing environments for resource sharing; 2) the cloud service is geographically distributed across the east coast, west coast, and central region; 3) the cloud includes private clouds managed using OpenStack and Eucalyptus, with DC2 used to bridge these with the public AWS cloud for interoperability and for sharing computing resources when high demand surges; 4) the cloud service supports the NSF EarthCube program through the ECITE project, and ESIP through the ESIP cloud computing cluster, the semantics testbed cluster, and other clusters; 5) the cloud service is also available for the Earth science communities to conduct geoscience research. A brief introduction on how to use the cloud service will be included.

  20. The Big Bang, Genesis, and Knocking on Heaven's Door

    NASA Astrophysics Data System (ADS)

    Gentry, Robert

    2012-03-01

    Michael Shermer recently upped the ante in the big bang-Genesis controversy by citing Lisa Randall's provocative claim (Science 334, 762 (2011)) that ``it is inconceivable that God could continue to intervene without introducing a material trace of his actions.'' So does Randall's and Shermer's agreement that no such evidence exists disprove God's existence? Not in my view because my 1970s Science, Nature and ARNS publications, and my article in the 1982 AAAS Western Division's Symposium Proceedings, Evolution Confronts Creation, all contain validation of God's existence via discovery of His Fingerprints of Creation and falsification of the big bang and geological evolution. These results came to wide public/scientific attention in my testimony at the 1981 Arkansas creation/evolution trial. There ACLU witness G Brent Dalrymple from the USGS -- and 2005 Medal of Science recipient from President Bush -- admitted I had discovered a tiny mystery (primordial polonium radiohalos) in granite rocks that indicated their almost instant creation. As a follow-up in 1992 and 1995 he sent out SOS letters to the entire AGU membership that the polonium halo evidence for fiat creation still existed and that someone needed to urgently find a naturalistic explanation for them. Is the physics community guilty of a Watergate-type cover-up of this discovery of God's existence and falsification of the big bang? For the answer see www.halos.tv.

  1. On Establishing Big Data Wave Breakwaters with Analytics (Invited)

    NASA Astrophysics Data System (ADS)

    Riedel, M.

    2013-12-01

    The Research Data Alliance Big Data Analytics (RDA-BDA) Interest Group seeks to develop community-based recommendations on feasible data analytics approaches to address the scientific community's needs in utilizing large quantities of data. RDA-BDA seeks to analyze different scientific domain applications and their potential use of various big data analytics techniques. A systematic classification of feasible combinations of analysis algorithms, analytical tools, data and resource characteristics, and scientific queries will be covered in these recommendations. These combinations are complex, since a wide variety of data analysis algorithms exist (e.g., specific algorithms using GPUs for analyzing brain images) that need to work together with multiple analytical tools, ranging from simple (iterative) map-reduce methods (e.g., with Apache Hadoop or Twister) to sophisticated higher-level frameworks that leverage machine learning algorithms (e.g., Apache Mahout). These computational analysis techniques are often augmented with visual analytics techniques (e.g., computational steering on large-scale high-performance computing platforms) to put human judgement into the analysis loop, or with new approaches to databases designed to support new forms of unstructured or semi-structured data, as opposed to the more traditional structured databases (e.g., relational databases). More recently, data analysis and the underpinning analytics frameworks also have to consider the energy footprints of the underlying resources. To sum up, the aim of this talk is to provide the information needed to understand big data analytics in the context of science and engineering, using the aforementioned classification as the lighthouse and frame of reference for a systematic approach.
    This talk will provide insights into big data analytics methods in the context of science within various communities, and will offer different views of how approaches based on correlation and causality provide complementary methods for advancing science and engineering today. The RDA Big Data Analytics Group seeks to understand which approaches are not only technically but also scientifically feasible. Its lighthouse goal is a classification of clever combinations of various technologies and scientific applications, in order to provide clear recommendations to the scientific community on which approaches are technically and scientifically feasible.
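    The "simple (iterative) map-reduce methods" mentioned above can be illustrated with a toy sketch: one-dimensional k-means expressed as repeated map (point assignment) and reduce (centroid update) steps, the pattern that iterative frameworks such as Twister target. This is an illustrative analogue in plain Python, not code from any of the cited frameworks:

```python
def assign(points, centroids):
    """Map step: emit (nearest-centroid index, point) pairs."""
    for p in points:
        idx = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
        yield (idx, p)

def update(pairs, k):
    """Reduce step: each new centroid is the mean of its assigned points."""
    sums, counts = [0.0] * k, [0] * k
    for idx, p in pairs:
        sums[idx] += p
        counts[idx] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]

def kmeans_1d(points, centroids, iterations=10):
    """Iterate the map-reduce pair until the centroids settle."""
    for _ in range(iterations):
        centroids = update(assign(points, centroids), len(centroids))
    return centroids

points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
centroids = kmeans_1d(points, [0.0, 5.0])
```

    The point of iterative frameworks is that the shuffle between these two steps, which a plain Hadoop job would pay for on every iteration, can be kept in memory across iterations.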

  2. Big data in medical science--a biostatistical view.

    PubMed

    Binder, Harald; Blettner, Maria

    2015-02-27

    Inexpensive techniques for measurement and data storage now enable medical researchers to acquire far more data than can conveniently be analyzed by traditional methods. The expression "big data" refers to quantities on the order of magnitude of a terabyte (10^12 bytes); special techniques must be used to evaluate such huge quantities of data in a scientifically meaningful way. Whether data sets of this size are useful and important is an open question that currently confronts medical science. In this article, we give illustrative examples of the use of analytical techniques for big data and discuss them in the light of a selective literature review. We point out some critical aspects that should be considered to avoid errors when large amounts of data are analyzed. Machine learning techniques enable the recognition of potentially relevant patterns. When such techniques are used, certain additional steps should be taken that are unnecessary in more traditional analyses; for example, patient characteristics should be differentially weighted. If this is not done as a preliminary step before similarity detection, which is a component of many data analysis operations, characteristics such as age or sex will be weighted no higher than any one out of 10,000 gene expression values. Experience from the analysis of conventional observational data sets can be called upon to draw conclusions about potential causal effects from big data sets. Big data techniques can be used, for example, to evaluate observational data derived from the routine care of entire populations, with clustering methods used to analyze therapeutically relevant patient subgroups. Such analyses can provide complementary information to clinical trials of the classic type. As big data analyses become more popular, various statistical techniques for causality analysis in observational data are becoming more widely available.
    This is likely to benefit medical science, but specific adaptations will have to be made according to the requirements of the applications.
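    The weighting pitfall described above can be made concrete with a small sketch (all feature values are hypothetical and assumed standardized): a patient who differs strongly on two clinical features but not on genes can look closer than one who differs slightly on a thousand genes, unless the clinical block is up-weighted before computing distances:

```python
import math

def dist(u, v, w=None):
    """Euclidean distance with optional per-feature weights."""
    if w is None:
        w = [1.0] * len(u)
    return math.sqrt(sum(wi * (a - b) ** 2 for wi, a, b in zip(w, u, v)))

N_GENES = 1000  # stand-in for ~10,000 expression values

# Feature vectors: 2 clinical features (age, sex) + N_GENES gene values.
p_ref   = [0.0, 0.0] + [0.0] * N_GENES
p_clin  = [2.0, 2.0] + [0.0] * N_GENES   # very different clinically, same genes
p_genes = [0.0, 0.0] + [0.1] * N_GENES   # same clinically, slightly different genes

# Unweighted: 1000 tiny gene differences outweigh 2 large clinical ones,
# so the clinically different patient looks *closer* to the reference.
d_clin_unw  = dist(p_ref, p_clin)
d_genes_unw = dist(p_ref, p_genes)

# Up-weighting the clinical block restores its influence on similarity.
weights = [50.0, 50.0] + [1.0] * N_GENES
d_clin_w  = dist(p_ref, p_clin, weights)
d_genes_w = dist(p_ref, p_genes, weights)
```

    The choice of weights (50.0 here) is arbitrary in this sketch; in practice it would reflect domain judgement about how much clinical characteristics should count relative to the genomic block.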

  3. Hydrologic, vegetation, and soil data collected in selected wetlands of the Big River Management area, Rhode Island, from 2008 through 2010

    USGS Publications Warehouse

    Borenstein, Meredith S.; Golet, Francis C.; Armstrong, David S.; Breault, Robert F.; McCobb, Timothy D.; Weiskel, Peter K.

    2012-01-01

    The Rhode Island Water Resources Board planned to develop public water-supply wells in the Big River Management Area in Kent County, Rhode Island. Research in the United States and abroad indicates that groundwater withdrawal has the potential to affect wetland hydrology and related processes. In May 2008, the Rhode Island Water Resources Board, the U.S. Geological Survey, and the University of Rhode Island formed a partnership to establish baseline conditions at selected Big River wetland study sites and to develop an approach for monitoring potential impacts once pumping begins. In 2008 and 2009, baseline data were collected on the hydrology, vegetation, and soil characteristics at five forested wetland study sites in the Big River Management Area. Four of the sites were located in areas of potential drawdown associated with the projected withdrawals. The fifth site was located outside the area of projected drawdown and served as a control site. The data collected during this study are presented in this report.

  4. Creating value in health care through big data: opportunities and policy implications.

    PubMed

    Roski, Joachim; Bo-Linn, George W; Andrews, Timothy A

    2014-07-01

    Big data has the potential to create significant value in health care by improving outcomes while lowering costs. Big data's defining features include the ability to handle massive data volume and variety at high velocity. New, flexible, and easily expandable information technology (IT) infrastructure, including so-called data lakes and cloud data storage and management solutions, make big-data analytics possible. However, most health IT systems still rely on data warehouse structures. Without the right IT infrastructure, analytic tools, visualization approaches, work flows, and interfaces, the insights provided by big data are likely to be limited. Big data's success in creating value in the health care sector may require changes in current polices to balance the potential societal benefits of big-data approaches and the protection of patients' confidentiality. Other policy implications of using big data are that many current practices and policies related to data use, access, sharing, privacy, and stewardship need to be revised. Project HOPE—The People-to-People Health Foundation, Inc.

  5. Big Computing in Astronomy: Perspectives and Challenges

    NASA Astrophysics Data System (ADS)

    Pankratius, Victor

    2014-06-01

    Hardware progress in recent years has led to astronomical instruments gathering large volumes of data. In radio astronomy, for instance, the current generation of antenna arrays produces data at terabits per second, and forthcoming instruments will expand these rates much further. As instruments increasingly become software-based, astronomers will be more exposed to computer science. This talk therefore outlines key challenges that arise at the intersection of computer science and astronomy and presents perspectives on how both communities can collaborate to overcome them. Major problems are emerging because data rates are growing much faster than storage and transmission capacity, and because humans are cognitively overwhelmed when attempting to opportunistically scan through Big Data. As a consequence, the generation of scientific insight will become more dependent on automation and algorithmic instrument control. Intelligent data reduction will have to be considered across the entire acquisition pipeline. In this context, the presentation will outline the enabling role of machine learning and parallel computing.

    Bio: Victor Pankratius is a computer scientist who joined MIT Haystack Observatory following his passion for astronomy. He is currently leading efforts to advance astronomy through cutting-edge computer science and parallel computing. Victor is also involved in projects such as ALMA Phasing, to enhance the ALMA Observatory with Very Long Baseline Interferometry capabilities, the Event Horizon Telescope, and the Radio Array of Portable Interferometric Detectors (RAPID), which aims to create an analysis environment using parallel computing in the cloud. He has an extensive track record of research in parallel multicore systems and software engineering, with contributions to auto-tuning, debugging, and empirical experiments studying programmers. Victor has worked with major industry partners such as Intel, Sun Labs, and Oracle. He holds a distinguished doctorate and a Habilitation degree in Computer Science from the University of Karlsruhe. Contact him at pankrat@mit.edu, victorpankratius.com, or Twitter @vpankratius.
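
    The kind of pipeline-stage data reduction the talk motivates can be sketched as a streaming filter that keeps only samples inconsistent with the noise floor, so that storage scales with scientific content rather than with the raw data rate. The threshold rule and the sample values below are invented for illustration.

```python
# Toy pipeline-stage data reducer (invented for illustration): keep
# only samples that exceed a k-sigma threshold over the noise floor,
# discarding data consistent with noise before it reaches storage.

def reduce_stream(samples, noise_sigma=1.0, k=5.0):
    """Return (index, value) pairs exceeding k * noise_sigma."""
    threshold = k * noise_sigma
    return [(i, v) for i, v in enumerate(samples) if abs(v) > threshold]

raw = [0.3, -0.8, 6.2, 0.1, -5.7, 0.9]   # mostly noise, two events
print(reduce_stream(raw))  # [(2, 6.2), (4, -5.7)]
```

    In a real instrument the reduction logic would be far richer (calibration, RFI excision, learned classifiers), but the architectural point is the same: reduce early, as close to acquisition as possible.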

  6. Dawn: A Simulation Model for Evaluating Costs and Tradeoffs of Big Data Science Architectures

    NASA Astrophysics Data System (ADS)

    Cinquini, L.; Crichton, D. J.; Braverman, A. J.; Kyo, L.; Fuchs, T.; Turmon, M.

    2014-12-01

    In many scientific disciplines, scientists and data managers are bracing for an upcoming deluge of big data volumes that will increase the size of current data archives by a factor of 10-100. For example, the next Coupled Model Intercomparison Project (CMIP6) will generate a global archive of model output of approximately 10-20 petabytes, while the next generation of NASA decadal Earth Observing instruments is expected to collect tens of gigabytes per day. In radio astronomy, the Square Kilometre Array (SKA) will collect data in the exabytes-per-day range, of which (after reduction and processing) around 1.5 exabytes per year will be stored. The effective and timely processing of these enormous data streams will require the design of new data reduction and processing algorithms, new system architectures, and new techniques for evaluating computation uncertainty. Yet at present no general software tool or framework exists that allows system architects to model their expected data processing workflow and determine the network, computational, and storage resources needed to prepare their data for scientific analysis. To fill this gap, at NASA/JPL we have been developing a preliminary model named DAWN (Distributed Analytics, Workflows and Numerics) for simulating arbitrarily complex workflows composed of any number of data processing and movement tasks. The model can be configured with a representation of the problem at hand (the data volumes, the processing algorithms, the available computing and network resources) and is able to evaluate tradeoffs between different possible workflows based on several estimators: overall elapsed time, separate computation and transfer times, resulting uncertainty, and others. So far, we have been applying DAWN to analyze architectural solutions for four use cases from distinct science disciplines: climate science, astronomy, hydrology, and a generic cloud computing use case. This talk will present preliminary results and discuss how DAWN can be evolved into a powerful tool for designing system architectures for data-intensive science.
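
    The tradeoff evaluation described above can be illustrated with a small back-of-the-envelope calculation. The sketch below is not the DAWN model itself; it is a hypothetical simplification in which each task in a workflow contributes a computation time (volume / processing rate) and a transfer time (volume / bandwidth), and candidate workflows are compared on overall elapsed time. All task definitions and rates are invented for illustration.

```python
# Hypothetical sketch of a workflow tradeoff estimator (illustrative,
# not the DAWN model): each task contributes a computation time and a
# transfer time, and workflows are compared on total elapsed time.

def elapsed_time(tasks):
    """Total serial elapsed time in seconds for a list of tasks.

    Each task dict gives a data volume (bytes), a processing rate
    (bytes/s), and a network bandwidth (bytes/s); float("inf") marks
    a cost that does not apply to that task.
    """
    compute = sum(t["volume"] / t["proc_rate"] for t in tasks)
    transfer = sum(t["volume"] / t["bandwidth"] for t in tasks)
    return compute + transfer

TB = 10**12

# Workflow A: reduce 10 TB at the archive, then ship the 1 TB result.
reduce_first = [
    {"volume": 10 * TB, "proc_rate": 5e9, "bandwidth": float("inf")},
    {"volume": 1 * TB, "proc_rate": float("inf"), "bandwidth": 1e9},
]

# Workflow B: ship the raw 10 TB, then reduce it locally.
move_first = [
    {"volume": 10 * TB, "proc_rate": float("inf"), "bandwidth": 1e9},
    {"volume": 10 * TB, "proc_rate": 5e9, "bandwidth": float("inf")},
]

print(elapsed_time(reduce_first))  # 3000.0 s
print(elapsed_time(move_first))    # 12000.0 s
```

    Even this crude estimator reproduces a familiar design conclusion: at archive scale, reducing data before moving it usually beats moving raw data, because transfer time dominates.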

  7. Exploiting big data for critical care research.

    PubMed

    Docherty, Annemarie B; Lone, Nazir I

    2015-10-01

    Over recent years, the digitalization, collection, and storage of vast quantities of data, in combination with advances in data science, have opened up a new era of big data. In this review, we define big data, identify examples of critical care research using big data, discuss the limitations and ethical concerns of using these large datasets, and finally consider the scope for future research. Big data refers to datasets whose size, complexity and dynamic nature are beyond the scope of traditional data collection and analysis methods. The potential benefits to critical care are significant, with faster progress in improving health and better value for money. Although not replacing clinical trials, big data can improve their design and advance the field of precision medicine. However, there are limitations to analysing big data using observational methods. In addition, there are ethical concerns regarding maintaining confidentiality of patients who contribute to these datasets. Big data have the potential to improve medical care and reduce costs, both by individualizing medicine and by bringing together multiple sources of data about individual patients. As big data become increasingly mainstream, it will be important to maintain public confidence by safeguarding data security, governance and confidentiality.

  8. Did God create our universe? Theological reflections on the Big Bang, inflation, and quantum cosmologies.

    PubMed

    Russell, R J

    2001-12-01

    The sciences and the humanities, including theology, form an epistemic hierarchy that ensures both constraint and irreducibility. At the same time, theological methodology is analogous to scientific methodology, though with several important differences. This model of interaction between science and theology can be seen illustrated in a consideration of the relation between contemporary cosmology (Big Bang cosmology, cosmic inflation, and quantum cosmology) and Christian systematic and natural theology. In light of developments in cosmology, the question of origins has become theologically less interesting than that of the cosmic evolution of a contingent universe.

  9. Manufacturing and certification of a diffraction corrector for controlling the surface shape of the six-meter main mirror of the Big Azimuthal Telescope of the Russian Academy of Sciences

    NASA Astrophysics Data System (ADS)

    Nasyrov, R. K.; Poleshchuk, A. G.

    2017-09-01

    This paper describes the development and manufacture of a diffraction corrector and an imitator for the interferometric control of the surface shape of the 6-m main mirror of the Big Azimuthal Telescope of the Russian Academy of Sciences. The effect of manufacturing and adjustment errors on the quality of the measurement wavefront is studied. The corrector is controlled with the use of an off-axis diffraction imitator operating in reflection mode. The measured error is smaller than 0.0138λ (RMS).
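
    The quoted figure of 0.0138λ (RMS) is a root-mean-square deviation of the measured wavefront expressed in units of the test wavelength. The snippet below is a generic illustration of how such a figure is computed from a residual map; it is not the authors' procedure, and the residual values and the 632.8 nm HeNe wavelength are assumptions made for the example.

```python
# Illustrative RMS wavefront-error computation in units of the test
# wavelength. The residual map and the HeNe wavelength (632.8 nm) are
# assumptions for the example, not values from the certification.
import math

def rms_in_waves(residual_nm, wavelength_nm=632.8):
    """RMS of a 2-D residual map (nanometres), expressed in waves."""
    vals = [v for row in residual_nm for v in row]
    mean = sum(vals) / len(vals)              # remove piston (mean offset)
    rms_nm = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))
    return rms_nm / wavelength_nm

# A tiny synthetic residual map, in nanometres:
residuals = [[4.0, -2.0, 1.0], [-3.0, 2.0, -2.0]]
print(rms_in_waves(residuals))  # well under 0.0138 waves
```

    Removing the mean (piston) term before taking the RMS is conventional, since a constant offset carries no surface-shape information.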

  10. Science to support adaptive habitat management: Overton Bottoms North Unit, Big Muddy National Fish and Wildlife Refuge, Missouri [Volumes 1-6

    USGS Publications Warehouse

    Jacobson, Robert B.

    2006-01-01

    Extensive efforts are underway along the Lower Missouri River to rehabilitate ecosystem functions in the channel and flood plain. Considerable uncertainty inevitably accompanies ecosystem restoration efforts, indicating the benefits of an adaptive management approach in which management actions are treated as experiments, and results provide information to feed back into the management process. The Overton Bottoms North Unit of the Big Muddy National Fish and Wildlife Refuge is a part of the Missouri River Fish and Wildlife Habitat Mitigation Project. The dominant management action at the Overton Bottoms North Unit has been excavation of a side-channel chute to increase hydrologic connectivity and to enhance shallow, slow current-velocity habitat. The side-channel chute also promises to increase hydrologic gradients, and may serve to alter patterns of wetland inundation and vegetation community growth in undesired ways. The U.S. Geological Survey's Central Region Integrated Studies Program (CRISP) undertook interdisciplinary research at the Overton Bottoms North Unit in 2003 to address key areas of scientific uncertainty that were highly relevant to ongoing adaptive management of the site, and to the design of similar rehabilitation projects on the Lower Missouri River. This volume presents chapters documenting the surficial geologic, topographic, surface-water, and ground-water framework of the Overton Bottoms North Unit. Retrospective analysis of vegetation community trends over the last 10 years is used to evaluate vegetation responses to reconnection of the Overton Bottoms North Unit to the river channel. Quasi-experimental analysis of cottonwood growth rate variation along hydrologic gradients is used to evaluate sensitivity of terrestrial vegetation to development of aquatic habitats. 
The integrated, landscape-specific understanding derived from these studies illustrates the value of scientific information in design and management of rehabilitation projects.

  11. Epidemiology in wonderland: Big Data and precision medicine.

    PubMed

    Saracci, Rodolfo

    2018-03-01

    Big Data and precision medicine, two major contemporary challenges for epidemiology, are critically examined from two different angles. In Part 1, Big Data collected for research purposes (Big research Data) and Big Data used for research although collected for other primary purposes (Big secondary Data) are discussed in the light of the fundamental common requirement of data validity, which prevails over "bigness". Precision medicine is treated by developing the key point that high relative risks are, as a rule, required to make a variable or combination of variables suitable for predicting disease occurrence, outcome, or response to treatment; the commercial proliferation of allegedly predictive tests of unknown or poor validity is commented on. Part 2 proposes a "wise epidemiology" approach to: (a) choosing, in a context shaped by Big Data and precision medicine, epidemiological research projects actually relevant to population health; (b) training epidemiologists; (c) investigating the impact of the influx of Big Data and computerized medicine on clinical practices and the doctor-patient relation; and (d) clarifying whether "health" may today be redefined, as some maintain, in purely technological terms.

  12. Entrenched Compartmentalisation and Students' Abilities and Levels of Interest in Science

    ERIC Educational Resources Information Center

    Billingsley, Berry; Nassaji, Mehdi; Abedin, Manzoorul

    2017-01-01

    This article explores the notion that asking and exploring so-called "big questions" could potentially increase the diversity and number of students who aspire to work in science and science-related careers. The focus is the premise that girls are more interested than boys in the relationships between science and other disciplines. The…

  13. The Big Picture: Pre-Service Teachers' Perceptions of "Expert" Science Teachers

    ERIC Educational Resources Information Center

    McKinnon, Merryn; Perara, Sean

    2015-01-01

    This study adapted the Draw-A-Science-Teacher Test to compare 22 pre-service teachers' perceptions of their own strengths as science teachers against their perceived strengths of expert science teachers. The drawings identified a disconnection between theory and practice that we revisit in the literature. Our findings from this pilot study are…

  14. 75 FR 54085 - Divide Ranger District, Rio Grande National Forest; Colorado; Big Moose Vegetation Management...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-03

    ... DEPARTMENT OF AGRICULTURE Forest Service Divide Ranger District, Rio Grande National Forest; Colorado; Big Moose Vegetation Management Project AGENCY: Forest Service, Rio Grande National Forest, USDA. ACTION: Corrected Notice of Intent to prepare an environmental impact statement. DATES: The draft...

  15. Neocolonialism and Contested Spiritual Landscapes in Modern American Astronomy

    NASA Astrophysics Data System (ADS)

    Swanner, L.

    2017-12-01

    In the second half of the twentieth century, Native American and Native Hawaiian activists clashed with the American astronomy community over telescope construction on sacred mountains. Multimillion dollar observatory projects planned for the Native Hawaiian sacred peak of Maunakea and the Native American sacred mountains of Kitt Peak and Mt. Graham in Arizona were stalled or abandoned following dramatic protests and legal disputes at each observatory site. Situating these controversies within the history of emerging Native rights movements in the United States, I argue that cultural gaps between pro- and anti-observatory groups are an artifact of what I shall call "neocolonialist science." Neocolonialist science, the domination and exploitation of Native lands by an occupying force for the purpose of practicing science, is also defined by the failure to acknowledge the impact of past and present conquests of Native land and cultural oppression. Despite astronomers' well-meaning attempts to demonstrate cultural sensitivity, the perception of telescopes as instruments of conquest has haunted each new observatory project. While astronomers typically see little connection between colonialism and the pursuit of knowledge, Native activists often see little distinction. Retained in inter-generational memory through oral tradition, the wounds of colonization remain fresh, and construction of telescopes on Native lands is often perceived as the latest attack on culture and sovereignty. These telescope controversies reveal that Big Science is surprisingly vulnerable to grassroots opposition, since religious claims on the mountain summits have severely restricted scientific development. To narrow the ideological divide between scientific and spiritual understandings of land use, I conclude that the future of science on sacred lands critically depends on acknowledging the colonialist past.

  16. The Brazilian Science Data Center (BSDC)

    NASA Astrophysics Data System (ADS)

    de Almeida, Ulisses Barres; Bodmann, Benno; Giommi, Paolo; Brandt, Carlos H.

    Astrophysics and Space Science are increasingly characterised by what is now known as "big data", with the bottlenecks for progress partly shifting from data acquisition to "data mining". The truth is that the amount and rate of data accumulation in many fields already surpass the local capabilities for processing and exploitation, and the efficient conversion of scientific data into knowledge is everywhere a challenge. The result is that, to a large extent, isolated data archives risk being progressively reduced to "data graveyards", where the stored information is not reused for scientific work. Responsible and efficient use of these large data-sets means democratising access and extracting the most science possible from them, which in turn means improving data accessibility and integration. Improving data processing capabilities is a further important issue, specific to the researchers and computer scientists of each field. The project presented here aims to exploit the enormous potential opened up by information technology in our age to advance a model for a science data center in astronomy that expands data accessibility and integration to the largest possible extent and with the greatest efficiency for scientific and educational use. Greater access to data means more people producing and benefiting from information, whereas larger integration of related data from different origins means greater research potential and increased scientific impact. The BSDC is concerned primarily with providing tools and solutions for the Brazilian astronomical community. It nevertheless capitalizes on extensive international experience and is developed in full cooperation with the ASI Science Data Center (ASDC) of the Italian Space Agency, granting it an essential ingredient of internationalisation. The BSDC is Virtual Observatory-compliant and part of "Open Universe", a global initiative built under the auspices of the United Nations.

  17. Reflections on the Use of Tablet Technology

    ERIC Educational Resources Information Center

    Wise, Nicki; McGregor, Deb; Bird, James

    2015-01-01

    This article describes a recent Oxfordshire Big Science Event (BSE), which was combined with Science Week in Bure Park Primary School and involved a competition in which primary school children throughout Oxfordshire devised, carried out, and recorded data from science investigations to answer questions that interested them. Teams of children…

  18. An Engineering Technology Skills Framework that Reflects Workforce Needs on Maui and the Big Island of Hawai'i

    NASA Astrophysics Data System (ADS)

    Seagroves, S.; Hunter, L.

    2010-12-01

    The Akamai Workforce Initiative (AWI) is an interdisciplinary effort to improve science/engineering education in the state of Hawai'i, and to train a diverse population of local students in the skills needed for a high-tech economy. In 2009, the AWI undertook a survey of industry partners on Maui and the Big Island of Hawai'i to develop an engineering technology skills framework that will guide curriculum development at the U. of Hawai'i - Maui (formerly Maui Community College). This engineering skills framework builds directly on past engineering-education developments within the Center for Adaptive Optics Professional Development Program, and draws on curriculum development frameworks and engineering skills standards from the literature. Coupling that previous work with reviews of past Akamai Internship projects and information from previous conversations with the local high-tech community led to a structured-interview format where engineers and managers could contribute meaningful commentary to this framework. By incorporating these local high-tech companies' needs for entry-level engineers and technicians, a skills framework emerges that is unique and illuminating. Two surprising features arise in this framework: (1) "technician-like" skills of making existing technology work are on similar footing with "engineer-like" skills of creating new technology; in fact, both engineers and technicians at these workplaces use both sets of skills; and (2) project management skills are emphasized by employers even for entry-level positions.

  19. NASA EOSDIS Evolution in the Big Data Era

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher

    2015-01-01

    NASA's EOSDIS system faces several challenges in the Big Data Era. Although volumes are large (but not unmanageably so), the variety of different data collections is daunting. That variety also brings with it a large and diverse user community. One key evolution EOSDIS is working toward is to enable more science analysis to be performed close to the data.

  20. What's in a Relationship? An Examination of Social Capital, Race and Class in Mentoring Relationships

    ERIC Educational Resources Information Center

    Gaddis, S. Michael

    2012-01-01

    After 25 years of intense scrutiny, social capital remains an important yet highly debated concept in social science research. This research uses data from youth and mentors in several chapters of Big Brothers/Big Sisters to assess the importance of different mentoring relationship characteristics in creating positive outcomes among youths. The…

  1. Close Encounters of the Best Kind: The Latest Sci-Fi

    ERIC Educational Resources Information Center

    Kunzel, Bonnie

    2008-01-01

    Not only is science fiction alive and well--it's flourishing. From the big screen (howdy, Wall-E) to the big books (like Suzanne Collins's The Hunger Games, which has attracted loads of prepublication praise), 2008 has been a great year for sci-fi. Publishers have released truckloads of new sci-fi titles this year, but what's particularly…

  2. Small Core, Big Network: A Comprehensive Approach to GIS Teaching Practice Based on Digital Three-Dimensional Campus Reconstruction

    ERIC Educational Resources Information Center

    Cheng, Liang; Zhang, Wen; Wang, Jiechen; Li, Manchun; Zhong, Lishan

    2014-01-01

    Geographic information science (GIS) features a wide range of disciplines and has broad applicability. Challenges associated with rapidly developing GIS technology and the currently limited teaching and practice materials hinder universities from cultivating highly skilled GIS graduates. Based on the idea of "small core, big network," a…

  3. 429th Brookhaven Lecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robert P. Crease

    2007-10-31

    Robert P. Crease, historian for Brookhaven National Laboratory and Chair of the Philosophy Department at Stony Brook University, presents "How Big Science Came to Long Island: The Birth of Brookhaven Lab," covering the founding of the Laboratory, the key figures involved in starting BNL, and the many problems that had to be overcome in creating and designing its first big machines.

  4. 429th Brookhaven Lecture

    ScienceCinema

    Robert P. Crease

    2017-12-09

    Robert P. Crease, historian for Brookhaven National Laboratory and Chair of the Philosophy Department at Stony Brook University, presents "How Big Science Came to Long Island: The Birth of Brookhaven Lab," covering the founding of the Laboratory, the key figures involved in starting BNL, and the many problems that had to be overcome in creating and designing its first big machines.

  5. Big Data, Big Problems: A Healthcare Perspective.

    PubMed

    Househ, Mowafa S; Aldosari, Bakheet; Alanazi, Abdullah; Kushniruk, Andre W; Borycki, Elizabeth M

    2017-01-01

    Much has been written on the benefits of big data for healthcare, such as improving patient outcomes, public health surveillance, and healthcare policy decisions. Over the past five years, Big Data, and the data sciences field in general, has been hyped as the "Holy Grail" for the healthcare industry, promising a more efficient healthcare system with improved healthcare outcomes. More recently, however, healthcare researchers have been exposing the potentially harmful effects Big Data can have on patient care, associating it with increased medical costs, patient mortality, and misguided decision making by clinicians and healthcare policy makers. In this paper, we review current Big Data trends with a specific focus on the inadvertent negative impacts that Big Data could have on healthcare in general and on patient and clinical care in particular. Our study results show that although Big Data is built up to be the "Holy Grail" for healthcare, small-data techniques using traditional statistical methods are, in many cases, more accurate and can lead to better healthcare outcomes than Big Data methods. In sum, Big Data may cause more problems for the healthcare industry than it solves; in short, when it comes to the use of data in healthcare, "size isn't everything."

  6. Advances in Materials Research: An Internship at Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Barrios, Elizabeth A.; Roberson, Luke B.

    2011-01-01

    My time at Kennedy Space Center was spent immersing myself in research performed in the Materials Science Division of the Engineering Directorate. My Chemical Engineering background allowed me to assist in many different projects, ranging from tensile testing of composite materials to making tape via an extrusion process. However, I spent the majority of my time on the following three projects: (1) testing three different materials to determine their antimicrobial properties; (2) fabricating and analyzing hydrogen-sensing tapes that were placed at the launch pad for the STS-133 launch; and (3) researching molten regolith electrolysis at KSC to prepare for my summer internship at MSFC on a closely related topic. This paper aims to explain, in detail, what I have learned from these three main projects. It explains why this research is happening and what we are currently doing to resolve the issues. It also explains how the hard work and experience I have gained as an intern have provided me with the next big step toward my career at NASA.

  7. Visions 2025 and Linkage to NEXT

    NASA Technical Reports Server (NTRS)

    Wiscombe, W.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    This talk will describe the progress to date on creating a science-driven vision for the NASA Earth Science Enterprise (ESE) in the post-2010 period. This effort began in the Fall of 2001 by organizing five science workgroups with representatives from NASA, academia and other agencies: Long-Term Climate, Medium-Term Climate, Extreme Weather, Biosphere & Ecosystems, and Solid Earth, Ice Sheets, & Sea Level. Each workgroup was directed to scope out one Big Question, including not just the science but the observational and modeling requirements, the information system requirements, and the applications and benefits to society. This first set of five Big Questions is now in hand and has been presented to the ESE Director. It includes: water resources, intraseasonal predictability, tropical cyclogenesis, invasive species, and sea level. Each of these topics will be discussed briefly. How this effort fits into the NEXT vision exercise and into Administrator O'Keefe's new vision for NASA will also be discussed.

  8. Development of Distributed Research Center for analysis of regional climatic and environmental changes

    NASA Astrophysics Data System (ADS)

    Gordov, E.; Shiklomanov, A.; Okladnikov, I.; Prusevich, A.; Titov, A.

    2016-11-01

    We present an approach and first results of a collaborative project being carried out by a joint team of researchers from the Institute of Monitoring of Climatic and Ecological Systems, Russia, and the Earth Systems Research Center, UNH, USA. Its main objective is the development of a hardware and software platform prototype of a Distributed Research Center (DRC) for monitoring and projecting regional climatic and environmental changes in the Northern extratropical areas. The DRC should provide specialists working in climate-related sciences and decision-makers with accurate and detailed climatic characteristics for the selected area, and with reliable and affordable tools for their in-depth statistical analysis and studies of the effects of climate change. Within the framework of the project, new approaches to cloud processing and analysis of the large geospatial datasets (big geospatial data) inherent to climate change studies are being developed and deployed on the technical platforms of both institutions. We discuss the state of the art in this domain, describe the web-based information-computational systems developed by the partners, justify the methods chosen to reach the project goal, and briefly list the results obtained so far.

  9. Crowd-Funded Micro-Grants for Genomics and “Big Data”: An Actionable Idea Connecting Small (Artisan) Science, Infrastructure Science, and Citizen Philanthropy

    PubMed Central

    Badr, Kamal F.; Dove, Edward S.; Endrenyi, Laszlo; Geraci, Christy Jo; Hotez, Peter J.; Milius, Djims; Neves-Pereira, Maria; Pang, Tikki; Rotimi, Charles N.; Sabra, Ramzi; Sarkissian, Christineh N.; Srivastava, Sanjeeva; Tims, Hesther; Zgheib, Nathalie K.; Kickbusch, Ilona

    2013-01-01

    Biomedical science in the 21st century is embedded in, and draws from, a digital commons and “Big Data” created by high-throughput Omics technologies such as genomics. Classic Edisonian metaphors of science and scientists (i.e., “the lone genius” or other narrow definitions of expertise) are ill equipped to harness the vast promises of the 21st century digital commons. Moreover, in medicine and life sciences, experts often under-appreciate the important contributions made by citizen scholars and lead users of innovations to design innovative products and co-create new knowledge. We believe there are a large number of users waiting to be mobilized so as to engage with Big Data as citizen scientists—only if some funding were available. Yet many of these scholars may not meet the meta-criteria used to judge expertise, such as a track record in obtaining large research grants or a traditional academic curriculum vitae. This innovation research article describes a novel idea and action framework: micro-grants, each worth $1000, for genomics and Big Data. Though a relatively small amount at first glance, this far exceeds the annual income of the “bottom one billion”—the 1.4 billion people living below the extreme poverty level defined by the World Bank ($1.25/day). We describe two types of micro-grants. Type 1 micro-grants can be awarded through established funding agencies and philanthropies that create micro-granting programs to fund a broad and highly diverse array of small artisan labs and citizen scholars to connect genomics and Big Data with new models of discovery such as open user innovation. Type 2 micro-grants can be funded by existing or new science observatories and citizen think tanks through crowd-funding mechanisms described herein. 
Type 2 micro-grants would also facilitate global health diplomacy by co-creating crowd-funded micro-granting programs across nation-states in regions facing political and financial instability, while sharing similar disease burdens, therapeutics, and diagnostic needs. We report the creation of ten Type 2 micro-grants for citizen science and artisan labs to be administered by the nonprofit Data-Enabled Life Sciences Alliance International (DELSA Global, Seattle). Our hope is that these micro-grants will spur novel forms of disruptive innovation and genomics translation by artisan scientists and citizen scholars alike. We conclude with a neglected voice from the global health frontlines, the American University of Iraq in Sulaimani, and suggest that many similar global regions are now poised for micro-grant enabled collective innovation to harness the 21st century digital commons. PMID:23574338

  10. Crowd-funded micro-grants for genomics and "big data": an actionable idea connecting small (artisan) science, infrastructure science, and citizen philanthropy.

    PubMed

    Özdemir, Vural; Badr, Kamal F; Dove, Edward S; Endrenyi, Laszlo; Geraci, Christy Jo; Hotez, Peter J; Milius, Djims; Neves-Pereira, Maria; Pang, Tikki; Rotimi, Charles N; Sabra, Ramzi; Sarkissian, Christineh N; Srivastava, Sanjeeva; Tims, Hesther; Zgheib, Nathalie K; Kickbusch, Ilona

    2013-04-01

    Biomedical science in the 21(st) century is embedded in, and draws from, a digital commons and "Big Data" created by high-throughput Omics technologies such as genomics. Classic Edisonian metaphors of science and scientists (i.e., "the lone genius" or other narrow definitions of expertise) are ill equipped to harness the vast promises of the 21(st) century digital commons. Moreover, in medicine and life sciences, experts often under-appreciate the important contributions made by citizen scholars and lead users of innovations to design innovative products and co-create new knowledge. We believe there are a large number of users waiting to be mobilized so as to engage with Big Data as citizen scientists-only if some funding were available. Yet many of these scholars may not meet the meta-criteria used to judge expertise, such as a track record in obtaining large research grants or a traditional academic curriculum vitae. This innovation research article describes a novel idea and action framework: micro-grants, each worth $1000, for genomics and Big Data. Though a relatively small amount at first glance, this far exceeds the annual income of the "bottom one billion"-the 1.4 billion people living below the extreme poverty level defined by the World Bank ($1.25/day). We describe two types of micro-grants. Type 1 micro-grants can be awarded through established funding agencies and philanthropies that create micro-granting programs to fund a broad and highly diverse array of small artisan labs and citizen scholars to connect genomics and Big Data with new models of discovery such as open user innovation. Type 2 micro-grants can be funded by existing or new science observatories and citizen think tanks through crowd-funding mechanisms described herein. 
Type 2 micro-grants would also facilitate global health diplomacy by co-creating crowd-funded micro-granting programs across nation-states in regions facing political and financial instability, while sharing similar disease burdens, therapeutics, and diagnostic needs. We report the creation of ten Type 2 micro-grants for citizen science and artisan labs to be administered by the nonprofit Data-Enabled Life Sciences Alliance International (DELSA Global, Seattle). Our hope is that these micro-grants will spur novel forms of disruptive innovation and genomics translation by artisan scientists and citizen scholars alike. We conclude with a neglected voice from the global health frontlines, the American University of Iraq in Sulaimani, and suggest that many similar global regions are now poised for micro-grant enabled collective innovation to harness the 21st century digital commons.

  11. Future Sky Surveys: New Discovery Frontiers

    NASA Astrophysics Data System (ADS)

    Tyson, J. Anthony; Borne, Kirk D.

    2012-03-01

    Driven by the availability of new instrumentation, there has been an evolution in astronomical science toward comprehensive investigations of new phenomena. Major advances in our understanding of the Universe over the history of astronomy have often arisen from dramatic improvements in our capability to observe the sky to greater depth, in previously unexplored wavebands, with higher precision, or with improved spatial, spectral, or temporal resolution. Substantial progress in the important scientific problems of the next decade (determining the nature of dark energy and dark matter, studying the evolution of galaxies and the structure of our own Milky Way, opening up the time domain to discover faint variable objects, and mapping both the inner and outer Solar System) can be achieved through the application of advanced data mining methods and machine learning algorithms operating on the numerous large astronomical databases that will be generated from a variety of revolutionary future sky surveys. Over the next decade, astronomy will irrevocably enter the era of big surveys and of really big telescopes. New sky surveys (some of which will produce petabyte-scale data collections) will begin their operations, and one or more very large telescopes (ELTs = Extremely Large Telescopes) will enter the construction phase. These programs and facilities will generate a remarkable wealth of data of high complexity, endowed with enormous scientific knowledge discovery potential. New parameter spaces will be opened, in multiple wavelength domains as well as the time domain, across wide areas of the sky, and down to unprecedented faint source flux limits. 
The synergies of grand facilities, massive data collections, and advanced machine learning algorithms will come together to enable discoveries within most areas of astronomical science, including Solar System, exo-planets, star formation, stellar populations, stellar death, galaxy assembly, galaxy evolution, quasar evolution, and cosmology. Current and future sky surveys, comprising an alphabet soup of project names (e.g., Pan-STARRS, WISE, Kepler, DES, VST, VISTA, GAIA, EUCLID, SKA, LSST, and WFIRST; some of which are discussed in Chapters 17, 18, and 20), will contribute to the exponential explosion of complex data in astronomy. The scientific goals of these projects are as monumental as the programs themselves. The core scientific output of all of these will be their scientific data collection. Consequently, data mining and machine learning algorithms and specialists will become a common component of future astronomical research with these facilities. This synergistic combination and collaboration among multiple disciplines are essential in order to maximize the scientific discovery potential, the science output, the research efficiency, and the success of these projects.

  12. 76 FR 59394 - Big Eddy-Knight Transmission Project

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-26

    ...: Bonneville Power Administration (BPA), Department of Energy (DOE). ACTION: Notice of Availability of Record... that BPA has received by increasing BPA's 500-kV transmission capability to move power from the east...-kilovolt (kV) transmission line and ancillary facilities between BPA's existing Big Eddy Substation in The...

  13. CyberConnect: Use the Internet with Big6[R] Skills To Achieve Standards.

    ERIC Educational Resources Information Center

    Murray, Janet

    2003-01-01

    Describes the use of Big6 strategies in guiding student research projects as part of a cooperative program between teachers and the school librarian. Topics include information seeking strategies; evaluating information sources; locating information using search engines; analyzing information sources; and achieving information literacy and…

  14. Publications - STATEMAP Project | Alaska Division of Geological &

    Science.gov Websites

., 2008, Surficial-geologic map of the Salcha River-Pogo area, Big Delta Quadrangle, Alaska: Alaska Division of Geological …; Engineering-geologic map, Alaska Highway corridor, Delta Junction to Dot Lake, Alaska: Alaska Division of …

  15. The Big Splat, or How Our Moon Came to Be

    NASA Astrophysics Data System (ADS)

    MacKenzie, Dana

    2003-03-01

The first popular book to explain the dramatic theory behind the Moon's genesis. This lively science history relates one of the great recent breakthroughs in planetary astronomy: a successful theory of the birth of the Moon. Science journalist Dana Mackenzie traces the evolution of this theory, one little known outside the scientific community: a Mars-sized object collided with Earth some four billion years ago, and the remains of this colossal explosion, the Big Splat, came together to form the Moon. Beginning with notions of the Moon in ancient cosmologies, Mackenzie relates the fascinating history of lunar speculation, moving from Galileo and Kepler to George Darwin (son of Charles) and the Apollo astronauts, whose trips to the lunar surface helped solve one of the most enigmatic mysteries of the night sky: who hung the Moon? Dana Mackenzie (Santa Cruz, CA) is a freelance science journalist. His articles have appeared in such magazines as Science, Discover, American Scientist, The Sciences, and New Scientist.

  16. Take One Boat: from offshore science to onshore art

    NASA Astrophysics Data System (ADS)

    Cotterill, C.

    2017-12-01

The International Ocean Discovery Program (IODP) is a collaborative programme that works to explore the oceans and the rocks beneath them. Working from shallow to deep waters, and in ice-covered to more tropical areas, scientists work together to sample ocean sediments and rocks, and install subsea observatories, in order to investigate our planet's dynamic history. The European Consortium for Ocean Research Drilling (ECORD) is one arm of IODP, and its Education and Outreach Task Force is investigating ways of taking education and outreach further: how can we convey the excitement of this program to others and inspire careers in STEM subjects? Cape Farewell is a think/do tank that gathers artists, designers, filmmakers and writers to interact with scientists and find ways to address climate change. From the creation of internationally touring artworks to films and novels, Cape Farewell continues to educate, engage and inspire. For 3 years the author was involved in Cape Farewell not only as a research scientist, but also as a mentor within the educational programme. Over the course of two expeditions, students were invited to design both a science research project and an accompanying arts project that investigated climate change in this fragile environment, replicating the model used for professional scientists and artists. The long-term aim of the project was to support peer-to-peer learning, with students working as youth ambassadors within their schools and communities. With outputs from this style of engagement now including digital artwork exhibitions, a multi-disciplinary arts school, online resources and the initiation of a youth climate change summit, this talk investigates what lessons can be learnt from this dynamic combination of arts and science, to develop a programme that takes just one boat and makes a big change in how we communicate science.
"The art the students have been producing has been inspired by the science they have learnt, what they experienced during the voyage and their own narratives of being in the Arctic. Unlike school, boundaries between subjects have not been important. Their learning was experiential and in many cases the voyage was a life changing experience" Subathra Subramaniam, Choreographer and science teacher

  17. Big Data: Next-Generation Machines for Big Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hack, James J.; Papka, Michael E.

Addressing the scientific grand challenges identified by the US Department of Energy’s (DOE’s) Office of Science’s programs alone demands a total leadership-class computing capability of 150 to 400 Pflops by the end of this decade. The successors to three of the DOE’s most powerful leadership-class machines are set to arrive in 2017 and 2018—the products of the Collaboration Oak Ridge Argonne Livermore (CORAL) initiative, a national laboratory–industry design/build approach to engineering next-generation petascale computers for grand challenge science. These mission-critical machines will enable discoveries in key scientific fields such as energy, biotechnology, nanotechnology, materials science, and high-performance computing, and serve as a milestone on the path to deploying exascale computing capabilities.

  18. Applying science and mathematics to big data for smarter buildings.

    PubMed

    Lee, Young M; An, Lianjun; Liu, Fei; Horesh, Raya; Chae, Young Tae; Zhang, Rui

    2013-08-01

    Many buildings are now collecting a large amount of data on operations, energy consumption, and activities through systems such as a building management system (BMS), sensors, and meters (e.g., submeters and smart meters). However, the majority of data are not utilized and are thrown away. Science and mathematics can play an important role in utilizing these big data and accurately assessing how energy is consumed in buildings and what can be done to save energy, make buildings energy efficient, and reduce greenhouse gas (GHG) emissions. This paper discusses an analytical tool that has been developed to assist building owners, facility managers, operators, and tenants of buildings in assessing, benchmarking, diagnosing, tracking, forecasting, and simulating energy consumption in building portfolios. © 2013 New York Academy of Sciences.
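The kind of assessment and forecasting such a tool performs can be illustrated with a classical degree-day regression, a standard technique in building energy analysis. This sketch is not the authors' tool; the data, balance-point temperature, and parameter values below are simulated purely for illustration:

```python
import numpy as np

# Hypothetical daily data for one building: outdoor temperature (deg C)
# and metered energy use (kWh). In practice these would come from a BMS
# or smart meter; here they are simulated for illustration.
rng = np.random.default_rng(0)
temp = rng.uniform(-5, 30, size=365)
base_temp = 18.0                            # assumed balance-point temperature
hdd = np.maximum(base_temp - temp, 0.0)     # heating degree-days
cdd = np.maximum(temp - base_temp, 0.0)     # cooling degree-days
energy = 120 + 9.0 * hdd + 6.0 * cdd + rng.normal(0, 5, size=365)

# Benchmark model: baseload + heating slope + cooling slope,
# fitted by ordinary least squares.
X = np.column_stack([np.ones_like(temp), hdd, cdd])
coef, *_ = np.linalg.lstsq(X, energy, rcond=None)
baseload, heat_slope, cool_slope = coef

# Forecast consumption for a 10 deg C day.
t = 10.0
pred = (baseload
        + heat_slope * max(base_temp - t, 0.0)
        + cool_slope * max(t - base_temp, 0.0))
print(f"baseload={baseload:.1f} kWh/day, forecast at {t} deg C: {pred:.1f} kWh")
```

The fitted slopes separate weather-driven consumption from baseload, which is what supports benchmarking across a building portfolio and diagnosing drift when the residuals of the model start to grow.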

  19. Focal Plant Observations as a Standardised Method for Pollinator Monitoring: Opportunities and Limitations for Mass Participation Citizen Science

    PubMed Central

    Roy, Helen E.; Baxter, Elizabeth; Saunders, Aoine; Pocock, Michael J. O.

    2016-01-01

Background Recently there has been increasing focus on monitoring pollinating insects, due to concerns about their declines, and interest in the role of volunteers in monitoring pollinators, particularly bumblebees, via citizen science. Methodology / Principal Findings The Big Bumblebee Discovery was a one-year citizen science project run by a partnership of EDF Energy, the British Science Association and the Centre for Ecology & Hydrology which sought to assess the influence of the landscape at multiple scales on the diversity and abundance of bumblebees. Timed counts of bumblebees (Bombus spp.; identified to six colour groups) visiting focal plants of lavender (Lavandula spp.) were carried out by about 13 000 primary school children (7–11 years old) from over 4000 schools across the UK. 3948 reports were received, totalling 26 868 bumblebees. We found that while the wider landscape type had no significant effect on reported bumblebee abundance, the local proximity to flowers had a significant effect (fewer bumblebees where other flowers were reported to be >5 m away from the focal plant). However, the rate of mis-identification, revealed by photographs uploaded by participants and a photo-based quiz, was high. Conclusions / Significance Our citizen science results support recent research on the importance of local floral resources to pollinator abundance. Timed counts of insects visiting a lure plant are potentially an effective approach for standardised pollinator monitoring, engaging a large number of participants with a simple protocol. However, the relatively high rate of mis-identification (compared to reports from previous pollinator citizen science projects) highlights the importance of investing in resources to train volunteers.
Also, to be a scientifically valid method of enquiry, citizen science data need to be of sufficiently high quality; receiving supporting evidence (such as photographs) would allow quality to be tested and records to be verified. PMID:26985824

  20. Citizen Science, Crowdsourcing and Big Data: A Scientific and Social Framework for Natural Resources and Environments

    NASA Astrophysics Data System (ADS)

    Glynn, P. D.; Jones, J. W.; Liu, S. B.; Shapiro, C. D.; Jenter, H. L.; Hogan, D. M.; Govoni, D. L.; Poore, B. S.

    2014-12-01

    We describe a conceptual framework for Citizen Science that can be applied to improve the understanding and management of natural resources and environments. For us, Citizen Science represents an engagement from members of the public, usually volunteers, in collaboration with paid professionals and technical experts to observe and understand natural resources and environments for the benefit of science and society. Our conceptual framework for Citizen Science includes crowdsourcing of observations (or sampling). It considers a wide range of activities, including volunteer and professional monitoring (e.g. weather and climate variables, water availability and quality, phenology, biota, image capture and remote sensing), as well as joint fact finding and analyses, and participatory mapping and modeling. Spatial distribution and temporal dynamics of the biophysical processes that control natural resources and environments are taken into account within this conceptual framework, as are the availability, scaling and diversity of tools and efforts that are needed to properly describe these biophysical processes. Opportunities are sought within the framework to properly describe, QA/QC, archive, and make readily accessible, the large amounts of information and traceable knowledge required to better understand and manage natural resources and environments. The framework also considers human motivational needs, primarily through a modern version of Maslow's hierarchy of needs. We examine several USGS-based Citizen Science efforts within the context of our framework, including the project called "iCoast - Did the Coast Change?", to understand the utility of the framework, its costs and benefits, and to offer concrete examples of how to expand and sustain specific projects. We make some recommendations that could aid its implementation on a national or larger scale. 
For example, implementation might be facilitated (1) through greater engagement of paid professionals, and (2) through the involvement of integrating entities, including institutions of learning and agencies with broad science responsibilities.

  1. Research on Durability of Big Recycled Aggregate Self-Compacting Concrete Beam

    NASA Astrophysics Data System (ADS)

    Gao, Shuai; Liu, Xuliang; Li, Jing; Li, Juan; Wang, Chang; Zheng, Jinkai

    2018-03-01

Deflection and crack width are the most important durability indexes, which play a pivotal role in the popularization and application of Big Recycled Aggregate Self-Compacting Concrete technology. In this research, a comparative study of a Big Recycled Aggregate Self-Compacting Concrete beam and an ordinary concrete beam was conducted by measuring the deflection and crack width indexes. The results show that both kinds of concrete beams have almost equal mid-span deflection values and differ only slightly in maximum crack width. This indicates that the Big Recycled Aggregate Self-Compacting Concrete beam will be a good substitute for the ordinary concrete beam in some less critical structural projects.
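For context on the deflection index, the classical elastic mid-span deflection of a simply supported beam of span $L$ under a uniformly distributed load $w$ (a textbook result, not a formula taken from this study) is

```latex
\delta_{\max} = \frac{5\,w\,L^{4}}{384\,E\,I}
```

where $E$ is the elastic modulus of the concrete and $I$ is the second moment of area of the section. Nearly equal measured mid-span deflections under the same load and span therefore suggest that the recycled-aggregate beam attains a comparable effective stiffness $EI$ to the ordinary beam.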

  2. Review of EuCARD project on accelerator infrastructure in Europe

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2013-01-01

The aim of big infrastructural and research programs (like the pan-European Framework Programs), and of the individual projects realized inside these programs, is to structure the European Research Area (ERA) in such a way that it is competitive with the leaders of the world. One of these projects is EuCARD (European Coordination of Accelerator Research and Development), whose aim is to structure and modernize accelerator research infrastructure (including accelerators for big free-electron-laser machines). This article presents the periodic development of EuCARD which took place between the annual meeting in April 2012 in Warsaw and the SC meeting in Uppsala in December 2012. The background of all these efforts is the achievements of the LHC machine and its associated detectors in the race for new physics. The LHC machine works in the p-p, Pb-p and Pb-Pb regimes (protons and lead ions). Recently, the discovery by the LHC of a Higgs-like boson has started vivid debates on the further potential of this machine and its future. The periodic EuCARD conferences, workshops and meetings concern the building of research infrastructure, including advanced photonic and electronic systems for servicing large high-energy physics experiments. A few basic groups of such systems are debated: measurement-control networks of large geometrical extent, multichannel systems for the acquisition of large amounts of metrological data, and precision photonic networks for the distribution of reference time, frequency and phase. The aim of the discussion is not only to summarize the current status but also to make plans and prepare practically for building new infrastructures. Accelerator science and technology is one of the key enablers of developments in particle physics and photon physics, as well as of applications in medicine and industry. Accelerator technology is intensely developed in all developed nations and regions of the world.
The EuCARD project contains a lot of subjects related directly and indirectly to photon physics and photonics, as well as optoelectronics, electronics and integration of these with large research infrastructure.

  3. Gender Differences in Achievement in Calculating Reacting Masses from Chemical Equations among Secondary School Students in Makurdi Metropols

    ERIC Educational Resources Information Center

    Eriba, Joel O.; Ande, Sesugh

    2006-01-01

Over the years, gender inequality in science achievement has persisted among senior secondary school students the world over. It is observed that males score higher than females in science and science-related examinations. This has created a big psychological alienation or depression in the minds of female students towards science and…

  4. Small Bodies, Big Concepts: Bringing Visual Analysis into the Middle School Classroom

    NASA Astrophysics Data System (ADS)

    Cobb, W. H.; Lebofsky, L. A.; Ristvey, J. D.; Buxner, S.; Weeks, S.; Zolensky, M. E.

    2012-03-01

    Multi-disciplinary PD model digs into high-end planetary science backed by a pedagogical framework, Designing Effective Science Instruction. NASA activities are sequenced to promote visual analysis of emerging data from Discovery Program missions.

  5. The Big Science Questions About Mercury's Ice-Bearing Polar Deposits After MESSENGER

    NASA Astrophysics Data System (ADS)

    Chabot, N. L.; Lawrence, D. J.

    2018-05-01

    Mercury’s polar deposits provide many well-characterized locations that are known to have large expanses of exposed water ice and/or other volatile materials — presenting unique opportunities to address fundamental science questions.

  6. Accelerator boom hones China's engineering expertise

    NASA Astrophysics Data System (ADS)

    Normile, Dennis

    2018-02-01

    In raising the curtain on the China Spallation Neutron Source, China has joined just four other nations in having mastered the technology of accelerating and controlling beams of protons. The $277 million facility, set to open to users this spring in Dongguan, is expected to yield big dividends in materials science, chemistry, and biology. More world class machines are on the way, as China this year starts construction on four other major accelerator facilities. The building boom is prompting a scramble to find enough engineers and technicians to finish the projects. But if they all come off as planned, the facilities would position China to tackle the next global megaproject: a giant accelerator that would pick up where Europe's Large Hadron Collider leaves off.

  7. Big Data from Europe's Natural Science Collections through DiSSCo

    NASA Astrophysics Data System (ADS)

    Addink, Wouter; Koureas, Dimitris; Casino, Ana

    2017-04-01

DiSSCo, a Distributed System of Scientific Collections, will be a Research Infrastructure delivering big data describing the history of Planet Earth. Approximately 1.5 billion biological and geological specimens, representing the last 300 years of scientific study of the natural world, reside in collections all over Europe. These span 4.5 billion years of history, from the formation of the solar system to the present day. In the European landscape of environmental Research Infrastructures, different projects and landmarks describe services that aim at aggregating, monitoring, analysing and modelling geo-diversity information. The effectiveness of these services, however, depends on the quality and availability of primary reference data that today are scattered and incomplete. DiSSCo provides the required bio-geographical, taxonomic and species-trait data at the level of precision and accuracy required to enable and speed up research on the seven grand societal challenges that are priorities of the Europe 2020 strategy. DiSSCo enables better connections between collection data and observations in biodiversity observation networks, such as EU BON and GEO BON. This supports research areas like long-term ecological research, for which continuity over the long term is a strength of biological collections.

  8. Big Data and Machine Learning in Plastic Surgery: A New Frontier in Surgical Innovation.

    PubMed

    Kanevsky, Jonathan; Corban, Jason; Gaster, Richard; Kanevsky, Ari; Lin, Samuel; Gilardino, Mirko

    2016-05-01

Medical decision-making is increasingly based on quantifiable data. From the moment patients come into contact with the health care system, their entire medical history is recorded electronically. Whether a patient is in the operating room or on the hospital ward, technological advancement has facilitated the expedient and reliable measurement of clinically relevant health metrics, all in an effort to guide care and ensure the best possible clinical outcomes. However, as the volume and complexity of biomedical data grow, it becomes challenging to effectively process "big data" using conventional techniques. Physicians and scientists must be prepared to look beyond classic methods of data processing to extract clinically relevant information. The purpose of this article is to introduce the modern plastic surgeon to machine learning and the computational interpretation of large data sets. Machine learning, a subfield of artificial intelligence, can address clinically relevant problems in several domains of plastic surgery, including burn surgery; microsurgery; and craniofacial, peripheral nerve, and aesthetic surgery. This article provides a brief introduction to current research and suggests future projects that will allow plastic surgeons to explore this new frontier of surgical science.

  9. Analyzing Big Data in Psychology: A Split/Analyze/Meta-Analyze Approach

    PubMed Central

    Cheung, Mike W.-L.; Jak, Suzanne

    2016-01-01

    Big data is a field that has traditionally been dominated by disciplines such as computer science and business, where mainly data-driven analyses have been performed. Psychology, a discipline in which a strong emphasis is placed on behavioral theories and empirical research, has the potential to contribute greatly to the big data movement. However, one challenge to psychologists—and probably the most crucial one—is that most researchers may not have the necessary programming and computational skills to analyze big data. In this study we argue that psychologists can also conduct big data research and that, rather than trying to acquire new programming and computational skills, they should focus on their strengths, such as performing psychometric analyses and testing theories using multivariate analyses to explain phenomena. We propose a split/analyze/meta-analyze approach that allows psychologists to easily analyze big data. Two real datasets are used to demonstrate the proposed procedures in R. A new research agenda related to the analysis of big data in psychology is outlined at the end of the study. PMID:27242639
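The split/analyze/meta-analyze idea can be sketched outside R as well. The following Python/NumPy example is an illustration of the general approach, not the authors' code; the simulated data, shard count, and variable names are arbitrary. It splits a large dataset into shards, estimates a correlation within each shard, and pools the estimates with a fixed-effect inverse-variance meta-analysis on Fisher's z scale:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated "big" dataset: predictor x and outcome y with a true
# correlation of about 0.447 (0.5 / sqrt(1.25)).
n = 1_000_000
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)

# 1. Split: partition the data into manageable shards.
k = 10
x_parts = np.array_split(x, k)
y_parts = np.array_split(y, k)

# 2. Analyze: estimate the correlation within each shard and convert it
#    to Fisher's z, whose sampling variance is approximately 1/(n_i - 3).
zs, vs = [], []
for xi, yi in zip(x_parts, y_parts):
    r = np.corrcoef(xi, yi)[0, 1]
    zs.append(np.arctanh(r))
    vs.append(1.0 / (len(xi) - 3))
zs, vs = np.array(zs), np.array(vs)

# 3. Meta-analyze: fixed-effect inverse-variance weighted average.
w = 1.0 / vs
z_pooled = np.sum(w * zs) / np.sum(w)
r_pooled = np.tanh(z_pooled)
se = np.sqrt(1.0 / np.sum(w))
print(f"pooled r = {r_pooled:.3f} (z-scale SE = {se:.5f})")
```

Because each shard is analyzed independently, the per-shard step parallelizes trivially, which is part of what makes the approach attractive for data too large to analyze in a single pass.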

  11. IBM Watson: How Cognitive Computing Can Be Applied to Big Data Challenges in Life Sciences Research.

    PubMed

    Chen, Ying; Elenee Argentinis, J D; Weber, Griff

    2016-04-01

    Life sciences researchers are under pressure to innovate faster than ever. Big data offer the promise of unlocking novel insights and accelerating breakthroughs. Ironically, although more data are available than ever, only a fraction is being integrated, understood, and analyzed. The challenge lies in harnessing volumes of data, integrating the data from hundreds of sources, and understanding their various formats. New technologies such as cognitive computing offer promise for addressing this challenge because cognitive solutions are specifically designed to integrate and analyze big datasets. Cognitive solutions can understand different types of data such as lab values in a structured database or the text of a scientific publication. Cognitive solutions are trained to understand technical, industry-specific content and use advanced reasoning, predictive modeling, and machine learning techniques to advance research faster. Watson, a cognitive computing technology, has been configured to support life sciences research. This version of Watson includes medical literature, patents, genomics, and chemical and pharmacological data that researchers would typically use in their work. Watson has also been developed with specific comprehension of scientific terminology so it can make novel connections in millions of pages of text. Watson has been applied to a few pilot studies in the areas of drug target identification and drug repurposing. The pilot results suggest that Watson can accelerate identification of novel drug candidates and novel drug targets by harnessing the potential of big data. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  12. 'Big data' in mental health research: current status and emerging possibilities.

    PubMed

    Stewart, Robert; Davis, Katrina

    2016-08-01

    'Big data' are accumulating in a multitude of domains and offer novel opportunities for research. The role of these resources in mental health investigations remains relatively unexplored, although a number of datasets are in use and supporting a range of projects. We sought to review big data resources and their use in mental health research to characterise applications to date and consider directions for innovation in future. A narrative review. Clear disparities were evident in geographic regions covered and in the disorders and interventions receiving most attention. We discuss the strengths and weaknesses of the use of different types of data and the challenges of big data in general. Current research output from big data is still predominantly determined by the information and resources available and there is a need to reverse the situation so that big data platforms are more driven by the needs of clinical services and service users.

  13. Examining the Dynamics of Managing Information Systems Development Projects: A Control Loss Perspective

    ERIC Educational Resources Information Center

    Narayanaswamy, Ravi

    2009-01-01

The failure rate of information systems development (ISD) projects continues to pose a big challenge for organizations. The success rate of ISD projects is less than forty percent. Factors such as disagreements and miscommunications among project managers and team members, poor monitoring, and intermediary problems contribute to project failure.…

  14. The Axion Dark Matter Experiment: Big Science with a (relatively) Small Team

    NASA Astrophysics Data System (ADS)

    Carosi, Gianpaolo

    2016-03-01

The idea of the solitary physicist tinkering alone in a lab was my image of how science was done growing up (mostly influenced by popular culture). Of course, this is not generally how experimental physics is done nowadays, with experiments at the LHC now involving thousands of scientists. In this talk I will describe my experience in a relatively modest project, the Axion Dark Matter eXperiment (ADMX), which involves only a few dozen scientists at various universities and national labs. I will outline ADMX's humble beginnings at Lawrence Livermore National Laboratory (LLNL), where it began in the mid-1990s, and describe how the collaboration has evolved and grown throughout the years as we pursue our elusive quarry: the dark-matter axion. Supported by DOE Grants DE-FG02-97ER41029, DE-FG02-96ER40956, DE-AC52-07NA27344, DE-AC03-76SF00098, and the Livermore LDRD program.

  15. First a tragedy, then farce

    NASA Astrophysics Data System (ADS)

    Foster, Brian

    2008-09-01

    It is impossible to think about the problems in the UK over the last 10 months arising from the £80m shortfall in the budget of the Science and Technology Facilities Council (STFC) without recalling Marx's famous aphorism: "History repeats itself, first as tragedy, then as farce." Certainly the repetition of a funding crisis in UK particle physics and astronomy is hardly unexpected; they seem to occur every decade or so with unwelcome regularity. The consequent loss of morale, jobs and opportunities in the UK for the brightest young people to pursue their dreams in what is widely acknowledged to be world-class science is a tragedy. What perhaps marks the uniqueness of the funding crisis this time round is the level of farce. The sums that did not add up; the consultations without interlocutors; and the truculent and damaging statements about withdrawal from the Gemini telescopes based in Hawaii and Chile, and the International Linear Collider (ILC) - the next big particle-physics project after the Large Hadron Collider (LHC) at CERN.

  16. Holographic storage of three-dimensional image and data using photopolymer and polymer dispersed liquid crystal films

    NASA Astrophysics Data System (ADS)

    Gao, Hong-Yue; Liu, Pan; Zeng, Chao; Yao, Qiu-Xiang; Zheng, Zhiqiang; Liu, Jicheng; Zheng, Huadong; Yu, Ying-Jie; Zeng, Zhen-Xiang; Sun, Tao

    2016-09-01

    We present holographic storage of three-dimensional (3D) images and data in a photopolymer film without any applied electric field. Its absorption and diffraction efficiency are measured, and a reflective analog hologram of a real object and an image of digital information are recorded in the films. The photopolymer is compared with polymer-dispersed liquid crystals as a holographic material. Although the holographic diffraction efficiency of the former is slightly lower than that of the latter, this work demonstrates that the photopolymer is more suitable for analog holograms and permanent big-data storage because of its high definition and because it needs no high-voltage electric field. Therefore, our study proposes a potential holographic storage material for application in large-size static 3D holographic displays, including analog hologram displays, digital hologram prints, and holographic disks. Project supported by the National Natural Science Foundation of China (Grant Nos. 11474194, 11004037, and 61101176) and the Natural Science Foundation of Shanghai, China (Grant No. 14ZR1415500).

  17. Mapping our genes: The genome projects: How big, how fast

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    none,

    For the past 2 years, scientific and technical journals in biology and medicine have extensively covered a debate about whether and how to determine the function and order of human genes on human chromosomes, and when to determine the sequence of molecular building blocks that comprise DNA in those chromosomes. In 1987, these issues rose to become part of the public agenda. The debate involves science, technology, and politics. Congress is responsible for "writing the rules" of what various federal agencies do and for funding their work. This report surveys the points made so far in the debate, focusing on those that most directly influence the policy options facing the US Congress. Congressional interest focused on how to assess the rationales for conducting human genome projects, how to fund human genome projects (at what level and through which mechanisms), how to coordinate the scientific and technical programs of the several federal agencies and private interests already supporting various genome projects, and how to strike a balance regarding the impact of genome projects on international scientific cooperation and international economic competition in biotechnology. OTA prepared this report with the assistance of several hundred experts throughout the world. 342 refs., 26 figs., 11 tabs.

  18. What’s So Different about Big Data? A Primer for Clinicians Trained to Think Epidemiologically

    PubMed Central

    Liu, Vincent

    2014-01-01

    The Big Data movement in computer science has brought dramatic changes in what counts as data, how those data are analyzed, and what can be done with those data. Although increasingly pervasive in the business world, it has only recently begun to influence clinical research and practice. As Big Data draws from different intellectual traditions than clinical epidemiology, the ideas may be less familiar to practicing clinicians. There is an increasing role of Big Data in health care, and it has tremendous potential. This Demystifying Data Seminar identifies four main strands in Big Data relevant to health care. The first is the inclusion of many new kinds of data elements into clinical research and operations, in a volume not previously routinely used. Second, Big Data asks different kinds of questions of data and emphasizes the usefulness of analyses that are explicitly associational but not causal. Third, Big Data brings new analytic approaches to bear on these questions. And fourth, Big Data embodies a new set of aspirations for a breaking down of distinctions between research data and operational data and their merging into a continuously learning health system. PMID:25102315

  19. What's so different about big data? A primer for clinicians trained to think epidemiologically.

    PubMed

    Iwashyna, Theodore J; Liu, Vincent

    2014-09-01

    The Big Data movement in computer science has brought dramatic changes in what counts as data, how those data are analyzed, and what can be done with those data. Although increasingly pervasive in the business world, it has only recently begun to influence clinical research and practice. As Big Data draws from different intellectual traditions than clinical epidemiology, the ideas may be less familiar to practicing clinicians. There is an increasing role of Big Data in health care, and it has tremendous potential. This Demystifying Data Seminar identifies four main strands in Big Data relevant to health care. The first is the inclusion of many new kinds of data elements into clinical research and operations, in a volume not previously routinely used. Second, Big Data asks different kinds of questions of data and emphasizes the usefulness of analyses that are explicitly associational but not causal. Third, Big Data brings new analytic approaches to bear on these questions. And fourth, Big Data embodies a new set of aspirations for a breaking down of distinctions between research data and operational data and their merging into a continuously learning health system.

  20. Big questions, big science: meeting the challenges of global ecology

    Treesearch

    David Schimel; Michael Keller

    2015-01-01

    Ecologists are increasingly tackling questions that require significant infrastructure, large experiments, networks of observations, and complex data and computation. Key hypotheses in ecology increasingly require more investment, and larger data sets to be tested, than can be collected by a single investigator's or a group of investigators' labs, sustained for longer...

  1. Examining Big-Fish-Little-Pond-Effects across 49 Countries: A Multilevel Latent Variable Modelling Approach

    ERIC Educational Resources Information Center

    Wang, Ze

    2015-01-01

    Using data from the Trends in International Mathematics and Science Study (TIMSS) 2007, this study examined the big-fish-little-pond effects (BFLPEs) in 49 countries. In this study, the effect of math ability on math self-concept was decomposed into within- and between-level components using implicit mean centring and the complex data…

  2. The Next Big Thing - Eric Haseltine

    ScienceCinema

    Eric Haseltine

    2017-12-09

    Eric Haseltine, Haseltine Partners president and former chief of Walt Disney Imagineering, presented "The Next Big Thing" on Sept. 11 at ORNL. He described the four "early warning signs" that a scientific breakthrough is imminent, and then suggested practical ways to turn these insights into breakthrough innovations. Haseltine is a former director of research at the National Security Agency, associate director for science and technology for the director of National Intelligence, executive vice president of Walt Disney Imagineering, and director of engineering for Hughes Aircraft. He has 15 patents in optics, special effects and electronic media, and more than 100 publications in science and technical journals, on the web, and in Discover Magazine.

  3. Big data in biomedicine.

    PubMed

    Costa, Fabricio F

    2014-04-01

    The increasing availability and growth rate of biomedical information, also known as 'big data', provides an opportunity for future personalized medicine programs that will significantly improve patient care. Recent advances in information technology (IT) applied to biomedicine are changing the landscape of privacy and personal information, with patients getting more control of their health information. Conceivably, big data analytics is already impacting health decisions and patient care; however, specific challenges need to be addressed to integrate current discoveries into medical practice. In this article, I will discuss the major breakthroughs achieved in combining omics and clinical health data in terms of their application to personalized medicine. I will also review the challenges associated with using big data in biomedicine and translational science. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Joint Antarctic School Expedition - An International Collaboration for High School Students and Teachers on Antarctic Science

    NASA Astrophysics Data System (ADS)

    Botella, J.; Warburton, J.; Bartholow, S.; Reed, L. F.

    2014-12-01

    The Joint Antarctic School Expedition (JASE) is an international collaboration program between high school students and teachers from the United States and Chile, aimed at providing the skills required for establishing the scientific international collaborations that our globalized world demands, and at developing a new approach to science education. The National Antarctic Programs of Chile and the United States worked together on a pilot program that brought high school students and teachers from both countries to Punta Arenas, Chile, in February 2014. The goals of this project included strengthening the partnership between the two countries and building relationships between future generations of scientists, while developing the students' awareness of global scientific issues and expanding their knowledge of and interest in Antarctica and polar science. A big component of the project involved students sharing their acquired knowledge and experiences with the general public. JASE is based on the successful Chilean Antarctic Science Fair developed by Chile's Antarctic Research Institute. For 10 years, small groups of Chilean students, each mentored by a teacher, have performed experimental or bibliographical Antarctic research. Winning teams are awarded an expedition to the Chilean research station on King George Island. In 2014, the Chileans invited US participation in this program in order to strengthen science ties for upcoming generations. On King George Island, students have hands-on experiences conducting experiments and learning about field research. While the total number of students directly involved in the program is relatively small, the sharing of the experience by students with the general public is a novel approach to science education. Research experiences for students, like JASE, are important as they open new directions for students in science learning, spark science interest, and help increase science knowledge.
We will share experiences from the planning of the pilot program as well as from the expedition itself. We will also share the results of an assessment report prepared by an independent party. Lastly, we will offer recommendations for initiating international science education collaborations. United States participation was funded by the NSF Division of Polar Programs.

  5. Architecture and Programming Models for High Performance Intensive Computation

    DTIC Science & Technology

    2016-06-29

    Applications Systems and Large-Scale-Big-Data & Large-Scale-Big-Computing (DDDAS-LS). ICCS 2015, June 2015. Reykjavík, Iceland. 2. Bo YT, Wang P, Guo ZL...The Mahali project," Communications Magazine, vol. 52, pp. 111–133, Aug 2014. DISTRIBUTION A: Distribution approved for public release.

  6. 75 FR 19997 - Endangered and Threatened Wildlife and Plants; Permit Applications

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-16

    ... endangered species in the Code of Federal Regulations (CFR) at 50 CFR part 17. Submit your written data... release) Indiana bats, gray bats, Virginia big-eared bats (Corynorhinus townsendii virginianus), Ozark big... temporarily relocate endangered Topeka shiners to protect them from impacts due to in-stream projects such as...

  7. Michael Eisenberg and Robert Berkowitz's Big6[TM] Information Problem-Solving Model.

    ERIC Educational Resources Information Center

    Carey, James O.

    2003-01-01

    Reviews the Big6 information problem-solving model. Highlights include benefits and dangers of the simplicity of the model; theories of instruction; testing of the model; the model as a process for completing research projects; and advice for school library media specialists considering use of the model. (LRW)

  8. A New MI-Based Visualization Aided Validation Index for Mining Big Longitudinal Web Trial Data

    PubMed Central

    Zhang, Zhaoyang; Fang, Hua; Wang, Honggang

    2016-01-01

    Web-delivered clinical trials generate big, complex data. To help untangle the heterogeneity of treatment effects, unsupervised learning methods have been widely applied. However, identifying valid patterns is a priority but challenging issue for these methods. This paper, built upon our previous research on multiple imputation (MI)-based fuzzy clustering and validation, proposes a new MI-based visualization-aided validation index (MIVOOS) to determine the optimal number of clusters for big incomplete longitudinal Web-trial data with inflated zeros. Different from a recently developed fuzzy clustering validation index, MIVOOS uses overlap and separation measures more suitable for Web-trial data and does not depend on the choice of fuzzifier, as the widely used Xie and Beni (XB) index does. By optimizing the view angles of 3-D projections using Sammon mapping, the optimal 2-D projection-guided MIVOOS is obtained to better visualize and verify the patterns in conjunction with trajectory patterns. Compared with XB and VOS, our newly proposed MIVOOS shows its robustness in validating big Web-trial data under different missing-data mechanisms, using real and simulated Web-trial data. PMID:27482473

  9. From the Big Bang to the Nobel Prize and on to James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Mather, John C.

    2008-01-01

    The Big Bang 13.7 billion years ago started the expansion of our piece of the universe, and portions of it stopped expanding and made stars, galaxies, planets, and people. I summarize the history of the universe, and explain how humans have learned about its size, its expansion, and its constituents. The COBE (Cosmic Background Explorer) mission measured the remnant heat radiation from the Big Bang, showed that its color (spectrum) matches the predictions perfectly, and discovered hot and cold spots in the radiation that reveal the primordial density variations that enabled us to exist. My current project, the James Webb Space Telescope (JWST), is the planned successor to the Hubble Space Telescope, and will extend its scientific discoveries to ever greater distances and ever closer to the Big Bang itself. Its infrared capabilities enable it to see inside dust clouds to study the formation of stars and planets, and it may reveal the atmospheric properties of planets around other stars. Planned for launch in 2013, it is an international project led by NASA along with the European and Canadian Space Agencies.

  10. Economics and econophysics in the era of Big Data

    NASA Astrophysics Data System (ADS)

    Cheong, Siew Ann

    2016-12-01

    There is an undeniable disconnect between theory-heavy economics and the real world, and some cross-pollination of ideas with econophysics, which is more balanced between data and models, might help economics along the way to becoming a truly scientific enterprise. With the coming of the era of Big Data, this transformation of economics into a data-driven science is becoming more urgent. In this article, I use the story of Kepler's discovery of his three laws of planetary motion to enlarge the framework of the scientific approach, from one that focuses on experimental sciences, to one that accommodates observational sciences, and further to one that embraces data mining and machine learning. I distinguish between the ontological values of Kepler's Laws vis-à-vis Newton's Laws, and argue that the latter are more fundamental because they are able to explain the former. I then argue that the fundamental laws of economics lie not in mathematical equations, but in models of adaptive economic agents. With this shift in mindset, it becomes possible to think about how interactions between agents can lead to the emergence of multiple stable states and critical transitions, and to complex adaptive policies and regulations that might actually work in the real world. Finally, I discuss how Big Data, exploratory agent-based modeling, and predictive agent-based modeling can come together in a unified framework to make economics a true science.

  11. An unbalanced portfolio.

    PubMed

    Federsel, Hans-Jurgen

    2009-06-01

    An excellent demonstration of how meaningful and valuable conferences devoted to the topic of project and portfolio management in the pharmaceutical industry can be was given at an event organized in Barcelona in September 2008. Over this 2-day meeting, the delegates were updated on the state of the art in this wide-reaching area by speakers representing an array of companies: from small, relatively new players, via mid-sized firms, to established large and big pharmas. One common theme that emerged was the importance of assessing the value of drug projects as correctly as possible, especially under the current financial climate and the many challenges facing the industry. Furthermore, experiences from constructing portfolios with the aim of minimizing risk and maximizing return on investment were shared, alongside mathematical approaches to obtaining the data required for this purpose and accounts of the pleasures and hardships of working in a global context and in partnership constellations. Copyright 2009 Prous Science, S.A.U. or its licensors. All rights reserved.

  12. Measuring the free neutron lifetime to <= 0.3s via the beam method

    NASA Astrophysics Data System (ADS)

    Fomin, Nadia; Mulholland, Jonathan

    2015-04-01

    Neutron beta decay is an archetype for all semi-leptonic charged-current weak processes. A precise value for the neutron lifetime is required for consistency tests of the Standard Model and is needed to predict the primordial 4He abundance from the theory of Big Bang Nucleosynthesis. An effort has begun for an in-beam measurement of the neutron lifetime with a projected <=0.3 s uncertainty. This effort is part of a phased campaign of neutron lifetime measurements based at the NIST Center for Neutron Research, using the Sussex-ILL-NIST technique. Recent advances in neutron fluence measurement techniques, as well as new large-area silicon detector technology, address the two largest sources of uncertainty of in-beam measurements, paving the way for a new measurement. The experimental design and projected uncertainties for the 0.3 s measurement will be discussed. This work is supported by the DOE Office of Science, NIST, and NSF.

  13. SLGRID: spectral synthesis software in the grid

    NASA Astrophysics Data System (ADS)

    Sabater, J.; Sánchez, S.; Verdes-Montenegro, L.

    2011-11-01

    SLGRID (http://www.e-ciencia.es/wiki/index.php/Slgrid) is a pilot project proposed by the e-Science Initiative of Andalusia (eCA) and supported by the Spanish e-Science Network in the frame of the European Grid Initiative (EGI). The aim of the project was to adapt the spectral synthesis software Starlight (Cid-Fernandes et al. 2005) to the Grid infrastructure. Starlight is used to estimate the underlying stellar populations (their ages and metallicities) from an optical spectrum; hence, it is possible to obtain a clean nebular spectrum that can be used to diagnose the presence of an Active Galactic Nucleus (Sabater et al. 2008, 2009). The typically serial execution of the code for big samples of galaxies made it ideal for integration into the Grid. We obtain an improvement in computational time of order N, where N is the number of nodes available in the Grid. In a real case we obtained our results in 3 hours with SLGRID, instead of the 60 days spent using Starlight on a PC. The code has already been ported to the Grid. The first tests were made within the e-CA infrastructure and, later, it was tested and improved with the collaboration of CETA-CIEMAT. The SLGRID project has recently been renewed. In the future it is planned to adapt the code for the reduction of data from Integral Field Units, where each dataset is composed of hundreds of spectra. Electronic version of the poster at http://www.iaa.es/~jsm/SEA2010
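    The order-N speedup reported above follows from the problem being embarrassingly parallel: each galaxy's spectrum is fitted independently, so a sample can be scattered across Grid nodes with no coordination between jobs. A minimal sketch of that pattern, with a hypothetical `fit_spectrum` standing in for a Starlight run (the real SLGRID jobs run on separate Grid nodes, not local worker threads):

```python
from concurrent.futures import ThreadPoolExecutor

def fit_spectrum(spectrum_id):
    # Stand-in for one Starlight run: fit the stellar populations of a
    # single optical spectrum. Each fit is independent of all the others.
    return spectrum_id, sum(range(spectrum_id))  # dummy "result"

def fit_sample(spectrum_ids, n_workers):
    # Because the fits are independent, the sample splits cleanly across
    # n_workers workers (Grid nodes, in SLGRID's case), so wall-clock time
    # drops roughly by a factor of n_workers.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return dict(pool.map(fit_spectrum, spectrum_ids))

results = fit_sample(range(8), n_workers=4)
print(len(results))  # 8 independent fits, gathered into one mapping
```

    The 3-hours-versus-60-days figure quoted in the record is consistent with this picture: the gain comes almost entirely from the number of simultaneously available nodes, not from any change to the fitting code itself.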

  14. Potentiality of Big Data in the Medical Sector: Focus on How to Reshape the Healthcare System

    PubMed Central

    Jee, Kyoungyoung

    2013-01-01

    Objectives: The main purpose of this study was to explore whether the use of big data can effectively reduce healthcare concerns, such as the selection of appropriate treatment paths, improvement of healthcare systems, and so on. Methods: By providing an overview of the current state of big data applications in the healthcare environment, this study has explored the current challenges that governments and healthcare stakeholders are facing, as well as the opportunities presented by big data. Results: Insightful consideration of the current state of big data applications could help follower countries or healthcare stakeholders in their plans for deploying big data to resolve healthcare issues. The advantage for such follower countries and healthcare stakeholders is that they can possibly leapfrog the leaders' big data applications by conducting a careful analysis of the leaders' successes and failures and exploiting the expected future opportunities in mobile services. Conclusions: First, all big data projects undertaken by leading countries' governments and healthcare industries have similar general common goals. Second, for medical data that cuts across departmental boundaries, a top-down approach is needed to effectively manage and integrate big data. Third, real-time analysis of in-motion big data should be carried out, while protecting privacy and security. PMID:23882412

  15. Potentiality of big data in the medical sector: focus on how to reshape the healthcare system.

    PubMed

    Jee, Kyoungyoung; Kim, Gang-Hoon

    2013-06-01

    The main purpose of this study was to explore whether the use of big data can effectively reduce healthcare concerns, such as the selection of appropriate treatment paths, improvement of healthcare systems, and so on. By providing an overview of the current state of big data applications in the healthcare environment, this study has explored the current challenges that governments and healthcare stakeholders are facing as well as the opportunities presented by big data. Insightful consideration of the current state of big data applications could help follower countries or healthcare stakeholders in their plans for deploying big data to resolve healthcare issues. The advantage for such follower countries and healthcare stakeholders is that they can possibly leapfrog the leaders' big data applications by conducting a careful analysis of the leaders' successes and failures and exploiting the expected future opportunities in mobile services. First, all big data projects undertaken by leading countries' governments and healthcare industries have similar general common goals. Second, for medical data that cuts across departmental boundaries, a top-down approach is needed to effectively manage and integrate big data. Third, real-time analysis of in-motion big data should be carried out, while protecting privacy and security.

  16. 25th Birthday Cern- Amphi

    ScienceCinema

    None

    2017-12-09

    Ceremony for CERN's 25th anniversary, with two speakers: Prof. Weisskopf speaks on the significance and role of CERN, and Prof. Casimir(?) gives a talk on the relations between pure science, applied science, and "big science" ("light science")

  17. Using a Very Big Rocket to take Very Small Satellites to Very Far Places

    NASA Technical Reports Server (NTRS)

    Cohen, Barbara

    2017-01-01

    Planetary science cubesats are being built. InSight (2018) will carry 2 cubesats to provide communication links to Mars. EM-1 (2019) will carry 13 cubesat-class missions to further smallsat science and exploration capabilities. Planetary science cubesats have more in common with large planetary science missions than with LEO cubesats: their teams need to work closely with people who have deep-space mission experience.

  18. The Confluence of GIS, Cloud and Open Source, Enabling Big Raster Data Applications

    NASA Astrophysics Data System (ADS)

    Plesea, L.; Emmart, C. B.; Boller, R. A.; Becker, P.; Baynes, K.

    2016-12-01

    The rapid evolution of available cloud services is profoundly changing the way applications are being developed and used. Massive object stores, service scalability, and continuous integration are some of the most important cloud technology advances that directly influence science applications and GIS. At the same time, more and more scientists are using GIS platforms in their day-to-day research. Yet with new opportunities there are always some challenges. Given the large amount of data commonly required in science applications, usually large raster datasets, connectivity is one of the biggest problems. Connectivity has two aspects: one is the limited bandwidth and latency of the communication link due to the geographical location of the resources; the other is the interoperability and intrinsic efficiency of the interface protocol used to connect. NASA and Esri are actively helping each other and collaborating on a few open source projects, aiming to provide some of the core technology components that directly address these GIS-enabled data connectivity problems. Last year Esri contributed LERC, a very fast and efficient compression algorithm, to the GDAL/MRF format, which is itself a NASA/Esri collaboration project. The MRF raster format has some cloud-aware features that make it possible to build high-performance web services on cloud platforms, as some of the Esri projects demonstrate. Currently, another NASA open source project, the high-performance OnEarth WMTS server, is being refactored and enhanced to better integrate with MRF, GDAL, and Esri software. Taken together, GDAL, MRF, and OnEarth form the core of an open source CloudGIS toolkit that is already showing results. Since it is well integrated with GDAL, which is the most common interoperability component of GIS applications, this approach should improve the connectivity and performance of many science and GIS applications in the cloud.
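    To illustrate the "cloud-aware" idea mentioned above: if a tiled raster is laid out so that tile locations are computable (or kept in a small index), a client can fetch a single tile from an object store with one HTTP Range request instead of downloading the whole file. The fixed-size, row-major tile layout below is a simplifying assumption for illustration only, not the actual MRF index format:

```python
def tile_byte_range(row, col, tiles_per_row, tile_size_bytes, header_bytes=0):
    # For a hypothetical raster whose tiles are stored row-major with a
    # fixed size, the byte range of tile (row, col) is pure arithmetic --
    # no need to read anything but the requested tile.
    index = row * tiles_per_row + col
    start = header_bytes + index * tile_size_bytes
    return start, start + tile_size_bytes - 1  # inclusive HTTP Range bounds

# A client would then issue a request with:  Range: bytes=<start>-<end>
start, end = tile_byte_range(row=2, col=3, tiles_per_row=8, tile_size_bytes=4096)
print(start, end)
```

    Real formats store per-tile offsets in an index because compressed tiles vary in size, but the access pattern, one small ranged read per tile, is the same, and it is what makes serving rasters directly from massive object stores practical.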

  19. International Cooperation in Science. Science Policy Study--Hearings Volume 7. Hearings before the Task Force on Science Policy of the Committee on Science and Technology, House of Representatives, Ninety-Ninth Congress, First Session (June 18, 19, 20, 27, 1985). No. 50.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. House Committee on Science and Technology.

    These hearings on international cooperation in science focused on three issues: (1) international cooperation in big science; (2) the impact of international cooperation on research priorities; and (3) coordination in management of international cooperative research. Witnesses presenting testimony and/or prepared statements were: Victor Weisskopf;…

  20. 50 CFR 86.54 - How must I submit proposals?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... (CONTINUED) FINANCIAL ASSISTANCE-WILDLIFE SPORT FISH RESTORATION PROGRAM BOATING INFRASTRUCTURE GRANT (BIG... from Tier Two projects. (e) You must describe each project in Tier Two separately, so that the Service...

  1. Towards Supporting Climate Scientists and Impact Assessment Analysts with the Big Data Europe Platform

    NASA Astrophysics Data System (ADS)

    Klampanos, Iraklis; Vlachogiannis, Diamando; Andronopoulos, Spyros; Cofiño, Antonio; Charalambidis, Angelos; Lokers, Rob; Konstantopoulos, Stasinos; Karkaletsis, Vangelis

    2016-04-01

    The EU Horizon 2020 project Big Data Europe (BDE) aims to support European companies and institutions in effectively managing and making use of big data in activities critical to their progress and success. BDE focuses on seven areas of societal impact: Health, Food and Agriculture, Energy, Transport, Climate, Social Sciences and Security. By reaching out to partners and stakeholders, BDE aims to elicit data-intensive requirements for, and deliver, an ICT platform covering aspects of publishing and consuming semantically interoperable, large-scale, multi-lingual data assets and knowledge. In this presentation we will describe the first BDE pilot for Climate, focusing on SemaGrow, its core component, which provides data querying and management based on data semantics. Over the last few decades, extended scientific effort in understanding climate change has resulted in a huge volume of model and observational data. Large international global and regional model inter-comparison projects have focused on creating a framework in support of climate model diagnosis, validation, documentation and data access. The application of climate model ensembles, i.e., sets of different possible realisations of a climate model, has further significantly increased the amount of climate and weather data generated. The provision of such models satisfies the crucial objective of assessing potential impacts of climate change on well-being, for adaptation, prevention and mitigation. One of the methodologies applied by the climate research and impact assessment communities is dynamical downscaling, which calculates values of atmospheric variables on smaller spatial and temporal scales, given a global model. At the company or institution level, this process can be greatly improved in terms of querying, data ingestion from various sources and formats, automatic data mapping, etc.
The first Climate BDE pilot will facilitate the process of dynamical downscaling by providing a semantics-based interface to climate open data (e.g. to ESGF services); searching, downloading and indexing climate model and observational data according to user requirements, such as coverage and experimental scenarios; executing dynamical downscaling models on institutional computing resources; and establishing a framework for metadata mappings and data lineage. The objectives of this pilot will be met by building on the SemaGrow system and tools, which were developed as part of the SemaGrow project in order to scale data-intensive techniques up to extremely large data volumes and improve real-time performance for agricultural experiments and analyses. SemaGrow is a query resolution and ingestion system for data and semantics. It is able to extract semantic features from data, index them, and expose APIs to other BDE platform components. Moreover, SemaGrow provides tools for transforming and managing data in various formats (e.g. NetCDF) and their metadata. It can also interface between users and distributed, external data sources via SPARQL endpoints. This has been demonstrated, as part of the SemaGrow project, on diverse and large-scale scientific use cases. SemaGrow is an active data service in agINFRA, a data infrastructure for agriculture (https://github.com/semagrow/semagrow). Big Data Europe (http://www.big-data-europe.eu) - grant agreement no. 644564. Earth System Grid Federation: http://esgf.llnl.gov. See also http://www.semagrow.eu and http://aginfra.eu.

  2. Development of a Carbon Sequestration Visualization Tool using Google Earth Pro

    NASA Astrophysics Data System (ADS)

    Keating, G. N.; Greene, M. K.

    2008-12-01

    The Big Sky Carbon Sequestration Partnership seeks to prepare organizations throughout the western United States for a possible carbon-constrained economy. Through the development of CO2 capture and subsurface sequestration technology, the Partnership is working to enable the region to cleanly utilize its abundant fossil energy resources. The intent of the Los Alamos National Laboratory Big Sky visualization tool is to allow geochemists, geologists, geophysicists, project managers, and other project members to view, identify, and query the data collected from CO2 injection tests using a single data-source platform, a mission to which Google Earth Pro is uniquely and ideally suited. The visualization framework enables fusion of data from disparate sources and allows investigators to fully explore spatial and temporal trends in CO2 fate and transport within a reservoir. Subsurface wells are projected in 3-D above ground in Google Earth as the KML anchor points for the presentation of various surface and subsurface data. This solution is the most integrative and cost-effective one possible for the variety of users in the Big Sky community.
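    The KML anchoring described above can be sketched with a few lines of standard-library code; the well name, coordinates, and projection height here are hypothetical placeholders, not values from the Big Sky tool:

```python
import xml.etree.ElementTree as ET

def well_placemark_kml(name, lon, lat, height_m):
    # Build a minimal KML Placemark that projects a subsurface well above
    # ground as a clickable anchor point, roughly as the record describes.
    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    pm = ET.SubElement(ET.SubElement(kml, "Document"), "Placemark")
    ET.SubElement(pm, "name").text = name
    point = ET.SubElement(pm, "Point")
    # relativeToGround lifts the anchor height_m metres above the terrain.
    ET.SubElement(point, "altitudeMode").text = "relativeToGround"
    ET.SubElement(point, "coordinates").text = f"{lon},{lat},{height_m}"
    return ET.tostring(kml, encoding="unicode")

print(well_placemark_kml("Injection well 1 (hypothetical)", -110.5, 45.8, 100))
```

    Saving the returned string as a `.kml` file and opening it in Google Earth shows the anchor; per-well data (plots, time series) would then hang off such placemarks.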

  3. How can we improve Science, Technology, Engineering, and Math education to encourage careers in Biomedical and Pathology Informatics?

    PubMed

    Uppal, Rahul; Mandava, Gunasheil; Romagnoli, Katrina M; King, Andrew J; Draper, Amie J; Handen, Adam L; Fisher, Arielle M; Becich, Michael J; Dutta-Moscato, Joyeeta

    2016-01-01

The Computer Science, Biology, and Biomedical Informatics (CoSBBI) program was initiated in 2011 to expose the critical role of informatics in biomedicine to talented high school students.[1] By involving them in Science, Technology, Engineering, and Math (STEM) training at the high school level and providing mentorship and research opportunities throughout the formative years of their education, CoSBBI creates a research infrastructure designed to develop young informaticians. Our central premise is that the trajectory necessary to be an expert in the emerging fields of biomedical informatics and pathology informatics requires accelerated learning at an early age. In our 4th year of CoSBBI as a part of the University of Pittsburgh Cancer Institute (UPCI) Academy (http://www.upci.upmc.edu/summeracademy/), and our 2nd year of CoSBBI as an independent informatics-based academy, we enhanced our classroom curriculum, added hands-on computer science instruction, and expanded research projects to include clinical informatics. We also conducted a qualitative evaluation of the program to identify areas that need improvement in order to achieve our goal of creating a pipeline of exceptionally well-trained applicants for both pathology informatics and biomedical informatics in the era of big data and personalized medicine.

  4. An open science cloud for scientific research

    NASA Astrophysics Data System (ADS)

    Jones, Bob

    2016-04-01

The Helix Nebula initiative was presented at EGU 2013 (http://meetingorganizer.copernicus.org/EGU2013/EGU2013-1510-2.pdf) and has continued to expand with more research organisations, providers and services. The hybrid cloud model deployed by Helix Nebula has grown to become a viable approach for provisioning ICT services for research communities from both public and commercial service providers (http://dx.doi.org/10.5281/zenodo.16001). The relevance of this approach for all those communities facing societal challenges is explained in a recent EIROforum publication (http://dx.doi.org/10.5281/zenodo.34264). This presentation will describe how this model brings together a range of stakeholders to implement a common platform for data intensive services that builds upon existing publicly funded e-infrastructures and commercial cloud services to promote open science. It explores the essential characteristics of a European Open Science Cloud if it is to address the big data needs of the latest generation of Research Infrastructures. The high-level architecture and key services, as well as the role of standards, are described. Also described are a governance and financial model and the roles of the stakeholders, including commercial service providers and downstream business sectors, that will ensure a European Open Science Cloud can innovate, grow, and be sustained beyond the current project cycles.

  5. Big Fish in Little Ponds Aspire More: Mediation and Cross-Cultural Generalizability of School-Average Ability Effects on Self-Concept and Career Aspirations in Science

    ERIC Educational Resources Information Center

    Nagengast, Benjamin; Marsh, Herbert W.

    2012-01-01

    Being schooled with other high-achieving peers has a detrimental influence on students' self-perceptions: School-average and class-average achievement have a negative effect on academic self-concept and career aspirations--the big-fish-little-pond effect. Individual achievement, on the other hand, predicts academic self-concept and career…

  6. On Enthusing Students about Big Data and Social Media Visualization and Analysis Using R, RStudio, and RMarkdown

    ERIC Educational Resources Information Center

    Stander, Julian; Dalla Valle, Luciana

    2017-01-01

    We discuss the learning goals, content, and delivery of a University of Plymouth intensive module delivered over four weeks entitled MATH1608PP Understanding Big Data from Social Networks, aimed at introducing students to a broad range of techniques used in modern Data Science. This module made use of R, accessed through RStudio, and some popular…

  7. Rethinking climate change adaptation and place through a situated pathways framework: A case study from the Big Hole Valley, USA

    Treesearch

    Daniel J. Murphy; Laurie Yung; Carina Wyborn; Daniel R. Williams

    2017-01-01

    This paper critically examines the temporal and spatial dynamics of adaptation in climate change science and explores how dynamic notions of 'place' elucidate novel ways of understanding community vulnerability and adaptation. Using data gathered from a narrative scenario-building process carried out among communities of the Big Hole Valley in Montana, the...

  8. Enabling Analytics in the Cloud for Earth Science Data

    NASA Technical Reports Server (NTRS)

    Ramachandran, Rahul; Lynnes, Christopher; Bingham, Andrew W.; Quam, Brandi M.

    2018-01-01

    The purpose of this workshop was to hold interactive discussions where providers, users, and other stakeholders could explore the convergence of three main elements in the rapidly developing world of technology: Big Data, Cloud Computing, and Analytics, [for earth science data].

  9. The nonequilibrium quantum many-body problem as a paradigm for extreme data science

    NASA Astrophysics Data System (ADS)

    Freericks, J. K.; Nikolić, B. K.; Frieder, O.

    2014-12-01

Generating big data pervades much of physics. But some problems, which we call extreme data problems, are too large to be treated within big data science. The nonequilibrium quantum many-body problem on a lattice is just such a problem, where the Hilbert space grows exponentially with system size and rapidly becomes too large to fit on any computer (and can be effectively thought of as an infinite-sized data set). Nevertheless, much progress has been made with computational methods on this problem, which serve as a paradigm for how one can approach and attack extreme data problems. In addition, viewing these physics problems from a computer-science perspective leads to new approaches that can be tried to solve them more accurately and for longer times. We review a number of these different ideas here.
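    The exponential growth of the Hilbert space can be made concrete with a few lines of Python; the sketch assumes a spin-1/2 lattice and one complex128 amplitude (16 bytes) per basis state.

```python
def hilbert_dim(n_sites, local_dim=2):
    """Dimension of the many-body Hilbert space for a lattice of
    n_sites, each carrying local_dim states (2 for spin-1/2)."""
    return local_dim ** n_sites

def state_vector_bytes(n_sites):
    """Memory required to store a single state vector, assuming one
    complex128 amplitude (16 bytes) per basis state."""
    return 16 * hilbert_dim(n_sites)

# 10 spin-1/2 sites: dimension 1024, a trivial 16 KB vector.
# 50 spin-1/2 sites: one state vector already needs ~18 petabytes.
petabytes_50 = state_vector_bytes(50) / 1e15
```

    Each added site doubles the storage, which is why exact state-vector methods stall near a few tens of sites and the problem qualifies as "extreme data."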

  10. Exponential Growth and the Shifting Global Center of Gravity of Science Production, 1900-2011

    ERIC Educational Resources Information Center

    Zhang, Liang; Powell, Justin J. W.; Baker, David P.

    2015-01-01

    Long historical trends in scientific discovery led mid-20th century scientometricians to mark the advent of "big science"--extensive science production--and predicted that over the next few decades, the exponential growth would slow, resulting in lower rates of increase in production at the upper limit of a logistic curve. They were…

  11. Don't Dumb Me down

    ERIC Educational Resources Information Center

    Goldacre, Ben

    2007-01-01

    In this article, the author talks about pseudoscientific quack, or a big science story in a national newspaper and explains why science in the media is so often pointless, simplistic, boring, or just plain wrong. It is the author's hypothesis that in their choice of stories, and the way they cover them, the media create a parody of science, for…

  12. Thinking, Doing, Talking Science: Evaluation Report and Executive Summary

    ERIC Educational Resources Information Center

    Hanley, Pam; Slavin, Robert; Elliott, Louise

    2015-01-01

    Thinking, Doing, Talking Science (TDTS) is a programme that aims to make science lessons in primary schools more practical, creative and challenging. Teachers are trained in a repertoire of strategies that aim to encourage pupils to use higher order thinking skills. For example, pupils are posed 'Big Questions,' such as 'How do you know that the…

  13. Differences in the Socio-Emotional Competency Profile in University Students from different Disciplinary Area

    ERIC Educational Resources Information Center

    Castejon, Juan Luis; Cantero, Ma. Pilar; Perez, Nelida

    2008-01-01

    Introduction: The main objective of this paper is to establish a profile of socio-emotional competencies characteristic of a sample of students from each of the big academic areas in higher education: legal sciences, social sciences, education, humanities, science and technology, and health. An additional objective was to analyse differences…

  14. Proportional Reasoning Ability and Concepts of Scale: Surface Area to Volume Relationships in Science

    ERIC Educational Resources Information Center

    Taylor, Amy; Jones, Gail

    2009-01-01

    The "National Science Education Standards" emphasise teaching unifying concepts and processes such as basic functions of living organisms, the living environment, and scale. Scale influences science processes and phenomena across the domains. One of the big ideas of scale is that of surface area to volume. This study explored whether or not there…

  15. Review of the National Research Council's Framework for K-12 Science Education

    ERIC Educational Resources Information Center

    Gross, Paul R.

    2011-01-01

    The new "Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas" is a big, comprehensive volume, carefully organized and heavily documented. It is the long-awaited product of the Committee on a Conceptual Framework for New K-12 Science Education Standards. As noted, it is a weighty document (more than 300…

  16. A primer on theory-driven web scraping: Automatic extraction of big data from the Internet for use in psychological research.

    PubMed

    Landers, Richard N; Brusso, Robert C; Cavanaugh, Katelyn J; Collmus, Andrew B

    2016-12-01

The term big data encompasses a wide range of approaches to collecting and analyzing data in ways that were not possible before the era of modern personal computing. One approach to big data of great potential to psychologists is web scraping, which involves the automated collection of information from webpages. Although web scraping can create massive big datasets with tens of thousands of variables, it can also be used to create modestly sized, more manageable datasets with tens of variables but hundreds of thousands of cases, well within the skillset of most psychologists to analyze, in a matter of hours. In this article, we demystify web scraping methods as currently used to examine research questions of interest to psychologists. First, we introduce an approach called theory-driven web scraping, in which the choice to use web-based big data must follow substantive theory. Second, we introduce data source theories, a term used to describe the assumptions a researcher must make about a prospective big data source in order to meaningfully scrape data from it. Critically, researchers must derive specific hypotheses to be tested based upon their data source theory, and if these hypotheses are not empirically supported, plans to use that data source should be changed or eliminated. Third, we provide a case study and sample code in Python demonstrating how web scraping can be conducted to collect big data, along with links to a web tutorial designed for psychologists. Fourth, we describe a 4-step process to be followed in web scraping projects. Fifth and finally, we discuss legal, practical, and ethical concerns faced when conducting web scraping projects.
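    A minimal scraping sketch in the spirit of the article, using only the Python standard library; the `<h2 class="post-title">` markup is a hypothetical page structure, not any specific site's, and a real project would fetch pages over HTTP and respect the site's terms of use.

```python
from html.parser import HTMLParser

class PostTitleScraper(HTMLParser):
    """Collect the text inside <h2 class="post-title"> elements.
    The tag name and class are hypothetical and must be adapted to
    whatever structure the target site actually uses."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if tag == "h2" and ("class", "post-title") in attrs:
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.titles.append(data.strip())

# Stand-in for HTML fetched from a web page.
page = '<h2 class="post-title">Feeling great today</h2><p>body text</p>'
scraper = PostTitleScraper()
scraper.feed(page)
```

    The theory-driven point is that the choice of which elements to extract (here, post titles) should be dictated by the researcher's data source theory before any scraping begins.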

  17. Communicating Science

    NASA Astrophysics Data System (ADS)

    Russell, Nicholas

    2009-10-01

    Introduction: what this book is about and why you might want to read it; Prologue: three orphans share a common paternity: professional science communication, popular journalism, and literary fiction are not as separate as they seem; Part I. Professional Science Communication: 1. Spreading the word: the endless struggle to publish professional science; 2. Walk like an Egyptian: the alien feeling of professional science writing; 3. The future's bright? Professional science communication in the age of the internet; 4. Counting the horse's teeth: professional standards in science's barter economy; 5. Separating the wheat from the chaff: peer review on trial; Part II. Science for the Public: What Science Do People Need and How Might They Get It?: 6. The Public Understanding of Science (PUS) movement and its problems; 7. Public engagement with science and technology (PEST): fine principle, difficult practice; 8. Citizen scientists? Democratic input into science policy; 9. Teaching and learning science in schools: implications for popular science communication; Part III. Popular Science Communication: The Press and Broadcasting: 10. What every scientist should know about mass media; 11. What every scientist should know about journalists; 12. The influence of new media; 13. How the media represents science; 14. How should science journalists behave?; Part IV. The Origins of Science in Cultural Context: Five Historic Dramas: 15. A terrible storm in Wittenberg: natural knowledge through sorcery and evil; 16. A terrible storm in the Mediterranean: controlling nature with white magic and religion; 17. Thieving magpies: the subtle art of false projecting; 18. Foolish virtuosi: natural philosophy emerges as a distinct discipline but many cannot take it seriously; 19. Is scientific knowledge 'true' or should it just be 'truthfully' deployed?; Part V. Science in Literature: 20. Science and the Gothic: the three big nineteenth-century monster stories; 21. Science fiction: serious literature of ideas or low-grade entertainment?; 22. Science in British literary fiction; 23. Science on stage: the politics and ethics of science in cultural and educational contexts.

  18. Initial-stage examination of a testbed for the big data transfer over parallel links. The SDN approach

    NASA Astrophysics Data System (ADS)

    Khoruzhnikov, S. E.; Grudinin, V. A.; Sadov, O. L.; Shevel, A. E.; Titov, V. B.; Kairkanov, A. B.

    2015-04-01

The transfer of Big Data over a computer network is an important and unavoidable operation in the past, the present, and any feasible future. A large variety of astronomical projects produce Big Data. There are a number of methods and tools to transfer data over a global computer network (the Internet). In this paper we consider the transfer of one piece of Big Data from one point on the Internet to another, in general over a long distance of many thousands of kilometers. Several free-of-charge systems for transferring Big Data are analyzed here. The most important architectural features are emphasized, and the idea of adding the SDN OpenFlow protocol technique for fine-grained tuning of the data transfer process over several parallel data links is discussed.
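    The parallel-link idea can be sketched as simple striping and reassembly; this is a toy model of the data path only, with invented chunk sizes, and says nothing about the paper's SDN/OpenFlow control logic.

```python
def split_for_links(payload: bytes, n_links: int, chunk_size: int = 4):
    """Stripe a payload round-robin into per-link chunk lists, a
    simplified model of sending one big file over parallel links."""
    chunks = [payload[i:i + chunk_size]
              for i in range(0, len(payload), chunk_size)]
    return [chunks[i::n_links] for i in range(n_links)]

def reassemble(per_link):
    """Interleave the per-link chunks back into original order, as a
    receiver would after collecting data from every link."""
    out, i = [], 0
    while True:
        link, idx = i % len(per_link), i // len(per_link)
        if idx >= len(per_link[link]):
            break
        out.append(per_link[link][idx])
        i += 1
    return b"".join(out)

restored = reassemble(split_for_links(b"abcdefghij", 3))
```

    Real systems must additionally handle per-link loss, reordering, and rate differences, which is where fine-grained SDN control of each link becomes useful.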

  19. Assessing Teachers' Science Content Knowledge: A Strategy for Assessing Depth of Understanding

    NASA Astrophysics Data System (ADS)

    McConnell, Tom J.; Parker, Joyce M.; Eberhardt, Jan

    2013-06-01

    One of the characteristics of effective science teachers is a deep understanding of science concepts. The ability to identify, explain and apply concepts is critical in designing, delivering and assessing instruction. Because some teachers have not completed extensive courses in some areas of science, especially in middle and elementary grades, many professional development programs attempt to strengthen teachers' content knowledge. Assessing this content knowledge is challenging. Concept inventories are reliable and efficient, but do not reveal depth of knowledge. Interviews and observations are time-consuming. The Problem Based Learning Project for Teachers implemented a strategy that includes pre-post instruments in eight content strands that permits blind coding of responses and comparison across teachers and groups of teachers. The instruments include two types of open-ended questions that assess both general knowledge and the ability to apply Big Ideas related to specific science topics. The coding scheme is useful in revealing patterns in prior knowledge and learning, and identifying ideas that are challenging or not addressed by learning activities. The strengths and limitations of the scoring scheme are identified through comparison of the findings to case studies of four participating teachers from middle and elementary schools. The cases include examples of coded pre- and post-test responses to illustrate some of the themes seen in teacher learning. The findings raise questions for future investigation that can be conducted using analyses of the coded responses.

  20. Launch Pad Activities

    NASA Image and Video Library

    1959-09-08

Big Joe Capsule Launch Pad Activities: This film covers Big Joe and Little Joe Project Mercury flight tests with a research and development version of the Mercury capsule. Big Joe was an Atlas missile that successfully launched a boilerplate model of the Mercury capsule on September 9, 1959. The lower half of the capsule was created at NASA Lewis. The scenes include coverage of the assembly and erection of the boosters, delivery of the capsules, mating of the capsules to the boosters, prelaunch views of the capsules and boosters on launchers, mission control, the launches, and recovery.
