Sample records for big data issues

  1. How Big Is Too Big?

    ERIC Educational Resources Information Center

    Cibes, Margaret; Greenwood, James

    2016-01-01

    Media Clips appears in every issue of Mathematics Teacher, offering readers contemporary, authentic applications of quantitative reasoning based on print or electronic media. This issue features "How Big is Too Big?" (Margaret Cibes and James Greenwood) in which students are asked to analyze the data and tables provided and answer a…

  2. Opportunity and Challenges for Migrating Big Data Analytics in Cloud

    NASA Astrophysics Data System (ADS)

    Amitkumar Manekar, S.; Pradeepini, G., Dr.

    2017-08-01

    Big Data Analytics is a buzzword nowadays. As data generation capabilities become more demanding and more scalable, data acquisition and storage become crucial issues. Cloud storage is a widely used platform, and the technology will become crucial to executives handling data powered by analytics. The trend toward "big data-as-a-service" is now talked about everywhere. On one hand, cloud-based big data analytics directly tackles ongoing issues of scale, speed, and cost; on the other, researchers are still working to solve security and other real-time problems of big data migration to cloud-based platforms. This article focuses on finding possible ways to migrate big data to the cloud. Technology that supports coherent data migration, and the possibility of doing big data analytics on a cloud platform, are in demand for a new era of growth. The article also surveys the available technologies and techniques for migrating big data to the cloud.

  3. Rethinking big data: A review on the data quality and usage issues

    NASA Astrophysics Data System (ADS)

    Liu, Jianzheng; Li, Jie; Li, Weifeng; Wu, Jiansheng

    2016-05-01

    The recent explosion of big data publications has well documented the rise of big data and its ongoing prevalence. Different types of "big data" have emerged and have greatly enriched spatial information sciences and related fields in terms of breadth and granularity. Studies that were difficult to conduct in the past due to limited data availability can now be carried out. However, big data brings many "big errors" in data quality and data usage, and it cannot substitute for sound research design and solid theories. We identify and summarize the problems faced by current big data studies with regard to data collection, processing, and analysis: inauthentic data collection; information incompleteness and noise; unrepresentativeness; consistency and reliability; and ethical issues. Cases from empirical studies are provided as evidence for each problem. We propose that big data research should closely follow good scientific practice to provide reliable and scientific "stories," as well as explore and develop techniques and methods to mitigate or rectify the "big errors" brought by big data.

  4. "Small Steps, Big Rewards": You Can Prevent Type 2 Diabetes

    MedlinePlus

    "Small Steps, Big Rewards": You Can Prevent Type 2 Diabetes ... onset. Those are the basic facts of "Small Steps. Big Rewards: Prevent Type 2 Diabetes," created by ...

  5. Medical big data: promise and challenges.

    PubMed

    Lee, Choong Ho; Yoon, Hyung-Jin

    2017-03-01

    The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes aspects of data analysis, such as hypothesis-generating rather than hypothesis-testing. Big data focuses on the temporal stability of associations rather than on causal relationships, and underlying probability-distribution assumptions are frequently not required. Medical big data, as material to be analyzed, has various features that are not only distinct from the big data of other disciplines but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease and safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, the curse of dimensionality, and bias control, and share the inherent limitations of observational studies, namely the inability to test causality owing to residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of the practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcomes and reduce waste in areas including nephrology.

  6. Medical big data: promise and challenges

    PubMed Central

    Lee, Choong Ho; Yoon, Hyung-Jin

    2017-01-01

    The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes aspects of data analysis, such as hypothesis-generating rather than hypothesis-testing. Big data focuses on the temporal stability of associations rather than on causal relationships, and underlying probability-distribution assumptions are frequently not required. Medical big data, as material to be analyzed, has various features that are not only distinct from the big data of other disciplines but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease and safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, the curse of dimensionality, and bias control, and share the inherent limitations of observational studies, namely the inability to test causality owing to residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of the practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcomes and reduce waste in areas including nephrology. PMID:28392994

  7. Comparative effectiveness research and big data: balancing potential with legal and ethical considerations.

    PubMed

    Gray, Elizabeth Alexandra; Thorpe, Jane Hyatt

    2015-01-01

    Big data holds big potential for comparative effectiveness research. The ability to quickly synthesize and use vast amounts of health data to compare medical interventions across settings of care, patient populations, payers and time will greatly inform efforts to improve quality, reduce costs and deliver more patient-centered care. However, the use of big data raises significant legal and ethical issues that may present barriers or limitations to the full potential of big data. This paper addresses the scope of some of these legal and ethical issues and how they may be managed effectively to fully realize the potential of big data.

  8. High School Mentors in Brief: Findings from the Big Brothers Big Sisters School-Based Mentoring Impact Study. P/PV In Brief. Issue 8

    ERIC Educational Resources Information Center

    Jucovy, Linda; Herrera, Carla

    2009-01-01

    This issue of "Public/Private Ventures (P/PV) In Brief" is based on "High School Students as Mentors," a report that examined the efficacy of high school mentors using data from P/PV's large-scale random assignment impact study of Big Brothers Big Sisters school-based mentoring programs. The brief presents an overview of the findings, which…

  9. Big Data Analytics in Medicine and Healthcare.

    PubMed

    Ristevski, Blagoj; Chen, Ming

    2018-05-10

    This paper surveys big data, highlighting big data analytics in medicine and healthcare. The big data characteristics of value, volume, velocity, variety, veracity, and variability are described. Big data analytics in medicine and healthcare covers the integration and analysis of large amounts of complex heterogeneous data, such as various -omics data (genomics, epigenomics, transcriptomics, proteomics, metabolomics, interactomics, pharmacogenomics, diseasomics), biomedical data, and electronic health record data. We underline the challenging issues of big data privacy and security. With regard to big data characteristics, some directions for using suitable and promising open-source distributed data-processing software platforms are given.

  10. Ethics and Epistemology in Big Data Research.

    PubMed

    Lipworth, Wendy; Mason, Paul H; Kerridge, Ian; Ioannidis, John P A

    2017-12-01

    Biomedical innovation and translation are increasingly emphasizing research using "big data." The hope is that big data methods will both speed up research and make its results more applicable to "real-world" patients and health services. While big data research has been embraced by scientists, politicians, industry, and the public, numerous ethical, organizational, and technical/methodological concerns have also been raised. With respect to technical and methodological concerns, there is a view that these will be resolved through sophisticated information technologies, predictive algorithms, and data analysis techniques. While such advances will likely go some way towards resolving technical and methodological issues, we believe that the epistemological issues raised by big data research have important ethical implications and raise questions about the very possibility of big data research achieving its goals.

  11. Big Ideas in Primary Mathematics: Issues and Directions

    ERIC Educational Resources Information Center

    Askew, Mike

    2013-01-01

    This article is located within the literature arguing for attention to Big Ideas in teaching and learning mathematics for understanding. The focus is on surveying the literature of Big Ideas and clarifying what might constitute Big Ideas in the primary Mathematics Curriculum based on both theoretical and pragmatic considerations. This is…

  12. Big Data in the Industry - Overview of Selected Issues

    NASA Astrophysics Data System (ADS)

    Gierej, Sylwia

    2017-12-01

    This article reviews selected issues related to the use of Big Data in the industry. The aim is to define the potential scope and forms of using large data sets in manufacturing companies. By systematically reviewing scientific and professional literature, selected issues related to the use of mass data analytics in production were analyzed. A definition of Big Data was presented, detailing its main attributes. The importance of mass data processing technology in the development of Industry 4.0 concept has been highlighted. Subsequently, attention was paid to issues such as production process optimization, decision making and mass production individualisation, and indicated the potential for large volumes of data. As a result, conclusions were drawn regarding the potential of using Big Data in the industry.

  13. Translating Big Data into Smart Data for Veterinary Epidemiology.

    PubMed

    VanderWaal, Kimberly; Morrison, Robert B; Neuhauser, Claudia; Vilalta, Carles; Perez, Andres M

    2017-01-01

    The increasing availability and complexity of data have led to new opportunities and challenges in veterinary epidemiology around how to translate abundant, diverse, and rapidly growing "big" data into meaningful insights for animal health. Big data analytics are used to understand health risks and minimize the impact of adverse animal health issues by identifying high-risk populations, combining data or processes acting at multiple scales through epidemiological modeling approaches, and harnessing high-velocity data to monitor animal health trends and detect emerging health threats. The advent of big data requires the incorporation of new skills into veterinary epidemiology training, including, for example, machine learning and coding, to prepare a new generation of scientists and practitioners to engage with big data. Establishing pipelines to analyze big data in near real time is the next step in progressing from simply having "big data" to creating "smart data," with the objective of improving understanding of health risks and the effectiveness of management and policy decisions, and ultimately preventing, or at least minimizing, the impact of adverse animal health issues.

  14. "Small Steps, Big Rewards": Preventing Type 2 Diabetes

    MedlinePlus

    "Small Steps, Big Rewards": Preventing Type 2 Diabetes Past Issues / Fall ... These are the plain facts in "Small Steps. Big Rewards: Prevent Type 2 Diabetes," an education campaign ...

  15. Using Big (and Critical) Data to Unmask Inequities in Community Colleges

    ERIC Educational Resources Information Center

    Rios-Aguilar, Cecilia

    2014-01-01

    This chapter presents various definitions of big data and examines some of the assumptions regarding the value and power of big data, especially as it relates to issues of equity in community colleges. Finally, this chapter ends with a discussion of the opportunities and challenges of using big data, critically, for institutional researchers.

  16. Performance and scalability evaluation of "Big Memory" on Blue Gene Linux.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshii, K.; Iskra, K.; Naik, H.

    2011-05-01

    We address memory performance issues observed in Blue Gene Linux and discuss the design and implementation of 'Big Memory' - an alternative, transparent memory space introduced to eliminate the memory performance issues. We evaluate the performance of Big Memory using custom memory benchmarks, NAS Parallel Benchmarks, and the Parallel Ocean Program, at a scale of up to 4,096 nodes. We find that Big Memory successfully resolves the performance issues normally encountered in Blue Gene Linux. For the ocean simulation program, we even find that Linux with Big Memory provides better scalability than does the lightweight compute node kernel designed solely for high-performance applications. Originally intended exclusively for compute node tasks, our new memory subsystem dramatically improves the performance of certain I/O node applications as well. We demonstrate this performance using the central processor of the LOw Frequency ARray radio telescope as an example.

  17. Quality of Big Data in Healthcare

    DOE PAGES

    Sukumar, Sreenivas R.; Ramachandran, Natarajan; Ferrell, Regina Kay

    2015-01-01

    The current trend in Big Data Analytics and in particular Health information technology is towards building sophisticated models, methods and tools for business, operational and clinical intelligence, but the critical issue of data quality required for these models is not getting the attention it deserves. The objective of the paper is to highlight the issues of data quality in the context of Big Data Healthcare Analytics.

  18. Quality of Big Data in Healthcare

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sukumar, Sreenivas R.; Ramachandran, Natarajan; Ferrell, Regina Kay

    The current trend in Big Data Analytics and in particular Health information technology is towards building sophisticated models, methods and tools for business, operational and clinical intelligence, but the critical issue of data quality required for these models is not getting the attention it deserves. The objective of the paper is to highlight the issues of data quality in the context of Big Data Healthcare Analytics.

  19. Issues in Big-Data Database Systems

    DTIC Science & Technology

    2014-06-01

    Berman, Jules K. (2013). Principles of Big Data: Preparing, Sharing, and Analyzing Complex Information. New York: Elsevier. 261pp.

  20. Big(ger) Data as Better Data in Open Distance Learning

    ERIC Educational Resources Information Center

    Prinsloo, Paul; Archer, Elizabeth; Barnes, Glen; Chetty, Yuraisha; van Zyl, Dion

    2015-01-01

    In the context of the hype, promise and perils of Big Data and the currently dominant paradigm of data-driven decision-making, it is important to critically engage with the potential of Big Data for higher education. We do not question the potential of Big Data, but we do raise a number of issues, and present a number of theses to be seriously…

  1. Building a Smarter University: Big Data, Innovation, and Analytics. Critical Issues in Higher Education

    ERIC Educational Resources Information Center

    Lane, Jason E., Ed.

    2014-01-01

    The Big Data movement and the renewed focus on data analytics are transforming everything from healthcare delivery systems to the way cities deliver services to residents. Now is the time to examine how this Big Data could help build smarter universities. While much of the cutting-edge research that is being done with Big Data is happening at…

  2. Using Ethical Reasoning to Amplify the Reach and Resonance of Professional Codes of Conduct in Training Big Data Scientists.

    PubMed

    Tractenberg, Rochelle E; Russell, Andrew J; Morgan, Gregory J; FitzGerald, Kevin T; Collmann, Jeff; Vinsel, Lee; Steinmann, Michael; Dolling, Lisa M

    2015-12-01

    The use of Big Data--however the term is defined--involves a wide array of issues and stakeholders, thereby increasing numbers of complex decisions around issues including data acquisition, use, and sharing. Big Data is becoming a significant component of practice in an ever-increasing range of disciplines; however, since it is not a coherent "discipline" itself, specific codes of conduct for Big Data users and researchers do not exist. While many institutions have created, or will create, training opportunities (e.g., degree programs, workshops) to prepare people to work in and around Big Data, insufficient time, space, and thought have been dedicated to training these people to engage with the ethical, legal, and social issues in this new domain. Since Big Data practitioners come from, and work in, diverse contexts, neither a relevant professional code of conduct nor specific formal ethics training are likely to be readily available. This normative paper describes an approach to conceptualizing ethical reasoning and integrating it into training for Big Data use and research. Our approach is based on a published framework that emphasizes ethical reasoning rather than topical knowledge. We describe the formation of professional community norms from two key disciplines that contribute to the emergent field of Big Data: computer science and statistics. Historical analogies from these professions suggest strategies for introducing trainees and orienting practitioners both to ethical reasoning and to a code of professional conduct itself. We include two semester course syllabi to strengthen our thesis that codes of conduct (including and beyond those we describe) can be harnessed to support the development of ethical reasoning in, and a sense of professional identity among, Big Data practitioners.

  3. Toward a Literature-Driven Definition of Big Data in Healthcare

    PubMed Central

    Baro, Emilie; Degoul, Samuel; Beuscart, Régis; Chazard, Emmanuel

    2015-01-01

    Objective. The aim of this study was to provide a definition of big data in healthcare. Methods. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. Results. A total of 196 papers were included. Big data can be defined as datasets with Log(n∗p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Conclusion. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without being necessarily big, for example, secondary use of Electronic Medical Records (EMR) data. PMID:26137488

  4. Toward a Literature-Driven Definition of Big Data in Healthcare.

    PubMed

    Baro, Emilie; Degoul, Samuel; Beuscart, Régis; Chazard, Emmanuel

    2015-01-01

    The aim of this study was to provide a definition of big data in healthcare. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. A total of 196 papers were included. Big data can be defined as datasets with Log(n∗p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without being necessarily big, for example, secondary use of Electronic Medical Records (EMR) data.
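    The volume criterion proposed by Baro et al. is concrete enough to compute directly. Below is a minimal sketch, assuming the logarithm is base 10 (as a threshold of 7 suggests); the function name and the example figures are our own illustration, not from the paper:

```python
import math

def is_big_data(n: int, p: int, threshold: float = 7.0) -> bool:
    """Baro et al.'s volume criterion: a dataset with n statistical
    individuals and p variables is 'big' when log10(n * p) >= 7."""
    return math.log10(n * p) >= threshold

# A registry of 1 million patients with 50 variables each:
print(is_big_data(1_000_000, 50))  # n*p = 5e7, log10 ≈ 7.7 → True
# A typical clinical trial of 2,000 subjects with 100 variables:
print(is_big_data(2_000, 100))     # n*p = 2e5, log10 ≈ 5.3 → False
```

    Under this criterion a dataset can cross the "big" threshold through many individuals, many variables, or both, which is consistent with the paper's claim that big data is defined by volume.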

  5. Big Data and Neuroimaging.

    PubMed

    Webb-Vargas, Yenny; Chen, Shaojie; Fisher, Aaron; Mejia, Amanda; Xu, Yuting; Crainiceanu, Ciprian; Caffo, Brian; Lindquist, Martin A

    2017-12-01

    Big Data are of increasing importance in a variety of areas, especially in the biosciences. There is an emerging critical need for Big Data tools and methods, because of the potential impact of advancements in these areas. Importantly, statisticians and statistical thinking have a major role to play in creating meaningful progress in this arena. We would like to emphasize this point in this special issue, as it highlights both the dramatic need for statistical input for Big Data analysis and for a greater number of statisticians working on Big Data problems. We use the field of statistical neuroimaging to demonstrate these points. As such, this paper covers several applications and novel methodological developments of Big Data tools applied to neuroimaging data.

  6. Big Geo Data Management: AN Exploration with Social Media and Telecommunications Open Data

    NASA Astrophysics Data System (ADS)

    Arias Munoz, C.; Brovelli, M. A.; Corti, S.; Zamboni, G.

    2016-06-01

    The term Big Data has recently been used to define big, highly varied, complex data sets, which are created and updated at high speed and require faster processing, namely a reduced time to filter and analyse relevant data. These data are also increasingly becoming Open Data (data that can be freely distributed), made public by governments, agencies, private enterprises, and others. There are at least two issues that can obstruct the availability and use of Open Big Datasets. Firstly, the gathering and geoprocessing of these datasets are very computationally intensive; hence, it is necessary to integrate high-performance solutions, preferably internet based, to achieve the goals. Secondly, the problems of heterogeneity and inconsistency in geospatial data are well known and affect the data integration process, but they are particularly problematic for Big Geo Data. Therefore, Big Geo Data integration will be one of the most challenging issues to solve. With these applications, we demonstrate that it is possible to provide processed Big Geo Data to common users, using open geospatial standards and technologies. NoSQL databases like MongoDB and frameworks like RASDAMAN offer functionalities that facilitate working with larger volumes and more heterogeneous geospatial data sources.
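    As a concrete illustration of the kind of functionality the abstract alludes to, the sketch below builds a MongoDB `$geoWithin` filter that selects points falling inside a polygon. The collection and field names (`tweets`, `loc`) and the coordinates are our own illustrative assumptions, not taken from the paper; only the operators (`$geoWithin`, `$geometry`) and the `2dsphere` index are standard MongoDB geospatial API:

```python
def geo_within_query(field, lon_lat_ring):
    """Build a MongoDB $geoWithin filter for points inside a polygon.

    `lon_lat_ring` is a closed GeoJSON ring of [longitude, latitude]
    pairs (first and last point identical).
    """
    return {
        field: {
            "$geoWithin": {
                "$geometry": {"type": "Polygon", "coordinates": [lon_lat_ring]}
            }
        }
    }

# A rough bounding ring around the Milan area (illustrative coordinates):
milan_ring = [[9.0, 45.3], [9.4, 45.3], [9.4, 45.6], [9.0, 45.6], [9.0, 45.3]]
query = geo_within_query("loc", milan_ring)

# With pymongo and a 2dsphere index on "loc", the filter would be used as:
#   db.tweets.create_index([("loc", "2dsphere")])
#   db.tweets.find(query)
print(query["loc"]["$geoWithin"]["$geometry"]["type"])  # Polygon
```

    The spatial index, not the filter document itself, is what keeps such queries fast at Big Geo Data volumes.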

  7. Water issues and agriculture - the view from 30,000 feet

    USDA-ARS?s Scientific Manuscript database

    There are different perspectives on the big picture of water issues especially as they relate to water use, and watershed planning considerations. The big picture of water issues can be couched in terms that the general public can understand, rather than an academic, statistically laden presentation...

  8. Big data in psychology: Introduction to the special issue.

    PubMed

    Harlow, Lisa L; Oswald, Frederick L

    2016-12-01

    The introduction to this special issue on psychological research involving big data summarizes the highlights of 10 articles that address a number of important and inspiring perspectives, issues, and applications. Four common themes that emerge in the articles with respect to psychological research conducted in the area of big data are mentioned, including: (a) The benefits of collaboration across disciplines, such as those in the social sciences, applied statistics, and computer science. Doing so assists in grounding big data research in sound theory and practice, as well as in affording effective data retrieval and analysis. (b) Availability of large data sets on Facebook, Twitter, and other social media sites that provide a psychological window into the attitudes and behaviors of a broad spectrum of the population. (c) Identifying, addressing, and being sensitive to ethical considerations when analyzing large data sets gained from public or private sources. (d) The unavoidable necessity of validating predictive models in big data by applying a model developed on 1 dataset to a separate set of data or hold-out sample. Translational abstracts that summarize the articles in very clear and understandable terms are included in Appendix A, and a glossary of terms relevant to big data research discussed in the articles is presented in Appendix B. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
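    Theme (d), validating a predictive model on a hold-out sample, can be sketched in a few lines. This is an illustrative, standard-library-only sketch (the split function, the trivial "mean model," and the synthetic records are our own, not from any of the special-issue articles): the model is fit on one portion of the data and evaluated only on data it never saw.

```python
import random

def train_holdout_split(records, holdout_frac=0.3, seed=42):
    """Shuffle the records and split them into train and hold-out sets."""
    rng = random.Random(seed)
    shuffled = list(records)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - holdout_frac))
    return shuffled[:cut], shuffled[cut:]

def fit_mean_model(train):
    """'Fit' a trivial predictive model: always predict the mean outcome."""
    return sum(y for _, y in train) / len(train)

def mean_abs_error(prediction, holdout):
    """Validate on the hold-out sample the model never saw during fitting."""
    return sum(abs(y - prediction) for _, y in holdout) / len(holdout)

# Synthetic (predictor, outcome) pairs standing in for a large dataset:
records = [(i, 2 * i + (i % 5) - 2) for i in range(1000)]
train, holdout = train_holdout_split(records)
model = fit_mean_model(train)
print(mean_abs_error(model, holdout))  # error measured on unseen data only
```

    Reporting error on the hold-out set rather than the training set is what guards against the overfitting risk the special issue flags for big data models.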

  9. Big Data in Psychology: Introduction to Special Issue

    PubMed Central

    Harlow, Lisa L.; Oswald, Frederick L.

    2016-01-01

    The introduction to this special issue on psychological research involving big data summarizes the highlights of 10 articles that address a number of important and inspiring perspectives, issues, and applications. Four common themes that emerge in the articles with respect to psychological research conducted in the area of big data are mentioned, including: 1. The benefits of collaboration across disciplines, such as those in the social sciences, applied statistics, and computer science. Doing so assists in grounding big data research in sound theory and practice, as well as in affording effective data retrieval and analysis. 2. Availability of large datasets on Facebook, Twitter, and other social media sites that provide a psychological window into the attitudes and behaviors of a broad spectrum of the population. 3. Identifying, addressing, and being sensitive to ethical considerations when analyzing large datasets gained from public or private sources. 4. The unavoidable necessity of validating predictive models in big data by applying a model developed on one dataset to a separate set of data or hold-out sample. Translational abstracts that summarize the articles in very clear and understandable terms are included in Appendix A, and a glossary of terms relevant to big data research discussed in the articles is presented in Appendix B. PMID:27918177

  10. A Framework for Identifying and Analyzing Major Issues in Implementing Big Data and Data Analytics in E-Learning: Introduction to Special Issue on Big Data and Data Analytics

    ERIC Educational Resources Information Center

    Corbeil, Maria Elena; Corbeil, Joseph Rene; Khan, Badrul H.

    2017-01-01

    Due to rapid advancements in our ability to collect, process, and analyze massive amounts of data, it is now possible for educational institutions to gain new insights into how people learn (Kumar, 2013). E-learning has become an important part of education, and this form of learning is especially suited to the use of big data and data analysis,…

  11. 76 FR 64341 - Big Sandy Pipeline, LLC; Notice of Cost and Revenue Study

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-18

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. CP06-275-003] Big Sandy Pipeline, LLC; Notice of Cost and Revenue Study Take notice that on April 8, 2011, Big Sandy Pipeline, LLC filed its cost and revenue study in compliance with the Commission's November 15, 2006 Order Issuing...

  12. Considerations on Geospatial Big Data

    NASA Astrophysics Data System (ADS)

    LIU, Zhen; GUO, Huadong; WANG, Changlin

    2016-11-01

    Geospatial data, as a significant portion of big data, has recently gained the full attention of researchers. However, few researchers focus on the evolution of geospatial data and its scientific research methodologies. When entering into the big data era, fully understanding the changing research paradigm associated with geospatial data will definitely benefit future research on big data. In this paper, we look deep into these issues by examining the components and features of geospatial big data, reviewing relevant scientific research methodologies, and examining the evolving pattern of geospatial data in the scope of the four ‘science paradigms’. This paper proposes that geospatial big data has significantly shifted the scientific research methodology from ‘hypothesis to data’ to ‘data to questions’ and it is important to explore the generality of growing geospatial data ‘from bottom to top’. Particularly, four research areas that mostly reflect data-driven geospatial research are proposed: spatial correlation, spatial analytics, spatial visualization, and scientific knowledge discovery. It is also pointed out that privacy and quality issues of geospatial data may require more attention in the future. Also, some challenges and thoughts are raised for future discussion.

  13. Translating Big Data into Smart Data for Veterinary Epidemiology

    PubMed Central

    VanderWaal, Kimberly; Morrison, Robert B.; Neuhauser, Claudia; Vilalta, Carles; Perez, Andres M.

    2017-01-01

    The increasing availability and complexity of data have led to new opportunities and challenges in veterinary epidemiology around how to translate abundant, diverse, and rapidly growing “big” data into meaningful insights for animal health. Big data analytics are used to understand health risks and minimize the impact of adverse animal health issues by identifying high-risk populations, combining data or processes acting at multiple scales through epidemiological modeling approaches, and harnessing high-velocity data to monitor animal health trends and detect emerging health threats. The advent of big data requires the incorporation of new skills into veterinary epidemiology training, including, for example, machine learning and coding, to prepare a new generation of scientists and practitioners to engage with big data. Establishing pipelines to analyze big data in near real time is the next step in progressing from simply having “big data” to creating “smart data,” with the objective of improving understanding of health risks and the effectiveness of management and policy decisions, and ultimately preventing, or at least minimizing, the impact of adverse animal health issues. PMID:28770216

  14. Toward a manifesto for the 'public understanding of big data'.

    PubMed

    Michael, Mike; Lupton, Deborah

    2016-01-01

    In this article, we sketch a 'manifesto' for the 'public understanding of big data'. On the one hand, this entails such public understanding of science and public engagement with science and technology-tinged questions as follows: How, when and where are people exposed to, or do they engage with, big data? Who are regarded as big data's trustworthy sources, or credible commentators and critics? What are the mechanisms by which big data systems are opened to public scrutiny? On the other hand, big data generate many challenges for public understanding of science and public engagement with science and technology: How do we address publics that are simultaneously the informant, the informed and the information of big data? What counts as understanding of, or engagement with, big data, when big data themselves are multiplying, fluid and recursive? As part of our manifesto, we propose a range of empirical, conceptual and methodological exhortations. We also provide Appendix 1 that outlines three novel methods for addressing some of the issues raised in the article. © The Author(s) 2015.

  15. Big Data in Public Health: Terminology, Machine Learning, and Privacy.

    PubMed

    Mooney, Stephen J; Pejaver, Vikas

    2018-04-01

    The digital world is generating data at a staggering and still increasing rate. While these "big data" have unlocked novel opportunities to understand public health, they hold still greater potential for research and practice. This review explores several key issues that have arisen around big data. First, we propose a taxonomy of sources of big data to clarify terminology and identify threads common across some subtypes of big data. Next, we consider common public health research and practice uses for big data, including surveillance, hypothesis-generating research, and causal inference, while exploring the role that machine learning may play in each use. We then consider the ethical implications of the big data revolution with particular emphasis on maintaining appropriate care for privacy in a world in which technology is rapidly changing social norms regarding the need for (and even the meaning of) privacy. Finally, we make suggestions regarding structuring teams and training to succeed in working with big data in research and practice.

  16. Challenges and Opportunities of Big Data in Health Care: A Systematic Review

    PubMed Central

    Goswamy, Rishi; Raval, Yesha; Marawi, Sarah

    2016-01-01

    Background: Big data analytics offers promise in many business sectors, and health care is looking at big data to provide answers to many age-related issues, particularly dementia and chronic disease management. Objective: The purpose of this review was to summarize the challenges faced by big data analytics and the opportunities that big data opens in health care. Methods: A total of 3 searches were performed for publications between January 1, 2010 and January 1, 2016 (PubMed/MEDLINE, CINAHL, and Google Scholar), and an assessment was made on content germane to big data in health care. From the results of the searches in research databases and Google Scholar (N=28), the authors summarized content and identified 9 and 14 themes under the categories Challenges and Opportunities, respectively. We rank-ordered and analyzed the themes based on the frequency of occurrence. Results: The top challenges were issues of data structure, security, data standardization, storage and transfers, and managerial skills such as data governance. The top opportunities revealed were quality improvement, population management and health, early detection of disease, data quality, structure, and accessibility, improved decision making, and cost reduction. Conclusions: Big data analytics has the potential for positive impact and global implications; however, it must overcome some legitimate obstacles. PMID:27872036

  17. Big Data is a powerful tool for environmental improvements in the construction business

    NASA Astrophysics Data System (ADS)

    Konikov, Aleksandr; Konikov, Gregory

    2017-10-01

    The work investigates the possibility of applying the Big Data method as a tool to implement environmental improvements in the construction business. The method is recognized as effective in analyzing big volumes of heterogeneous data. It is noted that all preconditions exist for this method to be successfully used to resolve environmental issues in the construction business. It is proven that the principal Big Data techniques (cluster analysis, crowdsourcing, data mixing and integration) can be applied in the sphere in question. It is concluded that Big Data is a truly powerful tool to implement environmental improvements in the construction business.

  18. Information Literacy for Health Professionals: Teaching Essential Information Skills with the Big6 Information Literacy Model

    ERIC Educational Resources Information Center

    Santana Arroyo, Sonia

    2013-01-01

    Health professionals frequently do not possess the necessary information-seeking abilities to conduct an effective search in databases and Internet sources. Reference librarians may teach health professionals these information and technology skills through the Big6 information literacy model (Big6). This article aims to address this issue. It also…

  19. Opening the Black Box: Understanding the Science Behind Big Data and Predictive Analytics.

    PubMed

    Hofer, Ira S; Halperin, Eran; Cannesson, Maxime

    2018-05-25

    Big data, smart data, predictive analytics, and other similar terms are ubiquitous in the lay and scientific literature. However, despite the frequency of usage, these terms are often poorly understood, and evidence of their disruption to clinical care is hard to find. This article aims to address these issues by first defining and elucidating the term big data, exploring the ways in which modern medical data, both inside and outside the electronic medical record, meet the established definitions of big data. We then define the term smart data and discuss the transformations necessary to make big data into smart data. Finally, we examine the ways in which this transition from big to smart data will affect what we do in research, retrospective work, and ultimately patient care.

  20. Integrative methods for analyzing big data in precision medicine.

    PubMed

    Gligorijević, Vladimir; Malod-Dognin, Noël; Pržulj, Nataša

    2016-03-01

    We provide an overview of recent developments in big data analyses in the context of precision medicine and health informatics. With the advance in technologies capturing molecular and medical data, we entered the era of "Big Data" in biology and medicine. These data offer many opportunities to advance precision medicine. We outline key challenges in precision medicine and present recent advances in data integration-based methods to uncover personalized information from big data produced by various omics studies. We survey recent integrative methods for disease subtyping, biomarker discovery, and drug repurposing, and list the tools that are available to domain scientists. Given the ever-growing nature of these big data, we highlight key issues that big data integration methods will face. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. The Future of Big-City Schools; Desegregation Policies and Magnet Alternatives.

    ERIC Educational Resources Information Center

    Levine, Daniel U., Ed.; Havighurst, Robert J., Ed.

    This book provides an in-depth analysis of urban education and related issues. The issues examined are not only fundamentally important for urban education, but in addition, several issues that have recently become prominent in considering the future of big cities are discussed. For instance, the effects of desegregation on middle class enrollment…

  2. Scalability and Validation of Big Data Bioinformatics Software.

    PubMed

    Yang, Andrian; Troup, Michael; Ho, Joshua W K

    2017-01-01

    This review examines two important aspects that are central to modern big data bioinformatics analysis - software scalability and validity. We argue that not only are the issues of scalability and validation common to all big data bioinformatics analyses, they can be tackled by conceptually related methodological approaches, namely divide-and-conquer (scalability) and multiple executions (validation). Scalability is defined as the ability for a program to scale based on workload. It has always been an important consideration when developing bioinformatics algorithms and programs. Nonetheless the surge of volume and variety of biological and biomedical data has posed new challenges. We discuss how modern cloud computing and big data programming frameworks such as MapReduce and Spark are being used to effectively implement divide-and-conquer in a distributed computing environment. Validation of software is another important issue in big data bioinformatics that is often ignored. Software validation is the process of determining whether the program under test fulfils the task for which it was designed. Determining the correctness of the computational output of big data bioinformatics software is especially difficult due to the large input space and complex algorithms involved. We discuss how state-of-the-art software testing techniques that are based on the idea of multiple executions, such as metamorphic testing, can be used to implement an effective bioinformatics quality assurance strategy. We hope this review will raise awareness of these critical issues in bioinformatics.
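
    The multiple-execution idea behind metamorphic testing can be illustrated with a small sketch. The Python fragment below is not from the review; it uses a toy reverse-complement function as a stand-in for a bioinformatics tool whose exact outputs are hard to verify directly. Instead of comparing outputs against known answers, it checks metamorphic relations that must hold between the outputs of related runs.

```python
import random

def reverse_complement(seq):
    # Toy "program under test": reverse-complement a DNA string.
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[base] for base in reversed(seq))

def check_metamorphic_relations(func, seq):
    # Relation 1: applying the function twice must return the input.
    assert func(func(seq)) == seq
    # Relation 2: func(a + b) must equal func(b) + func(a),
    # because reversal swaps the order of concatenated halves.
    cut = len(seq) // 2
    a, b = seq[:cut], seq[cut:]
    assert func(a + b) == func(b) + func(a)

# Run the relations over many random inputs instead of a fixed oracle.
rng = random.Random(0)
for _ in range(100):
    seq = "".join(rng.choice("ATGC") for _ in range(rng.randint(1, 50)))
    check_metamorphic_relations(reverse_complement, seq)
print("all metamorphic relations held")
```

    The same pattern scales to real pipelines: transform the input in a way whose effect on the output is predictable (permuting reads, duplicating records, relabeling samples), then assert that prediction across multiple executions.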

  3. Stochastic dynamics and the predictability of big hits in online videos.

    PubMed

    Miotto, José M; Kantz, Holger; Altmann, Eduardo G

    2017-03-01

    The competition for the attention of users is a central element of the Internet. Crucial issues are the origin and predictability of big hits, the few items that capture a big portion of the total attention. We address these issues analyzing 10^{6} time series of videos' views from YouTube. We find that the average gain of views is linearly proportional to the number of views a video already has, in agreement with usual rich-get-richer mechanisms and Gibrat's law, but this fails to explain the prevalence of big hits. The reason is that the fluctuations around the average views are themselves heavy tailed. Based on these empirical observations, we propose a stochastic differential equation with Lévy noise as a model of the dynamics of videos. We show how this model is substantially better in estimating the probability of an ordinary item becoming a big hit, which is considerably underestimated in the traditional proportional-growth models.
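
    The contrast between Gaussian and heavy-tailed fluctuations around proportional growth can be sketched in a few lines. The snippet below is a toy discrete-time simulation, not the paper's Lévy-noise stochastic differential equation: each "video" gains views in proportion to its current count, with multiplicative noise drawn either from a Gaussian or from a Pareto-like heavy-tailed distribution, and we count how many runs end up far above the median.

```python
import random

def simulate(n_videos, steps, rate, noise_fn, seed=0):
    # Toy rich-get-richer process: each step a video gains views
    # proportional to its current count, perturbed by multiplicative
    # noise; gains are clamped at zero so view counts never decrease.
    rng = random.Random(seed)
    views = [1.0] * n_videos
    for _ in range(steps):
        views = [v + rate * v * max(0.0, 1.0 + noise_fn(rng)) for v in views]
    return views

def big_hit_fraction(views, factor=100.0):
    # Fraction of videos whose final count exceeds `factor` times the median.
    median = sorted(views)[len(views) // 2]
    return sum(v > factor * median for v in views) / len(views)

gaussian = lambda rng: rng.gauss(0.0, 1.0)
# Pareto(alpha=1.5) has mean 3, so subtracting 3 centres the noise near
# zero while keeping its heavy right tail.
heavy = lambda rng: rng.paretovariate(1.5) - 3.0

light_tailed = simulate(5000, 50, 0.1, gaussian)
heavy_tailed = simulate(5000, 50, 0.1, heavy)
print(big_hit_fraction(light_tailed), big_hit_fraction(heavy_tailed))
```

    With these (made-up) settings, the Gaussian run typically produces no videos far above the median, while the heavy-tailed run yields a small but visible fraction of "big hits", mirroring the paper's point that proportional growth alone underestimates them.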

  4. Stochastic dynamics and the predictability of big hits in online videos

    NASA Astrophysics Data System (ADS)

    Miotto, José M.; Kantz, Holger; Altmann, Eduardo G.

    2017-03-01

    The competition for the attention of users is a central element of the Internet. Crucial issues are the origin and predictability of big hits, the few items that capture a big portion of the total attention. We address these issues analyzing 10^6 time series of videos' views from YouTube. We find that the average gain of views is linearly proportional to the number of views a video already has, in agreement with usual rich-get-richer mechanisms and Gibrat's law, but this fails to explain the prevalence of big hits. The reason is that the fluctuations around the average views are themselves heavy tailed. Based on these empirical observations, we propose a stochastic differential equation with Lévy noise as a model of the dynamics of videos. We show how this model is substantially better in estimating the probability of an ordinary item becoming a big hit, which is considerably underestimated in the traditional proportional-growth models.

  5. Potentiality of Big Data in the Medical Sector: Focus on How to Reshape the Healthcare System

    PubMed Central

    Jee, Kyoungyoung

    2013-01-01

    Objectives: The main purpose of this study was to explore whether the use of big data can effectively reduce healthcare concerns, such as the selection of appropriate treatment paths, improvement of healthcare systems, and so on. Methods: By providing an overview of the current state of big data applications in the healthcare environment, this study has explored the current challenges that governments and healthcare stakeholders are facing as well as the opportunities presented by big data. Results: Insightful consideration of the current state of big data applications could help follower countries or healthcare stakeholders in their plans for deploying big data to resolve healthcare issues. The advantage for such follower countries and healthcare stakeholders is that they can possibly leapfrog the leaders' big data applications by conducting a careful analysis of the leaders' successes and failures and exploiting the expected future opportunities in mobile services. Conclusions: First, all big data projects undertaken by leading countries' governments and healthcare industries have similar general common goals. Second, for medical data that cuts across departmental boundaries, a top-down approach is needed to effectively manage and integrate big data. Third, real-time analysis of in-motion big data should be carried out, while protecting privacy and security. PMID:23882412

  6. Potentiality of big data in the medical sector: focus on how to reshape the healthcare system.

    PubMed

    Jee, Kyoungyoung; Kim, Gang-Hoon

    2013-06-01

    The main purpose of this study was to explore whether the use of big data can effectively reduce healthcare concerns, such as the selection of appropriate treatment paths, improvement of healthcare systems, and so on. By providing an overview of the current state of big data applications in the healthcare environment, this study has explored the current challenges that governments and healthcare stakeholders are facing as well as the opportunities presented by big data. Insightful consideration of the current state of big data applications could help follower countries or healthcare stakeholders in their plans for deploying big data to resolve healthcare issues. The advantage for such follower countries and healthcare stakeholders is that they can possibly leapfrog the leaders' big data applications by conducting a careful analysis of the leaders' successes and failures and exploiting the expected future opportunities in mobile services. First, all big data projects undertaken by leading countries' governments and healthcare industries have similar general common goals. Second, for medical data that cuts across departmental boundaries, a top-down approach is needed to effectively manage and integrate big data. Third, real-time analysis of in-motion big data should be carried out, while protecting privacy and security.

  7. Differential Privacy Preserving in Big Data Analytics for Connected Health.

    PubMed

    Lin, Chi; Song, Zihao; Song, Houbing; Zhou, Yanhong; Wang, Yi; Wu, Guowei

    2016-04-01

    In Body Area Networks (BANs), big data collected by wearable sensors usually contain sensitive information, which must be appropriately protected. Previous methods neglected the privacy protection issue, leading to privacy exposure. In this paper, a differential privacy protection scheme for big data in body sensor networks is developed. Compared with previous methods, this scheme provides privacy protection with higher availability and reliability. We introduce the concept of dynamic noise thresholds, which makes our scheme more suitable for processing big data. Experimental results demonstrate that, even when the attacker has full background knowledge, the proposed scheme can still provide enough interference to big sensitive data to preserve privacy.
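
    The paper's dynamic-noise-threshold scheme is specific to its BAN setting, but the core mechanism of differential privacy can be sketched briefly. The example below is a generic illustration of the standard Laplace mechanism, not the authors' algorithm, and the heart-rate numbers are made up: noise scaled to a query's sensitivity divided by the privacy budget epsilon is added to a statistic before release.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    # Standard Laplace mechanism: add noise with scale sensitivity/epsilon
    # to achieve epsilon-differential privacy for a numeric query.
    u = rng.random() - 0.5  # uniform on (-0.5, 0.5)
    scale = sensitivity / epsilon
    # Inverse-CDF sampling of the Laplace(0, scale) distribution.
    return true_value - scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

# Toy body-sensor query: mean heart rate over a window of readings,
# with each reading assumed to lie in [40, 200] bpm.
rng = random.Random(42)
readings = [72, 75, 71, 80, 78, 74]
true_mean = sum(readings) / len(readings)
# Changing one reading moves a bounded mean by at most (200 - 40) / n.
sensitivity = (200 - 40) / len(readings)
private_mean = laplace_mechanism(true_mean, sensitivity, epsilon=1.0, rng=rng)
print(round(true_mean, 1), round(private_mean, 1))
```

    A smaller epsilon (a stricter privacy budget) means proportionally larger noise, which is the availability/privacy trade-off the paper's thresholds are designed to manage.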

  8. Quantum nature of the big bang.

    PubMed

    Ashtekar, Abhay; Pawlowski, Tomasz; Singh, Parampreet

    2006-04-14

    Some long-standing issues concerning the quantum nature of the big bang are resolved in the context of homogeneous isotropic models with a scalar field. Specifically, the known results on the resolution of the big-bang singularity in loop quantum cosmology are significantly extended as follows: (i) the scalar field is shown to serve as an internal clock, thereby providing a detailed realization of the "emergent time" idea; (ii) the physical Hilbert space, Dirac observables, and semiclassical states are constructed rigorously; (iii) the Hamiltonian constraint is solved numerically to show that the big bang is replaced by a big bounce. Thanks to the nonperturbative, background independent methods, unlike in other approaches the quantum evolution is deterministic across the deep Planck regime.

  9. Big data analytics to improve cardiovascular care: promise and challenges.

    PubMed

    Rumsfeld, John S; Joynt, Karen E; Maddox, Thomas M

    2016-06-01

    The potential for big data analytics to improve cardiovascular quality of care and patient outcomes is tremendous. However, the application of big data in health care is at a nascent stage, and the evidence to date demonstrating that big data analytics will improve care and outcomes is scant. This Review provides an overview of the data sources and methods that comprise big data analytics, and describes eight areas of application of big data analytics to improve cardiovascular care, including predictive modelling for risk and resource use, population management, drug and medical device safety surveillance, disease and treatment heterogeneity, precision medicine and clinical decision support, quality of care and performance measurement, and public health and research applications. We also delineate the important challenges for big data applications in cardiovascular care, including the need for evidence of effectiveness and safety, the methodological issues such as data quality and validation, and the critical importance of clinical integration and proof of clinical utility. If big data analytics are shown to improve quality of care and patient outcomes, and can be successfully implemented in cardiovascular practice, big data will fulfil its potential as an important component of a learning health-care system.

  10. The Role of Higher Education in the 21st Century: Collaborator or Counterweight?

    ERIC Educational Resources Information Center

    Castagnera, James Ottavio

    2001-01-01

    Suggests that higher education should fill the vacuum left by big labor. Asserts that to do this, higher education must become adept at shifting from the right foot of collaboration with big business and big government to the left foot of confrontation, even at the price of lost corporate or government support, when the issue is academic freedom…

  11. Analyzing big data with the hybrid interval regression methods.

    PubMed

    Huang, Chia-Hui; Yang, Keng-Chieh; Kao, Han-Ying

    2014-01-01

    Big data is a major current trend, exerting significant impacts on information technologies. In big data applications, one of the most pressing issues is dealing with large-scale data sets that often require computation resources provided by public cloud services. How to analyze big data efficiently becomes a big challenge. In this paper, we combine interval regression with the smooth support vector machine (SSVM) to analyze big data. The SSVM was recently proposed as an alternative to the standard SVM and has been shown to be more efficient than the traditional SVM in processing large-scale data. In addition, a soft margin method is proposed to adjust the excursion of the separation margin, making the approach effective in the gray zone where the data distribution and the separation margin between classes are hard to describe.
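
    The smoothing step that distinguishes the SSVM from the standard SVM can be made concrete. The sketch below is a generic illustration of the textbook SSVM smoothing rather than this paper's soft-margin variant, and the sample values are made up: the SSVM replaces the non-differentiable plus function (x)+ appearing in the SVM objective with the smooth approximation x + (1/α)·log(1 + e^(−αx)), which tends to (x)+ as α grows.

```python
import math

def plus(x):
    # The plus function (x)+ = max(x, 0) used in the SVM hinge loss;
    # it is not differentiable at x = 0.
    return max(x, 0.0)

def smooth_plus(x, alpha):
    # SSVM-style smooth approximation:
    # p(x, alpha) = x + (1/alpha) * log(1 + exp(-alpha * x)).
    # It is differentiable everywhere and converges to (x)+ as alpha grows.
    return x + math.log(1.0 + math.exp(-alpha * x)) / alpha

# Compare the two at a few sample points for increasing alpha.
for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(x, plus(x), round(smooth_plus(x, 5.0), 4), round(smooth_plus(x, 50.0), 4))
```

    Because the smoothed objective is differentiable, fast Newton-type solvers can be applied, which is the source of the SSVM's efficiency on large-scale data that the abstract refers to.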

  12. Analyzing Big Data with the Hybrid Interval Regression Methods

    PubMed Central

    Kao, Han-Ying

    2014-01-01

    Big data is a major current trend, exerting significant impacts on information technologies. In big data applications, one of the most pressing issues is dealing with large-scale data sets that often require computation resources provided by public cloud services. How to analyze big data efficiently becomes a big challenge. In this paper, we combine interval regression with the smooth support vector machine (SSVM) to analyze big data. The SSVM was recently proposed as an alternative to the standard SVM and has been shown to be more efficient than the traditional SVM in processing large-scale data. In addition, a soft margin method is proposed to adjust the excursion of the separation margin, making the approach effective in the gray zone where the data distribution and the separation margin between classes are hard to describe. PMID:25143968

  13. Challenges and Opportunities of Big Data in Health Care: A Systematic Review.

    PubMed

    Kruse, Clemens Scott; Goswamy, Rishi; Raval, Yesha; Marawi, Sarah

    2016-11-21

    Big data analytics offers promise in many business sectors, and health care is looking at big data to provide answers to many age-related issues, particularly dementia and chronic disease management. The purpose of this review was to summarize the challenges faced by big data analytics and the opportunities that big data opens in health care. A total of 3 searches were performed for publications between January 1, 2010 and January 1, 2016 (PubMed/MEDLINE, CINAHL, and Google Scholar), and an assessment was made on content germane to big data in health care. From the results of the searches in research databases and Google Scholar (N=28), the authors summarized content and identified 9 and 14 themes under the categories Challenges and Opportunities, respectively. We rank-ordered and analyzed the themes based on the frequency of occurrence. The top challenges were issues of data structure, security, data standardization, storage and transfers, and managerial skills such as data governance. The top opportunities revealed were quality improvement, population management and health, early detection of disease, data quality, structure, and accessibility, improved decision making, and cost reduction. Big data analytics has the potential for positive impact and global implications; however, it must overcome some legitimate obstacles. ©Clemens Scott Kruse, Rishi Goswamy, Yesha Raval, Sarah Marawi. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 21.11.2016.

  14. Big data for health.

    PubMed

    Andreu-Perez, Javier; Poon, Carmen C Y; Merrifield, Robert D; Wong, Stephen T C; Yang, Guang-Zhong

    2015-07-01

    This paper provides an overview of recent developments in big data in the context of biomedical and health informatics. It outlines the key characteristics of big data and how medical and health informatics, translational bioinformatics, sensor informatics, and imaging informatics will benefit from an integrated approach of piecing together different aspects of personalized information from a diverse range of data sources, both structured and unstructured, covering genomics, proteomics, metabolomics, as well as imaging, clinical diagnosis, and long-term continuous physiological sensing of an individual. It is expected that recent advances in big data will expand our knowledge for testing new hypotheses about disease management from diagnosis to prevention to personalized treatment. The rise of big data, however, also raises challenges in terms of privacy, security, data ownership, data stewardship, and governance. This paper discusses some of the existing activities and future opportunities related to big data for health, outlining some of the key underlying issues that need to be tackled.

  15. Semantic Web technologies for the big data in life sciences.

    PubMed

    Wu, Hongyan; Yamaguchi, Atsuko

    2014-08-01

    The life sciences field is entering an era of big data with the breakthroughs of science and technology. More and more big data-related projects and activities are being performed in the world. Life sciences data generated by new technologies are continuing to grow in not only size but also variety and complexity, with great speed. To ensure that big data has a major influence in the life sciences, comprehensive data analysis across multiple data sources and even across disciplines is indispensable. The increasing volume of data and the heterogeneous, complex varieties of data are two principal issues mainly discussed in life science informatics. The ever-evolving next-generation Web, characterized as the Semantic Web, is an extension of the current Web, aiming to provide information for not only humans but also computers to semantically process large-scale data. The paper presents a survey of big data in life sciences, big data related projects and Semantic Web technologies. The paper introduces the main Semantic Web technologies and their current situation, and provides a detailed analysis of how Semantic Web technologies address the heterogeneous variety of life sciences big data. The paper helps to understand the role of Semantic Web technologies in the big data era and how they provide a promising solution for the big data in life sciences.
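
    The idea of representing life-sciences facts as machine-processable triples, which underlies RDF and SPARQL, can be illustrated without any Semantic Web stack. The Python sketch below is a toy, hand-rolled triple store with hypothetical gene/drug facts (real work would use an RDF library and a SPARQL endpoint); it shows how pattern matching over subject-predicate-object triples lets a query join facts that could come from separate heterogeneous sources.

```python
# Minimal triple store: facts as (subject, predicate, object) tuples.
triples = {
    ("BRCA1", "is_a", "Gene"),
    ("BRCA1", "associated_with", "BreastCancer"),
    ("BreastCancer", "is_a", "Disease"),
    ("Olaparib", "targets", "BRCA1"),
    ("Olaparib", "is_a", "Drug"),
}

def match(pattern, store):
    # Return variable bindings (variables start with '?') for every
    # triple in the store matching the single pattern, SPARQL-style.
    results = []
    for triple in store:
        binding, ok = {}, True
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                if term in binding and binding[term] != value:
                    ok = False
                    break
                binding[term] = value
            elif term != value:
                ok = False
                break
        if ok:
            results.append(binding)
    return results

# "Which drugs target a gene associated with BreastCancer?" - a join
# of two patterns over the shared variable ?g.
genes = {b["?g"] for b in match(("?g", "associated_with", "BreastCancer"), triples)}
drugs = {b["?d"] for b in match(("?d", "targets", "?g"), triples) if b["?g"] in genes}
print(sorted(drugs))  # → ['Olaparib']
```

    Because every fact has the same uniform shape, new sources can be merged by simply adding triples, which is one reason Semantic Web technologies suit the heterogeneous variety of life-sciences big data discussed above.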

  16. Internet of things and Big Data as potential solutions to the problems in waste electrical and electronic equipment management: An exploratory study.

    PubMed

    Gu, Fu; Ma, Buqing; Guo, Jianfeng; Summers, Peter A; Hall, Philip

    2017-10-01

    Management of Waste Electrical and Electronic Equipment (WEEE) is a vital part of solid waste management, but some difficult issues still require attention. This paper investigates the potential of applying the Internet of Things (IoT) and Big Data as solutions to WEEE management problems. The massive data generated during the production, consumption, and disposal of Electrical and Electronic Equipment (EEE) fit the characteristics of Big Data. Using state-of-the-art communication technologies, the IoT derives the WEEE "Big Data" from the life cycle of EEE, and Big Data technologies process the WEEE "Big Data" to support decision making in WEEE management. A framework for implementing the IoT and Big Data technologies is proposed, and its multiple layers are illustrated. Case studies with potential application scenarios of the framework are presented and discussed. As an unprecedented exploration, the combined application of the IoT and Big Data technologies in WEEE management brings a series of opportunities as well as new challenges. This study provides insights and visions for stakeholders in solving WEEE management problems in the context of IoT and Big Data. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. ELM Meets Urban Big Data Analysis: Case Studies

    PubMed Central

    Chen, Huajun; Chen, Jiaoyan

    2016-01-01

    In recent years, the rapid progress of urban computing has engendered big data issues that create both opportunities and challenges. The heterogeneity and sheer volume of data, together with the big difference between the physical and virtual worlds, have caused many problems in quickly solving practical problems in urban computing. In this paper, we propose a general application framework of the extreme learning machine (ELM) for urban computing. We present several real case studies of the framework, such as smog-related health hazard prediction and optimal retail store placement. Experiments involving urban data in China show the efficiency, accuracy, and flexibility of our proposed framework. PMID:27656203

  18. [Big data analysis and evidence-based medicine: controversy or cooperation].

    PubMed

    Chen, Xinzu; Hu, Jiankun

    2016-01-01

    The development of evidence-based medicine marked an important milestone in the transition from empirical medicine to evidence-driven modern medicine. With the explosion of biomedical data, the rise of big data analysis can efficiently address exploratory questions and decision-making issues in biomedicine and healthcare. The current problem in China is that big data analysis is still not well conducted or applied to problems such as clinical decision-making and public health policy; the question should not be a debate over whether big data analysis can replace evidence-based medicine. Therefore, we should clearly understand that, whether for evidence-based medicine or big data analysis, the most critical infrastructure is the substantial work of designing, constructing, and collecting the original databases in China.

  19. [Big data and their perspectives in radiation therapy].

    PubMed

    Guihard, Sébastien; Thariat, Juliette; Clavier, Jean-Baptiste

    2017-02-01

    The concept of big data indicates a change of scale in the use of data and data aggregation into large databases through improved computer technology. One of the current challenges in the creation of big data in the context of radiation therapy is the transformation of routine care items into dark data, i.e. data not yet collected, and the fusion of databases collecting different types of information (dose-volume histograms and toxicity data for example). Processes and infrastructures devoted to big data collection should not impact negatively on the doctor-patient relationship, the general process of care or the quality of the data collected. The use of big data requires a collective effort of physicians, physicists, software manufacturers and health authorities to create, organize and exploit big data in radiotherapy and, beyond, oncology. Big data involve a new culture to build an appropriate infrastructure legally and ethically. Processes and issues are discussed in this article. Copyright © 2016 Société Française du Cancer. Published by Elsevier Masson SAS. All rights reserved.

  20. Big Data for Infectious Disease Surveillance and Modeling

    PubMed Central

    Bansal, Shweta; Chowell, Gerardo; Simonsen, Lone; Vespignani, Alessandro; Viboud, Cécile

    2016-01-01

    We devote a special issue of the Journal of Infectious Diseases to review the recent advances of big data in strengthening disease surveillance, monitoring medical adverse events, informing transmission models, and tracking patient sentiments and mobility. We consider a broad definition of big data for public health, one encompassing patient information gathered from high-volume electronic health records and participatory surveillance systems, as well as mining of digital traces such as social media, Internet searches, and cell-phone logs. We introduce nine independent contributions to this special issue and highlight several cross-cutting areas that require further research, including representativeness, biases, volatility, and validation, and the need for robust statistical and hypotheses-driven analyses. Overall, we are optimistic that the big-data revolution will vastly improve the granularity and timeliness of available epidemiological information, with hybrid systems augmenting rather than supplanting traditional surveillance systems, and better prospects for accurate infectious diseases models and forecasts. PMID:28830113

  1. Big data analysis framework for healthcare and social sectors in Korea.

    PubMed

    Song, Tae-Min; Ryu, Seewon

    2015-01-01

    We reviewed applications of big data analysis of healthcare and social services in developed countries, and subsequently devised a framework for such an analysis in Korea. We reviewed the status of implementing big data analysis of health care and social services in developed countries, and strategies used by the Ministry of Health and Welfare of Korea (Government 3.0). We formulated a conceptual framework of big data in the healthcare and social service sectors at the national level. As a specific case, we designed a process and method of social big data analysis on suicide buzz. Developed countries (e.g., the United States, the UK, Singapore, Australia, and even OECD and EU) are emphasizing the potential of big data, and using it as a tool to solve their long-standing problems. Big data strategies for the healthcare and social service sectors were formulated based on an ICT-based policy of current government and the strategic goals of the Ministry of Health and Welfare. We suggest a framework of big data analysis in the healthcare and welfare service sectors separately and assigned them tentative names: 'health risk analysis center' and 'integrated social welfare service network'. A framework of social big data analysis is presented by applying it to the prevention and proactive detection of suicide in Korea. There are some concerns with the utilization of big data in the healthcare and social welfare sectors. Thus, research on these issues must be conducted so that sophisticated and practical solutions can be reached.

  2. [Infant botulism].

    PubMed

    Falk, Absalom; Afriat, Amichay; Hubary, Yechiel; Herzog, Lior; Eisenkraft, Arik

    2014-01-01

    Infant botulism is a paralytic syndrome that results from the ingestion by infants of spores of the toxin-secreting bacterium Clostridium botulinum. In contrast to botulism in adults, treating infant botulism with horse-derived antiserum was not approved due to several safety issues. This restriction led to the development of Human Botulism Immune Globulin Intravenous (BIG-IV, marketed as BabyBIG). In this article we review infant botulism and the advantages of treating it with BIG-IV.

  3. Comment from the Editor to the Special Issue: “Big Data and Precision Medicine Series I: Lung Cancer Early Diagnosis”

    PubMed Central

    Spaggiari, Lorenzo

    2018-01-01

    With this Editorial we want to present the Special Issue “Big Data and Precision Medicine Series I: Lung Cancer Early Diagnosis” to the scientific community, which aims to gather experts on the early detection of lung cancer in order to implement common efforts in the fight against cancer. PMID:29425180

  4. Satellite Telemetry and Command using Big LEO Mobile Telecommunications Systems

    NASA Technical Reports Server (NTRS)

    Huegel, Fred

    1998-01-01

    Various issues associated with satellite telemetry and command using Big LEO mobile telecommunications systems are presented in viewgraph form. Specific topics include: 1) Commercial Satellite system overviews: Globalstar, ICO, and Iridium; 2) System capabilities and cost reduction; 3) Satellite constellations and contact limitations; 4) Capabilities of Globalstar, ICO and Iridium with emphasis on Globalstar; and 5) Flight transceiver issues and security.

  5. Big data and clinicians: a review on the state of the science.

    PubMed

    Wang, Weiqi; Krishnan, Eswar

    2014-01-17

    In the past few decades, medically related data collection has seen a huge increase, a phenomenon referred to as big data. These huge datasets bring challenges in storage, processing, and analysis. In clinical medicine, big data is expected to play an important role in identifying causality of patient symptoms, in predicting hazards of disease incidence or recurrence, and in improving primary-care quality. The objective of this review was to provide an overview of the features of clinical big data; to describe a few commonly employed computational algorithms, statistical methods, and software toolkits for data manipulation and analysis; and to discuss the challenges and limitations in this realm. We conducted a literature review to identify studies on big data in medicine, especially clinical medicine. We used different combinations of keywords to search PubMed, Science Direct, Web of Knowledge, and Google Scholar for literature of interest from the past 10 years. This paper reviewed studies that analyzed clinical big data and discussed issues related to storage and analysis of this type of data. Big data is becoming a common feature of biological and clinical studies. Researchers who use clinical big data face multiple challenges, and the data itself has limitations. It is imperative that methodologies for data analysis keep pace with our ability to collect and store data.

  6. The Challenge of Handling Big Data Sets in the Sensor Web

    NASA Astrophysics Data System (ADS)

    Autermann, Christian; Stasch, Christoph; Jirka, Simon

    2016-04-01

    More and more Sensor Web components are deployed in different domains such as hydrology, oceanography or air quality in order to make observation data accessible via the Web. However, besides variability of data formats and protocols in environmental applications, the fast growing volume of data with high temporal and spatial resolution is imposing new challenges for Sensor Web technologies when sharing observation data and metadata about sensors. Variability, volume and velocity are the core issues that are addressed by Big Data concepts and technologies. Most solutions in the geospatial sector focus on remote sensing and raster data, whereas big in-situ observation data sets relying on vector features require novel approaches. Hence, in order to deal with big data sets in infrastructures for observational data, the following questions need to be answered: 1. How can big heterogeneous spatio-temporal datasets be organized, managed, and provided to Sensor Web applications? 2. How can views on big data sets and derived information products be made accessible in the Sensor Web? 3. How can big observation data sets be processed efficiently? We illustrate these challenges with examples from the marine domain and outline how we address these challenges. We therefore show how big data approaches from mainstream IT can be re-used and applied to Sensor Web application scenarios.

  7. Based Real Time Remote Health Monitoring Systems: A Review on Patients Prioritization and Related "Big Data" Using Body Sensors Information and Communication Technology.

    PubMed

    Kalid, Naser; Zaidan, A A; Zaidan, B B; Salman, Omar H; Hashim, M; Muzammil, H

    2017-12-29

    The growing worldwide population has increased the need for technologies, computerised software algorithms and smart devices that can monitor and assist patients anytime and anywhere and thus enable them to lead independent lives. The real-time remote monitoring of patients is an important issue in telemedicine. In the provision of healthcare services, patient prioritisation poses a significant challenge because of the complex decision-making process it involves when patients are considered 'big data'. To our knowledge, no study has highlighted the link between 'big data' characteristics and real-time remote healthcare monitoring in the patient prioritisation process, as well as the inherent challenges involved. Thus, we present comprehensive insights into the elements of big data characteristics according to the six 'Vs': volume, velocity, variety, veracity, value and variability. Each of these elements is presented and connected to a related part in the study of the connection between patient prioritisation and real-time remote healthcare monitoring systems. Then, we determine the weak points and recommend solutions as potential future work. This study makes the following contributions. (1) The link between big data characteristics and real-time remote healthcare monitoring in the patient prioritisation process is described. (2) The open issues and challenges for big data used in the patient prioritisation process are emphasised. (3) As a recommended solution, decision making using multiple criteria, such as vital signs and chief complaints, is utilised to prioritise the big data of patients with chronic diseases on the basis of the most urgent cases.
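The multi-criteria prioritisation the review describes can be illustrated with a minimal sketch. The criteria, weights, and scoring thresholds below are hypothetical illustrations, not values taken from the paper:

```python
# Hypothetical multi-criteria prioritisation of remote-monitoring patients.
# Each criterion is scored 0 (normal) to 2 (critical) and combined with
# assumed weights; higher totals indicate more urgent cases.

WEIGHTS = {"heart_rate": 0.4, "spo2": 0.4, "chief_complaint": 0.2}

def score_heart_rate(bpm):
    if 60 <= bpm <= 100: return 0  # normal resting range
    if 50 <= bpm <= 120: return 1  # borderline
    return 2                       # critical

def score_spo2(pct):
    if pct >= 95: return 0
    if pct >= 90: return 1
    return 2

def score_complaint(severity):
    # Assumes the chief complaint is already coded 0-2 by triage staff.
    return severity

def priority(patient):
    return (WEIGHTS["heart_rate"] * score_heart_rate(patient["bpm"])
            + WEIGHTS["spo2"] * score_spo2(patient["spo2"])
            + WEIGHTS["chief_complaint"] * score_complaint(patient["severity"]))

patients = [
    {"id": "p1", "bpm": 72, "spo2": 98, "severity": 0},
    {"id": "p2", "bpm": 130, "spo2": 88, "severity": 2},
    {"id": "p3", "bpm": 110, "spo2": 93, "severity": 1},
]

# Sort the monitoring queue most-urgent first.
queue = sorted(patients, key=priority, reverse=True)
print([p["id"] for p in queue])  # ['p2', 'p3', 'p1']
```

In a real big-data setting this scoring would run as a streaming computation over the sensor feeds; the sketch only shows the decision rule itself.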

  8. Contracting for Agile Software Development in the Department of Defense: An Introduction

    DTIC Science & Technology

    2015-08-01

    Requirements are fixed at a more granular level; reviews of the work product happen more frequently and assess each individual increment rather than a "big bang" ...boundaries than "big-bang" development. The implementation of incremental or progressive reviews enables just that: any issues identified at the time of the... the contract needs to support the delivery of deployable software at defined increments/intervals, rather than incentivizing "big-bang" efforts or

  9. Big Data and Biomedical Informatics: A Challenging Opportunity

    PubMed Central

    2014-01-01

    Summary Big data are receiving increasing attention in biomedicine and healthcare. It is therefore important to understand why big data are assuming a crucial role for the biomedical informatics community. The capability of handling big data is becoming an enabler to carry out unprecedented research studies and to implement new models of healthcare delivery. Therefore, it is first necessary to deeply understand the four elements that constitute big data, namely Volume, Variety, Velocity, and Veracity, and their meaning in practice. Then, it is mandatory to understand where big data are present, and where they can be beneficially collected. There are research fields, such as translational bioinformatics, which need to rely on big data technologies to withstand the shock wave of data that is generated every day. Other areas, ranging from epidemiology to clinical care, can benefit from the exploitation of the large amounts of data that are nowadays available, from personal monitoring to primary care. However, building big data-enabled systems carries relevant implications in terms of reproducibility of research studies and management of privacy and data access; proper actions should be taken to deal with these issues. An interesting consequence of the big data scenario is the availability of new software, methods, and tools, such as map-reduce, cloud computing, and concept drift machine learning algorithms, which will not only contribute to big data research, but may be beneficial in many biomedical informatics applications. The way forward with the big data opportunity will require properly applied engineering principles to design studies and applications, to avoid preconceptions or over-enthusiasm, to fully exploit the available technologies, and to improve data processing and data management regulations. PMID:24853034

  10. Big Data in Healthcare - Defining the Digital Persona through User Contexts from the Micro to the Macro. Contribution of the IMIA Organizational and Social Issues WG.

    PubMed

    Kuziemsky, C E; Monkman, H; Petersen, C; Weber, J; Borycki, E M; Adams, S; Collins, S

    2014-08-15

    While big data offers enormous potential for improving healthcare delivery, many of the existing claims concerning big data in healthcare are based on anecdotal reports and theoretical vision papers, rather than on scientific evidence from empirical research. Historically, the implementation of health information technology has resulted in unintended consequences at the individual, organizational and social levels, but these unintended consequences of collecting data have remained unaddressed in the literature on big data. The objective of this paper is to provide insights into big data from the perspective of people and social and organizational considerations. We draw upon the concept of persona to define the digital persona as the intersection of data, tasks and context for different user groups. We then describe how the digital persona can serve as a framework for understanding the sociotechnical considerations of big data implementation, and discuss the digital persona in the context of micro, meso and macro user groups across the 3 Vs of big data. We provide insights into the potential benefits and challenges of applying big data approaches to healthcare, as well as how to position these approaches to achieve health system objectives such as patient safety or patient-engaged care delivery. We also provide a framework for defining the digital persona at a micro, meso and macro level to help understand the user contexts of big data solutions. While big data provides great potential for improving healthcare delivery, it is essential that we consider the individual, social and organizational contexts of data use when implementing big data solutions.

  11. Big data and biomedical informatics: a challenging opportunity.

    PubMed

    Bellazzi, R

    2014-05-22

    Big data are receiving increasing attention in biomedicine and healthcare. It is therefore important to understand why big data are assuming a crucial role for the biomedical informatics community. The capability of handling big data is becoming an enabler to carry out unprecedented research studies and to implement new models of healthcare delivery. Therefore, it is first necessary to deeply understand the four elements that constitute big data, namely Volume, Variety, Velocity, and Veracity, and their meaning in practice. Then, it is mandatory to understand where big data are present, and where they can be beneficially collected. There are research fields, such as translational bioinformatics, which need to rely on big data technologies to withstand the shock wave of data that is generated every day. Other areas, ranging from epidemiology to clinical care, can benefit from the exploitation of the large amounts of data that are nowadays available, from personal monitoring to primary care. However, building big data-enabled systems carries relevant implications in terms of reproducibility of research studies and management of privacy and data access; proper actions should be taken to deal with these issues. An interesting consequence of the big data scenario is the availability of new software, methods, and tools, such as map-reduce, cloud computing, and concept drift machine learning algorithms, which will not only contribute to big data research, but may be beneficial in many biomedical informatics applications. The way forward with the big data opportunity will require properly applied engineering principles to design studies and applications, to avoid preconceptions or over-enthusiasm, to fully exploit the available technologies, and to improve data processing and data management regulations.

  12. Reseeding big sagebrush: Techniques and issues

    Treesearch

    Nancy L. Shaw; Ann M. DeBolt; Roger Rosentreter

    2005-01-01

    Reestablishing big sagebrush on rangelands now dominated by native perennial grasses, introduced perennial grasses, or exotic annual grasses, particularly cheatgrass (Bromus tectorum), serves to stabilize soil, improve moisture availability and nutrient recyling, increase biological diversity, and foster community stability and resiliency. A first...

  13. m-Health 2.0: New perspectives on mobile health, machine learning and big data analytics.

    PubMed

    Istepanian, Robert S H; Al-Anzi, Turki

    2018-06-08

    Mobile health (m-Health) has been repeatedly called the biggest technological breakthrough of our modern times. Similarly, the concept of big data in the context of healthcare is considered one of the transformative drivers for intelligent healthcare delivery systems. In recent years, big data has become increasingly synonymous with mobile health; however, the key challenges of 'big data and mobile health' remain largely untackled. This is becoming particularly important with the continued deluge of structured and unstructured data sets generated on a daily basis by the proliferation of mobile health applications within different healthcare systems and products globally. The aim of this paper is twofold. First, we present the relevant big data issues from the mobile health (m-Health) perspective, in particular from the technological areas and building blocks (communications, sensors and computing) of mobile health and the newly defined m-Health 2.0 concept. Second, we present the relevant rapprochement issues of big m-Health data analytics with m-Health. We also present the current and future roles of machine and deep learning within the current smartphone-centric m-Health model. The critical balance between these two important areas will depend on how different stakeholders, from patients, clinicians and healthcare providers to medical and m-Health market businesses and regulators, perceive these developments. These new perspectives are essential for better understanding the fine balance between new insights into how intelligent and connected future mobile health systems will be, and the inherent risks and clinical complexities associated with the big data sets and analytical tools used in these systems.
These topics will be the subject of extensive work and investigation in the foreseeable future in the areas of data analytics and computational and artificial intelligence methods applied to mobile health. Copyright © 2018 Elsevier Inc. All rights reserved.

  14. Big data in forensic science and medicine.

    PubMed

    Lefèvre, Thomas

    2018-07-01

    In less than a decade, big data in medicine has become quite a phenomenon, and many biomedical disciplines have gained their own tribune on the topic. Perspectives and debates are flourishing while a consensual definition of big data is still lacking. The 3Vs paradigm is frequently evoked to define the big data principles and stands for Volume, Variety and Velocity. Even according to this paradigm, genuine big data studies are still scarce in medicine and may not meet all expectations. On one hand, techniques usually presented as specific to big data, such as machine learning techniques, are supposed to support the ambition of personalized, predictive and preventive medicine. These techniques are mostly far from new; the most ancient are more than 50 years old. On the other hand, several issues closely related to the properties of big data and inherited from other scientific fields such as artificial intelligence are often underestimated, if not ignored. Besides, a few papers temper the almost unanimous big data enthusiasm and are worth attention since they delineate what is at stake. In this context, forensic science is still awaiting its position papers, as well as a comprehensive outline of what kind of contribution big data could bring to the field. The present situation calls for definitions and actions to rationally guide research and practice in big data. It is an opportunity for grounding a truly interdisciplinary, evidence-based approach in forensic science and medicine. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  15. Big Data Analysis Framework for Healthcare and Social Sectors in Korea

    PubMed Central

    Song, Tae-Min

    2015-01-01

    Objectives We reviewed applications of big data analysis of healthcare and social services in developed countries, and subsequently devised a framework for such an analysis in Korea. Methods We reviewed the status of implementing big data analysis of health care and social services in developed countries, and strategies used by the Ministry of Health and Welfare of Korea (Government 3.0). We formulated a conceptual framework of big data in the healthcare and social service sectors at the national level. As a specific case, we designed a process and method of social big data analysis on suicide buzz. Results Developed countries (e.g., the United States, the UK, Singapore, Australia, and even the OECD and EU) are emphasizing the potential of big data and using it as a tool to solve their long-standing problems. Big data strategies for the healthcare and social service sectors were formulated based on the ICT-based policy of the current government and the strategic goals of the Ministry of Health and Welfare. We suggest separate frameworks of big data analysis for the healthcare and welfare service sectors and assign them tentative names: 'health risk analysis center' and 'integrated social welfare service network'. A framework of social big data analysis is presented by applying it to the prevention and proactive detection of suicide in Korea. Conclusions There are some concerns with the utilization of big data in the healthcare and social welfare sectors. Thus, research on these issues must be conducted so that sophisticated and practical solutions can be reached. PMID:25705552

  16. Challenges and potential solutions for big data implementations in developing countries.

    PubMed

    Luna, D; Mayan, J C; García, M J; Almerares, A A; Househ, M

    2014-08-15

    The volume of data, the velocity with which they are generated, and their variety and lack of structure hinder their use. This creates the need to change the way information is captured, stored, processed, and analyzed, leading to the paradigm shift called Big Data. To describe the challenges and possible solutions for developing countries when implementing Big Data projects in the health sector. A non-systematic review of the literature was performed in PubMed and Google Scholar. The following keywords were used: "big data", "developing countries", "data mining", "health information systems", and "computing methodologies". A thematic review of selected articles was performed. There are challenges when implementing any Big Data program including exponential growth of data, special infrastructure needs, need for a trained workforce, need to agree on interoperability standards, privacy and security issues, and the need to include people, processes, and policies to ensure their adoption. Developing countries have particular characteristics that hinder further development of these projects. The advent of Big Data promises great opportunities for the healthcare field. In this article, we attempt to describe the challenges developing countries would face and enumerate the options to be used to achieve successful implementations of Big Data programs.

  17. U.S. strategic petroleum reserve Big Hill 114 leak analysis 2012.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lord, David L.; Roberts, Barry L.; Lord, Anna C. Snider

    2013-06-01

    This report addresses recent well integrity issues related to cavern 114 at the Big Hill Strategic Petroleum Reserve site. DM Petroleum Operations, M&O contractor for the U.S. Strategic Petroleum Reserve, recognized an apparent leak in Big Hill cavern well 114A in late summer 2012, and provided written notice to the State of Texas as required by law. DM has since isolated the leak in well A with a temporary plug, and is planning on remediating both 114 A- and B-wells with liners. In this report Sandia provides an analysis of the apparent leak that includes: (i) estimated leak volume, (ii) recommendation for the operating pressure to maintain in the cavern between temporary and permanent fixes for the well integrity issues, and (iii) identification of other caverns or wells at Big Hill that should be monitored closely in light of the sequence of failures there in the last several years.

  18. The Need for a Definition of Big Data for Nursing Science: A Case Study of Disaster Preparedness.

    PubMed

    Wong, Ho Ting; Chiang, Vico Chung Lim; Choi, Kup Sze; Loke, Alice Yuen

    2016-10-17

    The rapid development of technology has made enormous volumes of data available and accessible anytime and anywhere around the world. Data scientists call this change a data era and have introduced the term "Big Data", which has drawn the attention of nursing scholars. Nevertheless, the concept of Big Data is quite fuzzy and there is no agreement on its definition among researchers of different disciplines. Without a clear consensus on this issue, nursing scholars who are relatively new to the concept may consider Big Data to be merely a dataset of a bigger size. Having a suitable definition for nurse researchers in their context of research and practice is essential for the advancement of nursing research. In view of the need for a better understanding of what Big Data is, the aim of this paper is to explore and discuss the concept. Furthermore, an example of a Big Data research study on disaster nursing preparedness involving six million patient records is used for discussion. The example demonstrates that a Big Data analysis can be conducted from many more perspectives than would be possible in traditional sampling, and is superior to traditional sampling. Experience gained from the process of using Big Data in this study will shed light on future opportunities for conducting evidence-based nursing research to achieve competence in disaster nursing.

  19. Big Data and Clinicians: A Review on the State of the Science

    PubMed Central

    Wang, Weiqi

    2014-01-01

    Background In the past few decades, medically related data collection has seen a huge increase, a phenomenon referred to as big data. These huge datasets bring challenges in storage, processing, and analysis. In clinical medicine, big data is expected to play an important role in identifying causality of patient symptoms, in predicting hazards of disease incidence or recurrence, and in improving primary-care quality. Objective The objective of this review was to provide an overview of the features of clinical big data; to describe a few commonly employed computational algorithms, statistical methods, and software toolkits for data manipulation and analysis; and to discuss the challenges and limitations in this realm. Methods We conducted a literature review to identify studies on big data in medicine, especially clinical medicine. We used different combinations of keywords to search PubMed, Science Direct, Web of Knowledge, and Google Scholar for literature of interest from the past 10 years. Results This paper reviewed studies that analyzed clinical big data and discussed issues related to storage and analysis of this type of data. Conclusions Big data is becoming a common feature of biological and clinical studies. Researchers who use clinical big data face multiple challenges, and the data itself has limitations. It is imperative that methodologies for data analysis keep pace with our ability to collect and store data. PMID:25600256

  20. The Need for a Definition of Big Data for Nursing Science: A Case Study of Disaster Preparedness

    PubMed Central

    Wong, Ho Ting; Chiang, Vico Chung Lim; Choi, Kup Sze; Loke, Alice Yuen

    2016-01-01

    The rapid development of technology has made enormous volumes of data available and accessible anytime and anywhere around the world. Data scientists call this change a data era and have introduced the term “Big Data”, which has drawn the attention of nursing scholars. Nevertheless, the concept of Big Data is quite fuzzy and there is no agreement on its definition among researchers of different disciplines. Without a clear consensus on this issue, nursing scholars who are relatively new to the concept may consider Big Data to be merely a dataset of a bigger size. Having a suitable definition for nurse researchers in their context of research and practice is essential for the advancement of nursing research. In view of the need for a better understanding of what Big Data is, the aim of this paper is to explore and discuss the concept. Furthermore, an example of a Big Data research study on disaster nursing preparedness involving six million patient records is used for discussion. The example demonstrates that a Big Data analysis can be conducted from many more perspectives than would be possible in traditional sampling, and is superior to traditional sampling. Experience gained from the process of using Big Data in this study will shed light on future opportunities for conducting evidence-based nursing research to achieve competence in disaster nursing. PMID:27763525

  1. [Big data in official statistics].

    PubMed

    Zwick, Markus

    2015-08-01

    The concept of "big data" stands to change the face of official statistics over the coming years, having an impact on almost all aspects of data production. The tasks of future statisticians will not necessarily be to produce new data, but rather to identify and make use of existing data to adequately describe social and economic phenomena. Until big data can be used correctly in official statistics, a lot of questions need to be answered and problems solved: the quality of data, data protection, privacy, and sustainable availability are some of the more pressing issues to be addressed. The essential skills of official statisticians will undoubtedly change, and this implies a number of challenges to be faced by statistical education systems, in universities, and inside the statistical offices. The national statistical offices of the European Union have agreed on a concrete strategy for exploring the possibilities of big data for official statistics, by means of the Big Data Roadmap and Action Plan 1.0. This is an important first step and will have a significant influence on implementing the concept of big data inside the statistical offices of Germany.

  2. From big data to deep insight in developmental science.

    PubMed

    Gilmore, Rick O

    2016-01-01

    The use of the term 'big data' has grown substantially over the past several decades and is now widespread. In this review, I ask what makes data 'big' and what implications the size, density, or complexity of datasets have for the science of human development. A survey of existing datasets illustrates how existing large, complex, multilevel, and multimeasure data can reveal the complexities of developmental processes. At the same time, significant technical, policy, ethics, transparency, cultural, and conceptual issues associated with the use of big data must be addressed. Most big developmental science data are currently hard to find and cumbersome to access, the field lacks a culture of data sharing, and there is no consensus about who owns or should control research data. But, these barriers are dissolving. Developmental researchers are finding new ways to collect, manage, store, share, and enable others to reuse data. This promises a future in which big data can lead to deeper insights about some of the most profound questions in behavioral science. © 2016 The Authors. WIREs Cognitive Science published by Wiley Periodicals, Inc.

  3. 76 FR 59420 - Proposed Information Collection; Alaska Guide Service Evaluation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-26

    ... Office of Management and Budget (OMB) to approve the information collection (IC) described below. As... lands, we issue permits for commercial guide services, including big game hunting, sport fishing... information during the competitive selection process for big game and sport fishing guide permits to evaluate...

  4. [Chapter 1. From the study of risks to the translation of the ethical issues of Big Data in Health].

    PubMed

    Béranger, J

    2017-10-27

    Big Data is substantially disrupting the medical microcosm, to the point of challenging the Hippocratic paradigms of medicine as we have known them. A reflection on the risks associated with the ethical issues around personal health data is therefore required. Our study is based on numerous field surveys, targeted interviews with different actors, as well as a literature search on the subject. This work led to an innovative method for aligning the concepts of an ontology of risks with those of an ontology of the ethical requirements of Big Data in health. The aim is to provide meaning and recommendations for the creation, implementation and use of personal data, in order to better control it.

  5. Medical Big Data Warehouse: Architecture and System Design, a Case Study: Improving Healthcare Resources Distribution.

    PubMed

    Sebaa, Abderrazak; Chikh, Fatima; Nouicer, Amina; Tari, AbdelKamel

    2018-02-19

    The huge increase in medical devices and clinical applications generating enormous data has raised a big issue in managing, processing, and mining this massive amount of data. Indeed, traditional data warehousing frameworks cannot be effective when managing the volume, variety, and velocity of current medical applications. As a result, several data warehouses face many issues over medical data and many challenges need to be addressed. New solutions have emerged, and Hadoop is one of the best examples; it can be used to process these streams of medical data. However, without an efficient system design and architecture, the performance gains will be neither significant nor valuable for medical managers. In this paper, we provide a short review of the literature on research issues of traditional data warehouses and present some important Hadoop-based data warehouses. In addition, a Hadoop-based architecture and a conceptual data model for designing a medical Big Data warehouse are given. In our case study, we provide implementation details of a big data warehouse based on the proposed architecture and data model on the Apache Hadoop platform, to ensure an optimal allocation of health resources.
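Hadoop itself is beyond the scope of a short sketch, but the map-shuffle-reduce pattern such a warehouse relies on for aggregating health-resource usage can be illustrated in plain Python. The record fields and region names below are hypothetical, not taken from the paper:

```python
from collections import defaultdict

# Hypothetical visit records as they might sit in a medical data warehouse.
visits = [
    {"region": "north", "beds_used": 3},
    {"region": "south", "beds_used": 1},
    {"region": "north", "beds_used": 2},
    {"region": "south", "beds_used": 4},
]

# Map step: emit one (key, value) pair per record.
mapped = [(v["region"], v["beds_used"]) for v in visits]

# Shuffle step: group values by key.
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce step: aggregate each group; here, total bed usage per region,
# the kind of figure a manager would use to reallocate resources.
usage = {region: sum(values) for region, values in groups.items()}
print(usage)  # {'north': 5, 'south': 5}
```

On a Hadoop cluster the map and reduce steps would run as distributed tasks over the warehouse's partitioned storage; the logic per record and per group is the same.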

  6. Challenges and Potential Solutions for Big Data Implementations in Developing Countries

    PubMed Central

    Mayan, J.C.; García, M.J.; Almerares, A.A.; Househ, M.

    2014-01-01

    Background: The volume of data, the velocity with which they are generated, and their variety and lack of structure hinder their use. This creates the need to change the way information is captured, stored, processed, and analyzed, leading to the paradigm shift called Big Data. Objectives: To describe the challenges and possible solutions for developing countries when implementing Big Data projects in the health sector. Methods: A non-systematic review of the literature was performed in PubMed and Google Scholar. The following keywords were used: “big data”, “developing countries”, “data mining”, “health information systems”, and “computing methodologies”. A thematic review of selected articles was performed. Results: There are challenges when implementing any Big Data program, including exponential growth of data, special infrastructure needs, the need for a trained workforce, the need to agree on interoperability standards, privacy and security issues, and the need to include people, processes, and policies to ensure adoption. Developing countries have particular characteristics that hinder further development of these projects. Conclusions: The advent of Big Data promises great opportunities for the healthcare field. In this article, we attempt to describe the challenges developing countries would face and enumerate the options available for achieving successful implementations of Big Data programs. PMID:25123719

  7. Contemporary Research Discourse and Issues on Big Data in Higher Education

    ERIC Educational Resources Information Center

    Daniel, Ben

    2017-01-01

    The increasing availability of digital data in higher education provides an extraordinary resource for researchers to undertake educational research, targeted at understanding challenges facing the sector. Big data can stimulate new ways to transform processes relating to learning and teaching, and helps identify useful data, sources of evidence…

  8. Commentary: Deja Vu All Over Again: What Will It Take To Solve Big Instructional Problems.

    ERIC Educational Resources Information Center

    Ysseldyke, Jim

    2000-01-01

    Presents a response to "School Psychology from an Instructional Perspective: Solving Big, Not Little Problems" (this issue). The author supports Shapiro's arguments but worries much about the barriers that would have to be overcome to enable such a paradigm shift to occur. (GCP)

  9. 78 FR 26067 - General Management Plan, Draft Environmental Impact Statement, Big Thicket National Preserve, Texas

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-03

    .... Alternative 2, the NPS preferred alternative, would support a broad ecosystem approach for preserve management... management of cross-boundary resource issues and the importance of encouraging partnerships to address and... Management Plan, Draft Environmental Impact Statement, Big Thicket National Preserve, Texas AGENCY: National...

  10. Big data science: A literature review of nursing research exemplars.

    PubMed

    Westra, Bonnie L; Sylvia, Martha; Weinfurter, Elizabeth F; Pruinelli, Lisiane; Park, Jung In; Dodd, Dianna; Keenan, Gail M; Senk, Patricia; Richesson, Rachel L; Baukner, Vicki; Cruz, Christopher; Gao, Grace; Whittenburg, Luann; Delaney, Connie W

    Big data and cutting-edge analytic methods in nursing research challenge nurse scientists to extend the data sources and analytic methods used for discovering and translating knowledge. The purpose of this study was to identify, analyze, and synthesize exemplars of big data nursing research applied to practice and disseminated in key nursing informatics, general biomedical informatics, and nursing research journals. A literature review of studies published between 2009 and 2015 was conducted. There were 650 journal articles identified in 17 key nursing informatics, general biomedical informatics, and nursing research journals in the Web of Science database. After screening for inclusion and exclusion criteria, 17 studies published in 18 articles were identified as big data nursing research applied to practice. Nurses clearly are beginning to conduct big data research applied to practice. These studies represent multiple data sources and settings. Although numerous analytic methods were used, the fundamental issue remains defining the types of analyses consistent with big data analytic methods. There is a need to increase the visibility of big data and data science research conducted by nurse scientists, to further examine the use of the state of the science in data analytics, and to continue to expand the availability and use of a variety of scientific, governmental, and industry data resources. A major implication of this literature review is the question of whether nursing faculty and the PhD programs that prepare future scientists are ready for big data and data science.

  11. [Applications of eco-environmental big data: Progress and prospect].

    PubMed

    Zhao, Miao Miao; Zhao, Shi Cheng; Zhang, Li Yun; Zhao, Fen; Shao, Rui; Liu, Li Xiang; Zhao, Hai Feng; Xu, Ming

    2017-05-18

    With the advance of internet and wireless communication technology, the fields of ecology and environment have entered a new digital era, with the amount of data growing explosively and big data technologies attracting more and more attention. Eco-environmental big data is based on airborne, space-based, and land-based observations of ecological and environmental factors, and its ultimate goal is to integrate multi-source and multi-scale data for information mining by taking advantage of cloud computing, artificial intelligence, and modeling technologies. In comparison with other fields, eco-environmental big data has its own characteristics, such as diverse data formats and sources, data collected with various protocols and standards, and different clients and organizations to serve, each with special requirements. Big data technology has been applied worldwide in ecological and environmental fields, including global climate prediction, ecological network observation and modeling, and regional air pollution control. The development of eco-environmental big data in China faces many problems, such as data sharing issues, outdated monitoring facilities and technologies, and insufficient data mining capacity. Despite all this, big data technology is critical to solving eco-environmental problems, improving the accuracy of prediction and warning for eco-environmental catastrophes, and boosting scientific research in the field in China. We expect that eco-environmental big data will contribute significantly to policy making and to environmental services and management, and thus to sustainable development and eco-civilization construction in China in the coming decades.

  12. Big-BOE: Fusing Spanish Official Gazette with Big Data Technology.

    PubMed

    Basanta-Val, Pablo; Sánchez-Fernández, Luis

    2018-06-01

    The proliferation of new data sources stemming from the adoption of open-data schemes, in combination with increasing computing capacity, has given rise to a new type of analytics that processes Internet of Things data with low-cost engines, using parallel computing to speed up data processing. In this context, the article presents an initiative, called BIG-Boletín Oficial del Estado (BOE), designed to process the Spanish official government gazette (BOE) with state-of-the-art processing engines, to reduce computation time and to offer additional speedup for big data analysts. The goal of including a big data infrastructure is to be able to process different BOE documents in parallel with specific analytics, to search for several issues in different documents. The application's processing infrastructure is described from both an architectural and a performance perspective, showing how this type of infrastructure improves the performance of several kinds of simple analytics as machines cooperate.

  13. Empowerment, Participation, and Democracy? -- The Hong Kong Big Sisters' Guidance Programme.

    ERIC Educational Resources Information Center

    Bottery, Mike; Siu, Shun-Mei

    1996-01-01

    Asserts that the Big Sisters Programme in Hong Kong provides a good example of a scheme that transcends personal and school issues and facilitates a more participative and democratic view of society. Characterizes the program as a benign form of a "hidden curriculum" and recommends establishing it in secondary schools. (MJP)

  14. Who Prophets from Big Data in Education? New Insights and New Challenges

    ERIC Educational Resources Information Center

    Lynch, Collin F.

    2017-01-01

    Big Data can radically transform education by enabling personalized learning, deep student modeling, and true longitudinal studies that compare changes across classrooms, regions, and years. With these promises, however, come risks to individual privacy and educational validity, along with deep policy and ethical issues. Education is largely a…

  15. Early Childhood Education and Care as a Community Service or Big Business?

    ERIC Educational Resources Information Center

    Kilderry, Anna

    2006-01-01

    This colloquium discusses recent trends where early childhood education and care has shifted from being a community service to that of big business. Years of neo-liberal reform have created market conditions favourable for large corporations to provide childcare within Australia. This situation raises some issues and concerns, particularly in…

  16. Impacts of fire on hydrology and erosion in steep mountain big sagebrush communities

    Treesearch

    Frederick B. Pierson; Peter R. Robichaud; Kenneth E. Spaeth; Corey A. Moffet

    2003-01-01

    Wildfire is an important ecological process and management issue on western rangelands. Major unknowns associated with wildfire are its effects on vegetation and soil conditions that influence hydrologic processes, including infiltration, surface runoff, erosion, sediment transport, and flooding. Post-wildfire hydrologic response was studied in big sagebrush plant...

  17. After the Big Bang: What's Next in Design Education? Time to Relax?

    ERIC Educational Resources Information Center

    Fleischmann, Katja

    2015-01-01

    The article "Big Bang technology: What's next in design education, radical innovation or incremental change?" (Fleischmann, 2013) appeared in the "Journal of Learning Design" Volume 6, Issue 3 in 2013. Two years on, Associate Professor Fleischmann reflects upon her original article within this article. Although it has only been…

  18. Internet-based brain training games, citizen scientists, and big data: ethical issues in unprecedented virtual territories.

    PubMed

    Purcell, Ryan H; Rommelfanger, Karen S

    2015-04-22

    Internet brain training programs, where consumers serve as both subjects and funders of the research, represent the closest engagement many individuals have with neuroscience. Safeguards are needed to protect participants' privacy and the evolving scientific enterprise of big data.

  19. Big Data for Infectious Disease Surveillance and Modeling.

    PubMed

    Bansal, Shweta; Chowell, Gerardo; Simonsen, Lone; Vespignani, Alessandro; Viboud, Cécile

    2016-12-01

    We devote a special issue of the Journal of Infectious Diseases to reviewing the recent advances of big data in strengthening disease surveillance, monitoring medical adverse events, informing transmission models, and tracking patient sentiments and mobility. We consider a broad definition of big data for public health, one encompassing patient information gathered from high-volume electronic health records and participatory surveillance systems, as well as mining of digital traces such as social media, Internet searches, and cell-phone logs. We introduce nine independent contributions to this special issue and highlight several cross-cutting areas that require further research, including representativeness, biases, volatility, and validation, and the need for robust statistical and hypothesis-driven analyses. Overall, we are optimistic that the big-data revolution will vastly improve the granularity and timeliness of available epidemiological information, with hybrid systems augmenting rather than supplanting traditional surveillance systems, and better prospects for accurate infectious disease models and forecasts.

  20. Big data: survey, technologies, opportunities, and challenges.

    PubMed

    Khan, Nawsher; Yaqoob, Ibrar; Hashem, Ibrahim Abaker Targio; Inayat, Zakira; Ali, Waleed Kamaleldin Mahmoud; Alam, Muhammad; Shiraz, Muhammad; Gani, Abdullah

    2014-01-01

    Big Data has gained much attention from academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds processing and storage capacity. Currently, over 2 billion people worldwide are connected to the Internet, and over 5 billion individuals own mobile phones. By 2020, 50 billion devices are expected to be connected to the Internet. At this point, predicted data production will be 44 times greater than that in 2009. As information is transferred and shared at light speed over optic fiber and wireless networks, the volume of data and the speed of market growth increase. However, the fast growth rate of such large data generates numerous challenges, such as the rapid growth of data, transfer speed, diverse data, and security. Nonetheless, Big Data is still in its infancy, and the domain has not been reviewed in general. Hence, this study comprehensively surveys and classifies the various attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminologies of Big Data. Future research directions in this field are determined based on opportunities and several open issues in the Big Data domain. These research directions facilitate the exploration of the domain and the development of optimal techniques to address Big Data.

  1. Big Data: Survey, Technologies, Opportunities, and Challenges

    PubMed Central

    Khan, Nawsher; Yaqoob, Ibrar; Hashem, Ibrahim Abaker Targio; Inayat, Zakira; Mahmoud Ali, Waleed Kamaleldin; Alam, Muhammad; Shiraz, Muhammad; Gani, Abdullah

    2014-01-01

    Big Data has gained much attention from academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds processing and storage capacity. Currently, over 2 billion people worldwide are connected to the Internet, and over 5 billion individuals own mobile phones. By 2020, 50 billion devices are expected to be connected to the Internet. At this point, predicted data production will be 44 times greater than that in 2009. As information is transferred and shared at light speed over optic fiber and wireless networks, the volume of data and the speed of market growth increase. However, the fast growth rate of such large data generates numerous challenges, such as the rapid growth of data, transfer speed, diverse data, and security. Nonetheless, Big Data is still in its infancy, and the domain has not been reviewed in general. Hence, this study comprehensively surveys and classifies the various attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminologies of Big Data. Future research directions in this field are determined based on opportunities and several open issues in the Big Data domain. These research directions facilitate the exploration of the domain and the development of optimal techniques to address Big Data. PMID:25136682

  2. Quality of Big Data in health care.

    PubMed

    Sukumar, Sreenivas R; Natarajan, Ramachandran; Ferrell, Regina K

    2015-01-01

    The current trend in Big Data analytics, and in health information technology in particular, is toward building sophisticated models, methods, and tools for business, operational, and clinical intelligence. However, the critical issue of the data quality required for these models is not getting the attention it deserves. The purpose of this paper is to highlight the issues of data quality in the context of Big Data health care analytics. The insights presented in this paper are the results of analytics work done in different organizations on a variety of health data sets. The data sets include Medicare and Medicaid claims, provider enrollment data sets from both public and private sources, and electronic health records from regional health centers accessed through partnerships with health care claims processing entities under health privacy protection guidelines. Assessment of data quality in health care has to consider: first, the entire lifecycle of health data; second, problems arising from errors and inaccuracies in the data itself; third, the source(s) and pedigree of the data; and fourth, how the underlying purpose of data collection impacts the analytic processing and the knowledge expected to be derived. Automation in the form of data handling, storage, entry, and processing technologies is a double-edged sword: at one level, automation can be a good solution, while at another it can create a different set of data quality issues. Implementation of health care analytics with Big Data is enabled by a road map that addresses the organizational and technological aspects of data quality assurance. The value derived from the use of analytics should be the primary determinant of data quality. Based on this premise, health care enterprises embracing Big Data should have a road map for a systematic approach to data quality. Health care data quality problems can be so specific that organizations might have to build their own custom software or data quality rule engines. Today, data quality issues are diagnosed and addressed in a piecemeal fashion. The authors recommend a data lifecycle approach and provide a road map that is better suited to the dimensions of Big Data and fits the different stages of the analytical workflow.

  3. [Big Data- challenges and risks].

    PubMed

    Krauß, Manuela; Tóth, Tamás; Hanika, Heinrich; Kozlovszky, Miklós; Dinya, Elek

    2015-12-06

    The term "Big Data" is commonly used to describe the growing mass of information being created today. New conclusions can be drawn and new services can be developed by connecting, processing, and analyzing this information. This affects all aspects of life, including health and medicine. The authors review the application areas of Big Data and present examples from health and other fields. However, there are several preconditions for the effective use of these opportunities: proper infrastructure and a well-defined regulatory environment, with particular emphasis on data protection and privacy. These issues, and the current actions toward solutions, are also presented.

  4. Big Data in Caenorhabditis elegans: quo vadis?

    PubMed Central

    Hutter, Harald; Moerman, Donald

    2015-01-01

    A clear definition of what constitutes “Big Data” is difficult to identify, but we find it most useful to define Big Data as a data collection that is complete. By this criterion, researchers on Caenorhabditis elegans have a long history of collecting Big Data, since the organism was selected with the idea of obtaining a complete biological description and understanding of development. The complete wiring diagram of the nervous system, the complete cell lineage, and the complete genome sequence provide a framework to phrase and test hypotheses. Given this history, it might be surprising that the number of “complete” data sets for this organism is actually rather small—not because of lack of effort, but because most types of biological experiments are not currently amenable to complete large-scale data collection. Many are also not inherently limited, so that it becomes difficult to even define completeness. At present, we only have partial data on mutated genes and their phenotypes, gene expression, and protein–protein interaction—important data for many biological questions. Big Data can point toward unexpected correlations, and these unexpected correlations can lead to novel investigations; however, Big Data cannot establish causation. As a result, there is much excitement about Big Data, but there is also a discussion on just what Big Data contributes to solving a biological problem. Because of its relative simplicity, C. elegans is an ideal test bed to explore this issue and at the same time determine what is necessary to build a multicellular organism from a single cell. PMID:26543198

  5. Big Data in Science and Healthcare: A Review of Recent Literature and Perspectives

    PubMed Central

    Miron-Shatz, T.; Lau, A. Y. S.; Paton, C.

    2014-01-01

    Objectives: As technology continues to evolve and rise in various industries, such as healthcare, science, education, and gaming, a sophisticated concept known as Big Data is surfacing. The concept of analytics aims to understand data. We set out to portray and discuss perspectives on the evolving use of Big Data in science and healthcare and to examine some of the opportunities and challenges. Methods: A literature review was conducted to highlight the implications associated with the use of Big Data in scientific research and healthcare innovations, on both a large and a small scale. Results: Scientists and health-care providers may learn from one another when it comes to understanding the value of Big Data and analytics. Small data, derived by patients and consumers, also requires analytics to become actionable. Connectivism provides a framework for the use of Big Data and analytics in the areas of science and healthcare. This theory helps individuals recognize and synthesize how human connections are driving the increase in data. Despite the volume and velocity of Big Data, it is truly about technology connecting humans and assisting them to construct knowledge in new ways. Concluding Thoughts: The concept of Big Data and the associated analytics are to be taken seriously when approaching the use of vast volumes of both structured and unstructured data in science and health-care. Future exploration of issues surrounding data privacy, confidentiality, and education is needed. A greater focus on data from social media, the quantified-self movement, and the application of analytics to “small data” would also be useful. PMID:25123717

  6. Big Data in Science and Healthcare: A Review of Recent Literature and Perspectives. Contribution of the IMIA Social Media Working Group.

    PubMed

    Hansen, M M; Miron-Shatz, T; Lau, A Y S; Paton, C

    2014-08-15

    As technology continues to evolve and rise in various industries, such as healthcare, science, education, and gaming, a sophisticated concept known as Big Data is surfacing. The concept of analytics aims to understand data. We set out to portray and discuss perspectives on the evolving use of Big Data in science and healthcare and to examine some of the opportunities and challenges. A literature review was conducted to highlight the implications associated with the use of Big Data in scientific research and healthcare innovations, on both a large and a small scale. Scientists and health-care providers may learn from one another when it comes to understanding the value of Big Data and analytics. Small data, derived by patients and consumers, also requires analytics to become actionable. Connectivism provides a framework for the use of Big Data and analytics in the areas of science and healthcare. This theory helps individuals recognize and synthesize how human connections are driving the increase in data. Despite the volume and velocity of Big Data, it is truly about technology connecting humans and assisting them to construct knowledge in new ways. Concluding Thoughts: The concept of Big Data and the associated analytics are to be taken seriously when approaching the use of vast volumes of both structured and unstructured data in science and health-care. Future exploration of issues surrounding data privacy, confidentiality, and education is needed. A greater focus on data from social media, the quantified-self movement, and the application of analytics to "small data" would also be useful.

  7. The "Beauty Is Good" for Children with Autism Spectrum Disorders Too

    ERIC Educational Resources Information Center

    Fonseca, D. Da; Santos, A.; Rosset, D.; Deruelle, C.

    2011-01-01

    The "beauty is good" (BIG) stereotype is a robust and extensively documented social stereotype. While one may think that children with autism are impervious to the BIG stereotype, given their remarkable difficulties in the social sphere, this issue has not yet been addressed. We have asked 18 children with autism to judge how friendly and…

  8. 76 FR 3653 - 2011 Meetings of the Big Cypress National Preserve Off-Road Vehicle (ORV) Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-20

    ... of the Big Cypress National Preserve Off-Road Vehicle (ORV) Advisory Committee AGENCY: Department of... Off-road Vehicle Management Plan and the Federal Advisory Committee Act of 1972 (5 U.S.C. Appendix) to examine issues and make recommendations regarding the management of off-road vehicles (ORVs) in the...

  9. A Big Bang Lab

    ERIC Educational Resources Information Center

    Scheider, Walter

    2005-01-01

    The February 2005 issue of The Science Teacher (TST) reminded everyone that by learning how scientists study stars, students gain an understanding of how science measures things that cannot be set up in a lab, either because they are too big, too far away, or happened in the very distant past. The authors of "How Far are the Stars?" show how the…

  10. How Big Is Big Enough? Sample Size Requirements for CAST Item Parameter Estimation

    ERIC Educational Resources Information Center

    Chuah, Siang Chee; Drasgow, Fritz; Luecht, Richard

    2006-01-01

    Adaptive tests offer the advantages of reduced test length and increased accuracy in ability estimation. However, adaptive tests require large pools of precalibrated items. This study looks at the development of an item pool for 1 type of adaptive administration: the computer-adaptive sequential test. An important issue is the sample size required…

  11. Big questions come in bundles, hence they should be tackled systemically.

    PubMed

    Bunge, Mario

    2014-01-01

    A big problem is, by definition, one involving either multiple traits of a thing or a collection of things. Because such problems are systemic or global rather than local or sectoral, they call for a systemic approach, and often a multidisciplinary one as well. Just think of individual and public health problems, or of income inequality, gender discrimination, housing, environmental, or political participation issues. All of them are inter-related, so that focusing on one of them at a time is bound to produce either short-term solutions or utter failure. This is also why single-issue political movements are bound to fail. In other words, big problems are systemic and must therefore be approached systemically--though with the provisos that systemism must not be confused with holism and that synthesis complements analysis instead of replacing it.

  12. A study of pricing and trading model of Blockchain & Big data-based Energy-Internet electricity

    NASA Astrophysics Data System (ADS)

    Fan, Tao; He, Qingsu; Nie, Erbao; Chen, Shaozhen

    2018-01-01

    The development of the Energy-Internet currently suffers from a series of issues, such as the conflicts among high capital requirements, low cost, and high efficiency; the widening gap between capital demand and supply; and lagging trading and valuation mechanisms, any of which could hinder the Energy-Internet's evolution. However, with the development of blockchain and big-data technology, it is possible to work out solutions for these issues. Based on the current situation of the Energy-Internet and its requirements for future progress, this paper demonstrates the validity of employing blockchain technology to solve the problems encountered by the Energy-Internet during its development. It proposes applying blockchain and big-data technologies to pricing and trading energy products through the Energy-Internet, and to transforming cyber-based energy and power from physical products into financial assets.

  13. From Big Data to Knowledge in the Social Sciences.

    PubMed

    Hesse, Bradford W; Moser, Richard P; Riley, William T

    2015-05-01

    One of the challenges associated with high-volume, diverse datasets is whether synthesis of open data streams can translate into actionable knowledge. Recognizing that challenge and other issues related to these types of data, the National Institutes of Health developed the Big Data to Knowledge or BD2K initiative. The concept of translating "big data to knowledge" is important to the social and behavioral sciences in several respects. First, a general shift to data-intensive science will exert an influence on all scientific disciplines, but particularly on the behavioral and social sciences given the wealth of behavior and related constructs captured by big data sources. Second, science is itself a social enterprise; by applying principles from the social sciences to the conduct of research, it should be possible to ameliorate some of the systemic problems that plague the scientific enterprise in the age of big data. We explore the feasibility of recalibrating the basic mechanisms of the scientific enterprise so that they are more transparent and cumulative; more integrative and cohesive; and more rapid, relevant, and responsive.

  14. From Big Data to Knowledge in the Social Sciences

    PubMed Central

    Hesse, Bradford W.; Moser, Richard P.; Riley, William T.

    2015-01-01

    One of the challenges associated with high-volume, diverse datasets is whether synthesis of open data streams can translate into actionable knowledge. Recognizing that challenge and other issues related to these types of data, the National Institutes of Health developed the Big Data to Knowledge or BD2K initiative. The concept of translating “big data to knowledge” is important to the social and behavioral sciences in several respects. First, a general shift to data-intensive science will exert an influence on all scientific disciplines, but particularly on the behavioral and social sciences given the wealth of behavior and related constructs captured by big data sources. Second, science is itself a social enterprise; by applying principles from the social sciences to the conduct of research, it should be possible to ameliorate some of the systemic problems that plague the scientific enterprise in the age of big data. We explore the feasibility of recalibrating the basic mechanisms of the scientific enterprise so that they are more transparent and cumulative; more integrative and cohesive; and more rapid, relevant, and responsive. PMID:26294799

  15. Integrating the Apache Big Data Stack with HPC for Big Data

    NASA Astrophysics Data System (ADS)

    Fox, G. C.; Qiu, J.; Jha, S.

    2014-12-01

There is perhaps a broad consensus as to the important issues in practical parallel computing as applied to large-scale simulations; this is reflected in supercomputer architectures, algorithms, libraries, languages, compilers and best practice for application development. However, the same is not so true for data-intensive computing, even though commercial clouds devote far more resources to data analytics than supercomputers devote to simulations. We look at a sample of over 50 big data applications to identify characteristics of data-intensive applications and to deduce the needed runtimes and architectures. We suggest a big data version of the famous Berkeley dwarfs and NAS parallel benchmarks and use these to identify a few key classes of hardware/software architectures. Our analysis builds on combining HPC with ABDS, the Apache big data software stack that is widely used in modern cloud computing. Initial results on clouds and HPC systems are encouraging. We propose the development of SPIDAL (Scalable Parallel Interoperable Data Analytics Library), built on the system and data abstractions suggested by the HPC-ABDS architecture. We discuss how it can be used in several application areas, including polar science.

  16. A CADD-alog of strategies in pharma.

    PubMed

    Warr, Wendy A

    2017-03-01

    A special issue on computer-aided drug design (CADD) strategies in pharma discusses how CADD groups in different environments work. Perspectives were collected from authors in 11 organizations: four big pharmaceutical companies, one major biotechnology company, one smaller biotech, one private pharmaceutical company, two contract research organizations (CROs), one university, and one that spans the breadth of big pharmaceutical companies and one smaller biotech.

  17. A CADD-alog of strategies in pharma

    NASA Astrophysics Data System (ADS)

    Warr, Wendy A.

    2017-03-01

    A special issue on computer-aided drug design (CADD) strategies in pharma discusses how CADD groups in different environments work. Perspectives were collected from authors in 11 organizations: four big pharmaceutical companies, one major biotechnology company, one smaller biotech, one private pharmaceutical company, two contract research organizations (CROs), one university, and one that spans the breadth of big pharmaceutical companies and one smaller biotech.

  18. Broadening roles for FMRP: big news for big potassium (BK) channels.

    PubMed

    Contractor, Anis

    2013-02-20

FMRP is an RNA-binding protein that negatively regulates translation and is lost in fragile X syndrome. In this issue of Neuron, Deng et al. (2013) demonstrate a novel translation-independent function for FMRP as a regulator of presynaptic BK channels that modulate the dynamics of neurotransmitter release. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Big data and ergonomics methods: A new paradigm for tackling strategic transport safety risks.

    PubMed

    Walker, Guy; Strathie, Ailsa

    2016-03-01

Big data collected from On-Train Data Recorders (OTDR) has the potential to address the most important strategic risks currently faced by rail operators and authorities worldwide. These risk issues are increasingly orientated around human performance and have proven resistant to existing approaches. This paper presents a number of proof-of-concept demonstrations to show that long-standing ergonomics methods can be driven from big data and succeed in providing insight into human performance in a novel way. Over 300 ergonomics methods were reviewed and a smaller subset selected for proof-of-concept development using real on-train recorder data. From this are derived nine candidate Human Factors Leading Indicators, which map onto all of the psychological precursors of the identified risks. This approach has the potential to make use of a significantly underused source of data and to enable rail industry stakeholders to intervene sooner to address human performance issues that, via the methods presented in this paper, are clearly manifest in on-train data recordings. The intersection of psychological knowledge, ergonomics methods and big data creates an important new framework for driving new insights. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  20. Vertical landscraping, a big regionalism for Dubai.

    PubMed

    Wilson, Matthew

    2010-01-01

    Dubai's ecologic and economic complications are exacerbated by six years of accelerated expansion, a fixed top-down approach to urbanism and the construction of iconic single-phase mega-projects. With recent construction delays, project cancellations and growing landscape issues, Dubai's tower typologies have been unresponsive to changing environmental, socio-cultural and economic patterns (BBC, 2009; Gillet, 2009; Lewis, 2009). In this essay, a theory of "Big Regionalism" guides an argument for an economically and ecologically linked tower typology called the Condenser. This phased "box-to-tower" typology is part of a greater Landscape Urbanist strategy called Vertical Landscraping. Within this strategy, the Condenser's role is to densify the city, facilitating the creation of ecologic voids that order the urban region. Delineating "Big Regional" principles, the Condenser provides a time-based, global-local urban growth approach that weaves Bigness into a series of urban-regional, economic and ecological relationships, builds upon the environmental performance of the city's regional architecture and planning, promotes a continuity of Dubai's urban history, and responds to its landscape issues while condensing development. These speculations permit consideration of the overlooked opportunities embedded within Dubai's mega-projects and their long-term impact on the urban morphology.

  1. The dominance of big pharma: power.

    PubMed

    Edgar, Andrew

    2013-05-01

    The purpose of this paper is to provide a normative model for the assessment of the exercise of power by Big Pharma. By drawing on the work of Steven Lukes, it will be argued that while Big Pharma is overtly highly regulated, so that its power is indeed restricted in the interests of patients and the general public, the industry is still able to exercise what Lukes describes as a third dimension of power. This entails concealing the conflicts of interest and grievances that Big Pharma may have with the health care system, physicians and patients, crucially through rhetorical engagements with Patient Advocacy Groups that seek to shape public opinion, and also by marginalising certain groups, excluding them from debates over health care resource allocation. Three issues will be examined: the construction of a conception of the patient as expert patient or consumer; the phenomenon of disease mongering; the suppression or distortion of debates over resource allocation.

  2. Automated Predictive Big Data Analytics Using Ontology Based Semantics.

    PubMed

    Nural, Mustafa V; Cotterell, Michael E; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A

    2015-10-01

Predictive analytics in the big data era is taking on an increasingly important role. Issues related to the choice of modeling technique, estimation procedure (or algorithm), and efficient execution can present significant challenges. For example, selecting appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise, which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models, as well as in explaining the rationale for the techniques and models selected. To formally describe the modeling techniques, models and results, we developed the Analytics Ontology, which supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics, is used as a testbed for evaluating the use of semantic technology.
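
    The general idea of semi-automated technique selection described in this record can be illustrated with a deliberately simple rule table in Python. This is an editorial sketch of the concept only; the dataset characteristics, rules, and technique names below are hypothetical and do not reflect the Analytics Ontology or SCALATION's actual API:

    ```python
    # Toy illustration: map coarse dataset characteristics to candidate
    # modeling techniques, each paired with a short rationale.
    # All rules and names here are hypothetical examples.

    def suggest_techniques(n_rows, n_features, response_type):
        """Return a list of (technique, rationale) pairs for a dataset description."""
        suggestions = []
        if response_type == "continuous":
            if n_features < n_rows:
                suggestions.append(("multiple linear regression",
                                    "continuous response, more rows than features"))
            else:
                suggestions.append(("ridge regression",
                                    "continuous response, p >= n needs regularization"))
        elif response_type == "categorical":
            suggestions.append(("logistic regression",
                                "categorical response"))
            if n_rows > 100_000:
                suggestions.append(("gradient-boosted trees",
                                    "large sample supports flexible models"))
        return suggestions

    print(suggest_techniques(1_000_000, 50, "categorical"))
    ```

    An ontology-based system replaces such hard-coded rules with formally described technique properties over which a reasoner can infer candidates.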

  3. Automated Predictive Big Data Analytics Using Ontology Based Semantics

    PubMed Central

    Nural, Mustafa V.; Cotterell, Michael E.; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A.

    2017-01-01

Predictive analytics in the big data era is taking on an increasingly important role. Issues related to the choice of modeling technique, estimation procedure (or algorithm), and efficient execution can present significant challenges. For example, selecting appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise, which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models, as well as in explaining the rationale for the techniques and models selected. To formally describe the modeling techniques, models and results, we developed the Analytics Ontology, which supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics, is used as a testbed for evaluating the use of semantic technology. PMID:29657954

  4. Big data in food safety: An overview.

    PubMed

    Marvin, Hans J P; Janssen, Esmée M; Bouzembrak, Yamine; Hendriksen, Peter J M; Staats, Martijn

    2017-07-24

Technology is now being developed that is able to handle vast amounts of structured and unstructured data from diverse sources and origins. These technologies are often referred to as big data, and they open new areas of research and applications that will have an increasing impact in all sectors of our society. In this paper we assessed to what extent big data is being applied in the food safety domain and identified several promising trends. In several parts of the world, governments stimulate the publication on the internet of all data generated in publicly funded research projects. This policy opens new opportunities for stakeholders dealing with food safety to address issues that were not possible to address before. The application of mobile phones as detection devices for food safety and the use of social media as early warnings of food safety problems are a few examples of the new developments made possible by big data.

  5. Special Issue: Big data and predictive computational modeling

    NASA Astrophysics Data System (ADS)

    Koutsourelakis, P. S.; Zabaras, N.; Girolami, M.

    2016-09-01

    The motivation for this special issue stems from the symposium on "Big Data and Predictive Computational Modeling" that took place at the Institute for Advanced Study, Technical University of Munich, during May 18-21, 2015. With a mindset firmly grounded in computational discovery, but a polychromatic set of viewpoints, several leading scientists, from physics and chemistry, biology, engineering, applied mathematics, scientific computing, neuroscience, statistics and machine learning, engaged in discussions and exchanged ideas for four days. This special issue contains a subset of the presentations. Video and slides of all the presentations are available on the TUM-IAS website http://www.tum-ias.de/bigdata2015/.

  6. The Relationship between Language Proficiency and Student Performance on a Locally Developed Mathematics Placement Exam at a Community College in Southern California

    ERIC Educational Resources Information Center

    Kasouha, Abeir

    2011-01-01

With the open-door enrollment policies that community colleges have, diversity is increasingly present in community colleges. This diversity includes ability levels, ethnicity, and English as a Second Language (ESL) learners. This raises the significant issue of a wide gap in academic preparation among students within the same classroom. To solve this…

  7. The relationship between place bonding and social trust, as explored in a study in the Big Thicket National Preserve, Texas

    Treesearch

    Christopher J. Wynveen; Gerard T. Kyle; Gene L. Theodori

    2009-01-01

    The management of feral hogs surrounding the Big Thicket National Preserve (BTNP) in southeastern Texas requires that National Park Service (NPS) staff and stakeholders manage resource issues collaboratively. Past research has indicated that place bonding can be the common ground upon which managers and stakeholders develop social trust with one another to form a basis...

  8. Observatories, think tanks, and community models in the hydrologic and environmental sciences: How does it affect me?

    NASA Astrophysics Data System (ADS)

    Torgersen, Thomas

    2006-06-01

    Multiple issues in hydrologic and environmental sciences are now squarely in the public focus and require both government and scientific study. Two facts also emerge: (1) The new approach being touted publicly for advancing the hydrologic and environmental sciences is the establishment of community-operated "big science" (observatories, think tanks, community models, and data repositories). (2) There have been important changes in the business of science over the last 20 years that make it important for the hydrologic and environmental sciences to demonstrate the "value" of public investment in hydrological and environmental science. Given that community-operated big science (observatories, think tanks, community models, and data repositories) could become operational, I argue that such big science should not mean a reduction in the importance of single-investigator science. Rather, specific linkages between the large-scale, team-built, community-operated big science and the single investigator should provide context data, observatory data, and systems models for a continuing stream of hypotheses by discipline-based, specialized research and a strong rationale for continued, single-PI ("discovery-based") research. I also argue that big science can be managed to provide a better means of demonstrating the value of public investment in the hydrologic and environmental sciences. Decisions regarding policy will still be political, but big science could provide an integration of the best scientific understanding as a guide for the best policy.

  9. A Call to Investigate the Relationship Between Education and Health Outcomes Using Big Data.

    PubMed

    Chahine, Saad; Kulasegaram, Kulamakan Mahan; Wright, Sarah; Monteiro, Sandra; Grierson, Lawrence E M; Barber, Cassandra; Sebok-Syer, Stefanie S; McConnell, Meghan; Yen, Wendy; De Champlain, Andre; Touchie, Claire

    2018-06-01

There exists an assumption that improving medical education will improve patient care. While seemingly logical, this premise has rarely been investigated. In this Invited Commentary, the authors propose the use of big data to test this assumption. The authors present a few example research studies linking education and patient care outcomes and argue that using big data may more easily facilitate the process needed to investigate this assumption. The authors also propose that collaboration is needed to link educational and health care data. They then introduce a grassroots initiative, inclusive of universities in one Canadian province and national licensing organizations that are working together to collect, organize, link, and analyze big data to study the relationship between pedagogical approaches to medical training and patient care outcomes. While the authors acknowledge the possible challenges and issues associated with harnessing big data, they believe that the benefits outweigh them. There is a need for medical education research to go beyond the outcomes of training to study practice and clinical outcomes as well. Without a coordinated effort to harness big data, policy makers, regulators, medical educators, and researchers are left with sometimes costly guesses and assumptions about what works and what does not. As the social, time, and financial investments in medical education continue to increase, it is imperative to understand the relationship between education and health outcomes.

  10. Clinical judgement in the era of big data and predictive analytics.

    PubMed

    Chin-Yee, Benjamin; Upshur, Ross

    2018-06-01

    Clinical judgement is a central and longstanding issue in the philosophy of medicine which has generated significant interest over the past few decades. In this article, we explore different approaches to clinical judgement articulated in the literature, focusing in particular on data-driven, mathematical approaches which we contrast with narrative, virtue-based approaches to clinical reasoning. We discuss the tension between these different clinical epistemologies and further explore the implications of big data and machine learning for a philosophy of clinical judgement. We argue for a pluralistic, integrative approach, and demonstrate how narrative, virtue-based clinical reasoning will remain indispensable in an era of big data and predictive analytics. © 2017 John Wiley & Sons, Ltd.

  11. Computing Platforms for Big Biological Data Analytics: Perspectives and Challenges.

    PubMed

    Yin, Zekun; Lan, Haidong; Tan, Guangming; Lu, Mian; Vasilakos, Athanasios V; Liu, Weiguo

    2017-01-01

The last decade has witnessed an explosion in the amount of available biological sequence data, due to the rapid progress of high-throughput sequencing projects. However, the amount of biological data is becoming so great that traditional data analysis platforms and methods can no longer meet the need to rapidly perform data analysis tasks in the life sciences. As a result, both biologists and computer scientists face the challenge of gaining a profound insight into the deepest biological functions from big biological data. This in turn requires massive computational resources. Therefore, high-performance computing (HPC) platforms are much needed, as are efficient and scalable algorithms that can take advantage of these platforms. In this paper, we survey the state-of-the-art HPC platforms for big biological data analytics. We first list the characteristics of big biological data and popular computing platforms. Then we provide a taxonomy of different biological data analysis applications and a survey of the way they have been mapped onto various computing platforms. After that, we present a case study to compare the efficiency of different computing platforms for handling the classical biological sequence alignment problem. Finally, we discuss the open issues in big biological data analytics.
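
    As a concrete reminder of the "classical biological sequence alignment problem" used as the case study in this record, here is a minimal global-alignment (Needleman-Wunsch) scorer in Python. The scoring scheme (match +1, mismatch -1, gap -1) is an assumed textbook example, and real HPC implementations vectorize or parallelize this dynamic program rather than running it row by row as shown:

    ```python
    # Minimal Needleman-Wunsch global alignment score (illustrative only).
    # Uses a rolling DP row, so memory is O(len(b)) instead of O(len(a)*len(b)).

    def nw_score(a, b, match=1, mismatch=-1, gap=-1):
        """Compute the optimal global alignment score of sequences a and b."""
        prev = [j * gap for j in range(len(b) + 1)]  # row for the empty prefix of a
        for i, ca in enumerate(a, start=1):
            curr = [i * gap]  # aligning a[:i] against the empty prefix of b
            for j, cb in enumerate(b, start=1):
                diag = prev[j - 1] + (match if ca == cb else mismatch)  # (mis)match
                curr.append(max(diag, prev[j] + gap, curr[j - 1] + gap))  # or gap
            prev = curr
        return prev[-1]

    print(nw_score("GATTACA", "GCATGCU"))
    ```

    The quadratic table fill in the inner loop is exactly the part that sequence-alignment case studies offload to GPUs, SIMD units, or clusters.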

  12. From big data to deep insight in developmental science

    PubMed Central

    2016-01-01

    The use of the term ‘big data’ has grown substantially over the past several decades and is now widespread. In this review, I ask what makes data ‘big’ and what implications the size, density, or complexity of datasets have for the science of human development. A survey of existing datasets illustrates how existing large, complex, multilevel, and multimeasure data can reveal the complexities of developmental processes. At the same time, significant technical, policy, ethics, transparency, cultural, and conceptual issues associated with the use of big data must be addressed. Most big developmental science data are currently hard to find and cumbersome to access, the field lacks a culture of data sharing, and there is no consensus about who owns or should control research data. But, these barriers are dissolving. Developmental researchers are finding new ways to collect, manage, store, share, and enable others to reuse data. This promises a future in which big data can lead to deeper insights about some of the most profound questions in behavioral science. WIREs Cogn Sci 2016, 7:112–126. doi: 10.1002/wcs.1379 For further resources related to this article, please visit the WIREs website. PMID:26805777

  13. First Born amplitude for transitions from a circular state to a state of large (l, m)

    NASA Astrophysics Data System (ADS)

    Dewangan, D. P.

    2005-01-01

The use of cylindrical polar coordinates instead of the conventional spherical polar coordinates enables us to derive compact expressions of the first Born amplitude for some selected sets of transitions from an arbitrary initial circular state $|\psi_{n_i,n_i-1,n_i-1}\rangle$ to a final state $|\psi_{n_f,l_f,m_f}\rangle$ of large $(l_f, m_f)$. The formulae for $|\psi_{n_i,n_i-1,n_i-1}\rangle \rightarrow |\psi_{n_f,n_f-1,n_f-2}\rangle$ and $|\psi_{n_i,n_i-1,n_i-1}\rangle \rightarrow |\psi_{n_f,n_f-1,n_f-3}\rangle$ transitions are expressed in terms of the Jacobi polynomials, which serve as suitable starting points for constructing complete solutions over the bound energy levels of hydrogen-like atoms. The formulae for $|\psi_{n_i,n_i-1,n_i-1}\rangle \rightarrow |\psi_{n_f,n_f-1,-(n_f-2)}\rangle$ and $|\psi_{n_i,n_i-1,n_i-1}\rangle \rightarrow |\psi_{n_f,n_f-1,-(n_f-3)}\rangle$ transitions are in simple algebraic forms and are directly applicable to all possible values of $n_i$ and $n_f$. It emerges that the method can be extended to evaluate the first Born amplitude for many other transitions involving states of large $(l, m)$.

  14. The Big Bang of tissue growth: Apical cell constriction turns into tissue expansion.

    PubMed

    Janody, Florence

    2018-03-05

    How tissue growth is regulated during development and cancer is a fundamental question in biology. In this issue, Tsoumpekos et al. (2018. J. Cell Biol. https://doi.org/10.1083/jcb.201705104) and Forest et al. (2018. J. Cell Biol. https://doi.org/10.1083/jcb.201705107) identify Big bang (Bbg) as an important growth regulator of the Drosophila melanogaster wing imaginal disc. © 2018 Janody.

  15. Fitting ERGMs on big networks.

    PubMed

    An, Weihua

    2016-09-01

The exponential random graph model (ERGM) has become a valuable tool for modeling social networks. In particular, ERGMs provide great flexibility to account for both covariate effects on tie formation and endogenous network formation processes. However, there are both conceptual and computational issues in fitting ERGMs on big networks. This paper describes a framework and a series of methods (based on existing algorithms) to address these issues. It also outlines the advantages and disadvantages of the methods and the conditions under which they are most applicable. Selected methods are illustrated through examples. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Big Data: More than Just Big and More than Just Data.

    PubMed

    Spencer, Gregory A

    2017-01-01

According to one report, 90 percent of the data in the world today were created in the past two years. This statistic is not surprising given the explosion of mobile phones and other devices that generate data, the Internet of Things (e.g., smart refrigerators), and metadata (data about data). While it might be a stretch to figure out how a healthcare organization can use data generated by an ice maker, data from a plethora of rich and useful sources, when combined with an organization's own data, can produce improved results. How can healthcare organizations leverage these rich and diverse data sources to improve patients' health and make their businesses more competitive? The authors of the two feature articles in this issue of Frontiers provide tangible examples of how their organizations are using big data to meaningfully improve healthcare. Sentara Healthcare and Carolinas HealthCare System both use big data in creative ways that differ because of different business situations, yet are also similar in certain respects.

  17. a Hadoop-Based Distributed Framework for Efficient Managing and Processing Big Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Wang, C.; Hu, F.; Hu, X.; Zhao, S.; Wen, W.; Yang, C.

    2015-07-01

Various sensors from airborne and satellite platforms are producing large volumes of remote sensing images for mapping, environmental monitoring, disaster management, military intelligence, and other applications. However, it is challenging to efficiently store, query and process such big data because it is both data- and computing-intensive. In this paper, a Hadoop-based framework is proposed to manage and process big remote sensing data in a distributed and parallel manner. In particular, remote sensing data can be directly fetched from other data platforms into the Hadoop Distributed File System (HDFS). The Orfeo toolbox, a ready-to-use tool for large image processing, is integrated into MapReduce to provide a rich set of image processing operations. With the integration of HDFS, the Orfeo toolbox and MapReduce, remote sensing images can be processed in parallel in a scalable computing environment. The experimental results show that the proposed framework can efficiently manage and process such big remote sensing data.
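
    The map/reduce programming model that this framework builds on can be sketched without Hadoop itself. Below is a minimal pure-Python imitation of the model; the image-tile records and the per-class pixel count are hypothetical examples standing in for the Orfeo toolbox operations the paper actually runs on HDFS-resident imagery:

    ```python
    from collections import defaultdict

    # Pure-Python sketch of the MapReduce programming model that Hadoop
    # provides at scale: a map phase emits key/value pairs, a shuffle
    # groups values by key, and a reduce phase aggregates each group.

    def map_phase(records, mapper):
        for record in records:
            yield from mapper(record)      # each record may emit many pairs

    def reduce_phase(pairs, reducer):
        groups = defaultdict(list)
        for key, value in pairs:           # "shuffle": group values by key
            groups[key].append(value)
        return {k: reducer(k, vs) for k, vs in groups.items()}

    # Hypothetical example: count pixels per land-cover class across tiles.
    def mapper(tile):
        for pixel_class in tile["pixels"]:
            yield (pixel_class, 1)

    def reducer(key, values):
        return sum(values)

    tiles = [{"pixels": ["water", "forest", "water"]},
             {"pixels": ["forest", "urban"]}]
    counts = reduce_phase(map_phase(tiles, mapper), reducer)
    print(counts)
    ```

    Hadoop's contribution is executing the same mapper/reducer pair across many machines with fault tolerance, which is why integrating image operators into MapReduce yields scalable processing.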

  18. The Ethics of Big Data: Current and Foreseeable Issues in Biomedical Contexts.

    PubMed

    Mittelstadt, Brent Daniel; Floridi, Luciano

    2016-04-01

The capacity to collect and analyse data is growing exponentially. Referred to as 'Big Data', this scientific, social and technological trend has helped create destabilising amounts of information, which can challenge accepted social and ethical norms. Big Data remains a fuzzy idea, emerging across social, scientific, and business contexts sometimes seemingly related only by the gigantic size of the datasets being considered. As is often the case with the cutting edge of scientific and technological progress, understanding of the ethical implications of Big Data lags behind. In order to bridge this gap, this article systematically and comprehensively analyses the academic literature concerning the ethical implications of Big Data, providing a watershed for future ethical investigations and regulations. Particular attention is paid to biomedical Big Data due to the inherent sensitivity of medical information. By means of a meta-analysis of the literature, a thematic narrative is provided to guide ethicists, data scientists, regulators and other stakeholders through what is already known or hypothesised about the ethical risks of this emerging and innovative phenomenon. Five key areas of concern are identified: (1) informed consent, (2) privacy (including anonymisation and data protection), (3) ownership, (4) epistemology and objectivity, and (5) 'Big Data Divides' created between those who have or lack the necessary resources to analyse increasingly large datasets. Critical gaps in the treatment of these themes are identified, with suggestions for future research. Six additional areas of concern are then suggested which, although related, have not yet attracted extensive debate in the existing literature. It is argued that they will require much closer scrutiny in the immediate future: (6) the dangers of ignoring group-level ethical harms; (7) the importance of epistemology in assessing the ethics of Big Data; (8) the changing nature of fiduciary relationships that become increasingly data saturated; (9) the need to distinguish between 'academic' and 'commercial' Big Data practices in terms of potential harm to data subjects; (10) future problems with ownership of intellectual property generated from analysis of aggregated datasets; and (11) the difficulty of providing meaningful access rights to individual data subjects who lack the necessary resources. Considered together, these eleven themes provide a thorough critical framework to guide ethical assessment and governance of emerging Big Data practices.

  19. Big Opportunities and Big Concerns of Big Data in Education

    ERIC Educational Resources Information Center

    Wang, Yinying

    2016-01-01

    Against the backdrop of the ever-increasing influx of big data, this article examines the opportunities and concerns over big data in education. Specifically, this article first introduces big data, followed by delineating the potential opportunities of using big data in education in two areas: learning analytics and educational policy. Then, the…

  20. Integrity, standards, and QC-related issues with big data in pre-clinical drug discovery.

    PubMed

    Brothers, John F; Ung, Matthew; Escalante-Chong, Renan; Ross, Jermaine; Zhang, Jenny; Cha, Yoonjeong; Lysaght, Andrew; Funt, Jason; Kusko, Rebecca

    2018-06-01

    The tremendous expansion of data analytics and public and private big datasets presents an important opportunity for pre-clinical drug discovery and development. In the field of life sciences, the growth of genetic, genomic, transcriptomic and proteomic data is partly driven by a rapid decline in experimental costs as biotechnology improves throughput, scalability, and speed. Yet far too many researchers tend to underestimate the challenges and consequences involving data integrity and quality standards. Given the effect of data integrity on scientific interpretation, these issues have significant implications during preclinical drug development. We describe standardized approaches for maximizing the utility of publicly available or privately generated biological data and address some of the common pitfalls. We also discuss the increasing interest to integrate and interpret cross-platform data. Principles outlined here should serve as a useful broad guide for existing analytical practices and pipelines and as a tool for developing additional insights into therapeutics using big data. Copyright © 2018 Elsevier Inc. All rights reserved.

  1. Big Sib Students' Perceptions of the Educational Environment at the School of Medical Sciences, Universiti Sains Malaysia, using Dundee Ready Educational Environment Measure (DREEM) Inventory.

    PubMed

    Arzuman, Hafiza; Yusoff, Muhamad Saiful Bahri; Chit, Som Phong

    2010-07-01

    A cross-sectional descriptive study was conducted among Big Sib students to explore their perceptions of the educational environment at the School of Medical Sciences, Universiti Sains Malaysia (USM) and its weak areas using the Dundee Ready Educational Environment Measure (DREEM) inventory. The DREEM inventory is a validated global instrument for measuring educational environments in undergraduate medical and health professional education. The English version of the DREEM inventory was administered to all Year 2 Big Sib students (n = 67) at a regular Big Sib session. The purpose of the study as well as confidentiality and ethical issues were explained to the students before the questionnaire was administered. The response rate was 62.7% (42 out of 67 students). The overall DREEM score was 117.9/200 (SD 14.6). The DREEM indicated that the Big Sib students' perception of educational environment of the medical school was more positive than negative. Nevertheless, the study also revealed some problem areas within the educational environment. This pilot study revealed that Big Sib students perceived a positive learning environment at the School of Medical Sciences, USM. It also identified some low-scored areas that require further exploration to pinpoint the exact problems. The relatively small study population selected from a particular group of students was the major limitation of the study. This small sample size also means that the study findings cannot be generalised.

  2. Increased plasma levels of big-endothelin-2 and big-endothelin-3 in patients with end-stage renal disease.

    PubMed

    Miyauchi, Yumi; Sakai, Satoshi; Maeda, Seiji; Shimojo, Nobutake; Watanabe, Shigeyuki; Honma, Satoshi; Kuga, Keisuke; Aonuma, Kazutaka; Miyauchi, Takashi

    2012-10-15

Big endothelins (pro-endothelin; inactive precursor) are converted to biologically active endothelins (ETs). Mammals and humans produce three ET family members: ET-1, ET-2 and ET-3, from three different genes. Although ET-1 is produced by vascular endothelial cells, these cells do not produce ET-3, which is produced by neuronal cells and organs such as the thyroid, salivary gland and the kidney. In patients with end-stage renal disease, abnormal vascular endothelial cell function and elevated plasma ET-1 and big ET-1 levels have been reported. It is unknown whether big ET-2 and big ET-3 plasma levels are altered in these patients. The purpose of the present study was to determine whether endogenous ET-1, ET-2, and ET-3 systems including big ETs are altered in patients with end-stage renal disease. We measured plasma levels of ET-1, ET-3 and big ET-1, big ET-2, and big ET-3 in patients on chronic hemodialysis (n=23) and age-matched healthy subjects (n=17). In patients on hemodialysis, plasma levels (measured just before hemodialysis) of both ET-1 and ET-3 and big ET-1, big ET-2, and big ET-3 were markedly elevated, and the increase was higher for big ETs (big ET-1, 4-fold; big ET-2, 6-fold; big ET-3, 5-fold) than for ETs (ET-1, 1.7-fold; ET-3, 2-fold). In hemodialysis patients, plasma levels of the inactive precursors big ET-1, big ET-2, and big ET-3 are markedly increased, yet there is only a moderate increase in plasma levels of the active products, ET-1 and ET-3. This suggests that the activity of endothelin converting enzyme contributing to circulating levels of ET-1 and ET-3 may be decreased in patients on chronic hemodialysis. Copyright © 2012 Elsevier Inc. All rights reserved.

  3. Trends in IT Innovation to Build a Next Generation Bioinformatics Solution to Manage and Analyse Biological Big Data Produced by NGS Technologies.

    PubMed

    de Brevern, Alexandre G; Meyniel, Jean-Philippe; Fairhead, Cécile; Neuvéglise, Cécile; Malpertuy, Alain

    2015-01-01

Sequencing the human genome began in 1994, and 10 years of work were necessary in order to provide a nearly complete sequence. Nowadays, NGS technologies allow sequencing of a whole human genome in a few days. This deluge of data challenges scientists in many ways, as they are faced with data management issues and analysis and visualization drawbacks due to the limitations of current bioinformatics tools. In this paper, we describe how the NGS Big Data revolution changes the way of managing and analysing data. We present how biologists are confronted with an abundance of methods, tools, and data formats. To overcome these problems, we focus on Big Data Information Technology innovations from the web and business intelligence. We underline the interest of NoSQL databases, which are much more efficient than relational databases for this purpose. Since Big Data leads to the loss of interactivity with data during analysis due to high processing time, we describe solutions from Business Intelligence that allow one to regain interactivity whatever the volume of data is. We illustrate this point with a focus on the Amadea platform. Finally, we discuss visualization challenges posed by Big Data and present the latest innovations with JavaScript graphic libraries.
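The schema-less storage the abstract credits to NoSQL databases can be illustrated with a minimal, purely hypothetical in-memory document store; the class and field names below are invented for illustration and are not part of any real platform such as Amadea.

```python
# Minimal in-memory document store sketch: schema-less records let
# heterogeneous NGS entries (reads, variants, annotations) coexist in
# one collection without a fixed relational schema.
class DocumentStore:
    def __init__(self):
        self._docs = {}      # doc_id -> document (a plain dict)
        self._next_id = 0

    def insert(self, doc):
        """Store any dict-shaped document; return its generated id."""
        doc_id = self._next_id
        self._docs[doc_id] = dict(doc)
        self._next_id += 1
        return doc_id

    def find(self, **criteria):
        """Return documents whose fields match all given criteria."""
        return [d for d in self._docs.values()
                if all(d.get(k) == v for k, v in criteria.items())]

store = DocumentStore()
store.insert({"type": "read", "seq": "ACGT", "qual": "IIII"})
store.insert({"type": "variant", "chrom": "chr1", "pos": 12345, "alt": "T"})
reads = store.find(type="read")
```

Note how a read record and a variant record share no fields yet live in the same collection, which is the flexibility a relational schema would not allow without migrations.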

  4. Trends in IT Innovation to Build a Next Generation Bioinformatics Solution to Manage and Analyse Biological Big Data Produced by NGS Technologies

    PubMed Central

    de Brevern, Alexandre G.; Meyniel, Jean-Philippe; Fairhead, Cécile; Neuvéglise, Cécile; Malpertuy, Alain

    2015-01-01

Sequencing the human genome began in 1994, and 10 years of work were necessary in order to provide a nearly complete sequence. Nowadays, NGS technologies allow sequencing of a whole human genome in a few days. This deluge of data challenges scientists in many ways, as they are faced with data management issues and analysis and visualization drawbacks due to the limitations of current bioinformatics tools. In this paper, we describe how the NGS Big Data revolution changes the way of managing and analysing data. We present how biologists are confronted with an abundance of methods, tools, and data formats. To overcome these problems, we focus on Big Data Information Technology innovations from the web and business intelligence. We underline the interest of NoSQL databases, which are much more efficient than relational databases for this purpose. Since Big Data leads to the loss of interactivity with data during analysis due to high processing time, we describe solutions from Business Intelligence that allow one to regain interactivity whatever the volume of data is. We illustrate this point with a focus on the Amadea platform. Finally, we discuss visualization challenges posed by Big Data and present the latest innovations with JavaScript graphic libraries. PMID:26125026

  5. Precision Health Economics and Outcomes Research to Support Precision Medicine: Big Data Meets Patient Heterogeneity on the Road to Value.

    PubMed

    Chen, Yixi; Guzauskas, Gregory F; Gu, Chengming; Wang, Bruce C M; Furnback, Wesley E; Xie, Guotong; Dong, Peng; Garrison, Louis P

    2016-11-02

    The "big data" era represents an exciting opportunity to utilize powerful new sources of information to reduce clinical and health economic uncertainty on an individual patient level. In turn, health economic outcomes research (HEOR) practices will need to evolve to accommodate individual patient-level HEOR analyses. We propose the concept of "precision HEOR", which utilizes a combination of costs and outcomes derived from big data to inform healthcare decision-making that is tailored to highly specific patient clusters or individuals. To explore this concept, we discuss the current and future roles of HEOR in health sector decision-making, big data and predictive analytics, and several key HEOR contexts in which big data and predictive analytics might transform traditional HEOR into precision HEOR. The guidance document addresses issues related to the transition from traditional to precision HEOR practices, the evaluation of patient similarity analysis and its appropriateness for precision HEOR analysis, and future challenges to precision HEOR adoption. Precision HEOR should make precision medicine more realizable by aiding and adapting healthcare resource allocation. The combined hopes for precision medicine and precision HEOR are that individual patients receive the best possible medical care while overall healthcare costs remain manageable or become more cost-efficient.
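The patient similarity analysis this entry discusses can be sketched in a minimal form: rank cohort members by distance over a few normalized clinical features and take the nearest as the "similar patient" cluster for cost and outcome comparison. All patient ids, features, and values below are invented for illustration.

```python
import math

# Hypothetical patient similarity sketch: Euclidean distance over
# normalized features, then the k nearest cohort members form the
# cluster used for individualized cost/outcome analysis.
def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def most_similar(index_patient, cohort, k=2):
    """Return ids of the k cohort patients closest to the index patient."""
    ranked = sorted(cohort.items(),
                    key=lambda kv: euclidean(index_patient, kv[1]))
    return [pid for pid, _ in ranked[:k]]

# Invented features: (age / 100, comorbidity score / 10, annual cost / $100k).
cohort = {
    "p1": (0.45, 0.2, 0.30),
    "p2": (0.47, 0.3, 0.28),
    "p3": (0.80, 0.9, 0.95),
}
similar = most_similar((0.46, 0.25, 0.29), cohort, k=2)
```

In practice the feature set, normalization, and distance metric would themselves be research questions, as the entry's discussion of patient similarity evaluation suggests.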

  6. Precision Health Economics and Outcomes Research to Support Precision Medicine: Big Data Meets Patient Heterogeneity on the Road to Value

    PubMed Central

    Chen, Yixi; Guzauskas, Gregory F.; Gu, Chengming; Wang, Bruce C. M.; Furnback, Wesley E.; Xie, Guotong; Dong, Peng; Garrison, Louis P.

    2016-01-01

    The “big data” era represents an exciting opportunity to utilize powerful new sources of information to reduce clinical and health economic uncertainty on an individual patient level. In turn, health economic outcomes research (HEOR) practices will need to evolve to accommodate individual patient–level HEOR analyses. We propose the concept of “precision HEOR”, which utilizes a combination of costs and outcomes derived from big data to inform healthcare decision-making that is tailored to highly specific patient clusters or individuals. To explore this concept, we discuss the current and future roles of HEOR in health sector decision-making, big data and predictive analytics, and several key HEOR contexts in which big data and predictive analytics might transform traditional HEOR into precision HEOR. The guidance document addresses issues related to the transition from traditional to precision HEOR practices, the evaluation of patient similarity analysis and its appropriateness for precision HEOR analysis, and future challenges to precision HEOR adoption. Precision HEOR should make precision medicine more realizable by aiding and adapting healthcare resource allocation. The combined hopes for precision medicine and precision HEOR are that individual patients receive the best possible medical care while overall healthcare costs remain manageable or become more cost-efficient. PMID:27827859

  7. Survey of Cyber Crime in Big Data

    NASA Astrophysics Data System (ADS)

    Rajeswari, C.; Soni, Krishna; Tandon, Rajat

    2017-11-01

Big data involves performing computation and database operations over large volumes of data drawn automatically from the data possessor's business. Since a critical strategic appeal of big data is access to information from numerous and varied domains, security and privacy will play an important role in big data research and innovation. The limits of standard IT security practices are well known: software deployment can be exploited to induce developers to incorporate malicious code into applications and operating systems, a real and growing threat that is difficult to counter, and its impact spreads even faster with big data. One central issue, then, is whether current security and privacy technology is sufficient to support controlled authorization for very large numbers of direct accesses; for effective use of large-scale data, access from one domain to the data of that domain or any other must be properly authorized. For many years, trusted system development has produced a rich set of proven security concepts for dealing with determined adversaries, yet this work has largely been dismissed by vendors as "needless overkill". In this discussion, we examine how big data can take advantage of this mature security and privacy technology, and we explore the research challenges that remain.
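The abstract's concern with authorizing access to data across domains can be sketched minimally: a principal may read a dataset only if it holds a grant for that dataset's domain. Every principal, domain, and dataset name below is invented for illustration; real systems would add auditing, revocation, and finer-grained policies.

```python
# Hypothetical sketch of cross-domain access authorization for big data:
# a principal may read a dataset only if it holds a grant for the
# dataset's domain.
grants = {
    "alice": {"genomics", "billing"},
    "bob": {"genomics"},
}
dataset_domain = {
    "patient_reads": "genomics",
    "invoices": "billing",
}

def can_access(principal, dataset):
    """True if the principal holds a grant for the dataset's domain."""
    domain = dataset_domain.get(dataset)
    return domain is not None and domain in grants.get(principal, set())
```

Here `can_access("alice", "invoices")` succeeds while `can_access("bob", "invoices")` fails, since bob holds no grant for the billing domain.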

  8. Big Data: Implications for Health System Pharmacy

    PubMed Central

    Stokes, Laura B.; Rogers, Joseph W.; Hertig, John B.; Weber, Robert J.

    2016-01-01

    Big Data refers to datasets that are so large and complex that traditional methods and hardware for collecting, sharing, and analyzing them are not possible. Big Data that is accurate leads to more confident decision making, improved operational efficiency, and reduced costs. The rapid growth of health care information results in Big Data around health services, treatments, and outcomes, and Big Data can be used to analyze the benefit of health system pharmacy services. The goal of this article is to provide a perspective on how Big Data can be applied to health system pharmacy. It will define Big Data, describe the impact of Big Data on population health, review specific implications of Big Data in health system pharmacy, and describe an approach for pharmacy leaders to effectively use Big Data. A few strategies involved in managing Big Data in health system pharmacy include identifying potential opportunities for Big Data, prioritizing those opportunities, protecting privacy concerns, promoting data transparency, and communicating outcomes. As health care information expands in its content and becomes more integrated, Big Data can enhance the development of patient-centered pharmacy services. PMID:27559194

  9. Big Data: Implications for Health System Pharmacy.

    PubMed

    Stokes, Laura B; Rogers, Joseph W; Hertig, John B; Weber, Robert J

    2016-07-01

    Big Data refers to datasets that are so large and complex that traditional methods and hardware for collecting, sharing, and analyzing them are not possible. Big Data that is accurate leads to more confident decision making, improved operational efficiency, and reduced costs. The rapid growth of health care information results in Big Data around health services, treatments, and outcomes, and Big Data can be used to analyze the benefit of health system pharmacy services. The goal of this article is to provide a perspective on how Big Data can be applied to health system pharmacy. It will define Big Data, describe the impact of Big Data on population health, review specific implications of Big Data in health system pharmacy, and describe an approach for pharmacy leaders to effectively use Big Data. A few strategies involved in managing Big Data in health system pharmacy include identifying potential opportunities for Big Data, prioritizing those opportunities, protecting privacy concerns, promoting data transparency, and communicating outcomes. As health care information expands in its content and becomes more integrated, Big Data can enhance the development of patient-centered pharmacy services.

  10. Transforming Big Data into cancer-relevant insight: An initial, multi-tier approach to assess reproducibility and relevance*

    Cancer.gov

    The Cancer Target Discovery and Development (CTD^2) Network was established to accelerate the transformation of "Big Data" into novel pharmacological targets, lead compounds, and biomarkers for rapid translation into improved patient outcomes. It rapidly became clear in this collaborative network that a key central issue was to define what constitutes sufficient computational or experimental evidence to support a biologically or clinically relevant finding.

  11. Cardiotoxicity of the new cancer therapeutics- mechanisms of, and approaches to, the problem

    PubMed Central

    Force, Thomas; Kerkelä, Risto

    2009-01-01

    Cardiotoxicity of some targeted therapeutics, including monoclonal antibodies and small molecule inhibitors, is a reality. Herein we will examine why it occurs, focusing on molecular mechanisms to better understand the issue. We will also examine how big the problem is and, more importantly, how big it may become in the future. We will review models for detecting cardiotoxicity in the pre-clinical phase. We will also focus on two key areas that drive cardiotoxicity: multi-targeting and the inherent lack of selectivity of ATP-competitive antagonists. Finally, we will examine the issue of reversibility and discuss possible approaches to keeping patients on therapy. PMID:18617014

  12. How Big Are "Martin's Big Words"? Thinking Big about the Future.

    ERIC Educational Resources Information Center

    Gardner, Traci

    "Martin's Big Words: The Life of Dr. Martin Luther King, Jr." tells of King's childhood determination to use "big words" through biographical information and quotations. In this lesson, students in grades 3 to 5 explore information on Dr. King to think about his "big" words, then they write about their own…

  13. Bird habitat relationships along a Great Basin elevational gradient

    Treesearch

    Dean E. Medin; Bruce L. Welch; Warren P. Clary

    2000-01-01

    Bird censuses were taken on 11 study plots along an elevational gradient ranging from 5,250 to 11,400 feet. Each plot represented a different vegetative type or zone: shadscale, shadscale-Wyoming big sagebrush, Wyoming big sagebrush, Wyoming big sagebrush-pinyon/juniper, pinyon/juniper, pinyon/juniper-mountain big sagebrush, mountain big sagebrush, mountain big...

  14. Wisdom within: unlocking the potential of big data for nursing regulators.

    PubMed

    Blumer, L; Giblin, C; Lemermeyer, G; Kwan, J A

    2017-03-01

    This paper explores the potential for incorporating big data in nursing regulators' decision-making and policy development. Big data, commonly described as the extensive volume of information that individuals and agencies generate daily, is a concept familiar to the business community but is only beginning to be explored by the public sector. Using insights gained from a recent research project, the College and Association of Registered Nurses of Alberta, Canada, is creating an organizational culture of data-driven decision-making throughout its regulatory and professional functions. The goal is to enable the organization to respond quickly and profoundly to nursing issues in a rapidly changing healthcare environment. The evidence includes a review of the Learning from Experience: Improving the Process of Internationally Educated Nurses' Applications for Registration (LFE) research project (2011-2016), combined with a literature review on data-driven decision-making within nursing and healthcare settings, and the incorporation of big data in the private and public sectors, primarily in North America. This paper discusses this experience and, more broadly, how data can enhance the rigour and integrity of nursing and health policy. Nursing regulatory bodies have access to extensive data, and the opportunity to use these data to inform decision-making and policy development by investing in how it is captured, analysed and incorporated into decision-making processes. Understanding and using big data is a critical part of developing relevant, sound and credible policy. Rigorous collection and analysis of big data supports the integrity of the evidence used by nurse regulators in developing nursing and health policy. © 2016 International Council of Nurses.

  15. A New MI-Based Visualization Aided Validation Index for Mining Big Longitudinal Web Trial Data

    PubMed Central

    Zhang, Zhaoyang; Fang, Hua; Wang, Honggang

    2016-01-01

    Web-delivered clinical trials generate big complex data. To help untangle the heterogeneity of treatment effects, unsupervised learning methods have been widely applied. However, identifying valid patterns is a priority but challenging issue for these methods. This paper, built upon our previous research on multiple imputation (MI)-based fuzzy clustering and validation, proposes a new MI-based Visualization-aided validation index (MIVOOS) to determine the optimal number of clusters for big incomplete longitudinal Web-trial data with inflated zeros. Different from a recently developed fuzzy clustering validation index, MIVOOS uses more suitable overlap and separation measures for Web-trial data and does not depend on the choice of fuzzifier, as the widely used Xie and Beni (XB) index does. Through optimizing the view angles of 3-D projections using Sammon mapping, the optimal 2-D projection-guided MIVOOS is obtained to better visualize and verify the patterns in conjunction with trajectory patterns. Compared with XB and VOS, our newly proposed MIVOOS shows its robustness in validating big Web-trial data under different missing data mechanisms using real and simulated Web-trial data. PMID:27482473
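The Xie and Beni (XB) index the entry compares against has a standard definition: the ratio of total fuzzy within-cluster dispersion to the minimum squared separation between cluster centers, divided by the number of points. A minimal pure-Python sketch follows (fuzzifier m = 2; the example data, centers, and memberships are invented to show well-separated clusters yielding a small index):

```python
def xie_beni(data, centers, U, m=2.0):
    """Xie-Beni validity index for a fuzzy partition.

    data:    list of points (tuples)
    centers: list of cluster centers (tuples)
    U:       memberships, U[i][k] = degree of point k in cluster i
    """
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    n = len(data)
    # Total fuzzy within-cluster dispersion.
    compactness = sum(U[i][k] ** m * sqdist(data[k], centers[i])
                      for i in range(len(centers)) for k in range(n))
    # Minimum squared separation between any two cluster centers.
    separation = min(sqdist(centers[i], centers[j])
                     for i in range(len(centers))
                     for j in range(len(centers)) if i != j)
    return compactness / (n * separation)

# Two tight, well-separated clusters -> small (good) XB value.
data = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
centers = [(0.05, 0.0), (5.05, 5.0)]
U = [[0.99, 0.99, 0.01, 0.01],
     [0.01, 0.01, 0.99, 0.99]]
xb = xie_beni(data, centers, U)
```

Because the fuzzifier m appears in the compactness term, the value of XB depends on that choice, which is the sensitivity MIVOOS is designed to avoid.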

  16. Finding the traces of behavioral and cognitive processes in big data and naturally occurring datasets.

    PubMed

    Paxton, Alexandra; Griffiths, Thomas L

    2017-10-01

    Today, people generate and store more data than ever before as they interact with both real and virtual environments. These digital traces of behavior and cognition offer cognitive scientists and psychologists an unprecedented opportunity to test theories outside the laboratory. Despite general excitement about big data and naturally occurring datasets among researchers, three "gaps" stand in the way of their wider adoption in theory-driven research: the imagination gap, the skills gap, and the culture gap. We outline an approach to bridging these three gaps while respecting our responsibilities to the public as participants in and consumers of the resulting research. To that end, we introduce Data on the Mind (http://www.dataonthemind.org), a community-focused initiative aimed at meeting the unprecedented challenges and opportunities of theory-driven research with big data and naturally occurring datasets. We argue that big data and naturally occurring datasets are most powerfully used to supplement, not supplant, traditional experimental paradigms in order to understand human behavior and cognition, and we highlight emerging ethical issues related to the collection, sharing, and use of these powerful datasets.

  17. The dynamics of big data and human rights: the case of scientific research.

    PubMed

    Vayena, Effy; Tasioulas, John

    2016-12-28

    In this paper, we address the complex relationship between big data and human rights. Because this is a vast terrain, we restrict our focus in two main ways. First, we concentrate on big data applications in scientific research, mostly health-related research. And, second, we concentrate on two human rights: the familiar right to privacy and the less well-known right to science. Our contention is that human rights interact in potentially complex ways with big data, not only constraining it, but also enabling it in various ways; and that such rights are dynamic in character, rather than fixed once and for all, changing in their implications over time in line with changes in the context we inhabit, and also as they interact among themselves in jointly responding to the opportunities and risks thrown up by a changing world. Understanding this dynamic interaction of human rights is crucial for formulating an ethic tailored to the realities, the new capabilities and risks, of the rapidly evolving digital environment. This article is part of the themed issue 'The ethical impact of data science'. © 2016 The Author(s).

  18. The dynamics of big data and human rights: the case of scientific research

    PubMed Central

    Tasioulas, John

    2016-01-01

    In this paper, we address the complex relationship between big data and human rights. Because this is a vast terrain, we restrict our focus in two main ways. First, we concentrate on big data applications in scientific research, mostly health-related research. And, second, we concentrate on two human rights: the familiar right to privacy and the less well-known right to science. Our contention is that human rights interact in potentially complex ways with big data, not only constraining it, but also enabling it in various ways; and that such rights are dynamic in character, rather than fixed once and for all, changing in their implications over time in line with changes in the context we inhabit, and also as they interact among themselves in jointly responding to the opportunities and risks thrown up by a changing world. Understanding this dynamic interaction of human rights is crucial for formulating an ethic tailored to the realities—the new capabilities and risks—of the rapidly evolving digital environment. This article is part of the themed issue ‘The ethical impact of data science’. PMID:28336802

  19. Seeding considerations in restoring big sagebrush habitat

    Treesearch

    Scott M. Lambert

    2005-01-01

    This paper describes methods of managing or seeding to restore big sagebrush communities for wildlife habitat. The focus is on three big sagebrush subspecies, Wyoming big sagebrush (Artemisia tridentata ssp. wyomingensis), basin big sagebrush (Artemisia tridentata ssp. tridentata), and mountain...

  20. Making sense of metacommunities: dispelling the mythology of a metacommunity typology.

    PubMed

    Brown, Bryan L; Sokol, Eric R; Skelton, James; Tornwall, Brett

    2017-03-01

    Metacommunity ecology has rapidly become a dominant framework through which ecologists understand the natural world. Unfortunately, persistent misunderstandings regarding metacommunity theory and the methods for evaluating hypotheses based on the theory are common in the ecological literature. Since its beginnings, four major paradigms (species sorting, mass effects, neutrality, and patch dynamics) have been associated with metacommunity ecology. The Big 4 have been misconstrued to represent the complete set of metacommunity dynamics. As a result, many investigators attempt to evaluate community assembly processes as strictly belonging to one of the Big 4 types, rather than embracing the full scope of metacommunity theory. The Big 4 were never intended to represent the entire spectrum of metacommunity dynamics and were rather examples of historical paradigms that fit within the new framework. We argue that perpetuation of the Big 4 typology hurts community ecology and we encourage researchers to embrace the full inference space of metacommunity theory. A related, but distinct issue is that the technique of variation partitioning is often used to evaluate the dynamics of metacommunities. This methodology has produced its own set of misunderstandings, some of which are directly a product of the Big 4 typology and others which are simply the product of poor study design or statistical artefacts. However, variation partitioning is a potentially powerful technique when used appropriately and we identify several strategies for successful utilization of variation partitioning.

  1. 76 FR 54415 - Proposed Flood Elevation Determinations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-01

    ... following flooding sources: Bear Creek (backwater effects from Cumberland River), Big Renox Creek (backwater effects from Cumberland River), Big Whetstone Creek (backwater effects from Cumberland River), Big Willis... River), Big Renox Creek (backwater effects from Cumberland River), Big Whetstone Creek (backwater...

  2. A glossary for big data in population and public health: discussion and commentary on terminology and research methods.

    PubMed

    Fuller, Daniel; Buote, Richard; Stanley, Kevin

    2017-11-01

    The volume and velocity of data are growing rapidly and big data analytics are being applied to these data in many fields. Population and public health researchers may be unfamiliar with the terminology and statistical methods used in big data. This creates a barrier to the application of big data analytics. The purpose of this glossary is to define terms used in big data and big data analytics and to contextualise these terms. We define the five Vs of big data and provide definitions and distinctions for data mining, machine learning and deep learning, among other terms. We provide key distinctions between big data and statistical analysis methods applied to big data. We contextualise the glossary by providing examples where big data analysis methods have been applied to population and public health research problems and provide brief guidance on how to learn big data analysis methods. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  3. Big defensins, a diverse family of antimicrobial peptides that follows different patterns of expression in hemocytes of the oyster Crassostrea gigas.

    PubMed

    Rosa, Rafael D; Santini, Adrien; Fievet, Julie; Bulet, Philippe; Destoumieux-Garzón, Delphine; Bachère, Evelyne

    2011-01-01

    Big defensin is an antimicrobial peptide composed of a highly hydrophobic N-terminal region and a cationic C-terminal region containing six cysteine residues involved in three internal disulfide bridges. While big defensin sequences have been reported in various mollusk species, few studies have been devoted to their sequence diversity, gene organization and their expression in response to microbial infections. Using the high-throughput Digital Gene Expression approach, we have identified in Crassostrea gigas oysters several sequences coding for big defensins induced in response to a Vibrio infection. We showed that the oyster big defensin family is composed of three members (named Cg-BigDef1, Cg-BigDef2 and Cg-BigDef3) that are encoded by distinct genomic sequences. All Cg-BigDefs contain a hydrophobic N-terminal domain and a cationic C-terminal domain that resembles vertebrate β-defensins. Both domains are encoded by separate exons. We found that big defensins form a group predominantly present in mollusks and closer to vertebrate defensins than to invertebrate and fungi CSαβ-containing defensins. Moreover, we showed that Cg-BigDefs are expressed in oyster hemocytes only and follow different patterns of gene expression. While Cg-BigDef3 is non-regulated, both Cg-BigDef1 and Cg-BigDef2 transcripts are strongly induced in response to bacterial challenge. Induction was dependent on pathogen associated molecular patterns but not damage-dependent. The inducibility of Cg-BigDef1 was confirmed by HPLC and mass spectrometry, since ions with a molecular mass compatible with mature Cg-BigDef1 (10.7 kDa) were present in immune-challenged oysters only. From our biochemical data, native Cg-BigDef1 would result from the elimination of a prepropeptide sequence and the cyclization of the resulting N-terminal glutamine residue into a pyroglutamic acid. 
We provide here the first report showing that big defensins form a family of antimicrobial peptides diverse not only in terms of sequences but also in terms of genomic organization and regulation of gene expression.

  4. Big Defensins, a Diverse Family of Antimicrobial Peptides That Follows Different Patterns of Expression in Hemocytes of the Oyster Crassostrea gigas

    PubMed Central

    Rosa, Rafael D.; Santini, Adrien; Fievet, Julie; Bulet, Philippe; Destoumieux-Garzón, Delphine; Bachère, Evelyne

    2011-01-01

    Background Big defensin is an antimicrobial peptide composed of a highly hydrophobic N-terminal region and a cationic C-terminal region containing six cysteine residues involved in three internal disulfide bridges. While big defensin sequences have been reported in various mollusk species, few studies have been devoted to their sequence diversity, gene organization and their expression in response to microbial infections. Findings Using the high-throughput Digital Gene Expression approach, we have identified in Crassostrea gigas oysters several sequences coding for big defensins induced in response to a Vibrio infection. We showed that the oyster big defensin family is composed of three members (named Cg-BigDef1, Cg-BigDef2 and Cg-BigDef3) that are encoded by distinct genomic sequences. All Cg-BigDefs contain a hydrophobic N-terminal domain and a cationic C-terminal domain that resembles vertebrate β-defensins. Both domains are encoded by separate exons. We found that big defensins form a group predominantly present in mollusks and closer to vertebrate defensins than to invertebrate and fungi CSαβ-containing defensins. Moreover, we showed that Cg-BigDefs are expressed in oyster hemocytes only and follow different patterns of gene expression. While Cg-BigDef3 is non-regulated, both Cg-BigDef1 and Cg-BigDef2 transcripts are strongly induced in response to bacterial challenge. Induction was dependent on pathogen associated molecular patterns but not damage-dependent. The inducibility of Cg-BigDef1 was confirmed by HPLC and mass spectrometry, since ions with a molecular mass compatible with mature Cg-BigDef1 (10.7 kDa) were present in immune-challenged oysters only. From our biochemical data, native Cg-BigDef1 would result from the elimination of a prepropeptide sequence and the cyclization of the resulting N-terminal glutamine residue into a pyroglutamic acid. 
Conclusions We provide here the first report showing that big defensins form a family of antimicrobial peptides diverse not only in terms of sequences but also in terms of genomic organization and regulation of gene expression. PMID:21980497

  5. Protecting Your Patients' Interests in the Era of Big Data, Artificial Intelligence, and Predictive Analytics.

    PubMed

    Balthazar, Patricia; Harri, Peter; Prater, Adam; Safdar, Nabile M

    2018-03-01

    The Hippocratic oath and the Belmont report articulate foundational principles for how physicians interact with patients and research subjects. The increasing use of big data and artificial intelligence techniques demands a re-examination of these principles in light of the potential issues surrounding privacy, confidentiality, data ownership, informed consent, epistemology, and inequities. Patients have strong opinions about these issues. Radiologists have a fiduciary responsibility to protect the interest of their patients. As such, the community of radiology leaders, ethicists, and informaticists must have a conversation about the appropriate way to deal with these issues and help lead the way in developing capabilities in the most just, ethical manner possible. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  6. BigBWA: approaching the Burrows-Wheeler aligner to Big Data technologies.

    PubMed

    Abuín, José M; Pichel, Juan C; Pena, Tomás F; Amigo, Jorge

    2015-12-15

    BigBWA is a new tool that uses the Big Data technology Hadoop to boost the performance of the Burrows-Wheeler aligner (BWA). Important reductions in the execution times were observed when using this tool. In addition, BigBWA is fault tolerant and it does not require any modification of the original BWA source code. BigBWA is available at the project GitHub repository: https://github.com/citiususc/BigBWA. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
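    BigBWA's speed-up comes from Hadoop splitting the input reads across workers, each running unmodified BWA on its own split. A minimal sketch of the one constraint such splitting must respect: FASTQ records span four lines, so splits must fall on record boundaries. The `split_fastq` helper is illustrative, not BigBWA's actual code:

```python
def split_fastq(lines, n_chunks):
    """Partition FASTQ lines (4 lines per record: header, sequence,
    separator, qualities) into n_chunks contiguous groups, never
    cutting a record in half."""
    records = [lines[i:i + 4] for i in range(0, len(lines), 4)]
    per = -(-len(records) // n_chunks)  # ceiling division
    return [sum(records[i:i + per], []) for i in range(0, len(records), per)]

# Six toy records
reads = []
for i in range(6):
    reads += [f"@read{i}", "ACGT", "+", "IIII"]
chunks = split_fastq(reads, 3)  # three splits of two records each
```

    Each chunk could then be handed to an independent worker running the unmodified aligner, which is the essence of the fault-tolerant, no-source-change design the abstract describes.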

  7. BIG DATA AND STATISTICS

    PubMed Central

    Rossell, David

    2016-01-01

    Big Data brings unprecedented power to address scientific, economic and societal issues, but also amplifies the possibility of certain pitfalls. These include using purely data-driven approaches that disregard understanding the phenomenon under study, aiming at a dynamically moving target, ignoring critical data collection issues, summarizing or preprocessing the data inadequately and mistaking noise for signal. We review some success stories and illustrate how statistical principles can help obtain more reliable information from data. We also touch upon current challenges that require active methodological research, such as strategies for efficient computation, integration of heterogeneous data, extending the underlying theory to increasingly complex questions and, perhaps most importantly, training a new generation of scientists to develop and deploy these strategies. PMID:27722040

  8. Big data processing in the cloud - Challenges and platforms

    NASA Astrophysics Data System (ADS)

    Zhelev, Svetoslav; Rozeva, Anna

    2017-12-01

    Choosing the appropriate architecture and technologies for a big data project is a difficult task, which requires extensive knowledge in both the problem domain and in the big data landscape. The paper analyzes the main big data architectures and the most widely implemented technologies used for processing and persisting big data. Clouds provide for dynamic resource scaling, which makes them a natural fit for big data applications. Basic cloud computing service models are presented. Two architectures for processing big data are discussed, Lambda and Kappa architectures. Technologies for big data persistence are presented and analyzed. Stream processing as the most important and difficult to manage is outlined. The paper highlights main advantages of cloud and potential problems.
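    The Lambda architecture discussed above combines a batch layer (a view periodically recomputed over the full master dataset) with a speed layer (absorbing events that arrive after the last batch run), merged in the serving layer at query time. A toy sketch under those assumptions; all names and data are illustrative:

```python
from collections import Counter

# Batch layer: recompute a full view from the master dataset.
master = ["click", "view", "click", "view", "view"]
batch_view = Counter(master)

# Speed layer: incrementally absorb events arriving after the batch run.
realtime_view = Counter()
for event in ["click", "purchase"]:
    realtime_view[event] += 1

# Serving layer: a query merges both views.
def query(event):
    return batch_view[event] + realtime_view[event]
```

    A Kappa architecture, by contrast, would drop the batch layer entirely and reprocess the full event log through the same streaming path when views need rebuilding.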

  9. Big data challenges for large radio arrays

    NASA Astrophysics Data System (ADS)

    Jones, D. L.; Wagstaff, K.; Thompson, D. R.; D'Addario, L.; Navarro, R.; Mattmann, C.; Majid, W.; Lazio, J.; Preston, J.; Rebbapragada, U.

    2012-03-01

    Future large radio astronomy arrays, particularly the Square Kilometre Array (SKA), will be able to generate data at rates far higher than can be analyzed or stored affordably with current practices. This is, by definition, a "big data" problem, and requires an end-to-end solution if future radio arrays are to reach their full scientific potential. Similar data processing, transport, storage, and management challenges face next-generation facilities in many other fields. The Jet Propulsion Laboratory is developing technologies to address big data issues, with an emphasis in three areas: 1) low-power digital processing architectures to make high-volume data generation operationally affordable, 2) data-adaptive machine learning algorithms for real-time analysis (or "data triage") of large data volumes, and 3) scalable data archive systems that allow efficient data mining and remote user code to run locally where the data are stored.
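    The "data triage" idea, flagging in real time the small fraction of samples worth keeping or inspecting, can be illustrated with a simple online outlier detector. A sketch using Welford's streaming mean/variance algorithm; the class and the z-score threshold are a toy stand-in, not JPL's actual method:

```python
import math

class StreamTriage:
    """Flag samples far from the running mean, using Welford's
    online algorithm so no history needs to be stored."""
    def __init__(self, z_thresh=4.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations
        self.z_thresh = z_thresh

    def update(self, x):
        # Welford update of running mean and variance
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)
        if self.n < 10:
            return False  # not enough history yet
        std = math.sqrt(self.m2 / (self.n - 1))
        return std > 0 and abs(x - self.mean) / std > self.z_thresh
```

    Constant memory per stream is the point: at SKA-like data rates, anything that stores raw history does not scale.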

  10. Quantization of Big Bang in Crypto-Hermitian Heisenberg Picture

    NASA Astrophysics Data System (ADS)

    Znojil, Miloslav

    A background-independent quantization of the Universe near its Big Bang singularity is considered using a drastically simplified toy model. Several conceptual issues are addressed. (1) The observable spatial-geometry characteristics of our empty-space expanding Universe are sampled by the time-dependent operator $Q=Q(t)$ of the distance between two space-attached observers ("Alice and Bob"). (2) For any pre-selected guess of the simple, non-covariant time-dependent observable $Q(t)$, one of the Kato exceptional points (viz., $t=\tau_{(EP)}$) is postulated real-valued. This enables us to treat it as the time of Big Bang. (3) During our "Eon" (i.e., at all $t>\tau_{(EP)}$) the observability status of operator $Q(t)$ is mathematically guaranteed by its self-adjoint nature with respect to an ad hoc Hilbert-space metric $\Theta(t)$ …

  11. Advances in Risk Analysis with Big Data.

    PubMed

    Choi, Tsan-Ming; Lambert, James H

    2017-08-01

    With cloud computing, Internet-of-things, wireless sensors, social media, fast storage and retrieval, etc., organizations and enterprises have access to unprecedented amounts and varieties of data. Current risk analysis methodology and applications are experiencing related advances and breakthroughs. For example, highway operations data are readily available, and making use of them reduces risks of traffic crashes and travel delays. Massive data of financial and enterprise systems support decision making under risk by individuals, industries, regulators, etc. In this introductory article, we first discuss the meaning of big data for risk analysis. We then examine recent advances in risk analysis with big data in several topic areas. For each area, we identify and introduce the relevant articles that are featured in the special issue. We conclude with a discussion on future research opportunities. © 2017 Society for Risk Analysis.

  12. The Big Bang Theory and the Nature of Science

    NASA Astrophysics Data System (ADS)

    Arthury, Luiz Henrique Martins; Peduzzi, Luiz O. Q.

    2015-12-01

    Modern cosmology was constituted throughout the twentieth century, up to the present day, as a very productive field of research, resulting in major discoveries that attest to its explanatory power. The Big Bang Theory, the generic and popular name of the standard model of cosmology, is probably the most daring research program of physics and astronomy, attempting to reconstruct the evolution of our observable universe. Contrary to what one might think, its conjectures rest on a degree of refinement and corroborative evidence that makes it our best explanation for the history of our cosmos. The Big Bang Theory is also an excellent field in which to discuss issues regarding scientific activity itself. In this paper we discuss the main elements of this theory from an epistemological standpoint, resulting in a text useful for educational activities with related goals.

  13. Conservation: Toward firmer ground

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The following aspects of energy conservation were discussed: conservation history and goals, conservation modes, conservation accounting criteria, and a method to overcome obstacles. The conservation modes tested fall into one of the following categories: reduced energy consumption, increased efficiency of energy utilization, or substitution of one or more forms of energy for another which is in shorter supply or in some sense thought to be of more value. The conservation accounting criteria include net energy reduction, economic, and technical criteria. The method to overcome obstacles includes approaches such as: direct personal impact (life style, income, security, aspiration), an element of crisis, large-scale involvement of environmental, safety, and health issues, connections to big government, big business, and big politics, involvement of known and speculative science and technology, appeal to moral and ethical standards, and the transient nature of opportunities to correct the system.

  14. Technical challenges for big data in biomedicine and health: data sources, infrastructure, and analytics.

    PubMed

    Peek, N; Holmes, J H; Sun, J

    2014-08-15

    To review technical and methodological challenges for big data research in biomedicine and health. We discuss sources of big datasets, survey infrastructures for big data storage and big data processing, and describe the main challenges that arise when analyzing big data. The life and biomedical sciences are massively contributing to the big data revolution through secondary use of data that were collected during routine care and through new data sources such as social media. Efficient processing of big datasets is typically achieved by distributing computation over a cluster of computers. Data analysts should be aware of pitfalls related to big data such as bias in routine care data and the risk of false-positive findings in high-dimensional datasets. The major challenge for the near future is to transform analytical methods that are used in the biomedical and health domain, to fit the distributed storage and processing model that is required to handle big data, while ensuring confidentiality of the data being analyzed.
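    The "risk of false-positive findings in high-dimensional datasets" mentioned above is easy to demonstrate: screening many pure-noise features against an outcome at a nominal p < 0.05 threshold still "discovers" roughly 5% of them. A sketch using the large-sample critical value |r| > 1.96/√n; the feature counts and seed are arbitrary:

```python
import random

random.seed(0)
n, p = 100, 500  # samples per feature, number of pure-noise features
outcome = [random.gauss(0, 1) for _ in range(n)]

def corr(x, y):
    """Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# |r| > 1.96 / sqrt(n) approximates a two-sided p < 0.05 cutoff
cutoff = 1.96 / n ** 0.5
hits = 0
for _ in range(p):
    feature = [random.gauss(0, 1) for _ in range(n)]
    if abs(corr(feature, outcome)) > cutoff:
        hits += 1
# Roughly 5% of the noise features pass the nominal threshold
```

    This is why the abstract flags bias and false positives together: without multiple-testing correction, large feature counts alone manufacture "findings".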

  15. Big data - a 21st century science Maginot Line? No-boundary thinking: shifting from the big data paradigm.

    PubMed

    Huang, Xiuzhen; Jennings, Steven F; Bruce, Barry; Buchan, Alison; Cai, Liming; Chen, Pengyin; Cramer, Carole L; Guan, Weihua; Hilgert, Uwe Kk; Jiang, Hongmei; Li, Zenglu; McClure, Gail; McMullen, Donald F; Nanduri, Bindu; Perkins, Andy; Rekepalli, Bhanu; Salem, Saeed; Specker, Jennifer; Walker, Karl; Wunsch, Donald; Xiong, Donghai; Zhang, Shuzhong; Zhang, Yu; Zhao, Zhongming; Moore, Jason H

    2015-01-01

    Whether your interests lie in scientific arenas, the corporate world, or in government, you have certainly heard the praises of big data: Big data will give you new insights, allow you to become more efficient, and/or will solve your problems. While big data has had some outstanding successes, many are now beginning to see that it is not the Silver Bullet that it has been touted to be. Here our main concern is the overall impact of big data: the current manifestation of big data is constructing a Maginot Line in science in the 21st century. Big data is no longer merely "lots of data" as a phenomenon; the big data paradigm is putting the spirit of the Maginot Line into lots of data. Big data overall is disconnecting researchers from science challenges. We propose No-Boundary Thinking (NBT), applying no-boundary thinking to problem defining in order to address science challenges.

  16. BIG1, a brefeldin A-inhibited guanine nucleotide-exchange protein regulates neurite development via PI3K-AKT and ERK signaling pathways.

    PubMed

    Zhou, C; Li, C; Li, D; Wang, Y; Shao, W; You, Y; Peng, J; Zhang, X; Lu, L; Shen, X

    2013-12-19

    The elongation of neurons is highly dependent on membrane trafficking. Brefeldin A (BFA)-inhibited guanine nucleotide-exchange protein 1 (BIG1) functions in the membrane trafficking between the Golgi apparatus and the plasma membrane. BFA, an uncompetitive inhibitor of BIG1, can inhibit neurite outgrowth and polarity development. In this study, we aimed to define the possible role of BIG1 in neurite development and to further investigate the potential mechanism. By immunostaining, we found that BIG1 was extensively colocalized with synaptophysin, a marker for synaptic vesicles, in soma and partly in neurites. The amounts of both BIG1 protein and mRNA were up-regulated during rat brain development. BIG1 depletion significantly decreased the neurite length and inhibited the phosphorylation of phosphatidylinositide 3-kinase (PI3K) and protein kinase B (AKT). Inhibition of BIG1 guanine nucleotide-exchange factor (GEF) activity by BFA or overexpression of the dominant-negative BIG1 reduced PI3K and AKT phosphorylation, indicating that the regulatory effect of BIG1 on the PI3K-AKT signaling pathway is dependent on its GEF activity. BIG1 siRNA or BFA treatment also significantly reduced extracellular signal-regulated kinase (ERK) phosphorylation. Overexpression of wild-type BIG1 significantly increased ERK phosphorylation, but the dominant-negative BIG1 had no effect on ERK phosphorylation, indicating that the involvement of BIG1 in ERK signaling regulation may not be dependent on its GEF activity. Our results identify a novel function of BIG1 in neurite development. This newly recognized function integrates the role of BIG1 in membrane trafficking with the activation of the PI3K-AKT and ERK signaling pathways, which are critical in neurite development. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.

  17. Big Five Personality Traits and Eating Attitudes in Intensively Training Dancers: The Mediating Role of Internalized Thinness Norms.

    PubMed

    Scoffier-Mériaux, Stéphanie; Falzon, Charlène; Lewton-Brain, Peter; Filaire, Edith; d'Arripe-Longueville, Fabienne

    2015-09-01

    Dancers are at high risk of developing disordered eating attitudes, notably because of internalized thinness norms. Although the big five personality traits have been shown to be associated with eating attitudes in daily life, little is known about these associations in dancers, for whom eating issues and the internalization of thinness norms may be especially salient, or about the role of internalized thinness norms in this relationship. The main objectives of this study were thus to examine the relationships between the personality traits defined in the big five model and the self-regulation of eating attitudes, and to assess the role of internalized thinness norms in this association. The study included 180 intensively training dancers with an average age of 15.6 years (SD = 2.8). Dancers completed questionnaires measuring the big five personality traits, internalization of thinness norms and self-regulation of eating attitudes in sport. Bootstrapped mediation analyses showed that neuroticism was negatively associated with self-regulation of eating attitudes, both directly and indirectly through the mediating role of internalized thinness norms. This study suggested that: (a) neuroticism is a vulnerability factor for self-regulation of eating attitudes in dancers, as already evidenced in the general population, and (b) the internalization of thinness norms is a pathway through which neuroticism affects self-regulation of eating attitudes. The big five model is therefore partially related to the internalization of thinness norms and eating attitudes in dancers. Key points: The big five model relates to the internalization of thinness norms and eating attitudes in dancers. Neuroticism is negatively related to the self-regulation of eating attitudes. The internalization of thinness norms mediates the relationship between neuroticism and self-regulation of eating attitudes.
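    The bootstrapped mediation analysis reported here estimates an indirect effect a·b, where a is the predictor→mediator path and b the mediator→outcome path controlling for the predictor, with a percentile bootstrap for the confidence interval. A self-contained sketch on synthetic data; the coefficients, seed, and the residual-based b-path (the Frisch-Waugh device) are illustrative, not the study's actual procedure:

```python
import random

def slope(x, y):
    """OLS slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

def indirect_effect(x, m, y):
    a = slope(x, m)  # a-path: M ~ X
    def resid(u, v):  # residuals of v after regressing on u
        s = slope(u, v)
        mu, mv = sum(u) / len(u), sum(v) / len(v)
        return [vi - (mv + s * (ui - mu)) for ui, vi in zip(u, v)]
    # b-path: Y ~ M controlling for X, via residual regression
    b = slope(resid(x, m), resid(x, y))
    return a * b

random.seed(1)
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.5 * xi + random.gauss(0, 1) for xi in x]                      # a ≈ 0.5
y = [0.4 * mi + 0.2 * xi + random.gauss(0, 1) for xi, mi in zip(x, m)]  # b ≈ 0.4

# Percentile bootstrap of the indirect effect a*b
boots = []
for _ in range(500):
    idx = [random.randrange(n) for _ in range(n)]
    boots.append(indirect_effect([x[i] for i in idx],
                                 [m[i] for i in idx],
                                 [y[i] for i in idx]))
boots.sort()
ci = (boots[12], boots[487])  # ~95% percentile interval
```

    A confidence interval excluding zero is the usual bootstrap evidence for mediation, which matches the indirect pathway the study reports.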

  18. Big data need big theory too

    PubMed Central

    Dougherty, Edward R.; Highfield, Roger R.

    2016-01-01

    The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their ‘depth’ and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote ‘blind’ big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare. This article is part of the themed issue ‘Multiscale modelling at the physics–chemistry–biology interface’. PMID:27698035

  19. Big Five Personality Traits and Eating Attitudes in Intensively Training Dancers: The Mediating Role of Internalized Thinness Norms

    PubMed Central

    Scoffier-Mériaux, Stéphanie; Falzon, Charlène; Lewton-Brain, Peter; Filaire, Edith; d’Arripe-Longueville, Fabienne

    2015-01-01

    Dancers are at high risk of developing disordered eating attitudes, notably because of internalized thinness norms. Although the big five personality traits have been shown to be associated with eating attitudes in daily life, little is known about these associations in dancers, for whom eating issues and the internalization of thinness norms may be especially salient, or about the role of internalized thinness norms in this relationship. The main objectives of this study were thus to examine the relationships between the personality traits defined in the big five model and the self-regulation of eating attitudes, and to assess the role of internalized thinness norms in this association. The study included 180 intensively training dancers with an average age of 15.6 years (SD = 2.8). Dancers completed questionnaires measuring the big five personality traits, internalization of thinness norms and self-regulation of eating attitudes in sport. Bootstrapped mediation analyses showed that neuroticism was negatively associated with self-regulation of eating attitudes, both directly and indirectly through the mediating role of internalized thinness norms. This study suggested that: (a) neuroticism is a vulnerability factor for self-regulation of eating attitudes in dancers, as already evidenced in the general population, and (b) the internalization of thinness norms is a pathway through which neuroticism affects self-regulation of eating attitudes. The big five model is therefore partially related to the internalization of thinness norms and eating attitudes in dancers. Key points: The big five model relates to the internalization of thinness norms and eating attitudes in dancers. Neuroticism is negatively related to the self-regulation of eating attitudes. The internalization of thinness norms mediates the relationship between neuroticism and self-regulation of eating attitudes. PMID:26336350

  20. Big data need big theory too.

    PubMed

    Coveney, Peter V; Dougherty, Edward R; Highfield, Roger R

    2016-11-13

    The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their 'depth' and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote 'blind' big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2015 The Authors.

  1. Implementation of a Big Data Accessing and Processing Platform for Medical Records in Cloud.

    PubMed

    Yang, Chao-Tung; Liu, Jung-Chun; Chen, Shuo-Tsung; Lu, Hsin-Wen

    2017-08-18

    Big Data analysis has become a key factor in being innovative and competitive. Along with worldwide population growth and the aging trend in developed countries, national medical care usage has been increasing. Because individual medical data are usually scattered across different institutions in varied formats, integrating these ever-growing data is challenging. For these data platforms to have scalable load capacity, they must be built on sound platform architecture. Several issues must be considered when using cloud computing to quickly integrate big medical data into a database for analyzing, searching, and filtering to obtain valuable information. This work builds a cloud storage system with HBase of Hadoop for storing and analyzing big data of medical records and improves the performance of importing data into the database. The data of medical records are stored in the HBase database platform for big data analysis. This system performs distributed computing on medical records data processing through Hadoop MapReduce programming, and provides functions including keyword search, data filtering, and basic statistics for the HBase database. This system uses the single-threaded Put method and the CompleteBulkload mechanism to import medical data. From the experimental results, we find that when the file size is less than 300 MB, the single-threaded Put method is used, and when the file size is larger than 300 MB, the CompleteBulkload mechanism is used to improve the performance of data import into the database. This system provides a web interface that allows users to search data, filter out meaningful information through the web, and analyze and convert data into suitable forms that will be helpful for medical staff and institutions.
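    The paper's empirical finding, single-threaded Put below roughly 300 MB and CompleteBulkload above it, amounts to a simple dispatch rule at ingestion time. A sketch with hypothetical names; a real system would invoke the corresponding HBase ingestion paths behind each label:

```python
def choose_import_method(file_size_mb, threshold_mb=300):
    """Pick the HBase ingestion path described in the paper:
    single-threaded Put below the threshold, CompleteBulkload at or
    above it. The 300 MB cutoff is the paper's empirical crossover."""
    if file_size_mb < threshold_mb:
        return "put-single-threaded"
    return "completebulkload"

method = choose_import_method(120)  # small file -> "put-single-threaded"
```

    The design point is that bulk load bypasses the regular write path (WAL and memstore), so its fixed setup cost only pays off for larger files.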

  2. Measuring the Promise of Big Data Syllabi

    ERIC Educational Resources Information Center

    Friedman, Alon

    2018-01-01

    Growing interest in Big Data is leading industries, academics and governments to accelerate Big Data research. However, how teachers should teach Big Data has not been fully examined. This article suggests criteria for redesigning Big Data syllabi in public and private degree-awarding higher education establishments. The author conducted a survey…

  3. The effect of big endothelin-1 in the proximal tubule of the rat kidney

    PubMed Central

    Beara-Lasić, Lada; Knotek, Mladen; Čejvan, Kenan; Jakšić, Ozren; Lasić, Zoran; Skorić, Boško; Brkljačić, Vera; Banfić, Hrvoje

    1997-01-01

    An obligatory step in the biosynthesis of endothelin-1 (ET-1) is the conversion of its inactive precursor, big ET-1, into the mature form by the action of specific, phosphoramidon-sensitive, endothelin converting enzyme(s) (ECE). Disparate effects of big ET-1 and ET-1 on renal tubule function suggest that big ET-1 might directly influence renal tubule function. Therefore, the role of the enzymatic conversion of big ET-1 into ET-1 in eliciting the functional response (generation of 1,2-diacylglycerol) to big ET-1 was studied in the rat proximal tubules. In renal cortical slices incubated with big ET-1, pretreatment with phosphoramidon (an ECE inhibitor) reduced tissue immunoreactive ET-1 to a level similar to that of cortical tissue not exposed to big ET-1. This confirms the presence and effectiveness of ECE inhibition by phosphoramidon. In freshly isolated proximal tubule cells, big ET-1 stimulated the generation of 1,2-diacylglycerol (DAG) in a time- and dose-dependent manner. Neither phosphoramidon nor chymostatin, a chymase inhibitor, influenced the generation of DAG evoked by big ET-1. Big ET-1-dependent synthesis of DAG was found in the brush-border membrane. It was unaffected by BQ123, an ETA receptor antagonist, but was blocked by bosentan, an ETA,B-nonselective endothelin receptor antagonist. These results suggest that the proximal tubule is a site for the direct effect of big ET-1 in the rat kidney. The effect of big ET-1 is confined to the brush-border membrane of the proximal tubule, which may be the site of big ET-1-sensitive receptors. PMID:9051300

  4. Changing the personality of a face: Perceived Big Two and Big Five personality factors modeled in real photographs.

    PubMed

    Walker, Mirella; Vetter, Thomas

    2016-04-01

    General, spontaneous evaluations of strangers based on their faces have been shown to reflect judgments of these persons' intention and ability to harm. These evaluations can be mapped onto a 2D space defined by the dimensions trustworthiness (intention) and dominance (ability). Here we go beyond general evaluations and focus on more specific personality judgments derived from the Big Two and Big Five personality concepts. In particular, we investigate whether Big Two/Big Five personality judgments can be mapped onto the 2D space defined by the dimensions trustworthiness and dominance. Results indicate that judgments of the Big Two personality dimensions almost perfectly map onto the 2D space. In contrast, at least 3 of the Big Five dimensions (i.e., neuroticism, extraversion, and conscientiousness) go beyond the 2D space, indicating that additional dimensions are necessary to describe more specific face-based personality judgments accurately. Building on this evidence, we model the Big Two/Big Five personality dimensions in real facial photographs. Results from 2 validation studies show that the Big Two/Big Five are perceived reliably across different samples of faces and participants. Moreover, results reveal that participants differentiate reliably between the different Big Two/Big Five dimensions. Importantly, this high level of agreement and differentiation in personality judgments from faces likely creates a subjective reality which may have serious consequences for those being perceived; notably, these consequences ensue because the subjective reality is socially shared, irrespective of the judgments' validity. The methodological approach introduced here might prove useful in various psychological disciplines. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  5. Cryptography for Big Data Security

    DTIC Science & Technology

    2015-07-13

    Cryptography for Big Data Security. Book chapter for Big Data: Storage, Sharing, and Security (3S). Distribution A: Public Release. Ariel Hamlin, Nabil… Email: arkady@ll.mit.edu. Chapter 1, "Cryptography for Big Data Security," 1.1 Introduction: "With the amount…"

  6. Health Informatics Scientists' Perception About Big Data Technology.

    PubMed

    Minou, John; Routsis, Fotios; Gallos, Parisis; Mantas, John

    2017-01-01

    The aim of this paper is to present the perceptions of Health Informatics Scientists about Big Data Technology in Healthcare. An empirical study was conducted among 46 scientists to assess their knowledge of Big Data Technology and their perceptions about using this technology in healthcare. Based on the study findings, 86.7% of the scientists had knowledge of Big Data Technology. Furthermore, 59.1% of the scientists believed that Big Data Technology refers to structured data. Additionally, 100% of the population believed that Big Data Technology can be implemented in Healthcare. Finally, the majority did not know of any use cases of Big Data Technology in Greece, while 57.8% of them mentioned that they knew of use cases abroad.

  7. Sandwich-type enzyme immunoassay for big endothelin-I in plasma: concentrations in healthy human subjects unaffected by sex or posture.

    PubMed

    Aubin, P; Le Brun, G; Moldovan, F; Villette, J M; Créminon, C; Dumas, J; Homyrda, L; Soliman, H; Azizi, M; Fiet, J

    1997-01-01

    A sandwich-type enzyme immunoassay has been developed for measuring human big endothelin-1 (big ET-1) in human plasma and supernatant fluids from human cell cultures. Big ET-1 is the precursor of endothelin 1 (ET-1), the most potent vasoconstrictor known. A rabbit antibody raised against the big ET-1 COOH-terminus fragment was used as an immobilized antibody (anti-P16). The Fab' fragment of a monoclonal antibody (1B3) raised against the ET-1 loop fragment was used as the enzyme-labeled antibody, after being coupled to acetylcholinesterase. The lowest detectable value in the assay was 1.2 pg/mL (0.12 pg/well). The assay was highly specific for big ET-1, demonstrating no cross-reactivity with ET-1, <0.4% cross-reactivity with big endothelin-2 (big ET-2), and <0.1% with big endothelin-3 (big ET-3). We used this assay to evaluate the effect of two different postural positions (supine and standing) on plasma big ET-1 concentrations in 11 male and 11 female healthy subjects. Data analysis revealed that neither sex nor body position influenced plasma big ET-1 concentrations. This assay should thus permit the detection of possible variations in plasma concentrations of big ET-1 in certain pathologies and, in association with ET-1 assay, make possible in vitro study of endothelin-converting enzyme activity in cell models. Such studies could clarify the physiological and clinical roles of this family of peptides.

  8. I'll take that to go: Big data bags and minimal identifiers for exchange of large, complex datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chard, Kyle; D'Arcy, Mike; Heavner, Benjamin D.

    Big data workflows often require the assembly and exchange of complex, multi-element datasets. For example, in biomedical applications, the input to an analytic pipeline can be a dataset consisting of thousands of images and genome sequences assembled from diverse repositories, requiring a description of the contents of the dataset in a concise and unambiguous form. Typical approaches to creating datasets for big data workflows assume that all data reside in a single location, requiring costly data marshaling and permitting errors of omission and commission because dataset members are not explicitly specified. We address these issues by proposing simple methods and tools for assembling, sharing, and analyzing large and complex datasets that scientists can easily integrate into their daily workflows. These tools combine a simple and robust method for describing data collections (BDBags), data descriptions (Research Objects), and simple persistent identifiers (Minids) to create a powerful ecosystem of tools and services for big data analysis and sharing. We present these tools and use biomedical case studies to illustrate their use for the rapid assembly, sharing, and analysis of large datasets.
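    BDBags build on the BagIt packaging convention: payload files live under data/ and a manifest lists a checksum for every member, which is what makes errors of omission and corruption detectable. A minimal BagIt-style sketch; real BDBags add Research Object metadata and a Minid, and `make_minimal_bag` is an illustrative helper, not the BDBag tooling:

```python
import hashlib
import os
import tempfile

def make_minimal_bag(payload, bag_dir):
    """Create a minimal BagIt-style bag: payload files under data/
    plus a manifest-sha256.txt listing a checksum per member."""
    data_dir = os.path.join(bag_dir, "data")
    os.makedirs(data_dir, exist_ok=True)
    manifest_lines = []
    for name, content in payload.items():
        with open(os.path.join(data_dir, name), "wb") as f:
            f.write(content)
        digest = hashlib.sha256(content).hexdigest()
        manifest_lines.append(f"{digest}  data/{name}")
    # bagit.txt declares the bag; manifest makes members explicit
    with open(os.path.join(bag_dir, "bagit.txt"), "w") as f:
        f.write("BagIt-Version: 0.97\nTag-File-Character-Encoding: UTF-8\n")
    with open(os.path.join(bag_dir, "manifest-sha256.txt"), "w") as f:
        f.write("\n".join(manifest_lines) + "\n")
    return manifest_lines

bag = tempfile.mkdtemp()
manifest = make_minimal_bag({"genome.txt": b"ACGT"}, bag)
```

    Because every member is named and checksummed, a receiver can verify the whole collection without the data having to reside in one place.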

  9. Upgrade of the Cherenkov Detector of the JLab Hall A BigBite Spectrometer

    NASA Astrophysics Data System (ADS)

    Nycz, Michael

    2015-04-01

The BigBite Spectrometer of the Hall A Facility of Jefferson Lab will be used in the upcoming MARATHON experiment at Jefferson Lab to measure the ratio of neutron to proton F2 inelastic structure functions and the ratio of up to down, d/u, quark nucleon distributions at medium and large values of Bjorken x. In preparation for this experiment, the BigBite Cherenkov detector is being modified to increase its overall efficiency for detecting electrons. This large-volume counter is based on a dual system of segmented mirrors reflecting Cherenkov radiation to twenty photomultipliers. In this talk, a description of the detector and its past performance will be presented, along with the motivations for improvements and their implementation. An update on the status of the rest of the BigBite detector package will also be presented. Additionally, current issues related to obtaining C4F8O, the commonly used radiator gas, which has been phased out of production by U.S. gas producers, will be discussed. This work is supported by Kent State University, NSF Grant PHY-1405814, and DOE Contract DE-AC05-06OR23177.

  10. Application and Prospect of Big Data in Water Resources

    NASA Astrophysics Data System (ADS)

    Xi, Danchi; Xu, Xinyi

    2017-04-01

Because of developed information technology and affordable data storage, we have entered the era of data explosion. The term "Big Data" and the technology related to it have been coined and commonly applied in many fields. However, academic studies have only recently turned to Big Data applications in water resources, so water resource Big Data technology has not been fully developed. This paper introduces the concept of Big Data and its key technologies, including the Hadoop system and MapReduce. In addition, this paper focuses on the significance of applying big data in water resources and summarizes prior research by others. Most studies in this field only set up a theoretical framework; we instead define "Water Big Data" and explain its tridimensional properties: the time dimension, spatial dimension and intelligent dimension. Based on HBase, the classification system of Water Big Data is introduced: hydrology data, ecology data and socio-economic data. Then, after analyzing the challenges in water resources management, a series of solutions using Big Data technologies, such as data mining and web crawling, is proposed. Finally, the prospect of applying big data in water resources is discussed; it can be predicted that as Big Data technology keeps developing, "3D" (Data Driven Decision) will be used more in water resources management in the future.
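The MapReduce model named among the key technologies here splits a computation into a map phase, a shuffle that groups intermediate results by key, and a reduce phase. A toy plain-Python sketch of the pattern (not Hadoop's API; the hydrology-style station records are invented):

```python
from collections import defaultdict

def map_phase(records):
    """Emit (key, value) pairs: here, (station, level) from raw lines."""
    for line in records:
        station, level = line.split(",")
        yield station, float(level)

def shuffle(pairs):
    """Group values by key, as the framework does between phases."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Aggregate each key's values: mean water level per station."""
    return {k: sum(v) / len(v) for k, v in grouped.items()}

# Toy hydrology records: "station,water_level"
records = ["S1,2.0", "S2,3.5", "S1,4.0"]
means = reduce_phase(shuffle(map_phase(records)))
assert means == {"S1": 3.0, "S2": 3.5}
```

In Hadoop the same three roles are distributed across many nodes, which is what makes the pattern attractive for large hydrology, ecology and socio-economic datasets.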

  11. Comparative Validity of Brief to Medium-Length Big Five and Big Six Personality Questionnaires

    ERIC Educational Resources Information Center

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are…

  12. 76 FR 1629 - Public Land Order No. 7757; Withdrawal of National Forest System Land for the Big Ice Cave; Montana

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-11

    ... United States Forest Service to protect the Big Ice Cave, its subterranean water supply, and Federal... to protect the Big Ice Cave, its subterranean water supply, and Federal improvements. The Big Ice... protect the Big Ice Cave, its subterranean water supply, and Federal improvements: Custer National Forest...

  13. Variable Generation Power Forecasting as a Big Data Problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haupt, Sue Ellen; Kosovic, Branko

To blend growing amounts of power from renewable resources into utility operations requires accurate forecasts. For both day-ahead planning and real-time operations, the power from wind and solar resources must be predicted based on real-time observations and a series of models that span the temporal and spatial scales of the problem, using physical and dynamical knowledge as well as computational intelligence. Accurate prediction is a Big Data problem that requires disparate data, multiple models that are each applicable for a specific time frame, and application of computational intelligence techniques to successfully blend all of the model and observational information in real time and deliver it to the decision makers at utilities and grid operators. This paper describes an example system that has been used for utility applications and how it has been configured to meet utility needs while addressing the Big Data issues.

  14. Big Data in Drug Discovery.

    PubMed

    Brown, Nathan; Cambruzzi, Jean; Cox, Peter J; Davies, Mark; Dunbar, James; Plumbley, Dean; Sellwood, Matthew A; Sim, Aaron; Williams-Jones, Bryn I; Zwierzyna, Magdalena; Sheppard, David W

    2018-01-01

Interpretation of Big Data in the drug discovery community should enhance project timelines and reduce clinical attrition through improved early decision making. The issues we encounter start with the sheer volume of data: how we first ingest it, and how we build an infrastructure to house it so that the data can be used efficiently and productively. There are many problems associated with the data itself, including general reproducibility, but often it is the context surrounding an experiment that is critical to success. Help, in the form of artificial intelligence (AI), is required to understand and translate that context. On the back of natural language processing pipelines, AI is also used to prospectively generate new hypotheses by linking data together. We explain Big Data from the context of biology, chemistry and clinical trials, showcasing some of the impressive public domain sources and initiatives now available for interrogation. © 2018 Elsevier B.V. All rights reserved.

  15. Research on Optimization of Pooling System and Its Application in Drug Supply Chain Based on Big Data Analysis

    PubMed Central

    2017-01-01

Reform of drug procurement is being extensively implemented and expanded in China, especially in today's big data environment. However, innovation in supply modes lags behind procurement improvement. Problems such as financial strain and supply disruption occur frequently, affecting the stability of drug supply. The Drug Pooling System has been proposed and applied in a few pilot cities to resolve these problems. From the perspective of the supply chain, this study analyzes the process of setting important parameters and sets out the tasks of the parties involved in a pooling system, according to the issues identified in the pilot run. The approach is based on big data analysis and simulation using system dynamics theory and modeling with Vensim software to optimize system performance. This study proposes a theoretical framework to resolve these problems and attempts to provide a valuable reference for future applications of pooling systems. PMID:28293258
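System dynamics models of the kind described (here built in Vensim) ultimately reduce to stocks updated by inflows and outflows over discrete time steps. As a hedged illustration only, the toy stock-flow model below simulates a pooled drug inventory with an order-up-to replenishment rule; all parameters and names are invented and bear no relation to the study's actual model:

```python
def simulate_pool(steps, demand, order_up_to, lead_time=2, start=50.0):
    """Toy stock-flow model of a pooled drug inventory:
    the stock is drawn down by demand each step and replenished by
    orders that arrive after a fixed lead time."""
    stock, pipeline, history = start, [0.0] * lead_time, []
    for t in range(steps):
        stock += pipeline.pop(0)            # inflow: order placed lead_time ago
        shipped = min(stock, demand[t])     # outflow: cannot ship below zero
        stock -= shipped
        # order up to the target, counting stock already on order
        pipeline.append(max(0.0, order_up_to - stock - sum(pipeline)))
        history.append(stock)
    return history

levels = simulate_pool(steps=10, demand=[20.0] * 10, order_up_to=60.0)
assert all(level >= 0 for level in levels)  # no negative inventory
```

Running such a model over many simulated periods is how parameters like the order-up-to level can be tuned before committing real procurement budgets.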

  16. Variable Generation Power Forecasting as a Big Data Problem

    DOE PAGES

    Haupt, Sue Ellen; Kosovic, Branko

    2016-10-10

To blend growing amounts of power from renewable resources into utility operations requires accurate forecasts. For both day-ahead planning and real-time operations, the power from wind and solar resources must be predicted based on real-time observations and a series of models that span the temporal and spatial scales of the problem, using physical and dynamical knowledge as well as computational intelligence. Accurate prediction is a Big Data problem that requires disparate data, multiple models that are each applicable for a specific time frame, and application of computational intelligence techniques to successfully blend all of the model and observational information in real time and deliver it to the decision makers at utilities and grid operators. This paper describes an example system that has been used for utility applications and how it has been configured to meet utility needs while addressing the Big Data issues.

  17. An Exercise in Exploring Big Data for Producing Reliable Statistical Information.

    PubMed

    Rey-Del-Castillo, Pilar; Cardeñosa, Jesús

    2016-06-01

    The availability of copious data about many human, social, and economic phenomena is considered an opportunity for the production of official statistics. National statistical organizations and other institutions are more and more involved in new projects for developing what is sometimes seen as a possible change of paradigm in the way statistical figures are produced. Nevertheless, there are hardly any systems in production using Big Data sources. Arguments of confidentiality, data ownership, representativeness, and others make it a difficult task to get results in the short term. Using Call Detail Records from Ivory Coast as an illustration, this article shows some of the issues that must be dealt with when producing statistical indicators from Big Data sources. A proposal of a graphical method to evaluate one specific aspect of the quality of the computed figures is also presented, demonstrating that the visual insight provided improves the results obtained using other traditional procedures.
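As a hedged illustration of how a statistical indicator can be derived from Call Detail Records, the toy sketch below assigns each subscriber a "home" cell from night-time call counts and aggregates to a per-cell population proxy. All field names and records are invented; a production pipeline would add anonymization, confidentiality safeguards, and the representativeness corrections the article discusses:

```python
from collections import Counter

def home_cell(cdrs):
    """Assign each subscriber the cell tower where most of their
    night-time calls originate -- a common 'home location' proxy."""
    per_user = {}
    for user, cell, hour in cdrs:
        if hour >= 20 or hour < 7:          # night-time heuristic
            per_user.setdefault(user, Counter())[cell] += 1
    return {u: counts.most_common(1)[0][0] for u, counts in per_user.items()}

def population_by_cell(homes):
    """Aggregate to a per-cell count, the statistic actually published."""
    return Counter(homes.values())

# Toy records: (subscriber, cell tower, hour of call)
cdrs = [("u1", "A", 22), ("u1", "A", 23), ("u1", "B", 12),
        ("u2", "B", 2), ("u2", "B", 21), ("u3", "A", 6)]
homes = home_cell(cdrs)
assert homes == {"u1": "A", "u2": "B", "u3": "A"}
assert population_by_cell(homes) == Counter({"A": 2, "B": 1})
```

The representativeness issue raised in the article shows up immediately in such a sketch: subscribers with no night-time calls simply vanish from the indicator.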

  18. Big bounce with finite-time singularity: The F(R) gravity description

    NASA Astrophysics Data System (ADS)

    Odintsov, S. D.; Oikonomou, V. K.

Big Bounce cosmologies provide an alternative to Big Bang cosmologies. In this paper, we study a bounce cosmology with a Type IV singularity occurring at the bouncing point in the context of F(R) modified gravity. We investigate the evolution of the Hubble radius and examine the issue of primordial cosmological perturbations in detail. As we demonstrate, for the singular bounce, the primordial perturbations originating from the cosmological era near the bounce do not produce a scale-invariant spectrum, and the short-wavelength modes, after they exit the horizon, do not freeze but grow linearly with time. After presenting the cosmological perturbations study, we discuss the viability of the singular bounce model; our results indicate that the singular bounce must be combined with another cosmological scenario, or modified appropriately, in order to lead to a viable cosmology. The study of the slow-roll parameters leads to the same result, indicating that the singular bounce theory is unstable at the singularity point for certain values of the parameters. We also conformally transform the Jordan frame singular bounce and, as we demonstrate, the Einstein frame metric leads to a Big Rip singularity. Therefore, the Type IV singularity in the Jordan frame becomes a Big Rip singularity in the Einstein frame. Finally, we briefly study a generalized singular cosmological model, which contains two Type IV singularities, with quite appealing features.

  19. Performance Analysis Tool for HPC and Big Data Applications on Scientific Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Wucherl; Koo, Michelle; Cao, Yu

Big data is prevalent in HPC computing. Many HPC projects rely on complex workflows to analyze terabytes or petabytes of data. These workflows often require running over thousands of CPU cores and performing simultaneous data accesses, data movements, and computation. It is challenging to analyze the performance involving terabytes or petabytes of workflow data or measurement data of the executions, from complex workflows over a large number of nodes and multiple parallel task executions. To help identify performance bottlenecks or debug performance issues in large-scale scientific applications and scientific clusters, we have developed a performance analysis framework using state-of-the-art open-source big data processing tools. Our tool can ingest system logs and application performance measurements to extract key performance features, and apply the most sophisticated statistical tools and data mining methods on the performance data. It utilizes an efficient data processing engine to allow users to interactively analyze a large amount of different types of logs and measurements. To illustrate the functionality of the big data analysis framework, we conduct case studies on the workflows from an astronomy project known as the Palomar Transient Factory (PTF) and the job logs from the genome analysis scientific cluster. Our study processed many terabytes of system logs and application performance measurements collected on the HPC systems at NERSC. The implementation of our tool is generic enough to be used for analyzing the performance of other HPC systems and Big Data workflows.
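The ingest-then-extract-features pipeline described can be caricatured in a few lines of plain Python. The log format, field names, and the mean-runtime-per-node feature below are invented for illustration; the actual framework applies far richer statistics over terabytes of logs:

```python
import re
from statistics import mean

# Hypothetical log format: "task=<id> node=<name> runtime=<seconds>s"
LOG_LINE = re.compile(r"task=(\S+) node=(\S+) runtime=([\d.]+)s")

def ingest(lines):
    """Parse raw log lines into (task, node, runtime) records,
    skipping anything that does not match the expected format."""
    for line in lines:
        m = LOG_LINE.search(line)
        if m:
            yield m.group(1), m.group(2), float(m.group(3))

def per_node_runtime(records):
    """One key performance feature: mean task runtime per node,
    so that straggler nodes stand out."""
    by_node = {}
    for _, node, runtime in records:
        by_node.setdefault(node, []).append(runtime)
    return {node: mean(times) for node, times in by_node.items()}

logs = ["task=t1 node=n01 runtime=4.0s",
        "task=t2 node=n01 runtime=6.0s",
        "task=t3 node=n02 runtime=30.0s",   # likely bottleneck
        "corrupted line"]
stats = per_node_runtime(ingest(logs))
assert stats == {"n01": 5.0, "n02": 30.0}
```

At cluster scale, the same parse/group/aggregate steps are what the open-source big data engines mentioned above distribute across many nodes.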

  20. Functional connectomics from a "big data" perspective.

    PubMed

    Xia, Mingrui; He, Yong

    2017-10-15

    In the last decade, explosive growth regarding functional connectome studies has been observed. Accumulating knowledge has significantly contributed to our understanding of the brain's functional network architectures in health and disease. With the development of innovative neuroimaging techniques, the establishment of large brain datasets and the increasing accumulation of published findings, functional connectomic research has begun to move into the era of "big data", which generates unprecedented opportunities for discovery in brain science and simultaneously encounters various challenging issues, such as data acquisition, management and analyses. Big data on the functional connectome exhibits several critical features: high spatial and/or temporal precision, large sample sizes, long-term recording of brain activity, multidimensional biological variables (e.g., imaging, genetic, demographic, cognitive and clinic) and/or vast quantities of existing findings. We review studies regarding functional connectomics from a big data perspective, with a focus on recent methodological advances in state-of-the-art image acquisition (e.g., multiband imaging), analysis approaches and statistical strategies (e.g., graph theoretical analysis, dynamic network analysis, independent component analysis, multivariate pattern analysis and machine learning), as well as reliability and reproducibility validations. We highlight the novel findings in the application of functional connectomic big data to the exploration of the biological mechanisms of cognitive functions, normal development and aging and of neurological and psychiatric disorders. We advocate the urgent need to expand efforts directed at the methodological challenges and discuss the direction of applications in this field. Copyright © 2017 Elsevier Inc. All rights reserved.
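Graph theoretical analysis, one of the approaches highlighted above, treats brain regions as nodes and functional connections as edges. A minimal sketch computing two of the most basic graph metrics (node degree and network density) from a toy binary connectivity matrix; there is no neuroimaging I/O here, and the matrix is invented:

```python
def degrees(adj):
    """Node degree: number of connections per region in a binary,
    undirected connectivity matrix."""
    return [sum(row) for row in adj]

def density(adj):
    """Fraction of possible edges that are actually present."""
    n = len(adj)
    edges = sum(adj[i][j] for i in range(n) for j in range(i + 1, n))
    return edges / (n * (n - 1) / 2)

# Toy 4-region binary functional connectome (symmetric, zero diagonal)
adj = [[0, 1, 1, 0],
       [1, 0, 1, 0],
       [1, 1, 0, 1],
       [0, 0, 1, 0]]
assert degrees(adj) == [2, 2, 3, 1]
assert density(adj) == 4 / 6
```

The "big data" challenge the review describes arises when such matrices have tens of thousands of nodes, thousands of subjects, and time-varying edges, which is where distributed computation and the statistical strategies surveyed become necessary.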

  1. A Big Data Platform for Storing, Accessing, Mining and Learning Geospatial Data

    NASA Astrophysics Data System (ADS)

    Yang, C. P.; Bambacus, M.; Duffy, D.; Little, M. M.

    2017-12-01

Big Data is becoming a norm in geoscience domains. A platform that can efficiently manage, access, analyze, mine, and learn from big data to extract new information and knowledge is desired. This paper introduces our latest effort to develop such a platform based on our past years' experience with cloud and high performance computing, analyzing big data, comparing big data containers, and mining big geospatial data for new information. The platform includes four layers: a) the bottom layer is a computing infrastructure with proper network, computer, and storage systems; b) the 2nd layer is a cloud computing layer based on virtualization that provides on-demand computing services to the upper layers; c) the 3rd layer consists of big data containers customized for dealing with different types of data and functionalities; d) the 4th layer is a big data presentation layer that supports the efficient management, access, analysis, mining and learning of big geospatial data.

  2. Telecom Big Data for Urban Transport Analysis - a Case Study of Split-Dalmatia County in Croatia

    NASA Astrophysics Data System (ADS)

    Baučić, M.; Jajac, N.; Bućan, M.

    2017-09-01

Today, big data has become widely available, and new technologies are being developed for big data storage architectures and big data analytics. An ongoing challenge is how to incorporate big data into GIS applications supporting various domains. The International Transport Forum explains how the arrival of big data and real-time data, together with new data processing algorithms, leads to new insights and operational improvements in transport. Based on telecom customer data, the Study of Tourist Movement and Traffic in Split-Dalmatia County in Croatia was carried out as a part of the "IPA Adriatic CBC//N.0086/INTERMODAL" project. This paper briefly explains the big data used in the study and the results of the study. Furthermore, this paper investigates the main considerations when using telecom customer big data: data privacy and data quality. The paper concludes with GIS visualisation and proposes further uses of the big data used in the study.

  3. Elevated levels of plasma Big endothelin-1 and its relation to hypertension and skin lesions in individuals exposed to arsenic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hossain, Ekhtear; Islam, Khairul; Yeasmin, Fouzia

Chronic arsenic (As) exposure affects the endothelial system, causing several diseases. Big endothelin-1 (Big ET-1), the biological precursor of endothelin-1 (ET-1), is a more accurate indicator of the degree of activation of the endothelial system. The effect of As exposure on plasma Big ET-1 levels and its physiological implications have not yet been documented. We evaluated plasma Big ET-1 levels and their relation to hypertension and skin lesions in As-exposed individuals in Bangladesh. A total of 304 study subjects from As-endemic and non-endemic areas in Bangladesh were recruited for this study. As concentrations in water, hair and nails were measured by Inductively Coupled Plasma Mass Spectroscopy (ICP-MS). The plasma Big ET-1 levels were measured using a one-step sandwich enzyme immunoassay kit. Significant increases in Big ET-1 levels were observed with increasing concentrations of As in drinking water, hair and nails. Further, before and after adjusting for different covariates, plasma Big ET-1 levels were found to be significantly associated with the water, hair and nail As concentrations of the study subjects. Big ET-1 levels were also higher in the higher exposure groups compared to the lowest (reference) group. Interestingly, we observed that Big ET-1 levels were significantly higher in the hypertensive and skin lesion groups compared to their normotensive and without-skin-lesion counterparts, respectively, among the study subjects in As-endemic areas. Thus, this study demonstrated a novel dose-response relationship between As exposure and plasma Big ET-1 levels, indicating the possible involvement of plasma Big ET-1 levels in As-induced hypertension and skin lesions. -- Highlights: ► Plasma Big ET-1 is an indicator of endothelial damage. ► Plasma Big ET-1 level increases dose-dependently in arsenic exposed individuals. ► Study subjects in arsenic-endemic areas with hypertension have elevated Big ET-1 levels. ► Study subjects with arsenic-induced skin lesions show elevated plasma Big ET-1 levels. ► Arsenic-induced hypertension and skin lesions may be linked to plasma Big ET-1 levels.

  4. Computer Technology and Social Issues.

    ERIC Educational Resources Information Center

    Garson, G. David

    Computing involves social issues and political choices. Issues such as privacy, computer crime, gender inequity, disemployment, and electronic democracy versus "Big Brother" are addressed in the context of efforts to develop a national public policy for information technology. A broad range of research and case studies are examined in an…

  5. Classical and quantum Big Brake cosmology for scalar field and tachyonic models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamenshchik, A. Yu.; Manti, S.

We study the relation between cosmological singularities in classical and quantum theory, comparing the classical and quantum dynamics in some models possessing the Big Brake singularity: a model based on a scalar field and two models based on a tachyon-pseudo-tachyon field. It is shown that the effect of quantum avoidance is absent for soft singularities of the Big Brake type, while it is present for the Big Bang and Big Crunch singularities. Thus, there is some kind of classical-quantum correspondence, because soft singularities are traversable in classical cosmology, while the strong Big Bang and Big Crunch singularities are not.

  6. Keeping up with Big Data--Designing an Introductory Data Analytics Class

    ERIC Educational Resources Information Center

    Hijazi, Sam

    2016-01-01

Universities need to keep up with the demand of the business world when it comes to Big Data. The exponential increase in data has put additional demands on academia to close the gap in education. Business demand for Big Data surpassed 1.9 million positions in 2015. Big Data, Business Intelligence, Data Analytics, and Data Mining are the…

  7. 78 FR 3911 - Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN; Final Comprehensive...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-17

    ... DEPARTMENT OF THE INTERIOR Fish and Wildlife Service [FWS-R3-R-2012-N259; FXRS1265030000-134-FF03R06000] Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN; Final Comprehensive... significant impact (FONSI) for the environmental assessment (EA) for Big Stone National Wildlife Refuge...

  8. Teaching Information & Technology Skills: The Big6[TM] in Elementary Schools. Professional Growth Series.

    ERIC Educational Resources Information Center

    Eisenberg, Michael B.; Berkowitz, Robert E.

    This book about using the Big6 information problem solving process model in elementary schools is organized into two parts. Providing an overview of the Big6 approach, Part 1 includes the following chapters: "Introduction: The Need," including the information problem, the Big6 and other process models, and teaching/learning the Big6;…

  9. Big Challenges and Big Opportunities: The Power of "Big Ideas" to Change Curriculum and the Culture of Teacher Planning

    ERIC Educational Resources Information Center

    Hurst, Chris

    2014-01-01

    Mathematical knowledge of pre-service teachers is currently "under the microscope" and the subject of research. This paper proposes a different approach to teacher content knowledge based on the "big ideas" of mathematics and the connections that exist within and between them. It is suggested that these "big ideas"…

  10. Transforming Healthcare Delivery: Integrating Dynamic Simulation Modelling and Big Data in Health Economics and Outcomes Research.

    PubMed

    Marshall, Deborah A; Burgos-Liz, Lina; Pasupathy, Kalyan S; Padula, William V; IJzerman, Maarten J; Wong, Peter K; Higashi, Mitchell K; Engbers, Jordan; Wiebe, Samuel; Crown, William; Osgood, Nathaniel D

    2016-02-01

In the era of the Information Age and personalized medicine, healthcare delivery systems need to be efficient and patient-centred. The health system must be responsive to individual patient choices and preferences about their care, while considering the system consequences. While dynamic simulation modelling (DSM) and big data share characteristics, they present distinct and complementary value in healthcare. Big data and DSM are synergistic: big data offer support to enhance the application of dynamic models, but DSM also can greatly enhance the value conferred by big data. Big data can inform patient-centred care with its high velocity, volume, and variety (the three Vs) over traditional data analytics; however, big data are not sufficient to extract meaningful insights to inform approaches to improve healthcare delivery. DSM can serve as a natural bridge between the wealth of evidence offered by big data and informed decision making, as a means of faster, deeper, more consistent learning from that evidence. We discuss the synergies between big data and DSM, practical considerations and challenges, and how integrating big data and DSM can be useful to decision makers to address complex, systemic health economics and outcomes questions and to transform healthcare delivery.

  11. Big Data’s Role in Precision Public Health

    PubMed Central

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts. PMID:29594091

  12. Big Data's Role in Precision Public Health.

    PubMed

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts.

  13. An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.

    PubMed

    Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei

    2017-12-01

Big data, cloud computing, and high-performance computing (HPC) are on the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface on the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.

  14. Native Perennial Forb Variation Between Mountain Big Sagebrush and Wyoming Big Sagebrush Plant Communities

    NASA Astrophysics Data System (ADS)

    Davies, Kirk W.; Bates, Jon D.

    2010-09-01

    Big sagebrush ( Artemisia tridentata Nutt.) occupies large portions of the western United States and provides valuable wildlife habitat. However, information is lacking quantifying differences in native perennial forb characteristics between mountain big sagebrush [ A. tridentata spp. vaseyana (Rydb.) Beetle] and Wyoming big sagebrush [ A. tridentata spp. wyomingensis (Beetle & A. Young) S.L. Welsh] plant communities. This information is critical to accurately evaluate the quality of habitat and forage that these communities can produce because many wildlife species consume large quantities of native perennial forbs and depend on them for hiding cover. To compare native perennial forb characteristics on sites dominated by these two subspecies of big sagebrush, we sampled 106 intact big sagebrush plant communities. Mountain big sagebrush plant communities produced almost 4.5-fold more native perennial forb biomass and had greater native perennial forb species richness and diversity compared to Wyoming big sagebrush plant communities ( P < 0.001). Nonmetric multidimensional scaling (NMS) and the multiple-response permutation procedure (MRPP) demonstrated that native perennial forb composition varied between these plant communities ( P < 0.001). Native perennial forb composition was more similar within plant communities grouped by big sagebrush subspecies than expected by chance ( A = 0.112) and composition varied between community groups ( P < 0.001). Indicator analysis did not identify any perennial forbs that were completely exclusive and faithful, but did identify several perennial forbs that were relatively good indicators of either mountain big sagebrush or Wyoming big sagebrush plant communities. Our results suggest that management plans and habitat guidelines should recognize differences in native perennial forb characteristics between mountain and Wyoming big sagebrush plant communities.

  15. Big endothelin changes the cellular miRNA environment in TMOb osteoblasts and increases mineralization.

    PubMed

    Johnson, Michael G; Kristianto, Jasmin; Yuan, Baozhi; Konicke, Kathryn; Blank, Robert

    2014-08-01

Endothelin (ET1) promotes the growth of osteoblastic breast and prostate cancer metastases. Conversion of big ET1 to mature ET1, catalyzed primarily by endothelin converting enzyme 1 (ECE1), is necessary for ET1's biological activity. We previously identified the Ece1 locus as a positional candidate gene for a pleiotropic quantitative trait locus affecting femoral size, shape, mineralization, and biomechanical performance. We exposed TMOb osteoblasts continuously to 25 ng/ml big ET1. Cells were grown for 6 days in growth medium and then switched to mineralization medium for an additional 15 days with or without big ET1, by which time the TMOb cells form mineralized nodules. We quantified mineralization by alizarin red staining and analyzed levels of miRNAs known to affect osteogenesis. MicroRNA 126-3p was identified by search as a potential regulator of sclerostin (SOST) translation. TMOb cells exposed to big ET1 showed greater mineralization than control cells. Big ET1 repressed miRNAs targeting transcripts of osteogenic proteins and increased expression of miRNAs that target transcripts of proteins that inhibit osteogenesis; expression of miR-126-3p increased 121-fold versus control. To begin to assess the effect of big ET1 on SOST production, we analyzed both SOST transcription and protein production with and without big ET1, demonstrating that transcription and translation were uncoupled. Our data show that big ET1 signaling promotes mineralization. Moreover, the results suggest that big ET1's osteogenic effects are potentially mediated through changes in miRNA expression, a previously unrecognized big ET1 osteogenic mechanism.

  16. BigNeuron dataset V.0.0

    DOE Data Explorer

    Ramanathan, Arvind

    2016-01-01

    The cleaned bench-testing reconstructions for the gold166 datasets have been put online at GitHub: https://github.com/BigNeuron/Events-and-News/wiki/BigNeuron-Events-and-News and https://github.com/BigNeuron/Data/releases/tag/gold166_bt_v1.0. The respective image datasets were released earlier from other sites (the main pointer is available at GitHub as well: https://github.com/BigNeuron/Data/releases/tag/Gold166_v1), but since the files were big, the actual downloading was distributed across three continents.

  17. TopoLens: Building a cyberGIS community data service for enhancing the usability of high-resolution National Topographic datasets

    USGS Publications Warehouse

    Hu, Hao; Hong, Xingchen; Terstriep, Jeff; Liu, Yan; Finn, Michael P.; Rush, Johnathan; Wendel, Jeffrey; Wang, Shaowen

    2016-01-01

    Geospatial data, often embedded with geographic references, are important to many application and science domains, and represent a major type of big data. The increased volume and diversity of geospatial data have caused serious usability issues for researchers in various scientific domains, which call for innovative cyberGIS solutions. To address these issues, this paper describes a cyberGIS community data service framework to facilitate geospatial big data access, processing, and sharing based on a hybrid supercomputer architecture. Through the collaboration between the CyberGIS Center at the University of Illinois at Urbana-Champaign (UIUC) and the U.S. Geological Survey (USGS), a community data service for accessing, customizing, and sharing digital elevation model (DEM) and its derived datasets from the 10-meter national elevation dataset, namely TopoLens, is created to demonstrate the workflow integration of geospatial big data sources, computation, analysis needed for customizing the original dataset for end user needs, and a friendly online user environment. TopoLens provides online access to precomputed and on-demand computed high-resolution elevation data by exploiting the ROGER supercomputer. The usability of this prototype service has been acknowledged in community evaluation.

  18. BIG1, a brefeldin A-inhibited guanine nucleotide-exchange protein modulates ABCA1 trafficking and function

    PubMed Central

    Lin, Sisi; Zhou, Chun; Neufeld, Edward; Wang, Yu-Hua; Xu, Suo-Wen; Lu, Liang; Wang, Ying; Liu, Zhi-Ping; Li, Dong; Li, Cuixian; Chen, Shaorui; Le, Kang; Huang, Heqing; Liu, Peiqing; Moss, Joel; Vaughan, Martha; Shen, Xiaoyan

    2013-01-01

    Objective Cell surface localization and intracellular trafficking of ATP-binding cassette transporter A-1 (ABCA1) are essential for its function. However, regulation of these activities is still largely unknown. Brefeldin A (BFA), an uncompetitive inhibitor of brefeldin A-inhibited guanine nucleotide-exchange proteins (BIGs), disturbs the intracellular distribution of ABCA1 and thus inhibits cholesterol efflux. This study aimed to define the possible roles of BIGs in regulating ABCA1 trafficking and cholesterol efflux, and further to explore the potential mechanism. Methods and Results By vesicle immunoprecipitation, we found that BIG1 was associated with ABCA1 in vesicle preparations from rat liver. BIG1 depletion reduced surface ABCA1 on HepG2 cells and inhibited cholesterol release by 60%. In contrast, BIG1 over-expression increased surface ABCA1 and cholesterol secretion. With partial restoration of BIG1 through over-expression in BIG1-depleted cells, surface ABCA1 was also restored. Biotinylation and glutathione cleavage revealed that BIG1 siRNA dramatically decreased the internalization and recycling of ABCA1. This novel function of BIG1 was dependent on its guanine nucleotide-exchange activity and was achieved through activation of ADP-ribosylation factor 1 (ARF1). Conclusions BIG1, through its ability to activate ARF1, regulates cell surface levels and function of ABCA1, indicating a transcription-independent mechanism for controlling ABCA1 action. PMID:23220274

  19. Untapped Potential: Fulfilling the Promise of Big Brothers Big Sisters and the Bigs and Littles They Represent

    ERIC Educational Resources Information Center

    Bridgeland, John M.; Moore, Laura A.

    2010-01-01

    American children represent a great untapped potential in our country. For many young people, choices are limited and the goal of a productive adulthood is a remote one. This report paints a picture of who these children are, shares their insights and reflections about the barriers they face, and offers ways forward for Big Brothers Big Sisters as…

  20. Big game hunting practices, meanings, motivations and constraints: a survey of Oregon big game hunters

    Treesearch

    Suresh K. Shrestha; Robert C. Burns

    2012-01-01

    We conducted a self-administered mail survey in September 2009 with randomly selected Oregon hunters who had purchased big game hunting licenses/tags for the 2008 hunting season. Survey questions explored hunting practices, the meanings of and motivations for big game hunting, the constraints to big game hunting participation, and the effects of age, years of hunting...

  1. High School Students as Mentors: Findings from the Big Brothers Big Sisters School-Based Mentoring Impact Study

    ERIC Educational Resources Information Center

    Herrera, Carla; Kauh, Tina J.; Cooney, Siobhan M.; Grossman, Jean Baldwin; McMaken, Jennifer

    2008-01-01

    High schools have recently become a popular source of mentors for school-based mentoring (SBM) programs. The high school Bigs program of Big Brothers Big Sisters of America, for example, currently involves close to 50,000 high-school-aged mentors across the country. While the use of these young mentors has several potential advantages, their age…

  2. Nursing Needs Big Data and Big Data Needs Nursing.

    PubMed

    Brennan, Patricia Flatley; Bakken, Suzanne

    2015-09-01

    Contemporary big data initiatives in health care will benefit from greater integration with nursing science and nursing practice; in turn, nursing science and nursing practice have much to gain from data science initiatives. Big data arises secondary to scholarly inquiry (e.g., -omics) and to everyday observations such as cardiac flow sensors or Twitter feeds. Big data encompasses data that exceed human comprehension, that exist at a volume unmanageable by standard computer systems, that arrive at a velocity not under the control of the investigator, and that possess a level of imprecision not found in traditional inquiry. Data science methods are emerging to manage and gain insights from big data, ensuring that these data can be leveraged to improve patient care. Our primary methods included investigation of emerging federal big data initiatives and exploration of exemplars from nursing informatics research to benchmark where nursing is already poised to participate in the big data revolution. We provide observations and reflections on experiences in the emerging big data initiatives. Existing approaches to large data set analysis provide a necessary but not sufficient foundation for nursing to participate in the big data revolution. Nursing's Social Policy Statement guides a principled, ethical perspective on big data and data science. There are implications for basic and advanced practice clinical nurses in practice, for the nurse scientist who collaborates with data scientists, and for the nurse data scientist. Big data and data science have the potential to provide greater richness in understanding patient phenomena and in tailoring interventional strategies that are personalized to the patient. © 2015 Sigma Theta Tau International.

  3. Big Data Provenance: Challenges, State of the Art and Opportunities.

    PubMed

    Wang, Jianwu; Crawl, Daniel; Purawat, Shweta; Nguyen, Mai; Altintas, Ilkay

    2015-01-01

    Ability to track provenance is a key feature of scientific workflows to support data lineage and reproducibility. The challenges introduced by the volume, variety, and velocity of Big Data also extend to the provenance and quality of Big Data, defined as veracity. The increasing size and variety of distributed Big Data provenance information bring new technical challenges and opportunities throughout the provenance lifecycle, including recording, querying, sharing, and utilization. This paper discusses the challenges and opportunities of Big Data provenance related to the veracity of the datasets themselves and the provenance of the analytical processes that analyze these datasets. It also explains our current efforts towards tracking and utilizing Big Data provenance using workflows as a programming model to analyze Big Data.

  4. Big Machines and Big Science: 80 Years of Accelerators at Stanford

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loew, Gregory

    2008-12-16

    Longtime SLAC physicist Greg Loew will present a trip through SLAC's origins, highlighting its scientific achievements, and provide a glimpse of the lab's future in 'Big Machines and Big Science: 80 Years of Accelerators at Stanford.'

  5. Epidemiology in wonderland: Big Data and precision medicine.

    PubMed

    Saracci, Rodolfo

    2018-03-01

    Big Data and precision medicine, two major contemporary challenges for epidemiology, are critically examined from two different angles. In Part 1, Big Data collected for research purposes (Big research Data) and Big Data used for research although collected for other primary purposes (Big secondary Data) are discussed in the light of the fundamental common requirement of data validity, which prevails over "bigness". Precision medicine is treated by developing the key point that high relative risks are as a rule required to make a variable or combination of variables suitable for predicting disease occurrence, outcome, or response to treatment; the commercial proliferation of allegedly predictive tests of unknown or poor validity is commented upon. Part 2 proposes a "wise epidemiology" approach to: (a) choosing, in a context imprinted by Big Data and precision medicine, epidemiological research projects actually relevant to population health; (b) training epidemiologists; (c) investigating the impact on clinical practices and the doctor-patient relationship of the influx of Big Data and computerized medicine; and (d) clarifying whether "health" may today be redefined, as some maintain, in purely technological terms.

  6. A proposed framework of big data readiness in public sectors

    NASA Astrophysics Data System (ADS)

    Ali, Raja Haslinda Raja Mohd; Mohamad, Rosli; Sudin, Suhizaz

    2016-08-01

    Growing interest in big data is mainly linked to its great potential to unveil unforeseen patterns or profiles that support an organisation's key business decisions. Following the private sector's moves to embrace big data, the government sector is now getting on the bandwagon. Big data has been considered one of the potential tools to enhance service delivery in the public sector within its financial resource constraints. The Malaysian government, in particular, has made big data one of the main items on the national agenda. Regardless of the government's commitment to promote big data amongst government agencies, the degree of readiness of the government agencies as well as their employees is crucial in ensuring successful deployment of big data. This paper, therefore, proposes a conceptual framework to investigate perceived readiness for big data potentials amongst Malaysian government agencies. Perceived readiness of 28 ministries and their respective employees will be assessed using both qualitative (interview) and quantitative (survey) approaches. The outcome of the study is expected to offer meaningful insight into factors affecting change readiness among public agencies regarding big data potentials and the expected outcomes from greater or lower change readiness among the public sectors.

  7. Role of the neutral endopeptidase 24.11 in the conversion of big endothelins in guinea-pig lung parenchyma.

    PubMed Central

    Lebel, N.; D'Orléans-Juste, P.; Fournier, A.; Sirois, P.

    1996-01-01

    1. We have studied the conversion of big endothelin-1 (big ET-1), big endothelin-2 (big ET-2) and big endothelin-3 (big ET-3) and characterized the enzyme involved in the conversion of the three peptides in guinea-pig lung parenchyma (GPLP). 2. Endothelin-1 (ET-1), endothelin-2 (ET-2) and endothelin-3 (ET-3) (10 nM to 100 nM) caused similar concentration-dependent contractions of strips of GPLP. 3. Big ET-1 and big ET-2 also elicited concentration-dependent contractions of GPLP strips. In contrast, big ET-3, up to a concentration of 100 nM, failed to induce a contraction of the GPLP. 4. Incubation of strips of GPLP with the dual endothelin converting enzyme (ECE) and neutral endopeptidase (NEP) inhibitor, phosphoramidon (10 microM), as well as two other NEP inhibitors thiorphan (10 microM) or SQ 28,603 (10 microM) decreased by 43% (P < 0.05), 42% (P < 0.05) and 40% (P < 0.05) the contractions induced by 30 nM of big ET-1 respectively. Captopril (10 microM), an angiotensin-converting enzyme inhibitor, had no effect on the contractions induced by big ET-1. 5. The incubation of strips of GPLP with phosphoramidon (10 microM), thiorphan (10 microM) or SQ 28,603 (10 microM) also decreased by 74% (P < 0.05), 34% and 50% (P < 0.05) the contractions induced by 30 nM big ET-2 respectively. As for the contractions induced by big ET-1, captopril (10 microM) had no effect on the concentration-dependent contractions induced by big ET-2. 6. Phosphoramidon (10 microM), thiorphan (10 microM) and SQ 28,603 (10 microM) significantly potentiated the contractions of strips of GPLP induced by both ET-1 (30 nM) and ET-3 (30 nM). However, the enzymatic inhibitors did not significantly affect the contractions induced by ET-2 (30 nM) in this tissue. 7. These results suggest that the effects of big ET-1 and big ET-2 result from the conversion to ET-1 and ET-2 by at least one enzyme sensitive to phosphoramidon, thiorphan and SQ 28,603. 
This enzyme possibly corresponds to EC 3.4.24.11 (NEP 24.11) and could also be responsible for the degradation of ETs in the GPLP. PMID:8825361

  8. The Study of “big data” to support internal business strategists

    NASA Astrophysics Data System (ADS)

    Ge, Mei

    2018-01-01

    How is big data different from previous data analysis systems? The primary purpose behind the traditional small-data analytics that most managers are familiar with is to support internal business strategies. But big data also offers a promising new dimension: discovering new opportunities to offer customers high-value products and services. This study introduces some of the strategies that big data supports. Business decisions using big data can also draw on several areas of analytics, including customer satisfaction, customer journeys, supply chains, risk management, competitive intelligence, pricing, and discovery and experimentation.

  9. Big Data - What is it and why it matters.

    PubMed

    Tattersall, Andy; Grant, Maria J

    2016-06-01

    Big data, like MOOCs, altmetrics and open access, is a term that has been commonplace in the library community for some time. Yet, despite its prevalence, many in the library and information sector remain unsure of the relationship between big data and their roles. This editorial explores what big data could mean for the day-to-day practice of health library and information workers, presenting examples of big data in action, considering the ethics of accessing big data sets, and the potential for new roles for library and information workers. © 2016 Health Libraries Group.

  10. WE-H-BRB-02: Where Do We Stand in the Applications of Big Data in Radiation Oncology?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xing, L.

    Big Data in Radiation Oncology: (1) Overview of the NIH 2015 Big Data Workshop, (2) Where do we stand in the applications of big data in radiation oncology?, and (3) Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success. The overriding goal of this trio panel of presentations is to improve awareness of the wide-ranging opportunities for big data to impact patient quality care and to enhance potential for research and collaboration opportunities with NIH and a host of new big data initiatives. This presentation will also summarize the Big Data workshop that was held at the NIH Campus on August 13–14, 2015, and sponsored by AAPM, ASTRO, and NIH. The workshop included discussion of current Big Data cancer registry initiatives, safety and incident reporting systems, and other strategies that will have the greatest impact on radiation oncology research, quality assurance, safety, and outcomes analysis. Learning Objectives: To discuss current and future sources of big data for use in radiation oncology research; To optimize our current data collection by adopting new strategies from outside radiation oncology; To determine what new knowledge big data can provide for clinical decision support for personalized medicine. L. Xing, NIH/NCI; Google Inc.

  11. WE-H-BRB-00: Big Data in Radiation Oncology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Big Data in Radiation Oncology: (1) Overview of the NIH 2015 Big Data Workshop, (2) Where do we stand in the applications of big data in radiation oncology?, and (3) Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success. The overriding goal of this trio panel of presentations is to improve awareness of the wide-ranging opportunities for big data to impact patient quality care and to enhance potential for research and collaboration opportunities with NIH and a host of new big data initiatives. This presentation will also summarize the Big Data workshop that was held at the NIH Campus on August 13–14, 2015, and sponsored by AAPM, ASTRO, and NIH. The workshop included discussion of current Big Data cancer registry initiatives, safety and incident reporting systems, and other strategies that will have the greatest impact on radiation oncology research, quality assurance, safety, and outcomes analysis. Learning Objectives: To discuss current and future sources of big data for use in radiation oncology research; To optimize our current data collection by adopting new strategies from outside radiation oncology; To determine what new knowledge big data can provide for clinical decision support for personalized medicine. L. Xing, NIH/NCI; Google Inc.

  12. Big Data, Big Problems: A Healthcare Perspective.

    PubMed

    Househ, Mowafa S; Aldosari, Bakheet; Alanazi, Abdullah; Kushniruk, Andre W; Borycki, Elizabeth M

    2017-01-01

    Much has been written on the benefits of big data for healthcare, such as improving patient outcomes, public health surveillance, and healthcare policy decisions. Over the past five years, Big Data, and the data sciences field in general, has been hyped as the "Holy Grail" for the healthcare industry, promising a more efficient healthcare system with improved healthcare outcomes. More recently, however, healthcare researchers have been exposing the potentially harmful effects Big Data can have on patient care, associating it with increased medical costs, patient mortality, and misguided decision making by clinicians and healthcare policy makers. In this paper, we review current Big Data trends with a specific focus on the inadvertent negative impacts that Big Data could have on healthcare in general and, specifically, as it relates to patient and clinical care. Our study results show that although Big Data is built up to be the "Holy Grail" for healthcare, small data techniques using traditional statistical methods are, in many cases, more accurate and can lead to better healthcare outcomes than Big Data methods. In sum, Big Data for healthcare may cause more problems for the healthcare industry than solutions, and in short, when it comes to the use of data in healthcare, "size isn't everything."

  13. Benchmarking Big Data Systems and the BigData Top100 List.

    PubMed

    Baru, Chaitanya; Bhandarkar, Milind; Nambiar, Raghunath; Poess, Meikel; Rabl, Tilmann

    2013-03-01

    "Big data" has become a major force of innovation across enterprises of all sizes. New platforms with increasingly more features for managing big datasets are being announced almost on a weekly basis. Yet, there is currently a lack of any means of comparability among such platforms. While the performance of traditional database systems is well understood and measured by long-established institutions such as the Transaction Processing Performance Council (TPC), there is neither a clear definition of the performance of big data systems nor a generally agreed upon metric for comparing these systems. In this article, we describe a community-based effort for defining a big data benchmark. Over the past year, a Big Data Benchmarking Community has become established in order to fill this void. The effort focuses on defining an end-to-end application-layer benchmark for measuring the performance of big data applications, with the ability to easily adapt the benchmark specification to evolving challenges in the big data space. This article describes the efforts that have been undertaken thus far toward the definition of a BigData Top100 List. While highlighting the major technical as well as organizational challenges, through this article, we also solicit community input into this process.

  14. Ontogeny of Big endothelin-1 effects in newborn piglet pulmonary vasculature.

    PubMed

    Liben, S; Stewart, D J; De Marte, J; Perreault, T

    1993-07-01

    Endothelin-1 (ET-1), a 21-amino acid peptide produced by endothelial cells, results from the cleavage of preproendothelin, generating Big ET-1, which is then cleaved by the ET-converting enzyme (ECE) to form ET-1. Big ET-1, like ET-1, is released by endothelial cells. Big ET-1 is equipotent to ET-1 in vivo, whereas its vasoactive effects are less in vitro. It has been suggested that the effects of Big ET-1 depend on its conversion to ET-1. ET-1 has potent vasoactive effects in the newborn pig pulmonary circulation, however, the effects of Big ET-1 remain unknown. Therefore, we studied the effects of Big ET-1 in isolated perfused lungs from 1- and 7-day-old piglets using the ECE inhibitor, phosphoramidon, and the ETA receptor antagonist, BQ-123Na. The rate of conversion of Big ET-1 to ET-1 was measured using radioimmunoassay. ET-1 (10(-13) to 10(-8) M) produced an initial vasodilation, followed by a dose-dependent potent vasoconstriction (P < 0.001), which was equal at both ages. Big ET-1 (10(-11) to 10(-8) M) also produced a dose-dependent vasoconstriction (P < 0.001). The constrictor effects of Big ET-1 and ET-1 were similar in the 1-day-old, whereas in the 7-day-old, the constrictor effect of Big ET-1 was less than that of ET-1 (P < 0.017).(ABSTRACT TRUNCATED AT 250 WORDS)

  15. Development and validation of Big Four personality scales for the Schedule for Nonadaptive and Adaptive Personality--Second Edition (SNAP-2).

    PubMed

    Calabrese, William R; Rudick, Monica M; Simms, Leonard J; Clark, Lee Anna

    2012-09-01

    Recently, integrative, hierarchical models of personality and personality disorder (PD)--such as the Big Three, Big Four, and Big Five trait models--have gained support as a unifying dimensional framework for describing PD. However, no measures to date can simultaneously represent each of these potentially interesting levels of the personality hierarchy. To unify these measurement models psychometrically, we sought to develop Big Five trait scales within the Schedule for Nonadaptive and Adaptive Personality--Second Edition (SNAP-2). Through structural and content analyses, we examined relations between the SNAP-2, the Big Five Inventory (BFI), and the NEO Five-Factor Inventory (NEO-FFI) ratings in a large data set (N = 8,690), including clinical, military, college, and community participants. Results yielded scales consistent with the Big Four model of personality (i.e., Neuroticism, Conscientiousness, Introversion, and Antagonism) and not the Big Five, as there were insufficient items related to Openness. Resulting scale scores demonstrated strong internal consistency and temporal stability. Structural validity and external validity were supported by strong convergent and discriminant validity patterns between Big Four scale scores and other personality trait scores and expectable patterns of self-peer agreement. Descriptive statistics and community-based norms are provided. The SNAP-2 Big Four Scales enable researchers and clinicians to assess personality at multiple levels of the trait hierarchy and facilitate comparisons among competing big-trait models. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  16. Development and Validation of Big Four Personality Scales for the Schedule for Nonadaptive and Adaptive Personality-2nd Edition (SNAP-2)

    PubMed Central

    Calabrese, William R.; Rudick, Monica M.; Simms, Leonard J.; Clark, Lee Anna

    2012-01-01

    Recently, integrative, hierarchical models of personality and personality disorder (PD)—such as the Big Three, Big Four and Big Five trait models—have gained support as a unifying dimensional framework for describing PD. However, no measures to date can simultaneously represent each of these potentially interesting levels of the personality hierarchy. To unify these measurement models psychometrically, we sought to develop Big Five trait scales within the Schedule for Adaptive and Nonadaptive Personality–2nd Edition (SNAP-2). Through structural and content analyses, we examined relations between the SNAP-2, Big Five Inventory (BFI), and NEO-Five Factor Inventory (NEO-FFI) ratings in a large data set (N = 8,690), including clinical, military, college, and community participants. Results yielded scales consistent with the Big Four model of personality (i.e., Neuroticism, Conscientiousness, Introversion, and Antagonism) and not the Big Five as there were insufficient items related to Openness. Resulting scale scores demonstrated strong internal consistency and temporal stability. Structural and external validity was supported by strong convergent and discriminant validity patterns between Big Four scale scores and other personality trait scores and expectable patterns of self-peer agreement. Descriptive statistics and community-based norms are provided. The SNAP-2 Big Four Scales enable researchers and clinicians to assess personality at multiple levels of the trait hierarchy and facilitate comparisons among competing “Big Trait” models. PMID:22250598

  17. Plans, Trains, and Automobiles: Big River Crossing Issues in a Small Community

    DOT National Transportation Integrated Search

    1999-01-01

    This paper addresses cross-cutting topics associated with the replacement of a regional Mississippi River crossing along the Great River Road. The breadth and depth of issues define the ease with which transportation problems can be solved. In ...

  18. 77 FR 49779 - Big Horn County Resource Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-17

    ... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Big Horn County Resource Advisory Committee... Big Horn County Weed and Pest Building, 4782 Highway 310, Greybull, Wyoming. Written comments about...

  19. Big sagebrush seed bank densities following wildfires

    USDA-ARS?s Scientific Manuscript database

    Big sagebrush (Artemisia spp.) is a critical shrub to many wildlife species including sage grouse (Centrocercus urophasianus), mule deer (Odocoileus hemionus), and pygmy rabbit (Brachylagus idahoensis). Big sagebrush is killed by wildfires and big sagebrush seed is generally short-lived and do not s...

  20. 10. EASTERLY VIEW OF THE ACCESS ROAD TO THE DOWNSTREAM ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. EASTERLY VIEW OF THE ACCESS ROAD TO THE DOWNSTREAM SIDE OF BIG DALTON DAM EXTENDING FROM THE FOOTBRIDGE TO THE GAGING STATION. BIG DALTON DAM IN BACKGROUND. - Big Dalton Dam, 2600 Big Dalton Canyon Road, Glendora, Los Angeles County, CA

  1. A Great Year for the Big Blue Water

    NASA Astrophysics Data System (ADS)

    Leinen, M.

    2016-12-01

    It has been a great year for the big blue water. Last year the 'United_Nations' decided that it would focus on long time remain alright for the big blue water as one of its 'Millenium_Development_Goals'. This is new. In the past the big blue water was never even considered as a part of this world long time remain alright push. Also, last year the big blue water was added to the words of the group of world people paper #21 on cooling the air and things. It is hard to believe that the big blue water was not in the paper before because 70% of the world is covered by the big blue water! Many people at the group of world meeting were from our friends at 'AGU'.

  2. [Big Data and Public Health - Results of the Working Group 1 of the Forum Future Public Health, Berlin 2016].

    PubMed

    Moebus, Susanne; Kuhn, Joseph; Hoffmann, Wolfgang

    2017-11-01

    Big Data is a diffuse term that can be described as an approach to linking gigantic and often unstructured data sets. Big Data is used in many corporate areas. For Public Health (PH), however, Big Data is not a well-developed topic. In this article, Big Data is explained in terms of intention of use, information efficiency, prediction, and clustering. Using examples of applications in science, patient care, equal opportunities, and smart cities, typical challenges and open questions of Big Data for PH are outlined. Beyond the inevitable use of Big Data, networking is necessary, especially with knowledge carriers and decision makers from politics and health care practice. © Georg Thieme Verlag KG Stuttgart · New York.

  3. Big Data Provenance: Challenges, State of the Art and Opportunities

    PubMed Central

    Wang, Jianwu; Crawl, Daniel; Purawat, Shweta; Nguyen, Mai; Altintas, Ilkay

    2017-01-01

    Ability to track provenance is a key feature of scientific workflows to support data lineage and reproducibility. The challenges introduced by the volume, variety, and velocity of Big Data also extend to the provenance and quality of Big Data, defined as veracity. The increasing size and variety of distributed Big Data provenance information bring new technical challenges and opportunities throughout the provenance lifecycle, including recording, querying, sharing, and utilization. This paper discusses the challenges and opportunities of Big Data provenance related to the veracity of the datasets themselves and the provenance of the analytical processes that analyze these datasets. It also explains our current efforts towards tracking and utilizing Big Data provenance using workflows as a programming model to analyze Big Data. PMID:29399671

  4. Big Biology: Supersizing Science During the Emergence of the 21st Century

    PubMed Central

    Vermeulen, Niki

    2017-01-01

    Is biology the youngest member of the Big Science family? Increased collaboration in biological research became the subject of heated discussion in the wake of the Human Genome Project, but debates and reflections mostly remained polemical and showed limited appreciation of the diversity and explanatory power of the concept of Big Science. At the same time, scholars of science and technology studies have avoided the term Big Science in their descriptions of the changing research landscape. This interdisciplinary article combines a conceptual analysis of Big Science with data and ideas from a multi-method study of several large research projects in biology. The aim is to develop an empirically grounded, nuanced, and analytically useful understanding of Big Biology and to move beyond the normative debates, with their simple dichotomies and rhetorical positions. While the concept of Big Science can be seen as a fashion in science policy, by now perhaps even an old-fashioned one, I argue that using it analytically draws our attention to the expansion of collaboration in the life sciences. The analysis of Big Biology reveals differences from Big Physics and other forms of Big Science, notably in the patterns of research organization, the technologies used, and the societal contexts in which it operates. Reflections on Big Science, Big Biology, and their relation to knowledge production can thus place recent claims about fundamental changes in life-science research in historical context. PMID:27215209

  5. 77 FR 64827 - Certain Lighting Control Devices Including Dimmer Switches and Parts Thereof (IV); Final...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-23

    ...Notice is hereby given that the U.S. International Trade Commission has terminated the above-captioned investigation with a finding of violation of section 337, and has issued a general exclusion order directed against infringing lighting control devices including dimmer switches and parts thereof, and cease and desist orders directed against respondents American Top Electric Corp. (``American Top'') and Big Deal Electric Corp. (``Big Deal''), both of Santa Ana, California; Elemental LED, LLC d/b/a Diode LED (``Elemental'') of Emeryville, California; and Zhejiang Yuelong Mechanical and Electrical Co. (``Zhejiang Yuelong'') of Zhejiang, China.

  6. Applying computation biology and "big data" to develop multiplex diagnostics for complex chronic diseases such as osteoarthritis.

    PubMed

    Ren, Guomin; Krawetz, Roman

    2015-01-01

    The data explosion in the last decade is revolutionizing diagnostics research and the healthcare industry, offering both opportunities and challenges. High-throughput "omics" techniques have generated more scientific data in the last few years than in the entire previous history of mankind. Here we present a brief summary of how "big data" have influenced early diagnosis of complex diseases. We will also review some of the most commonly used "omics" techniques and their applications in diagnostics. Finally, we will discuss the issues raised by these new techniques when translating laboratory discoveries to clinical practice.

  7. Evaluation of Healthcare Interventions and Big Data: Review of Associated Data Issues.

    PubMed

    Asche, Carl V; Seal, Brian; Kahler, Kristijan H; Oehrlein, Elisabeth M; Baumgartner, Meredith Greer

    2017-08-01

    Although the analysis of 'big data' holds tremendous potential to improve patient care, there remain significant challenges before it can be realized. Accuracy and completeness of data, linkage of disparate data sources, and access to data are areas that require particular focus. This article discusses these areas and shares strategies to promote progress. Improvement in clinical coding, innovative matching methodologies, and investment in data standardization are potential solutions to data validation and linkage problems. Challenges to data access still require significant attention with data ownership, security needs, and costs representing significant barriers to access.
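    A sketch of the deterministic record-linkage idea alluded to above: records from two disparate sources are joined on a normalized key. Field names and records are hypothetical; real matching methodologies must also handle near-matches and missing fields.

```python
def norm_key(rec):
    # Normalize identifying fields so trivial formatting differences still match.
    return (rec["last_name"].strip().lower(), rec["dob"])

def link(source_a, source_b):
    # Deterministic linkage: exact match on the normalized key.
    index = {norm_key(r): r for r in source_b}
    return [(a, index[norm_key(a)]) for a in source_a if norm_key(a) in index]

claims = [{"last_name": "Smith ", "dob": "1980-01-02", "claim_id": 12}]
ehr = [{"last_name": "smith", "dob": "1980-01-02", "systolic_bp": 120}]
pairs = link(claims, ehr)  # one matched pair despite formatting differences
```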

  8. Putting the methodological brakes on claims to measure national happiness through Twitter: Methodological limitations in social media analytics.

    PubMed

    Jensen, Eric Allen

    2017-01-01

    With the rapid global proliferation of social media, there has been growing interest in using this existing source of easily accessible 'big data' to develop social science knowledge. However, amidst the big data gold rush, it is important that long-established principles of good social research are not ignored. This article critically evaluates Mitchell et al.'s (2013) study, 'The Geography of Happiness: Connecting Twitter Sentiment and Expression, Demographics, and Objective Characteristics of Place', demonstrating the importance of attending to key methodological issues associated with secondary data analysis.

  9. Petition to Object to Otter Tail Power Company's Big Stone Power Plant, Big Stone City, South Dakota, Title V Operating Permit

    EPA Pesticide Factsheets

    This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Petition Database available at www2.epa.gov/title-v-operating-permits/title-v-petition-database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  10. Balance in machine architecture: Bandwidth on board and offboard, integer/control speed and flops versus memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischler, M.

    1992-04-01

    The issues to be addressed here are those of "balance" in machine architecture. By this, we mean how much emphasis must be placed on various aspects of the system to maximize its usefulness for physics. There are three components that contribute to the utility of a system: how the machine can be used, how big a problem can be attacked, and what the effective capabilities (power) of the hardware are. The effective power issue is a matter of evaluating the impact of design decisions trading off architectural features such as memory bandwidth and interprocessor communication capabilities. What is studied is the effect these machine parameters have on how quickly the system can solve desired problems. There is a reasonable method for studying this: one selects a few representative algorithms and computes the impact of changing memory bandwidths, and so forth. The only room for controversy here is in the selection of representative problems. The issue of how big a problem can be attacked boils down to a balance of memory size versus power. Although this is a balance issue, it is very different from the effective power situation, because no firm answer can be given at this time. The power-to-memory ratio is highly problem dependent, and optimizing it requires several pieces of physics input, including: how big a lattice is needed for interesting results; what sort of algorithms are best to use; and how many sweeps are needed to get valid results. We seem to be at the threshold of learning things about these issues, but for now, the memory size issue will necessarily be addressed in terms of best guesses, rules of thumb, and researchers' opinions.
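    A back-of-the-envelope model of the effective-power trade-off described above can be sketched as follows, assuming (as a simplification of the abstract's method) that a step's time is limited either by compute throughput or by memory bandwidth, whichever bound is tighter. All kernel and machine parameters are hypothetical.

```python
def step_time(flops, bytes_moved, peak_flops, bandwidth):
    # Seconds per step under a simple roofline-style model: the step takes
    # whichever is longer, the compute time or the data-movement time.
    return max(flops / peak_flops, bytes_moved / bandwidth)

# Hypothetical lattice-update kernel: 1000 flops and 800 bytes moved per site.
t_balanced = step_time(1000, 800, peak_flops=1e9, bandwidth=4e9)  # compute-bound
t_starved = step_time(1000, 800, peak_flops=1e9, bandwidth=1e8)   # bandwidth-bound
```

    Varying `bandwidth` while holding the kernel fixed is exactly the kind of "impact of changing memory bandwidths" computation the abstract proposes for representative algorithms.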

  12. BIG1 is required for the survival of deep layer neurons, neuronal polarity, and the formation of axonal tracts between the thalamus and neocortex in developing brain

    PubMed Central

    Teoh, Jia-Jie; Iwano, Tomohiko; Kunii, Masataka; Atik, Nur; Avriyanti, Erda; Yoshimura, Shin-ichiro; Moriwaki, Kenta

    2017-01-01

    BIG1, an activator protein of the small GTPase, Arf, and encoded by the Arfgef1 gene, is one of candidate genes for epileptic encephalopathy. To know the involvement of BIG1 in epileptic encephalopathy, we analyzed BIG1-deficient mice and found that BIG1 regulates neurite outgrowth and brain development in vitro and in vivo. The loss of BIG1 decreased the size of the neocortex and hippocampus. In BIG1-deficient mice, the neuronal progenitor cells (NPCs) and the interneurons were unaffected. However, Tbr1+ and Ctip2+ deep layer (DL) neurons showed spatial-temporal dependent apoptosis. This apoptosis gradually progressed from the piriform cortex (PIR), peaked in the neocortex, and then progressed into the hippocampus from embryonic day 13.5 (E13.5) to E17.5. The upper layer (UL) and DL order in the neocortex was maintained in BIG1-deficient mice, but the excitatory neurons tended to accumulate before their destination layers. Further pulse-chase migration assay showed that the migration defect was non-cell autonomous and secondary to the progression of apoptosis into the BIG1-deficient neocortex after E15.5. In BIG1-deficient mice, we observed an ectopic projection of corticothalamic axons from the primary somatosensory cortex (S1) into the dorsal lateral geniculate nucleus (dLGN). The thalamocortical axons were unable to cross the diencephalon–telencephalon boundary (DTB). In vitro, BIG1-deficient neurons showed a delay in neuronal polarization. BIG1-deficient neurons were also hypersensitive to low dose glutamate (5 μM), and died via apoptosis. This study showed the role of BIG1 in the survival of DL neurons in developing embryonic brain and in the generation of neuronal polarity. PMID:28414797

  13. 75 FR 71069 - Big Horn County Resource Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-22

    ... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Big Horn County Resource Advisory Committee... held at the Big Horn County Weed and Pest Building, 4782 Highway 310, Greybull, Wyoming. Written...

  14. Application and Exploration of Big Data Mining in Clinical Medicine.

    PubMed

    Zhang, Yue; Guo, Shu-Li; Han, Li-Na; Li, Tie-Ling

    2016-03-20

    To review theories and technologies of big data mining and their application in clinical medicine. Literatures published in English or Chinese regarding theories and technologies of big data mining and the concrete applications of data mining technology in clinical medicine were obtained from PubMed and Chinese Hospital Knowledge Database from 1975 to 2015. Original articles regarding big data mining theory/technology and big data mining's application in the medical field were selected. This review characterized the basic theories and technologies of big data mining including fuzzy theory, rough set theory, cloud theory, Dempster-Shafer theory, artificial neural network, genetic algorithm, inductive learning theory, Bayesian network, decision tree, pattern recognition, high-performance computing, and statistical analysis. The application of big data mining in clinical medicine was analyzed in the fields of disease risk assessment, clinical decision support, prediction of disease development, guidance of rational use of drugs, medical management, and evidence-based medicine. Big data mining has the potential to play an important role in clinical medicine.
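    Illustrative only: a one-level decision tree ("stump"), the simplest member of the decision-tree family listed among the mining techniques above, fit to made-up risk-factor data. The biomarker and labels are hypothetical, not from any study reviewed here.

```python
def fit_stump(values, labels):
    # Choose the threshold on one feature that misclassifies the fewest cases.
    best_t, best_errs = None, None
    for t in sorted(set(values)):
        errs = sum((v >= t) != y for v, y in zip(values, labels))
        if best_errs is None or errs < best_errs:
            best_t, best_errs = t, errs
    return best_t

# Hypothetical biomarker levels and disease status for six patients.
levels = [1.0, 2.0, 3.0, 8.0, 9.0, 10.0]
disease = [False, False, False, True, True, True]
threshold = fit_stump(levels, disease)   # 8.0 cleanly separates the two groups
predict = lambda level: level >= threshold
```

    Full decision-tree learners recurse on such splits; clinical-grade risk models add validation, calibration and many more features.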

  15. BigDog

    NASA Astrophysics Data System (ADS)

    Playter, R.; Buehler, M.; Raibert, M.

    2006-05-01

    BigDog's goal is to be the world's most advanced quadruped robot for outdoor applications. BigDog is aimed at the mission of a mechanical mule - a category with few competitors to date: power autonomous quadrupeds capable of carrying significant payloads, operating outdoors, with static and dynamic mobility, and fully integrated sensing. BigDog is about 1 m tall, 1 m long and 0.3 m wide, and weighs about 90 kg. BigDog has demonstrated walking and trotting gaits, as well as standing up and sitting down. Since its creation in the fall of 2004, BigDog has logged tens of hours of walking, climbing and running time. It has walked up and down 25 & 35 degree inclines and trotted at speeds up to 1.8 m/s. BigDog has walked at 0.7 m/s over loose rock beds and carried over 50 kg of payload. We are currently working to expand BigDog's rough terrain mobility through the creation of robust locomotion strategies and terrain sensing capabilities.

  16. Creating value in health care through big data: opportunities and policy implications.

    PubMed

    Roski, Joachim; Bo-Linn, George W; Andrews, Timothy A

    2014-07-01

    Big data has the potential to create significant value in health care by improving outcomes while lowering costs. Big data's defining features include the ability to handle massive data volume and variety at high velocity. New, flexible, and easily expandable information technology (IT) infrastructure, including so-called data lakes and cloud data storage and management solutions, make big-data analytics possible. However, most health IT systems still rely on data warehouse structures. Without the right IT infrastructure, analytic tools, visualization approaches, work flows, and interfaces, the insights provided by big data are likely to be limited. Big data's success in creating value in the health care sector may require changes in current polices to balance the potential societal benefits of big-data approaches and the protection of patients' confidentiality. Other policy implications of using big data are that many current practices and policies related to data use, access, sharing, privacy, and stewardship need to be revised. Project HOPE—The People-to-People Health Foundation, Inc.

  17. Big Data Management in US Hospitals: Benefits and Barriers.

    PubMed

    Schaeffer, Chad; Booton, Lawrence; Halleck, Jamey; Studeny, Jana; Coustasse, Alberto

    Big data has been considered as an effective tool for reducing health care costs by eliminating adverse events and reducing readmissions to hospitals. The purposes of this study were to examine the emergence of big data in the US health care industry, to evaluate a hospital's ability to effectively use complex information, and to predict the potential benefits that hospitals might realize if they are successful in using big data. The findings of the research suggest that there were a number of benefits expected by hospitals when using big data analytics, including cost savings and business intelligence. By using big data, many hospitals have recognized that there have been challenges, including lack of experience and cost of developing the analytics. Many hospitals will need to invest in the acquiring of adequate personnel with experience in big data analytics and data integration. The findings of this study suggest that the adoption, implementation, and utilization of big data technology will have a profound positive effect among health care providers.

  18. Study on Effects of Different Replacement Rate on Bending Behavior of Big Recycled Aggregate Self Compacting Concrete

    NASA Astrophysics Data System (ADS)

    Li, Jing; Guo, Tiantian; Gao, Shuai; Jiang, Lin; Zhao, Zhijun; Wang, Yalin

    2018-03-01

    Big recycled aggregate self compacting concrete is a new type of recycled concrete, which has the advantages of low hydration heat and green environmental protection, but its bending behavior can be affected by the replacement rate. Therefore, in this paper, the research status of big recycled aggregate self compacting concrete is systematically introduced, and the effects of different replacement rates of big recycled aggregate on the failure mode, crack distribution and bending strength of the beam were studied through bending tests of 4 big recycled aggregate self compacting concrete beams. The results show that: the crack distribution of the beam is affected by the replacement rate; the failure modes of big recycled aggregate beams are the same as those of ordinary concrete; the plane section assumption is applicable to the big recycled aggregate self compacting concrete beam; and the higher the replacement rate, the lower the bending strength of big recycled aggregate self compacting concrete beams.

  19. Water-quality effects on phytoplankton species and density and trophic state indices at Big Base and Little Base Lakes, Little Rock Air Force Base, Arkansas, June through August, 2015

    USGS Publications Warehouse

    Driver, Lucas; Justus, Billy

    2016-01-01

    Big Base and Little Base Lakes are located on Little Rock Air Force Base, Arkansas, and their close proximity to a dense residential population and an active military/aircraft installation make the lakes vulnerable to water-quality degradation. The U.S. Geological Survey (USGS) conducted a study from June through August 2015 to investigate the effects of water quality on phytoplankton species and density and trophic state in Big Base and Little Base Lakes, with particular regard to nutrient concentrations. Nutrient concentrations, trophic-state indices, and the large proportion of phytoplankton biovolume composed of cyanobacteria indicate that eutrophic conditions were prevalent in Big Base and Little Base Lakes, particularly in August 2015. Cyanobacteria densities and biovolumes measured in this study likely pose a low to moderate risk of adverse algal toxicity, and the high proportion of filamentous cyanobacteria in the lakes, relative to other algal groups, is important from a fisheries standpoint because these algae are a poor food source for many aquatic taxa. In both lakes, total nitrogen to total phosphorus (N:P) ratios declined over the sampling period as total phosphorus concentrations increased relative to nitrogen concentrations. The N:P ratios in the August samples (20:1 and 15:1 in Big Base and Little Base Lakes, respectively) and other indications of eutrophic conditions are of concern and suggest that exposure of the two lakes to additional nutrients could cause unfavorable dissolved-oxygen conditions and increase the risk of cyanobacteria blooms and associated cyanotoxin issues.
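    The N:P ratios above are simple quotients of total nitrogen over total phosphorus. The concentrations below (in µg/L) are hypothetical values chosen only to reproduce the reported ratios, not measurements from the study.

```python
def np_ratio(total_n_ug_l, total_p_ug_l):
    # Total nitrogen to total phosphorus ratio (same units cancel out).
    return total_n_ug_l / total_p_ug_l

big_base_august = np_ratio(1000, 50)    # 20:1, as reported for Big Base Lake
little_base_august = np_ratio(750, 50)  # 15:1, as reported for Little Base Lake
```

    A declining ratio at constant nitrogen, as observed over the sampling period, implies rising phosphorus, which tends to favor cyanobacteria.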

  20. Consolidation in the health care market: good or bad for consumers.

    PubMed

    1996-02-01

    The health care system has been transforming itself; the big fish are eating the small fish; the small fish are combining to grow big. Providers, including hospitals, physicians, and health plans, are aggressively buying, selling, merging, entering into joint ventures, and otherwise consolidating into larger affiliations. These new enterprises, in turn, form networks that can deliver all the health services a patient may need in a lifetime--acute, home health, and nursing home care, and more--and then compete for insurance contracts to cover the greatest possible number of people. While this market reorganization seems to have a role in tempering spiraling costs, it raises other issues of choice, quality, and access. This issue of States of Health examines how consumer interests are faring in the changing market place.

  1. Utilization of Large Data Sets in Maternal Health in Finland: A Case for Global Health Research.

    PubMed

    Lamminpää, Reeta; Gissler, Mika; Vehviläinen-Julkunen, Katri

    In recent years, the use of large data sets, such as electronic health records, has increased. These large data sets are often referred to as "Big Data," a term with various definitions. The purpose of this article was to summarize and review the utilization, strengths, and challenges of register data (written records containing regular entries of items or details) and Big Data, especially in maternal nursing, using 4 examples of studies based on the Finnish Medical Birth Register and relating these to other international databases and data sets. Using large health register data is crucial when studying and understanding outcomes of maternity care. This type of data enables comparisons at a population level and can be utilized in research related to maternal health, with important issues and implications for future research and clinical practice. Although there are challenges connected with register data and Big Data, these large data sets offer the opportunity for timely insight into population-based information on relevant research topics in maternal health. Nurse researchers need to understand the possibilities and limitations of using existing register data in maternity research. Maternal child nurse researchers can be leaders of the movement to utilize Big Data to improve global maternal health.

  2. Big data are coming to psychiatry: a general introduction.

    PubMed

    Monteith, Scott; Glenn, Tasha; Geddes, John; Bauer, Michael

    2015-12-01

    Big data are coming to the study of bipolar disorder and all of psychiatry. Data are coming from providers and payers (including EMR, imaging, insurance claims and pharmacy data), from omics (genomic, proteomic, and metabolomic data), and from patients and non-providers (data from smart phone and Internet activities, sensors and monitoring tools). Analysis of the big data will provide unprecedented opportunities for exploration, descriptive observation, hypothesis generation, and prediction, and the results of big data studies will be incorporated into clinical practice. Technical challenges remain in the quality, analysis and management of big data. This paper discusses some of the fundamental opportunities and challenges of big data for psychiatry.

  3. Amplitude-oriented exercise in Parkinson's disease: a randomized study comparing LSVT-BIG and a short training protocol.

    PubMed

    Ebersbach, Georg; Grust, Ute; Ebersbach, Almut; Wegner, Brigitte; Gandor, Florin; Kühn, Andrea A

    2015-02-01

    LSVT-BIG is an exercise program for patients with Parkinson's disease (PD) comprising 16 one-hour sessions within 4 weeks. LSVT-BIG was compared with a 2-week short protocol (AOT-SP) consisting of 10 sessions with identical exercises in 42 patients with PD. The UPDRS-III score was reduced by 6.6 points with LSVT-BIG and 5.7 points with AOT-SP at follow-up after 16 weeks (p < 0.001). Measures of motor performance were equally improved by LSVT-BIG and AOT-SP, but high-intensity LSVT-BIG was more effective in obtaining patient-perceived benefit.

  4. Big data in psychology: A framework for research advancement.

    PubMed

    Adjerid, Idris; Kelley, Ken

    2018-02-22

    The potential for big data to provide value for psychology is significant. However, the pursuit of big data remains an uncertain and risky undertaking for the average psychological researcher. In this article, we address some of this uncertainty by discussing the potential impact of big data on the type of data available for psychological research, addressing the benefits and most significant challenges that emerge from these data, and organizing a variety of research opportunities for psychology. Our article yields two central insights. First, we highlight that big data research efforts are more readily accessible than many researchers realize, particularly with the emergence of open-source research tools, digital platforms, and instrumentation. Second, we argue that opportunities for big data research are diverse and differ both in their fit for varying research goals, as well as in the challenges they bring about. Ultimately, our outlook for researchers in psychology using and benefiting from big data is cautiously optimistic. Although not all big data efforts are suited for all researchers or all areas within psychology, big data research prospects are diverse, expanding, and promising for psychology and related disciplines. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  5. The BIG Data Center: from deposition to integration to translation

    PubMed Central

    2017-01-01

    Biological data are generated at unprecedentedly exponential rates, posing considerable challenges in big data deposition, integration and translation. The BIG Data Center, established at Beijing Institute of Genomics (BIG), Chinese Academy of Sciences, provides a suite of database resources, including (i) Genome Sequence Archive, a data repository specialized for archiving raw sequence reads, (ii) Gene Expression Nebulas, a data portal of gene expression profiles based entirely on RNA-Seq data, (iii) Genome Variation Map, a comprehensive collection of genome variations for featured species, (iv) Genome Warehouse, a centralized resource housing genome-scale data with particular focus on economically important animals and plants, (v) Methylation Bank, an integrated database of whole-genome single-base resolution methylomes and (vi) Science Wikis, a central access point for biological wikis developed for community annotations. The BIG Data Center is dedicated to constructing and maintaining biological databases through big data integration and value-added curation, conducting basic research to translate big data into big knowledge and providing freely open access to a variety of data resources in support of worldwide research activities in both academia and industry. All of these resources are publicly available and can be found at http://bigd.big.ac.cn. PMID:27899658

  6. Exploiting big data for critical care research.

    PubMed

    Docherty, Annemarie B; Lone, Nazir I

    2015-10-01

    Over recent years the digitalization, collection and storage of vast quantities of data, in combination with advances in data science, has opened up a new era of big data. In this review, we define big data, identify examples of critical care research using big data, discuss the limitations and ethical concerns of using these large datasets and finally consider scope for future research. Big data refers to datasets whose size, complexity and dynamic nature are beyond the scope of traditional data collection and analysis methods. The potential benefits to critical care are significant, with faster progress in improving health and better value for money. Although not replacing clinical trials, big data can improve their design and advance the field of precision medicine. However, there are limitations to analysing big data using observational methods. In addition, there are ethical concerns regarding maintaining confidentiality of patients who contribute to these datasets. Big data have the potential to improve medical care and reduce costs, both by individualizing medicine, and bringing together multiple sources of data about individual patients. As big data become increasingly mainstream, it will be important to maintain public confidence by safeguarding data security, governance and confidentiality.

  7. Thermochronology, Uplift and Erosion at the Australian-Pacific Plate Boundary Alpine Fault restraining bend, New Zealand

    NASA Astrophysics Data System (ADS)

    Sagar, M. W.; Seward, D.; Norton, K. P.

    2016-12-01

    The 650 km-long Australian-Pacific plate boundary Alpine Fault is remarkably straight at a regional scale, except for a prominent S-shaped bend in the northern South Island. This is a restraining bend and has been referred to as the 'Big Bend' due to similarities with the Transverse Ranges section of the San Andreas Fault. The Alpine Fault is the main source of seismic hazard in the South Island, yet there are no constraints on slip rates at the Big Bend. Furthermore, the timing of Big Bend development is poorly constrained to the Miocene. To address these issues we are using the fission-track (FT) and 40Ar/39Ar thermochronometers, together with basin-averaged cosmogenic nuclide 10Be concentrations, to constrain the onset and rate of Neogene-Quaternary exhumation of the Australian and Pacific plates at the Big Bend. Exhumation rates at the Big Bend are expected to be greater than those for adjoining sections of the Alpine Fault due to locally enhanced shortening. Apatite FT ages and modelled thermal histories indicate that exhumation of the Australian Plate had begun by 13 Ma and 3 km of exhumation has occurred since that time, requiring a minimum exhumation rate of 0.2 mm/year. In contrast, on the Pacific Plate, zircon FT cooling ages suggest ≥7 km of exhumation in the past 2-3 Ma, corresponding to a minimum exhumation rate of 2 mm/year. Preliminary assessment of stream channel gradients either side of the Big Bend suggests equilibrium between uplift and erosion. The implication of this is that Quaternary erosion rates estimated from 10Be concentrations will approximate uplift rates. These uplift rates will help to better constrain the dip-slip rate of the Alpine Fault, which will allow the National Seismic Hazard Model to be updated.
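    The minimum exhumation rates quoted above follow from simple unit arithmetic: with depth in km and time in Ma, depth divided by time gives mm/yr directly, since 1 km/Ma = 1 mm/yr.

```python
def min_exhumation_rate(depth_km, time_ma):
    # Minimum rate in mm/yr (1 km per million years equals 1 mm per year).
    return depth_km / time_ma

australian_plate = min_exhumation_rate(3, 13)  # ~0.23 mm/yr, quoted as >= 0.2
pacific_plate = min_exhumation_rate(7, 3)      # ~2.33 mm/yr, quoted as >= 2
```

    The Pacific Plate figure uses the older end of the 2-3 Ma window, which is why it is a minimum.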

  8. THE BERKELEY DATA ANALYSIS SYSTEM (BDAS): AN OPEN SOURCE PLATFORM FOR BIG DATA ANALYTICS

    DTIC Science & Technology

    2017-09-01

    Evan Sparks, Oliver Zahn, Michael J. Franklin, David A. Patterson, Saul Perlmutter. Scientific Computing Meets Big Data Technology: An Astronomy ... Processing Astronomy Imagery Using Big Data Technology. IEEE Transactions on Big Data, 2016.

  9. 76 FR 26240 - Big Horn County Resource Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-06

    ... words Big Horn County RAC in the subject line. Facsimiles may be sent to 307-674-2668. All comments... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Big Horn County Resource Advisory Committee...

  10. Managing Fleet Wide Sensory Data: Lessons Learned in Dealing with Volume, Velocity, Variety, Veracity, Value and Visibility

    DTIC Science & Technology

    2014-10-02

    Bradicich, T. & Orci, S. (2012). Moore's Law of Big Data. National Instruments Instrumentation News, December 2012. ... accurate and meaningful conclusions from such a large amount of data is a growing problem, and the term "Big Data" describes this phenomenon. ... The technology research firm International Data Corporation (IDC) recently performed a study on digital...

  11. WE-H-BRB-01: Overview of the ASTRO-NIH-AAPM 2015 Workshop On Exploring Opportunities for Radiation Oncology in the Era of Big Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benedict, S.

    Big Data in Radiation Oncology: (1) Overview of the NIH 2015 Big Data Workshop; (2) Where do we stand in the applications of big data in radiation oncology?; and (3) Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success. The overriding goal of this trio of panel presentations is to improve awareness of the wide-ranging opportunities for big data to impact patient quality care and to enhance the potential for research and collaboration opportunities with NIH and a host of new big data initiatives. This presentation will also summarize the Big Data workshop that was held at the NIH Campus on August 13-14, 2015 and sponsored by AAPM, ASTRO, and NIH. The workshop included discussion of current Big Data cancer registry initiatives, safety and incident reporting systems, and other strategies that will have the greatest impact on radiation oncology research, quality assurance, safety, and outcomes analysis. Learning Objectives: To discuss current and future sources of big data for use in radiation oncology research; To optimize our current data collection by adopting new strategies from outside radiation oncology; To determine what new knowledge big data can provide for clinical decision support for personalized medicine. L. Xing, NIH/NCI, Google Inc.

  12. Geologic map of Big Bend National Park, Texas

    USGS Publications Warehouse

    Turner, Kenzie J.; Berry, Margaret E.; Page, William R.; Lehman, Thomas M.; Bohannon, Robert G.; Scott, Robert B.; Miggins, Daniel P.; Budahn, James R.; Cooper, Roger W.; Drenth, Benjamin J.; Anderson, Eric D.; Williams, Van S.

    2011-01-01

    The purpose of this map is to provide the National Park Service and the public with an updated digital geologic map of Big Bend National Park (BBNP). The geologic map report of Maxwell and others (1967) provides a fully comprehensive account of the important volcanic, structural, geomorphological, and paleontological features that define BBNP. However, the map is on a geographically distorted planimetric base and lacks topography, which has caused difficulty in conducting GIS-based data analyses and georeferencing the many geologic features investigated and depicted on the map. In addition, the map is outdated, excluding significant data from numerous studies that have been carried out since its publication more than 40 years ago. This report includes a modern digital geologic map that can be utilized with standard GIS applications to aid BBNP researchers in geologic data analysis, natural resource and ecosystem management, monitoring, assessment, inventory activities, and educational and recreational uses. The digital map incorporates new data, many revisions, and greater detail than the original map. Although some geologic issues remain unresolved for BBNP, the updated map serves as a foundation for addressing those issues. Funding for the Big Bend National Park geologic map was provided by the United States Geological Survey (USGS) National Cooperative Geologic Mapping Program and the National Park Service. The Big Bend mapping project was administered by staff in the USGS Geology and Environmental Change Science Center, Denver, Colo. Members of the USGS Mineral and Environmental Resources Science Center completed investigations in parallel with the geologic mapping project. Results of these investigations addressed some significant current issues in BBNP and the U.S.-Mexico border region, including contaminants and human health, ecosystems, and water resources. 
Funding for the high-resolution aeromagnetic survey in BBNP, and associated data analyses and interpretation, was from the USGS Crustal Geophysics and Geochemistry Science Center. Mapping contributed from university professors and students was mostly funded by independent sources, including academic institutions, private industry, and other agencies.

  13. Attitudes toward jaguars and pumas and the acceptability of killing big cats in the Brazilian Atlantic Forest: An application of the Potential for Conflict Index2.

    PubMed

    Engel, Monica T; Vaske, Jerry J; Bath, Alistair J; Marchini, Silvio

    2017-09-01

    We explored the overall acceptability of killing jaguars and pumas in different scenarios of people-big cat interactions, the influence of attitudes toward big cats on acceptability, and the level of consensus on the responses. Data were obtained from 326 self-administered questionnaires in areas adjacent to Intervales State Park and Alto Ribeira State Park. Overall, people held slightly positive attitudes toward jaguars and pumas and viewed the killing of big cats as unacceptable. However, individuals that held negative attitudes were more accepting of killing. As the severity of people-big cat interactions increased, the level of consensus decreased. Knowing whether killing a big cat is acceptable or unacceptable in specific situations allows managers to anticipate conflict and avoid illegal killing of big cats.

  14. Propeptide big-endothelin, N-terminal-pro brain natriuretic peptide and mortality. The Ludwigshafen risk and cardiovascular health (LURIC) study.

    PubMed

    Gergei, Ingrid; Krämer, Bernhard K; Scharnagl, Hubert; Stojakovic, Tatjana; März, Winfried; Mondorf, Ulrich

The endothelin system (Big-ET-1) is a key regulator in cardiovascular (CV) disease and congestive heart failure (CHF). We have examined the incremental value of Big-ET-1 in predicting total and CV mortality next to the well-established CV risk marker N-Terminal Pro-B-Type Natriuretic Peptide (NT-proBNP). Big-ET-1 and NT-proBNP were determined in 2829 participants referred for coronary angiography (follow-up 9.9 years). Big-ET-1 is an independent predictor of total mortality, CV mortality, and death due to CHF. The conjunct use of Big-ET-1 and NT-proBNP improves the risk stratification of patients at intermediate to high risk of CV death and CHF. Big-ET-1 improves risk stratification in patients referred for coronary angiography.

  15. Personality and job performance: the Big Five revisited.

    PubMed

    Hurtz, G M; Donovan, J J

    2000-12-01

    Prior meta-analyses investigating the relation between the Big 5 personality dimensions and job performance have all contained a threat to construct validity, in that much of the data included within these analyses was not derived from actual Big 5 measures. In addition, these reviews did not address the relations between the Big 5 and contextual performance. Therefore, the present study sought to provide a meta-analytic estimate of the criterion-related validity of explicit Big 5 measures for predicting job performance and contextual performance. The results for job performance closely paralleled 2 of the previous meta-analyses, whereas analyses with contextual performance showed more complex relations among the Big 5 and performance. A more critical interpretation of the Big 5-performance relationship is presented, and suggestions for future research aimed at enhancing the validity of personality predictors are provided.

  16. Curating Big Data Made Simple: Perspectives from Scientific Communities.

    PubMed

    Sowe, Sulayman K; Zettsu, Koji

    2014-03-01

    The digital universe is exponentially producing an unprecedented volume of data that has brought benefits as well as fundamental challenges for enterprises and scientific communities alike. This trend is inherently exciting for the development and deployment of cloud platforms to support scientific communities curating big data. The excitement stems from the fact that scientists can now access and extract value from the big data corpus, establish relationships between bits and pieces of information from many types of data, and collaborate with a diverse community of researchers from various domains. However, despite these perceived benefits, to date, little attention is focused on the people or communities who are both beneficiaries and, at the same time, producers of big data. The technical challenges posed by big data are as big as understanding the dynamics of communities working with big data, whether scientific or otherwise. Furthermore, the big data era also means that big data platforms for data-intensive research must be designed in such a way that research scientists can easily search and find data for their research, upload and download datasets for onsite/offsite use, perform computations and analysis, share their findings and research experience, and seamlessly collaborate with their colleagues. In this article, we present the architecture and design of a cloud platform that meets some of these requirements, and a big data curation model that describes how a community of earth and environmental scientists is using the platform to curate data. Motivation for developing the platform, lessons learnt in overcoming some challenges associated with supporting scientists to curate big data, and future research directions are also presented.

  17. ["Big data" - large data, a lot of knowledge?].

    PubMed

    Hothorn, Torsten

    2015-01-28

For several years now, the term Big Data has described technologies for extracting knowledge from data. Applications of Big Data and their consequences are also increasingly discussed in the mass media. Because medicine is an empirical science, we discuss the meaning of Big Data and its potential for future medical research.

  18. 78 FR 33326 - Big Horn County Resource Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-04

    ... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Big Horn County Resource Advisory Committee... will be held July 15, 2013 at 3:00 p.m. ADDRESSES: The meeting will be held at Big Horn County Weed and...

  19. 76 FR 7810 - Big Horn County Resource Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-11

    ... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Big Horn County Resource Advisory Committee... will be held on March 3, 2011, and will begin at 10 a.m. ADDRESSES: The meeting will be held at the Big...

  20. 76 FR 7837 - Big Rivers Electric Corporation; Notice of Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-11

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. NJ11-11-000] Big Rivers Electric Corporation; Notice of Filing Take notice that on February 4, 2011, Big Rivers Electric Corporation (Big Rivers) filed a notice of cancellation of its Second Revised and Restated Open Access...

  1. Countering misinformation concerning big sagebrush

    Treesearch

    Bruce L Welch; Craig Criddle

    2003-01-01

    This paper examines the scientific merits of eight axioms of range or vegetative management pertaining to big sagebrush. These axioms are: (1) Wyoming big sagebrush (Artemisia tridentata ssp. wyomingensis) does not naturally exceed 10 percent canopy cover and mountain big sagebrush (A. t. ssp. vaseyana) does not naturally exceed 20 percent canopy...

  2. 76 FR 63714 - Big Spring Rail System, Inc.;Operation Exemption;Transport Handling Specialists, Inc.

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-13

    ... DEPARTMENT OF TRANSPORTATION Surface Transportation Board [Docket No. FD 35553] Big Spring Rail System, Inc.;Operation Exemption;Transport Handling Specialists, Inc. Big Spring Rail System, Inc. (BSRS...., owned by the City of Big Spring, Tex. (City). BSRS will be operating the line for Transport Handling...

  3. Data_Flood: Helping the Navy Address the Rising Tide of Sensor Information

    DTIC Science & Technology

    2014-01-01

the influx of data. This report describes the Navy’s “big data” challenge and outlines potential solutions involving changes along four dimensions... Chapter One: Big Data: Challenges and Opportunities... What Is “Big Data”... The Navy’s Big Data

  4. Research on information security in big data era

    NASA Astrophysics Data System (ADS)

    Zhou, Linqi; Gu, Weihong; Huang, Cheng; Huang, Aijun; Bai, Yongbin

    2018-05-01

Big data is becoming another hotspot in the field of information technology, after cloud computing and the Internet of Things. However, existing information security methods can no longer meet the information security requirements of the big data era. This paper analyzes the data security challenges brought by big data and their causes, discusses the development trend of network attacks against the background of big data, and puts forward the authors' views on the development of security defenses in technology, strategy, and products.

  5. [Embracing medical innovation in the era of big data].

    PubMed

    You, Suning

    2015-01-01

With the worldwide advent of the big data era, the medical field will inevitably be part of it. This article thoroughly introduces the basic knowledge of big data and points out that its advantages and disadvantages coexist. Although innovation in the medical field is an ongoing struggle, the current medical pattern will be changed fundamentally by big data. The article also shows how quickly the relevant analyses are changing in the big data era, depicts a vision of digital medicine, and proposes practical advice for surgeons.

  6. Data: Big and Small.

    PubMed

    Jones-Schenk, Jan

    2017-02-01

    Big data is a big topic in all leadership circles. Leaders in professional development must develop an understanding of what data are available across the organization that can inform effective planning for forecasting. Collaborating with others to integrate data sets can increase the power of prediction. Big data alone is insufficient to make big decisions. Leaders must find ways to access small data and triangulate multiple types of data to ensure the best decision making. J Contin Educ Nurs. 2017;48(2):60-61. Copyright 2017, SLACK Incorporated.

  7. Small values in big data: The continuing need for appropriate metadata

    USGS Publications Warehouse

    Stow, Craig A.; Webster, Katherine E.; Wagner, Tyler; Lottig, Noah R.; Soranno, Patricia A.; Cha, YoonKyung

    2018-01-01

    Compiling data from disparate sources to address pressing ecological issues is increasingly common. Many ecological datasets contain left-censored data – observations below an analytical detection limit. Studies from single and typically small datasets show that common approaches for handling censored data — e.g., deletion or substituting fixed values — result in systematic biases. However, no studies have explored the degree to which the documentation and presence of censored data influence outcomes from large, multi-sourced datasets. We describe left-censored data in a lake water quality database assembled from 74 sources and illustrate the challenges of dealing with small values in big data, including detection limits that are absent, range widely, and show trends over time. We show that substitutions of censored data can also bias analyses using ‘big data’ datasets, that censored data can be effectively handled with modern quantitative approaches, but that such approaches rely on accurate metadata that describe treatment of censored data from each source.
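The substitution bias described above is easy to demonstrate. A minimal sketch, using hypothetical concentration values and a hypothetical detection limit rather than anything from the study's database:

```python
# Hypothetical water quality measurements (e.g., mg/L); in a real dataset,
# values below the detection limit (DL) would be reported censored ("<1.0").
true_values = [0.1, 0.3, 0.9, 1.5, 2.0, 3.0]
DL = 1.0

def substituted_mean(values, limit, fill):
    """Mean after replacing every observation below `limit` with `fill`."""
    return sum(fill if v < limit else v for v in values) / len(values)

true_mean = sum(true_values) / len(true_values)        # 1.30
mean_zero = substituted_mean(true_values, DL, 0.0)     # ~1.08, biased low
mean_half = substituted_mean(true_values, DL, DL / 2)  # ~1.33, biased high here
mean_dl   = substituted_mean(true_values, DL, DL)      # ~1.58, biased high
```

No fixed substitution recovers the true mean, and the direction of the bias depends on the data; this is why the abstract points to model-based handling of censored values and to metadata that records each source's detection limits.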

  8. Big data from small data: data-sharing in the ‘long tail’ of neuroscience

    PubMed Central

    Ferguson, Adam R; Nielson, Jessica L; Cragin, Melissa H; Bandrowski, Anita E; Martone, Maryann E

    2016-01-01

    The launch of the US BRAIN and European Human Brain Projects coincides with growing international efforts toward transparency and increased access to publicly funded research in the neurosciences. The need for data-sharing standards and neuroinformatics infrastructure is more pressing than ever. However, ‘big science’ efforts are not the only drivers of data-sharing needs, as neuroscientists across the full spectrum of research grapple with the overwhelming volume of data being generated daily and a scientific environment that is increasingly focused on collaboration. In this commentary, we consider the issue of sharing of the richly diverse and heterogeneous small data sets produced by individual neuroscientists, so-called long-tail data. We consider the utility of these data, the diversity of repositories and options available for sharing such data, and emerging best practices. We provide use cases in which aggregating and mining diverse long-tail data convert numerous small data sources into big data for improved knowledge about neuroscience-related disorders. PMID:25349910

  9. Kasner solutions, climbing scalars and big-bang singularity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Condeescu, Cezar; Dudas, Emilian, E-mail: cezar.condeescu@roma2.infn.it, E-mail: emilian.dudas@cpht.polytechnique.fr

We elaborate on a recently discovered phenomenon where a scalar field close to big-bang is forced to climb a steep potential by its dynamics. We analyze the phenomenon in more general terms by writing the leading order equations of motion near the singularity. We formulate the conditions for climbing to exist in the case of several scalars and after inclusion of higher-derivative corrections, and we apply our results to some models of moduli stabilization. We analyze an example with a steep stabilizing potential and notice again a related critical behavior: for a potential steepness above a critical value, going backwards towards the big-bang, the scalar undergoes wilder oscillations, with the steep potential pushing it back at every passage and not allowing the scalar to escape to infinity. Whereas it was pointed out earlier that there are possible implications of the climbing phase for the CMB, we point out here another potential application, to the issue of initial conditions in inflation.

  10. Data management by using R: big data clinical research series.

    PubMed

    Zhang, Zhongheng

    2015-11-01

Electronic medical record (EMR) systems have been widely used in clinical practice. Compared with traditional handwritten records, the EMR makes big data clinical research feasible. The most important feature of big data research is its real-world setting. Furthermore, big data research can provide all aspects of information related to healthcare. However, big data research requires some skills in data management, which are often lacking in the curriculum of medical education. This greatly hinders doctors from testing their clinical hypotheses using EMR data. To bridge this gap, a series of articles introducing data management techniques has been put forward to guide clinicians toward big data clinical research. The present educational article first introduces some basic knowledge of the R language, followed by data management skills for creating new variables, recoding variables, and renaming variables. These are very basic skills that may be used in every big data research project.
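The three skills named in the abstract, creating, recoding, and renaming variables, have the following general shape. A minimal stdlib-Python sketch over a hypothetical patient record (the article itself works in R, where the analogous steps use functions such as transform() and names()):

```python
# Hypothetical EMR extract: one dict per patient (not from the article).
patients = [
    {"id": 1, "weight_kg": 80.0, "height_m": 1.80, "sex": "M"},
    {"id": 2, "weight_kg": 65.0, "height_m": 1.60, "sex": "F"},
]

for p in patients:
    # 1. Create a new variable derived from existing ones.
    p["bmi"] = round(p["weight_kg"] / p["height_m"] ** 2, 1)
    # 2. Recode a variable into analysis-ready categories.
    p["bmi_cat"] = "overweight" if p["bmi"] >= 25 else "normal"
    # 3. Rename a variable to a clearer name.
    p["sex_at_birth"] = p.pop("sex")
```

The cutoff and the category labels here are illustrative only; the point is the shape of the three operations, not a clinical definition.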

  11. Big Data Knowledge in Global Health Education.

    PubMed

    Olayinka, Olaniyi; Kekeh, Michele; Sheth-Chandra, Manasi; Akpinar-Elci, Muge

    The ability to synthesize and analyze massive amounts of data is critical to the success of organizations, including those that involve global health. As countries become highly interconnected, increasing the risk for pandemics and outbreaks, the demand for big data is likely to increase. This requires a global health workforce that is trained in the effective use of big data. To assess implementation of big data training in global health, we conducted a pilot survey of members of the Consortium of Universities of Global Health. More than half the respondents did not have a big data training program at their institution. Additionally, the majority agreed that big data training programs will improve global health deliverables, among other favorable outcomes. Given the observed gap and benefits, global health educators may consider investing in big data training for students seeking a career in global health. Copyright © 2017 Icahn School of Medicine at Mount Sinai. Published by Elsevier Inc. All rights reserved.

  12. A genetic algorithm-based job scheduling model for big data analytics.

    PubMed

    Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei

Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework, which implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs often run continuously and are not independent of one another. Existing work mainly focuses on executing jobs in sequence, which is often inefficient and consumes substantial energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.
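The paper's specific model is not reproduced in the abstract, but the general shape of a genetic algorithm over job orderings can be sketched as follows. All job durations, weights, and GA parameters are hypothetical, and the simple weighted-completion-time objective stands in for the paper's cluster-performance estimation module:

```python
import random

random.seed(0)

# Hypothetical analytics jobs: (duration, weight). The objective is the
# weighted sum of completion times on a single queue, a stand-in for the
# performance predicted by the paper's estimation module.
JOBS = [(3, 1), (1, 4), (2, 2), (5, 3), (4, 1), (2, 5)]

def cost(order):
    """Total weighted completion time of jobs executed in `order`."""
    t = total = 0
    for i in order:
        dur, w = JOBS[i]
        t += dur
        total += w * t
    return total

def crossover(a, b):
    """Order crossover: keep a prefix of `a`, fill the rest in `b`'s order."""
    cut = random.randrange(1, len(a))
    head = a[:cut]
    return head + [g for g in b if g not in head]

def mutate(order):
    """Swap the jobs at two random positions."""
    i, j = random.sample(range(len(order)), 2)
    order[i], order[j] = order[j], order[i]

# Initial population, seeded with the identity ordering as a baseline.
pop = [list(range(len(JOBS)))]
pop += [random.sample(range(len(JOBS)), len(JOBS)) for _ in range(29)]

for _ in range(100):
    pop.sort(key=cost)
    elite = pop[:10]  # elitism: the best schedules survive unchanged
    children = []
    while len(children) < 20:
        child = crossover(*random.sample(elite, 2))
        if random.random() < 0.2:
            mutate(child)
        children.append(child)
    pop = elite + children

best_order = min(pop, key=cost)
best_cost = cost(best_order)
```

Because the baseline ordering is in the initial population and elitism never discards the incumbent best, the final schedule is guaranteed to be no worse than the baseline; any resemblance to the paper's actual operators or parameters is not implied.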

  13. Breaking Sound Barriers: New Perspectives on Effective Big Band Development and Rehearsal

    ERIC Educational Resources Information Center

    Greig, Jeremy; Lowe, Geoffrey

    2014-01-01

    Jazz big band is a common extra-curricular musical activity in Western Australian secondary schools. Jazz big band offers important fundamentals that can help expand a student's musical understanding. However, the teaching of conventions associated with big band jazz has often been haphazard and can be daunting and frightening, especially for…

  14. Big Data and Chemical Education

    ERIC Educational Resources Information Center

    Pence, Harry E.; Williams, Antony J.

    2016-01-01

    The amount of computerized information that organizations collect and process is growing so large that the term Big Data is commonly being used to describe the situation. Accordingly, Big Data is defined by a combination of the Volume, Variety, Velocity, and Veracity of the data being processed. Big Data tools are already having an impact in…

  15. Mountain big sagebrush (Artemisia tridentata spp vaseyana) seed production

    Treesearch

    Melissa L. Landeen

    2015-01-01

    Big sagebrush (Artemisia tridentata Nutt.) is the most widespread and common shrub in the sagebrush biome of western North America. Of the three most common subspecies of big sagebrush (Artemisia tridentata), mountain big sagebrush (ssp. vaseyana; MBS) is the most resilient to disturbance, but still requires favorable climactic conditions and a viable post-...

  16. Sports and the Big6: The Information Advantage.

    ERIC Educational Resources Information Center

    Eisenberg, Mike

    1997-01-01

    Explores the connection between sports and the Big6 information problem-solving process and how sports provides an ideal setting for learning and teaching about the Big6. Topics include information aspects of baseball, football, soccer, basketball, figure skating, track and field, and golf; and the Big6 process applied to sports. (LRW)

  17. A Big Data Analytics Methodology Program in the Health Sector

    ERIC Educational Resources Information Center

    Lawler, James; Joseph, Anthony; Howell-Barber, H.

    2016-01-01

    The benefits of Big Data Analytics are cited frequently in the literature. However, the difficulties of implementing Big Data Analytics can limit the number of organizational projects. In this study, the authors evaluate business, procedural and technical factors in the implementation of Big Data Analytics, applying a methodology program. Focusing…

  18. New Evidence on the Development of the Word "Big."

    ERIC Educational Resources Information Center

    Sena, Rhonda; Smith, Linda B.

    1990-01-01

    Results indicate that curvilinear trend in children's understanding of word "big" is not obtained in all stimulus contexts. This suggests that meaning and use of "big" is complex, and may not refer simply to larger objects in a set. Proposes that meaning of "big" constitutes a dynamic system driven by many perceptual,…

  19. The New Improved Big6 Workshop Handbook. Professional Growth Series.

    ERIC Educational Resources Information Center

    Eisenberg, Michael B.; Berkowitz, Robert E.

    This handbook is intended to help classroom teachers, teacher-librarians, technology teachers, administrators, parents, community members, and students to learn about the Big6 Skills approach to information and technology skills, to use the Big6 process in their own activities, and to implement a Big6 information and technology skills program. The…

  20. 9. SOUTHERLY VIEW OF THE ACCESS ROAD TO THE DOWNSTREAM ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    9. SOUTHERLY VIEW OF THE ACCESS ROAD TO THE DOWNSTREAM SIDE OF BIG DALTON DAM EXTENDING FROM THE DAM TO THE FOOTBRIDGE. VIEW FROM BIG DALTON DAM SHOWING THE TOE WEIR IN FOREGROUND AND FOOTBRIDGE IN BACKGROUND. - Big Dalton Dam, 2600 Big Dalton Canyon Road, Glendora, Los Angeles County, CA

  1. 2. OVERVIEW OF POWERHOUSE 8 COMPLEX. POWERHOUSE IS VISIBLE AT ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. OVERVIEW OF POWERHOUSE 8 COMPLEX. POWERHOUSE IS VISIBLE AT UPPER PHOTO CENTER. BUILDING 105 IS PROMINENT TRANSVERSE GABLE ROOF AT LOWER PHOTO CENTER. BIG CREEK CURVES AROUND BUILDINGS AT LOWER PHOTO. VIEW TO WEST. - Big Creek Hydroelectric System, Powerhouse 8, Operator Cottage, Big Creek, Big Creek, Fresno County, CA

  2. View of New Big Oak Flat Road seen from Old ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View of New Big Oak Flat Road seen from Old Wawona Road near location of photograph HAER CA-148-17. Note road cuts, alignment, and tunnels. Devils Dance Floor at left distance. Looking northwest - Big Oak Flat Road, Between Big Oak Flat Entrance & Merced River, Yosemite Village, Mariposa County, CA

  3. 78 FR 37792 - Mario Julian Martinez-Bernache, Inmate Number #95749-279, CI Big Spring, Corrections Institution...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-24

    ... DEPARTMENT OF COMMERCE Bureau of Industry and Security Mario Julian Martinez-Bernache, Inmate Number 95749-279, CI Big Spring, Corrections Institution, 2001 Rickabaugh Drive, Big Spring, TX 79720... Spring, Corrections Institution, 2001 Rickabaugh Drive, Big Spring, TX 79720, and when acting for or on...

  4. Application and Exploration of Big Data Mining in Clinical Medicine

    PubMed Central

    Zhang, Yue; Guo, Shu-Li; Han, Li-Na; Li, Tie-Ling

    2016-01-01

    Objective: To review theories and technologies of big data mining and their application in clinical medicine. Data Sources: Literatures published in English or Chinese regarding theories and technologies of big data mining and the concrete applications of data mining technology in clinical medicine were obtained from PubMed and Chinese Hospital Knowledge Database from 1975 to 2015. Study Selection: Original articles regarding big data mining theory/technology and big data mining's application in the medical field were selected. Results: This review characterized the basic theories and technologies of big data mining including fuzzy theory, rough set theory, cloud theory, Dempster–Shafer theory, artificial neural network, genetic algorithm, inductive learning theory, Bayesian network, decision tree, pattern recognition, high-performance computing, and statistical analysis. The application of big data mining in clinical medicine was analyzed in the fields of disease risk assessment, clinical decision support, prediction of disease development, guidance of rational use of drugs, medical management, and evidence-based medicine. Conclusion: Big data mining has the potential to play an important role in clinical medicine. PMID:26960378

  5. WE-H-BRB-03: Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McNutt, T.

Big Data in Radiation Oncology: (1) Overview of the NIH 2015 Big Data Workshop, (2) Where do we stand in the applications of big data in radiation oncology?, and (3) Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success. The overriding goal of this trio panel of presentations is to improve awareness of the wide-ranging opportunities for big data to impact patient quality care and to enhance the potential for research and collaboration with NIH and a host of new big data initiatives. This presentation will also summarize the Big Data workshop that was held at the NIH Campus on August 13–14, 2015 and sponsored by AAPM, ASTRO, and NIH. The workshop included discussion of current Big Data cancer registry initiatives, safety and incident reporting systems, and other strategies that will have the greatest impact on radiation oncology research, quality assurance, safety, and outcomes analysis. Learning Objectives: (1) to discuss current and future sources of big data for use in radiation oncology research; (2) to optimize our current data collection by adopting new strategies from outside radiation oncology; (3) to determine what new knowledge big data can provide for clinical decision support for personalized medicine. L. Xing, NIH/NCI Google Inc.

  6. The BIG Data Center: from deposition to integration to translation.

    PubMed

    2017-01-04

    Biological data are generated at unprecedentedly exponential rates, posing considerable challenges in big data deposition, integration and translation. The BIG Data Center, established at Beijing Institute of Genomics (BIG), Chinese Academy of Sciences, provides a suite of database resources, including (i) Genome Sequence Archive, a data repository specialized for archiving raw sequence reads, (ii) Gene Expression Nebulas, a data portal of gene expression profiles based entirely on RNA-Seq data, (iii) Genome Variation Map, a comprehensive collection of genome variations for featured species, (iv) Genome Warehouse, a centralized resource housing genome-scale data with particular focus on economically important animals and plants, (v) Methylation Bank, an integrated database of whole-genome single-base resolution methylomes and (vi) Science Wikis, a central access point for biological wikis developed for community annotations. The BIG Data Center is dedicated to constructing and maintaining biological databases through big data integration and value-added curation, conducting basic research to translate big data into big knowledge and providing freely open access to a variety of data resources in support of worldwide research activities in both academia and industry. All of these resources are publicly available and can be found at http://bigd.big.ac.cn. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  7. Big Data in Healthcare – Defining the Digital Persona through User Contexts from the Micro to the Macro

    PubMed Central

    Monkman, H.; Petersen, C.; Weber, J.; Borycki, E. M.; Adams, S.; Collins, S.

    2014-01-01

    Summary Objectives While big data offers enormous potential for improving healthcare delivery, many of the existing claims concerning big data in healthcare are based on anecdotal reports and theoretical vision papers, rather than scientific evidence based on empirical research. Historically, the implementation of health information technology has resulted in unintended consequences at the individual, organizational and social levels, but these unintended consequences of collecting data have remained unaddressed in the literature on big data. The objective of this paper is to provide insights into big data from the perspective of people, social and organizational considerations. Method We draw upon the concept of persona to define the digital persona as the intersection of data, tasks and context for different user groups. We then describe how the digital persona can serve as a framework to understanding sociotechnical considerations of big data implementation. We then discuss the digital persona in the context of micro, meso and macro user groups across the 3 Vs of big data. Results We provide insights into the potential benefits and challenges of applying big data approaches to healthcare as well as how to position these approaches to achieve health system objectives such as patient safety or patient-engaged care delivery. We also provide a framework for defining the digital persona at a micro, meso and macro level to help understand the user contexts of big data solutions. Conclusion While big data provides great potential for improving healthcare delivery, it is essential that we consider the individual, social and organizational contexts of data use when implementing big data solutions. PMID:25123726

  8. Comparative validity of brief to medium-length Big Five and Big Six Personality Questionnaires.

    PubMed

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-12-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are faced with a variety of options as to inventory length. Furthermore, a 6-factor model has been proposed to extend and update the Big Five model, in part by adding a dimension of Honesty/Humility or Honesty/Propriety. In this study, 3 popular brief to medium-length Big Five measures (NEO Five Factor Inventory, Big Five Inventory [BFI], and International Personality Item Pool), and 3 six-factor measures (HEXACO Personality Inventory, Questionnaire Big Six Scales, and a 6-factor version of the BFI) were placed in competition to best predict important student life outcomes. The effect of test length was investigated by comparing brief versions of most measures (subsets of items) with original versions. Personality questionnaires were administered to undergraduate students (N = 227). Participants' college transcripts and student conduct records were obtained 6-9 months after the data were collected. Six-factor inventories demonstrated better predictive ability for life outcomes than did some Big Five inventories. Additional behavioral observations made on participants, including their Facebook profiles and cell-phone text usage, were predicted similarly by Big Five and 6-factor measures. A brief version of the BFI performed surprisingly well; across inventory platforms, increasing test length had little effect on predictive validity. Comparative validity of the models and measures in terms of outcome prediction and parsimony is discussed.

  9. Managing, Analysing, and Integrating Big Data in Medical Bioinformatics: Open Problems and Future Perspectives

    PubMed Central

    Merelli, Ivan; Pérez-Sánchez, Horacio; Gesing, Sandra; D'Agostino, Daniele

    2014-01-01

    The explosion of data both in biomedical research and in healthcare systems demands urgent solutions. In particular, research in omics sciences is moving from a hypothesis-driven to a data-driven approach. Healthcare, in addition, increasingly demands tighter integration with biomedical data in order to promote personalized medicine and to provide better treatments. Efficient analysis and interpretation of Big Data open new avenues to explore molecular biology, new questions to ask about physiological and pathological states, and new ways to answer these open issues. Such analyses lead to better understanding of diseases and to the development of better, personalized diagnostics and therapeutics. However, such progress is directly related to the availability of new solutions to deal with this huge amount of information. New paradigms are needed to store and access data, to annotate and integrate it, and finally to infer knowledge and make it available to researchers. Bioinformatics can be viewed as the “glue” for all these processes. A clear awareness of present high performance computing (HPC) solutions in bioinformatics, of Big Data analysis paradigms for computational biology, and of the issues that are still open in the biomedical and healthcare fields represents the starting point to win this challenge. PMID:25254202

  10. The BIG Score and Prediction of Mortality in Pediatric Blunt Trauma.

    PubMed

    Davis, Adrienne L; Wales, Paul W; Malik, Tahira; Stephens, Derek; Razik, Fathima; Schuh, Suzanne

    2015-09-01

    To examine the association between in-hospital mortality and the BIG (composed of the base deficit [B], International normalized ratio [I], Glasgow Coma Scale [G]) score measured on arrival to the emergency department in pediatric blunt trauma patients, adjusted for pre-hospital intubation, volume administration, and presence of hypotension and head injury. We also examined the association between the BIG score and mortality in patients requiring admission to the intensive care unit (ICU). A retrospective 2001-2012 trauma database review of patients with blunt trauma ≤ 17 years old with an Injury Severity score ≥ 12. Charts were reviewed for in-hospital mortality, components of the BIG score upon arrival to the emergency department, prehospital intubation, crystalloids ≥ 20 mL/kg, presence of hypotension, head injury, and disposition. 50/621 (8%) of the study patients died. Independent mortality predictors were the BIG score (OR 11, 95% CI 6-25), prior fluid bolus (OR 3, 95% CI 1.3-9), and prior intubation (OR 8, 95% CI 2-40). The area under the receiver operating characteristic curve was 0.95 (CI 0.93-0.98), with the optimal BIG cutoff of 16. With BIG <16, death rate was 3/496 (0.006, 95% CI 0.001-0.007) vs 47/125 (0.38, 95% CI 0.15-0.7) with BIG ≥ 16, (P < .0001). In patients requiring admission to the ICU, the BIG score remained predictive of mortality (OR 14.3, 95% CI 7.3-32, P < .0001). The BIG score accurately predicts mortality in a population of North American pediatric patients with blunt trauma independent of pre-hospital interventions, presence of head injury, and hypotension, and identifies children with a high probability of survival (BIG <16). The BIG score is also associated with mortality in pediatric patients with trauma requiring admission to the ICU. Copyright © 2015 Elsevier Inc. All rights reserved.
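
    The score components listed above (base deficit, INR, GCS) can be combined as in the originally published BIG score. Note that the 2.5 weighting on INR and the (15 − GCS) term are taken from the published BIG score literature, not from this abstract, so treat them as an assumption; the cutoff of 16 is the one reported here:

```python
def big_score(base_deficit: float, inr: float, gcs: int) -> float:
    """BIG score = base deficit + 2.5 * INR + (15 - GCS).

    The abstract lists only the three components; the 2.5 weighting and
    the (15 - GCS) term follow the originally published BIG score and are
    an assumption here.
    """
    return base_deficit + 2.5 * inr + (15 - gcs)


def high_mortality_risk(base_deficit: float, inr: float, gcs: int) -> bool:
    """Apply the optimal cutoff of 16 reported in this study."""
    return big_score(base_deficit, inr, gcs) >= 16


# A stable child (base deficit 2 mmol/L, INR 1.0, GCS 15) scores well below 16:
print(big_score(2.0, 1.0, 15))  # 4.5
```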

  11. Plasma big endothelin-1 level and the severity of new-onset stable coronary artery disease.

    PubMed

    Chen, Juan; Chen, Man-Hua; Guo, Yuan-Lin; Zhu, Cheng-Gang; Xu, Rui-Xia; Dong, Qian; Li, Jian-Jun

    2015-01-01

    To investigate the usefulness of the plasma big endothelin-1 (big ET-1) level in predicting the severity of new-onset, angiographically proven stable coronary artery disease (CAD). A total of 963 consecutive stable CAD patients with more than 50% stenosis in at least one main vessel were enrolled. The patients were classified into three groups according to tertiles of the Gensini score (GS; low GS <20, n=300; intermediate GS 20-40, n=356; and high GS >40, n=307), and the relationship between the big ET-1 level and the GS was evaluated. The plasma levels of big ET-1 increased significantly in association with increases in the GS tertile (p=0.007). A multivariate analysis suggested that the plasma big ET-1 level was an independent predictor of a high GS (OR=2.26, 95%CI: 1.23-4.15, p=0.009), and there was a positive correlation between the big ET-1 level and the GS (r=0.20, p=0.000). The area under the receiver operating characteristic curve (AUC) for the big ET-1 level in predicting a high GS was 0.64 (95% CI 0.60-0.68, p=0.000), and the optimal cutoff value for the plasma big ET-1 level for predicting a high GS was 0.34 fmol/mL, with a sensitivity of 62.6% and specificity of 60.3%. In the high-big ET-1 level group (≥0.34 fmol/mL), there were significantly increased rates of three-vessel disease (43.6% vs. 35.4%, p=0.017) and a high GS [31 (17-54) vs. 24 (16-44), p=0.001] compared with the low-big ET-1 level group. The present findings indicate that the plasma big ET-1 level is a useful predictor of the severity of new-onset stable CAD associated with significant stenosis.
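
    As a minimal sketch, the reported tertile boundaries and the 0.34 fmol/mL cutoff can be expressed directly. How scores falling exactly on a tertile boundary (GS = 20 or 40) were assigned is not stated in the abstract, so the boundary handling below is an assumption:

```python
def gensini_tertile(gs: float) -> str:
    """Tertile groups as reported: low GS < 20, intermediate 20-40, high > 40.

    Assignment of scores exactly at the boundaries is an assumption; the
    abstract does not state how boundary values were handled.
    """
    if gs < 20:
        return "low"
    if gs <= 40:
        return "intermediate"
    return "high"


def predicts_high_gs(big_et1_fmol_per_ml: float) -> bool:
    """Optimal plasma big ET-1 cutoff of 0.34 fmol/mL for predicting a high
    GS (sensitivity 62.6%, specificity 60.3% in this cohort)."""
    return big_et1_fmol_per_ml >= 0.34
```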

  12. Big domains are novel Ca²+-binding modules: evidences from big domains of Leptospira immunoglobulin-like (Lig) proteins.

    PubMed

    Raman, Rajeev; Rajanikanth, V; Palaniappan, Raghavan U M; Lin, Yi-Pin; He, Hongxuan; McDonough, Sean P; Sharma, Yogendra; Chang, Yung-Fu

    2010-12-29

    Many bacterial surface exposed proteins mediate the host-pathogen interaction more effectively in the presence of Ca²+. Leptospiral immunoglobulin-like (Lig) proteins, LigA and LigB, are surface exposed proteins containing bacterial immunoglobulin-like (Big) domains. The function of proteins containing the Big fold is not known. Based on the possible similarities of the immunoglobulin and βγ-crystallin folds, we here explore the important question of whether Ca²+ binds to Big domains, which would provide a novel functional role for proteins containing the Big fold. We selected six individual Big domains for this study (three from the conserved part of LigA and LigB, denoted Lig A3, Lig A4, and LigBCon5; two from the variable region of LigA, i.e., the 9th (Lig A9) and 10th (Lig A10) repeats; and one from the variable region of LigB, i.e., LigBCen2). We have also studied the conserved regions covering three and six repeats (LigBCon1-3 and LigCon). All these proteins bind the calcium-mimic dye Stains-all. All four selected domains bind Ca²+ with dissociation constants of 2-4 µM. Lig A9 and Lig A10 domains fold well with moderate thermal stability, have β-sheet conformation and form homodimers. Fluorescence spectra of Big domains show a specific doublet (at 317 and 330 nm), probably due to Trp interaction with a Phe residue. Equilibrium unfolding of selected Big domains is similar and follows a two-state model, suggesting the similarity of their folds. We demonstrate that the Lig proteins are Ca²+-binding proteins, with Big domains harbouring the binding motif. We conclude that, despite differences in sequence, a Big motif binds Ca²+. This work thus sets up a strong possibility for classifying proteins containing Big domains as a novel family of Ca²+-binding proteins. Since the Big domain is part of many proteins in the bacterial kingdom, we suggest a possible function of these proteins via Ca²+ binding.
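
    For context, dissociation constants of 2-4 µM imply, under a simple one-site equilibrium binding model (an assumption; the abstract does not specify the binding model), a site occupancy of f = [Ca²+]/(Kd + [Ca²+]):

```python
def fraction_bound(ca_uM: float, kd_uM: float) -> float:
    """Occupancy of a single Ca2+ site: f = [Ca2+] / (Kd + [Ca2+]).

    Concentrations in micromolar; assumes simple 1:1 equilibrium binding.
    """
    return ca_uM / (kd_uM + ca_uM)


# At free [Ca2+] equal to Kd, half the sites are occupied:
print(fraction_bound(2.0, 2.0))  # 0.5
# At 20 uM free Ca2+ and the weakest reported Kd (4 uM), occupancy is ~0.83:
print(round(fraction_bound(20.0, 4.0), 2))  # 0.83
```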

  13. Metal atom dynamics in superbulky metallocenes: a comparison of (Cp(BIG))2Sn and (Cp(BIG))2Eu.

    PubMed

    Harder, Sjoerd; Naglav, Dominik; Schwerdtfeger, Peter; Nowik, Israel; Herber, Rolfe H

    2014-02-17

    Cp(BIG)2Sn (Cp(BIG) = (4-n-Bu-C6H4)5cyclopentadienyl), prepared by reaction of 2 equiv of Cp(BIG)Na with SnCl2, crystallized isomorphous to other known metallocenes with this ligand (Ca, Sr, Ba, Sm, Eu, Yb). Similarly, it shows perfect linearity, C-H···C(π) bonding between the Cp(BIG) rings and out-of-plane bending of the aryl substituents toward the metal. Whereas all other Cp(BIG)2M complexes show large disorder in the metal position, the Sn atom in Cp(BIG)2Sn is perfectly ordered. In contrast, (119)Sn and (151)Eu Mößbauer investigations on the corresponding Cp(BIG)2M metallocenes show that Sn(II) is more dynamic and loosely bound than Eu(II). The large displacement factors in the group 2 and especially in the lanthanide(II) metallocenes Cp(BIG)2M can be explained by static metal disorder in a plane parallel to the Cp(BIG) rings. Despite parallel Cp(BIG) rings, these metallocenes have a nonlinear Cpcenter-M-Cpcenter geometry. This is explained by an ionic model in which metal atoms are polarized by the negatively charged Cp rings. The extent of nonlinearity is in line with trends found in M(2+) ion polarizabilities. The range of known calculated dipole polarizabilities at the Douglas-Kroll CCSD(T) level was extended with values (atomic units) for Sn(2+) 15.35, Sm(2+)(4f(6) (7)F) 9.82, Eu(2+)(4f(7) (8)S) 8.99, and Yb(2+)(4f(14) (1)S) 6.55. This polarizability model cannot be applied to predominantly covalently bound Cp(BIG)2Sn, which shows a perfectly ordered structure. The bent geometry of Cp*2Sn should therefore not be explained by metal polarizability but is due to van der Waals Cp*···Cp* attraction and (to some extent) to a small p-character component in the Sn lone pair.

  14. Big Domains Are Novel Ca2+-Binding Modules: Evidences from Big Domains of Leptospira Immunoglobulin-Like (Lig) Proteins

    PubMed Central

    Palaniappan, Raghavan U. M.; Lin, Yi-Pin; He, Hongxuan; McDonough, Sean P.; Sharma, Yogendra; Chang, Yung-Fu

    2010-01-01

    Background Many bacterial surface exposed proteins mediate the host-pathogen interaction more effectively in the presence of Ca2+. Leptospiral immunoglobulin-like (Lig) proteins, LigA and LigB, are surface exposed proteins containing bacterial immunoglobulin-like (Big) domains. The function of proteins containing the Big fold is not known. Based on the possible similarities of the immunoglobulin and βγ-crystallin folds, we here explore the important question of whether Ca2+ binds to Big domains, which would provide a novel functional role for proteins containing the Big fold. Principal Findings We selected six individual Big domains for this study (three from the conserved part of LigA and LigB, denoted Lig A3, Lig A4, and LigBCon5; two from the variable region of LigA, i.e., the 9th (Lig A9) and 10th (Lig A10) repeats; and one from the variable region of LigB, i.e., LigBCen2). We have also studied the conserved regions covering three and six repeats (LigBCon1-3 and LigCon). All these proteins bind the calcium-mimic dye Stains-all. All four selected domains bind Ca2+ with dissociation constants of 2–4 µM. Lig A9 and Lig A10 domains fold well with moderate thermal stability, have β-sheet conformation and form homodimers. Fluorescence spectra of Big domains show a specific doublet (at 317 and 330 nm), probably due to Trp interaction with a Phe residue. Equilibrium unfolding of selected Big domains is similar and follows a two-state model, suggesting the similarity of their folds. Conclusions We demonstrate that the Lig proteins are Ca2+-binding proteins, with Big domains harbouring the binding motif. We conclude that, despite differences in sequence, a Big motif binds Ca2+. This work thus sets up a strong possibility for classifying proteins containing Big domains as a novel family of Ca2+-binding proteins. Since the Big domain is part of many proteins in the bacterial kingdom, we suggest a possible function of these proteins via Ca2+ binding. PMID:21206924

  15. Concurrence of big data analytics and healthcare: A systematic review.

    PubMed

    Mehta, Nishita; Pandit, Anil

    2018-06-01

    The application of Big Data analytics in healthcare has immense potential for improving the quality of care, reducing waste and error, and reducing the cost of care. This systematic review of the literature aims to determine the scope of Big Data analytics in healthcare, including its applications and the challenges in its adoption in healthcare. It also intends to identify strategies to overcome those challenges. A systematic search of articles was carried out on five major scientific databases: ScienceDirect, PubMed, Emerald, IEEE Xplore and Taylor & Francis. Articles on Big Data analytics in healthcare published in the English language literature from January 2013 to January 2018 were considered. Descriptive articles and usability studies of Big Data analytics in healthcare and medicine were selected. Two reviewers independently extracted information on definitions of Big Data analytics; sources and applications of Big Data analytics in healthcare; and challenges and strategies to overcome the challenges in healthcare. A total of 58 articles were selected as per the inclusion criteria and analyzed. The analyses of these articles found that: (1) researchers lack consensus about the operational definition of Big Data in healthcare; (2) Big Data in healthcare comes from internal sources within hospitals or clinics as well as external sources including government, laboratories, pharma companies, data aggregators, medical journals, etc.; (3) natural language processing (NLP) is the most widely used Big Data analytical technique in healthcare, and most of the processing tools used for analytics are based on Hadoop; (4) Big Data analytics finds application in clinical decision support, optimization of clinical operations and reduction of the cost of care; (5) the major challenge in the adoption of Big Data analytics is the lack of evidence of its practical benefits in healthcare. This review unveils a paucity of information on evidence of real-world use of Big Data analytics in healthcare. This is because the usability studies have taken only a qualitative approach, which describes potential benefits but does not include quantitative evaluation. Also, the majority of the studies were from developed countries, which brings out the need to promote research on healthcare Big Data analytics in developing countries. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Extending Big-Five Theory into Childhood: A Preliminary Investigation into the Relationship between Big-Five Personality Traits and Behavior Problems in Children.

    ERIC Educational Resources Information Center

    Ehrler, David J.; McGhee, Ron L.; Evans, J. Gary

    1999-01-01

    Investigation conducted to link Big-Five personality traits with behavior problems identified in childhood. Results show distinct patterns of behavior problems associated with various personality characteristics. Preliminary data indicate that identifying Big-Five personality trait patterns may be a useful dimension of assessment for understanding…

  17. Five Big Ideas

    ERIC Educational Resources Information Center

    Morgan, Debbie

    2012-01-01

    Designing quality continuing professional development (CPD) for those teaching mathematics in primary schools is a challenge. If the CPD is to be built on the scaffold of five big ideas in mathematics, what might be these five big ideas? Might it just be a case of, if you tell me your five big ideas, then I'll tell you mine? Here, there is…

  18. 77 FR 74829 - Notice of Public Meeting-Cloud Computing and Big Data Forum and Workshop

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-18

    ...--Cloud Computing and Big Data Forum and Workshop AGENCY: National Institute of Standards and Technology... Standards and Technology (NIST) announces a Cloud Computing and Big Data Forum and Workshop to be held on... followed by a one-day hands-on workshop. The NIST Cloud Computing and Big Data Forum and Workshop will...

  19. Simulation Experiments: Better Data, Not Just Big Data

    DTIC Science & Technology

    2014-12-01

    Modeling and Computer Simulation 22 (4): 20:1–20:17. Hogan, Joe. 2014, June 9. “So Far, Big Data is Small Potatoes”. Scientific American Blog Network...Available via http://blogs.scientificamerican.com/cross-check/2014/06/09/so-far-big-data-is-small-potatoes/. IBM. 2014. “Big Data at the Speed of Business

  20. Big-Leaf Mahogany on CITES Appendix II: Big Challenge, Big Opportunity

    Treesearch

    JAMES GROGAN; PAULO BARRETO

    2005-01-01

    On 15 November 2003, big-leaf mahogany (Swietenia macrophylla King, Meliaceae), the most valuable widely traded Neotropical timber tree, gained strengthened regulatory protection from its listing on Appendix II of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). CITES is a United Nations-chartered agreement signed by 164...

  1. 50 CFR 86.11 - What does the national BIG Program do?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 6 2010-10-01 2010-10-01 false What does the national BIG Program do? 86.11 Section 86.11 Wildlife and Fisheries UNITED STATES FISH AND WILDLIFE SERVICE, DEPARTMENT OF THE... GRANT (BIG) PROGRAM General Information About the Grant Program § 86.11 What does the national BIG...

  2. Beyond the Bells and Whistles: Technology Skills for a Purpose.

    ERIC Educational Resources Information Center

    Eisenberg, Michael B.

    2001-01-01

    Discusses the goal of K-12 education to have students learn to use technology, defines computer literacy, and describes the Big6 process model that helps solve information problems. Highlights include examples of technology in Big6 contexts, Big6 and the Internet, and the Big6 as a conceptual framework for meaningful technology use. (LRW)

  3. The Big Six Information Skills as a Metacognitive Scaffold: A Case Study.

    ERIC Educational Resources Information Center

    Wolf, Sara; Brush, Thomas; Saye, John

    2003-01-01

    Discussion of the Big Six information skills model focuses on a case study that examines the effect of Big6 on a class of eighth-grade students doing research on the African-American Civil Rights movement. Topics include information problem solving; metacognition; scaffolding; and Big6 as a metacognitive scaffold. (Author/LRW)

  4. Teaching Information & Technology Skills: The Big6[TM] in Secondary Schools.

    ERIC Educational Resources Information Center

    Eisenberg, Michael B.; Berkowitz, Robert E.

    This companion volume to a previous work focusing on the Big6 Approach in elementary schools provides secondary school classroom teachers, teacher-librarians, and technology teachers with the background and tools necessary to implement an integrated Big6 program. The first part of this book explains the Big6 approach and the rationale behind it.…

  5. Technology for a Purpose: Technology for Information Problem-Solving with the Big6[R].

    ERIC Educational Resources Information Center

    Eisenberg, Mike B

    2003-01-01

    Explains the Big6 model of information problem solving as a conceptual framework for learning and teaching information and technology skills. Highlights include information skills; examples of integrating technology in Big6 contexts; and the Big6 and the Internet, including email, listservs, chat, Web browsers, search engines, portals, Web…

  6. A Proposed Concentration Curriculum Design for Big Data Analytics for Information Systems Students

    ERIC Educational Resources Information Center

    Molluzzo, John C.; Lawler, James P.

    2015-01-01

    Big Data is becoming a critical component of the Information Systems curriculum. Educators are enhancing gradually the concentration curriculum for Big Data in schools of computer science and information systems. This paper proposes a creative curriculum design for Big Data Analytics for a program at a major metropolitan university. The design…

  7. The Big6 Collection: The Best of the Big6 Newsletter.

    ERIC Educational Resources Information Center

    Eisenberg, Michael B.; Berkowitz, Robert E.

    The Big6 is a complete approach to implementing meaningful learning and teaching of information and technology skills, essential for 21st century living. Including in-depth articles, practical tips, and explanations, this book offers a varied range of material about students and teachers, the Big6, and curriculum. The book is divided into 10 main…

  8. ADEC - The Astrophysics Data Centers Executive Council

    Science.gov Websites


  9. Big Data Analytics Methodology in the Financial Industry

    ERIC Educational Resources Information Center

    Lawler, James; Joseph, Anthony

    2017-01-01

    Firms in industry continue to be attracted by the benefits of Big Data Analytics. The benefits of Big Data Analytics projects may not be as evident as frequently indicated in the literature. The authors of the study evaluate factors in a customized methodology that may increase the benefits of Big Data Analytics projects. Evaluating firms in the…

  10. 75 FR 22626 - Notice of Lodging of Consent Decree With Big River Zinc Corporation Providing for Civil Penalties...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-29

    ... DEPARTMENT OF JUSTICE Notice of Lodging of Consent Decree With Big River Zinc Corporation... April 15, 2009, a proposed Consent Decree with Big River Zinc Corporation (``BRZ'') providing for civil penalties and injunctive Relief under the Clean Air Act in United States v. Big River Zinc Corp., Civil...

  11. Adding Big Data Analytics to GCSS-MC

    DTIC Science & Technology

    2014-09-30

    Subject terms: Big Data, Hadoop, MapReduce, GCSS-MC. Report contents include: 2.5 Hadoop; 3 The Experiment Design; 3.1 Why Add a Big Data Element; 3.2 Adding a Big Data Element to GCSS-MC; 3.3 Building a Hadoop Cluster

  12. 'Big data' in pharmaceutical science: challenges and opportunities.

    PubMed

    Dossetter, Al G; Ecker, Gerhard; Laverty, Hugh; Overington, John

    2014-05-01

    Future Medicinal Chemistry invited a selection of experts to express their views on the current impact of big data in drug discovery and design, as well as speculate on future developments in the field. The topics discussed include the challenges of implementing big data technologies, maintaining the quality and privacy of data sets, and how the industry will need to adapt to welcome the big data era. Their enlightening responses provide a snapshot of the many and varied contributions being made by big data to the advancement of pharmaceutical science.

  13. [Contemplation on the application of big data in clinical medicine].

    PubMed

    Lian, Lei

    2015-01-01

    Medicine is another area where big data is being used. The link between clinical treatment and outcome is the key step when applying big data in medicine. In the era of big data, it is critical to collect complete outcome data. Patient follow-up, comprehensive integration of data resources, quality control and standardized data management are the predominant approaches to avoid missing data and data islands. Therefore, establishing a systematic patient follow-up protocol and a prospective data management strategy are important aspects of big data in medicine.

  14. Traffic information computing platform for big data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Zongtao, E-mail: ztduan@chd.edu.cn; Li, Ying, E-mail: ztduan@chd.edu.cn; Zheng, Xibin, E-mail: ztduan@chd.edu.cn

    The big data environment creates the data conditions for improving the quality of traffic information services. The goal of this article is to construct a traffic information computing platform for the big data environment. Through in-depth analysis of the connotation and technological characteristics of big data and traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this type of traffic atomic information computing architecture helps to guarantee safe and efficient traffic operation, and more intelligent and personalized traffic information services can be offered to traffic information users.

  15. Gender Differences in Personality across the Ten Aspects of the Big Five.

    PubMed

    Weisberg, Yanna J; Deyoung, Colin G; Hirsh, Jacob B

    2011-01-01

    This paper investigates gender differences in personality traits, both at the level of the Big Five and at the sublevel of two aspects within each Big Five domain. Replicating previous findings, women reported higher Big Five Extraversion, Agreeableness, and Neuroticism scores than men. However, more extensive gender differences were found at the level of the aspects, with significant gender differences appearing in both aspects of every Big Five trait. For Extraversion, Openness, and Conscientiousness, the gender differences were found to diverge at the aspect level, rendering them either small or undetectable at the Big Five level. These findings clarify the nature of gender differences in personality and highlight the utility of measuring personality at the aspect level.

  16. Entering the 'big data' era in medicinal chemistry: molecular promiscuity analysis revisited.

    PubMed

    Hu, Ye; Bajorath, Jürgen

    2017-06-01

    The 'big data' concept plays an increasingly important role in many scientific fields. Big data involves more than unprecedentedly large volumes of data that become available. Different criteria characterizing big data must be carefully considered in computational data mining, as we discuss herein focusing on medicinal chemistry. This is a scientific discipline where big data is beginning to emerge and provide new opportunities. For example, the ability of many drugs to specifically interact with multiple targets, termed promiscuity, forms the molecular basis of polypharmacology, a hot topic in drug discovery. Compound promiscuity analysis is an area that is much influenced by big data phenomena. Different results are obtained depending on chosen data selection and confidence criteria, as we also demonstrate.

  17. Gender Differences in Personality across the Ten Aspects of the Big Five

    PubMed Central

    Weisberg, Yanna J.; DeYoung, Colin G.; Hirsh, Jacob B.

    2011-01-01

    This paper investigates gender differences in personality traits, both at the level of the Big Five and at the sublevel of two aspects within each Big Five domain. Replicating previous findings, women reported higher Big Five Extraversion, Agreeableness, and Neuroticism scores than men. However, more extensive gender differences were found at the level of the aspects, with significant gender differences appearing in both aspects of every Big Five trait. For Extraversion, Openness, and Conscientiousness, the gender differences were found to diverge at the aspect level, rendering them either small or undetectable at the Big Five level. These findings clarify the nature of gender differences in personality and highlight the utility of measuring personality at the aspect level. PMID:21866227

  18. A Meta-Analysis of the Reliability of Free and For-Pay Big Five Scales.

    PubMed

    Hamby, Tyler; Taylor, Wyn; Snowden, Audrey K; Peterson, Robert A

    2016-01-01

    The present study meta-analytically compared the coefficient alpha reliabilities reported for free and for-pay Big Five scales. We collected 288 studies from five previous meta-analyses of Big Five traits and harvested 1,317 alphas from these studies. We found that free and for-pay scales measuring Big Five traits possessed comparable reliabilities. However, after we controlled for the number of items in the scales with the Spearman-Brown formula, we found that free scales possessed significantly higher alpha coefficients than for-pay scales for each of the Big Five traits. Thus, the study offers initial evidence that free Big Five scales measure these traits more efficiently for research purposes than do for-pay scales.
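
    The length correction described above follows the standard Spearman-Brown prophecy formula, α′ = kα / (1 + (k − 1)α), where k is the ratio of new to current scale length. A minimal sketch (the specific numbers are illustrative, not from the study):

```python
def spearman_brown(alpha: float, length_ratio: float) -> float:
    """Spearman-Brown prophecy formula.

    Predicted reliability when a scale is lengthened (length_ratio > 1)
    or shortened (length_ratio < 1) relative to its current length:
        alpha' = k * alpha / (1 + (k - 1) * alpha), with k = length_ratio.
    """
    k = length_ratio
    return (k * alpha) / (1 + (k - 1) * alpha)


# Doubling a scale with alpha = .70 predicts alpha of about .82:
print(round(spearman_brown(0.70, 2.0), 2))  # 0.82
```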

  19. Adult Literacy and Technology Newsletter. Vol. 3, Nos. 1-4.

    ERIC Educational Resources Information Center

    Gueble, Ed, Ed.

    1989-01-01

    This document consists of four issues of a newsletter focused on the spectrum of technology use in literacy instruction. The first issue contains the following articles: "Five 'Big' Systems and One 'Little' Option" (Weisberg); "Computer Use Patterns at Blackfeet Community College" (Hill); "Software Review: Educational Activities' Science Series"…

  20. Potential Solution of a Hardware-Software System V-Cluster for Big Data Analysis

    NASA Astrophysics Data System (ADS)

    Morra, G.; Tufo, H.; Yuen, D. A.; Brown, J.; Zihao, S.

    2017-12-01

    Today it cannot be denied that the Big Data revolution is taking place, replacing HPC and numerical simulation as the main driver in society. Outside the immediate scientific arena, the Big Data market encompasses much more than the AGU. There are many sectors in society that Big Data can ably serve, such as government finances, hospitals, tourism, and, last but not least, scientific and engineering problems. In many countries, education has not kept pace with the demand from students outside computer science to get into Big Data science. Ultimate Vision (UV) in Beijing attempts to address this need in China by focusing part of its energy on education and training outside the immediate university environment. UV plans a strategy to maximize profits at the outset. We will therefore focus on growing markets such as provincial governments, medical sectors, mass media, and education, and will not address issues such as performance for scientific collaboration, for example seismic networks, where the market share and profits are small by comparison. We have developed a software-hardware system, called V-Cluster, built with the latest NVIDIA GPUs and Intel CPUs with ample amounts of RAM (over a couple of terabytes) and local storage. We have installed an internal network with high bandwidth (over 100 Gbits/sec), and each node of V-Cluster can run at around 40 Tflops. Our system can scale linearly with the number of codes. Our main strength in data analytics is the use of the graph-computing paradigm for optimizing the transfer rate in collaborative efforts. We focus on training and education with our clients in order to gain experience with new applications. We will present the philosophy of this second generation of our data analytics system, whose costs fall far below those offered elsewhere.

  1. 76 FR 29647 - Safety Zone; Big Rock Blue Marlin Air Show; Bogue Sound, Morehead City, NC

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-23

    ...-AA00 Safety Zone; Big Rock Blue Marlin Air Show; Bogue Sound, Morehead City, NC AGENCY: Coast Guard... for the ``Big Rock Blue Marlin Air Show,'' an aerial demonstration to be held over the waters of Bogue... notice of proposed rulemaking (NPRM) entitled Safety Zone; Big Rock Blue Marlin Air Show; Bogue Sound...

  2. A SWOT Analysis of Big Data

    ERIC Educational Resources Information Center

    Ahmadi, Mohammad; Dileepan, Parthasarati; Wheatley, Kathleen K.

    2016-01-01

    This is the decade of data analytics and big data, but not everyone agrees with the definition of big data. Some researchers see it as the future of data analysis, while others consider it as hype and foresee its demise in the near future. No matter how it is defined, big data for the time being is having its glory moment. The most important…

  3. The Role of Gender in Youth Mentoring Relationship Formation and Duration

    ERIC Educational Resources Information Center

    Rhodes, Jean; Lowe, Sarah R.; Litchfield, Leon; Walsh-Samp, Kathy

    2008-01-01

    The role of gender in shaping the course and quality of adult-youth mentoring relationships was examined. The study drew on data from a large, random assignment evaluation of Big Brothers Big Sisters of America (BBSA) programs [Grossman, J. B., & Tierney, J. P. (1998). Does mentoring work? An impact study of the Big Brothers Big Sisters program.…

  4. West Virginia's big trees: setting the record straight

    Treesearch

    Melissa Thomas-Van Gundy; Robert Whetsell

    2016-01-01

People love big trees, people love to find big trees, and people love to find big trees in the place they call home. Having been suspicious for years, my coauthor and historian Rob Whetsell approached me with a species identification challenge. There are several photographs of giant trees used by many people to illustrate the past forests of West Virginia,...

  5. Development and Validation of Big Four Personality Scales for the Schedule for Nonadaptive and Adaptive Personality-Second Edition (SNAP-2)

    ERIC Educational Resources Information Center

    Calabrese, William R.; Rudick, Monica M.; Simms, Leonard J.; Clark, Lee Anna

    2012-01-01

    Recently, integrative, hierarchical models of personality and personality disorder (PD)--such as the Big Three, Big Four, and Big Five trait models--have gained support as a unifying dimensional framework for describing PD. However, no measures to date can simultaneously represent each of these potentially interesting levels of the personality…

  6. A survey of big data research

    PubMed Central

    Fang, Hua; Zhang, Zhaoyang; Wang, Chanpaul Jin; Daneshmand, Mahmoud; Wang, Chonggang; Wang, Honggang

    2015-01-01

    Big data create values for business and research, but pose significant challenges in terms of networking, storage, management, analytics and ethics. Multidisciplinary collaborations from engineers, computer scientists, statisticians and social scientists are needed to tackle, discover and understand big data. This survey presents an overview of big data initiatives, technologies and research in industries and academia, and discusses challenges and potential solutions. PMID:26504265

  7. Analysis of some seismic expressions of Big Injun sandstone and its adjacent interval

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiangdong, Zou; Wilson, T.A.; Donaldson, A.C.

    1991-08-01

The Big Injun sandstone is an important oil and gas reservoir in western West Virginia. The pre-Greenbrier unconformity has complicated correlations, and hydrocarbon explorationists commonly have misidentified the Big Injun in the absence of a regional stratigraphic study. Paleogeologic maps on this unconformity show the West Virginia dome, with the Price/Pocono units truncated, resulting in pinch-outs of different sandstones against the overlying Big Lime (Greenbrier Limestone). Drillers have named the first sandstone below the Big Lime as Big Injun, and miscorrelated the real Big Injun with the Squaw, upper Weir, and even the Berea sandstone. In this report, an 8-mi (13-km) seismic section extending from Kanawha to Clay counties was interpreted. The study area is near the pinch-out of the Big Injun sandstone. A stratigraphic cross section was constructed from gamma-ray logs for comparison with the seismic interpretation. The modeling and interpretation of the seismic section recognized the relief on the unconformity and demonstrated the ability to determine facies changes as well. Both geophysical wireline and seismic data can be used for detailed stratigraphic analysis within the Granny Creek oil field of Clay and Roane counties.

  8. SharP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venkata, Manjunath Gorentla; Aderholdt, William F

Pre-exascale systems are expected to have a significant amount of hierarchical and heterogeneous on-node memory, and this trend in system architecture is expected to continue into the exascale era. Along with hierarchical-heterogeneous memory, such a system typically has a high-performing network and a compute accelerator. This system architecture is effective not only for running traditional High Performance Computing (HPC) applications (Big-Compute), but also for running data-intensive HPC applications and Big-Data applications. As a consequence, there is a growing desire to have a single system serve the needs of both Big-Compute and Big-Data applications. Though the system architecture supports the convergence of Big-Compute and Big-Data, the programming models and software layers have yet to evolve to support either hierarchical-heterogeneous memory systems or this convergence. SharP is a programming abstraction that addresses this problem. It is implemented as a software library and runs on pre-exascale and exascale systems supporting current and emerging system architectures. Using distributed data structures as a central concept, it provides (1) a simple, usable, and portable abstraction for hierarchical-heterogeneous memory and (2) a unified programming abstraction for Big-Compute and Big-Data applications.

  9. What’s So Different about Big Data?. A Primer for Clinicians Trained to Think Epidemiologically

    PubMed Central

    Liu, Vincent

    2014-01-01

    The Big Data movement in computer science has brought dramatic changes in what counts as data, how those data are analyzed, and what can be done with those data. Although increasingly pervasive in the business world, it has only recently begun to influence clinical research and practice. As Big Data draws from different intellectual traditions than clinical epidemiology, the ideas may be less familiar to practicing clinicians. There is an increasing role of Big Data in health care, and it has tremendous potential. This Demystifying Data Seminar identifies four main strands in Big Data relevant to health care. The first is the inclusion of many new kinds of data elements into clinical research and operations, in a volume not previously routinely used. Second, Big Data asks different kinds of questions of data and emphasizes the usefulness of analyses that are explicitly associational but not causal. Third, Big Data brings new analytic approaches to bear on these questions. And fourth, Big Data embodies a new set of aspirations for a breaking down of distinctions between research data and operational data and their merging into a continuously learning health system. PMID:25102315

  10. What's so different about big data?. A primer for clinicians trained to think epidemiologically.

    PubMed

    Iwashyna, Theodore J; Liu, Vincent

    2014-09-01

    The Big Data movement in computer science has brought dramatic changes in what counts as data, how those data are analyzed, and what can be done with those data. Although increasingly pervasive in the business world, it has only recently begun to influence clinical research and practice. As Big Data draws from different intellectual traditions than clinical epidemiology, the ideas may be less familiar to practicing clinicians. There is an increasing role of Big Data in health care, and it has tremendous potential. This Demystifying Data Seminar identifies four main strands in Big Data relevant to health care. The first is the inclusion of many new kinds of data elements into clinical research and operations, in a volume not previously routinely used. Second, Big Data asks different kinds of questions of data and emphasizes the usefulness of analyses that are explicitly associational but not causal. Third, Big Data brings new analytic approaches to bear on these questions. And fourth, Big Data embodies a new set of aspirations for a breaking down of distinctions between research data and operational data and their merging into a continuously learning health system.

  11. Challenges of Big Data Analysis.

    PubMed

    Fan, Jianqing; Han, Fang; Liu, Han

    2014-06-01

Big Data bring new opportunities to modern society and challenges to data scientists. On one hand, Big Data hold great promises for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require new computational and statistical paradigms. This article gives an overview of the salient features of Big Data and how these features drive paradigm change in statistical and computational methods as well as computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in a high-confidence set and point out that exogenous assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity. They can lead to wrong statistical inferences and consequently wrong scientific conclusions.
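The spurious-correlation pitfall named above is easy to reproduce numerically. Below is an illustrative sketch (mine, not the authors'): with far more features than samples, some feature correlates strongly with a target that is, by construction, pure noise.

```python
# Illustrative demo of spurious correlation in high dimensions:
# none of the p features is related to y, yet the best of them
# still shows a sizeable sample correlation.
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 5000                       # small sample, high dimension
X = rng.standard_normal((n, p))       # features, independent of y
y = rng.standard_normal(n)            # target: pure noise

# Pearson correlation of every feature with y
Xc = (X - X.mean(axis=0)) / X.std(axis=0)
yc = (y - y.mean()) / y.std()
corrs = Xc.T @ yc / n

print(f"max |corr| over {p} unrelated features: {np.abs(corrs).max():.2f}")
```

With these sizes the maximum absolute correlation typically exceeds 0.5, which a naive variable screen would flag as a strong signal.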

  12. Little ice bodies, huge ice lands, and the up-going of the big water body

    NASA Astrophysics Data System (ADS)

    Ultee, E.; Bassis, J. N.

    2017-12-01

    Ice moving out of the huge ice lands causes the big water body to go up. That can cause bad things to happen in places close to the big water body - the land might even disappear! If that happens, people living close to the big water body might lose their homes. Knowing how much ice will come out of the huge ice lands, and when, can help the world plan for the up-going of the big water body. We study the huge ice land closest to us. All around the edge of that huge ice land, there are smaller ice bodies that control how much ice makes it into the big water body. Most ways of studying the huge ice land with computers struggle to tell the computer about those little ice bodies, but we have found a new way. We will talk about our way of studying little ice bodies and how their moving brings about up-going of the big water.

  13. Big data uncertainties.

    PubMed

    Maugis, Pierre-André G

    2018-07-01

Big data, the idea that an always-larger volume of information is being constantly recorded, suggests that new problems can now be subjected to scientific scrutiny. However, can classical statistical methods be used directly on big data? We analyze the problem by looking at two known pitfalls of big datasets. First, that they are biased, in the sense that they do not offer a complete view of the populations under consideration. Second, that they present a weak but pervasive level of dependence between all their components. In both cases we observe that the uncertainty of the conclusions obtained by statistical methods is increased when they are used on big data, either because of a systematic error (bias) or because of a larger degree of randomness (increased variance). We argue that the key challenge raised by big data is not only how to use big data to tackle new problems, but how to develop tools and methods able to rigorously articulate the new risks therein. Copyright © 2016. Published by Elsevier Ltd.
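A toy numeric sketch (mine, not the paper's) of the first pitfall: a very large but selection-biased sample misestimates the population mean far worse than a modest random one, because bias does not average away with volume.

```python
# Bias does not average away: 200,000 biased records lose to 200 random ones.
import numpy as np

rng = np.random.default_rng(1)
population = rng.normal(loc=100.0, scale=15.0, size=1_000_000)

# a small simple random sample
small_random = rng.choice(population, size=200, replace=False)

# a "big" sample that only ever records the upper half of the population
upper_half = population[population > np.median(population)]
biased_big = rng.choice(upper_half, size=200_000, replace=False)

print(f"true mean:         {population.mean():.1f}")
print(f"small random mean: {small_random.mean():.1f}")
print(f"big biased mean:   {biased_big.mean():.1f}")
```

The 200-point random sample lands within a point or two of the truth, while the 200,000-point biased sample is off by roughly twelve, no matter how much more of it is collected.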

  14. Challenges of Big Data Analysis

    PubMed Central

    Fan, Jianqing; Han, Fang; Liu, Han

    2014-01-01

Big Data bring new opportunities to modern society and challenges to data scientists. On one hand, Big Data hold great promises for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require new computational and statistical paradigms. This article gives an overview of the salient features of Big Data and how these features drive paradigm change in statistical and computational methods as well as computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in a high-confidence set and point out that exogenous assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity. They can lead to wrong statistical inferences and consequently wrong scientific conclusions. PMID:25419469

  15. Linking of uniform random polygons in confined spaces

    NASA Astrophysics Data System (ADS)

    Arsuaga, J.; Blackstone, T.; Diao, Y.; Karadayi, E.; Saito, M.

    2007-03-01

In this paper, we study the topological entanglement of uniform random polygons in a confined space. We derive the formula for the mean squared linking number of such polygons. For a fixed simple closed curve in the confined space, we rigorously show that the linking probability between this curve and a uniform random polygon of $n$ vertices is at least $1-O\big(\frac{1}{\sqrt{n}}\big)$. Our numerical study also indicates that the linking probability between two uniform random polygons (in a confined space), of $m$ and $n$ vertices respectively, is bounded below by $1-O\big(\frac{1}{\sqrt{mn}}\big)$. In particular, the linking probability between two uniform random polygons, both of $n$ vertices, is bounded below by $1-O\big(\frac{1}{n}\big)$.

  16. Entering the ‘big data’ era in medicinal chemistry: molecular promiscuity analysis revisited

    PubMed Central

    Hu, Ye; Bajorath, Jürgen

    2017-01-01

    The ‘big data’ concept plays an increasingly important role in many scientific fields. Big data involves more than unprecedentedly large volumes of data that become available. Different criteria characterizing big data must be carefully considered in computational data mining, as we discuss herein focusing on medicinal chemistry. This is a scientific discipline where big data is beginning to emerge and provide new opportunities. For example, the ability of many drugs to specifically interact with multiple targets, termed promiscuity, forms the molecular basis of polypharmacology, a hot topic in drug discovery. Compound promiscuity analysis is an area that is much influenced by big data phenomena. Different results are obtained depending on chosen data selection and confidence criteria, as we also demonstrate. PMID:28670471

  17. The measurement equivalence of Big Five factor markers for persons with different levels of education.

    PubMed

    Rammstedt, Beatrice; Goldberg, Lewis R; Borg, Ingwer

    2010-02-01

    Previous findings suggest that the Big-Five factor structure is not guaranteed in samples with lower educational levels. The present study investigates the Big-Five factor structure in two large samples representative of the German adult population. In both samples, the Big-Five factor structure emerged only in a blurry way at lower educational levels, whereas for highly educated persons it emerged with textbook-like clarity. Because well-educated persons are most comparable to the usual subjects of psychological research, it might be asked if the Big Five are limited to such persons. Our data contradict this conclusion. There are strong individual differences in acquiescence response tendencies among less highly educated persons. After controlling for this bias the Big-Five model holds at all educational levels.
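The acquiescence control described above can be illustrated with a toy simulation; the generating model and the simple within-person centering below are my sketch, not the study's actual procedure.

```python
# A per-respondent "yes-saying" offset is added to every rating. It masks
# the true negative relation between a positively keyed and a reverse-keyed
# item; subtracting each respondent's mean rating (ipsatizing) recovers it.
import numpy as np

rng = np.random.default_rng(2)
n_resp, n_items = 1000, 20
trait = rng.standard_normal((n_resp, 1))                 # one latent trait
keys = np.where(np.arange(n_items) % 2 == 0, 1.0, -1.0)  # half reverse-keyed
acquiescence = rng.standard_normal((n_resp, 1))          # per-person offset

ratings = trait * keys + acquiescence + 0.3 * rng.standard_normal((n_resp, n_items))

raw_corr = np.corrcoef(ratings[:, 0], ratings[:, 1])[0, 1]
centered = ratings - ratings.mean(axis=1, keepdims=True)  # remove the offset
cen_corr = np.corrcoef(centered[:, 0], centered[:, 1])[0, 1]

print(f"item 0 vs item 1: raw corr {raw_corr:+.2f}, centered corr {cen_corr:+.2f}")
```

Raw, the two oppositely keyed items look nearly uncorrelated because the acquiescence offset cancels the trait signal; after centering, the expected strong negative correlation reappears, mirroring how the Big-Five structure clarified once the response bias was controlled.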

  18. BIGCHEM: Challenges and Opportunities for Big Data Analysis in Chemistry.

    PubMed

    Tetko, Igor V; Engkvist, Ola; Koch, Uwe; Reymond, Jean-Louis; Chen, Hongming

    2016-12-01

    The increasing volume of biomedical data in chemistry and life sciences requires the development of new methods and approaches for their handling. Here, we briefly discuss some challenges and opportunities of this fast growing area of research with a focus on those to be addressed within the BIGCHEM project. The article starts with a brief description of some available resources for "Big Data" in chemistry and a discussion of the importance of data quality. We then discuss challenges with visualization of millions of compounds by combining chemical and biological data, the expectations from mining the "Big Data" using advanced machine-learning methods, and their applications in polypharmacology prediction and target de-convolution in phenotypic screening. We show that the efficient exploration of billions of molecules requires the development of smart strategies. We also address the issue of secure information sharing without disclosing chemical structures, which is critical to enable bi-party or multi-party data sharing. Data sharing is important in the context of the recent trend of "open innovation" in pharmaceutical industry, which has led to not only more information sharing among academics and pharma industries but also the so-called "precompetitive" collaboration between pharma companies. At the end we highlight the importance of education in "Big Data" for further progress of this area. © 2016 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  19. References for Haplotype Imputation in the Big Data Era

    PubMed Central

    Li, Wenzhi; Xu, Wei; Li, Qiling; Ma, Li; Song, Qing

    2016-01-01

Imputation is a powerful in silico approach to fill in missing values in big datasets. This process requires a reference panel, which is a collection of big data from which the missing information can be extracted and imputed. Haplotype imputation requires ethnicity-matched references; a mismatched reference panel will significantly reduce the quality of imputation. However, currently existing big datasets cover only a small number of ethnicities; there is a lack of ethnicity-matched references for many ethnic populations in the world, which has hampered the imputation of haplotypes and its downstream applications. To solve this issue, several approaches have been proposed and explored, including the mixed reference panel, the internal reference panel, and the genotype-converted reference panel. This review article provides information on and a comparison between these approaches. Increasing evidence has shown that gene activity and function are dictated not by just one or two genetic elements but by cis-interactions of multiple elements. Cis-interactions require the interacting elements to be on the same chromosome molecule; therefore, haplotype analysis is essential for the investigation of cis-interactions among multiple genetic variants at different loci, and appears to be especially important for studying common diseases. It will be valuable in a wide spectrum of applications, from academic research to clinical diagnosis, prevention, treatment, and the pharmaceutical industry. PMID:27274952
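As a toy sketch of reference-panel imputation (the panel, sites, and nearest-haplotype rule here are invented for illustration, not taken from the review), missing alleles can be filled from the reference haplotype that best matches the observed positions:

```python
# Reference panel: haplotypes over 6 biallelic sites (alleles coded 0/1).
REFERENCE = [
    [0, 1, 1, 0, 0, 1],
    [0, 1, 0, 0, 1, 1],
    [1, 0, 1, 1, 0, 0],
    [1, 0, 0, 1, 0, 0],
]

def impute(observed):
    """Fill None entries from the closest-matching reference haplotype."""
    def matches(hap):
        # count agreements at the positions that were actually observed
        return sum(o == h for o, h in zip(observed, hap) if o is not None)
    best = max(REFERENCE, key=matches)
    return [h if o is None else o for o, h in zip(observed, best)]

sample = [0, 1, None, 0, 1, None]
print(impute(sample))  # → [0, 1, 0, 0, 1, 1]
```

An ethnicity-mismatched panel corresponds to REFERENCE haplotypes drawn from a different population than the sample, which is exactly why the nearest match can then fill in the wrong alleles, the quality loss the review warns about.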

  20. BIGCHEM: Challenges and Opportunities for Big Data Analysis in Chemistry

    PubMed Central

    Engkvist, Ola; Koch, Uwe; Reymond, Jean‐Louis; Chen, Hongming

    2016-01-01

The increasing volume of biomedical data in chemistry and life sciences requires the development of new methods and approaches for their handling. Here, we briefly discuss some challenges and opportunities of this fast growing area of research with a focus on those to be addressed within the BIGCHEM project. The article starts with a brief description of some available resources for “Big Data” in chemistry and a discussion of the importance of data quality. We then discuss challenges with visualization of millions of compounds by combining chemical and biological data, the expectations from mining the “Big Data” using advanced machine‐learning methods, and their applications in polypharmacology prediction and target de‐convolution in phenotypic screening. We show that the efficient exploration of billions of molecules requires the development of smart strategies. We also address the issue of secure information sharing without disclosing chemical structures, which is critical to enable bi‐party or multi‐party data sharing. Data sharing is important in the context of the recent trend of “open innovation” in pharmaceutical industry, which has led to not only more information sharing among academics and pharma industries but also the so‐called “precompetitive” collaboration between pharma companies. At the end we highlight the importance of education in “Big Data” for further progress of this area. PMID:27464907

  1. Big data in fashion industry

    NASA Astrophysics Data System (ADS)

    Jain, S.; Bruniaux, J.; Zeng, X.; Bruniaux, P.

    2017-10-01

Significant work has been done in the field of big data in the last decade. The concept of big data includes analysing voluminous data to extract valuable information. In the fashion world, big data is increasingly playing a part in trend forecasting and in analysing consumer behaviour, preferences and emotions. The purpose of this paper is to introduce the term fashion data and explain why it can be considered big data. It also gives a broad classification of the types of fashion data and briefly defines them. The methodology and working of a system that will use this data are also briefly described.

  2. Real-Time Information Extraction from Big Data

    DTIC Science & Technology

    2015-10-01

Institute for Defense Analyses. Real-Time Information Extraction from Big Data. Jagdeep Shah, Robert M. Rolfe, Francisco L. Loaiza-Lemos. October 7, 2015. Abstract: We are drowning under the 3 Vs (volume, velocity and variety) of big data. Real-time information extraction from big…

  3. The big five factors of personality and their relationship to personality disorders.

    PubMed

    Dyce, J A

    1997-10-01

    Articles examining the relationship between the Big Five factors of personality and personality disorders (PDs) are reviewed. A survey of these studies indicates that there is some agreement regarding the relationship between the Big Five and PDs. However, the level of agreement varies and may be a function of instrumentation, the method of report, or how data have been analyzed. Future research should consider the role of peer-ratings, examine the relationship between PDs and the first-order factors of the Big Five, consider dimensions over and above the Big Five as predictors of PDs.

  4. Urgent Call for Nursing Big Data.

    PubMed

    Delaney, Connie W

    2016-01-01

The purpose of this panel is to expand internationally a National Action Plan for sharable and comparable nursing data for quality improvement and big data science. There is an urgent need to ensure that nursing has sharable and comparable data for quality improvement and big data science. A national collaborative, Nursing Knowledge and Big Data Science, includes multi-stakeholder groups focused on a National Action Plan toward implementing and using sharable and comparable nursing big data. Panelists will share accomplishments and future plans with an eye toward international collaboration. This presentation is suitable for any audience attending the NI2016 conference.

  5. Big data analytics in healthcare: promise and potential.

    PubMed

    Raghupathi, Wullianallur; Raghupathi, Viju

    2014-01-01

To describe the promise and potential of big data analytics in healthcare. The paper describes the nascent field of big data analytics in healthcare, discusses the benefits, outlines an architectural framework and methodology, describes examples reported in the literature, briefly discusses the challenges, and offers conclusions. The paper provides a broad overview of big data analytics for healthcare researchers and practitioners. Big data analytics in healthcare is evolving into a promising field for providing insight from very large data sets and improving outcomes while reducing costs. Its potential is great; however, there remain challenges to overcome.

  6. Mentoring in Schools: An Impact Study of Big Brothers Big Sisters School-Based Mentoring

    ERIC Educational Resources Information Center

    Herrera, Carla; Grossman, Jean Baldwin; Kauh, Tina J.; McMaken, Jennifer

    2011-01-01

    This random assignment impact study of Big Brothers Big Sisters School-Based Mentoring involved 1,139 9- to 16-year-old students in 10 cities nationwide. Youth were randomly assigned to either a treatment group (receiving mentoring) or a control group (receiving no mentoring) and were followed for 1.5 school years. At the end of the first school…

  7. 76 FR 18672 - Safety Zone; Big Rock Blue Marlin Air Show; Bogue Sound, Morehead City, NC

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-05

    ...-AA00 Safety Zone; Big Rock Blue Marlin Air Show; Bogue Sound, Morehead City, NC AGENCY: Coast Guard... Safety Zone for the ``Big Rock Blue Marlin Air Show'', an aerial demonstration to be held over the waters... Register. Basis and Purpose On June 11, 2011 from 7 p.m. to 8 p.m., the Big Rock Blue Marlin Tournament...

  8. 75 FR 73981 - Fisheries of the Exclusive Economic Zone Off Alaska; Big Skate in the Central Regulatory Area of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-30

    .... 0910131362-0087-02] RIN 0648-XA066 Fisheries of the Exclusive Economic Zone Off Alaska; Big Skate in the... prohibiting retention of big skate in the Central Regulatory Area of the Gulf of Alaska (GOA). This action is necessary because the 2010 total allowable catch (TAC) of big skate in the Central Regulatory Area of the...

  9. Female "Big Fish" Swimming against the Tide: The "Big-Fish-Little-Pond Effect" and Gender-Ratio in Special Gifted Classes

    ERIC Educational Resources Information Center

    Preckel, Franzis; Zeidner, Moshe; Goetz, Thomas; Schleyer, Esther Jane

    2008-01-01

    This study takes a second look at the "big-fish-little-pond effect" (BFLPE) on a national sample of 769 gifted Israeli students (32% female) previously investigated by Zeidner and Schleyer (Zeidner, M., & Schleyer, E. J., (1999a). "The big-fish-little-pond effect for academic self-concept, test anxiety, and school grades in…

  10. The Big6: Not Just for Kids! Introduction to the Big6: Information Problem-Solving for Upper High School, College-Age, and Adult Students.

    ERIC Educational Resources Information Center

    Eisenberg, Mike; Spitzer, Kathy

    1998-01-01

    Explains the Big6 approach to information problem-solving based on exercises that were developed for college or upper high school students that can be completed during class sessions. Two of the exercises relate to personal information problems, and one relates Big6 skill areas to course assignments. (LRW)

  11. Infrastructure for Big Data in the Intensive Care Unit.

    PubMed

    Zelechower, Javier; Astudillo, José; Traversaro, Francisco; Redelico, Francisco; Luna, Daniel; Quiros, Fernan; San Roman, Eduardo; Risk, Marcelo

    2017-01-01

The Big Data paradigm can be applied in the intensive care unit in order to improve the treatment of patients, with the aim of customized decisions. This poster is about the infrastructure necessary to build a Big Data system for the ICU. Together with the infrastructure, the formation of a multidisciplinary team is essential to develop Big Data for use in critical care medicine.

  12. Vertebrate richness and biogeography in the Big Thicket of Texas

    Treesearch

    Michael H MacRoberts; Barbara R. MacRoberts; D. Craig Rudolph

    2010-01-01

    The Big Thicket of Texas has been described as rich in species and a “crossroads:” a place where organisms from many different regions meet. We examine the species richness and regional affiliations of Big Thicket vertebrates. We found that the Big Thicket is neither exceptionally rich in vertebrates nor is it a crossroads for vertebrates. Its vertebrate fauna is...

  13. The Folly of the Big Idea: How a Liberal Arts Education Puts Fads in Perspective

    ERIC Educational Resources Information Center

    Senechal, Diana

    2013-01-01

America was made by and for big ideas. Insofar as big ideas have shaped it, it is ever on the verge of hyperbole and dream. Today's big ideas come with an air of celebrity and accessibility; they glitter with glamour but demand little of Americans. While they have many manifestations, people see them epitomized in TEDTalks. TED (which stands…

  14. Software Architecture for Big Data Systems

    DTIC Science & Technology

    2014-03-27

Software Architecture for Big Data Systems. Software Architecture: Trends and New Directions (#SEIswArch). © 2014 Carnegie Mellon University. What is big data? From a software…

  15. When Big Ice Turns Into Water It Matters For Houses, Stores And Schools All Over

    NASA Astrophysics Data System (ADS)

    Bell, R. E.

    2017-12-01

When ice in my glass turns to water it is not bad but when the big ice at the top and bottom of the world turns into water it is not good. This new water makes many houses, stores and schools wet. It is really bad when the wind is strong and the rain is hard. New old ice water gets all over the place. We can not get to work or school or home. We go to the big ice at the top and bottom of the world to see if it will turn to water soon and make more houses wet. We fly over the big ice to see how it is doing. Most of the big ice sits on rock. Around the edge of the big sitting on rock ice, is really low ice that rides on top of the water. This really low ice slows down the big rock ice turning into water. If the really low ice cracks up and turns into little pieces of ice, the big rock ice will make more houses wet. We look to see if there is new water in the cracks. Water in the cracks is bad as it hurts the big rock ice. Water in the cracks on the really low ice will turn the low ice into many little pieces of ice. Then the big rock ice will turn to water. That is, water in cracks is bad for the houses, schools and businesses. If water moves off the really low ice, it does not stay in the cracks. This is better for the really low ice. This is better for the big rock ice. We took pictures of the really low ice and saw water leaving. The water was not staying in the cracks. Water leaving the really low ice might be good for houses, schools and stores.

  16. Plasma endothelin-1 and big endothelin-1 levels in women with pre-eclampsia.

    PubMed

    Sudo, N; Kamoi, K; Ishibashi, M; Yamaji, T

    1993-08-01

    To examine a possible role for endothelin-1 (ET-1) and conversion of big ET-1 to ET-1 in the pathophysiology of pre-eclampsia, we measured plasma levels of ET-1 and big ET-1 in 16 women with pre-eclampsia in the third trimester and compared them with those in 11 age-matched normotensive pregnant women and in 10 age-matched pregnant women with chronic hypertension in the third trimester. The plasma concentrations of ET-1 and big ET-1 in the normotensive pregnant women were significantly lower than those in 16 non-pregnant women with a higher molar ratio of big ET-1 to ET-1 in the former group. The plasma concentrations of ET-1 and big ET-1 in the women with pre-eclampsia, on the other hand, were significantly higher than those in the normotensive pregnant women and the molar ratio of big ET-1 to ET-1 in the former group was less than that in the latter group. In sharp contrast, plasma ET-1 and big ET-1 levels in the pregnant women with chronic hypertension were not significantly different from those in the normotensive pregnant women. When examined after delivery, elevated plasma ET-1 and big ET-1 in the women with pre-eclampsia declined, with restoration of normal blood pressure, to the levels in the normotensive women after parturition. There were no significant differences of the levels of ET-1 and big ET-1 in umbilical venous plasma and simultaneously drawn maternal plasma at cesarean section between normotensive pregnant women and women with pre-eclampsia, respectively.(ABSTRACT TRUNCATED AT 250 WORDS)

  17. Does loop quantum cosmology replace the big rip singularity by a non-singular bounce?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haro, Jaume de, E-mail: jaime.haro@upc.edu

    It is stated that holonomy corrections in loop quantum cosmology introduce a modification in Friedmann's equation which prevents the big rip singularity. Recently in [1] it has been proved that this modified Friedmann equation is obtained in an inconsistent way, which means that the results deduced from it, in particular the big rip singularity avoidance, are not justified. The problem is that holonomy corrections modify the gravitational part of the Hamiltonian of the system, leading, after Legendre's transformation, to a non-covariant Lagrangian, which is in contradiction with one of the main principles of General Relativity. A more consistent way to deal with the big rip singularity avoidance is to disregard modifications in the gravitational part of the Hamiltonian and only consider inverse volume effects [2]. In this case we will see that, unlike the big bang singularity, the big rip singularity survives in loop quantum cosmology. Another way to deal with the big rip avoidance is to take into account geometric quantum effects given by the Wheeler-DeWitt equation. In that case, even though the wave packets spread, the expectation values satisfy the same equations as their classical analogues. Then, following the viewpoint adopted in loop quantum cosmology, one can conclude that the big rip singularity survives when one takes into account these quantum effects. However, the spreading of the wave packets prevents the recovery of the semiclassical time, and thus one might conclude that the classical evolution of the universe comes to an end before the big rip is reached. This is not conclusive because, as we will see, there always exist other external times that allow us to define the classical and quantum evolution of the universe up to the big rip singularity.

  18. Using Reactive Transport Modeling to Understand Formation of the Stimson Sedimentary Unit and Altered Fracture Zones at Gale Crater, Mars

    NASA Technical Reports Server (NTRS)

    Hausrath, E. M.; Ming, D. W.; Peretyazhko, T.; Rampe, E. B.

    2017-01-01

    Water flowing through sediments at Gale Crater, Mars created environments that were likely habitable, and sampled basin-wide hydrological systems. However, many questions remain about these environments and the fluids that generated them. Measurements taken by the Mars Science Laboratory Curiosity of multiple fracture zones can help constrain the environments that formed them because they can be compared to nearby associated parent material (Figure 1). For example, measurements of altered fracture zones from the target Greenhorn in the Stimson sandstone can be compared to parent material measured in the nearby Big Sky target, allowing constraints to be placed on the alteration conditions that formed the Greenhorn target from the Big Sky target. Similarly, CheMin measurements of the powdered < 150 micron fraction from the drillhole at Big Sky and sample from the Rocknest eolian deposit indicate that the mineralogies are strikingly similar. The main differences are the presence of olivine in the Rocknest eolian deposit, which is absent in the Big Sky target, and the presence of far more abundant Fe oxides in the Big Sky target. Quantifying the changes between the Big Sky target and the Rocknest eolian deposit can therefore help us understand the diagenetic changes that occurred forming the Stimson sedimentary unit. In order to interpret these aqueous changes, we performed reactive transport modeling of 1) the formation of the Big Sky target from a Rocknest eolian deposit-like parent material, and 2) the formation of the Greenhorn target from the Big Sky target. This work allows us to test the relationships between the targets and the characteristics of the aqueous conditions that formed the Greenhorn target from the Big Sky target, and the Big Sky target from a Rocknest eolian deposit-like parent material.

  19. A peek into the future of radiology using big data applications.

    PubMed

    Kharat, Amit T; Singhal, Shubham

    2017-01-01

    Big data is the extremely large volume of data available in the radiology department. Big data is identified by four Vs - Volume, Velocity, Variety, and Veracity. By applying different algorithmic tools and converting raw data to transformed data in such large datasets, there is a possibility of understanding and using radiology data for gaining new knowledge and insights. Big data analytics consists of 6Cs - Connection, Cloud, Cyber, Content, Community, and Customization. The global per-capita capacity to store digital information has roughly doubled every 40 months since the 1980s. By using big data, the planning and implementation of radiological procedures in radiology departments can be given a great boost. Potential future applications of big data include scheduling of scans, creating patient-specific personalized scanning protocols, radiologist decision support, emergency reporting, and virtual quality assurance for the radiologist. Targeted use of big data applications can be made for images by supporting the analytic process. Screening software tools designed on big data can be used to highlight a region of interest, such as subtle changes in parenchymal density, a solitary pulmonary nodule, or focal hepatic lesions, by plotting its multidimensional anatomy. Following this, we can run more complex applications such as three-dimensional multiplanar reconstruction (MPR), volumetric rendering (VR), and curved planar reconstruction, which consume higher system resources, on targeted data subsets rather than querying the complete cross-sectional imaging dataset. This pre-emptive selection of the dataset can substantially reduce system requirements such as memory and server load and provide prompt results. However, a word of caution: big data should not become "dump data" due to inadequate and poor analysis and non-structured, improperly stored data. In the near future, big data can ring in the era of personalized and individualized healthcare.
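The "pre-emptive selection" idea in this abstract can be illustrated with a minimal sketch: a cheap screening pass flags candidate regions, and only the flagged subset is handed to the expensive stage. All names and thresholds below are hypothetical stand-ins, not the authors' actual software.

```python
# Hypothetical sketch of targeted-subset processing: flag candidate slices
# cheaply, then run costly analysis only on the flagged subset rather than
# on the complete imaging dataset.

def flag_candidates(slices, threshold):
    """Cheap screening pass: keep indices of slices whose mean
    intensity exceeds a threshold (a stand-in for detecting a
    subtle density change or nodule)."""
    return [i for i, s in enumerate(slices)
            if sum(s) / len(s) > threshold]

def expensive_analysis(slice_):
    """Stand-in for a costly step such as 3D reconstruction;
    here it just computes local contrast."""
    return max(slice_) - min(slice_)

def targeted_pipeline(slices, threshold):
    subset = flag_candidates(slices, threshold)
    # Only the flagged subset reaches the heavy stage, which is
    # what keeps memory and server load down.
    return {i: expensive_analysis(slices[i]) for i in subset}

volume = [[10, 12, 11], [90, 95, 100], [11, 10, 12], [80, 85, 99]]
print(targeted_pipeline(volume, threshold=50))  # only slices 1 and 3 analyzed
```

The design choice is simply that the screening predicate is far cheaper than the analysis, so filtering first reduces total cost roughly in proportion to the fraction of data flagged.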

  20. Big Joe Capsule Assembly Activities

    NASA Image and Video Library

    1959-08-01

    Big Joe Capsule Assembly Activities in 1959 at NASA Glenn Research Center (formerly NASA Lewis). Big Joe was an Atlas missile that successfully launched a boilerplate model of the Mercury capsule on September 9, 1959.

  1. Mind the Scales: Harnessing Spatial Big Data for Infectious Disease Surveillance and Inference

    PubMed Central

    Lee, Elizabeth C.; Asher, Jason M.; Goldlust, Sandra; Kraemer, John D.; Lawson, Andrew B.; Bansal, Shweta

    2016-01-01

    Spatial big data have the velocity, volume, and variety of big data sources and contain additional geographic information. Digital data sources, such as medical claims, mobile phone call data records, and geographically tagged tweets, have entered infectious diseases epidemiology as novel sources of data to complement traditional infectious disease surveillance. In this work, we provide examples of how spatial big data have been used thus far in epidemiological analyses and describe opportunities for these sources to improve disease-mitigation strategies and public health coordination. In addition, we consider the technical, practical, and ethical challenges with the use of spatial big data in infectious disease surveillance and inference. Finally, we discuss the implications of the rising use of spatial big data in epidemiology to health risk communication, and public health policy recommendations and coordination across scales. PMID:28830109

  2. BIG: a large-scale data integration tool for renal physiology.

    PubMed

    Zhao, Yue; Yang, Chin-Rang; Raghuram, Viswanathan; Parulekar, Jaya; Knepper, Mark A

    2016-10-01

    Due to recent advances in high-throughput techniques, we and others have generated multiple proteomic and transcriptomic databases to describe and quantify gene expression, protein abundance, or cellular signaling on the scale of the whole genome/proteome in kidney cells. The existence of so much data from diverse sources raises the following question: "How can researchers find information efficiently for a given gene product over all of these data sets without searching each data set individually?" This is the type of problem that has motivated the "Big-Data" revolution in Data Science, which has driven progress in fields such as marketing. Here we present an online Big-Data tool called BIG (Biological Information Gatherer) that allows users to submit a single online query to obtain all relevant information from all indexed databases. BIG is accessible at http://big.nhlbi.nih.gov/.

  3. The BIG (brain injury guidelines) project: defining the management of traumatic brain injury by acute care surgeons.

    PubMed

    Joseph, Bellal; Friese, Randall S; Sadoun, Moutamn; Aziz, Hassan; Kulvatunyou, Narong; Pandit, Viraj; Wynne, Julie; Tang, Andrew; O'Keeffe, Terence; Rhee, Peter

    2014-04-01

    It is becoming standard practice that any "positive" identification of a radiographic intracranial injury requires transfer of the patient to a trauma center for observation and repeat head computed tomography (RHCT). The purpose of this study was to define guidelines-based on each patient's history, physical examination, and initial head CT findings-regarding which patients require a period of observation, RHCT, or neurosurgical consultation. In our retrospective cohort analysis, we reviewed the records of 3,803 blunt traumatic brain injury patients during a 4-year period. We classified patients according to neurologic examination results, use of intoxicants, anticoagulation status, and initial head CT findings. We then developed brain injury guidelines (BIG) based on the individual patient's need for observation or hospitalization, RHCT, or neurosurgical consultation. A total of 1,232 patients had an abnormal head CT finding. In the BIG 1 category, no patients worsened clinically or radiographically or required any intervention. The BIG 2 category had radiographic worsening in 2.6% of patients. All patients who required neurosurgical intervention (13%) were in BIG 3. There was excellent agreement between assigned BIG and verified BIG (κ = 0.98). We have proposed the BIG based on the patient's history, neurologic examination, and findings of the initial head CT scan. These guidelines must be used as a supplement to a good clinical examination while managing patients with traumatic brain injury. Prospective validation of the BIG is warranted before its widespread implementation. Epidemiologic study, level III.

  4. Database Resources of the BIG Data Center in 2018

    PubMed Central

    Xu, Xingjian; Hao, Lili; Zhu, Junwei; Tang, Bixia; Zhou, Qing; Song, Fuhai; Chen, Tingting; Zhang, Sisi; Dong, Lili; Lan, Li; Wang, Yanqing; Sang, Jian; Hao, Lili; Liang, Fang; Cao, Jiabao; Liu, Fang; Liu, Lin; Wang, Fan; Ma, Yingke; Xu, Xingjian; Zhang, Lijuan; Chen, Meili; Tian, Dongmei; Li, Cuiping; Dong, Lili; Du, Zhenglin; Yuan, Na; Zeng, Jingyao; Zhang, Zhewen; Wang, Jinyue; Shi, Shuo; Zhang, Yadong; Pan, Mengyu; Tang, Bixia; Zou, Dong; Song, Shuhui; Sang, Jian; Xia, Lin; Wang, Zhennan; Li, Man; Cao, Jiabao; Niu, Guangyi; Zhang, Yang; Sheng, Xin; Lu, Mingming; Wang, Qi; Xiao, Jingfa; Zou, Dong; Wang, Fan; Hao, Lili; Liang, Fang; Li, Mengwei; Sun, Shixiang; Zou, Dong; Li, Rujiao; Yu, Chunlei; Wang, Guangyu; Sang, Jian; Liu, Lin; Li, Mengwei; Li, Man; Niu, Guangyi; Cao, Jiabao; Sun, Shixiang; Xia, Lin; Yin, Hongyan; Zou, Dong; Xu, Xingjian; Ma, Lina; Chen, Huanxin; Sun, Yubin; Yu, Lei; Zhai, Shuang; Sun, Mingyuan; Zhang, Zhang; Zhao, Wenming; Xiao, Jingfa; Bao, Yiming; Song, Shuhui; Hao, Lili; Li, Rujiao; Ma, Lina; Sang, Jian; Wang, Yanqing; Tang, Bixia; Zou, Dong; Wang, Fan

    2018-01-01

    Abstract The BIG Data Center at Beijing Institute of Genomics (BIG) of the Chinese Academy of Sciences provides freely open access to a suite of database resources in support of worldwide research activities in both academia and industry. With the vast amounts of omics data generated at ever-greater scales and rates, the BIG Data Center is continually expanding, updating and enriching its core database resources through big-data integration and value-added curation, including BioCode (a repository archiving bioinformatics tool codes), BioProject (a biological project library), BioSample (a biological sample library), Genome Sequence Archive (GSA, a data repository for archiving raw sequence reads), Genome Warehouse (GWH, a centralized resource housing genome-scale data), Genome Variation Map (GVM, a public repository of genome variations), Gene Expression Nebulas (GEN, a database of gene expression profiles based on RNA-Seq data), Methylation Bank (MethBank, an integrated databank of DNA methylomes), and Science Wikis (a series of biological knowledge wikis for community annotations). In addition, three featured web services are provided, viz., BIG Search (search as a service; a scalable inter-domain text search engine), BIG SSO (single sign-on as a service; a user access control system to gain access to multiple independent systems with a single ID and password) and Gsub (submission as a service; a unified submission service for all relevant resources). All of these resources are publicly accessible through the home page of the BIG Data Center at http://bigd.big.ac.cn. PMID:29036542

  5. 78 FR 27863 - Fisheries of the Exclusive Economic Zone Off Alaska; Big Skate in the Central Regulatory Area of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-13

    .... 120918468-3111-02] RIN 0648-XC673 Fisheries of the Exclusive Economic Zone Off Alaska; Big Skate in the... prohibiting retention of big skate in the Central Regulatory Area of the Gulf of Alaska (GOA). This action is necessary because the 2013 total allowable catch of big skate in the Central Regulatory Area of the GOA has...

  6. 77 FR 75399 - Fisheries of the Exclusive Economic Zone Off Alaska; Big Skate in the Central Regulatory Area of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-20

    .... 111207737-2141-02] RIN 0648-XC405 Fisheries of the Exclusive Economic Zone Off Alaska; Big Skate in the... prohibiting retention of big skate in the Central Regulatory Area of the Gulf of Alaska (GOA). This action is necessary because the 2012 total allowable catch of big skate in the Central Regulatory Area of the GOA has...

  7. THE FASTEST OODA LOOP: THE IMPLICATIONS OF BIG DATA FOR AIR POWER

    DTIC Science & Technology

    2016-06-01

    AIR COMMAND AND STAFF COLLEGE, AIR UNIVERSITY. THE FASTEST OODA LOOP: THE IMPLICATIONS OF BIG DATA FOR AIR POWER, by Aaron J. Dove, Maj, USAF ... Use of Big Data Thus Far ... The Big Data Boost To The OODA Loop ... processed with enough accuracy that it required minimal to no human or man-in-the-loop vetting of the information through Command and Control (C2

  8. LLMapReduce: Multi-Lingual Map-Reduce for Supercomputing Environments

    DTIC Science & Technology

    2015-11-20

    1990s. Popularized by Google [36] and Apache Hadoop [37], map-reduce has become a staple technology of the ever-growing big data community ... Lexington, MA, U.S.A. Abstract— The map-reduce parallel programming model has become extremely popular in the big data community. Many big data ... to big data users running on a supercomputer. LLMapReduce dramatically simplifies map-reduce programming by providing simple parallel programming
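The map-reduce model this record refers to can be sketched in a few lines: a map phase emits key-value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. This single-process word count is only an illustration of the model, not LLMapReduce or Hadoop itself; real systems distribute these same three phases across many machines.

```python
# Minimal single-process illustration of the map-reduce model
# (word count, the canonical example).
from collections import defaultdict

def map_phase(documents):
    """Map: emit (word, 1) pairs from each input record."""
    for doc in documents:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    """Shuffle: group all emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate each key's list of values."""
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big ideas", "data pipelines"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # {'big': 2, 'data': 2, 'ideas': 1, 'pipelines': 1}
```

Because each map call touches only one record and each reduce call only one key, both phases parallelize trivially, which is what makes the model attractive for big data workloads.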

  9. Big Data in industry

    NASA Astrophysics Data System (ADS)

    Latinović, T. S.; Preradović, D. M.; Barz, C. R.; Latinović, M. T.; Petrica, P. P.; Pop-Vadean, A.

    2016-08-01

    The amount of data at the global level has grown exponentially. Along with this phenomenon, we need new units of measure such as the exabyte, zettabyte, and yottabyte to express the amount of data. The growth of data creates a situation in which classic systems for the collection, storage, processing, and visualization of data are losing the battle with the large volume, speed, and variety of data that is generated continuously. Much of this data is created by the Internet of Things, IoT (cameras, satellites, cars, GPS navigation, etc.). It is our challenge to come up with new technologies and tools for the management and exploitation of these large amounts of data. Big Data has been a hot topic in IT circles in recent years. Moreover, Big Data is increasingly recognized in the business world and in public administration. This paper proposes an ontology of big data analytics and examines how to enhance business intelligence through big data analytics as a service by presenting a big data analytics service-oriented architecture. This paper also discusses the interrelationship between business intelligence and big data analytics. The proposed approach might facilitate the research and development of business analytics, big data analytics, and business intelligence as well as intelligent agents.

  10. Engaging the World: Music Education and the Big Ideas

    ERIC Educational Resources Information Center

    Richardson, Carol P.

    2007-01-01

    In this paper I address the distance between our practices as music educators and the democratic issues of equity, social justice and social consciousness. I first explore issues of elitism, identity politics, and our natural aversion to change. I then propose several approaches that we as university faculty may take through our curricula and…

  11. Identity Pole: Confronting Issues of Personal and Cultural Meaning

    ERIC Educational Resources Information Center

    Ciminero, Sandra Elser

    2011-01-01

    The purpose of the "Identity Pole" was to explore the big idea of identity. Students would confront issues of personal and cultural meaning, and draw upon interdisciplinary connections for inspiration. The author chose to present totem poles of the Northwest Coast Native Americans/First Nations of Canada, as well as school, state and national…

  12. Online Education: "76 Trombones and a Big Parade."

    ERIC Educational Resources Information Center

    Tillman, J. Jeffrey

    2002-01-01

    Explores whether there are good educational reasons for the popularity of online education, asserting that perhaps the same issues are at stake as are found in "The Music Man"--business at the heart of education. (EV)

  13. AirMSPI PODEX BigSur Terrain Images

    Atmospheric Science Data Center

    2013-12-13

    Browse Images from the PODEX 2013 Campaign: Big Sur target (Big Sur, California), 02/03/2013, Terrain-projected. For more information, see the Data Product Specifications (DPS).

  14. 12. INTERIOR OF NORTH END ENCLOSED SCREEN PORCH. DOUBLE FRENCH ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    12. INTERIOR OF NORTH END ENCLOSED SCREEN PORCH. DOUBLE FRENCH DOORS LEAD TO BEDROOM #3. VIEW TO EAST. - Big Creek Hydroelectric System, Powerhouse 8, Operator Cottage, Big Creek, Big Creek, Fresno County, CA

  15. [Chapter 4. Governing Big Data for Health, national and international issues].

    PubMed

    Rial-Sebbag, Emmanuelle

    2017-10-27

    The use of health data is increasingly seen as a central issue for research and also for care. The generation of these data adds value to the conduct of large-scale studies; it is even considered an (r)evolution in research methodology and in personalized medicine. Several factors have accelerated the use of health data (advances in genetics, technology, and the diversification of sources), leading to a re-questioning of the legal principles for the protection of health data in both French and European law. Indeed, first, the massive production of data (Big Data) in the field of health affects the quantity and the quality of the data, which consequently reconfigures the tools for protecting private life and managing informational risk. Second, the use of these data is based on existing fundamental principles while raising new challenges for their governance.

  16. Residual corneal stroma in big-bubble deep anterior lamellar keratoplasty: a histological study in eye-bank corneas.

    PubMed

    McKee, Hamish D; Irion, Luciane C D; Carley, Fiona M; Jhanji, Vishal; Brahma, Arun K

    2011-10-01

    To determine if residual corneal stroma remains on the recipient posterior lamella in big-bubble deep anterior lamellar keratoplasty (DALK). Pneumodissection using the big-bubble technique was carried out on eye-bank corneas mounted on an artificial anterior chamber. Samples that had a successful big-bubble formation were sent for histological evaluation to determine if any residual stroma remained on the Descemet membrane (DM). Big-bubble formation was achieved in 32 donor corneas. Two distinct types of big-bubble were seen: the bubble had either a white margin (30 corneas) or a clear margin (two corneas). The posterior lamellae of all the white margin corneas showed residual stroma on DM with a mean central thickness of 7.0 μm (range 2.6-17.4 μm). The clear margin corneas showed no residual stroma on DM. It should no longer be assumed that big-bubble DALK, where the bubble has a white margin, routinely bares DM. True baring of DM may only occur with the less commonly seen clear margin bubble.

  17. Solution structure of the Big domain from Streptococcus pneumoniae reveals a novel Ca2+-binding module

    PubMed Central

    Wang, Tao; Zhang, Jiahai; Zhang, Xuecheng; Xu, Chao; Tu, Xiaoming

    2013-01-01

    Streptococcus pneumoniae is a pathogen causing acute respiratory infection, otitis media and some other severe diseases in humans. In this study, the solution structure of a bacterial immunoglobulin-like (Big) domain from a putative S. pneumoniae surface protein, SP0498, was determined by NMR spectroscopy. The SP0498 Big domain adopts an eight-β-strand barrel-like fold, which differs in some aspects from the two-sheet sandwich-like fold of canonical Ig-like domains. Intriguingly, we identified the SP0498 Big domain as a Ca2+-binding domain. The structure of the Big domain is different from those of the well-known Ca2+-binding domains, therefore revealing a novel Ca2+-binding module. Furthermore, we identified the critical residues responsible for the binding to Ca2+. We are the first to report the interactions between the Big domain and Ca2+ in structural terms, suggesting an important role of the Big domain in many essential calcium-dependent cellular processes such as pathogenesis. PMID:23326635

  18. Analyzing Big Data in Psychology: A Split/Analyze/Meta-Analyze Approach

    PubMed Central

    Cheung, Mike W.-L.; Jak, Suzanne

    2016-01-01

    Big data is a field that has traditionally been dominated by disciplines such as computer science and business, where mainly data-driven analyses have been performed. Psychology, a discipline in which a strong emphasis is placed on behavioral theories and empirical research, has the potential to contribute greatly to the big data movement. However, one challenge to psychologists—and probably the most crucial one—is that most researchers may not have the necessary programming and computational skills to analyze big data. In this study we argue that psychologists can also conduct big data research and that, rather than trying to acquire new programming and computational skills, they should focus on their strengths, such as performing psychometric analyses and testing theories using multivariate analyses to explain phenomena. We propose a split/analyze/meta-analyze approach that allows psychologists to easily analyze big data. Two real datasets are used to demonstrate the proposed procedures in R. A new research agenda related to the analysis of big data in psychology is outlined at the end of the study. PMID:27242639

  19. Analyzing Big Data in Psychology: A Split/Analyze/Meta-Analyze Approach.

    PubMed

    Cheung, Mike W-L; Jak, Suzanne

    2016-01-01

    Big data is a field that has traditionally been dominated by disciplines such as computer science and business, where mainly data-driven analyses have been performed. Psychology, a discipline in which a strong emphasis is placed on behavioral theories and empirical research, has the potential to contribute greatly to the big data movement. However, one challenge to psychologists-and probably the most crucial one-is that most researchers may not have the necessary programming and computational skills to analyze big data. In this study we argue that psychologists can also conduct big data research and that, rather than trying to acquire new programming and computational skills, they should focus on their strengths, such as performing psychometric analyses and testing theories using multivariate analyses to explain phenomena. We propose a split/analyze/meta-analyze approach that allows psychologists to easily analyze big data. Two real datasets are used to demonstrate the proposed procedures in R. A new research agenda related to the analysis of big data in psychology is outlined at the end of the study.
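The split/analyze/meta-analyze approach described above can be sketched compactly: split the dataset into chunks, compute an estimate (with its sampling variance) per chunk, then pool the chunk estimates. The choices below (the sample mean as the statistic, fixed-effect inverse-variance pooling, Python rather than the authors' R) are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch of a split/analyze/meta-analyze pipeline using the
# sample mean as the per-chunk statistic and fixed-effect
# inverse-variance weighting for the pooling step.
import statistics

def split(data, n_chunks):
    """Split: partition the dataset into roughly equal chunks."""
    size = len(data) // n_chunks
    return [data[i * size:(i + 1) * size] for i in range(n_chunks)]

def analyze(chunk):
    """Analyze: per-chunk estimate (mean) and its sampling variance."""
    mean = statistics.fmean(chunk)
    var = statistics.variance(chunk) / len(chunk)
    return mean, var

def meta_analyze(estimates):
    """Meta-analyze: fixed-effect inverse-variance weighted mean."""
    weights = [1.0 / v for _, v in estimates]
    pooled = sum(w * m for (m, _), w in zip(estimates, weights)) / sum(weights)
    return pooled

data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
pooled = meta_analyze([analyze(c) for c in split(data, 2)])
print(round(pooled, 2))  # 4.5
```

The appeal for psychologists, as the abstract argues, is that each chunk fits in ordinary memory and the analyze step can be any familiar statistical routine; only the final pooling step borrows machinery from meta-analysis.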

  20. Current applications of big data in obstetric anesthesiology.

    PubMed

    Klumpner, Thomas T; Bauer, Melissa E; Kheterpal, Sachin

    2017-06-01

    This narrative review aims to highlight several recently published 'big data' studies pertinent to the field of obstetric anesthesiology. Big data has been used to study rare outcomes, to identify trends within the healthcare system, to identify variations in practice patterns, and to highlight potential inequalities in obstetric anesthesia care. Big data studies have helped define the risk of rare complications of obstetric anesthesia, such as the risk of neuraxial hematoma in thrombocytopenic parturients. Also, large national databases have been used to better understand trends in anesthesia-related adverse events during cesarean delivery as well as outline potential racial/ethnic disparities in obstetric anesthesia care. Finally, real-time analysis of patient data across a number of disparate health information systems through the use of sophisticated clinical decision support and surveillance systems is one promising application of big data technology on the labor and delivery unit. 'Big data' research has important implications for obstetric anesthesia care and warrants continued study. Real-time electronic surveillance is a potentially useful application of big data technology on the labor and delivery unit.

  1. Physical properties of superbulky lanthanide metallocenes: synthesis and extraordinary luminescence of [Eu(II)(Cp(BIG))2] (Cp(BIG) = (4-nBu-C6H4)5-cyclopentadienyl).

    PubMed

    Harder, Sjoerd; Naglav, Dominik; Ruspic, Christian; Wickleder, Claudia; Adlung, Matthias; Hermes, Wilfried; Eul, Matthias; Pöttgen, Rainer; Rego, Daniel B; Poineau, Frederic; Czerwinski, Kenneth R; Herber, Rolfe H; Nowik, Israel

    2013-09-09

    The superbulky deca-aryleuropocene [Eu(Cp(BIG))2], Cp(BIG) = (4-nBu-C6H4)5-cyclopentadienyl, was prepared by reaction of [Eu(dmat)2(thf)2], DMAT = 2-Me2N-α-Me3Si-benzyl, with two equivalents of Cp(BIG)H. Recrystallization from cold hexane gave the product with a surprisingly bright and efficient orange emission (45% quantum yield). The crystal structure is isomorphic to those of [M(Cp(BIG))2] (M = Sm, Yb, Ca, Ba) and shows the typical distortions that arise from Cp(BIG)⋅⋅⋅Cp(BIG) attraction as well as an excessively large displacement parameter for the heavy Eu atom (U(eq) = 0.075). In order to gain information on the true oxidation state of the central metal in superbulky metallocenes [M(Cp(BIG))2] (M = Sm, Eu, Yb), several physical analyses were applied. Temperature-dependent magnetic susceptibility data of [Yb(Cp(BIG))2] show diamagnetism, indicating stable divalent ytterbium. Temperature-dependent (151)Eu Mössbauer effect spectroscopy of [Eu(Cp(BIG))2] was carried out over the temperature range 93-215 K, and the hyperfine and dynamical properties of the Eu(II) species are discussed in detail. The mean square amplitude of vibration of the Eu atom as a function of temperature was determined and compared to the value extracted from the single-crystal X-ray data at 203 K. The large difference between these two values was ascribed to the presence of static disorder and/or the presence of low-frequency torsional and librational modes in [Eu(Cp(BIG))2]. X-ray absorption near edge spectroscopy (XANES) showed that all three [Ln(Cp(BIG))2] (Ln = Sm, Eu, Yb) compounds are divalent. The XANES white-line spectra are at 8.3, 7.3, and 7.8 eV for Sm, Eu, and Yb, respectively, lower than the Ln2O3 standards. No XANES temperature dependence was found from room temperature to 100 K. XANES also showed that the [Ln(Cp(BIG))2] complexes had less trivalent impurity than a [EuI2(thf)x] standard. 
The complex [Eu(Cp(BIG))2] shows already at room temperature strong orange photoluminescence (quantum yield: 45 %): excitation at 412 nm (24,270 cm(-1)) gives a symmetrical single band in the emission spectrum at 606 nm (νmax =16495 cm(-1), FWHM: 2090 cm(-1), Stokes-shift: 2140 cm(-1)), which is assigned to a 4f(6)5d(1) → 4f(7) transition of Eu(II). These remarkable values compare well to those for Eu(II)-doped ionic host lattices and are likely caused by the rigidity of the [Eu(Cp(BIG))2] complex. Sharp emission signals, typical for Eu(III), are not visible. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Flexibility in faculty work-life policies at medical schools in the Big Ten conference.

    PubMed

    Welch, Julie L; Wiehe, Sarah E; Palmer-Smith, Victoria; Dankoski, Mary E

    2011-05-01

    Women lag behind men in several key academic indicators, such as advancement, retention, and securing leadership positions. Although reasons for these disparities are multifactorial, policies that do not support work-life integration contribute to the problem. The objective of this descriptive study was to compare the faculty work-life policies among medical schools in the Big Ten conference. Each institution's website was accessed in order to assess its work-life policies in the following areas: maternity leave, paternity leave, adoption leave, extension of probationary period, part-time appointments, part-time benefits (specifically health insurance), child care options, and lactation policy. Institutions were sent requests to validate the online data and supply additional information if needed. Each institution received an overall score and subscale scores for family leave policies and part-time issues. Data were verified by the human resources office at 8 of the 10 schools. Work-life policies varied among Big Ten schools, with total scores between 9.25 and 13.5 (possible score: 0-21; higher scores indicate greater flexibility). Subscores were not consistently high or low within schools. Comparing the flexibility of faculty work-life policies in relation to other schools will help raise awareness of these issues and promote more progressive policies among less progressive schools. Ultimately, flexible policies will lead to greater equity and institutional cultures that are conducive to recruiting, retaining, and advancing diverse faculty.

  3. Society's symbolic order and political trials: toward sacrificing the self for the "Big Other".

    PubMed

    Lubin, Avi

    2005-12-01

The need to establish a borderline between legitimate and illegitimate political trials is one of the central societal discourses. In this paper the author claims that the issues are complex: a political trial can remain legitimate as long as it does not confront the symbolic order on which society (and the court itself) is founded, and as long as the subject (or action) it deals with does not threaten the existence of that symbolic order (the "Big Other"). When the symbolic order's existence is in danger, the court is bound to participate in an act of "sacrifice" intended to protect the "order." The author uses Jacques Lacan's psychoanalytic theory of the "Big Other" (and its extension into ideological-political terms) to examine three categories of sacrifice. Through these categories the author claims that in extreme cases of confrontation with the existence of the symbolic order, the court cannot remain objective, and it would be difficult to justify the trial as legitimate (especially in historical perspective).

  4. Big data management challenges in health research-a literature review.

    PubMed

    Wang, Xiaoming; Williams, Carolyn; Liu, Zhen Hua; Croghan, Joe

    2017-08-07

    Big data management for information centralization (i.e. making data of interest findable) and integration (i.e. making related data connectable) in health research is a defining challenge in biomedical informatics. While essential to create a foundation for knowledge discovery, optimized solutions to deliver high-quality and easy-to-use information resources are not thoroughly explored. In this review, we identify the gaps between current data management approaches and the need for new capacity to manage big data generated in advanced health research. Focusing on these unmet needs and well-recognized problems, we introduce state-of-the-art concepts, approaches and technologies for data management from computing academia and industry to explore improvement solutions. We explain the potential and significance of these advances for biomedical informatics. In addition, we discuss specific issues that have a great impact on technical solutions for developing the next generation of digital products (tools and data) to facilitate the raw-data-to-knowledge process in health research. Published by Oxford University Press 2017. This work is written by US Government employees and is in the public domain in the US.

  5. Big Data breaking barriers - first steps on a long trail

    NASA Astrophysics Data System (ADS)

    Schade, S.

    2015-04-01

Most data sets and streams have a geospatial component; some even claim that about 80% of all data is related to location. In the era of Big Data this number may even be an underestimate, as data sets interrelate and initially non-spatial data becomes indirectly geo-referenced. The optimal treatment of Big Data thus requires advanced methods and technologies for handling the geospatial aspects of data storage, processing, pattern recognition, prediction, visualisation and exploration. On the one hand, our work draws on earth and environmental sciences for existing interoperability standards and for the foundational data structures, algorithms and software required for these geospatial information handling tasks. On the other hand, we are concerned with the arising need to combine human analysis capacities (intelligence augmentation) with machine power (artificial intelligence). This paper provides an overview of the emerging landscape and outlines our (Digital Earth) vision for addressing the upcoming issues. In particular, we call for projecting and re-using existing environmental, earth observation and remote sensing expertise in other sectors, i.e. breaking down the barriers of these silos by investigating integrated applications.

  6. Controversies in the Hydrosphere: an iBook exploring current global water issues for middle school classrooms

    NASA Astrophysics Data System (ADS)

    Dufoe, A.; Guertin, L. A.

    2012-12-01

    This project looks to help teachers utilize iPad technology in their classrooms as an instructional tool for Earth system science and connections to the Big Ideas in Earth Science. The project is part of Penn State University's National Science Foundation (NSF) Targeted Math Science Partnership grant, with one goal of the grant to help current middle school teachers across Pennsylvania engage students with significant and complex questions of Earth science. The free Apple software iBooks Author was used to create an electronic book for the iPad, focusing on a variety of controversial issues impacting the hydrosphere. The iBook includes image slideshows, embedded videos, interactive images and quizzes, and critical thinking questions along Bloom's Taxonomic Scale of Learning Objectives. Outlined in the introductory iBook chapters are the Big Ideas of Earth System Science and an overview of Earth's spheres. Since the book targets the hydrosphere, each subsequent chapter focuses on specific water issues, including glacial melts, aquifer depletion, coastal oil pollution, marine debris, and fresh-water chemical contamination. Each chapter is presented in a case study format that highlights the history of the issue, the development and current status of the issue, and some solutions that have been generated. The next section includes critical thinking questions in an open-ended discussion format that focus on the Big Ideas, proposing solutions for rectifying the situation, and/or assignments specifically targeting an idea presented in the case study chapter. Short, comprehensive multiple-choice quizzes are also in each chapter. Throughout the iBook, students are free to watch videos, explore the content and form their own opinions. As a result, this iBook fulfills the grant objective by engaging teachers and students with an innovative technological presentation that incorporates Earth system science with current case studies regarding global water issues.

  7. ARTIST CONCEPT - BIG JOE

    NASA Image and Video Library

    1963-09-01

    S63-19317 (October 1963) --- Pen and ink views of comparative arrangements of several capsules including the existing "Big Joe" design, the compromise "Big Joe" design, and the "Little Joe". All capsule designs are labeled and include dimensions. Photo credit: NASA

  8. Big data analytics to aid developing livable communities.

    DOT National Transportation Integrated Search

    2015-12-31

In transportation, ubiquitous deployment of low-cost sensors combined with powerful computer hardware and high-speed networks makes big data available. USDOT defines big data research in transportation as a number of advanced techniques applied to...

  9. 14. LIVING ROOM INTERIOR SHOWING WEST SIDE AND SOUTH END ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    14. LIVING ROOM INTERIOR SHOWING WEST SIDE AND SOUTH END DOUBLE FRENCH DOORS, AND FIBERBOARD WALLS. VIEW TO SOUTHWEST. - Big Creek Hydroelectric System, Powerhouse 8, Operator Cottage, Big Creek, Big Creek, Fresno County, CA

  10. 29. BEDROOM #3 INTERIOR SHOWING DOUBLE FRENCH DOORS TO SCREENED ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    29. BEDROOM #3 INTERIOR SHOWING DOUBLE FRENCH DOORS TO SCREENED PORCH AND FIVE-PANELED DOOR TO HALL. VIEW TO WEST. - Big Creek Hydroelectric System, Powerhouse 8, Operator Cottage, Big Creek, Big Creek, Fresno County, CA

  11. A Hierarchical Visualization Analysis Model of Power Big Data

    NASA Astrophysics Data System (ADS)

    Li, Yongjie; Wang, Zheng; Hao, Yang

    2018-01-01

Based on the concept of integrating VR scenes with power big data analysis, a hierarchical visualization analysis model of power big data is proposed, in which levels are designed for distinct abstract modules such as transaction, engine, computation, control and storage. The normally separate modules for storing, mining and analyzing, and visualizing power data are integrated into one platform by this model, providing a visual analysis solution for power big data.

  12. Big Data in the Earth Observing System Data and Information System

    NASA Technical Reports Server (NTRS)

    Lynnes, Chris; Baynes, Katie; McInerney, Mark

    2016-01-01

    Approaches that are being pursued for the Earth Observing System Data and Information System (EOSDIS) data system to address the challenges of Big Data were presented to the NASA Big Data Task Force. Cloud prototypes are underway to tackle the volume challenge of Big Data. However, advances in computer hardware or cloud won't help (much) with variety. Rather, interoperability standards, conventions, and community engagement are the key to addressing variety.

  13. Visualization of Big Data Through Ship Maintenance Metrics Analysis for Fleet Maintenance and Revitalization

    DTIC Science & Technology

    2014-03-01

VISUALIZATION OF BIG DATA THROUGH SHIP MAINTENANCE METRICS ANALYSIS FOR FLEET MAINTENANCE AND REVITALIZATION, by Isaac J. Donaldson; Master's thesis, March 2014. Assessing the overall performance of ship maintenance processes is clearly a big data problem, given the current process for presenting data on the more than...

  14. Portable Map-Reduce Utility for MIT SuperCloud Environment

    DTIC Science & Technology

    2015-09-17

The MIT SuperCloud big data architecture, which is designed to address these challenges, is made up of computing resources, a scheduler, a central storage file system, databases, analytics software and web interfaces [1]; these components are common to many big data and supercomputing systems. Cited works include A. Reuther, A. Rosa, C. Yee, "Driving Big Data With Big Compute," IEEE HPEC, Sep 10-12, 2012, Waltham, MA, and the Apache Hadoop 1.2.1 HDFS documentation.

  15. [Utilization of Big Data in Medicine and Future Outlook].

    PubMed

    Kinosada, Yasutomi; Uematsu, Machiko; Fujiwara, Takuya

    2016-03-01

"Big data" is a new buzzword. The point is not to be dazzled by the volume of data, but to analyze it and convert it into insights, innovations, and business value; there are also real differences between conventional analytics and big data analytics. In this article, we show some results of big data analysis using open DPC (Diagnosis Procedure Combination) data from the central part of Japan: Toyama, Ishikawa, Fukui, Nagano, Gifu, Aichi, Shizuoka, and Mie Prefectures. These 8 prefectures contain 51 medical administration areas called secondary medical areas. By applying big data analysis techniques such as k-means, hierarchical clustering, and self-organizing maps to DPC data, we can visualize the disease structure and detect similarities or variations among the 51 secondary medical areas. The combination of big data analysis techniques and open DPC data is a very powerful method to depict real figures on patient distribution in Japan.
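The clustering step this abstract describes can be illustrated with a short sketch. This is not the study's code or data: the 51 "areas" and their five-category disease-mix vectors are synthetic stand-ins for DPC data, and the k-means routine is a minimal NumPy implementation.

```python
import numpy as np

# Synthetic stand-in for DPC data: 51 "secondary medical areas", each a
# 5-category disease-mix vector drawn from one of three regional profiles.
rng = np.random.default_rng(0)
profiles = np.array([[.5, .2, .1, .1, .1],
                     [.1, .5, .2, .1, .1],
                     [.1, .1, .2, .5, .1]])
true_labels = rng.integers(0, 3, size=51)
X = profiles[true_labels] + rng.normal(0, 0.02, size=(51, 5))

def kmeans(X, k, iters=50, seed=1):
    """Minimal k-means: assign to nearest center, recompute centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(X, k=3)
print(np.bincount(labels, minlength=3))  # cluster sizes among the 51 areas
```

Areas grouped into the same cluster share a similar disease structure, which is the kind of similarity-among-regions pattern the study visualizes.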

  16. BIG: a large-scale data integration tool for renal physiology

    PubMed Central

    Zhao, Yue; Yang, Chin-Rang; Raghuram, Viswanathan; Parulekar, Jaya

    2016-01-01

    Due to recent advances in high-throughput techniques, we and others have generated multiple proteomic and transcriptomic databases to describe and quantify gene expression, protein abundance, or cellular signaling on the scale of the whole genome/proteome in kidney cells. The existence of so much data from diverse sources raises the following question: “How can researchers find information efficiently for a given gene product over all of these data sets without searching each data set individually?” This is the type of problem that has motivated the “Big-Data” revolution in Data Science, which has driven progress in fields such as marketing. Here we present an online Big-Data tool called BIG (Biological Information Gatherer) that allows users to submit a single online query to obtain all relevant information from all indexed databases. BIG is accessible at http://big.nhlbi.nih.gov/. PMID:27279488

  17. Commentary: Epidemiology in the era of big data.

    PubMed

    Mooney, Stephen J; Westreich, Daniel J; El-Sayed, Abdulrahman M

    2015-05-01

    Big Data has increasingly been promoted as a revolutionary development in the future of science, including epidemiology. However, the definition and implications of Big Data for epidemiology remain unclear. We here provide a working definition of Big Data predicated on the so-called "three V's": variety, volume, and velocity. From this definition, we argue that Big Data has evolutionary and revolutionary implications for identifying and intervening on the determinants of population health. We suggest that as more sources of diverse data become publicly available, the ability to combine and refine these data to yield valid answers to epidemiologic questions will be invaluable. We conclude that while epidemiology as practiced today will continue to be practiced in the Big Data future, a component of our field's future value lies in integrating subject matter knowledge with increased technical savvy. Our training programs and our visions for future public health interventions should reflect this future.

  18. Epidemiology in the Era of Big Data

    PubMed Central

    Mooney, Stephen J; Westreich, Daniel J; El-Sayed, Abdulrahman M

    2015-01-01

    Big Data has increasingly been promoted as a revolutionary development in the future of science, including epidemiology. However, the definition and implications of Big Data for epidemiology remain unclear. We here provide a working definition of Big Data predicated on the so-called ‘3 Vs’: variety, volume, and velocity. From this definition, we argue that Big Data has evolutionary and revolutionary implications for identifying and intervening on the determinants of population health. We suggest that as more sources of diverse data become publicly available, the ability to combine and refine these data to yield valid answers to epidemiologic questions will be invaluable. We conclude that, while epidemiology as practiced today will continue to be practiced in the Big Data future, a component of our field’s future value lies in integrating subject matter knowledge with increased technical savvy. Our training programs and our visions for future public health interventions should reflect this future. PMID:25756221

  19. Mind the Scales: Harnessing Spatial Big Data for Infectious Disease Surveillance and Inference.

    PubMed

    Lee, Elizabeth C; Asher, Jason M; Goldlust, Sandra; Kraemer, John D; Lawson, Andrew B; Bansal, Shweta

    2016-12-01

    Spatial big data have the velocity, volume, and variety of big data sources and contain additional geographic information. Digital data sources, such as medical claims, mobile phone call data records, and geographically tagged tweets, have entered infectious diseases epidemiology as novel sources of data to complement traditional infectious disease surveillance. In this work, we provide examples of how spatial big data have been used thus far in epidemiological analyses and describe opportunities for these sources to improve disease-mitigation strategies and public health coordination. In addition, we consider the technical, practical, and ethical challenges with the use of spatial big data in infectious disease surveillance and inference. Finally, we discuss the implications of the rising use of spatial big data in epidemiology to health risk communication, and public health policy recommendations and coordination across scales. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America.

  20. 'Big data' in mental health research: current status and emerging possibilities.

    PubMed

    Stewart, Robert; Davis, Katrina

    2016-08-01

    'Big data' are accumulating in a multitude of domains and offer novel opportunities for research. The role of these resources in mental health investigations remains relatively unexplored, although a number of datasets are in use and supporting a range of projects. We sought to review big data resources and their use in mental health research to characterise applications to date and consider directions for innovation in future. A narrative review. Clear disparities were evident in geographic regions covered and in the disorders and interventions receiving most attention. We discuss the strengths and weaknesses of the use of different types of data and the challenges of big data in general. Current research output from big data is still predominantly determined by the information and resources available and there is a need to reverse the situation so that big data platforms are more driven by the needs of clinical services and service users.

  1. BigData as a Driver for Capacity Building in Astrophysics

    NASA Astrophysics Data System (ADS)

    Shastri, Prajval

    2015-08-01

Exciting public interest in astrophysics acquires new significance in the era of Big Data. Since Big Data involves advanced technologies of both software and hardware, astrophysics with Big Data has the potential to inspire young minds with diverse inclinations - i.e., not just those attracted to physics but also those pursuing engineering careers. Digital technologies have become steadily cheaper, which can enable considerable expansion of the Big Data user pool, especially to communities that may not yet be in the astrophysics mainstream but have high potential because of access to these technologies. For success, however, capacity building at the early stages becomes key. The development of on-line pedagogical resources in astrophysics, astrostatistics, data-mining and data visualisation that are designed around the big facilities of the future can be an important effort that drives such capacity building, especially if facilitated by the IAU.

  2. Big Data and Large Sample Size: A Cautionary Note on the Potential for Bias

    PubMed Central

    Chambers, David A.; Glasgow, Russell E.

    2014-01-01

A number of commentaries have suggested that large studies are more reliable than smaller studies, and there is growing interest in the analysis of "big data" that integrates information from many thousands of persons and/or different data sources. We consider a variety of biases that are likely in the era of big data, including sampling error, measurement error, multiple-comparisons errors, aggregation error, and errors associated with the systematic exclusion of information. Using examples from epidemiology, health services research, studies on determinants of health, and clinical trials, we conclude that it is necessary to exercise greater caution to be sure that big sample size does not lead to big inferential errors. Despite the advantages of big studies, large sample size can magnify the bias associated with error resulting from sampling or study design. Clin Trans Sci 2014; Volume #: 1–5 PMID:25043853
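The central caution of this abstract, that a large sample shrinks random error while leaving systematic error untouched, can be illustrated with a toy simulation. The logistic self-selection mechanism below is hypothetical, not an example from the paper.

```python
import numpy as np

# Toy illustration: under a biased sampling mechanism, a bigger sample
# gives a more *precise* estimate of the wrong quantity -- the sampling
# bias does not shrink as n grows.
rng = np.random.default_rng(42)
TRUE_MEAN = 0.0  # the population mean we are trying to estimate

def biased_sample(n):
    """Draw n observations under hypothetical logistic self-selection:
    units with larger values are more likely to be observed."""
    x = rng.normal(TRUE_MEAN, 1.0, size=5 * n)
    selected = rng.random(5 * n) < 1.0 / (1.0 + np.exp(-x))
    return x[selected][:n]

for n in (100, 100_000):
    est = biased_sample(n).mean()
    print(f"n={n}: estimate {est:+.3f} (truth {TRUE_MEAN:+.3f})")
```

Both estimates land well above the true mean of zero; the larger sample merely tightens the confidence interval around the biased value.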

  3. Research on the influencing factors of financing efficiency of big data industry based on panel data model--Empirical evidence from Guizhou province

    NASA Astrophysics Data System (ADS)

    Li, Chenggang; Feng, Yujia

    2018-03-01

This paper studies the factors influencing the financing efficiency of Guizhou's big data industry, using financial and macro data for 20 Guizhou big data enterprises from 2010 to 2016. The DEA model is used to obtain the financing efficiency of these enterprises, and a panel data model is constructed for six macro- and micro-level influencing factors. The results show that the external economic environment, total asset turnover, and growth in operating income and earnings per share have a positive impact on the financing efficiency of the big data industry in Guizhou; improving these factors is key to improving the financing efficiency of Guizhou big data enterprises.
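The DEA step this abstract mentions can be sketched as an input-oriented CCR model solved as a linear program: for each decision-making unit (enterprise), shrink its inputs by a factor theta while a convex cone of peers still matches its outputs. The three toy enterprises, their input, and their output below are invented stand-ins, not the study's Guizhou data.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr(X, Y):
    """Input-oriented CCR DEA. X: (n, m) inputs; Y: (n, s) outputs.
    Returns the efficiency score theta for each of the n DMUs."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for i in range(n):
        c = np.r_[1.0, np.zeros(n)]            # minimize theta
        A_ub = np.vstack([
            np.c_[-X[i], X.T],                 # sum_j lam_j x_j <= theta * x_i
            np.c_[np.zeros(s), -Y.T],          # sum_j lam_j y_j >= y_i
        ])
        b_ub = np.r_[np.zeros(m), -Y[i]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (n + 1))
        scores.append(res.x[0])
    return np.array(scores)

X = np.array([[2.0], [4.0], [8.0]])  # single input, e.g. funds raised
Y = np.array([[2.0], [4.0], [4.0]])  # single output, e.g. operating income
print(dea_ccr(X, Y).round(3))        # third DMU uses twice the input it needs
```

The first two units sit on the efficiency frontier (theta = 1); the third could produce the same output with half its input (theta = 0.5).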

  4. Internet Addiction of Young Greek Adults: Psychological Aspects and Information Privacy.

    PubMed

    Grammenos, P; Syrengela, N A; Magkos, E; Tsohou, A

    2017-01-01

The main goal of this study is to examine the Internet addiction status of Greek young adults, aged 18 to 25, using Young's Internet Addiction Test (IAT) and self-administered questionnaires. In addition, this paper assesses the psychological traits of addicted persons per addiction category, using the Big Five factor model to study users' personalities and analyze the components that lead a person to become Internet addicted. We found an association between addicted people and the five factors of the Big Five model, i.e., extraversion, agreeableness, conscientiousness, neuroticism, and openness to experience. Moreover, this paper discusses information privacy awareness issues related to Internet addiction treatment.

  5. Beyond simple charts: Design of visualizations for big health data

    PubMed Central

    Ola, Oluwakemi; Sedig, Kamran

    2016-01-01

    Health data is often big data due to its high volume, low veracity, great variety, and high velocity. Big health data has the potential to improve productivity, eliminate waste, and support a broad range of tasks related to disease surveillance, patient care, research, and population health management. Interactive visualizations have the potential to amplify big data’s utilization. Visualizations can be used to support a variety of tasks, such as tracking the geographic distribution of diseases, analyzing the prevalence of disease, triaging medical records, predicting outbreaks, and discovering at-risk populations. Currently, many health visualization tools use simple charts, such as bar charts and scatter plots, that only represent few facets of data. These tools, while beneficial for simple perceptual and cognitive tasks, are ineffective when dealing with more complex sensemaking tasks that involve exploration of various facets and elements of big data simultaneously. There is need for sophisticated and elaborate visualizations that encode many facets of data and support human-data interaction with big data and more complex tasks. When not approached systematically, design of such visualizations is labor-intensive, and the resulting designs may not facilitate big-data-driven tasks. Conceptual frameworks that guide the design of visualizations for big data can make the design process more manageable and result in more effective visualizations. In this paper, we demonstrate how a framework-based approach can help designers create novel, elaborate, non-trivial visualizations for big health data. We present four visualizations that are components of a larger tool for making sense of large-scale public health data. PMID:28210416

  6. Big Data Usage Patterns in the Health Care Domain: A Use Case Driven Approach Applied to the Assessment of Vaccination Benefits and Risks. Contribution of the IMIA Primary Healthcare Working Group.

    PubMed

    Liyanage, H; de Lusignan, S; Liaw, S-T; Kuziemsky, C E; Mold, F; Krause, P; Fleming, D; Jones, S

    2014-08-15

Generally, benefits and risks of vaccines can be determined from studies carried out as part of regulatory compliance, followed by surveillance of routine data; however, some rarer and longer-term events require new methods. Big data generated by increasingly affordable personalised computing and by pervasive computing devices is rapidly growing, and low-cost, high-volume cloud computing makes the processing of these data inexpensive. Our objective is to describe how big data and related analytical methods might be applied to assess the benefits and risks of vaccines. We reviewed the literature on the use of big data to improve health, applied to generic vaccine use cases that illustrate benefits and risks of vaccination. We defined a use case as the interaction between a user and an information system to achieve a goal, and used flu vaccination and pre-school childhood immunisation as exemplars. We reviewed three big data use cases relevant to assessing vaccine benefits and risks: (i) big data processing using crowdsourcing, distributed big data processing, and predictive analytics; (ii) data integration from heterogeneous big data sources, e.g. the increasing range of devices in the "internet of things"; and (iii) real-time monitoring, for the direct monitoring of epidemics as well as vaccine effects via social media and other data sources. Big data raises new ethical dilemmas, though its analysis methods can bring complementary real-time capabilities for monitoring epidemics and assessing vaccine benefit-risk balance.

  7. Beyond simple charts: Design of visualizations for big health data.

    PubMed

    Ola, Oluwakemi; Sedig, Kamran

    2016-01-01

    Health data is often big data due to its high volume, low veracity, great variety, and high velocity. Big health data has the potential to improve productivity, eliminate waste, and support a broad range of tasks related to disease surveillance, patient care, research, and population health management. Interactive visualizations have the potential to amplify big data's utilization. Visualizations can be used to support a variety of tasks, such as tracking the geographic distribution of diseases, analyzing the prevalence of disease, triaging medical records, predicting outbreaks, and discovering at-risk populations. Currently, many health visualization tools use simple charts, such as bar charts and scatter plots, that only represent few facets of data. These tools, while beneficial for simple perceptual and cognitive tasks, are ineffective when dealing with more complex sensemaking tasks that involve exploration of various facets and elements of big data simultaneously. There is need for sophisticated and elaborate visualizations that encode many facets of data and support human-data interaction with big data and more complex tasks. When not approached systematically, design of such visualizations is labor-intensive, and the resulting designs may not facilitate big-data-driven tasks. Conceptual frameworks that guide the design of visualizations for big data can make the design process more manageable and result in more effective visualizations. In this paper, we demonstrate how a framework-based approach can help designers create novel, elaborate, non-trivial visualizations for big health data. We present four visualizations that are components of a larger tool for making sense of large-scale public health data.

  8. Database Resources of the BIG Data Center in 2018.

    PubMed

    2018-01-04

    The BIG Data Center at Beijing Institute of Genomics (BIG) of the Chinese Academy of Sciences provides freely open access to a suite of database resources in support of worldwide research activities in both academia and industry. With the vast amounts of omics data generated at ever-greater scales and rates, the BIG Data Center is continually expanding, updating and enriching its core database resources through big-data integration and value-added curation, including BioCode (a repository archiving bioinformatics tool codes), BioProject (a biological project library), BioSample (a biological sample library), Genome Sequence Archive (GSA, a data repository for archiving raw sequence reads), Genome Warehouse (GWH, a centralized resource housing genome-scale data), Genome Variation Map (GVM, a public repository of genome variations), Gene Expression Nebulas (GEN, a database of gene expression profiles based on RNA-Seq data), Methylation Bank (MethBank, an integrated databank of DNA methylomes), and Science Wikis (a series of biological knowledge wikis for community annotations). In addition, three featured web services are provided, viz., BIG Search (search as a service; a scalable inter-domain text search engine), BIG SSO (single sign-on as a service; a user access control system to gain access to multiple independent systems with a single ID and password) and Gsub (submission as a service; a unified submission service for all relevant resources). All of these resources are publicly accessible through the home page of the BIG Data Center at http://bigd.big.ac.cn. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  9. Big Data Usage Patterns in the Health Care Domain: A Use Case Driven Approach Applied to the Assessment of Vaccination Benefits and Risks

    PubMed Central

    Liyanage, H.; Liaw, S-T.; Kuziemsky, C.; Mold, F.; Krause, P.; Fleming, D.; Jones, S.

    2014-01-01

Background: Generally, benefits and risks of vaccines can be determined from studies carried out as part of regulatory compliance, followed by surveillance of routine data; however, there are some rarer and longer-term events that require new methods. Big data generated by increasingly affordable personalised computing and by pervasive computing devices is rapidly growing, and low-cost, high-volume cloud computing makes the processing of these data inexpensive. Objective: To describe how big data and related analytical methods might be applied to assess the benefits and risks of vaccines. Method: We reviewed the literature on the use of big data to improve health, applied to generic vaccine use cases that illustrate benefits and risks of vaccination. We defined a use case as the interaction between a user and an information system to achieve a goal. We used flu vaccination and pre-school childhood immunisation as exemplars. Results: We reviewed three big data use cases relevant to assessing vaccine benefits and risks: (i) big data processing using crowd-sourcing, distributed big data processing, and predictive analytics; (ii) data integration from heterogeneous big data sources, e.g. the increasing range of devices in the "internet of things"; and (iii) real-time monitoring for the direct monitoring of epidemics as well as vaccine effects via social media and other data sources. Conclusions: Big data raises new ethical dilemmas, though its analysis methods can bring complementary real-time capabilities for monitoring epidemics and assessing vaccine benefit-risk balance. PMID:25123718

  10. Hedgehogs and foxes (and a bear)

    NASA Astrophysics Data System (ADS)

    Gibb, Bruce

    2017-02-01

    The chemical universe is big. Really big. You just won't believe how vastly, hugely, mind-bogglingly big it is. Bruce Gibb reminds us that it's somewhat messy too, and so we succeed by recognizing the limits of our knowledge.

  11. 76 FR 47141 - Big Horn County Resource Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-04

....us, with the words Big Horn County RAC in the subject line. Facsimiles may be sent to 307-674-2668. DEPARTMENT OF AGRICULTURE, Forest Service: Big Horn County Resource Advisory Committee. AGENCY: Forest Service, USDA.

  12. 12. ENCLOSED SLEEPING PORCH INTERIOR DETAIL SHOWING PULLDOWN STAIRCASE TO ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    12. ENCLOSED SLEEPING PORCH INTERIOR DETAIL SHOWING PULL-DOWN STAIRCASE TO ATTIC. VIEW TO SOUTHEAST. - Big Creek Hydroelectric System, Big Creek Town, Operator House, Orchard Avenue south of Huntington Lake Road, Big Creek, Fresno County, CA

  13. 31. HALL INTERIOR SHOWING SINGLE FRENCH DOOR TO NORTH SIDE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    31. HALL INTERIOR SHOWING SINGLE FRENCH DOOR TO NORTH SIDE SCREENED PORCH, AND TRAP-DOOR ACCESS TO ATTIC. VIEW TO NORTHEAST. - Big Creek Hydroelectric System, Powerhouse 8, Operator Cottage, Big Creek, Big Creek, Fresno County, CA

  14. 17. DINING ROOM INTERIOR SHOWING GROUP OF THREE 1 LIGHT ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    17. DINING ROOM INTERIOR SHOWING GROUP OF THREE 1 LIGHT OVER 1 LIGHT WINDOWS, AND DOORWAY INTO KITCHEN. VIEW TO EAST. - Big Creek Hydroelectric System, Powerhouse 8, Operator Cottage, Big Creek, Big Creek, Fresno County, CA

  15. Big-data-based edge biomarkers: study on dynamical drug sensitivity and resistance in individuals.

    PubMed

    Zeng, Tao; Zhang, Wanwei; Yu, Xiangtian; Liu, Xiaoping; Li, Meiyi; Chen, Luonan

    2016-07-01

    The big-data-based edge biomarker is a new concept for characterizing disease features from biomedical big data in a dynamical and network manner, which also provides alternative strategies to indicate disease status in single samples. This article gives a comprehensive review of big-data-based edge biomarkers for complex diseases in an individual patient, defined as biomarkers based on network information and high-dimensional data. Specifically, we first introduce the sources and structures of publicly accessible biomedical big data for edge biomarker and disease study. We show that biomedical big data are typically 'small-sample size in high-dimension space', i.e. small samples but with high dimensions on features (e.g. omics data) for each individual, in contrast to traditional big data in many other fields, characterized as 'large-sample size in low-dimension space', i.e. big samples but with low dimensions on features. Then, we demonstrate the concept, model and algorithm for edge biomarkers and, further, big-data-based edge biomarkers. Unlike conventional biomarkers, edge biomarkers, e.g. module biomarkers in module network rewiring-analysis, can predict the disease state by learning differential associations between molecules, rather than differential expressions of molecules, during disease progression or treatment in individual patients. In particular, in contrast to traditional biomarkers (including network and edge biomarkers), which use information on the molecules or edges (i.e. molecule-pairs) common across a population, big-data-based edge biomarkers are specific to each individual and thus can accurately evaluate the disease state by considering individual heterogeneity. Therefore, the measurement of big data in a high-dimensional space is required not only in the learning process but also in the diagnosing or predicting process for the tested individual. 
Finally, we provide a case study analyzing the temporal expression data from a malaria vaccine trial with big-data-based edge biomarkers from module network rewiring-analysis. The illustrative results show that the identified module biomarkers can accurately distinguish vaccinees with or without protection and outperform previously reported gene signatures in terms of effectiveness and efficiency. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
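    The differential-association idea behind edge biomarkers can be sketched in a few lines: score each gene pair (edge) by how much its co-expression changes between two conditions, rather than comparing single-gene expression levels. The gene names, expression values and the simple absolute-difference score below are hypothetical illustrations, not the authors' actual model.

    ```python
    # Sketch of "edge" scoring: compare the correlation of gene pairs
    # between two conditions instead of mean expression of single genes.
    from itertools import combinations
    from math import sqrt

    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sqrt(sum((a - mx) ** 2 for a in x))
        sy = sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    def edge_scores(expr_a, expr_b):
        """Score each gene pair by the change in co-expression between
        condition A and condition B (differential association)."""
        scores = {}
        for g1, g2 in combinations(expr_a, 2):
            r_a = pearson(expr_a[g1], expr_a[g2])
            r_b = pearson(expr_b[g1], expr_b[g2])
            scores[(g1, g2)] = abs(r_a - r_b)
        return scores

    # Toy data: gene1 and gene2 are positively correlated in condition A but
    # negatively in B, even though each gene's mean level is unchanged.
    cond_a = {"gene1": [1, 2, 3, 4], "gene2": [2, 4, 6, 8], "gene3": [5, 1, 4, 2]}
    cond_b = {"gene1": [4, 3, 2, 1], "gene2": [2, 4, 6, 8], "gene3": [5, 1, 4, 2]}

    scores = edge_scores(cond_a, cond_b)
    top_edge = max(scores, key=scores.get)
    print(top_edge, round(scores[top_edge], 2))  # → ('gene1', 'gene2') 2.0
    ```

    Note that a per-gene differential-expression test would miss this edge entirely, since neither gene's mean changes between conditions.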

  16. Breaking BAD: A Data Serving Vision for Big Active Data

    PubMed Central

    Carey, Michael J.; Jacobs, Steven; Tsotras, Vassilis J.

    2017-01-01

    Virtually all of today’s Big Data systems are passive in nature. Here we describe a project to shift Big Data platforms from passive to active. We detail a vision for a scalable system that can continuously and reliably capture Big Data to enable timely and automatic delivery of new information to a large pool of interested users as well as supporting analyses of historical information. We are currently building a Big Active Data (BAD) system by extending an existing scalable open-source BDMS (AsterixDB) in this active direction. This first paper zooms in on the Data Serving piece of the BAD puzzle, including its key concepts and user model. PMID:29034377

  17. Analysis of financing efficiency of big data industry in Guizhou province based on DEA models

    NASA Astrophysics Data System (ADS)

    Li, Chenggang; Pan, Kang; Luo, Cong

    2018-03-01

    Taking 20 listed enterprises of the big data industry in Guizhou province as samples, this paper uses the DEA method to evaluate the financing efficiency of the big data industry in Guizhou province. The results show that the pure technical efficiency of big data enterprises in Guizhou province is high, with a mean value of 0.925. The mean value of scale efficiency is 0.749, and the mean comprehensive efficiency is 0.693, so overall financing efficiency is low. Based on these results, this paper puts forward policy recommendations to improve the financing efficiency of the big data industry in Guizhou.
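    As a rough illustration of the DEA idea (not the authors' actual multi-input model): in the single-input, single-output case, the CCR efficiency score reduces to each unit's output/input ratio scaled by the best ratio in the sample. The firm names and figures below are invented.

    ```python
    # Minimal DEA sketch with hypothetical firms. In the general CCR model,
    # efficiency is found by solving a linear program per decision-making
    # unit (DMU); with one input and one output it reduces to a ratio
    # against the best performer in the sample.
    firms = {
        # name: (input: financing raised, output: revenue) -- illustrative
        "firm_a": (100.0, 80.0),
        "firm_b": (150.0, 150.0),
        "firm_c": (200.0, 120.0),
    }

    def ccr_efficiency(firms):
        ratios = {name: out / inp for name, (inp, out) in firms.items()}
        best = max(ratios.values())
        # Efficiency of 1.0 means the DMU lies on the efficient frontier.
        return {name: r / best for name, r in ratios.items()}

    eff = ccr_efficiency(firms)
    for name, e in sorted(eff.items()):
        print(name, round(e, 3))  # firm_b is efficient (1.0)
    ```

    A full multi-input, multi-output CCR model would instead solve one linear program per firm, e.g. with scipy.optimize.linprog.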

  18. Big Data and Analytics in Healthcare.

    PubMed

    Tan, S S-L; Gao, G; Koch, S

    2015-01-01

    This editorial is part of the Focus Theme of Methods of Information in Medicine on "Big Data and Analytics in Healthcare". The amount of data being generated in the healthcare industry is growing at a rapid rate. This has generated immense interest in leveraging the availability of healthcare data (and "big data") to improve health outcomes and reduce costs. However, the nature of healthcare data, and especially big data, presents unique challenges for processing and analysis. This Focus Theme aims to disseminate some novel approaches to address these challenges. More specifically, it presents approaches ranging from efficient methods of processing large clinical data to predictive models that could generate better predictions from healthcare data.

  19. Elevation Request Letter to Army - signed December 13, 1991

    EPA Pesticide Factsheets

    A request for review of the decision to issue a Section 404 permit (Loosahatchie River-Big Creek-25-TD) to the Tennessee Department of Transportation for the extension of the Paul Barret Parkway near Millington, Shelby County, Tennessee.

  20. Using Big Data Analytics to Address Mixtures Exposure

    EPA Science Inventory

    The assessment of chemical mixtures is a complex issue for regulators and health scientists. We propose that assessing chemical co-occurrence patterns and prevalence rates is a relatively simple yet powerful approach in characterizing environmental mixtures and mixtures exposure...

  1. The Storyboard's Big Picture

    NASA Technical Reports Server (NTRS)

    Malloy, Cheryl A.; Cooley, William

    2003-01-01

    At Science Applications International Corporation (SAIC), Cape Canaveral Office, we're using a project management tool that facilitates team communication, keeps our project team focused, streamlines work and identifies potential issues. What did it cost us to install the tool? Almost nothing.

  2. 75 FR 5758 - Bridger-Teton National Forest, Big Piney Ranger District, WY; Piney Creeks Vegetation Treatment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-04

    ... mechanical treatments and prescribed fire to regenerate healthy aspen and sagebrush and remove conifers... measures needed in addition to those prescribed in the Forest Plan. Preliminary Issues The Forest Service...

  3. Lexical Link Analysis Application: Improving Web Service to Acquisition Visibility Portal Phase III

    DTIC Science & Technology

    2015-04-30

    It is a supervised learning method but best for Big Data with low dimensions. It is an approximate inference good for Big Data and Hadoop... Each process produces large amounts of information (Big Data). There is a critical need for automation, validation, and discovery to help acquisition... can inform managers where areas might have higher program risk and how resource and big data management might affect the desired return on investment

  4. Quality Attribute-Guided Evaluation of NoSQL Databases: A Case Study

    DTIC Science & Technology

    2015-01-16

    evaluations of NoSQL databases specifically, and big data systems in general, that have become apparent during our study. Keywords—NoSQL, distributed... technology, namely that of big data, software systems [1]. At the heart of big data systems are a collection of database technologies that are more... born organizations such as Google and Amazon [3][4], along with those of numerous other big data innovators, have created a variety of open source and

  5. Identifying key climate and environmental factors affecting rates of post-fire big sagebrush (Artemisia tridentata) recovery in the northern Columbia Basin, USA

    USGS Publications Warehouse

    Shinneman, Douglas; McIlroy, Susan

    2016-01-01

    Sagebrush steppe of North America is considered highly imperilled, in part owing to increased fire frequency. Sagebrush ecosystems support numerous species, and it is important to understand those factors that affect rates of post-fire sagebrush recovery. We explored recovery of Wyoming big sagebrush (Artemisia tridentata ssp. wyomingensis) and basin big sagebrush (A. tridentata ssp. tridentata) communities following fire in the northern Columbia Basin (Washington, USA). We sampled plots across 16 fires that burned in big sagebrush communities from 5 to 28 years ago, and also sampled nearby unburned locations. Mixed-effects models demonstrated that density of large, mature big sagebrush plants and percentage cover of big sagebrush were higher with time since fire and in plots with more precipitation during the winter immediately following fire, but were lower when precipitation the next winter was higher than average, especially on soils with higher available water supply, and with greater post-fire mortality of mature big sagebrush plants. Bunchgrass cover 5 to 28 years after fire was predicted to be lower with higher cover of both shrubs and non-native herbaceous species, and only slightly higher with time. Post-fire recovery of big sagebrush in the northern Columbia Basin is a slow process that may require several decades on average, but faster recovery rates may occur under specific site and climate conditions.

  6. Empowering Personalized Medicine with Big Data and Semantic Web Technology: Promises, Challenges, and Use Cases.

    PubMed

    Panahiazar, Maryam; Taslimitehrani, Vahid; Jadhav, Ashutosh; Pathak, Jyotishman

    2014-10-01

    In healthcare, big data tools and technologies have the potential to create significant value by improving outcomes while lowering costs for each individual patient. Diagnostic images, genetic test results and biometric information are increasingly generated and stored in electronic health records, presenting challenges from data that is by nature high in volume, variety and velocity, thereby necessitating novel ways to store, manage and process big data. This presents an urgent need to develop new, scalable and expandable big data infrastructure and analytical methods that can enable healthcare providers to access knowledge for the individual patient, yielding better decisions and outcomes. In this paper, we briefly discuss the nature of big data and the role of the semantic web and data analysis in generating "smart data", which offers actionable information that supports better decisions for personalized medicine. In our view, the biggest challenge is to create a system that makes big data robust and smart for healthcare providers and patients, which can lead to more effective clinical decision-making, improved health outcomes, and, ultimately, managed healthcare costs. We highlight some of the challenges in using big data and propose the need for a semantic data-driven environment to address them. We illustrate our vision with practical use cases, and discuss a path for empowering personalized medicine using big data and semantic web technology.

  7. A Systematic Review of Techniques and Sources of Big Data in the Healthcare Sector.

    PubMed

    Alonso, Susel Góngora; de la Torre Díez, Isabel; Rodrigues, Joel J P C; Hamrioui, Sofiane; López-Coronado, Miguel

    2017-10-14

    The main objective of this paper is to present a review of existing research in the literature on Big Data sources and techniques in the health sector, and to identify which of these techniques are the most used in the prediction of chronic diseases. Academic databases and systems such as IEEE Xplore, Scopus, PubMed and Science Direct were searched, considering publication dates from 2006 to the present. Several search criteria were established, such as 'techniques' OR 'sources' AND 'Big Data' AND 'medicine' OR 'health', 'techniques' AND 'Big Data' AND 'chronic diseases', etc., and papers were selected based on their description of the techniques and sources of Big Data in healthcare. A total of 110 articles on techniques and sources of Big Data in health were found, of which only 32 were identified as relevant work. Many of the articles describe the Big Data platforms, sources and databases used, and identify the techniques most used in the prediction of chronic diseases. From the review of the analyzed research articles, it can be noticed that the sources and techniques of Big Data used in the health sector represent a relevant factor in terms of effectiveness, since they allow the application of predictive analysis techniques to tasks such as identification of patients at risk of readmission, prevention of hospital infections or chronic diseases, and obtaining predictive models of quality.

  8. [Big data in medicine and healthcare].

    PubMed

    Rüping, Stefan

    2015-08-01

    Healthcare is one of the business fields with the highest Big Data potential. According to the prevailing definition, Big Data refers to the fact that data today is often too large and heterogeneous, and changes too quickly, to be stored, processed, and transformed into value by previous technologies. Several technological trends drive Big Data: business processes are increasingly executed electronically, consumers produce more and more data themselves, e.g. in social networks, and digitalization is ever increasing. Currently, several new trends towards new data sources and innovative data analysis are appearing in medicine and healthcare. From the research perspective, omics research is one clear Big Data topic. In practice, electronic health records, free open data and the "quantified self" offer new perspectives for data analytics. Regarding analytics, significant advances have been made in information extraction from text data, which unlocks a lot of data from clinical documentation for analytics purposes. At the same time, medicine and healthcare are lagging behind in the adoption of Big Data approaches. This can be traced to particular problems regarding data complexity and organizational, legal, and ethical challenges. The growing uptake of Big Data in general, and first best-practice examples in medicine and healthcare in particular, indicate that innovative solutions will be coming. This paper gives an overview of the potential of Big Data in medicine and healthcare.

  9. 5. INTERIOR VIEW OF UPPER LEVEL ROOM OF THE CONTROL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    5. INTERIOR VIEW OF UPPER LEVEL ROOM OF THE CONTROL HOUSE LOCATED ON THE SOUTH END OF BIG TUJUNGA DAM SHOWING THE CONTROL PANEL. - Big Tujunga Dam, Control House, 809 West Big Tujunga Road, Sunland, Los Angeles County, CA

  10. 27. BEDROOM #2 INTERIOR SHOWING DOUBLE FRENCH DOORS TO SCREENED ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    27. BEDROOM #2 INTERIOR SHOWING DOUBLE FRENCH DOORS TO SCREENED PORCH AND UNUSUAL WINDOWED CLOSET THROUGH OPEN FIVE-PANELED DOOR. VIEW TO WEST. - Big Creek Hydroelectric System, Powerhouse 8, Operator Cottage, Big Creek, Big Creek, Fresno County, CA

  11. 16. DINING ROOM INTERIOR SHOWING DOUBLE DOOR ARCHWAY INTO LIVING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    16. DINING ROOM INTERIOR SHOWING DOUBLE DOOR ARCHWAY INTO LIVING ROOM AND DOUBLE FRENCH DOORS INTO SOUTH END SCREENED PORCH. VIEW TO SOUTHWEST. - Big Creek Hydroelectric System, Powerhouse 8, Operator Cottage, Big Creek, Big Creek, Fresno County, CA

  12. 30. BEDROOM #3 INTERIOR SHOWING 1 LIGHT OVER 1 LIGHT ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    30. BEDROOM #3 INTERIOR SHOWING 1 LIGHT OVER 1 LIGHT WINDOW ON EAST WALL AND PARTIALLY OPENED DOOR TO WINDOWED CLOSET. VIEW TO EAST. - Big Creek Hydroelectric System, Powerhouse 8, Operator Cottage, Big Creek, Big Creek, Fresno County, CA

  13. Psycho-informatics: Big Data shaping modern psychometrics.

    PubMed

    Markowetz, Alexander; Błaszkiewicz, Konrad; Montag, Christian; Switala, Christina; Schlaepfer, Thomas E

    2014-04-01

    For the first time in history, it is possible to study human behavior on a great scale and in fine detail simultaneously. Online services and ubiquitous computational devices, such as smartphones and modern cars, record our everyday activity. The resulting Big Data offers unprecedented opportunities for tracking and analyzing behavior. This paper hypothesizes the applicability and impact of Big Data technologies in the context of psychometrics, both for research and clinical applications. It first outlines the state of the art, including the severe shortcomings with respect to quality and quantity of the resulting data. It then presents a technological vision comprising (i) numerous data sources such as mobile devices and sensors, (ii) a central data store, and (iii) an analytical platform employing techniques from data mining and machine learning. To further illustrate the dramatic benefits of the proposed methodologies, the paper then outlines two current projects logging and analyzing smartphone usage: one attempts to quantify the severity of major depression dynamically; the other investigates (mobile) Internet addiction. Finally, the paper addresses some of the ethical issues inherent to Big Data technologies. In summary, the proposed approach is about to induce the single biggest methodological shift since the beginning of psychology or psychiatry. The resulting range of applications will dramatically shape the daily routines of researchers and medical practitioners alike. Indeed, transferring techniques from computer science to psychiatry and psychology is about to establish Psycho-Informatics, an entire research direction of its own. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. The association between plasma big endothelin-1 levels at admission and long-term outcomes in patients with atrial fibrillation.

    PubMed

    Wu, Shuang; Yang, Yan-Min; Zhu, Jun; Ren, Jia-Meng; Wang, Juan; Zhang, Han; Shao, Xing-Hui

    2018-05-01

    The prognostic role of big endothelin-1 (ET-1) in atrial fibrillation (AF) is unclear. We aimed to assess its predictive value in patients with AF. A total of 716 AF patients were enrolled and divided into two groups based on the optimal cut-off value of big ET-1 for predicting all-cause mortality. The primary outcomes were all-cause mortality and major adverse events (MAEs). Cox regression analysis and net reclassification improvement (NRI) analysis were performed to assess the predictive value of big ET-1 for these outcomes. With an optimal cut-off value of 0.55 pmol/L, 326 patients were classified into the high big ET-1 group. Cardiac dysfunction and left atrial dilation were factors related to high big ET-1 levels. During a median follow-up of 3 years, patients with big ET-1 ≥ 0.55 pmol/L had a notably higher risk of all-cause death (44.8% vs. 11.5%, p < 0.001), MAEs (51.8% vs. 17.4%, p < 0.001), cardiovascular death and major bleeding, and tended to have higher thromboembolic risk. After adjusting for confounding factors, a high big ET-1 level was an independent predictor of all-cause mortality (hazard ratio (HR) 2.11, 95% confidence interval (CI) 1.46-3.05; p < 0.001), MAEs (HR 2.05, 95% CI 1.50-2.80; p = 0.001), and cardiovascular death (HR 2.44, 95% CI 1.52-3.93; p < 0.001). NRI analysis showed that big ET-1 allowed a significant improvement of 0.32 in the accuracy of predicting the risk of both all-cause mortality and MAEs. Elevated big ET-1 levels are an independent predictor of long-term all-cause mortality, MAEs, and cardiovascular death in patients with AF. Copyright © 2018 Elsevier B.V. All rights reserved.
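    The cut-off-based grouping at the heart of such an analysis can be sketched as follows. The patient records below are invented, and a real analysis would fit a Cox proportional-hazards model (e.g. with a survival library such as lifelines) rather than compare crude event rates.

    ```python
    # Hedged sketch: dichotomize patients at a biomarker cut-off
    # (0.55 pmol/L, as in the study) and compare crude event rates.
    patients = [
        # (big ET-1 in pmol/L, died during follow-up) -- invented data
        (0.30, False), (0.42, False), (0.50, True), (0.61, True),
        (0.70, True), (0.58, False), (0.45, False), (0.90, True),
    ]

    CUTOFF = 0.55  # pmol/L

    def event_rate(group):
        """Crude proportion of patients in the group who had the event."""
        return sum(died for _, died in group) / len(group)

    high = [p for p in patients if p[0] >= CUTOFF]
    low = [p for p in patients if p[0] < CUTOFF]
    print("high:", event_rate(high), "low:", event_rate(low))
    # → high: 0.75 low: 0.25
    ```

    Crude rates ignore censoring and follow-up time, which is why the study's hazard ratios come from Cox regression rather than a comparison like this one.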

  15. A peek into the future of radiology using big data applications

    PubMed Central

    Kharat, Amit T.; Singhal, Shubham

    2017-01-01

    Big data refers to the extremely large amounts of data available in the radiology department. Big data is identified by four Vs – Volume, Velocity, Variety, and Veracity. By applying different algorithmic tools and converting raw data to transformed data in such large datasets, there is a possibility of understanding and using radiology data to gain new knowledge and insights. Big data analytics consists of 6Cs – Connection, Cloud, Cyber, Content, Community, and Customization. The global technological prowess and per-capita capacity to save digital information has roughly doubled every 40 months since the 1980s. By using big data, the planning and implementation of radiological procedures in radiology departments can be given a great boost. Potential future applications of big data include scheduling of scans, creating patient-specific personalized scanning protocols, radiologist decision support, emergency reporting, virtual quality assurance for the radiologist, etc. Targeted use of big data applications can be made for images by supporting the analytic process. Screening software tools designed on big data can be used to highlight a region of interest, such as subtle changes in parenchymal density, a solitary pulmonary nodule, or focal hepatic lesions, by plotting its multidimensional anatomy. Following this, we can run more complex applications such as three-dimensional multiplanar reconstructions (MPR), volumetric rendering (VR), and curved planar reconstruction, which consume higher system resources, on targeted data subsets rather than querying the complete cross-sectional imaging dataset. This pre-emptive selection of the dataset can substantially reduce system requirements such as system memory and server load, and provide prompt results. However, a word of caution: big data should not become “dump data” through inadequate and poor analysis and non-structured, improperly stored data. 
In the near future, big data can ring in the era of personalized and individualized healthcare. PMID:28744087
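    The pre-emptive subset selection described above can be sketched as a cheap screening pass followed by expensive processing of only the flagged slices. The density metric, threshold and image stack below are invented for illustration; real screening would use radiological image analysis, not a mean-pixel heuristic.

    ```python
    # Sketch: cheaply screen every slice of a hypothetical image stack for a
    # density anomaly, then run the expensive step only on the flagged
    # region of interest instead of the full dataset.
    def mean_density(slice_pixels):
        return sum(slice_pixels) / len(slice_pixels)

    def flag_slices(stack, baseline, threshold):
        """Return indices of slices whose mean density deviates from baseline."""
        return [i for i, s in enumerate(stack)
                if abs(mean_density(s) - baseline) > threshold]

    def expensive_reconstruction(slice_pixels):
        # Stand-in for a costly MPR/VR step; here just a normalisation pass.
        peak = max(slice_pixels)
        return [p / peak for p in slice_pixels]

    stack = [[10, 11, 9, 10], [10, 10, 11, 9], [40, 42, 39, 41], [10, 9, 10, 11]]
    roi = flag_slices(stack, baseline=10.0, threshold=5.0)
    reconstructed = {i: expensive_reconstruction(stack[i]) for i in roi}
    print(roi)  # → [2] -- only the anomalous slice is processed
    ```

    The expensive step runs on one slice instead of four, which is the resource saving the abstract describes, scaled down to a toy.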

  16. The Jossey-Bass Reader on School Reform. The Jossey-Bass Education Series.

    ERIC Educational Resources Information Center

    2001

    This anthology is intended to serve as an introduction to some of the big issues that shaped and continue to shape policy, practice, and debate over public schooling. Perspectives on these issues are presented in 32 chapters: (1) "The Educational Situation" (John Dewey); (2) "Progress or Regress?" (David Tyack and Larry Cuban);…

  17. "I Am Not a Big Man": Evaluation of the Issue Investigation Program

    ERIC Educational Resources Information Center

    Cincera, Jan; Simonova, Petra

    2017-01-01

    The article evaluates a Czech environmental education program focused on developing competence in issue investigation. In the evaluation, a simple quasi-experimental design with experimental (N = 200) and control groups was used. The results suggest that the program had a greater impact on girls than on boys, and that it increased their internal…

  18. What's the Next Big Thing for Boards?

    ERIC Educational Resources Information Center

    Trusteeship, 2011

    2011-01-01

    What should boards be concerned about in 2012? What new issues--or old aspects of new issues--are on the horizon that boards should be addressing? What crucial topics should be on their agendas? Nine people who have years of broad and varied experience in higher education and its governance were asked for their views. As members of the AGB…

  19. Endothelin-1 and big endothelin-1 in NIDDM patients with and without microangiopathy.

    PubMed

    Kamoi, K; Ishibashi, M; Yamaji, T

    1994-07-01

    To examine a possible role for endothelin-1 in the pathophysiology of diabetic microangiopathy, we measured plasma levels of endothelin-1 and big endothelin-1, a precursor peptide of endothelin-1, in 33 untreated patients with non-insulin-dependent diabetes mellitus. There was no significant difference among the mean plasma endothelin-1 concentrations in 18 patients with microangiopathy, in 15 patients without microangiopathy and in 33 age-matched normal subjects. In contrast, the mean plasma big endothelin-1 concentration in patients with microangiopathy was significantly higher than in those without microangiopathy or in normal subjects. As a consequence, the mean big endothelin-1 to endothelin-1 ratio in patients with microangiopathy was significantly higher than in the other two groups. There was no significant correlation between plasma levels of endothelin-1 or big endothelin-1 and fasting blood glucose, HbA1c, mean blood pressure, or duration of diabetes mellitus in the patient groups. The results indicate that elevation of plasma big endothelin-1 levels, with diminished conversion of big endothelin-1 to endothelin-1, is associated with diabetic microangiopathy, which may be the effect rather than the cause of endothelial dysfunction.

  20. Tuberculosis control in big cities and urban risk groups in the European Union: a consensus statement.

    PubMed

    van Hest, N A; Aldridge, R W; de Vries, G; Sandgren, A; Hauer, B; Hayward, A; Arrazola de Oñate, W; Haas, W; Codecasa, L R; Caylà, J A; Story, A; Antoine, D; Gori, A; Quabeck, L; Jonsson, J; Wanlin, M; Orcau, Å; Rodes, A; Dedicoat, M; Antoun, F; van Deutekom, H; Keizer, St; Abubakar, I

    2014-03-06

    In low-incidence countries in the European Union (EU), tuberculosis (TB) is concentrated in big cities, especially among certain urban high-risk groups including immigrants from TB high-incidence countries, homeless people, and those with a history of drug and alcohol misuse. Elimination of TB in European big cities requires control measures focused on multiple layers of the urban population. The particular complexities of major EU metropolises, for example high population density and social structure, create specific opportunities for transmission, but also enable targeted TB control interventions, not efficient in the general population, to be effective or cost effective. Lessons can be learnt from across the EU and this consensus statement on TB control in big cities and urban risk groups was prepared by a working group representing various EU big cities, brought together on the initiative of the European Centre for Disease Prevention and Control. The consensus statement describes general and specific social, educational, operational, organisational, legal and monitoring TB control interventions in EU big cities, as well as providing recommendations for big city TB control, based upon a conceptual TB transmission and control model.

  1. Visualization at supercomputing centers: the tale of little big iron and the three skinny guys.

    PubMed

    Bethel, E W; van Rosendale, J; Southard, D; Gaither, K; Childs, H; Brugger, E; Ahern, S

    2011-01-01

    Supercomputing centers are unique resources that aim to enable scientific knowledge discovery by employing large computational resources: the "Big Iron." Design, acquisition, installation, and management of the Big Iron are carefully planned and monitored. Because these Big Iron systems produce a tsunami of data, it's natural to colocate the visualization and analysis infrastructure. This infrastructure consists of hardware (the Little Iron) and staff (the Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys doesn't receive the same level of treatment as that of the Big Iron. This article explores the following questions about the Little Iron: How should we size the Little Iron to adequately support visualization and analysis of data coming off the Big Iron? What sort of capabilities must it have? Related questions concern the size of the visualization support staff: How big should a visualization program be, that is, how many Skinny Guys should it have? What should the staff do? How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?

  2. 77 FR 67324 - Proposed Flood Elevation Determinations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-09

    ...). Specifically, it addresses the flooding sources Big Run, Little Loyalsock Creek, Loyalsock Creek, and Muncy..., Pennsylvania (All Jurisdictions)'' addressed the flooding sources Big Run, Little Loyalsock Creek, Loyalsock... Sullivan County, Pennsylvania (All Jurisdictions) Big Run At the Muncy Creek +968 +965 Township of Davidson...

  3. Big Creek Hydroelectric System, East & West Transmission Line, 241-mile ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Big Creek Hydroelectric System, East & West Transmission Line, 241-mile transmission corridor extending between the Big Creek Hydroelectric System in the Sierra National Forest in Fresno County and the Eagle Rock Substation in Los Angeles, California, Visalia, Tulare County, CA

  4. Artemisia tridentata seed bank densities following wildfires

    USDA-ARS?s Scientific Manuscript database

    Big sagebrush (Artemisia spp.) is a critical shrub to such sagebrush obligate species as sage grouse (Centrocercus urophasianus), mule deer (Odocoileus hemionus), and pygmy rabbit (Brachylagus idahoensis). Big sagebrush does not sprout after wildfires, and big sagebrush seed is generally short-lived a...

  5. 4. EXTERIOR OF EAST SIDE SHOWING STAIRS TO CATWALK AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. EXTERIOR OF EAST SIDE SHOWING STAIRS TO CATWALK AND OPEN UTILITY ROOM DOOR. OPEN DOOR AT BOTTOM OF STAIRS LEADS TO BASEMENT. VIEW TO SOUTHWEST. - Big Creek Hydroelectric System, Powerhouse 8, Operator Cottage, Big Creek, Big Creek, Fresno County, CA

  6. 24. BEDROOM #1 INTERIOR SHOWING OPEN DOOR TO HALL WITH ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    24. BEDROOM #1 INTERIOR SHOWING OPEN DOOR TO HALL WITH HALL LINEN CLOSETS VISIBLE IN BACKGROUND, AND PARTIALLY OPEN DOOR TO CLOSET. VIEW TO EAST. - Big Creek Hydroelectric System, Powerhouse 8, Operator Cottage, Big Creek, Big Creek, Fresno County, CA

  7. Implementing Big History.

    ERIC Educational Resources Information Center

    Welter, Mark

    2000-01-01

    Contends that world history should be taught as "Big History," a view that includes all space and time beginning with the Big Bang. Discusses five "Cardinal Questions" that serve as a course structure and address the following concepts: perspectives, diversity, change and continuity, interdependence, and causes. (CMK)

  8. 11. INTERIOR OF WEST SIDE ENCLOSED SCREEN PORCH IN OPPOSITE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. INTERIOR OF WEST SIDE ENCLOSED SCREEN PORCH IN OPPOSITE VIEW FROM CA-167-A-8. DOUBLE FRENCH DOORS LEAD TO BEDROOM #2. VIEW TO NORTHEAST. - Big Creek Hydroelectric System, Powerhouse 8, Operator Cottage, Big Creek, Big Creek, Fresno County, CA

  9. Ethics and Epistemology of Big Data.

    PubMed

    Lipworth, Wendy; Mason, Paul H; Kerridge, Ian

    2017-12-01

    In this Symposium on the Ethics and Epistemology of Big Data, we present four perspectives on the ways in which the rapid growth in size of research databanks-i.e. their shift into the realm of "big data"-has changed their moral, socio-political, and epistemic status. While there is clearly something different about "big data" databanks, we encourage readers to place the arguments presented in this Symposium in the context of longstanding debates about the ethics, politics, and epistemology of biobank, database, genetic, and epidemiological research.

  10. [Three applications and the challenge of the big data in otology].

    PubMed

    Lei, Guanxiong; Li, Jianan; Shen, Weidong; Yang, Shiming

    2016-03-01

    With the expansion of human activity, more and more fields are confronting big data problems. The emergence of big data requires researchers to update their research paradigms and to develop new technical methods. This review discusses the opportunities and challenges that big data may bring to auditory implantation, the deafness genome, and auditory pathophysiology, and points out that appropriate theories and methods are needed to turn these expectations into reality.

  11. The Big Bang Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lincoln, Don

    The Big Bang is the name of the most respected theory of the creation of the universe. Basically, the theory says that the universe was once smaller and denser and has been expanding for eons. One common misconception is that the Big Bang theory says something about the instant that set the expansion into motion; however, this isn't true. In this video, Fermilab's Dr. Don Lincoln tells about the Big Bang theory and sketches some speculative ideas about what caused the universe to come into existence.

  12. Big Data Analytic, Big Step for Patient Management and Care in Puerto Rico.

    PubMed

    Borrero, Ernesto E

    2018-01-01

    This letter provides an overview of the application of big data in the health-care system to improve quality of care, including predictive modelling for risk and resource use, precision medicine and clinical decision support, quality-of-care and performance measurement, and public health and research applications, among others. The author delineates the tremendous potential of big data analytics and discusses how it can be successfully implemented in clinical practice as an important component of a learning health-care system.

  13. [Structural Change, Contextuality, and Transfer in Health Promotion--Sustainable Implementation of the BIG Project].

    PubMed

    Rütten, A; Frahsa, A; Rosenhäger, N; Wolff, A

    2015-09-01

    The BIG approach aims at promoting physical activity and health among socially disadvantaged women. BIG has been developed and sustainably implemented in Erlangen/Bavaria. Subsequently, it has been transferred to other communities and states in Germany. Crucial factors for sustainability and transfer in BIG are (1) lifestyle and policy analysis, (2) assets approach, (3) empowerment of target group, (4) enabling of policy-makers and professionals. © Georg Thieme Verlag KG Stuttgart · New York.

  14. Natural regeneration processes in big sagebrush (Artemisia tridentata)

    USGS Publications Warehouse

    Schlaepfer, Daniel R.; Lauenroth, William K.; Bradford, John B.

    2014-01-01

    Big sagebrush, Artemisia tridentata Nuttall (Asteraceae), is the dominant plant species of large portions of semiarid western North America. However, much of historical big sagebrush vegetation has been removed or modified. Thus, regeneration is recognized as an important component for land management. Limited knowledge about key regeneration processes, however, represents an obstacle to identifying successful management practices and to gaining greater insight into the consequences of increasing disturbance frequency and global change. Therefore, our objective is to synthesize knowledge about natural big sagebrush regeneration. We identified and characterized the controls of big sagebrush seed production, germination, and establishment. The largest knowledge gaps and associated research needs include quiescence and dormancy of embryos and seedlings; variation in seed production and germination percentages; wet-thermal time model of germination; responses to frost events (including freezing/thawing of soils), CO2 concentration, and nutrients in combination with water availability; suitability of microsite vs. site conditions; competitive ability as well as seedling growth responses; and differences among subspecies and ecoregions. Potential impacts of climate change on big sagebrush regeneration could include that temperature increases may not have a large direct influence on regeneration due to the broad temperature optimum for regeneration, whereas indirect effects could include selection for populations with less stringent seed dormancy. Drier conditions will have direct negative effects on germination and seedling survival and could also lead to lighter seeds, which lowers germination success further. The short seed dispersal distance of big sagebrush may limit its tracking of suitable climate, whereas the low competitive ability of big sagebrush seedlings may limit successful competition with species that track climate. An improved understanding of the ecology of big sagebrush regeneration should benefit resource management activities and increase the ability of land managers to anticipate global change impacts.

  15. Big dynorphin, a prodynorphin-derived peptide produces NMDA receptor-mediated effects on memory, anxiolytic-like and locomotor behavior in mice.

    PubMed

    Kuzmin, Alexander; Madjid, Nather; Terenius, Lars; Ogren, Sven Ove; Bakalkin, Georgy

    2006-09-01

    Effects of big dynorphin (Big Dyn), a prodynorphin-derived peptide consisting of dynorphin A (Dyn A) and dynorphin B (Dyn B), on memory function, anxiety, and locomotor activity were studied in mice and compared to those of Dyn A and Dyn B. All peptides administered i.c.v. increased step-through latency in the passive avoidance test, with maximum effective doses of 2.5, 0.005, and 0.7 nmol/animal, respectively. Effects of Big Dyn were inhibited by MK 801 (0.1 mg/kg), an NMDA ion-channel blocker, whereas those of dynorphins A and B were blocked by the kappa-opioid antagonist nor-binaltorphimine (6 mg/kg). Big Dyn (2.5 nmol) enhanced locomotor activity in the open field test and induced anxiolytic-like behavior, both effects being blocked by MK 801. No changes in locomotor activity and no signs of anxiolytic-like behavior were produced by dynorphins A and B. Big Dyn (2.5 nmol) increased time spent in the open branches of the elevated plus maze apparatus with no changes in general locomotion. Whereas dynorphins A and B (i.c.v., 0.05 and 7 nmol/animal, respectively) produced analgesia in the hot-plate test, Big Dyn did not. Thus, Big Dyn differs from its fragments dynorphins A and B in its unique pattern of memory-enhancing, locomotor, and anxiolytic-like effects that are sensitive to NMDA receptor blockade. The findings suggest that Big Dyn has its own function in the brain, different from those of the prodynorphin-derived peptides acting through kappa-opioid receptors.

  16. The Value of Big Endothelin-1 in the Assessment of the Severity of Coronary Artery Calcification.

    PubMed

    Wang, Fang; Li, Tiewei; Cong, Xiangfeng; Hou, Zhihui; Lu, Bin; Zhou, Zhou; Chen, Xi

    2018-01-01

    Progression of coronary artery calcification (CAC) was significantly associated with all-cause mortality, and a high coronary artery calcium score (CACS) portends a particularly high risk of cardiovascular events. But how often one should rescan is still an unanswered question. Preliminary screening by testing a circulating biomarker may be an alternative before a repeat computed tomography (CT) scan. The aim of this study was to investigate the value of big endothelin-1 (bigET-1), the precursor of endothelin-1 (ET-1), in predicting the severity of CAC. A total of 428 consecutive patients who underwent coronary computed tomography angiography (CCTA) for chest pain at Fuwai Hospital were included in the study. The clinical characteristics, CACS, and laboratory data were collected, and plasma bigET-1 was measured by enzyme-linked immunosorbent assay (ELISA). BigET-1 was positively correlated with the CACS (r = .232, P < .001), and the prevalence of CACS >400 was significantly higher in the highest bigET-1 tertile than in the lowest tertile. Multivariate analysis showed that bigET-1 was an independent predictor of the presence of CACS >400 (odds ratio [OR] = 1.721, 95% confidence interval [CI], 1.002-2.956, P = .049). The receiver operating characteristic (ROC) curve analysis showed that the optimal cutoff value of bigET-1 for predicting CACS >400 was 0.38 pmol/L, with a sensitivity of 59% and specificity of 68% (area under curve [AUC] = 0.65, 95% CI, 0.58-0.72, P < .001). The present study demonstrated that circulating bigET-1 was valuable in the assessment of the severity of CAC.
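    The "optimal cutoff" reported above comes from ROC analysis: each candidate threshold yields a sensitivity/specificity pair, and a common rule picks the threshold maximizing Youden's J (sensitivity + specificity - 1). A minimal sketch in plain Python; the function names and sample values below are invented for illustration, not taken from the study:

```python
# Illustrative only: synthetic biomarker values, not the study's data.

def sens_spec(values, labels, cutoff):
    """Sensitivity/specificity for the rule 'positive if value > cutoff'."""
    tp = sum(1 for v, y in zip(values, labels) if y == 1 and v > cutoff)
    fn = sum(1 for v, y in zip(values, labels) if y == 1 and v <= cutoff)
    tn = sum(1 for v, y in zip(values, labels) if y == 0 and v <= cutoff)
    fp = sum(1 for v, y in zip(values, labels) if y == 0 and v > cutoff)
    return tp / (tp + fn), tn / (tn + fp)

def best_cutoff(values, labels):
    """Scan observed values as candidate cutoffs; maximize Youden's J."""
    scored = []
    for c in sorted(set(values)):
        se, sp = sens_spec(values, labels, c)
        scored.append((se + sp - 1, c, se, sp))  # (J, cutoff, sens, spec)
    return max(scored)
```

For perfectly separable toy data such as values [0.1, 0.2, 0.3, 0.8, 0.9] with labels [0, 0, 0, 1, 1], the scan returns cutoff 0.3 with J = 1.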

  17. Effects of phosphoramidon on endothelin-1 and big endothelin-1 production in human aortic endothelial cells.

    PubMed

    Matsumura, Y; Tsukahara, Y; Kojima, T; Murata, S; Murakami, A; Takada, K; Takaoka, M; Morimoto, S

    1995-03-01

    Using cultured human aortic endothelial cells, we examined the effects of phosphoramidon, an endothelin converting enzyme (ECE) inhibitor, on the release of endogenous endothelin-1 (ET-1) and big endothelin-1 (big ET-1), and on the generation of ET-1 from exogenously applied big ET-1. Phosphoramidon, at concentrations of 10(-6) to 2 x 10(-4) M, caused a biphasic alteration of the ET-1 release, i.e., at lower concentrations of the drug, there were slight but unexpected increases of the release, whereas higher concentrations led to a decrease which is due to the drug-induced inhibition of ECE. The former effect appears to be based on the inhibition of ET-1 degradation by neutral endopeptidase 24.11 (NEP), since kelatorphan, a specific NEP inhibitor, produced a similar increasing effect on ET-1 release. Phosphoramidon enhanced the big ET-1 release from the cells in a concentration-dependent manner. When high concentrations of phosphoramidon were added, there was a dramatic increase in the release of big ET-1, which cannot be explained only by the drug-induced inhibition of ECE. This increase in big ET-1 release appeared to be partly due to a transient stimulation of the expression of prepro ET-1 mRNA. The amount of ET-1 generated from exogenously applied big ET-1 was markedly decreased by phosphoramidon in a concentration-dependent manner. In a similar fashion, phosphoramidon markedly inhibited ECE activity of the membrane fraction of cultured cells. Thus, ET-1 generation from exogenously applied big ET-1 reflects the functional phosphoramidon-sensitive ECE activities in human aortic endothelial cells.(ABSTRACT TRUNCATED AT 250 WORDS)

  18. Evidence for metalloprotease involvement in the in vivo effects of big endothelin 1.

    PubMed

    Pollock, D M; Opgenorth, T J

    1991-07-01

    The potent vasoconstrictor endothelin 1 (ET-1) is thought to arise from the proteolytic processing of big endothelin 1 (Big ET) by a unique endothelin-converting enzyme, possibly a metalloprotease. Experiments were conducted to determine the effects of Big ET on cardiovascular and renal functions during inhibition of metalloprotease activity in vivo. Intravenous infusion of Big ET (0.1 nmol.kg-1.min-1) in anesthetized euvolemic rats produced a significant increase in mean arterial pressure (MAP; 39 +/- 8%) and a decrease in effective renal plasma flow (ERPF; -39 +/- 2%), whereas glomerular filtration rate (GFR) remained unchanged (-8 +/- 8%). Simultaneous intravenous infusion of phosphoramidon (0.25 mg.kg-1.min-1), an inhibitor of metalloprotease activity including neutral endopeptidase EC 3.4.24.11 (NEP), completely prevented these effects of Big ET. Thiorphan (0.1 mg.kg-1.min-1), also an inhibitor of NEP, had absolutely no effect on either the renal or cardiovascular response to Big ET. Similarly, the response to Big ET was unaffected by infusion of enalaprilat (0.1 mg.kg-1.min-1), an inhibitor of the angiotensin-converting enzyme, which is also a metalloprotease. To determine whether the effect of phosphoramidon was due to antagonism of ET-1, an identical series of experiments was performed using ET-1 infusion (0.02 nmol.kg-1.min-1). Although the increase in MAP (24 +/- 5%) produced by ET-1 was less than that observed for the given dose of Big ET, the renal vasoconstriction was much more severe; the smaller peptide changed ERPF and GFR by -66 +/- 7 and -54 +/- 9%, respectively.(ABSTRACT TRUNCATED AT 250 WORDS)

  19. The effect of phonics-enhanced Big Book reading on the language and literacy skills of 6-year-old pupils of different reading ability attending lower SES schools.

    PubMed

    Tse, Laura; Nicholson, Tom

    2014-01-01

    The purpose of this study was to improve the literacy achievement of lower socioeconomic status (SES) children by combining explicit phonics with Big Book reading. Big Book reading is a component of the text-centered (or book reading) approach used in New Zealand schools. It involves the teacher in reading an enlarged book to children and demonstrating how to use semantic, syntactic, and grapho-phonic cues to learn to read. There has been little research, however, to find out whether the effectiveness of Big Book reading is enhanced by adding explicit phonics. In this study, a group of 96 second graders from three lower SES primary schools in New Zealand were taught in 24 small groups of four, tracked into three different reading ability levels. All pupils were randomly assigned to one of four treatment conditions: a control group who received math instruction, Big Book reading enhanced with phonics (BB/EP), Big Book reading on its own, and Phonics on its own. The results showed that the BB/EP group made significantly better progress than the Big Book and Phonics groups in word reading, reading comprehension, spelling, and phonemic awareness. In reading accuracy, the BB/EP and Big Book groups scored similarly. In basic decoding skills the BB/EP and Phonics groups scored similarly. The combined instruction, compared with Big Book reading and phonics, appeared to have no comparative disadvantages and considerable advantages. The present findings could be a model for New Zealand and other countries in their efforts to increase the literacy achievement of disadvantaged pupils.

  20. Association of Big Endothelin-1 with Coronary Artery Calcification.

    PubMed

    Qing, Ping; Li, Xiao-Lin; Zhang, Yan; Li, Yi-Lin; Xu, Rui-Xia; Guo, Yuan-Lin; Li, Sha; Wu, Na-Qiong; Li, Jian-Jun

    2015-01-01

    Coronary artery calcification (CAC) is clinically considered one of the important predictors of atherosclerosis. Several studies have confirmed that endothelin-1 (ET-1) plays an important role in the process of atherosclerosis formation. The aim of this study was to investigate whether big ET-1 is associated with CAC. A total of 510 consecutively admitted patients from February 2011 to May 2012 in Fu Wai Hospital were analyzed. All patients had received coronary computed tomography angiography and were then divided into two groups based on the results of coronary artery calcium score (CACS). The clinical characteristics, including traditional and calcification-related risk factors, were collected, and the plasma big ET-1 level was measured by ELISA. Patients with CAC had a significantly elevated big ET-1 level compared with those without CAC (0.5 ± 0.4 vs. 0.2 ± 0.2, P<0.001). In the multivariate analysis, big ET-1 (Tertile 2, HR = 3.09, 95% CI 1.66-5.74, P<0.001; Tertile 3, HR = 10.42, 95% CI 3.62-29.99, P<0.001) appeared as an independent predictive factor for the presence of CAC. There was a positive correlation of the big ET-1 level with CACS (r = 0.567, p<0.001). The 10-year Framingham risk (%) was higher in the group with CACS>0 and the highest tertile of big ET-1 (P<0.01). The area under the receiver operating characteristic curve for the big ET-1 level in predicting CAC was 0.83 (95% CI 0.79-0.87, p<0.001), with a sensitivity of 70.6% and specificity of 87.7%. These data demonstrated for the first time that the plasma big ET-1 level was a valuable independent predictor of CAC in our study.

  1. Disproof of Big Bang's Foundational Expansion Redshift Assumption Overthrows the Big Bang and Its No-Center Universe and Is Replaced by a Spherically Symmetric Model with Nearby Center with the 2.73 K CMR Explained by Vacuum Gravity and Doppler Effects

    NASA Astrophysics Data System (ADS)

    Gentry, Robert

    2015-04-01

    Big bang theory holds its central expansion redshift assumption quickly reduced the theorized radiation flash to ~ 1010 K, and then over 13.8 billion years reduced it further to the present 2.73 K CMR. Weinberg claims this 2.73 K value agrees with big bang theory so well that ``...we can be sure that this radiation was indeed left over from a time about a million years after the `big bang.' '' (TF3M, p180, 1993 ed.) Actually his conclusion is all based on big bang's in-flight wavelength expansion being a valid physical process. In fact all his surmising is nothing but science fiction because our disproof of GR-induced in-flight wavelength expansion [1] definitely proves the 2.73 K CMR could never have been the wavelength-expanded relic of any radiation, much less the presumed big bang's. This disproof of big bang's premier prediction is a death blow to the big bang as it is also to the idea that the redshifts in Hubble's redshift relation are expansion shifts; this negates Friedmann's everywhere-the-same, no-center universe concept and proves it does have a nearby Center, a place which can be identified in Psalm 103:19 and in Revelation 20:11 as the location of God's eternal throne. Widely published (Science, Nature, ARNS) evidence of Earth's fiat creation will also be presented. The research is supported by the God of Creation. This paper [1] is in for publication.

  2. The effect of phonics-enhanced Big Book reading on the language and literacy skills of 6-year-old pupils of different reading ability attending lower SES schools

    PubMed Central

    Tse, Laura; Nicholson, Tom

    2014-01-01

    The purpose of this study was to improve the literacy achievement of lower socioeconomic status (SES) children by combining explicit phonics with Big Book reading. Big Book reading is a component of the text-centered (or book reading) approach used in New Zealand schools. It involves the teacher in reading an enlarged book to children and demonstrating how to use semantic, syntactic, and grapho-phonic cues to learn to read. There has been little research, however, to find out whether the effectiveness of Big Book reading is enhanced by adding explicit phonics. In this study, a group of 96 second graders from three lower SES primary schools in New Zealand were taught in 24 small groups of four, tracked into three different reading ability levels. All pupils were randomly assigned to one of four treatment conditions: a control group who received math instruction, Big Book reading enhanced with phonics (BB/EP), Big Book reading on its own, and Phonics on its own. The results showed that the BB/EP group made significantly better progress than the Big Book and Phonics groups in word reading, reading comprehension, spelling, and phonemic awareness. In reading accuracy, the BB/EP and Big Book groups scored similarly. In basic decoding skills the BB/EP and Phonics groups scored similarly. The combined instruction, compared with Big Book reading and phonics, appeared to have no comparative disadvantages and considerable advantages. The present findings could be a model for New Zealand and other countries in their efforts to increase the literacy achievement of disadvantaged pupils. PMID:25431560

  3. Reconnaissance-level assessment of water quality near Flandreau, South Dakota

    USGS Publications Warehouse

    Schaap, Bryan D.

    2002-01-01

    This report presents water-quality data that have been compiled and collected for a reconnaissance-level assessment of water quality near Flandreau, South Dakota. The investigation was initiated as a cooperative effort between the U.S. Geological Survey and the Flandreau Santee Sioux Tribe. Members of the Flandreau Santee Sioux Tribe have expressed concern that Tribal members residing in the city of Flandreau experience more health problems than the general population in the surrounding area. Prior to December 2000, water for the city of Flandreau was supplied by wells completed in the Big Sioux aquifer within the city of Flandreau. After December 2000, water for the city of Flandreau was supplied by the Big Sioux Community Water System from wells completed in the Big Sioux aquifer along the Big Sioux River near Egan, about 8 river miles downstream of Flandreau. There is some concern that the public and private water supplies provided by wells completed in the Big Sioux aquifer near the Big Sioux River may contain chemicals that contribute to the health problems. Data compiled from other investigations provide information about the water quality of the Big Sioux River and the Big Sioux aquifer in the Flandreau area from 1978 through 2001. The median, minimum, and maximum values are presented for fecal bacteria, nitrate, arsenic, and atrazine. Nitrate concentrations of water from Flandreau public-supply wells occasionally exceeded the Maximum Contaminant Level of 10 milligrams per liter for public drinking water. For this study, untreated-water samples were collected from the Big Sioux River in Flandreau and from five wells completed in the Big Sioux aquifer in and near Flandreau. Treated-water samples from the Big Sioux Community Water System were collected at a site about midway between the treatment facility near Egan and the city of Flandreau. The first round of sampling occurred during July 9-12, 2001, and the second round of sampling occurred during August 20-27, 2001. Samples were analyzed for a broad range of compounds, including major ions, nutrients, trace elements, pesticides, antibiotics, and organic wastewater compounds, some of which might cause adverse health effects after long-term exposure. Samples collected on August 27, 2001, from the Big Sioux River also were analyzed for human pharmaceutical compounds. The quality of the water in the Big Sioux River and the Big Sioux aquifer in the Flandreau area cannot be thoroughly characterized with the limited number of samples collected within a 2-month period, and for many analytes, neither drinking-water standards nor associations with adverse health effects have been established. Concentrations of some selected analytes were less than U.S. Environmental Protection Agency drinking-water standards at the time of the sampling, and concentrations of most organic compounds were less than the respective method reporting levels for most of the samples.

  4. Education Matters, October 2011

    ERIC Educational Resources Information Center

    Beckner, Gary, Ed.

    2011-01-01

    "Education Matters" is the monthly newsletter of the Association of American Educators (AAE), an organization dedicated to advancing the American teaching profession through personal growth, professional development, teacher advocacy and protection. This issue of the newsletter includes: (1) The Big Shift: Changing Demographics in the…

  5. Library Skills.

    ERIC Educational Resources Information Center

    Paul, Karin; Kuhlthau, Carol C.; Branch, Jennifer L.; Solowan, Diane Galloway; Case, Roland; Abilock, Debbie; Eisenberg, Michael B.; Koechlin, Carol; Zwaan, Sandi; Hughes, Sandra; Low, Ann; Litch, Margaret; Lowry, Cindy; Irvine, Linda; Stimson, Margaret; Schlarb, Irene; Wilson, Janet; Warriner, Emily; Parsons, Les; Luongo-Orlando, Katherine; Hamilton, Donald

    2003-01-01

    Includes 19 articles that address issues related to library skills and Canadian school libraries. Topics include information literacy; inquiry learning; critical thinking and electronic research; collaborative inquiry; information skills and the Big 6 approach to problem solving; student use of online databases; library skills; Internet accuracy;…

  6. Reproducible Bioinformatics Research for Biologists

    USDA-ARS?s Scientific Manuscript database

    This book chapter describes the current Big Data problem in Bioinformatics and the resulting issues with performing reproducible computational research. The core of the chapter provides guidelines and summaries of current tools/techniques that a noncomputational researcher would need to learn to pe...

  7. Brief report: How short is too short? An ultra-brief measure of the big-five personality domains implicates "agreeableness" as a risk for all-cause mortality.

    PubMed

    Chapman, Benjamin P; Elliot, Ari J

    2017-08-01

    Controversy exists over the use of brief Big Five scales in health studies. We investigated links between an ultra-brief measure, the Big Five Inventory-10, and mortality in the General Social Survey. The Agreeableness scale was associated with elevated mortality risk (hazard ratio = 1.26, p = .017). This effect was attributable to the reversed-scored item "Tends to find fault with others," so that greater fault-finding predicted lower mortality risk. The Conscientiousness scale approached meta-analytic estimates, which were not precise enough for significance. Those seeking Big Five measurement in health studies should be aware that the Big Five Inventory-10 may yield unusual results.

  8. Quantum Gravity in Cyclic (ekpyrotic) and Multiple (anthropic) Universes with Strings And/or Loops

    NASA Astrophysics Data System (ADS)

    Chung, T. J.

    2008-09-01

    This paper addresses a hypothetical extension of ekpyrotic and anthropic principles, implying cyclic and multiple universes, respectively. Under these hypotheses, from time immemorial (t = -∞), a universe undergoes a big bang from a singularity, initially expanding and eventually contracting to another singularity (big crunch). This is to prepare for the next big bang, repeating these cycles toward eternity (t = +∞), every 30 billion years apart. Infinity in time backward and forward (t = ±∞) is paralleled with infinity in space (Xi = ±∞), allowing multiple universes to prevail, each undergoing big bangs and big crunches similarly as our own universe. It is postulated that either string theory and/or loop quantum gravity might be able to substantiate these hypotheses.

  9. Bigfoot Field Manual

    NASA Astrophysics Data System (ADS)

    Campbell, J. L.; Burrows, S.; Gower, S. T.; Cohen, W. B.

    1999-09-01

    The BigFoot Project is funded by the Earth Science Enterprise to collect and organize data to be used in the EOS Validation Program. The data collected by the BigFoot Project are unique in being ground-based observations coincident with satellite overpasses. In addition to collecting data, the BigFoot project will develop and test new algorithms for scaling point measurements to the same spatial scales as the EOS satellite products. This BigFoot Field Manual will be used to achieve completeness and consistency of data collected at four initial BigFoot sites and at future sites that may collect similar validation data. Therefore, validation datasets submitted to the ORNL DAAC that have been compiled in a manner consistent with the field manual will be especially valuable in the validation program.

  10. Machine learning for Big Data analytics in plants.

    PubMed

    Ma, Chuang; Zhang, Hao Helen; Wang, Xiangfeng

    2014-12-01

    Rapid advances in high-throughput genomic technology have enabled biology to enter the era of 'Big Data' (large datasets). The plant science community not only needs to build its own Big-Data-compatible parallel computing and data management infrastructures, but also to seek novel analytical paradigms to extract information from the overwhelming amounts of data. Machine learning offers promising computational and analytical solutions for the integrative analysis of large, heterogeneous and unstructured datasets on the Big-Data scale, and is gradually gaining popularity in biology. This review introduces the basic concepts and procedures of machine-learning applications and envisages how machine learning could interface with Big Data technology to facilitate basic research and biotechnology in the plant sciences. Copyright © 2014 Elsevier Ltd. All rights reserved.
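    The review above stays at the conceptual level. As a toy illustration of the kind of supervised learning it surveys, here is a nearest-centroid classifier in plain Python; the data, labels, and function names are invented for illustration, and real plant-genomics pipelines would use dedicated libraries and far larger feature sets:

```python
# Toy sketch of supervised classification; all data are invented.

def fit_centroids(samples, labels):
    """Compute the mean feature vector (centroid) for each class label."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest (squared Euclidean)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    return min(centroids, key=lambda y: dist2(centroids[y]))
```

Training on four made-up two-feature samples, e.g. [[0, 0], [0, 1]] labeled "a" and [[5, 5], [6, 5]] labeled "b", then predicting [1, 1] returns "a" and [5, 4] returns "b".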

  11. 78 FR 8089 - Proposed Flood Elevation Determinations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-05

    ... flooding sources Big Run, Little Loyalsock Creek, Loyalsock Creek, and Muncy Creek. DATES: Comments are to..., Pennsylvania (All Jurisdictions)'' addressed the flooding sources Big Run, Little Loyalsock Creek, Loyalsock... Sullivan County, Pennsylvania (All Jurisdictions) Big Run At the Muncy Creek +968 +965 Township of Davidson...

  12. 77 FR 51745 - Proposed Flood Elevation Determinations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-27

    .... Specifically, it addresses the following flooding sources: Back Creek, Big Elk Creek, Bohemia River, Chesapeake... Areas'' addressed the following flooding sources: Back Creek, Big Elk Creek, Bohemia River, Chesapeake... modified elevation in feet, and/or communities affected for the following flooding sources: Big Elk Creek...

  13. 40 CFR 62.4680 - Identification of sources.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Big Cajun 2 (Unit 1) at New Roads, LA. (b) Big Cajun 2 (Unit 2) at New Roads, LA. (c) Big Cajun 2 (Unit 3) at New Roads, LA. (d) Rodemacher (Unit 2) at Lena, LA. (e) R.S. Nelson (Unit 6) at Westlake, LA...

  14. 78 FR 52523 - Big Rivers Electric Corporation; Notice of Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-23

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket Nos. EL13-85-000] Big Rivers Electric Corporation; Notice of Filing Take notice that on August 16, 2013, Big Rivers Electric Corporation filed its proposed revenue requirements for reactive supply service under Midcontinent Independent...

  15. 7. HOUSE SOUTH SIDE EXTERIOR SHOWING ENCLOSED SLEEPING PORCH AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. HOUSE SOUTH SIDE EXTERIOR SHOWING ENCLOSED SLEEPING PORCH AND CASEMENT WINDOW INTO ATTIC AT PEAK OF GABLE. VIEW TO NORTH. - Big Creek Hydroelectric System, Big Creek Town, Operator House, Orchard Avenue south of Huntington Lake Road, Big Creek, Fresno County, CA

  16. Big I (I-40/I-25) reconstruction & ITS infrastructure.

    DOT National Transportation Integrated Search

    2010-04-20

    The New Mexico Department of Transportation (NMDOT) rebuilt the Big I interchange in Albuquerque to make it safer and more efficient and to provide better access. The Big I is where the Coronado Interstate (I-40) and the Pan American Freeway (I-25) i...

  17. Reducing Racial Disparities in Breast Cancer Care: The Role of 'Big Data'.

    PubMed

    Reeder-Hayes, Katherine E; Troester, Melissa A; Meyer, Anne-Marie

    2017-10-15

    Advances in a wide array of scientific technologies have brought data of unprecedented volume and complexity into the oncology research space. These novel big data resources are applied across a variety of contexts, from health services research using data from insurance claims, cancer registries, and electronic health records to deeper and broader genomic characterizations of disease. Several forms of big data show promise for improving our understanding of racial disparities in breast cancer, and for powering more intelligent and far-reaching interventions to close the racial gap in breast cancer survival. In this article we introduce several major types of big data used in breast cancer disparities research, highlight important findings to date, and discuss how big data may transform breast cancer disparities research in ways that lead to meaningful, lifesaving changes in breast cancer screening and treatment. We also discuss key challenges that may hinder progress in using big data for cancer disparities research and quality improvement.

  18. BigWig and BigBed: enabling browsing of large distributed datasets.

    PubMed

    Kent, W J; Zweig, A S; Barber, G; Hinrichs, A S; Karolchik, D

    2010-09-01

    BigWig and BigBed files are compressed binary indexed files containing data at several resolutions that allow the high-performance display of next-generation sequencing experiment results in the UCSC Genome Browser. The visualization is implemented using a multi-layered software approach that takes advantage of specific capabilities of web-based protocols and of Linux and UNIX operating system files, R trees, and various indexing and compression tricks. As a result, only the data needed to support the current browser view are transmitted rather than the entire file, enabling fast remote access to large distributed data sets. Binaries for the BigWig and BigBed creation and parsing utilities may be downloaded at http://hgdownload.cse.ucsc.edu/admin/exe/linux.x86_64/. Source code for the creation and visualization software is freely available for non-commercial use at http://hgdownload.cse.ucsc.edu/admin/jksrc.zip, implemented in C and supported on Linux. The UCSC Genome Browser is available at http://genome.ucsc.edu.
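    The multi-resolution idea described in the abstract can be sketched briefly: the file keeps precomputed summaries at coarser "zoom levels" so a range query can be answered from a handful of summary bins instead of every base-level value. The sketch below is a simplified illustration under our own assumptions (function names, a fixed reduction factor, mean-only summaries); it is not the UCSC file format or its R-tree index.

```python
# Simplified illustration of BigWig-style zoom levels: precompute coarser
# mean summaries so wide range queries touch few bins (hypothetical names).

def build_zoom_levels(values, factor=4, levels=3):
    """Aggregate base-level values into successively coarser mean summaries."""
    zooms = []
    current = values
    for _ in range(levels):
        current = [
            sum(current[i:i + factor]) / len(current[i:i + factor])
            for i in range(0, len(current), factor)
        ]
        zooms.append(current)
    return zooms

def query_mean(values, zooms, start, end, factor=4):
    """Answer a mean query from the coarsest zoom level that aligns with the range."""
    span = end - start
    for level, zoom in reversed(list(enumerate(zooms))):
        bin_size = factor ** (level + 1)
        if start % bin_size == 0 and span % bin_size == 0 and span >= bin_size:
            bins = zoom[start // bin_size : end // bin_size]
            return sum(bins) / len(bins)
    # Fall back to the base-level data for small or unaligned ranges.
    return sum(values[start:end]) / span
```

    A genome-browser-wide query such as `query_mean(values, zooms, 0, 64)` reads a single coarse bin, while a narrow query falls back to base-level values; the real format couples this with an R-tree index so only the needed file blocks are fetched over the network.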

  19. Volume and Value of Big Healthcare Data.

    PubMed

    Dinov, Ivo D

    Modern scientific inquiries require significant data-driven evidence and trans-disciplinary expertise to extract valuable information and gain actionable knowledge about natural processes. Effective evidence-based decisions require collection, processing and interpretation of vast amounts of complex data. Moore's and Kryder's laws of exponential increase in computational power and information storage, respectively, dictate the need for rapid trans-disciplinary advances, technological innovation and effective mechanisms for managing and interrogating Big Healthcare Data. In this article, we review important aspects of Big Data analytics and discuss important questions such as: What are the challenges and opportunities associated with this biomedical, social, and healthcare data avalanche? Are there innovative statistical computing strategies to represent, model, analyze and interpret Big heterogeneous data? We present the foundation of a new compressive big data analytics (CBDA) framework for representation, modeling and inference of large, complex and heterogeneous datasets. Finally, we consider specific directions likely to impact the process of extracting information from Big healthcare data, translating that information to knowledge, and deriving appropriate actions.
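    The exponential growth the abstract invokes is easy to make concrete with back-of-envelope arithmetic. The 18-month doubling period below is an illustrative assumption (a common informal statement of Moore's law), not a figure from the article.

```python
# Back-of-envelope sketch of the exponential growth invoked by Moore's and
# Kryder's laws. The 18-month doubling period is an assumption for
# illustration only.

def growth_factor(years, doubling_months=18):
    """Multiplicative growth after `years`, given a doubling period in months."""
    return 2 ** (years * 12 / doubling_months)

# Over a decade, capacity under this assumption grows by roughly two orders
# of magnitude: growth_factor(10) = 2**(120/18) ~ 100x.
```

    The point of the arithmetic is the mismatch the article highlights: analytic methods and data management practices rarely improve a hundredfold per decade, so the gap between what is stored and what can be interrogated keeps widening.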

  20. Research on Technology Innovation Management in Big Data Environment

    NASA Astrophysics Data System (ADS)

    Ma, Yanhong

    2018-02-01

    With the continuous development of the information age, demand for information keeps growing, and data processing and analysis are scaling up accordingly. The ever-increasing volume of data places higher demands on processing technology, and the explosive growth of data in contemporary society has ushered in the era of big data. Producing and processing the many kinds of information and data in daily life now carries greater value and significance. Using big data technology to process and analyze data quickly, and thereby to raise the level of big data management, is an important step in advancing China's information and data processing technology. To some extent, innovative research on information technology management methods in the big data era can enhance the country's overall strength and keep China competitive as the big data era develops.
