Beyond simple charts: Design of visualizations for big health data
Ola, Oluwakemi; Sedig, Kamran
2016-01-01
Health data is often big data due to its high volume, low veracity, great variety, and high velocity. Big health data has the potential to improve productivity, eliminate waste, and support a broad range of tasks related to disease surveillance, patient care, research, and population health management. Interactive visualizations have the potential to amplify big data’s utilization. Visualizations can be used to support a variety of tasks, such as tracking the geographic distribution of diseases, analyzing the prevalence of disease, triaging medical records, predicting outbreaks, and discovering at-risk populations. Currently, many health visualization tools use simple charts, such as bar charts and scatter plots, that represent only a few facets of the data. These tools, while beneficial for simple perceptual and cognitive tasks, are ineffective when dealing with more complex sensemaking tasks that involve simultaneous exploration of various facets and elements of big data. There is a need for sophisticated and elaborate visualizations that encode many facets of data and support human-data interaction with big data and more complex tasks. When not approached systematically, the design of such visualizations is labor-intensive, and the resulting designs may not facilitate big-data-driven tasks. Conceptual frameworks that guide the design of visualizations for big data can make the design process more manageable and result in more effective visualizations. In this paper, we demonstrate how a framework-based approach can help designers create novel, elaborate, non-trivial visualizations for big health data. We present four visualizations that are components of a larger tool for making sense of large-scale public health data. PMID:28210416
Isaacson, Sara; O'Brien, Ashley; Lazaro, Jennifer D; Ray, Arlen; Fluet, Gerard
2018-04-01
[Purpose] The aim of this study was to test the hypothesis that Lee Silverman Voice Treatment-BIG decreases the negative impact of hypokinesia on dual task performance in persons with Parkinson's disease. [Subjects and Methods] The records of 114 patients with Parkinson's admitted to outpatient rehabilitation at a suburban hospital were reviewed. Demographics and data for 8 outcome measures were extracted for subjects who completed 14 of 16 sessions of BIG. Ninety-three of these subjects had records of pre- and post-test Timed Up and Go, Timed Up and Go Motor, and Timed Up and Go Cognitive scores. Average age was 68.4 years (SD=10.6) and average disease duration was 4.9 years (SD=5.3). [Results] Subjects demonstrated statistically significant improvements for Timed Up and Go (3.3 SD=4.5), Timed Up and Go Motor (4.4 SD=5.8), and Timed Up and Go Cognitive (4.7 SD=5.4). Concurrent motor and cognitive performance remained stable. Dual task cost decreased at a statistically significant level for Timed Up and Go Cognitive (7% SD=31%) but not Motor (4% SD=32%). [Conclusion] These findings suggest that cueing strategies associated with LSVT BIG become internalized and decrease the negative impact of hypokinesia on mobility and cognitive performance while performing two tasks simultaneously in persons with Parkinson's. PMID:29706722
USDA-ARS's Scientific Manuscript database
The entomologists at the Arthropod-Borne Animal Diseases Research Unit at USDA-Agricultural Research Service are tasked with protecting the nation’s livestock from domestic, foreign and emerging vector-borne diseases. To accomplish this task, a vast array of molecular techniques are being used in pr...
A Systematic Review of Techniques and Sources of Big Data in the Healthcare Sector.
Alonso, Susel Góngora; de la Torre Díez, Isabel; Rodrigues, Joel J P C; Hamrioui, Sofiane; López-Coronado, Miguel
2017-10-14
The main objective of this paper is to present a review of existing research in the literature on Big Data sources and techniques in the health sector and to identify which of these techniques are the most used in the prediction of chronic diseases. Academic databases and systems such as IEEE Xplore, Scopus, PubMed, and Science Direct were searched, considering publication dates from 2006 to the present. Several search criteria were established, such as 'techniques' OR 'sources' AND 'Big Data' AND 'medicine' OR 'health', 'techniques' AND 'Big Data' AND 'chronic diseases', etc., and papers were selected for their description of the techniques and sources of Big Data in healthcare. The search found a total of 110 articles on techniques and sources of Big Data in health, of which only 32 were identified as relevant works. Many of the articles describe Big Data platforms, sources, and databases used, and identify the techniques most used in the prediction of chronic diseases. From the review of the analyzed research articles, it can be seen that the sources and techniques of Big Data used in the health sector are a relevant factor in terms of effectiveness, since they allow the application of predictive analysis techniques to tasks such as identifying patients at risk of readmission, preventing hospital-acquired or chronic-disease infections, and obtaining predictive models of quality.
Functional magnetic resonance imaging of divergent and convergent thinking in Big-C creativity.
Japardi, Kevin; Bookheimer, Susan; Knudsen, Kendra; Ghahremani, Dara G; Bilder, Robert M
2018-02-15
The cognitive and physiological processes underlying creativity remain unclear, and very few studies to date have attempted to identify the behavioral and brain characteristics that distinguish exceptional ("Big-C") from everyday ("little-c") creativity. The Big-C Project examined functional brain responses during tasks demanding divergent and convergent thinking in 35 Big-C Visual Artists (VIS), 41 Big-C Scientists (SCI), and 31 individuals in a "smart comparison group" (SCG) matched to the Big-C groups on parental educational attainment and estimated IQ. Functional MRI (fMRI) scans included two activation paradigms widely used in prior creativity research, the Alternate Uses Task (AUT) and Remote Associates Task (RAT), to assess brain function during divergent and convergent thinking, respectively. Task performance did not differ between groups. Functional MRI activation in Big-C and SCG groups differed during the divergent thinking task. No differences in activation were seen during the convergent thinking task. Big-C groups had less activation than SCG in frontal pole, right frontal operculum, left middle frontal gyrus, and bilaterally in occipital cortex. SCI displayed lower frontal and parietal activation relative to the SCG when generating alternate uses in the AUT, while VIS displayed lower frontal activation than SCI and SCG when generating typical qualities (the control condition in the AUT). VIS showed more activation in right inferior frontal gyrus and left supramarginal gyrus relative to SCI. All groups displayed considerable overlapping activation during the RAT. The results confirm substantial overlap in functional activation across groups, but suggest that exceptionally creative individuals may depend less on task-positive networks during tasks that demand divergent thinking. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Holmes, C. P.; Kinter, J. L.; Beebe, R. F.; Feigelson, E.; Hurlburt, N. E.; Mentzel, C.; Smith, G.; Tino, C.; Walker, R. J.
2017-12-01
Two years ago NASA established the Ad Hoc Big Data Task Force (BDTF - https://science.nasa.gov/science-committee/subcommittees/big-data-task-force), an advisory working group within the NASA Advisory Council system. The scope of the Task Force included all NASA Big Data programs, projects, missions, and activities. The Task Force focused on such topics as exploring the existing and planned evolution of NASA's science data cyber-infrastructure that supports broad access to data repositories for NASA Science Mission Directorate missions; best practices within NASA, other Federal agencies, private industry and research institutions; and Federal initiatives related to big data and data access. The BDTF has completed its two-year term and produced several recommendations plus four white papers for NASA's Science Mission Directorate. This presentation will discuss the activities and results of the Task Force, including summaries of key points from its focused study topics. The paper serves as an introduction to the papers following in this ESSI session.
Big-Data Based Decision-Support Systems to Improve Clinicians' Cognition.
Roosan, Don; Samore, Matthew; Jones, Makoto; Livnat, Yarden; Clutter, Justin
2016-01-01
Complex clinical decision-making could be facilitated by using population health data to inform clinicians. In two previous studies, we interviewed 16 infectious disease experts to understand complex clinical reasoning. For this study, we focused on the experts' answers on how clinical reasoning can be supported by population-based Big-Data. We found that cognitive strategies such as trajectory tracking, perspective taking, and metacognition have the potential to improve clinicians' cognition when dealing with complex problems. These cognitive strategies could be supported by population health data, and all have important implications for the design of Big-Data based decision-support tools that could be embedded in electronic health records. Our findings provide directions for task allocation and design of decision-support applications in the health care industry's development of Big-Data based decision-support systems.
Big 6 Tips: Teaching Information Problem Solving. #1 Task Definition: What Needs To Be Done.
ERIC Educational Resources Information Center
Eisenberg, Michael
1997-01-01
Explains task definition which is the first stage in the Big 6, an approach to information and technology skills instruction. Highlights include defining the problem; identifying the information requirements of the problem; transferability from curriculum-based problems to everyday tasks; and task definition logs kept by students. (LRW)
SEAS (Surveillance Environmental Acoustic Support Program) Support
1984-02-29
ASEPS software - support for AMES; support for OUTPOST CREOLE, BIG DIPPER, and MFA. The contractor also provided an engineer/scientist to support the BIG DIPPER data processing activities at NOSC (Task 3: SEAS Inventory), and SI provided support to SEAS for the OUTPOST CREOLE III exercise, which followed immediately after the BIG DIPPER exercise.
[Methods, challenges and opportunities for big data analyses of microbiome].
Sheng, Hua-Fang; Zhou, Hong-Wei
2015-07-01
The microbiome is a novel research field associated with a variety of chronic inflammatory diseases. Technically, there are two major approaches to the analysis of the microbiome: metataxonomics, by sequencing the 16S rRNA variable tags, and metagenomics, by shotgun sequencing of the total microbial (mainly bacterial) genome mixture. The 16S rRNA sequencing analysis pipeline includes sequence quality control, diversity analyses, taxonomy, and statistics; metagenome analysis further includes gene annotation and functional analyses. With the development of sequencing techniques, the cost of sequencing will decrease, and big data analysis will become the central task. Data standardization, accumulation, modeling, and disease prediction are crucial for future exploitation of these data. Meanwhile, the informational properties of these data, and functional verification with culture-dependent and culture-independent experiments, remain the focus of future research. Studies of the human microbiome will bring a better understanding of the relations between the human body and the microbiome, especially in the context of disease diagnosis and therapy, which promises rich research opportunities.
Mind the Scales: Harnessing Spatial Big Data for Infectious Disease Surveillance and Inference.
Lee, Elizabeth C; Asher, Jason M; Goldlust, Sandra; Kraemer, John D; Lawson, Andrew B; Bansal, Shweta
2016-12-01
Spatial big data have the velocity, volume, and variety of big data sources and contain additional geographic information. Digital data sources, such as medical claims, mobile phone call data records, and geographically tagged tweets, have entered infectious diseases epidemiology as novel sources of data to complement traditional infectious disease surveillance. In this work, we provide examples of how spatial big data have been used thus far in epidemiological analyses and describe opportunities for these sources to improve disease-mitigation strategies and public health coordination. In addition, we consider the technical, practical, and ethical challenges with the use of spatial big data in infectious disease surveillance and inference. Finally, we discuss the implications of the rising use of spatial big data in epidemiology to health risk communication, and public health policy recommendations and coordination across scales. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America.
Bossuyt, V.; Provenzano, E.; Symmans, W. F.; Boughey, J. C.; Coles, C.; Curigliano, G.; Dixon, J. M.; Esserman, L. J.; Fastner, G.; Kuehn, T.; Peintinger, F.; von Minckwitz, G.; White, J.; Yang, W.; Badve, S.; Denkert, C.; MacGrogan, G.; Penault-Llorca, F.; Viale, G.; Cameron, D.; Earl, Helena; Alba, Emilio; Lluch, Ana; Albanell, Joan; Amos, Keith; Biernat, Wojciech; Bonnefoi, Hervé; Buzdar, Aman; Cane, Paul; Pinder, Sarah; Carson, Lesley; Dickson-Witmer, Diana; Gong, Gyungyub; Green, Jimmy; Hsu, Chih-Yi; Tseng, Ling-Ming; Kroep, Judith; Leitch, A. Marilyn; Sarode, Venetia; Mamounas, Eleftherios; Marcom, Paul Kelly; Nuciforo, Paolo; Paik, Soonmyung; Peg, Vicente; Peston, David; Pierga, Jean-Yves; Quintela-Fandino, Miguel; Salgado, Roberto; Sikov, William; Thomas, Jeremy; Unzeitig, Gary; Wesseling, Jelle
2015-01-01
Neoadjuvant systemic therapy (NAST) provides the unique opportunity to assess response to treatment after months rather than years of follow-up. However, significant variability exists in methods of pathologic assessment of response to NAST, and thus its interpretation for subsequent clinical decisions. Our international multidisciplinary working group was convened by the Breast International Group-North American Breast Cancer Group (BIG-NABCG) collaboration and tasked to recommend practical methods for standardized evaluation of the post-NAST surgical breast cancer specimen for clinical trials that promote accurate and reliable designation of pathologic complete response (pCR) and meaningful characterization of residual disease. Recommendations include multidisciplinary communication; clinical marking of the tumor site (clips); and radiologic, photographic, or pictorial imaging of the sliced specimen, to map the tissue sections and reconcile macroscopic and microscopic findings. The information required to define pCR (ypT0/is ypN0 or ypT0 yp N0), residual ypT and ypN stage using the current AJCC/UICC system, and the Residual Cancer Burden system were recommended for quantification of residual disease in clinical trials. PMID:26019189
Big data processing in the cloud - Challenges and platforms
NASA Astrophysics Data System (ADS)
Zhelev, Svetoslav; Rozeva, Anna
2017-12-01
Choosing the appropriate architecture and technologies for a big data project is a difficult task, which requires extensive knowledge of both the problem domain and the big data landscape. The paper analyzes the main big data architectures and the most widely implemented technologies used for processing and persisting big data. Clouds provide for dynamic resource scaling, which makes them a natural fit for big data applications. Basic cloud computing service models are presented. Two architectures for processing big data are discussed, the Lambda and Kappa architectures. Technologies for big data persistence are presented and analyzed. Stream processing, as the most important and most difficult aspect to manage, is outlined. The paper highlights the main advantages of the cloud and potential problems.
Big Data in the Earth Observing System Data and Information System
NASA Technical Reports Server (NTRS)
Lynnes, Chris; Baynes, Katie; McInerney, Mark
2016-01-01
Approaches that are being pursued for the Earth Observing System Data and Information System (EOSDIS) data system to address the challenges of Big Data were presented to the NASA Big Data Task Force. Cloud prototypes are underway to tackle the volume challenge of Big Data. However, advances in computer hardware or cloud won't help (much) with variety. Rather, interoperability standards, conventions, and community engagement are the key to addressing variety.
MIT CSAIL and Lincoln Laboratory Task Force Report
2016-08-01
Projects have been very diverse, spanning several areas of CSAIL concentration, including robotics, big data analytics, wireless communications, and computing architectures, as well as machine learning systems and algorithms, such as recommender systems and "Big Data" analytics. Advanced computing architectures broadly refer to...
NASA unveils its big astrophysics dreams
NASA Astrophysics Data System (ADS)
Gwynne, Peter
2014-02-01
A task force appointed by the astrophysics subcommittee of NASA's advisory council has published a report looking at the technologies needed to answer three big questions: Are we alone? How did we get here? And how does the universe work?
Baker, Tim; Khalid, Karima; Acicbe, Ozlem; McGloughlin, Steve; Amin, Pravin
2017-12-01
Tropical disease results in a great burden of critical illness. The same life-saving and supportive therapies to maintain vital organ functions that comprise critical care are required by these patients as for all other diseases. In low income countries, the little available data points towards high mortality rates and big challenges in the provision of critical care. Improving critical care in low income countries requires a focus on hospital design, training, triage, monitoring & treatment modifications, the basic principles of critical care, hygiene and the involvement of multi-disciplinary teams. As a large proportion of critical illness from tropical disease is in low income countries, the impact and reductions in mortality rates of improved critical care in such settings could be substantial. Copyright © 2017. Published by Elsevier Inc.
Task Definition: A Motivating Task = Eager Learners!
ERIC Educational Resources Information Center
Jansen, Barbara A.
2005-01-01
Teachers who design meaningful and developmentally appropriate tasks will motivate their students to engage with the content, and as students work through the Big6 process, interacting with the content, they learn and practice information and technology skills. A valuable task definition technique is to develop questions that students in each group…
ERIC Educational Resources Information Center
Berkowitz, Bob
1998-01-01
The Big6 Skills provide a framework for analyzing instructional variables and can be phrased as a series of questions for analysis: (1) Task Definition, (2) Information Seeking Strategies, (3) Location and Access, (4) Use of Information, (5) Synthesis, (6) Evaluation. A fictional seventh-grade teacher/class is used as an example of using the Big6…
Prince, John; Arora, Siddharth; de Vos, Maarten
2018-04-26
To better understand the longitudinal characteristics of Parkinson's disease (PD) through the analysis of finger tapping and memory tests collected remotely using smartphones. Using a large cohort (312 PD subjects and 236 controls) of participants in the mPower study, we extract clinically validated features from a finger tapping and memory test to monitor the longitudinal behaviour of study participants. We investigate any discrepancy in learning rates associated with motor and non-motor tasks between PD subjects and healthy controls. The ability of these features to predict self-assigned severity measures is assessed whilst simultaneously inspecting the severity scoring system for floor-ceiling effects. Finally, we study the relationship between motor and non-motor longitudinal behaviour to determine if separate aspects of the disease are dependent on one another. We find that the test performances of the most severe subjects show significant correlations with self-assigned severity measures. Interestingly, less severe subjects do not show significant correlations, which is shown to be a consequence of floor-ceiling effects within the mPower self-reporting severity system. We find that motor performance after practise is a better predictor of severity than baseline performance suggesting that starting performance at a new motor task is less representative of disease severity than the performance after the test has been learnt. We find PD subjects show significant impairments in motor ability as assessed through the alternating finger tapping (AFT) test in both the short- and long-term analyses. In the AFT and memory tests we demonstrate that PD subjects show a larger degree of longitudinal performance variability in addition to requiring more instances of a test to reach a steady state performance than healthy subjects. Our findings pave the way forward for objective assessment and quantification of longitudinal learning rates in PD. 
This can be particularly useful for symptom monitoring and assessing medication response. This study tries to tackle some of the major challenges associated with self-assessed severity labels by designing and validating features extracted from big datasets in PD, which could help identify digital biomarkers capable of providing measures of disease severity outside of a clinical environment.
Big-data-based edge biomarkers: study on dynamical drug sensitivity and resistance in individuals.
Zeng, Tao; Zhang, Wanwei; Yu, Xiangtian; Liu, Xiaoping; Li, Meiyi; Chen, Luonan
2016-07-01
Big-data-based edge biomarker is a new concept to characterize disease features based on biomedical big data in a dynamical and network manner, which also provides alternative strategies to indicate disease status in single samples. This article gives a comprehensive review on big-data-based edge biomarkers for complex diseases in an individual patient, which are defined as biomarkers based on network information and high-dimensional data. Specifically, we firstly introduce the sources and structures of biomedical big data accessible in public for edge biomarker and disease study. We show that biomedical big data are typically 'small-sample size in high-dimension space', i.e. small samples but with high dimensions on features (e.g. omics data) for each individual, in contrast to traditional big data in many other fields characterized as 'large-sample size in low-dimension space', i.e. big samples but with low dimensions on features. Then, we demonstrate the concept, model and algorithm for edge biomarkers and further big-data-based edge biomarkers. Dissimilar to conventional biomarkers, edge biomarkers, e.g. module biomarkers in module network rewiring-analysis, are able to predict the disease state by learning differential associations between molecules rather than differential expressions of molecules during disease progression or treatment in individual patients. In particular, in contrast to using the information of the common molecules or edges (i.e. molecule-pairs) across a population in traditional biomarkers including network and edge biomarkers, big-data-based edge biomarkers are specific for each individual and thus can accurately evaluate the disease state by considering the individual heterogeneity. Therefore, the measurement of big data in a high-dimensional space is required not only in the learning process but also in the diagnosing or predicting process of the tested individual.
Finally, we provide a case study on analyzing the temporal expression data from a malaria vaccine trial by big-data-based edge biomarkers from module network rewiring-analysis. The illustrative results show that the identified module biomarkers can accurately distinguish vaccines with or without protection and outperform previously reported gene signatures in terms of effectiveness and efficiency. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
The Scope of Big Data in One Medicine: Unprecedented Opportunities and Challenges.
McCue, Molly E; McCoy, Annette M
2017-01-01
Advances in high-throughput molecular biology and electronic health records (EHR), coupled with increasing computer capabilities, have resulted in an increased interest in the use of big data in health care. Big data requires collection and analysis of data at an unprecedented scale and represents a paradigm shift in health care, offering (1) the capacity to generate new knowledge more quickly than traditional scientific approaches; (2) unbiased collection and analysis of data; and (3) a holistic understanding of biology and pathophysiology. Big data promises more personalized and precision medicine for patients with improved accuracy and earlier diagnosis, and therapy tailored to an individual's unique combination of genes, environmental risk, and precise disease phenotype. This promise comes from data collected from numerous sources, ranging from molecules to cells, to tissues, to individuals and populations-and the integration of these data into networks that improve understanding of health and disease. Big data-driven science should play a role in propelling comparative medicine and "one medicine" (i.e., the shared physiology, pathophysiology, and disease risk factors across species) forward. Merging of data from EHR across institutions will give access to patient data on a scale previously unimaginable, allowing for precise phenotype definition and objective evaluation of risk factors and response to therapy. High-throughput molecular data will give insight into previously unexplored molecular pathophysiology and disease etiology. Investigation and integration of big data from a variety of sources will result in stronger parallels drawn at the molecular level between human and animal disease, allow for predictive modeling of infectious disease and identification of key areas of intervention, and facilitate step-changes in our understanding of disease that can make a substantial impact on animal and human health. 
However, the use of big data comes with significant challenges. Here we explore the scope of "big data," including its opportunities, its limitations, and what is needed capitalize on big data in one medicine.
Miyauchi, Yumi; Sakai, Satoshi; Maeda, Seiji; Shimojo, Nobutake; Watanabe, Shigeyuki; Honma, Satoshi; Kuga, Keisuke; Aonuma, Kazutaka; Miyauchi, Takashi
2012-10-15
Big endothelins (pro-endothelin; inactive precursor) are converted to biologically active endothelins (ETs). Mammals and humans produce three ET family members, ET-1, ET-2 and ET-3, from three different genes. Although ET-1 is produced by vascular endothelial cells, these cells do not produce ET-3, which is produced by neuronal cells and organs such as the thyroid, salivary gland and the kidney. In patients with end-stage renal disease, abnormal vascular endothelial cell function and elevated plasma ET-1 and big ET-1 levels have been reported. It is unknown whether big ET-2 and big ET-3 plasma levels are altered in these patients. The purpose of the present study was to determine whether the endogenous ET-1, ET-2, and ET-3 systems, including big ETs, are altered in patients with end-stage renal disease. We measured plasma levels of ET-1, ET-3 and big ET-1, big ET-2, and big ET-3 in patients on chronic hemodialysis (n=23) and age-matched healthy subjects (n=17). In patients on hemodialysis, plasma levels (measured just before hemodialysis) of both ET-1 and ET-3 and of big ET-1, big ET-2, and big ET-3 were markedly elevated, and the increase was greater for big ETs (big ET-1, 4-fold; big ET-2, 6-fold; big ET-3, 5-fold) than for ETs (ET-1, 1.7-fold; ET-3, 2-fold). In hemodialysis patients, plasma levels of the inactive precursors big ET-1, big ET-2, and big ET-3 are markedly increased, yet there is only a moderate increase in plasma levels of the active products, ET-1 and ET-3. This suggests that the activity of endothelin-converting enzyme contributing to circulating levels of ET-1 and ET-3 may be decreased in patients on chronic hemodialysis. Copyright © 2012 Elsevier Inc. All rights reserved.
Teaching Mathematics in Primary Schools with Challenging Tasks: The Big (Not So) Friendly Giant
ERIC Educational Resources Information Center
Russo, James
2016-01-01
The use of enabling and extending prompts allows tasks to be both accessible and challenging within a classroom. This article provides an example of how to use enabling and extending prompts effectively when employing a challenging task in Year 2.
Ebersbach, Georg; Grust, Ute; Ebersbach, Almut; Wegner, Brigitte; Gandor, Florin; Kühn, Andrea A
2015-02-01
LSVT-BIG is an exercise program for patients with Parkinson's disease (PD) comprising 16 1-h sessions within 4 weeks. LSVT-BIG was compared with a 2-week short protocol (AOT-SP) consisting of 10 sessions with identical exercises in 42 patients with PD. The UPDRS-III score was reduced by -6.6 with LSVT-BIG and -5.7 with AOT-SP at follow-up after 16 weeks (p < 0.001). Measures of motor performance were equally improved by LSVT-BIG and AOT-SP, but high-intensity LSVT-BIG was more effective in obtaining patient-perceived benefit.
The Impact of Big Data on Chronic Disease Management.
Bhardwaj, Niharika; Wodajo, Bezawit; Spano, Anthony; Neal, Symaron; Coustasse, Alberto
Population health management and specifically chronic disease management depend on the ability of providers to prevent development of high-cost and high-risk conditions such as diabetes, heart failure, and chronic respiratory diseases and to control them. The advent of big data analytics has potential to empower health care providers to make timely and truly evidence-based informed decisions to provide more effective and personalized treatment while reducing the costs of this care to patients. The goal of this study was to identify real-world health care applications of big data analytics to determine its effectiveness in both patient outcomes and the relief of financial burdens. The methodology for this study was a literature review utilizing 49 articles. Evidence of big data analytics being largely beneficial in the areas of risk prediction, diagnostic accuracy and patient outcome improvement, hospital readmission reduction, treatment guidance, and cost reduction was noted. Initial applications of big data analytics have proved useful in various phases of chronic disease management and could help reduce the chronic disease burden.
ERIC Educational Resources Information Center
Fukuzawa, Ryoko; Joho, Hideo; Maeshiro, Tetsuya
2015-01-01
This paper reports the results of a survey that investigated the practice and experience of task management of university students. A total of 202 tasks identified by 24 university students were analyzed. The results suggest that participants had a reasonable sense of priority of tasks, that they tend to perceive a task as a big chunk, not a…
The Role of Learning Objectives under the Academic Big Top.
ERIC Educational Resources Information Center
Kelly, Leonard P.
1980-01-01
Likens a circus performer's varied responsibilities to the learning tasks required of undergraduates. Suggests that students' inability to perform many of these tasks results from confusion about instructors' expectations and the appropriateness of various learning strategies to different tasks. Recommends that students be presented with clear…
Serum big endothelin-1 as a clinical marker for cardiopulmonary and neoplastic diseases in dogs.
Fukumoto, Shinya; Hanazono, Kiwamu; Miyasho, Taku; Endo, Yoshifumi; Kadosawa, Tsuyoshi; Iwano, Hidetomo; Uchide, Tsuyoshi
2014-11-24
Many studies of human subjects have demonstrated the utility of assessing serum levels of endothelin-1 (ET-1) and big ET-1 as clinical biomarkers in cardiopulmonary and neoplastic diseases. In this study we explored the feasibility of using serum big ET-1 as a reliable veterinary marker in dogs with various cardiopulmonary and neoplastic diseases. Serum big ET-1 levels were measured by ELISA in dogs with cardiopulmonary (n=21) and neoplastic diseases (n=57). Dogs exhibiting cardiopulmonary disease were divided into two groups based on the velocity of tricuspid valve regurgitation (>3.0 m/s) measured by ultrasound: without and with pulmonary hypertension. Big ET-1 levels for the dogs with the diseases were compared with levels in normal healthy dogs (n=17). Dogs with cardiopulmonary disease (4.6±4.6 pmol/l) showed a significantly (P<0.01) higher level of big ET-1 than healthy control dogs (1.1±0.53 pmol/l). Serum levels in the dogs with pulmonary hypertension (6.2±5.3 pmol/l) were significantly (P<0.01) higher than those without pulmonary hypertension (2.0±0.6 pmol/l). Dogs with hemangiosarcoma (5.6±2.2 pmol/l), adenocarcinoma (2.0±1.8 pmol/l), histiocytic sarcoma (3.3±1.9 pmol/l), chondrosarcoma or osteosarcoma (3.0±1.6 pmol/l) and hepatocellular carcinoma (2.7±1.8 pmol/l) showed significantly (P<0.05) higher levels than healthy control dogs. These findings point to the potential of serum big ET-1 as a clinical marker for cardiopulmonary and neoplastic diseases in dogs. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Flexible Description Language for HPC based Processing of Remote Sense Data
NASA Astrophysics Data System (ADS)
Nandra, Constantin; Gorgan, Dorian; Bacu, Victor
2016-04-01
When talking about Big Data, the most challenging aspect lies in processing them in order to gain new insight, find new patterns, and extract knowledge from them. This problem is likely most apparent in the case of Earth Observation (EO) data. With ever higher numbers of data sources and increasing data acquisition rates, dealing with EO data is indeed a challenge [1]. Geoscientists should address this challenge by using flexible and efficient tools and platforms. To address this trend, the BigEarth project [2] aims to combine the advantages of high performance computing solutions with flexible processing description methodologies in order to reduce both task execution times and task definition time and effort. As a component of the BigEarth platform, WorDeL (Workflow Description Language) [3] is intended to offer a flexible, compact and modular approach to the task definition process. WorDeL, unlike other description alternatives such as Python or shell scripts, is oriented towards the description of processing topologies, using them as abstractions for the processing programs. This feature is intended to make it an attractive alternative for users lacking programming experience. By promoting modular designs, WorDeL not only makes processing descriptions more user-readable and intuitive, but also helps organize processing tasks into independent sub-tasks, which can be executed in parallel on multi-processor platforms in order to improve execution times. As a BigEarth platform [4] component, WorDeL represents the means by which the user interacts with the system, describing processing algorithms in terms of existing operators and workflows [5], which are ultimately translated into sets of executable commands. The WorDeL language has been designed to help in the definition of compute-intensive, batch tasks which can be distributed and executed on high-performance, cloud or grid-based architectures in order to improve the processing time.
Main references for further information:
[1] Gorgan, D., "Flexible and Adaptive Processing of Earth Observation Data over High Performance Computation Architectures", International Conference and Exhibition Satellite 2015, August 17-19, Houston, Texas, USA.
[2] BigEarth project - flexible processing of big earth data over high performance computing architectures. http://cgis.utcluj.ro/bigearth, (2014).
[3] Nandra, C., Gorgan, D., "Workflow Description Language for Defining Big Earth Data Processing Tasks", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE Press, pp. 461-468, (2015).
[4] Bacu, V., Stefan, T., Gorgan, D., "Adaptive Processing of Earth Observation Data on Cloud Infrastructures Based on Workflow Description", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE Press, pp. 444-454, (2015).
[5] Mihon, D., Bacu, V., Colceriu, V., Gorgan, D., "Modeling of Earth Observation Use Cases through the KEOPS System", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE Press, pp. 455-460, (2015).
A study on specialist or special disease clinics based on big data.
Fang, Zhuyuan; Fan, Xiaowei; Chen, Gong
2014-09-01
Correlation analysis and processing of massive medical information can be implemented through big data technology to find the relevance of different factors in the life cycle of a disease and to provide the basis for scientific research and clinical practice. This paper explores the concept of constructing a big medical data platform and introduces the clinical model construction. Medical data can be collected and consolidated by distributed computing technology. Through analysis technology, such as artificial neural network and grey model, a medical model can be built. Big data analysis, such as Hadoop, can be used to construct early prediction and intervention models as well as clinical decision-making model for specialist and special disease clinics. It establishes a new model for common clinical research for specialist and special disease clinics.
Nees, Frauke; Vollstädt-Klein, Sabine; Fauth-Bühler, Mira; Steiner, Sabina; Mann, Karl; Poustka, Luise; Banaschewski, Tobias; Büchel, Christian; Conrod, Patricia J; Garavan, Hugh; Heinz, Andreas; Ittermann, Bernd; Artiges, Eric; Paus, Tomas; Pausova, Zdenka; Rietschel, Marcella; Smolka, Michael N; Struve, Maren; Loth, Eva; Schumann, Gunter; Flor, Herta
2012-11-01
Adolescence is a transition period that is assumed to be characterized by increased sensitivity to reward. While there is growing research on reward processing in adolescents, investigations into the engagement of brain regions under different reward-related conditions in one sample of healthy adolescents, especially in a target age group, are missing. We aimed to identify brain regions preferentially activated in a reaction time task (monetary incentive delay (MID) task) and a simple guessing task (SGT) in a sample of 14-year-old adolescents (N = 54) using two commonly used reward paradigms. Functional magnetic resonance imaging was employed during the MID with big versus small versus no win conditions and the SGT with big versus small win and big versus small loss conditions. Analyses focused on changes in blood oxygen level-dependent contrasts during reward and punishment processing in anticipation and feedback phases. We found clear magnitude-sensitive response in reward-related brain regions such as the ventral striatum during anticipation in the MID task, but not in the SGT. This was also true for reaction times. The feedback phase showed clear reward-related, but magnitude-independent, response patterns, for example in the anterior cingulate cortex, in both tasks. Our findings highlight neural and behavioral response patterns engaged in two different reward paradigms in one sample of 14-year-old healthy adolescents and might be important for reference in future studies investigating reward and punishment processing in a target age group.
Infectious Disease Surveillance in the Big Data Era: Towards Faster and Locally Relevant Systems
Simonsen, Lone; Gog, Julia R.; Olson, Don; Viboud, Cécile
2016-01-01
While big data have proven immensely useful in fields such as marketing and earth sciences, public health is still relying on more traditional surveillance systems and awaiting the fruits of a big data revolution. A new generation of big data surveillance systems is needed to achieve rapid, flexible, and local tracking of infectious diseases, especially for emerging pathogens. In this opinion piece, we reflect on the long and distinguished history of disease surveillance and discuss recent developments related to use of big data. We start with a brief review of traditional systems relying on clinical and laboratory reports. We then examine how large-volume medical claims data can, with great spatiotemporal resolution, help elucidate local disease patterns. Finally, we review efforts to develop surveillance systems based on digital and social data streams, including the recent rise and fall of Google Flu Trends. We conclude by advocating for increased use of hybrid systems combining information from traditional surveillance and big data sources, which seems the most promising option moving forward. Throughout the article, we use influenza as an exemplar of an emerging and reemerging infection which has traditionally been considered a model system for surveillance and modeling. PMID:28830112
ERIC Educational Resources Information Center
Tooley, Melinda
2005-01-01
The different tools that are helpful during the Synthesis stage, their role in boosting students' abilities in Synthesis, and the way in which they can be customized to meet the needs of each group of students are discussed. Big6 TurboTools offers several tools to help complete the task. In the Synthesis stage, these same tools, along with Turbo Report and…
Mindset Mathematics: Visualizing and Investigating Big Ideas, Grade 4
ERIC Educational Resources Information Center
Boaler, Jo; Munson, Jen; Williams, Cathy
2017-01-01
The most challenging parts of teaching mathematics are engaging students and helping them understand the connections between mathematics concepts. In this volume, you'll find a collection of low floor, high ceiling tasks that will help you do just that, by looking at the big ideas at the fourth-grade level through visualization, play, and…
An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.
Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei
2017-12-01
Big data, cloud computing, and high-performance computing (HPC) are on the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface on the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved a satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.
Application and Exploration of Big Data Mining in Clinical Medicine.
Zhang, Yue; Guo, Shu-Li; Han, Li-Na; Li, Tie-Ling
2016-03-20
To review theories and technologies of big data mining and their application in clinical medicine. Literature published in English or Chinese regarding theories and technologies of big data mining and the concrete applications of data mining technology in clinical medicine was obtained from PubMed and the Chinese Hospital Knowledge Database from 1975 to 2015. Original articles regarding big data mining theory/technology and big data mining's application in the medical field were selected. This review characterized the basic theories and technologies of big data mining including fuzzy theory, rough set theory, cloud theory, Dempster-Shafer theory, artificial neural network, genetic algorithm, inductive learning theory, Bayesian network, decision tree, pattern recognition, high-performance computing, and statistical analysis. The application of big data mining in clinical medicine was analyzed in the fields of disease risk assessment, clinical decision support, prediction of disease development, guidance of rational use of drugs, medical management, and evidence-based medicine. Big data mining has the potential to play an important role in clinical medicine. PMID:26960378
A Dictionary Learning Approach for Signal Sampling in Task-Based fMRI for Reduction of Big Data
Ge, Bao; Li, Xiang; Jiang, Xi; Sun, Yifei; Liu, Tianming
2018-01-01
The exponential growth of fMRI big data offers researchers an unprecedented opportunity to explore functional brain networks. However, this opportunity has not been fully explored yet due to the lack of effective and efficient tools for handling such fMRI big data. One major challenge is that computing capabilities still lag behind the growth of large-scale fMRI databases, e.g., it takes many days to perform dictionary learning and sparse coding of whole-brain fMRI data for an fMRI database of average size. Therefore, how to reduce the data size but without losing important information becomes a more and more pressing issue. To address this problem, we propose a signal sampling approach for significant fMRI data reduction before performing structurally-guided dictionary learning and sparse coding of whole brain's fMRI data. We compared the proposed structurally guided sampling method with no sampling, random sampling and uniform sampling schemes, and experiments on the Human Connectome Project (HCP) task fMRI data demonstrated that the proposed method can achieve more than 15 times speed-up without sacrificing the accuracy in identifying task-evoked functional brain networks. PMID:29706880
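The abstract above does not spell out the structurally guided sampling algorithm, but the baseline schemes it is compared against (uniform and random sampling of voxel signals, with a reduction factor on the order of the reported 15x speed-up) can be sketched in plain Python. The signal list, reduction factor, and shapes below are illustrative assumptions, not the authors' data or method.

```python
import random

def uniform_sample(signals, factor):
    """Keep every `factor`-th voxel signal (uniform sampling)."""
    return signals[::factor]

def random_sample(signals, factor, seed=0):
    """Keep a random subset of len(signals) // factor voxel signals."""
    rng = random.Random(seed)
    return rng.sample(signals, max(1, len(signals) // factor))

# Toy stand-in for whole-brain fMRI data: 150 "voxels", 4 time points each
signals = [[v + t for t in range(4)] for v in range(150)]
reduced = uniform_sample(signals, 15)
print(len(signals), "->", len(reduced))  # 150 -> 10
```

Dictionary learning and sparse coding would then run on `reduced` instead of `signals`; the paper's contribution is choosing the retained signals using brain structure rather than these naive schemes.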
ERIC Educational Resources Information Center
Chang, Chiung-Sui
2007-01-01
The study developed a Big 6 Information Problem-Solving Scale (B61PS), including the subscales of task definition and information-seeking strategies, information access and synthesis, and evaluation. More than 1,500 fifth and sixth graders in Taiwan responded. The study revealed that the scale showed adequate reliability in assessing the…
Big Data for Infectious Disease Surveillance and Modeling
Bansal, Shweta; Chowell, Gerardo; Simonsen, Lone; Vespignani, Alessandro; Viboud, Cécile
2016-01-01
We devote a special issue of the Journal of Infectious Diseases to review the recent advances of big data in strengthening disease surveillance, monitoring medical adverse events, informing transmission models, and tracking patient sentiments and mobility. We consider a broad definition of big data for public health, one encompassing patient information gathered from high-volume electronic health records and participatory surveillance systems, as well as mining of digital traces such as social media, Internet searches, and cell-phone logs. We introduce nine independent contributions to this special issue and highlight several cross-cutting areas that require further research, including representativeness, biases, volatility, and validation, and the need for robust statistical and hypotheses-driven analyses. Overall, we are optimistic that the big-data revolution will vastly improve the granularity and timeliness of available epidemiological information, with hybrid systems augmenting rather than supplanting traditional surveillance systems, and better prospects for accurate infectious diseases models and forecasts. PMID:28830113
Plasma big endothelin-1 level and the severity of new-onset stable coronary artery disease.
Chen, Juan; Chen, Man-Hua; Guo, Yuan-Lin; Zhu, Cheng-Gang; Xu, Rui-Xia; Dong, Qian; Li, Jian-Jun
2015-01-01
To investigate the usefulness of the plasma big endothelin-1 (big ET-1) level in predicting the severity of new-onset, stable, angiography-proven coronary artery disease (CAD). A total of 963 consecutive stable CAD patients with more than 50% stenosis in at least one main vessel were enrolled. The patients were classified into three groups according to the tertile of the Gensini score (GS; low GS <20, n=300; intermediate GS 20-40, n=356; and high GS >40, n=307), and the relationship between the big ET-1 level and the GS was evaluated. The plasma levels of big ET-1 increased significantly in association with increases in the GS tertile (p=0.007). A multivariate analysis suggested that the plasma big ET-1 level was an independent predictor of a high GS (OR=2.26, 95% CI: 1.23-4.15, p=0.009), and there was a positive correlation between the big ET-1 level and the GS (r=0.20, p<0.001). The area under the receiver operating characteristic curve (AUC) for the big ET-1 level in predicting a high GS was 0.64 (95% CI 0.60-0.68, p<0.001), and the optimal cutoff value for the plasma big ET-1 level for predicting a high GS was 0.34 fmol/mL, with a sensitivity of 62.6% and a specificity of 60.3%. In the high-big ET-1 group (≥0.34 fmol/mL), there were significantly increased rates of three-vessel disease (43.6% vs. 35.4%, p=0.017) and a high GS [31 (17-54) vs. 24 (16-44), p=0.001] compared with the low-big ET-1 group. The present findings indicate that the plasma big ET-1 level is a useful predictor of the severity of new-onset stable CAD associated with significant stenosis.
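An "optimal cutoff" from an ROC curve, like the 0.34 fmol/mL reported above, is commonly chosen by maximizing Youden's J (sensitivity + specificity - 1). The abstract does not state which criterion the authors used, so this is a generic sketch; the big ET-1 values and high-GS labels below are hypothetical, not the study's data.

```python
def youden_cutoff(values, labels):
    """Return (cutoff, J) maximizing Youden's J = sensitivity + specificity - 1.

    `labels` are 1 for the outcome of interest (here: high Gensini score), else 0.
    A test is "positive" when value >= cutoff.
    """
    pos = sum(labels)
    neg = len(labels) - pos
    best_c, best_j = None, -1.0
    for c in sorted(set(values)):
        sens = sum(1 for v, y in zip(values, labels) if v >= c and y == 1) / pos
        spec = sum(1 for v, y in zip(values, labels) if v < c and y == 0) / neg
        if sens + spec - 1 > best_j:
            best_c, best_j = c, sens + spec - 1
    return best_c, best_j

# Hypothetical big ET-1 levels (fmol/mL) and high-GS labels
et1     = [0.10, 0.20, 0.25, 0.30, 0.34, 0.40, 0.50, 0.60]
high_gs = [0,    0,    0,    0,    1,    1,    1,    1]
cutoff, j = youden_cutoff(et1, high_gs)
print(cutoff, j)  # 0.34 1.0
```

With real, overlapping distributions J would be well below 1, which is why the study reports only 62.6% sensitivity and 60.3% specificity at its cutoff.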
Gaze entropy reflects surgical task load.
Di Stasi, Leandro L; Diaz-Piedra, Carolina; Rieiro, Héctor; Sánchez Carrión, José M; Martin Berrido, Mercedes; Olivares, Gonzalo; Catena, Andrés
2016-11-01
Task (over-)load imposed on surgeons is a main contributing factor to surgical errors. Recent research has shown that gaze metrics represent a valid and objective index to assess operator task load in non-surgical scenarios. Thus, gaze metrics have the potential to improve workplace safety by providing accurate measurements of task load variations. However, the direct relationship between gaze metrics and surgical task load has not yet been investigated. We studied the effects of surgical task complexity on the gaze metrics of surgical trainees. We recorded the eye movements of 18 surgical residents, using a mobile eye tracker system, during the performance of three high-fidelity virtual simulations of laparoscopic exercises of increasing complexity: the Clip Applying exercise, the Cutting Big exercise, and the Translocation of Objects exercise. We also measured performance accuracy and subjective ratings of complexity. Gaze entropy and velocity increased linearly with task complexity: the visual exploration pattern became less stereotyped (i.e., more random) and faster during the more complex exercises. Residents performed the Clip Applying and Cutting Big exercises better than the Translocation of Objects exercise, and their perceived task complexity differed accordingly. Our data show that gaze metrics are a valid and reliable index of surgical task load. These findings have the potential to improve patient safety by providing accurate measurements of surgeon task (over-)load and might provide future indices for assessing residents' learning curves, independently of expensive virtual simulators or time-consuming expert evaluation.
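The abstract does not give the exact entropy formula used; a common definition in the gaze literature is Shannon entropy over the distribution of fixated regions (stationary gaze entropy), which is what this sketch assumes. The region labels and fixation sequences are made up for illustration.

```python
import math
from collections import Counter

def gaze_entropy(fixations):
    """Shannon entropy (bits) of the fixated-region distribution.

    Higher entropy = less stereotyped (more random) visual exploration.
    """
    n = len(fixations)
    return -sum((c / n) * math.log2(c / n) for c in Counter(fixations).values())

# Hypothetical fixation sequences for a simple vs. a complex laparoscopic task
simple_task  = ["tool", "tool", "tool", "tool", "target"]       # stereotyped scan
complex_task = ["tool", "target", "monitor", "assistant"] * 3   # dispersed scan
print(gaze_entropy(simple_task) < gaze_entropy(complex_task))   # True
```

Under this definition, a gaze confined to one region gives 0 bits, while an even spread over k regions gives log2(k) bits, matching the finding that entropy rises as exploration becomes less stereotyped.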
Mohammed, Emad A; Far, Behrouz H; Naugler, Christopher
2014-01-01
The emergence of massive datasets in a clinical setting presents both challenges and opportunities in data storage and analysis. This so-called “big data” challenges traditional analytic tools and will increasingly require novel solutions adapted from other fields. Advances in information and communication technology present the most viable solutions to big data analysis in terms of efficiency and scalability. It is vital that big data solutions be multithreaded and that data access approaches be precisely tailored to large volumes of semi-structured/unstructured data. The MapReduce programming framework uses two tasks common in functional programming: Map and Reduce. MapReduce is a new parallel processing framework and Hadoop is its open-source implementation on a single computing node or on clusters. Compared with existing parallel processing paradigms (e.g. grid computing and graphical processing unit (GPU) computing), MapReduce and Hadoop have two advantages: 1) fault-tolerant storage resulting in reliable data processing, by replicating the computing tasks and cloning the data chunks on different computing nodes across the computing cluster; 2) high-throughput data processing via a batch processing framework and the Hadoop distributed file system (HDFS). Data are stored in the HDFS and made available to the slave nodes for computation. In this paper, we review the existing applications of the MapReduce programming framework and its implementation platform Hadoop in clinical big data and related medical health informatics fields. The usage of MapReduce and Hadoop on a distributed system represents a significant advance in clinical big data processing and utilization, and opens up new opportunities in the emerging era of big data analytics. The objective of this paper is to summarize the state-of-the-art efforts in clinical big data analytics and highlight what might be needed to enhance the outcomes of clinical big data analytics tools. 
This paper is concluded by summarizing the potential usage of the MapReduce programming framework and Hadoop platform to process huge volumes of clinical data in medical health informatics related fields. PMID:25383096
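The Map and Reduce tasks described in the abstract above can be sketched, framework-free, in a few lines of Python. This toy word count over mock clinical notes is illustrative only; the function names and data are assumptions, not part of the Hadoop API:

```python
from collections import defaultdict

def map_phase(record):
    # Map: emit intermediate (key, value) pairs -- here, (word, 1) per token
    for word in record.lower().split():
        yield word, 1

def reduce_phase(key, values):
    # Reduce: aggregate all values sharing a key
    return key, sum(values)

def mapreduce(records):
    # Shuffle: group intermediate pairs by key, then reduce each group.
    # Hadoop distributes these phases across cluster nodes; here it is serial.
    groups = defaultdict(list)
    for record in records:
        for key, value in map_phase(record):
            groups[key].append(value)
    return dict(reduce_phase(k, v) for k, v in groups.items())

notes = ["patient reports fever", "fever resolved patient discharged"]
counts = mapreduce(notes)  # e.g., counts["fever"] == 2
```

The fault tolerance and throughput advantages cited in the abstract come from Hadoop replicating the data chunks and re-running failed Map or Reduce tasks; this serial sketch shows only the programming model itself.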
Mohammed, Emad A; Far, Behrouz H; Naugler, Christopher
2014-01-01
Performance and scalability evaluation of "Big Memory" on Blue Gene Linux.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoshii, K.; Iskra, K.; Naik, H.
2011-05-01
We address memory performance issues observed in Blue Gene Linux and discuss the design and implementation of 'Big Memory' - an alternative, transparent memory space introduced to eliminate the memory performance issues. We evaluate the performance of Big Memory using custom memory benchmarks, NAS Parallel Benchmarks, and the Parallel Ocean Program, at a scale of up to 4,096 nodes. We find that Big Memory successfully resolves the performance issues normally encountered in Blue Gene Linux. For the ocean simulation program, we even find that Linux with Big Memory provides better scalability than does the lightweight compute node kernel designed solely for high-performance applications. Originally intended exclusively for compute node tasks, our new memory subsystem dramatically improves the performance of certain I/O node applications as well. We demonstrate this performance using the central processor of the LOw Frequency ARray radio telescope as an example.
Big Data for Infectious Disease Surveillance and Modeling.
Bansal, Shweta; Chowell, Gerardo; Simonsen, Lone; Vespignani, Alessandro; Viboud, Cécile
2016-12-01
We devote a special issue of the Journal of Infectious Diseases to review the recent advances of big data in strengthening disease surveillance, monitoring medical adverse events, informing transmission models, and tracking patient sentiments and mobility. We consider a broad definition of big data for public health, one encompassing patient information gathered from high-volume electronic health records and participatory surveillance systems, as well as mining of digital traces such as social media, Internet searches, and cell-phone logs. We introduce nine independent contributions to this special issue and highlight several cross-cutting areas that require further research, including representativeness, biases, volatility, and validation, and the need for robust statistical and hypothesis-driven analyses. Overall, we are optimistic that the big-data revolution will vastly improve the granularity and timeliness of available epidemiological information, with hybrid systems augmenting rather than supplanting traditional surveillance systems, and better prospects for accurate infectious disease models and forecasts. Published by Oxford University Press for the Infectious Diseases Society of America 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.
Relational Understanding of the Derivative Concept through Mathematical Modeling: A Case Study
ERIC Educational Resources Information Center
Sahin, Zulal; Aydogan Yenmez, Arzu; Erbas, Ayhan Kursat
2015-01-01
The purpose of this study was to investigate three second-year graduate students' awareness and understanding of the relationships among the "big ideas" that underlie the concept of derivative through modeling tasks and Skemp's distinction between relational and instrumental understanding. The modeling tasks consisting of warm-up,…
Patterns of Tasks, Patterns of Talk: L2 Literacy Building in University Spanish Classes
ERIC Educational Resources Information Center
Gleason, Jesse; Slater, Tammy
2017-01-01
Second language (L2) classroom research has sought to shed light on the processes and practices that develop L2 learners' abilities [Nunan, D. 2004. "Task-based language teaching." London: Continuum; Verplaetse, L. 2014. "Using big questions to apprentice students into language-rich classroom practices." "TESOL…
Galla, Brian M.; Plummer, Benjamin D.; White, Rachel E.; Meketon, David; D’Mello, Sidney K.; Duckworth, Angela L.
2014-01-01
The current study reports on the development and validation of the Academic Diligence Task (ADT), designed to assess the tendency to expend effort on academic tasks which are tedious in the moment but valued in the long-term. In this novel online task, students allocate their time between solving simple math problems (framed as beneficial for problem solving skills) and, alternatively, playing Tetris or watching entertaining videos. Using a large sample of high school seniors (N = 921), the ADT demonstrated convergent validity with self-report ratings of Big Five conscientiousness and its facets, self-control and grit, as well as discriminant validity from theoretically unrelated constructs, such as Big Five extraversion, openness, and emotional stability, test anxiety, life satisfaction, and positive and negative affect. The ADT also demonstrated incremental predictive validity for objectively measured GPA, standardized math and reading achievement test scores, high school graduation, and college enrollment, over and beyond demographics and intelligence. Collectively, findings suggest the feasibility of online behavioral measures to assess noncognitive individual differences that predict academic outcomes. PMID:25258470
Lyme disease: the promise of Big Data, companion diagnostics and precision medicine
Stricker, Raphael B; Johnson, Lorraine
2016-01-01
Lyme disease caused by the spirochete Borrelia burgdorferi has become a major worldwide epidemic. Recent studies based on Big Data registries show that >300,000 people are diagnosed with Lyme disease each year in the USA, and up to two-thirds of individuals infected with B. burgdorferi will fail conventional 30-year-old antibiotic therapy for Lyme disease. In addition, animal and human evidence suggests that sexual transmission of the Lyme spirochete may occur. Improved companion diagnostic tests for Lyme disease need to be implemented, and novel treatment approaches are urgently needed to combat the epidemic. In particular, therapies based on the principles of precision medicine could be modeled on successful “designer drug” treatment for HIV/AIDS and hepatitis C virus infection featuring targeted protease inhibitors. The use of Big Data registries, companion diagnostics and precision medicine will revolutionize the diagnosis and treatment of Lyme disease. PMID:27672336
Cell Phones ≠ Self and Other Problems with Big Data Detection and Containment during Epidemics.
Erikson, Susan L
2018-03-08
Evidence from Sierra Leone reveals the significant limitations of big data in disease detection and containment efforts. Early in the 2014-2016 Ebola epidemic in West Africa, media heralded HealthMap's ability to detect the outbreak from newsfeeds. Later, big data, specifically call detail record data collected from millions of cell phones, was hyped as useful for stopping the disease by tracking contagious people. It did not work. In this article, I trace the causes of big data's containment failures. During epidemics, big data experiments can have opportunity costs: namely, forestalling urgent response. Finally, what counts as data during epidemics must include data coming from anthropological technologies, because they are so useful for detection and containment. © 2018 The Authors Medical Anthropology Quarterly published by Wiley Periodicals, Inc. on behalf of American Anthropological Association.
Size matters: bigger is faster.
Sereno, Sara C; O'Donnell, Patrick J; Sereno, Margaret E
2009-06-01
A largely unexplored aspect of lexical access in visual word recognition is "semantic size"--namely, the real-world size of an object to which a word refers. A total of 42 participants performed a lexical decision task on concrete nouns denoting either big or small objects (e.g., bookcase or teaspoon). Items were matched pairwise on relevant lexical dimensions. Participants' reaction times were reliably faster to semantically "big" versus "small" words. The results are discussed in terms of possible mechanisms, including more active representations for "big" words, due to the ecological importance attributed to large objects in the environment and the relative speed of neural responses to large objects.
Effect of Exercise on Motor and Nonmotor Symptoms of Parkinson's Disease
Dashtipour, Khashayar; Johnson, Eric; Kani, Camellia; Kani, Kayvan; Hadi, Ehsan; Ghamsary, Mark; Pezeshkian, Shant; Chen, Jack J.
2015-01-01
Background. Novel rehabilitation strategies have demonstrated potential benefits for motor and non-motor symptoms of Parkinson's disease (PD). Objective. To compare the effects of Lee Silverman Voice Therapy BIG (LSVT BIG therapy) versus a general exercise program (combined treadmill plus seated trunk and limb exercises) on motor and non-motor symptoms of PD. Methods. Eleven patients with early-mid stage PD participated in the prospective, double-blinded, randomized clinical trial. Both groups received 16 one-hour supervised training sessions over 4 weeks. Outcome measures included the Unified Parkinson's Disease Rating Scale (UPDRS), Beck Depression Inventory (BDI), Beck Anxiety Inventory (BAI) and Modified Fatigue Impact Scale (MFIS). Five patients performed general exercise and six patients performed LSVT BIG therapy. Post-intervention evaluations were conducted at weeks 4, 12 and 24. Results. The combined cohort made improvements at all follow-up evaluations with statistical significance for UPDRS total and motor, BDI, and MFIS (P < 0.05). Conclusion. This study demonstrated positive effects of general exercise and LSVT BIG therapy on motor and non-motor symptoms of patients with PD. Our results suggest that general exercise may be as effective as LSVT BIG therapy on symptoms of PD for patients not able to readily access outpatient LSVT BIG therapy. PMID:25722915
Technology in Parkinson disease: Challenges and Opportunities
Espay, Alberto J.; Bonato, Paolo; Nahab, Fatta; Maetzler, Walter; Dean, John M.; Klucken, Jochen; Eskofier, Bjoern M.; Merola, Aristide; Horak, Fay; Lang, Anthony E.; Reilmann, Ralf; Giuffrida, Joe; Nieuwboer, Alice; Horne, Malcolm; Little, Max A.; Litvan, Irene; Simuni, Tanya; Dorsey, E. Ray; Burack, Michelle A.; Kubota, Ken; Kamondi, Anita; Godinho, Catarina; Daneault, Jean-Francois; Mitsi, Georgia; Krinke, Lothar; Hausdorff, Jeffery M.; Bloem, Bastiaan R.; Papapetropoulos, Spyros
2016-01-01
The miniaturization, sophistication, proliferation, and accessibility of technologies are enabling the capturing of more and previously inaccessible phenomena in Parkinson disease (PD). However, more information has not translated into greater understanding of disease complexity to satisfy diagnostic and therapeutic needs. Challenges include non-compatible technology platforms, the need for wide-scale and long-term deployment of sensor technology (in particular among vulnerable elderly patients), and the gap between the “big data” acquired with sensitive measurement technologies and their limited clinical application. Major opportunities could be realized if new technologies are developed as part of open-source and/or open-hardware platforms enabling multi-channel data capture, sensitive to the broad range of motor and non-motor problems that characterize PD, and adaptable into self-adjusting, individualized treatment delivery systems. The International Parkinson and Movement Disorders Society Task Force on Technology is entrusted to convene engineers, clinicians, researchers, and patients to promote the development of integrated measurement and closed-loop therapeutic systems with high patient adherence that also serve to: 1) encourage the adoption of clinico-pathophysiologic phenotyping and early detection of critical disease milestones; 2) enhance tailoring of symptomatic therapy; 3) improve subgroup targeting of patients for future testing of disease modifying treatments; and 4) identify objective biomarkers to improve longitudinal tracking of impairments in clinical care and research. This article summarizes the work carried out by the Task Force toward identifying challenges and opportunities in the development of technologies with potential for improving the clinical management and quality of life of individuals with PD. PMID:27125836
HARNESSING BIG DATA FOR PRECISION MEDICINE: INFRASTRUCTURES AND APPLICATIONS.
Yu, Kun-Hsing; Hart, Steven N; Goldfeder, Rachel; Zhang, Qiangfeng Cliff; Parker, Stephen C J; Snyder, Michael
2017-01-01
Precision medicine is a health management approach that accounts for individual differences in genetic backgrounds and environmental exposures. With the recent advancements in high-throughput omics profiling technologies, collections of large study cohorts, and the developments of data mining algorithms, big data in biomedicine is expected to provide novel insights into health and disease states, which can be translated into personalized disease prevention and treatment plans. However, petabytes of biomedical data generated by multiple measurement modalities pose a significant challenge for data analysis, integration, storage, and result interpretation. In addition, patient privacy preservation, coordination between participating medical centers and data analysis working groups, as well as discrepancies in data sharing policies remain important topics of discussion. In this workshop, we invite experts in omics integration, biobank research, and data management to share their perspectives on leveraging big data to enable precision medicine. Workshop website: http://tinyurl.com/PSB17BigData; HashTag: #PSB17BigData.
[Big data in official statistics].
Zwick, Markus
2015-08-01
The concept of "big data" stands to change the face of official statistics over the coming years, having an impact on almost all aspects of data production. The tasks of future statisticians will not necessarily be to produce new data, but rather to identify and make use of existing data to adequately describe social and economic phenomena. Until big data can be used correctly in official statistics, a lot of questions need to be answered and problems solved: the quality of data, data protection, privacy, and sustainable availability are some of the more pressing issues to be addressed. The essential skills of official statisticians will undoubtedly change, and this implies a number of challenges to be faced by statistical education systems, in universities, and inside the statistical offices. The national statistical offices of the European Union have agreed on a concrete strategy for exploring the possibilities of big data for official statistics, by means of the Big Data Roadmap and Action Plan 1.0. This is an important first step and will have a significant influence on implementing the concept of big data inside the statistical offices of Germany.
Some experiences and opportunities for big data in translational research.
Chute, Christopher G; Ullman-Cullere, Mollie; Wood, Grant M; Lin, Simon M; He, Min; Pathak, Jyotishman
2013-10-01
Health care has become increasingly information intensive. The advent of genomic data, integrated into patient care, significantly accelerates the complexity and amount of clinical data. Translational research in the present day increasingly embraces new biomedical discovery in this data-intensive world, thus entering the domain of "big data." The Electronic Medical Records and Genomics consortium has taught us many lessons, while simultaneously advances in commodity computing methods enable the academic community to affordably manage and process big data. Although great promise can emerge from the adoption of big data methods and philosophy, the heterogeneity and complexity of clinical data, in particular, pose additional challenges for big data inferencing and clinical application. However, the ultimate comparability and consistency of heterogeneous clinical information sources can be enhanced by existing and emerging data standards, which promise to bring order to clinical data chaos. Meaningful Use data standards in particular have already simplified the task of identifying clinical phenotyping patterns in electronic health records.
A Unified Approach to Abductive Inference
2014-09-30
learning in “Big Data” domains. Combining Markov Logic and Support Vector Machines for Event Extraction: Event extraction is the task of … and achieves state-of-the-art performance. This makes it an ideal candidate for learning in “Big Data” domains.
Healthcare and the Roles of the Medical Profession in the Big Data Era*1
YAMAMOTO, Yuji
2016-01-01
The accumulation of large amounts of healthcare information is in progress, and society is about to enter the Health Big Data era by linking such data. Medical professionals’ daily tasks in clinical practice have become more complicated due to information overload, accelerated technological development, and the expansion of conceptual frameworks for medical care. Further, their responsibilities are more challenging and their workload is consistently increasing. As medical professionals enter the Health Big Data era, we need to reevaluate the fundamental significance and role of medicine and investigate ways to utilize this available information and technology. For example, a data analysis on diabetes patients has already shed light on the status of accessibility to physicians and the treatment response rate. In time, large amounts of health data will help find solutions, including new, effective treatments that could not be discovered by conventional means. Despite the vastness of accumulated data and analyses, their interpretation is necessarily conducted by attending physicians who communicate these findings to patients face to face; this task cannot be replaced by technology. As medical professionals, we must take the initiative to evaluate the framework of medicine in the Health Big Data era, study the ideal approach for clinical practitioners within this framework, and spread awareness to the public about our framework and approach while implementing them. PMID:28299246
Curriculum: Big Decisions--Making Healthy, Informed Choices about Sex
ERIC Educational Resources Information Center
Davis, Melanie
2009-01-01
Big Decisions is a 10-lesson abstinence-plus curriculum for ages 12-18 that emphasizes sex as a big decision, abstinence as the healthiest choice, and the mandate that sexually active teens use condoms and be tested for sexually transmitted diseases. This program can be implemented with limited resources and facilitator training when abstinence…
Chang, Chiung-Sui
2007-01-01
The study developed a Big 6 Information Problem-Solving Scale (B6IPS), including the subscales of task definition and information-seeking strategies, information access and synthesis, and evaluation. More than 1,500 fifth and sixth graders in Taiwan responded. The study revealed that the scale showed adequate reliability in assessing the adolescents' perceptions about the Big 6 information problem-solving approach. In addition, the adolescents had quite different responses toward different subscales of the approach. Moreover, females tended to have higher quality information-searching skills than their male counterparts. The adolescents of different grades also displayed varying views toward the approach. Other results are also provided.
A novel disease of big-leaf mahogany caused by two Fusarium species in Mexico
USDA-ARS?s Scientific Manuscript database
Swietenia macrophylla is valued for its high-quality wood and use in urban landscapes in Mexico. During surveys of mango-producing areas in the central western region of Mexico, symptoms of malformation, the most important disease of mango in the area, were observed on big-leaf mahogany trees. The o...
Gur, Ruben C; Irani, Farzin; Seligman, Sarah; Calkins, Monica E; Richard, Jan; Gur, Raquel E
2011-08-01
Genomics has been revolutionizing medicine over the past decade by offering mechanistic insights into disease processes and engendering the age of "individualized medicine." Because of the sheer number of measures generated by gene sequencing methods, genomics requires "Big Science" where large datasets on genes are analyzed in reference to electronic medical record data. This revolution has largely bypassed the behavioral neurosciences, mainly because of the paucity of behavioral data in medical records and the labor-intensity of available neuropsychological assessment methods. We describe the development and implementation of an efficient neuroscience-based computerized battery, coupled with a computerized clinical assessment procedure. This assessment package has been applied to a genomic study of 10,000 children aged 8-21, of whom 1000 also undergo neuroimaging. Results from the first 3000 participants indicate sensitivity to neurodevelopmental trajectories. Sex differences were evident, with females outperforming males in memory and social cognition domains, while for spatial processing males were more accurate and faster, and they were faster on simple motor tasks. The study illustrates what will hopefully become a major component of the work of clinical and research neuropsychologists as invaluable participants in the dawning age of Big Science neuropsychological genomics.
Hemingway, Harry; Asselbergs, Folkert W; Danesh, John; Dobson, Richard; Maniadakis, Nikolaos; Maggioni, Aldo; van Thiel, Ghislaine J M; Cronin, Maureen; Brobert, Gunnar; Vardas, Panos; Anker, Stefan D; Grobbee, Diederick E; Denaxas, Spiros
2018-04-21
Cohorts of millions of people's health records, whole genome sequencing, imaging, sensor, societal and publicly available data present a rapidly expanding digital trace of health. We aimed to critically review, for the first time, the challenges and potential of big data across early and late stages of translational cardiovascular disease research. We sought exemplars based on literature reviews and expertise across the BigData@Heart Consortium. We identified formidable challenges including: data quality, knowing what data exist, the legal and ethical framework for their use, data sharing, building and maintaining public trust, developing standards for defining disease, developing tools for scalable, replicable science and equipping the clinical and scientific work force with new inter-disciplinary skills. Opportunities claimed for big health record data include: richer profiles of health and disease from birth to death and from the molecular to the societal scale; accelerated understanding of disease causation and progression, discovery of new mechanisms and treatment-relevant disease sub-phenotypes, understanding health and diseases in whole populations and whole health systems and returning actionable feedback loops to improve (and potentially disrupt) existing models of research and care, with greater efficiency. In early translational research we identified exemplars including: discovery of fundamental biological processes e.g. linking exome sequences to lifelong electronic health records (EHR) (e.g. human knockout experiments); drug development: genomic approaches to drug target validation; precision medicine: e.g. DNA integrated into hospital EHR for pre-emptive pharmacogenomics. 
In late translational research we identified exemplars including: learning health systems with outcome trials integrated into clinical care; citizen driven health with 24/7 multi-parameter patient monitoring to improve outcomes and population-based linkages of multiple EHR sources for higher resolution clinical epidemiology and public health. High volumes of inherently diverse ('big') EHR data are beginning to disrupt the nature of cardiovascular research and care. Such big data have the potential to improve our understanding of disease causation and classification relevant for early translation and to contribute actionable analytics to improve health and healthcare.
On the convergence of nanotechnology and Big Data analysis for computer-aided diagnosis.
Rodrigues, Jose F; Paulovich, Fernando V; de Oliveira, Maria Cf; de Oliveira, Osvaldo N
2016-04-01
An overview is provided of the challenges involved in building computer-aided diagnosis systems capable of precise medical diagnostics based on integration and interpretation of data from different sources and formats. The availability of massive amounts of data and computational methods associated with the Big Data paradigm has brought hope that such systems may soon be available in routine clinical practices, which is not the case today. We focus on visual and machine learning analysis of medical data acquired with varied nanotech-based techniques and on methods for Big Data infrastructure. Because diagnosis is essentially a classification task, we address the machine learning techniques with supervised and unsupervised classification, making a critical assessment of the progress already made in the medical field and the prospects for the near future. We also advocate that successful computer-aided diagnosis requires a merge of methods and concepts from nanotechnology and Big Data analysis.
Who Chokes Under Pressure? The Big Five Personality Traits and Decision-Making under Pressure.
Byrne, Kaileigh A; Silasi-Mansat, Crina D; Worthy, Darrell A
2015-02-01
The purpose of the present study was to examine whether the Big Five personality factors could predict who thrives or chokes under pressure during decision-making. The effects of the Big Five personality factors on decision-making ability and performance under social (Experiment 1) and combined social and time pressure (Experiment 2) were examined using the Big Five Personality Inventory and a dynamic decision-making task that required participants to learn an optimal strategy. In Experiment 1, a hierarchical multiple regression analysis showed an interaction between neuroticism and pressure condition. Neuroticism negatively predicted performance under social pressure, but did not affect decision-making under low pressure. Additionally, the negative effect of neuroticism under pressure was replicated using a combined social and time pressure manipulation in Experiment 2. These results support distraction theory whereby pressure taxes highly neurotic individuals' cognitive resources, leading to sub-optimal performance. Agreeableness also negatively predicted performance in both experiments.
NASA Astrophysics Data System (ADS)
Kastens, K. A.; Malyn-Smith, J.; Ippolito, J.; Krumhansl, R.
2014-12-01
In August of 2014, the Oceans of Data Institute at Education Development Center, Inc. (EDC) is convening an expert panel to begin the process of developing an occupational skills profile for the "big-data-enabled professional." We define such a professional as an "individual who works with large complex data sets on a regular basis, asking and answering questions, analyzing trends, and finding meaningful patterns, in order to increase the efficiency of processes, make decisions and predictions, solve problems, generate hypotheses, and/or develop new understandings." The expert panel includes several geophysicists, as well as data professionals from engineering, higher education, analytical journalism, forensics, bioinformatics, and telecommunications. Working with experienced facilitators, the expert panel will create a detailed synopsis of the tasks and responsibilities characteristic of their profession, as well as the skills, knowledge and behaviors that enable them to succeed in the workplace. After the panel finishes their work, the task matrix and associated narrative will be vetted and validated by a larger group of additional professionals, and then disseminated for use by educators and employers. The process we are using is called DACUM (Developing a Curriculum), adapted by EDC and optimized for emergent professions, such as the "big-data-enabled professional." DACUM is a well-established method for analyzing jobs and occupations, commonly used in technical fields to develop curriculum and training programs that reflect authentic work tasks found in scientific and technical workplaces. 
The premises behind the DACUM approach are that: expert workers are better able to describe their own occupation than anyone else; any job can be described in terms of the tasks that successful workers in the occupation perform; all tasks have direct implications for the knowledge, skills, understandings and attitudes that must be taught and learned in preparation for the targeted career. At AGU, we will describe the process and present the finalized occupational profile.
Challenges and Opportunities of Big Data in Health Care: A Systematic Review
Kruse, Clemens Scott; Goswamy, Rishi; Raval, Yesha; Marawi, Sarah
2016-01-01
Background Big data analytics offers promise in many business sectors, and health care is looking at big data to provide answers to many age-related issues, particularly dementia and chronic disease management. Objective The purpose of this review was to summarize the challenges faced by big data analytics and the opportunities that big data opens in health care. Methods A total of 3 searches were performed for publications between January 1, 2010 and January 1, 2016 (PubMed/MEDLINE, CINAHL, and Google Scholar), and an assessment was made on content germane to big data in health care. From the results of the searches in research databases and Google Scholar (N=28), the authors summarized content and identified 9 and 14 themes under the categories Challenges and Opportunities, respectively. We rank-ordered and analyzed the themes based on the frequency of occurrence. Results The top challenges were issues of data structure, security, data standardization, storage and transfers, and managerial skills such as data governance. The top opportunities revealed were quality improvement, population management and health, early detection of disease, data quality, structure, and accessibility, improved decision making, and cost reduction. Conclusions Big data analytics has the potential for positive impact and global implications; however, it must overcome some legitimate obstacles. PMID:27872036
ClimateSpark: An in-memory distributed computing framework for big climate data analytics
NASA Astrophysics Data System (ADS)
Hu, Fei; Yang, Chaowei; Schnase, John L.; Duffy, Daniel Q.; Xu, Mengchao; Bowen, Michael K.; Lee, Tsengdar; Song, Weiwei
2018-06-01
The unprecedented growth of climate data creates new opportunities for climate studies, yet big climate data pose a grand challenge to climatologists seeking to efficiently manage and analyze them. The complexity of climate data content and analytical algorithms increases the difficulty of implementing algorithms on high performance computing systems. This paper proposes an in-memory, distributed computing framework, ClimateSpark, to facilitate complex big data analytics and time-consuming computational tasks. A chunked data structure improves parallel I/O efficiency, while a spatiotemporal index is built for the chunks to avoid unnecessary data reading and preprocessing. An integrated, multi-dimensional, array-based data model (ClimateRDD) and ETL operations are developed to address big climate data variety by integrating the processing components of the climate data lifecycle. ClimateSpark utilizes Spark SQL and Apache Zeppelin to develop a web portal to facilitate the interaction among climatologists, climate data, analytic operations and computing resources (e.g., using SQL queries and Scala/Python notebooks). Experimental results show that ClimateSpark conducts different spatiotemporal data queries/analytics with high efficiency and data locality. ClimateSpark is easily adaptable to other big multi-dimensional, array-based datasets in various geoscience domains.
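The chunk-level spatiotemporal filtering described in this abstract can be illustrated with a minimal sketch. This is not ClimateSpark's actual implementation; the function names and the dictionary-based index are hypothetical, and only the idea is taken from the abstract: each chunk carries a (time, latitude, longitude) bounding box, and a query reads only the chunks whose boxes intersect the query region.

```python
# Minimal sketch of a chunk-level spatiotemporal index: a query touches only
# the chunks whose bounding boxes overlap the query region, so non-matching
# chunks are never read or preprocessed.

def overlaps(a, b):
    """True if closed intervals a=(lo, hi) and b=(lo, hi) intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

def chunks_for_query(index, t_range, lat_range, lon_range):
    """Return ids of chunks whose (time, lat, lon) boxes intersect the query."""
    return [cid for cid, (t, lat, lon) in index.items()
            if overlaps(t, t_range)
            and overlaps(lat, lat_range)
            and overlaps(lon, lon_range)]
```

In a distributed setting the same pruning decides which file offsets each worker reads, which is what gives the data locality the experiments report.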
Wilson Disease: Frequently Asked Questions
Kuziemsky, C. E.; Monkman, H.; Petersen, C.; Weber, J.; Borycki, E. M.; Adams, S.; Collins, S.
2014-01-01
Summary Objectives While big data offers enormous potential for improving healthcare delivery, many of the existing claims concerning big data in healthcare are based on anecdotal reports and theoretical vision papers, rather than scientific evidence based on empirical research. Historically, the implementation of health information technology has resulted in unintended consequences at the individual, organizational and social levels, but these unintended consequences of collecting data have remained unaddressed in the literature on big data. The objective of this paper is to provide insights into big data from the perspective of people, social and organizational considerations. Method We draw upon the concept of persona to define the digital persona as the intersection of data, tasks and context for different user groups. We then describe how the digital persona can serve as a framework to understanding sociotechnical considerations of big data implementation. We then discuss the digital persona in the context of micro, meso and macro user groups across the 3 Vs of big data. Results We provide insights into the potential benefits and challenges of applying big data approaches to healthcare as well as how to position these approaches to achieve health system objectives such as patient safety or patient-engaged care delivery. We also provide a framework for defining the digital persona at a micro, meso and macro level to help understand the user contexts of big data solutions. Conclusion While big data provides great potential for improving healthcare delivery, it is essential that we consider the individual, social and organizational contexts of data use when implementing big data solutions. PMID:25123726
Technology in Parkinson's disease: Challenges and opportunities.
Espay, Alberto J; Bonato, Paolo; Nahab, Fatta B; Maetzler, Walter; Dean, John M; Klucken, Jochen; Eskofier, Bjoern M; Merola, Aristide; Horak, Fay; Lang, Anthony E; Reilmann, Ralf; Giuffrida, Joe; Nieuwboer, Alice; Horne, Malcolm; Little, Max A; Litvan, Irene; Simuni, Tanya; Dorsey, E Ray; Burack, Michelle A; Kubota, Ken; Kamondi, Anita; Godinho, Catarina; Daneault, Jean-Francois; Mitsi, Georgia; Krinke, Lothar; Hausdorff, Jeffery M; Bloem, Bastiaan R; Papapetropoulos, Spyros
2016-09-01
The miniaturization, sophistication, proliferation, and accessibility of technologies are enabling the capture of more and previously inaccessible phenomena in Parkinson's disease (PD). However, more information has not translated into a greater understanding of disease complexity to satisfy diagnostic and therapeutic needs. Challenges include noncompatible technology platforms, the need for wide-scale and long-term deployment of sensor technology (among vulnerable elderly patients in particular), and the gap between the "big data" acquired with sensitive measurement technologies and their limited clinical application. Major opportunities could be realized if new technologies are developed as part of open-source and/or open-hardware platforms that enable multichannel data capture sensitive to the broad range of motor and nonmotor problems that characterize PD and are adaptable into self-adjusting, individualized treatment delivery systems. The International Parkinson and Movement Disorders Society Task Force on Technology is entrusted to convene engineers, clinicians, researchers, and patients to promote the development of integrated measurement and closed-loop therapeutic systems with high patient adherence that also serve to (1) encourage the adoption of clinico-pathophysiologic phenotyping and early detection of critical disease milestones, (2) enhance the tailoring of symptomatic therapy, (3) improve subgroup targeting of patients for future testing of disease-modifying treatments, and (4) identify objective biomarkers to improve the longitudinal tracking of impairments in clinical care and research. This article summarizes the work carried out by the task force toward identifying challenges and opportunities in the development of technologies with potential for improving the clinical management and the quality of life of individuals with PD. © 2016 International Parkinson and Movement Disorder Society. 
What Can Big Data Offer the Pharmacovigilance of Orphan Drugs?
Price, John
2016-12-01
The pharmacovigilance of drugs for orphan diseases presents problems related to the small patient population. Obtaining high-quality information on individual reports of suspected adverse reactions is of particular importance for the pharmacovigilance of orphan drugs. The possibility of mining "big data" to detect suspected adverse reactions is being explored in pharmacovigilance generally but may have limited application to orphan drugs. Sources of big data such as social media may be infrequently used as communication channels by patients with rare disease or their caregivers or by health care providers; any adverse reactions identified are likely to reflect what is already known about the safety of the drug from the network of support that grows up around these patients. Opportunities related to potential future big data sources are discussed. Copyright © 2016 Elsevier HS Journals, Inc. All rights reserved.
Differential relations between two dimensions of self-esteem and the Big Five?
Ramsdal, Gro Hilde
2008-08-01
Recent research has suggested the possibility that self-esteem (SE) may be viewed as a two-dimensional concept consisting of: (a) self-liking, the subjective evaluation of oneself as a social being; and (b) self-competence, the internal conceptions of success and failure in performing tasks (Tafarodi & Swann, 1995). Establishing differential relations between these two dimensions of SE and an important psychological concept like the Big Five, would support the notion of two-dimensional SE. To test this hypothesis the self-liking/self-competence scale (SLCS) and the Big Five Inventory (BFI) were administered to 128 Norwegian college students. The results show a differential relationship between the two dimensions of SE and the personality dimensions of the BFI.
Ebersbach, Georg; Ebersbach, Almut; Gandor, Florin; Wegner, Brigitte; Wissel, Jörg; Kupsch, Andreas
2014-05-01
To determine whether physical activity may affect cognitive performance in patients with Parkinson's disease by measuring reaction times in patients participating in the Berlin BIG study. Randomized controlled trial, rater-blinded. Ambulatory care. Patients with mild to moderate Parkinson's disease (N=60) were randomly allocated to 3 treatment arms. Outcome was measured at the termination of training and at follow-up 16 weeks after baseline in 58 patients (completers). Patients received 16 hours of individual Lee Silverman Voice Treatment-BIG training (BIG; duration of treatment, 4wk), 16 hours of group training with Nordic Walking (WALK; duration of treatment, 8wk), or nonsupervised domestic exercise (HOME; duration of instruction, 1hr). Cued reaction time (cRT) and noncued reaction time (nRT). Differences between treatment groups in improvement in reaction times from baseline to intermediate and baseline to follow-up assessments were observed for cRT but not for nRT. Pairwise t test comparisons revealed differences in change in cRT at both measurements between BIG and HOME groups (intermediate: -52ms; 95% confidence interval [CI], -84/-20; P=.002; follow-up: -55ms; CI, -105/-6; P=.030) and between WALK and HOME groups (intermediate: -61ms; CI, -120/-2; P=.042; follow-up: -78ms; CI, -136/-20; P=.010). There was no difference between BIG and WALK groups (intermediate: 9ms; CI, -49/67; P=.742; follow-up: 23ms; CI, -27/72; P=.361). Supervised physical exercise with Lee Silverman Voice Treatment-BIG or Nordic Walking is associated with improvement in cognitive aspects of movement preparation. Copyright © 2014 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Baay, Pieter E; van Aken, Marcel A G; de Ridder, Denise T D; van der Lippe, Tanja
2014-07-01
The school-to-work transition constitutes a central developmental task for adolescents. The role of Big Five personality traits in this has received some scientific attention, but prior research has been inconsistent and paid little attention to mechanisms through which personality traits influence job-search outcomes. The current study proposed that the joint effects of Big Five personality traits and social capital (i.e., available resources through social relations) would shed more light on adolescents' job-search outcomes. Analyses on 685 Dutch vocational training graduates showed that extraversion and emotional stability were related to better job-search outcomes after graduation. Some relations between Big Five personality traits and job-search outcomes were explained by social capital, but no relations were dependent on social capital. Social capital had a direct relation with the number of job offers. Contrary to popular belief, this study shows that Big Five personality traits and social capital relate to job-search outcomes largely independently. Copyright © 2014 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
Wildfire and forest disease interaction lead to greater loss of soil nutrients and carbon
Richard C. Cobb; Ross K. Meentemeyer; David M. Rizzo
2016-01-01
Fire and forest disease have significant ecological impacts, but the interactions of these two disturbances are rarely studied. We measured soil C, N, Ca, P, and pH in forests of the Big Sur region of California impacted by the exotic pathogen Phytophthora ramorum, cause of sudden oak death, and the 2008 Basin wildfire complex. In Big Sur,...
Matsuzawa, Daisuke; Shirayama, Yukihiko; Niitsu, Tomihisa; Hashimoto, Kenji; Iyo, Masaomi
2015-03-03
Defective decision-making is a symptom of impaired cognitive function observed in patients with schizophrenia. Impairment on the Iowa Gambling Task (IGT) has been reported in patients with schizophrenia, but these results are inconsistent among studies. We differentiated subjects based on whether they expressed certainty at having deciphered an advantageous strategy in the course of the task. We investigated this impairment using the IGT in patients with schizophrenia and performed an analysis different from the standard advantageous decks minus disadvantageous decks across all 100 card choices, [C+D]-[A+B](1-100). We examined the effects on behavior after receiving a big penalty. Results depended on whether participants employed, with or without certainty, the best strategy for positive gain. Patients with schizophrenia without certainty failed to show a card choice shift from disadvantageous to advantageous decks. Differences in card choices on the IGT between patients with schizophrenia and normal controls were shown more clearly by the improvement from block 1 to blocks 3-5, [C+D]-[A+B]([41-100]-[1-20]) (P<0.001), than by the composite value of blocks 3-5, [C+D]-[A+B](41-100) (P=0.011). The deficits of emotion-based learning in patients with schizophrenia without certainty were related to scores on the SANS and S5 attention. In addition, S1 affective flattening and S4 anhedonia-asociality were also related to these deficits. Meanwhile, normal controls showed a smooth shift from disadvantageous to advantageous decks after big penalties, with or without certainty about strategy. However, patients with schizophrenia failed to show switching from disadvantageous to advantageous decks, even after big penalties, under the same conditions. Our results highlight certainty of strategy and behavior after a big penalty as two points of difference between patients with schizophrenia and normal controls in the accumulation of net scores. Copyright © 2014 Elsevier Inc. All rights reserved.
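The net-score contrasts in this abstract can be made concrete with a small sketch. This is an illustration only, assuming each trial's deck choice is recorded as a letter A-D; the function names are hypothetical, not from the study.

```python
# IGT scoring sketch: decks C and D are advantageous, A and B disadvantageous.

def net_score(choices):
    """[C+D]-[A+B]: advantageous minus disadvantageous picks over a trial span."""
    return sum(c in "CD" for c in choices) - sum(c in "AB" for c in choices)

def improvement_index(choices):
    """[C+D]-[A+B]([41-100]-[1-20]): late-block net score minus early-block
    net score (trial numbers 1-indexed), capturing the shift toward the
    advantageous decks over the course of the task."""
    return net_score(choices[40:100]) - net_score(choices[0:20])
```

The abstract reports that this early-to-late improvement index separated patients from controls more clearly than the blocks 3-5 composite alone.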
1994-07-21
for the task's type. The address of an ADA_KRN_DEFS.TASK_ATTR_T record is the argument of the pragma and is passed to the underlying microkernel at...task creation. The task attributes are microkernel dependent. See ada_krn_defs.a in standard for the type definition of TASK_ATTR_T and the different...ENDIAN, BIG_ENDIAN; BYTE_ORDER : constant BYTE_ORDER_T := LITTLE_ENDIAN; type LONG_ADDRESS is private; NO_LONG_ADDR : constant LONG_ADDRESS; function
Gergei, Ingrid; Krämer, Bernhard K; Scharnagl, Hubert; Stojakovic, Tatjana; März, Winfried; Mondorf, Ulrich
The endothelin system (Big-ET-1) is a key regulator in cardiovascular (CV) disease and congestive heart failure (CHF). We have examined the incremental value of Big-ET-1 in predicting total and CV mortality next to the well-established CV risk marker N-terminal pro-B-type natriuretic peptide (NT-proBNP). Big-ET-1 and NT-proBNP were determined in 2829 participants referred for coronary angiography (follow-up 9.9 years). Big-ET-1 is an independent predictor of total and CV mortality and of death due to CHF. The conjunct use of Big-ET-1 and NT-proBNP improves the risk stratification of patients with intermediate to high risk of CV death and CHF. Big-ET-1 improves risk stratification in patients referred for coronary angiography.
Big Data and Dementia: Charting the Route Ahead for Research, Ethics, and Policy
Ienca, Marcello; Vayena, Effy; Blasimme, Alessandro
2018-01-01
Emerging trends in pervasive computing and medical informatics are creating the possibility for large-scale collection, sharing, aggregation and analysis of unprecedented volumes of data, a phenomenon commonly known as big data. In this contribution, we review the existing scientific literature on big data approaches to dementia, as well as commercially available mobile-based applications in this domain. Our analysis suggests that big data approaches to dementia research and care hold promise for improving current preventive and predictive models, casting light on the etiology of the disease, enabling earlier diagnosis, optimizing resource allocation, and delivering more tailored treatments to patients with specific disease trajectories. Such promissory outlook, however, has not materialized yet, and raises a number of technical, scientific, ethical, and regulatory challenges. This paper provides an assessment of these challenges and charts the route ahead for research, ethics, and policy. PMID:29468161
1974-08-01
have been collected in and around the watershed area. Eight of these were species of the genus Aedes and one was Culiseta inornata, a known carrier...needing or See Table 3 663,000 possibly needing control; Other Sources: land runoff, snow melt and storm water; water courses, ditches and 57,000 carrying...than those for the lower portion. 3. Ground Water a. Supply: Part of the water from rain, melting snow, wetlands, lakes, and streams in the Big Stone
Enabling breakthroughs in Parkinson’s disease with wearable technologies and big data analytics
Cohen, Shahar; Bataille, Lauren R; Martig, Adria K
2016-01-01
Parkinson’s disease (PD) is a progressive, degenerative disorder of the central nervous system that is diagnosed and measured clinically by the Unified Parkinson’s Disease Rating Scale (UPDRS). Tools for continuous and objective monitoring of PD motor symptoms are needed to complement clinical assessments of symptom severity to further inform PD therapeutic development across several arenas, from developing more robust clinical trial outcome measures to establishing biomarkers of disease progression. The Michael J. Fox Foundation for Parkinson’s Disease Research and Intel Corporation have joined forces to develop a mobile application and an Internet of Things (IoT) platform to support large-scale studies of objective, continuously sampled sensory data from people with PD. This platform provides both population and per-patient analyses, measuring gait, activity level, nighttime activity, tremor, as well as other structured assessments and tasks. All data collected will be available to researchers on an open-source platform. Development of the IoT platform raised a number of engineering considerations, including wearable sensor choice, data management and curation, and algorithm validation. This project has successfully demonstrated proof of concept that IoT platforms, wearable technologies and the data they generate offer exciting possibilities for more robust, reliable, and low-cost research methodologies and patient care strategies. PMID:28293596
Infant botulism and indications for administration of botulism immune globulin.
Pifko, Elysha; Price, Amanda; Sterner, Sarah
2014-02-01
Infant botulism is caused by the ingestion of Clostridium botulinum spores and leads to a life-threatening descending motor weakness and flaccid paralysis in infant children. This disease presents with symptoms such as constipation, weakness, and hypotonia and can lead to respiratory failure. Botulism immune globulin (BIG) was created to treat this deadly disease and functions by neutralizing all systemically circulating botulism toxins. It is indicated in children with clinically diagnosed infant botulism, before diagnostic confirmation, and has been shown to lead to a significant reduction in intensive care unit and hospital stay for these patients. This review article discusses the epidemiology, clinical presentation, history of BIG, and indications for administration of BIG.
Big data and ophthalmic research.
Clark, Antony; Ng, Jonathon Q; Morlet, Nigel; Semmens, James B
2016-01-01
Large population-based health administrative databases, clinical registries, and data linkage systems are a rapidly expanding resource for health research. Ophthalmic research has benefited from the use of these databases in expanding the breadth of knowledge in areas such as disease surveillance, disease etiology, health services utilization, and health outcomes. Furthermore, the quantity of data available for research has increased exponentially in recent times, particularly as e-health initiatives come online in health systems across the globe. We review some big data concepts, the databases and data linkage systems used in eye research-including their advantages and limitations, the types of studies previously undertaken, and the future direction for big data in eye research. Copyright © 2016 Elsevier Inc. All rights reserved.
Big geo data surface approximation using radial basis functions: A comparative study
NASA Astrophysics Data System (ADS)
Majdisova, Zuzana; Skala, Vaclav
2017-12-01
Approximation of scattered data is often a task in many engineering problems. The Radial Basis Function (RBF) approximation is appropriate for big scattered datasets in n-dimensional space. It is a non-separable approximation, as it is based on the distance between two points. This method leads to the solution of an overdetermined linear system of equations. In this paper the RBF approximation methods are briefly described, a new approach to the RBF approximation of big datasets is presented, and a comparison for different Compactly Supported RBFs (CS-RBFs) is made with respect to the accuracy of the computation. The proposed approach uses symmetry of a matrix, partitioning the matrix into blocks and data structures for storage of the sparse matrix. The experiments are performed for synthetic and real datasets.
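The distance-based, overdetermined least-squares formulation described in this abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: it uses a globally supported Gaussian basis rather than the compactly supported RBFs (and sparse block storage) the paper compares, and the function names are hypothetical.

```python
import numpy as np

def rbf_fit(points, values, centers, shape=1.0):
    """Least-squares RBF approximation: solve the overdetermined system
    A @ coef ~= values, where A[i, j] = phi(||points[i] - centers[j]||).
    With more data points than centers, A is tall and the fit smooths noise."""
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    A = np.exp(-(shape * d) ** 2)  # Gaussian basis: non-separable, distance-based
    coef, *_ = np.linalg.lstsq(A, values, rcond=None)
    return coef

def rbf_eval(query, centers, coef, shape=1.0):
    """Evaluate the fitted approximant at new query points."""
    d = np.linalg.norm(query[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(shape * d) ** 2) @ coef
```

Replacing the Gaussian with a compactly supported RBF zeroes out most entries of A, which is what makes the sparse-matrix block partitioning described in the paper pay off for big datasets.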
Creative thinking and Big Five factors of personality measured in Italian schoolchildren.
De Caroli, Maria Elvira; Sagone, Elisabetta
2009-12-01
This study examined the relations of creative thinking with the Big Five factors of personality, and differences in creativity by sex and age. A sample of Italian schoolchildren (56 boys, 56 girls), between 8 and 10 years of age, completed the Test of Creative Thinking and the Big Five Questionnaire for Children. Analysis of results indicated that older children obtained significantly higher scores than the younger ones on Elaboration and Production of titles. Girls obtained significantly higher scores than boys on Originality and Elaboration. The results suggested modest negative relations of Flexibility with Conscientiousness and of Production of titles with Emotional Instability. These findings support the need to explore the connection between creativity and personality across developmental age by means of multiple tasks for evaluating creative thinking.
Integrative methods for analyzing big data in precision medicine.
Gligorijević, Vladimir; Malod-Dognin, Noël; Pržulj, Nataša
2016-03-01
We provide an overview of recent developments in big data analyses in the context of precision medicine and health informatics. With the advance in technologies capturing molecular and medical data, we have entered the era of "Big Data" in biology and medicine. These data offer many opportunities to advance precision medicine. We outline key challenges in precision medicine and present recent advances in data integration-based methods to uncover personalized information from big data produced by various omics studies. We survey recent integrative methods for disease subtyping, biomarker discovery, and drug repurposing, and list the tools that are available to domain scientists. Given the ever-growing nature of these big data, we highlight key issues that big data integration methods will face. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Zhou, Bing-Yang; Guo, Yuan-Lin; Wu, Na-Qiong; Zhu, Cheng-Gang; Gao, Ying; Qing, Ping; Li, Xiao-Lin; Wang, Yao; Dong, Qian; Liu, Geng; Xu, Rui Xia; Cui, Chuan-Jue; Sun, Jing; Li, Jian-Jun
2017-03-01
Big endothelin-1 (ET-1) has been proposed as a novel prognostic indicator of acute coronary syndrome, but its role in predicting cardiovascular outcomes in patients with stable coronary artery disease (CAD) is unclear. A total of 3154 consecutive patients with stable CAD were enrolled and followed up for 24 months. The outcomes included all-cause death, non-fatal myocardial infarction, stroke and unplanned revascularization (percutaneous coronary intervention and coronary artery bypass grafting). Baseline big ET-1 was measured using a sandwich enzyme immunoassay method. Cox proportional hazard regression analysis and Kaplan-Meier analysis were used to evaluate the prognostic value of big ET-1 on cardiovascular outcomes. One hundred and eighty-nine (5.99%) events occurred during follow-up. Patients were divided into two groups: an events group (n=189) and a non-events group (n=2965). The results indicated that the events group had higher levels of big ET-1 compared to the non-events group. Multivariable Cox proportional hazard regression analysis showed that big ET-1 was positively and statistically correlated with clinical outcomes (hazard ratio: 1.656, 95% confidence interval: 1.099-2.496, p=0.016). Additionally, the Kaplan-Meier analysis revealed that patients with higher big ET-1 presented lower event-free survival (p=0.016). The present study is the first to suggest that big ET-1 is an independent risk marker of cardiovascular outcomes in patients with stable CAD; more studies are needed to confirm our findings. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Shim, Hongseok; Kim, Ji Hyun; Kim, Chan Yeong; Hwang, Sohyun; Kim, Hyojin; Yang, Sunmo; Lee, Ji Eun; Lee, Insuk
2016-11-16
Whole exome sequencing (WES) accelerates disease gene discovery using rare genetic variants, but further statistical and functional evidence is required to avoid false discovery. To complement variant-driven disease gene discovery, here we present function-driven disease gene discovery in zebrafish (Danio rerio), a promising human disease model owing to its high anatomical and genomic similarity to humans. To facilitate zebrafish-based function-driven disease gene discovery, we developed a genome-scale co-functional network of zebrafish genes, DanioNet (www.inetbio.org/danionet), which was constructed by Bayesian integration of genomics big data. Rigorous statistical assessment confirmed the high prediction capacity of DanioNet for a wide variety of human diseases. To demonstrate the feasibility of function-driven disease gene discovery using DanioNet, we predicted genes for ciliopathies and performed experimental validation for eight candidate genes. We also validated the existence of heterozygous rare variants in the candidate genes of individuals with ciliopathies but not in controls derived from the UK10K consortium, suggesting that these variants are potentially involved in enhancing the risk of ciliopathies. These results show that integrated genomics big data for a model animal of human disease can expand our opportunities for harnessing WES data in disease gene discovery. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Caie, Peter D; Harrison, David J
2016-01-01
The field of pathology is rapidly transforming from a semiquantitative and empirical science toward a big data discipline. Large data sets from across multiple omics fields may now be extracted from a patient's tissue sample. Tissue is, however, complex, heterogeneous, and prone to artifact. A reductionist view of tissue and disease progression, which does not take this complexity into account, may lead to single biomarkers failing in clinical trials. The integration of standardized multi-omics big data and the retention of valuable information on spatial heterogeneity are imperative to model complex disease mechanisms. Mathematical modeling through systems pathology approaches is the ideal medium to distill the significant information from these large, multi-parametric, and hierarchical data sets. Systems pathology may also predict the dynamical response of disease progression or response to therapy regimens from a static tissue sample. Next-generation pathology will incorporate big data with systems medicine in order to personalize clinical practice for both prognostic and predictive patient care.
Gur, Ruben C.; Irani, Farzin; Seligman, Sarah; Calkins, Monica E.; Richard, Jan; Gur, Raquel E.
2014-01-01
Genomics has been revolutionizing medicine over the past decade by offering mechanistic insights into disease processes and heralding the age of "individualized medicine." Because of the sheer number of measures generated by gene sequencing methods, genomics requires "Big Science" where large datasets on genes are analyzed in reference to electronic medical record data. This revolution has largely bypassed the behavioral neurosciences, mainly because of the paucity of behavioral data in medical records and the labor intensity of available neuropsychological assessment methods. We describe the development and implementation of an efficient neuroscience-based computerized battery, coupled with a computerized clinical assessment procedure. This assessment package has been applied to a genomic study of 10,000 children aged 8-21, of whom 1000 also undergo neuroimaging. Results from the first 3000 participants indicate sensitivity to neurodevelopmental trajectories. Sex differences were evident, with females outperforming males in memory and social cognition domains, while for spatial processing males were more accurate and faster, and they were faster on simple motor tasks. The study illustrates what will hopefully become a major component of the work of clinical and research neuropsychologists as invaluable participants in the dawning age of Big Science neuropsychological genomics. PMID:21902564
Woody fuels reduction in Wyoming big sagebrush communities
USDA-ARS's Scientific Manuscript database
Wyoming big sagebrush (Artemisia tridentata Nutt. ssp. wyomingensis Beetle & Young) ecosystems historically have been subject to disturbances that reduce or remove shrubs primarily by fire, although insect outbreaks and disease have also been important. Depending on site productivity, fire return in...
Ashenhurst, James R; Bujarski, Spencer; Jentsch, J David; Ray, Lara A
2014-08-01
The relationship between risk-taking behavior and substance dependence has proven to be complex, particularly when examining participants across a range of substance use problem severity. While main indices of risk-taking in the Balloon Analogue Risk Task (BART) are positively associated with problematic alcohol use in adolescent populations (e.g., MacPherson, Magidson, Reynolds, Kahler, & Lejuez, 2010), several studies have observed a negative relationship when examining behavior within adult substance-using populations (Ashenhurst, Jentsch, & Ray, 2011; Campbell, Samartgis, & Crowe, 2013). To examine potential mechanisms that underlie this negative relationship, we implemented multilevel regression models on trial-by-trial BART data gathered from 295 adult problem drinkers. These models accounted for participant behavior on trials following balloon bursts or cash outs as indices of loss and reward reactivity, respectively, and included control variables such as age, IQ, and individual delay discounting rate. Results revealed that pumping on a given trial was significantly predicted by trial number and by whether the previous trial was a big burst or a big cash out (i.e., one with a large magnitude of potential gains), in a manner consistent with a "near-miss" effect. Furthermore, severity of alcohol problems moderated the effect of a previous big burst, but not of a big cash out, on subsequent-trial behavior, such that those with greater severity demonstrated relative insensitivity to this "near-miss" effect. These results extend previous studies suggesting that alcohol abusers are less risky on the BART by specifying a mechanism underlying this pattern, namely, diminished reactivity to large-magnitude losses.
Epidemiology in wonderland: Big Data and precision medicine.
Saracci, Rodolfo
2018-03-01
Big Data and precision medicine, two major contemporary challenges for epidemiology, are critically examined from two different angles. In Part 1, Big Data collected for research purposes (Big research Data) and Big Data used for research although collected for other primary purposes (Big secondary Data) are discussed in the light of the fundamental common requirement of data validity, which prevails over "bigness". Precision medicine is treated by developing the key point that high relative risks are as a rule required to make a variable, or combination of variables, suitable for prediction of disease occurrence, outcome, or response to treatment; the commercial proliferation of allegedly predictive tests of unknown or poor validity is commented upon. Part 2 proposes a "wise epidemiology" approach to: (a) choosing, in a context imprinted by Big Data and precision medicine, epidemiological research projects actually relevant to population health; (b) training epidemiologists; (c) investigating the impact of the influx of Big Data and computerized medicine on clinical practices and the doctor-patient relation; and (d) clarifying whether today "health" may be redefined, as some maintain, in purely technological terms.
NASA Astrophysics Data System (ADS)
Ma, Kevin; Wang, Ximing; Lerner, Alex; Shiroishi, Mark; Amezcua, Lilyana; Liu, Brent
2015-03-01
In the past, we have developed and displayed a multiple sclerosis eFolder system for patient data storage, image viewing, and automatic lesion quantification results stored in DICOM-SR format. The web-based system aims to be integrated in DICOM-compliant clinical and research environments to aid clinicians in patient treatments and disease tracking. This year, we have further developed the eFolder system to handle big data analysis and data mining in today's medical imaging field. The database has been updated to allow data mining and data look-up from DICOM-SR lesion analysis contents. Longitudinal studies are tracked, and any changes in lesion volumes and brain parenchyma volumes are calculated and shown on the web-based user interface as graphical representations. Longitudinal lesion characteristic changes are compared with patients' disease history, including treatments, symptom progressions, and any other changes in the disease profile. The image viewer is updated such that imaging studies can be viewed side-by-side to allow visual comparisons. We aim to use the web-based medical imaging informatics eFolder system to demonstrate big data analysis in medical imaging, and use the analysis results to predict MS disease trends and patterns in Hispanic and Caucasian populations in our pilot study. The discovery of disease patterns among the two ethnicities is a big data analysis result that will help lead to personalized patient care and treatment planning.
True Randomness from Big Data.
Papakonstantinou, Periklis A; Woodruff, David P; Yang, Guang
2016-09-26
Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.
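The paper's extractor for big sources is not reproduced here; as a baseline illustration only, the classical von Neumann extractor (which the randomness extraction literature generalizes) turns independent flips of a coin with unknown fixed bias into unbiased bits. The bit string below is invented for the example:

```python
def von_neumann_extract(bits):
    """Classical von Neumann extractor.

    Given independent bits with an unknown fixed bias, the pairs (0,1) and
    (1,0) are equally likely, so emitting the first bit of each unequal pair
    yields unbiased output; equal pairs are discarded. This is NOT the
    big-source extractor from the paper above, just the textbook baseline.
    """
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)  # (0,1) emits 0, (1,0) emits 1
    return out

biased = [1, 1, 1, 0, 0, 1, 1, 0, 1, 1]   # hypothetical biased source
print(von_neumann_extract(biased))        # [1, 0, 1]
```

The catch, and the motivation for the work above, is that this construction needs independent samples; big sources such as search logs or sensor feeds violate that assumption, which is why more general extractors are required.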
A survey on platforms for big data analytics.
Singh, Dilpreet; Reddy, Chandan K
The primary purpose of this paper is to provide an in-depth analysis of different platforms available for performing big data analytics. This paper surveys different hardware platforms available for big data analytics and assesses the advantages and drawbacks of each of these platforms based on various metrics such as scalability, data I/O rate, fault tolerance, real-time processing, data size supported and iterative task support. In addition to the hardware, a detailed description of the software frameworks used within each of these platforms is also discussed along with their strengths and drawbacks. Some of the critical characteristics described here can potentially aid the readers in making an informed decision about the right choice of platforms depending on their computational needs. Using a star ratings table, a rigorous qualitative comparison between different platforms is also discussed for each of the six characteristics that are critical for the algorithms of big data analytics. In order to provide more insights into the effectiveness of each of the platforms in the context of big data analytics, specific implementation-level details of the widely used k-means clustering algorithm on various platforms are also described in the form of pseudocode.
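The survey's pseudocode is platform-specific; as a neutral point of reference, the single-machine Lloyd iteration of k-means that those platforms parallelize can be sketched as follows (toy one-dimensional data, invented for illustration):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Lloyd's k-means on 1-D data: assign each point to its nearest
    centroid, then move each centroid to the mean of its cluster.
    The per-point assignment step is the part big data platforms
    distribute (e.g., as the map phase in a MapReduce implementation)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                        # "map": nearest-centroid assignment
            j = min(range(k), key=lambda c: abs(p - centroids[c]))
            clusters[j].append(p)
        for j, members in enumerate(clusters):  # "reduce": recompute means
            if members:
                centroids[j] = sum(members) / len(members)
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 10.0, 10.3, 9.7]  # two obvious clusters
print(kmeans(data, 2))                   # centroids near 1.0 and 10.0
```

The iterative nature of this loop is exactly why the survey's "iterative task support" metric matters: platforms that must reread data from disk on every iteration pay a heavy cost here.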
Computing Platforms for Big Biological Data Analytics: Perspectives and Challenges.
Yin, Zekun; Lan, Haidong; Tan, Guangming; Lu, Mian; Vasilakos, Athanasios V; Liu, Weiguo
2017-01-01
The last decade has witnessed an explosion in the amount of available biological sequence data, due to the rapid progress of high-throughput sequencing projects. However, the amount of biological data is becoming so great that traditional data analysis platforms and methods can no longer meet the need to rapidly perform data analysis tasks in the life sciences. As a result, both biologists and computer scientists are facing the challenge of gaining a profound insight into the deepest biological functions from big biological data. This in turn requires massive computational resources. Therefore, high-performance computing (HPC) platforms are highly needed, as are efficient and scalable algorithms that can take advantage of these platforms. In this paper, we survey the state-of-the-art HPC platforms for big biological data analytics. We first list the characteristics of big biological data and popular computing platforms. Then we provide a taxonomy of different biological data analysis applications and a survey of the way they have been mapped onto various computing platforms. After that, we present a case study to compare the efficiency of different computing platforms for handling the classical biological sequence alignment problem. Finally, we discuss the open issues in big biological data analytics.
NASA Astrophysics Data System (ADS)
Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang
2016-09-01
Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.
Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang
2016-01-01
Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests. PMID:27666514
Big Data’s Role in Precision Public Health
Dolley, Shawn
2018-01-01
Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts. PMID:29594091
Big Data's Role in Precision Public Health.
Dolley, Shawn
2018-01-01
Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts.
Huang, T; Li, L M
2018-05-10
The era of medical big data, translational medicine and precision medicine brings new opportunities for the study of the etiology of chronic complex diseases. How to implement evidence-based medicine, translational medicine and precision medicine is the challenge we now face. Systems epidemiology, a new field of epidemiology, combines medical big data with systems biology and examines statistical models of disease risk, and the simulation and prediction of future risk, using data at the molecular, cellular, population, social and ecological levels. Due to the diversity and complexity of big data sources, the development of study designs and analytic methods for systems epidemiology faces new challenges and opportunities. This paper summarizes the theoretical basis, concept, objectives, significance, research designs and analytic methods of systems epidemiology and its application in the field of public health.
Tick-borne Diseases: The Big Two | NIH MedlinePlus the Magazine
... been a tick bite. Lyme disease is the most common tick-borne disease in ... nervous system can develop in patients with late Lyme disease. Lyme disease has different stages. The rash is ...
Dinov, Ivo D; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W; Price, Nathan D; Van Horn, John D; Ames, Joseph; Clark, Kristi; Hood, Leroy; Hampstead, Benjamin M; Dauer, William; Toga, Arthur W
2016-01-01
A unique archive of Big Data on Parkinson's Disease is collected, managed and disseminated by the Parkinson's Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson's disease (PD) risk to trauma, genetics, environment, co-morbidities, or lifestyle. The defining characteristics of Big Data (large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources) all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data.
We evaluated several complementary model-based predictive approaches, which failed to generate accurate and reliable diagnostic predictions. However, the results of several machine-learning based classification methods indicated significant power to predict Parkinson's disease in the PPMI subjects (consistent accuracy, sensitivity, and specificity exceeding 96%, confirmed using statistical n-fold cross-validation). Clinical (e.g., Unified Parkinson's Disease Rating Scale (UPDRS) scores), demographic (e.g., age), genetics (e.g., rs34637584, chr12), and derived neuroimaging biomarker (e.g., cerebellum shape index) data all contributed to the predictive analytics and diagnostic forecasting. Model-free Big Data machine learning-based classification methods (e.g., adaptive boosting, support vector machines) can outperform model-based techniques in terms of predictive precision and reliability (e.g., forecasting patient diagnosis). We observed that statistical rebalancing of cohort sizes yields better discrimination of group differences, specifically for predictive analytics based on heterogeneous and incomplete PPMI data. UPDRS scores play a critical role in predicting diagnosis, which is expected based on the clinical definition of Parkinson's disease. Even without longitudinal UPDRS data, however, the accuracy of model-free machine learning based classification is over 80%. The methods, software and protocols developed here are openly shared and can be employed to study other neurodegenerative disorders (e.g., Alzheimer's, Huntington's, amyotrophic lateral sclerosis), as well as for other predictive Big Data analytics applications.
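The statistical n-fold cross-validation used above to confirm classifier accuracy is a generic resampling scheme. A minimal sketch, assuming a toy one-dimensional "biomarker" and a simple midpoint-threshold classifier (both invented for illustration, not the study's models or data):

```python
def k_fold_accuracy(xs, ys, k, fit, predict):
    """Plain k-fold cross-validation: split the cohort into k folds,
    train on k-1 folds, score on the held-out fold, and average."""
    n = len(xs)
    fold_acc = []
    for f in range(k):
        test_idx = set(range(f, n, k))  # interleaved fold assignment
        train = [(xs[i], ys[i]) for i in range(n) if i not in test_idx]
        model = fit(train)
        hits = sum(predict(model, xs[i]) == ys[i] for i in test_idx)
        fold_acc.append(hits / len(test_idx))
    return sum(fold_acc) / k

def fit(train):
    """Toy classifier: threshold halfway between the class means."""
    pos = [x for x, y in train if y == 1]
    neg = [x for x, y in train if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def predict(threshold, x):
    return 1 if x > threshold else 0

xs = [0.1, 0.2, 0.3, 0.4, 0.9, 1.0, 1.1, 1.2]  # hypothetical biomarker values
ys = [0,   0,   0,   0,   1,   1,   1,   1]    # hypothetical diagnoses
print(k_fold_accuracy(xs, ys, 4, fit, predict))  # 1.0 on separable toy data
```

The key property, as in the study above, is that every accuracy figure comes from data the model never saw during fitting, which guards against over-optimistic estimates on rebalanced cohorts.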
Umar, M; Amer, M A; Al-Saleh, M A; Al-Shahwan, I M; Shakeel, M T; Zakri, A M; Katis, N I
2017-07-01
During 2014 and 2015, 97 lettuce plants showing big-vein-disease-like symptoms and seven weed plants were collected from the Riyadh region. DAS-ELISA revealed that 25% and 9% of the lettuce plants were singly infected with LBVaV and MiLBVV, respectively, whereas 63% had a mixed infection with both viruses. The results were confirmed by multiplex reverse transcription polymerase chain reaction using primers specific for LBVaV and MiLBVV. LBVaV and MiLBVV were also detected in Sonchus oleraceus and Eruca sativa, respectively. The nucleotide sequence identity among the Saudi LBVaV and MiLBVV isolates ranged from 94.3 to 100%, and their similarities to isolates with sequences in the GenBank database ranged from 93.9 to 99.6% and 93.8 to 99.3%, respectively. Olpidium sp. was present in the roots of lettuce plants with big-vein disease, and it was shown to facilitate transmission of both viruses.
NASA Astrophysics Data System (ADS)
Peng, Cuixin
Students growing up and being educated in different social backgrounds may perform differently in their learning process. These differences can be found in self-regulated behavior when fulfilling a certain task. This paper focuses on differences in motivation and self-regulated learning among students from various home and educational environments. Results reveal that differences exist among students from big cities, small and medium-sized towns, and the countryside in motivation and self-regulated learning. The results also indicate that students from big cities gain more knowledge of cognitive strategies in their learning process.
Big Data Application in Biomedical Research and Health Care: A Literature Review.
Luo, Jake; Wu, Min; Gopukumar, Deepika; Zhao, Yiqing
2016-01-01
Big data technologies are increasingly used for biomedical and health-care informatics research. Large amounts of biological and clinical data have been generated and collected at an unprecedented speed and scale. For example, the new generation of sequencing technologies enables the processing of billions of DNA sequence data per day, and the application of electronic health records (EHRs) is documenting large amounts of patient data. The cost of acquiring and analyzing biomedical data is expected to decrease dramatically with the help of technology upgrades, such as the emergence of new sequencing machines, the development of novel hardware and software for parallel computing, and the extensive expansion of EHRs. Big data applications present new opportunities to discover new knowledge and create novel methods to improve the quality of health care. The application of big data in health care is a fast-growing field, with many new discoveries and methodologies published in the last five years. In this paper, we review and discuss big data application in four major biomedical subdisciplines: (1) bioinformatics, (2) clinical informatics, (3) imaging informatics, and (4) public health informatics. Specifically, in bioinformatics, high-throughput experiments facilitate the research of new genome-wide association studies of diseases, and with clinical informatics, the clinical field benefits from the vast amount of collected patient data for making intelligent decisions. Imaging informatics is now more rapidly integrated with cloud platforms to share medical image data and workflows, and public health informatics leverages big data techniques for predicting and monitoring infectious disease outbreaks, such as Ebola. In this paper, we review the recent progress and breakthroughs of big data applications in these health-care domains and summarize the challenges, gaps, and opportunities to improve and advance big data applications in health care.
Big Data Application in Biomedical Research and Health Care: A Literature Review
Luo, Jake; Wu, Min; Gopukumar, Deepika; Zhao, Yiqing
2016-01-01
Big data technologies are increasingly used for biomedical and health-care informatics research. Large amounts of biological and clinical data have been generated and collected at an unprecedented speed and scale. For example, the new generation of sequencing technologies enables the processing of billions of DNA sequence data per day, and the application of electronic health records (EHRs) is documenting large amounts of patient data. The cost of acquiring and analyzing biomedical data is expected to decrease dramatically with the help of technology upgrades, such as the emergence of new sequencing machines, the development of novel hardware and software for parallel computing, and the extensive expansion of EHRs. Big data applications present new opportunities to discover new knowledge and create novel methods to improve the quality of health care. The application of big data in health care is a fast-growing field, with many new discoveries and methodologies published in the last five years. In this paper, we review and discuss big data application in four major biomedical subdisciplines: (1) bioinformatics, (2) clinical informatics, (3) imaging informatics, and (4) public health informatics. Specifically, in bioinformatics, high-throughput experiments facilitate the research of new genome-wide association studies of diseases, and with clinical informatics, the clinical field benefits from the vast amount of collected patient data for making intelligent decisions. Imaging informatics is now more rapidly integrated with cloud platforms to share medical image data and workflows, and public health informatics leverages big data techniques for predicting and monitoring infectious disease outbreaks, such as Ebola. In this paper, we review the recent progress and breakthroughs of big data applications in these health-care domains and summarize the challenges, gaps, and opportunities to improve and advance big data applications in health care. PMID:26843812
Conservation seeding and diverse seed species performance
USDA-ARS's Scientific Manuscript database
The rehabilitation of degraded big sagebrush (Artemisia spp.) communities infested with cheatgrass (Bromus tectorum) and other competitive weeds is a daunting task facing resource managers and land owners. In an effort to improve wildlife and livestock forage on degraded rangelands, the USDA-ARS-Gr...
Scalability and Validation of Big Data Bioinformatics Software.
Yang, Andrian; Troup, Michael; Ho, Joshua W K
2017-01-01
This review examines two important aspects that are central to modern big data bioinformatics analysis: software scalability and validity. We argue that not only are the issues of scalability and validation common to all big data bioinformatics analyses, they can be tackled by conceptually related methodological approaches, namely divide-and-conquer (scalability) and multiple executions (validation). Scalability is defined as the ability for a program to scale based on workload. It has always been an important consideration when developing bioinformatics algorithms and programs. Nonetheless, the surge of volume and variety of biological and biomedical data has posed new challenges. We discuss how modern cloud computing and big data programming frameworks such as MapReduce and Spark are being used to effectively implement divide-and-conquer in a distributed computing environment. Validation of software is another important issue in big data bioinformatics that is often ignored. Software validation is the process of determining whether the program under test fulfils the task for which it was designed. Determining the correctness of the computational output of big data bioinformatics software is especially difficult due to the large input space and complex algorithms involved. We discuss how state-of-the-art software testing techniques that are based on the idea of multiple executions, such as metamorphic testing, can be used to implement an effective bioinformatics quality assurance strategy. We hope this review will raise awareness of these critical issues in bioinformatics.
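Metamorphic testing, recommended in the review above, checks relations between outputs of related executions when no oracle exists for any single output. A minimal sketch, using an invented GC-content function as the program under test (not code from the review):

```python
def gc_content(seq):
    """Fraction of G/C bases in a DNA sequence (the program under test)."""
    return sum(1 for b in seq.upper() if b in "GC") / len(seq)

def reverse_complement(s):
    return s.translate(str.maketrans("ACGTacgt", "TGCAtgca"))[::-1]

def metamorphic_check(seq):
    """Metamorphic testing: even without knowing the 'correct' GC content
    of an arbitrary input, we can check relations that must hold between
    outputs of related executions."""
    out = gc_content(seq)
    # Relation 1: reversing, or reverse-complementing (G<->C, A<->T),
    # must not change GC content.
    assert abs(gc_content(seq[::-1]) - out) < 1e-12
    assert abs(gc_content(reverse_complement(seq)) - out) < 1e-12
    # Relation 2: duplicating the sequence must not change GC content.
    assert abs(gc_content(seq + seq) - out) < 1e-12
    return True

print(metamorphic_check("ACGTGGCCAT"))  # True
```

The same idea scales to real pipelines: rerun an aligner or variant caller on a transformed input whose correct output is known to relate to the original, and flag any violation of the relation.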
Implications of pleiotropy: challenges and opportunities for mining Big Data in biomedicine.
Yang, Can; Li, Cong; Wang, Qian; Chung, Dongjun; Zhao, Hongyu
2015-01-01
Pleiotropy arises when a locus influences multiple traits. Rich GWAS findings of various traits in the past decade reveal many examples of this phenomenon, suggesting the wide existence of pleiotropic effects. What underlies this phenomenon is the biological connection among seemingly unrelated traits/diseases. Characterizing the molecular mechanisms of pleiotropy not only helps to explain the relationship between diseases, but may also contribute to novel insights concerning the pathological mechanism of each specific disease, leading to better disease prevention, diagnosis and treatment. However, most pleiotropic effects remain elusive because their functional roles have not been systematically examined. A systematic investigation requires availability of qualified measurements at multilayered biological processes (e.g., transcription and translation). The rise of Big Data in biomedicine, such as high-quality multi-omics data, biomedical imaging data and electronic medical records of patients, offers us an unprecedented opportunity to investigate pleiotropy. There will be a great need of computationally efficient and statistically rigorous methods for integrative analysis of these Big Data in biomedicine. In this review, we outline many opportunities and challenges in methodology developments for systematic analysis of pleiotropy, and highlight its implications on disease prevention, diagnosis and treatment.
Peterson, Jordan B.
2018-01-01
Although performance feedback is widely employed as a means to improve motivation, the efficacy and reliability of performance feedback is often obscured by individual differences and situational variables. The joint role of these moderating variables remains unknown. Accordingly, we investigate how the motivational impact of feedback is moderated by personality and task-difficulty. Utilizing three samples (total N = 916), we explore how Big Five personality traits moderate the motivational impact of false positive and negative feedback on playful, neutral, and frustrating puzzle tasks, respectively. Conscientious and Neurotic individuals together appear particularly sensitive to task difficulty, becoming significantly more motivated by negative feedback on playful tasks and demotivated by negative feedback on frustrating tasks. Results are discussed in terms of Goal-Setting and Self Determination Theory. Implications for industry and education are considered. PMID:29787593
[Utilization of Big Data in Medicine and Future Outlook].
Kinosada, Yasutomi; Uematsu, Machiko; Fujiwara, Takuya
2016-03-01
"Big data" is a new buzzword. The point is not to be dazzled by the volume of data, but rather to analyze it and convert it into insights, innovations, and business value. There are also real differences between conventional analytics and big data. In this article, we show some results of big data analysis using open DPC (Diagnosis Procedure Combination) data in the central part of Japan: Toyama, Ishikawa, Fukui, Nagano, Gifu, Aichi, Shizuoka, and Mie Prefectures. These 8 prefectures contain 51 medical administration areas called secondary medical areas. By applying big data analysis techniques such as k-means, hierarchical clustering, and self-organizing maps to DPC data, we can visualize the disease structure and detect similarities or variations among the 51 secondary medical areas. The combination of a big data analysis technique and open DPC data is a very powerful method to depict real figures on patient distribution in Japan.
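The clustering step described above can be sketched with a minimal k-means that groups hypothetical medical areas by their disease-mix vectors. This is a hedged illustration: the six areas and three disease groups below are synthetic stand-ins, not actual DPC records or the 51 secondary medical areas.

```python
import numpy as np

def kmeans(X, k, iters=50):
    # deterministic farthest-point initialization (avoids a random seed)
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(iters):
        # assign each area to its nearest center
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        # recompute centers; keep the old center if a cluster empties
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

# synthetic "disease structure" vectors for 6 hypothetical areas;
# columns could be per-capita admission rates for 3 disease groups
X = np.array([[1.0, 0.1, 0.1], [0.9, 0.2, 0.1],   # areas dominated by disease A
              [0.1, 1.0, 0.2], [0.2, 0.9, 0.1],   # disease B
              [0.1, 0.1, 1.0], [0.2, 0.2, 0.9]])  # disease C
labels = kmeans(X, k=3)
# areas with similar disease mixes end up sharing a cluster label
assert labels[0] == labels[1] and labels[2] == labels[3] and labels[4] == labels[5]
```

The real analysis would replace the toy matrix with per-area DPC admission profiles; hierarchical clustering or self-organizing maps slot into the same pipeline.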
Andreu-Perez, Javier; Poon, Carmen C Y; Merrifield, Robert D; Wong, Stephen T C; Yang, Guang-Zhong
2015-07-01
This paper provides an overview of recent developments in big data in the context of biomedical and health informatics. It outlines the key characteristics of big data and how medical and health informatics, translational bioinformatics, sensor informatics, and imaging informatics will benefit from an integrated approach of piecing together different aspects of personalized information from a diverse range of data sources, both structured and unstructured, covering genomics, proteomics, metabolomics, as well as imaging, clinical diagnosis, and long-term continuous physiological sensing of an individual. It is expected that recent advances in big data will expand our knowledge for testing new hypotheses about disease management from diagnosis to prevention to personalized treatment. The rise of big data, however, also raises challenges in terms of privacy, security, data ownership, data stewardship, and governance. This paper discusses some of the existing activities and future opportunities related to big data for health, outlining some of the key underlying issues that need to be tackled.
Big data analytics to improve cardiovascular care: promise and challenges.
Rumsfeld, John S; Joynt, Karen E; Maddox, Thomas M
2016-06-01
The potential for big data analytics to improve cardiovascular quality of care and patient outcomes is tremendous. However, the application of big data in health care is at a nascent stage, and the evidence to date demonstrating that big data analytics will improve care and outcomes is scant. This Review provides an overview of the data sources and methods that comprise big data analytics, and describes eight areas of application of big data analytics to improve cardiovascular care, including predictive modelling for risk and resource use, population management, drug and medical device safety surveillance, disease and treatment heterogeneity, precision medicine and clinical decision support, quality of care and performance measurement, and public health and research applications. We also delineate the important challenges for big data applications in cardiovascular care, including the need for evidence of effectiveness and safety, the methodological issues such as data quality and validation, and the critical importance of clinical integration and proof of clinical utility. If big data analytics are shown to improve quality of care and patient outcomes, and can be successfully implemented in cardiovascular practice, big data will fulfil its potential as an important component of a learning health-care system.
[Medical big data and precision medicine: prospects of epidemiology].
Song, J; Hu, Y H
2016-08-10
Since the development of high-throughput technologies, electronic medical record systems, and big data technology, the value of medical data has attracted increasing attention. On the other hand, the proposal of the Precision Medicine Initiative opens up prospects for medical big data. As a tool-related discipline, epidemiology focuses on exploiting the resources of existing big data and promoting the integration of translational research and knowledge, in order to completely unlock the "black box" of the exposure-disease continuum. It also tries to accelerate the realization of the ultimate goal of precision medicine. The overall purpose, however, is to translate the evidence from scientific research into improvements in the health of the people.
Using Big Data to Discover Diagnostics and Therapeutics for Gastrointestinal and Liver Diseases
Wooden, Benjamin; Goossens, Nicolas; Hoshida, Yujin; Friedman, Scott L.
2016-01-01
Technologies such as genome sequencing, gene expression profiling, proteomic and metabolomic analyses, electronic medical records, and patient-reported health information have produced large amounts of data, from various populations, cell types, and disorders (big data). However, these data must be integrated and analyzed if they are to produce models or concepts about physiologic function or mechanisms of pathogenesis. Many of these data are available to the public, allowing researchers anywhere to search for markers of specific biologic processes or therapeutic targets for specific diseases or patient types. We review recent advances in the fields of computational and systems biology, and highlight opportunities for researchers to use big data sets in the fields of gastroenterology and hepatology, to complement traditional means of diagnostic and therapeutic discovery. PMID:27773806
Examining Big Brother's Purpose for Using Electronic Performance Monitoring
ERIC Educational Resources Information Center
Bartels, Lynn K.; Nordstrom, Cynthia R.
2012-01-01
We examined whether the reason offered for electronic performance monitoring (EPM) influenced participants' performance, stress, motivation, and satisfaction. Participants performed a data-entry task in one of five experimental conditions. In one condition, participants were not electronically monitored. In the remaining conditions, participants…
Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew
2015-01-01
Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data are essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing and data intensive in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework techniques are proposed by leveraging cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. MapReduce-based algorithm framework is developed to support parallel processing of geoscience data. And service-oriented workflow architecture is built for supporting on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists. PMID:25742012
Bat habitat research. Final technical report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, B.L.; Bosworth, W.R.; Doering, R.W.
This progress report describes activities over the current reporting period to characterize the habitats of bats on the INEL. Research tasks are entitled: Monitoring bat habitation of caves on the INEL to determine species present, numbers, and seasons of use; Monitor bat use of man-made ponds at the INEL to determine species present and rates of use of these waters; If the Big Lost River is flowing on the INEL and/or if the Big Lost River sinks contain water, determine species present, numbers and seasons of use; Determine the habitat requirements of Townsend's big-eared bats, including the microclimate of caves containing Townsend's big-eared bats as compared to other caves that do not contain bats; Determine and describe an economical and efficient bat census technique to be used periodically by INEL scientists to determine the status of bats on the INEL; and Provide a suggested management and protective plan for bat species on the INEL that might, in the future, be added to the endangered and sensitive list.
Simmons, Andrea Megela; Hom, Kelsey N; Simmons, James A
2017-03-01
Thresholds to short-duration narrowband frequency-modulated (FM) sweeps were measured in six big brown bats (Eptesicus fuscus) in a two-alternative forced choice passive listening task before and after exposure to band-limited noise (lower and upper frequencies between 10 and 50 kHz, 1 h, 116-119 dB sound pressure level root mean square; sound exposure level 152 dB). At recovery time points of 2 and 5 min post-exposure, thresholds varied from -4 to +4 dB from pre-exposure threshold estimates. Thresholds after sham (control) exposures varied from -6 to +2 dB from pre-exposure estimates. The small differences in thresholds after noise and sham exposures support the hypothesis that big brown bats do not experience significant temporary threshold shifts under these experimental conditions. These results confirm earlier findings showing stability of thresholds to broadband FM sweeps at longer recovery times after exposure to broadband noise. Big brown bats may have evolved a lessened susceptibility to noise-induced hearing losses, related to the special demands of echolocation.
2014-06-30
lesson learned through exploring current data with the ForceNet tool is that the tool (as implemented thus far) is able to give analysts a big ... Twitter data and on the development and implementation of tools to support this task; these include a Group Builder, a Force-directed Graph tool, and a
Real-time yield estimation based on deep learning
NASA Astrophysics Data System (ADS)
Rahnemoonfar, Maryam; Sheppard, Clay
2017-05-01
Crop yield estimation is an important task in product management and marketing. Accurate yield prediction helps farmers to make better decisions on cultivation practices, plant disease prevention, and the size of the harvest labor force. The current practice of yield estimation based on the manual counting of fruits is a very time-consuming and expensive process, and it is not practical for big fields. Robotic systems, including Unmanned Aerial Vehicles (UAV) and Unmanned Ground Vehicles (UGV), provide an efficient, cost-effective, flexible, and scalable solution for product management and yield prediction. Recently, huge amounts of data have been gathered from agricultural fields; however, efficient analysis of those data is still a challenging task. Computer vision approaches currently face different challenges in the automatic counting of fruits or flowers, including occlusion caused by leaves, branches or other fruits, variance in natural illumination, and scale. In this paper, a novel deep convolutional network algorithm was developed to facilitate accurate yield prediction and automatic counting of fruits and vegetables in images. Our method is robust to occlusion, shadow, uneven illumination, and scale. Experimental results in comparison to the state-of-the-art show the effectiveness of our algorithm.
Influence of mental practice on development of voluntary control of a novel motor acquisition task.
Creelman, Jim
2003-08-01
The purpose of this investigation was to assess whether mental practice facilitates the development of voluntary control over the recruitment of the abductor hallucis muscle to produce isolated big toe abduction. A sample of convenience of 15 women and 20 men with a mean age of 28.8 yr. (SD=5.7) and healthy feet, who were unable voluntarily to abduct the big toe, were randomly assigned to one of three groups, a mental practice group, a physical practice group, and a group who performed a control movement during practice. Each subject received neuromuscular electrical stimulation to introduce the desired movement prior to each of five practice bouts over a single session lasting 2 hr. Big toe abduction active range of motion and surface electromyographic (EMG) output of the abductor hallucis and extensor digitorum brevis muscles were measured prior to the first practice bout and following each practice bout, yielding seven acquisition trials. Acquisition is defined as an improvement in both active range of motion and in the difference between the integrated EMG of the abductor hallucis and extensor digitorum brevis muscles during successive acquisition trials. Seven members of both the mental and physical practice groups and one member of the control group met the acquisition criteria. Chi-square analysis indicated the group difference was statistically significant, suggesting mental practice was effective for this task.
2017-01-01
Reform of drug procurement is being extensively implemented and expanded in China, especially in today's big data environment. However, the pattern of supply mode innovation lags behind procurement improvement. Problems such as financial strain and supply breaks frequently occur, which affect the stability of drug supply. A Drug Pooling System has been proposed and applied in a few pilot cities to resolve these problems. From the perspective of the supply chain, this study analyzes the process of setting important parameters and sets out the tasks of the involved parties in a pooling system, according to the issues identified in the pilot run. The approach is based on big data analysis and simulation using system dynamics theory and modeling in Vensim software to optimize system performance. This study proposes a theoretical framework to resolve these problems and attempts to provide a valuable reference for future applications of pooling systems. PMID:28293258
An Exercise in Exploring Big Data for Producing Reliable Statistical Information.
Rey-Del-Castillo, Pilar; Cardeñosa, Jesús
2016-06-01
The availability of copious data about many human, social, and economic phenomena is considered an opportunity for the production of official statistics. National statistical organizations and other institutions are more and more involved in new projects for developing what is sometimes seen as a possible change of paradigm in the way statistical figures are produced. Nevertheless, there are hardly any systems in production using Big Data sources. Arguments of confidentiality, data ownership, representativeness, and others make it a difficult task to get results in the short term. Using Call Detail Records from Ivory Coast as an illustration, this article shows some of the issues that must be dealt with when producing statistical indicators from Big Data sources. A proposal of a graphical method to evaluate one specific aspect of the quality of the computed figures is also presented, demonstrating that the visual insight provided improves the results obtained using other traditional procedures.
Liu, Zhenqiu; Sun, Fengzhu; McGovern, Dermot P
2017-01-01
Feature selection and prediction are the most important tasks for big data mining. The common strategies for feature selection in big data mining are L1, SCAD, and MC+. However, none of the existing algorithms optimizes L0, which penalizes the number of nonzero features directly. In this paper, we develop a novel sparse generalized linear model (GLM) with L0 approximation for feature selection and prediction with big omics data. The proposed approach approximates the L0 optimization directly. Even though the original L0 problem is non-convex, it is approximated by sequential convex optimizations with the proposed algorithm. The proposed method is easy to implement with only several lines of code. Novel adaptive ridge algorithms (L0ADRIDGE) for L0-penalized GLM with ultra-high-dimensional big data are developed. The proposed approach outperforms the other cutting-edge regularization methods, including SCAD and MC+, in simulations. When it is applied to integrated analysis of mRNA, microRNA, and methylation data from TCGA ovarian cancer, multilevel gene signatures associated with suboptimal debulking are identified simultaneously. The biological significance and potential clinical importance of those genes are further explored. The developed software, L0ADRIDGE, in MATLAB is available at https://github.com/liuzqx/L0adridge.
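The adaptive-ridge idea can be sketched for the simplest case, ordinary least squares. This is a hedged illustration on synthetic data, not the L0ADRIDGE implementation itself: reweighting a ridge penalty by w_j = 1/(b_j^2 + delta^2) makes each term lam * w_j * b_j^2 approach lam for nonzero coefficients and 0 otherwise, so the penalty approximates lam times the L0 count.

```python
import numpy as np

def l0_adaptive_ridge(X, y, lam=1.0, delta=1e-4, iters=30):
    """Approximate L0-penalized least squares by iteratively
    reweighted ridge: w_j = 1 / (b_j^2 + delta^2), so that
    lam * w_j * b_j^2 ~= lam * 1{b_j != 0} near convergence."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]  # start from OLS
    for _ in range(iters):
        w = 1.0 / (b ** 2 + delta ** 2)
        b = np.linalg.solve(X.T @ X + lam * np.diag(w), X.T @ y)
    return b

# synthetic sparse regression: only features 0 and 1 carry signal
rng = np.random.default_rng(42)
X = rng.standard_normal((100, 10))
beta_true = np.zeros(10)
beta_true[0], beta_true[1] = 3.0, -2.0
y = X @ beta_true + 0.1 * rng.standard_normal(100)

b = l0_adaptive_ridge(X, y)
assert np.sum(np.abs(b) > 0.5) == 2               # support recovered
assert abs(b[0] - 3.0) < 0.3 and abs(b[1] + 2.0) < 0.3
```

The GLM version in the paper replaces the least-squares solve with iteratively reweighted steps for the chosen link function; the sequential convex approximation of the non-convex L0 objective is the same mechanism.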
A hybrid ARIMA and neural network model applied to forecast catch volumes of Selar crumenophthalmus
NASA Astrophysics Data System (ADS)
Aquino, Ronald L.; Alcantara, Nialle Loui Mar T.; Addawe, Rizavel C.
2017-11-01
The Selar crumenophthalmus, with the English name big-eyed scad fish, locally known as matang-baka, is one of the fishes commonly caught along the waters of La Union, Philippines. The study deals with the forecasting of catch volumes of big-eyed scad fish for commercial consumption. The data used are quarterly catch volumes of big-eyed scad fish from 2002 to the first quarter of 2017. These actual data are available from the open stat database published by the Philippine Statistics Authority (PSA), whose task is to collect, compile, analyze, and publish information concerning different aspects of the Philippine setting. Autoregressive Integrated Moving Average (ARIMA) models, an Artificial Neural Network (ANN) model, and a hybrid model consisting of ARIMA and ANN were developed to forecast catch volumes of big-eyed scad fish. Statistical errors such as Mean Absolute Errors (MAE) and Root Mean Square Errors (RMSE) were computed and compared to choose the most suitable model for forecasting the catch volume for the next few quarters. A comparison of the results of each model and the corresponding statistical errors reveals that the hybrid model, ARIMA-ANN (2,1,2)(6:3:1), is the most suitable model to forecast the catch volumes of the big-eyed scad fish for the next few quarters.
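A hybrid linear-plus-nonlinear forecast of the kind described can be sketched as follows. This is a hedged illustration on a synthetic series: a k-nearest-neighbour regressor stands in for the ANN residual stage, and none of the numbers correspond to the actual catch data.

```python
import numpy as np

rng = np.random.default_rng(7)

# synthetic quarterly series: a linear AR part plus a nonlinear
# term that a linear model cannot capture
n = 200
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] + np.cos(3.0 * y[t - 2]) + 0.02 * rng.standard_normal()

# lag design matrix [y_{t-1}, y_{t-2}, 1]
Y = y[2:]
L = np.column_stack([y[1:-1], y[:-2], np.ones(n - 2)])
split = 150
L_tr, Y_tr, L_te, Y_te = L[:split], Y[:split], L[split:], Y[split:]

# stage 1: linear AR(2) fit by least squares
coef = np.linalg.lstsq(L_tr, Y_tr, rcond=None)[0]
ar_tr, ar_te = L_tr @ coef, L_te @ coef
resid_tr = Y_tr - ar_tr

# stage 2: nonlinear model of the residuals (k-nearest neighbours
# on the lag vector stands in for the neural network here)
def knn_predict(q, P, r, k=5):
    d = ((P[:, :2] - q[:2]) ** 2).sum(1)
    return r[np.argsort(d)[:k]].mean()

corr = np.array([knn_predict(q, L_tr, resid_tr) for q in L_te])

rmse = lambda e: float(np.sqrt((e ** 2).mean()))
rmse_ar = rmse(Y_te - ar_te)
rmse_hybrid = rmse(Y_te - (ar_te + corr))
assert rmse_hybrid < rmse_ar  # the residual stage improves the linear fit
```

The paper's actual pipeline differences the series and fits an ARIMA(2,1,2) before the 6:3:1 ANN; the structure (linear stage, then a nonlinear model of its residuals) is the same.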
Dinov, Ivo D.; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W.; Price, Nathan D.; Van Horn, John D.; Ames, Joseph; Clark, Kristi; Hood, Leroy; Hampstead, Benjamin M.; Dauer, William; Toga, Arthur W.
2016-01-01
Background A unique archive of Big Data on Parkinson’s Disease is collected, managed and disseminated by the Parkinson’s Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson’s disease (PD) risk to trauma, genetics, environment, co-morbidities, or life style. The defining characteristics of Big Data–large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources–all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Methods and Findings Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data. 
We evaluated several complementary model-based predictive approaches, which failed to generate accurate and reliable diagnostic predictions. However, the results of several machine-learning based classification methods indicated significant power to predict Parkinson’s disease in the PPMI subjects (consistent accuracy, sensitivity, and specificity exceeding 96%, confirmed using statistical n-fold cross-validation). Clinical (e.g., Unified Parkinson's Disease Rating Scale (UPDRS) scores), demographic (e.g., age), genetics (e.g., rs34637584, chr12), and derived neuroimaging biomarker (e.g., cerebellum shape index) data all contributed to the predictive analytics and diagnostic forecasting. Conclusions Model-free Big Data machine learning-based classification methods (e.g., adaptive boosting, support vector machines) can outperform model-based techniques in terms of predictive precision and reliability (e.g., forecasting patient diagnosis). We observed that statistical rebalancing of cohort sizes yields better discrimination of group differences, specifically for predictive analytics based on heterogeneous and incomplete PPMI data. UPDRS scores play a critical role in predicting diagnosis, which is expected based on the clinical definition of Parkinson’s disease. Even without longitudinal UPDRS data, however, the accuracy of model-free machine learning based classification is over 80%. The methods, software and protocols developed here are openly shared and can be employed to study other neurodegenerative disorders (e.g., Alzheimer’s, Huntington’s, amyotrophic lateral sclerosis), as well as for other predictive Big Data analytics applications. PMID:27494614
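The cohort-rebalancing step mentioned above can be illustrated with a minimal random-oversampling sketch. The labels below are synthetic, and the PPMI protocol's actual rebalancing method may differ; this only shows the basic idea of equalizing cohort sizes before classification.

```python
import numpy as np

def rebalance(X, y, seed=0):
    """Random-oversample minority classes so all cohorts reach the
    size of the largest one (a simple stand-in for the rebalancing
    step; PPMI's exact procedure may differ)."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = []
    for c, cnt in zip(classes, counts):
        members = np.flatnonzero(y == c)
        idx.append(members)
        if cnt < target:  # resample with replacement to fill the gap
            idx.append(rng.choice(members, target - cnt, replace=True))
    idx = np.concatenate(idx)
    return X[idx], y[idx]

# imbalanced toy cohort: 90 controls (0), 10 cases (1)
X = np.arange(100).reshape(-1, 1).astype(float)
y = np.array([0] * 90 + [1] * 10)
Xb, yb = rebalance(X, y)
assert (yb == 0).sum() == (yb == 1).sum() == 90
```

After such rebalancing, accuracy, sensitivity, and specificity estimates from n-fold cross-validation are less dominated by the majority cohort, which is the effect the study reports.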
On Darwin's science and its contexts.
Hodge, M J S
2014-01-01
The notions of 'the Darwinian revolution' and of 'the Scientific Revolution' are no longer unproblematic; so this paper does not construe its task as relating these two items to each other. There can be big-picture and long-run history even when that task is declined. Such history has to be done pluralistically. Relating Darwin's science to Newton's science is one kind of historiographical challenge; relating Darwin's science to seventeenth-century finance capitalism is another kind. Relating Darwin's science to long-run traditions and transitions is a different kind of task from relating his science to the immediate short-run contexts. Copyright © 2014 Elsevier Ltd. All rights reserved.
Choi, Jungran; Park, Hyojin; Lee, Choong-Hyun
2014-06-01
With the enormous increase in the amount of data, the concept of big data has emerged, allowing us to gain new insights and appreciate its value. However, analysis of gastrointestinal diseases from the viewpoint of big data has not yet been performed in Korea. This study analyzed the data of a blog's visitors as a set of big data to investigate questions they did not mention in the clinical situation. We analyzed the blog of a professor whose subspecialty is gastroenterology at Gangnam Severance Hospital. We assessed the changes in the number of visitors, the access paths of visitors, and the queries from January 2011 to December 2013. A total of 50,084 visitors gained access to the blog. An average of 1,535.3 people visited the blog per month and 49.5 people per day. The number of visitors and the cumulative number of registered posts showed a positive correlation. The most utilized access path of visitors to the website was blog.iseverance.com (42.2%), followed by Google (32.8%) and Daum (6.6%). The most searched term by the visitors in the blog was intestinal metaplasia (16.6%), followed by dizziness (8.3%) and gastric submucosal tumor (7.0%). A personal blog can function as a communication route for patients with digestive diseases. The most frequently searched word necessitating explanation and education was 'intestinal metaplasia'. Identifying and analyzing even unstructured data as a set of big data is expected to provide meaningful information.
Some big ideas for some big problems.
Winter, D D
2000-05-01
Although most psychologists do not see sustainability as a psychological problem, our environmental predicament is caused largely by human behaviors, accompanied by relevant thoughts, feelings, attitudes, and values. The huge task of building sustainable cultures will require a great many psychologists from a variety of backgrounds. In an effort to stimulate the imaginations of a wide spectrum of psychologists to take on the crucial problem of sustainability, this article discusses 4 psychological approaches (neo-analytic, behavioral, social, and cognitive) and outlines some of their insights into environmentally relevant behavior. These models are useful for illuminating ways to increase environmentally responsible behaviors of clients, communities, and professional associations.
Mardis, Elaine R
2016-05-01
The largely untapped potential of big data analytics is a feeding frenzy that has been fueled by the production of many next-generation-sequencing-based data sets that are seeking to answer long-held questions about the biology of human diseases. Although these approaches are likely to be a powerful means of revealing new biological insights, there are a number of substantial challenges that currently hamper efforts to harness the power of big data. This Editorial outlines several such challenges as a means of illustrating that the path to big data revelations is paved with perils that the scientific community must overcome to pursue this important quest. © 2016. Published by The Company of Biologists Ltd.
-Omic and Electronic Health Record Big Data Analytics for Precision Medicine.
Wu, Po-Yen; Cheng, Chih-Wen; Kaddi, Chanchala D; Venugopalan, Janani; Hoffman, Ryan; Wang, May D
2017-02-01
Rapid advances of high-throughput technologies and wide adoption of electronic health records (EHRs) have led to fast accumulation of -omic and EHR data. These voluminous complex data contain abundant information for precision medicine, and big data analytics can extract such knowledge to improve the quality of healthcare. In this paper, we present -omic and EHR data characteristics, associated challenges, and data analytics including data preprocessing, mining, and modeling. To demonstrate how big data analytics enables precision medicine, we provide two case studies, including identifying disease biomarkers from multi-omic data and incorporating -omic information into EHR. Big data analytics is able to address -omic and EHR data challenges for a paradigm shift toward precision medicine. Big data analytics makes sense of -omic and EHR data to improve healthcare outcomes. It has a long-lasting societal impact.
Neuroblastoma, a Paradigm for Big Data Science in Pediatric Oncology
Salazar, Brittany M.; Balczewski, Emily A.; Ung, Choong Yong; Zhu, Shizhen
2016-01-01
Pediatric cancers rarely exhibit recurrent mutational events when compared to most adult cancers. This poses a challenge in understanding how cancers initiate, progress, and metastasize in early childhood. Also, due to limited detected driver mutations, it is difficult to benchmark key genes for drug development. In this review, we use neuroblastoma, a pediatric solid tumor of neural crest origin, as a paradigm for exploring “big data” applications in pediatric oncology. Computational strategies derived from big data science–network- and machine learning-based modeling and drug repositioning—hold the promise of shedding new light on the molecular mechanisms driving neuroblastoma pathogenesis and identifying potential therapeutics to combat this devastating disease. These strategies integrate robust data input, from genomic and transcriptomic studies, clinical data, and in vivo and in vitro experimental models specific to neuroblastoma and other types of cancers that closely mimic its biological characteristics. We discuss contexts in which “big data” and computational approaches, especially network-based modeling, may advance neuroblastoma research, describe currently available data and resources, and propose future models of strategic data collection and analyses for neuroblastoma and other related diseases. PMID:28035989
Using Big Data to Discover Diagnostics and Therapeutics for Gastrointestinal and Liver Diseases.
Wooden, Benjamin; Goossens, Nicolas; Hoshida, Yujin; Friedman, Scott L
2017-01-01
Technologies such as genome sequencing, gene expression profiling, proteomic and metabolomic analyses, electronic medical records, and patient-reported health information have produced large amounts of data from various populations, cell types, and disorders (big data). However, these data must be integrated and analyzed if they are to produce models or concepts about physiological function or mechanisms of pathogenesis. Many of these data are available to the public, allowing researchers anywhere to search for markers of specific biological processes or therapeutic targets for specific diseases or patient types. We review recent advances in the fields of computational and systems biology and highlight opportunities for researchers to use big data sets in the fields of gastroenterology and hepatology to complement traditional means of diagnostic and therapeutic discovery. Copyright © 2017 AGA Institute. Published by Elsevier Inc. All rights reserved.
Wang, Tao; Zhang, Jiahai; Zhang, Xuecheng; Xu, Chao; Tu, Xiaoming
2013-01-01
Streptococcus pneumoniae is a pathogen causing acute respiratory infection, otitis media, and other severe diseases in humans. In this study, the solution structure of a bacterial immunoglobulin-like (Big) domain from a putative S. pneumoniae surface protein, SP0498, was determined by NMR spectroscopy. The SP0498 Big domain adopts an eight-β-strand barrel-like fold, which differs in some aspects from the two-sheet sandwich-like fold of canonical Ig-like domains. Intriguingly, we identified the SP0498 Big domain as a Ca2+-binding domain. Its structure is different from those of the well-known Ca2+-binding domains, thereby revealing a novel Ca2+-binding module. Furthermore, we identified the critical residues responsible for Ca2+ binding. We are the first to report the structural basis of the interactions between the Big domain and Ca2+, suggesting an important role for the Big domain in essential calcium-dependent cellular processes such as pathogenesis. PMID:23326635
Visualizing the knowledge structure and evolution of big data research in healthcare informatics.
Gu, Dongxiao; Li, Jingjing; Li, Xingguo; Liang, Changyong
2017-02-01
In recent years, the literature associated with healthcare big data has grown rapidly, but few studies have used bibliometrics and a visualization approach to conduct deep mining and reveal a panorama of the healthcare big data field. To explore the foundational knowledge and research hotspots of big data research in the field of healthcare informatics, this study conducted a series of bibliometric analyses on the related literature, including publication trends in the field, trends in the number of co-authors per paper, the distribution of core institutions and countries, the core literature distribution, information on prolific authors and innovation paths in the field, a keyword co-occurrence analysis, and research hotspots and trends for the future. By conducting a literature content analysis and structure analysis, we found the following: (a) In the early stage, researchers from the United States, the People's Republic of China, the United Kingdom, and Germany made the most contributions to the literature associated with healthcare big data research and the innovation path in this field. (b) The innovation path in healthcare big data consists of three stages: the disease early detection, diagnosis, treatment, and prognosis phase; the life and health promotion phase; and the nursing phase. (c) Research hotspots are mainly concentrated in three dimensions: the disease dimension (e.g., epidemiology, breast cancer, obesity, and diabetes), the technical dimension (e.g., data mining and machine learning), and the health service dimension (e.g., customized service and elderly nursing). This study will provide scholars in the healthcare informatics community with panoramic knowledge of healthcare big data research, as well as research hotspots and future research directions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
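The keyword co-occurrence analysis mentioned in this abstract rests on a simple counting idea; a minimal Python sketch, with invented keyword lists, might look like this:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(keyword_lists):
    """Count how often each pair of keywords appears in the same paper."""
    pairs = Counter()
    for keywords in keyword_lists:
        # sort so that (a, b) and (b, a) map to the same pair key
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs

papers = [
    ["big data", "machine learning", "diabetes"],
    ["big data", "data mining"],
    ["machine learning", "big data", "breast cancer"],
]
counts = cooccurrence(papers)
print(counts[("big data", "machine learning")])  # 2
```

Real bibliometric tools additionally normalize keyword variants and visualize the resulting pair counts as a network.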
A New Data Access Mechanism for HDFS
NASA Astrophysics Data System (ADS)
Li, Qiang; Sun, Zhenyu; Wei, Zhanchen; Sun, Gongxing
2017-10-01
With the era of big data emerging, Hadoop has become the de facto standard big data processing platform. However, it is still difficult to get legacy applications, such as High Energy Physics (HEP) applications, to run efficiently on the Hadoop platform. There are two reasons for this difficulty: first, random access is not supported on the Hadoop Distributed File System (HDFS); second, it is difficult to make legacy applications adapt to HDFS's streaming data processing mode. In order to address these two issues, a new read and write mechanism for HDFS is proposed. With this mechanism, data access is done on the local file system instead of through the HDFS streaming interfaces. To enable users to modify files, three attributes (permissions, owner, and group) are imposed on Block objects. Blocks stored on Datanodes have the same attributes as the file they belong to. Users can modify blocks while the Map task runs locally, and HDFS is responsible for updating the remaining replicas after the block modification is finished. To further improve the performance of the Hadoop system, a complete task-localization execution mechanism is implemented for I/O-intensive jobs. Test results show that average CPU utilization is improved by 10% with the new task selection strategy, and data read and write performance is improved by about 10% and 30%, respectively.
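The per-block attribute scheme described above can be modeled conceptually; the sketch below is an illustrative Python model only (class and field names are invented, not actual HDFS internals):

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    """A data block that inherits permissions, owner, and group from its file."""
    block_id: int
    permissions: str
    owner: str
    group: str
    dirty: bool = False  # set when a local Map task modifies the block

@dataclass
class HdfsFile:
    path: str
    permissions: str
    owner: str
    group: str
    blocks: list = field(default_factory=list)

    def add_block(self, block_id):
        # blocks carry the same attributes as the file that owns them
        self.blocks.append(Block(block_id, self.permissions, self.owner, self.group))

    def modify_block(self, block_id):
        for b in self.blocks:
            if b.block_id == block_id:
                b.dirty = True  # replicas elsewhere must be re-synchronized later

    def stale_replicas(self):
        return [b.block_id for b in self.blocks if b.dirty]

f = HdfsFile("/data/run1.root", "rw-r--r--", "alice", "hep")
f.add_block(1)
f.add_block(2)
f.modify_block(2)
print(f.stale_replicas())  # [2]
```

In the proposed mechanism, a dirty block would trigger re-synchronization of the remaining replicas once the local Map task finishes.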
A big data approach for climate change indicators processing in the CLIP-C project
NASA Astrophysics Data System (ADS)
D'Anca, Alessandro; Conte, Laura; Palazzo, Cosimo; Fiore, Sandro; Aloisio, Giovanni
2016-04-01
Defining and implementing processing chains with many (e.g., tens or hundreds of) data analytics operators can be a real challenge in many practical scientific use cases, such as climate change indicators. This is usually done via scripts (e.g., bash) on the client side and requires climate scientists to implement and replicate workflow-like control logic (which may be error-prone) in their scripts, along with the expected application-level part. Moreover, the large volume of data and the strong I/O demand pose additional performance challenges. In this regard, production-level tools for climate data analysis are mostly sequential, and there is a lack of big data analytics solutions implementing fine-grain data parallelism or adopting stronger parallel I/O strategies, data locality, workflow optimization, etc. High-level solutions leveraging workflow-enabled big data analytics frameworks for eScience could help scientists define and implement the workflows related to their experiments through a more declarative, efficient, and powerful approach. This talk will start by introducing the main needs and challenges regarding big data analytics workflow management for eScience, and will then provide insights into the implementation of real use cases related to climate change indicators on large datasets produced in the context of the CLIP-C project, an EU FP7 project aiming to provide access to climate information of direct relevance to a wide variety of users, from scientists to policy makers and private-sector decision makers. All of the proposed use cases have been implemented using the Ophidia big data analytics framework. The software stack includes an internal workflow management system, which coordinates, orchestrates, and optimises the execution of multiple scientific data analytics and visualization tasks. Real-time monitoring of workflow execution is also supported through a graphical user interface. In order to address the challenges of the use cases, the implemented data analytics workflows include parallel data analysis, metadata management, virtual file system tasks, map generation, rolling of datasets, and import/export of datasets in NetCDF format. The use cases have been implemented on an 8-node HPC cluster (16 cores/node) of the Athena Cluster available at the CMCC Supercomputing Centre. Benchmark results will also be presented during the talk.
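Climate change indicators of the kind these workflows compute often reduce to simple per-cell aggregations; a hedged numpy-only sketch of a "days above threshold" index on synthetic data (not using Ophidia itself) is:

```python
import numpy as np

def days_above_threshold(tmax, threshold):
    """Count, per grid cell, the days whose maximum temperature exceeds threshold.

    tmax: array of shape (days, lat, lon)
    """
    return (tmax > threshold).sum(axis=0)

rng = np.random.default_rng(0)
# 365 daily temperature fields on a tiny 2x3 grid, degrees Celsius
tmax = rng.normal(loc=20.0, scale=8.0, size=(365, 2, 3))
index = days_above_threshold(tmax, 25.0)
print(index.shape)  # (2, 3)
```

A workflow framework would chain such operators with data import, metadata handling, and map generation, and parallelize the aggregation across the grid.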
Automatic finger joint synovitis localization in ultrasound images
NASA Astrophysics Data System (ADS)
Nurzynska, Karolina; Smolka, Bogdan
2016-04-01
A long-lasting inflammation of the joints results, among other conditions, in many arthritic diseases. When not cured, it may affect other organs and the patient's general health. Therefore, early detection and prompt medical treatment are of great value. The patient's organs are scanned with high-frequency acoustic waves, which enable visualization of interior body structures in an ultrasound sonography (USG) image. Although the procedure is standardized, different projections result in a variety of possible data, which must be analyzed in a short period of time by a physician using medical atlases as guidance. This work introduces an efficient framework, based on a statistical approach to the finger joint USG image, that enables automatic localization of skin and bone regions, which are then used to localize the finger joint synovitis area. The processing pipeline performs the task in real time and achieves high accuracy when compared to annotations prepared by an expert.
Dynamic whole-body robotic manipulation
NASA Astrophysics Data System (ADS)
Abe, Yeuhi; Stephens, Benjamin; Murphy, Michael P.; Rizzi, Alfred A.
2013-05-01
The creation of dynamic manipulation behaviors for high degree of freedom, mobile robots will allow them to accomplish increasingly difficult tasks in the field. We are investigating how the coordinated use of the body, legs, and integrated manipulator, on a mobile robot, can improve the strength, velocity, and workspace when handling heavy objects. We envision that such a capability would aid in a search and rescue scenario when clearing obstacles from a path or searching a rubble pile quickly. Manipulating heavy objects is especially challenging because the dynamic forces are high and a legged system must coordinate all its degrees of freedom to accomplish tasks while maintaining balance. To accomplish these types of manipulation tasks, we use trajectory optimization techniques to generate feasible open-loop behaviors for our 28 dof quadruped robot (BigDog) by planning trajectories in a 13 dimensional space. We apply the Covariance Matrix Adaptation (CMA) algorithm to solve for trajectories that optimize task performance while also obeying important constraints such as torque and velocity limits, kinematic limits, and center of pressure location. These open-loop behaviors are then used to generate desired feed-forward body forces and foot step locations, which enable tracking on the robot. Some hardware results for cinderblock throwing are demonstrated on the BigDog quadruped platform augmented with a human-arm-like manipulator. The results are analogous to how a human athlete maximizes distance in the discus event by performing a precise sequence of choreographed steps.
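The trajectory optimization step can be illustrated with a simpler stochastic optimizer from the same family. The sketch below uses the cross-entropy method as a stand-in for CMA, minimizing an invented quadratic trajectory cost under a box constraint (all names and limits here are illustrative):

```python
import numpy as np

def cross_entropy_optimize(cost, dim, iters=50, pop=64, elite=8, seed=0):
    """Minimize cost(x) by repeatedly refitting a Gaussian to the elite samples."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim)
    for _ in range(iters):
        samples = rng.normal(mean, std, size=(pop, dim))
        samples = np.clip(samples, -2.0, 2.0)      # e.g. a joint velocity limit
        order = np.argsort([cost(s) for s in samples])
        elites = samples[order[:elite]]
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean

# toy cost: reach target waypoints with a small control-effort penalty
target = np.array([1.0, -0.5, 0.25])
cost = lambda x: np.sum((x - target) ** 2) + 0.01 * np.sum(x ** 2)
traj = cross_entropy_optimize(cost, dim=3)
print(np.round(traj, 2))
```

CMA additionally adapts a full covariance matrix and step size, which matters in the 13-dimensional, constraint-laden space the paper describes; the population-based structure, however, is the same.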
Reading Strategies for Students with Mild Disabilities
ERIC Educational Resources Information Center
Boyle, Joseph R.
2008-01-01
Teaching children with mild disabilities to read can be a challenging task for even the most seasoned teacher. In order to be successful, teachers need to be knowledgeable about the big five of reading: phonemic awareness, phonics, fluency, vocabulary, and comprehension (National Reading Panel, 2000). While the ultimate goal of reading is…
The Big Book of Library Grant Money.
ERIC Educational Resources Information Center
Taft Group, Rockville, MD.
Libraries facing diminishing budgets and increasing demand for services must explore all funding sources, especially the more than $6 billion available in annual foundation and corporate giving. The easier and greater access to information on prospective givers that this resource provides simplifies the task. It profiles 1,471 library grant givers, compiled from…
Reducing Racial Disparities in Breast Cancer Care: The Role of 'Big Data'.
Reeder-Hayes, Katherine E; Troester, Melissa A; Meyer, Anne-Marie
2017-10-15
Advances in a wide array of scientific technologies have brought data of unprecedented volume and complexity into the oncology research space. These novel big data resources are applied across a variety of contexts-from health services research using data from insurance claims, cancer registries, and electronic health records, to deeper and broader genomic characterizations of disease. Several forms of big data show promise for improving our understanding of racial disparities in breast cancer, and for powering more intelligent and far-reaching interventions to close the racial gap in breast cancer survival. In this article we introduce several major types of big data used in breast cancer disparities research, highlight important findings to date, and discuss how big data may transform breast cancer disparities research in ways that lead to meaningful, lifesaving changes in breast cancer screening and treatment. We also discuss key challenges that may hinder progress in using big data for cancer disparities research and quality improvement.
Alemzadeh, E; Izadpanah, K
2012-12-01
Mirafiori lettuce big vein virus (MiLBVV) and lettuce big vein associated virus (LBVaV) were found in association with big vein disease of lettuce in Iran. Analysis of part of the coat protein (CP) gene of Iranian isolates of LBVaV showed 97.1-100 % nucleotide sequence identity with other LBVaV isolates. Iranian isolates of MiLBVV belonged to subgroup A and showed 88.6-98.8 % nucleotide sequence identity with other isolates of this virus when amplified with the PCR primer pair MiLV VP. The occurrence of both viruses in the lettuce crop was associated with the presence of resting spores and zoosporangia of the fungus Olpidium brassicae in lettuce roots under field and greenhouse conditions. Two months after sowing lettuce seed in soil collected from a lettuce field with big vein-affected plants, all seedlings were positive for LBVaV and MiLBVV, indicating soil transmission of both viruses.
Connolly, Martin J; Broad, Joanna B; Boyd, Michal; Zhang, Tony Xian; Kerse, Ngaire; Foster, Susan; Lumley, Thomas; Whitehead, Noeline
2016-05-01
Long-term care (LTC) residents have higher hospitalisation rates than non-LTC residents. Rapid decline may follow hospitalisations, hence the importance of preventing unnecessary hospitalisations. The literature describes diagnosis-specific interventions (for cardiac failure, ischaemic heart disease, chronic obstructive pulmonary disease, stroke, and pneumonia, termed the 'big five' diagnoses) impacting hospitalisations of older community-dwellers, but few RCTs show reductions in acute admissions from LTC. LTC facilities with higher than expected hospitalisations were recruited for a cluster-randomised controlled trial (RCT) of a facility-based, complex, non-disease-specific, 9-month intervention comprising gerontology nurse specialist (GNS)-led staff education, facility benchmarking, GNS resident review, and multidisciplinary discussion of residents selected using standard criteria. In this post hoc exploratory analysis, the outcome was acute hospitalisations for 'big five' diagnoses. Re-randomisation analyses were used for end points during months 1-14. For end points during months 4-14, proportional hazards models were adjusted for within-facility clustering. We recruited 36 facilities with 1,998 residents (1,408 female; mean age 82.9 years); 1,924 were alive at 3 months. The intervention did not impact overall rates of acute hospitalisations or mortality (previously published), but resulted in fewer 'big five' admissions (RR = 0.73, 95% CI = 0.54-0.99; P = 0.043), with no significant difference in the rate of other acute admissions. When considering events occurring after 3 months only, the intervention group were 34.7% less likely (HR = 0.65; 95% CI = 0.49-0.88; P = 0.005) to have a 'big five' acute admission than controls, with no differences in the likelihood of acute admissions for other diagnoses (P = 0.96). This generic intervention may reduce admissions for common conditions which the literature shows are impacted by disease-specific admission reduction strategies.
© The Author 2016. Published by Oxford University Press on behalf of the British Geriatrics Society. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Partnership between small biotech and big pharma.
Wiederrecht, Gregory J; Hill, Raymond G; Beer, Margaret S
2006-08-01
The process involved in the identification and development of novel breakthrough medicines at big pharma has recently undergone significant changes, in part because of the extraordinary complexity that is associated with tackling diseases of high unmet need, and also because of the increasingly demanding requirements that have been placed on the pharmaceutical industry by investors and regulatory authorities. In addition, big pharma no longer have a monopoly on the tools and enabling technologies that are required to identify and discover new drugs, as many biotech companies now also have these capabilities. As a result, researchers at biotech companies are able to identify credible drug leads, as well as compounds that have the potential to become marketed medicinal products. This diversification of companies that are involved in drug discovery and development has in turn led to increased partnering interactions between the biotech sector and big pharma. This article examines how Merck and Co Inc, which has historically relied on a combination of internal scientific research and licensed products, has poised itself to become further engaged in partnering with biotech companies, as well as academic institutions, to increase the probability of success associated with identifying novel medicines to treat unmet medical needs--particularly in areas such as central nervous system disorders, obesity/metabolic diseases, atheroma and cancer, and also to cultivate its cardiovascular, respiratory, arthritis, bone, ophthalmology and infectious disease franchises.
Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment.
Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel
2016-08-30
Distributed computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., the Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of its core components, MapReduce facilitates allocating, processing, and mining collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurately estimating the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed per-task data were collected, and a detailed analysis report was produced. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs.
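The details of the TPR method are in the paper; as a hedged illustration of the general idea, one can fit a separate line to each phase of a task's progress curve and extrapolate the current phase to completion (the breakpoint and data below are invented):

```python
import numpy as np

def predict_finish_time(times, progress, split=0.5):
    """Predict when progress reaches 1.0 by fitting one line per phase.

    The progress curve is split at `split` (e.g. an early fast phase vs. a
    later slow phase) and only the current phase's slope is extrapolated.
    """
    times, progress = np.asarray(times, float), np.asarray(progress, float)
    phase = progress >= split
    if phase.sum() >= 2:                   # enough points in the second phase
        t, p = times[phase], progress[phase]
    else:                                  # still in the first phase
        t, p = times, progress
    slope, intercept = np.polyfit(t, p, 1)
    return (1.0 - intercept) / slope       # time at which the fitted line hits 1.0

# task progresses at 2%/s early on, then slows to 1%/s after the halfway point
t1 = np.arange(0, 25)                      # 0..24 s, progress 0 -> 0.48
t2 = np.arange(25, 40)                     # 25..39 s, progress 0.5 -> 0.64
times = np.concatenate([t1, t2])
progress = np.concatenate([0.02 * t1, 0.5 + 0.01 * (t2 - 25)])
print(round(predict_finish_time(times, progress), 1))  # 75.0
```

Extrapolating the early 2%/s slope would predict completion at 50 s; fitting the second phase separately correctly predicts 75 s, which is why a per-phase fit helps with straggler detection.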
Quantification of sudden oak death tree mortality in the Big Sur ecoregion of California
Douglas A. Shoemaker; Christopher B. Oneal; David M. Rizzo; Ross K. Meentemeyer
2008-01-01
Big Sur is one of the most ecologically diverse regions in California and well recognized as a biodiversity hotspot for global conservation priority. Currently the region is experiencing substantial environmental change due to the invasion of Phytophthora ramorum, the plant pathogen causing the forest disease known as sudden oak death. First...
-Omic and Electronic Health Records Big Data Analytics for Precision Medicine
Wu, Po-Yen; Cheng, Chih-Wen; Kaddi, Chanchala D.; Venugopalan, Janani; Hoffman, Ryan; Wang, May D.
2017-01-01
Objective: Rapid advances of high-throughput technologies and wide adoption of electronic health records (EHRs) have led to fast accumulation of -omic and EHR data. These voluminous complex data contain abundant information for precision medicine, and big data analytics can extract such knowledge to improve the quality of health care. Methods: In this article, we present -omic and EHR data characteristics, associated challenges, and data analytics including data pre-processing, mining, and modeling. Results: To demonstrate how big data analytics enables precision medicine, we provide two case studies, including identifying disease biomarkers from multi-omic data and incorporating -omic information into EHR. Conclusion: Big data analytics is able to address -omic and EHR data challenges for a paradigm shift towards precision medicine. Significance: Big data analytics makes sense of -omic and EHR data to improve healthcare outcomes. It has long-lasting societal impact. PMID:27740470
Big Data and the Global Public Health Intelligence Network (GPHIN)
Dion, M; AbdelMalik, P; Mawudeku, A
2015-01-01
Background: Globalization and the potential for rapid spread of emerging infectious diseases have heightened the need for ongoing surveillance and early detection. The Global Public Health Intelligence Network (GPHIN) was established to increase situational awareness and capacity for the early detection of emerging public health events. Objective: To describe how the GPHIN has used Big Data as an effective early detection technique for infectious disease outbreaks worldwide and to identify potential future directions for the GPHIN. Findings: Every day the GPHIN analyzes more than 20,000 online news reports (from over 30,000 sources) in nine languages worldwide. A web-based program aggregates data based on an algorithm that provides potential signals of emerging public health events, which are then reviewed by a multilingual, multidisciplinary team. An alert is sent out if a potential risk is identified. This process proved useful during the Severe Acute Respiratory Syndrome (SARS) outbreak and was adopted shortly after by a number of countries to meet new International Health Regulations that require each country to have the capacity for early detection and reporting. The GPHIN identified the early SARS outbreak in China, was credited with the first alert on MERS-CoV, and has played a significant role in the monitoring of the Ebola outbreak in West Africa. Future developments are being considered to advance the GPHIN's capacity in light of other Big Data sources, such as social media, and its analytical capacity in terms of algorithm development. Conclusion: The GPHIN's early adoption of Big Data has increased global capacity to detect international infectious disease outbreaks and other public health events. Integration of additional Big Data sources and advances in analytical capacity could further strengthen the GPHIN's capability for timely detection and early warning. PMID:29769954
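The GPHIN's actual aggregation algorithm is not public; the sketch below is only a toy illustration of keyword-weighted scoring of news reports followed by a review threshold (weights and reports are invented):

```python
def score_report(text, keyword_weights):
    """Crude relevance score: sum the weights of keywords found in a report."""
    text = text.lower()
    return sum(w for kw, w in keyword_weights.items() if kw in text)

def flag_signals(reports, keyword_weights, threshold):
    """Return reports scoring above threshold, for review by human analysts."""
    return [r for r in reports if score_report(r, keyword_weights) >= threshold]

weights = {"outbreak": 2.0, "haemorrhagic fever": 3.0, "school closure": 1.0}
reports = [
    "Health ministry confirms outbreak of haemorrhagic fever in the region",
    "Local festival draws record crowds",
]
flagged = flag_signals(reports, weights, threshold=3.0)
print(len(flagged))  # 1
```

The key design point the abstract describes is the hybrid pipeline: automated scoring narrows tens of thousands of daily reports to a reviewable set, and multilingual human analysts make the final alert decision.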
Pollett, Simon; Althouse, Benjamin M; Forshey, Brett; Rutherford, George W; Jarman, Richard G
2017-11-01
Internet-based surveillance methods for vector-borne diseases (VBDs) using "big data" sources such as Google, Twitter, and internet newswire scraping have recently been developed, yet reviews on such "digital disease detection" methods have focused on respiratory pathogens, particularly in high-income regions. Here, we present a narrative review of the literature that has examined the performance of internet-based biosurveillance for diseases caused by vector-borne viruses, parasites, and other pathogens, including Zika, dengue, other arthropod-borne viruses, malaria, leishmaniasis, and Lyme disease across a range of settings, including low- and middle-income countries. The fundamental features, advantages, and drawbacks of each internet big data source are presented for those with varying familiarity of "digital epidemiology." We conclude with some of the challenges and future directions in using internet-based biosurveillance for the surveillance and control of VBD.
Strength in Numbers: Using Big Data to Simplify Sentiment Classification.
Filippas, Apostolos; Lappas, Theodoros
2017-09-01
Sentiment classification, the task of assigning a positive or negative label to a text segment, is a key component of mainstream applications such as reputation monitoring, sentiment summarization, and item recommendation. Even though the performance of sentiment classification methods has steadily improved over time, their ever-increasing complexity renders them comprehensible by only a shrinking minority of expert practitioners. For all others, such highly complex methods are black-box predictors that are hard to tune and even harder to justify to decision makers. Motivated by these shortcomings, we introduce BigCounter: a new algorithm for sentiment classification that substitutes algorithmic complexity with Big Data. Our algorithm combines standard data structures with statistical testing to deliver accurate and interpretable predictions. It is also parameter free and suitable for use virtually "out of the box," which makes it appealing for organizations wanting to leverage their troves of unstructured data without incurring the significant expense of creating in-house teams of data scientists. Finally, BigCounter's efficient and parallelizable design makes it applicable to very large data sets. We apply our method to such data sets in a study of the limits of Big Data for sentiment classification. Our study finds that, after a certain point, predictive performance tends to converge and additional data have little benefit. Our algorithmic design and findings provide the foundations for future research on the data-over-computation paradigm for classification problems.
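BigCounter's precise design is described in the paper; the sketch below only illustrates the general data-over-computation idea with a smoothed word-count comparison against labeled corpora (the tiny corpora are invented):

```python
import math
from collections import Counter

def train_counts(texts):
    """Aggregate word counts over a labeled corpus."""
    counts = Counter()
    for t in texts:
        counts.update(t.lower().split())
    return counts

def classify(text, pos_counts, neg_counts):
    """Label by which corpus the text's words occur in more often,
    using an add-one-smoothed log frequency ratio."""
    pos_total, neg_total = sum(pos_counts.values()), sum(neg_counts.values())
    score = 0.0
    for w in text.lower().split():
        p = (pos_counts[w] + 1) / (pos_total + 1)
        n = (neg_counts[w] + 1) / (neg_total + 1)
        score += math.log(p / n)
    return "positive" if score >= 0 else "negative"

pos = train_counts(["great product loved it", "excellent quality great value"])
neg = train_counts(["terrible quality broke fast", "awful waste of money"])
print(classify("great value loved it", pos, neg))  # positive
```

With large corpora, counting of this kind becomes competitive because the lookup tables absorb the complexity that elaborate models would otherwise have to learn, which is the trade-off the abstract highlights.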
Medical big data: promise and challenges.
Lee, Choong Ho; Yoon, Hyung-Jin
2017-03-01
The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes aspects of data analysis, such as hypothesis-generating rather than hypothesis-testing approaches. Big data focuses on the temporal stability of an association rather than on causal relationships, and underlying probability distribution assumptions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct from big data of other disciplines, but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, the curse of dimensionality, and bias control, and share the inherent limitations of observational studies, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcomes and reduce waste in areas including nephrology.
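The propensity score analysis mentioned above can be sketched in a few lines; here the propensity score is estimated naively as the treated fraction within each covariate stratum and used for inverse probability weighting (the toy data are invented):

```python
import numpy as np

def ipw_effect(treated, outcome, stratum):
    """Inverse-probability-weighted treatment effect, with the propensity score
    estimated as the treated fraction within each covariate stratum."""
    treated, outcome, stratum = map(np.asarray, (treated, outcome, stratum))
    ps = np.empty(len(treated), float)
    for s in np.unique(stratum):
        mask = stratum == s
        ps[mask] = treated[mask].mean()      # P(treatment | stratum)
    w_t = treated / ps                       # weights for treated subjects
    w_c = (1 - treated) / (1 - ps)           # weights for controls
    return (w_t * outcome).sum() / w_t.sum() - (w_c * outcome).sum() / w_c.sum()

# Confounded toy data: sicker patients (stratum 1) are treated more often
treated = [1, 1, 0, 1, 0, 0, 1, 0]
stratum = [1, 1, 1, 1, 0, 0, 0, 0]
outcome = [3, 4, 2, 3, 5, 6, 7, 6]
print(round(ipw_effect(treated, outcome, stratum), 2))  # 1.33
```

In practice the propensity score is estimated with a regression model over many covariates rather than a single stratum, but the reweighting logic that removes measured confounding is the same.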
Big data and clinicians: a review on the state of the science.
Wang, Weiqi; Krishnan, Eswar
2014-01-17
In the past few decades, medically related data collection saw a huge increase, referred to as big data. These huge datasets bring challenges in storage, processing, and analysis. In clinical medicine, big data is expected to play an important role in identifying causality of patient symptoms, in predicting hazards of disease incidence or reoccurrence, and in improving primary-care quality. The objective of this review was to provide an overview of the features of clinical big data, describe a few commonly employed computational algorithms, statistical methods, and software toolkits for data manipulation and analysis, and discuss the challenges and limitations in this realm. We conducted a literature review to identify studies on big data in medicine, especially clinical medicine. We used different combinations of keywords to search PubMed, Science Direct, Web of Knowledge, and Google Scholar for literature of interest from the past 10 years. This paper reviewed studies that analyzed clinical big data and discussed issues related to storage and analysis of this type of data. Big data is becoming a common feature of biological and clinical studies. Researchers who use clinical big data face multiple challenges, and the data itself has limitations. It is imperative that methodologies for data analysis keep pace with our ability to collect and store data.
Zhang, Shu; Li, Xiang; Lv, Jinglei; Jiang, Xi; Guo, Lei; Liu, Tianming
2016-03-01
A relatively underexplored question in fMRI is whether there are intrinsic differences in terms of signal composition patterns that can effectively characterize and differentiate task-based or resting state fMRI (tfMRI or rsfMRI) signals. In this paper, we propose a novel two-stage sparse representation framework to examine the fundamental difference between tfMRI and rsfMRI signals. Specifically, in the first stage, the whole-brain tfMRI or rsfMRI signals of each subject were composed into a big data matrix, which was then factorized into a subject-specific dictionary matrix and a weight coefficient matrix for sparse representation. In the second stage, all of the dictionary matrices from both tfMRI/rsfMRI data across multiple subjects were composed into another big data-matrix, which was further sparsely represented by a cross-subjects common dictionary and a weight matrix. This framework has been applied on the recently publicly released Human Connectome Project (HCP) fMRI data and experimental results revealed that there are distinctive and descriptive atoms in the cross-subjects common dictionary that can effectively characterize and differentiate tfMRI and rsfMRI signals, achieving 100% classification accuracy. Moreover, our methods and results can be meaningfully interpreted, e.g., the well-known default mode network (DMN) activities can be recovered from the very noisy and heterogeneous aggregated big-data of tfMRI and rsfMRI signals across all subjects in HCP Q1 release.
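The first-stage factorization described above rests on sparse coding: approximating each signal with a few atoms of a dictionary. As a hedged toy sketch (not the authors' actual pipeline, which learns subject-specific dictionaries from whole-brain signal matrices), a single matching-pursuit step selects the atom most correlated with a signal; the dictionary and signal values below are invented two-dimensional toys.

```python
# Hedged sketch of the sparse-coding idea behind dictionary-based
# factorization: each signal is approximated by its best-matching atom
# (one matching-pursuit step). Dictionary and signal are toy values.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def code_1sparse(signal, atoms):
    """Return (atom_index, coefficient) for the atom most correlated with signal."""
    idx = max(range(len(atoms)), key=lambda i: abs(dot(signal, atoms[i])))
    return idx, dot(signal, atoms[idx])

atoms = [[1.0, 0.0], [0.0, 1.0]]   # trivial orthonormal "dictionary"
signal = [0.2, 0.9]
print(code_1sparse(signal, atoms))  # (1, 0.9)
```

Iterating this step on the residual (signal minus the selected atom's contribution) yields a k-sparse code, which is the representation the weight matrices in the abstract hold per signal.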
Ren, Guomin; Krawetz, Roman
2015-01-01
The data explosion in the last decade is revolutionizing diagnostics research and the healthcare industry, offering both opportunities and challenges. These high-throughput "omics" techniques have generated more scientific data in the last few years than in the entire history of mankind. Here we present a brief summary of how "big data" have influenced early diagnosis of complex diseases. We will also review some of the most commonly used "omics" techniques and their applications in diagnostics. Finally, we will discuss the issues brought by these new techniques when translating laboratory discoveries to clinical practice.
Can Chimpanzees ("Pan troglodytes") Discriminate Appearance from Reality?
ERIC Educational Resources Information Center
Krachun, Carla; Call, Josep; Tomasello, Michael
2009-01-01
A milestone in human development is coming to recognize that how something looks is not necessarily how it is. We tested appearance-reality understanding in chimpanzees ("Pan troglodytes") with a task requiring them to choose between a small grape and a big grape. The apparent relative size of the grapes was reversed using magnifying and…
Using Library Resources and Technology to Develop Global and Collaborative Workspaces
ERIC Educational Resources Information Center
Shepherd, Sonya S.
2012-01-01
Information literacy is defined as a "set of skills needed to find, retrieve, analyze, and use information" (ACRL, 2011). Similarly, the "Big6®" consists of (i) defining the task, (ii) defining strategies for seeking information, (iii) locating and accessing information, (iv) knowing how to use the information found, (v)…
Little Words, Big Impact: Determiners Begin to Bootstrap Reference by 12 Months
ERIC Educational Resources Information Center
Kedar, Yarden; Casasola, Marianella; Lust, Barbara; Parmet, Yisrael
2017-01-01
We tested 12- and 18-month-old English-learning infants on a preferential-looking task which contrasted grammatically correct sentences using the determiner "the" vs. three ungrammatical conditions in which "the" was substituted by another English function word, a nonsense word, or omitted. Our design involved strict controls…
van Hest, N A; Aldridge, R W; de Vries, G; Sandgren, A; Hauer, B; Hayward, A; Arrazola de Oñate, W; Haas, W; Codecasa, L R; Caylà, J A; Story, A; Antoine, D; Gori, A; Quabeck, L; Jonsson, J; Wanlin, M; Orcau, Å; Rodes, A; Dedicoat, M; Antoun, F; van Deutekom, H; Keizer, St; Abubakar, I
2014-03-06
In low-incidence countries in the European Union (EU), tuberculosis (TB) is concentrated in big cities, especially among certain urban high-risk groups including immigrants from TB high-incidence countries, homeless people, and those with a history of drug and alcohol misuse. Elimination of TB in European big cities requires control measures focused on multiple layers of the urban population. The particular complexities of major EU metropolises, for example high population density and social structure, create specific opportunities for transmission, but also enable targeted TB control interventions, not efficient in the general population, to be effective or cost effective. Lessons can be learnt from across the EU and this consensus statement on TB control in big cities and urban risk groups was prepared by a working group representing various EU big cities, brought together on the initiative of the European Centre for Disease Prevention and Control. The consensus statement describes general and specific social, educational, operational, organisational, legal and monitoring TB control interventions in EU big cities, as well as providing recommendations for big city TB control, based upon a conceptual TB transmission and control model.
Sändig, Sonja; Schnitzler, Hans-Ulrich; Denzinger, Annette
2014-08-15
Four big brown bats (Eptesicus fuscus) were challenged in an obstacle avoidance experiment to localize vertically stretched wires requiring progressively greater accuracy by diminishing the wire-to-wire distance from 50 to 10 cm. The performance of the bats decreased with decreasing gap size. The avoidance task became very difficult below a wire separation of 30 cm, which corresponds to the average wingspan of E. fuscus. Two of the bats were able to pass without collisions down to a gap size of 10 cm in some of the flights. The other two bats only managed to master gap sizes down to 20 and 30 cm, respectively. They also performed distinctly worse at all other gap sizes. With increasing difficulty of the task, the bats changed their flight and echolocation behaviour. Especially at gap sizes of 30 cm and below, flight paths increased in height and flight speed was reduced. In addition, the bats emitted approach signals that were arranged in groups. At all gap sizes, the largest numbers of pulses per group were observed in the last group before passing the obstacle. The more difficult the obstacle avoidance task, the more pulses there were in the groups and the shorter the within-group pulse intervals. In comparable situations, the better-performing bats always emitted groups with more pulses than the less well-performing individuals. We hypothesize that the accuracy of target localization increases with the number of pulses per group and that each group is processed as a package. © 2014. Published by The Company of Biologists Ltd.
Big-Data-Driven Stem Cell Science and Tissue Engineering: Vision and Unique Opportunities.
Del Sol, Antonio; Thiesen, Hans J; Imitola, Jaime; Carazo Salas, Rafael E
2017-02-02
Achieving the promises of stem cell science to generate precise disease models and designer cell samples for personalized therapeutics will require harnessing pheno-genotypic cell-level data quantitatively and predictively in the lab and clinic. Those requirements could be met by developing a Big-Data-driven stem cell science strategy and community. Copyright © 2017 Elsevier Inc. All rights reserved.
Using 'big data' to validate claims made in the pharmaceutical approval process.
Wasser, Thomas; Haynes, Kevin; Barron, John; Cziraky, Mark
2015-01-01
Big Data in the healthcare setting refers to the storage, assimilation, and analysis of large quantities of information regarding patient care. These data can be collected and stored in a wide variety of ways, including electronic medical records collected at the patient bedside or medical records that are coded and passed to insurance companies for reimbursement. When these data are processed, it is possible to validate claims as a part of the regulatory review process regarding the anticipated performance of medications and devices. In order to properly analyze claims by manufacturers and others, there is a need to express claims in terms that are testable in a timeframe that is useful and meaningful to formulary committees. Claims for the comparative benefits and costs, including budget impact, of products and devices need to be expressed in measurable terms, ideally in the context of submission or validation protocols. Claims should be either consistent with accessible Big Data or able to support observational studies where Big Data identifies target populations. Protocols should identify, in disaggregated terms, key variables that would lead to direct or proxy validation. Once these variables are identified, Big Data can be used to query massive quantities of data in the validation process. Research can be passive or active in nature: passive, where the data are collected retrospectively; active, where the researcher is prospectively looking for indicators of co-morbid conditions, side-effects, or adverse events, testing these indicators to determine whether claims are within desired ranges set forth by the manufacturer. Additionally, Big Data can be used to assess the effectiveness of therapy through health insurance records; this, for example, could indicate that disease or co-morbid conditions cease to be treated. Understanding the basic strengths and weaknesses of Big Data in the claim validation process provides a glimpse of the value that this research can provide to industry. Big Data can support a research agenda that focuses on the process of claims validation to support formulary submissions as well as inputs to ongoing disease area and therapeutic class reviews.
Big Data and Clinicians: A Review on the State of the Science
Wang, Weiqi
2014-01-01
Background In the past few decades, medically related data collection saw a huge increase, referred to as big data. These huge datasets bring challenges in storage, processing, and analysis. In clinical medicine, big data is expected to play an important role in identifying causality of patient symptoms, in predicting hazards of disease incidence or reoccurrence, and in improving primary-care quality. Objective The objective of this review was to provide an overview of the features of clinical big data, describe a few commonly employed computational algorithms, statistical methods, and software toolkits for data manipulation and analysis, and discuss the challenges and limitations in this realm. Methods We conducted a literature review to identify studies on big data in medicine, especially clinical medicine. We used different combinations of keywords to search PubMed, Science Direct, Web of Knowledge, and Google Scholar for literature of interest from the past 10 years. Results This paper reviewed studies that analyzed clinical big data and discussed issues related to storage and analysis of this type of data. Conclusions Big data is becoming a common feature of biological and clinical studies. Researchers who use clinical big data face multiple challenges, and the data itself has limitations. It is imperative that methodologies for data analysis keep pace with our ability to collect and store data. PMID:25600256
Getting the Big Picture: Development of Spatial Scaling Abilities
ERIC Educational Resources Information Center
Frick, Andrea; Newcombe, Nora S.
2012-01-01
Spatial scaling is an integral aspect of many spatial tasks that involve symbol-to-referent correspondences (e.g., map reading, drawing). In this study, we asked 3-6-year-olds and adults to locate objects in a two-dimensional spatial layout using information from a second spatial representation (map). We examined how scaling factor and reference…
The Strategic Attitude: Integrating Strategic Planning into Daily University Worklife
ERIC Educational Resources Information Center
Dickmeyer, Nathan
2004-01-01
Chief financial officers in today's universities are so busy with the challenges of day-to-day management that strategic thinking often takes a back seat. Planning for strategic change can go a long way toward streamlining the very daily tasks that obscure the "big picture." Learning how to integrate strategic thinking into day-to-day management…
ERIC Educational Resources Information Center
Smidt, Wilfried
2015-01-01
Academic success in early childhood teacher education is important because it provides a foundation for occupational development in terms of professional competence, the quality of educational practices, as well as career success. Consequently, identifying factors that can explain differences in academic success is an important research task.…
New Schools for Older Neighborhoods: Strategies for Building Our Communities' Most Important Assets.
ERIC Educational Resources Information Center
Kauth, Ann
The case studies in this booklet highlight how five communities, in big cities and small towns, overcame the obstacles inherent in creating good new schools in existing neighborhoods. These studies illustrate the creativity that people across the United States have brought to the task of creating new schools in older neighborhoods. There is…
What Can Student Work Show? From Playing a Game to Exploring Probability Theory
ERIC Educational Resources Information Center
Taylor, Merilyn; Hawera, Ngarewa
2016-01-01
Rich learning tasks embedded within a familiar context allow students to work like mathematicians while making sense of the mathematics. This article demonstrates how 11-12 year-old students were able to employ all of the proficiency strands while demonstrating a deep understanding of some of the "big ideas" of probabilistic thinking.
Personalized Medicine: What's in it for Rare Diseases?
Schee Genannt Halfmann, Sebastian; Mählmann, Laura; Leyens, Lada; Reumann, Matthias; Brand, Angela
2017-01-01
Personalised Medicine (PM) has become a reality in recent years. The emergence of 'omics' and big data has started to revolutionize healthcare. New 'omics' technologies lead to a better molecular characterization of diseases and a new understanding of their complexity. The PM approach is already applied successfully in different healthcare areas such as oncology, cardiology, nutrition, and rare diseases. However, health systems across the EU often still promote the 'one-size-fits-all' approach, even though it is known that patients vary greatly in their molecular characteristics and in their response to drugs and other interventions. To make full use of the potential of PM in the years ahead, several challenges need to be addressed, such as the integration of big data, patient empowerment, translation of basic to clinical research, bringing innovation to market, and shaping sustainable healthcare systems.
Mewes, H W
2013-05-01
Psychiatric diseases provoke human tragedies. Asocial behaviour, mood imbalance, uncontrolled affect, and cognitive malfunction are the price for the biological and social complexity of neurobiology. Understanding the etiology of mental diseases and influencing their onset and progress remains of utmost importance, but despite much-improved patient care, more than 100 years of research have not succeeded in understanding the basic disease mechanisms or in enabling rational treatment. With the advent of genome-based technologies, much hope has been placed in joining the various dimensions of -omics data to uncover the secrets of mental illness. Big Data as generated by -omics does not come with explanations. In this essay, I will discuss the inherent, not well understood methodological foundations and problems that seriously obstruct the quest for quick success and may render lucky strikes impossible. © Georg Thieme Verlag KG Stuttgart · New York.
Balasar, Mehmet; Sönmez, Mehmet Giray; Oltulu, Pembe; Kandemir, Abdülkadir; Kılıç, Mehmet; Gürbüz, Recai
2017-01-01
Xanthogranulomatous cystitis (XC) is a very rare chronic benign inflammatory disease of the bladder. It may cause local invasion although it is not a malignant lesion, and it may occur together with malignant lesions. It is clinically important because the distinction from malignant lesions is difficult both clinically and pathologically. By presenting the case of a 37-year-old woman with a giant XC mimicking a bladder tumor, who was referred to hospital with hematuria and abdominal pain, together with the current literature, we want to show that the disease can be treated with bladder-preserving approaches instead of radical approaches even when the mass is big. The literature reports basic excision and partial resection for small masses and radical cystectomy for large masses. We think our case may contribute to the literature on treatment approach, since we achieved surgical cure with partial resection of a big mass measuring 9 cm × 8 cm, which differs from the present literature. Even though XC is a rare disease, it should be considered in the preliminary diagnosis of large masses in particular, and treatment should be planned according to the pathology result together with cystoscopy in suitable patients.
ESO Science Outreach Network in Poland during 2011-2013
NASA Astrophysics Data System (ADS)
Czart, Krzysztof
2014-12-01
ESON Poland has been working since 2010. One of the main tasks of the ESO Science Outreach Network (ESON) is the translation of various materials on the ESO website, as well as contacts with journalists. We also support science festivals, conferences, contests, exhibitions, astronomy camps and workshops, and other educational and outreach activities. During 2011-2013 we supported events such as the ESO Astronomy Camp 2013, the ESO Industry Days in Warsaw, the Warsaw Science Festival, the Torun Festival of Science and Art, an international astronomy olympiad held in Poland, and many others. Among the bigger tasks was also the translation of over 60 ESOcast movies.
Trifirò, Gianluca; Sultana, Janet; Bate, Andrew
2018-02-01
In the last decade 'big data' has become a buzzword used in several industrial sectors, including but not limited to telephony, finance and healthcare. Despite its popularity, it is not always clear what big data refers to exactly. Big data has become a very popular topic in healthcare, where the term primarily refers to the vast and growing volumes of computerized medical information available in the form of electronic health records, administrative or health claims data, disease and drug monitoring registries and so on. This kind of data is generally collected routinely during administrative processes and clinical practice by different healthcare professionals: from doctors recording their patients' medical history, drug prescriptions or medical claims to pharmacists registering dispensed prescriptions. For a long time, this data accumulated without its value being fully recognized and leveraged. Today big data has an important place in healthcare, including in pharmacovigilance. The expanding role of big data in pharmacovigilance includes signal detection, substantiation and validation of drug or vaccine safety signals, and increasingly new sources of information such as social media are also being considered. The aim of the present paper is to discuss the uses of big data for drug safety post-marketing assessment.
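One classical statistic used for the signal detection mentioned above is the proportional reporting ratio (PRR), which compares how often an adverse event is reported for a drug of interest versus all other drugs. This is a generic illustration of disproportionality analysis, not a method attributed to this particular paper, and the report counts are invented.

```python
# Hedged sketch of a basic signal-detection statistic for spontaneous
# report databases: the proportional reporting ratio (PRR). Counts invented.

def prr(a, b, c, d):
    """PRR from a 2x2 table: a,b = reports for the drug (event / other events);
    c,d = reports for all other drugs (event / other events)."""
    return (a / (a + b)) / (c / (c + d))

# 20 of 200 reports for the drug mention the event, vs 50 of 5000 elsewhere.
print(round(prr(20, 180, 50, 4950), 1))  # 10.0
```

A PRR well above 1 flags the drug-event pair for further substantiation and validation, the steps the abstract describes.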
Clinical research of traditional Chinese medicine in big data era.
Zhang, Junhua; Zhang, Boli
2014-09-01
With the advent of the big data era, our thinking, technology and methodology are being transformed. Data-intensive scientific discovery based on big data, named "The Fourth Paradigm," has become a new paradigm of scientific research. Along with the development and application of Internet information technology in the field of healthcare, individual health records, clinical data of diagnosis and treatment, and genomic data have accumulated dramatically, generating big data in the medical field for clinical research and assessment. With the support of big data, the defects and weaknesses of the conventional sampling-based methodology of clinical evaluation may be overcome. Our research target shifts from "causality inference" to "correlativity analysis." This not only facilitates the evaluation of individualized treatment, disease prediction, prevention and prognosis, but is also suitable for the practice of preventive healthcare and symptom pattern differentiation for treatment in terms of traditional Chinese medicine (TCM), and for the post-marketing evaluation of Chinese patent medicines. To conduct clinical studies involving big data in the TCM domain, top-level design is needed and should be carried out in an orderly manner. Fundamental construction and innovation studies should be strengthened in the areas of data platform creation, data analysis technology, and the fostering and training of big-data professionals.
Performance Analysis Tool for HPC and Big Data Applications on Scientific Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Wucherl; Koo, Michelle; Cao, Yu
Big data is prevalent in HPC computing. Many HPC projects rely on complex workflows to analyze terabytes or petabytes of data. These workflows often require running over thousands of CPU cores and performing simultaneous data accesses, data movements, and computation. It is challenging to analyze the performance involving terabytes or petabytes of workflow data or measurement data of the executions, from complex workflows over a large number of nodes and multiple parallel task executions. To help identify performance bottlenecks or debug the performance issues in large-scale scientific applications and scientific clusters, we have developed a performance analysis framework, using state-of-the-art open-source big data processing tools. Our tool can ingest system logs and application performance measurements to extract key performance features, and apply the most sophisticated statistical tools and data mining methods on the performance data. It utilizes an efficient data processing engine to allow users to interactively analyze a large amount of different types of logs and measurements. To illustrate the functionality of the big data analysis framework, we conduct case studies on the workflows from an astronomy project known as the Palomar Transient Factory (PTF) and the job logs from the genome analysis scientific cluster. Our study processed many terabytes of system logs and application performance measurements collected on the HPC systems at NERSC. The implementation of our tool is generic enough to be used for analyzing the performance of other HPC systems and big data workflows.
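The log-ingestion step described above can be sketched in miniature: pull a performance feature (here, task duration) out of free-text job logs and summarise it. The log format and field names below are invented assumptions; the real framework processes terabytes of heterogeneous logs with a distributed engine rather than a regex over a string.

```python
# Hedged sketch of the log-ingestion idea: extract duration fields from
# free-text job logs and summarise them. Log format is invented.
import re
import statistics

LOG = """\
job=ptf-001 stage=ingest duration=12.5
job=ptf-002 stage=ingest duration=14.1
job=ptf-003 stage=ingest duration=55.0
"""

def durations(text):
    """Collect every duration value (seconds, by assumption) in the log text."""
    return [float(m.group(1)) for m in re.finditer(r"duration=([\d.]+)", text)]

d = durations(LOG)
print(round(statistics.median(d), 1))  # 14.1
```

A robust summary such as the median surfaces outlier jobs (like the 55 s one above) that mean-based statistics would blur, which is the kind of bottleneck-spotting the framework targets.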
Data Mining Citizen Science Results
NASA Astrophysics Data System (ADS)
Borne, K. D.
2012-12-01
Scientific discovery from big data is enabled through multiple channels, including data mining (through the application of machine learning algorithms) and human computation (commonly implemented through citizen science tasks). We will describe the results of new data mining experiments on the results from citizen science activities. Discovering patterns, trends, and anomalies in data are among the powerful contributions of citizen science. Establishing scientific algorithms that can subsequently re-discover the same types of patterns, trends, and anomalies in automatic data processing pipelines will ultimately result from the transformation of those human algorithms into computer algorithms, which can then be applied to much larger data collections. Scientific discovery from big data is thus greatly amplified through the marriage of data mining with citizen science.
Péter, Annamária; Hegyi, András; Stenroth, Lauri; Finni, Taija; Cronin, Neil J
2015-09-18
Large forces are generated under the big toe in the push-off phase of walking. The largest flexor muscle of the big toe is the flexor hallucis longus (FHL), which likely contributes substantially to these forces. This study examined FHL function at different levels of isometric plantarflexion torque and in the push-off phase at different speeds of walking. FHL and calf muscle activity were measured with surface EMG and plantar pressure was recorded with pressure insoles. FHL activity was compared to the activity of the calf muscles. Force and impulse values were calculated under the big toe, and were compared to the entire pressed area of the insole to determine the relative contribution of big toe flexion forces to the ground reaction force. FHL activity increased with increasing plantarflexion torque level (F=2.8, P=0.024) and with increasing walking speed (F=11.608, P<0.001). No differences were observed in the relative contribution of the force under the big toe to the entire sole between different plantarflexion torque levels (F=0.836, P=0.529). On the contrary, in the push-off phase of walking, peak force under the big toe increased at a higher rate than force under the other areas of the plantar surface (F=3.801, P=0.018), implying a greater relative contribution to total force at faster speeds. Moreover, substantial differences were found between isometric plantarflexion and walking concerning FHL activity relative to that of the calf muscles, highlighting the task-dependent behaviour of FHL. Copyright © 2015 Elsevier Ltd. All rights reserved.
Unlocking the Power of Big Data at the National Institutes of Health.
Coakley, Meghan F; Leerkes, Maarten R; Barnett, Jason; Gabrielian, Andrei E; Noble, Karlynn; Weber, M Nick; Huyen, Yentram
2013-09-01
The era of "big data" presents immense opportunities for scientific discovery and technological progress, with the potential to have enormous impact on research and development in the public sector. In order to capitalize on these benefits, there are significant challenges to overcome in data analytics. The National Institute of Allergy and Infectious Diseases held a symposium entitled "Data Science: Unlocking the Power of Big Data" to create a forum for big data experts to present and share some of the creative and innovative methods to gleaning valuable knowledge from an overwhelming flood of biological data. A significant investment in infrastructure and tool development, along with more and better-trained data scientists, may facilitate methods for assimilation of data and machine learning, to overcome obstacles such as data security, data cleaning, and data integration.
Unlocking the Power of Big Data at the National Institutes of Health
Coakley, Meghan F.; Leerkes, Maarten R.; Barnett, Jason; Gabrielian, Andrei E.; Noble, Karlynn; Weber, M. Nick
2013-01-01
Abstract The era of “big data” presents immense opportunities for scientific discovery and technological progress, with the potential to have enormous impact on research and development in the public sector. In order to capitalize on these benefits, there are significant challenges to overcome in data analytics. The National Institute of Allergy and Infectious Diseases held a symposium entitled “Data Science: Unlocking the Power of Big Data” to create a forum for big data experts to present and share some of the creative and innovative methods to gleaning valuable knowledge from an overwhelming flood of biological data. A significant investment in infrastructure and tool development, along with more and better-trained data scientists, may facilitate methods for assimilation of data and machine learning, to overcome obstacles such as data security, data cleaning, and data integration. PMID:27442200
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hossain, Ekhtear; Islam, Khairul; Yeasmin, Fouzia
Chronic arsenic (As) exposure affects the endothelial system, causing several diseases. Big endothelin-1 (Big ET-1), the biological precursor of endothelin-1 (ET-1), is a more accurate indicator of the degree of activation of the endothelial system. The effect of As exposure on plasma Big ET-1 levels and its physiological implications have not yet been documented. We evaluated plasma Big ET-1 levels and their relation to hypertension and skin lesions in As-exposed individuals in Bangladesh. A total of 304 study subjects from As-endemic and non-endemic areas in Bangladesh were recruited for this study. As concentrations in water, hair and nails were measured by Inductively Coupled Plasma Mass Spectroscopy (ICP-MS). The plasma Big ET-1 levels were measured using a one-step sandwich enzyme immunoassay kit. Significant increases in Big ET-1 levels were observed with increasing concentrations of As in drinking water, hair and nails. Further, before and after adjusting for different covariates, plasma Big ET-1 levels were found to be significantly associated with the water, hair and nail As concentrations of the study subjects. Big ET-1 levels were also higher in the higher exposure groups compared with the lowest (reference) group. Interestingly, we observed that Big ET-1 levels were significantly higher in the hypertensive and skin lesion groups compared with the normotensive and no-skin-lesion counterparts, respectively, of the study subjects in As-endemic areas. Thus, this study demonstrated a novel dose–response relationship between As exposure and plasma Big ET-1 levels, indicating the possible involvement of plasma Big ET-1 levels in As-induced hypertension and skin lesions. -- Highlights: ► Plasma Big ET-1 is an indicator of endothelial damage. ► Plasma Big ET-1 level increases dose-dependently in arsenic-exposed individuals. ► Study subjects in arsenic-endemic areas with hypertension have elevated Big ET-1 levels. ► Study subjects with arsenic-induced skin lesions show elevated plasma Big ET-1 levels. ► Arsenic-induced hypertension and skin lesions may be linked to plasma Big ET-1 levels.
Apriori Versions Based on MapReduce for Mining Frequent Patterns on Big Data.
Luna, Jose Maria; Padillo, Francisco; Pechenizkiy, Mykola; Ventura, Sebastian
2017-09-27
Pattern mining is one of the most important tasks for extracting meaningful and useful information from raw data. This task aims to extract item-sets that represent any type of homogeneity and regularity in data. Although many efficient algorithms have been developed in this regard, the growing interest in data has caused the performance of existing pattern mining techniques to drop. The goal of this paper is to propose new efficient pattern mining algorithms to work on big data. To this aim, a series of algorithms based on the MapReduce framework and the Hadoop open-source implementation have been proposed. The proposed algorithms can be divided into three main groups. First, two algorithms [Apriori MapReduce (AprioriMR) and iterative AprioriMR] with no pruning strategy are proposed, which extract any existing item-set in data. Second, two algorithms (space pruning AprioriMR and top AprioriMR) that prune the search space by means of the well-known anti-monotone property are proposed. Finally, a last algorithm (maximal AprioriMR) is also proposed for mining condensed representations of frequent patterns. To test the performance of the proposed algorithms, a varied collection of big data datasets has been considered, comprising up to 3 · 10¹⁸ transactions and more than 5 million distinct single items. The experimental stage includes comparisons against highly efficient and well-known pattern mining algorithms. Results reveal the interest of applying MapReduce versions when complex problems are considered, and also the unsuitability of this paradigm when dealing with small data.
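The counting step at the heart of the AprioriMR family can be sketched as a map phase that emits candidate k-itemsets per transaction and a reduce phase that sums the counts and prunes by minimum support. This in-memory toy only mimics the MapReduce data flow; the actual algorithms distribute these phases over Hadoop, and the transaction data here is invented.

```python
# Hedged sketch of the MapReduce-style counting step behind AprioriMR:
# mappers emit (itemset, 1) pairs per transaction, the reducer sums them,
# and itemsets below the support threshold are pruned. Toy data.
from collections import Counter
from itertools import combinations

transactions = [{"a", "b", "c"}, {"a", "c"}, {"a", "d"}, {"b", "c"}]

def map_phase(txns, k):
    """Emit every k-itemset contained in each transaction."""
    for txn in txns:
        for itemset in combinations(sorted(txn), k):
            yield itemset, 1

def reduce_phase(pairs, min_support):
    """Sum per-itemset counts and keep only the frequent ones."""
    counts = Counter()
    for itemset, n in pairs:
        counts[itemset] += n
    return {s: c for s, c in counts.items() if c >= min_support}

frequent_pairs = reduce_phase(map_phase(transactions, 2), min_support=2)
print(frequent_pairs)  # {('a', 'c'): 2, ('b', 'c'): 2}
```

The anti-monotone pruning used by space pruning AprioriMR adds one refinement on top of this: a k-itemset is only emitted as a candidate if all of its (k-1)-subsets were already frequent.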
Learning Semantic Tags from Big Data for Clinical Text Representation.
Li, Yanpeng; Liu, Hongfang
2015-01-01
One of the biggest challenges in clinical text mining is representing medical terminologies and n-gram terms in sparse medical reports using either supervised or unsupervised methods. Addressing this issue, we propose a novel method for word and n-gram representation at the semantic level. We first represent each word by its distance to a set of reference features calculated by a reference distance estimator (RDE) learned from labeled and unlabeled data, and then generate new features using simple techniques of discretization, random sampling and merging. The new features are a set of binary rules that can be interpreted as semantic tags derived from words and n-grams. We show that the new features significantly outperform classical bag-of-words and n-grams in the task of heart disease risk factor extraction in the i2b2 2014 challenge. It is promising to see that semantic tags can be used to replace the original text entirely, with even better prediction performance, as well as to derive new rules beyond the lexical level.
Process Pharmacology: A Pharmacological Data Science Approach to Drug Development and Therapy.
Lötsch, Jörn; Ultsch, Alfred
2016-04-01
A novel functional-genomics-based concept of pharmacology is proposed that uses artificial intelligence techniques for mining and knowledge discovery in "big data" providing comprehensive information about drugs' targets and their functional genomics. In "process pharmacology", drugs are associated with biological processes. This puts the disease, regarded as alterations in the activity of one or several cellular processes, at the focus of drug therapy. In this setting, the molecular drug targets are merely intermediates. The identification of drugs for therapy or repurposing is based on similarities in the high-dimensional space of the biological processes that a drug influences. Applying this principle to data associated with lymphoblastic leukemia identified a short list of candidate drugs, including one that was recently proposed as a novel rescue medication for lymphocytic leukemia. The pharmacological data science approach provides successful selections of drug candidates within development and repurposing tasks. © 2016 The Authors. CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
ERIC Educational Resources Information Center
Shintani, Natsuko; Ellis, Rod
2014-01-01
The purpose of this article is to examine both the process and product of vocabulary learning in a task-based instructional context. The article reports a study that investigated the acquisition of two dimensional adjectives ("big" and "small") by six-year-old Japanese children who were complete beginners. It tracked the…
ERIC Educational Resources Information Center
Ainley, Janet; Gould, Robert; Pratt, Dave
2015-01-01
This paper is in the form of a reflective discussion of the collection of papers in this Special Issue on "Statistical reasoning: learning to reason from samples" drawing on deliberations arising at the Seventh International Collaboration for Research on Statistical Reasoning, Thinking, and Literacy (SRTL7). It is an important part of…
ERIC Educational Resources Information Center
Wimmer, Heinz; Goswami, Usha
1994-01-01
Groups of seven- to nine-year olds learning to read in English and German were given three types of reading tasks. Whereas reading time and error rates in numeral and number word reading were very similar across the two orthographies, the German children showed a big advantage in reading the nonsense words, suggesting adoption of different…
E-Communications 101: Here Is Your Guide to Efficient Communication in an Electronic Age
ERIC Educational Resources Information Center
Solomon, Gwen
2004-01-01
More tasks than ever are heading online these days--from student projects and field trips to virtual schools and electronic professional development. The big idea is that technology saves time and effort, focuses people quickly and easily, and commands attention in a world of too many demands, distractions, and delivery systems. So what are the…
Work for Play: Careers in Video Game Development
ERIC Educational Resources Information Center
Liming, Drew; Vilorio, Dennis
2011-01-01
Video games are not only for play; they also provide work. Making video games is a serious--and big--business. Creating these games is complex and requires the collaboration of many developers, who perform a variety of tasks, from production to programming. They work for both small and large game studios to create games that can be played on many…
Scalable splitting algorithms for big-data interferometric imaging in the SKA era
NASA Astrophysics Data System (ADS)
Onose, Alexandru; Carrillo, Rafael E.; Repetti, Audrey; McEwen, Jason D.; Thiran, Jean-Philippe; Pesquet, Jean-Christophe; Wiaux, Yves
2016-11-01
In the context of next-generation radio telescopes, like the Square Kilometre Array (SKA), the efficient processing of large-scale data sets is extremely important. Convex optimization tasks under the compressive sensing framework have recently emerged and provide both enhanced image reconstruction quality and scalability to increasingly larger data sets. We focus herein mainly on scalability and propose two new convex optimization algorithmic structures able to solve the convex optimization tasks arising in radio-interferometric imaging. They rely on proximal splitting and forward-backward iterations and can be seen, by analogy with the CLEAN major-minor cycle, as running sophisticated CLEAN-like iterations in parallel in multiple data, prior, and image spaces. Both methods support any convex regularization function, in particular the well-studied ℓ1 priors promoting image sparsity in an adequate domain. Tailored for big data, they employ parallel and distributed computations to achieve scalability in terms of memory and computational requirements. One of them also exploits randomization, over data blocks at each iteration, offering further flexibility. We present simulation results showing the feasibility of the proposed methods as well as their advantages compared to state-of-the-art algorithmic solvers. Our MATLAB code is available online on GitHub.
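As a toy illustration of the forward-backward building block these algorithmic structures rely on, here is a proximal gradient iteration for an ℓ1-regularized least-squares problem, with soft-thresholding as the proximity operator of the ℓ1 prior. This is a minimal serial sketch with invented toy data, not the authors' parallel MATLAB implementation.

```python
def soft_threshold(x, t):
    """Proximity operator of t*||.||_1, applied element-wise."""
    return [max(abs(v) - t, 0.0) * (1 if v > 0 else -1) for v in x]

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def forward_backward(A, y, lam, step, n_iter=200):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by forward (gradient) steps
    on the data fidelity term and backward (prox) steps on the l1 prior."""
    At = transpose(A)
    x = [0.0] * len(A[0])
    for _ in range(n_iter):
        r = [ai - yi for ai, yi in zip(matvec(A, x), y)]   # residual Ax - y
        grad = matvec(At, r)                                # gradient A^T(Ax - y)
        x = soft_threshold([xi - step * gi for xi, gi in zip(x, grad)],
                           step * lam)
    return x

# Toy problem: denoise y under an l1 prior (A = identity).
x = forward_backward([[1.0, 0.0], [0.0, 1.0]], [3.0, 0.1], lam=1.0, step=1.0)
```

The parallel structures in the paper distribute exactly this kind of iteration across data, prior, and image spaces; here everything runs in one loop for clarity.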
Seeing the big picture: Broadening attention relieves sadness and depressed mood.
Gu, Li; Yang, Xueling; Li, Liman Man Wai; Zhou, Xinyue; Gao, Ding-Guo
2017-08-01
We examined whether a broadened attentional scope would affect people's sad or depressed mood in two experiments, inspired by the notion of "seeing the big picture" and the broaden-and-build model. Experiment 1 (n = 164) is a laboratory-based experiment in which we manipulated the attentional scope by showing participants zoomed-out or zoomed-in scenes. In Experiment 2 (n = 44), we studied how depressed mood and positive and negative emotions were affected when participants watched distant versus proximal scenes for eight weeks in real life. Healthy participants in Experiment 1, who were induced to feel sad, returned to baseline mood after the broadened attention task but not after the narrowed attention task, indicating that an immediate attention-broadening manipulation can function as an antidote to the lingering effects of induced negative emotions. Participants with depressed mood in Experiment 2 showed reduced depressed mood, increased positive affect, and decreased negative affect after receiving attention-broadening training compared to those receiving attention-narrowing training. Our findings suggest a robust role of broadened attentional scope in relieving negative emotions, and even mildly depressed mood in the long run. © 2017 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
Big Data Analytics in Healthcare
Belle, Ashwin; Thiagarajan, Raghuram; Soroushmehr, S. M. Reza; Beard, Daniel A.
2015-01-01
The rapidly expanding field of big data analytics has started to play a pivotal role in the evolution of healthcare practices and research. It has provided tools to accumulate, manage, analyze, and assimilate large volumes of disparate, structured, and unstructured data produced by current healthcare systems. Big data analytics has been recently applied towards aiding the process of care delivery and disease exploration. However, the adoption rate and research development in this space is still hindered by some fundamental problems inherent within the big data paradigm. In this paper, we discuss some of these major challenges with a focus on three upcoming and promising areas of medical research: image, signal, and genomics based analytics. Recent research which targets utilization of large volumes of medical data while combining multimodal data from disparate sources is discussed. Potential areas of research within this field which have the ability to provide meaningful impact on healthcare delivery are also examined. PMID:26229957
The dominance of big pharma: power.
Edgar, Andrew
2013-05-01
The purpose of this paper is to provide a normative model for the assessment of the exercise of power by Big Pharma. By drawing on the work of Steven Lukes, it will be argued that while Big Pharma is overtly highly regulated, so that its power is indeed restricted in the interests of patients and the general public, the industry is still able to exercise what Lukes describes as a third dimension of power. This entails concealing the conflicts of interest and grievances that Big Pharma may have with the health care system, physicians and patients, crucially through rhetorical engagements with Patient Advocacy Groups that seek to shape public opinion, and also by marginalising certain groups, excluding them from debates over health care resource allocation. Three issues will be examined: the construction of a conception of the patient as expert patient or consumer; the phenomenon of disease mongering; the suppression or distortion of debates over resource allocation.
Big Data Analytics in Healthcare.
Belle, Ashwin; Thiagarajan, Raghuram; Soroushmehr, S M Reza; Navidi, Fatemeh; Beard, Daniel A; Najarian, Kayvan
2015-01-01
TTSA: An Effective Scheduling Approach for Delay Bounded Tasks in Hybrid Clouds.
Yuan, Haitao; Bi, Jing; Tan, Wei; Zhou, MengChu; Li, Bo Hu; Li, Jianqiang
2017-11-01
The economy of scale provided by the cloud attracts a growing number of organizations and industrial companies to deploy their applications in cloud data centers (CDCs) and to provide services to users around the world. The uncertainty of arriving tasks makes it a big challenge for a private CDC to cost-effectively schedule delay-bounded tasks without exceeding their delay bounds. Unlike previous studies, this paper takes into account the cost minimization problem for a private CDC in hybrid clouds, where the energy price of the private CDC and the execution price of public clouds both show temporal diversity. This paper then proposes a temporal task scheduling algorithm (TTSA) to effectively dispatch all arriving tasks to the private CDC and public clouds. In each iteration of TTSA, the cost minimization problem is modeled as a mixed integer linear program and solved by a hybrid simulated-annealing particle-swarm optimization. The experimental results demonstrate that, compared with existing methods, the optimal or suboptimal scheduling strategy produced by TTSA can efficiently increase the throughput and reduce the cost of the private CDC while meeting the delay bounds of all the tasks.
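TTSA itself solves a mixed integer linear program per iteration with a hybrid simulated-annealing particle-swarm optimizer; as a much simpler illustration of the temporal price diversity it exploits, a toy greedy dispatcher might look like the sketch below. All task counts, prices, and the capacity figure are invented, and the greedy rule is a stand-in, not the paper's algorithm.

```python
def dispatch(tasks, private_price, public_price, private_capacity):
    """Toy dispatcher: in each time slot, run tasks in the private CDC while
    its (time-varying) price is favourable and capacity allows, otherwise
    offload the remainder to the public cloud.
    tasks: {slot: number_of_arriving_tasks}."""
    plan, cost = {}, 0.0
    for slot, n in sorted(tasks.items()):
        if private_price[slot] <= public_price[slot]:
            on_private = min(n, private_capacity)
        else:
            on_private = 0
        on_public = n - on_private
        plan[slot] = (on_private, on_public)
        cost += on_private * private_price[slot] + on_public * public_price[slot]
    return plan, cost

plan, cost = dispatch(tasks={0: 3, 1: 2},
                      private_price={0: 1.0, 1: 5.0},
                      public_price={0: 2.0, 1: 4.0},
                      private_capacity=2)
```

A greedy rule like this ignores delay bounds across slots, which is precisely why the paper formulates the problem as an MILP instead.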
Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment
Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel
2016-01-01
Distributed computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., the Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of its core components, MapReduce facilitates allocating, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data for each task were analyzed, and a detailed analysis report was produced. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs. PMID:27589753
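The two-phase regression idea, fitting a task's progress with two linear segments and extrapolating the second to predict the finishing time, can be sketched with ordinary least squares. This is a toy reconstruction under that interpretation, not the paper's TPR implementation; the progress data below are invented.

```python
def fit_line(xs, ys):
    """Closed-form simple linear regression: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def sse(xs, ys, a, b):
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def two_phase_fit(xs, ys):
    """Try every interior breakpoint; keep the two-segment fit with the
    lowest total squared error."""
    best = None
    for k in range(2, len(xs) - 1):
        a1, b1 = fit_line(xs[:k], ys[:k])
        a2, b2 = fit_line(xs[k:], ys[k:])
        err = sse(xs[:k], ys[:k], a1, b1) + sse(xs[k:], ys[k:], a2, b2)
        if best is None or err < best[0]:
            best = (err, k, (a1, b1), (a2, b2))
    return best

# Invented progress samples with a clear change of rate at x = 4.
xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [0, 1, 2, 3, 4, 7, 10, 13]
err, k, (a1, b1), (a2, b2) = two_phase_fit(xs, ys)
```

Once the second segment is fitted, the predicted value at a future time is just `a2 + b2 * x`, which is the extrapolation step a finishing-time estimator would use.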
Semantic size of abstract concepts: it gets emotional when you can't see it.
Yao, Bo; Vasiljevic, Milica; Weick, Mario; Sereno, Margaret E; O'Donnell, Patrick J; Sereno, Sara C
2013-01-01
Size is an important visuo-spatial characteristic of the physical world. In language processing, previous research has demonstrated a processing advantage for words denoting semantically "big" (e.g., jungle) versus "small" (e.g., needle) concrete objects. We investigated whether semantic size plays a role in the recognition of words expressing abstract concepts (e.g., truth). Semantically "big" and "small" concrete and abstract words were presented in a lexical decision task. Responses to "big" words, regardless of their concreteness, were faster than those to "small" words. Critically, we explored the relationship between semantic size and affective characteristics of words as well as their influence on lexical access. Although a word's semantic size was correlated with its emotional arousal, the temporal locus of arousal effects may depend on the level of concreteness. That is, arousal seemed to have an earlier (lexical) effect on abstract words, but a later (post-lexical) effect on concrete words. Our findings provide novel insights into the semantic representations of size in abstract concepts and highlight that affective attributes of words may not always index lexical access.
Big Data in HEP: A comprehensive use case study
NASA Astrophysics Data System (ADS)
Gutsche, Oliver; Cremonesi, Matteo; Elmer, Peter; Jayatilaka, Bo; Kowalkowski, Jim; Pivarski, Jim; Sehrish, Saba; Mantilla Suárez, Cristina; Svyatkovskiy, Alexey; Tran, Nhan
2017-10-01
Experimental Particle Physics has been at the forefront of analyzing the world's largest datasets for decades. The HEP community was the first to develop suitable software and computing tools for this task. In recent times, new toolkits and systems, collectively called Big Data technologies, have emerged to support the analysis of Petabyte and Exabyte datasets in industry. While the principles of data analysis in HEP have not changed (filtering and transforming experiment-specific data formats), these new technologies use different approaches, promise a fresh look at the analysis of very large datasets, and could potentially reduce the time-to-physics with increased interactivity. In this talk, we present an active LHC Run 2 analysis, searching for dark matter with the CMS detector, as a testbed for Big Data technologies. We directly compare the traditional NTuple-based analysis with an equivalent analysis using Apache Spark on the Hadoop ecosystem and beyond. In both cases, we start the analysis with the official experiment data formats and produce publication-quality physics plots. We discuss the advantages and disadvantages of each approach and give an outlook on further studies needed.
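The filter-and-transform pattern described above can be mimicked in plain Python. The event records, field names, and cut values below are invented for illustration; a real analysis would express the same logic as Spark transformations over the experiment's data formats.

```python
from collections import Counter

# Toy events: each a dict of reconstructed quantities (names are illustrative).
events = [
    {"met": 250.0, "n_jets": 3},
    {"met": 80.0,  "n_jets": 1},
    {"met": 410.0, "n_jets": 4},
    {"met": 120.0, "n_jets": 2},
]

# Filter: a dark-matter-style selection cut on missing transverse energy
# and jet multiplicity (thresholds invented).
selected = [e for e in events if e["met"] > 200.0 and e["n_jets"] >= 2]

# Transform + aggregate: histogram the surviving events' MET in 100-GeV bins,
# the kind of reduction that ends up in a physics plot.
histogram = Counter(int(e["met"] // 100) * 100 for e in selected)
print(histogram)  # maps bin lower edges to counts
```

In Spark the list comprehensions become `filter`/`map` transformations on a distributed dataset, so the same selection scales out across a Hadoop cluster.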
Big Data breaking barriers - first steps on a long trail
NASA Astrophysics Data System (ADS)
Schade, S.
2015-04-01
Most data sets and streams have a geospatial component. Some people even claim that about 80% of all data is related to location. In the era of Big Data this number might even be underestimated, as data sets interrelate and initially non-spatial data becomes indirectly geo-referenced. The optimal treatment of Big Data thus requires advanced methods and technologies for handling the geospatial aspects of data storage, processing, pattern recognition, prediction, visualisation and exploration. On the one hand, our work draws on the earth and environmental sciences for existing interoperability standards, and for the foundational data structures, algorithms and software required to meet these geospatial information handling tasks. On the other hand, we are concerned with the arising need to combine human analysis capacities (intelligence augmentation) with machine power (artificial intelligence). This paper provides an overview of the emerging landscape and outlines our (Digital Earth) vision for addressing the upcoming issues. We particularly advocate the projection and re-use of existing environmental, earth observation and remote sensing expertise in other sectors, i.e. breaking down the barriers of all of these silos by investigating integrated applications.
Emerging health issues: the widening challenge for population health promotion.
McMichael, Anthony J; Butler, Colin D
2006-12-01
The spectrum of tasks for health promotion has widened since the Ottawa Charter was signed. In 1986, infectious diseases still seemed in retreat, the potential extent of HIV/AIDS was unrecognized, the Green Revolution was at its height and global poverty appeared less intractable. Global climate change had not yet emerged as a major threat to development and health. Most economists forecast continuous improvement, and chronic diseases were broadly anticipated as the next major health issue. Today, although many broadly averaged measures of population health have improved, many of the determinants of global health have faltered. Many infectious diseases have emerged; others have unexpectedly reappeared. Reasons include urban crowding, environmental changes, altered sexual relations, intensified food production and increased mobility and trade. Foremost, however, is the persistence of poverty and the exacerbation of regional and global inequality. Life expectancy has unexpectedly declined in several countries. Rather than being a faint echo from an earlier time of hardship, these declines could signify the future. Relatedly, the demographic and epidemiological transitions have faltered. In some regions, declining fertility has overshot that needed for optimal age structure, whereas elsewhere mortality increases have reduced population growth rates, despite continuing high fertility. Few, if any, Millennium Development Goals (MDG), including those for health and sustainability, seem achievable. Policy-makers generally misunderstand the link between environmental sustainability (MDG #7) and health. Many health workers also fail to realize that social cohesion and sustainability--maintenance of the Earth's ecological and geophysical systems--is a necessary basis for health. In sum, these issues present an enormous challenge to health. 
Health promotion must address population health influences that transcend national boundaries and generations and engage with the development, human rights and environmental movements. The big task is to promote sustainable environmental and social conditions that bring enduring and equitable health gains.
Social customer relationship management: taking advantage of Web 2.0 and Big Data technologies.
Orenga-Roglá, Sergio; Chalmeta, Ricardo
2016-01-01
The emergence of Web 2.0 and Big Data technologies has allowed a new customer relationship strategy based on interactivity and collaboration called Social Customer Relationship Management (Social CRM) to be created. This enhances customer engagement and satisfaction. The implementation of Social CRM is a complex task that involves different organisational, human and technological aspects. However, there is a lack of methodologies to assist companies in these processes. This paper shows a novel methodology that helps companies to implement Social CRM, taking into account different aspects such as social customer strategy, the Social CRM performance measurement system, the Social CRM business processes, or the Social CRM computer system. The methodology was applied to one company in order to validate and refine it.
CoLiTec software - detection of the near-zero apparent motion
NASA Astrophysics Data System (ADS)
Khlamov, Sergii V.; Savanevych, Vadym E.; Briukhovetskyi, Olexandr B.; Pohorelov, Artem V.
2017-06-01
In this article we describe the CoLiTec software for fully automated frame processing. CoLiTec can process the big data of observation results, as well as data that is continuously formed during observation. The scope of tasks solved includes frame brightness equalization, moving object detection, astrometry, photometry, etc. Along with highly efficient big data processing, CoLiTec also ensures high accuracy of data measurements. A comparative analysis of functional characteristics and positional accuracy was performed between the CoLiTec and Astrometrica software, and the benefits of CoLiTec on wide-field and low-quality frames were observed. The efficiency of the CoLiTec software has been proven by about 700,000 observations and over 1,500 preliminary discoveries.
BigBOSS: The Ground-Based Stage IV BAO Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlegel, David; Bebek, Chris; Heetderks, Henry
2009-04-01
The BigBOSS experiment is a proposed DOE-NSF Stage IV ground-based dark energy experiment to study baryon acoustic oscillations (BAO) and the growth of structure with an all-sky galaxy redshift survey. The project is designed to unlock the mystery of dark energy using existing ground-based facilities operated by NOAO. A new 4000-fiber R=5000 spectrograph covering a 3-degree diameter field will measure BAO and redshift space distortions in the distribution of galaxies and hydrogen gas spanning redshifts from 0.2 < z < 3.5. The Dark Energy Task Force figure of merit (DETF FoM) for this experiment is expected to be equal to that of a JDEM mission for BAO, with the lower risk and cost typical of a ground-based experiment.
NASA Technical Reports Server (NTRS)
Anderson, J. E. (Principal Investigator)
1979-01-01
Assistance by NASA to EPA in the establishment and maintenance of a fully operational energy-related monitoring system included: (1) regional analysis applications based on LANDSAT and auxiliary data; (2) development of techniques for using aircraft MSS data to rapidly monitor site specific surface coal mine activities; and (3) registration of aircraft MSS data to a map base. The coal strip mines used in the site specific task were in Campbell County, Wyoming; Big Horn County, Montana; and the Navajo mine in San Juan County, New Mexico. The procedures and software used to accomplish these tasks are described.
SparkText: Biomedical Text Mining on Big Data Framework.
Ye, Zhan; Tafti, Ahmad P; He, Karen Y; Wang, Kai; He, Max M
Many new biomedical research articles are published every day, accumulating rich information, such as genetic variants, genes, diseases, and treatments. Rapid yet accurate text mining on large-scale scientific literature can discover novel knowledge to better understand human diseases and to improve the quality of disease diagnosis, prevention, and treatment. In this study, we designed and developed an efficient text mining framework called SparkText on a Big Data infrastructure, which is composed of Apache Spark data streaming and machine learning methods, combined with a Cassandra NoSQL database. To demonstrate its performance for classifying cancer types, we extracted information (e.g., breast, prostate, and lung cancers) from tens of thousands of articles downloaded from PubMed, and then employed Naïve Bayes, Support Vector Machine (SVM), and Logistic Regression to build prediction models to mine the articles. The accuracy of predicting a cancer type by SVM using the 29,437 full-text articles was 93.81%. While competing text-mining tools took more than 11 hours, SparkText mined the dataset in approximately 6 minutes. This study demonstrates the potential for mining large-scale scientific articles on a Big Data infrastructure, with real-time update from new articles published daily. SparkText can be extended to other areas of biomedical research.
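The classification step in a pipeline like SparkText can be illustrated with a tiny multinomial Naive Bayes classifier over bag-of-words counts. This stdlib sketch stands in for the Spark MLlib models the study uses; the training snippets and labels below are invented toy data, not PubMed articles.

```python
import math
from collections import Counter, defaultdict

class TinyNB:
    """Multinomial Naive Bayes with add-one smoothing: a minimal stand-in
    for the Naive Bayes / SVM / Logistic Regression models in the pipeline."""

    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)
        self.class_counts = Counter(labels)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        best, best_lp = None, float("-inf")
        total_docs = sum(self.class_counts.values())
        for label in self.class_counts:
            lp = math.log(self.class_counts[label] / total_docs)  # prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in text.split():
                lp += math.log((self.word_counts[label][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

train_texts = ["breast tumor mammography", "lung nodule smoking",
               "breast cancer biopsy", "lung cancer smoking cough"]
train_labels = ["breast", "lung", "breast", "lung"]
clf = TinyNB().fit(train_texts, train_labels)
```

At SparkText's scale, the counting in `fit` is what gets distributed over the cluster, with Cassandra holding the article text and Spark streaming new articles through the same model.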
Mahroum, Naim; Bragazzi, Nicola Luigi; Sharif, Kassem; Gianfredi, Vincenza; Nucci, Daniele; Rosselli, Roberto; Brigo, Francesco; Adawi, Mohammad; Amital, Howard; Watad, Abdulla
2018-06-01
Technological advancements, such as patient-centered smartphone applications, have made it possible to support self-management of disease. Further, the accessibility of health information through the Internet has grown tremendously. This article aimed to investigate how big data can be used to assess the impact of a celebrity's rheumatic disease on public opinion. Various tools and statistical/computational approaches were used, including massive data mining of Google Trends, Wikipedia, and Twitter, and big data analytics. These sources were mined using an in-house script, which facilitated the process of data collection, parsing, handling, processing, and normalization. From Google Trends, the temporal correlation between "Anna Marchesini" and rheumatoid arthritis (RA) queries was 0.66 before Anna Marchesini's death and 0.90 after it; the geospatial correlation was 0.45 before and 0.52 after. From Wikitrends, after Anna Marchesini's death, the number of accesses to the Wikipedia page for RA increased by 5770%. From Twitter, 1979 tweets were retrieved; numbers of likes, retweets, and hashtags increased over time. Novel data streams and big data analytics are effective for assessing the impact of a disease in a famous person on laypeople.
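The temporal correlations reported above are presumably Pearson coefficients between two query-volume time series; a minimal sketch of that computation follows. The weekly numbers are invented for illustration and are not actual Google Trends data.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical weekly search volumes for the celebrity's name and for
# "rheumatoid arthritis", with a shared spike (illustrative values only).
celebrity = [10, 12, 15, 60, 55, 40]
ra_queries = [20, 22, 25, 70, 66, 50]
r = pearson(celebrity, ra_queries)
```

A coefficient near 1 would indicate that interest in the disease tracked interest in the celebrity over time, which is the kind of signal the study reports strengthening after the celebrity's death.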
SparkText: Biomedical Text Mining on Big Data Framework
He, Karen Y.; Wang, Kai
2016-01-01
PMID:27685652
Chartier, Gabrielle; Cawthorpe, David
2016-09-01
This study outlines the rationale for, and provides evidence in support of, including psychiatric disorders in the World Health Organization's classification of preventable diseases. The methods used represent a novel approach to describing clinical pathways, highlighting the importance of considering the full range of comorbid disorders within an integrated population-based data repository. A review of the literature focused on comorbidity in relation to the four preventable diseases identified by the World Health Organization revealed that only 29 publications over the last 5 years focus on populations, and these tend to consider only one or two comorbid disorders simultaneously in regard to any main preventable disease class. This article draws attention to the importance of physical and psychiatric comorbidity and illustrates the complexity of describing clinical pathways in terms of understanding the etiological and prognostic clinical profile of patients. Developing a consistent and standardized approach to describing these features of disease has the potential to dramatically shift the format of both clinical practice and medical education, taking into account the complex relationships between and among diseases, such as psychiatric and physical disease, that have hitherto been largely unrelated in research.
2017-01-01
According to a previous survey, about two million people were expected to suffer toxic effects from humidifier disinfectants (HDs), whether or not they recovered. Only an extremely small group are recognized as HD victims. Up to now, research has tended to focus on interstitial fibrosis of the terminal bronchiole, because it is a specific finding compared with other diseases. To figure out the overall effects of HDs, we recommend adverse outcome pathways (AOPs) as a new approach. Reactive oxygen species (ROS) generation, decreased T-cells, and pro-inflammatory cytokine release from macrophages could be key events between exposure to HDs and disease. These events could be the cause of interstitial fibrosis, pneumonia, and many other conditions such as asthma, allergic rhinitis, allergic dermatitis, fetal death, premature birth, autoimmune disease, hepatic toxicity, renal toxicity, cancer, and so on. We predict potential disease candidates by AOPs. We can validate the real risk of an adverse outcome through epidemiologic and toxicologic studies using big data, such as National Health Insurance data, and the AOPs knowledge base. Application of these kinds of new methods can identify the potential diseases resulting from exposure to HDs. PMID:28111421
Leem, Jong-Han; Chung, Kyu Hyuck
2016-01-01
NASA Astrophysics Data System (ADS)
Andrés, J.; Gracia, L.; Tornero, J.; García, J. A.; González, F.
2009-11-01
The implementation of a postprocessor for the NX™ platform (Siemens Corp.) is described in this paper. It is focused on a redundant robotic milling workcell consisting of one KUKA KR 15/2 manipulator (6 rotary joints, KRC2 controller) mounted on a linear axis and synchronized with a rotary table (i.e., two additional joints). To carry out a milling task, a choice among a set of possible configurations is required, taking into account the ability to avoid singular configurations by using both additional joints. Usually, the experience and knowledge of the operator allow efficient control in these cases, although it is a tedious job. To emulate this expert knowledge, a stand-alone fuzzy controller has been programmed with Matlab's Fuzzy Logic Toolbox (The MathWorks, Inc.). Two C++ programs complement the translation of the toolpath tracking (expressed in Cartesian space) from the NX™-CAM module into KRL (KUKA Robot Language). To avoid singularities and joint limits, the placement of the robot and the workpiece during execution of the task is adjusted after an inverse kinematics position analysis and a fuzzy inference (i.e., a fuzzy criterion in joint space). Additionally, the applicability of robot arms to the manufacture of big-volume prototypes with this technique is demonstrated by means of a case study: a large orographic model for simulating floodways, return flows, and retention storage of a reservoir on the Mijares river (Puebla de Arenoso, Spain). This article addresses the problem for a constant-tool-orientation milling process and sets the technological basis for future research on five-axis milling operations.
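The core idea of a fuzzy criterion for redundancy resolution can be sketched as follows. This is a minimal illustration, not the authors' Matlab controller: each candidate configuration is scored by how far every joint sits from its limits, using triangular membership functions, and the best-scoring candidate is chosen. All joint values and limits below are made up for illustration.

```python
# Minimal sketch (hypothetical values): a fuzzy criterion for choosing among
# candidate configurations, favoring joints far from their limits.

def tri(x, a, b, c):
    """Triangular membership function peaking at b on the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def comfort(joint, lo, hi):
    """Degree to which a joint value is 'comfortable' (near mid-range)."""
    mid = (lo + hi) / 2.0
    return tri(joint, lo, mid, hi)

def score(config, limits):
    """Aggregate fuzzy score of a configuration: min over joint comforts."""
    return min(comfort(q, lo, hi) for q, (lo, hi) in zip(config, limits))

limits = [(-185, 185), (-155, 35), (-130, 154)]  # degrees, illustrative only
candidates = [[0, -60, 10], [170, -150, 140]]
best = max(candidates, key=lambda c: score(c, limits))  # avoids near-limit pose
```

The min-aggregation corresponds to a conservative fuzzy AND: a configuration is only as good as its worst joint, so near-singular or near-limit poses are rejected.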
A General-purpose Framework for Parallel Processing of Large-scale LiDAR Data
NASA Astrophysics Data System (ADS)
Li, Z.; Hodgson, M.; Li, W.
2016-12-01
Light detection and ranging (LiDAR) technologies have proven efficient for quickly obtaining very detailed Earth surface data over large spatial extents. Such data are important for scientific discoveries in Earth and ecological sciences and for natural disaster and environmental applications. However, handling LiDAR data poses major geoprocessing challenges due to its data and computational intensity. Previous studies achieved notable success in parallel processing of LiDAR data to address these challenges. However, they either relied on high-performance computers and specialized hardware (GPUs) or focused mostly on customized solutions for specific algorithms. We developed a general-purpose scalable framework, coupled with a sophisticated data decomposition and parallelization strategy, to efficiently handle big LiDAR data. Specifically, 1) a tile-based spatial index is proposed to manage big LiDAR data in the scalable and fault-tolerant Hadoop distributed file system, 2) two spatial decomposition techniques are developed to enable efficient parallelization of different types of LiDAR processing tasks, and 3) by coupling existing LiDAR processing tools with Hadoop, the framework can conduct a variety of LiDAR data processing tasks in parallel in a highly scalable distributed computing environment. The performance and scalability of the framework are evaluated with a series of experiments conducted on a real LiDAR dataset using a proof-of-concept prototype system. The results show that the proposed framework 1) handles massive LiDAR data more efficiently than standalone tools, and 2) provides almost linear scalability in terms of either increased workload (data volume) or increased computing nodes with both spatial decomposition strategies. We believe the proposed framework provides a valuable reference for developing a collaborative cyberinfrastructure for processing big Earth science data in a highly scalable environment.
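The tile-based decomposition described above can be sketched in a few lines. This is an illustrative simplification, not the paper's Hadoop implementation: points are binned into fixed-size tiles so that each tile becomes an independent unit of parallel work (e.g., one map task).

```python
# A minimal sketch of tile-based spatial decomposition for LiDAR points:
# bin (x, y, z) points into fixed-size tiles; each tile can then be
# processed independently by a worker. Values are illustrative only.
from collections import defaultdict

def tile_key(x, y, tile_size):
    """Map a point to the (col, row) index of the tile containing it."""
    return (int(x // tile_size), int(y // tile_size))

def decompose(points, tile_size):
    """Group (x, y, z) points by tile; each group is one unit of work."""
    tiles = defaultdict(list)
    for x, y, z in points:
        tiles[tile_key(x, y, tile_size)].append((x, y, z))
    return dict(tiles)

points = [(0.5, 0.5, 10.0), (1.5, 0.2, 12.0), (0.1, 1.9, 9.5)]
tiles = decompose(points, tile_size=1.0)  # three points, three distinct tiles
```

Real systems add a buffer zone around each tile so that neighborhood-dependent operations (interpolation, filtering) remain correct at tile boundaries, but the indexing idea is the same.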
Significance of Social Applications on a Mobile Phone for English Task-Based Language Learning
ERIC Educational Resources Information Center
Ahmad, Anmol; Farrukh, Fizza
2015-01-01
The utter importance of knowing the English language cannot be denied today. Despite the existence of traditional methods for teaching a language in schools, a large number of children are left without the requisite knowledge of English, as a result of which they fail to compete in the modern world. With English being a lingua franca, more efforts…
ERIC Educational Resources Information Center
Weil, Marty
2008-01-01
This article provides CIOs suggested training, development, and administrative tasks to prepare for the technology infrastructure challenges that lie ahead in the upcoming school year. These suggestions cover a range of subject matter from personal/professional development to data audits, social networking, acceptable risk policies, and data…
Big data for bipolar disorder.
Monteith, Scott; Glenn, Tasha; Geddes, John; Whybrow, Peter C; Bauer, Michael
2016-12-01
The delivery of psychiatric care is changing with a new emphasis on integrated care, preventative measures, population health, and the biological basis of disease. Fundamental to this transformation are big data and advances in the ability to analyze these data. The impact of big data on the routine treatment of bipolar disorder today and in the near future is discussed, with examples that relate to health policy, the discovery of new associations, and the study of rare events. The primary sources of big data today are electronic medical records (EMR), claims, and registry data from providers and payers. In the near future, data created by patients from active monitoring, passive monitoring of Internet and smartphone activities, and from sensors may be integrated with the EMR. Diverse data sources from outside of medicine, such as government financial data, will be linked for research. Over the long term, genetic and imaging data will be integrated with the EMR, and there will be more emphasis on predictive models. Many technical challenges remain when analyzing big data, relating to its size, heterogeneity, complexity, and the unstructured text data in the EMR. Human judgement and subject matter expertise are critical parts of big data analysis, and the active participation of psychiatrists is needed throughout the analytical process.
[Overall digitalization: leading innovation of endodontics in big data era].
Ling, J Q
2016-04-09
In the big data era, digital technologies bring great challenges and opportunities to modern stomatology. The application of digital technologies such as cone-beam CT (CBCT), computer-aided design (CAD), computer-aided manufacturing (CAM), 3D printing, and digital approaches to education provides new concepts and patterns for the treatment and study of endodontic diseases. This review provides an overview of the application and prospects of commonly used digital technologies in the development of endodontics.
Development of the Lymphoma Enterprise Architecture Database: a caBIG Silver level compliant system.
Huang, Taoying; Shenoy, Pareen J; Sinha, Rajni; Graiser, Michael; Bumpers, Kevin W; Flowers, Christopher R
2009-04-03
Lymphomas are the fifth most common cancer in the United States, with numerous histological subtypes. Integrating existing clinical information on lymphoma patients provides a platform for understanding biological variability in presentation and treatment response and aids development of novel therapies. We developed a cancer Biomedical Informatics Grid (caBIG) Silver level compliant lymphoma database, called the Lymphoma Enterprise Architecture Data-system (LEAD), which integrates the pathology, pharmacy, laboratory, cancer registry, clinical trials, and clinical data from institutional databases. We utilized the Cancer Common Ontological Representation Environment Software Development Kit (caCORE SDK) provided by the National Cancer Institute's Center for Bioinformatics to establish the LEAD platform for data management. The caCORE SDK-generated system utilizes an n-tier architecture with open Application Programming Interfaces, controlled vocabularies, and registered metadata to achieve semantic integration across multiple cancer databases. We demonstrated that the data elements and structures within LEAD could be used to manage clinical research data from phase 1 clinical trials, cohort studies, and registry data from the Surveillance Epidemiology and End Results database. This work provides a clear example of how semantic technologies from caBIG can be applied to support a wide range of clinical and research tasks, and integrate data from disparate systems into a single architecture. This illustrates the central importance of caBIG to the management of clinical and biological data.
Developing public policy to advance the use of big data in health care.
Heitmueller, Axel; Henderson, Sarah; Warburton, Will; Elmagarmid, Ahmed; Pentland, Alex Sandy; Darzi, Ara
2014-09-01
The vast amount of health data generated and stored around the world each day offers significant opportunities for advances such as the real-time tracking of diseases, predicting disease outbreaks, and developing health care that is truly personalized. However, capturing, analyzing, and sharing health data is difficult, expensive, and controversial. This article explores four central questions that policy makers should consider when developing public policy for the use of "big data" in health care. We discuss what aspects of big data are most relevant for health care and present a taxonomy of data types and levels of access. We suggest that successful policies require clear objectives and provide examples, discuss barriers to achieving policy objectives based on a recent policy experiment in the United Kingdom, and propose levers that policy makers should consider using to advance data sharing. We argue that the case for data sharing can be won only by providing real-life examples of the ways in which it can improve health care. Project HOPE—The People-to-People Health Foundation, Inc.
Foulquier, Nathan; Redou, Pascal; Le Gal, Christophe; Rouvière, Bénédicte; Pers, Jacques-Olivier; Saraux, Alain
2018-05-17
Big data analysis has become a common way to extract information from complex and large datasets in most scientific domains. This approach is now used to study large cohorts of patients in medicine. This work reviews publications that have used artificial intelligence and advanced machine learning techniques to study pathogenesis-based treatments in pSS. A systematic literature review retrieved all articles reporting the use of advanced statistical analysis applied to the study of systemic autoimmune diseases (SADs) over the last decade. An automatic bibliography screening method was developed to perform this task. The program, called BIBOT, was designed to fetch and analyze articles from the PubMed database using a list of keywords and natural language processing approaches. The evolution of trends in statistical approaches, cohort sizes, and number of publications over this period was also computed in the process. In all, 44,077 abstracts were screened and 1,017 publications were analyzed. The mean number of selected articles was 101.0 (S.D. 19.16) per year, but it increased significantly over time (from 74 articles in 2008 to 138 in 2017). Among them, only 12 focused on pSS, and none emphasized pathogenesis-based treatments. To conclude, medicine is progressively entering the era of big data analysis and artificial intelligence, but these approaches are not yet used to describe pSS-specific pathogenesis-based treatment. Nevertheless, large multicentre studies are investigating this aspect with advanced algorithmic tools on large cohorts of SAD patients.
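The keyword-based screening step of a tool like BIBOT can be illustrated with a simple filter. This is a hypothetical sketch, not BIBOT's actual code; the abstracts and keywords below are made up:

```python
# Hypothetical sketch of automatic bibliography screening: keep only
# abstracts in which every required keyword appears (case-insensitive).
import re

def matches(abstract, keywords):
    """True if every keyword appears in the abstract (case-insensitive)."""
    text = abstract.lower()
    return all(re.search(r"\b" + re.escape(k.lower()), text) for k in keywords)

def screen(abstracts, keywords):
    """Return the subset of abstracts matching all keywords."""
    return [a for a in abstracts if matches(a, keywords)]

abstracts = [
    "Machine learning applied to systemic autoimmune diseases.",
    "A cohort study of knee osteoarthritis outcomes.",
]
hits = screen(abstracts, ["machine learning", "autoimmune"])
```

A production screener would add stemming, synonym expansion, and the natural language processing steps the abstract mentions, but the filtering skeleton is the same.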
Hib Disease (Haemophilus Influenzae Type b)
... b. It's a type of bacteria that can cause a number of different illnesses: Hib infection might lead people to develop anything from skin infections to more serious problems like blood infections or meningitis. Hib disease usually isn't a big worry for healthy teens. But it can be ...
Museums as a venue for public health intervention.
Ickovics, Jeannette R
2013-12-01
Big Food: Health, Culture, and the Evolution of Eating broke numerous records for museum attendance, highlighting the public's appetite for public health. During its 10-month run at the Yale Peabody Museum of Natural History, more than 120 000 visitors attended Big Food, including 25 000 students through the museum's public education program, an increase of 30% over the average student attendance of the past decade. Big Food cost approximately $100 000 to build, comprising printed panels and objects, installation displays (e.g., custom-built cases to house such objects as sugar-sweetened beverages and healthy and diseased organs), temporary walls, video monitors, food products, and more. At less than $1 per visitor, this provided extraordinary public health value.
Population-based imaging biobanks as source of big data.
Gatidis, Sergios; Heber, Sophia D; Storz, Corinna; Bamberg, Fabian
2017-06-01
Advances in computational sciences over the last decades have enabled the introduction of novel methodological approaches in biomedical research. Acquiring extensive and comprehensive data about a research subject and subsequently extracting significant information has opened new possibilities for gaining insight into biological and medical processes. This so-called big data approach has recently found entrance into medical imaging, and numerous epidemiological studies have implemented advanced imaging to identify imaging biomarkers that provide information about physiological processes, including normal development and aging as well as the development of pathological disease states. The purpose of this article is to present existing epidemiological imaging studies and to discuss opportunities, methodological and organizational aspects, and challenges that population imaging poses to the field of big data research.
Lauri, Andrea; Castiglioni, Bianca; Mariani, Paola
2011-07-01
Salmonella is a major cause of food-borne disease, and Salmonella enterica subspecies I includes the most clinically relevant serotypes. Salmonella serotype determination is important for disease etiology assessment and contamination source tracking. This task will be facilitated by the disclosure of Salmonella serotype sequence polymorphisms, here annotated in seven genes (sefA, safA, safC, bigA, invA, fimA, and phsB) from 139 S. enterica strains, of which 109 belonged to 44 serotypes of subsp. I. One hundred nineteen polymorphic sites were scored and associated with single serotypes or with serotype groups belonging to S. enterica subsp. I. A diagnostic tool was constructed based on the Ligation Detection Reaction-Universal Array (LDR-UA) for the detection of polymorphic sites uniquely associated with serotypes of primary interest (Salmonella Hadar, Salmonella Infantis, Salmonella Enteritidis, Salmonella Typhimurium, Salmonella Gallinarum, Salmonella Virchow, and Salmonella Paratyphi B). The implementation of promiscuous probes allowed the diagnosis of ten further serotypes that could be associated with a unique hybridization pattern. Finally, the sensitivity and applicability of the tool were tested on target DNA dilutions and with controlled meat contamination, allowing the detection of one Salmonella CFU in 25 g of meat.
Bagshaw, Sean M; Goldstein, Stuart L; Ronco, Claudio; Kellum, John A
2016-01-01
The world is immersed in "big data". Big data has brought about radical innovations in the methods used to capture, transfer, store and analyze the vast quantities of data generated every minute of every day. At the same time, however, it has also become far easier and relatively inexpensive to do so. Rapidly transforming, integrating and applying this large volume and variety of data are what underlie the future of big data. The application of big data and predictive analytics in healthcare holds great promise to drive innovation, reduce cost and improve patient outcomes, health services operations and value. Acute kidney injury (AKI) may be an ideal syndrome from which various dimensions and applications built within the context of big data may influence the structure of services delivery, care processes and outcomes for patients. The use of innovative forms of "information technology" was originally identified by the Acute Dialysis Quality Initiative (ADQI) in 2002 as a core concept in need of attention to improve the care and outcomes for patients with AKI. For this 15th ADQI consensus meeting held on September 6-8, 2015 in Banff, Canada, five topics focused on AKI and acute renal replacement therapy were developed where extensive applications for use of big data were recognized and/or foreseen. In this series of articles in the Canadian Journal of Kidney Health and Disease, we describe the output from these discussions.
Umemoto, A; Holroyd, C B
2016-01-01
Anterior cingulate cortex (ACC) is involved in cognitive control and decision-making, but its precise function is still highly debated. Based on evidence from lesion, neurophysiological, and neuroimaging studies, we have recently proposed a critical role for ACC in motivating extended behaviors according to learned task values (Holroyd and Yeung, 2012). Computational simulations based on this theory suggest a hierarchical mechanism in which a caudal division of ACC selects and applies control over task execution, and a rostral division of ACC facilitates switches between tasks according to a higher task strategy (Holroyd and McClure, 2015). This theoretical framework suggests that ACC may contribute to personality traits related to persistence and reward sensitivity (Holroyd and Umemoto, 2016). To explore this possibility, we carried out a voluntary task switching experiment in which on each trial participants freely chose one of two tasks to perform, under the condition that they try to select the tasks "at random" and equally often. The participants also completed several questionnaires that assessed personality traits related to persistence, apathy, anhedonia, and rumination, in addition to the Big 5 personality inventory. Among other findings, we observed greater compliance with task instructions by persistent individuals, as manifested by a greater facility with switching between tasks, which is suggestive of increased engagement of rostral ACC. © 2016 Elsevier B.V. All rights reserved.
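One simple way to quantify the "facility with switching" described above is the observed switch rate over a participant's trial sequence: an unbiased random chooser of two tasks switches on roughly half of consecutive trial pairs. This is an illustrative sketch with made-up choice sequences, not the study's actual analysis:

```python
# Illustrative measure for a voluntary task-switching experiment: the
# fraction of consecutive trial pairs on which the chosen task changed.
# Sequences below are invented examples, not real participant data.

def switch_rate(choices):
    """Fraction of consecutive trial pairs on which the task changed."""
    pairs = list(zip(choices, choices[1:]))
    return sum(a != b for a, b in pairs) / len(pairs)

perseverative = list("AAAAABAAAA")  # rarely switches: far below chance
compliant = list("ABABBABAAB")      # closer to random 50% switching
```

Comparing each participant's switch rate to the 0.5 chance level (and to their own 50/50 choice balance) gives a per-subject compliance measure that can be correlated with persistence questionnaire scores.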
John T. Kliejunas; William J. Otrosina; James R. Allison
2005-01-01
Six annosus (Heterobasidion annosum) root disease centers in a proposed campground on the north shore of Big Bear Lake in southern California were treated in 1989. Trees, stumps, and roots were removed in six disease centers, and in two cases, soil trenching was used to stop the progress of the disease. A total of 154 trees and 26 stumps were removed...
USDA-ARS?s Scientific Manuscript database
Background - Cardiovascular disease and type 2 diabetes represent overlapping diseases where a large portion of the variation attributable to genetics remains unexplained. An important player in their etiology is Peroxisome Proliferator-activated Receptor gamma (PPARγ), which is involved in lipid and ...
78 FR 27969 - Meeting of the Community Preventive Services Task Force (Task Force)
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-13
... discussed: Matters to be discussed: cancer prevention and control, cardiovascular disease prevention and... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention Meeting of the Community Preventive Services Task Force (Task Force) AGENCY: Centers for Disease Control and Prevention...
77 FR 56845 - Meeting of the Community Preventive Services Task Force (Task Force)
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-14
...: Matters to be discussed: Tobacco, oral health and cardiovascular disease. Meeting Accessibility: This... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention Meeting of the Community Preventive Services Task Force (Task Force) AGENCY: Centers for Disease Control and Prevention...
78 FR 59939 - Meeting of the Community Preventive Services Task Force (Task Force)
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-30
.... Matters to be discussed: Cancer prevention and control, cardiovascular disease prevention and control... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention Meeting of the Community Preventive Services Task Force (Task Force) AGENCY: Centers for Disease Control and Prevention...
MyGeneFriends: A Social Network Linking Genes, Genetic Diseases, and Researchers
Allot, Alexis; Chennen, Kirsley; Nevers, Yannis; Poidevin, Laetitia; Kress, Arnaud; Ripp, Raymond; Thompson, Julie Dawn; Poch, Olivier
2017-01-01
Background The constant and massive increase of biological data offers unprecedented opportunities to decipher the function and evolution of genes and their roles in human diseases. However, the multiplicity of sources and flow of data mean that efficient access to useful information and knowledge production has become a major challenge. This challenge can be addressed by taking inspiration from Web 2.0 and particularly social networks, which are at the forefront of big data exploration and human-data interaction. Objective MyGeneFriends is a Web platform inspired by social networks, devoted to genetic disease analysis, and organized around three types of proactive agents: genes, humans, and genetic diseases. The aim of this study was to improve exploration and exploitation of biological, postgenomic era big data. Methods MyGeneFriends leverages conventions popularized by top social networks (Facebook, LinkedIn, etc), such as networks of friends, profile pages, friendship recommendations, affinity scores, news feeds, content recommendation, and data visualization. Results MyGeneFriends provides simple and intuitive interactions with data through evaluation and visualization of connections (friendships) between genes, humans, and diseases. The platform suggests new friends and publications and allows agents to follow the activity of their friends. It dynamically personalizes information depending on the user’s specific interests and provides an efficient way to share information with collaborators. Furthermore, the user’s behavior itself generates new information that constitutes an added value integrated in the network, which can be used to discover new connections between biological agents. Conclusions We have developed MyGeneFriends, a Web platform leveraging conventions from popular social networks to redefine the relationship between humans and biological big data and improve human processing of biomedical data. 
MyGeneFriends is available at lbgi.fr/mygenefriends. PMID:28623182
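The "affinity score" between agents that MyGeneFriends computes could take many forms; one plausible, minimal version (not necessarily the platform's actual formula) is the Jaccard overlap of the diseases two genes are associated with. Gene and disease names below are illustrative:

```python
# A sketch of one plausible gene-gene "affinity score": the Jaccard index
# of the disease sets associated with each gene. This is an assumption for
# illustration, not the scoring used by MyGeneFriends itself.

def affinity(diseases_a, diseases_b):
    """Jaccard similarity of two genes' disease sets, in [0, 1]."""
    a, b = set(diseases_a), set(diseases_b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

gene1 = {"retinitis pigmentosa", "usher syndrome"}
gene2 = {"usher syndrome", "deafness"}
score = affinity(gene1, gene2)  # one shared disease out of three total
```

Such pairwise scores are exactly what a friendship-recommendation step needs: genes above an affinity threshold become suggested "friends" for each other and for researchers following either gene.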
Recommendations for Secure Initialization Routines in Operating Systems
2004-12-01
monolithic design is used. This term is often used to distinguish the operating system from supporting software, e.g. “The Linux kernel does not specify...give the operating system structure and organization. Yet the overall monolithic design of the kernel still falls under Tannenbaum and Woodhull’s “Big...modules that handle initialization tasks. Any further subdivision would complicate interdependencies that are a result of having a monolithic kernel
2016-03-01
Defense AT&L: March-April 2016 Whipping Procrastination Roy Wood, Ph.D. Wood is the acting Vice President of the Defense Acquisition University...written this article earlier, except I was procrastinating. This happens to me a lot, which is surprising since many consider me to be fairly productive. I... procrastinating. Chunk the work: I break big tasks down into smaller ones that are not quite so intimidating. Some people get really sophisticated and use
Geologic Map of the Big Spring Quadrangle, Carter County, Missouri
Weary, David J.; McDowell, Robert C.
2006-01-01
The bedrock exposed in the Big Spring quadrangle of Missouri comprises Late Cambrian and Early Ordovician aged dolomite, sandstone, and chert. The sedimentary rocks are nearly flat lying except where they are adjacent to faults. The carbonate rocks are karstified, and the area contains numerous sinkholes, springs, caves, and losing streams. This map is one of several being produced under the U.S. Geological Survey (USGS) National Cooperative Geologic Mapping Program to provide geologic data applicable to land-use problems in the Ozarks of south-central Missouri. Ongoing and potential industrial and agricultural development in the Ozarks region has presented issues of ground-water quality in karst areas. A national park in this region (Ozark National Scenic Riverways, Missouri) is concerned about the effects of activities in areas outside of their stewardship on the water resources that define the heart of this park. This task applies geologic mapping and karst investigations to address issues surrounding competing land use in south-central Missouri. This task keeps geologists from the USGS associated with the park and allows the park to utilize USGS expertise and aid the NPS on how to effectively use geologic maps for park management. For more information, see: http://geology.er.usgs.gov/eespteam/Karst/index.html
Cook, Hadrian; Inman, Alax
2012-12-15
The voluntary sector is value-driven, issue-focused, and considered economically efficient due to volunteer engagement and low administrative overheads in meeting conservation objectives. Independence and flexibility make it an intermediary between stakeholders and government, and it is proving an effective vehicle for public engagement. NGOs are emerging as key players in environmental action, making them a partial replacement for 'big government action' and perhaps heralding a 'Big Green Society'. The sector ranges in scale from small, local conservation charities to nationally important organisations. This article focuses on functionality, because resource issues relate to funding, competences of personnel, continuity of mission, and access to expertise, all of which are affected in times of austerity. NGOs were largely task-oriented, yet they rapidly developed a campaigning role, encapsulating an ever deeper role in both planning and policy formulation. Subsequently, they have placed community inclusion at the core of their function. While the portents remain good, potential problems relate to economic resources, task allocation, impacts on labour markets, interactions with the statutory sector, operational independence, and relations with local democracy. Outlined in this paper are the historic functions, operation, and development of the sector and perceived issues for the future. Copyright © 2012 Elsevier Ltd. All rights reserved.
[The real place of infectious pathology in overall population morbidity].
Sergiev, V P; Drynov, I D; Malyshev, N A
1999-01-01
The statistical decrease in the proportion of infections in the structure of population morbidity reflects the existing classification of diseases, in which only acute diseases are assigned to the group "infectious and parasitic diseases". The proportion of diseases caused by infectious agents remains constantly high. According to WHO data, such diseases make up one-third of all diseases in the world. In Moscow, the proportion of infectious diseases among all diseases registered in the inhabitants of this big city fluctuated between 36.1% and 49.7% during the period 1926-1997.
A multi-ontology approach to annotate scientific documents based on a modularization technique.
Gomes, Priscilla Corrêa E Castro; Moura, Ana Maria de Carvalho; Cavalcanti, Maria Cláudia
2015-12-01
Scientific text annotation has become an important task for biomedical scientists. Nowadays, there is an increasing need for the development of intelligent systems to support new scientific findings. Public databases available on the Web provide useful data, but much more useful information is only accessible in scientific texts. Text annotation may help as it relies on the use of ontologies to maintain annotations based on a uniform vocabulary. However, it is difficult to use an ontology, especially those that cover a large domain. In addition, since scientific texts explore multiple domains, which are covered by distinct ontologies, it becomes even more difficult to deal with such task. Moreover, there are dozens of ontologies in the biomedical area, and they are usually big in terms of the number of concepts. It is in this context that ontology modularization can be useful. This work presents an approach to annotate scientific documents using modules of different ontologies, which are built according to a module extraction technique. The main idea is to analyze a set of single-ontology annotations on a text to find out the user interests. Based on these annotations a set of modules are extracted from a set of distinct ontologies, and are made available for the user, for complementary annotation. The reduced size and focus of the extracted modules tend to facilitate the annotation task. An experiment was conducted to evaluate this approach, with the participation of a bioinformatician specialist of the Laboratory of Peptides and Proteins of the IOC/Fiocruz, who was interested in discovering new drug targets aiming at the combat of tropical diseases. Copyright © 2015 Elsevier Inc. All rights reserved.
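The module extraction idea described above can be sketched as a traversal over "is-a" links: starting from seed concepts found in the user's annotations, collect every ancestor concept to form a small, focused module. This is a deliberately simplified sketch with an invented toy ontology; real locality-based extraction techniques are considerably more involved:

```python
# Simplified sketch of ontology module extraction: from seed concepts,
# collect all concepts reachable via 'is-a' parent links. The tiny
# ontology below is invented for illustration.
from collections import deque

def extract_module(parents, seeds):
    """Return the seeds plus every concept reachable via parent links."""
    module, queue = set(seeds), deque(seeds)
    while queue:
        for p in parents.get(queue.popleft(), ()):
            if p not in module:
                module.add(p)
                queue.append(p)
    return module

parents = {
    "malaria": ["parasitic disease"],
    "parasitic disease": ["infectious disease"],
    "infectious disease": ["disease"],
    "aspirin": ["drug"],  # unrelated branch, should be excluded
}
module = extract_module(parents, {"malaria"})
```

The annotator then offers only this small module instead of the full ontology, which is what makes the annotation task tractable for the user.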
78 FR 2996 - Meeting of the Community Preventive Services Task Force (Task Force)
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-15
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention Meeting of the Community Preventive Services Task Force (Task Force) AGENCY: Centers for Disease Control and Prevention... for Disease Control and Prevention (CDC) announces the next meeting of the Community Preventive...
Pediatric trauma BIG score: Predicting mortality in polytraumatized pediatric patients.
El-Gamasy, Mohamed Abd El-Aziz; Elezz, Ahmed Abd El Basset Abo; Basuni, Ahmed Sobhy Mohamed; Elrazek, Mohamed El Sayed Ali Abd
2016-11-01
Trauma is a worldwide health problem and a major cause of death and disability, particularly affecting the young population. It is important to remember that pediatric trauma care has made a significant improvement in the outcomes of these injured children. This study aimed to evaluate the pediatric trauma BIG score in comparison with the New Injury Severity Score (NISS) and Pediatric Trauma Score (PTS) in Tanta University Emergency Hospital. The study was conducted on all pediatric multiple-trauma patients admitted to the Emergency Department of Tanta University Emergency Hospital over 1 year. The pediatric trauma BIG score, PTS, and NISS were calculated, and the results were compared with each other and with observed mortality. A BIG score ≥12.7 had a sensitivity of 86.7% and a specificity of 71.4%, whereas the PTS at a cutoff ≤3.5 had a sensitivity of 63.3% and a specificity of 68.6%, and the NISS at a cutoff ≥39.5 had a sensitivity of 53.3% and a specificity of 54.3%. There was a significant positive correlation between the BIG score and the mortality rate. The pediatric BIG score is a reliable mortality-prediction score for children with traumatic injuries; it uses international normalized ratio (INR), base excess (BE), and Glasgow Coma Scale (GCS) values that can be measured within a few minutes of sampling, so it can be readily applied in the Pediatric Emergency Department, but it cannot be applied to patients with chronic diseases that affect INR, BE, or GCS.
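The BIG score itself, as commonly described in the pediatric trauma literature, combines the three values the abstract names: base deficit + 2.5 × INR + (15 − GCS). The sketch below assumes that formula and reuses the ≥12.7 cutoff reported above; the patient values are invented:

```python
# Sketch of the BIG score as commonly described in the pediatric trauma
# literature: base deficit + 2.5 * INR + (15 - GCS). The >= 12.7 cutoff is
# the one reported in the abstract above; patient values are illustrative.

def big_score(base_deficit, inr, gcs):
    """Pediatric trauma BIG score from admission labs and GCS (3-15)."""
    return base_deficit + 2.5 * inr + (15 - gcs)

def high_risk(score, cutoff=12.7):
    """Flag scores at or above the study's mortality-prediction cutoff."""
    return score >= cutoff

s = big_score(base_deficit=6.0, inr=1.4, gcs=9)  # 6 + 3.5 + 6 = 15.5
```

Because all three inputs are available within minutes of admission, the score can be computed at the bedside, which is the practicality advantage the abstract highlights over anatomy-based scores like the NISS.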
LITTLE FISH, BIG DATA: ZEBRAFISH AS A MODEL FOR CARDIOVASCULAR AND METABOLIC DISEASE.
Gut, Philipp; Reischauer, Sven; Stainier, Didier Y R; Arnaout, Rima
2017-07-01
The burden of cardiovascular and metabolic diseases worldwide is staggering. The emergence of systems approaches in biology promises new therapies, faster and cheaper diagnostics, and personalized medicine. However, a profound understanding of pathogenic mechanisms at the cellular and molecular levels remains a fundamental requirement for discovery and therapeutics. Animal models of human disease are cornerstones of drug discovery as they allow identification of novel pharmacological targets by linking gene function with pathogenesis. The zebrafish model has been used for decades to study development and pathophysiology. More than ever, the specific strengths of the zebrafish model make it a prime partner in an age of discovery transformed by big-data approaches to genomics and disease. Zebrafish share a largely conserved physiology and anatomy with mammals. They allow a wide range of genetic manipulations, including the latest genome engineering approaches. They can be bred and studied with remarkable speed, enabling a range of large-scale phenotypic screens. Finally, zebrafish demonstrate an impressive regenerative capacity scientists hope to unlock in humans. Here, we provide a comprehensive guide on applications of zebrafish to investigate cardiovascular and metabolic diseases. We delineate advantages and limitations of zebrafish models of human disease and summarize their most significant contributions to understanding disease progression to date. Copyright © 2017 the American Physiological Society.
Big Data in HEP: A comprehensive use case study
Gutsche, Oliver; Cremonesi, Matteo; Elmer, Peter; ...
2017-11-23
Experimental particle physics has been at the forefront of analyzing the world's largest datasets for decades. The HEP community was the first to develop suitable software and computing tools for this task. In recent times, new toolkits and systems, collectively called Big Data technologies, have emerged to support the analysis of petabyte- and exabyte-scale datasets in industry. While the principles of data analysis in HEP have not changed (filtering and transforming experiment-specific data formats), these new technologies use different approaches, promise a fresh look at the analysis of very large datasets, and could potentially reduce the time-to-physics with increased interactivity. In this talk, we present an active LHC Run 2 analysis, searching for dark matter with the CMS detector, as a testbed for Big Data technologies. We directly compare the traditional NTuple-based analysis with an equivalent analysis using Apache Spark on the Hadoop ecosystem and beyond. In both cases, we start the analysis with the official experiment data formats and produce publication-quality physics plots. Lastly, we discuss the advantages and disadvantages of each approach and give an outlook on further studies needed.
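The "filter and transform" pattern the authors describe is format-agnostic; a toy plain-Python sketch of the same event-selection idea (the field names and cut values are illustrative assumptions, not the actual CMS dark-matter selection):

```python
# Hypothetical flat event records; field names and cut values are
# illustrative, not the actual CMS dark-matter analysis selection.
events = [
    {"met": 250.0, "n_jets": 3},
    {"met": 80.0, "n_jets": 1},
    {"met": 410.0, "n_jets": 2},
]


def select(events, met_cut=200.0, min_jets=2):
    """Filter events by cuts, then transform them into plot-ready values."""
    passed = (e for e in events if e["met"] > met_cut and e["n_jets"] >= min_jets)
    return [e["met"] for e in passed]


values = select(events)  # -> [250.0, 410.0]
```

Whether expressed as an NTuple loop or a Spark dataframe query, the analysis reduces to this filter-then-transform pipeline; the technologies differ in how they distribute it.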
Unraveling the Complexities of Life Sciences Data.
Higdon, Roger; Haynes, Winston; Stanberry, Larissa; Stewart, Elizabeth; Yandl, Gregory; Howard, Chris; Broomall, William; Kolker, Natali; Kolker, Eugene
2013-03-01
The life sciences have entered into the realm of big data and data-enabled science, where data can either empower or overwhelm. These data bring the challenges of the 5 Vs of big data: volume, veracity, velocity, variety, and value. Both independently and through our involvement with DELSA Global (Data-Enabled Life Sciences Alliance, DELSAglobal.org), the Kolker Lab ( kolkerlab.org ) is creating partnerships that identify data challenges and solve community needs. We specialize in solutions to complex biological data challenges, as exemplified by the community resource of MOPED (Model Organism Protein Expression Database, MOPED.proteinspire.org ) and the analysis pipeline of SPIRE (Systematic Protein Investigative Research Environment, PROTEINSPIRE.org ). Our collaborative work extends into the computationally intensive tasks of analysis and visualization of millions of protein sequences through innovative implementations of sequence alignment algorithms and creation of the Protein Sequence Universe tool (PSU). Pushing into the future together with our collaborators, our lab is pursuing integration of multi-omics data and exploration of biological pathways, as well as assigning function to proteins and porting solutions to the cloud. Big data have come to the life sciences; discovering the knowledge in the data will bring breakthroughs and benefits.
Big Data Analytics for Demand Response: Clustering Over Space and Time
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chelmis, Charalampos; Kolte, Jahanvi; Prasanna, Viktor K.
The pervasive deployment of advanced sensing infrastructure in cyber-physical systems, such as the Smart Grid, has resulted in an unprecedented data explosion. Such data exhibit both large volume and high velocity, two of the three pillars of Big Data, and have a time-series notion, as datasets in this context typically consist of successive measurements made over a time interval. Time-series data can be valuable for data mining and analytics tasks, such as identifying the "right" customers among a diverse population to target for Demand Response programs. However, time series are challenging to mine due to their high dimensionality. In this paper, we motivate this problem using a real application from the smart grid domain. We explore novel representations of time-series data for Big Data analytics and propose a clustering technique for determining natural segmentations of customers and identifying temporal consumption patterns. Our method is generalizable to large-scale, real-world scenarios without making any assumptions about the data. We evaluate our technique using real datasets from smart meters, totaling ~18,200,000 data points, and show the efficacy of our technique in efficiently detecting the optimal number of clusters.
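The segmentation step can be illustrated with a minimal k-means over synthetic two-point load profiles; this is a generic sketch of centroid clustering, not the authors' representation or technique:

```python
import numpy as np


def kmeans(profiles, k, iters=50):
    """Minimal k-means: assign each profile to its nearest centroid, then update."""
    # Spread the initial centroids across the dataset (simple deterministic init)
    centroids = profiles[np.linspace(0, len(profiles) - 1, k, dtype=int)].astype(float)
    labels = np.zeros(len(profiles), dtype=int)
    for _ in range(iters):
        dists = np.linalg.norm(profiles[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = profiles[labels == j].mean(axis=0)
    return labels, centroids


# Synthetic 2-point "daily profiles": morning-peak vs evening-peak customers
rng = np.random.default_rng(1)
profiles = np.vstack([rng.normal([5.0, 1.0], 0.1, size=(20, 2)),
                      rng.normal([1.0, 5.0], 0.1, size=(20, 2))])
labels, centroids = kmeans(profiles, k=2)
```

Real smart-meter profiles would be much higher-dimensional (e.g., 24 or 96 readings per day), which is exactly why the paper explores compact time-series representations before clustering.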
Bragazzi, Nicola Luigi; Gianfredi, Vincenza; Villarini, Milena; Rosselli, Roberto; Nasr, Ahmed; Hussein, Amr; Martini, Mariano; Behzadifar, Masoud
2018-01-01
Vaccines are public health interventions aimed at preventing infections-related mortality, morbidity, and disability. While vaccines have been successfully designed for those infectious diseases preventable by preexisting neutralizing specific antibodies, for other communicable diseases, additional immunological mechanisms should be elicited to achieve a full protection. “New vaccines” are particularly urgent in the nowadays society, in which economic growth, globalization, and immigration are leading to the emergence/reemergence of old and new infectious agents at the animal–human interface. Conventional vaccinology (the so-called “vaccinology 1.0”) was officially born in 1796 thanks to the contribution of Edward Jenner. Entering the twenty-first century, vaccinology has shifted from a classical discipline in which serendipity and the Pasteurian principle of the three Is (isolate, inactivate, and inject) played a major role to a science, characterized by a rational design and plan (“vaccinology 3.0”). This shift has been possible thanks to Big Data, characterized by different dimensions, such as high volume, velocity, and variety of data. Big Data sources include new cutting-edge, high-throughput technologies, electronic registries, social media, and social networks, among others. The current mini-review aims at exploring the potential roles as well as pitfalls and challenges of Big Data in shaping the future vaccinology, moving toward a tailored and personalized vaccine design and administration. PMID:29556492
Motor-cognitive dual-task deficits in individuals with early-mid stage Huntington disease.
Fritz, Nora E; Hamana, Katy; Kelson, Mark; Rosser, Anne; Busse, Monica; Quinn, Lori
2016-09-01
Huntington disease (HD) results in a range of cognitive and motor impairments that progress throughout the disease stages; however, little research has evaluated specific dual-task abilities in this population, and the degree to which they may be related to functional ability. The purpose of this study was to a) examine simple and complex motor-cognitive dual-task performance in individuals with HD, b) determine relationships between dual-task walking ability and disease-specific measures of motor, cognitive and functional ability, and c) examine the relationship of dual-task measures to falls in individuals with HD. Thirty-two individuals with HD were evaluated for simple and complex dual-task ability using the Walking While Talking Test. Demographics and disease-specific measures of motor, cognitive and functional ability were also obtained. Individuals with HD had impairments in simple and complex dual-task ability. Simple dual-task walking was correlated to disease-specific motor scores as well as cognitive performance, but complex dual-task walking was correlated with total functional capacity, as well as a range of cognitive measures. Number of prospective falls was moderately-strongly correlated to dual-task measures. Our results suggest that individuals with HD have impairments in cognitive-motor dual-task ability that are related to disease progression and specifically functional ability. Dual-task measures appear to evaluate a unique construct in individuals with early to mid-stage HD, and may have value in improving the prediction of falls risk in this population. Copyright © 2016 Elsevier B.V. All rights reserved.
Prevention of mental disorders requires action on adverse childhood experiences.
Jorm, Anthony F; Mulder, Roger T
2018-04-01
The increased availability of treatment has not reduced the prevalence of mental disorders, suggesting a need for a greater emphasis on prevention. With chronic physical diseases, successful prevention efforts have focused on reducing the big risk factors. If this approach is applied to mental disorders, the big risk factors are adverse childhood experiences, which have major effects on most classes of mental disorder across the lifespan. While the evidence base is limited, there is support for a number of interventions to reduce adverse childhood experiences, including an important role for mental health professionals. Taking action on adverse childhood experiences may be our best chance of emulating the success of public health action to prevent chronic physical diseases and thereby reduce the large global burden of mental disorders.
The reproduction of gender norms through downsizing in later life residential relocation.
Addington, Aislinn; Ekerdt, David J
2014-01-01
Using data collected from qualitative interviews in 36 households, this article examines people's use of social relations based on gender to perform tasks associated with residential relocation in later life. Without prompting, our respondents addressed the social relations of gender in the meanings of things, in the persons of gift recipients, and in the persons of actors accomplishing the tasks. They matched gender-typed objects to same-sex recipients, reproducing circumstances of possession and passing on expectations for gender identity. The accounts of our respondents also depicted a gendered division of household labor between husbands and wives and a gendered division of care work by daughters and sons. These strategies economized a big task by shaping decisions about who should get what and who will do what. In turn, these practices affirmed the gendered nature of possession and care work into another generation. © The Author(s) 2012.
Judge, Timothy A; LePine, Jeffery A; Rich, Bruce L
2006-07-01
The authors report results from 2 studies assessing the extent to which narcissism is related to self- and other ratings of leadership, workplace deviance, and task and contextual performance. Study 1 results revealed that narcissism was related to enhanced self-ratings of leadership, even when controlling for the Big Five traits. Study 2 results also revealed that narcissism was related to enhanced leadership self-perceptions; indeed, whereas narcissism was significantly positively correlated with self-ratings of leadership, it was significantly negatively related to other ratings of leadership. Study 2 also revealed that narcissism was related to more favorable self-ratings of workplace deviance and contextual performance compared to other (supervisor) ratings. Finally, as hypothesized, narcissism was more strongly negatively related to contextual performance than to task performance. ((c) 2006 APA, all rights reserved).
Kalid, Naser; Zaidan, A A; Zaidan, B B; Salman, Omar H; Hashim, M; Muzammil, H
2017-12-29
The growing worldwide population has increased the need for technologies, computerised software algorithms and smart devices that can monitor and assist patients anytime and anywhere and thus enable them to lead independent lives. The real-time remote monitoring of patients is an important issue in telemedicine. In the provision of healthcare services, patient prioritisation poses a significant challenge because of the complex decision-making process it involves when patients are considered 'big data'. To our knowledge, no study has highlighted the link between 'big data' characteristics and real-time remote healthcare monitoring in the patient prioritisation process, as well as the inherent challenges involved. Thus, we present comprehensive insights into the elements of big data characteristics according to the six 'Vs': volume, velocity, variety, veracity, value and variability. Each of these elements is presented and connected to a related part in the study of the connection between patient prioritisation and real-time remote healthcare monitoring systems. Then, we determine the weak points and recommend solutions as potential future work. This study makes the following contributions. (1) The link between big data characteristics and real-time remote healthcare monitoring in the patient prioritisation process is described. (2) The open issues and challenges for big data used in the patient prioritisation process are emphasised. (3) As a recommended solution, decision making using multiple criteria, such as vital signs and chief complaints, is utilised to prioritise the big data of patients with chronic diseases on the basis of the most urgent cases.
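The recommended multi-criteria prioritisation can be sketched as a weighted scoring over vital signs and chief complaints; the criteria, thresholds, and weights below are illustrative assumptions, not the paper's scheme:

```python
def urgency_score(patient, weights=None):
    """Weighted multi-criteria urgency score; higher means more urgent.

    Criteria names, thresholds, and weights are hypothetical examples
    of multi-criteria decision making, not the authors' scheme.
    """
    weights = weights or {"heart_rate": 0.4, "spo2": 0.4, "complaint": 0.2}
    hr = patient["heart_rate"]
    hr_risk = 1.0 if hr > 120 or hr < 50 else 0.0
    spo2_risk = 1.0 if patient["spo2"] < 92 else 0.0
    complaint_risk = 1.0 if patient["chief_complaint"] in {"chest pain", "dyspnea"} else 0.0
    return (weights["heart_rate"] * hr_risk
            + weights["spo2"] * spo2_risk
            + weights["complaint"] * complaint_risk)


def prioritise(patients):
    # Most urgent patients first
    return sorted(patients, key=urgency_score, reverse=True)
```

In a streaming telemedicine setting, the same scoring function would be re-evaluated as new vital-sign readings arrive, keeping the queue ordered by current urgency.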
USDA-ARS?s Scientific Manuscript database
Background Cardiovascular disease and type 2 diabetes mellitus represent overlapping diseases where a large portion of the variation attributable to genetics remains unexplained. An important player in their pathogenesis is peroxisome proliferator–activated receptor gamma (PPARgamma) that is involve...
Echolocation behavior in big brown bats is not impaired after intense broadband noise exposures.
Hom, Kelsey N; Linnenschmidt, Meike; Simmons, James A; Simmons, Andrea Megela
2016-10-15
Echolocating bats emit trains of intense ultrasonic biosonar pulses and listen to weaker echoes returning from objects in their environment. Identification and categorization of echoes are crucial for orientation and prey capture. Bats are social animals and often fly in groups in which they are exposed to their own emissions and to those from other bats, as well as to echoes from multiple surrounding objects. Sound pressure levels in these noisy conditions can exceed 110 dB, with no obvious deleterious effects on echolocation performance. Psychophysical experiments show that big brown bats (Eptesicus fuscus) do not experience temporary threshold shifts after exposure to intense broadband ultrasonic noise, but it is not known if they make fine-scale adjustments in their pulse emissions to compensate for any effects of the noise. We investigated whether big brown bats adapt the number, temporal patterning or relative amplitude of their emitted pulses while flying through an acoustically cluttered corridor after exposure to intense broadband noise (frequency range 10-100 kHz; sound exposure level 152 dB). Under these conditions, four bats made no significant changes in navigation errors or in pulse number, timing and amplitude 20 min, 24 h or 48 h after noise exposure. These data suggest that big brown bats remain able to perform difficult echolocation tasks after exposure to ecologically realistic levels of broadband noise. © 2016. Published by The Company of Biologists Ltd.
Big Data, Deep Learning and Tianhe-2 at Sun Yat-Sen University, Guangzhou
NASA Astrophysics Data System (ADS)
Yuen, D. A.; Dzwinel, W.; Liu, J.; Zhang, K.
2014-12-01
In this decade the big data revolution has permeated many fields, ranging from financial transactions to medical surveys and scientific endeavors, because of the big opportunities people see ahead. What to do with all this data remains an intriguing question. This is where computer scientists, together with applied mathematicians, have made significant inroads in developing deep learning techniques for unraveling new relationships among different variables by means of correlation analysis and data-assimilation methods. Deep learning and big data taken together are a grand challenge task in high-performance computing, demanding both ultrafast speed and large memory. The Tianhe-2, recently installed at Sun Yat-Sen University in Guangzhou, is well positioned to take up this challenge because it is currently the world's fastest computer at 34 petaflops. Each compute node of Tianhe-2 has two Intel Xeon E5-2600 CPUs and three Xeon Phi accelerators, along with a very large fast RAM of 88 gigabytes; the system has a total memory of 1,375 terabytes. All of these technical features will allow very high-dimensional (more than 10) problems in deep learning to be explored carefully on the Tianhe-2. Problems in seismology which can be solved include three-dimensional seismic wave simulations of the whole Earth with a few km resolution and the recognition of new phases in seismic waveforms from assemblages of large data sets.
Broadband noise exposure does not affect hearing sensitivity in big brown bats (Eptesicus fuscus).
Simmons, Andrea Megela; Hom, Kelsey N; Warnecke, Michaela; Simmons, James A
2016-04-01
In many vertebrates, exposure to intense sounds under certain stimulus conditions can induce temporary threshold shifts that reduce hearing sensitivity. Susceptibility to these hearing losses may reflect the relatively quiet environments in which most of these species have evolved. Echolocating big brown bats (Eptesicus fuscus) live in extremely intense acoustic environments in which they navigate and forage successfully, both alone and in company with other bats. We hypothesized that bats may have evolved a mechanism to minimize noise-induced hearing losses that otherwise could impair natural echolocation behaviors. The hearing sensitivity of seven big brown bats was measured in active echolocation and passive hearing tasks, before and after exposure to broadband noise spanning their audiometric range (10-100 kHz, 116 dB SPL re. 20 µPa rms, 1 h duration; sound exposure level 152 dB). Detection thresholds measured 20 min, 2 h or 24 h after exposure did not vary significantly from pre-exposure thresholds or from thresholds in control (sham exposure) conditions. These results suggest that big brown bats may be less susceptible to temporary threshold shifts than are other terrestrial mammals after exposure to similarly intense broadband sounds. These experiments provide fertile ground for future research on possible mechanisms employed by echolocating bats to minimize hearing losses while orienting effectively in noisy biological soundscapes. © 2016. Published by The Company of Biologists Ltd.
Big data in oncologic imaging.
Regge, Daniele; Mazzetti, Simone; Giannini, Valentina; Bracco, Christian; Stasi, Michele
2017-06-01
Cancer is a complex disease, and unfortunately understanding how the components of the cancer system work does not help us understand the behavior of the system as a whole. In the words of the Greek philosopher Aristotle, "the whole is greater than the sum of its parts." To date, thanks to improved information technology infrastructures, it is possible to store data from each single cancer patient, including clinical data, medical images, laboratory tests, and pathological and genomic information. Indeed, medical archive storage constitutes approximately one-third of total global storage demand, and a large part of the data are in the form of medical images. The opportunity now is to draw insight from the whole to the benefit of each individual patient. For the oncologic patient, big data analysis is just beginning, but several useful applications can be envisaged, including development of imaging biomarkers to predict disease outcome, assessment of the risk of X-ray dose exposure or of renal damage following the administration of contrast agents, and tracking and optimization of patient workflow. The aim of this review is to present current evidence of how big data derived from medical images may impact on the diagnostic pathway of the oncologic patient.
Impunity: Countering Illicit Power in War and Transition
2016-05-01
Post-Conflict: The Lessons from Timor-Leste (Deniz Kocak). Chapter 17: A Granular Approach to Combating Corruption and Illicit Power Structures... transregional security," and central to our task "is strengthening our global network of allies and partners." In the current post-"Big Footprint" era... after the post-2001 political settlement, which was built on the distribution of political power between factions formed during the country's civil war
Improving Big Data Visual Analytics with Interactive Virtual Reality
2015-05-22
gain a better understanding of data include scalable zooms, dynamic filtering, and annotation. Below, we describe some tasks that can be performed...
Synchronization of Combat Power at the Task Force Level: Defining a Planning Methodology
1989-01-01
[Flattened planning-matrix residue: columns for SCT, ARTY, CAS, AH, SMOKE, MORTAR, and FASCAM, with estimated vs. actual times tracked across LD, PL 1, PL 2, PL 3, and OBJ, current and future.] ... learned in medical school to identify the aorta, only to arrive at St. Elsewhere, where they call it "the big blue boy." Right now the US Army has a
Making Marble Tracks Can Involve Lots of Fun as Well as STEM Learning
ERIC Educational Resources Information Center
Nagel, Bert
2015-01-01
Marble tracks are a very popular toy, and big ones can be found in science centres in many countries. If children want to make a marble track themselves, it is quite a job: it takes a long time, the tracks can take up a lot of space, and most structures are quite fragile, as the materials used can very quickly prove unfit for the task and do not last very…
The Challenges of Human-Autonomy Teaming
NASA Technical Reports Server (NTRS)
Vera, Alonso
2017-01-01
Machine intelligence is improving rapidly based on advances in big data analytics, deep learning algorithms, networked operations, and continuing exponential growth in computing power (Moore's Law). This growth in the power and applicability of increasingly intelligent systems will change the roles of humans, shifting them to tasks where adaptive problem solving, reasoning, and decision-making are required. This talk will address the challenges involved in engineering autonomous systems that function effectively with humans in aeronautics domains.
LeMonda, Brittany C.; Mahoney, Jeannette R.; Verghese, Joe; Holtzer, Roee
2016-01-01
The Walking While Talking (WWT) dual-task paradigm is a mobility stress test that predicts major outcomes, including falls, frailty, disability, and mortality in aging. Certain personality traits, such as neuroticism, extraversion, and their combination, have been linked to both cognitive and motor outcomes. We examined whether individual differences in the personality dimensions of neuroticism and extraversion predicted dual-task performance decrements (both motor and cognitive) on a WWT task in non-demented older adults. We hypothesized that the combined effect of high neuroticism-low extraversion would be related to greater dual-task costs in gait velocity and cognitive performance in non-demented older adults. Participants (N = 295; age range 65–95 years; 164 female) completed the Big Five Inventory and a WWT task involving concurrent gait and a serial-7s subtraction task. Gait velocity was obtained using an instrumented walkway. The high neuroticism-low extraversion group incurred greater dual-task costs (i.e., worse performance) in both gait velocity (95% confidence interval (CI) [−17.68 to −3.07]) and cognitive performance (95% CI [−19.34 to −2.44]) compared to the low neuroticism-high extraversion group, suggesting that high neuroticism-low extraversion interferes with the allocation of attentional resources to competing task demands during the WWT task. Older individuals with high neuroticism-low extraversion may be at higher risk for falls, mobility decline, and other adverse outcomes in aging. PMID:26527241
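The dual-task cost reported above is conventionally computed as the percentage decrement from single-task to dual-task performance; a minimal sketch of that standard formula (not necessarily the exact computation used in the study):

```python
def dual_task_cost(single_task, dual_task):
    """Percent decrement from single- to dual-task performance.

    Positive values mean performance worsened under the dual task.
    """
    return (single_task - dual_task) / single_task * 100.0


# e.g. gait velocity of 100 cm/s walking alone vs 80 cm/s while
# performing a serial-7s subtraction task (illustrative numbers)
cost = dual_task_cost(100.0, 80.0)  # -> 20.0
```

The same formula applies to the cognitive side (correct subtractions per unit time with and without walking), which is how motor and cognitive costs can be compared on a common percentage scale.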
AB022. Harnessing big data to transform clinical care of cardiovascular diseases
Cutiongco-de la Paz, Eva Maria
2015-01-01
Diseases of the heart and vascular system are the leading causes of mortality worldwide. A number of risk factors have already been identified, such as obesity, diabetes, and smoking; in recent years, research has shifted its focus to genetic risk factors. Discoveries on the role of genes, partnered with technological developments, have enabled advances in the understanding of human genetics and its influence on disease and treatment. There are initiatives now to combine medical records and genetic and other molecular data into a single "knowledge network" to achieve what is aptly known as precision medicine. With next-generation sequencing readily available at a more affordable cost, it is expected that genetic information of patients will be increasingly available and can be used to guide clinical decisions. The big data generated and stored necessitate broad and extensive interpretation to be valuable in clinical care. Accumulating evidence on the use of such genetic information in cardiovascular clinics will be presented.
Salathé, Marcel
2016-12-01
The digital revolution has contributed to very large data sets (ie, big data) relevant for public health. The two major data sources are electronic health records from traditional health systems and patient-generated data. As the two data sources have complementary strengths (high veracity in the data from traditional sources, and high velocity and variety in patient-generated data), they can be combined to build more robust public health systems. However, they also have unique challenges. Patient-generated data in particular are often completely unstructured and highly context dependent, posing essentially a machine-learning challenge. Some recent examples from infectious disease surveillance and adverse drug event monitoring demonstrate that the technical challenges can be solved. Despite these advances, the problem of verification remains, and unless traditional and digital epidemiologic approaches are combined, these data sources will be constrained by their intrinsic limits. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America.
The big freeze may be over: a contracting universe for cryopreservation?
Gale, Robert Peter; Ruiz-Argüelles, Guillermo J
2018-02-23
According to current cosmological theory, the universe will continue to expand indefinitely. If so, it should cool, eventually reaching temperatures too cold to sustain life. This theory is commonly referred to as heat death or the big freeze. Putting aside this potentially unpleasant scenario, unlikely in the lifetime of current readers (about 10^2500 years from now), freezing, in contrast, has played an important role in hematopoietic cell autotransplants for diseases such as plasma cell myeloma and lymphomas. Let us consider how.
Causal inference and the data-fusion problem
Bareinboim, Elias; Pearl, Judea
2016-01-01
We review concepts, principles, and tools that unify current approaches to causal analysis and attend to new challenges presented by big data. In particular, we address the problem of data fusion—piecing together multiple datasets collected under heterogeneous conditions (i.e., different populations, regimes, and sampling methods) to obtain valid answers to queries of interest. The availability of multiple heterogeneous datasets presents new opportunities to big data analysts, because the knowledge that can be acquired from combined data would not be possible from any individual source alone. However, the biases that emerge in heterogeneous environments require new analytical tools. Some of these biases, including confounding, sampling selection, and cross-population biases, have been addressed in isolation, largely in restricted parametric models. We here present a general, nonparametric framework for handling these biases and, ultimately, a theoretical solution to the problem of data fusion in causal inference tasks. PMID:27382148
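For a single observed confounder Z, one of the biases discussed (confounding) is handled by the backdoor adjustment P(y | do(x)) = sum_z P(y | x, z) P(z); a textbook illustration on a toy discrete joint distribution, not the paper's general nonparametric framework:

```python
from collections import defaultdict


def backdoor_adjust(joint, x, y):
    """P(Y=y | do(X=x)) = sum_z P(Y=y | X=x, Z=z) * P(Z=z).

    `joint` maps (x, y, z) tuples to probabilities that sum to 1.
    """
    pz = defaultdict(float)    # P(Z=z)
    pxz = defaultdict(float)   # P(X=x, Z=z)
    pxyz = defaultdict(float)  # P(X=x, Y=y, Z=z)
    for (xi, yi, zi), p in joint.items():
        pz[zi] += p
        pxz[(xi, zi)] += p
        if yi == y:
            pxyz[(xi, zi)] += p
    return sum(pxyz[(x, z)] / pxz[(x, z)] * pz[z]
               for z in list(pz) if pxz[(x, z)] > 0)


# Toy joint distribution over (X, Y, Z) with Z confounding both X and Y;
# all numbers are illustrative.
joint = {
    (1, 1, 1): 0.36, (1, 0, 1): 0.04, (0, 1, 1): 0.07, (0, 0, 1): 0.03,
    (1, 1, 0): 0.05, (1, 0, 0): 0.05, (0, 1, 0): 0.04, (0, 0, 0): 0.36,
}
causal = backdoor_adjust(joint, x=1, y=1)  # 0.9 * 0.5 + 0.5 * 0.5 = 0.7
```

In this toy example the naive conditional P(Y=1 | X=1) is 0.82, so adjusting for Z changes the answer, which is the point: confounding bias must be removed before datasets collected under different regimes can be fused.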
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGuire, Austin D.; Meade, Roger Allen
As one of the very few people in the world to give the “go/no go” decision to detonate a nuclear device, Austin “Mac” McGuire holds a very special place in the history of both the Los Alamos National Laboratory and the world. As Commander of Joint Task Force Unit 8.1.1, on Christmas Island in the spring and summer of 1962, Mac directed the Los Alamos data collection efforts for twelve of the last atmospheric nuclear detonations conducted by the United States. Since data collection was at the heart of nuclear weapon testing, it fell to Mac to make the ultimate decision to detonate each test device. He calls his experience THE LAST BIG BANG, since these tests, part of Operation Dominic, were characterized by the dramatic displays of the heat, light, and sounds unique to atmospheric nuclear detonations – never, perhaps, to be witnessed again.
Accelerating Biomedical Signal Processing Using GPU: A Case Study of Snore Sound Feature Extraction.
Guo, Jian; Qian, Kun; Zhang, Gongxuan; Xu, Huijie; Schuller, Björn
2017-12-01
The advent of 'Big Data' and 'Deep Learning' offers both a great challenge and a huge opportunity for personalised health care. In machine learning-based biomedical data analysis, feature extraction is a key step for 'feeding' the subsequent classifiers. With increasing volumes of biomedical data, extracting features from these 'big' data is an intensive and time-consuming task. In this case study, we employ a Graphics Processing Unit (GPU) via Python to extract features from a large corpus of snore sound data. Those features can subsequently be imported into many well-known deep learning training frameworks without any format processing. The snore sound data were collected from several hospitals (20 subjects, with 770-990 MB per subject - in total 17.20 GB). Experimental results show that our GPU-based processing significantly speeds up the feature extraction phase, by up to seven times, compared to the previous CPU-based system.
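The study's extraction code is not included in this record; as a rough, hypothetical sketch of the kind of frame-level feature extraction it describes, the NumPy code below computes log-energy per frame of a signal. Because CuPy deliberately mirrors the NumPy API, swapping the import for `import cupy as np` would run the same computation on a GPU; the frame length, hop size, and the feature itself are illustrative choices, not the paper's.

```python
import numpy as np

def frame_log_energy(signal, frame_len=1024, hop=512):
    """Split a 1-D signal into overlapping frames and return log-energy per frame."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    # Index matrix: row i selects frame i's samples
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx]                      # shape: (n_frames, frame_len)
    energy = np.sum(frames ** 2, axis=1)      # per-frame energy
    return np.log(energy + 1e-10)             # log compresses the dynamic range

# Example: 1 s of a 440 Hz tone sampled at 16 kHz
t = np.arange(16000) / 16000.0
feats = frame_log_energy(np.sin(2 * np.pi * 440 * t))
```

Batching many signals into one array and processing them with vectorized calls like these is what makes the GPU speed-up possible: the per-frame work is embarrassingly parallel.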
mdtmFTP and its evaluation on ESNET SDN testbed
Zhang, Liang; Wu, Wenji; DeMar, Phil; ...
2017-04-21
In this paper, to address the high-performance challenges of data transfer in the big data era, we develop and implement mdtmFTP, a high-performance data transfer tool for big data. mdtmFTP has four salient features. First, it adopts an I/O-centric architecture to execute data transfer tasks. Second, it utilizes the underlying multicore platform more efficiently through optimized thread scheduling. Third, it implements a large virtual file mechanism to address the lots-of-small-files (LOSF) problem. Fourth, it integrates multiple optimization mechanisms, including zero copy, asynchronous I/O, pipelining, batch processing, and pre-allocated buffer pools, to enhance performance. mdtmFTP has been extensively tested and evaluated within the ESNET 100G testbed. Evaluations show that mdtmFTP can achieve higher performance than existing data transfer tools, such as GridFTP, FDT, and BBCP.
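mdtmFTP's implementation is not shown in this record; the toy Python sketch below only illustrates the idea behind its large-virtual-file mechanism for the LOSF problem: many small files are packed into one contiguous byte stream with an index, so the transfer layer moves a single large object instead of thousands of tiny ones. The packing format here is invented for illustration, not mdtmFTP's actual on-the-wire format.

```python
import io
import json
import struct

def pack(files):
    """Pack {name: bytes} into one stream: a JSON index, then concatenated payloads."""
    index = {}
    payload = io.BytesIO()
    for name, data in files.items():
        index[name] = (payload.tell(), len(data))   # (offset, length) per file
        payload.write(data)
    header = json.dumps(index).encode()
    # Layout: 4-byte big-endian header length, JSON header, payload blob
    return struct.pack(">I", len(header)) + header + payload.getvalue()

def unpack(blob):
    """Recover the original {name: bytes} mapping from a packed stream."""
    hlen = struct.unpack(">I", blob[:4])[0]
    index = json.loads(blob[4:4 + hlen])
    body = blob[4 + hlen:]
    return {name: body[off:off + n] for name, (off, n) in index.items()}

packed = pack({"a.txt": b"alpha", "b.txt": b"beta"})
```

The win is that per-file protocol overhead (handshakes, metadata round trips) is paid once for the whole batch rather than once per small file.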
Artificial intelligence, physiological genomics, and precision medicine.
Williams, Anna Marie; Liu, Yong; Regner, Kevin R; Jotterand, Fabrice; Liu, Pengyuan; Liang, Mingyu
2018-04-01
Big data are a major driver in the development of precision medicine. Efficient analysis methods are needed to transform big data into clinically actionable knowledge. To accomplish this, many researchers are turning toward machine learning (ML), a branch of artificial intelligence (AI) that utilizes modern algorithms to give computers the ability to learn. Much of the effort to advance ML for precision medicine has been focused on the development and implementation of algorithms and the generation of ever larger quantities of genomic sequence data and electronic health records. However, relevance and accuracy of the data are as important as quantity of data in the advancement of ML for precision medicine. For common diseases, physiological genomic readouts in disease-applicable tissues may be an effective surrogate to measure the effect of genetic and environmental factors and their interactions that underlie disease development and progression. Disease-applicable tissue may be difficult to obtain, but there are important exceptions such as kidney needle biopsy specimens. As AI continues to advance, new analytical approaches, including those that go beyond data correlation, need to be developed, and ethical issues of AI need to be addressed. Physiological genomic readouts in disease-relevant tissues, combined with advanced AI, can be a powerful approach for precision medicine for common diseases.
Use of big data in drug development for precision medicine
Kim, Rosa S.; Goossens, Nicolas; Hoshida, Yujin
2016-01-01
Drug development has been a costly and lengthy process with an extremely low success rate and a lack of consideration of individual diversity in drug response and toxicity. Over the past decade, an alternative “big data” approach has been expanding at an unprecedented pace, based on the development of electronic databases of chemical substances, disease gene/protein targets, functional readouts, and clinical information covering inter-individual genetic variations and toxicities. This paradigm shift has enabled systematic, high-throughput, and accelerated identification of novel drugs or repurposed indications of existing drugs for pathogenic molecular aberrations specifically present in each individual patient. Exploding interest from the information technology and direct-to-consumer genetic testing industries has further facilitated the use of big data to achieve personalized precision medicine. Here we overview currently available resources and discuss future prospects. PMID:27430024
Solomon, Marjorie; Ragland, J. Daniel; Niendam, Tara A.; Lesh, Tyler A.; Beck, Jonathan S.; Matter, John C.; Frank, Michael J.; Carter, Cameron S.
2015-01-01
Objective: To investigate the neural mechanisms underlying impairments in generalizing learning shown by adolescents with autism spectrum disorder (ASD). Method: Twenty-one high-functioning individuals with ASD aged 12–18 years and 23 gender-, IQ-, and age-matched adolescents with typical development (TYP) completed a transitive inference (TI) task implemented using rapid event-related functional magnetic resonance imaging (fMRI). They were trained on overlapping pairs in a stimulus hierarchy of colored ovals where A>B>C>D>E>F and then tested on generalizing this training to new stimulus pairings (AF, BD, BE) in a “Big Game.” Whole-brain univariate, region-of-interest, and functional connectivity analyses were used. Results: During training, TYP exhibited increased recruitment of the prefrontal cortex (PFC), while the group with ASD showed greater functional connectivity between the PFC and the anterior cingulate cortex (ACC). Both groups recruited the hippocampus and caudate comparably; however, functional connectivity between these regions was positively associated with TI performance only for the group with ASD. During the Big Game, TYP showed greater recruitment of the PFC, parietal cortex, and the ACC. Recruitment of these regions increased with age in the group with ASD. Conclusion: During TI, TYP recruited cognitive control-related brain regions implicated in mature problem solving/reasoning, including the PFC, parietal cortex, and ACC, while the group with ASD showed functional connectivity of the hippocampus and the caudate that was associated with task performance. Failure to reliably engage cognitive control-related brain regions may produce less integrated flexible learning in those with ASD unless they are provided with task support that in essence provides them with cognitive control; this pattern may normalize with age. PMID:26506585
Margaret R. Metz; Kerri M. Frangioso; Ross K. Meentemeyer; David M. Rizzo
2012-01-01
Sudden oak death (SOD), caused by Phytophthora ramorum, is an emerging forest disease associated with extensive tree mortality in coastal California forests (Rizzo et al. 2005). P. ramorum is a generalist pathogen that infects many hosts, but hosts differ in their ability to transmit the disease...
The High-Throughput Analyses Era: Are We Ready for the Data Struggle?
D'Argenio, Valeria
2018-03-02
Recent and rapid technological advances in molecular sciences have dramatically increased the ability to carry out high-throughput studies characterized by big data production. This, in turn, has highlighted a gap between data yield and data analysis. Indeed, big data management is becoming an increasingly important aspect of many fields of molecular research, including the study of human diseases. The challenge now is to identify, within the huge amount of data obtained, that which is of clinical relevance. In this context, issues related to data interpretation, sharing and storage need to be assessed and standardized. Once this is achieved, the integration of data from different -omic approaches will improve the diagnosis, monitoring and therapy of diseases by allowing the identification of novel, potentially actionable biomarkers in view of personalized medicine.
"Big data" and "open data": What kind of access should researchers enjoy?
Chatellier, Gilles; Varlet, Vincent; Blachier-Poisson, Corinne
2016-02-01
The healthcare sector is currently facing a new paradigm, the explosion of "big data". Coupled with advances in computer technology, the field of "big data" appears promising, allowing us to better understand the natural history of diseases, to follow up the implementation of new technologies (devices, drugs), and to contribute to precision medicine. Data sources are multiple (medical and administrative data, electronic medical records, data from rapidly developing technologies such as DNA sequencing, connected devices, etc.) and heterogeneous, while their use requires complex methods for accurate analysis. Moreover, faced with this new paradigm, we must determine who could (or should) have access to which data, how to combine collective interest with the protection of personal data, and how to finance in the long term both operating costs and database interrogation. This article analyses the opportunities and challenges related to the use of open and/or "big" data, from the viewpoint of pharmacologists and representatives of the pharmaceutical and medical device industry. Copyright © 2016 Société française de pharmacologie et de thérapeutique. Published by Elsevier Masson SAS. All rights reserved.
O'Connor, Peter; Nguyen, Jessica; Anglim, Jeromy
2017-01-01
In this study, we investigated the validity of the Trait Emotional Intelligence Questionnaire-Short Form (TEIQue-SF; Petrides, 2009) in the context of task-induced stress. We used a total sample of 225 volunteers to investigate (a) the incremental validity of the TEIQue-SF over other predictors of coping with task-induced stress, and (b) the construct validity of the TEIQue-SF by examining the mechanisms via which scores from the TEIQue-SF predict coping outcomes. Results demonstrated that the TEIQue-SF possessed incremental validity over the Big Five personality traits in the prediction of emotion-focused coping. Results also provided support for the construct validity of the TEIQue-SF by demonstrating that this measure predicted adaptive coping via emotion-focused channels. Specifically, results showed that, following a task stressor, the TEIQue-SF predicted low negative affect and high task performance via high levels of emotion-focused coping. Consistent with the purported theoretical nature of the trait emotional intelligence (EI) construct, trait EI as assessed by the TEIQue-SF primarily enhances affect and performance in stressful situations by regulating negative emotions.
Disease management, coping, and functional disability in pediatric sickle cell disease.
Oliver-Carpenter, Gloria; Barach, Ilana; Crosby, Lori E; Valenzuela, Jessica; Mitchell, Monica J
2011-02-01
Youth with sickle cell disease (SCD) experience chronic symptoms that significantly interfere with physical, academic, and social-emotional functioning. Thus, to effectively manage SCD, youth and caregivers must work collaboratively to ensure optimal functioning. The goal of the current study was to examine the level of involvement in disease management tasks for youth with SCD and their caregivers. The study also examined the relationship between involvement in disease management tasks, daily functioning, and coping skills. The study utilized collaborative care and disease management theoretical frameworks. Youth and caregivers participated in the study during an annual research and education day event. Forty-seven patients with SCD aged 6 to 18 years and their caregivers completed questionnaires examining level of involvement in disease management tasks, youth functional disability, and youth coping strategies. Caregivers also completed a demographic and medical history form. Parents and youth agreed that parents were significantly more involved in disease management tasks than youth, although level of involvement varied by task. Decreased parent involvement was related to greater coping strategies used by patients, including massage, prayer, and positive thinking. Higher functional disability (lower functioning) was related to greater parent involvement in disease management tasks, suggesting that greater impairment may encourage increased parent involvement. Health professionals working with families of youth with SCD should discuss with parents and youth how disease management tasks and roles will be shared and transferred during adolescence. Parents and youth may also benefit from a discussion of these issues within their own families.
Berner, Włodzimierz
2008-01-01
Acute infectious diseases of high intensity, i.e. typhus fever, typhoid fever and dysentery, followed by scarlet fever, measles, malaria, relapsing fever, whooping cough, diphtheria, smallpox and Asiatic cholera, spreading after World War I in Poland posed one of the most significant problems in the reviving country. Their incidence resulted not only from the bad living conditions of the population but also from poor personal and environmental hygiene and lack of access to bacteriologically safe drinking water. The Polish-Bolshevik war (1919-1920), the repatriation of war prisoners and of the Polish population from Russia (whose territory was a reservoir of numerous infectious diseases), and the return of large groups of displaced people contributed to the spread of epidemics. The morbidity rate of acute infectious diseases was highest in the big Polish cities, especially Warsaw, Lodz, Lvov, Cracow and Vilnius. The Bureau of the Chief Emergency Commissar for fighting epidemics, which closely cooperated with other Polish sanitary institutions and international organisations, rendered the greatest service to the control of infectious diseases. By 1924 the largest foci of disease had been controlled and incidence had decreased, which was possible after the formation of sanitary posts along the eastern border of Poland, the organisation of infectious disease hospitals and of bath and disinfection centres in the country, and the implementation of protective vaccinations.
Ramos-Casals, Manuel; Brito-Zerón, Pilar; Kostov, Belchin; Sisó-Almirall, Antoni; Bosch, Xavier; Buss, David; Trilla, Antoni; Stone, John H; Khamashta, Munther A; Shoenfeld, Yehuda
2015-08-01
Systemic autoimmune diseases (SADs) are a significant cause of morbidity and mortality worldwide, although their epidemiological profile varies significantly country by country. We explored the potential of the Google search engine to collect and merge large series (>1000 patients) of SADs reported in the PubMed library, with the aim of obtaining a high-definition geoepidemiological picture of each disease. We collected data from 394,827 patients with SADs. Analysis showed a predominance of medical vs. administrative databases (74% vs. 26%), public health system vs. health insurance resources (88% vs. 12%) and patient-based vs. population-based designs (82% vs. 18%). The most unbalanced gender ratio was found in primary Sjögren syndrome (pSS), with nearly 10 females affected per 1 male, followed by systemic lupus erythematosus (SLE), systemic sclerosis (SSc) and antiphospholipid syndrome (APS) (ratio of nearly 5:1). Each disease predominantly affects a specific age group: children (Kawasaki disease, primary immunodeficiencies and Schonlein-Henoch disease), young people (SLE, Behçet disease and sarcoidosis), middle-aged people (SSc, vasculitis and pSS) and the elderly (amyloidosis, polymyalgia rheumatica, and giant cell arteritis). We found significant differences in the geographical distribution of studies for each disease, and a higher frequency of the three SADs with available data (SLE, inflammatory myopathies and Kawasaki disease) in African-American patients. Using a "big data" approach enabled hitherto unseen connections in SADs to emerge. Copyright © 2015 Elsevier B.V. All rights reserved.
Development of marker-free transgenic lettuce resistant to Mirafiori lettuce big-vein virus.
Kawazu, Yoichi; Fujiyama, Ryoi; Imanishi, Shunsuke; Fukuoka, Hiroyuki; Yamaguchi, Hirotaka; Matsumoto, Satoru
2016-10-01
Lettuce big-vein disease caused by Mirafiori lettuce big-vein virus (MLBVV) is found in major lettuce production areas worldwide, but highly resistant cultivars have not yet been developed. To produce MLBVV-resistant marker-free transgenic lettuce that would have a transgene with a promoter and terminator of lettuce origin, we constructed a two T-DNA binary vector, in which the first T-DNA contained the selectable marker gene neomycin phosphotransferase II, and the second T-DNA contained the lettuce ubiquitin gene promoter and terminator and inverted repeats of the coat protein (CP) gene of MLBVV. This vector was introduced into lettuce cultivars 'Watson' and 'Fuyuhikari' by Agrobacterium tumefaciens-mediated transformation. Regenerated plants (T0 generation) that were CP gene-positive by PCR analysis were self-pollinated, and 312 T1 lines were analyzed for resistance to MLBVV. Virus-negative plants were checked for the CP gene and the marker gene, and nine lines were obtained which were marker-free and resistant to MLBVV. Southern blot analysis showed that three of the nine lines had two copies of the CP gene, whereas six lines had a single copy and were used for further analysis. Small interfering RNAs, which are indicative of RNA silencing, were detected in all six lines. MLBVV infection was inhibited in all six lines in resistance tests performed in a growth chamber and a greenhouse, resulting in a high degree of resistance to lettuce big-vein disease. Transgenic lettuce lines produced in this study could be used as resistant cultivars or parental lines for breeding.
The rise of artificial intelligence and the uncertain future for physicians.
Krittanawong, C
2018-02-01
Physicians in everyday clinical practice are under pressure to innovate faster than ever because of the rapid, exponential growth in healthcare data. "Big data" refers to extremely large data sets that cannot be analyzed or interpreted using traditional data processing methods. In fact, big data itself is meaningless, but processing it offers the promise of unlocking novel insights and accelerating breakthroughs in medicine-which in turn has the potential to transform current clinical practice. Physicians can analyze big data, but at present it requires a large amount of time and sophisticated analytic tools such as supercomputers. However, the rise of artificial intelligence (AI) in the era of big data could assist physicians in shortening processing times and improving the quality of patient care in clinical practice. This editorial provides a glimpse at the potential uses of AI technology in clinical practice and considers the possibility of AI replacing physicians, perhaps altogether. Physicians diagnose diseases based on personal medical histories, individual biomarkers, simple scores (e.g., CURB-65, MELD), and their physical examinations of individual patients. In contrast, AI can diagnose diseases based on a complex algorithm using hundreds of biomarkers, imaging results from millions of patients, aggregated published clinical research from PubMed, and thousands of physicians' notes from electronic health records (EHRs). While AI could assist physicians in many ways, it is unlikely to replace physicians in the foreseeable future. Let us look at the emerging uses of AI in medicine. Copyright © 2017 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
Causal Inference from Big Data: Theoretical Foundations and the Data-fusion Problem
2015-06-01
both treatment and response. Some of these factors may be unmeasurable, such as genetic trait or lifestyle, while others are measurable, such as gender...several tasks in Artificial Intelligence [22, 23] and Statistics [24, 25] as well as in the empirical sciences (e.g., Genetics [26, 27], Economics [28...conditions are likely to be different. Special cases of transportability can be found in the literature under different rubrics such as “external validity
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. House Committee on Science and Technology.
These hearings on international cooperation in science focused on three issues: (1) international cooperation in big science; (2) the impact of international cooperation on research priorities; and (3) coordination in management of international cooperative research. Witnesses presenting testimony and/or prepared statements were: Victor Weisskopf;…
Zhang, Jingyu; Li, Yongjuan; Wu, Changxu
2013-01-01
While much research has investigated the predictors of operators’ performance such as personality, attitudes and motivation in high-risk industries, its cognitive antecedents and boundary conditions have not been fully investigated. Based on a multilevel investigation of 312 nuclear power plant main control room operators from 50 shift teams, the present study investigated how general mental ability (GMA) at both individual and team level can influence task and safety performance. At the individual level, operators’ GMA was predictive of their task and safety performance and this trend became more significant as they accumulated more experience. At the team level, we found team GMA had positive influences on all three performance criteria. However, we also found a “big-fish-little-pond” effect insofar as team GMA had a relatively smaller effect and inhibited the contribution of individual GMA to workers’ extra-role behaviors (safety participation) compared to its clear beneficial influence on in-role behaviors (task performance and safety compliance). The possible mechanisms related to learning and social comparison processes are discussed. PMID:24391964
Computational dynamic approaches for temporal omics data with applications to systems medicine.
Liang, Yulan; Kelemen, Arpad
2017-01-01
Modeling and predicting biological dynamic systems while simultaneously estimating the kinetic structural and functional parameters are extremely important in systems and computational biology. This is key to understanding the complexity of human health, drug response, disease susceptibility and pathogenesis for systems medicine. Temporal omics data used to measure dynamic biological systems are essential for discovering complex biological interactions and clinical mechanisms and causation. However, delineating the possible associations and causalities of genes, proteins, metabolites, cells and other biological entities from high-throughput time-course omics data is challenging, and conventional experimental techniques are not suited to it in the big-omics era. In this paper, we present various recently developed dynamic trajectory and causal network approaches for temporal omics data, which are extremely useful for researchers who want to start working in this challenging research area. Moreover, we present applications to various biological systems, health conditions and disease statuses, together with examples that summarize state-of-the-art performance on different specific mining tasks. We critically discuss the merits, drawbacks and limitations of the approaches, and the associated main challenges for the years ahead. The most recent computing tools and software for analyzing specific problem types, associated platform resources, and other potentials of the dynamic trajectory and interaction methods are also presented and discussed in detail.
Altmann, Lori J. P.; Stegemöller, Elizabeth; Hazamy, Audrey A.; Wilson, Jonathan P.; Okun, Michael S.; McFarland, Nikolaus R.; Shukla, Aparna Wagle; Hass, Chris J.
2015-01-01
Background: When performing two tasks at once (a dual task), performance on one or both tasks typically suffers. People with Parkinson’s disease (PD) usually experience larger dual-task decrements on motor tasks than healthy older adults (HOAs). Our objective was to investigate the decrements in cycling caused by performing cognitive tasks with a range of difficulty in people with PD and HOAs. Methods: Twenty-eight participants with Parkinson’s disease and 20 healthy older adults completed a baseline cycling task with no secondary tasks and then completed dual-task cycling while performing 12 tasks from six cognitive domains representing a wide range of difficulty. Results: Cycling was faster during dual-task conditions than at baseline, and was significantly faster for six tasks (all p<.02) across both groups. Cycling speed improved the most during the easiest cognitive tasks, and cognitive performance was largely unaffected. Cycling improvement was predicted by task difficulty (p<.001). People with Parkinson’s disease cycled more slowly (p<.03) and showed smaller dual-task benefits (p<.01) than healthy older adults. Conclusions: Unexpectedly, participants’ motor performance improved during cognitive dual tasks, which current models of dual-task performance cannot explain. To account for these findings, we propose a model integrating dual-task and acute-exercise approaches, which posits that cognitive arousal during dual tasks increases resources to facilitate motor and cognitive performance, subsequently modulated by motor and cognitive task difficulty. This model can explain both the improvement observed on dual tasks in the current study and the more typical dual-task findings of other studies. PMID:25970607
Hamada, Tsuyoshi; Keum, NaNa; Nishihara, Reiko; Ogino, Shuji
2017-03-01
Molecular pathological epidemiology (MPE) is an integrative field that utilizes molecular pathology to incorporate interpersonal heterogeneity of a disease process into epidemiology. In each individual, the development and progression of a disease are determined by a unique combination of exogenous and endogenous factors, resulting in different molecular and pathological subtypes of the disease. Based on "the unique disease principle," the primary aim of MPE is to uncover an interactive relationship between a specific environmental exposure and disease subtypes in determining disease incidence and mortality. This MPE approach can provide etiologic and pathogenic insights, potentially contributing to precision medicine for personalized prevention and treatment. Although breast, prostate, lung, and colorectal cancers have been among the most commonly studied diseases, the MPE approach can be used to study any disease. In addition to molecular features, host immune status and microbiome profile likely affect a disease process, and thus serve as informative biomarkers. As such, further integration of several disciplines into MPE has been achieved (e.g., pharmaco-MPE, immuno-MPE, and microbial MPE), to provide novel insights into underlying etiologic mechanisms. With the advent of high-throughput sequencing technologies, available genomic and epigenomic data have expanded dramatically. The MPE approach can also provide a specific risk estimate for each disease subgroup, thereby enhancing the impact of genome-wide association studies on public health. In this article, we present recent progress of MPE, and discuss the importance of accounting for the disease heterogeneity in the era of big-data health science and precision medicine.
Parkinson's Disease Is Associated with Goal Setting Deficits during Task Switching
ERIC Educational Resources Information Center
Meiran, Nachshon; Friedman, Gilad; Yehene, Eynat
2004-01-01
Ten Parkinson's Disease (PD) patients and 10 control participants were tested using a task-switching paradigm in which there was a random task sequence, and the task was cued in every trial. Five PD patients showed a unique error profile. Their performance approximated guessing when accuracy was dependent on correct task identification, and was…
Motor Fault Diagnosis Based on Short-time Fourier Transform and Convolutional Neural Network
NASA Astrophysics Data System (ADS)
Wang, Li-Hua; Zhao, Xiao-Ping; Wu, Jia-Xin; Xie, Yang-Yang; Zhang, Yong-Hong
2017-11-01
With the rapid development of mechanical equipment, the mechanical health monitoring field has entered the era of big data. However, manual feature extraction has the disadvantages of low efficiency and poor accuracy when handling big data. In this study, the research object was the asynchronous motor in a drivetrain diagnostics simulator system. The vibration signals of different fault motors were collected. The raw signal was pretreated using the short-time Fourier transform (STFT) to obtain the corresponding time-frequency map. Then, features of the time-frequency map were adaptively extracted using a convolutional neural network (CNN). The effects of the pretreatment method and of the network hyperparameters on diagnostic accuracy were investigated experimentally. The experimental results showed that the influence of the preprocessing method is small, and that the batch size is the main factor affecting accuracy and training efficiency. Feature visualization showed that, in the case of big data, the extracted CNN features can represent complex mapping relationships between signal and health status, and can also remove the need for the prior knowledge and engineering experience that traditional diagnosis methods require for feature extraction. This paper proposes a new method, based on STFT and CNN, which can complete motor fault diagnosis tasks more intelligently and accurately.
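The paper's pipeline is not reproduced in this record; the sketch below shows only its first step, turning a raw one-dimensional signal into an STFT time-frequency map, in plain NumPy. The frame length, hop size, and synthetic signal are illustrative, not the paper's settings; the resulting 2-D magnitude map is the kind of input a CNN would then classify.

```python
import numpy as np

def stft_magnitude(signal, frame_len=256, hop=128):
    """Magnitude STFT: Hann-windowed overlapping frames -> real FFT per frame."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    # Index matrix: row i selects the samples of frame i
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    # Result shape: (n_frames, frame_len // 2 + 1) — a time-frequency map
    return np.abs(np.fft.rfft(signal[idx] * window, axis=1))

# Synthetic stand-in for a vibration signal: two tones at 60 Hz and 300 Hz
fs = 4096
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 60 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)
tf_map = stft_magnitude(sig)
```

Stacking such maps into an image-like batch is what lets a CNN learn fault signatures directly, replacing hand-crafted spectral features.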
Reading at a distance: implications for the design of text in children's big books.
Hughes, Laura E; Wilkins, Arnold J
2002-06-01
Visual acuity, typically measured by the ability to name letters at a distance, is poorer when letters are small and closely spaced. It has been suggested that reading can be affected by letter size and spacing. To determine the effect of text size and spacing on the ability to read at a distance, with a view to helping with the design of text in children's 'Big Books'. The visual acuity of 200 children aged between 6 and 12 was measured. A subset of 66 children was given further reading tests. From a viewing distance of 3 m, children were required (1) to identify words and (2) to read passages of text rapidly. A repeated-measures design was used to compare the effects of different sizes and spacings of text on performance of the two tasks. Performance improved when the spacing of words and the size of letters were greater than is typical in 'Big Books'. For a given letter density, increasing the spacing improved performance more than increasing the letter size. The text in children's books could be made easier to read by expanding the spacing between words and also by increasing the size of the print. The maximum viewing distance should be reduced from 15 ft (4.6 m) to 10 ft (3.0 m).
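The viewing-distance recommendation can be made concrete with the standard visual-angle formula, angle = 2·atan(h / 2d): halving the distance roughly doubles the angle the print subtends at the eye. The sketch below is illustrative only; the 8 mm lower-case letter height is an assumed value, not a figure from the study.

```python
import math

def visual_angle_deg(letter_height_m: float, distance_m: float) -> float:
    """Visual angle (degrees) subtended by a letter at a viewing distance."""
    return math.degrees(2 * math.atan(letter_height_m / (2 * distance_m)))

h = 0.008  # assumed 8 mm lower-case letter height in a 'Big Book'
for d in (3.0, 4.6):  # recommended 10 ft vs. original 15 ft distance
    print(f"{d} m: {visual_angle_deg(h, d):.3f} deg")
```

Moving the farthest reader from 4.6 m to 3.0 m enlarges the subtended angle by about half again, which is the same kind of gain the study obtained by enlarging and spacing the print itself.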
Strouwen, Carolien; Molenaar, Esther A L M; Keus, Samyra H J; Münks, Liesbeth; Heremans, Elke; Vandenberghe, Wim; Bloem, Bastiaan R; Nieuwboer, Alice
2016-02-01
Impaired dual-task performance significantly impacts functional mobility in people with Parkinson's disease (PD). The aim of this study was to identify determinants of dual-task performance in people with PD across three different dual tasks, to assess their possible task-dependency. We recruited 121 home-dwelling patients with PD (mean age 65.93 years; mean disease duration 8.67 years), whom we subjected to regular walking (control condition) and to three dual-task conditions: walking combined with a backwards Digit Span task, an auditory Stroop task and a Mobile Phone task. As outcomes, we measured dual-task gait velocity using the GAITRite mat, and dual-task reaction times and errors on the concurrent tasks. Motor, cognitive and descriptive variables that correlated with dual-task performance (p < 0.20) were entered into a stepwise forward multiple linear regression model. Single-task gait velocity and executive function, tested by the alternating intake test, were significantly associated with gait velocity during the Digit Span (R(2) = 0.65; p < 0.001), the Stroop (R(2) = 0.73; p < 0.001) and the Mobile Phone task (R(2) = 0.62; p < 0.001). In addition, disease severity was correlated with gait velocity during the Stroop task. Age was an additional determinant of gait velocity while using a mobile phone. Single-task gait velocity and executive function as measured by a verbal fluency switching task were independent determinants of dual-task gait performance in people with PD. Contrary to expectation, these factors were the same across the different tasks, supporting the robustness of the findings. Future studies need to determine whether these factors predict dual-task abnormalities prospectively. Copyright © 2015 Elsevier Ltd. All rights reserved.
MyGeneFriends: A Social Network Linking Genes, Genetic Diseases, and Researchers.
Allot, Alexis; Chennen, Kirsley; Nevers, Yannis; Poidevin, Laetitia; Kress, Arnaud; Ripp, Raymond; Thompson, Julie Dawn; Poch, Olivier; Lecompte, Odile
2017-06-16
The constant and massive increase of biological data offers unprecedented opportunities to decipher the function and evolution of genes and their roles in human diseases. However, the multiplicity of sources and flow of data mean that efficient access to useful information and knowledge production has become a major challenge. This challenge can be addressed by taking inspiration from Web 2.0 and particularly social networks, which are at the forefront of big data exploration and human-data interaction. MyGeneFriends is a Web platform inspired by social networks, devoted to genetic disease analysis, and organized around three types of proactive agents: genes, humans, and genetic diseases. The aim of this study was to improve exploration and exploitation of biological, postgenomic era big data. MyGeneFriends leverages conventions popularized by top social networks (Facebook, LinkedIn, etc), such as networks of friends, profile pages, friendship recommendations, affinity scores, news feeds, content recommendation, and data visualization. MyGeneFriends provides simple and intuitive interactions with data through evaluation and visualization of connections (friendships) between genes, humans, and diseases. The platform suggests new friends and publications and allows agents to follow the activity of their friends. It dynamically personalizes information depending on the user's specific interests and provides an efficient way to share information with collaborators. Furthermore, the user's behavior itself generates new information that constitutes an added value integrated in the network, which can be used to discover new connections between biological agents. We have developed MyGeneFriends, a Web platform leveraging conventions from popular social networks to redefine the relationship between humans and biological big data and improve human processing of biomedical data. MyGeneFriends is available at lbgi.fr/mygenefriends. 
©Alexis Allot, Kirsley Chennen, Yannis Nevers, Laetitia Poidevin, Arnaud Kress, Raymond Ripp, Julie Dawn Thompson, Olivier Poch, Odile Lecompte. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 16.06.2017.
Doryńska, Agnieszka; Polak, Maciej; Kozela, Magdalena; Szafraniec, Krystyna; Piotrowski, Walerian; Bielecki, Wojciech; Drygas, Wojciech; Kozakiewicz, Krystyna; Piwoński, Jerzy; Tykarski, Andrzej; Zdrojewski, Tomasz; Pająk, Andrzej
2015-01-01
In Kraków, the second largest town in Poland, the cardiovascular disease (CVD) mortality rate is lower than in most of the largest towns in Poland and lower than the rate for the total Polish population. The aim of the present analysis was to compare socioeconomic status (SES), the prevalence of CVD risk factors and SCORE risk assessment between residents of Krakow, residents of other big towns in Poland, and the general Polish population. We used data from two large population studies that used comparable methods of risk-factor assessment: (1) the Polish part of the HAPIEE Project, in which 10,615 residents of Krakow aged 45-69 years were examined, and (2) the WOBASZ Study, which contributed a sub-sample of 6,888 residents of Poland in the corresponding age group. The WOBASZ sample included 992 residents of big towns other than Krakow. Age-standardized proportions of persons with CVD risk factors were compared between Krakow and the other big towns in Poland, and between Krakow and the whole of Poland, using the χ2 test. The striking observation was that in Krakow the proportion of participants with university education was substantially higher than the average for the other big towns and for the whole of Poland. The proportion of occupationally active men and women was also highest in Krakow. In both sexes, the prevalence of smoking, hypercholesterolemia and hypertension in Krakow was similar to the other big towns, but the prevalence of hypercholesterolemia and hypertension (in men only) was lower than the average for Poland. The distribution by SCORE risk categories was similar in all three samples studied. In general, the distribution by BMI categories was less favourable, but the prevalence of central obesity was lower, among residents of Kraków than among residents of the other big towns and citizens of the whole of Poland. The prevalence of diabetes was higher in Krakow than in the other samples studied.
The differences between the population of Krakow and the populations of other parts of Poland in exposure to the main risk factors were diverse and not large enough to produce differences in the distribution by SCORE risk categories. The study suggests the importance of obesity and diabetes, which are not used in the SCORE risk assessment, and especially the importance of psychosocial and economic factors, which may influence CVD risk and contribute more to explaining the regional differences in CVD mortality.
Simon, G G
2016-01-01
The neglected tropical diseases (NTDs) are the most common infections of humans in Sub-Saharan Africa. Virtually all of the population living below the World Bank poverty figure is affected by one or more NTDs. New evidence indicates a high degree of geographic overlap between the highest-prevalence NTDs (soil-transmitted helminths, schistosomiasis, onchocerciasis, lymphatic filariasis, and trachoma) and malaria and HIV, with a high degree of co-infection. Recent research suggests that NTDs can affect HIV and AIDS, tuberculosis (TB), and malaria disease progression. A combination of immunological, epidemiological, and clinical factors can contribute to these interactions and add to a worsening prognosis for people affected by HIV/AIDS, TB, and malaria. Together these results point to the impacts of the highest-prevalence NTDs on the health outcomes of malaria, HIV/AIDS, and TB, and present new opportunities to design innovative public health interventions and strategies for these 'big three' diseases. This analysis describes the current findings of research and what research is still needed to strengthen the knowledge base of the impacts NTDs have on the big three. Copyright © 2015 The Author. Published by Elsevier Ltd. All rights reserved.
Disease Management, Coping, and Functional Disability in Pediatric Sickle Cell Disease
Oliver-Carpenter, Gloria; Barach, Ilana; Crosby, Lori E.; Valenzuela, Jessica; Mitchell, Monica J.
2016-01-01
Background Youth with sickle cell disease (SCD) experience chronic symptoms that significantly interfere with physical, academic, and social-emotional functioning. Thus, to effectively manage SCD, youth and caregivers must work collaboratively to ensure optimal functioning. The goal of the current study was to examine the level of involvement in disease management tasks for youth with SCD and their caregivers. The study also examined the relationship between involvement in disease management tasks, daily functioning, and coping skills. The study utilized collaborative care and disease management theoretical frameworks. Methods Youth and caregivers participated in the study during an annual research and education day event. Forty-seven patients with SCD aged 6 to 18 years and their caregivers completed questionnaires examining level of involvement in disease management tasks, youth functional disability, and youth coping strategies. Caregivers also completed a demographic and medical history form. Results Parents and youth agreed that parents are significantly more involved in disease management tasks than youth, although level of involvement varied by task. Decreased parent involvement was related to greater use of coping strategies by patients, including massage, prayer, and positive thinking. Higher functional disability (lower functioning) was related to greater parent involvement in disease management tasks, suggesting that greater impairment may encourage increased parent involvement. Conclusions Health professionals working with families of youth with SCD should discuss with parents and youth how disease management tasks and roles will be shared and transferred during adolescence. Parents and youth may also benefit from a discussion of these issues within their own families. PMID:21443065
Fox, Cynthia; Ebersbach, Georg; Ramig, Lorraine; Sapir, Shimon
2012-01-01
Recent advances in neuroscience have suggested that exercise-based behavioral treatments may improve function and possibly slow progression of motor symptoms in individuals with Parkinson disease (PD). The LSVT (Lee Silverman Voice Treatment) Programs for individuals with PD have been developed and researched over the past 20 years beginning with a focus on the speech motor system (LSVT LOUD) and more recently have been extended to address limb motor systems (LSVT BIG). The unique aspects of the LSVT Programs include the combination of (a) an exclusive target on increasing amplitude (loudness in the speech motor system; bigger movements in the limb motor system), (b) a focus on sensory recalibration to help patients recognize that movements with increased amplitude are within normal limits, even if they feel “too loud” or “too big,” and (c) training self-cueing and attention to action to facilitate long-term maintenance of treatment outcomes. In addition, the intensive mode of delivery is consistent with principles that drive activity-dependent neuroplasticity and motor learning. The purpose of this paper is to provide an integrative discussion of the LSVT Programs including the rationale for their fundamentals, a summary of efficacy data, and a discussion of limitations and future directions for research. PMID:22530161
Big Data and Health Economics: Strengths, Weaknesses, Opportunities and Threats.
Collins, Brendan
2016-02-01
'Big data' is the collective name for the increasing capacity of information systems to collect and store large volumes of data, which are often unstructured and time stamped, and to analyse these data by using regression and other statistical techniques. This is a review of the potential applications of big data and health economics, using a SWOT (strengths, weaknesses, opportunities, threats) approach. In health economics, large pseudonymized databases, such as the planned care.data programme in the UK, have the potential to increase understanding of how drugs work in the real world, taking into account adherence, co-morbidities, interactions and side effects. This 'real-world evidence' has applications in individualized medicine. More routine and larger-scale cost and outcomes data collection will make health economic analyses more disease specific and population specific but may require new skill sets. There is potential for biomonitoring and lifestyle data to inform health economic analyses and public health policy.
How Will Big Data Improve Clinical and Basic Research in Radiation Therapy?
Rosenstein, Barry S.; Capala, Jacek; Efstathiou, Jason A.; Hammerbacher, Jeff; Kerns, Sarah; Kong, Feng-Ming (Spring); Ostrer, Harry; Prior, Fred W.; Vikram, Bhadrasain; Wong, John; Xiao, Ying
2015-01-01
Historically, basic scientists and clinical researchers have transduced reality into data so that they might explain or predict the world. Because data are fundamental to their craft, these investigators have been on the front lines of the Big Data deluge in recent years. Radiotherapy data are complex and longitudinal data sets are frequently collected to track both tumor and normal tissue response to therapy. As basic, translational and clinical investigators explore with increasingly greater depth the complexity of underlying disease processes and treatment outcomes, larger sample populations are required for research studies and greater quantities of data are being generated. In addition, well-curated research and trial data are being pooled in public data repositories to support large-scale analyses. Thus, the tremendous quantity of information produced in both basic and clinical research in radiation therapy can now be considered as having entered the realm of Big Data. PMID:26797542
Salathé, Marcel
2016-01-01
The digital revolution has contributed to very large data sets (ie, big data) relevant for public health. The two major data sources are electronic health records from traditional health systems and patient-generated data. As the two data sources have complementary strengths—high veracity in the data from traditional sources and high velocity and variety in patient-generated data—they can be combined to build more-robust public health systems. However, they also have unique challenges. Patient-generated data in particular are often completely unstructured and highly context dependent, posing essentially a machine-learning challenge. Some recent examples from infectious disease surveillance and adverse drug event monitoring demonstrate that the technical challenges can be solved. Despite these advances, the problem of verification remains, and unless traditional and digital epidemiologic approaches are combined, these data sources will be constrained by their intrinsic limits. PMID:28830106
Vilaplana, Cristina; Cardona, Pere-Joan
2014-01-01
This short review explores the large gap between clinical issues and basic science, and suggests why tuberculosis research should focus on redirecting the immune system and not only on eradicating the Mycobacterium tuberculosis bacillus. Throughout the manuscript, several concepts involved in human tuberculosis are explored in order to understand the big picture, including infection and disease dynamics, animal modeling, liquefaction, inflammation and immunomodulation. Scientists should take all these factors into account in order to answer questions with clinical relevance. Moreover, the concept that a strong inflammatory response is required to develop cavitary tuberculosis disease opens a new field for developing therapeutic and prophylactic tools in which destruction of the bacilli may not necessarily be the final goal. PMID:24592258
Corrosion casts of big bubbles formed during deep anterior lamellar keratoplasty.
Feizi, Sepehr; Kanavi, Mozhgan Rezaei; Kharaghani, Davood; Balagholi, Sahar; Meskinfam, Masoumeh; Javadi, Mohammad Ali
2016-11-01
To characterize the walls of big bubbles formed during deep anterior lamellar keratoplasty (DALK) using the corrosion casting technique. Fresh corneoscleral buttons with normal transparency and without any known eye diseases (n = 11) were obtained from 11 human donors. A 20-gauge needle was used to inject a solution of 20 % polyvinyl alcohol (PVA) immediately beneath the corneal endothelium to form big bubbles in eight corneoscleral buttons. In the second experiment on three corneoscleral buttons, a big bubble was first formed by air injection beneath the endothelium. Thereafter, 20 % PVA was injected into the bubble space. Scanning electron microscopy was used to characterize the surfaces of the casts, which replicated the walls of the big bubbles. A type-1 bubble was formed in all corneas. In one cornea, one type-1 bubble was initially formed centrally, and while it was enlarged, an eccentric type-2 bubble appeared. Scanning electron microscopy showed that the casts of type-1 bubbles had two distinct surfaces. The anterior surface demonstrated several holes or pits, depending on the material used for the bubble formation, whereas the posterior surface exhibited an uneven surface. The anterior and posterior surfaces of the type-2 cast were more or less similar. A communication measuring 531.9 µm in length and 171.4 µm in diameter was found between the two bubbles. The corrosion casting technique provides a permanent three-dimensional record of the potential spaces and barriers in the posterior corneal stroma, which explains several features associated with big-bubble DALK.
A small step for science, a big one for commerce.
Birkett, Liam
2005-01-01
The excellent work being performed in medical science advances is to be admired and applauded. In each case the quest is for perfection and for bringing the task in hand to its final solution. Along the way, milestones are being passed whose particular merits may be overlooked, because the eyes are focused all the time on the ultimate goal. The conference highlights many areas of interest and endeavour, some of which parallel, duplicate, overlap and/or complement others.
Cloud Based Metalearning System for Predictive Modeling of Biomedical Data
Vukićević, Milan
2014-01-01
Rapid growth in the generation and storage of biomedical data has enabled many opportunities for predictive modeling and improvement of healthcare processes. On the other hand, analysis of such large amounts of data is a difficult and computationally intensive task for most existing data mining algorithms. This problem is addressed by proposing a cloud-based system that integrates a metalearning framework, for ranking and selecting the best predictive algorithms for the data at hand, with open-source big data technologies for the analysis of biomedical data. PMID:24892101
NASA Astrophysics Data System (ADS)
Poyato, David; Soler, Juan
2016-09-01
The study of human behavior is a complex task, and modeling some aspects of this behavior is an even more complicated and exciting endeavor. From crisis management to decision making in evacuation protocols, understanding the complexity of human behavior in stress situations is increasingly demanded by our society, for obvious reasons [5,6,8,12]. In this context, [4] deals with crowd dynamics, with special attention to evacuation.
2012-08-01
Vehicle Electronics and Architecture (VEA) Mini-Symposium, August 14-16, Troy, Michigan: Performance of an Embedded Platform Aggregating and Executing... Performing organization: UBT Technologies, 3250 W. Big Beaver Rd., Ste. 329, Troy, MI. The Vehicular Integration for C4ISR/EW Interoperability (VICTORY) Standard adopts many
Reserve Component General and Flag Officers: A Review of Requirements and Authorized Strength
2016-01-01
...authorized RC G/FOs is constantly changing. The exemptions, as they are written, give the reserves a great deal of needed flexibility. In Chapter... communities appear on the list (e.g., the Chief of Judges of U.S. Army Legal Services, the Assistant and Deputy Chiefs of Chaplains, and Assistant... executing a turnaround; 3. high-responsibility tasks with wide latitude, jobs with big "stakes," both organizationally and personally, and wide scope, in
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rivera, Zaylis Zayas; Bond, Essex
2013-11-22
My co-op at LLNL has been my first professional experience as an Electrical Engineer. I was tasked with carrying out signal processing to analyze data, and with coding in IDL following standard software development principles. The co-op has met all of my needs to continue my professional career, and I feel more confident as I continue working as a student and professional. It is now a big open question for me whether to pursue graduate research or industry after I graduate with my B.S. in Electrical Engineering.
Managing laboratory automation
Saboe, Thomas J.
1995-01-01
This paper discusses the process of managing automated systems through their life cycles within the quality-control (QC) laboratory environment. The focus is on the process of directing and managing the evolving automation of a laboratory; system examples are given. The author shows how both task and data systems have evolved, and how they interrelate. A BIG picture, or continuum view, is presented, and some of the reasons for success or failure of the various examples cited are explored. Finally, some comments on future automation needs are discussed. PMID:18925018
Cowan, D
2001-01-01
Trying to gain a measure of control over their working lives, some physicians are abandoning large group practices for smaller groups. Large groups enjoy whole teams of people performing vital business tasks. Small practices rely on one or two key physicians and managers to tackle everything from customer service to marketing, medical records to human resources. Learn valuable tips for thriving in a small environment and using that extra control to achieve job satisfaction.
Study of an Alternative Career Path for Deck Officers in the Hellenic Navy
2013-03-01
Motivated individuals stay with a task long enough to achieve their goal (pp. 72-73). Perhaps the most well-known theory of motivation is Abraham Maslow's "hierarchy of needs." Maslow hypothesized that, within every human being, there exists a hierarchy of five needs. These are shown in Figure 3... actualization. Maslow separated the five steps into higher and lower order needs. The big difference between higher and lower order needs is that higher
Cognitive Models for Learning to Control Dynamic Systems
2008-05-30
The MILP model contains a number of constraints polynomial in N, M and K, including KN + M equality constraints and 7NM + 2M inequality non-timing constraints; the rest are inequality timing constraints. The size of the MILP model grows rapidly with the problem size, so it is a big challenge to deal with more... task requirement, are studied in this section. An assumption is made in advance that the time of attack delay and flight time to the sink node are
Motivating Reluctant Learners with a Big Bang
NASA Technical Reports Server (NTRS)
Lochner, James C.; Cvetic, Geraldine A.; Hall, Jonathan B.
2007-01-01
We present results of a collaboration between a media specialist, a science teacher, and an astronomer to bring a modern astronomy topic to at-risk, emotionally disabled students who have experienced little success. These normally unengaged students became highly motivated because they were given an authentic task of presenting research on an intriguing science topic, and because they witnessed a collaboration brought together on their behalf. This experience demonstrates that sophisticated astronomy topics can be used to motivate at-risk students.
Is It Really Self-Control? Examining the Predictive Power of the Delay of Gratification Task
Duckworth, Angela L.; Tsukayama, Eli; Kirby, Teri A.
2013-01-01
This investigation tests whether the predictive power of the delay of gratification task (colloquially known as the “marshmallow test”) derives from its assessment of self-control or of theoretically unrelated traits. Among 56 school-age children in Study 1, delay time was associated with concurrent teacher ratings of self-control and Big Five conscientiousness—but not with other personality traits, intelligence, or reward-related impulses. Likewise, among 966 preschool children in Study 2, delay time was consistently associated with concurrent parent and caregiver ratings of self-control but not with reward-related impulses. While delay time in Study 2 was also related to concurrently measured intelligence, predictive relations with academic, health, and social outcomes in adolescence were more consistently explained by ratings of effortful control. Collectively, these findings suggest that delay task performance may be influenced by extraneous traits, but its predictive power derives primarily from its assessment of self-control. PMID:23813422
2013-01-01
Due to an increase in the severity of cases of rat lungworm disease and increased media attention, community outreach efforts on the island of Hawai‘i (the Big Island) were revisited in 2009 to include an updated flier, radio interviews, and community presentations. The Puna district of the island has been impacted the most by rat lungworm disease. The biggest challenge in disseminating information was that residents could not accept that only limited information, testing, and treatment options were available. Some people wanted basic information while others requested great detail. Some responded better to information in "pidgin" but others preferred English. Another challenge was to provide information to communities where residents did not read newspapers or watch television news. As a result, a community education group formed and assisted in disseminating information to these communities. But some residents never received the information, and there has been no decrease in cases. Information must be sent repeatedly and through different media, including free journals, local community newspapers, local television stations, and even social networking.
Lowry, Kristin A; Carrel, Andrew J; McIlrath, Jessica M; Smiley-Oyen, Ann L
2010-04-01
To determine if gait stability, as measured by harmonic ratios (HRs) derived from trunk accelerations, is improved during 3 amplitude-based cueing strategies (visual cues, lines on the floor 20% longer than preferred step length; verbal cues, experimenter saying "big step" every third step; cognitive cues, participants think "big step") in people with Parkinson's disease. Gait analysis with a triaxial accelerometer. University research laboratory. A volunteer sample of persons with Parkinson's disease (N=7) (Hoehn and Yahr stages 2-3). Not applicable. Gait stability was quantified by anterior-posterior (AP), vertical, and mediolateral (ML) HRs; higher ratios indicated greater gait stability. Spatiotemporal parameters assessed were walking speed, stride length, cadence, and the coefficient of variation for stride time. Of the amplitude-based cues, verbal and cognitive cues resulted in the largest improvements in the AP HR (P=.018), with a trend in the vertical HR, as well as the largest improvements in both stride length and velocity. None of the cues positively affected stability in the ML direction. Descriptively, all participants increased speed and stride length, but only those in Hoehn and Yahr stage 2 (not Hoehn and Yahr stage 3) showed improvements in HRs. Cueing for "big steps" is effective for improving gait stability in the AP direction, with modest improvements in the vertical direction, but it is not effective in the ML direction. These data support the use of trunk acceleration measures in assessing the efficacy of common therapeutic interventions. Copyright 2010 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
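Harmonic ratios of this kind are commonly computed by decomposing one stride of trunk acceleration into harmonics of the stride frequency and, for the AP and vertical directions, dividing the summed amplitudes of the even harmonics (in phase with the two steps per stride) by those of the odd harmonics. The sketch below follows that common definition on synthetic data; it is an illustration of the metric, not the authors' processing pipeline, and the signal shapes are assumed.

```python
import numpy as np

def harmonic_ratio(stride: np.ndarray, n_harmonics: int = 10) -> float:
    """Even/odd harmonic amplitude ratio over one stride-length segment
    (AP/vertical convention: higher = smoother, more symmetric gait)."""
    amp = np.abs(np.fft.rfft(stride))
    even = amp[2:2 * n_harmonics + 1:2].sum()   # harmonics 2, 4, ...
    odd = amp[1:2 * n_harmonics:2].sum()        # harmonics 1, 3, ...
    return even / odd

# One stride sampled at 256 points (harmonic 1 = stride frequency).
t = np.linspace(0.0, 1.0, 256, endpoint=False)
# Near-symmetric gait: dominated by the 2-per-stride step component.
symmetric = np.sin(2 * np.pi * 2 * t) + 0.05 * np.sin(2 * np.pi * t)
# Asymmetric gait: a larger once-per-stride (odd-harmonic) component.
asymmetric = symmetric + 0.5 * np.sin(2 * np.pi * t)

print(harmonic_ratio(symmetric), harmonic_ratio(asymmetric))
```

The symmetric stride yields a much higher ratio than the asymmetric one, matching the interpretation in the abstract that higher HRs indicate greater gait stability.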
Systems and precision medicine approaches to diabetes heterogeneity: a Big Data perspective.
Capobianco, Enrico
2017-12-01
Big Data, and in particular Electronic Health Records, provide the medical community with a great opportunity to analyze multiple pathological conditions at an unprecedented depth for many complex diseases, including diabetes. How can we infer on diabetes from large heterogeneous datasets? A possible solution is provided by invoking next-generation computational methods and data analytics tools within systems medicine approaches. By deciphering the multi-faceted complexity of biological systems, the potential of emerging diagnostic tools and therapeutic functions can be ultimately revealed. In diabetes, a multidimensional approach to data analysis is needed to better understand the disease conditions, trajectories and the associated comorbidities. Elucidation of multidimensionality comes from the analysis of factors such as disease phenotypes, marker types, and biological motifs while seeking to make use of multiple levels of information including genetics, omics, clinical data, and environmental and lifestyle factors. Examining the synergy between multiple dimensions represents a challenge. In such regard, the role of Big Data fuels the rise of Precision Medicine by allowing an increasing number of descriptions to be captured from individuals. Thus, data curations and analyses should be designed to deliver highly accurate predicted risk profiles and treatment recommendations. It is important to establish linkages between systems and precision medicine in order to translate their principles into clinical practice. Equivalently, to realize their full potential, the involved multiple dimensions must be able to process information ensuring inter-exchange, reducing ambiguities and redundancies, and ultimately improving health care solutions by introducing clinical decision support systems focused on reclassified phenotypes (or digital biomarkers) and community-driven patient stratifications.
The webinar was requested by the Justus-Warren Heart Disease and Stroke Prevention Task Force. From their website, “The task force was established in 1995 in North Carolina to provide statewide leadership for the prevention and management of cardiovascular disease. Meetings are...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramanathan, Arvind; Pullum, Laura L; Steed, Chad A
2013-01-01
In this paper, we present an overview of the big data challenges in disease bio-surveillance and then discuss the use of visual analytics for integrating data and turning it into knowledge. We will explore two integration scenarios: (1) combining text and multimedia sources to improve situational awareness and (2) enhancing disease spread model data with real-time bio-surveillance data. Together, the proposed integration methodologies can improve awareness about when, where and how emerging diseases can affect wide geographic regions.
[Madagascar: public health situation on the "Big Island" at the beginning of the 21st century].
Andrianarisoa, A C E; Rakotoson, J; Randretsa, M; Rakotondravelo, S; Rakotoarimanana, R D; Rakotomizao, J; Aubry, P
2007-02-01
The main public health issue in Madagascar at the beginning of the 21st century still involves transmissible infectious diseases, including re-emerging diseases such as bubonic plague and emerging diseases such as HIV/AIDS, dengue fever and Chikungunya virus infection. Sanitation and hygiene, especially access to clean water, remain poor, particularly in rural areas. No improvement in the public health situation with regard to malaria, schistosomiasis or cysticercosis, or to non-infectious diseases such as protein-energy malnutrition, is expected within the next decade.
Griffen, Edward J; Dossetter, Alexander G; Leach, Andrew G; Montague, Shane
2018-03-22
AI comes to lead optimization: medicinal chemistry in all disease areas can be accelerated by exploiting our pre-competitive knowledge in an unbiased way. Copyright © 2018 Elsevier Ltd. All rights reserved.
To Your Health: NLM update transcript - NIH MedlinePlus magazine Winter 2018
... who is a star of 'The Big Bang Theory' television show, and the producer/narrator of a ... trials, NIH MedlinePlus magazine reports the current life expectancy of a person with sickle cell disease is ...
Mental-orientation: A new approach to assessing patients across the Alzheimer's disease spectrum.
Peters-Founshtein, Gregory; Peer, Michael; Rein, Yanai; Kahana Merhavi, Shlomzion; Meiner, Zeev; Arzy, Shahar
2018-05-21
This study aims to assess the role of mental-orientation in the diagnosis of mild cognitive impairment and Alzheimer's disease using a novel task. A behavioral study (Experiment 1) compared the mental-orientation task to standard neuropsychological tests in patients across the Alzheimer's disease spectrum. A functional MRI study (Experiment 2) in young adults compared activations evoked by the mental-orientation and standard-orientation tasks as well as their overlap with brain regions susceptible to Alzheimer's disease pathology. The mental-orientation task differentiated mild cognitively impaired and healthy controls at 95% accuracy, while the Addenbrooke's Cognitive Examination, Mini-Mental State Examination and standard-orientation achieved 74%, 70% and 50% accuracy, respectively. Functional MRI revealed the mental-orientation task to preferentially recruit brain regions exhibiting early Alzheimer's-related atrophy, unlike the standard-orientation test. Mental-orientation is suggested to play a key role in Alzheimer's disease, and consequently in early detection and follow-up of patients along the Alzheimer's disease spectrum. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Emotional reactivity and awareness of task performance in Alzheimer's disease.
Mograbi, Daniel C; Brown, Richard G; Salas, Christian; Morris, Robin G
2012-07-01
Lack of awareness about performance in tasks is a common feature of Alzheimer's disease. Nevertheless, clinical anecdotes have suggested that patients may show emotional or behavioural responses to the experience of failure despite reporting limited awareness, an aspect which has been little explored experimentally. The current study investigated emotional reactions to success or failure in tasks despite unawareness of performance in Alzheimer's disease. For this purpose, novel computerised tasks which expose participants to systematic success or failure were used in a group of Alzheimer's disease patients (n=23) and age-matched controls (n=21). Two experiments, the first with reaction time tasks and the second with memory tasks, were carried out, and in each experiment two parallel tasks were used, one in a success condition and one in a failure condition. Awareness of performance was measured comparing participant estimations of performance with actual performance. Emotional reactivity was assessed with a self-report questionnaire and rating of filmed facial expressions. In both experiments the results indicated that, relative to controls, Alzheimer's disease patients exhibited impaired awareness of performance, but comparable differential reactivity to failure relative to success tasks, both in terms of self-report and facial expressions. This suggests that affective valence of failure experience is processed despite unawareness of task performance, which might indicate implicit processing of information in neural pathways bypassing awareness. Copyright © 2012 Elsevier Ltd. All rights reserved.
[Big data, medical language and biomedical terminology systems].
Schulz, Stefan; López-García, Pablo
2015-08-01
A variety of rich terminology systems, such as thesauri, classifications, nomenclatures and ontologies support information and knowledge processing in health care and biomedical research. Nevertheless, human language, manifested as individually written texts, persists as the primary carrier of information, in the description of disease courses or treatment episodes in electronic medical records, and in the description of biomedical research in scientific publications. In the context of the discussion about big data in biomedicine, we hypothesize that the abstraction of the individuality of natural language utterances into structured and semantically normalized information facilitates the use of statistical data analytics to distil new knowledge out of textual data from biomedical research and clinical routine. Computerized human language technologies are constantly evolving and are increasingly ready to annotate narratives with codes from biomedical terminology. However, this depends heavily on linguistic and terminological resources. The creation and maintenance of such resources is labor-intensive. Nevertheless, it is sensible to assume that big data methods can be used to support this process. Examples include the learning of hierarchical relationships, the grouping of synonymous terms into concepts and the disambiguation of homonyms. Although clear evidence is still lacking, the combination of natural language technologies, semantic resources, and big data analytics is promising.
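One of the terminology-maintenance tasks the abstract lists, grouping synonymous terms into concepts, can be pictured with a toy sketch. This is purely illustrative and not from the paper: the similarity measure, threshold, and example terms are my own assumptions, and real terminology systems would rely on corpus statistics and expert curation rather than surface-string matching.

```python
# Illustrative sketch (assumptions, not the paper's method): group
# candidate synonym terms into concepts by surface-string similarity.
from difflib import SequenceMatcher

def similarity(a, b):
    """Similarity in [0, 1] based on longest matching subsequences."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def group_terms(terms, threshold=0.75):
    """Greedy grouping: each term joins the first group whose
    representative (first member) is similar enough, else starts
    a new group."""
    groups = []
    for term in terms:
        for group in groups:
            if similarity(term, group[0]) >= threshold:
                group.append(term)
                break
        else:
            groups.append([term])
    return groups

terms = ["myocardial infarction", "myocardial infarct",
         "heart attack", "cardiac arrest"]
print(group_terms(terms))
```

A real pipeline would treat "heart attack" and "myocardial infarction" as the same concept, which string similarity alone cannot recover; that gap is exactly why the abstract argues for combining language technologies with big data analytics.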
Kimble, L P
2001-01-01
Household tasks are highly salient physical activities for women. Inability to perform household tasks may serve as an important marker of limitations imposed by cardiac symptoms. The purpose of this study was to examine the impact of cardiac symptoms on perceived ability to perform household tasks in women with coronary artery disease and to examine relationships among age, whether the woman lived alone, ability to perform household tasks, and cardiac-related quality of life. Forty-one women with confirmed diagnosis of coronary artery disease and a mean age of 66 years (SD 12 years) were interviewed about the impact of their cardiac symptoms and perceived ability to perform household tasks (Household Activities Scale) and cardiac-related quality of life (Seattle Angina Questionnaire). The women were primarily white (89.4%) and retired (65.9%). Forty-six percent were married, and 26.8% lived alone. "Washing dishes" (51.3%) was the only task a majority of the sample could perform without limitation. Household tasks most commonly reported as no longer performed included carrying laundry (24.4%), vacuuming (30.0%), and scrubbing the floor (51.2%). The task most commonly modified because of cardiac symptoms was changing bed linens (60%). Of the 14 household tasks, women performed a mean of 3.39 (SD 3.36) activities without difficulty. Total number of household activities performed without difficulty was associated with better quality of life in the area of exertional capacity (r = 0.50, P = 0.001). Women who lived alone reported greater perceived ability to perform household tasks than women who did not live alone (r = 0.31, P = 0.05). Age was not significantly associated with perceived household task performance (r = -0.22, P = 0.17). Women with coronary artery disease (CAD) perceived cardiac symptoms as disrupting their ability to perform household tasks. 
Future research is needed to determine the independent impact of cardiac symptoms on functional limitations, especially in older women with heart disease, and whether changes in ability to perform household tasks could be a marker for coronary artery disease progression in women.
A framework of space weather satellite data pipeline
NASA Astrophysics Data System (ADS)
Ma, Fuli; Zou, Ziming
Various applications indicate a need for permanent space weather information. The diversity of available instruments enables a big variety of products. As an indispensable part of the space weather satellite operation system, the space weather data processing system is more complicated than before. The information handled by the data processing system has been used in more and more fields, such as space weather monitoring and space weather prediction models. In the past few years, many satellites have been launched by China. The data volume downlinked by these satellites has reached the so-called big data level, and it will continue to grow fast in the next few years due to the implementation of many new space weather programs. Because of the huge amount of data, the current infrastructure is no longer capable of processing data in a timely manner, so we proposed a new space weather data processing system (SWDPS) based on the architecture of cloud computing. Similar to Hadoop, SWDPS decomposes jobs into smaller tasks which are executed by many different work nodes. The Control Center in SWDPS, much like the NameNode and JobTracker within Hadoop that bind the data to the cluster, establishes a work plan for the cluster once a client submits data. The Control Center allocates nodes for the tasks and monitors the status of all tasks. Like Hadoop's TaskTracker, the Compute Nodes in SWDPS are the slaves of the Control Center and are responsible for calling plugins (e.g., dividing and sorting plugins) to execute the concrete jobs. They also manage the status of all their tasks and report it to the Control Center. Once a task fails, a Compute Node notifies the Control Center. The Control Center then decides what to do: it may resubmit the job elsewhere, it may mark that specific record as something to avoid, and it may even blacklist the Compute Node as unreliable.
In addition to these modules, SWDPS has a distinct module named Data Service, which provides file operations such as adding, deleting, modifying and querying for the clients. Beyond that, Data Service can also split and combine files based on the timestamp of each record. SWDPS has been in use for quite some time and has successfully handled data from many satellites, such as FY1C, FY1D, FY2A, FY2B, etc. Its good performance in actual operation shows that SWDPS is stable and reliable.
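The Control Center / Compute Node division of labor and the resubmit-on-failure behaviour described in the abstract can be sketched in miniature. This is a hypothetical illustration, not SWDPS code: the function names, thread-based workers, and retry policy are all assumptions standing in for the real distributed system.

```python
# Hypothetical sketch of a control-center/worker pattern with retry,
# loosely mirroring the SWDPS description (names invented here).
import queue
import threading

def control_center(tasks, num_workers=3, max_retries=2):
    """Distribute callables to worker threads; resubmit failed tasks."""
    pending = queue.Queue()
    results = {}
    lock = threading.Lock()
    for task in tasks:
        pending.put((task, 0))  # (task, number of failed attempts so far)

    def compute_node():
        while True:
            try:
                task, tries = pending.get_nowait()
            except queue.Empty:
                return  # no work left for this node
            try:
                value = task()  # run the concrete job (a "plugin")
                with lock:
                    results[task] = value
            except Exception:
                if tries < max_retries:
                    pending.put((task, tries + 1))  # resubmit elsewhere
                # else: give up; a real system might instead mark the
                # record as bad or blacklist the node

    workers = [threading.Thread(target=compute_node)
               for _ in range(num_workers)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return results
```

In the real system the "nodes" are separate machines and the queue is a distributed scheduler, but the retry logic follows the same shape: a failure puts the task back so another node can pick it up.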
Big Bang Day : Afternoon Play - Torchwood: Lost Souls
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2009-10-13
Martha Jones, ex-time traveller and now working as a doctor for a UN task force, has been called to CERN where they're about to activate the Large Hadron Collider. Once activated, the Collider will fire beams of protons together recreating conditions a billionth of a second after the Big Bang - and potentially allowing the human race a greater insight into what the Universe is made of. But so much could go wrong - it could open a gateway to a parallel dimension, or create a black hole - and now voices from the past are calling out to people and scientists have started to disappear... Where have the missing scientists gone? What is the secret of the glowing man? What is lurking in the underground tunnel? And do the dead ever really stay dead? Lost Souls is a spin-off from the award-winning BBC Wales TV production Torchwood. It stars John Barrowman, Freema Agyeman, Eve Myles, Gareth David-Lloyd, Lucy Montgomery (of Titty Bang Bang) and Stephen Critchlow.
Linked Data: Forming Partnerships at the Data Layer
NASA Astrophysics Data System (ADS)
Shepherd, A.; Chandler, C. L.; Arko, R. A.; Jones, M. B.; Hitzler, P.; Janowicz, K.; Krisnadhi, A.; Schildhauer, M.; Fils, D.; Narock, T.; Groman, R. C.; O'Brien, M.; Patton, E. W.; Kinkade, D.; Rauch, S.
2015-12-01
The challenges presented by big data are straining data management software architectures of the past. For smaller existing data facilities, the technical refactoring of software layers become costly to scale across the big data landscape. In response to these challenges, data facilities will need partnerships with external entities for improved solutions to perform tasks such as data cataloging, discovery and reuse, and data integration and processing with provenance. At its surface, the concept of linked open data suggests an uncalculated altruism. Yet, in his concept of five star open data, Tim Berners-Lee explains the strategic costs and benefits of deploying linked open data from the perspective of its consumer and producer - a data partnership. The Biological and Chemical Oceanography Data Management Office (BCO-DMO) addresses some of the emerging needs of its research community by partnering with groups doing complementary work and linking their respective data layers using linked open data principles. Examples will show how these links, explicit manifestations of partnerships, reduce technical debt and provide a swift flexibility for future considerations.
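The data-layer partnership the abstract describes can be pictured with a toy triple store. All URIs, predicates, and names below are invented for illustration; BCO-DMO's actual vocabularies and endpoints are not given in the abstract. The point is only that a shared identifier lets a consumer traverse from one facility's data into a partner's without a shared database.

```python
# Toy linked-data join (all URIs and predicates invented): two
# facilities publish their own triples; a shared URI is the explicit
# manifestation of the partnership between their data layers.
bco_dmo = [
    ("http://example.org/dataset/42", "hasTitle", "CTD casts, cruise X"),
    ("http://example.org/dataset/42", "collectedBy",
     "http://example.org/person/7"),
]
partner = [
    ("http://example.org/person/7", "name", "A. Researcher"),
]

def follow_links(triples_a, triples_b):
    """Join on object URIs of one graph appearing as subjects in the other."""
    joined = []
    for s, p, o in triples_a:
        for s2, p2, o2 in triples_b:
            if o == s2:  # the shared identifier links the two layers
                joined.append((s, p2, o2))
    return joined

print(follow_links(bco_dmo, partner))
```

Real deployments express this with RDF and SPARQL rather than Python tuples, but the federation idea, links instead of copies, is the same.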
Measuring Nursing Value from the Electronic Health Record.
Welton, John M; Harper, Ellen M
2016-01-01
We report the findings of a big data nursing value expert group made up of 14 members of the nursing informatics, leadership, academic and research communities within the United States, tasked with 1. defining nursing value, 2. developing a common data model and metrics for nursing care value, and 3. developing nursing business intelligence tools using the nursing value data set. This work is a component of the Big Data and Nursing Knowledge Development conference series sponsored by the University of Minnesota School of Nursing. The panel met by conference call for fourteen 1.5-hour sessions, a total of 21 hours of interaction, from August 2014 through May 2015. Primary deliverables from the big data expert group were: development and publication of definitions and metrics for nursing value; construction of a common data model to extract key data from electronic health records; and measures of nursing costs and finance to provide a basis for developing nursing business intelligence and analysis systems.
Effects of aging and job demands on cognitive flexibility assessed by task switching.
Gajewski, Patrick D; Wild-Wall, Nele; Schapkin, Sergei A; Erdmann, Udo; Freude, Gabriele; Falkenstein, Michael
2010-10-01
In a cross-sectional, electrophysiological study, 91 workers of a big car factory performed a series of switch tasks to assess their cognitive control functions. Four groups of workers participated in the study: 23 young and 23 middle-aged assembly line employees, and 22 young and 23 middle-aged employees with flexible job demands such as service and maintenance. Participants performed three digit categorisation tasks. In addition to single task blocks, a cue-based (externally guided) and a memory-based (internally guided) task switch block was administered. Compared to young participants, older ones showed the typical decline in reaction time. No differences between younger and older participants regarding the local switch costs could be detected, regardless of the source of the current task information. In contrast, whereas the groups did not differ in mixing costs in the cued condition, clear performance decrements in the memory-based mixing block were observed in the group of older employees with repetitive work demands. These findings were corroborated by a number of electrophysiological results showing a reduced CNV suggesting an impairment of task-specific preparation, an attenuated P3b suggesting reduced working memory capacity, and a decreased Ne suggesting deficits in error monitoring in older participants with repetitive job demands. The results are compatible with the assumption that long-lasting, unchallenging job demands may induce several neurocognitive impairments which are already evident in the early fifties. Longitudinal studies are needed to confirm this assumption. Copyright © 2010 Elsevier B.V. All rights reserved.
Precision medicine for managing chronic diseases.
Śliwczynski, Andrzej; Orlewska, Ewa
2016-08-18
Precision medicine (PM) is an important modern paradigm for combining new types of metrics with big medical datasets to create prediction models for prevention, diagnosis, and specific therapy of chronic diseases. The aim of this paper was to differentiate PM from personalized medicine, to show potential benefits of PM for managing chronic diseases, and to define problems with implementation of PM into clinical practice. PM strategies in chronic airway diseases, diabetes, and cardiovascular diseases show that the key to developing PM is the addition of big datasets to the course of individually profiling diseases and patients. Integration of PM into clinical practice requires the reengineering of the health care infrastructure by incorporating necessary tools for clinicians and patients to enable data collection and analysis, interpretation of the results, as well as to facilitate treatment choices based on new understanding of biological pathways. The size of datasets and their large variability pose a considerable technical and statistical challenge. The potential benefits of using PM are as follows: 1) broader possibilities for physicians to use the achievements of genomics, proteomics, metabolomics, and other "omics" disciplines in routine clinical practice; 2) better understanding of the pathogenesis and epidemiology of diseases; 3) a revised approach to prevention, diagnosis, and treatment of chronic diseases; 4) better integration of electronic medical records as well as data from sensors and software applications in an interactive network of knowledge aimed at improving the modelling and testing of therapeutic and preventative strategies, stimulating further research, and spreading information to the general public.
Vollmann, Manja; Pukrop, Jörg; Salewski, Christel
2016-04-01
A rheumatic disease can severely impair a person's quality of life. The degree of impairment, however, is not closely related to objective indicators of disease severity. This study investigated the influence and the interplay of core psychological factors, i.e., personality and coping, on life satisfaction in patients with rheumatic diseases. Particularly, it was tested whether coping mediates the effects of personality on life satisfaction. In a cross-sectional design, 158 patients diagnosed with a rheumatic disease completed questionnaires assessing the Big 5 personality traits (BFI-10), several disease-related coping strategies (EFK) and life satisfaction (HSWBS). Data were analyzed using a complex multiple mediation analysis with the Big 5 personality traits as predictors, coping strategies as mediators and life satisfaction as outcome. All personality traits and seven of the nine coping strategies were associated with life satisfaction (|r|s > 0.16, ps ≤ 0.05). The mediation analysis revealed that personality traits had no direct, but rather indirect effects on life satisfaction through coping. Neuroticism had a negative indirect effect on life satisfaction through less active problem solving and more depressive coping (indirect effects > -0.03, ps < 0.05). Extraversion, agreeableness, and conscientiousness had positive indirect effects on life satisfaction through more active problem solving, less depressive coping and/or a more active search for social support (indirect effects > 0.06, ps < 0.05). Personality and coping play a role in adjustment to rheumatic diseases. The interplay of these variables should be considered in psychological interventions for patients with rheumatic diseases.
A Fast Projection-Based Algorithm for Clustering Big Data.
Wu, Yun; He, Zhiquan; Lin, Hao; Zheng, Yufei; Zhang, Jingfen; Xu, Dong
2018-06-07
With the fast development of various techniques, more and more data have been accumulated, with the unique properties of large size (tall) and high dimension (wide). The era of big data is coming. How to understand and discover new knowledge from these data has attracted more and more scholars' attention and has become the most important task in data mining. As one of the most important techniques in data mining, clustering analysis, a kind of unsupervised learning, can group a set of data into clusters that are meaningful, useful, or both. Thus, the technique has played a very important role in knowledge discovery in big data. However, when facing large-sized and high-dimensional data, most current clustering methods exhibit poor computational efficiency and a high requirement for computational resources, which prevents us from clarifying the intrinsic properties of the data and discovering the new knowledge behind it. Based on this consideration, we developed a powerful clustering method, called MUFOLD-CL. The principle of the method is to project the data points onto the centroid, and then to measure the similarity between any two points by comparing their projections on the centroid. The proposed method achieves linear time complexity with respect to the sample size. Comparison with the K-Means method on very large data showed that our method produces better accuracy and requires less computational time, demonstrating that MUFOLD-CL can serve as a valuable tool, or at least play a complementary role to other existing methods, for big data clustering. Further comparisons with state-of-the-art clustering methods on smaller datasets showed that our method was fastest and achieved comparable accuracy. For the convenience of interested scholars, a free software package was constructed.
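The projection idea can be sketched as follows, based only on my reading of the abstract: project each point onto the direction of the data centroid, then cluster the resulting one-dimensional values. MUFOLD-CL's actual algorithm is not spelled out in the abstract, so the gap-splitting step below is an assumption used purely to make the sketch runnable.

```python
# Sketch of projection-based clustering (an interpretation of the
# abstract, not the authors' code): reduce each point to its scalar
# projection on the centroid direction, then split the sorted
# projections at the largest gaps.
from math import sqrt

def project_onto_centroid(points):
    """Scalar projection of each point on the centroid direction."""
    dim = len(points[0])
    centroid = [sum(p[d] for p in points) / len(points) for d in range(dim)]
    norm = sqrt(sum(x * x for x in centroid)) or 1.0
    return [sum(p[d] * centroid[d] for d in range(dim)) / norm
            for p in points]

def cluster_by_projection(points, k):
    """Assign k cluster labels by cutting at the k-1 largest gaps."""
    proj = project_onto_centroid(points)
    order = sorted(range(len(points)), key=lambda i: proj[i])
    gaps = [(proj[order[j + 1]] - proj[order[j]], j)
            for j in range(len(order) - 1)]
    cuts = sorted(j for _, j in sorted(gaps, reverse=True)[:k - 1])
    labels = [0] * len(points)
    cluster, prev = 0, 0
    for c in cuts:
        for i in order[prev:c + 1]:
            labels[i] = cluster
        cluster += 1
        prev = c + 1
    for i in order[prev:]:
        labels[i] = cluster
    return labels
```

Each projection is a single pass over the data, which is where the near-linear scaling in sample size comes from; only the final 1-D sort is superlinear.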
Social Media, Big Data, and Mental Health: Current Advances and Ethical Implications.
Conway, Mike; O'Connor, Daniel
2016-06-01
Mental health (including substance abuse) is the fifth greatest contributor to the global burden of disease, with an economic cost estimated to be US $2.5 trillion in 2010, and expected to double by 2030. Developing information systems to support and strengthen population-level mental health monitoring forms a core part of the World Health Organization's Comprehensive Action Plan 2013-2020. In this paper, we review recent work that utilizes social media "big data" in conjunction with associated technologies like natural language processing and machine learning to address pressing problems in population-level mental health surveillance and research, focusing both on technological advances and core ethical challenges.
Big Data Transforms Discovery-Utilization Therapeutics Continuum.
Waldman, S A; Terzic, A
2016-03-01
Enabling omic technologies adopt a holistic view to produce unprecedented insights into the molecular underpinnings of health and disease, in part, by generating massive high-dimensional biological data. Leveraging these systems-level insights as an engine driving the healthcare evolution is maximized through integration with medical, demographic, and environmental datasets from individuals to populations. Big data analytics has accordingly emerged to add value to the technical aspects of storage, transfer, and analysis required for merging vast arrays of omic-, clinical-, and eco-datasets. In turn, this new field at the interface of biology, medicine, and information science is systematically transforming modern therapeutics across discovery, development, regulation, and utilization. © 2015 ASCPT.
2010-01-01
Background The purpose of the work reported here is to test reliable molecular profiles using routinely processed formalin-fixed paraffin-embedded (FFPE) tissues from participants of the clinical trial BIG 1-98 with a median follow-up of 60 months. Methods RNA from fresh frozen (FF) and FFPE tumor samples of 82 patients were used for quality control, and independent FFPE tissues of 342 postmenopausal participants of BIG 1-98 with ER-positive cancer were analyzed by measuring prospectively selected genes and computing scores representing the functions of the estrogen receptor (eight genes, ER_8), the progesterone receptor (five genes, PGR_5), Her2 (two genes, HER2_2), and proliferation (ten genes, PRO_10) by quantitative reverse transcription PCR (qRT-PCR) on TaqMan Low Density Arrays. Molecular scores were computed for each category and ER_8, PGR_5, HER2_2, and PRO_10 scores were combined into a RISK_25 score. Results Pearson correlation coefficients between FF- and FFPE-derived scores were at least 0.94 and high concordance was observed between molecular scores and immunohistochemical data. The HER2_2, PGR_5, PRO_10 and RISK_25 scores were significant predictors of disease-free survival (DFS) in univariate Cox proportional hazard regression. PRO_10 and RISK_25 scores predicted DFS in patients with histological grade II breast cancer and in lymph node positive disease. The PRO_10 and PGR_5 scores were independent predictors of DFS in multivariate Cox regression models incorporating clinical risk indicators; PRO_10 outperformed Ki-67 labeling index in multivariate Cox proportional hazard analyses. Conclusions Scores representing the endocrine responsiveness and proliferation status of breast cancers were developed from gene expression analyses based on RNA derived from FFPE tissues.
The validation of the molecular scores with tumor samples of participants of the BIG 1-98 trial demonstrates that such scores can serve as independent prognostic factors to estimate disease-free survival (DFS) in postmenopausal patients with estrogen receptor positive breast cancer. Trial Registration Current Controlled Trials: NCT00004205 PMID:20144231
Masso, Majid; Vaisman, Iosif I
2014-01-01
The AUTO-MUTE 2.0 stand-alone software package includes a collection of programs for predicting functional changes to proteins upon single residue substitutions, developed by combining structure-based features with trained statistical learning models. Three of the predictors evaluate changes to protein stability upon mutation, each complementing a distinct experimental approach. Two additional classifiers are available, one for predicting activity changes due to residue replacements and the other for determining the disease potential of mutations associated with nonsynonymous single nucleotide polymorphisms (nsSNPs) in human proteins. These five command-line driven tools, as well as all the supporting programs, complement those that run our AUTO-MUTE web-based server. Nevertheless, all the codes have been rewritten and substantially altered for the new portable software, and they incorporate several new features based on user feedback. Included among these upgrades is the ability to perform three highly requested tasks: to run "big data" batch jobs; to generate predictions using modified protein data bank (PDB) structures, and unpublished personal models prepared using standard PDB file formatting; and to utilize NMR structure files that contain multiple models.
Functional MR imaging and traumatic paraplegia: preliminary report.
Sabbah, P; Lévêque, C; Pfefer, F; Nioche, C; Gay, S; Sarrazin, J L; Barouti, H; Tadie, M; Cordoliani, Y S
2000-12-01
To evaluate residual activity in the sensorimotor cortex of the lower limbs in paraplegia. Five patients suffering from complete paralysis after traumatic spinal cord lesion (ASIA = A). Clinical evaluation of motility and sensitivity. 1. Control functional MR study of the sensorimotor cortex during simultaneous movements of the hands, a motor imagery task, and passive hand stimulation. 2. Concerning the lower limbs, 3 fMRI conditions: (1) the patient attempts to move his toes with flexion-extension; (2) a mental imagery task of the same movement; (3) peripheral passive proprio-somesthesic stimulation (squeezing) of the big toes. Activations were observed in the primary sensorimotor cortex (M1), premotor regions and the supplementary motor area (SMA) during movement and mental imagery tasks in the control study, and during the attempt to move and mental imagery tasks in the lower-limb study. Passive somesthesic stimulation generated activation posterior to the central sulcus in 2 patients. Activations in the sensorimotor cortex of the lower limbs can be generated either by attempting to move or by mental evocation. In spite of a clinical evaluation of complete paraplegia, fMRI can show a persistence of sensitive anatomic conduction, confirmed by somesthesic evoked potentials.
Update of the BIG 1-98 Trial: where do we stand?
Joerger, Markus; Thürlimann, Beat
2009-10-01
There are accumulating data on the clinical benefit of aromatase inhibitors in the adjuvant treatment of early-stage breast cancer in postmenopausal women. The Breast International Group (BIG) 1-98 study is a randomized, phase 3, double-blind trial comparing four adjuvant endocrine treatments of 5 years' duration in postmenopausal women with hormone-receptor-positive breast cancer: letrozole or tamoxifen monotherapy, sequential treatment with tamoxifen followed by letrozole, or vice versa. This article summarizes data presented at the 2009 St. Gallen early breast cancer conference: an update on the monotherapy arms of the BIG 1-98 study, and results from the sequential treatment arms. Implications for daily practice from BIG 1-98 and from other adjuvant trials will be discussed. Despite cross-over from tamoxifen to letrozole by 25% of the patients after unblinding of the tamoxifen monotherapy arm, the improvement of disease-free survival (HR 0.88, 0.78-0.99, p = 0.03) and time to distant recurrence (HR 0.85, 0.72-1.00, p = 0.05) for letrozole monotherapy as compared to tamoxifen monotherapy remained significant in the intention-to-treat (ITT) analysis. A trend for an overall survival advantage for letrozole was seen in the ITT analysis (HR 0.87, 0.75-1.02, p = 0.08). No statistically significant differences were found for the sequential treatment arms versus letrozole monotherapy, with respect to disease-free survival, time to distant recurrence or overall survival. Cumulative incidence analysis of breast cancer recurrence favors the initiation of adjuvant endocrine treatment with letrozole instead of tamoxifen, especially in patients at higher risk for early recurrence. Similarly, data suggest that patients commenced on letrozole can be switched to tamoxifen after 2 years, if required.
The BIG 1-98 study update, with a median follow-up of 76 months, confirms a significant reduction in the risk of breast cancer recurrence and a trend towards improved overall survival with letrozole as compared to tamoxifen, and no unexpected safety concerns with letrozole. Adjuvant endocrine treatment should preferentially be initiated with letrozole. For patients unable to continue letrozole, switching to tamoxifen appears to be an acceptable alternative.
Cummings, J; Fox, N; Vellas, B; Aisen, P; Shan, G
2018-01-01
Disease-modifying therapies are urgently needed for the treatment of Alzheimer's disease (AD). The European Union/United States (EU/US) Task Force represents a broad range of stakeholders including biopharma industry personnel, academicians, and regulatory authorities. The EU/US Task Force represents a community of knowledgeable individuals who can inform views of evidence supporting disease modification and the development of disease-modifying therapies (DMTs). We queried their attitudes toward clinical trial design and biomarkers in support of DMTs. A survey of members of the EU/US Alzheimer's Disease Task Force was conducted. Ninety-three members (87%) responded. The details were analyzed to understand what clinical trial design and biomarker data support disease modification. Task Force members favored the parallel group design compared to delayed start or staggered withdrawal clinical trial designs to support disease modification. Amyloid biomarkers were regarded as providing mild support for disease modification while tau biomarkers were regarded as providing moderate support. Combinations of biomarkers, particularly combinations of tau and neurodegeneration, were regarded as providing moderate to marked support for disease modification and combinations of all three classes of biomarkers were regarded by a majority as providing marked support for disease modification. Task Force members considered that evidence derived from clinical trials and biomarkers supports clinical meaningfulness of an intervention, and when combined with a single clinical trial outcome, nearly all regarded the clinical trial design or biomarker evidence as supportive of disease modification. A minority considered biomarker evidence by itself as indicative of disease modification in prevention trials. Levels of evidence (A,B,C) were constructed based on these observations. 
The survey indicates the view of knowledgeable stakeholders regarding evidence derived from clinical trial design and biomarkers in support of disease modification. Results of this survey can assist in designing clinical trials of DMTs.
Moore, Marianne S; Field, Kenneth A; Behr, Melissa J; Turner, Gregory G; Furze, Morgan E; Stern, Daniel W F; Allegra, Paul R; Bouboulis, Sarah A; Musante, Chelsey D; Vodzak, Megan E; Biron, Matthew E; Meierhofer, Melissa B; Frick, Winifred F; Foster, Jeffrey T; Howell, Daryl; Kath, Joseph A; Kurta, Allen; Nordquist, Gerda; Johnson, Joseph S; Lilley, Thomas M; Barrett, Benjamin W; Reeder, DeeAnn M
2018-01-01
The devastating bat fungal disease, white-nose syndrome (WNS), does not appear to affect all species equally. To experimentally determine susceptibility differences between species, we exposed hibernating naïve little brown myotis (Myotis lucifugus) and big brown bats (Eptesicus fuscus) to the fungus that causes WNS, Pseudogymnoascus destructans (Pd). After hibernating under identical conditions, Pd lesions were significantly more prevalent and more severe in little brown myotis. This species difference in pathology correlates with susceptibility to WNS in the wild and suggests that survival is related to different host physiological responses. We observed another fungal infection, associated with neutrophilic inflammation, that was equally present in all bats. This suggests that both species are capable of generating a response to cold tolerant fungi and that Pd may have evolved mechanisms for evading host responses that are effective in at least some bat species. These host-pathogen interactions are likely mediated not just by host physiological responses, but also by host behavior. Pd-exposed big brown bats, the less affected species, spent more time in torpor than did control animals, while little brown myotis did not exhibit this change. This differential thermoregulatory response to Pd infection by big brown bat hosts may allow for a more effective (or less pathological) immune response to tissue invasion.
Building Vietnamese Herbal Database Towards Big Data Science in Nature-Based Medicine
2018-01-04
...plants, metabolites, diseases, and geography in order to convey a composite description of each individual species. VHO consists of 2881 species and 10887 metabolites... feature descriptions are extremely diverse and highly redundant. Besides the original words or the key words for description, there are millions of...
Fahmi, Fahmi; Nasution, Tigor H; Anggreiny, Anggreiny
2017-01-01
The use of medical imaging in diagnosing brain disease is growing. The challenges are related to the big size of the data and the complexity of the image processing. High standards of hardware and software are demanded, which can only be provided in big hospitals. Our purpose was to provide a smart cloud system to help diagnose brain diseases for hospitals with limited infrastructure. The expertise of neurologists was first embedded in the cloud server to conduct an automatic diagnosis in real time using an image processing technique developed based on the ITK library and a web service. Users upload images through a website and the result, in this case the size of the tumor, is sent back immediately. A specific image compression technique was developed for this purpose. The smart cloud system was able to measure the area and location of tumors, with an average size of 19.91 ± 2.38 cm² and an average response time of 7.0 ± 0.3 s. The capability of the server decreased when multiple clients accessed the system simultaneously: 14 ± 0 s (5 parallel clients) and 27 ± 0.2 s (10 parallel clients). The cloud system was successfully developed to process and analyze medical images for diagnosing brain diseases, in this case tumors.
NASA Astrophysics Data System (ADS)
Mudelsee, Manfred
2015-04-01
The Big Data era has also begun in the climate sciences, not only in economics or molecular biology. We measure climate at increasing spatial resolution by means of satellites and look farther back in time at increasing temporal resolution by means of natural archives and proxy data. We use powerful supercomputers to run climate models. The model output of the calculations made for the IPCC's Fifth Assessment Report amounts to ~650 TB. The 'scientific evolution' of grid computing has started, and the 'scientific revolution' of quantum computing is being prepared. This will increase computing power, and data amount, by several orders of magnitude in the future. However, more data does not automatically mean more knowledge. We need statisticians, who are at the core of transforming data into knowledge. Statisticians notably also explore the limits of our knowledge (uncertainties, that is, confidence intervals and P-values). Mudelsee (2014 Climate Time Series Analysis: Classical Statistical and Bootstrap Methods. Second edition. Springer, Cham, xxxii + 454 pp.) coined the term 'optimal estimation'. Consider the hyperspace of climate estimation. It has many, but not infinite, dimensions. It consists of the three subspaces Monte Carlo design, method and measure. The Monte Carlo design describes the data generating process. The method subspace describes the estimation and confidence interval construction. The measure subspace describes how to detect the optimal estimation method for the Monte Carlo experiment. The envisaged large increase in computing power may bring the following idea of optimal climate estimation into existence. Given a data sample, some prior information (e.g. measurement standard errors) and a set of questions (parameters to be estimated), the first task is simple: perform an initial estimation on the basis of existing knowledge and experience with such types of estimation problems.
The second task requires the computing power: explore the hyperspace to find the suitable method, that is, the mode of estimation and uncertainty-measure determination that optimizes a selected measure for prescribed values close to the initial estimates. Also here, intelligent exploration methods (gradient, Brent, etc.) are useful. The third task is to apply the optimal estimation method to the climate dataset. This conference paper illustrates by means of three examples that optimal estimation has the potential to shape future big climate data analysis. First, we consider various hypothesis tests to study whether climate extremes are increasing in their occurrence. Second, we compare Pearson's and Spearman's correlation measures. Third, we introduce a novel estimator of the tail index, which helps to better quantify climate-change related risks.
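The second illustrative example above, comparing Pearson's and Spearman's correlation measures, can be sketched concretely. The following is a minimal pure-Python illustration on hypothetical toy data (not the paper's climate series); it assumes no tied values, since Spearman's coefficient is computed here simply as Pearson's coefficient applied to ranks:

```python
def pearson(x, y):
    """Pearson's product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def ranks(x):
    """Ranks 1..n (assumes no ties)."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    """Spearman's rho: Pearson's coefficient applied to ranks."""
    return pearson(ranks(x), ranks(y))

# A monotonically increasing series with one extreme value: the outlier
# drags Pearson's estimate down, while rank-based Spearman stays at 1.
x = [1, 2, 3, 4, 5, 6]
y = [1.1, 2.0, 2.9, 4.2, 5.1, 60.0]
print(pearson(x, y), spearman(x, y))
```

Robustness trade-offs of exactly this kind are what the 'measure' subspace of the estimation hyperspace is meant to adjudicate for a given Monte Carlo design.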
Functional connectomics from a "big data" perspective.
Xia, Mingrui; He, Yong
2017-10-15
In the last decade, explosive growth in functional connectome studies has been observed. Accumulating knowledge has significantly contributed to our understanding of the brain's functional network architectures in health and disease. With the development of innovative neuroimaging techniques, the establishment of large brain datasets and the increasing accumulation of published findings, functional connectomic research has begun to move into the era of "big data", which generates unprecedented opportunities for discovery in brain science and simultaneously encounters various challenging issues, such as data acquisition, management and analyses. Big data on the functional connectome exhibits several critical features: high spatial and/or temporal precision, large sample sizes, long-term recording of brain activity, multidimensional biological variables (e.g., imaging, genetic, demographic, cognitive and clinical) and/or vast quantities of existing findings. We review studies regarding functional connectomics from a big data perspective, with a focus on recent methodological advances in state-of-the-art image acquisition (e.g., multiband imaging), analysis approaches and statistical strategies (e.g., graph theoretical analysis, dynamic network analysis, independent component analysis, multivariate pattern analysis and machine learning), as well as reliability and reproducibility validations. We highlight the novel findings in the application of functional connectomic big data to the exploration of the biological mechanisms of cognitive functions, normal development and aging, and of neurological and psychiatric disorders. We advocate the urgent need to expand efforts directed at the methodological challenges and discuss the direction of applications in this field. Copyright © 2017 Elsevier Inc. All rights reserved.
Advancing Alzheimer's research: A review of big data promises.
Zhang, Rui; Simon, Gyorgy; Yu, Fang
2017-10-01
To review the current state of science using big data to advance Alzheimer's disease (AD) research and practice. In particular, we analyzed the types of research foci addressed, corresponding methods employed and study findings reported using big data in AD. Systematic review was conducted for articles published in PubMed from January 1, 2010 through December 31, 2015. Keywords with AD and big data analytics were used for literature retrieval. Articles were reviewed and included if they met the eligibility criteria. Thirty-eight articles were included in this review. They can be categorized into seven research foci: diagnosing AD or mild cognitive impairment (MCI) (n=10), predicting MCI to AD conversion (n=13), stratifying risks for AD (n=5), mining the literature for knowledge discovery (n=4), predicting AD progression (n=2), describing clinical care for persons with AD (n=3), and understanding the relationship between cognition and AD (n=3). The most commonly used datasets are AD Neuroimaging Initiative (ADNI) (n=16), electronic health records (EHR) (n=11), MEDLINE (n=3), and other research datasets (n=8). Logistic regression (n=9) and support vector machine (n=8) are the most used methods for data analysis. Big data are increasingly used to address AD-related research questions. While existing research datasets are frequently used, other datasets such as EHR data provide a unique, yet under-utilized opportunity for advancing AD research. Copyright © 2017 Elsevier B.V. All rights reserved.
Batista Rodríguez, Gabriela; Balla, Andrea; Corradetti, Santiago; Martinez, Carmen; Hernández, Pilar; Bollo, Jesús; Targarona, Eduard M
2018-06-01
"Big data" refers to extremely large datasets. Such large databases are useful in many areas, including healthcare. The American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) and the National Inpatient Sample (NIS) are big databases that were developed in the USA in order to record surgical outcomes. The aim of the present systematic review is to evaluate the type and clinical impact of the information retrieved through NSQIP and NIS big database articles focused on laparoscopic colorectal surgery. A systematic review was conducted using the Meta-Analysis Of Observational Studies in Epidemiology (MOOSE) guidelines. The search was carried out on the PubMed database and revealed 350 published papers. Outcomes of articles in which laparoscopic colorectal surgery was the primary aim were analyzed. Fifty-five studies, published between 2007 and February 2017, were included. The included articles were categorized in groups according to the main topic as: outcomes related to surgical technique comparisons, morbidity and perioperative results, specific disease-related outcomes, sociodemographic disparities, and academic training impact. The NSQIP and NIS databases are just the tip of the iceberg for the potential application of big data technology and analysis in minimally invasive surgery. Information obtained through big data is useful and could be considered as external validation in situations where significant evidence-based medicine exists; such databases also establish benchmarks to measure the quality of patient care. The data retrieved help to inform decision-making and improve healthcare delivery.
Assessing Teachers' Comprehension of What Matters in Earth Science
NASA Astrophysics Data System (ADS)
Penuel, W. R.; Kreikemeier, P.; Venezky, D.; Blank, J. G.; Davatzes, A.; Davatzes, N.
2006-12-01
Curricular standards developed for individual U.S. states tell teachers what they should teach. Most sets of standards are too numerous to be taught in a single year, forcing teachers to make decisions about what to emphasize in their curriculum. Ideally, such decisions would be based on what matters most in Earth science, namely, the big ideas that anchor scientific inquiry in the field. A measure of teachers' ability to associate curriculum standards with fundamental concepts in Earth science would help K-12 program and curriculum developers bridge gaps in teachers' knowledge, so that teachers can make better decisions about what is most important to teach and communicate big ideas to students. This paper presents preliminary results of an attempt to create and validate a measure of teachers' comprehension of what matters in three sub-disciplines of Earth science. This measure was created as part of an experimental study of teacher professional development in Earth science. It is a task that requires teachers to take their state's curriculum standards and identify which standards are necessary or supplemental to developing students' understanding of fundamental concepts in the target sub-disciplines. To develop the task, a team of assessment experts and educational researchers asked a panel of four Earth scientists to identify key concepts embedded within middle school standards for the state of Florida. The Earth science panel reached a consensus on which standards needed to be taught in order to develop understanding of those concepts; this was used as a basis for comparison with teacher responses. Preliminary analysis of the responses of 44 teachers who participated in a pilot validation study identified differences between teachers' and scientists' maps of standards to big ideas in the sub-disciplines. On average, teachers identified just under one-third of the connections seen by expert Earth scientists between the concepts and their state standards.
Teachers with higher levels of agreement also had a higher percentage of standards identified that were "off-grade," meaning that they saw connections to standards that they were not themselves required to teach but that nonetheless were relevant to developing student understanding of a particular concept. This result is consistent with the premise that to make good decisions about what to teach, teachers need to be able to identify relevant standards from other grade levels that are connected to the big ideas of a discipline (Shulman, 1986, Educ. Res. 15:4-14).
Effect of divided attention on gait in subjects with and without cognitive impairment.
Pettersson, Anna F; Olsson, Elisabeth; Wahlund, Lars-Olof
2007-03-01
The aim of this study was to investigate the influence of cognition on motor function using 2 simple everyday tasks, talking and walking, in younger subjects with Alzheimer's disease and mild cognitive impairment. A second aim was to evaluate the reliability of the dual-task test Talking While Walking. Walking speed during single and dual task, and the time change between single and dual task, were compared between groups. The test procedure was repeated after 1 week. Subjects with AD had lower walking speed and greater time change between single and dual task compared with healthy controls. Reliability for Talking While Walking was very good. The results show that motor function in combination with a cognitive task, as well as motor function alone, is negatively affected in subjects with Alzheimer's disease, and that decreased walking speed during single- and dual-task performance may be an early symptom of Alzheimer's disease.
Brain tumor image segmentation using kernel dictionary learning.
Jeon Lee; Seung-Jun Kim; Rong Chen; Herskovits, Edward H
2015-08-01
Automated brain tumor image segmentation with high accuracy and reproducibility holds great potential to enhance current clinical practice. Dictionary learning (DL) techniques have recently been applied successfully to various image processing tasks. In this work, kernel extensions of the DL approach are adopted. Both reconstructive and discriminative versions of the kernel DL technique are considered, which can efficiently incorporate multi-modal nonlinear feature mappings based on the kernel trick. Our novel discriminative kernel DL formulation allows joint learning of a task-driven kernel-based dictionary and a linear classifier using a K-SVD-type algorithm. The proposed approaches were tested using real brain magnetic resonance (MR) images of patients with high-grade glioma. The preliminary performances obtained are competitive with the state of the art. The discriminative kernel DL approach is seen to reduce the computational burden without much sacrifice in performance.
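The kernel trick underlying these extensions can be illustrated separately from the dictionary-learning machinery. The sketch below (pure Python, hypothetical toy feature vectors, not the authors' implementation) builds a Gaussian (RBF) kernel matrix, i.e. the pairwise inner products in the implicit nonlinear feature space that kernel DL operates in:

```python
import math

def rbf(x, y, gamma=0.5):
    # k(x, y) = exp(-gamma * ||x - y||^2) equals the inner product
    # <phi(x), phi(y)> in an implicit, high-dimensional feature space.
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def kernel_matrix(X, gamma=0.5):
    # Gram matrix K[i][j] = k(X[i], X[j]); kernel methods only ever need K,
    # never the explicit feature mapping phi.
    return [[rbf(xi, xj, gamma) for xj in X] for xi in X]

# Toy multi-modal feature vectors (e.g. intensities from two MR modalities):
X = [[0.0, 0.0], [1.0, 0.0], [0.0, 3.0]]
K = kernel_matrix(X)
# Diagonal entries are 1 (self-similarity); off-diagonal entries decay
# with squared distance, encoding nonlinear similarity between samples.
```

Working entirely through such a Gram matrix is what allows the dictionary and the classifier in the kernel DL formulation to capture nonlinear structure with only linear-algebra operations.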
Parkinson's disease and dopaminergic therapy—differential effects on movement, reward and cognition
Hughes, L.; Ghosh, B. C. P.; Eckstein, D.; Williams-Gray, C. H.; Fallon, S.; Barker, R. A.; Owen, A. M.
2008-01-01
Cognitive deficits are very common in Parkinson's disease particularly for ‘executive functions’ associated with frontal cortico-striatal networks. Previous work has identified deficits in tasks that require attentional control like task-switching, and reward-based tasks like gambling or reversal learning. However, there is a complex relationship between the specific cognitive problems faced by an individual patient, their stage of disease and dopaminergic treatment. We used a bimodality continuous performance task during fMRI to examine how patients with Parkinson's disease represent the prospect of reward and switch between competing task rules accordingly. The task-switch was not separately cued but was based on the implicit reward relevance of spatial and verbal dimensions of successive compound stimuli. Nineteen patients were studied in relative ‘on’ and ‘off’ states, induced by dopaminergic medication withdrawal (Hoehn and Yahr stages 1–4). Patients were able to successfully complete the task and establish a bias to one or other dimension in order to gain reward. However the lateral prefrontal cortex and caudate nucleus showed a non-linear U-shape relationship between motor disease severity and regional brain activation. Dopaminergic treatment led to a shift in this U-shape function, supporting the hypothesis of differential neurodegeneration in separate motor and cognitive cortico–striato–thalamo–cortical circuits. In addition, anterior cingulate activation associated with reward expectation declined with more severe disease, whereas activation following actual rewards increased with more severe disease. This may facilitate a change in goal-directed behaviours from deferred predicted rewards to immediate actual rewards, particularly when on dopaminergic treatment. We discuss the implications for investigation and optimal treatment of this common condition at different stages of disease. PMID:18577547
Sharma, Gaurav K; Mahajan, Sonalika; Matura, Rakesh; Subramaniam, Saravanan; Mohapatra, Jajati K; Pattnaik, Bramhadev
2014-11-01
Differentiation of foot-and-mouth disease (FMD)-infected from vaccinated animals is essential for effective implementation of a vaccination-based control programme. Detection of antibodies against the 3ABC non-structural protein of FMD virus by immunodiagnostic assays provides a reliable indication of FMD infection. Sero-monitoring of FMD in a large country like India, where thousands of serum samples are screened annually, is a formidable task. Currently, monoclonal or polyclonal antibodies are widely used in these immunodiagnostic assays. Considering the large livestock population of the country, an economical and replenishable alternative to these antibodies was required. In this study, a specific single-chain variable fragment (scFv) antibody against the 3B region of the 3ABC poly-protein was developed. A high level of scFv expression in the Escherichia coli system was obtained by careful optimization in four different strains. Two formats of enzyme immunoassays (sandwich and competitive ELISAs) were optimized using scFv with the objective of differentiating FMD-infected animals among the vaccinated population. The assays were statistically validated by testing 2150 serum samples. The diagnostic sensitivity/specificity of the sandwich and competitive ELISAs were determined by the ROC method as 92.2%/95.5% and 89.5%/93.5%, respectively. This study demonstrated that scFv is a suitable alternative for immunodiagnosis of FMD on a large scale. Copyright © 2014 The International Alliance for Biological Standardization. Published by Elsevier Ltd. All rights reserved.
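How diagnostic sensitivity and specificity are read off a ROC-style cutoff sweep can be sketched as follows. This is an illustrative pure-Python toy with hypothetical optical-density readings (not the study's 2150-sample data); the cutoff maximizing Youden's J statistic (sensitivity + specificity − 1) is selected:

```python
def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity when scores >= cutoff are called positive."""
    tp = sum(1 for s, l in zip(scores, labels) if l == 1 and s >= cutoff)
    fn = sum(1 for s, l in zip(scores, labels) if l == 1 and s < cutoff)
    tn = sum(1 for s, l in zip(scores, labels) if l == 0 and s < cutoff)
    fp = sum(1 for s, l in zip(scores, labels) if l == 0 and s >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

def best_cutoff(scores, labels):
    """Candidate cutoffs are the observed scores; maximize sens + spec."""
    return max(set(scores), key=lambda c: sum(sens_spec(scores, labels, c)))

# Hypothetical ELISA readings: label 1 = infected, 0 = vaccinated-only.
scores = [0.9, 0.8, 0.75, 0.7, 0.3, 0.4, 0.2, 0.35]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
cutoff = best_cutoff(scores, labels)
sensitivity, specificity = sens_spec(scores, labels, cutoff)
```

Sweeping the cutoff over all observed scores and plotting sensitivity against (1 − specificity) yields the ROC curve from which validated assay figures like 92.2%/95.5% are reported.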
From big data analysis to personalized medicine for all: challenges and opportunities.
Alyass, Akram; Turcotte, Michelle; Meyre, David
2015-06-27
Recent advances in high-throughput technologies have led to the emergence of systems biology as a holistic science to achieve more precise modeling of complex diseases. Many predict the emergence of personalized medicine in the near future. We are, however, moving from two-tiered health systems to two-tiered personalized medicine. Omics facilities are restricted to affluent regions, and personalized medicine is likely to widen the growing gap in health systems between high- and low-income countries. This is mirrored by an increasing lag between our ability to generate and to analyze big data. Several bottlenecks slow down the transition from conventional to personalized medicine: generation of cost-effective high-throughput data; hybrid education and multidisciplinary teams; data storage and processing; data integration and interpretation; and individual and global economic relevance. This review provides an update of important developments in the analysis of big data and forward strategies to accelerate the global transition to personalized medicine.
Center of excellence for mobile sensor data-to-knowledge (MD2K).
Kumar, Santosh; Abowd, Gregory D; Abraham, William T; al'Absi, Mustafa; Beck, J Gayle; Chau, Duen Horng; Condie, Tyson; Conroy, David E; Ertin, Emre; Estrin, Deborah; Ganesan, Deepak; Lam, Cho; Marlin, Benjamin; Marsh, Clay B; Murphy, Susan A; Nahum-Shani, Inbal; Patrick, Kevin; Rehg, James M; Sharmin, Moushumi; Shetty, Vivek; Sim, Ida; Spring, Bonnie; Srivastava, Mani; Wetter, David W
2015-11-01
Mobile sensor data-to-knowledge (MD2K) was chosen as one of 11 Big Data Centers of Excellence by the National Institutes of Health, as part of its Big Data-to-Knowledge initiative. MD2K is developing innovative tools to streamline the collection, integration, management, visualization, analysis, and interpretation of health data generated by mobile and wearable sensors. The goal of the big data solutions being developed by MD2K is to reliably quantify physical, biological, behavioral, social, and environmental factors that contribute to health and disease risk. The research conducted by MD2K is targeted at improving health through early detection of adverse health events and by facilitating prevention. MD2K will make its tools, software, and training materials widely available and will also organize workshops and seminars to encourage their use by researchers and clinicians. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Machine Learning for Knowledge Extraction from PHR Big Data.
Poulymenopoulou, Michaela; Malamateniou, Flora; Vassilacopoulos, George
2014-01-01
Cloud computing, Internet of Things (IoT) and NoSQL database technologies can support a new generation of cloud-based PHR services that contain heterogeneous (unstructured, semi-structured and structured) patient data (health, social and lifestyle) from various sources, including automatically transmitted data from Internet-connected devices in the patient's living space (e.g. medical devices connected to patients at home care). The patient data stored in such PHR systems constitute big data whose analysis with appropriate machine learning algorithms is expected to improve diagnosis and treatment accuracy, to cut healthcare costs and, hence, to improve the overall quality and efficiency of healthcare provided. This paper describes a health data analytics engine which uses machine learning algorithms for analyzing cloud-based PHR big health data towards knowledge extraction to support better healthcare delivery as regards disease diagnosis and prognosis. This engine comprises the data preparation, model generation and data analysis modules and runs on the cloud, taking advantage of the map/reduce paradigm provided by Apache Hadoop.
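The map/reduce paradigm the engine relies on (Apache Hadoop supplies the distributed version) can be sketched without the framework. The record fields below are hypothetical, not the paper's PHR schema: the map phase emits (key, 1) pairs per record and the reduce phase sums counts per key:

```python
from collections import defaultdict

# Toy PHR-style records (hypothetical field names):
records = [
    {"patient": "p1", "diagnosis": "diabetes"},
    {"patient": "p2", "diagnosis": "hypertension"},
    {"patient": "p3", "diagnosis": "diabetes"},
]

def map_phase(record):
    # Emit one (key, value) pair per record; runs independently per record,
    # which is what lets Hadoop parallelize this step across nodes.
    yield (record["diagnosis"], 1)

def reduce_phase(pairs):
    # Group by key and aggregate the values (here: a sum of counts).
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

mapped = [pair for record in records for pair in map_phase(record)]
counts = reduce_phase(mapped)
print(counts)  # {'diabetes': 2, 'hypertension': 1}
```

Because each map call touches only one record and each reduce call only one key group, the same pattern scales from this toy to cluster-sized PHR datasets.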
Katende, Godfrey; Donnelly, Mary
2016-05-01
In terms of disease burden, many low- and middle-income countries are currently experiencing a transition from infectious to chronic diseases. In Uganda, non-communicable diseases (NCDs) have increased significantly in recent years; this challenge is compounded by the healthcare worker shortage and the underfunded health system administration. Addressing the growing prevalence of NCDs requires evidence-based policies and strategies to reduce morbidity and mortality rates; however, the integration and evaluation of new policies and processes pose many challenges. Task-shifting is the process whereby specific tasks are transferred to health workers with less training and fewer qualifications. Successful implementation of a task-shifting policy requires appropriate skill training, clearly defined roles, adequate evaluation, an enhanced training capacity and sufficient health worker incentives. This article focuses on task-shifting policy as a potentially effective strategy to address the growing burden of NCDs on the Ugandan healthcare system.
Repeated cognitive stimulation alleviates memory impairments in an Alzheimer's disease mouse model.
Martinez-Coria, Hilda; Yeung, Stephen T; Ager, Rahasson R; Rodriguez-Ortiz, Carlos J; Baglietto-Vargas, David; LaFerla, Frank M
2015-08-01
Alzheimer's disease is a neurodegenerative disease associated with progressive memory and cognitive decline. Previous studies have identified the benefits of cognitive enrichment on reducing disease pathology. Additionally, epidemiological and clinical data suggest that repeated exercise and cognitive and social enrichment can improve and/or delay the cognitive deficiencies associated with aging and neurodegenerative diseases. In the present study, 3xTg-AD mice were exposed to a rigorous training routine beginning at 3 months of age, which consisted of repeated training in the Morris water maze spatial recognition task every 3 months, ending at 18 months of age. At the conclusion of the final Morris water maze training session, animals subsequently underwent testing in another hippocampus-dependent spatial task, the Barnes maze task, and in the more cortical-dependent novel object recognition memory task. Our data show that periodic cognitive enrichment throughout aging, via multiple learning episodes in the Morris water maze task, can improve the memory performance of aged 3xTg-AD mice in a separate spatial recognition task, and in a preference memory task, when compared to naïve age-matched 3xTg-AD mice. Furthermore, we observed that the cognitive enrichment properties of Morris water maze exposure were detectable in repeatedly trained animals as early as 6 months of age. These findings suggest that early repeated cognitive enrichment can mitigate the diverse cognitive deficits observed in Alzheimer's disease. Published by Elsevier Inc.
Solomon, Marjorie; Ragland, J Daniel; Niendam, Tara A; Lesh, Tyler A; Beck, Jonathan S; Matter, John C; Frank, Michael J; Carter, Cameron S
2015-11-01
To investigate the neural mechanisms underlying impairments in generalizing learning shown by adolescents with autism spectrum disorder (ASD). A total of 21 high-functioning individuals with ASD aged 12 to 18 years, and 23 gender-, IQ-, and age-matched adolescents with typical development (TYP), completed a transitive inference (TI) task implemented using rapid event-related functional magnetic resonance imaging (fMRI). Participants were trained on overlapping pairs in a stimulus hierarchy of colored ovals where A>B>C>D>E>F and then tested on generalizing this training to new stimulus pairings (AF, BD, BE) in a "Big Game." Whole-brain univariate, region of interest, and functional connectivity analyses were used. During training, the TYP group exhibited increased recruitment of the prefrontal cortex (PFC), whereas the group with ASD showed greater functional connectivity between the PFC and the anterior cingulate cortex (ACC). Both groups recruited the hippocampus and caudate comparably; however, functional connectivity between these regions was positively associated with TI performance for only the group with ASD. During the Big Game, the TYP group showed greater recruitment of the PFC, parietal cortex, and the ACC. Recruitment of these regions increased with age in the group with ASD. During TI, TYP individuals recruited cognitive control-related brain regions implicated in mature problem solving/reasoning including the PFC, parietal cortex, and ACC, whereas the group with ASD showed functional connectivity of the hippocampus and the caudate that was associated with task performance. Failure to reliably engage cognitive control-related brain regions may produce less integrated flexible learning in individuals with ASD unless they are provided with task support that, in essence, provides them with cognitive control; however, this pattern may normalize with age. Copyright © 2015 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. 
Ambure, Pravin; Bhat, Jyotsna; Puzyn, Tomasz; Roy, Kunal
2018-04-23
Alzheimer's disease (AD) is a multi-factorial disease, which can be simply outlined as an irreversible and progressive neurodegenerative disorder with an unclear root cause. It is a major cause of dementia in old age. In the present study, utilizing the structural and biological activity information of ligands for five important and most-studied vital targets (i.e. cyclin-dependent kinase 5, β-secretase, monoamine oxidase B, glycogen synthase kinase 3β, and acetylcholinesterase) that are believed to be effective against AD, we developed five classification models using the linear discriminant analysis (LDA) technique. Considering the importance of data curation, we paid particular attention to chemical and biological data curation, which is a difficult task, especially in the case of big datasets. Thus, to ease the curation process, we designed Konstanz Information Miner (KNIME) workflows, which are made available at http://teqip.jdvu.ac.in/QSAR_Tools/ . The developed models were appropriately validated based on predictions for experiment-derived data from test sets, as well as true external-set compounds including known multi-target compounds. The domain of applicability for each classification model was checked based on a confidence-estimation approach. These validated models were then employed for screening of natural compounds collected from the InterBioScreen natural database ( https://www.ibscreen.com/natural-compounds ). The natural compounds categorized as 'actives' in at least two of the five developed classification models were considered multi-target leads; these compounds were further screened using a drug-likeness filter and the molecular docking technique, and then thoroughly analyzed using molecular dynamics studies. Finally, the most promising multi-target natural compounds against AD are suggested.
Practicing pathology in the era of big data and personalized medicine.
Gu, Jiang; Taylor, Clive R
2014-01-01
The traditional task of the pathologist is to assist physicians in making the correct diagnosis of diseases at the earliest possible stage to effectuate the optimal treatment strategy for each individual patient. In this respect surgical pathology (the traditional tissue diagnosis) is but a tool. It is not, of itself, the purpose of pathology practice; and change is in the air. This January 2014 issue of Applied Immunohistochemistry and Molecular Morphology (AIMM) embraces that change by the incorporation of the agenda and content of the journal Diagnostic Molecular Morphology (DMP). Over a decade ago AIMM introduced and promoted the concept of "molecular morphology," and has sought to publish molecular studies that correlate with the morphologic features that continue to define cancer and many diseases. That intent is now reinforced and extended by the merger with DMP, as a logical and timely response to the growing impact of a wide range of genetic and molecular technologies that are beginning to reshape the way in which pathology is practiced. The use of molecular and genomic techniques already demonstrates clear value in the diagnosis of disease, with treatment tailored specifically to individual patients. Personalized medicine is the future, and personalized medicine demands personalized pathology. The need for integration of the flood of new molecular data, with surgical pathology, digital pathology, and the full range of pathology data in the electronic medical record has never been greater. This review describes the possible impact of these pressures upon the discipline of pathology, and examines possible outcomes. There is a sense of excitement and adventure. Active adaption and innovation are required. The new AIMM, incorporating DMP, seeks to position itself for a central role in this process.
Practicing Pathology in the Era of Big Data and Personalized Medicine
Gu, Jiang; Taylor, Clive R.
2014-01-01
The traditional task of the pathologist is to assist physicians in making the correct diagnosis of diseases at the earliest possible stage to effectuate the optimal treatment strategy for each individual patient. In this respect surgical pathology (the traditional tissue diagnosis) is but a tool. It is not, of itself, the purpose of pathology practice; and change is in the air. This January 2014 issue of Applied Immunohistochemistry and Molecular Morphology (AIMM) embraces that change by the incorporation of the agenda and content of the journal Diagnostic Molecular Morphology (DMP). Over a decade ago AIMM introduced and promoted the concept of “molecular morphology,” and has sought to publish molecular studies that correlate with the morphologic features that continue to define cancer and many diseases. That intent is now reinforced and extended by the merger with DMP, as a logical and timely response to the growing impact of a wide range of genetic and molecular technologies that are beginning to reshape the way in which pathology is practiced. The use of molecular and genomic techniques already demonstrates clear value in the diagnosis of disease, with treatment tailored specifically to individual patients. Personalized medicine is the future, and personalized medicine demands personalized pathology. The need for integration of the flood of new molecular data, with surgical pathology, digital pathology, and the full range of pathology data in the electronic medical record has never been greater. This review describes the possible impact of these pressures upon the discipline of pathology, and examines possible outcomes. There is a sense of excitement and adventure. Active adaption and innovation are required. The new AIMM, incorporating DMP, seeks to position itself for a central role in this process. PMID:24326463
2013-02-01
Kathmandu, Nepal. Contract W911NF-12-1-0282. In the past, big earthquakes in Nepal (see Figure 1.1) have caused a huge number of casualties and damage to structures. The Great Nepal-Bihar... UBC Earthquake Engineering Research Facility, 2235 East Mall, Vancouver, BC, Canada V6T 1Z4. Phone: 604 822-6203; Fax: 604 822-6901.
Data surrounding the needs of human disease and toxicity modeling are largely siloed, limiting the ability to extend and reuse modules across knowledge domains. Using an infrastructure that supports integration across knowledge domains (animal toxicology, high-throughput screening...
Parkinson's Disease and Dopaminergic Therapy--Differential Effects on Movement, Reward and Cognition
ERIC Educational Resources Information Center
Rowe, J. B.; Hughes, L.; Ghosh, B. C. P.; Eckstein, D.; Williams-Gray, C. H.; Fallon, S.; Barker, R. A.; Owen, A. M.
2008-01-01
Cognitive deficits are very common in Parkinson's disease, particularly for "executive functions" associated with frontal cortico-striatal networks. Previous work has identified deficits in tasks that require attentional control, such as task-switching, and in reward-based tasks such as gambling or reversal learning. However, there is a complex…
76 FR 55394 - Meeting of the Task Force on Community Preventive Services
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-07
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention Meeting of the...), Department of Health and Human Services (HHS). ACTION: Notice of meeting. SUMMARY: The Centers for Disease... (Task Force). The Task Force--an independent, nonfederal body of nationally known leaders in public...
76 FR 4115 - Meeting of the Task Force on Community Preventive Services
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-24
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention Meeting of the...), Department of Health and Human Services (HHS). ACTION: Notice of meeting. SUMMARY: The Centers for Disease... (Task Force). The Task Force--an independent, nonfederal body of nationally known leaders in public...
75 FR 63846 - Meeting of the Task Force on Community Preventive Services
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-18
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention Meeting of the...), Department of Health and Human Services (HHS). ACTION: Notice of meeting. SUMMARY: The Centers for Disease... (Task Force). The Task Force is an independent, nonfederal body of nationally known leaders in public...
Lee, Ing-Ming; Davis, Robert E.; DeWitt, Natalie D.
1990-01-01
DNA fragments of tomato big bud (BB) mycoplasmalike organism (MLO) in diseased periwinkle plants (Catharanthus roseus L.) were cloned to pSP6 plasmid vectors and amplified in Escherichia coli JM83. A nonradioactive method was developed and used to screen for MLO-specific recombinants. Cloned DNA probes were prepared by nick translation of the MLO recombinant plasmids by using biotinylated nucleotides. The probes all hybridized with nucleic acid from BB MLO-infected, but not healthy, plants. Results from dot hybridization analyses indicated that several MLOs, e.g., those of Italian tomato big bud, periwinkle little leaf, and clover phyllody, are closely related to BB MLO. The Maryland strain of aster yellows and maize bushy stunt MLOs are also related to BB MLO. Among the remaining MLOs used in this study, Vinca virescence and elm yellows MLOs may be very distantly related, if at all, to BB MLO. Potato witches' broom, clover proliferation, ash yellows, western X, and Canada X MLOs are distantly related to BB MLO. Southern hybridization analyses revealed that BB MLO contains extrachromosomal DNA that shares sequence homologies with extrachromosomal DNAs from aster yellows and periwinkle little leaf MLOs. PMID:16348195
Potential of big data analytics in the French in vitro diagnostics market.
Dubois, Nicolas; Garnier, Nicolas; Meune, Christophe
2017-12-01
The new paradigm of big data raises many expectations, particularly in the field of health. Curiously, even though medical biology laboratories generate a great amount of data, the opportunities offered by this new field are poorly documented. For better understanding the clinical context of chronic disease follow-up, and for leveraging preventive and/or personalized medicine, the contribution of big data analytics seems very promising. Within this framework, we explored the use of data from a Breton group of medical biology laboratories to analyze the possible contributions of their exploitation to improving clinical practices and to anticipating the evolution of pathologies for the benefit of patients. We report here three practical applications derived from routine laboratory data over a period of 5 years (February 2010-August 2015): follow-up of patients treated with AVK (vitamin K antagonists) according to the recommendations of the French High Authority for Health (HAS), and use of the new high-sensitivity (HS) troponin and NT-proBNP markers in cardiology. While the risks and difficulties of using algorithms in the health domain should not be underestimated - quality, accessibility, and protection of personal data in particular - these first results show that use of tools and technologies from the big data repertoire could provide decisive support for the concept of "evidence-based medicine".
Big data in health care: using analytics to identify and manage high-risk and high-cost patients.
Bates, David W; Saria, Suchi; Ohno-Machado, Lucila; Shah, Anand; Escobar, Gabriel
2014-07-01
The US health care system is rapidly adopting electronic health records, which will dramatically increase the quantity of clinical data that are available electronically. Simultaneously, rapid progress has been made in clinical analytics--techniques for analyzing large quantities of data and gleaning new insights from that analysis--which is part of what is known as big data. As a result, there are unprecedented opportunities to use big data to reduce the costs of health care in the United States. We present six use cases--that is, key examples--where some of the clearest opportunities exist to reduce costs through the use of big data: high-cost patients, readmissions, triage, decompensation (when a patient's condition worsens), adverse events, and treatment optimization for diseases affecting multiple organ systems. We discuss the types of insights that are likely to emerge from clinical analytics, the types of data needed to obtain such insights, and the infrastructure--analytics, algorithms, registries, assessment scores, monitoring devices, and so forth--that organizations will need to perform the necessary analyses and to implement changes that will improve care while reducing costs. Our findings have policy implications for regulatory oversight, ways to address privacy concerns, and the support of research on analytics. Project HOPE—The People-to-People Health Foundation, Inc.
Alcohol and the pancreas. II. Pancreatic morphology of advanced alcoholic pancreatitis.
Noronha, M; Bordalo, O; Dreiling, D A
1981-08-01
The histopathology of advanced chronic alcoholic pancreatitis is dominated by cellular degeneration, atrophy and fibrosis. Sequential changes in the histopathology of alcoholic pancreatic disease have been defined and traced from initial injury to end-stage disease. These sequential histopathologies have been correlated with clinical syndromes and secretory patterns. The data are more consistent with a toxic-metabolic pathogenesis of alcoholic pancreatitis than the previous Big Duct and Small Duct hypotheses.
Statistical Analysis of Big Data on Pharmacogenomics
Fan, Jianqing; Liu, Han
2013-01-01
This paper discusses statistical methods for estimating complex correlation structure from large pharmacogenomic datasets. We selectively review several prominent statistical methods for estimating large covariance matrix for understanding correlation structure, inverse covariance matrix for network modeling, large-scale simultaneous tests for selecting significantly differently expressed genes and proteins and genetic markers for complex diseases, and high dimensional variable selection for identifying important molecules for understanding molecule mechanisms in pharmacogenomics. Their applications to gene network estimation and biomarker selection are used to illustrate the methodological power. Several new challenges of Big data analysis, including complex data distribution, missing data, measurement error, spurious correlation, endogeneity, and the need for robust statistical methods, are also discussed. PMID:23602905
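One family of methods the abstract above reviews, large covariance matrix estimation, is commonly regularized by entrywise thresholding of the noisy sample covariance in the p >> n regime. The following NumPy sketch illustrates that generic idea only; it is not the paper's estimator, and the threshold value and planted correlation are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "large p, small n" setting: 50 samples of 200 variables.
n, p = 50, 200
X = rng.normal(size=(n, p))
X[:, 1] += 0.9 * X[:, 0]          # plant one genuine correlation

S = np.cov(X, rowvar=False)       # sample covariance: very noisy when p >> n

def soft_threshold(S, lam):
    """Entrywise soft-thresholding of off-diagonal covariances."""
    out = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)
    np.fill_diagonal(out, np.diag(S))   # leave variances untouched
    return out

# Spurious off-diagonal entries (magnitude ~ 1/sqrt(n)) are mostly zeroed,
# giving a sparse, better-conditioned estimate of the correlation structure.
S_hat = soft_threshold(S, lam=0.4)
```

The same decomposition of "estimate, then sparsify" underlies several of the reviewed approaches to network modeling and biomarker selection.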
Healthy and wellbeing activities' promotion using a Big Data approach.
Gachet Páez, Diego; de Buenaga Rodríguez, Manuel; Puertas Sánz, Enrique; Villalba, María Teresa; Muñoz Gil, Rafael
2018-06-01
The aging population and the economic crisis, especially in developed countries, have led to reductions in the funds dedicated to health care. It is therefore desirable to optimize the costs of public and private healthcare systems by reducing the affluence of chronic and dependent people to care centers; promoting healthy lifestyles and activities can help people avoid chronic diseases such as hypertension. In this article, we describe a system for promoting an active and healthy lifestyle and for providing recommendations, guidelines, and valuable information about people's habits. The proposed system is being developed around the Big Data paradigm, using bio-signal sensors and machine-learning algorithms for recommendations.
Social Media, Big Data, and Mental Health: Current Advances and Ethical Implications
Conway, Mike; O’Connor, Daniel
2016-01-01
Mental health (including substance abuse) is the fifth greatest contributor to the global burden of disease, with an economic cost estimated to be US $2.5 trillion in 2010, and expected to double by 2030. Developing information systems to support and strengthen population-level mental health monitoring forms a core part of the World Health Organization’s Comprehensive Action Plan 2013–2020. In this paper, we review recent work that utilizes social media “big data” in conjunction with associated technologies like natural language processing and machine learning to address pressing problems in population-level mental health surveillance and research, focusing both on technological advances and core ethical challenges. PMID:27042689
Wildfire and forest disease interaction lead to greater loss of soil nutrients and carbon.
Cobb, Richard C; Meentemeyer, Ross K; Rizzo, David M
2016-09-01
Fire and forest disease have significant ecological impacts, but the interactions of these two disturbances are rarely studied. We measured soil C, N, Ca, P, and pH in forests of the Big Sur region of California impacted by the exotic pathogen Phytophthora ramorum, cause of sudden oak death, and the 2008 Basin wildfire complex. In Big Sur, overstory tree mortality following P. ramorum invasion has been extensive in redwood and mixed evergreen forests, where the pathogen kills true oaks and tanoak (Notholithocarpus densiflorus). Sampling was conducted across a full-factorial combination of disease/no disease and burned/unburned conditions in both forest types. Forest floor organic matter and associated nutrients were greater in unburned redwood compared to unburned mixed evergreen forests. Post-fire element pools were similar between forest types, but lower in burned-invaded compared to burned-uninvaded plots. We found evidence that disease-generated fuels led to increased loss of forest floor C, N, Ca, and P. The same effects were associated with lower %C and higher PO4-P in the mineral soil. Fire-disease interactions were linear functions of pre-fire host mortality, which was similar between the forest types. Our analysis suggests that these effects increased forest floor C loss by as much as 24.4 and 21.3 % in redwood and mixed evergreen forests, respectively, with similar maximum losses for the other forest floor elements. Accumulation of sudden oak death-generated fuels has the potential to increase fire-related loss of soil nutrients at the regional scale of this disease, and similar patterns are likely in other forests where fire and disease overlap.
Ansai, Juliana H; Andrade, Larissa P; Rossi, Paulo G; Takahashi, Anielle C M; Vale, Francisco A C; Rebelatto, José R
Studies with functional and applicable methods and new cognitive demands involving executive function are needed to improve screening, prevention and rehabilitation of cognitive impairment and falls. The objective was to identify differences in gait, dual-task performance, and history of falls between elderly people with preserved cognition, mild cognitive impairment, and mild Alzheimer's disease. A cross-sectional study was conducted. The sample consisted of 40 community-dwelling older adults with preserved cognition, 40 older adults with mild cognitive impairment, and 38 older adults with mild Alzheimer's disease. The assessment consisted of anamneses, gait (measured by the 10-meter walk test), dual task (measured by the Timed Up and Go Test associated with the motor-cognitive task of calling a phone number), and history of falls in the past year. There were no differences among the groups for most variables. However, the Alzheimer's disease group performed significantly worse in the dual task than the other groups. No item of the dual task could distinguish people with preserved cognition from those with mild cognitive impairment. The groups with cognitive impairment included more fallers, and specific characteristics in the history of falls between groups were identified. The dual task could distinguish Alzheimer's disease patients specifically from other cognitive profiles. Copyright © 2017 Associação Brasileira de Pesquisa e Pós-Graduação em Fisioterapia. Published by Elsevier Editora Ltda. All rights reserved.
What you say matters: exploring visual-verbal interactions in visual working memory.
Mate, Judit; Allen, Richard J; Baqués, Josep
2012-01-01
The aim of this study was to explore whether the content of a simple concurrent verbal load task determines the extent of its interference on memory for coloured shapes. The task consisted of remembering four visual items while repeating aloud a pair of words that varied in terms of imageability and relatedness to the task set. At test, a cue appeared that was either the colour or the shape of one of the previously seen objects, with participants required to select the object's other feature from a visual array. During encoding and retention, there were four verbal load conditions: (a) a related, shape-colour pair (from outside the experimental set, i.e., "pink square"); (b) a pair of unrelated but visually imageable, concrete, words (i.e., "big elephant"); (c) a pair of unrelated and abstract words (i.e., "critical event"); and (d) no verbal load. Results showed differential effects of these verbal load conditions. In particular, imageable words (concrete and related conditions) interfered to a greater degree than abstract words. Possible implications for how visual working memory interacts with verbal memory and long-term memory are discussed.
Taib, Mohd Firdaus Mohd; Bahn, Sangwoo; Yun, Myung Hwan
2016-06-27
The popularity of mobile computing products is well known. It is therefore crucial to evaluate their contribution to musculoskeletal disorders during computer usage under both comfortable and stressful environments. This study explores the effect on muscle activity of using different computer products during tasks designed to induce psychosocial stress. Fourteen male subjects performed computer tasks: sixteen combinations of four different computer products with four different stress-inducing tasks. Electromyography for four muscles in the forearm, shoulder and neck regions, along with task performance, was recorded. The increment in trapezius muscle activity depended on the task used to induce the stress, with a higher level of stress producing a greater increment. However, this relationship was not found in the other three muscles. In addition, compared to desktop and laptop use, the lowest activity for all muscles was obtained during the use of a tablet or smartphone. The best net performance was obtained in a comfortable environment. However, during stressful conditions, the best performance can be obtained using the device that a user is most comfortable with or has the most experience with. Different computer products and different levels of stress play a big role in muscle activity during computer work. Both of these factors must be taken into account in order to reduce the occurrence of musculoskeletal disorders or problems.
Report of the New England Task Force on Reducing Heart Disease and Stroke Risk.
Havas, S; Wozenski, S; Deprez, R; Miller, L; Charman, R; Hamrell, M; Green, L; Benn, S
1989-01-01
Five years ago, a task force on reducing risk for heart disease and stroke was established by the six New England States. The task force included representatives from State public health departments, academia, the corporate sector, and voluntary organizations. This article is the final report of the task force. Heart disease and cerebrovascular disease are major causes of mortality in the New England region. Heart disease causes nearly 40 percent of all deaths in each of the six States and cerebrovascular disease, 7 percent of the deaths. Major risk factors for ischemic heart disease that have been identified--elevated serum cholesterol, high blood pressure, and cigarette smoking--are caused largely by lifestyle behaviors. Similarly, cerebrovascular disease results largely from uncontrolled high blood pressure, much of which is attributable to unhealthy lifestyle behaviors. In a series of studies evidence has accumulated that the reduction or elimination of these risk factors results in a decline in mortality rates. Many intervention programs have been mounted in the region, but there has been no population-wide effort to attack these risk factors. The task force proposed a broad range of activities for New Englanders at sites in the community and in health facilities. These activities would promote not smoking, exercising regularly, and maintaining desirable levels of serum cholesterol and blood pressure. PMID:2495547
The Iowa Gambling Task in Parkinson's disease: A meta-analysis on effects of disease and medication.
Evens, Ricarda; Hoefler, Michael; Biber, Karolina; Lueken, Ulrike
2016-10-01
Decision-making under uncertainty as measured by the Iowa Gambling Task has frequently been studied in Parkinson's disease. The dopamine overdose hypothesis assumes that dopaminergic effects follow an inverted U-shaped function, restoring some cognitive functions while overdosing others. The present work quantitatively summarizes disease and medication effects on task performance and evaluates evidence for the dopamine overdose hypothesis of impaired decision-making in Parkinson's disease. A systematic literature search was performed to identify studies examining the Iowa Gambling Task in patients with Parkinson's disease. Outcomes were quantitatively combined, with separate estimates for the clinical (patients ON medication vs. healthy controls), disease (patients OFF medication vs. healthy controls), and medication effects (patients ON vs. OFF medication). Furthermore, using meta-regression analysis it was explored whether the study characteristics drug level, disease duration, and motor symptoms explained heterogeneous performance between studies. Patients with Parkinson's disease ON dopaminergic medication showed significantly impaired Iowa Gambling Task performance compared to healthy controls. This impairment was not normalized by short-term withdrawal of medication. Heterogeneity across studies was not explained by dopaminergic drug levels, disease durations or motor symptoms. While this meta-analysis showed significantly impaired decision-making performance in Parkinson's disease, there was no evidence that this impairment was related to dopamine overdosing. However, only very few studies assessed patients OFF medication and future studies are needed to concentrate on the modulation of dopaminergic drug levels and pay particular attention to problems related to repeated testing. Furthermore, short- vs. long-term medication effects demand further in-depth investigation. Copyright © 2016 Elsevier Ltd. All rights reserved.
Gonçalves, Jessica; Ansai, Juliana Hotta; Masse, Fernando Arturo Arriagada; Vale, Francisco Assis Carvalho; Takahashi, Anielle Cristhine de Medeiros; Andrade, Larissa Pires de
2018-04-04
A dual-task tool with a challenging and daily secondary task, which involves executive functions, could facilitate the screening for risk of falls in older people with mild cognitive impairment or mild Alzheimer's disease. The objectives were to verify whether a motor-cognitive dual-task test could predict falls in older people with mild cognitive impairment or mild Alzheimer's disease, and to establish cutoff scores for the tool for both groups. A prospective study was conducted with community-dwelling older adults, including 40 with mild cognitive impairment and 38 with mild Alzheimer's disease. The dual-task test consisted of the Timed Up and Go Test associated with a motor-cognitive task of using a phone to place a call. Falls were recorded over six months by calendar and monthly telephone calls, and the participants were categorized as fallers or non-fallers. In the mild cognitive impairment group, fallers presented higher values in time (35.2s), number of steps (33.7 steps) and motor task cost (116%) on the dual task compared to non-fallers. Time, number of steps and motor task cost were significantly associated with falls in people with mild cognitive impairment. Multivariate analysis identified a higher number of steps spent on the test as independently associated with falls. A time greater than 23.88s (sensitivity=80%; specificity=61%) and a number of steps over 29.50 (sensitivity=65%; specificity=83%) indicated prediction of risk of falls in the mild cognitive impairment group. Among people with Alzheimer's disease, no differences in dual-task performance between fallers and non-fallers were found, and no variable of the tool was able to predict falls. The dual task predicts falls only in older people with mild cognitive impairment. Copyright © 2018 Associação Brasileira de Pesquisa e Pós-Graduação em Fisioterapia. Published by Elsevier Editora Ltda. All rights reserved.
ERIC Educational Resources Information Center
Kehagia, Angie A.; Cools, Roshan; Barker, Roger A.; Robbins, Trevor W.
2009-01-01
This study sought to disambiguate the impact of Parkinson's disease (PD) on cognitive control as indexed by task set switching, by addressing discrepancies in the literature pertaining to disease severity and paradigm heterogeneity. A task set is governed by a rule that determines how relevant stimuli (stimulus set) map onto specific responses…
Joshi, Rohina; Alim, Mohammed; Kengne, Andre Pascal; Jan, Stephen; Maulik, Pallab K; Peiris, David; Patel, Anushka A
2014-01-01
One potential solution to limited healthcare access in low- and middle-income countries (LMIC) is task-shifting: the training of non-physician healthcare workers (NPHWs) to perform tasks traditionally undertaken by physicians. The aim of this paper is to conduct a systematic review of studies involving task-shifting for the management of non-communicable disease (NCD) in LMIC. A search strategy with the following terms "task-shifting", "non-physician healthcare workers", "community healthcare worker", "hypertension", "diabetes", "cardiovascular disease", "mental health", "depression", "chronic obstructive pulmonary disease", "respiratory disease", "cancer" was conducted using Medline via Pubmed and the Cochrane library. Two reviewers independently reviewed the databases and extracted the data. Our search generated 7176 articles, of which 22 were included in the review. Seven studies were randomised controlled trials and 15 were observational studies. Tasks performed by NPHWs included screening for NCDs and providing primary health care. The majority of studies showed improved health outcomes when compared with usual healthcare, including reductions in blood pressure, increased uptake of medications and lower depression scores. Factors such as training of NPHWs, and provision of algorithms and protocols for screening, treatment and drug titration, were the main enablers of the task-shifting intervention. The main barriers identified were restrictions on prescribing medications and availability of medicines. Only two studies described cost-effectiveness analyses, both of which demonstrated that task-shifting was cost-effective. Task-shifting from physicians to NPHWs, if accompanied by health system re-structuring, is a potentially effective and affordable strategy for improving access to healthcare for NCDs. Since the majority of study designs reviewed were of inadequate quality, future research methods should include robust evaluations of such strategies.
Validation of a Behavioral Approach for Measuring Saccades in Parkinson's Disease.
Turner, Travis H; Renfroe, Jenna B; Duppstadt-Delambo, Amy; Hinson, Vanessa K
2017-01-01
Speed and control of saccades are related to disease progression and cognitive functioning in Parkinson's disease (PD). Traditional eye-tracking complexities encumber application for individual evaluations and clinical trials. The authors examined psychometric properties of standalone tasks for reflexive prosaccade latency, volitional saccade initiation, and saccade inhibition (antisaccade) in a heterogeneous sample of 65 PD patients. Demographics had minimal impact on task performance. Thirty-day test-retest reliability estimates for behavioral tasks were acceptable and similar to traditional eye tracking. Behavioral tasks demonstrated concurrent validity with traditional eye-tracking measures; discriminant validity was less clear. Saccade initiation and inhibition discriminated PD patients with cognitive impairment. The present findings support further development and use of the behavioral tasks for assessing latency and control of saccades in PD.
A Parallel Multiclassification Algorithm for Big Data Using an Extreme Learning Machine.
Duan, Mingxing; Li, Kenli; Liao, Xiangke; Li, Keqin
2018-06-01
As data sets become larger and more complicated, an extreme learning machine (ELM) that runs in a traditional serial environment cannot realize its ability to be fast and effective. Although a parallel ELM (PELM) based on MapReduce can process large-scale data with a more efficient learning speed than identical ELM algorithms in a serial environment, some operations, such as storing intermediate results on disk and keeping multiple copies for each task, are indispensable; these operations create a large amount of extra overhead and degrade the learning speed and efficiency of PELMs. In this paper, an efficient ELM based on the Spark framework (SELM), which includes three parallel subalgorithms, is proposed for big data classification. By partitioning the corresponding data sets reasonably, the hidden layer output matrix calculation algorithm and two matrix decomposition algorithms perform most of the computations locally. At the same time, they retain the intermediate results in distributed memory and cache the diagonal matrix as a broadcast variable instead of making several copies for each task, which reduces costs considerably and strengthens the learning ability of the SELM. Finally, we implement our SELM algorithm to classify large data sets. Extensive experiments have been conducted to validate the effectiveness of the proposed algorithms. As shown, our SELM achieves speedups on clusters of 10, 15, 20, 25, 30, and 35 nodes.
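The serial computation that SELM parallelizes can be illustrated with a minimal NumPy sketch of a basic ELM: random, fixed input weights and biases produce a hidden layer output matrix H, and only the output weights are fitted, by a least-squares solve via the pseudo-inverse. The sigmoid activation and all names here are illustrative assumptions, not details from the paper.

```python
import numpy as np

def elm_train(X, T, n_hidden, rng=None):
    """Train a basic extreme learning machine: input weights and biases
    are random and fixed; only the output weights are fitted."""
    if rng is None:
        rng = np.random.default_rng(0)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden layer output matrix
    beta = np.linalg.pinv(H) @ T                     # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Tiny demo: fit one-hot XOR-style labels from 2-D points.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[1., 0.], [0., 1.], [0., 1.], [1., 0.]])
W, b, beta = elm_train(X, T, n_hidden=20)
pred = elm_predict(X, W, b, beta).argmax(axis=1)  # classes 0, 1, 1, 0
```

SELM's contribution is distributing exactly these steps, computing H and the matrix decompositions partition-locally while keeping intermediates in memory.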
ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers.
Xing, Yuting; Wu, Chengkun; Yang, Xi; Wang, Wei; Zhu, En; Yin, Jianping
2018-04-27
A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, utilizing three types of named entity recognition tasks as demonstration. Results show that, in most cases, the processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other tasks of biomedical text mining besides NER.
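The abstract does not spell out paraBTM's load balancing strategy, but the general idea of spreading text-mining work across nodes can be sketched with a standard longest-processing-time heuristic (a hypothetical illustration, not paraBTM's actual scheme): documents, weighted by size, are assigned heaviest-first to the currently least-loaded worker.

```python
def balance(doc_lengths, n_workers):
    """Greedy longest-processing-time assignment: sort documents by
    length, largest first, and give each to the least-loaded worker."""
    loads = {w: 0 for w in range(n_workers)}
    assignment = {w: [] for w in range(n_workers)}
    for doc in sorted(doc_lengths, key=doc_lengths.get, reverse=True):
        w = min(loads, key=loads.get)   # least-loaded worker so far
        assignment[w].append(doc)
        loads[w] += doc_lengths[doc]
    return assignment

# Five documents of uneven size spread over two workers:
docs = {"a": 90, "b": 50, "c": 40, "d": 30, "e": 20}
plan = balance(docs, 2)   # loads end up 120 vs 110, roughly even
```

A naive round-robin split of the same documents would leave one worker with far more text to process, which is exactly the idle-node problem a load balancer avoids.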
Always Gamble on an Empty Stomach: Hunger Is Associated with Advantageous Decision Making
de Ridder, Denise; Kroese, Floor; Adriaanse, Marieke; Evers, Catharine
2014-01-01
Three experimental studies examined the counterintuitive hypothesis that hunger improves strategic decision making, arguing that people in a hot state are better able to make favorable decisions involving uncertain outcomes. Studies 1 and 2 demonstrated that participants with more hunger or greater appetite made more advantageous choices in the Iowa Gambling Task compared to sated participants or participants with a smaller appetite. Study 3 revealed that hungry participants were better able to appreciate future big rewards in a delay discounting task; and that, in spite of their perception of increased rewarding value of both food and monetary objects, hungry participants were not more inclined to take risks to get the object of their desire. Together, these studies for the first time provide evidence that hot states improve decision making under uncertain conditions, challenging the conventional conception of the detrimental role of impulsivity in decision making. PMID:25340399
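Delay discounting tasks like the one in Study 3 are commonly analyzed with a hyperbolic discounting model, V = A / (1 + kD), where a larger k means steeper devaluation of delayed rewards. A minimal sketch of the standard model (not necessarily the authors' exact analysis):

```python
def discounted_value(amount, delay_days, k):
    """Subjective value of a delayed reward under hyperbolic discounting:
    V = A / (1 + k * D). Larger k devalues the future more steeply."""
    return amount / (1.0 + k * delay_days)

# A steep discounter (k = 0.05) vs. a shallow one (k = 0.005)
# valuing a 100-unit reward delivered in 30 days:
steep = discounted_value(100, 30, 0.05)     # about 40
shallow = discounted_value(100, 30, 0.005)  # about 87
```

"Better able to appreciate future big rewards" corresponds to a smaller fitted k: the delayed reward retains more of its subjective value.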
A dynamic re-partitioning strategy based on the distribution of key in Spark
NASA Astrophysics Data System (ADS)
Zhang, Tianyu; Lian, Xin
2018-05-01
Spark is a memory-based distributed data processing framework with the ability to process massive data, and it has become a focus in Big Data. However, the performance of the Spark Shuffle depends on the distribution of the data: the naive Hash partition function of Spark cannot guarantee load balancing when data is skewed, and job completion time is driven by the node that has the most data to process. To handle this problem, dynamic sampling is used. During task execution, a histogram is used to count the key frequency distribution on each node, and the per-node histograms are then merged into a global key frequency distribution. After analyzing the distribution of keys, load-balanced data partitions are generated. Results show that the Dynamic Re-Partitioning function outperforms the default Hash partition, Fine Partition and the Balanced-Schedule strategy: it reduces task execution time and improves the efficiency of the whole cluster.
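The histogram-merge-and-repartition idea can be sketched in plain Python (an illustration of the approach, not the paper's Spark implementation): per-node key histograms are merged into a global distribution, and keys are then assigned, heaviest first, to the currently lightest partition.

```python
from collections import Counter
import heapq

def global_histogram(node_histograms):
    """Merge per-node key-frequency histograms into one global histogram."""
    total = Counter()
    for h in node_histograms:
        total.update(h)
    return total

def plan_partitions(histogram, n_partitions):
    """Build a key -> partition table that balances record counts:
    heaviest keys first, each to the currently lightest partition."""
    heap = [(0, p) for p in range(n_partitions)]  # (load, partition id)
    heapq.heapify(heap)
    table = {}
    for key, freq in histogram.most_common():
        load, p = heapq.heappop(heap)
        table[key] = p
        heapq.heappush(heap, (load + freq, p))
    return table

# Skewed data: key "x" dominates two nodes' samples.
nodes = [Counter({"x": 800, "y": 10}), Counter({"x": 200, "z": 90})]
table = plan_partitions(global_histogram(nodes), 2)
```

With a naive hash partitioner, whichever partition receives "x" is overloaded by chance; building the table from observed frequencies makes the imbalance explicit and correctable.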
Meyer, Luisa A.; Johnson, Michael G.; Cullen, Diane M.; Vivanco, Juan F.; Blank, Robert D.; Ploeg, Heidi-Lynn; Smith, Everett L.
2016-01-01
Increased bone formation resulting from mechanical loading is well documented; however, the interactions of the mechanotransduction pathways are less well understood. Endothelin-1, a ubiquitous autocrine/paracrine signaling molecule promotes osteogenesis in metastatic disease. In the present study, it was hypothesized that exposure to big endothelin-1 (big ET1) and/or mechanical loading would promote osteogenesis in ex vivo trabecular bone cores. In a 2×2 factorial trial of daily mechanical loading (−2000 με, 120 cycles daily, “jump” waveform) and big ET1 (25 ng/mL), 48 bovine sternal trabecular bone cores were maintained in bioreactor chambers for 23 days. The bone cores’ response to the treatment stimuli was assessed with percent change in core apparent elastic modulus (ΔEapp), static and dynamic histomorphometry, and prostaglandin E2 (PGE2) secretion. Two-way ANOVA with a post hoc Fisher’s LSD test found no significant treatment effects on ΔEapp (p=0.25 and 0.51 for load and big ET1, respectively). The ΔEapp in the “no load + big ET1” (CE, 13±12.2%, p=0.56), “load + no big ET1” (LC, 17±3.9%, p=0.14) and “load + big ET1” (LE, 19±4.2%, p=0.13) treatment groups were not statistically different than the control group (CC, 3.3%±8.6%). Mineralizing surface (MS/BS), mineral apposition (MAR) and bone formation rates (BFR/BS) were significantly greater in LE than CC (p=0.037, 0.0040 and 0.019, respectively). While the histological bone formation markers in LC trended to be greater than CC (p=0.055, 0.11 and 0.074, respectively) there was no difference between CE and CC (p=0.61, 0.50 and 0.72, respectively). Cores in LE and LC had more than 50% greater MS/BS (p=0.037, p=0.055 respectively) and MAR (p=0.0040, p=0.11 respectively) than CC. The BFR/BS was more than two times greater in LE (p=0.019) and LC (p=0.074) than CC. 
The PGE2 levels were elevated at 8 days post-osteotomy in all groups and the treatment groups remained elevated compared to the CC group on days 15, 19 and 23. The data suggest that combined exposure to big ET1 and mechanical loading results in increased osteogenesis as measured in biomechanical, histomorphometric and biochemical responses. PMID:26855374
78 FR 24718 - Nez Perce-Clearwater National Forests; Idaho; Lolo Insect & Disease Project
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-26
... portions of the project area. Road decommissioning, culvert replacements, road improvements, and soils... to be cost-effective and provide maximum protection of soil and water quality. Big game, primarily... watershed restoration in the Lolo Creek drainage is associated with roads and soil improvement. Existing...
Planning and task management in Parkinson's disease: differential emphasis in dual-task performance.
Bialystok, Ellen; Craik, Fergus I M; Stefurak, Taresa
2008-03-01
Seventeen patients diagnosed with Parkinson's disease completed a complex computer-based task that involved planning and management while also performing an attention-demanding secondary task. The tasks were performed concurrently, but it was necessary to switch from one to the other. Performance was compared to a group of healthy age-matched control participants and a group of young participants. Parkinson's patients performed better than the age-matched controls on almost all measures and as well as the young controls in many cases. However, the Parkinson's patients achieved this by paying relatively less attention to the secondary task and focusing attention more on the primary task. Thus, Parkinson's patients can apparently improve their performance on some aspects of a multidimensional task by simplifying task demands. This benefit may occur as a consequence of their inflexible exaggerated attention to some aspects of a complex task to the relative neglect of other aspects.
BioCreative V CDR task corpus: a resource for chemical disease relation extraction.
Li, Jiao; Sun, Yueping; Johnson, Robin J; Sciaky, Daniela; Wei, Chih-Hsuan; Leaman, Robert; Davis, Allan Peter; Mattingly, Carolyn J; Wiegers, Thomas C; Lu, Zhiyong
2016-01-01
Community-run, formal evaluations and manually annotated text corpora are critically important for advancing biomedical text-mining research. Recently in BioCreative V, a new challenge was organized for the tasks of disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. Given the nature of both tasks, a test collection is required to contain both disease/chemical annotations and relation annotations in the same set of articles. Despite previous efforts in biomedical corpus construction, none was found to be sufficient for the task. Thus, we developed our own corpus called BC5CDR during the challenge by inviting a team of Medical Subject Headings (MeSH) indexers for disease/chemical entity annotation and Comparative Toxicogenomics Database (CTD) curators for CID relation annotation. To ensure high annotation quality and productivity, detailed annotation guidelines and automatic annotation tools were provided. The resulting BC5CDR corpus consists of 1500 PubMed articles with 4409 annotated chemicals, 5818 diseases and 3116 chemical-disease interactions. Each entity annotation includes both the mention text spans and normalized concept identifiers, using MeSH as the controlled vocabulary. To ensure accuracy, the entities were first captured independently by two annotators followed by a consensus annotation: The average inter-annotator agreement (IAA) scores were 87.49% and 96.05% for diseases and chemicals, respectively, in the test set according to the Jaccard similarity coefficient. Our corpus was successfully used for the BioCreative V challenge tasks and should serve as a valuable resource for the text-mining research community. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the United States.
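The Jaccard-style agreement score reported above can be computed directly from two annotators' entity sets. In this sketch, entities are represented as (start, end, MeSH ID) triples; that representation is an assumption for illustration, not a detail from the corpus paper.

```python
def jaccard_agreement(annotator_a, annotator_b):
    """Inter-annotator agreement as the Jaccard similarity of two
    annotation sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(annotator_a), set(annotator_b)
    if not a and not b:
        return 1.0  # vacuous agreement on an empty document
    return len(a & b) / len(a | b)

# Two annotators agree on two of three mentions each (hypothetical spans):
a = {(0, 9, "D003920"), (15, 27, "D006973"), (40, 48, "D009369")}
b = {(0, 9, "D003920"), (15, 27, "D006973"), (52, 60, "D001943")}
score = jaccard_agreement(a, b)  # 2 shared / 4 total = 0.5
```

Disagreements then go to the consensus round described in the abstract, which is why the released corpus is more consistent than either annotator alone.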
76 FR 30722 - Meeting of the Task Force on Community Preventive Services
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-26
... cardiovascular disease and tobacco will also be discussed. Meeting Accessibility: This meeting is open to the... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention Meeting of the Task Force on Community Preventive Services AGENCY: Centers for Disease Control and Prevention (CDC...
Machine learning in cardiovascular medicine: are we there yet?
Shameer, Khader; Johnson, Kipp W; Glicksberg, Benjamin S; Dudley, Joel T; Sengupta, Partho P
2018-01-19
Artificial intelligence (AI) broadly refers to analytical algorithms that iteratively learn from data, allowing computers to find hidden insights without being explicitly programmed where to look. These include a family of operations encompassing several terms like machine learning, cognitive learning, deep learning and reinforcement learning-based methods that can be used to integrate and interpret complex biomedical and healthcare data in scenarios where traditional statistical methods may not be able to perform. In this review article, we discuss the basics of machine learning algorithms and what potential data sources exist; evaluate the need for machine learning; and examine the potential limitations and challenges of implementing machine learning in the context of cardiovascular medicine. The most promising avenues for AI in medicine are the development of automated risk prediction algorithms which can be used to guide clinical care; use of unsupervised learning techniques to more precisely phenotype complex disease; and the implementation of reinforcement learning algorithms to intelligently augment healthcare providers. The utility of a machine learning-based predictive model will depend on factors including data heterogeneity, data depth, data breadth, nature of the modelling task, choice of machine learning and feature selection algorithms, and orthogonal evidence. A critical understanding of the strength and limitations of various methods and tasks amenable to machine learning is vital. By leveraging the growing corpus of big data in medicine, we detail pathways by which machine learning may facilitate optimal development of patient-specific models for improving diagnoses, intervention and outcome in cardiovascular medicine. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
On the difficulty of defining disease: a Darwinian perspective.
Nesse, R M
2001-01-01
Most attempts to craft a definition of disease seem to have tackled two tasks simultaneously: 1) trying to create a series of inclusion and exclusion criteria that correspond to medical usage of the word disease and 2) using this definition to understand the essence of what disease is. The first task has been somewhat accomplished, but cannot reach closure because the concept of "disease" is based on a prototype, not a logical category. The second task cannot be accomplished by deduction, but only by understanding how the body works and what each component is for, in evolutionary detail. An evolutionary view of the origins of the body and its vulnerabilities that result in disease provides an objective foundation for recognizing pathology. Our social definition of disease will remain contentious, however, because values vary, and because the label "disease" changes judgments about the moral status of people with various conditions, and their rights to medical and social resources.
SraTailor: graphical user interface software for processing and visualizing ChIP-seq data.
Oki, Shinya; Maehara, Kazumitsu; Ohkawa, Yasuyuki; Meno, Chikara
2014-12-01
Raw data from ChIP-seq (chromatin immunoprecipitation combined with massively parallel DNA sequencing) experiments are deposited in public databases as SRAs (Sequence Read Archives) that are publicly available to all researchers. However, to graphically visualize ChIP-seq data of interest, the corresponding SRAs must be downloaded and converted into BigWig format, a process that involves complicated command-line processing. This task requires users to possess skill with script languages and sequence data processing, a requirement that prevents a wide range of biologists from exploiting SRAs. To address these challenges, we developed SraTailor, a GUI (Graphical User Interface) software package that automatically converts an SRA into a BigWig-formatted file. Simplicity of use is one of the most notable features of SraTailor: entering an accession number of an SRA and clicking the mouse are the only steps required to obtain BigWig-formatted files and to graphically visualize the extents of reads at given loci. SraTailor is also able to make peak calls, generate files of other formats, process users' own data, and accept various command-line-like options. Therefore, this software makes ChIP-seq data fully exploitable by a wide range of biologists. SraTailor is freely available at http://www.devbio.med.kyushu-u.ac.jp/sra_tailor/, and runs on both Mac and Windows machines. © 2014 The Authors Genes to Cells © 2014 by the Molecular Biology Society of Japan and Wiley Publishing Asia Pty Ltd.
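SraTailor's internal commands are not described in the abstract, but the SRA-to-BigWig conversion it automates can be sketched with widely used public tools (SRA Toolkit's prefetch/fasterq-dump, bowtie2, samtools, bedtools, and UCSC's bedGraphToBigWig). These tool choices are assumptions for illustration, not necessarily what SraTailor invokes, and output redirection between steps is omitted for brevity.

```python
def sra_to_bigwig_commands(accession, genome_index, chrom_sizes):
    """Build (without executing) the command pipeline a tool like
    SraTailor hides behind its GUI: download reads, align them,
    compute coverage, and emit a BigWig track."""
    return [
        ["prefetch", accession],                              # fetch the SRA
        ["fasterq-dump", accession, "-O", "fastq"],           # SRA -> FASTQ
        ["bowtie2", "-x", genome_index,                       # align reads
         "-U", f"fastq/{accession}.fastq", "-S", f"{accession}.sam"],
        ["samtools", "sort", "-o", f"{accession}.bam", f"{accession}.sam"],
        ["bedtools", "genomecov", "-bg", "-ibam", f"{accession}.bam"],
        ["bedGraphToBigWig", f"{accession}.bedGraph", chrom_sizes,
         f"{accession}.bw"],
    ]

steps = sra_to_bigwig_commands("SRR000001", "hg38", "hg38.chrom.sizes")
```

Even this skeleton shows why a GUI wrapper helps: six tools, each with its own flags and intermediate files, stand between an accession number and a viewable track.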
Programming and Runtime Support to Blaze FPGA Accelerator Deployment at Datacenter Scale
Huang, Muhuan; Wu, Di; Yu, Cody Hao; Fang, Zhenman; Interlandi, Matteo; Condie, Tyson; Cong, Jason
2017-01-01
With the end of CPU core scaling due to dark silicon limitations, customized accelerators on FPGAs have gained increased attention in modern datacenters due to their lower power, high performance and energy efficiency. Evidenced by Microsoft’s FPGA deployment in its Bing search engine and Intel’s $16.7 billion acquisition of Altera, integrating FPGAs into datacenters is considered one of the most promising approaches to sustain future datacenter growth. However, it is quite challenging for existing big data computing systems—like Apache Spark and Hadoop—to access the performance and energy benefits of FPGA accelerators. In this paper we design and implement Blaze to provide programming and runtime support for enabling easy and efficient deployments of FPGA accelerators in datacenters. In particular, Blaze abstracts FPGA accelerators as a service (FaaS) and provides a set of clean programming APIs for big data processing applications to easily utilize those accelerators. Our Blaze runtime implements an FaaS framework to efficiently share FPGA accelerators among multiple heterogeneous threads on a single node, and extends Hadoop YARN with accelerator-centric scheduling to efficiently share them among multiple computing tasks in the cluster. Experimental results using four representative big data applications demonstrate that Blaze greatly reduces the programming efforts to access FPGA accelerators in systems like Apache Spark and YARN, and improves the system throughput by 1.7 × to 3× (and energy efficiency by 1.5× to 2.7×) compared to a conventional CPU-only cluster. PMID:28317049
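The accelerator-as-a-service idea can be illustrated with a toy registry in which applications invoke a kernel by name and transparently fall back to a CPU implementation when no accelerator is registered. This is a conceptual sketch of the abstraction, not Blaze's actual API.

```python
class FaaSRegistry:
    """Toy accelerator-as-a-service registry: tasks request a kernel by
    name; if no accelerator implementation is available, the CPU
    version runs instead (mirroring the transparent-fallback idea)."""

    def __init__(self):
        self._accel = {}   # name -> accelerator-backed implementation
        self._cpu = {}     # name -> CPU implementation

    def register(self, name, cpu_fn, accel_fn=None):
        self._cpu[name] = cpu_fn
        if accel_fn is not None:
            self._accel[name] = accel_fn

    def invoke(self, name, *args):
        fn = self._accel.get(name, self._cpu.get(name))
        if fn is None:
            raise KeyError(f"no implementation registered for {name!r}")
        return fn(*args)

faas = FaaSRegistry()
faas.register("vector_add",
              cpu_fn=lambda a, b: [x + y for x, y in zip(a, b)])
result = faas.invoke("vector_add", [1, 2], [3, 4])  # CPU fallback path
```

The point of the indirection is that application code calls `invoke` the same way whether the kernel runs on an FPGA or a CPU; scheduling and sharing decisions live behind the registry.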
Mercury Capsule Construction at the NASA Lewis Research Center
1959-08-21
A NASA mechanic secures the afterbody to a Mercury capsule in the hangar at the Lewis Research Center. The capsule was one of two built at Lewis for the “Big Joe” launches scheduled for September 1959. The initial phase of Project Mercury consisted of a series of unmanned launches using the Air Force’s Redstone and Atlas boosters and the Langley-designed Little Joe boosters. The first Atlas launch, referred to as “Big Joe”, was a single attempt early in Project Mercury to use a full-scale Atlas booster to simulate the reentry of a mock-up Mercury capsule without actually placing it in orbit. The overall design of Big Joe had been completed by December 1958, and soon thereafter project manager Aleck Bond assigned NASA Lewis the task of designing the electronic instrumentation and automatic stabilization system. Lewis also constructed the capsule’s lower section, which contained a pressurized area with the electronics and two nitrogen tanks for the retrorockets. Lewis technicians were responsible for assembling the entire capsule: the General Electric heatshield, NASA Langley afterbody and recovery canister, and Lewis electronics and control systems. On June 9, 1959, the capsule was loaded on an air force transport aircraft and flown to Cape Canaveral. A team of 45 test operations personnel from Lewis followed the capsule to Florida and spent the ensuing months preparing it for launch. The launch took place in the early morning hours of September 9, 1959.
Multi-Axis Space Inertia Test Facility inside the Altitude Wind Tunnel
1960-04-21
The Multi-Axis Space Test Inertial Facility (MASTIF) in the Altitude Wind Tunnel at the National Aeronautics and Space Administration (NASA) Lewis Research Center. Although the Mercury astronaut training and mission planning were handled by the Space Task Group at Langley Research Center, NASA Lewis played an important role in the program, beginning with the Big Joe launch. Big Joe was a singular attempt early in the program to use a full-scale Atlas booster and simulate the reentry of a mockup Mercury capsule without actually placing it in orbit. A unique three-axis gimbal rig was built inside Lewis’ Altitude Wind Tunnel to test Big Joe’s attitude controls. The control system was vital since the capsule would burn up on reentry if it were not positioned correctly. The mission was intended to assess the performance of the Atlas booster, the reliability of the capsule’s attitude control system and beryllium heat shield, and the capsule recovery process. The September 9, 1959 launch was a success for the control system and heatshield. Only a problem with the Atlas booster kept the mission from being a perfect success. The MASTIF was modified in late 1959 to train Project Mercury pilots to bring a spinning spacecraft under control. An astronaut was secured in a foam couch in the center of the rig. The rig then spun on three axes from 2 to 50 rotations per minute. Small nitrogen gas thrusters were used by the astronauts to bring the MASTIF under control.
Fernandes, Ângela; Rocha, Nuno; Santos, Rubim; Tavares, João Manuel R S
2015-01-01
The aim of this study was to analyze the efficacy of cognitive-motor dual-task training compared with single-task training on balance and executive functions in individuals with Parkinson's disease. Fifteen subjects, aged between 39 and 75 years old, were randomly assigned to the dual-task training group (n = 8) and single-task training group (n = 7). The training was run twice a week for 6 weeks. The single-task group received balance training and the dual-task group performed cognitive tasks simultaneously with the balance training. There were no significant differences between the two groups at baseline. After the intervention, the results for mediolateral sway with eyes closed were significantly better for the dual-task group and anteroposterior sway with eyes closed was significantly better for the single-task group. The results suggest superior outcomes for the dual-task training compared to the single-task training for static postural control, except in anteroposterior sway with eyes closed.
Wild, Lucia Bartmann; de Lima, Daiane Borba; Balardin, Joana Bisol; Rizzi, Luana; Giacobbo, Bruno Lima; Oliveira, Henrique Bianchi; de Lima Argimon, Irani Iracema; Peyré-Tartaruga, Leonardo Alexandre; Rieder, Carlos R M; Bromberg, Elke
2013-02-01
The primary purpose of this study was to investigate the effect of dual-tasking on cognitive performance and gait parameters in patients with idiopathic Parkinson's disease (PD) without dementia. The impact of cognitive task complexity on cognition and walking was also examined. Eighteen patients with PD (ages 53-88, 10 women; Hoehn and Yahr stage I-II) and 18 older adults (ages 61-84; 10 women) completed two neuropsychological measures of executive function/attention (the Stroop Test and Wisconsin Card Sorting Test). Cognitive performance and gait parameters related to functional mobility of stride were measured under single (cognitive task only) and dual-task (cognitive task during walking) conditions with different levels of difficulty and different types of stimuli. In addition, dual-task cognitive costs were calculated. Although cognitive performance showed no significant difference between controls and PD patients during single or dual-tasking conditions, only the patients had a decrease in cognitive performance during walking. Gait parameters of patients differed significantly from controls at single and dual-task conditions, indicating that patients gave priority to gait while cognitive performance suffered. Dual-task cognitive costs of patients increased with task complexity, reaching significantly higher values than controls in the arithmetic task, which was correlated with scores on executive function/attention (Stroop Color-Word Page). Baseline motor functioning and task executive/attentional load affect the performance of cognitive tasks of PD patients while walking. These findings provide insight into the functional strategies used by PD patients in the initial phases of the disease to manage dual-task interference.
Effect of aerobic exercise on physical performance in patients with Alzheimer's disease.
Sobol, Nanna Aue; Hoffmann, Kristine; Frederiksen, Kristian Steen; Vogel, Asmus; Vestergaard, Karsten; Brændgaard, Hans; Gottrup, Hanne; Lolk, Annette; Wermuth, Lene; Jakobsen, Søren; Laugesen, Lars; Gergelyffy, Robert; Høgh, Peter; Bjerregaard, Eva; Siersma, Volkert; Andersen, Birgitte Bo; Johannsen, Peter; Waldemar, Gunhild; Hasselbalch, Steen Gregers; Beyer, Nina
2016-12-01
Knowledge about the feasibility and effects of exercise programs to persons with Alzheimer's disease is lacking. This study investigated the effect of aerobic exercise on physical performance in community-dwelling persons with mild Alzheimer's disease. The single blinded multi-center RCT (ADEX) included 200 patients, median age 71 yrs (50-89). The intervention group received supervised moderate-to-high intensity aerobic exercise 1 hour × 3/week for 16 weeks. Assessments included cardiorespiratory fitness, single-task physical performance, dual-task performance and exercise self-efficacy. Significant between-group differences in change from baseline (mean [95%CI]) favored the intervention group for cardiorespiratory fitness (4.0 [2.3-5.8] ml/kg/min, P <0.0001) and exercise self-efficacy (1.7 [0.5-2.8] points, P =0.004). Furthermore, an exercise attendance of ≥66.6% resulted in significant positive effects on single-task physical performance and dual-task performance. Aerobic exercise has the potential to improve cardiorespiratory fitness, single-task physical performance, dual-task performance and exercise self-efficacy in community-dwelling patients with mild Alzheimer's disease. Copyright © 2016 the Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
Verbal Fluency Performance in Patients with Non-demented Parkinson's Disease
Khatoonabadi, Ahmad Reza; Bakhtiyari, Jalal
2013-01-01
Objective: While Parkinson's disease (PD) has traditionally been defined by motor symptoms, much research has indicated that mild cognitive impairment is common in non-demented PD patients. The purpose of this study was to compare verbal fluency performance in non-demented Parkinson's disease patients with healthy controls. Method: In this cross-sectional study, thirty non-demented Parkinson's disease patients and 30 healthy controls, matched by age, gender and education, were compared on verbal fluency performance. Verbal fluency was assessed with a phonemic fluency task using the letters F, A, and S, and a semantic fluency task using the categories animals and fruits. The independent t-test was used for data analysis. Results: Overall, participants generated more words in the semantic fluency task than in the phonemic fluency task. Results revealed significant differences between patients and controls in the semantic fluency task (p<.05). In addition, PD patients showed a significant reduction of correctly generated words in the letter fluency task. The total number of words produced was also significantly lower in the PD group (p<.05). Conclusion: Verbal fluency disruption is implied in non-demented PD patients in association with incipient cognitive impairment. PMID:23682253
Exploiting Helminth-Host Interactomes through Big Data.
Sotillo, Javier; Toledo, Rafael; Mulvenna, Jason; Loukas, Alex
2017-11-01
Helminths facilitate their parasitic existence through the production and secretion of different molecules, including proteins. Some helminth proteins can manipulate the host's immune system, a phenomenon that is now being exploited with a view to developing therapeutics for inflammatory diseases. In recent years, hundreds of helminth genomes have been sequenced, but as a community we are still taking baby steps when it comes to identifying proteins that govern host-helminth interactions. The information generated from genomic, immunomic, and proteomic studies, as well as from cutting-edge approaches such as proteogenomics, is leading to a substantial volume of big data that can be utilised to shed light on fundamental biology and provide solutions for the development of bioactive-molecule-based therapeutics. Copyright © 2017 Elsevier Ltd. All rights reserved.
References for Haplotype Imputation in the Big Data Era
Li, Wenzhi; Xu, Wei; Li, Qiling; Ma, Li; Song, Qing
2016-01-01
Imputation is a powerful in silico approach for filling in missing values in big datasets. This process requires a reference panel, which is a collection of big data from which the missing information can be extracted and imputed. Haplotype imputation requires ethnicity-matched references; a mismatched reference panel will significantly reduce the quality of imputation. However, currently existing big datasets cover only a small number of ethnicities, so there is a lack of ethnicity-matched references for many ethnic populations in the world, which has hampered the data imputation of haplotypes and its downstream applications. To solve this issue, several approaches have been proposed and explored, including the mixed reference panel, the internal reference panel and the genotype-converted reference panel. This review article provides information on and comparisons between these approaches. Increasing evidence shows that not just one or two genetic elements dictate gene activity and functions; instead, cis-interactions of multiple elements dictate gene activity. Cis-interactions require the interacting elements to be on the same chromosome molecule; therefore, haplotype analysis is essential for the investigation of cis-interactions among multiple genetic variants at different loci, and appears to be especially important for studying common diseases. It will be valuable in a wide spectrum of applications, from academic research to clinical diagnosis, prevention, treatment, and the pharmaceutical industry. PMID:27274952
Lenert, L.; Lopez-Campos, G.
2014-01-01
Summary: Objectives: Given the quickening speed of discovery of variant disease drivers from combined patient genotype and phenotype data, the objective is to provide methodology using big data technology to support the definition of deep phenotypes in medical records. Methods: As the vast stores of genomic information increase with next generation sequencing, the importance of deep phenotyping increases. The growth of genomic data and adoption of Electronic Health Records (EHR) in medicine provides a unique opportunity to integrate phenotype and genotype data into medical records. The method by which collections of clinical findings and other health-related data are leveraged to form meaningful phenotypes is an active area of research. Longitudinal data stored in EHRs provide a wealth of information that can be used to construct phenotypes of patients. We focus on a practical problem around data integration for deep phenotype identification within EHR data. Big data approaches are described that enable scalable markup of EHR events, which can be used for semantic and temporal similarity analysis to support the identification of phenotype and genotype relationships. Conclusions: Stead and colleagues' 2005 concept of using light standards to increase the productivity of software systems by riding on the wave of hardware/processing power is described as a harbinger for designing future healthcare systems. The big data solution, using flexible markup, provides a route to improved utilization of processing power for organizing patient records in genotype and phenotype research. PMID:25123744
Inflammation Thread Runs across Medical Laboratory Specialities.
Nydegger, Urs; Lung, Thomas; Risch, Lorenz; Risch, Martin; Medina Escobar, Pedro; Bodmer, Thomas
2016-01-01
We work on the assumption that four major specialities or sectors of medical laboratory assays, comprising clinical chemistry, haematology, immunology, and microbiology, embraced by genome sequencing techniques, are routinely in use. Medical laboratory markers for inflammation serve as a model: they are allotted to most fields of medical lab assays, including genomics. Incessant coding of assays aligns each of them in the long lists of big data. As exemplified with the complement gene family, containing C2, C3, C8A, C8B, CFH, CFI, and ITGB2, heritability patterns/risk factors associated with diseases with a genetic glitch of complement components are unfolding. C4 component serum levels depend on sufficient vitamin D, whilst low vitamin D is inversely related to IgG1, IgA, and C3, linking vitamin sufficiency to innate immunity. Whole genome sequencing of microbial organisms may distinguish virulent from nonvirulent and antibiotic resistant from nonresistant varieties of the same species and thus can be listed in personal big data banks including microbiological pathology; the big data warehouse continues to grow. PMID:27493451
Partial achilles tendon rupture presenting with giant hematoma; MRI findings of 4 year follow up.
Sarsilmaz, Aysegul; Varer, Makbule; Coskun, Gulten; Apaydın, Melda; Oyar, Orhan
2011-12-01
In the young population, spontaneous rupture of the Achilles tendon is very rare. A big hematoma is also a rare finding in partial rupture of the Achilles tendon; it is usually seen with complete rupture. We present imaging findings from a 4-year follow-up of a spontaneous partial rupture of the Achilles tendon presenting with a giant expanding hematoma and radiologically mimicking complete rupture, and we discuss the long-term alterations of tendon signal intensity and the result of conservative therapy after partial rupture with a big hematoma. A 29-year-old man presented with pain and swelling in the retrocalcaneal region of the left ankle. He had no chronic metabolic disease and was not physically active. X-ray radiograms were normal. On magnetic resonance imaging (MRI), there was a big intratendinous hematoma, and the subcutaneous fat planes around the tendon were edematous. The diagnosis was partial rupture with giant hematoma. The hematoma was drained, conservative treatment was applied, and his complaints disappeared. Approximately 4 years after treatment, control MRI showed a thickened, hypointense tendon in all images. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.
Mohamadi Hasel, Kurosh; Besharat, Mohamad Ali; Abdolhoseini, Amir; Alaei Nasab, Somaye; Niknam, Seyran
2013-06-01
The objective of this study is to examine the relationships of hardiness and big five personality factors to depression, perceived stress, and oral lichen planus (OLP) severity. Sixty Iranian patients with oral lichen planus completed measures of perceived stress, hardiness, the big five, and depression. Linear regressions revealed that control and challenge significantly predicted lower perceived stress. In contrast, the big five factor of neuroticism predicted greater perceived stress. Furthermore, control, commitment, and extraversion negatively predicted depression levels, but neuroticism positively predicted depression levels. Additionally, higher levels of the challenge factor predicted lower OLP scores, while higher levels of perceived stress predicted higher OLP scores. The control and challenge components and the neuroticism factor had a significant role in predicting perceived stress. On the other hand, the control and commitment components and the extraversion factor had a prominent role in predicting depression in patients with OLP, so personality constructs may have an effective role in triggering the experience of stress, depression, and OLP itself. Additionally, interventions that enhance individual protective factors may be beneficial in reducing stress and depression in some severe diseases.
BRIC Health Systems and Big Pharma: A Challenge for Health Policy and Management.
Rodwin, Victor G; Fabre, Guilhem; Ayoub, Rafael F
2018-01-02
BRIC nations - Brazil, Russia, India, and China - represent 40% of the world's population, including a growing aging population and middle class with an increasing prevalence of chronic disease. Their healthcare systems increasingly rely on prescription drugs, but they differ from most other healthcare systems because healthcare expenditures in BRIC nations have exhibited the highest revenue growth rates for pharmaceutical multinational corporations (MNCs), Big Pharma. The response of BRIC nations to Big Pharma presents contrasting cases of how governments manage the tensions posed by rising public expectations and limited resources to satisfy them. Understanding these tensions represents an emerging area of research and an important challenge for all those who work in the field of health policy and management (HPAM). © 2018 The Author(s); Published by Kerman University of Medical Sciences. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Modeling Alzheimer's disease cognitive scores using multi-task sparse group lasso.
Liu, Xiaoli; Goncalves, André R; Cao, Peng; Zhao, Dazhe; Banerjee, Arindam
2018-06-01
Alzheimer's disease (AD) is a severe neurodegenerative disorder characterized by loss of memory and reduction in cognitive functions due to progressive degeneration of neurons and their connections, eventually leading to death. In this paper, we consider the problem of simultaneously predicting several different cognitive scores associated with categorizing subjects as normal, mild cognitive impairment (MCI), or Alzheimer's disease (AD) in a multi-task learning framework, using features extracted from brain images obtained from ADNI (Alzheimer's Disease Neuroimaging Initiative). To solve the problem, we present a multi-task sparse group lasso (MT-SGL) framework, which estimates sparse features coupled across tasks, and can work with loss functions associated with any Generalized Linear Model. Through comparisons with a variety of baseline models using multiple evaluation metrics, we illustrate the promising predictive performance of MT-SGL on ADNI, along with its ability to identify brain regions more likely to help the characterization of Alzheimer's disease progression. Copyright © 2017 Elsevier Ltd. All rights reserved.
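The sparse group lasso penalty named in the MT-SGL abstract combines an entrywise l1 term with a group l2 term over each feature's coefficients across tasks. As an illustrative sketch only (the paper's exact formulation and solver may differ), the proximal operator that a proximal-gradient solver would apply for the penalty lam1*||W||_1 + lam2*sum_j ||W[j,:]||_2 looks like this:

```python
import numpy as np

def prox_sparse_group_lasso(W, lam1, lam2, step=1.0):
    """Proximal operator for the sparse group lasso penalty
    lam1 * ||W||_1 + lam2 * sum_j ||W[j, :]||_2,
    where row j of W holds feature j's coefficients across all tasks.
    Hypothetical helper for illustration, not the paper's implementation."""
    # entrywise soft-thresholding handles the l1 part
    V = np.sign(W) * np.maximum(np.abs(W) - step * lam1, 0.0)
    # rowwise group soft-thresholding handles the l2-over-rows part
    norms = np.linalg.norm(V, axis=1, keepdims=True)
    scale = np.maximum(1.0 - step * lam2 / np.maximum(norms, 1e-12), 0.0)
    return V * scale
```

Rows whose entries are uniformly small are zeroed out jointly, which is what couples feature selection across the cognitive-score tasks.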
Gordon, Brian A; Zacks, Jeffrey M; Blazey, Tyler; Benzinger, Tammie L S; Morris, John C; Fagan, Anne M; Holtzman, David M; Balota, David A
2015-05-01
There is a growing emphasis on examining preclinical levels of Alzheimer's disease (AD)-related pathology in the absence of cognitive impairment. Previous work examining biomarkers has focused almost exclusively on memory, although there is mounting evidence that attention also declines early in disease progression. In the current experiment, 2 attentional control tasks were used to examine alterations in task-evoked functional magnetic resonance imaging data related to biomarkers of AD pathology. Seventy-one cognitively normal individuals (females = 44, mean age = 63.5 years) performed 2 attention-demanding cognitive tasks in a design that modeled both trial- and task-level functional magnetic resonance imaging changes. Biomarkers included amyloid β42, tau, and phosphorylated tau measured from cerebrospinal fluid and positron emission tomography measures of amyloid deposition. Both tasks elicited widespread patterns of activation and deactivation associated with large task-level manipulations of attention. Importantly, results from both tasks indicated that higher levels of tau and phosphorylated tau pathologies were associated with block-level overactivations of attentional control areas. This suggests early alteration in attentional control with rising levels of AD pathology. Copyright © 2015 Elsevier Inc. All rights reserved.
Deploying Object Oriented Data Technology to the Planetary Data System
NASA Technical Reports Server (NTRS)
Kelly, S.; Crichton, D.; Hughes, J. S.
2003-01-01
How do you provide more than 350 scientists and researchers access to data from every instrument in Odyssey when the data is curated across half a dozen institutions, is in different formats, and is too big to mail on a CD-ROM anymore? The Planetary Data System (PDS) faced this exact question. The solution was to use a metadata-based middleware framework developed by the Object Oriented Data Technology task at NASA's Jet Propulsion Laboratory. Using OODT, PDS provided - for the first time ever - data from all mission instruments through a single system immediately upon data delivery.
Serb, C
2001-01-01
Disease management was supposed to be the next big thing in health care, thanks largely to the breathtaking pace of advances in technology. But so far reports of a disease management revolution have been premature. While technology vendors promise that their devices can save millions and keep patients healthier longer, providers have been reluctant to make the large investments necessary until they have proof of a payoff. Formal studies and reliable statistics to provide such proof are sorely lacking, although anecdotal evidence appears to signal high hopes for the future.
Novel Agents against Miltefosine-Unresponsive Leishmania donovani
Das, Mousumi; Saha, Gundappa; Saikia, Anil K.
2015-01-01
Visceral leishmaniasis is a deadly endemic disease. Unresponsiveness to the only available oral drug miltefosine poses a big challenge for the chemotherapy of the disease. We report a novel molecule, PS-203 {4-(4,4,8-trimethyl-7-oxo-3-oxabicyclo[3.3.1]non-2-yl)-benzoic acid methyl ester}, as effective against a miltefosine-unresponsive strain of the parasite. Further, combinations of PS-203 with miltefosine were also evaluated and showed promising results against a miltefosine-unresponsive strain. PMID:26392497
Ewing, E Thomas; Gad, Samah; Ramakrishnan, Naren; Reznick, Jeffrey S
2014-10-01
Humanities scholars, particularly historians of health and disease, can benefit from digitized library collections and tools such as topic modeling. Using a case study from the 1918 Spanish Flu epidemic, this paper explores the application of a big humanities approach to understanding the impact of a public health official on the course of the disease and the response of the public, as documented through digitized newspapers and medical periodicals.
NASA Astrophysics Data System (ADS)
Barnosky, A. D.
2012-12-01
While the ultimate extinction driver now—Homo sapiens—is unique with respect to the drivers of past extinctions, comparison of parallel neontological and paleontological information helps calibrate how far the so-called Sixth Mass Extinction has progressed and whether it is inevitable. Such comparisons document that rates of extinction today are approaching or exceeding those that characterized the Big Five Mass Extinctions. Continuation of present extinction rates for vertebrates, for example, would result in 75% species loss—the minimum benchmark exhibited in the Big Five extinctions—within 3 to 22 centuries, assuming constant rates of loss and no threshold effects. Preceding and during each of the Big Five, the global ecosystem experienced major changes in climate, atmospheric chemistry, and ocean chemistry—not unlike what is being observed presently. Nevertheless, only 1-2% of well-assessed modern species have been lost over the past five centuries, still far below what characterized past mass extinctions in the strict paleontological sense. For mammals, adding in the end-Pleistocene species that died out would increase the species-loss percentage by some 5%. If threatened vertebrate species were to actually go extinct, losses would rise to between 14 and 40%, depending on the group. Such observations highlight that, although many species have already had their populations drastically reduced to near-critical levels, the Sixth Mass Extinction has not yet progressed to the point where it is unavoidable. Put another way, the vast majority of species that have occupied the world in concert with Homo sapiens are still alive and can still be saved.
That task, however, will require slowing the abnormally high extinction rates that are now in progress, which in turn requires unified efforts to cap human population growth, decrease the average human footprint, reduce fossil fuel use while simultaneously increasing clean energy technologies, integrate valuation of natural capital into economic systems, and rescue species from impacts of inevitable climate change.
ERIC Educational Resources Information Center
Gitlin, Laura N.; Winter, Laraine; Dennis, Marie P.; Corcoran, Mary; Schinfeld, Sandy; Hauck, Walter W.
2002-01-01
Purpose: Little is known about the specific behavioral strategies used by families to manage the physical dependency of persons with Alzheimer's disease and related disorders (ADRD). This study reports the psychometric properties of the Task Management Strategy Index (TMSI), a measure designed to identify actions taken by caregivers to simplify…
ERIC Educational Resources Information Center
Vanhoutte, Sarah; De Letter, Miet; Corthals, Paul; Van Borsel, John; Santens, Patrick
2012-01-01
The present study examined language production skills in Parkinson's disease (PD) patients. A unique cued sentence generation task was created in order to reduce demands on memory and attention. Differences in sentence production abilities according to disease severity and cognitive impairments were assessed. Language samples were obtained from 20…
Xingjun, Guo; Feng, Zhu; Min, Wang; Renyi, Qin
2016-08-01
Crohn's disease of the duodenum is an uncommon condition. Our patient presented with an extremely rare manifestation of Crohn's disease: obstruction of the pylorus and of the first and second parts of the duodenum. Because of the severity of the obstruction, he underwent laparoscopic pancreaticoduodenectomy. Postoperative pancreatic leakage and bowel fistula were not observed, and there was no morbidity during the follow-up period. There was also no disturbance in digestive function postoperatively. This is the first case employing laparoscopic pancreaticoduodenectomy to cure benign lesions leading to duodenal obstruction. Minimally invasive laparoscopic pancreaticoduodenectomy offers a major advantage in treating this rare benign form of Crohn's disease.
Rassart, Jessica; Luyckx, Koen; Goossens, Eva; Oris, Leen; Apers, Silke; Moons, Philip
2016-06-01
This study aimed (1) to identify different personality types in adolescents with congenital heart disease (CHD), and (2) to relate these personality types to psychosocial functioning and several domains of perceived health, both concurrently and prospectively. Hence, this study aimed to expand previous research by adopting a person-centered approach to personality through focusing on personality types rather than singular traits. Adolescents with CHD were selected from the database of pediatric and congenital cardiology of the University Hospitals Leuven. A total of 366 adolescents (15-20 years old) with CHD participated at time 1. These adolescents completed questionnaires on the Big Five personality traits, depressive symptoms, loneliness, and generic and disease-specific domains of health. Nine months later, 313 patients again completed questionnaires. Cluster analysis at time 1 revealed three personality types: resilients (37 %), undercontrollers (34 %), and overcontrollers (29 %), closely resembling typologies obtained in previous community samples. Resilients, under-, and overcontrollers did not differ in terms of disease complexity, but differed on depressive symptoms, loneliness, and generic and disease-specific domains of perceived health at both time-points. Overall, resilients showed the most favorable outcomes and overcontrollers the poorest, with undercontrollers scoring in-between. Personality assessment can help clinicians in identifying adolescents at risk for physical and psychosocial difficulties later in time. In this study, both over- and undercontrollers were identified as high-risk groups. Our findings show that both personality traits and types should be taken into account to obtain a detailed view on the associations between personality and health.
[The analysis of the misdiagnosis big data of the otolaryngology during 2004 to 2013 in China].
Ding, B; Chen, X H
2016-08-05
Objective: The aim of this study is to explore the misdiagnosis status of otolaryngology in China and to provide evidence to reduce misdiagnosis and improve the diagnostic level. Method: The retrieval and management system of the misdiagnosed diseases database developed by Chen Xiaohong was used to search the literature on misdiagnosis in otolaryngology. Ten years' misdiagnosis literature data in otolaryngology (2004 to 2013) were analyzed, including the literature sources, sample size, misdiagnosis rate, misdiagnosis consequences and misdiagnosis reasons. Result: A total of 369 articles were found, including 4211 cases. The average misdiagnosis rate was 25.43% across 51 diagnosed diseases. The diseases with the highest misdiagnosis rates were nasopharyngeal tuberculosis (84.76%), tuberculous otitis media (75%) and congenital laryngeal cyst (75%); the lowest was nasosinusitis (5.92%). The top three misdiagnosed diseases by case count were tuberculosis of the otolaryngology region (1216 cases), nasosinusitis (710 cases) and BPPV (697 cases). Statistical analysis showed that 97.22% of the misdiagnosed patients had grade Ⅲ consequences (that is, the misdiagnosis and mistreatment did not cause adverse consequences), but 10 cases still led to grade Ⅰ consequences (death or sequelae). The main causes of misdiagnosis were lack of diagnostic experience, non-detailed interrogation and physical examination, and non-targeted examinations. Conclusion: This 10-year big data reflects, to some extent, the misdiagnosis phenomenon in otolaryngology. Neurologists, stomatologists and ophthalmologists should be familiar with the main points of differential diagnosis of otolaryngology diseases and strive to reduce clinical misdiagnosis and mistreatment. Copyright© by the Editorial Department of Journal of Clinical Otorhinolaryngology Head and Neck Surgery.
Enabling phenotypic big data with PheNorm.
Yu, Sheng; Ma, Yumeng; Gronsbell, Jessica; Cai, Tianrun; Ananthakrishnan, Ashwin N; Gainer, Vivian S; Churchill, Susanne E; Szolovits, Peter; Murphy, Shawn N; Kohane, Isaac S; Liao, Katherine P; Cai, Tianxi
2018-01-01
Electronic health record (EHR)-based phenotyping infers whether a patient has a disease based on the information in his or her EHR. A human-annotated training set with gold-standard disease status labels is usually required to build an algorithm for phenotyping based on a set of predictive features. The time intensiveness of annotation and feature curation severely limits the ability to achieve high-throughput phenotyping. While previous studies have successfully automated feature curation, annotation remains a major bottleneck. In this paper, we present PheNorm, a phenotyping algorithm that does not require expert-labeled samples for training. The most predictive features, such as the number of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes or mentions of the target phenotype, are normalized to resemble a normal mixture distribution with high area under the receiver operating characteristic curve (AUC) for prediction. The transformed features are then denoised and combined into a score for accurate disease classification. We validated the accuracy of PheNorm with 4 phenotypes: coronary artery disease, rheumatoid arthritis, Crohn's disease, and ulcerative colitis. The AUCs of the PheNorm score reached 0.90, 0.94, 0.95, and 0.94 for the 4 phenotypes, respectively, which were comparable to the accuracy of supervised algorithms trained with sample sizes of 100-300, with no statistically significant difference. The accuracy of the PheNorm algorithms is on par with algorithms trained with annotated samples. PheNorm fully automates the generation of accurate phenotyping algorithms and demonstrates the capacity for EHR-driven annotations to scale to the next level: phenotypic big data. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
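The PheNorm abstract above describes normalizing a predictive count feature so it resembles a normal mixture and then scoring patients without labeled training data. A minimal illustrative sketch of that idea (not the published algorithm, which also denoises and combines several features) fits a two-component Gaussian mixture to a log-transformed count by EM and uses the posterior of the higher-mean component as the phenotype score:

```python
import numpy as np

def norm_pdf(x, mu, sigma):
    """Gaussian density; broadcasts over mixture components."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def phenorm_score(counts, n_iter=50):
    """Illustrative sketch: fit a 1-D two-component Gaussian mixture to
    log(1 + count) by EM and return the posterior probability of the
    higher-mean ("disease") component for each patient."""
    x = np.log1p(np.asarray(counts, dtype=float))
    mu = np.array([x.min(), x.max()])        # initialize components at extremes
    sigma = np.array([x.std() + 1e-6] * 2)
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        dens = pi * norm_pdf(x[:, None], mu, sigma)   # (n, 2) joint densities
        r = dens / dens.sum(axis=1, keepdims=True)    # E-step: responsibilities
        nk = r.sum(axis=0)                            # M-step: refit parameters
        pi = nk / nk.sum()
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    dens = pi * norm_pdf(x[:, None], mu, sigma)
    return dens[:, np.argmax(mu)] / dens.sum(axis=1)
```

Patients with few ICD-code mentions fall into the low-mean component and score near 0; patients with many mentions score near 1, with no gold-standard labels required.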
Translation in Data Mining to Advance Personalized Medicine for Health Equity.
Estape, Estela S; Mays, Mary Helen; Sternke, Elizabeth A
2016-01-01
Personalized medicine is the development of 'tailored' therapies that reflect traditional medical approaches, with the incorporation of the patient's unique genetic profile and the environmental basis of the disease. These individualized strategies encompass disease prevention and diagnosis, as well as treatment strategies. Today's healthcare workforce is faced with the availability of massive amounts of patient- and disease-related data. When mined effectively, these data will help produce more efficient and effective diagnoses and treatment, leading to better prognoses for patients at both the individual and population level. Designing preventive and therapeutic interventions for those patients who will benefit most, while minimizing side effects and controlling healthcare costs, requires bringing diverse data sources together in an analytic paradigm. As a resource to clinicians, the development and application of personalized medicine is largely facilitated, perhaps even driven, by the analysis of "big data". For example, the availability of clinical data warehouses is a significant resource for clinicians in practicing personalized medicine. These "big data" repositories can be queried by clinicians, using specific questions, with data used to gain an understanding of challenges in patient care and treatment. Health informaticians are critical partners in data analytics, including the use of technological infrastructures and predictive data mining strategies to access data from multiple sources, assisting clinicians' interpretation of data and the development of personalized, targeted therapy recommendations.
In this paper, we look at the concept of personalized medicine, offering perspectives in four important, influencing topics: 1) the availability of 'big data' and the role of biomedical informatics in personalized medicine, 2) the need for interdisciplinary teams in the development and evaluation of personalized therapeutic approaches, and 3) the impact of electronic medical record systems and clinical data warehouses on the field of personalized medicine. In closing, we present our fourth perspective, an overview to some of the ethical concerns related to personalized medicine and health equity.
Developments in clinical trials: a Pharma Matters report.
Arjona, A; Nuskey, B; Rabasseda, X; Arias, E
2014-08-01
As the pharmaceutical industry strives to meet the ever-increasing complexity of drug development, new technology in clinical trials has become a beacon of hope. With big data comes the promise of accelerated patient recruitment, real-time monitoring of clinical trials, bioinformatics empowerment of quicker phase progression, and the overwhelming benefits of precision medicine for select trials. Risk-based monitoring stands to benefit as well. With a strengthening focus on centralized data by the FDA and industry's transformative initiative, TransCelerate, a new era in trial risk mitigation has begun. The traditional method of intensive on-site monitoring is becoming a thing of the past as statistical, real-time analysis of site and trial-wide data provides the means to monitor with greater efficiency and effectiveness from afar. However, when it comes to big data, there are challenges that lie ahead. Patient privacy, commercial investment protection, technology woes and data variability are all limitations to be met with considerable thought. At the Annual Meeting of the American Academy of Dermatology this year, clinical trials on psoriasis, atopic dermatitis and other skin diseases were discussed in detail. This review of clinical research reports on novel therapies for psoriasis and atopic dermatitis reveals the impact of these diseases and the drug candidates that have been successful in phase II and III studies. Data-focused highlights of novel dermatological trials, as well as real-life big data approaches and an insight on the new methodology of risk-based monitoring, are all discussed in this edition of Developments in Clinical Trials. Copyright 2014 Prous Science, S.A.U. or its licensors. All rights reserved.
Memory binding and white matter integrity in familial Alzheimer’s disease
Parra, Mario A.; Saarimäki, Heini; Bastin, Mark E.; Londoño, Ana C.; Pettit, Lewis; Lopera, Francisco; Della Sala, Sergio; Abrahams, Sharon
2015-01-01
Binding information in short-term and long-term memory are functions sensitive to Alzheimer’s disease. They have been found to be affected in patients who meet criteria for familial Alzheimer’s disease due to the mutation E280A of the PSEN1 gene. However, only short-term memory binding has been found to be affected in asymptomatic carriers of this mutation. The neural correlates of this dissociation are poorly understood. The present study used diffusion tensor magnetic resonance imaging to investigate whether the integrity of white matter structures could offer an account. A sample of 19 patients with familial Alzheimer’s disease, 18 asymptomatic carriers and 21 non-carrier controls underwent diffusion tensor magnetic resonance imaging, neuropsychological and memory binding assessment. The short-term memory binding task required participants to detect changes across two consecutive screens displaying arrays of shapes, colours, or shape-colour bindings. The long-term memory binding task was a Paired Associates Learning Test. Performance on these tasks was entered into regression models. Relative to controls, patients with familial Alzheimer’s disease performed poorly on both memory binding tasks. Asymptomatic carriers differed from controls only in the short-term memory binding task. White matter integrity explained poor memory binding performance only in patients with familial Alzheimer’s disease. White matter water diffusion metrics from the frontal lobe accounted for poor performance on both memory binding tasks. Dissociations were found in the genu of corpus callosum which accounted for short-term memory binding impairments and in the hippocampal part of cingulum bundle which accounted for long-term memory binding deficits.
The results indicate that white matter structures in the frontal and temporal lobes are vulnerable to the early stages of familial Alzheimer’s disease and their damage is associated with impairments in two memory binding functions known to be markers for Alzheimer’s disease. PMID:25762465
Wildevuur, Sabine E; Simonse, Lianne W L
2015-03-27
Person-centered information and communication technology (ICT) could encourage patients to take an active part in their health care and decision-making process, and make it possible for patients to interact directly with health care providers and services about their personal health concerns. Yet, little is known about which ICT interventions dedicated to person-centered care (PCC) and connected-care interactions have been studied, especially for shared care management of chronic diseases. The aim of this research is to investigate the extent, range, and nature of these research activities and identify research gaps in the evidence base of health studies regarding the "big 5" chronic diseases: diabetes mellitus, cardiovascular disease, chronic respiratory disease, cancer, and stroke. The objective of this paper was to review the literature and to scope the field with respect to 2 questions: (1) which ICT interventions have been used to support patients and health care professionals in PCC management of the big 5 chronic diseases? and (2) what is the impact of these interventions, such as on health-related quality of life and cost efficiency? This research adopted a scoping review method. Three electronic medical databases were accessed: PubMed, EMBASE, and Cochrane Library. The research reviewed studies published between January 1989 and December 2013. In 5 stages of systematic scanning and reviewing, relevant studies were identified, selected, and charted. Then we collated, summarized, and reported the results. From the initial 9380 search results, we identified 350 studies that qualified for inclusion: diabetes mellitus (n=103), cardiovascular disease (n=89), chronic respiratory disease (n=73), cancer (n=67), and stroke (n=18). 
Persons with one of these chronic conditions used ICT primarily for self-measurement of the body, when interacting with health care providers, with the highest rates of use seen in chronic respiratory (63%, 46/73) and cardiovascular (53%, 47/89) diseases. We found 60 relevant studies (17.1%, 60/350) on person-centered shared management ICT, primarily using telemedicine systems as personalized ICT. The highest impact measured related to the increase in empowerment (15.4%, 54/350). Health-related quality of life accounted for 8%. The highest impact connected to health professionals was an increase in clinical outcome (11.7%, 41/350). The impacts on organization outcomes were decrease in hospitalization (12.3%, 43/350) and increase of cost efficiency (10.9%, 38/350). This scoping review outlined ICT-enabled PCC in chronic disease management. Persons with a chronic disease could benefit from an ICT-enabled PCC approach, but ICT-PCC also yields organizational paybacks. It could lead to an increase in health care usage, as reported in some studies. Few interventions could be regarded as "fully" addressing PCC. This review will be especially helpful to those deciding on areas where further development of research or implementation of ICT-enabled PCC may be warranted.
Evaluation of a novel Serious Game based assessment tool for patients with Alzheimer's disease.
Vallejo, Vanessa; Wyss, Patric; Rampa, Luca; Mitache, Andrei V; Müri, René M; Mosimann, Urs P; Nef, Tobias
2017-01-01
Despite growing interest in developing ecological assessments of the difficulties faced by patients with Alzheimer's disease, methods assessing the cognitive difficulties related to functional activities are lacking. To complement current evaluation, the use of Serious Games is a promising approach, as it offers the possibility of recreating a virtual environment with daily living activities and a precise and complete cognitive evaluation. The aim of the present study was to evaluate the usability and the screening potential of a new ecological tool for assessment of cognitive functions in patients with Alzheimer's disease. Eighteen patients with Alzheimer's disease and twenty healthy controls participated in the study. They were asked to complete six daily living virtual tasks assessing several cognitive functions, following a one-day scenario: three navigation tasks, one shopping task, one cooking task, and one table preparation task. Usability of the game was evaluated through a questionnaire and through analysis of the computer interactions for the two groups. Furthermore, performance in terms of time to achieve each task and percentage of task completion was recorded. Results indicate that both groups subjectively found the game user friendly and were objectively able to play the game without computer-interaction difficulties. Comparison of performance between the two groups indicated significant differences in the percentage of achievement of the several tasks and in the time needed to achieve them. This study suggests that this new Serious Game based assessment tool is a user-friendly and ecological method to evaluate the cognitive abilities underlying the difficulties patients can encounter in daily living activities, and that it can be used as a screening tool, as it allowed the performance of Alzheimer's patients to be distinguished from that of healthy controls.
Game-XP: Action Games as Experimental Paradigms for Cognitive Science.
Gray, Wayne D
2017-04-01
Why games? How could anyone consider action games an experimental paradigm for Cognitive Science? In 1973, as one of three strategies he proposed for advancing Cognitive Science, Allen Newell exhorted us to "accept a single complex task and do all of it." More specifically, he told us that rather than taking an "experimental psychology as usual approach," we should "focus on a series of experimental and theoretical studies around a single complex task" so as to demonstrate that our theories of human cognition were powerful enough to explain "a genuine slab of human behavior" with the studies fitting into a detailed theoretical picture. Action games represent the type of experimental paradigm that Newell was advocating and the current state of programming expertise and laboratory equipment, along with the emergence of Big Data and naturally occurring datasets, provide the technologies and data needed to realize his vision. Action games enable us to escape from our field's regrettable focus on novice performance to develop theories that account for the full range of expertise through a twin focus on expertise sampling (across individuals) and longitudinal studies (within individuals) of simple and complex tasks. Copyright © 2017 Cognitive Science Society, Inc.
Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining.
Hero, Alfred O; Rajaratnam, Bala
2016-01-01
When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for "Big Data". Sample complexity, however, has received relatively less attention, especially in the setting when the sample size n is fixed and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high-dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche, but only the latter regime applies to exascale data dimension. We illustrate this high-dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high-dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
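The sample-starved regime described in this abstract can be illustrated with a minimal sketch: estimate a p x p correlation matrix from only n << p samples and screen for high-correlation pairs. The sizes, seed, and planted correlation below are illustrative choices, not values from the paper.

```python
import numpy as np

# Toy sketch of the variable-rich, sample-starved regime (n << p):
# 10 samples of 200 variables (hypothetical sizes).
rng = np.random.default_rng(0)
n, p = 10, 200

X = rng.standard_normal((n, p))
# Plant one strongly correlated pair: variable 1 is variable 0 plus small noise.
X[:, 1] = X[:, 0] + 0.05 * rng.standard_normal(n)

# p x p sample correlation matrix estimated from only n samples.
R = np.corrcoef(X, rowvar=False)

# "Correlation mining": screen for variable pairs above a correlation threshold.
# With n this small, spurious pairs can also pass the screen -- the
# sample-complexity issue the paper formalizes.
thresh = 0.9
pairs = zip(*np.triu_indices(p, k=1))
hits = [(i, j) for i, j in pairs if abs(R[i, j]) > thresh]
```

With thousands of candidate pairs but only ten samples, the screen can flag pairs of independent variables purely by chance, which is exactly why learning rates in p and n matter in this regime.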
ERIC Educational Resources Information Center
Council for Exceptional Children, Reston, VA.
How schools can effectively work with exceptional students who have communicable diseases was the focus of an eight-member Task Force appointed by the Council for Exceptional Children (CEC) Governmental Relations Committee. Its report begins with an overview of existing guidelines and defines specific communicable diseases (Hepatitis B,…
Dual-tasks and walking fast: relationship to extra-pyramidal signs in advanced Alzheimer disease.
Camicioli, Richard; Bouchard, Thomas; Licis, Lisa
2006-10-25
Extra-pyramidal signs (EPS) and cadence predicted falls risk in patients with advanced Alzheimer disease (AD). Dual-task performance predicts falls with variable success. Dual-task performance and walking fast were examined in advanced AD patients with EPS (EPS+, more than three modified Unified Parkinson's Disease Rating Scale [UPDRS] signs) or without EPS (EPS-, three or fewer UPDRS signs). Demographics, mental and functional status, behavioral impairment, EPS, and quantitative gait measures (GaitRite) were determined. The effects of an automatic dual task (simple counting) and of walking fast on spatial and temporal gait characteristics were compared between EPS+ and EPS- subjects using a repeated-measures design. Cadence decreased, while stride time, swing time, and variability in swing time increased with the dual task; these effects were no longer significant after adjusting for secondary-task performance. With walking fast, speed, cadence, and stride length increased while stride time, swing time, and double-support time decreased. Although EPS+ subjects were slower and had decreased stride length, dual-task and walking-fast effects did not differ from those in EPS- subjects. Patient characteristics, the type of secondary task, and the specific gait measures examined vary in the literature. In this moderately to severely demented population, EPS did not affect "unconscious" (dual-task) or "conscious" (walking fast) gait modulation. Given their high falls risk and retained ability to modulate walking, EPS+ AD patients may be ideal candidates for interventions aimed at preventing falls.
Standing balance in individuals with Parkinson's disease during single and dual-task conditions.
Fernandes, Ângela; Coelho, Tiago; Vitória, Ana; Ferreira, Augusto; Santos, Rubim; Rocha, Nuno; Fernandes, Lia; Tavares, João Manuel R S
2015-09-01
This study aimed to examine the differences in standing balance between individuals with Parkinson's disease (PD) and subjects without PD (control group) under single- and dual-task conditions. A cross-sectional study was designed using a non-probabilistic sample of 110 individuals (50 participants with PD and 60 controls) aged 50 years and over. The individuals with PD were in the early or middle stages of the disease (characterized by Hoehn and Yahr as stages 1-3). Standing balance was assessed by measuring the centre of pressure (CoP) displacement in single-task (eyes-open/eyes-closed) and dual-task (while performing two different verbal fluency tasks) conditions. No significant differences were found between the groups regarding sociodemographic variables. In general, the standing balance of the individuals with PD was worse than that of the controls, as the CoP displacement across tasks was significantly higher for the individuals with PD (p<0.01), in both anteroposterior and mediolateral directions. Moreover, there were significant differences in the CoP-displacement-based parameters between the conditions, mainly between the eyes-open condition and the remaining conditions. However, no significant interaction was found between group and condition, which suggests that changes in the CoP displacement between tasks were not influenced by having PD. In conclusion, this study shows that, although individuals with PD had worse overall standing balance than individuals without the disease, the impact of performing an additional task on the CoP displacement is similar for both groups. Copyright © 2015 Elsevier B.V. All rights reserved.
Natsopoulos, D; Katsarou, Z; Alevriadou, A; Grouios, G; Bostantzopoulou, S; Mentenopoulos, G
1997-09-01
In the present study, fifty-four subjects were tested; twenty-seven with idiopathic Parkinson's disease and twenty-seven normal controls matched in age, education, verbal ability, level of depression, sex and socio-economic status. The subjects were tested on eight tasks. Five of the tasks were the classic deductive reasoning syllogisms, modus ponens, modus tollendo tollens, affirming the consequent, denying the antecedent and three-term series problems phrased in a factual context (brief scripts). Three of the tasks were inductive reasoning, including logical inferences, metaphors and similes. All tasks were presented to subjects in a multiple choice format. The results, overall, have shown nonsignificant differences between the two groups in deductive and inductive reasoning, an ability traditionally associated with frontal lobes involvement. Of the comparisons performed between subgroups of the patients and normal controls concerning disease duration, disease onset and predominant involvement of the left and/or right hemisphere, significant differences were found between patients with earlier disease onset and normal controls and between bilaterally affected patients and normal controls, demonstrating an additive effect of lateralization to reasoning ability.
Gianfredi, Vincenza; Bragazzi, Nicola Luigi; Nucci, Daniele; Martini, Mariano; Rosselli, Roberto; Minelli, Liliana; Moretti, Massimo
2018-01-01
According to the World Health Organization (WHO), communicable tropical and sub-tropical diseases occur solely, or mainly, in the tropics, thriving in hot and humid conditions. Some of these disorders, termed neglected tropical diseases, are particularly overlooked. Communicable tropical/sub-tropical diseases represent a diverse group of communicable disorders occurring in 149 countries, favored by tropical and sub-tropical conditions, affecting more than one billion people and imposing a dramatic societal and economic burden. A systematic review of the extant scholarly literature was carried out, searching in PubMed/MEDLINE and Scopus. The search string used included proper keywords, like big data, nontraditional data sources, social media, social networks, infodemiology, infoveillance, novel data streams (NDS), digital epidemiology, digital behavior, Google Trends, Twitter, Facebook, YouTube, Instagram, Pinterest, Ebola, Zika, dengue, Chikungunya, Chagas, and the other neglected tropical diseases. 47 original, observational studies were included in the current systematic review: 1 focused on Chikungunya, 6 on dengue, 19 on Ebola, 2 on Malaria, 1 on Mayaro virus, 2 on West Nile virus, and 16 on Zika. Fifteen were dedicated to developing and validating forecasting techniques for real-time monitoring of neglected tropical diseases, while the remaining studies investigated public reaction to infectious outbreaks. Most studies explored a single nontraditional data source, with Twitter being the most exploited tool (25 studies). Even though some studies have shown the feasibility of utilizing NDS as an effective tool for predicting epidemic outbreaks and disseminating accurate, high-quality information concerning neglected tropical diseases, some gaps should be properly underlined. 
Out of the 47 articles included, only 7 focused on neglected tropical diseases, while all the others covered communicable tropical/sub-tropical diseases; the main determinant of this unbalanced coverage seems to be media impact and resonance. Furthermore, efforts should be made to integrate diverse NDS. Taking these limitations into account, further research in the field is needed.
EarthServer: a Summary of Achievements in Technology, Services, and Standards
NASA Astrophysics Data System (ADS)
Baumann, Peter
2015-04-01
Big Data in the Earth sciences, the Tera- to Exabyte archives, are mostly made up of coverage data, defined by ISO and OGC as the digital representation of some space-time varying phenomenon. Common examples include 1-D sensor timeseries, 2-D remote sensing imagery, 3-D x/y/t image timeseries and x/y/z geology data, and 4-D x/y/z/t atmosphere and ocean data. Analytics on such data requires on-demand processing of sometimes significant complexity, such as getting the Fourier transform of satellite images. As network bandwidth limits prohibit transfer of such Big Data, it is indispensable to devise protocols allowing clients to task flexible and fast processing on the server. The transatlantic EarthServer initiative, running from 2011 through 2014, united 11 partners to establish Big Earth Data Analytics. A key ingredient has been flexibility for users to ask whatever they want, not impeded and complicated by system internals. The EarthServer answer to this is to use high-level, standards-based query languages which unify data and metadata search in a simple, yet powerful way. A second key ingredient is scalability. Without any doubt, scalability ultimately can only be achieved through parallelization. In the past, parallelizing code has been done at compile time and usually with manual intervention. The EarthServer approach is to perform a semantics-based dynamic distribution of query fragments based on network optimization and further criteria. The EarthServer platform comprises rasdaman, the pioneering and leading Array DBMS built for any-size multi-dimensional raster data, being extended with support for irregular grids and general meshes; in-situ retrieval (evaluation of database queries on existing archive structures, avoiding data import and, hence, duplication); and the aforementioned distributed query processing. Additionally, Web clients for multi-dimensional data visualization are being established. 
Client/server interfaces are strictly based on OGC and W3C standards, in particular the Web Coverage Processing Service (WCPS), which defines a high-level coverage query language. Reviewers have attested that "With no doubt the project has been shaping the Big Earth Data landscape through the standardization activities within OGC, ISO and beyond". We present the project approach, its outcomes and impact on standardization and Big Data technology, and vistas for the future.
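To make the "high-level coverage query language" concrete, here is a minimal sketch of composing a WCPS query that slices a coverage at one point and encodes the resulting time series. The coverage name `AvgLandTemp` and the axis labels `Lat`/`Long`/`ansi` follow common conventions but are illustrative assumptions; actual names vary per server.

```python
def build_wcps_query(coverage, lat, lon, t0, t1, fmt="csv"):
    """Build a WCPS query string: slice the named coverage at a
    (lat, lon) point over a time interval and encode the result."""
    return (
        f"for c in ({coverage}) return encode("
        f'c[Lat({lat}), Long({lon}), ansi("{t0}":"{t1}")], "{fmt}")'
    )

query = build_wcps_query("AvgLandTemp", 53.08, 8.80, "2014-01", "2014-12")
```

Such a query would typically be submitted to a server endpoint over HTTP; the point of the language is that the server, not the client, performs the extraction and encoding.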
Self-monitoring of driving speed.
Etzioni, Shelly; Erev, Ido; Ishaq, Robert; Elias, Wafa; Shiftan, Yoram
2017-09-01
In-vehicle data recorders (IVDR) have been found to facilitate safe driving and are highly valuable in accident analysis. Nevertheless, it is not easy to convince drivers to use them. Part of the difficulty is related to the "Big Brother" concern: installing IVDR impairs the drivers' privacy. The "Big Brother" concern can be mitigated by adding a turn-off switch to the IVDR. However, this addition comes at the expense of increasing speed variability between drivers, which is known to impair safety. The current experimental study examines the significance of this negative effect of a turn-off switch under two experimental settings representing different incentive structures: small and large fines for speeding. 199 students were asked to participate in a computerized speeding dilemma task, where they could control the speed of their "car" using "brake" and "speed" buttons, corresponding to automatic car foot pedals. The participants in two experimental conditions had IVDR installed in their "cars", and were told that they could turn it off at any time. Driving with active IVDR implied some probability of "fines" for speeding, and the two experimental groups differed with respect to the fine's magnitude, small or large. The results indicate that the option to use IVDR reduced speeding and speed variance. In addition, the results indicate that the reduction of speed variability was maximal in the small fine group. These results suggest that using IVDR with gentle fines and with a turn-off option maintains the positive effect of IVDR, addresses the "Big Brother" concern, and does not increase speed variance. Copyright © 2017 Elsevier Ltd. All rights reserved.
Real-time Medical Emergency Response System: Exploiting IoT and Big Data for Public Health.
Rathore, M Mazhar; Ahmad, Awais; Paul, Anand; Wan, Jiafu; Zhang, Daqiang
2016-12-01
Healthy people are important for any nation's development. Use of Internet of Things (IoT)-based body area networks (BANs) is increasing for continuous monitoring and medical healthcare, in order to perform real-time actions in case of emergencies. However, when monitoring the health of all citizens or people in a country, the millions of sensors attached to human bodies generate a massive volume of heterogeneous data, called "Big Data." Processing Big Data and performing real-time actions in critical situations is a challenging task. Therefore, in order to address such issues, we propose a Real-time Medical Emergency Response System that involves IoT-based medical sensors deployed on the human body. Moreover, the proposed system consists of a data analysis building, called the "Intelligent Building," depicted by the proposed layered architecture and implementation model, which is responsible for analysis and decision-making. The data collected from millions of body-attached sensors is forwarded to the Intelligent Building for processing and for performing necessary actions using various units such as the collection, Hadoop Processing Unit (HPU), and analysis-and-decision units. The feasibility and efficiency of the proposed system are evaluated by implementing the system on Hadoop using an Ubuntu 14.04 LTS Core i5 machine. Various medical sensory datasets and real-time network traffic are considered for evaluating the efficiency of the system. The results show that the proposed system is capable of efficiently processing WBAN sensory data from millions of users in order to perform real-time responses in case of emergencies.
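The analysis-and-decision step such a system performs per sensor reading can be sketched as a simple range check. The vital-sign names and normal ranges below are illustrative assumptions, not the paper's actual thresholds or schema.

```python
# Hypothetical normal ranges per vital sign (illustrative values only).
NORMAL_RANGES = {"heart_rate": (40, 140), "spo2": (90, 100)}

def triage(reading):
    """Return the list of vital signs in this reading that fall
    outside their normal range; an empty list means no alert."""
    alerts = []
    for vital, value in reading.items():
        lo, hi = NORMAL_RANGES.get(vital, (float("-inf"), float("inf")))
        if not (lo <= value <= hi):
            alerts.append(vital)
    return alerts
```

In a deployment the interesting engineering is scaling this check to millions of streams (the paper's Hadoop processing unit); the per-reading logic itself stays this simple.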
Speech fluency profile on different tasks for individuals with Parkinson's disease.
Juste, Fabiola Staróbole; Andrade, Claudia Regina Furquim de
2017-07-20
To characterize the speech fluency profile of patients with Parkinson's disease. Study participants were 40 individuals of both genders aged 40 to 80 years, divided into 2 groups: Research Group - RG (20 individuals with a diagnosis of Parkinson's disease) and Control Group - CG (20 individuals with no communication or neurological disorders). For all participants, three speech samples involving different tasks were collected: monologue, individual reading, and automatic speech. The RG presented a significantly larger number of speech disruptions, both stuttering-like and typical dysfluencies, and a higher percentage of speech discontinuity in the monologue and individual reading tasks compared with the CG. Both groups presented a reduced number of speech disruptions (stuttering-like and typical dysfluencies) in the automatic speech task, in which the groups performed similarly. Regarding speech rate, individuals in the RG produced fewer words and syllables per minute than those in the CG in all speech tasks. Participants in the RG presented altered parameters of speech fluency compared with those of the CG; however, this change in fluency cannot be considered a stuttering disorder.
NASA Astrophysics Data System (ADS)
Laban, Shaban; El-Desouky, Aly
2014-05-01
To achieve rapid, simple, and reliable parallel processing of different types of tasks and big data on any compute cluster, a lightweight messaging-based framework model for distributed application processing and workflow execution is proposed. The framework is based on Apache ActiveMQ and the Simple (or Streaming) Text Oriented Message Protocol (STOMP). ActiveMQ, a popular and powerful open-source persistent messaging and integration-patterns server with scheduler capabilities, acts as the message broker in the framework. STOMP provides an interoperable wire format that allows framework programs to talk and interact with each other and with ActiveMQ easily. In order to use the message broker efficiently, a unified message and topic naming pattern is utilized. Only three Python programs and a simple library, used to unify and simplify the use of ActiveMQ and the STOMP protocol, are needed to use the framework. A watchdog program monitors, removes, adds, starts, and stops any machine and/or its different tasks when necessary. Each machine runs exactly one dedicated zookeeper program, which starts the different functions or tasks (stompShell programs) needed to execute the user's required workflow. The stompShell instances execute workflow jobs based on received messages. A well-defined, simple, and flexible message structure, based on JavaScript Object Notation (JSON), is used to build complex workflow systems. JSON is also used for configuration and for communication between machines and programs. The framework is platform independent. Although the framework is built in Python, the actual workflow programs or jobs can be implemented in any programming language. 
The generic framework can be used in small national data centres for processing seismological and radionuclide data received from the International Data Centre (IDC) of the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). It may also be extended to monitor the IDC pipeline. The detailed design, implementation, conclusions, and future work of the proposed framework will be presented.
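The JSON task messages this abstract describes can be sketched as a serialize/dispatch pair. The field names below are hypothetical illustrations of such a schema, not the framework's actual message format, and the runner callable stands in for whatever a stompShell-like worker would execute.

```python
import json

def make_task_message(workflow_id, task_name, command, depends_on=()):
    """Serialize one workflow step as a JSON message body, ready to be
    published to a broker topic."""
    return json.dumps({
        "workflow": workflow_id,      # which workflow this step belongs to
        "task": task_name,            # step identifier within the workflow
        "command": command,           # what the worker should run
        "depends_on": list(depends_on),  # upstream steps (illustrative field)
    })

def handle_message(body, runner):
    """What a worker might do on receipt: decode the JSON body and hand
    the command to a runner callable, returning its result."""
    msg = json.loads(body)
    return runner(msg["command"])

body = make_task_message("wf-1", "convert", "echo done", depends_on=["fetch"])
result = handle_message(body, lambda cmd: cmd.upper())
```

In the real framework the body would travel over a STOMP SEND/MESSAGE frame via ActiveMQ rather than being handed directly to the handler; the JSON round-trip is the part that makes the message format language independent.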
The Ophidia Stack: Toward Large Scale, Big Data Analytics Experiments for Climate Change
NASA Astrophysics Data System (ADS)
Fiore, S.; Williams, D. N.; D'Anca, A.; Nassisi, P.; Aloisio, G.
2015-12-01
The Ophidia project is a research effort on big data analytics facing scientific data analysis challenges in multiple domains (e.g. climate change). It provides a "datacube-oriented" framework responsible for atomically processing and manipulating scientific datasets, by providing a common way to run distributed tasks on large sets of data fragments (chunks). Ophidia provides declarative, server-side, and parallel data analysis, jointly with an internal storage model able to efficiently deal with multidimensional data and a hierarchical data organization to manage large data volumes. The project relies on a strong background in high performance database management and On-Line Analytical Processing (OLAP) systems to manage large scientific datasets. The Ophidia analytics platform provides several data operators to manipulate datacubes (about 50), and array-based primitives (more than 100) to perform data analysis on large scientific data arrays. To address interoperability, Ophidia provides multiple server interfaces (e.g. OGC-WPS). From a client standpoint, a Python interface enables the exploitation of the framework in Python-based ecosystems/applications (e.g. IPython) and the straightforward adoption of a strong set of related libraries (e.g. SciPy, NumPy). The talk will highlight a key feature of the Ophidia framework stack: the "Analytics Workflow Management System" (AWfMS). The Ophidia AWfMS coordinates, orchestrates, optimises and monitors the execution of multiple scientific data analytics and visualization tasks, thus supporting "complex analytics experiments". Some real use cases related to the CMIP5 experiment will be discussed. In particular, with regard to the "Climate models intercomparison data analysis" case study proposed in the EU H2020 INDIGO-DataCloud project, workflows related to (i) anomalies, (ii) trend, and (iii) climate change signal analysis will be presented. 
Such workflows will be distributed across multiple sites - according to the datasets distribution - and will include intercomparison, ensemble, and outlier analysis. The two-level workflow solution envisioned in INDIGO (coarse grain for distributed tasks orchestration, and fine grain, at the level of a single data analytics cluster instance) will be presented and discussed.
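The datacube-oriented pattern described above — array primitives applied per data fragment and then combined — can be illustrated with a small NumPy sketch (a toy illustration only, not Ophidia's actual storage model or API):

```python
import numpy as np

# A toy "datacube": time x lat x lon, split into fragments along time.
cube = np.arange(24.0).reshape(4, 2, 3)        # 4 time steps over a 2x3 grid
fragments = np.array_split(cube, 2, axis=0)    # two chunks of 2 time steps each

# An array-based primitive (here a sum over time) applied per fragment,
# then combined -- the chunked-processing idea the Ophidia stack describes.
partial_sums = [frag.sum(axis=0) for frag in fragments]
total = np.add.reduce(partial_sums)

# Equivalent to reducing the whole cube at once.
expected = cube.sum(axis=0)
```

In a real deployment the fragments would live in the server-side storage layer and the primitive would run in parallel across them; the combine step is the same in spirit.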
Injuries and illnesses of big game hunters in western Colorado: a 9-year analysis.
Reishus, Allan D
2007-01-01
The purpose of this study was to characterize big game hunter visits to a rural hospital's emergency department (ED). Using data collected on fatalities, injuries, and illnesses over a 9-year period, trends were noted and comparisons made to ED visits of alpine skiers, swimmers, and bicyclists. Out-of-hospital hunter fatalities reported by the county coroner's office were also reviewed. Cautionary advice is offered for potential big game hunters and their health care providers. Self-identified hunters were noted in the ED log of a rural Colorado hospital from 1997 to 2005, and injury or illness and outcome were recorded. Additional out-of-hospital mortality data were obtained from the county coroner's office. The estimated total number of big game hunters in the hospital's service area and their average days of hunting were reported by the Colorado Division of Wildlife. The frequencies of hunters' illnesses, injuries, and deaths were calculated. A total of 725 ED visits--an average of 80 per year--were recorded. Nearly all visits were in the prime hunting months of September to November. Twenty-seven percent of the hunter ED patients were Colorado residents, and 73% were from out of state. Forty-five percent of the visits were for trauma, 31% for medical illnesses, and 24% were labeled "other." The most common medical visits (105) were for cardiac signs and symptoms, and all of the ED deaths (4) were attributed to cardiac causes. The most common trauma diagnosis was laceration (151), the majority (113) of which came from accidental knife injuries, usually while the hunter was field dressing big game animals. Gunshot wounds (4, < 1%) were rare. Horse-related injuries to hunters declined while motor vehicle- and all-terrain vehicle (ATV)-related injuries increased. The five out-of-hospital deaths were cardiac related (3), motor vehicle related (1), and firearm related (1). Fatal outcomes in big game hunters most commonly resulted from cardiac diseases. 
Gunshot injuries and mortalities were very low in this population. Knife injuries were common. Hunters and their health care providers should consider a thorough cardiac evaluation prior to big game hunts. Hunter safety instructors should consider teaching aspects of safe knife use. Consideration should be given to requiring and improving ATV driver education.
Flexible Description and Adaptive Processing of Earth Observation Data through the BigEarth Platform
NASA Astrophysics Data System (ADS)
Gorgan, Dorian; Bacu, Victor; Stefanut, Teodor; Nandra, Cosmin; Mihon, Danut
2016-04-01
Earth Observation data repositories, growing periodically by several terabytes, have become a critical issue for organizations. Managing the storage capacity of such big datasets, access policies, data protection, searching, and complex processing entails high costs, which calls for efficient solutions that balance the cost and value of the data. Data creates value only when it is used, and data protection has to be oriented toward allowing innovation, which sometimes depends on creative people who achieve unexpected and valuable results in a flexible and adaptive manner. Users need to describe and experiment with different complex algorithms themselves, through analytics, in order to valorize the data. Analytics uses descriptive and predictive models to gain valuable knowledge and information from data analysis. Possible solutions for advanced processing of big Earth Observation data are offered by HPC platforms such as the cloud. As platforms become more complex and heterogeneous, developing applications becomes even harder, and efficiently mapping these applications to a suitable and optimal platform, working on huge distributed data repositories, is challenging and complex as well, even with specialized software services. From the user's point of view, an optimal environment gives acceptable execution times, offers a high level of usability by hiding the complexity of the computing infrastructure, and supports open accessibility and control over application entities and functionality. The BigEarth platform [1] supports the entire flow, from flexible description of processing with basic operators to adaptive execution over a cloud infrastructure [2]. The basic modules of the pipeline, such as the KEOPS [3] set of basic operators, the WorDeL language [4], the Planner for sequential and parallel processing, and the Executor based on virtual machines, are detailed as the main components of the BigEarth platform [5]. 
The presentation exemplifies the development of some Earth Observation oriented applications based on flexible description of processing, and adaptive and portable execution over Cloud infrastructure. Main references for further information: [1] BigEarth project, http://cgis.utcluj.ro/projects/bigearth [2] Gorgan, D., "Flexible and Adaptive Processing of Earth Observation Data over High Performance Computation Architectures", International Conference and Exhibition Satellite 2015, August 17-19, Houston, Texas, USA. [3] Mihon, D., Bacu, V., Colceriu, V., Gorgan, D., "Modeling of Earth Observation Use Cases through the KEOPS System", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp. 455-460, (2015). [4] Nandra, C., Gorgan, D., "Workflow Description Language for Defining Big Earth Data Processing Tasks", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp. 461-468, (2015). [5] Bacu, V., Stefan, T., Gorgan, D., "Adaptive Processing of Earth Observation Data on Cloud Infrastructures Based on Workflow Description", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp.444-454, (2015).
First Born amplitude for transitions from a circular state to a state of large (l, m)
NASA Astrophysics Data System (ADS)
Dewangan, D. P.
2005-01-01
The use of cylindrical polar coordinates instead of the conventional spherical polar coordinates enables us to derive compact expressions of the first Born amplitude for some selected sets of transitions from an arbitrary initial circular state |ψ_{n_i, n_i−1, n_i−1}⟩ to a final state |ψ_{n_f, l_f, m_f}⟩ of large (l_f, m_f). The formulae for the |ψ_{n_i, n_i−1, n_i−1}⟩ → |ψ_{n_f, n_f−1, n_f−2}⟩ and |ψ_{n_i, n_i−1, n_i−1}⟩ → |ψ_{n_f, n_f−1, n_f−3}⟩ transitions are expressed in terms of the Jacobi polynomials, which serve as suitable starting points for constructing complete solutions over the bound energy levels of hydrogen-like atoms. The formulae for the |ψ_{n_i, n_i−1, n_i−1}⟩ → |ψ_{n_f, n_f−1, −(n_f−2)}⟩ and |ψ_{n_i, n_i−1, n_i−1}⟩ → |ψ_{n_f, n_f−1, −(n_f−3)}⟩ transitions are in simple algebraic form and are directly applicable to all possible values of n_i and n_f. It emerges that the method can be extended to evaluate the first Born amplitude for many other transitions involving states of large (l, m).
Barbieri, Fabio A; Polastri, Paula F; Baptista, André M; Lirani-Silva, Ellen; Simieli, Lucas; Orcioli-Silva, Diego; Beretta, Victor S; Gobbi, Lilian T B
2016-04-01
The aim of this study was to investigate the effects of disease severity and medication state on postural control asymmetry during challenging tasks in individuals with Parkinson's disease (PD). Nineteen people with PD and 11 neurologically healthy individuals performed three standing task conditions: bipedal standing, tandem standing, and unipedal adapted standing; the individuals with PD performed the tasks in the ON and OFF medication states. The participants with PD were distributed into two groups according to disease severity: a unilateral group (n=8) and a bilateral group (n=11). The two PD groups performed the evaluations both on and off medication. Two force plates were used to analyze posture. The symmetry index was calculated for various center-of-pressure parameters. One-way (group) and two-way (PD group × medication) ANOVAs, with repeated measures for medication, were calculated. For main effects of group, the bilateral group was more asymmetric than the control group. For main effects of medication, only unipedal adapted standing showed effects of PD medication. There was a PD group × medication interaction. Under the effects of medication, the unilateral group presented lower asymmetry of RMS in the anterior-posterior direction and of area than the bilateral group in unipedal adapted standing. In addition, the unilateral group presented lower asymmetry of mean velocity, RMS in the anterior-posterior direction, and area in unipedal standing, and of area in tandem adapted standing, after a medication dose. Postural control asymmetry during challenging postural tasks was dependent on disease severity and medication state in people with PD. The bilateral group presented higher postural control asymmetry than the control and unilateral groups in challenging postural tasks. Finally, the medication dose was able to reduce postural control asymmetry in the unilateral group during challenging postural tasks. Copyright © 2015 Elsevier B.V. All rights reserved.
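The study does not spell out its symmetry-index formula in the abstract; one common formulation for two-force-plate data, shown here purely for illustration, expresses the between-limb difference as a percentage of the limb mean:

```python
def symmetry_index(left, right):
    """One common symmetry-index formulation: absolute difference between
    limbs relative to their mean, as a percentage. This is an illustrative
    formula, not necessarily the exact index used in the study."""
    mean = (left + right) / 2.0
    if mean == 0:
        return 0.0
    return abs(right - left) / mean * 100.0

# Example: RMS of center-of-pressure displacement per limb (arbitrary units).
si = symmetry_index(left=4.0, right=5.0)
```

A value of 0 indicates perfect symmetry; larger values indicate greater between-limb asymmetry.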
High degree-of-freedom dynamic manipulation
NASA Astrophysics Data System (ADS)
Murphy, Michael P.; Stephens, Benjamin; Abe, Yeuhi; Rizzi, Alfred A.
2012-06-01
The creation of high-degree-of-freedom dynamic mobile manipulation techniques and behaviors will allow robots to accomplish difficult tasks in the field. We are investigating the use of the body and legs of legged robots to improve the strength, velocity, and workspace of an integrated manipulator to accomplish dynamic manipulation. This is an especially challenging task, as all of the degrees of freedom are active at all times, the dynamic forces generated are high, and the legged system must maintain robust balance throughout the duration of the tasks. To accomplish this goal, we are utilizing trajectory optimization techniques to generate feasible open-loop behaviors for our 28-DOF quadruped robot (BigDog) by planning the trajectories in a 13-dimensional space. Covariance Matrix Adaptation techniques are utilized to optimize for several criteria, such as payload capability and task completion speed, while also obeying constraints such as torque and velocity limits, kinematic limits, and center of pressure location. These open-loop behaviors are then used to generate feed-forward terms, which are subsequently used online to improve tracking and maintain low controller gains. Some initial results on one of our existing balancing quadruped robots, with an additional human-arm-like manipulator, are demonstrated on robot hardware, including dynamic lifting and throwing of heavy objects (16.5 kg cinder blocks) using motions that resemble a human athlete more than typical robotic motions. Increased payload capacity is accomplished through coordinated body motion.
Paving the Road to Health Together: Case Studies of Interagency Collaboration
ERIC Educational Resources Information Center
Posner, Marc
2004-01-01
State health agencies are asked to do a big job. With the advent of an increased focus on bioterrorism preparedness, and the emergence of diseases such as West Nile Virus and SARS, the job is becoming bigger. Yet health agencies still have responsibilities for more traditional concerns, since these problems remain threats to the public. State…
[Weber-Christian syndrome and chronic pancreatitis (author's transl)].
Laval-Jeantet, M; Puissant, A; Gombergh, R; Blanchet-Bardon, C; Delmas, P F; Sohier, J
1982-04-22
The authors describe a case of cytosteatonecrosis associated with chronic pancreatitis. Clinical, biological (amylasemia and amylasuria), and radiological (arteriography) signs of pancreatic disease were noted, as well as significant radiological lesions of the limbs. The cutaneous lesions were hypodermic nodules located on the lower limbs, with swelling of the right big toe and a cutaneous abrasion.
USDA-ARS?s Scientific Manuscript database
Introduction Surveillance for influenza A viruses (IAV) circulating in pigs and other non-human mammals has been chronically underfunded and virtually nonexistent in many areas of the world. This deficit continues in spite of our knowledge that influenza is a disease shared between man and pig fro...
Lessons from 15 years of monitoring sudden oak death and forest dynamics in California forests
Margaret Metz; J. Morgan Varner; Ross Meentemeyer; Kerri Frangioso; David Rizzo
2017-01-01
Monitoring host composition and disease impacts began 15 years ago in what would become a network of permanent forest monitoring plots throughout the known and predicted range of Phytophthora ramorum in California coastal forests. Stretching ~500 miles from Big Sur to the Oregon border, the network captures variation in interactions among...
2014-01-01
Background Multiple tasking is an integral part of daily mobility. Patients with Parkinson’s disease have dual tasking difficulties due to their combined motor and cognitive deficits. Two contrasting physiotherapy interventions have been proposed to alleviate dual tasking difficulties: either to discourage simultaneous execution of dual tasks (consecutive training); or to practice their concurrent use (integrated training). It is currently unclear which of these training methods should be adopted to achieve safe and consolidated dual task performance in daily life. Therefore, the proposed randomized controlled trial will compare the effects of integrated versus consecutive training of dual tasking (tested by combining walking with cognitive exercises). Methods and design One hundred and twenty patients with Parkinson’s disease will be recruited to participate in this multi-centered, single blind, randomized controlled trial. Patients in Hoehn & Yahr stage II-III, with or without freezing of gait, and who report dual task difficulties will be included. All patients will undergo a six-week control period without intervention, after which they will be randomized to integrated or consecutive task practice. Training will consist of standardized walking and cognitive exercises delivered at home four times a week during six weeks. Treatment is guided by a physiotherapist twice a week and consists of two sessions of self-practice using an MP3 player. Blinded testers will assess patients before and after the control period, after the intervention period and after a 12-week follow-up period. The primary outcome measure is dual task gait velocity, i.e. walking combined with a novel untrained cognitive task to evaluate the consolidation of learning. Secondary outcomes include several single and dual task gait and cognitive measures, functional outcomes and a quality of life scale. 
Falling will be recorded as a possible adverse event using a weekly phone call for the entire study period. Discussion This randomized study will evaluate the effectiveness and safety of integrated versus consecutive task training in patients with Parkinson’s disease. The study will also highlight whether dual task gait training leads to robust motor learning effects, and whether these can be retained and carried-over to untrained dual tasks and functional mobility. Trial registration Clinicaltrials.gov NCT01375413. PMID:24674594
Akperova, G A
2014-11-01
The purpose of this study was to evaluate the efficiency of the RDBH method and Big Dye™ Terminator technology in the accurate diagnosis of β-thalassemia and the allelic polymorphism of the β-globin cluster. A complete hematology analysis (Hb, MCH, MCV, MCHC, RBC, Hct, HbA2, HbF, serum iron, serum ferritin) was performed on four children (males, 6-10 years old) and their parents. Molecular analysis included the Reverse Dot-Blot Hybridization StripAssay (RDBH) and DNA sequencing on an ABI PRISM Big Dye™ Terminator. Hematologic and molecular parameters were contradictory. RDBH analysis established homozygosity for β0-thalassemia (β0 IVS2.1 [G>A] and β0 codon 8 [-AA]) in three boys with mild clinical manifestations, heterozygosity of their parents for these mutations, and the absence of β-globin mutations in the parents and in a boy who receives monthly transfusions. DNA sequencing with Big Dye™ Terminator technology showed polymorphism at positions -551 and -521 of the Cap 5'-region (-650 to -250): (AT)7(T)7 and (AT)8(T)5. An integrated clinical-molecular approach is an ideal method for accurate diagnosis, identification of asymptomatic carriers, and reduction of the risk of complications from β-thalassemia; moreover, screening of the γG-gene and the level of fetal hemoglobin in early childhood will help manage the clinical course of β-thalassemia and prevent severe consequences of the disease.
Anand, Vibha; Rosenman, Marc B; Downs, Stephen M
2013-09-01
To develop a map of disease associations exclusively using two publicly available genetic sources: the catalog of single nucleotide polymorphisms (SNPs) from the HapMap, and the catalog of Genome Wide Association Studies (GWAS) from the NHGRI, and to evaluate it with a large, long-standing electronic medical record (EMR). A computational model, In Silico Bayesian Integration of GWAS (IsBIG), was developed to learn associations among diseases using a Bayesian network (BN) framework, using only genetic data. The IsBIG model (I-Model) was re-trained using data from our EMR (M-Model). Separately, another clinical model (C-Model) was learned from this training dataset. The I-Model was compared with both the M-Model and the C-Model for power to discriminate a disease given other diseases using a test dataset from our EMR. The area under the receiver operating characteristic curve was used as a performance measure. Direct associations between diseases in the I-Model were also searched in the PubMed database and in classes of the Human Disease Network (HDN). On the basis of genetic information alone, the I-Model linked a third of diseases from our EMR. When compared to the M-Model, the I-Model predicted diseases given other diseases with 94% specificity, 33% sensitivity, and 80% positive predictive value. The I-Model contained 117 direct associations between diseases. Of those associations, 20 (17%) were absent from the searches of the PubMed database; one of these was present in the C-Model. Of the direct associations in the I-Model, 7 (35%) were absent from disease classes of HDN. Using only publicly available genetic sources we have mapped associations in GWAS to a human disease map using an in silico approach. Furthermore, we have validated this disease map using phenotypic data from our EMR. Models predicting disease associations on the basis of known genetic associations alone are specific but not sensitive. 
Genetic data, as it currently exists, can only explain a fraction of the risk of a disease. Our approach makes a quantitative statement about disease variation that can be explained in an EMR on the basis of genetic associations described in the GWAS. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
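The IsBIG model itself is not specified in the abstract, but the underlying idea of scoring one disease given another from co-occurrence data can be sketched with a toy conditional-probability estimate (the records below are invented for illustration; a real Bayesian network would learn a full joint structure):

```python
# Toy patient records: each set lists the diseases coded for one patient.
patients = [
    {"periodontitis", "diabetes"},
    {"diabetes"},
    {"periodontitis", "diabetes", "anemia"},
    {"anemia"},
]

def conditional_probability(target, given, records):
    """Estimate P(target | given) from disease co-occurrence counts."""
    given_count = sum(1 for r in records if given in r)
    both_count = sum(1 for r in records if given in r and target in r)
    return both_count / given_count if given_count else 0.0

p = conditional_probability("diabetes", "periodontitis", patients)
```

Such pairwise conditionals are the building blocks; a Bayesian network framework like the one the paper describes combines them with a learned dependency structure rather than treating each pair in isolation.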
Using Big Data to Evaluate the Association between Periodontal Disease and Rheumatoid Arthritis.
Grasso, Michael A; Comer, Angela C; DiRenzo, Dana D; Yesha, Yelena; Rishe, Naphtali D
2015-01-01
An association between periodontal disease and rheumatoid arthritis is believed to exist. Most investigations into a possible relationship have been case-control studies with relatively low sample sizes. The advent of very large clinical repositories has created new opportunities for data-driven research. We conducted a retrospective cohort study to measure the association between periodontal disease and rheumatoid arthritis in a population of 25 million patients. We demonstrated that subjects with periodontal disease were roughly 1.4 times more likely to have rheumatoid arthritis. These results compare favorably with those of previous studies on smaller cohorts. Additional work is needed to identify the mechanisms behind this association and to determine if aggressive treatment of periodontal disease can alter the course of rheumatoid arthritis.
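The "roughly 1.4 times more likely" figure is a risk ratio; for a retrospective cohort it comes from a 2x2 table, as in this sketch (the counts are invented, chosen only so the ratio comes out at 1.4):

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk ratio for a cohort study: incidence in the exposed group
    divided by incidence in the unexposed group."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# Hypothetical counts: rheumatoid arthritis among patients with vs.
# without periodontal disease.
rr = relative_risk(exposed_cases=140, exposed_total=10000,
                   unexposed_cases=100, unexposed_total=10000)  # -> 1.4
```

A ratio above 1 indicates higher risk in the exposed group; confidence intervals (not shown) are needed before drawing conclusions from real cohort data.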
Managing rheumatic and musculoskeletal diseases - past, present and future.
Burmester, Gerd R; Bijlsma, Johannes W J; Cutolo, Maurizio; McInnes, Iain B
2017-07-01
Progress in rheumatology has been remarkable in the past 70 years, favourably affecting quality of life for people with rheumatic and musculoskeletal diseases. Therapeutics have advanced considerably in this period, from early developments such as the introduction of glucocorticoid therapy to the general use of methotrexate and other disease-modifying agents, followed by the advent of biologic DMARDs and, most recently, small-molecule signalling inhibitors. Novel strategies for the use of such agents have also transformed outcomes, as have multidisciplinary nonpharmacological approaches to the management of rheumatic musculoskeletal disease including surgery, physical therapy and occupational therapy. Breakthroughs in our understanding of disease pathogenesis, diagnostics and the use of 'big data' continue to drive the field forward. Critically, the patient is now at the centre of management strategies as well as the future research agenda.
Kevin M. Potter; Jeanne L. Paschke; Mark O. Zweifler
2018-01-01
Insects and diseases cause changes in forest structure and function, species succession, and biodiversity, which may be considered negative or positive depending on management objectives (Edmonds and others 2011). An important task for forest managers, pathologists, and entomologists is recognizing and distinguishing between natural and excessive mortality, a task
Cognitive Factors Affecting Free Recall, Cued Recall, and Recognition Tasks in Alzheimer's Disease
Yamagishi, Takashi; Sato, Takuya; Sato, Atsushi; Imamura, Toru
2012-01-01
Background/Aims Our aim was to identify cognitive factors affecting free recall, cued recall, and recognition tasks in patients with Alzheimer's disease (AD). Subjects: We recruited 349 consecutive AD patients who attended a memory clinic. Methods Each patient was assessed using the Alzheimer's Disease Assessment Scale (ADAS) and the extended 3-word recall test. In this task, each patient was asked to freely recall 3 previously presented words. If patients could not recall 1 or more of the target words, the examiner cued their recall by providing the category of the target word and then provided a forced-choice recognition of the target word with 2 distracters. The patients were divided into groups according to the results of the free recall, cued recall, and recognition tasks. Multivariate logistic regression analysis for repeated measures was carried out to evaluate the net effects of cognitive factors on the free recall, cued recall, and recognition tasks after controlling for the effects of age and recent memory deficit. Results Performance on the ADAS Orientation task was found to be related to performance on the free and cued recall tasks, performance on the ADAS Following Commands task was found to be related to performance on the cued recall task, and performance on the ADAS Ideational Praxis task was found to be related to performance on the free recall, cued recall, and recognition tasks. Conclusion The extended 3-word recall test reflects deficits in a wider range of memory and other cognitive processes, including memory retention after interference, divided attention, and executive functions, compared with word-list recall tasks. The characteristics of the extended 3-word recall test may be advantageous for evaluating patients’ memory impairments in daily living. PMID:22962551
Cognitive factors affecting free recall, cued recall, and recognition tasks in Alzheimer's disease.
Yamagishi, Takashi; Sato, Takuya; Sato, Atsushi; Imamura, Toru
2012-01-01
Our aim was to identify cognitive factors affecting free recall, cued recall, and recognition tasks in patients with Alzheimer's disease (AD). We recruited 349 consecutive AD patients who attended a memory clinic. Each patient was assessed using the Alzheimer's Disease Assessment Scale (ADAS) and the extended 3-word recall test. In this task, each patient was asked to freely recall 3 previously presented words. If patients could not recall 1 or more of the target words, the examiner cued their recall by providing the category of the target word and then provided a forced-choice recognition of the target word with 2 distracters. The patients were divided into groups according to the results of the free recall, cued recall, and recognition tasks. Multivariate logistic regression analysis for repeated measures was carried out to evaluate the net effects of cognitive factors on the free recall, cued recall, and recognition tasks after controlling for the effects of age and recent memory deficit. Performance on the ADAS Orientation task was found to be related to performance on the free and cued recall tasks, performance on the ADAS Following Commands task was found to be related to performance on the cued recall task, and performance on the ADAS Ideational Praxis task was found to be related to performance on the free recall, cued recall, and recognition tasks. The extended 3-word recall test reflects deficits in a wider range of memory and other cognitive processes, including memory retention after interference, divided attention, and executive functions, compared with word-list recall tasks. The characteristics of the extended 3-word recall test may be advantageous for evaluating patients' memory impairments in daily living.
Giobbie-Hurder, Anita; Price, Karen N; Gelber, Richard D
2009-06-01
Aromatase inhibitors provide superior disease control when compared with tamoxifen as adjuvant therapy for postmenopausal women with endocrine-responsive early breast cancer. To present the design, history, and analytic challenges of the Breast International Group (BIG) 1-98 trial: an international, multicenter, randomized, double-blind, phase-III study comparing the aromatase inhibitor letrozole with tamoxifen in this clinical setting. From 1998-2003, BIG 1-98 enrolled 8028 women to receive monotherapy with either tamoxifen or letrozole for 5 years, or sequential therapy of 2 years of one agent followed by 3 years of the other. Randomization to one of four treatment groups permitted two complementary analyses to be conducted several years apart. The first, reported in 2005, provided a head-to-head comparison of letrozole versus tamoxifen. Statistical power was increased by an enriched design, which included patients who were assigned sequential treatments until the time of the treatment switch. The second, reported in late 2008, used a conditional landmark approach to test the hypothesis that switching endocrine agents at approximately 2 years from randomization for patients who are disease-free is superior to continuing with the original agent. The 2005 analysis showed the superiority of letrozole compared with tamoxifen. The patients who were assigned tamoxifen alone were unblinded and offered the opportunity to switch to letrozole. Results from other trials increased the clinical relevance about whether or not to start treatment with letrozole or tamoxifen, and analysis plans were expanded to evaluate sequential versus single-agent strategies from randomization. Due to the unblinding of patients assigned tamoxifen alone, analysis of updated data will require ascertainment of the influence of selective crossover from tamoxifen to letrozole. 
BIG 1-98 is an example of an enriched design, involving complementary analyses addressing different questions several years apart, and subject to evolving analytic plans influenced by new data that emerge over time.
Neoliberal science, Chinese style: Making and managing the 'obesity epidemic'.
Greenhalgh, Susan
2016-08-01
Science and Technology Studies has seen a growing interest in the commercialization of science. In this article, I track the role of corporations in the construction of the obesity epidemic, deemed one of the major public health threats of the century. Focusing on China, a rising superpower in the midst of rampant, state-directed neoliberalization, I unravel the process, mechanisms, and broad effects of the corporate invention of an obesity epidemic. Largely hidden from view, Western firms were central actors at every stage in the creation, definition, and governmental management of obesity as a Chinese disease. Two industry-funded global health entities and the exploitation of personal ties enabled actors to nudge the development of obesity science and policy along lines beneficial to large firms, while obscuring the nudging. From Big Pharma to Big Food and Big Soda, transnational companies have been profiting from the 'epidemic of Chinese obesity', while doing little to effectively treat or prevent it. The China case suggests how obesity might have been constituted an 'epidemic threat' in other parts of the world and underscores the need for global frameworks to guide the study of neoliberal science and policymaking.
The Human Genome Project: big science transforms biology and medicine.
Hood, Leroy; Rowen, Lee
2013-01-01
The Human Genome Project has transformed biology through its integrated big science approach to deciphering a reference human genome sequence along with the complete sequences of key model organisms. The project exemplifies the power, necessity and success of large, integrated, cross-disciplinary efforts - so-called 'big science' - directed towards complex major objectives. In this article, we discuss the ways in which this ambitious endeavor led to the development of novel technologies and analytical tools, and how it brought the expertise of engineers, computer scientists and mathematicians together with biologists. It established an open approach to data sharing and open-source software, thereby making the data resulting from the project accessible to all. The genome sequences of microbes, plants and animals have revolutionized many fields of science, including microbiology, virology, infectious disease and plant biology. Moreover, deeper knowledge of human sequence variation has begun to alter the practice of medicine. The Human Genome Project has inspired subsequent large-scale data acquisition initiatives such as the International HapMap Project, 1000 Genomes, and The Cancer Genome Atlas, as well as the recently announced Human Brain Project and the emerging Human Proteome Project.
Rabaglio, M; Sun, Z; Price, K N; Castiglione-Gertsch, M; Hawle, H; Thürlimann, B; Mouridsen, H; Campone, M; Forbes, J F; Paridaens, R J; Colleoni, M; Pienkowski, T; Nogaret, J-M; Láng, I; Smith, I; Gelber, R D; Goldhirsch, A; Coates, A S
2009-09-01
To compare the incidence and timing of bone fractures in postmenopausal women treated with 5 years of adjuvant tamoxifen or letrozole for endocrine-responsive early breast cancer in the Breast International Group (BIG) 1-98 trial. We evaluated 4895 patients allocated to 5 years of letrozole or tamoxifen in the BIG 1-98 trial who received at least some study medication (median follow-up 60.3 months). Bone fracture information (grade, cause, site) was collected every 6 months during trial treatment. The incidence of bone fractures was higher among patients treated with letrozole [228 of 2448 women (9.3%)] than with tamoxifen [160 of 2447 women (6.5%)]. The wrist was the most common site of fracture in both treatment groups. Statistically significant risk factors for bone fractures during treatment included age, smoking history, osteoporosis at baseline, previous bone fracture, and previous hormone replacement therapy. Consistent with other trials comparing aromatase inhibitors with tamoxifen, letrozole was associated with an increase in bone fractures. The benefits of superior disease control with letrozole and the lower incidence of fracture with tamoxifen should be weighed against the risk profile of individual patients.
Bonilla-Claudio, Margarita; Wang, Jun; Bai, Yan; Klysik, Elzbieta; Selever, Jennifer; Martin, James F
2012-02-01
We performed an in-depth analysis of Bmp4, a critical regulator of development, disease, and evolution, in cranial neural crest (CNC). Conditional Bmp4 overexpression, using a tetracycline-regulated Bmp4 gain-of-function allele, resulted in facial skeletal changes that were most dramatic after an E10.5 Bmp4 induction. Expression profiling uncovered a signature of Bmp4-induced genes (BIG) composed predominantly of transcriptional regulators that control self-renewal, osteoblast differentiation and negative Bmp autoregulation. The complementary experiment, CNC inactivation of Bmp2, Bmp4 and Bmp7, resulted in complete or partial loss of multiple CNC-derived skeletal elements, revealing a crucial requirement for Bmp signaling in membranous bone and cartilage development. Importantly, the BIG signature was reduced in Bmp loss-of-function mutants, indicating that Bmp-regulated target genes are modulated by Bmp dose. Chromatin immunoprecipitation (ChIP) revealed a subset of the BIG signature, including Satb2, Smad6, Hand1, Gadd45γ and Gata3, that was bound by Smad1/5 in the developing mandible, revealing direct Smad-mediated regulation. These data support the hypothesis that Bmp signaling regulates craniofacial skeletal development by balancing self-renewal and differentiation pathways in CNC progenitors.
Individual Differences in Accurately Judging Personality From Text.
Hall, Judith A; Goh, Jin X; Mast, Marianne Schmid; Hagedorn, Christian
2016-08-01
This research examines correlates of accuracy in judging Big Five traits from first-person text excerpts. Participants in six studies were recruited from psychology courses or online. In each study, participants performed a task of judging personality from text and performed other ability tasks and/or filled out questionnaires. Participants who were more accurate in judging personality from text were more likely to be female; had personalities that were more agreeable, conscientious, and feminine, and less neurotic and dominant (all controlling for participant gender); scored higher on empathic concern; self-reported more interest in, and attentiveness to, people's personalities in their daily lives; and reported reading more for pleasure, especially fiction. Accuracy was not associated with SAT scores but had a significant relation to vocabulary knowledge. Accuracy did not correlate with tests of judging personality and emotion based on audiovisual cues. This research is the first to address individual differences in accurate judgment of personality from text, thus adding to the literature on correlates of the good judge of personality. © 2015 Wiley Periodicals, Inc.