Sample records for virtual informatics infrastructure

  1. A Virtual Bioinformatics Knowledge Environment for Early Cancer Detection

    NASA Technical Reports Server (NTRS)

    Crichton, Daniel; Srivastava, Sudhir; Johnsey, Donald

    2003-01-01

    Discovery of disease biomarkers for cancer is a leading focus of early detection. The National Cancer Institute created a network of collaborating institutions focused on the discovery and validation of cancer biomarkers called the Early Detection Research Network (EDRN). Informatics plays a key role in enabling a virtual knowledge environment that provides scientists real-time access to distributed data sets located at research institutions across the nation. The distributed and heterogeneous nature of the collaboration makes data sharing across institutions very difficult. EDRN has developed a comprehensive informatics effort focused on developing a national infrastructure enabling seamless access, sharing, and discovery of science data resources across all EDRN sites. This paper will discuss the EDRN knowledge system architecture, its objectives, and its accomplishments.
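
    The fan-out-and-merge pattern behind such a virtual knowledge environment can be sketched in a few lines of Python. This is a hedged illustration only: the site URLs, the query parameter, and the JSON response shape are hypothetical stand-ins, not the actual EDRN interfaces.

    ```python
    # Sketch of a federated query: the same request is sent to every
    # member site and the per-site answers are merged into one view.
    # Endpoints and response shape are hypothetical, not EDRN's own.
    import json
    from urllib.parse import urlencode
    from urllib.request import urlopen

    SITE_ENDPOINTS = [  # hypothetical member-site query services
        "https://site-a.example.org/edrn/query",
        "https://site-b.example.org/edrn/query",
    ]

    def federated_query(biomarker: str) -> list:
        """Send one query to every site and merge the answers."""
        merged = []
        for endpoint in SITE_ENDPOINTS:
            url = f"{endpoint}?{urlencode({'biomarker': biomarker})}"
            try:
                with urlopen(url, timeout=10) as resp:
                    merged.extend(json.load(resp))  # assumes a JSON list
            except OSError:
                continue  # one site being down must not break the view
        return merged
    ```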

  2. A National Virtual Specimen Database for Early Cancer Detection

    NASA Technical Reports Server (NTRS)

    Crichton, Daniel; Kincaid, Heather; Kelly, Sean; Thornquist, Mark; Johnsey, Donald; Winget, Marcy

    2003-01-01

    Access to biospecimens is essential for enabling cancer biomarker discovery. The National Cancer Institute's (NCI) Early Detection Research Network (EDRN) comprises and integrates a large number of laboratories into a network in order to establish a collaborative scientific environment to discover and validate disease markers. The diversity of both the institutions and the collaborative focus has created the need for cross-disciplinary teams that integrate expertise in biomedical research, computational and biostatistical methods, and computer science. Given the collaborative design of the network, the EDRN needed an informatics infrastructure. The Fred Hutchinson Cancer Research Center, the National Cancer Institute, and NASA's Jet Propulsion Laboratory (JPL) teamed up to build an informatics infrastructure creating a collaborative, science-driven research environment despite the geographic separation and structural differences of the information systems that existed within the diverse network. EDRN investigators identified the need to share biospecimen data captured across the country and managed in disparate databases. As a result, the informatics team initiated an effort to create a virtual tissue database whereby scientists could search and locate details about specimens located at collaborating laboratories. Each database, however, was locally implemented and integrated into collection processes and methods unique to each institution. This meant that efforts to integrate databases needed to be done in a manner that did not require redesign or re-implementation of existing systems.
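
    A minimal sketch of the integration constraint described above: each laboratory keeps its own specimen schema, and a thin mapping layer translates a query phrased in a shared vocabulary into local column names, so no local system has to be redesigned. All site and field names below are invented for illustration.

    ```python
    # Wrapper/mediator sketch: per-site mappings from a shared specimen
    # vocabulary to each laboratory's local column names (all invented).
    SITE_SCHEMAS = {
        "lab_1": {"specimen_type": "SPEC_TYPE", "organ": "SITE_ORGAN",
                  "collection_year": "YR_COLLECTED"},
        "lab_2": {"specimen_type": "tissue_kind", "organ": "organ_name",
                  "collection_year": "year"},
    }

    def to_local_query(site: str, common_query: dict) -> dict:
        """Rewrite a shared-vocabulary query into one site's local
        column names, leaving the local database untouched."""
        mapping = SITE_SCHEMAS[site]
        return {mapping[k]: v for k, v in common_query.items() if k in mapping}

    # One common query becomes a site-specific query per laboratory.
    q = {"specimen_type": "serum", "organ": "prostate"}
    print({site: to_local_query(site, q) for site in SITE_SCHEMAS})
    ```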

  3. Assessing the current state of dental informatics in Saudi Arabia: the new frontier.

    PubMed

    Al-Nasser, Lubna; Al-Ehaideb, Ali; Househ, Mowafa

    2014-01-01

    Dental informatics is an emerging field that has the potential to transform the dental profession. This study aims to summarize the current applications of dental informatics in Saudi Arabia and to identify the challenges facing expansion of dental informatics in the Saudi context. A search for published articles and specialized forum entries was conducted, along with interviews with dental professionals familiar with the topic. Results indicated that digital radiography/analysis and administrative management of dental practice are the commonest applications used. Applications in Saudi dental education included web-based learning systems, computer-based assessments, and virtual technology for teaching clinical skills. Patient education software, electronic dental/oral health records, and the potential of dental research output from electronic databases are yet to be realized in Saudi Arabia. Challenges facing Saudi dental informatics include lack of IT infrastructure/support, social acceptability, and financial cost. Several initiatives have been taken toward research in dental informatics; still, more investment is needed to fully achieve the potential of the various applications of informatics in dental education, practice, and research.

  4. A Cloud-based Infrastructure and Architecture for Environmental System Research

    NASA Astrophysics Data System (ADS)

    Wang, D.; Wei, Y.; Shankar, M.; Quigley, J.; Wilson, B. E.

    2016-12-01

    The present availability of high-capacity networks, low-cost computers and storage devices, and the widespread adoption of hardware virtualization and service-oriented architecture provides a great opportunity to enable data and computing infrastructure sharing between closely related research activities. By taking advantage of these approaches, along with the world-class high-performance computing and data infrastructure located at Oak Ridge National Laboratory, a cloud-based infrastructure and architecture has been developed to efficiently deliver essential data and informatics services and utilities to the environmental system research community. It will provide unique capabilities that allow terrestrial ecosystem research projects to share their software utilities (tools), data, and even data submission workflows in a straightforward fashion. The infrastructure will minimize large disruptions to current project-based data submission workflows, for better acceptance by existing projects, since many ecosystem research projects already have their own requirements or preferences for data submission and collection. It will also eliminate the scalability problems of current project silos by providing unified data services and infrastructure. The infrastructure consists of two key components: (1) a collection of configurable virtual computing environments and user management systems that expedite data submission and collection from the environmental system research community, and (2) scalable data management services and systems, originated and developed by ORNL data centers.

  5. Reflections on biomedical informatics: from cybernetics to genomic medicine and nanomedicine.

    PubMed

    Maojo, Victor; Kulikowski, Casimir A

    2006-01-01

    Expanding on our previous analysis of Biomedical Informatics (BMI), the present perspective ranges from cybernetics to nanomedicine, based on its scientific, historical, philosophical, theoretical, experimental, and technological aspects as they affect systems developments, simulation and modelling, education, and the impact on healthcare. We then suggest that BMI is still searching for strong basic scientific principles around which it can crystallize. As -omic biological knowledge increasingly impacts the future of medicine, ubiquitous computing and informatics become even more essential, not only for the technological infrastructure, but as a part of the scientific enterprise itself. The Virtual Physiological Human and investigations into nanomedicine will surely produce yet more unpredictable opportunities, leading to significant changes in biomedical research and practice. As a discipline involved in making such advances possible, BMI is likely to need to re-define itself and extend its research horizons to meet the new challenges.

  6. Earth and Space Science Informatics: Raising Awareness of the Scientists and the Public

    NASA Astrophysics Data System (ADS)

    Messerotti, M.; Cobabe-Ammann, E.

    2009-04-01

    Recent developments in Earth and Space Science Informatics have led to the availability of advanced tools for data search, visualization, and analysis through, e.g., the Virtual Observatories or distributed data handling infrastructures. Such facilities are accessible via web interfaces and allow refined data handling to be carried out. Nevertheless, to date their use by the scientific community remains limited for a variety of reasons, which we analyze in this work while considering viable strategies to overcome the issue. Similarly, such facilities are powerful tools for teaching and for popularization, provided that e-learning programs are made available to the teachers and the communicators, respectively. In this context we consider the present activities and projects, stressing the role and the legacy of the Electronic Geophysical Year.

  7. An analysis of application of health informatics in Traditional Medicine: A review of four Traditional Medicine Systems.

    PubMed

    Raja Ikram, Raja Rina; Abd Ghani, Mohd Khanapi; Abdullah, Noraswaliza

    2015-11-01

    This paper first investigates the informatics areas and applications of four Traditional Medicine systems: Traditional Chinese Medicine (TCM), Ayurveda, Traditional Arabic and Islamic Medicine, and Traditional Malay Medicine. It then examines the national informatics infrastructure initiatives in the four respective countries that support these Traditional Medicine systems, and discusses the challenges of implementing informatics in them. The literature was sourced from four databases: Ebsco Host, IEEE Explore, Proquest and Google Scholar. The search terms used were "Traditional Medicine", "informatics", "informatics infrastructure", "traditional Chinese medicine", "Ayurveda", "traditional Arabic and Islamic medicine", and "traditional malay medicine"; combinations of these terms were also executed to enhance the searching process. A search was also conducted in Google to identify miscellaneous books, publications, and organization websites using the same terms. Among the major advancements in TCM and Ayurveda are bioinformatics, development of Traditional Medicine databases for decision system support, data mining, and image processing. Traditional Chinese Medicine differentiates itself from other Traditional Medicine systems with documented ISO standards to support the standardization of TCM. Informatics applications in Traditional Arabic and Islamic Medicine are mostly ehealth applications that focus on spiritual healing, Islamic obligations, and prophetic traditions. Literature regarding the development of health informatics to support Traditional Malay Medicine is still insufficient. Major informatics infrastructure common to China and India is automated insurance payment systems for Traditional Medicine treatment. National informatics infrastructure in the Middle East and Malaysia mainly caters for modern medicine; other infrastructure, such as telemedicine and hospital information systems, focuses on modern medicine or is not implemented and strategized at a national level to support Traditional Medicine. Informatics may not be able to address all the emerging areas of Traditional Medicine, because the concepts of Traditional Medicine differ from those of the modern system of medicine, though the aim may be the same, i.e., to give relief to the patient. Thus, there is a need to synthesize Traditional Medicine systems and informatics with involvement from the modern system of medicine. Future research may include filling the gaps in informatics areas and integrating national informatics infrastructure with established Traditional Medicine systems. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  8. Virtualization of open-source secure web services to support data exchange in a pediatric critical care research network.

    PubMed

    Frey, Lewis J; Sward, Katherine A; Newth, Christopher J L; Khemani, Robinder G; Cryer, Martin E; Thelen, Julie L; Enriquez, Rene; Shaoyu, Su; Pollack, Murray M; Harrison, Rick E; Meert, Kathleen L; Berg, Robert A; Wessel, David L; Shanley, Thomas P; Dalton, Heidi; Carcillo, Joseph; Jenkins, Tammara L; Dean, J Michael

    2015-11-01

    To examine the feasibility of deploying a virtual web service for sharing data within a research network, and to evaluate the impact on data consistency and quality. Virtual machines (VMs) encapsulated an open-source, semantically and syntactically interoperable secure web service infrastructure along with a shadow database. The VMs were deployed to 8 Collaborative Pediatric Critical Care Research Network Clinical Centers. Virtual web services could be deployed in hours. The interoperability of the web services reduced format misalignment from 56% to 1% and demonstrated that 99% of the data consistently transferred using the data dictionary and 1% needed human curation. Use of virtualized open-source secure web service technology could enable direct electronic abstraction of data from hospital databases for research purposes. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
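
    The consistency gain reported above comes from checking records against a shared data dictionary before they leave a site. The sketch below shows the general idea under invented field names and rules; it is not the network's actual dictionary or service code.

    ```python
    # Dictionary-driven validation: each field is checked against the
    # shared definition so format misalignment is caught at the source.
    from datetime import datetime

    DATA_DICTIONARY = {  # illustrative entries only
        "admit_date": {"type": "date", "format": "%Y-%m-%d"},
        "weight_kg":  {"type": "float", "min": 0.0, "max": 300.0},
    }

    def validate(record: dict) -> list:
        """Return the list of dictionary violations for one record."""
        errors = []
        for field, rule in DATA_DICTIONARY.items():
            value = record.get(field)
            if value is None:
                errors.append(f"{field}: missing")
            elif rule["type"] == "date":
                try:
                    datetime.strptime(value, rule["format"])
                except ValueError:
                    errors.append(f"{field}: bad date format {value!r}")
            elif rule["type"] == "float":
                try:
                    x = float(value)
                    if not rule["min"] <= x <= rule["max"]:
                        errors.append(f"{field}: {x} out of range")
                except ValueError:
                    errors.append(f"{field}: not numeric")
        return errors

    print(validate({"admit_date": "2015/11/01", "weight_kg": "12.4"}))
    ```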

  9. From big data analysis in the cloud to robotic pot drumming: tales from the Met Office Informatics Lab

    NASA Astrophysics Data System (ADS)

    Robinson, Niall; Tomlinson, Jacob; Prudden, Rachel; Hilson, Alex; Arribas, Alberto

    2017-04-01

    The Met Office Informatics Lab is a small multidisciplinary team which sits between science, technology, and design. Our mission is simply "to make Met Office data useful" - a deliberately broad objective. Our prototypes often trial cutting-edge technologies, and so far have included projects such as virtual reality data visualisation in the web browser, bots and natural language interfaces, and artificially intelligent weather warnings. In this talk we focus on our latest project, Jade, a big data analysis platform in the cloud. It is a powerful, flexible, and simple-to-use implementation which makes extensive use of technologies such as Jupyter, Dask, containerisation, Infrastructure as Code, and auto-scaling. Crucially, Jade is flexible enough to be used for a diverse set of applications: it can present weather forecast information to meteorologists and allow climate scientists to analyse big data sets, but it is also effective for analysing non-geospatial data. As well as making data useful, the Informatics Lab also trials new working practices. In this presentation, we will talk about our experience of making a group like the Lab successful.
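
    Dask, one of the technologies named above, gives a feel for how such a platform lets scientists analyse data larger than any one machine's memory. The sketch assumes an already-running Dask scheduler; the address is a placeholder, and nothing here is Jade's actual code.

    ```python
    # Larger-than-memory analysis on a Dask cluster: the array below is
    # ~80 GB, but workers only ever hold one 10,000 x 10,000 chunk each.
    import dask.array as da
    from dask.distributed import Client

    client = Client("tcp://scheduler.example:8786")  # placeholder address

    x = da.random.random((200_000, 50_000), chunks=(10_000, 10_000))
    col_means = x.mean(axis=0)   # lazy: builds a task graph, runs nothing
    result = col_means.compute() # executed in parallel across the cluster
    print(result.shape)          # (50000,)
    ```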

  10. The High-Performance Computing and Communications program, the national information infrastructure and health care.

    PubMed Central

    Lindberg, D A; Humphreys, B L

    1995-01-01

    The High-Performance Computing and Communications (HPCC) program is a multiagency federal effort to advance the state of computing and communications and to provide the technologic platform on which the National Information Infrastructure (NII) can be built. The HPCC program supports the development of high-speed computers, high-speed telecommunications, related software and algorithms, education and training, and information infrastructure technology and applications. The vision of the NII is to extend access to high-performance computing and communications to virtually every U.S. citizen so that the technology can be used to improve the civil infrastructure, lifelong learning, energy management, health care, etc. Development of the NII will require resolution of complex economic and social issues, including information privacy. Health-related applications supported under the HPCC program and NII initiatives include connection of health care institutions to the Internet; enhanced access to gene sequence data; the "Visible Human" Project; and test-bed projects in telemedicine, electronic patient records, shared informatics tool development, and image systems. PMID:7614116

  11. Crossing the chasm: information technology to biomedical informatics.

    PubMed

    Fahy, Brenda G; Balke, C William; Umberger, Gloria H; Talbert, Jeffery; Canales, Denise Niles; Steltenkamp, Carol L; Conigliaro, Joseph

    2011-06-01

    Accelerating the translation of new scientific discoveries to improve human health and disease management is the overall goal of a series of initiatives integrated in the National Institutes of Health (NIH) "Roadmap for Medical Research." The Clinical and Translational Science Award (CTSA) program is, arguably, the most visible component of the NIH Roadmap, providing resources to institutions to transform their clinical and translational research enterprises along the goals of the Roadmap. The CTSA program emphasizes biomedical informatics as a critical component for the accomplishment of the NIH's translational objectives. To be optimally effective, emerging biomedical informatics programs must link with the information technology platforms of the enterprise clinical operations within academic health centers. This report details one academic health center's transdisciplinary initiative to create an integrated academic discipline of biomedical informatics through the development of its infrastructure for clinical and translational science and its response to the CTSA mechanism. This approach required a detailed informatics strategy to accomplish these goals. This transdisciplinary initiative was the impetus for creation of a specialized biomedical informatics core, the Center for Biomedical Informatics (CBI). Development of the CBI codified the need to incorporate medical informatics, including quality and safety informatics, and enterprise clinical information systems within the CBI. This article describes the steps taken to develop the biomedical informatics infrastructure, its integration with clinical systems at one academic health center, successes achieved, and barriers encountered during these efforts.

  12. Leveraging the national cyberinfrastructure for biomedical research.

    PubMed

    LeDuc, Richard; Vaughn, Matthew; Fonner, John M; Sullivan, Michael; Williams, James G; Blood, Philip D; Taylor, James; Barnett, William

    2014-01-01

    In the USA, the national cyberinfrastructure refers to a system of research supercomputers and other IT facilities and the high-speed networks that connect them. These resources have been heavily leveraged by scientists in disciplines such as high energy physics, astronomy, and climatology, but until recently they have been little used by biomedical researchers. We suggest that many of the 'Big Data' challenges facing the medical informatics community can be efficiently handled using national-scale cyberinfrastructure. Resources such as the Extreme Science and Engineering Discovery Environment, the Open Science Grid, and Internet2 provide economical and proven infrastructures for Big Data challenges, but these resources can be difficult to approach. Specialized web portals, support centers, and virtual organizations can be constructed on these resources to meet defined computational challenges, specifically for genomics. We provide examples of how this has been done in basic biology as an illustration for the biomedical informatics community.

  13. Leveraging the national cyberinfrastructure for biomedical research

    PubMed Central

    LeDuc, Richard; Vaughn, Matthew; Fonner, John M; Sullivan, Michael; Williams, James G; Blood, Philip D; Taylor, James; Barnett, William

    2014-01-01

    In the USA, the national cyberinfrastructure refers to a system of research supercomputers and other IT facilities and the high-speed networks that connect them. These resources have been heavily leveraged by scientists in disciplines such as high energy physics, astronomy, and climatology, but until recently they have been little used by biomedical researchers. We suggest that many of the ‘Big Data’ challenges facing the medical informatics community can be efficiently handled using national-scale cyberinfrastructure. Resources such as the Extreme Science and Engineering Discovery Environment, the Open Science Grid, and Internet2 provide economical and proven infrastructures for Big Data challenges, but these resources can be difficult to approach. Specialized web portals, support centers, and virtual organizations can be constructed on these resources to meet defined computational challenges, specifically for genomics. We provide examples of how this has been done in basic biology as an illustration for the biomedical informatics community. PMID:23964072

  14. Biodiversity analysis in the digital era

    PubMed Central

    2016-01-01

    This paper explores what the virtual biodiversity e-infrastructure will look like as it takes advantage of advances in ‘Big Data’ biodiversity informatics and e-research infrastructure, which allow integration of various taxon-level data types (genome, morphology, distribution and species interactions) within a phylogenetic and environmental framework. By overcoming the data scaling problem in ecology, this integrative framework will provide richer information and fast learning to enable a deeper understanding of biodiversity evolution and dynamics in a rapidly changing world. The Atlas of Living Australia is used as one example of the advantages of progressing towards this future. Living in this future will require the adoption of new ways of integrating scientific knowledge into societal decision making. This article is part of the themed issue ‘From DNA barcodes to biomes’. PMID:27481789

  15. Crossing the Chasm: Information Technology to Biomedical Informatics

    PubMed Central

    Fahy, Brenda G.; Balke, C. William; Umberger, Gloria H.; Talbert, Jeffery; Canales, Denise Niles; Steltenkamp, Carol L.; Conigliaro, Joseph

    2011-01-01

    Accelerating the translation of new scientific discoveries to improve human health and disease management is the overall goal of a series of initiatives integrated in the National Institutes of Health (NIH) “Roadmap for Medical Research.” The Clinical and Translational Science Award (CTSA) program is, arguably, the most visible component of the NIH Roadmap, providing resources to institutions to transform their clinical and translational research enterprises along the goals of the Roadmap. The CTSA program emphasizes biomedical informatics as a critical component for the accomplishment of the NIH’s translational objectives. To be optimally effective, emerging biomedical informatics programs must link with the information technology (IT) platforms of the enterprise clinical operations within academic health centers. This report details one academic health center’s transdisciplinary initiative to create an integrated academic discipline of biomedical informatics through the development of its infrastructure for clinical and translational science and its response to the CTSA mechanism. This approach required a detailed informatics strategy to accomplish these goals. This transdisciplinary initiative was the impetus for creation of a specialized biomedical informatics core, the Center for Biomedical Informatics (CBI). Development of the CBI codified the need to incorporate medical informatics, including quality and safety informatics, and enterprise clinical information systems within the CBI. This paper describes the steps taken to develop the biomedical informatics infrastructure, its integration with clinical systems at one academic health center, successes achieved, and barriers encountered during these efforts. PMID:21383632

  16. Department of Energy's Virtual Lab Infrastructure for Integrated Earth System Science Data

    NASA Astrophysics Data System (ADS)

    Williams, D. N.; Palanisamy, G.; Shipman, G.; Boden, T.; Voyles, J.

    2014-12-01

    The U.S. Department of Energy (DOE) Office of Biological and Environmental Research (BER) Climate and Environmental Sciences Division (CESD) produces a diversity of data, information, software, and model codes across its research and informatics programs and facilities. This information includes raw and reduced observational and instrumentation data, model codes, model-generated results, and integrated data products. Currently, most of these data and information are prepared and shared for program-specific activities, corresponding to CESD organization research. A major challenge facing BER CESD is how best to inventory, integrate, and deliver these vast and diverse resources for the purpose of accelerating Earth system science research. This talk provides a concept for a CESD Integrated Data Ecosystem and an initial roadmap for its implementation to address this integration challenge in the "Big Data" domain. Toward this end, a new BER Virtual Laboratory Infrastructure will be presented, which will include services and software connecting the heterogeneous CESD data holdings and will be constructed with open source software based on industry standards, protocols, and state-of-the-art technology.

  17. A centralized informatics infrastructure for the National Institute on Drug Abuse Clinical Trials Network.

    PubMed

    Pan, Jeng-Jong; Nahm, Meredith; Wakim, Paul; Cushing, Carol; Poole, Lori; Tai, Betty; Pieper, Carl F

    2009-02-01

    Clinical trial networks (CTNs) were created to provide a sustaining infrastructure for the conduct of multisite clinical trials. As such, they must withstand changes in membership. Centralization of infrastructure including knowledge management, portfolio management, information management, process automation, work policies, and procedures in clinical research networks facilitates consistency and ultimately research. In 2005, the National Institute on Drug Abuse (NIDA) CTN transitioned from a distributed data management model to a centralized informatics infrastructure to support the network's trial activities and administration. We describe the centralized informatics infrastructure and discuss our challenges to inform others considering such an endeavor. During the migration of a clinical trial network from a decentralized to a centralized data center model, descriptive data were captured and are presented here to assess the impact of centralization. We present the framework for the informatics infrastructure and evaluative metrics. The network has decreased the time from last patient-last visit to database lock from an average of 7.6 months to 2.8 months. The average database error rate decreased from 0.8% to 0.2%, with a corresponding decrease in the interquartile range from 0.04%-1.0% before centralization to 0.01%-0.27% after centralization. Centralization has provided the CTN with integrated trial status reporting and the first standards-based public data share. A preliminary cost-benefit analysis showed a 50% reduction in data management cost per study participant over the life of a trial. A single clinical trial network comprising addiction researchers and community treatment programs was assessed. The findings may not be applicable to other research settings. The identified informatics components provide the information and infrastructure needed for our clinical trial network. Post-centralization data management operations are more efficient and less costly, with higher data quality.

  18. APPLICATION OF INFORMATION AND COMMUNICATION TECHNOLOGIES IN MEDICAL EDUCATION

    PubMed Central

    Al-Tamimi, Dalal M.

    2003-01-01

    The recognition that information and communication technologies should play an increasingly important role in medical education is key to educating physicians in the 21st century. Computer use in medical education includes Internet hypermedia/multimedia technologies, medical informatics, distance learning, and telemedicine. Adaptation to the use of these technologies should ideally start from the elementary school level. Medical schools must introduce medical informatics courses very early in the medical curriculum. Teachers will need regular CME courses to prepare and update themselves with the changing circumstances. Our infrastructure must be prepared for the new developments with computer labs, basic skill labs, closed-circuit television facilities, virtual classrooms, smart classrooms, simulated teaching facilities, and distance teaching by tele-techniques. Our existing manpower, including doctors, nurses, technicians, librarians, and administrative personnel, requires hands-on training, while new recruitment will have to emphasize compulsory knowledge of and familiarity with information technology. This paper highlights these subjects in detail as a means to prepare us to meet the challenges of the 21st century. PMID:23011983

  19. Medical Informatics in Academic Health Science Centers.

    ERIC Educational Resources Information Center

    Frisse, Mark E.

    1992-01-01

    An analysis of the state of medical informatics, the application of computer and information technology to biomedicine, looks at trends and concerns, including integration of traditionally distinct enterprises (clinical information systems, financial information, scholarly support activities, infrastructures); informatics career choice and…

  20. Latvian Education Informatization System LIIS

    ERIC Educational Resources Information Center

    Bicevskis, Janis; Andzans, Agnis; Ikaunieks, Evalds; Medvedis, Inga; Straujums, Uldis; Vezis, Viesturs

    2004-01-01

    The Latvian Education Informatization System LIIS project covers the whole information grid: education content, management, information services, infrastructure and user training at several levels--schools, school boards and Ministry of Education and Science. Informatization is the maintained process of creating the technical, economical and…

  21. Medical image informatics infrastructure design and applications.

    PubMed

    Huang, H K; Wong, S T; Pietka, E

    1997-01-01

    A picture archiving and communication system (PACS) integrates multimodality images and health information systems, and is designed to improve the operation of a radiology department. As it evolves, PACS becomes a hospital image document management system with a voluminous repository of images and related data files. A medical image informatics infrastructure can be designed to take advantage of existing data, providing PACS with add-on value for health care service, research, and education. A medical image informatics infrastructure (MIII) consists of the following components: medical images and associated data (including the PACS database), image processing, data/knowledge base management, visualization, graphic user interface, communication networking, and application-oriented software. This paper describes these components and their logical connection, and illustrates some applications based on the concept of the MIII.

  22. Epilepsy informatics and an ontology-driven infrastructure for large database research and patient care in epilepsy.

    PubMed

    Sahoo, Satya S; Zhang, Guo-Qiang; Lhatoo, Samden D

    2013-08-01

    The epilepsy community increasingly recognizes the need for a modern classification system that can also be easily integrated with effective informatics tools. The 2010 reports by the United States President's Council of Advisors on Science and Technology (PCAST) identified informatics as a critical resource to improve quality of patient care, drive clinical research, and reduce the cost of health services. An effective informatics infrastructure for epilepsy, which is underpinned by a formal knowledge model or ontology, can leverage an ever increasing amount of multimodal data to (1) improve clinical decision support, (2) broaden access to information for patients and their families, (3) ease data sharing, and (4) accelerate secondary use of clinical data. Modeling the recommendations of the International League Against Epilepsy (ILAE) classification system in the form of an epilepsy domain ontology is essential for consistent use of terminology in a variety of applications, including electronic health records systems and clinical applications. In this review, we discuss the data management issues in epilepsy and explore the benefits of an ontology-driven informatics infrastructure and its role in adoption of a "data-driven" paradigm in epilepsy research. Wiley Periodicals, Inc. © 2013 International League Against Epilepsy.
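
    A toy example makes the retrieval benefit of an ontology concrete: because terms carry is-a links, a query for a broad class also finds records annotated with any of its descendants. The terms below are deliberately simplified illustrations and are not the ILAE classification itself.

    ```python
    # Tiny is-a hierarchy (illustrative terms only).
    IS_A = {
        "focal seizure": "seizure",
        "generalized seizure": "seizure",
        "absence seizure": "generalized seizure",
    }

    def ancestors(term: str) -> set:
        """Walk the is-a links up to the root."""
        found = set()
        while term in IS_A:
            term = IS_A[term]
            found.add(term)
        return found

    records = [("patient_1", "absence seizure"), ("patient_2", "focal seizure")]
    query = "generalized seizure"
    hits = [pid for pid, term in records
            if term == query or query in ancestors(term)]
    print(hits)  # ['patient_1'] -- matched through the ontology
    ```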

  23. Epilepsy informatics and an ontology-driven infrastructure for large database research and patient care in epilepsy

    PubMed Central

    Sahoo, Satya S.; Zhang, Guo-Qiang; Lhatoo, Samden D.

    2013-01-01

    Summary The epilepsy community increasingly recognizes the need for a modern classification system that can also be easily integrated with effective informatics tools. The 2010 reports by the United States President's Council of Advisors on Science and Technology (PCAST) identified informatics as a critical resource to improve quality of patient care, drive clinical research, and reduce the cost of health services. An effective informatics infrastructure for epilepsy, which is underpinned by a formal knowledge model or ontology, can leverage an ever increasing amount of multimodal data to (1) improve clinical decision support, (2) broaden access to information for patients and their families, (3) ease data sharing, and (4) accelerate secondary use of clinical data. Modeling the recommendations of the International League Against Epilepsy (ILAE) classification system in the form of an epilepsy domain ontology is essential for consistent use of terminology in a variety of applications, including electronic health records systems and clinical applications. In this review, we discuss the data management issues in epilepsy and explore the benefits of an ontology-driven informatics infrastructure and its role in adoption of a “data-driven” paradigm in epilepsy research. PMID:23647220

  24. Applications of the pipeline environment for visual informatics and genomics computations

    PubMed Central

    2011-01-01

    Background Contemporary informatics and genomics research require efficient, flexible and robust management of large heterogeneous data, advanced computational tools, powerful visualization, reliable hardware infrastructure, interoperability of computational resources, and detailed data and analysis-protocol provenance. The Pipeline is a client-server distributed computational environment that facilitates the visual graphical construction, execution, monitoring, validation and dissemination of advanced data analysis protocols. Results This paper reports on the applications of the LONI Pipeline environment to address two informatics challenges - graphical management of diverse genomics tools, and the interoperability of informatics software. Specifically, this manuscript presents the concrete details of deploying general informatics suites and individual software tools to new hardware infrastructures, the design, validation and execution of new visual analysis protocols via the Pipeline graphical interface, and integration of diverse informatics tools via the Pipeline eXtensible Markup Language syntax. We demonstrate each of these processes using several established informatics packages (e.g., miBLAST, EMBOSS, mrFAST, GWASS, MAQ, SAMtools, Bowtie) for basic local sequence alignment and search, molecular biology data analysis, and genome-wide association studies. These examples demonstrate the power of the Pipeline graphical workflow environment to enable integration of bioinformatics resources which provide a well-defined syntax for dynamic specification of the input/output parameters and the run-time execution controls. Conclusions The LONI Pipeline environment http://pipeline.loni.ucla.edu provides a flexible graphical infrastructure for efficient biomedical computing and distributed informatics research. The interactive Pipeline resource manager enables the utilization and interoperability of diverse types of informatics resources. The Pipeline client-server model provides computational power to a broad spectrum of informatics investigators - experienced developers and novice users, users with or without access to advanced computational resources (e.g., Grid, data), as well as basic and translational scientists. The open development, validation and dissemination of computational networks (pipeline workflows) facilitates the sharing of knowledge, tools, protocols and best practices, and enables the unbiased validation and replication of scientific findings by the entire community. PMID:21791102
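
    The XML-based tool description mentioned above can be illustrated with a toy workflow and parser. The element names here are invented for the sketch and do not follow the actual Pipeline XML schema.

    ```python
    # A toy two-step workflow description in the spirit of tool-wrapping
    # XML, plus a parser that lists each module's inputs and outputs.
    import xml.etree.ElementTree as ET

    WORKFLOW = """
    <workflow name="align-and-sort">
      <module id="align" cmd="bowtie">
        <input  name="reads"  type="fastq"/>
        <output name="hits"   type="sam"/>
      </module>
      <module id="sort" cmd="samtools sort">
        <input  name="hits"   type="sam"/>
        <output name="sorted" type="bam"/>
      </module>
    </workflow>
    """

    root = ET.fromstring(WORKFLOW)
    for module in root.findall("module"):
        ins = [i.get("name") for i in module.findall("input")]
        outs = [o.get("name") for o in module.findall("output")]
        print(f"{module.get('id')} ({module.get('cmd')}): {ins} -> {outs}")
    ```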

  25. Innovative designs for the smart ICU: Part 3: Advanced ICU informatics.

    PubMed

    Halpern, Neil A

    2014-04-01

    This third and final installment of this series on innovative designs for the smart ICU addresses the steps involved in conceptualizing, actualizing, using, and maintaining the advanced ICU informatics infrastructure and systems. The smart ICU comprehensively and electronically integrates the patient in the ICU with all aspects of care, displays data in a variety of formats, converts data to actionable information, uses data proactively to enhance patient safety, and monitors the ICU environment to facilitate patient care and ICU management. The keys to success in this complex informatics design process include an understanding of advanced informatics concepts, sophisticated planning, installation of a robust infrastructure capable of both connectivity and interoperability, and implementation of middleware solutions that provide value. Although new technologies commonly appear compelling, they are also complicated and challenging to incorporate within existing or evolving hospital informatics systems. Therefore, careful analysis, deliberate testing, and a phased approach to the implementation of innovative technologies are necessary to achieve the multilevel solutions of the smart ICU.

  26. Data and Informatics Working Group on Virtual Data Integration Workshop Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, D. N.; Palanisamy, G.; Dam, K. K.

    2015-10-13

    This report is the outcome of a workshop that was commissioned by the Department of Energy’s Climate and Environmental Sciences Division (CESD) to examine current and future data infrastructure requirements that would be foundational to achieving CESD’s scientific mission goals. Over the past several years, data volumes in CESD disciplines have risen sharply to unprecedented levels (tens of petabytes). So too has the complexity and diversity of the research data (simulation, observation, and reanalysis) needing to be captured, stored, verified, analyzed, and integrated. With the trends of increased data volume (in the hundreds of petabytes), more complex analysis processes, and growing cross-disciplinary collaborations, it is timely to investigate whether the CESD community has the right computational and data support to realize the full scientific potential from its data collections. In recognition of the challenges, a partnership is forming across CESD and with national and international agencies to investigate the viability of creating an integrated, collaborative data infrastructure: a virtual laboratory. The overarching goal of this report is to identify the community’s key data technology requirements and high-priority development needs for sustaining and growing their scientific discovery potential. The report also aims to map these requirements to existing solutions and to identify gaps in current services, tools, and infrastructure that will need to be addressed in the short, medium, and long term so as not to impede scientific progress.

  27. Public Health, Population Health, and Epidemiology Informatics: Recent Research and Trends in the United States.

    PubMed

    Massoudi, B L; Chester, K G

    2017-08-01

    Objectives: To survey advances in public and population health and epidemiology informatics over the past 18 months. Methods: We conducted a review of English-language research works conducted in the domain of public and population health informatics and published in MEDLINE or Web of Science between January 2015 and June 2016 where information technology or informatics was a primary subject or main component of the study methodology. Selected articles were presented using a thematic analysis based on the 2011 American Medical Informatics Association (AMIA) Public Health Informatics Agenda tracks as a typology. Results: Results are given within the context developed by Dixon et al. (2015) and key themes from the 2011 AMIA Public Health Informatics Agenda. Advances are presented within a socio-technical infrastructure undergirded by a trained, competent public health workforce, systems development to meet the business needs of the practice field, and research that evaluates whether those needs are adequately met. The ability to support and grow the infrastructure depends on financial sustainability. Conclusions: The fields of public health and population health informatics continue to grow, with the most notable developments focused on surveillance, workforce development, and linking to or providing clinical services, which encompassed population health informatics advances. Very few advances addressed the need to improve communication, coordination, and consistency with the field of informatics itself, as identified in the AMIA agenda. This will likely result in the persistence of the silos of public health information systems that currently exist. Future research activities need to aim toward a holistic approach of informatics across the enterprise. Georg Thieme Verlag KG Stuttgart.

  28. Women's health nursing in the context of the National Health Information Infrastructure.

    PubMed

    Jenkins, Melinda L; Hewitt, Caroline; Bakken, Suzanne

    2006-01-01

    Nurses must be prepared to participate in the evolving National Health Information Infrastructure and the changes that will consequently occur in health care practice and documentation. Informatics technologies will be used to develop electronic health records with integrated decision support features that will likely lead to enhanced health care quality and safety. This paper provides a summary of the National Health Information Infrastructure and highlights electronic health records and decision support systems within the context of evidence-based practice. Activities at the Columbia University School of Nursing designed to prepare nurses with the necessary informatics competencies to practice in a National Health Information Infrastructure-enabled health care system are described. Data are presented from electronic (personal digital assistant) encounter logs used in our Women's Health Nurse Practitioner program to support evidence-based advanced practice nursing care. Implications for nursing practice, education, and research in the evolving National Health Information Infrastructure are discussed.

  29. Computational Science in Armenia (Invited Talk)

    NASA Astrophysics Data System (ADS)

    Marandjian, H.; Shoukourian, Yu.

    This survey is devoted to the development of informatics and computer science in Armenia. The results in theoretical computer science (algebraic models, solutions to systems of general-form recursive equations, methods of coding theory, pattern recognition, and image processing) constitute the theoretical basis for developing problem-solving-oriented environments. Examples include a synthesizer of optimized distributed recursive programs, software tools for cluster-oriented implementations of two-dimensional cellular automata, and a grid-aware web interface with advanced service trading for linear algebra calculations. Among completed projects aimed at solving scientific problems that require high-performance computing resources are examples from physics (parallel computing of complex quantum systems), astrophysics (the Armenian virtual laboratory), biology (a molecular dynamics study of the human red blood cell membrane), and meteorology (implementing and evaluating the Weather Research and Forecasting model for the territory of Armenia). The overview also notes that the Institute for Informatics and Automation Problems of the National Academy of Sciences of Armenia has established a scientific and educational infrastructure, uniting computing clusters of scientific and educational institutions of the country, and provides the scientific community with access to local and international computational resources - strong support for computational science in Armenia.

  30. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community.

    PubMed

    Krampis, Konstantinos; Booth, Tim; Chapman, Brad; Tiwari, Bela; Bicak, Mesude; Field, Dawn; Nelson, Karen E

    2012-03-19

    A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the development of highly customized versions from a shared code base. This shared community toolkit enables application specific analysis platforms on the cloud by minimizing the effort required to prepare and maintain them.
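
    For readers unfamiliar with the provisioning step, the sketch below shows how a VM can be launched from a public machine image on Amazon EC2 using boto3. The AMI id, key pair, and instance size are placeholders; the real Cloud BioLinux image id would be taken from the project website.

    ```python
    # Launching a VM from a public image on Amazon EC2 (placeholder ids).
    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")
    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder, not the real AMI
        InstanceType="m5.xlarge",         # sized for alignment/assembly
        KeyName="my-keypair",             # assumes an existing key pair
        MinCount=1,
        MaxCount=1,
    )
    print("launched", instances[0].id)
    ```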

  31. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community

    PubMed Central

    2012-01-01

    Background A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Results Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Conclusions Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the development of highly customized versions from a shared code base. This shared community toolkit enables application specific analysis platforms on the cloud by minimizing the effort required to prepare and maintain them. PMID:22429538

  32. The Secure Medical Research Workspace: An IT Infrastructure to Enable Secure Research on Clinical Data

    PubMed Central

    Owen, Phillips; Mostafa, Javed; Lamm, Brent; Wang, Xiaoshu; Schmitt, Charles P.; Ahalt, Stanley C.

    2013-01-01

    Clinical data have tremendous value for translational research, but only if security and privacy concerns can be addressed satisfactorily. A collaboration of clinical and informatics teams, including RENCI, NC TraCS, UNC's School of Information and Library Science, Information Technology Service's Research Computing, and other partners at the University of North Carolina at Chapel Hill, has developed a system called the Secure Medical Research Workspace (SMRW) that enables researchers to use clinical data securely for research. SMRW significantly minimizes the risk presented when using identified clinical data, thereby protecting patients, researchers, and institutions associated with the data. The SMRW is built on a novel combination of virtualization and data leakage protection and can be combined with other protection methodologies and scaled to production levels. PMID:23751029

  33. Designing Biomedical Informatics Infrastructure for Clinical and Translational Science

    ERIC Educational Resources Information Center

    La Paz Lillo, Ariel Isaac

    2009-01-01

    Clinical and Translational Science (CTS) rests largely on information flowing smoothly at multiple levels, in multiple directions, across multiple locations. Biomedical Informatics (BI) is seen as a backbone that helps to manage information flows for the translation of knowledge generated and stored in silos of basic science into bedside…

  34. Teacher Perception on Educational Informatics Network: A Qualitative Study of a Turkish Anatolian High School

    ERIC Educational Resources Information Center

    Karalar, Halit; Dogan, Ugur

    2017-01-01

    The FATIH Project carried out by the Turkish government is one of the most comprehensive technology integration projects in the world. Through this project, interactive boards, tablets, and multifunction printers have been distributed to schools, and the Internet infrastructure of schools has been improved. The EIN (Educational Informatics Network) platform, known as EBA…

  35. Developing an Open Source, Reusable Platform for Distributed Collaborative Information Management in the Early Detection Research Network

    NASA Technical Reports Server (NTRS)

    Hart, Andrew F.; Verma, Rishi; Mattmann, Chris A.; Crichton, Daniel J.; Kelly, Sean; Kincaid, Heather; Hughes, Steven; Ramirez, Paul; Goodale, Cameron; Anton, Kristen

    2012-01-01

    For the past decade, the NASA Jet Propulsion Laboratory, in collaboration with Dartmouth University, has served as the center for informatics for the Early Detection Research Network (EDRN). The EDRN is a multi-institution research effort funded by the U.S. National Cancer Institute (NCI) and tasked with identifying and validating biomarkers for the early detection of cancer. As the distributed network has grown, increasingly formal processes have been developed for the acquisition, curation, storage, and dissemination of heterogeneous research information assets, and an informatics infrastructure has emerged. In this paper we discuss the evolution of EDRN informatics, its success as a mechanism for distributed information integration, and the potential sustainability and reuse benefits of emerging efforts to make the platform components themselves open source. We describe our experience transitioning a large closed-source software system to a community driven, open source project at the Apache Software Foundation, and point to lessons learned that will guide our present efforts to promote the reuse of the EDRN informatics infrastructure by a broader community.

  36. Digging the Virtual Past

    ERIC Educational Resources Information Center

    Polymeropoulou, Panagiota

    2014-01-01

    In this paper we investigate the way that technological progress and informatics have contributed greatly to the field of archaeology. We analyze the terms virtual archaeology and virtual reality in archaeology, with extended reference to the applications and the computer graphics that archaeologists could use…

  37. A National Agenda for Public Health Informatics

    PubMed Central

    Yasnoff, William A.; Overhage, J. Marc; Humphreys, Betsy L.; LaVenture, Martin

    2001-01-01

    The AMIA 2001 Spring Congress brought together members of the public health and informatics communities to develop a national agenda for public health informatics. Discussions of funding and governance; architecture and infrastructure; standards and vocabulary; research, evaluation, and best practices; privacy, confidentiality, and security; and training and workforce resulted in 74 recommendations with two key themes—that all stakeholders need to be engaged in coordinated activities related to public health information architecture, standards, confidentiality, best practices, and research; and that informatics training is needed throughout the public health workforce. Implementation of this consensus agenda will help promote progress in the application of information technology to improve public health. PMID:11687561

  38. Panel: Eco-informatics and decision making managing our natural resources

    USGS Publications Warehouse

    Cushing, J.B.; Wilson, T.; Martin, F.; Schnase, J.; Spengler, S.; Sugarbaker, L.; Pardo, T.

    2006-01-01

    This panel responds to the December 2004 workshop on Eco-Informatics and Decision Making [1], which addressed how informatics tools can help with better management of natural resources and policy making. The workshop was jointly sponsored by the NSF, NBII, NASA, and EPA. Workshop participants recommended that informatics research be funded in four IT areas: modeling and simulation, data quality, information integration and ontologies, and social and human aspects. Additionally, they recommended that funding agencies provide infrastructure and make some changes in funding habits to ensure that cycles of innovation in the domain are sustained. This panel brings issues raised in that workshop to the attention of digital government researchers.

  39. The State of Information and Communication Technology and Health Informatics in Ghana

    PubMed Central

    Achampong, Emmanuel Kusi

    2012-01-01

    Information and Communication Technology (ICT) has become a major tool in delivery of health services and has had an innovative impact on quality of life. ICT is affecting the way healthcare is delivered to clients. In this paper, we discuss the state of ICT and health informatics in Ghana. We also discuss the state of various relevant infrastructures for the successful implementation of ehealth projects. We analyse the past and present state of health informatics in Ghana, in comparison to other African countries. We also review the challenges facing successful implementation of health informatics projects in Ghana and suggest possible solutions. PMID:23569633

  40. Innovative Methods in Teaching Programming for Future Informatics Teachers

    ERIC Educational Resources Information Center

    Majherová, Janka; Králík, Václav

    2017-01-01

    In the training of future informatics teachers, students obtain experience with different methods of programming. The students also become familiar with programming by using the robotic system Lego Mindstorms. However, the small number of Lego systems available is a limiting factor for the teaching process. Use of virtual robotic…

  1. Preparation of Speciality-Integrated Assignments in Informatics Study Courses at the Higher Education Level

    ERIC Educational Resources Information Center

    Vitinš, Maris; Rasnacs, Oskars

    2012-01-01

    Information and communications technologies today are used in virtually any university course when students prepare their papers. ICT is also needed after people graduate from university and enter the job market. This author is an instructor in the field of informatics related to health care and social sciences at the Riga Stradins…

  2. A decade of experience in the development and implementation of tissue banking informatics tools for intra and inter-institutional translational research

    PubMed Central

    Amin, Waqas; Singh, Harpreet; Pople, Andre K.; Winters, Sharon; Dhir, Rajiv; Parwani, Anil V.; Becich, Michael J.

    2010-01-01

    Context: Tissue banking informatics deals with standardized annotation, collection and storage of biospecimens that can further be shared by researchers. Over the last decade, the Department of Biomedical Informatics (DBMI) at the University of Pittsburgh has developed various tissue banking informatics tools to expedite translational medicine research. In this review, we describe the technical approach and capabilities of these models. Design: Clinical annotation of biospecimens requires data retrieval from various clinical information systems and the de-identification of the data by an honest broker. Based upon these requirements, DBMI, with its collaborators, has developed both Oracle-based organ-specific data marts and a more generic, model-driven architecture for biorepositories. The organ-specific models are developed utilizing Oracle 9.2.0.1 server tools and software applications and the model-driven architecture is implemented in a J2EE framework. Result: The organ-specific biorepositories implemented by DBMI include the Cooperative Prostate Cancer Tissue Resource (http://www.cpctr.info/), Pennsylvania Cancer Alliance Bioinformatics Consortium (http://pcabc.upmc.edu/main.cfm), EDRN Colorectal and Pancreatic Neoplasm Database (http://edrn.nci.nih.gov/) and Specialized Programs of Research Excellence (SPORE) Head and Neck Neoplasm Database (http://spores.nci.nih.gov/current/hn/index.htm). The model-based architecture is represented by the National Mesothelioma Virtual Bank (http://mesotissue.org/). These biorepositories provide thousands of well annotated biospecimens for the researchers that are searchable through query interfaces available via the Internet. Conclusion: These systems, developed and supported by our institute, serve to form a common platform for cancer research to accelerate progress in clinical and translational research. In addition, they provide a tangible infrastructure and resource for exposing research resources and biospecimen services in collaboration with the clinical anatomic pathology laboratory information system (APLIS) and the cancer registry information systems. PMID:20922029
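
    The honest-broker step described above (retrieving data from clinical systems and de-identifying it before research use) can be illustrated with a short sketch. The Python below is a hedged illustration only: the field names, salting scheme, and linkage token are assumptions, not the DBMI implementation.

        # Hypothetical honest-broker de-identification sketch; not the DBMI code.
        import hashlib

        PHI_FIELDS = {"name", "mrn", "date_of_birth", "address"}  # assumed direct identifiers

        def deidentify(record: dict, salt: str) -> dict:
            """Drop direct identifiers and replace the MRN with a salted one-way hash."""
            token = hashlib.sha256((salt + record["mrn"]).encode()).hexdigest()
            clean = {k: v for k, v in record.items() if k not in PHI_FIELDS}
            clean["specimen_token"] = token  # stable pseudonym for cross-system linkage
            return clean

        specimen = {"mrn": "12345", "name": "J. Doe", "site": "colon", "diagnosis": "adenocarcinoma"}
        print(deidentify(specimen, salt="per-project-secret"))

    A salted hash keeps records linkable across organ-specific data marts without exposing the medical record number itself.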

  3. A decade of experience in the development and implementation of tissue banking informatics tools for intra and inter-institutional translational research.

    PubMed

    Amin, Waqas; Singh, Harpreet; Pople, Andre K; Winters, Sharon; Dhir, Rajiv; Parwani, Anil V; Becich, Michael J

    2010-08-10

    Tissue banking informatics deals with standardized annotation, collection and storage of biospecimens that can further be shared by researchers. Over the last decade, the Department of Biomedical Informatics (DBMI) at the University of Pittsburgh has developed various tissue banking informatics tools to expedite translational medicine research. In this review, we describe the technical approach and capabilities of these models. Clinical annotation of biospecimens requires data retrieval from various clinical information systems and the de-identification of the data by an honest broker. Based upon these requirements, DBMI, with its collaborators, has developed both Oracle-based organ-specific data marts and a more generic, model-driven architecture for biorepositories. The organ-specific models are developed utilizing Oracle 9.2.0.1 server tools and software applications and the model-driven architecture is implemented in a J2EE framework. The organ-specific biorepositories implemented by DBMI include the Cooperative Prostate Cancer Tissue Resource (http://www.cpctr.info/), Pennsylvania Cancer Alliance Bioinformatics Consortium (http://pcabc.upmc.edu/main.cfm), EDRN Colorectal and Pancreatic Neoplasm Database (http://edrn.nci.nih.gov/) and Specialized Programs of Research Excellence (SPORE) Head and Neck Neoplasm Database (http://spores.nci.nih.gov/current/hn/index.htm). The model-based architecture is represented by the National Mesothelioma Virtual Bank (http://mesotissue.org/). These biorepositories provide thousands of well annotated biospecimens for the researchers that are searchable through query interfaces available via the Internet. These systems, developed and supported by our institute, serve to form a common platform for cancer research to accelerate progress in clinical and translational research. In addition, they provide a tangible infrastructure and resource for exposing research resources and biospecimen services in collaboration with the clinical anatomic pathology laboratory information system (APLIS) and the cancer registry information systems.

  4. Comparative Advantages in Microelectronics

    DTIC Science & Technology

    The initial point of departure for analyzing comparative advantages in microelectronics is to make certain explicit assumptions. First, technology...changes conditions but does not determine comparative advantages. Secondly, the entire industrial infrastructure is becoming increasingly abstract...that informatics will profoundly affect the productive infrastructure and the international division of labour.

  5. Characteristics of Local Health Departments Associated with Implementation of Electronic Health Records and Other Informatics Systems.

    PubMed

    Shah, Gulzar H; Leider, Jonathon P; Castrucci, Brian C; Williams, Karmen S; Luo, Huabin

    2016-01-01

    Assessing local health departments' (LHDs') informatics capacities is important, especially within the context of broader, systems-level health reform. We assessed a nationally representative sample of LHDs' adoption of information systems and the factors associated with adoption and implementation by examining electronic health records, health information exchange, immunization registry, electronic disease reporting system, and electronic laboratory reporting. We used data from the National Association of County and City Health Officials' 2013 National Profile of LHDs. We performed descriptive statistics and multinomial logistic regression for the five implementation-oriented outcome variables of interest, with three levels of implementation (implemented, plan to implement, and no activity). Independent variables included infrastructural and financial capacity and other characteristics associated with informatics capacity. Of 505 LHDs that responded to the survey, 69 (13.5%) had implemented health information exchanges, 122 (22.2%) had implemented electronic health records, 245 (47.5%) had implemented electronic laboratory reporting, 368 (73.0%) had implemented an electronic disease reporting system, and 416 (83.8%) had implemented an immunization registry. LHD characteristics associated with health informatics adoption included provision of a greater number of clinical services, greater per capita public health expenditures, health information systems specialists on staff, larger population size, decentralized governance system, one or more local boards of health, metropolitan jurisdiction, and a top executive with more years in the job. Many LHDs lack health informatics capacity, particularly in smaller, rural jurisdictions. Cross-jurisdictional sharing, investment in public health informatics infrastructure, and additional training may help address these shortfalls.
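
    As a concrete illustration of the analysis described (five outcomes, each with three implementation levels, regressed on LHD characteristics), a minimal multinomial logistic regression might look like the sketch below. The data here are randomly generated stand-ins; the NACCHO Profile variables are not reproduced.

        # Hedged sketch: multinomial logit over three implementation levels.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        # Invented covariates standing in for capacity measures such as per-capita
        # expenditure, informatics staff, and population size.
        X = sm.add_constant(rng.normal(size=(505, 3)))
        y = rng.integers(0, 3, size=505)  # 0=no activity, 1=plan to implement, 2=implemented

        result = sm.MNLogit(y, X).fit(disp=False)
        print(np.exp(result.params))  # odds ratios relative to the baseline level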

  6. An informatics research agenda to support precision medicine: seven key areas.

    PubMed

    Tenenbaum, Jessica D; Avillach, Paul; Benham-Hutchins, Marge; Breitenstein, Matthew K; Crowgey, Erin L; Hoffman, Mark A; Jiang, Xia; Madhavan, Subha; Mattison, John E; Nagarajan, Radhakrishnan; Ray, Bisakha; Shin, Dmitriy; Visweswaran, Shyam; Zhao, Zhongming; Freimuth, Robert R

    2016-07-01

    The recent announcement of the Precision Medicine Initiative by President Obama has brought precision medicine (PM) to the forefront for healthcare providers, researchers, regulators, innovators, and funders alike. As technologies continue to evolve and datasets grow in magnitude, a strong computational infrastructure will be essential to realize PM's vision of improved healthcare derived from personal data. In addition, informatics research and innovation affords a tremendous opportunity to drive the science underlying PM. The informatics community must lead the development of technologies and methodologies that will increase the discovery and application of biomedical knowledge through close collaboration between researchers, clinicians, and patients. This perspective highlights seven key areas that are in need of further informatics research and innovation to support the realization of PM. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.

  7. Informatics Infrastructure for the Materials Genome Initiative

    NASA Astrophysics Data System (ADS)

    Dima, Alden; Bhaskarla, Sunil; Becker, Chandler; Brady, Mary; Campbell, Carelyn; Dessauw, Philippe; Hanisch, Robert; Kattner, Ursula; Kroenlein, Kenneth; Newrock, Marcus; Peskin, Adele; Plante, Raymond; Li, Sheng-Yen; Rigodiat, Pierre-François; Amaral, Guillaume Sousa; Trautt, Zachary; Schmitt, Xavier; Warren, James; Youssef, Sharief

    2016-08-01

    A materials data infrastructure that enables the sharing and transformation of a wide range of materials data is an essential part of achieving the goals of the Materials Genome Initiative. We describe two high-level requirements of such an infrastructure as well as an emerging open-source implementation consisting of the Materials Data Curation System and the National Institute of Standards and Technology Materials Resource Registry.

  8. Mouse Genome Informatics (MGI): Resources for Mining Mouse Genetic, Genomic, and Biological Data in Support of Primary and Translational Research.

    PubMed

    Eppig, Janan T; Smith, Cynthia L; Blake, Judith A; Ringwald, Martin; Kadin, James A; Richardson, Joel E; Bult, Carol J

    2017-01-01

    The Mouse Genome Informatics (MGI) resource (www.informatics.jax.org) has existed for over 25 years, and over this time its data content, informatics infrastructure, and user interfaces and tools have undergone dramatic changes (Eppig et al., Mamm Genome 26:272-284, 2015). Change has been driven by scientific methodological advances, rapid improvements in computational software, growth in computer hardware capacity, and the ongoing collaborative nature of the mouse genomics community in building resources and sharing data. Here we present an overview of the current data content of MGI, describe its general organization, and provide examples using simple and complex searches, and tools for mining and retrieving sets of data.

  9. The state of medical informatics in India: a roadmap for optimal organization.

    PubMed

    Sarbadhikari, Suptendra Nath

    2005-04-01

    In India, healthcare delivery systems are based on manual record keeping despite a good telecommunication infrastructure. Unfortunately, Indian policy makers have yet to realize the importance of medical informatics (including tele-health, which comprises e-Health and telemedicine) in delivering healthcare. Nowhere in the medical curriculum is it treated as a subject, or even as a tool for learning. The final aim of most medical and paramedical students should be to become good users and, if possible, also experts who advance the medical knowledge base through medical informatics. In view of the fast-changing world of medical informatics, it is essential to formulate a flexible syllabus rather than a rigid one for incorporation into the regular curriculum of medical and paramedical education. Only then can all members of the healthcare delivery systems be expected to adopt and apply medical informatics optimally as a routine tool in their services.

  10. Transforming Clinical Imaging Data for Virtual Reality Learning Objects

    ERIC Educational Resources Information Center

    Trelease, Robert B.; Rosset, Antoine

    2008-01-01

    Advances in anatomical informatics, three-dimensional (3D) modeling, and virtual reality (VR) methods have made computer-based structural visualization a practical tool for education. In this article, the authors describe streamlined methods for producing VR "learning objects," standardized interactive software modules for anatomical sciences…

  11. Infusing informatics into interprofessional education: the iTEAM (Interprofessional Technology Enhanced Advanced practice Model) project.

    PubMed

    Skiba, Diane J; Barton, Amy J; Knapfel, Sarah; Moore, Gina; Trinkley, Katy

    2014-01-01

    The iTEAM goal is to prepare advanced practice nurses, physicians and pharmacists with the interprofessional (IP) core competencies (informatics, patient-centric, quality-focused, evidence-based care) to provide technology-enhanced collaborative care by offering technology-enhanced learning opportunities through a required informatics course, advanced practice courses (team-based experiences with both standardized and virtual patients) and team-based clinical experiences including e-health experiences. The innovative features of the iTEAM project will be achieved through use of social media strategies, a web-accessible Electronic Health Records (EHRs) system, a Virtual Clinic/Hospital in Second Life, and various e-health applications including traditional telehealth tools and consumer-oriented tools such as patient portals, social media consumer groups and mobile health (m-health) applications for health and wellness functions. It builds upon the schools' rich history of IP education and includes clinical partners, such as the VA and other clinical sites focused on care for underserved patient populations.

  12. How can we improve Science, Technology, Engineering, and Math education to encourage careers in Biomedical and Pathology Informatics?

    PubMed

    Uppal, Rahul; Mandava, Gunasheil; Romagnoli, Katrina M; King, Andrew J; Draper, Amie J; Handen, Adam L; Fisher, Arielle M; Becich, Michael J; Dutta-Moscato, Joyeeta

    2016-01-01

    The Computer Science, Biology, and Biomedical Informatics (CoSBBI) program was initiated in 2011 to expose the critical role of informatics in biomedicine to talented high school students.[1] By involving them in Science, Technology, Engineering, and Math (STEM) training at the high school level and providing mentorship and research opportunities throughout the formative years of their education, CoSBBI creates a research infrastructure designed to develop young informaticians. Our central premise is that the trajectory necessary to be an expert in the emerging fields of biomedical informatics and pathology informatics requires accelerated learning at an early age. In our 4th year of CoSBBI as a part of the University of Pittsburgh Cancer Institute (UPCI) Academy (http://www.upci.upmc.edu/summeracademy/), and our 2nd year of CoSBBI as an independent informatics-based academy, we enhanced our classroom curriculum, added hands-on computer science instruction, and expanded research projects to include clinical informatics. We also conducted a qualitative evaluation of the program to identify areas that need improvement in order to achieve our goal of creating a pipeline of exceptionally well-trained applicants for both the disciplines of pathology informatics and biomedical informatics in the era of big data and personalized medicine.

  13. Informatics: essential infrastructure for quality assessment and improvement in nursing.

    PubMed Central

    Henry, S B

    1995-01-01

    In recent decades there have been major advances in the creation and implementation of information technologies and in the development of measures of health care quality. The premise of this article is that informatics provides essential infrastructure for quality assessment and improvement in nursing. In this context, the term quality assessment and improvement comprises both short-term processes such as continuous quality improvement (CQI) and long-term outcomes management. This premise is supported by 1) presentation of a historical perspective on quality assessment and improvement; 2) delineation of the types of data required for quality assessment and improvement; and 3) description of the current and potential uses of information technology in the acquisition, storage, transformation, and presentation of quality data, information, and knowledge. PMID:7614118

  14. An informatics research agenda to support precision medicine: seven key areas

    PubMed Central

    Avillach, Paul; Benham-Hutchins, Marge; Breitenstein, Matthew K; Crowgey, Erin L; Hoffman, Mark A; Jiang, Xia; Madhavan, Subha; Mattison, John E; Nagarajan, Radhakrishnan; Ray, Bisakha; Shin, Dmitriy; Visweswaran, Shyam; Zhao, Zhongming; Freimuth, Robert R

    2016-01-01

    The recent announcement of the Precision Medicine Initiative by President Obama has brought precision medicine (PM) to the forefront for healthcare providers, researchers, regulators, innovators, and funders alike. As technologies continue to evolve and datasets grow in magnitude, a strong computational infrastructure will be essential to realize PM’s vision of improved healthcare derived from personal data. In addition, informatics research and innovation affords a tremendous opportunity to drive the science underlying PM. The informatics community must lead the development of technologies and methodologies that will increase the discovery and application of biomedical knowledge through close collaboration between researchers, clinicians, and patients. This perspective highlights seven key areas that are in need of further informatics research and innovation to support the realization of PM. PMID:27107452

  15. A decadal view of biodiversity informatics: challenges and priorities

    PubMed Central

    2013-01-01

    Biodiversity informatics plays a central enabling role in the research community's efforts to address scientific conservation and sustainability issues. Great strides have been made in the past decade establishing a framework for sharing data, where taxonomy and systematics has been perceived as the most prominent discipline involved. To some extent this is inevitable, given the use of species names as the pivot around which information is organised. To address the urgent questions around conservation, land-use, environmental change, sustainability, food security and ecosystem services that are facing Governments worldwide, we need to understand how the ecosystem works. So, we need a systems approach to understanding biodiversity that moves significantly beyond taxonomy and species observations. Such an approach needs to look at the whole system to address species interactions, both with their environment and with other species. It is clear that some barriers to progress are sociological, basically persuading people to use the technological solutions that are already available. This is best addressed by developing more effective systems that deliver immediate benefit to the user, hiding the majority of the technology behind simple user interfaces. An infrastructure should be a space in which activities take place and, as such, should be effectively invisible. This community consultation paper positions the role of biodiversity informatics, for the next decade, presenting the actions needed to link the various biodiversity infrastructures invisibly and to facilitate understanding that can support both business and policy-makers. The community considers the goal in biodiversity informatics to be full integration of the biodiversity research community, including citizens’ science, through a commonly-shared, sustainable e-infrastructure across all sub-disciplines that reliably serves science and society alike. PMID:23587026

  16. A decadal view of biodiversity informatics: challenges and priorities.

    PubMed

    Hardisty, Alex; Roberts, Dave; Addink, Wouter; Aelterman, Bart; Agosti, Donat; Amaral-Zettler, Linda; Ariño, Arturo H; Arvanitidis, Christos; Backeljau, Thierry; Bailly, Nicolas; Belbin, Lee; Berendsohn, Walter; Bertrand, Nic; Caithness, Neil; Campbell, David; Cochrane, Guy; Conruyt, Noël; Culham, Alastair; Damgaard, Christian; Davies, Neil; Fady, Bruno; Faulwetter, Sarah; Feest, Alan; Field, Dawn; Garnier, Eric; Geser, Guntram; Gilbert, Jack; Grosche; Grosser, David; Hardisty, Alex; Herbinet, Bénédicte; Hobern, Donald; Jones, Andrew; de Jong, Yde; King, David; Knapp, Sandra; Koivula, Hanna; Los, Wouter; Meyer, Chris; Morris, Robert A; Morrison, Norman; Morse, David; Obst, Matthias; Pafilis, Evagelos; Page, Larry M; Page, Roderic; Pape, Thomas; Parr, Cynthia; Paton, Alan; Patterson, David; Paymal, Elisabeth; Penev, Lyubomir; Pollet, Marc; Pyle, Richard; von Raab-Straube, Eckhard; Robert, Vincent; Roberts, Dave; Robertson, Tim; Rovellotti, Olivier; Saarenmaa, Hannu; Schalk, Peter; Schaminee, Joop; Schofield, Paul; Sier, Andy; Sierra, Soraya; Smith, Vince; van Spronsen, Edwin; Thornton-Wood, Simon; van Tienderen, Peter; van Tol, Jan; Tuama, Éamonn Ó; Uetz, Peter; Vaas, Lea; Vignes Lebbe, Régine; Vision, Todd; Vu, Duong; De Wever, Aaike; White, Richard; Willis, Kathy; Young, Fiona

    2013-04-15

    Biodiversity informatics plays a central enabling role in the research community's efforts to address scientific conservation and sustainability issues. Great strides have been made in the past decade establishing a framework for sharing data, where taxonomy and systematics has been perceived as the most prominent discipline involved. To some extent this is inevitable, given the use of species names as the pivot around which information is organised. To address the urgent questions around conservation, land-use, environmental change, sustainability, food security and ecosystem services that are facing Governments worldwide, we need to understand how the ecosystem works. So, we need a systems approach to understanding biodiversity that moves significantly beyond taxonomy and species observations. Such an approach needs to look at the whole system to address species interactions, both with their environment and with other species. It is clear that some barriers to progress are sociological, basically persuading people to use the technological solutions that are already available. This is best addressed by developing more effective systems that deliver immediate benefit to the user, hiding the majority of the technology behind simple user interfaces. An infrastructure should be a space in which activities take place and, as such, should be effectively invisible. This community consultation paper positions the role of biodiversity informatics, for the next decade, presenting the actions needed to link the various biodiversity infrastructures invisibly and to facilitate understanding that can support both business and policy-makers. The community considers the goal in biodiversity informatics to be full integration of the biodiversity research community, including citizens' science, through a commonly-shared, sustainable e-infrastructure across all sub-disciplines that reliably serves science and society alike.

  17. Design of e-Science platform for biomedical imaging research cross multiple academic institutions and hospitals

    NASA Astrophysics Data System (ADS)

    Zhang, Jianguo; Zhang, Kai; Yang, Yuanyuan; Ling, Tonghui; Wang, Tusheng; Wang, Mingqing; Hu, Haibo; Xu, Xuemin

    2012-02-01

    More and more imaging informatics researchers and engineers are considering re-constructing their imaging and informatics infrastructure, or building new frameworks, to enable multiple disciplines of medical researchers, clinical physicians and biomedical engineers to work together in a secured, efficient, and transparent cooperative environment. In this presentation, we outline our preliminary design work on building an e-Science platform for biomedical imaging and informatics research and application in Shanghai. We present our considerations and strategy in designing this platform, along with preliminary results, and discuss some challenges and solutions in building it.

  18. Collaborative Multi-Scale 3d City and Infrastructure Modeling and Simulation

    NASA Astrophysics Data System (ADS)

    Breunig, M.; Borrmann, A.; Rank, E.; Hinz, S.; Kolbe, T.; Schilcher, M.; Mundani, R.-P.; Jubierre, J. R.; Flurl, M.; Thomsen, A.; Donaubauer, A.; Ji, Y.; Urban, S.; Laun, S.; Vilgertshofer, S.; Willenborg, B.; Menninghaus, M.; Steuer, H.; Wursthorn, S.; Leitloff, J.; Al-Doori, M.; Mazroobsemnani, N.

    2017-09-01

    Computer-aided collaborative and multi-scale 3D planning are challenges for complex railway and subway track infrastructure projects in the built environment. Many legal, economic, environmental, and structural requirements have to be taken into account. The stringent use of 3D models in the different phases of the planning process facilitates communication and collaboration between stakeholders such as civil engineers, geological engineers, and decision makers. This paper presents concepts, developments, and experiences gained by an interdisciplinary research group from civil engineering informatics and geo-informatics, combining the skills of both the Building Information Modeling and the 3D GIS worlds. New approaches, including the development of a collaborative platform and 3D multi-scale modelling, are proposed for collaborative planning and simulation to improve the digital 3D planning of subway tracks and other infrastructures. Experiences during this research and lessons learned are presented, as well as an outlook on future research focusing on Building Information Modeling and 3D GIS applications for cities of the future.

  19. Capacity building in e-health and health informatics: a review of the global vision and informatics educational initiatives of the American Medical Informatics Association.

    PubMed

    Detmer, D E

    2010-01-01

    Substantial global and national commitment will be required for current healthcare systems and health professional practices to become learning care systems utilizing information and communications technology (ICT) empowered by informatics. To engage this multifaceted challenge, a vision is required that shifts the emphasis from silos of activities toward integrated systems. Successful systems will include a set of essential elements, e.g., a sufficient ICT infrastructure, evolving health care processes based on evidence and harmonized to local cultures, a fresh view toward educational preparation, sound and sustained policy support, and ongoing applied research and development. Increasingly, leaders are aware that ICT empowered by informatics must be an integral part of their national and regional visions. This paper sketches out the elements of what is needed in terms of objectives and some steps toward achieving them. It summarizes some of the progress that has been made to date by the American and International Medical Informatics Associations working separately as well as collaborating to conceptualize informatics capacity building in order to bring this vision to reality in low resource nations in particular.

  20. A primer on precision medicine informatics.

    PubMed

    Sboner, Andrea; Elemento, Olivier

    2016-01-01

    In this review, we describe key components of a computational infrastructure for a precision medicine program that is based on clinical-grade genomic sequencing. Specific aspects covered in this review include software components and hardware infrastructure, reporting, integration into Electronic Health Records for routine clinical use and regulatory aspects. We emphasize informatics components related to reproducibility and reliability in genomic testing, regulatory compliance, traceability and documentation of processes, integration into clinical workflows, privacy requirements, prioritization and interpretation of results to report based on clinical needs, rapidly evolving knowledge base of genomic alterations and clinical treatments and return of results in a timely and predictable fashion. We also seek to differentiate between the use of precision medicine in germline and cancer. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  1. Audacious goals for health and biomedical informatics in the new millennium.

    PubMed

    Greenes, R A; Lorenzi, N M

    1998-01-01

    The 1998 Scientific Symposium of the American College of Medical Informatics (ACMI) was devoted to developing visions for the future of health care and biomedicine and a strategic agenda for health and biomedical informatics in support of those visions. This symposium focus was prompted by the many major changes currently underway in health care delivery, education, and research, as well as in our health and biomedical enterprises, and by the constantly increasing role of information technology in both shaping and enabling these changes. The three audacious goals developed for 2008 are a virtual health care databank, a national health care knowledge base, and a personal clinical health record.

  2. The pathology informatics curriculum wiki: Harnessing the power of user-generated content.

    PubMed

    Kim, Ji Yeon; Gudewicz, Thomas M; Dighe, Anand S; Gilbertson, John R

    2010-07-13

    The need for informatics training as part of pathology training has never been so critical, but pathology informatics is a wide and complex field and very few programs currently have the resources to provide comprehensive educational pathology informatics experiences to their residents. In this article, we present the "pathology informatics curriculum wiki", an open, on-line wiki that indexes the pathology informatics content in a larger public wiki, Wikipedia, (and other online content) and organizes it into educational modules based on the 2003 standard curriculum approved by the Association for Pathology Informatics (API). In addition to implementing the curriculum wiki at http://pathinformatics.wikispaces.com, we have evaluated pathology informatics content in Wikipedia. Of the 199 non-duplicate terms in the API curriculum, 90% have at least one associated Wikipedia article. Furthermore, evaluation of articles on a five-point Likert scale showed high scores for comprehensiveness (4.05), quality (4.08), currency (4.18), and utility for the beginner (3.85) and advanced (3.93) learners. These results are compelling and support the thesis that Wikipedia articles can be used as the foundation for a basic curriculum in pathology informatics. The pathology informatics community now has the infrastructure needed to collaboratively and openly create, maintain and distribute the pathology informatics content worldwide (Wikipedia) and also the environment (the curriculum wiki) to draw upon its own resources to index and organize this content as a sustainable basic pathology informatics educational resource. The remaining challenges are numerous, but the largest by far will be to convince pathologists to take the time and effort required to build pathology informatics content in Wikipedia and to index and organize this content for education in the curriculum wiki.
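
    The coverage check reported above (how many curriculum terms have a Wikipedia article) is straightforward to approximate. The Python sketch below queries the public MediaWiki API; the term list is a small placeholder, not the actual 199-term API curriculum.

        # Approximate check of Wikipedia coverage for curriculum terms.
        import requests

        TERMS = ["Laboratory information system", "DICOM", "Health Level 7"]  # placeholders

        def has_article(term: str) -> bool:
            resp = requests.get(
                "https://en.wikipedia.org/w/api.php",
                params={"action": "query", "titles": term, "format": "json", "redirects": 1},
                timeout=10,
            )
            pages = resp.json()["query"]["pages"]
            return not any(page_id.startswith("-") for page_id in pages)  # "-1" means missing

        covered = [t for t in TERMS if has_article(t)]
        print(f"{len(covered)}/{len(TERMS)} terms have at least one associated article")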

  3. The pathology informatics curriculum wiki: Harnessing the power of user-generated content

    PubMed Central

    Kim, Ji Yeon; Gudewicz, Thomas M.; Dighe, Anand S.; Gilbertson, John R.

    2010-01-01

    Background: The need for informatics training as part of pathology training has never been so critical, but pathology informatics is a wide and complex field and very few programs currently have the resources to provide comprehensive educational pathology informatics experiences to their residents. In this article, we present the “pathology informatics curriculum wiki”, an open, on-line wiki that indexes the pathology informatics content in a larger public wiki, Wikipedia, (and other online content) and organizes it into educational modules based on the 2003 standard curriculum approved by the Association for Pathology Informatics (API). Methods and Results: In addition to implementing the curriculum wiki at http://pathinformatics.wikispaces.com, we have evaluated pathology informatics content in Wikipedia. Of the 199 non-duplicate terms in the API curriculum, 90% have at least one associated Wikipedia article. Furthermore, evaluation of articles on a five-point Likert scale showed high scores for comprehensiveness (4.05), quality (4.08), currency (4.18), and utility for the beginner (3.85) and advanced (3.93) learners. These results are compelling and support the thesis that Wikipedia articles can be used as the foundation for a basic curriculum in pathology informatics. Conclusions: The pathology informatics community now has the infrastructure needed to collaboratively and openly create, maintain and distribute the pathology informatics content worldwide (Wikipedia) and also the environment (the curriculum wiki) to draw upon its own resources to index and organize this content as a sustainable basic pathology informatics educational resource. The remaining challenges are numerous, but the largest by far will be to convince pathologists to take the time and effort required to build pathology informatics content in Wikipedia and to index and organize this content for education in the curriculum wiki. PMID:20805963

  4. Crossing Borders: An Online Interdisciplinary Course in Health Informatics for Students From Two Countries.

    PubMed

    Fossum, Mariann; Fruhling, Ann; Moe, Carl Erik; Thompson, Cheryl Bagley

    2017-04-01

    A novel cross-country and interprofessional approach for delivering an international interdisciplinary graduate health informatics course online is presented. Included in this discussion are the challenges, lessons learned, and pedagogical recommendations from the experiences of teaching the course. Four professors from three different fields and from three universities collaborated in offering an international health informatics course for an interdisciplinary group of 18 US and seven Norwegian students. Highly motivated students and professors, an online technology infrastructure that supported asynchronous communication and course delivery, the ability to adapt the curriculum to meet the pedagogy requirements at all universities, and the support of higher administration for international collaboration were enablers for success. This project demonstrated the feasibility and advantages of an interdisciplinary, interprofessional, and cross-country approach to teaching health informatics online. Students were able to establish relationships and conduct professional conversations across disciplines and international boundaries using content management software. This graduate course can be used as a part of informatics, computer science, and/or health science programs.

  5. Supporting the Emergence of Dental Informatics with an Online Community

    PubMed Central

    Spallek, H.; Irwin, J. Y.; Schleyer, T.; Butler, B. S.; Weiss, P. M.

    2008-01-01

    Dental Informatics (DI) is the application of computer and information science to improve dental practice, research, education, and program administration. As an emerging field, dental informatics faces many challenges and barriers to establishing itself as a full-fledged discipline; these include the small number of geographically dispersed DI researchers as well as the lack of DI professional societies and DI-specific journals. E-communities have the potential to overcome these obstacles by bringing researchers together at a resource hub and giving them the ability to share information, discuss topics, and find collaborators. In this paper, we discuss our assessment of the information needs of individuals interested in DI and discuss their expectations for an e-community so that we can design an optimal electronic infrastructure for the Dental Informatics Online Community (DIOC). The 256 survey respondents indicated they prefer electronic resources over traditional print material to satisfy their information needs. The most frequently expected benefits from participation in the DIOC were general information (85% of respondents), peer networking (31.1%), and identification of potential collaborators and/or research opportunities (23.2%). We are currently building the DIOC electronic infrastructure: a searchable publication archive and the learning center have been created, and the people directory is underway. Readers are encouraged to access the DIOC Website at www.dentalinformatics.com and initiate a discussion with the authors of this paper. PMID:18271498

  6. Linking earth science informatics resources into uninterrupted digital value chains

    NASA Astrophysics Data System (ADS)

    Woodcock, Robert; Angreani, Rini; Cox, Simon; Fraser, Ryan; Golodoniuc, Pavel; Klump, Jens; Rankine, Terry; Robertson, Jess; Vote, Josh

    2015-04-01

    The CSIRO Mineral Resources Flagship was established to tackle medium- to long-term challenges facing the Australian mineral industry across the value chain from exploration and mining through mineral processing within the framework of an economically, environmentally and socially sustainable minerals industry. This broad portfolio demands collaboration and data exchange with a broad range of participants and data providers across government, research and industry. It is an ideal environment to link geoscience informatics platforms to applications across the resource extraction industry and to unlock the value of data integration between traditionally discrete parts of the minerals digital value chain. Despite the potential benefits, data integration remains an elusive goal within research and industry. Many projects use only a subset of available data types in an integrated manner, often maintaining the traditional discipline-based data 'silos'. Integrating data across the entire minerals digital value chain is an expensive proposition involving multiple disciplines and, significantly, multiple data sources both internal and external to any single organisation. Differing vocabularies and data formats, along with access regimes to appropriate analysis software and equipment, all hamper the sharing and exchange of information. AuScope has addressed the challenge of data exchange across organisations nationally, and established a national geosciences information infrastructure using open standards-based web services. Federated across a wide variety of organisations, the resulting infrastructure contains a wide variety of live and updated data types. The community data standards and infrastructure platforms that underpin AuScope provide important new datasets and multi-agency links independent of software and hardware differences. AuScope has thus created an infrastructure, a platform of technologies and the opportunity for new ways of working with and integrating disparate data at much lower cost. An early example of this approach is the value generated by combining geological and metallurgical data sets as part of the rapidly growing field of geometallurgy. This not only provides a far better understanding of the impact of geological variability on ore processing but also leads to new thinking on the types and characteristics of data sets collected at various stages of the exploration and mining process. The Mineral Resources Flagship is linking its research activities to the AuScope infrastructure, exploiting the technology internally to create a platform for integrated research across the minerals value chain and improved interaction with industry. Referred to as the 'Early Access Virtual Lab', the system will be fully interoperable with AuScope and international infrastructures using open standards like GeosciML. Secured access is provided to allow confidential collaboration with industry when required. This presentation will discuss how the CSIRO Mineral Resources Flagship is building on the AuScope infrastructure to transform the way that data and data products are identified, shared, integrated, and reused, to unlock the benefits of true integration of research efforts across the minerals digital value chain.
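
    For readers unfamiliar with the open standards mentioned, the sketch below shows how a client might retrieve GeosciML features from a standards-based OGC Web Feature Service using the OWSLib library. The endpoint URL, type name, and bounding box are illustrative assumptions, not a specific AuScope service.

        # Hedged sketch: pulling GeosciML features from an OGC WFS with OWSLib.
        from owslib.wfs import WebFeatureService

        # Illustrative endpoint; real AuScope services publish their own URLs.
        wfs = WebFeatureService(url="https://example.org/geoserver/wfs", version="1.1.0")
        print(list(wfs.contents))  # feature types advertised by the service

        # Fetch mapped geological features within a bounding box.
        response = wfs.getfeature(typename="gsml:MappedFeature",
                                  bbox=(115.0, -35.0, 120.0, -30.0))
        gml = response.read()  # GML/XML payload for downstream parsing
        print(len(gml), "bytes of GML returned")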

  7. Online molecular image repository and analysis system: A multicenter collaborative open-source infrastructure for molecular imaging research and application.

    PubMed

    Rahman, Mahabubur; Watabe, Hiroshi

    2018-05-01

    Molecular imaging serves as an important tool for researchers and clinicians to visualize and investigate complex biochemical phenomena using specialized instruments; these instruments are either used individually or in combination with targeted imaging agents to obtain images related to specific diseases with high sensitivity, specificity, and signal-to-noise ratios. However, molecular imaging, which is a multidisciplinary research field, faces several challenges, including the integration of imaging informatics with bioinformatics and medical informatics, the requirement of reliable and robust image analysis algorithms, effective quality control of imaging facilities, and those related to individualized disease mapping, data sharing, software architecture, and knowledge management. As a cost-effective and open-source approach to address these challenges related to molecular imaging, we develop a flexible, transparent, and secure infrastructure, named MIRA, which stands for Molecular Imaging Repository and Analysis, primarily using the Python programming language, and a MySQL relational database system deployed on a Linux server. MIRA is designed with a centralized image archiving infrastructure and information database so that a multicenter collaborative informatics platform can be built. The capability of dealing with metadata, normalizing image file formats, and storing and viewing different types of documents and multimedia files makes MIRA considerably flexible. With features like logging, auditing, commenting, sharing, and searching, MIRA is useful as an Electronic Laboratory Notebook for effective knowledge management. In addition, the centralized approach facilitates on-the-fly access to all of MIRA's features remotely through any web browser. Furthermore, the open-source approach provides the opportunity for sustainable continued development. MIRA offers an infrastructure that can be used as a cross-boundary collaborative MI research platform for rapid advances in cancer diagnosis and therapeutics. Copyright © 2018 Elsevier Ltd. All rights reserved.
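
    As a hedged illustration of the kind of centralized archive MIRA describes (a Python front end over a relational database), a minimal SQLAlchemy model for one image record might look like the following. Table and column names are invented for the sketch; this is not the actual MIRA schema.

        # Minimal sketch of an image-record table such as a MIRA-like archive might use.
        # Names are invented; this is not the actual MIRA schema.
        import datetime
        from sqlalchemy import Column, DateTime, Integer, String, create_engine
        from sqlalchemy.orm import declarative_base, sessionmaker

        Base = declarative_base()

        class ImageRecord(Base):
            __tablename__ = "image_records"
            id = Column(Integer, primary_key=True)
            study_id = Column(String(64), index=True)   # links images from one study
            modality = Column(String(16))                # e.g. PET, SPECT, MRI
            file_path = Column(String(255))              # normalized file location
            file_format = Column(String(16))             # e.g. DICOM, NIfTI
            uploaded_at = Column(DateTime, default=datetime.datetime.utcnow)

        # MySQL in production (as the paper describes); SQLite here keeps the demo self-contained.
        engine = create_engine("sqlite:///:memory:")
        Base.metadata.create_all(engine)
        session = sessionmaker(bind=engine)()
        session.add(ImageRecord(study_id="S-001", modality="PET",
                                file_path="/archive/S-001/img1.dcm", file_format="DICOM"))
        session.commit()
        print(session.query(ImageRecord).count())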

  8. Software Engineering Infrastructure in a Large Virtual Campus

    ERIC Educational Resources Information Center

    Cristobal, Jesus; Merino, Jorge; Navarro, Antonio; Peralta, Miguel; Roldan, Yolanda; Silveira, Rosa Maria

    2011-01-01

    Purpose: The design, construction and deployment of a large virtual campus are a complex issue. Present virtual campuses are made of several software applications that complement e-learning platforms. In order to develop and maintain such virtual campuses, a complex software engineering infrastructure is needed. This paper aims to analyse the…

  9. The Virtual Desktop: Options and Challenges in Selecting a Secure Desktop Infrastructure Based on Virtualization

    DTIC Science & Technology

    2011-10-01

    Fortunately, some products offer centralized management and deployment tools for local desktop implementation. Figure 5 illustrates the... implementation of a secure desktop infrastructure based on virtualization. It includes an overview of desktop virtualization, including an in-depth...environment in the data centre, whereas LHVD places it on the endpoint itself. Desktop virtualization implementation considerations and potential

  10. Foundational biomedical informatics research in the clinical and translational science era: a call to action.

    PubMed

    Payne, Philip R O; Embi, Peter J; Niland, Joyce

    2010-01-01

    Advances in clinical and translational science, along with related national-scale policy and funding mechanisms, have provided significant opportunities for the advancement of applied clinical research informatics (CRI) and translational bioinformatics (TBI). Such efforts are primarily oriented to application and infrastructure development and are critical to the conduct of clinical and translational research. However, they often come at the expense of the foundational CRI and TBI research needed to grow these important biomedical informatics subdisciplines and ensure future innovations. In light of this challenge, it is critical that a number of steps be taken, including the conduct of targeted advocacy campaigns, the development of community-accepted research agendas, and the continued creation of forums for collaboration and knowledge exchange. Such efforts are needed to ensure that the biomedical informatics community is able to advance CRI and TBI science in the context of the modern clinical and translational science era.

  11. A training network for introducing telemedicine, telecare and hospital informatics in the Adriatic-Danube-Black Sea region.

    PubMed

    Anogeianaki, Antonia; Ilonidis, George; Anogianakis, George; Lianguris, John; Katsaros, Kyriakos; Pseftogianni, Dimitra; Klisarova, Anelia; Negrev, Negrin

    2004-01-01

    DIMNET is a training mechanism for a region of central Europe. The aim is to upgrade the information technology skills of local hospital personnel and preserve their employability following the introduction of medical informatics. DIMNET uses Internet-based virtual classrooms to provide a 200-hour training course in medical informatics. Training takes place in the cities of Drama, Kavala, Xanthi and Varna. So far, more than 600 people have benefited from the programme. Initial results are encouraging. DIMNET promotes a new vocational training culture in the Balkans and is supported by local governments that perceive health-care as a fulcrum for economic development.

  12. Using virtual machine monitors to overcome the challenges of monitoring and managing virtualized cloud infrastructures

    NASA Astrophysics Data System (ADS)

    Bamiah, Mervat Adib; Brohi, Sarfraz Nawaz; Chuprat, Suriayati

    2012-01-01

    Virtualization is one of the hottest research topics nowadays. Several academic researchers and developers from the IT industry are designing approaches for solving the security and manageability issues of Virtual Machines (VMs) residing on virtualized cloud infrastructures. Moving applications from a physical to a virtual platform increases efficiency and flexibility and reduces management cost as well as effort. Cloud computing has adopted the paradigm of virtualization; using this technique, memory, CPU and computational power are provided to clients' VMs by utilizing the underlying physical hardware. Besides these advantages, there are a few challenges in adopting virtualization, such as the management of VMs and network traffic, unexpected additional costs and resource allocation. The Virtual Machine Monitor (VMM), or hypervisor, is the tool used by cloud providers to manage the VMs on the cloud. There are several heterogeneous hypervisors provided by various vendors, including VMware, Hyper-V, Xen and Kernel Virtual Machine (KVM). Considering the challenge of VM management, this paper describes several techniques to monitor and manage virtualized cloud infrastructures.
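
    To make the hypervisor-side monitoring concrete, here is a hedged Python sketch using the libvirt bindings to enumerate VMs and read basic resource figures. It assumes a local KVM/QEMU host and the libvirt-python package, and is an illustration rather than any vendor's management tooling.

        # Hedged sketch: enumerating VMs and basic stats via libvirt (KVM/QEMU assumed).
        import libvirt

        conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
        try:
            for dom in conn.listAllDomains():  # defined VMs, active and inactive
                state, max_mem, mem, ncpu, cpu_time = dom.info()
                print(f"{dom.name():20s} state={state} vcpus={ncpu} "
                      f"mem={mem // 1024} MiB cpu_time={cpu_time / 1e9:.1f} s")
        finally:
            conn.close()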

  13. osni.info-Using free/libre/open source software to build a virtual international community for open source nursing informatics.

    PubMed

    Oyri, Karl; Murray, Peter J

    2005-12-01

    Many health informatics organizations seem to be slow to take up the advantages of dynamic, web-based technologies for providing services to, and interaction with, their members; these are often the very technologies they promote for use within healthcare environments. This paper aims to introduce some of the many free/libre/open source (FLOSS) applications that are now available for developing interactive websites and dynamic online communities as part of the structure of health informatics organizations, and to show how the Open Source Nursing Informatics Working Group (OSNI) of the special interest group in nursing informatics of the International Medical Informatics Association (IMIA-NI) is using some of these tools to develop an online community of nurse informaticians through their website at osni.info. Some background introduction to FLOSS applications is provided for the benefit of those less familiar with such tools, and examples of some of the FLOSS content management systems (CMS) being used by OSNI are described. The experiences of the OSNI will facilitate a knowledgeable nursing contribution to the wider discussions on the applications of FLOSS within health and healthcare, and provide a model that many other groups could adopt.

  14. The virtual machine (VM) scaler: an infrastructure manager supporting environmental modeling on IaaS clouds

    USDA-ARS's Scientific Manuscript database

    Infrastructure-as-a-service (IaaS) clouds provide a new medium for deployment of environmental modeling applications. Harnessing advancements in virtualization, IaaS clouds can provide dynamic scalable infrastructure to better support scientific modeling computational demands. Providing scientific m...

  15. Telemedicine: The Practice of Medicine at a Distance. Resources in Technology.

    ERIC Educational Resources Information Center

    Reed, Philip A.

    2003-01-01

    Reviews developments in telemedicine and a number of related areas (telecommunications, virtual presence, informatics, artificial intelligence, robotics, materials science, and perceptual psychology). Provides learning activities for technology education. (SK)

  16. Data Management Requirements for the Rapid Identification and Character of Unknown Genomic Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenzweig, Nicole

    2010-06-02

    Nicole Rosenzweig of OptiMetrics discusses the development of informatics infrastructure for studying bacterial pathogens on June 2, 2010 at the "Sequencing, Finishing, Analysis in the Future" meeting in Santa Fe, NM.

  17. The history of pathology informatics: A global perspective

    PubMed Central

    Park, Seung; Parwani, Anil V.; Aller, Raymond D.; Banach, Lech; Becich, Michael J.; Borkenfeld, Stephan; Carter, Alexis B.; Friedman, Bruce A.; Rojo, Marcial Garcia; Georgiou, Andrew; Kayser, Gian; Kayser, Klaus; Legg, Michael; Naugler, Christopher; Sawai, Takashi; Weiner, Hal; Winsten, Dennis; Pantanowitz, Liron

    2013-01-01

    Pathology informatics has evolved to varying levels around the world. The history of pathology informatics in different countries is a tale with many dimensions. At first glance, it is the familiar story of individuals solving problems that arise in their clinical practice to enhance efficiency, better manage (e.g., digitize) laboratory information, as well as exploit emerging information technologies. Under the surface, however, lie powerful resource, regulatory, and societal forces that helped shape our discipline into what it is today. In this monograph, for the first time in the history of our discipline, we collectively perform a global review of the field of pathology informatics. In doing so, we illustrate how general far-reaching trends such as the advent of computers, the Internet and digital imaging have affected pathology informatics in the world at large. Major drivers in the field included the need for pathologists to comply with national standards for health information technology and telepathology applications to meet the scarcity of pathology services and trained people in certain countries. Following trials by a multitude of investigators, not all of them successful, it is apparent that innovation alone did not assure the success of many informatics tools and solutions. Common, ongoing barriers to the widespread adoption of informatics devices include poor information technology infrastructure in undeveloped areas, the cost of technology, and regulatory issues. This review offers a deeper understanding of how pathology informatics historically developed and provides insights into what the promising future might hold. PMID:23869286

  18. Informatics for patient safety: a nursing research perspective.

    PubMed

    Bakken, Suzanne

    2006-01-01

    In Crossing the Quality Chasm, the Institute of Medicine (IOM) Committee on Quality of Health Care in America identified the critical role of information technology in designing a health system that produces care that is "safe, effective, patient-centered, timely, efficient, and equitable" (Committee on Quality of Health Care in America, 2001, p. 164). A subsequent IOM report contends that improved information systems are essential to a new health care delivery system that "both prevents errors and learns from them when they occur" (Committee on Data Standards for Patient Safety, 2004, p. 1). This review specifically highlights the role of informatics processes and information technology in promoting patient safety and summarizes relevant nursing research. First, the components of an informatics infrastructure for patient safety are described within the context of the national framework for delivering consumer-centric and information-rich health care and using the National Health Information Infrastructure (NHII) (Thompson & Brailer, 2004). Second, relevant nursing research is summarized; this includes research studies that contributed to the development of selected infrastructure components as well as studies specifically focused on patient safety. Third, knowledge gaps and opportunities for nursing research are identified for each main topic. The health information technologies deployed as part of the national framework must support nursing practice in a manner that enables prevention of medical errors and promotion of patient safety and contributes to the development of practice-based nursing knowledge as well as best practices for patient safety. The seminal work that has been completed to date is necessary, but not sufficient, to achieve this objective.

  19. Improving usability and accessibility of cheminformatics tools for chemists through cyberinfrastructure and education.

    PubMed

    Guha, Rajarshi; Wiggins, Gary D; Wild, David J; Baik, Mu-Hyun; Pierce And, Marlon E; Fox, Geoffrey C

    Some of the latest trends in cheminformatics, computation, and the world wide web are reviewed, with predictions of how these are likely to impact the field of cheminformatics in the next five years. The vision and some of the work of the Chemical Informatics and Cyberinfrastructure Collaboratory at Indiana University are described; we base this work around the core concepts of e-Science and cyberinfrastructure that have proven successful in other fields. Our chemical informatics cyberinfrastructure is realized by building a flexible, generic infrastructure for cheminformatics tools and databases, exporting "best of breed" methods as easily accessible web APIs for cheminformaticians, scientists, and researchers in other disciplines, and hosting a unique chemical informatics education program aimed at scientists and cheminformatics practitioners in academia and industry.
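
    The idea of exporting methods as easily accessible web APIs can be sketched in a few lines. The example below exposes a molecular-weight calculation over HTTP using Flask and RDKit; both libraries are illustrative choices for the sketch, not the Indiana University stack.

        # Hedged sketch of a cheminformatics web API: SMILES in, molecular weight out.
        # Flask and RDKit are illustrative choices, not the infrastructure described above.
        from flask import Flask, jsonify, request
        from rdkit import Chem
        from rdkit.Chem import Descriptors

        app = Flask(__name__)

        @app.route("/molweight")
        def molweight():
            smiles = request.args.get("smiles", "")
            mol = Chem.MolFromSmiles(smiles)
            if mol is None:
                return jsonify(error="could not parse SMILES"), 400
            return jsonify(smiles=smiles, molecular_weight=Descriptors.MolWt(mol))

        if __name__ == "__main__":
            app.run()  # e.g. GET /molweight?smiles=CCO returns roughly 46.07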

  20. Advanced e-Infrastructures for Civil Protection applications: the CYCLOPS Project

    NASA Astrophysics Data System (ADS)

    Mazzetti, P.; Nativi, S.; Verlato, M.; Ayral, P. A.; Fiorucci, P.; Pina, A.; Oliveira, J.; Sorani, R.

    2009-04-01

    During the full cycle of emergency management, Civil Protection operative procedures involve many actors belonging to several institutions (civil protection agencies, public administrations, research centers, etc.) playing different roles (decision-makers, data and service providers, emergency squads, etc.). In this context, the sharing of information is a vital requirement for making correct and effective decisions. Therefore a European-wide technological infrastructure providing distributed and coordinated access to different kinds of resources (data, information, services, expertise, etc.) could enhance existing Civil Protection applications and even enable new ones. Such a European Civil Protection e-Infrastructure should be designed taking into account the specific requirements of Civil Protection applications and the state of the art in the scientific and technological disciplines which could make emergency management more effective. In recent years, Grid technologies have reached a mature state, providing a platform for secure and coordinated resource sharing between participants collected in so-called Virtual Organizations. Moreover, Earth and Space Sciences informatics provides the conceptual tools for modeling the geospatial information shared in Civil Protection applications during its entire lifecycle. Therefore a European Civil Protection e-infrastructure might be based on a Grid platform enhanced with Earth Sciences services. In the context of the 6th Framework Programme, the EU co-funded project CYCLOPS (CYber-infrastructure for CiviL protection Operative ProcedureS), which ended in December 2008, addressed the problem of defining the requirements and identifying the research strategies and innovation guidelines towards an advanced e-Infrastructure for Civil Protection. Starting from the requirements analysis, CYCLOPS proposed an architectural framework for a European Civil Protection e-Infrastructure. This architectural framework has been evaluated through the development of prototypes of two operative applications used by the Italian Civil Protection for Wild Fires Risk Assessment (RISICO) and by the French Civil Protection for Flash Flood Risk Management (SPC-GD). The results of these studies and proofs of concept have been used as the basis for defining research and innovation strategies aimed at the detailed design and implementation of the infrastructure. In particular, the main research themes and topics to be addressed have been identified and detailed. Finally, the obstacles to the innovation required for the adoption of this infrastructure, and possible strategies to overcome them, are discussed.

  1. Research-based Curricula in the Context of 21st Century Data Science

    NASA Astrophysics Data System (ADS)

    Fox, P. A.

    2017-12-01

    When the informatics revolution began again a little more than 10 years ago (longer for bioinformatics), the geosciences (or Earth and Space Sciences) were paying attention via international initiatives such as the Electronic Geophysical Year (eGY) and related endeavours (IPY, IYPE, IHY). The research agenda was in the spotlight, or more precisely, what Earth and Space Science informatics, cast in emergent e-science or cyber-infrastructures, could benefit from was the main focus of attention and funding. At the time almost all "X-informatics" efforts were novel in their discipline or traditionally defined. However, a broader research and education agenda was clearly needed. At the same time, a much more cross-disciplinary field, data science, emerged. In this presentation, we relate the development, delivery and assessment of research-oriented informatics, data science and their specializations in geoscience education in general, and as undertaken at RPI over the last nine years. We conclude with a longitudinal view of the impacts on career paths in the 21st century.

  2. Virtualization for the LHCb Online system

    NASA Astrophysics Data System (ADS)

    Bonaccorsi, Enrico; Brarda, Loic; Moine, Gary; Neufeld, Niko

    2011-12-01

    Virtualization has long been advertised by the IT industry as a way to cut down cost, optimise resource usage and manage the complexity in large data centers. The great number and huge heterogeneity of hardware, both industrial and custom-made, has up to now led to reluctance in the adoption of virtualization in the IT infrastructure of large experiment installations. Our experience in the LHCb experiment has shown that virtualization improves the availability and manageability of the whole system. We have evaluated the available hypervisors/virtualization solutions and found that Microsoft Hyper-V technology provides a high level of maturity and flexibility for our purpose. We present the results of these comparison tests, describing in detail the architecture of our virtualization infrastructure, with a special emphasis on security for services visible to the outside world. Security is achieved by a sophisticated combination of VLANs, firewalls and virtual routing; the costs and benefits of this solution are analysed. We have adapted our cluster management tools, notably Quattor, for the needs of virtual machines, and this allows us to migrate services smoothly from physical machines to the virtualized infrastructure. The procedures for migration are also described. In the final part of the document we describe our recent R&D activities aimed at replacing the SAN backend for virtualization with a cheaper iSCSI solution; this will allow us to move all servers and related services to the virtualized infrastructure, except those doing hardware control via non-commodity PCI plug-in cards.
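
    As a flavour of what scripted migration of a service onto virtual infrastructure can look like, here is a minimal sketch using the libvirt/KVM command-line tool virsh. This is a generic stand-in, not the LHCb procedure, which used Hyper-V and Quattor; the domain and host names are placeholders.

    ```python
    # Illustrative sketch only: live-migrates a named VM between two
    # KVM/libvirt hosts by shelling out to the virsh CLI. Names below
    # are placeholders, not LHCb infrastructure.
    import subprocess

    def live_migrate(domain: str, dest_host: str) -> None:
        """Live-migrate a running libvirt domain to dest_host over SSH."""
        subprocess.run(
            ["virsh", "migrate", "--live", domain,
             f"qemu+ssh://{dest_host}/system"],
            check=True,
        )

    if __name__ == "__main__":
        live_migrate("dns-service-vm", "hv-node02.example.net")
    ```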

  3. Paradigm Shift in Data Content and Informatics Infrastructure Required for Generalized Constitutive Modeling of Materials Behavior

    NASA Technical Reports Server (NTRS)

    Arnold, S. M.

    2006-01-01

    Materials property information such as composition and thermophysical/mechanical properties abound in the literature. Oftentimes, however, the corresponding response curves from which these data are determined are missing or at the very least difficult to retrieve. Further, the paradigm for collecting materials property information has historically centered on (1) properties for materials comparison/selection purposes and (2) input requirements for conventional design/analysis methods. However, just as not all materials are alike or equal, neither are all constitutive models (and thus design/analysis methods) equal; each model typically has its own specific and often unique required materials parameters, some directly measurable and others indirectly measurable. Therefore, the type and extent of materials information routinely collected is not always sufficient to meet the current, much less future, needs of the materials modeling community. Informatics has been defined as the science concerned with gathering, manipulating, storing, retrieving, and classifying recorded information. A key aspect of informatics is its focus on understanding problems and applying information technology as needed to address those problems. The primary objective of this article is to highlight the need for a paradigm shift in materials data collection, analysis, and dissemination so as to maximize the impact on both practitioners and researchers. Our hope is to identify and articulate what constitutes "sufficient" data content (i.e., quality and quantity) for developing, characterizing, and validating sophisticated nonlinear time- and history-dependent (hereditary) constitutive models. Likewise, the informatics infrastructure required for handling the potentially massive amounts of materials data will be discussed.

  4. Requirements for plug and play information infrastructure frameworks and architectures to enable virtual enterprises

    NASA Astrophysics Data System (ADS)

    Bolton, Richard W.; Dewey, Allen; Horstmann, Paul W.; Laurentiev, John

    1997-01-01

    This paper examines the role virtual enterprises will have in supporting future business engagements and the resulting technology requirements. Two representative end-user scenarios are proposed that define the requirements for 'plug-and-play' information infrastructure frameworks and architectures necessary to enable 'virtual enterprises' in US manufacturing industries. The scenarios provide a high-level 'needs analysis' for identifying key technologies, defining a reference architecture, and developing compliant reference implementations. Virtual enterprises are short-term consortia or alliances of companies formed to address fast-changing opportunities. Members of a virtual enterprise carry out their tasks as if they all worked for a single organization under 'one roof', using 'plug-and-play' information infrastructure frameworks and architectures to access and manage all information needed to support the product cycle. 'Plug-and-play' information infrastructure frameworks and architectures are required to enhance collaboration between companies working together on different aspects of a manufacturing process. This new form of collaborative computing will decrease cycle time and increase responsiveness to change.

  5. Person-generated Data in Self-quantification. A Health Informatics Research Program.

    PubMed

    Gray, Kathleen; Martin-Sanchez, Fernando J; Lopez-Campos, Guillermo H; Almalki, Manal; Merolli, Mark

    2017-01-09

    The availability of internet-connected mobile, wearable and ambient consumer technologies, direct-to-consumer e-services and peer-to-peer social media sites far outstrips evidence about the efficiency, effectiveness and efficacy of using them in healthcare applications. The aim of this paper is to describe one approach to building a program of health informatics research, so as to generate rich and robust evidence about health data and information processing in self-quantification and the associated healthcare and health outcomes. The paper summarises relevant health informatics research approaches in the literature and presents an example of developing a program of research in the Health and Biomedical Informatics Centre (HaBIC) at the University of Melbourne. The paper describes this program in terms of research infrastructure, conceptual models, research design, research reporting and knowledge sharing. The paper identifies key outcomes from integrative and multiple-angle approaches to investigating the management of information and data generated by use of this Centre's collection of wearable, mobile and other devices in health self-monitoring experiments. These research results offer lessons for consumers, developers, clinical practitioners and biomedical and health informatics researchers. Health informatics is increasingly called upon to make sense of emerging self-quantification and other digital health phenomena that are well beyond the conventions of healthcare in which the field of informatics originated and consolidated. Making a substantial contribution to optimising the aims, processes and outcomes of health self-quantification will need further work at scale, in multi-centre collaborations, for this Centre and for health informatics researchers generally.

  6. The Future of Public Health Informatics: Alternative Scenarios and Recommended Strategies

    PubMed Central

    Edmunds, Margo; Thorpe, Lorna; Sepulveda, Martin; Bezold, Clem; Ross, David A.

    2014-01-01

    Background: In October 2013, the Public Health Informatics Institute (PHII) and Institute for Alternative Futures (IAF) convened a multidisciplinary group of experts to evaluate forces shaping public health informatics (PHI) in the United States, with the aim of identifying upcoming challenges and opportunities. The PHI workshop was funded by the Robert Wood Johnson Foundation as part of its larger strategic planning process for public health and primary care. Workshop Context: During the two-day workshop, nine experts from the public and private sectors analyzed and discussed the implications of four scenarios regarding the United States economy, health care system, information technology (IT) sector, and their potential impacts on public health in the next 10 years, by 2023. Workshop participants considered the potential role of the public health sector in addressing population health challenges in each scenario, and then identified specific informatics goals and strategies needed for the sector to succeed in this role. Recommendations and Conclusion: Participants developed recommendations for the public health informatics field and for public health overall in the coming decade. These included the need to rely more heavily on intersectoral collaborations across public and private sectors, to improve data infrastructure and workforce capacity at all levels of the public health enterprise, to expand the evidence base regarding effectiveness of informatics-based public health initiatives, and to communicate strategically with elected officials and other key stakeholders regarding the potential for informatics-based solutions to have an impact on population health. PMID:25848630

  7. National Mesothelioma Virtual Bank: a standard based biospecimen and clinical data resource to enhance translational research.

    PubMed

    Amin, Waqas; Parwani, Anil V; Schmandt, Linda; Mohanty, Sambit K; Farhat, Ghada; Pople, Andrew K; Winters, Sharon B; Whelan, Nancy B; Schneider, Althea M; Milnes, John T; Valdivieso, Federico A; Feldman, Michael; Pass, Harvey I; Dhir, Rajiv; Melamed, Jonathan; Becich, Michael J

    2008-08-13

    Advances in translational research have led to the need for well-characterized biospecimens for research. The National Mesothelioma Virtual Bank is an initiative that collects annotated datasets relevant to human mesothelioma in order to develop an enterprising biospecimen resource that fulfills researchers' needs. The National Mesothelioma Virtual Bank architecture is based on three major components: (a) common data elements (based on the College of American Pathologists protocol and North American Association of Central Cancer Registries standards), (b) clinical and epidemiologic data annotation, and (c) data query tools. These tools work interoperably to standardize the entire process of annotation. The National Mesothelioma Virtual Bank tool is based upon the caTISSUE Clinical Annotation Engine, developed by the University of Pittsburgh in cooperation with the Cancer Biomedical Informatics Grid (caBIG, see http://cabig.nci.nih.gov). This application provides a web-based system for annotating, importing and searching mesothelioma cases. The underlying information model is constructed utilizing Unified Modeling Language class diagrams, hierarchical relationships and Enterprise Architect software. The database provides researchers real-time access to richly annotated specimens and integral information related to mesothelioma. The data disclosed are tightly regulated according to the user's authorization and to the participating institution's local Institutional Review Board and regulatory committee reviews. The National Mesothelioma Virtual Bank currently has over 600 annotated cases available for researchers, including paraffin-embedded tissues, tissue microarrays, serum and genomic DNA. The National Mesothelioma Virtual Bank is a virtual biospecimen registry with robust translational biomedical informatics support to facilitate basic science, clinical, and translational research. Furthermore, it protects patient privacy by disclosing only de-identified datasets, to assure that biospecimens can be made accessible to researchers.
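
    The de-identified disclosure described above can be illustrated with a small sketch. The record fields here are hypothetical; the virtual bank's actual data elements follow CAP and NAACCR standards and are not reproduced.

    ```python
    # Minimal sketch of de-identified disclosure, assuming hypothetical
    # field names: direct identifiers are withheld, research attributes
    # are released.
    DIRECT_IDENTIFIERS = {"name", "mrn", "date_of_birth", "address"}

    def deidentify(record: dict) -> dict:
        """Return a copy of the record with direct identifiers removed."""
        return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

    case = {
        "name": "Jane Doe",           # direct identifier: withheld
        "mrn": "123-45",              # direct identifier: withheld
        "diagnosis": "mesothelioma",  # research attribute: disclosed
        "specimen_type": "paraffin-embedded tissue",
    }
    print(deidentify(case))
    ```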

  8. Informatics for practicing anatomical pathologists: marking a new era in pathology practice.

    PubMed

    Gabril, Manal Y; Yousef, George M

    2010-03-01

    Informatics can be defined as using highly advanced technologies to improve patient diagnosis or management. Pathology informatics evolved as a response to the overwhelming amount of information that became available, in an attempt to better use and maintain it. The most commonly used tools of informatics can be classified into digital imaging, telepathology, and Internet and electronic data mining. Digital imaging is the storage of anatomical pathology information, either gross pictures or microscopic slides, in an electronic format. These images can be used for education, archiving, diagnosis, and consultation. Virtual microscopy is the more advanced form of digital imaging, with enhanced efficiency and accessibility. Telepathology is now increasingly becoming a useful tool in anatomical pathology practice. Different types of telepathology communications are available for both diagnostic and consultation services. The spectrum of applications of informatics in the field of anatomical pathology is broad and encompasses medical education, clinical services, and pathology research. Informatics is now settling on solid ground as an important tool for pathology teaching, with digital teaching becoming the standard tool in many institutions. After a slow start, we now witness the transition of informatics from the research bench to the bedside. As we move into a new era of extensive pathology informatics utilization, several challenges have to be addressed, including the cost of the new technology, legal issues, and resistance from pathologists. It is clear from the current evidence that pathology informatics will continue to grow and have a major role in the future of our specialty. However, it is also clear that it is not going to fully replace the human factor or the regular microscope.

  9. Health informatics 3.0.

    PubMed

    Kalra, Dipak

    2011-01-01

    Web 3.0 promises us smart computer services that will interact with each other and leverage knowledge about us and our immediate context to deliver prioritised and relevant information to support decisions and actions. Healthcare must take advantage of such new knowledge-integrating services, in particular to support better co-operation between professionals of different disciplines working in different locations, and to enable well-informed co-operation between clinicians and patients. To grasp the potential of Web 3.0 we will need well-harmonised semantic resources that can richly connect virtual teams and link their strategies to real-time and tailored evidence. Facts, decision logic, care pathway steps, alerts, and education need to be embedded within components that can interact with multiple EHR systems and services consistently. Using Health Informatics 3.0, a patient's current situation could be compared with the outcomes of very similar patients (from across millions) to deliver personalised care recommendations. The integration of EHRs with biomedical sciences ('omics) research results and predictive models such as the Virtual Physiological Human could help speed up the translation of new knowledge into clinical practice. The mission, and challenge, for Health Informatics 3.0 is to enable healthy citizens, patients and professionals to collaborate within a knowledge-empowered social network in which patient-specific information and personalised real-time evidence are seamlessly interwoven.

  10. Implementation of utaut model to understand the use of virtual classroom principle in higher education

    NASA Astrophysics Data System (ADS)

    Aditya, B. R.; Permadi, A.

    2018-03-01

    This paper describes an implementation of the Unified Theory of Acceptance and Use of Technology (UTAUT) model to assess the use of a virtual classroom in support of teaching and learning in higher education. The purpose of this research is to determine whether a virtual classroom that fulfils the basic principles can be accepted and used positively by students. The research methodology uses a quantitative, descriptive approach with a questionnaire as the tool for measuring the degree of acceptance of the virtual classroom principles. The research uses a sample of 105 students in D3 Informatics Management at Telkom University. The result of this research is that the use of the virtual classroom principles is positive and relevant to students in higher education.
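
    Questionnaire data of this kind is typically scored by averaging Likert items within each UTAUT construct. The sketch below uses the four standard UTAUT constructs; the item keys and responses are illustrative, not the instrument used in the study.

    ```python
    # Minimal sketch of scoring a UTAUT-style questionnaire, assuming
    # 5-point Likert items grouped into the four standard constructs.
    # Item keys and sample answers are illustrative.
    from statistics import mean

    CONSTRUCTS = {
        "performance_expectancy":  ["pe1", "pe2", "pe3"],
        "effort_expectancy":       ["ee1", "ee2", "ee3"],
        "social_influence":        ["si1", "si2"],
        "facilitating_conditions": ["fc1", "fc2"],
    }

    def construct_scores(response: dict) -> dict:
        """Average the Likert items belonging to each UTAUT construct."""
        return {c: mean(response[i] for i in items)
                for c, items in CONSTRUCTS.items()}

    answers = {"pe1": 5, "pe2": 4, "pe3": 4, "ee1": 3, "ee2": 4, "ee3": 4,
               "si1": 5, "si2": 4, "fc1": 3, "fc2": 4}
    print(construct_scores(answers))
    ```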

  11. Using the iPlant collaborative discovery environment.

    PubMed

    Oliver, Shannon L; Lenards, Andrew J; Barthelson, Roger A; Merchant, Nirav; McKay, Sheldon J

    2013-06-01

    The iPlant Collaborative is an academic consortium whose mission is to develop an informatics and social infrastructure to address the "grand challenges" in plant biology. Its cyberinfrastructure supports the computational needs of the research community and facilitates solving major challenges in plant science. The Discovery Environment provides a powerful and rich graphical interface to the iPlant Collaborative cyberinfrastructure by creating an accessible virtual workbench that enables all levels of expertise, ranging from students to traditional biology researchers and computational experts, to explore, analyze, and share their data. By providing access to iPlant's robust data-management system and high-performance computing resources, the Discovery Environment also creates a unified space in which researchers can access scalable tools. Researchers can use available Applications (Apps) to execute analyses on their data, as well as customize or integrate their own tools to better meet the specific needs of their research. These Apps can also be used in workflows that automate more complicated analyses. This module describes how to use the main features of the Discovery Environment, using bioinformatics workflows for high-throughput sequence data as examples. © 2013 by John Wiley & Sons, Inc.

  12. Tri-Level Optimization Algorithms for Solving Defender-Attacker-Defender Network Models

    DTIC Science & Technology

    2016-06-01

  13. Design and evaluation of an imaging informatics system for analytics-based decision support in radiation therapy

    NASA Astrophysics Data System (ADS)

    Deshpande, Ruchi; DeMarco, John; Liu, Brent J.

    2015-03-01

    We have developed a comprehensive DICOM RT specific database of retrospective treatment planning data for radiation therapy of head and neck cancer. Further, we have designed and built an imaging informatics module that utilizes this database to perform data mining. The end-goal of this data mining system is to provide radiation therapy decision support for incoming head and neck cancer patients, by identifying best practices from previous patients who had the most similar tumor geometries. Since the performance of such systems often depends on the size and quality of the retrospective database, we have also placed an emphasis on developing infrastructure and strategies to encourage data sharing and participation from multiple institutions. The infrastructure and decision support algorithm have both been tested and evaluated with 51 sets of retrospective treatment planning data of head and neck cancer patients. We will present the overall design and architecture of our system, an overview of our decision support mechanism as well as the results of our evaluation.
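
    Retrieval of "most similar tumor geometries" can be sketched as a nearest-neighbour search over per-case feature vectors. The distance metric and features below are illustrative, not the paper's actual similarity algorithm.

    ```python
    # Minimal sketch of retrieving the retrospective cases most similar
    # to an incoming patient, assuming each case is summarized as a
    # numeric feature vector; features and metric are illustrative.
    import math

    def euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def most_similar(query, database, k=3):
        """Return the k cases whose features are closest to the query."""
        return sorted(database,
                      key=lambda case: euclidean(query, case["features"]))[:k]

    # Toy stand-in for the 51 retrospective head-and-neck cases.
    db = [{"id": i, "features": [i * 0.5, 10 - i]} for i in range(51)]
    print([c["id"] for c in most_similar([2.0, 7.5], db)])
    ```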

  14. Earth Science Informatics - Overview

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K.

    2015-01-01

    Over the last 10-15 years, significant advances have been made in information management, there are an increasing number of individuals entering the field of information management as it applies to geoscience and remote sensing data, and the field of informatics has come into its own. Informatics is the science and technology of applying computers and computational methods to the systematic analysis, management, interchange, and representation of science data, information, and knowledge. Informatics also includes the use of computers and computational methods to support decision making and applications. Earth Science Informatics (ESI, a.k.a. geoinformatics) is the application of informatics in the Earth science domain. ESI is a rapidly developing discipline integrating computer science, information science, and Earth science. Major national and international research and infrastructure projects in ESI have been carried out or are on-going. Notable among these are: the Global Earth Observation System of Systems (GEOSS), the European Commission's INSPIRE, the U.S. NSDI and Geospatial One-Stop, the NASA EOSDIS, and the NSF DataONE, EarthCube and Cyberinfrastructure for Geoinformatics. More than 18 departments and agencies in the U.S. federal government have been active in Earth science informatics. All major space agencies in the world have been involved in ESI research and application activities. In the United States, the Federation of Earth Science Information Partners (ESIP), whose membership includes nearly 150 organizations (government, academic and commercial) dedicated to managing, delivering and applying Earth science data, has been working on many ESI topics since 1998. The Committee on Earth Observation Satellites (CEOS) Working Group on Information Systems and Services (WGISS) has been actively coordinating ESI activities among the space agencies. Keywords: Remote Sensing; Earth Science Informatics; Data Systems; Data Services; Metadata

  15. Models and algorithm of optimization launch and deployment of virtual network functions in the virtual data center

    NASA Astrophysics Data System (ADS)

    Bolodurina, I. P.; Parfenov, D. I.

    2017-10-01

    The goal of our investigation is the optimization of network operation in a virtual data center. The advantage of modern infrastructure virtualization lies in the possibility of using software-defined networks. However, existing algorithmic optimization solutions do not take into account the specific features of working with multiple classes of virtual network functions. The current paper describes models characterizing the basic structural objects of a virtual data center, including: a level-distribution model of the software-defined infrastructure of a virtual data center, a generalized model of a virtual network function, and a neural network model for the identification of virtual network functions. We also developed an efficient algorithm for the containerization of virtual network functions in a virtual data center, and we propose an efficient algorithm for placing virtual network functions. In our investigation we also generalize the well-known heuristic and deterministic algorithms of Karmarkar-Karp.
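
    For reference, the classical two-way Karmarkar-Karp largest differencing heuristic, which the authors generalize, can be sketched in a few lines; here it is read as balancing virtual network function loads across two hosts, with illustrative load values.

    ```python
    # Minimal sketch of the Karmarkar-Karp largest differencing heuristic
    # for two-way number partitioning. Load values are illustrative.
    import heapq

    def karmarkar_karp(loads):
        """Residual |sum difference| left by the differencing heuristic."""
        heap = [-x for x in loads]          # max-heap via negation
        heapq.heapify(heap)
        while len(heap) > 1:
            a = -heapq.heappop(heap)        # two largest remaining values
            b = -heapq.heappop(heap)
            heapq.heappush(heap, -(a - b))  # replace them with their difference
        return -heap[0]

    # -> 2 (heuristic; the optimal split {8, 7} vs {6, 5, 4} achieves 0)
    print(karmarkar_karp([8, 7, 6, 5, 4]))
    ```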

  16. Discovering anomalous events from urban informatics data

    NASA Astrophysics Data System (ADS)

    Jayarajah, Kasthuri; Subbaraju, Vigneshwaran; Weerakoon, Dulanga; Misra, Archan; Tam, La Thanh; Athaide, Noel

    2017-05-01

    Singapore's "smart city" agenda is driving the government to provide public access to a broader variety of urban informatics sources, such as images from traffic cameras and information about buses servicing different bus stops. Such informatics data serves as probes of evolving conditions at different spatiotemporal scales. This paper explores how such multi-modal informatics data can be used to establish the normal operating conditions at different city locations, and then apply appropriate outlier-based analysis techniques to identify anomalous events at these selected locations. We will introduce the overall architecture of sociophysical analytics, where such infrastructural data sources can be combined with social media analytics to not only detect such anomalous events, but also localize and explain them. Using the annual Formula-1 race as our candidate event, we demonstrate a key difference between the discriminative capabilities of different sensing modes: while social media streams provide discriminative signals during or prior to the occurrence of such an event, urban informatics data can often reveal patterns that have higher persistence, including before and after the event. In particular, we shall demonstrate how combining data from (i) publicly available Tweets, (ii) crowd levels aboard buses, and (iii) traffic cameras can help identify the Formula-1 driven anomalies, across different spatiotemporal boundaries.
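
    Establishing a location's "normal operating conditions" and flagging departures from it can be sketched with a robust z-score over a single signal, such as crowd levels aboard buses; the data and threshold below are illustrative, not the paper's analysis.

    ```python
    # Minimal sketch of outlier-based event detection on one urban
    # informatics signal, using a median/MAD z-score against the
    # location's normal profile. Values and threshold are illustrative.
    from statistics import median

    def robust_z(value, history):
        """Median/MAD-based z-score of an observation against history."""
        med = median(history)
        mad = median(abs(x - med) for x in history) or 1e-9
        return 0.6745 * (value - med) / mad

    history = [40, 42, 38, 41, 39, 43, 40, 44, 41, 39]  # normal crowd counts
    for obs in (42, 45, 95):                            # 95: race-day surge
        flag = "ANOMALY" if abs(robust_z(obs, history)) > 3.5 else "normal"
        print(obs, flag)
    ```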

  17. Community annotation and bioinformatics workforce development in concert--Little Skate Genome Annotation Workshops and Jamborees.

    PubMed

    Wang, Qinghua; Arighi, Cecilia N; King, Benjamin L; Polson, Shawn W; Vincent, James; Chen, Chuming; Huang, Hongzhan; Kingham, Brewster F; Page, Shallee T; Rendino, Marc Farnum; Thomas, William Kelley; Udwary, Daniel W; Wu, Cathy H

    2012-01-01

    Recent advances in high-throughput DNA sequencing technologies have equipped biologists with a powerful new set of tools for advancing research goals. The resulting flood of sequence data has made it critically important to train the next generation of scientists to handle the inherent bioinformatic challenges. The North East Bioinformatics Collaborative (NEBC) is undertaking the genome sequencing and annotation of the little skate (Leucoraja erinacea) to promote advancement of bioinformatics infrastructure in our region, with an emphasis on practical education to create a critical mass of informatically savvy life scientists. In support of the Little Skate Genome Project, the NEBC members have developed several annotation workshops and jamborees to provide training in genome sequencing, annotation and analysis. Acting as a nexus for both curation activities and dissemination of project data, a project web portal, SkateBase (http://skatebase.org) has been developed. As a case study to illustrate effective coupling of community annotation with workforce development, we report the results of the Mitochondrial Genome Annotation Jamborees organized to annotate the first completely assembled element of the Little Skate Genome Project, as a culminating experience for participants from our three prior annotation workshops. We are applying the physical/virtual infrastructure and lessons learned from these activities to enhance and streamline the genome annotation workflow, as we look toward our continuing efforts for larger-scale functional and structural community annotation of the L. erinacea genome.

  19. Direction of Contents Development for SMART Education

    ERIC Educational Resources Information Center

    Park, YoungSun; An, SangJin; Lee, YoungJun

    2013-01-01

    The aim of this study is to suggest a direction of developing SMART education contents for its effective implementation by analyzing the status of educational informatization policies in Korea. Korean government has built the information and communication infrastructure, and provided teachers and students with various kinds of contents. And, in…

  20. Pharmacovigilance and Biomedical Informatics: A Model for Future Development.

    PubMed

    Beninger, Paul; Ibara, Michael A

    2016-12-01

    The discipline of pharmacovigilance is rooted in the aftermath of the thalidomide tragedy of 1961. It has evolved as a result of collaborative efforts by many individuals and organizations, including physicians, patients, Health Authorities, universities, industry, the World Health Organization, the Council for International Organizations of Medical Sciences, and the International Conference on Harmonisation. Biomedical informatics is rooted in technologically based methodologies and has evolved at the speed of computer technology. The purpose of this review is to bring a novel lens to pharmacovigilance, looking at the evolution and development of the field of pharmacovigilance from the perspective of biomedical informatics, with the explicit goal of providing a foundation for discussion of the future direction of pharmacovigilance as a discipline. For this review, we searched [the publication trend for the log10 value of the numbers of publications identified in PubMed] using the key words [informatics (INF), pharmacovigilance (PV), pharmacovigilance + informatics (PV + INF)], for [study types] articles published between [1994-2015]. We manually searched the reference lists of identified articles for additional information. Biomedical informatics has made significant contributions to the infrastructural development of pharmacovigilance. However, there has not otherwise been a systematic assessment of the role of biomedical informatics in enhancing the field of pharmacovigilance, and there has been little cross-discipline scholarship. Rapidly developing innovations in biomedical informatics pose a challenge to pharmacovigilance in finding ways to include new sources of safety information, including social media, massively linked databases, and mobile and wearable wellness applications and sensors. With biomedical informatics as a lens, it is evident that certain aspects of pharmacovigilance are evolving more slowly. However, the high levels of mutual interest in both fields and intense global and economic external pressures offer opportunities for a future of closer collaboration. Copyright © 2016 Elsevier HS Journals, Inc. All rights reserved.
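
    A publication-trend search of this kind can be reproduced programmatically. The sketch below assumes Biopython's Entrez module (which wraps the NCBI E-utilities and requires network access); the query string and email address are illustrative.

    ```python
    # Minimal sketch of counting PubMed records per year for a query and
    # reporting the log10 trend, assuming Biopython is installed.
    import math
    from Bio import Entrez

    Entrez.email = "you@example.org"  # required by NCBI; placeholder

    def yearly_count(term: str, year: int) -> int:
        """Number of PubMed records matching `term` published in `year`."""
        handle = Entrez.esearch(db="pubmed", term=term, datetype="pdat",
                                mindate=str(year), maxdate=str(year))
        return int(Entrez.read(handle)["Count"])

    for year in range(1994, 2016):
        n = yearly_count("pharmacovigilance AND informatics", year)
        print(year, n, round(math.log10(n), 2) if n else "-")
    ```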

  1. Educational Infrastructure Using Virtualization Technologies: Experience at Kaunas University of Technology

    ERIC Educational Resources Information Center

    Miseviciene, Regina; Ambraziene, Danute; Tuminauskas, Raimundas; Pažereckas, Nerijus

    2012-01-01

    Many factors influence education nowadays. Educational institutions are faced with budget cuttings, outdated IT, data security management and the willingness to integrate remote learning at home. Virtualization technologies provide innovative solutions to the problems. The paper presents an original educational infrastructure using virtualization…

  2. Public Health Platforms: An Emerging Informatics Approach to Health Professional Learning and Development

    PubMed Central

    Gray, Kathleen

    2016-01-01

    Health informatics has a major role to play in optimising the management and use of data, information and knowledge in health systems. As health systems undergo digital transformation, it is important to consider informatics approaches not only to curriculum content but also to the design of learning environments and learning activities for health professional learning and development. An example of such an informatics approach is the use of large-scale, integrated public health platforms on the Internet as part of health professional learning and development. This article describes selected examples of such platforms, with a focus on how they may influence the direction of health professional learning and development. Significance for public health: The landscape of healthcare systems, public health systems, health research systems and professional education systems is fragmented, with many gaps and silos. More sophistication in the management of health data, information, and knowledge, based on public health informatics expertise, is needed to tackle key issues of prevention, promotion and policy-making. Platform technologies represent an emerging large-scale, highly integrated informatics approach to public health, combining the technologies of Internet, the web, the cloud, social technologies, remote sensing and/or mobile apps into an online infrastructure that can allow more synergies in work within and across these systems. Health professional curricula need updating so that the health workforce has a deep and critical understanding of the way that platform technologies are becoming the foundation of the health sector. PMID:27190977

  3. Optimizing Virtual Network Functions Placement in Virtual Data Center Infrastructure Using Machine Learning

    NASA Astrophysics Data System (ADS)

    Bolodurina, I. P.; Parfenov, D. I.

    2018-01-01

    We have elaborated a neural network model of virtual network flow identification based on the statistical properties of flows circulating in the network of the data center and characteristics that describe the content of packets transmitted through network objects. This enabled us to establish the optimal set of attributes to identify virtual network functions. We have established an algorithm for optimizing the placement of virtual data functions using the data obtained in our research. Our approach uses a hybrid method of visualization using virtual machines and containers, which enables to reduce the infrastructure load and the response time in the network of the virtual data center. The algorithmic solution is based on neural networks, which enables to scale it at any number of the network function copies.
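
    The flow-identification step can be illustrated with a much simpler stand-in for the paper's neural network: a nearest-centroid classifier over per-flow statistical features. The feature values and VNF classes below are illustrative.

    ```python
    # Minimal sketch of identifying a flow's VNF class from statistical
    # features, using a nearest-centroid classifier in place of the
    # paper's neural network. Centroids and classes are illustrative.
    import math

    CENTROIDS = {                        # mean (pkt_size, pkts_per_sec)
        "firewall":      (120.0, 900.0),
        "load_balancer": (640.0, 400.0),
        "video_cache":   (1400.0, 150.0),
    }

    def identify_vnf(flow):
        """Assign a flow to the VNF class with the nearest centroid."""
        return min(CENTROIDS, key=lambda c: math.dist(flow, CENTROIDS[c]))

    print(identify_vnf((1350.0, 180.0)))  # -> video_cache
    ```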

  4. The biodigital human: a web-based 3D platform for medical visualization and education.

    PubMed

    Qualter, John; Sculli, Frank; Oliker, Aaron; Napier, Zachary; Lee, Sabrina; Garcia, Julio; Frenkel, Sally; Harnik, Victoria; Triola, Marc

    2012-01-01

    NYU School of Medicine's Division of Educational Informatics in collaboration with BioDigital Systems LLC (New York, NY) has created a virtual human body dataset that is being used for visualization, education and training and is accessible over modern web browsers.

  6. Grids, virtualization, and clouds at Fermilab

    NASA Astrophysics Data System (ADS)

    Timm, S.; Chadwick, K.; Garzoglio, G.; Noh, S.

    2014-06-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud and GPCF) and core computing (Virtual Services). This work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  8. Earth Science Informatics - Overview

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K.

    2017-01-01

    Over the last 10-15 years, significant advances have been made in information management, there are an increasing number of individuals entering the field of information management as it applies to geoscience and remote sensing data, and the field of informatics has come into its own. Informatics is the science and technology of applying computers and computational methods to the systematic analysis, management, interchange, and representation of science data, information, and knowledge. Informatics also includes the use of computers and computational methods to support decision making and applications. Earth Science Informatics (ESI, a.k.a. geoinformatics) is the application of informatics in the Earth science domain. ESI is a rapidly developing discipline integrating computer science, information science, and Earth science. Major national and international research and infrastructure projects in ESI have been carried out or are on-going. Notable among these are: the Global Earth Observation System of Systems (GEOSS), the European Commission's INSPIRE, the U.S. NSDI and Geospatial One-Stop, the NASA EOSDIS, and the NSF DataONE, EarthCube and Cyberinfrastructure for Geoinformatics. More than 18 departments and agencies in the U.S. federal government have been active in Earth science informatics. All major space agencies in the world have been involved in ESI research and application activities. In the United States, the Federation of Earth Science Information Partners (ESIP), whose membership includes over 180 organizations (government, academic and commercial) dedicated to managing, delivering and applying Earth science data, has been working on many ESI topics since 1998. The Committee on Earth Observation Satellites (CEOS) Working Group on Information Systems and Services (WGISS) has been actively coordinating ESI activities among the space agencies. The talk will present an overview of current efforts in ESI, the role members of IEEE GRSS play, and recent developments in data preservation and provenance.

  9. Multiplexing Low and High QoS Workloads in Virtual Environments

    NASA Astrophysics Data System (ADS)

    Verboven, Sam; Vanmechelen, Kurt; Broeckhove, Jan

    Virtualization technology has introduced new ways of managing IT infrastructure. The flexible deployment of applications through self-contained virtual machine images has removed the barriers to multiplexing, suspending and migrating applications with their entire execution environment, allowing for a more efficient use of the infrastructure. These developments have given rise to an important challenge regarding the optimal scheduling of virtual machine workloads. In this paper, we specifically address the VM scheduling problem in which workloads that require guaranteed levels of CPU performance are mixed with workloads that do not require such guarantees. We introduce a framework to analyze this scheduling problem and evaluate to what extent such mixed service delivery is beneficial for a provider of virtualized IT infrastructure. Traditionally, providers offer IT resources under a guaranteed and fixed performance profile, which can lead to underutilization. The findings of our simulation study show that, through proper tuning of a limited set of parameters, the proposed scheduling algorithm allows for a significant increase in utilization without sacrificing performance dependability.
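
    The shape of such mixed-QoS placement can be sketched with a simple greedy scheme: admit guaranteed VMs against reserved CPU capacity first, then backfill best-effort VMs into whatever is left. This is an illustrative sketch, not the paper's algorithm, and the capacities and demands are made up.

    ```python
    # Minimal sketch: reserve CPU shares for guaranteed-QoS VMs, then
    # backfill best-effort VMs into leftover capacity. Worst-fit keeps
    # headroom spread across hosts. All numbers are illustrative.
    def schedule(hosts, guaranteed, best_effort):
        """Greedy mixed-QoS placement; returns vm -> host."""
        free = dict(hosts)                     # host -> spare CPU shares
        placement = {}
        for vm, demand in sorted(guaranteed.items(), key=lambda x: -x[1]):
            host = max(free, key=free.get)     # emptiest host first
            if free[host] < demand:
                raise RuntimeError(f"cannot guarantee {vm}")
            free[host] -= demand
            placement[vm] = host
        for vm, demand in best_effort.items():
            host = max(free, key=free.get)
            free[host] -= demand               # may go negative: best effort
            placement[vm] = host
        return placement

    print(schedule({"h1": 8, "h2": 8},
                   {"db": 4, "web": 3},
                   {"batch1": 5, "batch2": 2}))
    ```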

  10. PACS for Bhutan: a cost effective open source architecture for emerging countries.

    PubMed

    Ratib, Osman; Roduit, Nicolas; Nidup, Dechen; De Geer, Gerard; Rosset, Antoine; Geissbuhler, Antoine

    2016-10-01

    This paper reports the design and implementation of an innovative and cost-effective imaging management infrastructure suitable for radiology centres in emerging countries. It was implemented in the main referring hospital of Bhutan, equipped with a CT, an MRI, digital radiography, and a suite of several ultrasound units. The hospital lacked the necessary informatics infrastructure for image archiving and interpretation and needed a system for distributing images to clinical wards. The solution developed for this project combines several open-source software platforms in a robust and versatile archiving and communication system connected to analysis workstations equipped with an FDA-certified version of a highly popular open-source software package. The whole system was implemented on standard off-the-shelf hardware. The system was installed in three days, and training of the radiologists as well as the technical and IT staff was provided onsite to ensure full ownership of the system by the local team. Radiologists were rapidly capable of reading and interpreting studies on the diagnostic workstations, which had a significant benefit on their workflow and on their ability to perform diagnostic tasks efficiently. Furthermore, images were also made available to several clinical units on standard desktop computers through a web-based viewer. Key points: • Open-source imaging informatics platforms can provide cost-effective alternatives for PACS. • Robust and cost-effective open architecture can provide adequate solutions for emerging countries. • Imaging informatics is often lacking in hospitals equipped with digital modalities.
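
    As a flavour of the kind of glue code an open-source archive depends on, here is a minimal sketch, assuming the pydicom library, of a sanity check an ingest script might run on incoming studies. The file path and required tags are illustrative, not part of the Bhutan deployment.

    ```python
    # Minimal sketch of validating an incoming DICOM file before
    # archiving, assuming pydicom is installed. Path and tag list are
    # illustrative.
    from pydicom import dcmread

    REQUIRED_TAGS = ("PatientID", "StudyInstanceUID", "Modality")

    def check_dicom(path: str) -> bool:
        """Read a DICOM file and confirm the tags an archive needs."""
        ds = dcmread(path)
        missing = [t for t in REQUIRED_TAGS if t not in ds]
        if missing:
            print(f"{path}: missing {missing}")
            return False
        print(f"{path}: {ds.Modality} study {ds.StudyInstanceUID}")
        return True

    if __name__ == "__main__":
        check_dicom("incoming/ct_slice_001.dcm")  # placeholder path
    ```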

  11. "Pack[superscript2]": VM Resource Scheduling for Fine-Grained Application SLAs in Highly Consolidated Environment

    ERIC Educational Resources Information Center

    Sukwong, Orathai

    2013-01-01

    Virtualization enables the ability to consolidate multiple servers on a single physical machine, increasing the infrastructure utilization. Maximizing the ratio of server virtual machines (VMs) to physical machines, namely the consolidation ratio, becomes an important goal toward infrastructure cost saving in a cloud. However, the consolidation…

  12. A survey of informatics platforms that enable distributed comparative effectiveness research using multi-institutional heterogeneous clinical data

    PubMed Central

    Sittig, Dean F.; Hazlehurst, Brian L.; Brown, Jeffrey; Murphy, Shawn; Rosenman, Marc; Tarczy-Hornoch, Peter; Wilcox, Adam B.

    2012-01-01

    Comparative Effectiveness Research (CER) has the potential to transform the current healthcare delivery system by identifying the most effective medical and surgical treatments, diagnostic tests, disease prevention methods and ways to deliver care for specific clinical conditions. To be successful, such research requires the identification, capture, aggregation, integration, and analysis of disparate data sources held by different institutions with diverse representations of the relevant clinical events. In an effort to address these diverse demands, there have been multiple new designs and implementations of informatics platforms that provide access to electronic clinical data and the governance infrastructure required for inter-institutional CER. The goal of this manuscript is to help investigators understand why these informatics platforms are required and to compare and contrast six, large-scale, recently funded, CER-focused informatics platform development efforts. We utilized an 8-dimension, socio-technical model of health information technology use to help guide our work. We identified six generic steps that are necessary in any distributed, multi-institutional CER project: data identification, extraction, modeling, aggregation, analysis, and dissemination. We expect that over the next several years these projects will provide answers to many important, and heretofore unanswerable, clinical research questions. PMID:22692259
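
    The six generic steps named above translate naturally into a pipeline skeleton. Each stage below is a stub with made-up data, intended only to show the data flow between the steps; the function names are illustrative.

    ```python
    # Skeleton sketch of the six generic steps of a distributed,
    # multi-institutional CER project. All data and names illustrative.
    def identify(sites):            # 1. find cohorts at each institution
        return {s: f"cohort@{s}" for s in sites}

    def extract(cohorts):           # 2. pull records from local systems
        return {s: [{"site": s, "bp": 120 + i} for i in range(3)]
                for s in cohorts}

    def model(records):             # 3. map to a common data model
        return [r for rows in records.values() for r in rows]

    def aggregate(rows):            # 4. combine across institutions
        return {"n": len(rows),
                "mean_bp": sum(r["bp"] for r in rows) / len(rows)}

    def analyze(summary):           # 5. run the comparative analysis
        return {**summary, "finding": "illustrative only"}

    def disseminate(result):        # 6. report to investigators
        print(result)

    disseminate(analyze(aggregate(model(extract(identify(
        ["site_a", "site_b"]))))))
    ```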

  13. Cloud Based Resource for Data Hosting, Visualization and Analysis Using UCSC Cancer Genomics Browser | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    The Cancer Analysis Virtual Machine (CAVM) project will leverage cloud technology, the UCSC Cancer Genomics Browser, and the Galaxy analysis workflow system to provide investigators with a flexible, scalable platform for hosting, visualizing and analyzing their own genomic data.

  14. [Information and communication technologies in nursing].

    PubMed

    Kern, Josipa

    2014-03-01

    The application of information and communication technologies (ICT) in nursing is an integral part of the educational curriculum at the university graduate level of nursing, but it is also part of scientific and professional meetings on nursing informatics. As part of seminars, students are required to choose e-health topics from their working environment, present them, and discuss them with colleagues. The same happens at meetings on nursing informatics. Selected papers on the issue cover the information literacy of nurses, examples of e-nursing, ICT infrastructure, possible future developments, and organizational aspects of e-health at healthcare institutions. Among others, special attention is paid to improving the quality of work in nursing.

  15. Biomedical informatics research network: building a national collaboratory to hasten the derivation of new understanding and treatment of disease.

    PubMed

    Grethe, Jeffrey S; Baru, Chaitan; Gupta, Amarnath; James, Mark; Ludaescher, Bertram; Martone, Maryann E; Papadopoulos, Philip M; Peltier, Steven T; Rajasekar, Arcot; Santini, Simone; Zaslavsky, Ilya N; Ellisman, Mark H

    2005-01-01

    Through support from the National Institutes of Health's National Center for Research Resources, the Biomedical Informatics Research Network (BIRN) is pioneering the use of advanced cyberinfrastructure for medical research. By synchronizing developments in advanced wide area networking, distributed computing, distributed database federation, and other emerging capabilities of e-science, the BIRN has created a collaborative environment that is paving the way for biomedical research and clinical information management. The BIRN Coordinating Center (BIRN-CC) is orchestrating the development and deployment of key infrastructure components for immediate and long-range support of biomedical and clinical research being pursued by domain scientists in three neuroimaging test beds.

  16. Network testbed creation and validation

    DOEpatents

    Thai, Tan Q.; Urias, Vincent; Van Leeuwen, Brian P.; Watts, Kristopher K.; Sweeney, Andrew John

    2017-03-21

    Embodiments of network testbed creation and validation processes are described herein. A "network testbed" is a replicated environment used to validate a target network or an aspect of its design. Embodiments describe a network testbed that comprises virtual testbed nodes executed via a plurality of physical infrastructure nodes. The virtual testbed nodes utilize these hardware resources as a network "fabric," thereby enabling rapid configuration and reconfiguration of the virtual testbed nodes without requiring reconfiguration of the physical infrastructure nodes. Thus, in contrast to prior art solutions which require that a tester manually build an emulated environment of physically connected network devices, embodiments receive or derive a target network description and build out a replica of this description using virtual testbed nodes executed via the physical infrastructure nodes. This process allows for the creation of very large (e.g., tens of thousands of network elements) and/or very topologically complex test networks.
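
    The "receive a target network description and build out a replica" step can be sketched as instantiating one virtual node per described element and wiring up the links. The description format and node fields below are illustrative, not the patented implementation.

    ```python
    # Minimal sketch of deriving virtual testbed nodes from a target
    # network description. Description format and fields illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class VirtualTestbedNode:
        name: str
        links: list = field(default_factory=list)

    def build_testbed(target_description: dict) -> dict:
        """Instantiate one virtual node per element and wire the links."""
        nodes = {n: VirtualTestbedNode(n)
                 for n in target_description["nodes"]}
        for a, b in target_description["links"]:
            nodes[a].links.append(b)
            nodes[b].links.append(a)
        return nodes

    target = {"nodes": ["router1", "switch1", "host1"],
              "links": [("router1", "switch1"), ("switch1", "host1")]}
    for node in build_testbed(target).values():
        print(node)
    ```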

  18. Building the informatics infrastructure for comparative effectiveness research (CER): a review of the literature.

    PubMed

    Lopez, Marianne Hamilton; Holve, Erin; Sarkar, Indra Neil; Segal, Courtney

    2012-07-01

    Technological advances in clinical informatics have made large amounts of data accessible and potentially useful for research. As a result, a burgeoning literature addresses efforts to bridge the fields of health services research and biomedical informatics. The Electronic Data Methods Forum review examines peer-reviewed literature at the intersection of comparative effectiveness research and clinical informatics. The authors are specifically interested in characterizing this literature and identifying cross-cutting themes and gaps in the literature. A 3-step systematic literature search was conducted, including a structured search of PubMed, manual reviews of articles from selected publication lists, and manual reviews of research activities based on prospective electronic clinical data. Two thousand four hundred thirty-five citations were identified as potentially relevant. Ultimately, a full-text review was performed for 147 peer-reviewed papers. One hundred thirty-two articles were selected for inclusion in the review. Of these, 88 articles are the focus of the discussion in this paper. Three types of articles were identified, including papers that: (1) provide historical context or frameworks for using clinical informatics for research, (2) describe platforms and projects, and (3) discuss issues, challenges, and applications of natural language processing. In addition, 2 cross-cutting themes emerged: the challenges of conducting research in the absence of standardized ontologies and data collection; and unique data governance concerns related to the transfer, storage, deidentification, and access to electronic clinical data. Finally, the authors identified several current gaps on important topics such as the use of clinical informatics for cohort identification, cloud computing, and single point access to research data.

  19. Infrastructure Suitability Assessment Modeling for Cloud Computing Solutions

    DTIC Science & Technology

    2011-09-01

    Front-matter and figure fragments only; recoverable content: the report contrasts full virtualization with para-virtualization (Figure 3), discusses the XEN hypervisor, and surveys remote desktop clients, including Microsoft's "Remote Desktop Connection Client" implementations for Windows and Apple operating systems alongside various open alternatives.

  20. Meeting the challenges--the role of medical informatics in an ageing society.

    PubMed

    Koch, Sabine

    2006-01-01

    The objective of this paper is to identify trends and new technological developments that arise from an ageing society and to relate them to current research in the field of medical informatics. A survey of the current literature reveals that recent technological advances have been made in the fields of "telecare and home-monitoring", "smart homes and robotics" and "health information systems and knowledge management". Innovative technologies such as wearable devices, bio- and environmental sensors and mobile, humanoid robots already exist, and ambient assisted living environments are being created for an ageing society. However, those technologies have to be adapted to older people's self-care processes and coping strategies, and to support new ways of healthcare delivery. Medical informatics can support this process by providing the necessary information infrastructure, contributing to standardisation, interoperability and security, and providing modelling and simulation techniques for educational purposes. Research fields of increasing importance with regard to an ageing society are, moreover, knowledge management, ubiquitous computing and human-computer interaction.

  1. The need of a multi-actor perspective to understand expectations from virtual presence: managing elderly homecare informatics.

    PubMed

    Mettler, Tobias; Vimarlund, Vivian

    2011-12-01

    Different studies have analysed a wide range of use cases and scenarios for using IT-based services in homecare settings for elderly people. In most instances, the impact of such services has been studied using a one-dimensional approach, focusing on the benefits for either the patient or the health service provider. The objective of this contribution is to explore a model for identifying and understanding outcomes of IT-based homecare services from a multi-actor perspective. In order to better understand the state of the art in homecare informatics, we conducted a literature review. We use experiences from previous research in the area of informatics to develop the proposed model. The proposed model consists of four core activities, 'identify involved actors', 'understand consequences', 'clarify contingencies' and 'take corrective actions', plus one additional activity, 'brainstorming IT use'. The primary goal of innovating organisations, processes and services in homecare informatics today is to offer continued care and better decision support to both practitioners and patients, as well as effective distribution of resources. A multi-actor analysis perspective is needed to understand utility determination for the involved stakeholders.

  2. Nursing informatics and nursing ethics: addressing their disconnect through an enhanced TIGER-vision.

    PubMed

    Kaltoft, Mette Kjer

    2013-01-01

    All healthcare visions, including that of the TIGER (Technology-Informatics-Guiding-Educational-Reform) Initiative, envisage a crucial role for nursing. However, its 7 descriptive pillars do not address the disconnect between Nursing Informatics and Nursing Ethics and their distinct communities in the clinical-disciplinary landscape. Each sees itself as providing decision support by way of information inputs and ethical insights, respectively. Both have reasons - ideological, professional, institutional - for their task construction, but this simultaneously disables each from engaging fully in the point-of-(care)-decision. Increased pressure for translating 'evidence-based' research findings into 'ethically-sound', 'value-based' and 'patient-centered' practice requires rethinking the model implicit in conventional knowledge translation and informatics practice in all disciplines, including nursing. The aim is to aid 'how nurses and other health care scientists more clearly identify clinical and other relevant data that can be captured to inform future comparative effectiveness research'. A prescriptive, theory-based discipline of '(Nursing) Decisionics' expands the Grid for Volunteer Development of TIGER's newly launched virtual learning environment (VLE). This provides an enhanced TIGER-vision for educational reform to deliver ethically coherent, person-centered care transparently.

  3. A semi-automated workflow for biodiversity data retrieval, cleaning, and quality control

    PubMed Central

    Mathew, Cherian; Obst, Matthias; Vicario, Saverio; Haines, Robert; Williams, Alan R.; de Jong, Yde; Goble, Carole

    2014-01-01

    The compilation and cleaning of data needed for analyses and prediction of species distributions is a time-consuming process requiring a solid understanding of data formats and service APIs provided by biodiversity informatics infrastructures. We designed and implemented a Taverna-based Data Refinement Workflow which integrates taxonomic data retrieval, data cleaning, and data selection into a consistent, standards-based, and effective system hiding the complexity of underlying service infrastructures. The workflow can be freely used both locally and through a web-portal which does not require additional software installations by users. PMID:25535486
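
    The three stages the workflow integrates can be pictured as a simple pipeline. The sketch below is a minimal stand-in, not the Taverna workflow itself: the name-resolution "service" is stubbed with a lookup table, and all function names are invented for illustration.

```python
# Minimal stand-in for the workflow's three stages; the real system is a
# Taverna workflow calling live biodiversity services.
def retrieve(names):
    """Stand-in for a taxonomic name-resolution service call."""
    resolved = {"Puma concolor": "Puma concolor (Linnaeus, 1771)"}  # stub
    return [{"name": n, "accepted": resolved.get(n)} for n in names]

def clean(records):
    """Drop records that failed resolution or duplicate an accepted name."""
    seen, out = set(), []
    for r in records:
        if r["accepted"] and r["accepted"] not in seen:
            seen.add(r["accepted"])
            out.append(r)
    return out

def select(records, predicate):
    """User-driven filtering, the workflow's final refinement step."""
    return [r for r in records if predicate(r)]

raw = ["Puma concolor", "Puma concolor", "Felis imaginarius"]
print(select(clean(retrieve(raw)), lambda r: "Puma" in r["accepted"]))
```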

  4. Architecture of a Biomedical Informatics Research Data Management Pipeline.

    PubMed

    Bauer, Christian R; Umbach, Nadine; Baum, Benjamin; Buckow, Karoline; Franke, Thomas; Grütz, Romanus; Gusky, Linda; Nussbeck, Sara Yasemin; Quade, Matthias; Rey, Sabine; Rottmann, Thorsten; Rienhoff, Otto; Sax, Ulrich

    2016-01-01

    In University Medical Centers, heterogeneous data are generated that cannot always be clearly attributed to patient care or biomedical research. Each data set has to adhere to distinct intrinsic and operational quality standards. However, an infrastructure can be sustainable only if high-quality data, tools to work with the data, and, most importantly, guidelines and rules for how to work with the data are adequately addressed. Here, we present the IT Research Architecture of the University Medical Center Göttingen and describe our ten years' experience and lessons learned with infrastructures in networked medical research.

  5. Virtual Reality, Telemedicine, Web and Data Processing Innovations in Medical and Psychiatric Education and Clinical Care

    ERIC Educational Resources Information Center

    Hilty, Donald M.; Alverson, Dale C.; Alpert, Jonathan E.; Tong, Lowell; Sagduyu, Kemal; Boland, Robert J.; Mostaghimi, Arash; Leamon, Martin L.; Fidler, Don; Yellowlees, Peter M.

    2006-01-01

    Objective: This article highlights technology innovations in psychiatric and medical education, including applications from other fields. Method: The authors review the literature and poll educators and informatics faculty for novel programs relevant to psychiatric education. Results: The introduction of new technologies requires skill at…

  6. The "ICP OnLine": "Jeux sans frontieres" on the CyberCampus.

    ERIC Educational Resources Information Center

    Hutchison, Chris

    1995-01-01

    Focuses on an ICP (Inter-University Cooperation Programme) OnLine in the area of Informatics/Artificial Intelligence. Notes that ICP is accessed through the World Wide Web and was launched in the Summer of 1994 to provide "virtual mobility." Discusses the program's objectives, student experiences, and the risks and opportunities afforded by…

  7. Eco-informatics and natural resource management

    USGS Publications Warehouse

    Cushing, J.B.; Wilson, T.; Borning, A.; Delcambre, L.; Bowker, G.; Frame, M.; Schnase, J.; Sonntag, W.; Fulop, J.; Hert, C.; Hovy, E.; Jones, J.; Landis, E.; Schweik, C.; Brandt, L.; Gregg, V.; Spengler, S.

    2006-01-01

    This project highlight reports on the 2004 workshop [1], as well as follow-up activities in 2005 and 2006, regarding how informatics tools can help manage natural resources and inform policy decisions. The workshop was sponsored jointly by the NSF, NBII, NASA, and EPA, and attended by practitioners from government and non-government agencies and by university researchers from the computer, social, and ecological sciences. The workshop presented the significant information technology (IT) problems that resource managers face when integrating ecological or environmental information to make decisions. These IT problems fall into five categories: data presentation, data gaps, tools, indicators, and policy making and implementation. To alleviate such problems, we recommend informatics research in four IT areas, as defined in this abstract and our final report: modeling and simulation, data quality, information integration and ontologies, and social and human aspects. Additionally, we recommend that funding agencies provide infrastructure and adjust their funding habits to assure cycles of innovation in the domain. Follow-on activities to the workshop subsequent to dg.o 2005 included: an invited talk presenting workshop results at DILS 2005, publication of the workshop final report by the NBII [1], and a poster at the NBII All Hands Meeting (Oct. 2005). We also expect a special issue of the JIIS to appear in 2006 that addresses some of these questions. As we go to press, no solicitation by funding agencies has as yet been published, but various NASA, NBII, and NSF cyber-infrastructure and DG research efforts now underway address the above issues.

  8. Virtual biomedical universities and e-learning.

    PubMed

    Beux, P Le; Fieschi, M

    2007-01-01

    In this special issue on virtual biomedical universities and e-learning we survey the principal existing teaching applications of ICT used in medical schools around the world. We identify five types of research and experiments in this field of medical e-learning and virtual medical universities. The topics of this special issue range from educational computer programs that create and simulate virtual patients with a wide variety of medical conditions, in different clinical settings and over different time frames, to the use of distance learning in developed and developing countries for training clinicians in medical informatics. We also present the necessity of good indexing and search tools for training resources, together with workflows to manage the multiple-source content of virtual campuses or universities and virtual digital video resources. Special attention is given to training new generations of clinicians in ICT tools and methods to be used in clinical settings as well as in medical schools.

  9. What’s Past is Prologue: A Scoping Review of Recent Public Health and Global Health Informatics Literature

    PubMed Central

    Dixon, Brian E.; Pina, Jamie; Kharrazi, Hadi; Gharghabi, Fardad; Richards, Janise

    2015-01-01

    Objective: To categorize and describe the public health informatics (PHI) and global health informatics (GHI) literature between 2012 and 2014. Methods: We conducted a semi-systematic review of articles published between January 2012 and September 2014 where information and communications technologies (ICT) was a primary subject of the study or a main component of the study methodology. Additional inclusion and exclusion criteria were used to filter PHI and GHI articles from the larger biomedical informatics domain. Articles were identified using MEDLINE as well as personal bibliographies from members of the American Medical Informatics Association PHI and GHI working groups. Results: A total of 85 PHI articles and 282 GHI articles were identified. While systems in PHI continue to support surveillance activities, we identified a shift towards support for prevention, environmental health, and public health care services. Furthermore, articles from the U.S. reveal a shift towards PHI applications at state and local levels. GHI articles focused on telemedicine, mHealth and eHealth applications. The development of adequate infrastructure to support ICT remains a challenge, although we identified a small but growing set of articles that measure the impact of ICT on clinical outcomes. Discussion: There is evidence of growth with respect to both implementation of information systems within the public health enterprise as well as a widening of scope within each informatics discipline. Yet the articles also illuminate the need for more primary research studies on what works and what does not as both searches yielded small numbers of primary, empirical articles. Conclusion: While the body of knowledge around PHI and GHI continues to mature, additional studies of higher quality are needed to generate the robust evidence base needed to support continued investment in ICT by governmental health agencies. PMID:26392846

  10. Clinical research informatics and electronic health record data.

    PubMed

    Richesson, R L; Horvath, M M; Rusincovitch, S A

    2014-08-15

    The goal of this survey is to discuss the impact of the growing availability of electronic health record (EHR) data on the evolving field of Clinical Research Informatics (CRI), which is the union of biomedical research and informatics. Major challenges for the use of EHR-derived data for research include the lack of standard methods for ensuring that data quality, completeness, and provenance are sufficient to assess the appropriateness of its use for research. Areas that need continued emphasis include methods for integrating data from heterogeneous sources, guidelines (including explicit phenotype definitions) for using these data in both pragmatic clinical trials and observational investigations, strong data governance to better understand and control quality of enterprise data, and promotion of national standards for representing and using clinical data. The use of EHR data has become a priority in CRI. Awareness of underlying clinical data collection processes will be essential in order to leverage these data for clinical research and patient care, and will require multi-disciplinary teams representing clinical research, informatics, and healthcare operations. Considerations for the use of EHR data provide a starting point for practical applications and a CRI research agenda, which will be facilitated by CRI's key role in the infrastructure of a learning healthcare system.

  11. Clinical Research Informatics and Electronic Health Record Data

    PubMed Central

    Horvath, M. M.; Rusincovitch, S. A.

    2014-01-01

    Summary Objectives The goal of this survey is to discuss the impact of the growing availability of electronic health record (EHR) data on the evolving field of Clinical Research Informatics (CRI), which is the union of biomedical research and informatics. Results Major challenges for the use of EHR-derived data for research include the lack of standard methods for ensuring that data quality, completeness, and provenance are sufficient to assess the appropriateness of its use for research. Areas that need continued emphasis include methods for integrating data from heterogeneous sources, guidelines (including explicit phenotype definitions) for using these data in both pragmatic clinical trials and observational investigations, strong data governance to better understand and control quality of enterprise data, and promotion of national standards for representing and using clinical data. Conclusions The use of EHR data has become a priority in CRI. Awareness of underlying clinical data collection processes will be essential in order to leverage these data for clinical research and patient care, and will require multi-disciplinary teams representing clinical research, informatics, and healthcare operations. Considerations for the use of EHR data provide a starting point for practical applications and a CRI research agenda, which will be facilitated by CRI’s key role in the infrastructure of a learning healthcare system. PMID:25123746

  12. The Role of GIS and Data Librarians in Cyber-infrastructure Support and Governance

    NASA Astrophysics Data System (ADS)

    Branch, B. D.

    2012-12-01

    A governance road-map for cyber-infrastructure in the geosciences will include an intentional librarian core with technical skills that include GIS and open-source support for data curation across all aspects of data life cycle management. Per Executive Order 12906 and other policy, spatial data, spatial literacy, and curation are critical near-term cyber-infrastructure needs. A formal earth science and space informatics librarian may be an outcome of such development. From e-science to e-research, STEM pipelines need librarians as critical data intermediaries in technical assistance and collaboration efforts with scientists' data and outreach needs. Future training should advocate the trans-disciplinary data science and policy skills that will be necessary for data management support and procurement.

  13. Pharmacogenomics and Nanotechnology Toward Advancing Personalized Medicine

    NASA Astrophysics Data System (ADS)

    Vizirianakis, Ioannis S.; Amanatiadou, Elsa P.

    The target of personalized medicine to achieve major benefits for all patients in terms of diagnosis and drug delivery can be facilitated by creating a sincere multidisciplinary information-based infrastructure in health care. To this end, nanotechnology, pharmacogenomics, and informatics can advance the utility of personalized medicine, enable clinical translation of genomic knowledge, empower the healthcare environment, and ultimately improve clinical outcomes.

  14. Human genome program report. Part 2, 1996 research abstracts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    This report contains Part 2 of a two-part report to reflect research and progress in the US Department of Energy Human Genome Program from 1994 through 1996, with specified updates made just before publication. Part 2 consists of 1996 research abstracts. Attention is focused on the following: sequencing; mapping; informatics; ethical, legal, and social issues; infrastructure; and small business innovation research.

  15. Distributed medical intelligence. A systems approach for developing an integrative health care information distribution infrastructure.

    PubMed

    Warner, D; Sale, J; Viirre, E

    1996-01-01

    Recent trends in healthcare informatics and telemedicine indicate that systems are being developed with a primary focus on technology and business, not on the process of medicine itself. Distributed Medical Intelligence promotes the development of an integrative medical communication system which addresses the process of providing expert medical knowledge to the point of need.

  16. Human Genome Program Report. Part 2, 1996 Research Abstracts

    DOE R&D Accomplishments Database

    1997-11-01

    This report contains Part 2 of a two-part report to reflect research and progress in the US Department of Energy Human Genome Program from 1994 through 1996, with specified updates made just before publication. Part 2 consists of 1996 research abstracts. Attention is focused on the following: sequencing; mapping; informatics; ethical, legal, and social issues; infrastructure; and small business innovation research.

  17. Development of an informatics infrastructure for data exchange of biomolecular simulations: architecture, data models and ontology

    PubMed Central

    Thibault, J. C.; Roe, D. R.; Eilbeck, K.; Cheatham, T. E.; Facelli, J. C.

    2015-01-01

    Biomolecular simulations aim to simulate structure, dynamics, interactions, and energetics of complex biomolecular systems. With the recent advances in hardware, it is now possible to use more complex and accurate models, but also to reach time scales that are biologically significant. Molecular simulations have become a standard tool for toxicology and pharmacology research, but organizing and sharing data – both within the same organization and among different ones – remains a substantial challenge. In this paper we review our recent work leading to the development of a comprehensive informatics infrastructure to facilitate the organization and exchange of biomolecular simulations data. Our efforts include the design of data models and dictionary tools that allow the standardization of the metadata used to describe the biomolecular simulations, the development of a thesaurus and ontology for computational reasoning when searching for biomolecular simulations in distributed environments, and the development of systems based on these models to manage and share the data at large scale (iBIOMES) and within smaller groups of researchers at laboratory scale (iBIOMES Lite), taking advantage of the standardization of the metadata used to describe biomolecular simulations. PMID:26387907
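
    A minimal sketch of what a standardized simulation metadata record in the spirit of this infrastructure might look like is shown below. All field names and the validation rule are invented for illustration; the published data dictionary defines the authoritative terms.

```python
# Invented example of a standardized metadata record for one biomolecular
# simulation; field names are illustrative, not the published data dictionary.
import json

record = {
    "id": "sim-000123",
    "system": {"molecule": "ubiquitin", "solvent": "TIP3P", "atoms": 23451},
    "method": {"engine": "AMBER", "ensemble": "NPT", "timestep_fs": 2.0},
    "run": {"length_ns": 100.0, "temperature_K": 300.0},
    "provenance": {"lab": "example-lab", "created": "2015-01-01"},
}

def validate(rec, required=("id", "system", "method", "run")):
    """A dictionary-driven completeness check over the agreed metadata terms."""
    missing = [k for k in required if k not in rec]
    if missing:
        raise ValueError(f"metadata incomplete, missing: {missing}")
    return rec

print(json.dumps(validate(record), indent=2))
```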

  18. Development of an informatics infrastructure for data exchange of biomolecular simulations: Architecture, data models and ontology.

    PubMed

    Thibault, J C; Roe, D R; Eilbeck, K; Cheatham, T E; Facelli, J C

    2015-01-01

    Biomolecular simulations aim to simulate structure, dynamics, interactions, and energetics of complex biomolecular systems. With the recent advances in hardware, it is now possible to use more complex and accurate models, but also to reach time scales that are biologically significant. Molecular simulations have become a standard tool for toxicology and pharmacology research, but organizing and sharing data - both within the same organization and among different ones - remains a substantial challenge. In this paper we review our recent work leading to the development of a comprehensive informatics infrastructure to facilitate the organization and exchange of biomolecular simulations data. Our efforts include the design of data models and dictionary tools that allow the standardization of the metadata used to describe the biomolecular simulations, the development of a thesaurus and ontology for computational reasoning when searching for biomolecular simulations in distributed environments, and the development of systems based on these models to manage and share the data at large scale (iBIOMES) and within smaller groups of researchers at laboratory scale (iBIOMES Lite), taking advantage of the standardization of the metadata used to describe biomolecular simulations.

  19. A case analysis of INFOMED: the Cuban national health care telecommunications network and portal.

    PubMed

    Séror, Ann C

    2006-01-27

    The Internet and telecommunications technologies contribute to national health care system infrastructures and extend global health care services markets. The Cuban national health care system offers a model to show how a national information portal can contribute to system integration, including research, education, and service delivery as well as international trade in products and services. The objectives of this paper are (1) to present the context of the Cuban national health care system since the revolution in 1959, (2) to identify virtual institutional infrastructures of the system associated with the Cuban National Health Care Telecommunications Network and Portal (INFOMED), and (3) to show how they contribute to Cuban trade in international health care service markets. Qualitative case research methods were used to identify the integrated virtual infrastructure of INFOMED and to show how it reflects socialist ideology. Virtual institutional infrastructures include electronic medical and information services and the structure of national networks linking such services. Analysis of INFOMED infrastructures shows integration of health care information, research, and education as well as the interface between Cuban national information networks and the global Internet. System control mechanisms include horizontal integration and coordination through virtual institutions linked through INFOMED, and vertical control through the Ministry of Public Health and the government hierarchy. Telecommunications technology serves as a foundation for a dual market structure differentiating domestic services from international trade. INFOMED is a model of interest for integrating health care information, research, education, and services. The virtual infrastructures linked through INFOMED support the diffusion of Cuban health care products and services in global markets. Transferability of this model is contingent upon ideology and interpretation of values such as individual intellectual property and confidentiality of individual health information. Future research should focus on examination of these issues and their consequences for global markets in health care.

  20. A Case Analysis of INFOMED: The Cuban National Health Care Telecommunications Network and Portal

    PubMed Central

    2006-01-01

    Background The Internet and telecommunications technologies contribute to national health care system infrastructures and extend global health care services markets. The Cuban national health care system offers a model to show how a national information portal can contribute to system integration, including research, education, and service delivery as well as international trade in products and services. Objective The objectives of this paper are (1) to present the context of the Cuban national health care system since the revolution in 1959, (2) to identify virtual institutional infrastructures of the system associated with the Cuban National Health Care Telecommunications Network and Portal (INFOMED), and (3) to show how they contribute to Cuban trade in international health care service markets. Methods Qualitative case research methods were used to identify the integrated virtual infrastructure of INFOMED and to show how it reflects socialist ideology. Virtual institutional infrastructures include electronic medical and information services and the structure of national networks linking such services. Results Analysis of INFOMED infrastructures shows integration of health care information, research, and education as well as the interface between Cuban national information networks and the global Internet. System control mechanisms include horizontal integration and coordination through virtual institutions linked through INFOMED, and vertical control through the Ministry of Public Health and the government hierarchy. Telecommunications technology serves as a foundation for a dual market structure differentiating domestic services from international trade. Conclusions INFOMED is a model of interest for integrating health care information, research, education, and services. The virtual infrastructures linked through INFOMED support the diffusion of Cuban health care products and services in global markets. Transferability of this model is contingent upon ideology and interpretation of values such as individual intellectual property and confidentiality of individual health information. Future research should focus on examination of these issues and their consequences for global markets in health care. PMID:16585025

  1. Technology Informatics Guiding Education Reform (TIGER). The Future of Interprofessional Education Is Here!

    PubMed

    Kuo, Ming-Chuan; Ball, Marion; Skiba, Diane J; Marin, Heimar; Shaw, Toria; Chang, Polun

    2018-01-01

    This session will describe the TIGER Initiative journey, its evolution and accomplishments nationally and internationally. A powerful demonstration of the TIGER Virtual Learning Environment (VLE) will be highlighted along with case studies from around the world, with emphasis on global competencies and opportunities for engagement in all current TIGER activities and future plans.

  2. Student Activity and Learning Outcomes in a Virtual Learning Environment

    ERIC Educational Resources Information Center

    Romanov, Kalle; Nevgi, Anne

    2008-01-01

    The aim of the study was to explore the relationship between degree of participation and learning outcomes in an e-learning course on medical informatics. Overall activity in using course materials and degree of participation in the discussion forums of an online course were studied among 39 medical students. Students were able to utilise the…

  3. 77 FR 72673 - Critical Infrastructure Protection and Resilience Month, 2012

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-05

    .... Cyber incidents can have devastating consequences on both physical and virtual infrastructure, which is... work within existing authorities to fortify our country against cyber risks, comprehensive legislation remains essential to improving infrastructure security, enhancing cyber information sharing between...

  4. Creating technical heritage object replicas in a virtual environment

    NASA Astrophysics Data System (ADS)

    Egorova, Olga; Shcherbinin, Dmitry

    2016-03-01

    The paper presents innovative informatics methods for creating virtual technical heritage replicas, which are of significant scientific and practical importance not only to researchers but to the public in general. By performing 3D modeling and animation of aircraft, spaceships, architectural-engineering buildings, and other technical objects, learning is achieved while promoting the preservation of the replicas for future generations. Modern approaches based on the wide usage of computer technologies attract a greater number of young people to explore the history of science and technology and renew their interest in the field of mechanical engineering.

  5. Virtual reality, telemedicine, web and data processing innovations in medical and psychiatric education and clinical care.

    PubMed

    Hilty, Donald M; Alverson, Dale C; Alpert, Jonathan E; Tong, Lowell; Sagduyu, Kemal; Boland, Robert J; Mostaghimi, Arash; Leamon, Martin L; Fidler, Don; Yellowlees, Peter M

    2006-01-01

    This article highlights technology innovations in psychiatric and medical education, including applications from other fields. The authors review the literature and poll educators and informatics faculty for novel programs relevant to psychiatric education. The introduction of new technologies requires skill at implementation and evaluation to assess the pros and cons. There is a significant body of literature regarding virtual reality and simulation, including assessment of outcomes, but other innovations are not well studied. Innovations, like other uses of technology, require collaboration between parties and integration within the educational framework of an institution.

  6. Evaluation of a teaching strategy based on integration of clinical subjects, virtual autopsy, pathology museum, and digital microscopy for medical students.

    PubMed

    Diaz-Perez, Julio A; Raju, Sharat; Echeverri, Jorge H

    2014-01-01

    Learning pathology is fundamental for a successful medical practice. In recent years, medical education has undergone a profound transformation toward the development of an integrated curriculum incorporating both basic science and clinical material. Simultaneously, there has been a shift from a magisterial teaching approach to one centered around problem-based learning. Nowadays, informatics tools are expected to help better implement these strategies. We applied and evaluated a new teaching method based on an active combination of clinical problems, gross pathology, histopathology, and autopsy pathology, all delivered through informatics tools, to teach a group of medical students at the Universidad de Santander, Colombia. Ninety-four medical students were followed in two consecutive semesters. Students were randomized to receive teaching either through traditional methodology or through the new integrated approach. There was no significant difference between the intervention group and the control group at baseline. At the end of the study, the scores in the intervention group were significantly higher compared to the control group (3.91/5.0 vs. 3.33/5.0, P = 0.0008). Students and tutors endorsed the benefits of the integrated approach. Participants were very satisfied with this training approach and rated the program an 8.7 out of 10, on average. This study confirms that an integrated curriculum utilizing informatics systems provides an excellent opportunity to associate pathology with clinical medicine early in the training of medical students. This is made possible by the use of virtual microscopy and digital imaging.

  7. A near miss: the importance of context in a public health informatics project in a New Zealand case study.

    PubMed

    Wells, Stewart; Bullen, Chris

    2008-01-01

    This article describes the near failure of an information technology (IT) system designed to support a government-funded, primary care-based hepatitis B screening program in New Zealand. Qualitative methods were used to collect data and construct an explanatory model. Multiple incorrect assumptions were made about participants, primary care workflows and IT capacity, software vendor user knowledge, and the health IT infrastructure. Political factors delayed system development and it was implemented untested, almost failing. An intensive rescue strategy included system modifications, relaxation of data validity rules, close engagement with software vendors, and provision of intensive on-site user support. This case study demonstrates that consideration of the social, political, technological, and health care contexts is important for successful implementation of public health informatics projects.

  8. GIMI: the past, the present and the future.

    PubMed

    Simpson, Andrew; Power, David; Russell, Douglas; Slaymaker, Mark; Bailey, Vernon; Tromans, Chris; Brady, Michael; Tarassenko, Lionel

    2010-08-28

    In keeping with the theme of this year's e-Science All Hands Meeting--past, present and future--we consider the motivation for, the current status of, and the future directions for, the technologies developed within the GIMI (Generic Infrastructure for Medical Informatics) project. This analysis provides insights into how some key problems in data federation may be addressed. GIMI was funded by the UK's Technology Strategy Board with the intention of developing a service-oriented framework to facilitate the secure sharing and aggregation of heterogeneous data from disparate sources to support a range of healthcare applications. The project, which was led by the University of Oxford, involved collaboration from the National Cancer Research Institute Informatics Initiative, Loughborough University, University College London, t+ Medical, Siemens Molecular Imaging and IBM UK.

  9. Network Monitoring in the age of the Cloud

    NASA Astrophysics Data System (ADS)

    Ciuffoletti, Augusto

    Network virtualization plays a relevant role in provisioning an Infrastructure as a Service (IaaS), implementing the fabric that interconnects virtual components. We identify the standard protocol IEEE 802.1Q, which describes Virtual LAN (VLAN) functionality, as a cornerstone of this architecture.
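
    IEEE 802.1Q tags a frame by inserting, after the two MAC addresses, the TPID value 0x8100 followed by a 16-bit tag control field carrying a 3-bit priority (PCP), a drop-eligible bit (DEI), and a 12-bit VLAN ID. The short sketch below parses such a tag; it is a generic illustration, unrelated to the paper's monitoring architecture.

```python
# Parse the IEEE 802.1Q tag of an Ethernet frame: after the two 6-byte MAC
# addresses comes TPID 0x8100, then a 16-bit TCI holding priority (PCP),
# drop-eligible indicator (DEI), and VLAN ID (VID).
import struct

def parse_vlan_tag(frame: bytes):
    tpid, tci = struct.unpack_from("!HH", frame, 12)  # skip dst + src MAC
    if tpid != 0x8100:
        return None  # untagged frame
    return {"pcp": tci >> 13, "dei": (tci >> 12) & 1, "vid": tci & 0x0FFF}

# Example: a frame tagged with VLAN 42 at priority 5
frame = bytes(12) + struct.pack("!HH", 0x8100, (5 << 13) | 42) + bytes(46)
print(parse_vlan_tag(frame))  # {'pcp': 5, 'dei': 0, 'vid': 42}
```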

  10. Migrating EO/IR sensors to cloud-based infrastructure as service architectures

    NASA Astrophysics Data System (ADS)

    Berglie, Stephen T.; Webster, Steven; May, Christopher M.

    2014-06-01

    The Night Vision Image Generator (NVIG), a product of US Army RDECOM CERDEC NVESD, is a visualization tool used widely throughout Army simulation environments to provide fully attributed, synthesized, full-motion video using physics-based sensor and environmental effects. The NVIG relies heavily on contemporary hardware-based acceleration and GPU processing techniques, which push the envelope of both enterprise and commodity-level hypervisor support for providing virtual machines with direct access to hardware resources. The NVIG has successfully been integrated into fully virtual environments where system architectures leverage cloud-based technologies to various extents in order to streamline infrastructure and service management. This paper details the challenges presented to engineers seeking to migrate GPU-bound processes, such as the NVIG, to virtual machines and, ultimately, cloud-based Infrastructure as a Service (IaaS) architectures. In addition, it presents the path that led to success for the NVIG. A brief overview of cloud-based infrastructure management tool sets is provided, and several virtual desktop solutions are outlined. A discrimination is made between general-purpose virtual desktop technologies and technologies that expose GPU-specific capabilities, including direct rendering and hardware-based video encoding. Candidate hypervisor/virtual machine configurations that nominally satisfy the virtualized hardware-level GPU requirements of the NVIG are presented, and each is subsequently reviewed in light of its implications on higher-level cloud management techniques. Implementation details are included from the hardware level, through the operating system, to the 3D graphics APIs required by the NVIG and similar GPU-bound tools.

  11. Evolution of A Distributed Live, Virtual, Constructive Environment for Human in the Loop Unmanned Aircraft Testing

    NASA Technical Reports Server (NTRS)

    Murphy, James R.; Otto, Neil M.

    2017-01-01

    NASA's Unmanned Aircraft Systems Integration in the National Airspace System Project is conducting human-in-the-loop simulations and flight testing intended to reduce barriers associated with enabling routine airspace access for unmanned aircraft. The primary focus of these tests is the interaction of the unmanned aircraft pilot with the display of detect-and-avoid alerting and guidance information. The project's integrated test and evaluation team was charged with developing the test infrastructure. As with any development effort, compromises in the underlying system architecture and design were made to allow for the rapid prototyping and open-ended nature of the research. In order to accommodate these design choices, a distributed test environment was developed incorporating Live, Virtual, Constructive (LVC) concepts. The LVC components form the core infrastructure that supports simulation of UAS operations by integrating live and virtual aircraft in a realistic air traffic environment. This LVC infrastructure enables efficient testing by leveraging the use of existing assets distributed across multiple NASA Centers. Using standard LVC concepts enables future integration with existing simulation infrastructure.

  12. Evolution of A Distributed Live, Virtual, Constructive Environment for Human in the Loop Unmanned Aircraft Testing

    NASA Technical Reports Server (NTRS)

    Murphy, Jim; Otto, Neil

    2017-01-01

    NASA's Unmanned Aircraft Systems Integration in the National Airspace System Project is conducting human-in-the-loop simulations and flight testing intended to reduce barriers associated with enabling routine airspace access for unmanned aircraft. The primary focus of these tests is the interaction of the unmanned aircraft pilot with the display of detect-and-avoid alerting and guidance information. The project's integrated test and evaluation team was charged with developing the test infrastructure. As with any development effort, compromises in the underlying system architecture and design were made to allow for the rapid prototyping and open-ended nature of the research. In order to accommodate these design choices, a distributed test environment was developed incorporating Live, Virtual, Constructive (LVC) concepts. The LVC components form the core infrastructure that supports simulation of UAS operations by integrating live and virtual aircraft in a realistic air traffic environment. This LVC infrastructure enables efficient testing by leveraging the use of existing assets distributed across multiple NASA Centers. Using standard LVC concepts enables future integration with existing simulation infrastructure.

  13. Tool Integration Framework for Bio-Informatics

    DTIC Science & Technology

    2007-04-01

    Report fragments only; recoverable content: the framework provides a Java NetBeans-based Integrated Development Environment (IDE) for developing modules and packaging computational tools; an Eclipse front-end was later integrated for desktop integration, chosen over NetBeans owing to higher acceptance and better infrastructure; Dashboard 5.0 ran with NetBeans IDE 3.6, requiring Java Runtime 1.4 on a Windows XP machine.

  14. An "integrated health neighbourhood" framework to optimise the use of EHR data.

    PubMed

    Liaw, Siaw-Teng; De Lusignan, Simon

    2016-10-04

    General practice should become the hub of integrated health neighbourhoods (IHNs), which involves sharing of information to ensure that medical homes are also part of learning organisations that use electronic health record (EHR) data for care, decision making, teaching and learning, quality improvement and research. The IHN is defined as the primary and ambulatory care services in a locality that relate largely to a single hospital-based secondary care service provider; it is the logical denominator and unit of comparison for the optimal use of EHR data and health information exchange (HIE) to facilitate integration and coordination of care. Its size may vary based on the geography and requirements of the population, for example between city, suburban and rural areas. The conceptual framework includes context; integration of data, information and knowledge; integration of clinical workflow and practice; and inter-professional integration to ensure coordinated shared care that delivers safe and effective services that are equitable, accessible and culturally respectful. We illustrate how this HIE-supported IHN vision may be achieved with an Australian case study demonstrating the integration of linked pseudonymised records with knowledge- and evidence-based guidelines using semantic web tools and informatics-based methods, researching causal links between data quality and quality of care, and identifying the key issues to address. The data presented in this paper form part of the evaluation of the informatics infrastructure - HIE and data repository - for its reliability and utility in supporting the IHN. An IHN can only be created if the necessary health informatics infrastructure is put in place. Integrated care may struggle to be effective without HIE.
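
    One common way to link pseudonymised records, offered here only as a generic illustration and not as the method used in the cited study, is a keyed HMAC over a normalised identifier: sites sharing the secret key derive identical pseudonyms without ever exchanging raw identifiers.

```python
# Generic pseudonymisation for record linkage (not the cited study's method):
# a keyed HMAC over a normalised identifier, so sites sharing the secret key
# derive the same pseudonym without exchanging raw identifiers.
import hashlib
import hmac

SECRET_KEY = b"shared-linkage-key"  # in practice held by a trusted third party

def pseudonym(identifier: str) -> str:
    normalised = identifier.strip().lower()
    return hmac.new(SECRET_KEY, normalised.encode(), hashlib.sha256).hexdigest()

# The same person yields the same pseudonym at both sites:
print(pseudonym("NHS1234567"))
print(pseudonym(" nhs1234567 "))  # normalisation makes these match
```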

  15. RMS: a platform for managing cross-disciplinary and multi-institutional research project collaboration.

    PubMed

    Luo, Jake; Apperson-Hansen, Carolyn; Pelfrey, Clara M; Zhang, Guo-Qiang

    2014-11-30

    Cross-institutional cross-disciplinary collaboration has become a trend as researchers move toward building more productive and innovative teams for scientific research. Research collaboration is significantly changing the organizational structure and strategies used in the clinical and translational science domain. However, due to the obstacles of diverse administrative structures, differences in area of expertise, and communication barriers, establishing and managing a cross-institutional research project is still a challenging task. We address these challenges by creating an integrated informatics platform to reduce the barriers to biomedical research collaboration. The Request Management System (RMS) is an informatics infrastructure designed to transform a patchwork of expertise and resources into an integrated support network. The RMS facilitates investigators' initiation of new collaborative projects and supports the management of the collaboration process. In RMS, experts and their knowledge areas are categorized and managed structurally to provide consistent service. A role-based collaborative workflow is tightly integrated with domain experts and services to streamline and monitor the life-cycle of a research project. The RMS has so far tracked over 1,500 investigators with over 4,800 tasks. The research network based on the data collected in RMS illustrated that investigators' collaborative projects increased nearly threefold from 2009 to 2012. Our experience with RMS indicates that the platform reduces barriers for cross-institutional collaboration of biomedical research projects. Building a new generation of infrastructure to enhance cross-disciplinary and multi-institutional collaboration has become an important yet challenging task. In this paper, we share the experience of developing and utilizing a collaborative project management system. The results of this study demonstrate that a web-based integrated informatics platform can facilitate and increase research interactions among investigators.

  16. A national-scale authentication infrastructure.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butler, R.; Engert, D.; Foster, I.

    2000-12-01

    Today, individuals and institutions in science and industry are increasingly forming virtual organizations to pool resources and tackle a common goal. Participants in virtual organizations commonly need to share resources such as data archives, computer cycles, and networks - resources usually available only with restrictions based on the requested resource's nature and the user's identity. Thus, any sharing mechanism must have the ability to authenticate the user's identity and determine if the user is authorized to request the resource. Virtual organizations tend to be fluid, however, so authentication mechanisms must be flexible and lightweight, allowing administrators to quickly establish and change resource-sharing arrangements. However, because virtual organizations complement rather than replace existing institutions, sharing mechanisms cannot change local policies and must allow individual institutions to maintain control over their own resources. Our group has created and deployed an authentication and authorization infrastructure that meets these requirements: the Grid Security Infrastructure. GSI offers secure single sign-ons and preserves site control over access policies and local security. It provides its own versions of common applications, such as FTP and remote login, and a programming interface for creating secure applications.
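
    GSI's authentication rests on X.509 certificates. As a generic illustration of certificate-based mutual authentication (not GSI itself, which adds proxy certificates for single sign-on and delegation), the sketch below configures mutual TLS with Python's standard ssl module; the file paths are placeholders.

```python
# Generic certificate-based mutual TLS, illustrating the kind of X.509
# authentication GSI builds on; GSI itself adds proxy certificates for
# single sign-on and delegation. File paths below are placeholders.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.load_verify_locations("ca-bundle.pem")             # CAs we trust
context.load_cert_chain("user-cert.pem", "user-key.pem")   # our own identity

def connect(host: str, port: int) -> ssl.SSLSocket:
    sock = socket.create_connection((host, port))
    tls = context.wrap_socket(sock, server_hostname=host)
    # If the server also demanded a client certificate, both sides have now
    # proven their identities; whether this user may access the resource is
    # a separate, site-local authorization decision.
    print("server subject:", tls.getpeercert()["subject"])
    return tls
```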

  17. Crosstalk-aware virtual network embedding over inter-datacenter optical networks with few-mode fibers

    NASA Astrophysics Data System (ADS)

    Huang, Haibin; Guo, Bingli; Li, Xin; Yin, Shan; Zhou, Yu; Huang, Shanguo

    2017-12-01

    Virtualization of datacenter (DC) infrastructures enables infrastructure providers (InPs) to provide novel services like virtual networks (VNs). Furthermore, optical networks have been employed to connect metro-scale geographically distributed DCs. The synergistic virtualization of DC infrastructures and optical networks enables efficient VN service over inter-DC optical networks (inter-DCONs). However, the capacity of the standard single-mode fibers (SSMFs) used is limited by their nonlinear characteristics. Thus, mode-division multiplexing (MDM) technology based on few-mode fibers (FMFs) can be employed to increase the capacity of optical networks. However, modal crosstalk (XT) introduced by optical fibers and components deployed in MDM optical networks impacts the performance of VN embedding (VNE) over inter-DCONs with FMFs. In this paper, we propose a XT-aware VNE mechanism over inter-DCONs with FMFs. The impact of XT is considered throughout the VNE procedures. The simulation results show that the proposed XT-aware VNE achieves better blocking probability and spectrum utilization than conventional VNE mechanisms.
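
    The core idea of crosstalk awareness can be sketched as a feasibility filter on candidate physical paths: accumulate a per-hop XT penalty and reject paths that exceed a tolerance. The linear accumulation model and all numeric values below are illustrative assumptions; the paper's model and heuristics are more detailed.

```python
# Hedged sketch of crosstalk-aware path filtering for virtual-link embedding;
# the per-hop XT figure, tolerance, and linear accumulation are assumptions.
import math

XT_PER_HOP_DB = -24.0    # assumed modal crosstalk added per FMF span
XT_TOLERANCE_DB = -20.0  # assumed maximum tolerable accumulated XT

def accumulated_xt_db(hops: int) -> float:
    # Crosstalk powers add: convert per-hop XT to linear, sum, back to dB.
    linear = hops * 10 ** (XT_PER_HOP_DB / 10)
    return 10 * math.log10(linear)

def feasible_paths(paths):
    """Keep only candidate physical paths whose accumulated XT is tolerable."""
    return [p for p in paths if accumulated_xt_db(len(p) - 1) <= XT_TOLERANCE_DB]

candidates = [["A", "B"], ["A", "C", "B"], ["A", "C", "D", "E", "B"]]
for p in feasible_paths(candidates):       # the 4-hop path is rejected
    print(p, round(accumulated_xt_db(len(p) - 1), 1), "dB")
```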

  18. Enabling Data Intensive Science through Service Oriented Science: Virtual Laboratories and Science Gateways

    NASA Astrophysics Data System (ADS)

    Lescinsky, D. T.; Wyborn, L. A.; Evans, B. J. K.; Allen, C.; Fraser, R.; Rankine, T.

    2014-12-01

    We present collaborative work on a generic, modular infrastructure for virtual laboratories (VLs, similar to science gateways) that combine online access to data, scientific code, and computing resources as services that support multiple data-intensive scientific computing needs across a wide range of science disciplines. We are leveraging access to 10+ PB of earth science data on Lustre filesystems at Australia's National Computational Infrastructure (NCI) Research Data Storage Infrastructure (RDSI) node, co-located with NCI's 1.2 PFlop Raijin supercomputer and a 3000-CPU-core research cloud. The development, maintenance and sustainability of VLs are best accomplished through modularisation and standardisation of interfaces between components. Our approach has been to break up tightly-coupled, specialised application packages into modules, with identified best techniques and algorithms repackaged either as data services or as scientific tools that are accessible across domains. The data services can be used to manipulate, visualise and transform multiple data types, whilst the scientific tools can be used in concert with multiple scientific codes. We are currently designing a scalable generic infrastructure that will handle scientific code as modularised services and thereby enable the rapid and easy deployment of new codes or versions of codes. The goal is to build open source libraries/collections of scientific tools, scripts and modelling codes that can be combined in specially designed deployments. Additional services in development include: provenance, publication of results, monitoring, workflow tools, etc. The generic VL infrastructure will be hosted at NCI, but can access alternative computing infrastructures (i.e., public/private cloud, HPC). The Virtual Geophysics Laboratory (VGL) was developed as a pilot project to demonstrate the underlying technology. This base is now being redesigned and generalised to develop a Virtual Hazards Impact and Risk Laboratory (VHIRL); any enhancements and new capabilities will be incorporated into a generic VL infrastructure. At the same time, we are scoping seven new VLs and, in the process, identifying other common components to prioritise and focus development.

  19. The Proximal Lilly Collection: Mapping, Exploring and Exploiting Feasible Chemical Space.

    PubMed

    Nicolaou, Christos A; Watson, Ian A; Hu, Hong; Wang, Jibo

    2016-07-25

    Venturing into the immensity of the small molecule universe to identify novel chemical structure is a much discussed objective of many methods proposed by the chemoinformatics community. To this end, numerous approaches using techniques from the fields of computational de novo design, virtual screening and reaction informatics, among others, have been proposed. Although in principle this objective is commendable, in practice there are several obstacles to useful exploitation of the chemical space. Prime among them are the sheer number of theoretically feasible compounds and the practical concern regarding the synthesizability of the chemical structures conceived using in silico methods. We present the Proximal Lilly Collection initiative implemented at Eli Lilly and Co. with the aims to (i) define the chemical space of small, drug-like compounds that could be synthesized using in-house resources and (ii) facilitate access to compounds in this large space for the purposes of ongoing drug discovery efforts. The implementation of PLC relies on coupling access to available synthetic knowledge and resources with chemo/reaction informatics techniques and tools developed for this purpose. We describe in detail the computational framework supporting this initiative and elaborate on the characteristics of the PLC virtual collection of compounds. As an example of the opportunities provided to drug discovery researchers by easy access to a large, realistically feasible virtual collection such as the PLC, we describe a recent application of the technology that led to the discovery of selective kinase inhibitors.

  20. An informatics model for guiding assembly of telemicrobiology workstations for malaria collaborative diagnostics using commodity products and open-source software.

    PubMed

    Suhanic, West; Crandall, Ian; Pennefather, Peter

    2009-07-17

    Deficits in clinical microbiology infrastructure exacerbate global infectious disease burdens. This paper examines how commodity computation, communication, and measurement products combined with open-source analysis and communication applications can be incorporated into laboratory medicine microbiology protocols. Those commodity components are all now sourceable globally. An informatics model is presented for guiding the use of low-cost commodity components and free software in the assembly of clinically useful and usable telemicrobiology workstations. The model incorporates two general principles: 1) collaborative diagnostics, where free and open communication and networking applications are used to link distributed collaborators for reciprocal assistance in organizing and interpreting digital diagnostic data; and 2) commodity engineering, which leverages globally available consumer electronics and open-source informatics applications, to build generic open systems that measure needed information in ways substantially equivalent to more complex proprietary systems. Routine microscopic examination of Giemsa and fluorescently stained blood smears for diagnosing malaria is used as an example to validate the model. The model is used as a constraint-based guide for the design, assembly, and testing of a functioning, open, and commoditized telemicroscopy system that supports distributed acquisition, exploration, analysis, interpretation, and reporting of digital microscopy images of stained malarial blood smears while also supporting remote diagnostic tracking, quality assessment and diagnostic process development. The open telemicroscopy workstation design and use-process described here can address clinical microbiology infrastructure deficits in an economically sound and sustainable manner. It can boost capacity to deal with comprehensive measurement of disease and care outcomes in individuals and groups in a distributed and collaborative fashion. The workstation enables local control over the creation and use of diagnostic data, while allowing for remote collaborative support of diagnostic data interpretation and tracking. It can enable global pooling of malaria disease information and the development of open, participatory, and adaptable laboratory medicine practices. The informatics model highlights how the larger issue of access to generic commoditized measurement, information processing, and communication technology in both high- and low-income countries can enable diagnostic services that are much less expensive, but substantially equivalent to those currently in use in high-income countries.

  1. The EDRN knowledge environment: an open source, scalable informatics platform for biological sciences research

    NASA Astrophysics Data System (ADS)

    Crichton, Daniel; Mahabal, Ashish; Anton, Kristen; Cinquini, Luca; Colbert, Maureen; Djorgovski, S. George; Kincaid, Heather; Kelly, Sean; Liu, David

    2017-05-01

    We describe here the Early Detection Research Network (EDRN) for Cancer's knowledge environment. It is an open source platform built by NASA's Jet Propulsion Laboratory with contributions from the California Institute of Technology and the Geisel School of Medicine at Dartmouth. It uses tools like Apache OODT, Plone, and Solr, and borrows heavily from JPL's Planetary Data System's ontological infrastructure. It has accumulated data on hundreds of thousands of biospecimens and serves over 1300 registered users across the National Cancer Institute (NCI). The scalable computing infrastructure is built so that we can reach out to other agencies, provide homogeneous access, and deliver seamless analytics support and bioinformatics tools through community engagement.
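
    Since the platform indexes its holdings with Apache Solr, a standard Solr "select" query is one way to retrieve records programmatically. The sketch below is hedged: the host, core name, and field names are placeholders, not EDRN's actual schema.

```python
# Hedged sketch of a standard Solr "select" query with the requests library;
# host, core name, and field names are placeholders, not EDRN's schema.
import requests

SOLR_URL = "http://localhost:8983/solr/biospecimens/select"  # placeholder

params = {
    "q": "organ:lung AND specimen_type:serum",  # hypothetical fields
    "rows": 20,
    "wt": "json",
}
response = requests.get(SOLR_URL, params=params, timeout=10)
response.raise_for_status()
for doc in response.json()["response"]["docs"]:
    print(doc.get("id"))
```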

  2. Selecting a Virtual World Platform for Learning

    ERIC Educational Resources Information Center

    Robbins, Russell W.; Butler, Brian S.

    2009-01-01

    Like any infrastructure technology, Virtual World (VW) platforms provide affordances that facilitate some activities and hinder others. Although it is theoretically possible for a VW platform to support all types of activities, designers make choices that lead technologies to be more or less suited for different learning objectives. Virtual World…

  3. Improving Virtual Collaborative Learning through Canonical Action Research

    ERIC Educational Resources Information Center

    Weber, Peter; Lehr, Christian; Gersch, Martin

    2014-01-01

    Virtual collaboration continues to gain in significance and is attracting attention also as virtual collaborative learning (VCL) in education. This paper addresses aspects of VCL that we identified as critical in a series of courses named "Net Economy": (1) technical infrastructure, (2) motivation and collaboration, and (3) assessment…

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alan M Kaplan

    This grant will be used to augment the equipment infrastructure and core support at the University of Kentucky and the University of Alabama, particularly in the areas of genomics/informatics, molecular analysis and cell separation. In addition, we will promote collaborative research interactions through scientific workshops and exchange of scientists, as well as joint exploration of the role of immune receptors as targets in autoimmunity and host defense, innate and adaptive immune responses, and mucosal immunity in host defense.

  5. Informatics and data quality at collaborative multicenter Breast and Colon Cancer Family Registries.

    PubMed

    McGarvey, Peter B; Ladwa, Sweta; Oberti, Mauricio; Dragomir, Anca Dana; Hedlund, Erin K; Tanenbaum, David Michael; Suzek, Baris E; Madhavan, Subha

    2012-06-01

    Quality control and harmonization of data are vital and challenging undertakings for any successful data coordination center, and a responsibility shared between the multiple sites that produce, integrate, and utilize the data. Here we describe a coordinated effort between scientists and data managers in the Cancer Family Registries to implement a data governance infrastructure consisting of both organizational and technical solutions. The technical solution uses a rule-based validation system that facilitates error detection and correction for data centers submitting data to a central informatics database. Validation rules comprise both standard checks on allowable values and a crosscheck of related database elements for logical and scientific consistency. Evaluation over a 2-year timeframe showed a significant decrease in the number of errors in the database and a concurrent increase in data consistency and accuracy.
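
    The two rule types the abstract names, standard checks on allowable values and crosschecks between related elements, can be sketched in a few lines. The field names and rules below are hypothetical stand-ins, not the registries' actual data dictionary.

      # Minimal sketch of rule-based validation: allowable-value checks plus a
      # crosscheck of related elements for logical consistency.
      ALLOWED_SEX = {"M", "F", "U"}

      def validate_record(rec):
          errors = []
          # Standard check: value must come from the controlled list
          if rec.get("sex") not in ALLOWED_SEX:
              errors.append(f"sex={rec.get('sex')!r} not in {sorted(ALLOWED_SEX)}")
          # Crosscheck: diagnosis cannot postdate enrollment (hypothetical rule)
          dx, enroll = rec.get("age_at_diagnosis"), rec.get("age_at_enrollment")
          if dx is not None and enroll is not None and dx > enroll:
              errors.append("age_at_diagnosis exceeds age_at_enrollment")
          return errors

      print(validate_record({"sex": "X", "age_at_diagnosis": 60,
                             "age_at_enrollment": 55}))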

  6. Informatics and data quality at collaborative multicenter Breast and Colon Cancer Family Registries

    PubMed Central

    McGarvey, Peter B; Ladwa, Sweta; Oberti, Mauricio; Dragomir, Anca Dana; Hedlund, Erin K; Tanenbaum, David Michael; Suzek, Baris E

    2012-01-01

    Quality control and harmonization of data are vital and challenging undertakings for any successful data coordination center, and a responsibility shared between the multiple sites that produce, integrate, and utilize the data. Here we describe a coordinated effort between scientists and data managers in the Cancer Family Registries to implement a data governance infrastructure consisting of both organizational and technical solutions. The technical solution uses a rule-based validation system that facilitates error detection and correction for data centers submitting data to a central informatics database. Validation rules comprise both standard checks on allowable values and a crosscheck of related database elements for logical and scientific consistency. Evaluation over a 2-year timeframe showed a significant decrease in the number of errors in the database and a concurrent increase in data consistency and accuracy. PMID:22323393

  7. Nursing domain of CI governance: recommendations for health IT adoption and optimization.

    PubMed

    Collins, Sarah A; Alexander, Dana; Moss, Jacqueline

    2015-05-01

    There is a lack of recommended models for clinical informatics (CI) governance that can facilitate successful health information technology implementation. To understand existing CI governance structures and provide a model with recommended roles, partnerships, and councils based on the perspectives of nursing informatics leaders, we conducted a cross-sectional study, administering a survey via telephone to facilitate semistructured interviews from June 2012 through November 2012. We interviewed 12 nursing informatics leaders across the United States currently serving in executive- or director-level CI roles at integrated health care systems that have pioneered electronic health records implementation projects. We found that the following 4 themes emerged: (1) Interprofessional partnerships are essential. (2) Critical role-based levels of practice and competencies need to be defined. (3) Integration into existing clinical infrastructure facilitates success. (4) CI governance is an evolving process. We described specific lessons learned and a model of CI governance with recommended roles, partnerships, and councils from the perspective of nursing informatics leaders. Applied CI work is highly interprofessional, with patient safety implications that heighten the need for best practice models for governance structures, adequate resource allocation, and role-based competencies. Overall, there is a notable lack of a centralized CI group composed of formally trained informaticians to provide expertise and promote adherence to informatics principles within EHR implementation governance structures. Our model of the nursing domain of CI governance with recommended roles, partnerships, and councils provides a starting point that should be further explored and validated. Not only can the model be used to understand, shape, and standardize roles, competencies, and structures within CI practice for nursing, it can be used within other clinical domains and by other informaticians.

  8. The Biodiversity Informatics Potential Index

    PubMed Central

    2011-01-01

    Background Biodiversity informatics is a relatively new discipline extending computer science in the context of biodiversity data, and its development to date has not been uniform throughout the world. Digitizing effort and capacity building are costly, and ways should be found to prioritize them rationally. The proposed 'Biodiversity Informatics Potential (BIP) Index' seeks to fulfill such a prioritization role. We propose that the potential for biodiversity informatics be assessed through three concepts: (a) the intrinsic biodiversity potential (the biological richness or ecological diversity) of a country; (b) the capacity of the country to generate biodiversity data records; and (c) the availability of technical infrastructure in a country for managing and publishing such records. Methods Broadly, the techniques used to construct the BIP Index were rank correlation, multiple regression analysis, principal components analysis and optimization by linear programming. We built the BIP Index by finding a parsimonious set of country-level human, economic and environmental variables that best predicted the availability of primary biodiversity data accessible through the Global Biodiversity Information Facility (GBIF) network, and constructing an optimized model with these variables. The model was then applied to all countries for which sufficient data existed, to obtain a score for each country. Countries were ranked according to that score. Results Many of the current GBIF participants ranked highly in the BIP Index, although some of them seemed not to have realized their biodiversity informatics potential. The BIP Index attributed low ranking to most non-participant countries; however, a few of them scored highly, suggesting that these would be high-return new participants if encouraged to contribute towards the GBIF mission of free and open access to biodiversity data. Conclusions The BIP Index could potentially help in (a) identifying countries most likely to contribute to filling gaps in digitized biodiversity data; (b) assisting countries potentially in need (for example mega-diverse) to mobilize resources and collect data that could be used in decision-making; and (c) allowing identification of which biodiversity informatics-resourced countries could afford to assist countries lacking in biodiversity informatics capacity, and which data-rich countries should benefit most from such help. PMID:22373233
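
    The modeling pipeline outlined under Methods can be sketched with standard tools: reduce the country-level indicator variables with principal components analysis, regress data availability on the components, and rank countries by predicted score. The data below are random placeholders, and the actual BIP Index used a different, optimized variable set.

      # Sketch of the BIP-style pipeline: PCA on country indicators, then a
      # regression predicting (log) GBIF record counts, then a ranking.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(0)
      X = rng.normal(size=(50, 6))    # 50 countries x 6 indicators (placeholder)
      y = rng.normal(size=50)         # log GBIF record counts (placeholder)

      components = PCA(n_components=3).fit_transform(X)
      scores = LinearRegression().fit(components, y).predict(components)
      ranking = np.argsort(scores)[::-1]   # country indices, highest potential first
      print(ranking[:10])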

  9. Brokered virtual hubs for facilitating access and use of geospatial Open Data

    NASA Astrophysics Data System (ADS)

    Mazzetti, Paolo; Latre, Miguel; Kamali, Nargess; Brumana, Raffaella; Braumann, Stefan; Nativi, Stefano

    2016-04-01

    Open Data is a major trend in the current information technology scenario and is often publicised as one of the pillars of the information society in the near future. In particular, geospatial Open Data have a huge potential for the Earth sciences, through the enablement of innovative applications and services integrating heterogeneous information. However, open does not mean usable. As was recognized at the very beginning of the Web revolution, many different degrees of openness exist: from simple sharing in a proprietary format to advanced sharing in standard formats including semantic information. Therefore, to fully unleash the potential of geospatial Open Data, advanced infrastructures are needed to increase the degree of data openness and enhance usability. In October 2014, the ENERGIC OD (European NEtwork for Redistributing Geospatial Information to user Communities - Open Data) project, funded by the European Union under the Competitiveness and Innovation framework Programme (CIP), started. In response to the EU call, the general objective of the project is to "facilitate the use of open (freely available) geographic data from different sources for the creation of innovative applications and services through the creation of Virtual Hubs". The ENERGIC OD Virtual Hubs aim to facilitate the use of geospatial Open Data by lowering and possibly removing the main barriers that hamper geo-information (GI) usage by end-users and application developers. Data and service heterogeneity is recognized as one of the major barriers to Open Data (re-)use: it forces end-users and developers to spend considerable effort accessing different infrastructures and harmonizing datasets. Such heterogeneity cannot be completely removed through the adoption of standard specifications for service interfaces, metadata and data models, since different infrastructures adopt different standards to address specific challenges and specific use-cases. Thus, beyond a certain extent, heterogeneity is irreducible, especially in interdisciplinary contexts. ENERGIC OD Virtual Hubs address heterogeneity by adopting a mediation and brokering approach: specific components (brokers) are dedicated to harmonizing service interfaces, metadata and data models, enabling seamless discovery of and access to heterogeneous infrastructures and datasets. As an innovation project, ENERGIC OD integrates several existing technologies to implement Virtual Hubs as single points of access to geospatial datasets provided by new or existing platforms and infrastructures, including INSPIRE-compliant systems and Copernicus services. A first version of the ENERGIC OD brokers has been implemented based on the GI-Suite Brokering Framework developed by CNR-IIA, complemented with other tools under integration and development. It already enables mediated discovery of and harmonized access to different geospatial Open Data sources. It is accessible by users as Software-as-a-Service through a browser, and open APIs and a JavaScript library are available for application developers. Six ENERGIC OD Virtual Hubs have been deployed so far: one at the regional level (the Berlin metropolitan area) and five at the national level (in France, Germany, Italy, Poland and Spain). Each Virtual Hub manager decided the deployment strategy (local infrastructure or commercial Infrastructure-as-a-Service cloud) and the list of connected Open Data sources. The ENERGIC OD Virtual Hubs are under test and validation through the development of ten different mobile and Web applications.
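
    At its core, the brokering idea reduces to per-source adapters that map heterogeneous records onto one harmonized model behind a single query interface. The sketch below is purely conceptual: the source formats and field names are invented, and the real ENERGIC OD brokers mediate full standard GI service interfaces rather than dictionaries.

      # Conceptual sketch of a metadata broker: one adapter per source maps
      # native records onto a common model, so clients see a single interface.
      def from_source_a(rec):
          return {"title": rec["dc:title"], "bbox": rec["dc:coverage"]}

      def from_source_b(rec):
          return {"title": rec["name"], "bbox": rec["extent"]}

      ADAPTERS = {"source_a": from_source_a, "source_b": from_source_b}

      def broker_search(results_by_source):
          harmonized = []
          for source, records in results_by_source.items():
              harmonized.extend(ADAPTERS[source](r) for r in records)
          return harmonized

      print(broker_search({
          "source_a": [{"dc:title": "Land cover", "dc:coverage": "6.6,36.0,18.5,47.1"}],
          "source_b": [{"name": "Road network", "extent": "14.0,49.0,24.2,55.0"}],
      }))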

  10. The DADDI Project: Delivering a Working Prototype for Arctic Coastal Data

    NASA Astrophysics Data System (ADS)

    Wilson, B. E.; Parsons, M. A.; Palanisamy, G.

    2006-12-01

    A key element for the ultimate success of the International Polar Year (IPY) effort will be our ability to make the volumes of data collected in this work available and usable to researchers, both now and into the future. Ultimately, the IPY data will reside in a number of different repositories and will be accessed by users from a wide variety of disciplines and with a wide variety of needs. It is therefore important that appropriate informatics tools be developed and made available to the IPY community for indexing, searching, retrieving, and managing distributed polar data. Discovery, Access, and Delivery of Data for the IPY (DADDI) is a NASA-funded project involving multiple institutions, targeted at leveraging and evolving Earth science informatics tools to meet the informatics challenges of the IPY effort. To test our approaches, we have selected Arctic coastal data as a focus area for developing a working prototype of an IPY informatics solution. Coastal areas are undergoing some of the most drastic changes within the polar regions and are also the area of most concentrated human activity at high latitudes. Coastal regions are also of interest to a broad range of disciplines and data customers, so this is an area with a high need for a robust informatics infrastructure. In this presentation, I will review the requirements we have collected for an information system to manage a dispersed collection of Arctic coastal data. I will then present the current version of the prototype we are developing, discuss the ways in which the underlying tools can be leveraged out to other IPY-related areas, and discuss the lessons learned in developing this prototype information system.

  11. Achievable steps toward building a National Health Information infrastructure in the United States.

    PubMed

    Stead, William W; Kelly, Brian J; Kolodner, Robert M

    2005-01-01

    Consensus is growing that a health care information and communication infrastructure is one key to fixing the crisis in the United States in health care quality, cost, and access. The National Health Information Infrastructure (NHII) is an initiative of the Department of Health and Human Services receiving bipartisan support. There are many possible courses toward its objective. Decision makers need to reflect carefully on which approaches are likely to work on a large enough scale to have the intended beneficial national impacts and which are better left to smaller projects within the boundaries of health care organizations. This report provides a primer for use by informatics professionals as they explain aspects of that dividing line to policy makers and to health care leaders and front-line providers. It then identifies short-term, intermediate, and long-term steps that might be taken by the NHII initiative.

  12. Achievable Steps Toward Building a National Health Information Infrastructure in the United States

    PubMed Central

    Stead, William W.; Kelly, Brian J.; Kolodner, Robert M.

    2005-01-01

    Consensus is growing that a health care information and communication infrastructure is one key to fixing the crisis in the United States in health care quality, cost, and access. The National Health Information Infrastructure (NHII) is an initiative of the Department of Health and Human Services receiving bipartisan support. There are many possible courses toward its objective. Decision makers need to reflect carefully on which approaches are likely to work on a large enough scale to have the intended beneficial national impacts and which are better left to smaller projects within the boundaries of health care organizations. This report provides a primer for use by informatics professionals as they explain aspects of that dividing line to policy makers and to health care leaders and front-line providers. It then identifies short-term, intermediate, and long-term steps that might be taken by the NHII initiative. PMID:15561783

  13. Managing a tier-2 computer centre with a private cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara

    2014-06-01

    In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI.
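
    Because the infrastructure exposes commonly used APIs such as EC2, standard client libraries can drive it by pointing at the site's endpoint. A minimal sketch with boto3 follows; the endpoint URL, credentials, image ID and instance type are placeholders, not the INFN-Torino configuration.

      # Hypothetical sketch: launching a worker VM on a private cloud through
      # its EC2-compatible API by overriding the client's endpoint URL.
      import boto3

      ec2 = boto3.client(
          "ec2",
          endpoint_url="https://cloud.example.org:8773/services/Cloud",  # assumed
          aws_access_key_id="ACCESS_KEY",
          aws_secret_access_key="SECRET_KEY",
          region_name="site-local",
      )

      # Boot one instance from a contextualized image, e.g. a batch-farm node
      ec2.run_instances(ImageId="ami-00000001", MinCount=1, MaxCount=1,
                        InstanceType="m1.large")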

  14. Consumer-mediated health information exchanges: the 2012 ACMI debate.

    PubMed

    Cimino, James J; Frisse, Mark E; Halamka, John; Sweeney, Latanya; Yasnoff, William

    2014-04-01

    The American College of Medical Informatics (ACMI) sponsors periodic debates during the American Medical Informatics Fall Symposium to highlight important informatics issues of broad interest. In 2012, a panel debated the following topic: "Resolved: Health Information Exchange Organizations Should Shift Their Principal Focus to Consumer-Mediated Exchange in Order to Facilitate the Rapid Development of Effective, Scalable, and Sustainable Health Information Infrastructure." Those supporting the proposition emphasized the need for consumer-controlled community repositories of electronic health records (health record banks) to address privacy, stakeholder cooperation, scalability, and sustainability. Those opposing the proposition emphasized that the current healthcare environment is so complex that development of consumer control will take time and that even then, consumers may not be able to mediate their information effectively. While privately each discussant recognizes that there are many sides to this complex issue, each followed the debater's tradition of taking an extreme position in order to emphasize some of the polarizing aspects in the short time allotted them. In preparing this summary, we sought to convey the substance and spirit of the debate in printed form. Transcripts of the actual debate were edited for clarity, and appropriate supporting citations were added for the further edification of the reader.

  15. Enabling Long-Term Oceanographic Research: Changing Data Practices, Information Management Strategies and Informatics

    NASA Astrophysics Data System (ADS)

    Baker, K. S.; Chandler, C. L.

    2008-12-01

    Data management and informatics research are in a state of change in terms of data practices, information strategies, and roles. New ways of thinking about data and data management can facilitate interdisciplinary global ocean science. To meet contemporary expectations for local data use and reuse by a variety of audiences, collaborative strategies involving diverse teams of information professionals are developing. Such changes are fostering the growth of information infrastructures that support multi-scale sampling, data integration, and nascent networks of data repositories. In this retrospective, two examples of oceanographic projects incorporating data management in partnership with long-term science programs are reviewed: the Palmer Station Long-Term Ecological Research program (Palmer LTER) and the United States Joint Global Ocean Flux Study (US JGOFS). Lessons learned - short-term and long-term - from a decade of data management within these two communities will be presented. A conceptual framework called Ocean Informatics provides one example for managing the complexities inherent to sharing oceanographic data. Elements are discussed that address the economies-of-scale as well as the complexities-of-scale pertinent to a broad vision of information management and scientific research.

  16. ESIP Federation: A Case Study on Enabling Collaboration Infrastructure to Support Earth Science Informatics Communities

    NASA Astrophysics Data System (ADS)

    Robinson, E.; Meyer, C. B.; Benedict, K. K.

    2013-12-01

    A critical part of effective Earth science data and information system interoperability involves collaboration across geographically and temporally distributed communities. The Federation of Earth Science Information Partners (ESIP) is a broad-based, distributed community of science, data and information technology practitioners from across science domains, economic sectors and the data lifecycle. ESIP's open, participatory structure provides a melting pot for coordinating around common areas of interest, experimenting with innovative ideas and capturing and finding best practices and lessons learned from across the network. Since much of ESIP's work is distributed, the Foundation for Earth Science was established as a non-profit home for its supportive collaboration infrastructure. The infrastructure leverages the Internet and recent advances in collaboration web services. ESIP provides neutral space for self-governed groups to emerge around common Earth science data and information issues, ebbing and flowing as the need for them arises. As a group emerges, the Foundation quickly equips the virtual workgroup with a set of 'commodity services'. These services include: web meeting technology (WebEx), a wiki and an email listserv. WebEx allows the group to work synchronously, dynamically viewing and discussing shared information in real time. The wiki is the group's primary workspace and over time creates organizational memory. The listserv provides an inclusive way to email the group and archive all messages for future reference. These three services lower the startup barrier for collaboration and enable automatic content preservation to allow for future work. While many of ESIP's consensus-building activities are discussion-based, the Foundation supports an ESIP testbed environment for exploring and evaluating prototype standards, services, protocols, and best practices. After community review of testbed proposals, the Foundation provides small seed funding and a toolbox of collaborative development resources, including Amazon Web Services to quickly spin up the testbed instance and a GitHub account for maintaining testbed project code, enabling reuse. Recently, the Foundation supported development of the ESIP Commons (http://commons.esipfed.org), a Drupal-based knowledge repository for non-traditional publications to preserve community products and outcomes like white papers, posters and proceedings. The ESIP Commons adds structured metadata, provides attribution to contributors and gives those unfamiliar with ESIP a straightforward way to find information. The success of ESIP Federation activities is difficult to measure. The ESIP Commons is a step toward quantifying sponsor return on investment and is one dataset used in network map analysis of the ESIP community network, another success metric. Over the last 15 years, ESIP has continually grown and attracted experts in the Earth science data and informatics field, becoming a primary locus of research and development on the application and evolution of Earth science data standards and conventions. As funding agencies push toward a more collaborative approach, the lessons learned from ESIP and the collaboration services themselves are a crucial component of supporting science research.

  17. 10 Myths of Virtualization

    ERIC Educational Resources Information Center

    Schaffhauser, Dian

    2012-01-01

    Half of servers in higher ed are virtualized. But that number's not high enough for Link Alander, interim vice chancellor and CIO at the Lone Star College System (Texas). He aspires to see 100 percent of the system's infrastructure requirements delivered as IT services from its own virtualized data centers or other cloud-based operators. Back in…

  18. [caCORE: core architecture of bioinformation on cancer research in America].

    PubMed

    Gao, Qin; Zhang, Yan-lei; Xie, Zhi-yun; Zhang, Qi-peng; Hu, Zhang-zhi

    2006-04-18

    A critical factor in the advancement of biomedical research is the ease with which data can be integrated, redistributed and analyzed both within and across domains. This paper summarizes the biomedical information core infrastructure built by the US National Cancer Institute Center for Bioinformatics (NCICB). The main product of this core infrastructure is caCORE, the cancer Common Ontologic Reference Environment, which is the infrastructure backbone supporting data management and application development at NCICB. The paper explains the structure and function of caCORE: (1) Enterprise Vocabulary Services (EVS), which provide controlled vocabulary, dictionary and thesaurus services; EVS produces the NCI Thesaurus and the NCI Metathesaurus. (2) The Cancer Data Standards Repository (caDSR), which provides a metadata registry for common data elements. (3) Cancer Bioinformatics Infrastructure Objects (caBIO), which provide Java, Simple Object Access Protocol and HTTP-XML application programming interfaces. The vision for caCORE is to provide a common data management framework that will support the consistency, clarity, and comparability of biomedical research data and information. In addition to providing facilities for data management and redistribution, caCORE helps solve problems of data integration. All NCICB-developed caCORE components are distributed under open-source licenses that support unrestricted usage by both non-profit and commercial entities, and caCORE has laid the foundation for a number of scientific and clinical applications. On this basis, the paper briefly describes caCORE-based applications in several NCI projects, including CMAP (Cancer Molecular Analysis Project) and caBIG (Cancer Biomedical Informatics Grid). Finally, the paper discusses the prospects of caCORE: while it was born out of the needs of the cancer research community, it is intended to serve as a general resource, as cancer research has historically contributed to many areas beyond tumor biology. The paper also makes some suggestions about the current study of biomedical informatics in China.
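
    Of the three caBIO programming interfaces mentioned, the HTTP-XML one is the simplest to illustrate: a client issues an HTTP query and parses the XML response. The sketch below is a generic illustration; the URL, query grammar, and element names are hypothetical stand-ins rather than the actual caCORE service contract.

      # Hypothetical sketch of consuming an HTTP-XML interface like caBIO's.
      import requests
      import xml.etree.ElementTree as ET

      def fetch_genes(symbol):
          resp = requests.get(
              "https://cacore.example.org/GetXML",          # assumed endpoint
              params={"query": f"Gene[@symbol={symbol}]"},  # assumed grammar
              timeout=30,
          )
          resp.raise_for_status()
          root = ET.fromstring(resp.content)
          return [el.attrib for el in root.iter("Gene")]

      print(fetch_genes("TP53"))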

  19. Nursing Portal; a Nursing Informatics Solution for Iran, Lessons Learned from a Comparative Study

    PubMed Central

    Safdari, Reza; Masoori, Niloufar; Torabi, Mashaallah; Cheraghi, Mohammad A.; farzananejad, Ahmadreza; Azadmanjir, Zahra

    2012-01-01

    The nursing portal is an informatics solution whose services and capabilities support nursing staff in their practice and professional development, given the existing challenges nurses face in using the Internet at work. It can be considered a creditable gateway for quick access to research-based evidence provided by reliable resources. It also provides an interactive virtual environment for knowledge exchange with experts or colleagues in different geographical areas. Through a comparative study of specialized nursing portals in Iran and three other countries, the aim of this paper is to define the desired content and structural specifications of nursing portals that support the practice of nurses in the workplace. Based on the results of the present study, a set of recommendations is provided for the development of a comprehensive nursing portal in Iran. PMID:24199117

  20. Nursing portal; a nursing informatics solution for iran, lessons learned from a comparative study.

    PubMed

    Safdari, Reza; Masoori, Niloufar; Torabi, Mashaallah; Cheraghi, Mohammad A; Farzananejad, Ahmadreza; Azadmanjir, Zahra

    2012-01-01

    The nursing portal is an informatics solution whose services and capabilities support nursing staff in their practice and professional development, given the existing challenges nurses face in using the Internet at work. It can be considered a creditable gateway for quick access to research-based evidence provided by reliable resources. It also provides an interactive virtual environment for knowledge exchange with experts or colleagues in different geographical areas. Through a comparative study of specialized nursing portals in Iran and three other countries, the aim of this paper is to define the desired content and structural specifications of nursing portals that support the practice of nurses in the workplace. Based on the results of the present study, a set of recommendations is provided for the development of a comprehensive nursing portal in Iran.

  1. The Virtual Xenbase: transitioning an online bioinformatics resource to a private cloud

    PubMed Central

    Karimi, Kamran; Vize, Peter D.

    2014-01-01

    As a model organism database, Xenbase has been providing informatics and genomic data on Xenopus (Silurana) tropicalis and Xenopus laevis frogs for more than a decade. The Xenbase database contains curated, as well as community-contributed and automatically harvested literature, gene and genomic data. A GBrowse genome browser, a BLAST+ server and stock center support are available on the site. When this resource was first built, all software services and components in Xenbase ran on a single physical server, with inherent reliability, scalability and inter-dependence issues. Recent advances in networking and virtualization techniques allowed us to move Xenbase to a virtual environment, and more specifically to a private cloud. To do so we decoupled the different software services and components, such that each would run on a different virtual machine. In the process, we also upgraded many of the components. The resulting system is faster and more reliable. System maintenance is easier, as individual virtual machines can now be updated, backed up and changed independently. We are also experiencing more effective resource allocation and utilization. Database URL: www.xenbase.org PMID:25380782

  2. Informatics in clinical research in oncology: current state, challenges, and a future perspective.

    PubMed

    Chahal, Amar P S

    2011-01-01

    The informatics landscape of clinical trials in oncology has changed significantly in the last 10 years. The current state of the infrastructure for clinical trial management, execution, and data management is reviewed. The systems, their functionality, the users, and the standards available to researchers are discussed from the perspective of the oncologist-researcher. Challenges in complexity and in the processing of information are outlined. These challenges include the lack of communication and information-interchange between systems, the lack of simplified standards, and the lack of implementation and adherence to the standards that are available. The National Cancer Institute's Common Terminology Criteria for Adverse Events (CTCAE) are cited as a successful standard in oncology, and HTTP on the Internet is referenced for its simplicity. Differences in the management of information standards between industries are discussed. Possible future advances in oncology clinical research informatics are addressed. These advances include strategic policy review of standards and the implementation of actions to make standards free, ubiquitous, simple, and easily interpretable; the need to change from a local data-capture- or transaction-driven model to a large-scale data-interpretation model that provides higher value to the oncologist and the patient; and the need for information technology investment in a readily available digital educational model for clinical research in oncology that is customizable for individual studies. These new approaches, with changes in information delivery to mobile platforms, will set the stage for the next decade in clinical research informatics.

  3. Scaling the CERN OpenStack cloud

    NASA Astrophysics Data System (ADS)

    Bell, T.; Bompastor, B.; Bukowiec, S.; Castro Leon, J.; Denis, M. K.; van Eldik, J.; Fermin Lobo, M.; Fernandez Alvarez, L.; Fernandez Rodriguez, D.; Marino, A.; Moreira, B.; Noel, B.; Oulevey, T.; Takase, W.; Wiebalck, A.; Zilli, S.

    2015-12-01

    CERN has been running a production OpenStack cloud since July 2013 to support physics computing and infrastructure services for the site. In the past year, the CERN Cloud Infrastructure has seen a constant increase in nodes, virtual machines, users and projects. This paper will present what has been done in order to make the CERN cloud infrastructure scale out.
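
    For a sense of what operating such a cloud looks like programmatically, here is a minimal sketch using the openstacksdk client to inspect hypervisors and boot an additional virtual machine. The connection parameters, image, flavor and network names are placeholders, not CERN's actual configuration.

      # Hypothetical sketch: inspect capacity and add a VM via openstacksdk.
      import openstack

      conn = openstack.connect(
          auth_url="https://keystone.example.org:5000/v3",   # assumed endpoint
          project_name="physics", username="operator", password="secret",
          user_domain_name="Default", project_domain_name="Default",
      )

      for hv in conn.compute.hypervisors():   # one row per compute node
          print(hv.name, hv.status)

      conn.compute.create_server(
          name="batch-worker-001",
          image_id=conn.compute.find_image("base-image").id,    # assumed image
          flavor_id=conn.compute.find_flavor("m1.medium").id,   # assumed flavor
          networks=[{"uuid": conn.network.find_network("cluster").id}],
      )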

  4. Teaching undergraduate nursing students critical thinking: An innovative informatics strategy.

    PubMed

    Warren, Judith J; Connors, Helen R; Weaver, Charlotte; Simpson, Roy

    2006-01-01

    Simulated e-Health Delivery System (SEEDS) uses a clinical information system (CIS) to teach students how to process data from virtual patient case studies and work with information technology. SEEDS was developed in response to the Institute of Medicine recommendation that students be taught about information systems in order to improve quality patient care and reduce errors. Curriculum implications, implementation of the system, and technology challenges are discussed.

  5. Computational Toxicology at the US EPA | Science Inventory ...

    EPA Pesticide Factsheets

    Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, EPA is developing robust and flexible computational tools that can be applied to the thousands of chemicals in commerce, and contaminant mixtures found in America’s air, water, and hazardous-waste sites. The ORD Computational Toxicology Research Program (CTRP) is composed of three main elements. The largest component is the National Center for Computational Toxicology (NCCT), which was established in 2005 to coordinate research on chemical screening and prioritization, informatics, and systems modeling. The second element consists of related activities in the National Health and Environmental Effects Research Laboratory (NHEERL) and the National Exposure Research Laboratory (NERL). The third and final component consists of academic centers working on various aspects of computational toxicology and funded by the EPA Science to Achieve Results (STAR) program. Key intramural projects of the CTRP include digitizing legacy toxicity testing information into the toxicity reference database (ToxRefDB), predicting toxicity (ToxCast™) and exposure (ExpoCast™), and creating virtual liver (v-Liver™) and virtual embryo (v-Embryo™) systems models. The models and underlying data are being made publicly available…

  6. [The informatics: a remarkable tool for teaching general internal medicine].

    PubMed

    Ombelli, Julien; Pasche, Olivier; Sohrmann, Marc; Monti, Matteo

    2015-05-13

    INTERMED training comprises a three-week course, integrated into the "primary care module" for medical students in the first master's year at the school of medicine in Lausanne. INTERMED uses an innovative teaching method based on repetitive sequences of e-learning-based individual learning followed by collaborative learning activities in teams, known as team-based learning (TBL). The e-learning takes place in a web-based virtual learning environment using a series of interactive multimedia virtual patients. Using INTERMED, students go through a complete medical encounter, applying clinical reasoning and choosing the diagnostic and therapeutic approach. INTERMED offers an authentic experience in an engaging and safe environment where errors are allowed and without consequences.

  7. Transportation Infrastructure Design and Construction - Virtual Training Tools

    DOT National Transportation Integrated Search

    2003-09-01

    This project will develop 3D interactive computer-training environments for a major element of transportation infrastructure : hot mix asphalt paving. These tools will include elements of hot mix design (including laboratory equipment) and constructi...

  8. The Learning Healthcare System and Cardiovascular Care: A Scientific Statement From the American Heart Association.

    PubMed

    Maddox, Thomas M; Albert, Nancy M; Borden, William B; Curtis, Lesley H; Ferguson, T Bruce; Kao, David P; Marcus, Gregory M; Peterson, Eric D; Redberg, Rita; Rumsfeld, John S; Shah, Nilay D; Tcheng, James E

    2017-04-04

    The learning healthcare system uses health information technology and the health data infrastructure to apply scientific evidence at the point of clinical care while simultaneously collecting insights from that care to promote innovation in optimal healthcare delivery and to fuel new scientific discovery. To achieve these goals, the learning healthcare system requires systematic redesign of the current healthcare system, focusing on 4 major domains: science and informatics, patient-clinician partnerships, incentives, and development of a continuous learning culture. This scientific statement provides an overview of how these learning healthcare system domains can be realized in cardiovascular disease care. Current cardiovascular disease care innovations in informatics, data uses, patient engagement, continuous learning culture, and incentives are profiled. In addition, recommendations for next steps for the development of a learning healthcare system in cardiovascular care are presented.

  9. The caBIG Terminology Review Process

    PubMed Central

    Cimino, James J.; Hayamizu, Terry F.; Bodenreider, Olivier; Davis, Brian; Stafford, Grace A.; Ringwald, Martin

    2009-01-01

    The National Cancer Institute (NCI) is developing an integrated biomedical informatics infrastructure, the cancer Biomedical Informatics Grid (caBIG®), to support collaboration within the cancer research community. A key part of the caBIG architecture is the establishment of terminology standards for representing data. In order to evaluate the suitability of existing controlled terminologies, the caBIG Vocabulary and Data Elements Workspace (VCDE WS) working group has developed a set of criteria that serve to assess a terminology's structure, content, documentation, and editorial process. This paper describes the evolution of these criteria and the results of their use in evaluating four standard terminologies: the Gene Ontology (GO), the NCI Thesaurus (NCIt), the Common Terminology Criteria for Adverse Events (CTCAE), and the laboratory portion of the Logical Observation Identifiers Names and Codes (LOINC). The resulting caBIG criteria are presented as a matrix that may be applicable to any terminology standardization effort. PMID:19154797

  10. RANdom SAmple Consensus (RANSAC) algorithm for material-informatics: application to photovoltaic solar cells.

    PubMed

    Kaspi, Omer; Yosipof, Abraham; Senderowitz, Hanoch

    2017-06-06

    An important aspect of chemoinformatics and material-informatics is the usage of machine learning algorithms to build Quantitative Structure Activity Relationship (QSAR) models. The RANdom SAmple Consensus (RANSAC) algorithm is a predictive modeling tool widely used in the image processing field for cleaning datasets of noise. RANSAC could be used as a "one stop shop" algorithm for developing and validating QSAR models, performing outlier removal, descriptor selection, model development and predictions for test set samples using an applicability domain. For "future" predictions (i.e., for samples not included in the original test set) RANSAC provides a statistical estimate for the probability of obtaining reliable predictions, i.e., predictions within a pre-defined number of standard deviations from the true values. In this work we describe the first application of RANSAC in material informatics, focusing on the analysis of solar cells. We demonstrate that for three datasets representing different metal oxide (MO) based solar cell libraries, RANSAC-derived models select descriptors previously shown to correlate with key photovoltaic properties and lead to good predictive statistics for these properties. These models were subsequently used to predict the properties of virtual solar cell libraries, highlighting interesting dependencies of PV properties on MO compositions.
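
    The core mechanism, fitting a model to a consensus set of inliers and discarding outliers, can be demonstrated with scikit-learn's RANSACRegressor on synthetic data standing in for a solar-cell library (descriptors predicting a photovoltaic property). This is a generic sketch of the algorithm, not the authors' modeling workflow.

      # Sketch: RANSAC separates inliers from outliers while fitting a model.
      import numpy as np
      from sklearn.linear_model import RANSACRegressor

      rng = np.random.default_rng(1)
      X = rng.uniform(size=(100, 4))                  # placeholder descriptors
      y = X @ np.array([2.0, -1.0, 0.5, 3.0]) + rng.normal(0, 0.1, 100)
      y[:10] += 5.0                                   # inject outlier samples

      ransac = RANSACRegressor(residual_threshold=1.0).fit(X, y)
      inliers = ransac.inlier_mask_
      print("outliers flagged:", int((~inliers).sum()))
      print("R^2 on inliers:", ransac.score(X[inliers], y[inliers]))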

  11. Blade runner. Blade server and virtualization technology can help hospitals save money--but they are far from silver bullets.

    PubMed

    Lawrence, Daphne

    2009-03-01

    Blade servers and virtualization can reduce infrastructure, maintenance, heating, electrical, cooling and equipment costs. Blade server technology is evolving and some elements may become obsolete. There is very little interoperability between blades. Hospitals can virtualize 40 to 60 percent of their servers, and old servers can be reused for testing. Not all applications lend themselves to virtualization, especially those with high memory requirements. CIOs should engage their vendors in virtualization discussions.

  12. A Virtual Environment for Resilient Infrastructure Modeling and Design

    DTIC Science & Technology

    2015-09-01

    …responses to disruptive events (e.g., cascading failure behavior) in a context-rich, controlled environment for exercises, education, and training. The general attacker-defender (AD) and defender-attacker-defender (DAD) models for critical infrastructure (CI) are defined in Brown et al. (2006). These models help…

  13. 49 CFR 15.5 - Sensitive security information.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... sources and methods used to gather or develop threat information, including threats against cyber infrastructure. (8) Security measures. Specific details of aviation or maritime transportation security measures... infrastructure asset information. Any list identifying systems or assets, whether physical or virtual, so vital...

  14. 49 CFR 15.5 - Sensitive security information.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... sources and methods used to gather or develop threat information, including threats against cyber infrastructure. (8) Security measures. Specific details of aviation or maritime transportation security measures... infrastructure asset information. Any list identifying systems or assets, whether physical or virtual, so vital...

  15. 49 CFR 15.5 - Sensitive security information.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... sources and methods used to gather or develop threat information, including threats against cyber infrastructure. (8) Security measures. Specific details of aviation or maritime transportation security measures... infrastructure asset information. Any list identifying systems or assets, whether physical or virtual, so vital...

  16. 49 CFR 15.5 - Sensitive security information.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... sources and methods used to gather or develop threat information, including threats against cyber infrastructure. (8) Security measures. Specific details of aviation or maritime transportation security measures... infrastructure asset information. Any list identifying systems or assets, whether physical or virtual, so vital...

  17. NATO Bureaucratic & Infrastructure Transformation for the 21st Century

    DTIC Science & Technology

    1998-04-22

    …humanitarian, and out-of-area missions not related to Articles 4 and 5 (Office of Information and Press, NATO Handbook, Brussels, October 1995, p. 20). …importantly, socially and culturally related interests. We have strong political and economic ties to the region, which certainly justify continuation of the NATO… regional air defense complement U.S. unilateral operations in the area. Finally, close ties have developed at the military level, facilitating excellent…

  18. The architecture of a distributed medical dictionary.

    PubMed

    Fowler, J; Buffone, G; Moreau, D

    1995-01-01

    Exploiting high-speed computer networks to provide a national medical information infrastructure is a goal for medical informatics. The Distributed Medical Dictionary under development at Baylor College of Medicine is a model for an architecture that supports collaborative development of a distributed online medical terminology knowledge-base. A prototype is described that illustrates the concept. Issues that must be addressed by such a system include high availability, acceptable response time, support for local idiom, and control of vocabulary.

  19. Cancer Genomics: Integrative and Scalable Solutions in R / Bioconductor | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    This proposal develops scalable R / Bioconductor software infrastructure and data resources to integrate complex, heterogeneous, and large cancer genomic experiments. The falling cost of genomic assays facilitates collection of multiple data types (e.g., gene and transcript expression, structural variation, copy number, methylation, and microRNA data) from a set of clinical specimens. Furthermore, substantial resources are now available from large consortium activities like The Cancer Genome Atlas (TCGA).

  20. Australian DefenceScience. Volume 16, Number 2, Winter

    DTIC Science & Technology

    2008-01-01

    Making Virtual Advisers speedily interactive. To provide an authentically interactive experience for humans working with Virtual Advisers, the Virtual… peer trusted and strong authentication for checking of security credentials without recourse to third parties or infrastructure, thus eliminating… multiple passwords, or carry around multiple security tokens. Each CodeStick device is readied for use with a biometric authentication process. Since…

  1. Layer 1 VPN services in distributed next-generation SONET/SDH networks with inverse multiplexing

    NASA Astrophysics Data System (ADS)

    Ghani, N.; Muthalaly, M. V.; Benhaddou, D.; Alanqar, W.

    2006-05-01

    Advances in next-generation SONET/SDH along with GMPLS control architectures have enabled many new service provisioning capabilities. In particular, a key services paradigm is the emergent Layer 1 virtual private network (L1 VPN) framework, which allows multiple clients to utilize a common physical infrastructure and provision their own 'virtualized' circuit-switched networks. This precludes expensive infrastructure builds and increases resource utilization for carriers. Along these lines, a novel L1 VPN services resource management scheme for next-generation SONET/SDH networks is proposed that fully leverages advanced virtual concatenation and inverse multiplexing features. Additionally, both centralized and distributed GMPLS-based implementations are also tabled to support the proposed L1 VPN services model. Detailed performance analysis results are presented along with avenues for future research.
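
    The inverse-multiplexing arithmetic behind virtual concatenation is simple: a client rate is split across n parallel members of a standard container rate, so a provisioning tool needs only to size the group. A back-of-the-envelope sketch follows, using the standard SDH payload rates; carrying Gigabit Ethernet over VC-4-7v is the classic example.

      # Sketch of virtual concatenation sizing: how many container members are
      # needed to carry a client rate, and at what bandwidth efficiency.
      import math

      PAYLOAD_MBPS = {"VC-3": 48.384, "VC-4": 149.76}   # standard payload rates

      def vcat_members(client_rate_mbps, container="VC-4"):
          rate = PAYLOAD_MBPS[container]
          n = math.ceil(client_rate_mbps / rate)
          return n, client_rate_mbps / (n * rate)

      n, eff = vcat_members(1000.0)                 # Gigabit Ethernet client
      print(f"VC-4-{n}v, efficiency {eff:.0%}")     # -> VC-4-7v, ~95%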

  2. The Next Generation of Lab and Classroom Computing - The Silver Lining

    DTIC Science & Technology

    2016-12-01

    A virtual desktop infrastructure (VDI) solution, as well as the computing solutions at three universities, was selected as the basis for comparison. The research… Keywords: virtual desktop infrastructure, VDI, hardware cost, software cost, manpower, availability, cloud computing, private cloud, bring your own device, BYOD, thin client.

  3. Screensaver: an open source lab information management system (LIMS) for high throughput screening facilities

    PubMed Central

    2010-01-01

    Background Shared-usage high throughput screening (HTS) facilities are becoming more common in academe as large-scale small molecule and genome-scale RNAi screening strategies are adopted for basic research purposes. These shared facilities require a unique informatics infrastructure that must not only provide access to and analysis of screening data, but must also manage the administrative and technical challenges associated with conducting numerous, interleaved screening efforts run by multiple independent research groups. Results We have developed Screensaver, a free, open source, web-based lab information management system (LIMS), to address the informatics needs of our small molecule and RNAi screening facility. Screensaver supports the storage and comparison of screening data sets, as well as the management of information about screens, screeners, libraries, and laboratory work requests. To our knowledge, Screensaver is one of the first applications to support the storage and analysis of data from both genome-scale RNAi screening projects and small molecule screening projects. Conclusions The informatics and administrative needs of an HTS facility may be best managed by a single, integrated, web-accessible application such as Screensaver. Screensaver has proven useful in meeting the requirements of the ICCB-Longwood/NSRB Screening Facility at Harvard Medical School, and has provided similar benefits to other HTS facilities. PMID:20482787

  4. The OCHIN community information network: bringing together community health centers, information technology, and data to support a patient-centered medical village.

    PubMed

    Devoe, Jennifer E; Sears, Abigail

    2013-01-01

    Creating integrated, comprehensive care practices requires access to data and informatics expertise. Information technology (IT) resources are not readily available to individual practices. One model of shared IT resources and learning is a "patient-centered medical village." We describe the OCHIN Community Health Information Network as an example of this model; community practices have come together collectively to form an organization that leverages shared IT expertise, resources, and data, providing members with the means to fully capitalize on new technologies that support improved care. This collaborative facilitates the identification of "problem sheds" through surveillance of network-wide data, enables shared learning regarding best practices, and provides a "community laboratory" for practice-based research. As an example of a community of solution, OCHIN uses health IT and data-sharing innovations to enhance partnerships between public health leaders, clinicians in community health centers, informatics experts, and policy makers. OCHIN community partners benefit from the shared IT resource (eg, a linked electronic health record, centralized data warehouse, informatics, and improvement expertise). This patient-centered medical village provides (1) the collective mechanism to build community-tailored IT solutions, (2) "neighbors" to share data and improvement strategies, and (3) infrastructure to support innovations based on electronic health records across communities, using experimental approaches.

  5. Consumer-Mediated Health Information Exchanges: The 2012 ACMI Debate

    PubMed Central

    Cimino, James J.; Frisse, Mark; Halamka, John; Sweeney, Latanya; Yasnoff, William

    2017-01-01

    The American College of Medical Informatics (ACMI) sponsors periodic debates during the American Medical Informatics Fall Symposium to highlight important informatics issues of broad interest. In 2012, a panel debated the following topic: “Resolved: Health Information Exchange Organizations Should Shift Their Principal Focus to Consumer-Mediated Exchange in Order to Facilitate the Rapid Development of Effective, Scalable, and Sustainable Health Information Infrastructure.” Those supporting the proposition emphasized the need for consumer-controlled community repositories of electronic health records (health record banks) to address privacy, stakeholder cooperation, scalability, and sustainability. Those opposing the proposition emphasized that the current healthcare environment is so complex that development of consumer control will take time and that even then, consumers may not be able to mediate their information effectively. While privately each discussant recognizes that there are many sides to this complex issue, each followed the debater’s tradition of taking an extreme position in order to emphasize some of the polarizing aspects in the short time allotted them. In preparing this summary, we sought to convey the substance and spirit of the debate in printed form. Transcripts of the actual debate were edited for clarity, and appropriate supporting citations were added for the further edification of the reader. PMID:24561078

  6. A Collaborative Data Scientist Framework for both Primary and Secondary Education

    NASA Astrophysics Data System (ADS)

    Branch, B. D.

    2011-12-01

    The earth science data educational pipeline may be dependent on K-20 outcomes. Thus, a challenge for earth science and space informatics education, and for generational knowledge transfer, may be a nonexistent or cost-prohibitive pedagogical earth science infrastructure. Addressing it may require a technological infrastructure, a validated assessment system, and collaboration among stakeholders of primary and secondary education. Moreover, the K-20 paradigms may treat science and technology preparation standards separately, when fundamental informatics requires an integrated pedagogical approach. In simple terms, a collaborative earth science training program for a subset of disciplines may be a pragmatic means of formal data scientist training that is sustainable as technology evolves and data-sharing policy becomes a norm of data literacy. As the Global Earth Observation System of Systems (GEOSS) has a 10-year work plan, educational stakeholders may find funding avenues if government can see earth science data training as a valuable job skill and societal need. The proposed framework suggests that ontological literacy, database management and storage management, and data-sharing capability are its fundamental informatics concepts, through which societal engagement is incited. Here all STEM disciplines could pursue an integrated approach, maturing such learning metrics in their matriculation and assessment systems. The NSF's EarthCube and Europe's WISE may represent best cases for such a framework implementation.

  7. Screensaver: an open source lab information management system (LIMS) for high throughput screening facilities.

    PubMed

    Tolopko, Andrew N; Sullivan, John P; Erickson, Sean D; Wrobel, David; Chiang, Su L; Rudnicki, Katrina; Rudnicki, Stewart; Nale, Jennifer; Selfors, Laura M; Greenhouse, Dara; Muhlich, Jeremy L; Shamu, Caroline E

    2010-05-18

    Shared-usage high throughput screening (HTS) facilities are becoming more common in academe as large-scale small molecule and genome-scale RNAi screening strategies are adopted for basic research purposes. These shared facilities require a unique informatics infrastructure that must not only provide access to and analysis of screening data, but must also manage the administrative and technical challenges associated with conducting numerous, interleaved screening efforts run by multiple independent research groups. We have developed Screensaver, a free, open source, web-based lab information management system (LIMS), to address the informatics needs of our small molecule and RNAi screening facility. Screensaver supports the storage and comparison of screening data sets, as well as the management of information about screens, screeners, libraries, and laboratory work requests. To our knowledge, Screensaver is one of the first applications to support the storage and analysis of data from both genome-scale RNAi screening projects and small molecule screening projects. The informatics and administrative needs of an HTS facility may be best managed by a single, integrated, web-accessible application such as Screensaver. Screensaver has proven useful in meeting the requirements of the ICCB-Longwood/NSRB Screening Facility at Harvard Medical School, and has provided similar benefits to other HTS facilities.

  8. Facilitating biomedical researchers' interrogation of electronic health record data: Ideas from outside of biomedical informatics.

    PubMed

    Hruby, Gregory W; Matsoukas, Konstantina; Cimino, James J; Weng, Chunhua

    2016-04-01

    Electronic health records (EHR) are a vital data resource for research uses, including cohort identification, phenotyping, pharmacovigilance, and public health surveillance. To realize the promise of EHR data for accelerating clinical research, it is imperative to enable efficient and autonomous EHR data interrogation by end users such as biomedical researchers. This paper surveys state-of-the-art approaches and key methodological considerations toward this purpose. We adapted a previously published conceptual framework for interactive information retrieval, which defines three entities: user, channel, and source, by elaborating on channels for query formulation in the context of helping end users interrogate EHR data. We show that current progress in biomedical informatics lies mainly in support for query execution and information modeling, primarily due to an emphasis on infrastructure development for data integration and data access via self-service query tools, while the user support needed during iterative query formulation, which can be costly and error-prone, has been neglected. In contrast, the information science literature has offered elaborate theories and methods for user modeling and query formulation support. The two bodies of literature are complementary, implying opportunities for cross-disciplinary idea exchange. On this basis, we outline directions for future informatics research to improve our understanding of user needs and requirements for facilitating autonomous interrogation of EHR data by biomedical researchers. We suggest that cross-disciplinary translational research between biomedical informatics and information science can benefit research on facilitating efficient data access in the life sciences. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Education in Biomedical and Health Informatics in the Web 3.0 Era: Standards for data, curricula, and activities. Contribution of the IMIA Working Group on Health and Medical Informatics Education.

    PubMed

    Otero, P; Hersh, W

    2011-01-01

    Web 3.0 is transforming the World Wide Web by allowing knowledge and reasoning to be gleaned from its content. Our objective is to describe a new scenario in education and training, known as "Education 3.0", that can help promote collaborative learning in health informatics. We review the current standards available for curricula and learning activities in Biomedical and Health Informatics (BMHI) in a Web 3.0 scenario. "Education 3.0" can provide open educational resources created and reused across institutions and improved by means of international collaborative knowledge building powered by e-learning. Standards already exist that could be used to identify and deliver educational content in BMHI in the semantic web era, such as the Resource Description Framework (RDF), the Web Ontology Language (OWL) and the Sharable Content Object Reference Model (SCORM). In addition, there are other standards to support healthcare education and training. Few experiences with the use of standards in e-learning in BMHI have been published in the literature. Web 3.0 can suggest new approaches to building the BMHI workforce, so there is a need to build tools and knowledge infrastructure to leverage it. The usefulness of standards for the content and competencies of training programs in BMHI needs more experience and research in order to promote the interoperability and sharing of resources in this growing discipline.

  10. Workflow4Metabolomics: a collaborative research infrastructure for computational metabolomics

    PubMed Central

    Giacomoni, Franck; Le Corguillé, Gildas; Monsoor, Misharl; Landi, Marion; Pericard, Pierre; Pétéra, Mélanie; Duperier, Christophe; Tremblay-Franco, Marie; Martin, Jean-François; Jacob, Daniel; Goulitquer, Sophie; Thévenot, Etienne A.; Caron, Christophe

    2015-01-01

    Summary: The complex, rapidly evolving field of computational metabolomics calls for collaborative infrastructures where the large volume of new algorithms for data pre-processing, statistical analysis and annotation can be readily integrated whatever the language, evaluated on reference datasets and chained to build ad hoc workflows for users. We have developed Workflow4Metabolomics (W4M), the first fully open-source and collaborative online platform for computational metabolomics. W4M is a virtual research environment built upon the Galaxy web-based platform technology. It enables ergonomic integration, exchange and running of individual modules and workflows. Alternatively, the whole W4M framework and computational tools can be downloaded as a virtual machine for local installation. Availability and implementation: http://workflow4metabolomics.org homepage enables users to open a private account and access the infrastructure. W4M is developed and maintained by the French Bioinformatics Institute (IFB) and the French Metabolomics and Fluxomics Infrastructure (MetaboHUB). Contact: contact@workflow4metabolomics.org PMID:25527831

  11. Workflow4Metabolomics: a collaborative research infrastructure for computational metabolomics.

    PubMed

    Giacomoni, Franck; Le Corguillé, Gildas; Monsoor, Misharl; Landi, Marion; Pericard, Pierre; Pétéra, Mélanie; Duperier, Christophe; Tremblay-Franco, Marie; Martin, Jean-François; Jacob, Daniel; Goulitquer, Sophie; Thévenot, Etienne A; Caron, Christophe

    2015-05-01

    The complex, rapidly evolving field of computational metabolomics calls for collaborative infrastructures where the large volume of new algorithms for data pre-processing, statistical analysis and annotation can be readily integrated whatever the language, evaluated on reference datasets and chained to build ad hoc workflows for users. We have developed Workflow4Metabolomics (W4M), the first fully open-source and collaborative online platform for computational metabolomics. W4M is a virtual research environment built upon the Galaxy web-based platform technology. It enables ergonomic integration, exchange and running of individual modules and workflows. Alternatively, the whole W4M framework and computational tools can be downloaded as a virtual machine for local installation. http://workflow4metabolomics.org homepage enables users to open a private account and access the infrastructure. W4M is developed and maintained by the French Bioinformatics Institute (IFB) and the French Metabolomics and Fluxomics Infrastructure (MetaboHUB). contact@workflow4metabolomics.org. © The Author 2014. Published by Oxford University Press.
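
    Since W4M is built on the Galaxy platform, its tools can in principle be driven programmatically through Galaxy's standard API. The following is a minimal sketch using the BioBlend client library; the server URL and API key are placeholders, not actual W4M credentials, and the endpoints available to a given account may differ.

        # Sketch: scripting a Galaxy-based platform such as W4M with BioBlend.
        # The server URL and API key below are placeholders, not W4M credentials.
        from bioblend.galaxy import GalaxyInstance

        gi = GalaxyInstance(url="https://w4m.example.org", key="YOUR_API_KEY")

        # List the workflows visible to this account.
        for wf in gi.workflows.get_workflows():
            print(wf["id"], wf["name"])

        # Create a fresh history to hold the outputs of an analysis run.
        history = gi.histories.create_history(name="metabolomics-run")
        print("created history:", history["id"])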

  12. Enabling Research without Geographical Boundaries via Collaborative Research Infrastructures

    NASA Astrophysics Data System (ADS)

    Gesing, S.

    2016-12-01

    Collaborative research infrastructures on a global scale for the earth and space sciences face a plethora of challenges, from technical implementation to organizational aspects. Science gateways - also known as virtual research environments (VREs) or virtual laboratories - address part of these challenges by providing end-to-end solutions that let researchers focus on their specific research questions without needing to become acquainted with the technical details of the complex underlying infrastructures. In general, they provide a single point of entry to tools and data irrespective of organizational boundaries and thus make scientific discoveries easier and faster. The importance of science gateways has been recognized at the national as well as the international level by funding bodies and organizations. For example, the US NSF has just funded a Science Gateways Community Institute, which offers support, consultancy and openly accessible software repositories for users and developers; Horizon 2020 provides funding for virtual research environments in Europe, which has led to projects such as VRE4EIC (A Europe-wide Interoperable Virtual Research Environment to Empower Multidisciplinary Research Communities and Accelerate Innovation and Collaboration); national or continental research infrastructures such as XSEDE in the USA, Nectar in Australia and EGI in Europe support the development and uptake of science gateways; and the global initiatives International Coalition on Science Gateways, the RDA Virtual Research Environment Interest Group and the IEEE Technical Area on Science Gateways have been founded to provide global leadership on future directions for science gateways and to raise awareness of them. This presentation will give an overview of these projects and initiatives, which aim to support domain researchers and developers with measures for the efficient creation of science gateways and for increasing their usability and sustainability, considering the breadth of topics in the science gateway context. It will detail the challenges the community faces in collaborative research on a global scale without geographical boundaries and will provide suggestions for further enhancing outreach to domain researchers.

  13. Business Case Analysis of the Marine Corps Base Pendleton Virtual Smart Grid

    DTIC Science & Technology

    2017-06-01

    An examination of five case studies highlights the costs and benefits of the Virtual Smart Grid (VSG) developed by Space and Naval Warfare Systems Command for use at Marine Corps Base Pendleton, in the context of Advanced Metering Infrastructure on DOD installations.

  14. Desktop Virtualization: Applications and Considerations

    ERIC Educational Resources Information Center

    Hodgman, Matthew R.

    2013-01-01

    As educational technology continues to rapidly become a vital part of a school district's infrastructure, desktop virtualization promises to provide cost-effective and education-enhancing solutions to school-based computer technology problems in school systems locally and abroad. This article outlines the history of and basic concepts behind…

  15. The Virtual Xenbase: transitioning an online bioinformatics resource to a private cloud.

    PubMed

    Karimi, Kamran; Vize, Peter D

    2014-01-01

    As a model organism database, Xenbase has been providing informatics and genomic data on Xenopus (Silurana) tropicalis and Xenopus laevis frogs for more than a decade. The Xenbase database contains curated, as well as community-contributed and automatically harvested literature, gene and genomic data. A GBrowse genome browser, a BLAST+ server and stock center support are available on the site. When this resource was first built, all software services and components in Xenbase ran on a single physical server, with inherent reliability, scalability and inter-dependence issues. Recent advances in networking and virtualization techniques allowed us to move Xenbase to a virtual environment, and more specifically to a private cloud. To do so we decoupled the different software services and components, such that each would run on a different virtual machine. In the process, we also upgraded many of the components. The resulting system is faster and more reliable. System maintenance is easier, as individual virtual machines can now be updated, backed up and changed independently. We are also experiencing more effective resource allocation and utilization. Database URL: www.xenbase.org. © The Author(s) 2014. Published by Oxford University Press.

  16. caGrid 1.0: a Grid enterprise architecture for cancer research.

    PubMed

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2007-10-11

    caGrid is the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG) program. The current release, caGrid version 1.0, is developed as the production Grid software infrastructure of caBIG. Based on feedback from adopters of the previous version (caGrid 0.5), it has been significantly enhanced with new features and improvements to existing components. This paper presents an overview of caGrid 1.0, its main components, and enhancements over caGrid 0.5.

  17. A cyber infrastructure for the SKA Telescope Manager

    NASA Astrophysics Data System (ADS)

    Barbosa, Domingos; Barraca, João. P.; Carvalho, Bruno; Maia, Dalmiro; Gupta, Yashwant; Natarajan, Swaminathan; Le Roux, Gerhard; Swart, Paul

    2016-07-01

    The Square Kilometre Array Telescope Manager (SKA TM) will be responsible for assisting the SKA Operations and Observation Management, carrying out system diagnosis and collecting monitoring and control (M&C) data from the SKA subsystems and components. To provide adequate compute resources, scalability, operation continuity and high availability, as well as strict Quality of Service, the TM cyber-infrastructure (embodied in the Local Infrastructure - LINFRA) consists of COTS hardware and infrastructural software (for example: server monitoring software, host operating system, virtualization software, device firmware), providing a specially tailored Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) solution. The TM infrastructure provides services in the form of computational power, software-defined networking, power, storage abstractions, and high-level, state-of-the-art IaaS and PaaS management interfaces. This cyber platform will be tailored to each of the two SKA Phase 1 telescope instances (SKA_MID in South Africa and SKA_LOW in Australia), each presenting different computational and storage infrastructures and conditioned by its location. The platform will provide a compute model enabling TM to manage the deployment and execution of its multiple components (observation scheduler, proposal submission tools, M&C components, forensic tools, several databases, etc.). In this sense, the TM LINFRA is primarily focused on the provision of isolated instances, mostly resorting to virtualization technologies, while defaulting to bare hardware if specifically required due to performance, security, availability, or other requirements.

  18. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Brunetti, R.; Lusso, S.; Vallero, S.

    2014-06-01

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It allows resources to be allocated to any application dynamically and efficiently, and virtual machines to be tailored to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily and with minimal downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site, a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC, and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.
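
    As a rough illustration of the EC2-compatible interface and CloudInit contextualization mentioned above, here is a minimal sketch using the boto3 client. The endpoint URL, credentials, image id and instance type are placeholders, not the actual INFN-Torino configuration.

        # Sketch: growing a virtual batch farm through an EC2-compatible API with
        # CloudInit contextualization. Endpoint, credentials and ids are placeholders.
        import boto3

        # cloud-init user data applied at first boot (illustrative contents).
        USER_DATA = "#cloud-config\npackages: [htcondor]\nruncmd:\n  - [condor_master]\n"

        ec2 = boto3.client(
            "ec2",
            endpoint_url="https://cloud.example.org:8773/services/Cloud",
            aws_access_key_id="PLACEHOLDER",
            aws_secret_access_key="PLACEHOLDER",
            region_name="site-local",
        )
        ec2.run_instances(
            ImageId="ami-worker-node",  # placeholder id of a worker-node image
            MinCount=1, MaxCount=10,    # expand the farm by up to ten VMs
            InstanceType="m1.large",
            UserData=USER_DATA,         # consumed by CloudInit inside the VM
        )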

  19. Role of Open Source Tools and Resources in Virtual Screening for Drug Discovery.

    PubMed

    Karthikeyan, Muthukumarasamy; Vyas, Renu

    2015-01-01

    Advancement in chemoinformatics research, in parallel with the availability of high-performance computing platforms, has made the handling of large-scale, multi-dimensional scientific data for high-throughput drug discovery easier. In this study we have explored publicly available molecular databases with the help of open-source-based, integrated in-house molecular informatics tools for virtual screening. The virtual screening literature of the past decade has been extensively investigated and thoroughly analyzed to reveal interesting patterns with respect to the drug, target, scaffold and disease space. The review also focuses on integrated chemoinformatics tools that are capable of harvesting chemical data from textual literature information and transforming it into truly computable chemical structures, identifying unique fragments and scaffolds from a class of compounds, automatically generating focused virtual libraries, computing molecular descriptors for structure-activity relationship studies, and applying the conventional filters used in lead discovery along with in-house developed exhaustive PTC (Pharmacophore, Toxicophores and Chemophores) filters and machine learning tools for the design of potential disease-specific inhibitors. A case study on kinase inhibitors is provided as an example.

  20. Information security system based on virtual-optics imaging methodology and public key infrastructure

    NASA Astrophysics Data System (ADS)

    Peng, Xiang; Zhang, Peng; Cai, Lilong

    In this paper, we present a virtual-optics-based information security system model with the aid of public key infrastructure (PKI) techniques. The proposed model employs a hybrid architecture in which our previously published encryption algorithm based on virtual-optics imaging methodology (VOIM) can be used to encipher and decipher data, while an asymmetric algorithm, for example RSA, is applied for enciphering and deciphering the session key(s). For an asymmetric system, given an encryption key, it is computationally infeasible to determine the decryption key, and vice versa. The whole information security model runs under the framework of PKI, which is based on public-key cryptography and digital signatures. This PKI-based VOIM security approach provides additional features such as confidentiality, authentication, and integrity for data encryption in networked environments.
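
    The hybrid pattern described here, a symmetric transform over the bulk data plus an asymmetric cipher protecting the session key, can be sketched with standard primitives. In the sketch below, AES-GCM merely stands in for the VOIM transform, which is not publicly packaged; the code uses Python's cryptography library.

        # Sketch: symmetric cipher for bulk data plus RSA-wrapped session key.
        # AES-GCM stands in for the paper's virtual-optics (VOIM) transform.
        import os

        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import padding, rsa
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                            algorithm=hashes.SHA256(), label=None)

        # Receiver's key pair; in a real PKI the public key arrives in an X.509 cert.
        private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        public_key = private_key.public_key()

        # Sender: encrypt data under a fresh session key, then wrap the key with RSA.
        session_key = AESGCM.generate_key(bit_length=256)
        nonce = os.urandom(12)
        ciphertext = AESGCM(session_key).encrypt(nonce, b"image payload", None)
        wrapped_key = public_key.encrypt(session_key, OAEP)

        # Receiver: unwrap the session key, then decrypt (and authenticate) the data.
        recovered_key = private_key.decrypt(wrapped_key, OAEP)
        assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == b"image payload"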

  1. Virtual Control Systems Environment (VCSE)

    ScienceCinema

    Atkins, Will

    2018-02-14

    Will Atkins, a Sandia National Laboratories computer engineer discusses cybersecurity research work for process control systems. Will explains his work on the Virtual Control Systems Environment project to develop a modeling and simulation framework of the U.S. electric grid in order to study and mitigate possible cyberattacks on infrastructure.

  2. DIRAC universal pilots

    NASA Astrophysics Data System (ADS)

    Stagni, F.; McNab, A.; Luzzi, C.; Krzemien, W.; Consortium, DIRAC

    2017-10-01

    In the last few years, new types of computing models, such as IaaS (Infrastructure as a Service) and IaaC (Infrastructure as a Client), have gained popularity. New resources may come as part of pledged resources, while others are opportunistic. Most, but not all, of these new infrastructures are based on virtualization techniques. In addition, some of them present multi-processor computing slots to their users. Virtual Organizations are therefore facing heterogeneity of the available resources, and the use of interware software like DIRAC to provide a transparent, uniform interface has become essential. The transparent access to the underlying resources is realized by implementing the pilot model. DIRAC’s newest generation of generic pilots (the so-called Pilots 2.0) are the “pilots for all the skies”, and were successfully released in production more than a year ago. They use a plugin mechanism that makes them easily adaptable. Pilots 2.0 have been used for fetching and running jobs on every type of resource, be it a Worker Node (WN) behind a CREAM/ARC/HTCondor/DIRAC Computing Element, a Virtual Machine running on IaaC infrastructures like Vac or BOINC, an IaaS cloud resource managed by Vcycle, an LHCb High Level Trigger farm node, or any type of opportunistic computing resource. Make a machine a “Pilot Machine”, and all the differences between resources disappear. This contribution describes how pilots are made suitable for different resources, and the recent steps taken towards a fully unified framework, including monitoring. The case of multi-processor computing slots, on either real or virtual machines, with the whole node or a partition of it, is also discussed.

  3. An adaptive process-based cloud infrastructure for space situational awareness applications

    NASA Astrophysics Data System (ADS)

    Liu, Bingwei; Chen, Yu; Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik; Rubin, Bruce

    2014-06-01

    Space situational awareness (SSA) and defense space control capabilities are top priorities for groups that own or operate man-made spacecraft. Also, with the growing amount of space debris, there is an increased demand for contextual understanding that necessitates the capability of collecting and processing a vast amount of sensor data. Cloud computing, which features scalable and flexible storage and computing services, has been recognized as an ideal candidate to meet the large-data contextual challenges of SSA. Cloud computing consists of physical service providers and middleware virtual machines together with infrastructure, platform, and software as a service (IaaS, PaaS, SaaS) models. However, the typical Virtual Machine (VM) abstraction is on a per-operating-system basis, which is too low-level and limits the flexibility of a mission application architecture. In response to this technical challenge, a novel adaptive process-based cloud infrastructure for SSA applications is proposed in this paper. In addition, the design rationale and a prototype are examined in detail. The SSA Cloud (SSAC) conceptual capability will potentially support space situation monitoring and tracking, object identification, and threat assessment. Lastly, the benefits of more granular and flexible cloud computing resource allocation are illustrated for data processing and implementation considerations within a representative SSA system environment. We show that container-based virtualization performs better than hypervisor-based virtualization technology in an SSA scenario.

  4. Storing and using health data in a virtual private cloud.

    PubMed

    Regola, Nathan; Chawla, Nitesh V

    2013-03-13

    Electronic health records are being adopted at a rapid rate due to increased funding from the US federal government. Health data provide the opportunity to identify possible improvements in health care delivery by applying data mining and statistical methods, and will also enable a wide variety of new applications that will be meaningful to patients and medical professionals. Researchers are often granted access to health care data to assist in the data mining process, but HIPAA regulations mandate comprehensive safeguards to protect the data. Universities (and presumably other research organizations) often have an enterprise information technology infrastructure and a research infrastructure. Unfortunately, neither of these infrastructures is generally appropriate for sensitive research data covered by HIPAA, as supporting such data requires special accommodations on the part of enterprise information technology (or increased security on the part of the research computing environment). Cloud computing, a concept that allows organizations to build complex infrastructures on leased resources, is rapidly evolving to the point that it is possible to build sophisticated network architectures with advanced security capabilities. We present a prototype infrastructure in Amazon's Virtual Private Cloud to allow researchers and practitioners to utilize the data in a HIPAA-compliant environment.
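
    As an illustration of the kind of network isolation such a prototype relies on, the sketch below carves out a private network and a restrictive security group with boto3. The CIDR blocks and administrative host address are invented placeholders; the paper's actual architecture is more elaborate.

        # Sketch: an isolated enclave in Amazon VPC for sensitive research data.
        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # Private address space and a subnet for the research instances.
        vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
        subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24")["Subnet"]

        # Security group admitting SSH from a single administrative host only;
        # no other inbound traffic can reach instances in the enclave.
        sg = ec2.create_security_group(
            GroupName="research-enclave",
            Description="restricted enclave for sensitive data (illustrative)",
            VpcId=vpc["VpcId"],
        )
        ec2.authorize_security_group_ingress(
            GroupId=sg["GroupId"],
            IpPermissions=[{
                "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
                "IpRanges": [{"CidrIp": "203.0.113.10/32"}],  # placeholder admin host
            }],
        )
        print("enclave subnet:", subnet["SubnetId"])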

  5. Virtual Network Embedding via Monte Carlo Tree Search.

    PubMed

    Haeri, Soroush; Trajkovic, Ljiljana

    2018-02-01

    Network virtualization helps overcome shortcomings of the current Internet architecture. The virtualized network architecture enables the coexistence of multiple virtual networks (VNs) on an existing physical infrastructure. The VN embedding (VNE) problem, which deals with the embedding of VN components onto a physical network, is known to be NP-hard. In this paper, we propose two VNE algorithms: MaVEn-M and MaVEn-S. MaVEn-M employs the multicommodity flow algorithm for virtual link mapping while MaVEn-S uses the shortest-path algorithm. They formalize the virtual node mapping problem by using the Markov decision process (MDP) framework and devise action policies (node mappings) for the proposed MDP using the Monte Carlo tree search algorithm. Service providers may adjust the execution time of the MaVEn algorithms based on the traffic load of VN requests. The objective of the algorithms is to maximize the profit of infrastructure providers. We develop a discrete event VNE simulator to implement and evaluate the performance of MaVEn-M, MaVEn-S, and several recently proposed VNE algorithms. We introduce profitability as a new performance metric that captures both acceptance and revenue-to-cost ratios. Simulation results show that the proposed algorithms find more profitable solutions than the existing algorithms. Given additional computation time, they further improve their embedding solutions.
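
    To make the node-mapping idea concrete, here is a minimal, self-contained sketch of Monte Carlo tree search applied to virtual node mapping. The substrate capacities, demands and leftover-capacity reward are invented for illustration and are far simpler than the MaVEn algorithms' profit objective and link-mapping stages.

        # Sketch: Monte Carlo tree search for virtual node mapping (illustrative).
        import math
        import random

        SUBSTRATE_CPU = {"A": 8, "B": 6, "C": 4, "D": 10}  # invented substrate nodes
        VN_DEMANDS = [4, 3, 2]                             # invented virtual node demands

        def actions(state):
            """Substrate nodes still free and able to host the next virtual node."""
            i = len(state)
            used = set(state)
            return [s for s in SUBSTRATE_CPU
                    if s not in used and SUBSTRATE_CPU[s] >= VN_DEMANDS[i]]

        def rollout(state):
            """Randomly complete the mapping; reward is a leftover-capacity proxy."""
            state = list(state)
            while len(state) < len(VN_DEMANDS):
                cands = actions(state)
                if not cands:
                    return 0.0  # embedding failed
                state.append(random.choice(cands))
            spare = sum(SUBSTRATE_CPU[s] - d for s, d in zip(state, VN_DEMANDS))
            return spare / sum(SUBSTRATE_CPU.values())

        def mcts(iters=2000, c=1.4):
            visits, value = {}, {}
            for _ in range(iters):
                state, path = [], []
                while len(state) < len(VN_DEMANDS):
                    cands = actions(state)
                    if not cands:
                        break
                    def uct(a):  # upper confidence bound for trees
                        key = tuple(state) + (a,)
                        if key not in visits:
                            return float("inf")  # explore unvisited children first
                        n_parent = sum(visits.get(tuple(state) + (b,), 0) for b in cands)
                        return (value[key] / visits[key]
                                + c * math.sqrt(math.log(n_parent + 1) / visits[key]))
                    state.append(max(cands, key=uct))
                    path.append(tuple(state))
                reward = rollout(state)
                for key in path:  # back-propagate the rollout reward
                    visits[key] = visits.get(key, 0) + 1
                    value[key] = value.get(key, 0.0) + reward
            best = []  # greedy extraction of the most-visited mapping
            while len(best) < len(VN_DEMANDS):
                cands = actions(best)
                best.append(max(cands, key=lambda a: visits.get(tuple(best) + (a,), 0)))
            return dict(enumerate(best))

        print(mcts())  # e.g. {0: 'D', 1: 'A', 2: 'B'}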

  6. Virtualized Networks and Virtualized Optical Line Terminal (vOLT)

    NASA Astrophysics Data System (ADS)

    Ma, Jonathan; Israel, Stephen

    2017-03-01

    The success of the Internet and the proliferation of Internet of Things (IoT) devices are forcing telecommunications carriers to re-architect the central office as a datacenter (CORD) so as to bring datacenter economics and cloud agility to the central office (CO). The Open Network Operating System (ONOS) is the first open-source software-defined network (SDN) operating system capable of managing and controlling network, computing, and storage resources to support CORD infrastructure and network virtualization. The virtualized Optical Line Termination (vOLT) is one of the key components of such virtualized networks.

  7. caCORE version 3: Implementation of a model driven, service-oriented architecture for semantic interoperability.

    PubMed

    Komatsoulis, George A; Warzel, Denise B; Hartel, Francis W; Shanbhag, Krishnakant; Chilukuri, Ram; Fragoso, Gilberto; Coronado, Sherri de; Reeves, Dianne M; Hadfield, Jillaine B; Ludet, Christophe; Covitz, Peter A

    2008-02-01

    One of the requirements for a federated information system is interoperability, the ability of one computer system to access and use the resources of another system. This feature is particularly important in biomedical research systems, which need to coordinate a variety of disparate types of data. In order to meet this need, the National Cancer Institute Center for Bioinformatics (NCICB) has created the cancer Common Ontologic Representation Environment (caCORE), an interoperability infrastructure based on Model Driven Architecture. The caCORE infrastructure provides a mechanism to create interoperable biomedical information systems. Systems built using the caCORE paradigm address both aspects of interoperability: the ability to access data (syntactic interoperability) and understand the data once retrieved (semantic interoperability). This infrastructure consists of an integrated set of three major components: a controlled terminology service (Enterprise Vocabulary Services), a standards-based metadata repository (the cancer Data Standards Repository) and an information system with an Application Programming Interface (API) based on Domain Model Driven Architecture. This infrastructure is being leveraged to create a Semantic Service-Oriented Architecture (SSOA) for cancer research by the National Cancer Institute's cancer Biomedical Informatics Grid (caBIG).

  8. caCORE version 3: Implementation of a model driven, service-oriented architecture for semantic interoperability

    PubMed Central

    Komatsoulis, George A.; Warzel, Denise B.; Hartel, Frank W.; Shanbhag, Krishnakant; Chilukuri, Ram; Fragoso, Gilberto; de Coronado, Sherri; Reeves, Dianne M.; Hadfield, Jillaine B.; Ludet, Christophe; Covitz, Peter A.

    2008-01-01

    One of the requirements for a federated information system is interoperability, the ability of one computer system to access and use the resources of another system. This feature is particularly important in biomedical research systems, which need to coordinate a variety of disparate types of data. In order to meet this need, the National Cancer Institute Center for Bioinformatics (NCICB) has created the cancer Common Ontologic Representation Environment (caCORE), an interoperability infrastructure based on Model Driven Architecture. The caCORE infrastructure provides a mechanism to create interoperable biomedical information systems. Systems built using the caCORE paradigm address both aspects of interoperability: the ability to access data (syntactic interoperability) and understand the data once retrieved (semantic interoperability). This infrastructure consists of an integrated set of three major components: a controlled terminology service (Enterprise Vocabulary Services), a standards-based metadata repository (the cancer Data Standards Repository) and an information system with an Application Programming Interface (API) based on Domain Model Driven Architecture. This infrastructure is being leveraged to create a Semantic Service Oriented Architecture (SSOA) for cancer research by the National Cancer Institute’s cancer Biomedical Informatics Grid (caBIG™). PMID:17512259

  9. AstroCloud, a Cyber-Infrastructure for Astronomy Research: Overview

    NASA Astrophysics Data System (ADS)

    Cui, C.; Yu, C.; Xiao, J.; He, B.; Li, C.; Fan, D.; Wang, C.; Hong, Z.; Li, S.; Mi, L.; Wan, W.; Cao, Z.; Wang, J.; Yin, S.; Fan, Y.; Wang, J.

    2015-09-01

    AstroCloud is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Tasks such as proposal submission, proposal peer review, data archiving, data quality control, data release and open access, and Cloud-based data processing and analysis will all be supported on the platform. It will act as a full-lifecycle management system for astronomical data and telescopes. Achievements from international Virtual Observatories and Cloud Computing are adopted heavily. In this paper, the background of the project, key features of the system, and the latest progress are introduced.

  10. Optical network democratization.

    PubMed

    Nejabati, Reza; Peng, Shuping; Simeonidou, Dimitra

    2016-03-06

    The current Internet infrastructure is not able to support independent evolution and innovation of physical and network layer functionalities, protocols and services, while at the same time supporting the increasing bandwidth demands of evolving and heterogeneous applications. This paper addresses this problem by proposing a completely democratized optical network infrastructure. It introduces the novel concepts of the optical white box and the bare metal optical switch as key technology enablers for democratizing optical networks. These are programmable optical switches whose hardware is loosely connected internally and is completely separated from their control software. To alleviate their complexity, a multi-dimensional abstraction mechanism using software-defined network technology is proposed. It creates a universal model of the proposed switches without exposing their technological details. It also enables a conventional network programmer to develop network applications for control of the optical network without specific technical knowledge of the physical layer. Furthermore, a novel optical network virtualization mechanism is proposed, enabling the composition and operation of multiple coexisting and application-specific virtual optical networks sharing the same physical infrastructure. Finally, the optical white box and the abstraction mechanism are experimentally evaluated, while the virtualization mechanism is evaluated with simulation. © 2016 The Author(s).

  11. The Barcode of Life Data Portal: Bridging the Biodiversity Informatics Divide for DNA Barcoding

    PubMed Central

    Sarkar, Indra Neil; Trizna, Michael

    2011-01-01

    With the volume of molecular sequence data that is systematically being generated globally, there is a need for centralized resources for data exploration and analytics. DNA Barcode initiatives are on track to generate a compendium of molecular sequence-based signatures for identifying animals and plants. To date, the available tools for exploring and analyzing these data have existed only in boutique form, often representing a frustrating hurdle for the many researchers who may not have the resources to install or implement the algorithms described by the analytic community. The Barcode of Life Data Portal (BDP) is a first step towards integrating the latest biodiversity informatics innovations with molecular sequence data from DNA barcoding. Through the establishment of community-driven standards, based on discussion with the Data Analysis Working Group (DAWG) of the Consortium for the Barcode of Life (CBOL), the BDP provides an infrastructure for the incorporation of existing and next-generation DNA barcode analytic applications in an open forum. PMID:21818249

  12. Lipidomics informatics for life-science.

    PubMed

    Schwudke, D; Shevchenko, A; Hoffmann, N; Ahrends, R

    2017-11-10

    Lipidomics encompasses analytical approaches that aim to identify and quantify the complete set of lipids (the lipidome) in a given cell, tissue or organism, as well as their interactions with other molecules. The majority of lipidomics workflows are based on mass spectrometry, which has proven a powerful tool in systems biology in concert with the other omics disciplines. Unfortunately, bioinformatics infrastructures for this relatively young discipline are accessible only to a few specialists. Search engines, quantification algorithms, visualization tools and databases developed by the 'Lipidomics Informatics for Life-Science' (LIFS) partners will be restructured and standardized to provide broad access to these specialized bioinformatics pipelines. Work on the many medical challenges related to lipid metabolic alterations will be fostered by the capacity building that LIFS proposes. LIFS, as a member of the 'German Network for Bioinformatics' (de.NBI) node 'Bioinformatics for Proteomics' (BioInfra.Prot), will provide access to the described software as well as to tutorials and consulting services via a unified web portal. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Leveraging biomedical ontologies and annotation services to organize microbiome data from Mammalian hosts.

    PubMed

    Sarkar, Indra Neil

    2010-11-13

    A better understanding of commensal microbiotic communities ("microbiomes") may provide valuable insights into human health. Towards this goal, an essential step may be the development of approaches to organize data in ways that enable comparative hypotheses across mammalian microbiomes. The present study explores the feasibility of using existing biomedical informatics resources, especially those available at the National Center for Biomedical Ontology, to organize microbiome data contained within large sequence repositories such as GenBank. The results indicate that the Foundational Model of Anatomy and SNOMED CT can be used to organize more than 90% of the bacterial organisms associated with 10 domesticated mammalian species. These promising findings suggest that the current biomedical informatics infrastructure may be used to organize microbiome data beyond humans. Furthermore, the results identify key concepts that might be organized into a semantic structure for incorporation into subsequent annotations, which could facilitate comparative biomedical hypotheses pertaining to human health.

  14. Informatics infrastructure for syndrome surveillance, decision support, reporting, and modeling of critical illness.

    PubMed

    Herasevich, Vitaly; Pickering, Brian W; Dong, Yue; Peters, Steve G; Gajic, Ognjen

    2010-03-01

    To develop and validate an informatics infrastructure for syndrome surveillance, decision support, reporting, and modeling of critical illness. Using open-schema data feeds imported from electronic medical records (EMRs), we developed a near-real-time relational database (Multidisciplinary Epidemiology and Translational Research in Intensive Care Data Mart). Imported data domains included physiologic monitoring, medication orders, laboratory and radiologic investigations, and physician and nursing notes. Open database connectivity supported the use of Boolean combinations of data that allowed authorized users to develop syndrome surveillance, decision support, and reporting (data "sniffers") routines. Random samples of database entries in each category were validated against corresponding independent manual reviews. The Multidisciplinary Epidemiology and Translational Research in Intensive Care Data Mart accommodates, on average, 15,000 admissions to the intensive care unit (ICU) per year and 200,000 vital records per day. Agreement between database entries and manual EMR audits was high for sex, mortality, and use of mechanical ventilation (kappa, 1.0 for all) and for age and laboratory and monitored data (Bland-Altman mean difference +/- SD, 1(0) for all). Agreement was lower for interpreted or calculated variables, such as specific syndrome diagnoses (kappa, 0.5 for acute lung injury), duration of ICU stay (mean difference +/- SD, 0.43+/-0.2), or duration of mechanical ventilation (mean difference +/- SD, 0.2+/-0.9). Extraction of essential ICU data from a hospital EMR into an open, integrative database facilitates process control, reporting, syndrome surveillance, decision support, and outcome research in the ICU.
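
    As an illustration of the Boolean data "sniffer" idea described above, the sketch below joins two hypothetical feeds and fires a rule when two criteria co-occur. Table names, columns, thresholds and values are invented; the Data Mart's actual schema and syndrome definitions are not reproduced here.

        # Sketch: a Boolean "sniffer" rule over near-real-time relational feeds.
        # Table names, columns, thresholds and values are invented for illustration.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE vitals(patient_id INT, ts TEXT, map_mmhg REAL);
            CREATE TABLE labs(patient_id INT, ts TEXT, lactate_mmol REAL);
            INSERT INTO vitals VALUES (1, '08:00', 58), (2, '08:00', 75);
            INSERT INTO labs   VALUES (1, '08:05', 4.2), (2, '08:05', 1.1);
        """)

        # Fire when hypotension and elevated lactate co-occur in the same patient.
        SNIFFER = """
            SELECT v.patient_id
            FROM vitals v JOIN labs l ON l.patient_id = v.patient_id
            WHERE v.map_mmhg < 65 AND l.lactate_mmol > 2.0
        """
        for (patient_id,) in db.execute(SNIFFER):
            print("sniffer alert for patient", patient_id)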

  15. Segmentation and Quantitative Analysis of Epithelial Tissues.

    PubMed

    Aigouy, Benoit; Umetsu, Daiki; Eaton, Suzanne

    2016-01-01

    Epithelia are tissues that regulate exchanges with the environment. They are very dynamic and can acquire virtually any shape; at the cellular level, they are composed of cells tightly connected by junctions. Most often epithelia are amenable to live imaging; however, the large number of cells composing an epithelium and the absence of informatics tools dedicated to epithelial analysis have largely prevented tissue-scale studies. Here we present Tissue Analyzer, a free tool that can be used to segment and analyze epithelial cells and monitor tissue dynamics.
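
    Tissue Analyzer itself is distributed as a packaged tool; purely to illustrate the family of methods such segmentation tools build on, here is a generic marker-based watershed sketch with scikit-image on a synthetic image of two touching cells.

        # Sketch: generic marker-based watershed segmentation of touching cells.
        # This illustrates the method family only, not Tissue Analyzer's own code.
        import numpy as np
        from scipy import ndimage as ndi
        from skimage.draw import disk
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        # Synthetic "epithelium": two touching cells on a dark background.
        image = np.zeros((80, 80), dtype=bool)
        image[disk((30, 30), 18)] = True
        image[disk((30, 52), 18)] = True

        # Seed one marker per cell at the maxima of the distance transform,
        # then flood from the seeds to split the touching objects.
        distance = ndi.distance_transform_edt(image)
        coords = peak_local_max(distance, min_distance=10, labels=image)
        markers = np.zeros(distance.shape, dtype=int)
        markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
        labels = watershed(-distance, markers, mask=image)
        print(labels.max(), "cells segmented")  # 2 expected for this image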

  16. Esthetic considerations for the treatment of the edentulous maxilla based on current informatic alternatives: a case report.

    PubMed

    Rodríguez-Tizcareño, Mario H; Barajas, Lizbeth; Pérez-Gásque, Marisol; Gómez, Salvador

    2012-06-01

    This report presents a protocol for transferring virtual treatment plan data to the surgical and prosthetic reality, together with its clinical application: augmentation of bone sites to their ideal architectural form with computer-designed, custom-milled bovine bone graft blocks; implant insertion based on image-guided stent fabrication; and a restorative manufacturing process employing computed tomography-based software, navigation systems, and computer-aided design and manufacturing techniques for the treatment of the edentulous maxilla.

  17. Cloud Infrastructure & Applications - CloudIA

    NASA Astrophysics Data System (ADS)

    Sulistio, Anthony; Reich, Christoph; Doelitzscher, Frank

    The idea behind Cloud Computing is to deliver Infrastructure-as-a-Service and Software-as-a-Service over the Internet on an easy pay-per-use business model. To harness the potential of Cloud Computing for e-learning and research purposes, and to serve small- and medium-sized enterprises, the Hochschule Furtwangen University established a new project called Cloud Infrastructure & Applications (CloudIA). The CloudIA project is a market-oriented cloud infrastructure that leverages different virtualization technologies by supporting Service-Level Agreements for various service offerings. This paper describes the CloudIA project in detail and reports our early experiences in building a private cloud using an existing infrastructure.

  18. A proposed national research and development agenda for population health informatics: summary recommendations from a national expert workshop.

    PubMed

    Kharrazi, Hadi; Lasser, Elyse C; Yasnoff, William A; Loonsk, John; Advani, Aneel; Lehmann, Harold P; Chin, David C; Weiner, Jonathan P

    2017-01-01

    The Johns Hopkins Center for Population Health IT hosted a 1-day symposium sponsored by the National Library of Medicine to help develop a national research and development (R&D) agenda for the emerging field of population health informatics (PopHI). The symposium provided a venue for national experts to brainstorm, identify, discuss, and prioritize the top challenges and opportunities in the PopHI field, as well as R&D areas to address these. This manuscript summarizes the findings of the PopHI symposium. The symposium participants' recommendations have been categorized into 13 overarching themes, including policy alignment, data governance, sustainability and incentives, and standards/interoperability. The proposed consensus-based national agenda for PopHI consisted of 18 priority recommendations grouped into 4 broad goals: (1) Developing a standardized collaborative framework and infrastructure, (2) Advancing technical tools and methods, (3) Developing a scientific evidence and knowledge base, and (4) Developing an appropriate framework for policy, privacy, and sustainability. There was a substantial amount of agreement between all the participants on the challenges and opportunities for PopHI as well as on the actions that needed to be taken to address these. PopHI is a rapidly growing field that has emerged to address the population dimension of the Triple Aim. The proposed PopHI R&D agenda is comprehensive and timely, but should be considered only a starting-point, given that ongoing developments in health policy, population health management, and informatics are very dynamic, suggesting that the agenda will require constant monitoring and updating. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  19. Clinical Research Informatics: Supporting the Research Study Lifecycle.

    PubMed

    Johnson, S B

    2017-08-01

    Objectives: The primary goal of this review is to summarize significant developments in the field of Clinical Research Informatics (CRI) over the years 2015-2016. The secondary goal is to contribute to a deeper understanding of CRI as a field, through the development of a strategy for searching and classifying CRI publications. Methods: A search strategy was developed to query the PubMed database, using medical subject headings to both select and exclude articles, and filtering publications by date and other characteristics. A manual review classified publications using stages in the "research study lifecycle", with key stages that include study definition, participant enrollment, data management, data analysis, and results dissemination. Results: The search strategy generated 510 publications. The manual classification identified 125 publications as relevant to CRI, which were classified into seven different stages of the research lifecycle, plus one additional class pertaining to multiple stages and referring to general infrastructure or standards. Important cross-cutting themes included new applications of electronic media (Internet, social media, mobile devices), standardization of data and procedures, and increased automation through the use of data mining and big data methods. Conclusions: The review revealed increased interest in and support for CRI in large-scale projects across institutions, regionally, nationally, and internationally. A search strategy based on medical subject headings can find many relevant papers, but a large number of non-relevant papers must be filtered out using text words pertaining to closely related fields such as computational statistics and clinical informatics. The research lifecycle was useful as a classification scheme, highlighting the relevance of clinical research informatics solutions to their users. Georg Thieme Verlag KG Stuttgart.

  20. Virtual-optical information security system based on public key infrastructure

    NASA Astrophysics Data System (ADS)

    Peng, Xiang; Zhang, Peng; Cai, Lilong; Niu, Hanben

    2005-01-01

    A virtual-optics-based encryption model with the aid of public key infrastructure (PKI) is presented in this paper. The proposed model employs a hybrid architecture in which our previously published encryption method based on a virtual-optics scheme (VOS) can be used to encipher and decipher data, while an asymmetric algorithm, for example RSA, is applied for enciphering and deciphering the session key(s). The whole information security model runs under the framework of the international standard ITU-T X.509 PKI, which is based on public-key cryptography and digital signatures. This PKI-based VOS security approach provides additional features such as confidentiality, authentication, and integrity for data encryption in networked environments. Numerical experiments prove the effectiveness of the method. The security of the proposed model is briefly analyzed by examining some possible attacks from the viewpoint of cryptanalysis.

  1. Concurrent access to a virtual microscope using a web service oriented architecture

    NASA Astrophysics Data System (ADS)

    Corredor, Germán.; Iregui, Marcela; Arias, Viviana; Romero, Eduardo

    2013-11-01

    Virtual microscopy (VM) facilitates the visualization and deployment of histopathological virtual slides (VS), a useful tool for education, research and diagnosis. In recent years it has become popular, yet its use is still limited, basically because of the very large size of VS, typically on the order of gigabytes. Such a volume of data requires efficacious and efficient strategies to access VS content. In an educational or research scenario, several users may need to access and interact with a VS at the same time, so, due to the large data size, a very expensive and powerful infrastructure is usually required. This article introduces a novel JPEG2000-based service-oriented architecture for streaming and visualizing very large images under scalable strategies, which, in addition, does not require very specialized infrastructure. Results suggest that the proposed architecture enables transmission and simultaneous visualization of large images while using resources efficiently and offering users adequate response times.
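
    The essence of the scalable access strategy is that clients request only the window and resolution level they actually display, rather than the whole multi-gigabyte VS. The sketch below shows such a client-side request against a hypothetical region-serving HTTP endpoint; the URL and parameter names are invented, as the paper's concrete service interface is not reproduced here.

        # Sketch: fetch one window of a virtual slide at a chosen resolution level
        # from a hypothetical region-serving endpoint (URL and parameters invented).
        import requests

        BASE = "http://vm.example.org/slides"  # placeholder server
        params = {
            "slide": "case-042.jp2",
            "level": 2,             # JPEG2000 resolution level (0 = full resolution)
            "x": 10240, "y": 8192,  # window origin, in level-0 pixel coordinates
            "w": 512, "h": 512,     # window size
        }
        resp = requests.get(BASE + "/region", params=params, timeout=10)
        resp.raise_for_status()
        with open("region.jpg", "wb") as f:
            f.write(resp.content)  # only this window crosses the network, not the VS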

  2. The Clue to Minimizing the Developer-User Divide by Good Practice in Earth and Space Science Informatics

    NASA Astrophysics Data System (ADS)

    Messerotti, M.

    2009-04-01

    Earth and space science research, like many other disciplines, can nowadays benefit from advanced data handling techniques and tools capable of significantly relieving the scientist of the burden of data search, retrieval, visualization and manipulation, and of exploiting the information content of the data. Typical examples are Virtual Observatories (VOs), specific to a variety of sub-disciplines but interlinked by virtue of the VO architecture; Virtual Globes, as advanced 3D selection and visualization interfaces to distributed data repositories; and the Global Earth Observation System of Systems. These information systems are also proving effective in education and outreach activities, as they are usable via web interfaces to access, display and download non-homogeneous datasets, raising the awareness of students and the public of the relevant disciplines. Despite this, such effective machinery is still poorly used both by the scientific community and by the community active in education and outreach. All these infrastructures are designed and developed according to state-of-the-art information and computer engineering techniques and are provided with top features such as ontology- and semantics-based data management and advanced unified web-based interfaces. Nonetheless, a careful analysis of the issue mentioned above reveals a key aspect that plays a major role: inadequate interaction with the user communities during the design, development, deployment and test phases. Even the best technical tool can appear inadequate to the final user when it does not meet the user's requirements in terms of achievable goals and ease of use. In this work, we consider the user-side features to be taken into account for the optimum exploitation of an information system, in the framework of the interaction between design engineers and target communities, towards setting a good practice for minimizing the developer-user divide.

  3. VERA 3.6 Release Notes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williamson, Richard L.; Kochunas, Brendan; Adams, Brian M.

    The Virtual Environment for Reactor Applications (VERA) components included in this distribution comprise selected computational tools and supporting infrastructure that solve neutronics, thermal-hydraulics, fuel performance, and coupled neutronics-thermal-hydraulics problems. The infrastructure components provide a simplified common user input capability and provide for physics integration with data transfer and coupled-physics iterative solution algorithms.

  4. UAS Integration in the NAS Project: Integrated Test and LVC Infrastructure

    NASA Technical Reports Server (NTRS)

    Murphy, Jim; Hoang, Ty

    2015-01-01

    Overview presentation of the Integrated Test and Evaluation sub-project of the Unmanned Aircraft System (UAS) Integration in the National Airspace System (NAS) project. The emphasis of the presentation is the Live, Virtual, and Constructive (LVC) infrastructure (LVC being a broadly used classification of modeling and simulation) and the use of external assets and connections.

  5. Physicists Get INSPIREd: INSPIRE Project and Grid Applications

    NASA Astrophysics Data System (ADS)

    Klem, Jukka; Iwaszkiewicz, Jan

    2011-12-01

    INSPIRE is the new high-energy physics scientific information system developed by CERN, DESY, Fermilab and SLAC. INSPIRE combines the curated and trusted contents of SPIRES database with Invenio digital library technology. INSPIRE contains the entire HEP literature with about one million records and in addition to becoming the reference HEP scientific information platform, it aims to provide new kinds of data mining services and metrics to assess the impact of articles and authors. Grid and cloud computing provide new opportunities to offer better services in areas that require large CPU and storage resources including document Optical Character Recognition (OCR) processing, full-text indexing of articles and improved metrics. D4Science-II is a European project that develops and operates an e-Infrastructure supporting Virtual Research Environments (VREs). It develops an enabling technology (gCube) which implements a mechanism for facilitating the interoperation of its e-Infrastructure with other autonomously running data e-Infrastructures. As a result, this creates the core of an e-Infrastructure ecosystem. INSPIRE is one of the e-Infrastructures participating in D4Science-II project. In the context of the D4Science-II project, the INSPIRE e-Infrastructure makes available some of its resources and services to other members of the resulting ecosystem. Moreover, it benefits from the ecosystem via a dedicated Virtual Organization giving access to an array of resources ranging from computing and storage resources of grid infrastructures to data and services.

  6. Virtual healthcare delivery: defined, modeled, and predictive barriers to implementation identified.

    PubMed

    Harrop, V M

    2001-01-01

    Provider organizations lack: 1. a definition of "virtual" healthcare delivery relative to the products, services, and processes offered by dot.coms, web-compact disk healthcare content providers, telemedicine, and telecommunications companies, and 2. a model for integrating real and virtual healthcare delivery. This paper defines virtual healthcare delivery as asynchronous, outsourced, and anonymous, then proposes a 2x2 Real-Virtual Healthcare Delivery model focused on real and virtual patients and real and virtual provider organizations. Using this model, provider organizations can systematically deconstruct healthcare delivery in the real world and reconstruct appropriate pieces in the virtual world. Observed barriers to virtual healthcare delivery are: resistance to telecommunication integrated delivery networks and outsourcing; confusion over virtual infrastructure requirements for telemedicine and full-service web portals, and the impact of integrated delivery networks and outsourcing on extant cultural norms and revenue generating practices. To remain competitive provider organizations must integrate real and virtual healthcare delivery.

  7. Virtual healthcare delivery: defined, modeled, and predictive barriers to implementation identified.

    PubMed Central

    Harrop, V. M.

    2001-01-01

    Provider organizations lack: 1. a definition of "virtual" healthcare delivery relative to the products, services, and processes offered by dot.coms, web-compact disk healthcare content providers, telemedicine, and telecommunications companies, and 2. a model for integrating real and virtual healthcare delivery. This paper defines virtual healthcare delivery as asynchronous, outsourced, and anonymous, then proposes a 2x2 Real-Virtual Healthcare Delivery model focused on real and virtual patients and real and virtual provider organizations. Using this model, provider organizations can systematically deconstruct healthcare delivery in the real world and reconstruct appropriate pieces in the virtual world. Observed barriers to virtual healthcare delivery are: resistance to telecommunication integrated delivery networks and outsourcing; confusion over virtual infrastructure requirements for telemedicine and full-service web portals, and the impact of integrated delivery networks and outsourcing on extant cultural norms and revenue generating practices. To remain competitive provider organizations must integrate real and virtual healthcare delivery. PMID:11825189

  8. Bringing Together Community Health Centers, Information Technology and Data to Support a Patient-Centered Medical Village from the OCHIN community of solutions

    PubMed Central

    DeVoe, Jennifer E.; Sears, Abigail

    2013-01-01

    Creating integrated, comprehensive care practices requires access to data and informatics expertise. Information technology (IT) resources are not readily available to individual practices. One model of shared IT resources and learning is a “patient-centered medical village.” We describe the OCHIN Community Health Information Network as an example of this model where community practices have come together collectively to form an organization which leverages shared IT expertise, resources, and data, providing members with the means to fully capitalize on new technologies that support improved care. This collaborative facilitates the identification of “problem-sheds” through surveillance of network-wide data, enables shared learning regarding best practices, and provides a “community laboratory” for practice-based research. As an example of a Community of Solution, OCHIN utilizes health IT and data-sharing innovations to enhance partnerships between public health leaders, community health center clinicians, informatics experts, and policy makers. OCHIN community partners benefit from the shared IT resource (e.g. a linked electronic health record (EHR), centralized data warehouse, informatics and improvement expertise). This patient-centered medical village provides (1) the collective mechanism to build community tailored IT solutions, (2) “neighbors” to share data and improvement strategies, and (3) infrastructure to support EHR-based innovations across communities, using experimental approaches. PMID:23657695

  9. Evolution of the Virtualized HPC Infrastructure of Novosibirsk Scientific Center

    NASA Astrophysics Data System (ADS)

    Adakin, A.; Anisenkov, A.; Belov, S.; Chubarov, D.; Kalyuzhny, V.; Kaplin, V.; Korol, A.; Kuchin, N.; Lomakin, S.; Nikultsev, V.; Skovpen, K.; Sukharev, A.; Zaytsev, A.

    2012-12-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies, and Institute of Computational Mathematics and Mathematical Geophysics (ICM&MG). Since each institute has specific requirements for the architecture of the computing farms involved in its research field, several computing facilities are currently hosted by NSC institutes, each optimized for a particular set of tasks; the largest are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM&MG), and a Grid Computing Facility of BINP. A dedicated optical network with an initial bandwidth of 10 Gb/s connecting these three facilities was built in order to make it possible to share computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on the XEN and KVM platforms. This contribution gives a thorough review of the present status and future development prospects for the NSC virtualized computing infrastructure and the experience gained while using it for running production data analysis jobs related to HEP experiments carried out at BINP, especially the KEDR detector experiment at the VEPP-4M electron-positron collider.
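
    As a rough illustration of the virtualization layer described above, the sketch below enumerates and starts KVM guests through the libvirt Python bindings. This is a minimal sketch, not NSC's actual tooling: the hypervisor URI is the libvirt default and the domain name is hypothetical.

    ```python
    # Minimal sketch: enumerating and starting KVM guests with the libvirt
    # Python bindings (assumes libvirt-python is installed and a local
    # qemu:///system hypervisor; the domain name is hypothetical).
    import libvirt

    conn = libvirt.open("qemu:///system")
    try:
        # List all defined domains and report which are running.
        for dom in conn.listAllDomains():
            state, _ = dom.state()
            running = state == libvirt.VIR_DOMAIN_RUNNING
            print(dom.name(), "running" if running else "stopped")

        # Start a previously defined (inactive) worker-node guest.
        worker = conn.lookupByName("hep-worker-01")  # hypothetical name
        if not worker.isActive():
            worker.create()  # boots the domain
    finally:
        conn.close()
    ```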

  10. Design and Evaluation of a Virtual Environment Infrastructure to Support Experiments in Social Behavior

    ERIC Educational Resources Information Center

    Hmeljak, Dimitrij

    2010-01-01

    Virtual worlds provide useful platforms for social behavioral research, but impose stringent limitations on the rules of engagement, responsiveness, and data collection, along with other resource restrictions. The major challenge from a computer science standpoint in developing group behavior applications for such environments is accommodating the…

  11. Architectural Principles and Experimentation of Distributed High Performance Virtual Clusters

    ERIC Educational Resources Information Center

    Younge, Andrew J.

    2016-01-01

    With the advent of virtualization and Infrastructure-as-a-Service (IaaS), the broader scientific computing community is considering the use of clouds for their scientific computing needs. This is due to the relative scalability, ease of use, advanced user environment customization abilities, and the many novel computing paradigms available for…

  12. Software architecture standard for simulation virtual machine, version 2.0

    NASA Technical Reports Server (NTRS)

    Sturtevant, Robert; Wessale, William

    1994-01-01

    The Simulation Virtual Machine (SVM) is an Ada architecture which eases the effort involved in real-time software maintenance and sustaining engineering. The Software Architecture Standard defines the infrastructure from which all simulation models are built. SVM was developed for and used in the Space Station Verification and Training Facility.

  13. The Virtual Observatory as Critical Scientific Cyber Infrastructure.

    NASA Astrophysics Data System (ADS)

    Fox, P.

    2006-12-01

    Virtual Observatories can provide access to vast stores of scientific data: observations and models as well as services to analyze, visualize, and assimilate multiple data sources. As these electronic resources become widely used, there is potential to improve the efficiency, interoperability, collaborative potential, and impact of a wide range of interdisciplinary scientific research. In addition, we know that as the diversity of collaborative science and the volume of accompanying data and data generators/consumers grow, so do the challenges. In order for Virtual Observatories to realize their potential and become indispensable infrastructure, social, political, and technical challenges need to be addressed concerning (at least) roles and responsibilities, data and services policies, representations and interoperability of services, and data search, access, and usability. In this presentation, we discuss several concepts and instances of the Virtual Observatory and related projects that may, or may not, be meeting the above-mentioned challenges. We also argue that science-driven needs and architecture development are critical to the development of sustainable (and thus agile) cyberinfrastructure. Finally, we present some current or emerging candidate technologies and organizational constructs that will need to be pursued.

  14. Controlling Infrastructure Costs: Right-Sizing the Mission Control Facility

    NASA Technical Reports Server (NTRS)

    Martin, Keith; Sen-Roy, Michael; Heiman, Jennifer

    2009-01-01

    Johnson Space Center's Mission Control Center is a space-vehicle- and space-program-agnostic facility. The current operational design is essentially identical to the original facility architecture that was developed and deployed in the mid-1990s. In an effort to streamline the support costs of the mission-critical facility, the Mission Operations Division (MOD) of Johnson Space Center (JSC) has sponsored an exploratory project to evaluate and inject current state-of-the-practice Information Technology (IT) tools, processes, and technology into legacy operations. The general push in the IT industry has been trending towards a data-centric computer infrastructure for the past several years. Organizations facing challenges with facility operations costs are turning to creative solutions combining hardware consolidation, virtualization, and remote access to meet and exceed performance, security, and availability requirements. The Operations Technology Facility (OTF) organization at the Johnson Space Center has been chartered to build and evaluate a parallel Mission Control infrastructure, replacing the existing thick-client distributed computing model and network architecture with a data center model utilizing virtualization to provide the MCC Infrastructure as a Service. The OTF will design a replacement architecture for the Mission Control Facility, leveraging hardware consolidation through the use of blade servers, increasing utilization rates for compute platforms through virtualization, while expanding connectivity options through the deployment of secure remote access. The architecture demonstrates the maturity of the technologies generally available in industry today and the ability to successfully abstract the tightly coupled relationship between thick-client software and legacy hardware into a hardware-agnostic "Infrastructure as a Service" capability that can scale to meet future requirements of new space programs and spacecraft. This paper discusses the benefits and difficulties that a migration to cloud-based computing philosophies has uncovered when compared to the legacy Mission Control Center architecture. The team consists of system and software engineers with extensive experience with the MCC infrastructure and the software currently used to support the International Space Station (ISS) and Space Shuttle program (SSP).

  15. World Wind: NASA's Virtual Globe

    NASA Astrophysics Data System (ADS)

    Hogan, P.

    2007-12-01

    Virtual globes have set the standard for information exchange. Once you have experienced the visually rich and highly compelling nature of data delivered via virtual globes, with their highly engaging 3D context, it is hard to go back to a flat 2D world. Just as the sawbones of not too long ago have given way to the sophisticated surgical operating theater, today's medium for information exchange is just beginning to leap from staid chalkboards and remote libraries to fingertip-navigable 3D worlds. How we harness this technology to serve a world inundated with information will determine the quality of our future. Our instincts for discovery and entertainment urge us on. There is so much we could know if the world's knowledge were presented to us in its natural context. Virtual globes are almost magical in their ability to reveal natural wonders. Anyone flying along a chain of volcanoes, a mid-ocean ridge, or a deep ocean trench, while simultaneously seeing the depth history of earthquakes in those areas, will be delighted to sense Earth's dynamic nature in a way that would otherwise take several paragraphs of "boring" text. The sophisticated concepts related to global climate change are far more comprehensible when experienced via a virtual globe. There is a large universe of public and private geospatial data sets that virtual globes can bring to light. The benefit derived from access to this data within virtual globes represents a significant return on investment for government, industry, and the general public, especially in the realm of education. Data access remains a key issue. Just as the highway infrastructure allows unimpeded travel from point A to point B, an open, standards-based infrastructure for data access allows virtual globes to exchange data in the most efficient manner possible. This data can be either free or proprietary. The Open Geospatial Consortium is providing the leadership necessary for this open, standards-based data-access infrastructure. The open-source community plays a crucial role in advancing virtual globe technology. This world community identifies, tracks, and resolves technical problems, suggests new features and source code modifications, and often provides high-resolution data sets and other types of user-generated content, all while extending the functionality of virtual globe technology. NASA World Wind is one example of open-source virtual globe technology that provides the world with the ability to build any desired functionality and make any desired data accessible.

  16. The Virtual Geophysics Laboratory (VGL): Scientific Workflows Operating Across Organizations and Across Infrastructures

    NASA Astrophysics Data System (ADS)

    Cox, S. J.; Wyborn, L. A.; Fraser, R.; Rankine, T.; Woodcock, R.; Vote, J.; Evans, B.

    2012-12-01

    The Virtual Geophysics Laboratory (VGL) is a web portal that provides geoscientists with an integrated online environment that: seamlessly accesses geophysical and geoscience data services from the AuScope national geoscience information infrastructure; loosely couples these data to a variety of geoscience software tools; and provides large-scale processing facilities via cloud computing. VGL is a collaboration between CSIRO, Geoscience Australia, National Computational Infrastructure, Monash University, Australian National University, and the University of Queensland. The VGL provides a distributed system whereby a user can enter an online virtual laboratory to seamlessly connect to OGC web services for geoscience data. The data are supplied in open-standards formats using international standards like GeoSciML. A VGL user employs a web-mapping interface to discover and filter the data sources, using spatial and attribute filters to define a subset. Once the data are selected, the user is not required to download them. VGL collates the service query information for use later in the processing workflow, where it is staged directly to the computing facilities. The combination of deferring data download and access to cloud computing enables VGL users to access their data at higher resolutions and to undertake larger-scale inversions and more complex models and simulations than their own local computing facilities might allow. Inside the Virtual Geophysics Laboratory, the user has access to a library of existing models, complete with exemplar workflows for specific scientific problems based on those models. For example, the user can load a geological model published by Geoscience Australia, apply a basic deformation workflow provided by a CSIRO scientist, and have it run in a scientific code from Monash. Finally, the user can publish these results to share with a colleague or cite in a paper. This opens new opportunities for access and collaboration, as all the resources (models, code, data, processing) are shared in the one virtual laboratory. VGL provides end users with access to an intuitive, user-centered interface that leverages cloud storage and cloud and cluster processing from both the research communities and commercial suppliers (e.g. Amazon). As the underlying data and information services are agnostic of the scientific domain, they can support many other data types. This fundamental characteristic results in a highly reusable virtual laboratory infrastructure that could also be used, for example, for natural hazards, satellite processing, soil geochemistry, climate modeling, and agricultural crop modeling.
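
    To make the data-access pattern concrete, here is a hedged sketch of querying an OGC Web Feature Service with the OWSLib Python package, applying spatial filtering server-side in the way a VGL-style portal might before staging the query to cloud processing. The endpoint URL and feature type are placeholders, not actual AuScope service names.

    ```python
    # Illustrative sketch: subsetting features from an OGC Web Feature
    # Service with OWSLib. The endpoint and feature type are placeholders.
    from owslib.wfs import WebFeatureService

    wfs = WebFeatureService(url="https://example.org/geoserver/wfs",  # placeholder
                            version="1.1.0")
    # Spatial filtering happens server-side; only the query definition,
    # not the bulk data, needs to be carried forward in the workflow.
    response = wfs.getfeature(typename=["gsml:GeologicUnit"],
                              bbox=(130.0, -30.0, 140.0, -20.0),
                              maxfeatures=50)
    data = response.read()
    with open("subset.gml", "wb") as f:
        f.write(data if isinstance(data, bytes) else data.encode("utf-8"))
    ```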

  17. Informatic infrastructure for Climatological and Oceanographic data based on THREDDS technology in a Grid environment

    NASA Astrophysics Data System (ADS)

    Tronconi, C.; Forneris, V.; Santoleri, R.

    2009-04-01

    CNR-ISAC-GOS is responsible for the Mediterranean Sea satellite operational system in the framework of the MOON Partnership. This observing system acquires satellite data and produces Near Real Time, Delayed Time, and Re-analysis Ocean Colour and Sea Surface Temperature products covering the Mediterranean and Black Seas and regional basins. In the framework of several projects (MERSEA, PRIMI, Adricosm Star, SeaDataNet, MyOcean, ECOOP), GOS is producing climatological/satellite datasets based on optimal interpolation and a specific regional algorithm for chlorophyll, updated in Near Real Time and in Delayed mode. GOS has built: (1) an informatic infrastructure for data repository and delivery based on THREDDS technology, in which the datasets are generated in NetCDF format, compliant with both the CF convention and the international satellite-oceanographic specification prescribed by GHRSST (for SST), and all data produced are made available to users through a THREDDS server catalog; (2) a LAS, installed in order to exploit the potential of NetCDF data and OPeNDAP URLs, which provides flexible access to geo-referenced scientific data; and (3) a Grid environment based on Globus Technologies (GT4) connecting more than one institute; in particular, exploiting the CNR and ESA clusters makes it possible to reprocess 12 years of chlorophyll data in less than one month (estimated processing time on a single-core PC: 9 months). In the poster we give an overview of: the features of the THREDDS catalogs, pointing out the powerful characteristics of this new middleware that has replaced the "old" OPeNDAP server; the importance of adopting a common format (such as NetCDF) for data exchange; the tools (e.g. LAS) connected with THREDDS and the NetCDF format; and the Grid infrastructure at ISAC. We will also present specific basin-scale High Resolution products and Ultra High Resolution regional/coastal products available in these catalogs.
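
    A brief sketch of what THREDDS/OPeNDAP access looks like from the user side: the netCDF4 Python library opens a remote dataset by URL and transfers only the requested slice over the network. The catalog URL and variable name below are hypothetical, loosely following the GHRSST convention for SST fields.

    ```python
    # Minimal sketch: reading a subset of a THREDDS-served dataset over
    # OPeNDAP with netCDF4. URL and variable name are placeholders.
    from netCDF4 import Dataset

    url = "http://thredds.example.org/thredds/dodsC/sst/MED_SST_L4.nc"  # placeholder
    ds = Dataset(url)  # opens the remote dataset via OPeNDAP
    sst = ds.variables["analysed_sst"]  # hypothetical variable name
    # Only this slice (the first time step) is actually transferred.
    field = sst[0, :, :]
    print(field.shape, getattr(sst, "units", "unknown units"))
    ds.close()
    ```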

  18. Universal Fragment Descriptors for Predicting Electronic and Mechanical Properties of Inorganic Crystals

    NASA Astrophysics Data System (ADS)

    Oses, Corey; Isayev, Olexandr; Toher, Cormac; Curtarolo, Stefano; Tropsha, Alexander

    Historically, materials discovery has been driven by a laborious trial-and-error process. The growth of materials databases and emerging informatics approaches finally offer the opportunity to transform this practice into data- and knowledge-driven rational design, accelerating discovery of novel materials exhibiting desired properties. By using data from the AFLOW repository for high-throughput ab-initio calculations, we have generated Quantitative Materials Structure-Property Relationship (QMSPR) models to predict critical materials properties, including the metal/insulator classification, band gap energy, and bulk modulus. The prediction accuracy obtained with these QMSPR models approaches that of the training data for virtually any stoichiometric inorganic crystalline material. We attribute the success and universality of these models to the construction of new materials descriptors, referred to as the universal Property-Labeled Material Fragments (PLMF). This representation affords straightforward model interpretation in terms of simple heuristic design rules that could guide rational materials design. This proof-of-concept study demonstrates the power of materials informatics to dramatically accelerate the search for new materials.

  19. Eleven quick tips for architecting biomedical informatics workflows with cloud computing.

    PubMed

    Cole, Brian S; Moore, Jason H

    2018-03-01

    Cloud computing has revolutionized the development and operations of hardware and software across diverse technological arenas, yet academic biomedical research has lagged behind despite the numerous and weighty advantages that cloud computing offers. Biomedical researchers who embrace cloud computing can reap rewards in cost reduction, decreased development and maintenance workload, increased reproducibility, ease of sharing data and software, enhanced security, horizontal and vertical scalability, high availability, a thriving technology partner ecosystem, and much more. Despite these advantages that cloud-based workflows offer, the majority of scientific software developed in academia does not utilize cloud computing and must be migrated to the cloud by the user. In this article, we present 11 quick tips for architecting biomedical informatics workflows on compute clouds, distilling knowledge gained from experience developing, operating, maintaining, and distributing software and virtualized appliances on the world's largest cloud. Researchers who follow these tips stand to benefit immediately by migrating their workflows to cloud computing and embracing the paradigm of abstraction.
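
    As a small illustration of one of these tips in practice, the sketch below shares data and results through object storage rather than a local filesystem, which keeps a workflow machine-independent and easy to hand off. The bucket and key names are placeholders, and AWS credentials are assumed to be configured in the environment.

    ```python
    # Hedged sketch: sharing workflow artifacts via S3 object storage
    # with boto3. Bucket and key names are placeholders.
    import boto3

    s3 = boto3.client("s3")
    # Upload an analysis artifact so collaborators (or later pipeline
    # stages) can fetch it without access to this machine.
    s3.upload_file("results/variants.vcf.gz",
                   "example-informatics-bucket",        # placeholder bucket
                   "runs/2018-03-01/variants.vcf.gz")
    # Download is symmetric, so any machine with credentials can resume.
    s3.download_file("example-informatics-bucket",
                     "runs/2018-03-01/variants.vcf.gz",
                     "/tmp/variants.vcf.gz")
    ```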

  20. Eleven quick tips for architecting biomedical informatics workflows with cloud computing

    PubMed Central

    Moore, Jason H.

    2018-01-01

    Cloud computing has revolutionized the development and operations of hardware and software across diverse technological arenas, yet academic biomedical research has lagged behind despite the numerous and weighty advantages that cloud computing offers. Biomedical researchers who embrace cloud computing can reap rewards in cost reduction, decreased development and maintenance workload, increased reproducibility, ease of sharing data and software, enhanced security, horizontal and vertical scalability, high availability, a thriving technology partner ecosystem, and much more. Despite these advantages that cloud-based workflows offer, the majority of scientific software developed in academia does not utilize cloud computing and must be migrated to the cloud by the user. In this article, we present 11 quick tips for architecting biomedical informatics workflows on compute clouds, distilling knowledge gained from experience developing, operating, maintaining, and distributing software and virtualized appliances on the world’s largest cloud. Researchers who follow these tips stand to benefit immediately by migrating their workflows to cloud computing and embracing the paradigm of abstraction. PMID:29596416

  1. Virtual Facility at Fermilab: Infrastructure and Services Expand to Public Clouds

    DOE PAGES

    Timm, Steve; Garzoglio, Gabriele; Cooper, Glenn; ...

    2016-02-18

    In preparation for its new Virtual Facility Project, Fermilab has launched a program of work to determine the requirements for running a computation facility on-site, in public clouds, or a combination of both. This program builds on the work we have done to successfully run experimental workflows of 1000-VM scale both on an on-site private cloud and on Amazon AWS. To do this at scale we deployed dynamically launched and discovered caching services on the cloud. We are now testing the deployment of more complicated services on Amazon AWS using native load balancing and auto scaling features they provide. The Virtual Facility Project will design and develop a facility including infrastructure and services that can live on the site of Fermilab, off-site, or a combination of both. We expect to need this capacity to meet the peak computing requirements in the future. The Virtual Facility is intended to provision resources on the public cloud on behalf of the facility as a whole instead of having each experiment or Virtual Organization do it on their own. We will describe the policy aspects of a distributed Virtual Facility, the requirements, and plans to make a detailed comparison of the relative cost of the public and private clouds. Furthermore, this talk will present the details of the technical mechanisms we have developed to date, and the plans currently taking shape for a Virtual Facility at Fermilab.
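
    A hedged sketch of the AWS-native auto scaling the abstract mentions: an Auto Scaling group that grows a pool of worker VMs from a launch template. All names and sizes are illustrative, not Fermilab's actual configuration.

    ```python
    # Illustrative sketch: creating an AWS Auto Scaling group with boto3.
    # Group, template, and subnet names are placeholders.
    import boto3

    autoscaling = boto3.client("autoscaling")
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="vf-worker-pool",              # hypothetical
        LaunchTemplate={"LaunchTemplateName": "vf-worker",  # hypothetical
                        "Version": "$Latest"},
        MinSize=0,             # scale to zero when idle
        MaxSize=1000,          # cap at the 1000-VM scale cited above
        DesiredCapacity=100,
        VPCZoneIdentifier="subnet-0123456789abcdef0",       # placeholder subnet
    )
    ```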

  2. Virtual Facility at Fermilab: Infrastructure and Services Expand to Public Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timm, Steve; Garzoglio, Gabriele; Cooper, Glenn

    In preparation for its new Virtual Facility Project, Fermilab has launched a program of work to determine the requirements for running a computation facility on-site, in public clouds, or a combination of both. This program builds on the work we have done to successfully run experimental workflows of 1000-VM scale both on an on-site private cloud and on Amazon AWS. To do this at scale we deployed dynamically launched and discovered caching services on the cloud. We are now testing the deployment of more complicated services on Amazon AWS using native load balancing and auto scaling features they provide. The Virtual Facility Project will design and develop a facility including infrastructure and services that can live on the site of Fermilab, off-site, or a combination of both. We expect to need this capacity to meet the peak computing requirements in the future. The Virtual Facility is intended to provision resources on the public cloud on behalf of the facility as a whole instead of having each experiment or Virtual Organization do it on their own. We will describe the policy aspects of a distributed Virtual Facility, the requirements, and plans to make a detailed comparison of the relative cost of the public and private clouds. Furthermore, this talk will present the details of the technical mechanisms we have developed to date, and the plans currently taking shape for a Virtual Facility at Fermilab.

  3. caGrid 1.0: A Grid Enterprise Architecture for Cancer Research

    PubMed Central

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2007-01-01

    caGrid is the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. The current release, caGrid version 1.0, is developed as the production Grid software infrastructure of caBIG™. Based on feedback from adopters of the previous version (caGrid 0.5), it has been significantly enhanced with new features and improvements to existing components. This paper presents an overview of caGrid 1.0, its main components, and enhancements over caGrid 0.5. PMID:18693901

  4. Infrastructures for Distributed Computing: the case of BESIII

    NASA Astrophysics Data System (ADS)

    Pellegrino, J.

    2018-05-01

    BESIII is an electron-positron collision experiment hosted at BEPCII in Beijing and aimed at investigating tau-charm physics. BESIII has now been running for several years and has gathered more than 1 PB of raw data. In order to analyze these data and perform massive Monte Carlo simulations, a large amount of computing and storage resources is needed. The distributed computing system is based upon DIRAC and has been in production since 2012. It integrates computing and storage resources from different institutes and a variety of resource types, such as cluster, grid, cloud, or volunteer computing. About 15 sites from the BESIII Collaboration from all over the world have joined this distributed computing infrastructure, giving a significant contribution to the IHEP computing facility. Nowadays cloud computing is playing a key role in the HEP computing field, due to its scalability and elasticity. Cloud infrastructures take advantage of several tools, such as VMDirac, to manage virtual machines through cloud managers according to the job requirements. With the virtually unlimited resources from commercial clouds, the computing capacity can scale accordingly in order to deal with any burst demands. General computing models are addressed herewith, with particular focus on the BESIII infrastructure. Moreover, new computing tools and upcoming infrastructures are also addressed.
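
    For orientation, here is a minimal sketch of submitting work through the DIRAC client API, the kind of interface a DIRAC-based system such as BESIII's is built on. It assumes a configured DIRAC client installation and a valid proxy; the executable and site name are illustrative, not BESIII's actual configuration.

    ```python
    # Hedged sketch: submitting a job via the DIRAC client API.
    # Assumes a configured DIRAC installation and valid grid proxy.
    from DIRAC.Core.Base import Script
    Script.parseCommandLine()  # initializes DIRAC's configuration client

    from DIRAC.Interfaces.API.Dirac import Dirac
    from DIRAC.Interfaces.API.Job import Job

    job = Job()
    job.setName("besiii-mc-sketch")
    job.setExecutable("simulate.sh", arguments="run001")  # hypothetical script
    job.setDestination("GRID.IHEP.cn")                    # hypothetical site name
    result = Dirac().submitJob(job)
    print(result)  # S_OK/S_ERROR structure with the job ID on success
    ```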

  5. Storing and Using Health Data in a Virtual Private Cloud

    PubMed Central

    Regola, Nathan

    2013-01-01

    Electronic health records are being adopted at a rapid rate due to increased funding from the US federal government. Health data provide the opportunity to identify possible improvements in health care delivery by applying data mining and statistical methods to the data, and will also enable a wide variety of new applications that will be meaningful to patients and medical professionals. Researchers are often granted access to health care data to assist in the data mining process, but HIPAA regulations mandate comprehensive safeguards to protect the data. Often universities (and presumably other research organizations) have an enterprise information technology infrastructure and a research infrastructure. Unfortunately, both of these infrastructures are generally not appropriate for sensitive research data such as that governed by HIPAA, since such data require special accommodations on the part of the enterprise information technology (or increased security on the part of the research computing environment). Cloud computing, which is a concept that allows organizations to build complex infrastructures on leased resources, is rapidly evolving to the point that it is possible to build sophisticated network architectures with advanced security capabilities. We present a prototype infrastructure in Amazon’s Virtual Private Cloud to allow researchers and practitioners to utilize the data in a HIPAA-compliant environment. PMID:23485880
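
    A sketch of the paper's core idea in code: carving out an isolated network for sensitive data with Amazon VPC via boto3. The CIDR blocks and tags are placeholders, and a real HIPAA deployment needs far more (encryption, audit logging, access controls) than this skeleton shows.

    ```python
    # Illustrative sketch: creating an isolated VPC and private subnet
    # with boto3. CIDR blocks and names are placeholders only.
    import boto3

    ec2 = boto3.client("ec2")
    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
    vpc_id = vpc["Vpc"]["VpcId"]

    # A private subnet with no route to an internet gateway keeps
    # research instances unreachable from the public internet.
    ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
    ec2.create_tags(Resources=[vpc_id],
                    Tags=[{"Key": "Name", "Value": "hipaa-research-vpc"}])
    ```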

  6. Pilots 2.0: DIRAC pilots for all the skies

    NASA Astrophysics Data System (ADS)

    Stagni, F.; Tsaregorodtsev, A.; McNab, A.; Luzzi, C.

    2015-12-01

    In the last few years, new types of computing infrastructures, such as IAAS (Infrastructure as a Service) and IAAC (Infrastructure as a Client), have gained popularity. New resources may come as part of pledged resources, while others are opportunistic. Most of these new infrastructures are based on virtualization techniques. Meanwhile, some concepts, such as distributed queues, have lost appeal, while still supporting a vast amount of resources. Virtual Organizations are therefore facing heterogeneity of the available resources, and the use of interware software like DIRAC to hide the diversity of underlying resources has become essential. The DIRAC WMS is based on the concept of pilot jobs, introduced back in 2004. A pilot is what makes it possible to run jobs on a worker node. Within DIRAC, we developed a new generation of pilot jobs, which we dubbed Pilots 2.0. Pilots 2.0 are not tied to a specific infrastructure; rather, they are generic, fully configurable, and extendible pilots. A Pilot 2.0 can be sent as a script to be run, or it can be fetched from a remote location. A Pilot 2.0 can run on every computing resource, e.g. on CREAM Computing Elements, on DIRAC Computing Elements, on Virtual Machines as part of the contextualization script, or on IAAC resources, provided that these machines are properly configured, hiding all the details of the Worker Node (WN) infrastructure. Pilots 2.0 can be generated server- and client-side. Pilots 2.0 are the “pilots to fly in all the skies”, aiming at easy use of computing power, in whatever form it is presented. Another aim is the unification and simplification of the monitoring infrastructure for all kinds of computing resources, by using pilots as a network of distributed sensors coordinated by a central resource monitoring system. Pilots 2.0 have been developed using the command pattern. VOs using DIRAC can tune Pilots 2.0 as they need, and extend or replace each and every pilot command in an easy way. In this paper we describe how Pilots 2.0 work with distributed and heterogeneous resources, providing the necessary abstraction to deal with different kinds of computing resources.
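
    Since the abstract notes that Pilots 2.0 are built with the command pattern so that VOs can replace or extend individual pilot commands, here is a generic sketch of that pattern. The class and command names are illustrative, not DIRAC's actual ones.

    ```python
    # Generic command-pattern sketch: a pilot as a configurable sequence
    # of interchangeable command objects. Names are illustrative.
    class PilotCommand:
        """One configurable step of pilot execution."""
        def execute(self):
            raise NotImplementedError

    class CheckEnvironment(PilotCommand):
        def execute(self):
            print("verifying worker node environment")

    class InstallClient(PilotCommand):
        def execute(self):
            print("installing/configuring the client software")

    class LaunchJobAgent(PilotCommand):
        def execute(self):
            print("starting job agent to pull the real workload")

    # A VO tunes the pilot by choosing, reordering, or replacing commands.
    pilot_sequence = [CheckEnvironment(), InstallClient(), LaunchJobAgent()]
    for command in pilot_sequence:
        command.execute()
    ```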

  7. VESPA: A community-driven Virtual Observatory in Planetary Science

    NASA Astrophysics Data System (ADS)

    Erard, S.; Cecconi, B.; Le Sidaner, P.; Rossi, A. P.; Capria, M. T.; Schmitt, B.; Génot, V.; André, N.; Vandaele, A. C.; Scherf, M.; Hueso, R.; Määttänen, A.; Thuillot, W.; Carry, B.; Achilleos, N.; Marmo, C.; Santolik, O.; Benson, K.; Fernique, P.; Beigbeder, L.; Millour, E.; Rousseau, B.; Andrieu, F.; Chauvin, C.; Minin, M.; Ivanoski, S.; Longobardo, A.; Bollard, P.; Albert, D.; Gangloff, M.; Jourdane, N.; Bouchemit, M.; Glorian, J.-M.; Trompet, L.; Al-Ubaidi, T.; Juaristi, J.; Desmars, J.; Guio, P.; Delaa, O.; Lagain, A.; Soucek, J.; Pisa, D.

    2018-01-01

    The VESPA data access system focuses on applying Virtual Observatory (VO) standards and tools to Planetary Science. Building on a previous EC-funded Europlanet program, it has reached maturity during the first year of a new Europlanet 2020 program (started in 2015 for 4 years). The infrastructure has been upgraded to handle many fields of Solar System studies, with a focus both on users and data providers. This paper describes the broad lines of the current VESPA infrastructure as seen by a potential user, and provides examples of real use cases in several thematic areas. These use cases are also intended to identify hints for future developments and adaptations of VO tools to Planetary Science.

  8. Virtual Civilian Aeromedical Evacuation Sustainment Training Project (V-CAEST)

    DTIC Science & Technology

    2015-08-01

    …evacuation liaison team (AELT), and the mobile aeromedical staging facility (MASF). The content covered in the V-CAEST environment therefore covered the… The environment was set up in a large gymnasium building including a mock military plane and a Mobile Aeromedical Staging Facility (MASF) located just…

  9. Virtual Labs (Science Gateways) as platforms for Free and Open Source Science

    NASA Astrophysics Data System (ADS)

    Lescinsky, David; Car, Nicholas; Fraser, Ryan; Friedrich, Carsten; Kemp, Carina; Squire, Geoffrey

    2016-04-01

    The Free and Open Source Software (FOSS) movement promotes community engagement in software development, as well as provides access to a range of sophisticated technologies that would be prohibitively expensive if obtained commercially. However, as geoinformatics and eResearch tools and services become more dispersed, it becomes more complicated to identify and interface between the many required components. Virtual Laboratories (VLs, also known as Science Gateways) simplify the management and coordination of these components by providing a platform linking many, if not all, of the steps in particular scientific processes. These enable scientists to focus on their science, rather than the underlying supporting technologies. We describe a modular, open source, VL infrastructure that can be reconfigured to create VLs for a wide range of disciplines. Development of this infrastructure has been led by CSIRO in collaboration with Geoscience Australia and the National Computational Infrastructure (NCI) with support from the National eResearch Collaboration Tools and Resources (NeCTAR) and the Australian National Data Service (ANDS). Initially, the infrastructure was developed to support the Virtual Geophysical Laboratory (VGL), and has subsequently been repurposed to create the Virtual Hazards Impact and Risk Laboratory (VHIRL) and the reconfigured Australian National Virtual Geophysics Laboratory (ANVGL). During each step of development, new capabilities and services have been added and/or enhanced. We plan on continuing to follow this model using a shared, community code base. The VL platform facilitates transparent and reproducible science by providing access to both the data and methodologies used during scientific investigations. This is further enhanced by the ability to set up and run investigations using computational resources accessed through the VL. Data is accessed using registries pointing to catalogues within public data repositories (notably including the NCI National Environmental Research Data Interoperability Platform), or by uploading data directly from user supplied addresses or files. Similarly, scientific software is accessed through registries pointing to software repositories (e.g., GitHub). Runs are configured by using or modifying default templates designed by subject matter experts. After the appropriate computational resources are identified by the user, Virtual Machines (VMs) are spun up and jobs are submitted to service providers (currently the NeCTAR public cloud or Amazon Web Services). Following completion of the jobs the results can be reviewed and downloaded if desired. By providing a unified platform for science, the VL infrastructure enables sophisticated provenance capture and management. The source of input data (including both collection and queries), user information, software information (version and configuration details) and output information are all captured and managed as a VL resource which can be linked to output data sets. This provenance resource provides a mechanism for publication and citation for Free and Open Source Science.
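
    A hedged sketch of the provenance capture described above: bundling data sources, user, software versions, and outputs into one record that can be linked to result datasets. The field names and URLs are illustrative, not the VL platform's actual schema.

    ```python
    # Illustrative sketch: serializing a minimal provenance record for a
    # virtual-laboratory run. Field names and URLs are placeholders.
    import json
    import getpass
    import datetime

    provenance = {
        "captured_at": datetime.datetime.utcnow().isoformat() + "Z",
        "user": getpass.getuser(),
        "inputs": [{"source": "https://example.org/data/catalog",   # placeholder
                    "query": "bbox=130,-30,140,-20"}],
        "software": {"repository": "https://github.com/example/model",  # placeholder
                     "version": "v1.2.0",
                     "configuration": {"cells": 512, "solver": "cg"}},
        "outputs": ["s3://example-bucket/runs/42/result.nc"],       # placeholder
    }
    with open("provenance.json", "w") as f:
        json.dump(provenance, f, indent=2)
    ```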

  10. Raising Virtual Laboratories in Australia onto global platforms

    NASA Astrophysics Data System (ADS)

    Wyborn, L. A.; Barker, M.; Fraser, R.; Evans, B. J. K.; Moloney, G.; Proctor, R.; Moise, A. F.; Hamish, H.

    2016-12-01

    Across the globe, Virtual Laboratories (VLs), Science Gateways (SGs), and Virtual Research Environments (VREs) are being developed that enable users who are not co-located to actively work together at various scales to share data, models, tools, software, workflows, best practices, etc. Outcomes range from enabling `long tail' researchers to more easily access specific data collections, to facilitating complex workflows on powerful supercomputers. In Australia, government funding has facilitated the development of a range of VLs through the National eResearch Collaborative Tools and Resources (NeCTAR) program. The VLs provide highly collaborative, research-domain-oriented, integrated software infrastructures that meet user community needs. Twelve VLs have been funded since 2012, including the Virtual Geophysics Laboratory (VGL); Virtual Hazards, Impact and Risk Laboratory (VHIRL); Climate and Weather Science Laboratory (CWSLab); Marine Virtual Laboratory (MarVL); and Biodiversity and Climate Change Virtual Laboratory (BCCVL). These VLs share similar technical challenges, with common issues emerging around integrating tools and applications and accessing data collections via both cloud-based environments and other distributed resources. While each VL began with a focus on a specific research domain, communities of practice have now formed across the VLs around common issues, facilitating identification of best-practice case studies and new standards. As a result, tools are now being shared, with the VLs accessing data via data services using international standards such as ISO, OGC, and W3C. The sharing of these approaches is starting to facilitate re-usability of infrastructure and is a step towards supporting interdisciplinary research. Whilst the focus of the VLs is Australia-centric, by using standards these environments can be extended to analysis of other international datasets. Many VL datasets are subsets of global datasets, so extension to global coverage is a small (and often requested) step. Similarly, most of the tools, software, and other technologies could be shared across infrastructures globally. Therefore, it is now time to better connect the Australian VLs with similar initiatives elsewhere to create international platforms that can contribute to global research challenges.

  11. Virtual Astronomy: The Legacy of the Virtual Astronomical Observatory

    NASA Astrophysics Data System (ADS)

    Hanisch, Robert J.; Berriman, G. B.; Lazio, J.; Szalay, A. S.; Fabbiano, G.; Plante, R. L.; McGlynn, T. A.; Evans, J.; Emery Bunn, S.; Claro, M.; VAO Project Team

    2014-01-01

    Over the past ten years, the Virtual Astronomical Observatory (VAO, http://usvao.org) and its predecessor, the National Virtual Observatory (NVO), have developed and operated a software infrastructure consisting of standards and protocols for data and science software applications. The Virtual Observatory (VO) makes it possible to develop robust software for the discovery, access, and analysis of astronomical data. Every major publicly funded research organization in the US and worldwide has deployed at least some components of the VO infrastructure; tens of thousands of VO-enabled queries for data are invoked daily against catalog, image, and spectral data collections; and groups within the community have developed tools and applications building upon the VO infrastructure. Further, NVO and VAO have helped ensure access to data internationally by co-founding the International Virtual Observatory Alliance (IVOA, http://ivoa.net). The products of the VAO are being archived in a publicly accessible repository. Several science tools developed by the VAO will continue to be supported by the organizations that developed them: the Iris spectral energy distribution package (SAO), the Data Discovery Tool (STScI/MAST, HEASARC), and the scalable cross-comparison service (IPAC). The final year of VAO is focused on development of the data access protocol for data cubes, creation of Python language bindings to VO services, and deployment of a cloud-like data storage service that links to VO data discovery tools (SciDrive). We encourage the community to make use of these tools and services, to extend and improve them, and to carry on with the vision for virtual astronomy: astronomical research enabled by easy access to distributed data and computational resources. Funding for VAO development and operations has been provided jointly by NSF and NASA since May 2010. NSF funding will end in September 2014, though with the possibility of competitive solicitations for VO-based tool development. NASA intends to maintain core VO services such as the resource registry (the index of VO-accessible data collections), monitoring services, and a website as part of the remit of HEASARC, IPAC (IRSA, NED), and MAST.
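
    As a small illustration of the "Python language bindings to VO services" mentioned above, the sketch below runs a simple cone search through the community PyVO package. The service URL is a placeholder, and the available result columns depend on the catalog actually queried.

    ```python
    # Hedged sketch: a VO cone search with PyVO. The service URL is a
    # placeholder; column names depend on the catalog queried.
    from astropy.coordinates import SkyCoord
    import pyvo

    service = pyvo.dal.SCSService("https://example.org/scs")  # placeholder URL
    pos = SkyCoord(ra=180.0, dec=2.0, unit="deg")
    results = service.search(pos=pos, radius=0.1)  # radius in degrees
    for row in results:
        print(row["ra"], row["dec"])  # columns vary by catalog
    ```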

  12. 1001 Ways to run AutoDock Vina for virtual screening

    NASA Astrophysics Data System (ADS)

    Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D.

    2016-03-01

    Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.
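
    To illustrate point (1), the sketch below adds a process-level layer of parallelism over Vina's internal threading, docking one ligand per worker with the random seed fixed for reproducibility. The file names are placeholders; the flags shown (--receptor, --ligand, --out, --seed, --exhaustiveness, --cpu) are standard AutoDock Vina options.

    ```python
    # Illustrative sketch: process-level parallel virtual screening with
    # AutoDock Vina, one ligand per worker process. Paths are placeholders.
    import glob
    import subprocess
    from multiprocessing import Pool

    def dock(ligand):
        out = ligand.replace(".pdbqt", "_out.pdbqt")
        subprocess.run(["vina",
                        "--receptor", "receptor.pdbqt",  # placeholder file
                        "--ligand", ligand,
                        "--out", out,
                        "--seed", "42",             # capture/fix the seed
                        "--exhaustiveness", "8",
                        "--cpu", "1"],              # one core per process
                       check=True)
        return out

    if __name__ == "__main__":
        ligands = sorted(glob.glob("library/*.pdbqt"))
        with Pool(processes=8) as pool:  # one Vina run per core
            for result in pool.imap_unordered(dock, ligands):
                print("finished", result)
    ```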

  13. 1001 Ways to run AutoDock Vina for virtual screening.

    PubMed

    Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D

    2016-03-01

    Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.

  14. Dynamic Extension of a Virtualized Cluster by using Cloud Resources

    NASA Astrophysics Data System (ADS)

    Oberst, Oliver; Hauth, Thomas; Kernert, David; Riedel, Stephan; Quast, Günter

    2012-12-01

    The specific requirements concerning the software environment within the HEP community constrain the choice of resource providers for the outsourcing of computing infrastructure. The use of virtualization in HPC clusters and in the context of cloud resources is therefore a subject of recent developments in scientific computing. The dynamic virtualization of worker nodes in common batch systems provided by ViBatch serves each user with a dynamically virtualized subset of worker nodes on a local cluster. Now it can be transparently extended by the use of common open source cloud interfaces like OpenNebula or Eucalyptus, launching a subset of the virtual worker nodes within the cloud. This paper demonstrates how a dynamically virtualized computing cluster is combined with cloud resources by attaching remotely started virtual worker nodes to the local batch system.

  15. The World Wide Web and Higher Education: The Promise of Virtual Universities and Online Libraries.

    ERIC Educational Resources Information Center

    Barnard, John

    1997-01-01

    While many universities and colleges are emphasizing distance education as a way to reach working adults and control costs associated with maintaining campus infrastructures, the World Wide Web is beginning to provide a medium for offering courses to students anywhere in the world. Discusses virtual universities which combine the Web with other…

  16. The War Next Time: Countering Rogue States and Terrorists Armed with Chemical and Biological Weapons. Second Edition

    DTIC Science & Technology

    2004-04-01

    …Washington in 1814. … Virtually all of our infrastructure has been put together with this spirit of transparency and ease of access. About… containers that cross U.S. borders every day, and we decide that U.S. customs has to start inspecting virtually all of the containers at ports, instead of the… flexibility by giving the United States a virtually unlimited range of response options. While ambiguity gives flexibility to policymakers, it also…

  17. The State of Human Anatomy Teaching in the Medical Schools of Gulf Cooperation Council Countries: Present and future perspectives.

    PubMed

    Habbal, Omar

    2009-04-01

    Available literature on medical education charts an emerging trend in the field of anatomy. In the past decade, assisted by innovations in informatics and the paradigm shift in medical education, the hands-on experience of cadaver dissection has progressively become a relic of the past. Within the context of the situation in Gulf Cooperation Council countries, this paper compares the traditional teaching approach with the modern one that tends to emphasise technical gadgetry, virtual reality and plastic models rather than hands-on-experience to impart knowledge and skill. However, cadaver-based learning is an important building block for the future physician and surgeon since clinical astuteness is likely to rely on skills gained from hands-on experience rather than the tendency to learning through virtual reality found in modern curricula.

  18. The StratusLab cloud distribution: Use-cases and support for scientific applications

    NASA Astrophysics Data System (ADS)

    Floros, E.

    2012-04-01

    The StratusLab project is integrating an open cloud software distribution that enables organizations to set up and provide their own private or public IaaS (Infrastructure as a Service) computing clouds. The StratusLab distribution capitalizes on popular infrastructure virtualization solutions like KVM, the OpenNebula virtual machine manager, the Claudia service manager, and the SlipStream deployment platform, which are further enhanced and expanded with additional components developed within the project. The StratusLab distribution covers the core aspects of a cloud IaaS architecture, namely computing (life-cycle management of virtual machines), storage, appliance management, and networking. The resulting software stack provides a packaged turn-key solution for deploying cloud computing services. The cloud computing infrastructures deployed using StratusLab can support a wide range of scientific and business use cases. Grid computing has been the primary use case pursued by the project, and for this reason the initial priority has been support for the deployment and operation of fully virtualized production-level grid sites, a goal that has already been achieved by operating such a site as part of EGI's (European Grid Initiative) pan-European grid infrastructure. In this area the project is currently working to provide non-trivial capabilities like elastic and autonomic management of grid site resources. Although grid computing has been the motivating paradigm, StratusLab's cloud distribution can support a wider range of use cases. Towards this direction, we have developed and currently provide support for setting up general-purpose computing solutions like Hadoop, MPI, and Torque clusters. As far as scientific applications are concerned, the project is collaborating closely with the bioinformatics community in order to prepare VM appliances and deploy optimized services for bioinformatics applications. In a similar manner, additional scientific disciplines like Earth science can take advantage of StratusLab cloud solutions. Interested users are welcome to join StratusLab's user community by getting access to the reference cloud services deployed by the project and offered to the public.

  19. Communications satellites in the national and global health care information infrastructure: their role, impact, and issues

    NASA Technical Reports Server (NTRS)

    Zuzek, J. E.; Bhasin, K. B.

    1996-01-01

    Health care services delivered from a distance, known collectively as telemedicine, are being increasingly demonstrated on various transmission media. Telemedicine activities have included diagnosis by a doctor at a remote location, emergency and disaster medical assistance, medical education, and medical informatics. The ability of communications satellites to offer communication channels and bandwidth on demand, connectivity to mobile, remote, and underserved regions, and global access will afford them a critical role in telemedicine applications within the National and Global Information Infrastructure (NII/GII). The importance that communications satellites will have in telemedicine applications within the NII/GII, the differences in requirements for the NII vs. the GII, the major issues such as interoperability, confidentiality, quality, availability, and costs, and preliminary conclusions for future usability based on the review of several recent trials at national and global levels are presented.

  20. CREST biorepository for translational studies on malignant mesothelioma, lung cancer and other respiratory tract diseases: Informatics infrastructure and standardized annotation.

    PubMed

    Ugolini, Donatella; Neri, Monica; Bennati, Luca; Canessa, Pier Aldo; Casanova, Georgia; Lando, Cecilia; Leoncini, Giacomo; Marroni, Paola; Parodi, Barbara; Simonassi, Claudio; Bonassi, Stefano

    2012-03-01

    Advances in molecular epidemiology and translational research have led to the need for biospecimen collection. The Cancer of the Respiratory Tract (CREST) biorepository is concerned with pleural malignant mesothelioma (MM) and lung cancer (LC). The biorepository staff has collected demographic and epidemiological data directly from consenting subjects using a structured questionnaire, in agreement with The Public Population Project in Genomics (P(3)G). Clinical and follow-up data were collected, and sample data were also recorded. The architecture is based on a database designed with Microsoft Access. Data standardization was carried out to conform with established conventions or procedures. As of January 31, 2011, the overall number of recruited subjects was 1,857 (454 LC, 245 MM, 130 other cancers, and 1,028 controls). Due to its infrastructure, CREST has been able to join international projects, sharing samples and/or data with other research groups in the field. The data management system allows CREST to be involved, through a minimum data set, in the national project for the construction of the Italian network of Oncologic BioBanks (RIBBO), and in the infrastructure of a pan-European biobank network (BBMRI). The CREST biorepository is a valuable tool for translational studies on respiratory tract diseases because of its simple and efficient infrastructure.
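
    A hedged sketch of the kind of standardized annotation such a biorepository database enforces, shown here as a portable SQLite schema rather than the Microsoft Access implementation described above. The field names and coded values are illustrative, loosely following the questionnaire-style data the abstract mentions.

    ```python
    # Illustrative sketch: a minimal standardized biospecimen schema in
    # SQLite. Field names and coded values are placeholders.
    import sqlite3

    conn = sqlite3.connect("biorepository.db")
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS subject (
        subject_id   TEXT PRIMARY KEY,
        diagnosis    TEXT CHECK (diagnosis IN ('MM', 'LC', 'OTHER', 'CONTROL')),
        sex          TEXT CHECK (sex IN ('M', 'F')),
        birth_year   INTEGER,
        consent_date TEXT              -- ISO 8601 date
    );
    CREATE TABLE IF NOT EXISTS specimen (
        specimen_id    TEXT PRIMARY KEY,
        subject_id     TEXT REFERENCES subject(subject_id),
        sample_type    TEXT,           -- e.g. serum, plasma, pleural fluid
        collected_at   TEXT,           -- ISO 8601 timestamp
        storage_temp_c REAL
    );
    """)
    conn.commit()
    conn.close()
    ```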

  1. caGrid 1.0: an enterprise Grid infrastructure for biomedical research.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oster, S.; Langella, S.; Hastings, S.

    To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. Design: An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including (1) discovery, (2) integrated and large-scale data analysis, and (3) coordinated study. Measurements: The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community-provided services, and application programming interfaces for building client applications. Results: The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components, and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: .

  2. Big Data Analytics Test Bed

    DTIC Science & Technology

    2013-09-01

    [Only table-of-contents fragments were extracted for this record: Backend Database Support; Installing…; Setup Virtual Infrastructure; Appendix F: Installing and Configuring Backend Database Support for vCenter.]

  3. Atomic and Molecular Databases, VAMDC (Virtual Atomic and Molecular Data Centre)

    NASA Astrophysics Data System (ADS)

    Dubernet, Marie-Lise; Zwölf, Carlo Maria; Moreau, Nicolas; Awa Ba, Yaya; VAMDC Consortium

    2015-08-01

    The "Virtual Atomic and Molecular Data Centre Consortium",(VAMDC Consortium, http://www.vamdc.eu) is a Consortium bound by an Memorandum of Understanding aiming at ensuring the sustainability of the VAMDC e-infrastructure. The current VAMDC e-infrastructure inter-connects about 30 atomic and molecular databases with the number of connected databases increasing every year: some databases are well-known databases such as CDMS, JPL, HITRAN, VALD,.., other databases have been created since the start of VAMDC. About 90% of our databases are used for astrophysical applications. The data can be queried, retrieved, visualized in a single format from a general portal (http://portal.vamdc.eu) and VAMDC is also developing standalone tools in order to retrieve and handle the data. VAMDC provides software and support in order to include databases within the VAMDC e-infrastructure. One current feature of VAMDC is the constrained environnement of description of data that ensures a higher quality for distribution of data; a future feature is the link of VAMDC with evaluation/validation groups. The talk will present the VAMDC Consortium and the VAMDC e infrastructure with its underlying technology, its services, its science use cases and its etension towards other communities than the academic research community.

  4. Open source system OpenVPN in a function of Virtual Private Network

    NASA Astrophysics Data System (ADS)

    Skendzic, A.; Kovacic, B.

    2017-05-01

    Use of Virtual Private Networks (VPNs) can establish a high level of security in network communication. VPN technology enables highly secure networking over distributed or public network infrastructure. A VPN applies its own security and management rules inside the network, and it can be set up over different communication channels, such as the Internet or a separate ISP communication infrastructure. A VPN establishes a secure communication channel over a public network between two endpoints (computers). OpenVPN is an open-source software product under the GNU General Public License (GPL) that can be used to establish VPN communication between two computers inside a business local network over public communication infrastructure. It uses special security protocols and 256-bit encryption, and it is capable of traversing network address translators (NATs) and firewalls. It allows computers to authenticate each other using a pre-shared secret key, certificates, or a username and password. This work gives a review of VPN technology with a special accent on OpenVPN. The paper also gives a comparison and discusses the financial benefits of using open-source VPN software in a business environment.
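
    For a concrete feel of the setup described above, here is a minimal sketch that writes a bare-bones OpenVPN client configuration and launches the daemon from Python. The host name, port, and certificate paths are placeholders; a production deployment would follow the organization's PKI and hardening policy rather than this skeleton.

    ```python
    # Minimal sketch: generating an OpenVPN client config and launching
    # the daemon. Host, port, and certificate paths are placeholders;
    # requires root privileges and an OpenVPN installation.
    import subprocess

    client_config = """\
    client
    dev tun
    proto udp
    remote vpn.example.org 1194
    ca ca.crt
    cert client.crt
    key client.key
    cipher AES-256-CBC
    verb 3
    """
    with open("client.ovpn", "w") as f:
        f.write(client_config)

    subprocess.run(["openvpn", "--config", "client.ovpn"], check=True)
    ```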

  5. The Infrastructure of an Integrated Virtual Reality Environment for International Space Welding Experiment

    NASA Technical Reports Server (NTRS)

    Wang, Peter Hor-Ching

    1996-01-01

    This study is a continuation of summer research performed under the 1995 NASA/ASEE Summer Faculty Fellowship Program. This effort is to provide the infrastructure of an integrated Virtual Reality (VR) environment for the International Space Welding Experiment (ISWE) Analytical Tool and Trainer and the Microgravity Science Glovebox (MSG) Analytical Tool study. Due to the unavailability of the MSG CAD files and the 3D-CAD converter, little was done on the MSG study. However, the infrastructure of the integrated VR environment for ISWE is capable of supporting the MSG study when the CAD files become available. Two primary goals were established for this research. First, the essential peripheral devices for an integrated VR environment were to be studied and developed for the ISWE and MSG studies. Secondly, the training of the flight crew (astronaut) in general orientation and procedures, and in the location, orientation, and sequencing of the welding samples and tools, is built into the VR system for studying the welding process and training the astronaut.

  6. Informatics for neglected diseases collaborations.

    PubMed

    Bost, Frederic; Jacobs, Robert T; Kowalczyk, Paul

    2010-05-01

    Many different public and private organizations from across the globe are collaborating on neglected diseases drug-discovery and development projects with the aim of identifying a cure for tropical infectious diseases. These neglected diseases collaborations require a global, secure, multi-organization data-management solution, combined with a platform that facilitates communication and supports collaborative work. This review discusses the solutions offered by 'Software as a Service' (SaaS) web-based platforms, despite notable challenges, and the evolution of these platforms required to foster efficient virtual research efforts by geographically dispersed scientists.

  7. VESPA: Developing the Planetary Science Virtual Observatory in H2020

    NASA Astrophysics Data System (ADS)

    Erard, S.; Cecconi, B.; Le Sidaner, P.; Capria, T.; Rossi, A. P.; Schmitt, B.; André, N.; Vandaele, A.-C.; Scherf, M.; Hueso, R.; Maattanen, A.; Thuillot, W.; Achilleos, N.; Marmo, C.; Santolik, O.; Benson, K.; Bollard, Ph.

    2015-10-01

    The Europlanet H2020 programme will develop a research infrastructure in Horizon 2020. The programme includes a follow-on to the FP7 activity aimed at developing the Planetary Science Virtual Observatory (VO). This activity is called VESPA, which stands for Virtual European Solar and Planetary Access. Building on the IDIS activity of Europlanet FP7, VESPA will distribute more data, improve the connected tools and infrastructure, and help develop a community of both users and data providers. One goal of the Europlanet FP7 programme was to set the basis for a European Virtual Observatory in Planetary Science. A prototype was set up during FP7, with most of the activity dedicated to the definition of standards to handle data in this field. The aim was to facilitate searches in big archives as well as in sparse databases, to make on-line data access and visualization possible, and to allow small data providers to make their data available in an interoperable environment with minimum effort. This system makes intensive use of studies and developments led in astronomy (IVOA), solar science (HELIO), plasma physics (SPASE), and space archive services (IPDA), and remains consistent with extensions of IVOA standards.

  8. VESPA: developing the planetary science Virtual Observatory in H2020

    NASA Astrophysics Data System (ADS)

    Erard, Stéphane; Cecconi, Baptiste; Le Sidaner, Pierre; Capria, Teresa; Rossi, Angelo Pio

    2016-04-01

    The Europlanet H2020 programme will develop a research infrastructure in Horizon 2020. The programme includes a follow-on to the FP7 activity aimed at developing the Planetary Science Virtual Observatory (VO). This activity is called VESPA, which stands for Virtual European Solar and Planetary Access. Building on the IDIS activity of Europlanet FP7, VESPA will distribute more data, improve the connected tools and infrastructure, and help develop a community of both users and data providers. One goal of the Europlanet FP7 programme was to set the basis for a European Virtual Observatory in Planetary Science. A prototype was set up during FP7, with most of the activity dedicated to the definition of standards to handle data in this field. The aim was to facilitate searches in big archives as well as in sparse databases, to make on-line data access and visualization possible, and to allow small data providers to make their data available in an interoperable environment with minimum effort. This system makes intensive use of studies and developments led in astronomy (IVOA), solar science (HELIO), plasma physics (SPASE), and space archive services (IPDA), and remains consistent with extensions of IVOA standards.
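
    As an illustration of what this interoperability enables (our sketch, not from the abstracts): services following the EPN-TAP convention promoted by VESPA expose a standard epn_core view that any generic TAP client can query. The service URL below is hypothetical, and pyvo is one such generic Virtual Observatory client library.

      import pyvo  # generic Virtual Observatory client library

      # Hypothetical EPN-TAP endpoint; real services are discoverable via the VO registry.
      service = pyvo.dal.TAPService("http://example.obspm.fr/tap")

      # granule_uid, target_name and time_min are standard epn_core columns.
      results = service.search(
          "SELECT TOP 10 granule_uid, target_name, time_min "
          "FROM epn_core WHERE target_name = 'Mars'"
      )
      for row in results:
          print(row["granule_uid"], row["time_min"])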

  9. Exploring Infiniband Hardware Virtualization in OpenNebula towards Efficient High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pais Pitta de Lacerda Ruivo, Tiago; Bernabeu Altayo, Gerard; Garzoglio, Gabriele

    2014-11-11

    It has been widely accepted that software virtualization has a big negative impact on high-performance computing (HPC) application performance. This work explores the potential use of Infiniband hardware virtualization in an OpenNebula cloud towards the efficient support of MPI-based workloads. We have implemented, deployed, and tested an Infiniband network on the FermiCloud private Infrastructure-as-a-Service (IaaS) cloud. To avoid software virtualization and minimize the virtualization overhead, we employed a technique called Single Root Input/Output Virtualization (SR-IOV). Our solution spanned modifications to the Linux hypervisor as well as to the OpenNebula manager. We evaluated the performance of the hardware virtualization on up to 56 virtual machines connected by up to 8 DDR Infiniband network links, with micro-benchmarks (latency and bandwidth) as well as with an MPI-intensive application (the HPL Linpack benchmark).
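
    For flavor, one common way SR-IOV passthrough is wired up with KVM/libvirt (an illustrative fragment, not necessarily the FermiCloud configuration; the PCI address is a placeholder) is to hand a virtual function directly to the guest:

      <!-- Libvirt guest fragment: assign one Infiniband virtual function (VF)
           to the VM, bypassing the software network stack. VFs are created by
           the HCA driver, e.g. for Mellanox: modprobe mlx4_core num_vfs=8 -->
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <source>
          <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
        </source>
      </hostdev>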

  10. Customizing Laboratory Information Systems: Closing the Functionality Gap.

    PubMed

    Gershkovich, Peter; Sinard, John H

    2015-09-01

    Highly customizable laboratory information systems help to address the great variations in laboratory workflows typical in pathology. Often, however, built-in customization tools are not sufficient to add all of the desired functionality and improve systems interoperability. Emerging technologies and advances in medicine often create a void in functionality that we call a functionality gap. These gaps have distinct characteristics: a persuasive need to change the way a pathology group operates, the general availability of technology to address the missing functionality, the absence of this technology from your laboratory information system, and the inability of built-in customization tools to address it. We emphasize the pervasive nature of these gaps and the role of pathology informatics in closing them, and suggest methods for achieving this. We found that a large number of the papers in the Journal of Pathology Informatics are concerned with these functionality gaps, and an even larger proportion of electronic posters and abstracts presented at the Pathology Informatics Summit conference each year deal directly with these unmet needs in pathology practice. A rapid, continuous, and sustainable approach to closing these gaps is critical for pathology to provide the highest quality of care, adopt new technologies, and meet regulatory and financial challenges. The key element of successfully addressing functionality gaps is gap ownership: the ability to control the entire pathology information infrastructure, with access to complementary systems and components. In addition, software developers with detailed domain expertise, equipped with the right tools and methodology, can effectively address these needs as they emerge.

  11. Next Generation Distributed Computing for Cancer Research

    PubMed Central

    Agarwal, Pankaj; Owzar, Kouros

    2014-01-01

    Advances in next generation sequencing (NGS) and mass spectrometry (MS) technologies have provided many new opportunities and angles for extending the scope of translational cancer research while creating tremendous challenges in data management and analysis. The resulting informatics challenge is invariably not amenable to the use of traditional computing models. Recent advances in scalable computing and associated infrastructure, particularly distributed computing for Big Data, can provide solutions for addressing these challenges. In this review, the next generation of distributed computing technologies that can address these informatics problems is described from the perspective of three key components of a computational platform, namely computing, data storage and management, and networking. A broad overview of scalable computing is provided to set the context for a detailed description of Hadoop, a technology that is being rapidly adopted for large-scale distributed computing. A proof-of-concept Hadoop cluster, set up for performance benchmarking of NGS read alignment, is described as an example of how to work with Hadoop. Finally, Hadoop is compared with a number of other current technologies for distributed computing. PMID:25983539
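
    As an illustration of the Hadoop working style the review describes (our sketch, not the benchmarked pipeline): with Hadoop Streaming, a read-processing step can be expressed as a mapper and reducer over SAM-format alignment records, here counting records per reference sequence.

      #!/usr/bin/env python3
      # Hadoop Streaming mapper/reducer in one file (invocation paths illustrative):
      #   hadoop jar hadoop-streaming.jar -input reads.sam -output counts \
      #       -mapper 'sam_count.py map' -reducer 'sam_count.py reduce' -file sam_count.py
      import sys

      def map_phase():
          for line in sys.stdin:
              if line.startswith("@"):      # skip SAM header lines
                  continue
              fields = line.rstrip("\n").split("\t")
              if len(fields) > 2:
                  print(f"{fields[2]}\t1")  # key = reference name (RNAME), value = 1

      def reduce_phase():
          current, total = None, 0
          for line in sys.stdin:            # Hadoop delivers lines sorted by key
              key, _, value = line.rstrip("\n").partition("\t")
              if key != current:
                  if current is not None:
                      print(f"{current}\t{total}")
                  current, total = key, 0
              total += int(value or 0)
          if current is not None:
              print(f"{current}\t{total}")

      if __name__ == "__main__":
          (map_phase if sys.argv[1:] == ["map"] else reduce_phase)()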

  12. Patient-centered medical home cyberinfrastructure current and future landscape.

    PubMed

    Finkelstein, Joseph; Barr, Michael S; Kothari, Pranav P; Nace, David K; Quinn, Matthew

    2011-05-01

    The patient-centered medical home (PCMH) is an approach that evolved from the understanding that a well-organized, proactive clinical team working in tandem with well-informed patients is better able to address preventive and disease management needs in a guideline-concordant manner. This approach represents a fundamental shift from episodic acute care models and has become an integral part of health reform supported on a federal level. The major aspects of PCMH, especially those pertinent to its information infrastructure, were discussed by an expert panel organized by the Agency for Healthcare Research and Quality at the Informatics for Consumer Health Summit. The goal of this article is to summarize the panel discussions along the four major domains presented at the summit: (1) PCMH as an Evolving Model of Healthcare Delivery; (2) Health Information Technology (HIT) Applications to Support the PCMH; (3) Current HIT Landscape of PCMH: Challenges and Opportunities; and (4) Future HIT Landscape of PCMH: Federal Initiatives on Health Informatics, Legislation, and Standardization. Copyright © 2011 American Journal of Preventive Medicine. All rights reserved.

  13. Big heart data: advancing health informatics through data sharing in cardiovascular imaging.

    PubMed

    Suinesiaputra, Avan; Medrano-Gracia, Pau; Cowan, Brett R; Young, Alistair A

    2015-07-01

    The burden of heart disease is rapidly worsening due to the increasing prevalence of obesity and diabetes. Data sharing and open database resources for heart health informatics are important for advancing our understanding of cardiovascular function, disease progression and therapeutics. Data sharing enables valuable information, often obtained at considerable expense and effort, to be reused beyond the specific objectives of the original study. Many government funding agencies and journal publishers are requiring data reuse, and are providing mechanisms for data curation and archival. Tools and infrastructure are available to archive anonymous data from a wide range of studies, from descriptive epidemiological data to gigabytes of imaging data. Meta-analyses can be performed to combine raw data from disparate studies to obtain unique comparisons or to enhance statistical power. Open benchmark datasets are invaluable for validating data analysis algorithms and objectively comparing results. This review provides a rationale for increased data sharing and surveys recent progress in the cardiovascular domain. We also highlight the potential of recent large cardiovascular epidemiological studies enabling collaborative efforts to facilitate data sharing, algorithms benchmarking, disease modeling and statistical atlases.

  14. mHealth to revolutionize information retrieval in low and middle income countries: introduction and proposed solutions using Botswana as reference point.

    PubMed

    Littman-Quinn, Ryan; Luberti, Anthony A; Kovarik, Carrie

    2013-01-01

    Information retrieval (IR) practice is invaluable in health care, where the growth of medical knowledge has long surpassed human memory capabilities, and health care workers often have unmet information needs. While the information and communications technology (ICT) revolution is improving IR in the Western world, the global digital divide has never been wider. Low and Middle Income Countries (LMICs) have the least advanced ICT infrastructure and service provision, and are also burdened with the majority of the world's health issues and severe shortages of health care workers. Initiatives utilizing mobile technology in healthcare and public health (mHealth) have shown potential at addressing these inequalities and challenges. Using Botswana as a reference point, this paper aims to broadly describe the healthcare and ICT challenges facing LMICs and the promise of mHealth as a field in health informatics, and then to propose health informatics solutions that specifically address IR content and needs. One solution proposes utilizing Unstructured Supplementary Service Data (USSD) for accessing treatment guidelines, and the other outlines applications of smart devices for IR.

  15. Next generation distributed computing for cancer research.

    PubMed

    Agarwal, Pankaj; Owzar, Kouros

    2014-01-01

    Advances in next generation sequencing (NGS) and mass spectrometry (MS) technologies have provided many new opportunities and angles for extending the scope of translational cancer research while creating tremendous challenges in data management and analysis. The resulting informatics challenge is invariably not amenable to the use of traditional computing models. Recent advances in scalable computing and associated infrastructure, particularly distributed computing for Big Data, can provide solutions for addressing these challenges. In this review, the next generation of distributed computing technologies that can address these informatics problems is described from the perspective of three key components of a computational platform, namely computing, data storage and management, and networking. A broad overview of scalable computing is provided to set the context for a detailed description of Hadoop, a technology that is being rapidly adopted for large-scale distributed computing. A proof-of-concept Hadoop cluster, set up for performance benchmarking of NGS read alignment, is described as an example of how to work with Hadoop. Finally, Hadoop is compared with a number of other current technologies for distributed computing.

  16. Efficient operating system level virtualization techniques for cloud resources

    NASA Astrophysics Data System (ADS)

    Ansu, R.; Samiksha; Anju, S.; Singh, K. John

    2017-11-01

    Cloud computing is an advancing technology which provides Infrastructure, Platform and Software as services. Virtualization and utility computing are the keys to cloud computing. The number of cloud users is increasing day by day, so it is the need of the hour to make resources available on demand to satisfy user requirements. The technique by which resources, namely storage, processing power, memory and network or I/O, are abstracted is known as virtualization. Various virtualization techniques are available for executing operating systems: full system virtualization and paravirtualization. In full virtualization, the whole hardware architecture is duplicated virtually; no modifications are required in the guest OS, as the OS deals with the VM hypervisor directly. In paravirtualization, the guest OS must be modified to run in parallel with other operating systems, and for the guest OS to access the hardware, the host OS must provide a Virtual Machine Interface. OS virtualization has many advantages, such as transparent application migration, server consolidation, online OS maintenance, and security. This paper briefs both virtualization techniques and discusses the issues in OS-level virtualization.
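
    To make OS-level virtualization concrete (a minimal sketch using util-linux's unshare on Linux, run as root): processes placed in new PID and mount namespaces see an isolated view of the system, which is the kernel mechanism container runtimes build on.

      import subprocess

      # Run a shell in fresh PID and mount namespaces. --mount-proc remounts /proc
      # inside the namespace, so `ps` sees only the processes created in there.
      subprocess.run([
          "unshare", "--pid", "--mount", "--fork", "--mount-proc",
          "sh", "-c", "ps ax",   # inside: only this shell and ps are visible
      ], check=True)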

  17. Virtual Hubs for facilitating access to Open Data

    NASA Astrophysics Data System (ADS)

    Mazzetti, Paolo; Latre, Miguel Á.; Ernst, Julia; Brumana, Raffaella; Brauman, Stefan; Nativi, Stefano

    2015-04-01

    In October 2014, the ENERGIC-OD (European NEtwork for Redistributing Geospatial Information to user Communities - Open Data) project, funded by the European Union under the Competitiveness and Innovation framework Programme (CIP), started. In response to the EU call, the general objective of the project is to "facilitate the use of open (freely available) geographic data from different sources for the creation of innovative applications and services through the creation of Virtual Hubs". In ENERGIC-OD, Virtual Hubs are conceived as information systems supporting the full life cycle of Open Data: publishing, discovery and access. They facilitate the use of Open Data by lowering, and possibly removing, the main barriers which hamper geo-information (GI) usage by end-users and application developers. Data and data-service heterogeneity is recognized as one of the major barriers to Open Data (re-)use: it forces end-users and developers to spend much effort on accessing different infrastructures and harmonizing datasets. Such heterogeneity cannot be completely removed through the adoption of standard specifications for service interfaces, metadata and data models, since different infrastructures adopt different standards to answer specific challenges and address specific use cases. Thus, beyond a certain extent, heterogeneity is irreducible, especially in interdisciplinary contexts. ENERGIC-OD Virtual Hubs address heterogeneity by adopting a mediation and brokering approach: specific components (brokers) are dedicated to harmonizing service interfaces, metadata and data models, enabling seamless discovery of and access to heterogeneous infrastructures and datasets. As an innovation project, ENERGIC-OD will integrate several existing technologies to implement Virtual Hubs as single points of access to geospatial datasets provided by new or existing platforms and infrastructures, including INSPIRE-compliant systems and Copernicus services. ENERGIC-OD will deploy a set of five Virtual Hubs (VHs) at the national level in France, Germany, Italy, Poland and Spain, and an additional one at the European level. VHs will be provided according to the cloud Software-as-a-Service model. The main expected impact of the VHs is the creation of new business opportunities by opening up access to research data and public sector information. Therefore, ENERGIC-OD addresses not only end-users, who will have the opportunity to access a VH through a geo-portal, but also application developers, who will be able to access VH functionalities through simple Application Programming Interfaces (APIs). The ENERGIC-OD Consortium will develop ten different applications on top of the deployed VHs, aiming to demonstrate how VHs facilitate the development of new and multidisciplinary applications based on the full exploitation of (open) GI, hence stimulating innovation and business activities.
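
    To make the mediation/brokering idea concrete (a toy sketch; the field names are illustrative, not the ENERGIC-OD schema): per-source adapters translate each catalogue's native metadata layout into one harmonized record, so clients program against a single model regardless of the source.

      # Each adapter maps one source's native metadata layout to a common record.
      def from_inspire(rec):   # adapter for an INSPIRE-style record
          return {"title": rec["md:title"], "bbox": rec["md:extent"]}

      def from_custom(rec):    # adapter for a home-grown catalogue
          return {"title": rec["name"], "bbox": rec["bounding_box"]}

      ADAPTERS = {"inspire": from_inspire, "custom": from_custom}

      def broker(source, record):
          """Return the harmonized view of `record` from `source`."""
          return ADAPTERS[source](record)

      print(broker("custom", {"name": "Land cover 2014",
                              "bounding_box": [6.0, 36.0, 19.0, 47.0]}))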

  18. Concordium 2015: Strategic Uses of Evidence to Transform Delivery Systems

    PubMed Central

    Holve, Erin; Weiss, Samantha

    2016-01-01

    In September 2015 the EDM Forum hosted AcademyHealth’s newest national conference, Concordium. The 11 papers featured in the eGEMs “Concordium 2015” special issue successfully reflect the major themes and issues discussed at the meeting. Many of the papers address informatics or methodological approaches to natural language processing (NLP) or text analysis, which is indicative of the importance of analyzing text data to gain insights into care coordination and patient-centered outcomes. Perspectives on the tools and infrastructure requirements that are needed to build learning health systems were also recurrent themes. PMID:27683671

  19. EU H2020 SERA: Seismology and Earthquake Engineering Research Infrastructure Alliance for Europe

    NASA Astrophysics Data System (ADS)

    Giardini, Domenico; Saleh, Kauzar; SERA Consortium, the

    2017-04-01

    SERA - Seismology and Earthquake Engineering Research Infrastructure Alliance for Europe - is a new infrastructure project awarded in the last Horizon 2020 call for Integrating Activities for Advanced Communities (INFRAIA-01-2016-2017). Building on precursor projects like NERA, SHARE, NERIES, SERIES, etc., SERA is expected to contribute significantly to the access of data, services and research infrastructures, and to develop innovative solutions in seismology and earthquake engineering, with the overall objective of reducing the exposure to risks associated with natural and anthropogenic earthquakes. For instance, SERA will revise the European seismic hazard reference model as input for the current revision of Eurocode 8 on the seismic design of buildings; we also plan to develop the first comprehensive framework for seismic risk modeling at the European scale, and new standards for future experimental observations and instruments for earthquake engineering and seismology. To that aim, SERA engages 31 institutions across Europe with leading expertise in the operation of research facilities, monitoring infrastructures, data repositories and experimental facilities in the fields of seismology, anthropogenic hazards and earthquake engineering. SERA comprises 26 activities, including 5 Networking Activities (NA) to improve the availability and access of data through enhanced community coordination and pooling of resources, 6 Joint Research Activities (JRA) aimed at creating new European standards for the optimal use of the data collected by the European infrastructures, Virtual Access (VA) to the 5 main European services for seismology and engineering seismology, and Trans-national Access (TA) to 10 high-class experimental facilities for earthquake engineering and seismology in Europe. In fact, around 50% of the SERA resources will be dedicated to virtual and transnational access. SERA and EPOS (European Plate Observing System, a European Research Infrastructure Consortium for solid Earth services in Europe) will be developed in parallel, giving SERA the capacity to develop building blocks for EPOS in the areas of seismology, anthropogenic hazards and seismic engineering, such as new virtual access, new anthropogenic hazard products, expanded access to waveform data, etc. In addition, services developed and validated in SERA will be produced in a way that is compatible for integration in EPOS. This communication is aimed at informing the scientific community about the objectives and work plan of SERA, starting in spring 2017 for a duration of 3 years.

  20. The Integration of CloudStack and OCCI/OpenNebula with DIRAC

    NASA Astrophysics Data System (ADS)

    Méndez Muñoz, Víctor; Fernández Albor, Víctor; Graciani Diaz, Ricardo; Casajús Ramo, Adriàn; Fernández Pena, Tomás; Merino Arévalo, Gonzalo; José Saborido Silva, Juan

    2012-12-01

    The increasing availability of Cloud resources is emerging as a realistic alternative to the Grid as a paradigm for enabling scientific communities to access large distributed computing resources. The DIRAC framework for distributed computing provides an easy way to access resources from both systems efficiently. This paper explains the integration of DIRAC with two open-source Cloud managers: OpenNebula (taking advantage of the OCCI standard) and CloudStack. These are computing tools to manage the complexity and heterogeneity of distributed data center infrastructures, allowing virtual clusters to be created on demand, including public, private and hybrid clouds. This approach required developing an extension to the previous DIRAC Virtual Machine engine, originally developed for Amazon EC2, to allow the connection with these new cloud managers. In the OpenNebula case, the development has been based on the CernVM Virtual Software Appliance with appropriate contextualization, while in the case of CloudStack, the infrastructure has been kept more general, permitting other virtual machine sources and operating systems to be used. In both cases, the CernVM File System has been used to facilitate software distribution to the computing nodes. With the resulting infrastructure, the cloud resources are transparent to the users through a friendly interface, like the DIRAC Web Portal. The main purpose of this integration is to get a system that can manage cloud and grid resources at the same time. This particular feature pushes DIRAC to a new conceptual denomination as interware, integrating different middleware. Users from different communities do not need to care about the installation of the standard software that is available at the nodes, nor about the operating system of the host machine, which is transparent to the user. This paper presents an analysis of the overhead of the virtual layer, with tests comparing the proposed approach with the existing Grid solution. License Notice: Published under licence in Journal of Physics: Conference Series by IOP Publishing Ltd.
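
    For flavor, creating a VM against a CloudStack manager looks roughly like the sketch below (assuming the optional unauthenticated integration API port is enabled; production calls go through the authenticated, request-signing endpoint, and all IDs are placeholders for entities registered in the cloud):

      import requests

      params = {
          "command": "deployVirtualMachine",
          "zoneid": "ZONE-UUID",
          "templateid": "TEMPLATE-UUID",        # e.g. a CernVM-based image
          "serviceofferingid": "OFFERING-UUID",
          "response": "json",
      }
      r = requests.get("http://cloudstack.example.org:8096/client/api",
                       params=params, timeout=60)
      r.raise_for_status()
      print(r.json()["deployvirtualmachineresponse"])  # async job id + VM id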

  1. Virtual health platform for medical tourism purposes.

    PubMed

    Martinez, Debora; Ferriol, Pedro; Tous, Xisco; Cabrer, Miguel; Prats, Mercedes

    2008-01-01

    This paper introduces an overview of the Virtual Health Platform (VHP), an alternative approach to creating a functional PHR system in a medical tourism environment. The proposed platform has been designed to integrate with EHR infrastructures and, in this way, aims to be more useful and advantageous to the patient or tourist. Use cases of the VHP and its potential benefits summarize the analysis.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garzoglio, Gabriele

    The Fermilab Grid and Cloud Computing Department and the KISTI Global Science experimental Data hub Center propose a joint project. The goals are to enable scientific workflows of stakeholders to run on multiple cloud resources by use of (a) Virtual Infrastructure Automation and Provisioning, (b) Interoperability and Federation of Cloud Resources, and (c) High-Throughput Fabric Virtualization. This is a matching-fund project in which Fermilab and KISTI will contribute equal resources.

  3. Affective medicine. A review of affective computing efforts in medical informatics.

    PubMed

    Luneski, A; Konstantinidis, E; Bamidis, P D

    2010-01-01

    Affective computing (AC) is concerned with emotional interactions performed with and through computers. It is defined as "computing that relates to, arises from, or deliberately influences emotions". AC enables investigation and understanding of the relation between human emotions and health, as well as the application of assistive and useful technologies in the medical domain. The objectives are: 1) to review the general state of the art in AC and its applications in medicine, and 2) to establish synergies between the research communities of AC and medical informatics. Aspects related to the human affective state as a determinant of human health are discussed, coupled with an illustration of significant AC research and related literature output. Moreover, affective communication channels are described and their range of application fields is explored through illustrative examples. The conferences, European research projects and research publications presented illustrate the recent increase of interest in the AC area by the medical community. Tele-home healthcare, ambient intelligence (AmI), ubiquitous monitoring, e-learning and virtual communities with emotionally expressive characters for elderly or impaired people are a few areas where the potential of AC has been realized and applications have emerged. A number of gaps can potentially be overcome through the synergy of AC and medical informatics. The application of AC technologies parallels the advancement of the existing state of the art and the introduction of new methods. The body of work and projects reviewed in this paper points to an ambitious and optimistic synergetic future for the field of affective medicine.

  4. High-performance integrated virtual environment (HIVE): a robust infrastructure for next-generation sequence data analysis

    PubMed Central

    Simonyan, Vahan; Chumakov, Konstantin; Dingerdissen, Hayley; Faison, William; Goldweber, Scott; Golikov, Anton; Gulzar, Naila; Karagiannis, Konstantinos; Vinh Nguyen Lam, Phuc; Maudru, Thomas; Muravitskaja, Olesja; Osipova, Ekaterina; Pan, Yang; Pschenichnov, Alexey; Rostovtsev, Alexandre; Santana-Quintero, Luis; Smith, Krista; Thompson, Elaine E.; Tkachenko, Valery; Torcivia-Rodriguez, John; Wan, Quan; Wang, Jing; Wu, Tsung-Jung; Wilson, Carolyn; Mazumder, Raja

    2016-01-01

    The High-performance Integrated Virtual Environment (HIVE) is a distributed storage and compute environment designed primarily to handle next-generation sequencing (NGS) data. This multicomponent cloud infrastructure provides secure web access for authorized users to deposit, retrieve, annotate and compute on NGS data, and to analyse the outcomes using web interface visual environments appropriately built in collaboration with research and regulatory scientists and other end users. Unlike many massively parallel computing environments, HIVE uses a cloud control server which virtualizes services, not processes. It is both very robust and flexible due to the abstraction layer introduced between computational requests and operating system processes. The novel paradigm of moving computations to the data, instead of moving data to computational nodes, has proven to be significantly less taxing for both hardware and network infrastructure. The honeycomb data model developed for HIVE integrates metadata into an object-oriented model. Its distinction from other object-oriented databases is in the additional implementation of a unified application program interface to search, view and manipulate data of all types. This model simplifies the introduction of new data types, thereby minimizing the need for database restructuring and streamlining the development of new integrated information systems. The honeycomb model employs a highly secure hierarchical access control and permission system, allowing determination of data access privileges in a finely granular manner without flooding the security subsystem with a multiplicity of rules. HIVE infrastructure will allow engineers and scientists to perform NGS analysis in a manner that is both efficient and secure. HIVE is actively supported in public and private domains, and project collaborations are welcomed. Database URL: https://hive.biochemistry.gwu.edu PMID:26989153

  5. High-performance integrated virtual environment (HIVE): a robust infrastructure for next-generation sequence data analysis.

    PubMed

    Simonyan, Vahan; Chumakov, Konstantin; Dingerdissen, Hayley; Faison, William; Goldweber, Scott; Golikov, Anton; Gulzar, Naila; Karagiannis, Konstantinos; Vinh Nguyen Lam, Phuc; Maudru, Thomas; Muravitskaja, Olesja; Osipova, Ekaterina; Pan, Yang; Pschenichnov, Alexey; Rostovtsev, Alexandre; Santana-Quintero, Luis; Smith, Krista; Thompson, Elaine E; Tkachenko, Valery; Torcivia-Rodriguez, John; Voskanian, Alin; Wan, Quan; Wang, Jing; Wu, Tsung-Jung; Wilson, Carolyn; Mazumder, Raja

    2016-01-01

    The High-performance Integrated Virtual Environment (HIVE) is a distributed storage and compute environment designed primarily to handle next-generation sequencing (NGS) data. This multicomponent cloud infrastructure provides secure web access for authorized users to deposit, retrieve, annotate and compute on NGS data, and to analyse the outcomes using web interface visual environments appropriately built in collaboration with research and regulatory scientists and other end users. Unlike many massively parallel computing environments, HIVE uses a cloud control server which virtualizes services, not processes. It is both very robust and flexible due to the abstraction layer introduced between computational requests and operating system processes. The novel paradigm of moving computations to the data, instead of moving data to computational nodes, has proven to be significantly less taxing for both hardware and network infrastructure. The honeycomb data model developed for HIVE integrates metadata into an object-oriented model. Its distinction from other object-oriented databases is in the additional implementation of a unified application program interface to search, view and manipulate data of all types. This model simplifies the introduction of new data types, thereby minimizing the need for database restructuring and streamlining the development of new integrated information systems. The honeycomb model employs a highly secure hierarchical access control and permission system, allowing determination of data access privileges in a finely granular manner without flooding the security subsystem with a multiplicity of rules. HIVE infrastructure will allow engineers and scientists to perform NGS analysis in a manner that is both efficient and secure. HIVE is actively supported in public and private domains, and project collaborations are welcomed. Database URL: https://hive.biochemistry.gwu.edu. © The Author(s) 2016. Published by Oxford University Press.

  6. Modeling of luminance distribution in CAVE-type virtual reality systems

    NASA Astrophysics Data System (ADS)

    Meironke, Michał; Mazikowski, Adam

    2017-08-01

    At present, some of the most advanced virtual reality systems are CAVE-type (Cave Automatic Virtual Environment) installations. Such systems usually consist of four, five or six projection screens; in the case of six screens, these are arranged in the form of a cube. Providing the user with a high level of immersion in such systems depends largely on the optical properties of the system. The modeling of physical phenomena nowadays plays a huge role in most fields of science and technology, allowing the operation of a device to be simulated without any changes to the physical construction. In this paper, the distribution of luminance in CAVE-type virtual reality systems was modelled. Calculations were performed for a model of a 6-walled CAVE-type installation, based on the Immersive 3D Visualization Laboratory situated at the Faculty of Electronics, Telecommunications and Informatics at the Gdańsk University of Technology. Tests were carried out for two different scattering distributions of the screen material in order to check how these characteristics influence the luminance distribution of the whole CAVE. The basic assumptions and simplifications of the modelled CAVE-type installation are presented together with the results, along with a brief discussion of the results and the usefulness of the developed model.
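
    As background for such a model (our gloss, not the paper's formulation): for an ideally diffuse (Lambertian) screen element, the luminance seen from any direction follows from the illuminance delivered by the projector,

      L = \frac{\rho \, E}{\pi}

    with L the luminance (cd/m²), E the illuminance (lx) and ρ the reflectance. Real screen materials deviate from this ideal, which is precisely why the measured scattering distribution of the material matters for the luminance distribution across the whole CAVE.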

  7. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    NASA Astrophysics Data System (ADS)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.
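
    A minimal sketch of the replicated storage layer described above (host names and brick paths are illustrative, not the INFN-Naples layout): a two-way replicated GlusterFS volume mirrors every write across two disk servers, so a VM image survives the loss of either server.

      import subprocess

      # Create and start a 2-way replicated volume across two disk servers.
      cmds = [
          ["gluster", "volume", "create", "vmstore", "replica", "2",
           "disk1:/bricks/vmstore", "disk2:/bricks/vmstore"],
          ["gluster", "volume", "start", "vmstore"],
      ]
      for cmd in cmds:
          subprocess.run(cmd, check=True)  # each write is then mirrored on both servers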

  8. AstroCloud, a Cyber-Infrastructure for Astronomy Research: Cloud Computing Environments

    NASA Astrophysics Data System (ADS)

    Li, C.; Wang, J.; Cui, C.; He, B.; Fan, D.; Yang, Y.; Chen, J.; Zhang, H.; Yu, C.; Xiao, J.; Wang, C.; Cao, Z.; Fan, Y.; Hong, Z.; Li, S.; Mi, L.; Wan, W.; Wang, J.; Yin, S.

    2015-09-01

    AstroCloud is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Based on CloudStack, an open-source software platform, we set up the cloud computing environment for the AstroCloud project. It consists of five distributed nodes across the mainland of China. Users can use and analyse data in this cloud computing environment. Based on GlusterFS, we built a scalable cloud storage system; each user has a private space, which can be shared among different virtual machines and desktop systems. With this environment, astronomers can easily access astronomical data collected by different telescopes and data centers, and data producers can archive their datasets safely.

  9. iSERVO: Implementing the International Solid Earth Research Virtual Observatory by Integrating Computational Grid and Geographical Information Web Services

    NASA Astrophysics Data System (ADS)

    Aktas, Mehmet; Aydin, Galip; Donnellan, Andrea; Fox, Geoffrey; Granat, Robert; Grant, Lisa; Lyzenga, Greg; McLeod, Dennis; Pallickara, Shrideep; Parker, Jay; Pierce, Marlon; Rundle, John; Sayar, Ahmet; Tullis, Terry

    2006-12-01

    We describe the goals and initial implementation of the International Solid Earth Virtual Observatory (iSERVO). This system is built using a Web Services approach to Grid computing infrastructure and is accessed via a component-based Web portal user interface. We describe our implementations of services used by this system, including Geographical Information System (GIS)-based data grid services for accessing remote data repositories and job management services for controlling multiple execution steps. iSERVO is an example of a larger trend to build globally scalable scientific computing infrastructures using the Service Oriented Architecture approach. Adoption of this approach raises a number of research challenges in millisecond-latency message systems suitable for internet-enabled scientific applications. We review our research in these areas.

  10. A Combination Therapy of JO-I and Chemotherapy in Ovarian Cancer Models

    DTIC Science & Technology

    2013-10-01

    which consists of a 3PAR storage backend and is sharing data via a highly available NetApp storage gateway and 2 high-throughput commodity storage... Environment is configured as self-service Enterprise cloud and currently hosts more than 700 virtual machines. The network infrastructure consists of... technology infrastructure and information system applications designed to integrate, automate, and standardize operations. These systems fuse state of

  11. Data Intensive Scientific Workflows on a Federated Cloud: CRADA Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garzoglio, Gabriele

    The Fermilab Scientific Computing Division and the KISTI Global Science Experimental Data Hub Center have built a prototypical large-scale infrastructure to handle scientific workflows of stakeholders to run on multiple cloud resources. The demonstrations have been in the areas of (a) Data-Intensive Scientific Workflows on Federated Clouds, (b) Interoperability and Federation of Cloud Resources, and (c) Virtual Infrastructure Automation to enable On-Demand Services.

  12. Virtualization in network and servers infrastructure to support dynamic system reconfiguration in ALMA

    NASA Astrophysics Data System (ADS)

    Shen, Tzu-Chiang; Ovando, Nicolás.; Bartsch, Marcelo; Simmond, Max; Vélez, Gastón; Robles, Manuel; Soto, Rubén.; Ibsen, Jorge; Saldias, Christian

    2012-09-01

    ALMA is the first astronomical project being constructed and operated under an industrial approach, due to the huge number of elements involved. In order to achieve maximum throughput during the engineering and scientific commissioning phase, several production lines have been established to work in parallel. This decision required modifications to the original system architecture, in which all the elements are controlled and operated within a unique Standard Test Environment (STE). Advances in the network industry, together with the maturity of the virtualization paradigm, allow us to provide a solution which can replicate the STE infrastructure without changing its network address definition. This is only possible with the Virtual Routing and Forwarding (VRF) and Virtual LAN (VLAN) concepts. The solution allows dynamic reconfiguration of antennas and other hardware across the production lines with minimum time and zero human intervention in the cabling. We also push virtualization even further: classical rack-mount servers are being replaced and consolidated by blade servers, on top of which virtualized servers are centrally administered with VMware ESX. Hardware costs and system administration effort will be reduced considerably. This mechanism has been established and operated successfully during the last two years. This experience gave us the confidence to propose a solution to divide the main operation array into subarrays using the same concept, which will introduce huge flexibility and efficiency for ALMA operations and may eventually simplify the complexity of the ALMA core observing software, since there will be no need to deal with subarray complexity at the software level.

  13. Crossing the health IT chasm: considerations and policy recommendations to overcome current challenges and enable value-based care.

    PubMed

    Adler-Milstein, Julia; Embi, Peter J; Middleton, Blackford; Sarkar, Indra Neil; Smith, Jeff

    2017-09-01

    While great progress has been made in digitizing the US health care system, today's health information technology (IT) infrastructure remains largely a collection of systems that are not designed to support a transition to value-based care. In addition, the pursuit of value-based care, in which we deliver better care with better outcomes at lower cost, places new demands on the health care system that our IT infrastructure needs to be able to support. Provider organizations pursuing new models of health care delivery and payment are finding that their electronic systems lack the capabilities needed to succeed. The result is a chasm between the current health IT ecosystem and the health IT ecosystem that is desperately needed. In this paper, we identify a set of focal goals and associated near-term achievable actions that are critical to pursue in order to enable the health IT ecosystem to meet the acute needs of modern health care delivery. These ideas emerged from discussions that occurred during the 2015 American Medical Informatics Association Policy Invitational Meeting. To illustrate the chasm and motivate our recommendations, we created a vignette from the multistakeholder perspectives of a patient, his provider, and researchers/innovators. It describes an idealized scenario in which each stakeholder's needs are supported by an integrated health IT environment. We identify the gaps preventing such a reality today and present associated policy recommendations that serve as a blueprint for critical actions that would enable us to cross the current health IT chasm by leveraging systems and information to routinely deliver high-value care. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  14. An online planetary exploration tool: "Country Movers"

    NASA Astrophysics Data System (ADS)

    Gede, Mátyás; Hargitai, Henrik

    2017-08-01

    Results of astrogeologic investigations are rarely communicated to the general public through maps, despite new advances in planetary spatial informatics and new spatial datasets with high resolution and more complete coverage. Planetary maps are typically produced by astrogeologists for other professionals, not by cartographers for the general public. We report on an application designed for students, which uses cartography as a framework to aid the virtual exploration of other planets and moons, using the concepts of size comparison and travel-time calculation. We also describe educational activities that build on geographic knowledge and expand it to planetary surfaces.
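
    The two ideas the tool builds on are easy to reproduce (a toy sketch with approximate radii; the example country, feature length and walking pace are our own choices, not taken from the application):

      import math

      MARS_RADIUS_KM = 3389.5
      EARTH_RADIUS_KM = 6371.0

      def surface_fraction(area_km2, radius_km):
          """Fraction of a body's surface covered by a region of the given area."""
          return area_km2 / (4 * math.pi * radius_km**2)

      hungary_km2 = 93_030  # example country area
      print(f"{surface_fraction(hungary_km2, EARTH_RADIUS_KM):.5%} of Earth")
      print(f"{surface_fraction(hungary_km2, MARS_RADIUS_KM):.5%} of Mars")

      # Travel time: walking the ~4000 km length of Valles Marineris
      # at 5 km/h, 8 hours per day.
      print(f"{4000 / (5 * 8):.0f} days of walking")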

  15. Robust Informatics Infrastructure Required For ICME: Combining Virtual and Experimental Data

    NASA Technical Reports Server (NTRS)

    Arnold, Steven M.; Holland, Frederic A. Jr.; Bednarcyk, Brett A.

    2014-01-01

    With the increased emphasis on reducing the cost and time to market of new materials, the need for robust automated materials information management systems enabling sophisticated data mining tools is increasing, as evidenced by the emphasis on Integrated Computational Materials Engineering (ICME) and the recent establishment of the Materials Genome Initiative (MGI). This need is also fueled by the demands for higher efficiency in material testing; consistency, quality and traceability of data; product design; engineering analysis; as well as control of access to proprietary or sensitive information. Further, the use of increasingly sophisticated nonlinear, anisotropic and/or multi-scale models requires the processing of both large volumes of test data and the complex materials data necessary to establish processing-microstructure-property-performance relationships. Fortunately, materials information management systems have kept pace with growing user demands and evolved to enable: (i) the capture of both point-wise data and full spectra of raw data curves; (ii) data management functions such as access, version, and quality controls; (iii) a wide range of data import, export and analysis capabilities; (iv) data pedigree traceability mechanisms; (v) data searching, reporting and viewing tools; and (vi) access to the information via a wide range of interfaces. This paper discusses key principles for the development of a robust materials information management system that enables the connections at various length scales to be made between experimental data and corresponding multiscale modeling toolsets to enable ICME. In particular, NASA Glenn's efforts towards establishing such a database for capturing constitutive modeling behavior for both monolithic and composite materials are discussed.
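
    To illustrate the kind of record such a system must manage (a hypothetical schema, not NASA Glenn's actual database design): point-wise derived properties, the full raw curve, and pedigree metadata travel together, so any derived number remains traceable to its test.

      from dataclasses import dataclass, field

      @dataclass
      class TensileTestRecord:
          material_id: str                # links to processing/microstructure records
          specimen_id: str
          temperature_K: float
          modulus_GPa: float              # point-wise derived property
          strain: list[float] = field(default_factory=list)      # full raw curve
          stress_MPa: list[float] = field(default_factory=list)
          pedigree: dict = field(default_factory=dict)            # operator, machine, standard, revision

      rec = TensileTestRecord("Ti-6-4_lot17", "S-042", 295.0, 113.8,
                              [0.0, 0.002], [0.0, 227.6],
                              {"standard": "ASTM E8", "version": 3})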

  16. Sensor, signal, and image informatics - state of the art and current topics.

    PubMed

    Lehmann, T M; Aach, T; Witte, H

    2006-01-01

    The number of articles published annually in the fields of biomedical signal and image acquisition and processing is increasing. Based on selected examples, this survey aims at comprehensively demonstrating the recent trends and developments. Four articles are selected for biomedical data acquisition, covering topics such as dose saving in CT, C-arm X-ray imaging systems for volume imaging, and the replacement of dose-intensive CT-based diagnostics with harmonic ultrasound imaging. Regarding biomedical signal analysis (BSA), the four selected articles discuss the equivalence of different time-frequency approaches for signal analysis; an application to cochlea implants, where time-frequency analysis is applied for controlling the replacement system; recent trends in the fusion of different modalities; and the role of BSA as part of brain-machine interfaces. To cover the broad spectrum of publications in the field of biomedical image processing, six papers are considered. Important topics are content-based image retrieval in medical applications, automatic classification of tongue photographs from traditional Chinese medicine, brain perfusion analysis in single photon emission computed tomography (SPECT), model-based visualization of vascular trees, and virtual surgery, where enhanced visualization and haptic feedback techniques are combined with a sphere-filled model of the organ. The selected papers emphasize the five fields forming the chain of biomedical data processing: (1) data acquisition, (2) data reconstruction and pre-processing, (3) data handling, (4) data analysis, and (5) data visualization. Fields 1 and 2 form sensor informatics, while fields 2 to 5 form signal or image informatics, with respect to the nature of the data considered. Biomedical data acquisition and pre-processing, as well as data handling, analysis and visualization, aim at providing reliable tools for decision support that improve the quality of health care. Comprehensive evaluation of the processing methods and their reliable integration in routine applications are future challenges in the field of sensor, signal and image informatics.

  17. Rheumatology Informatics System for Effectiveness: A National Informatics-Enabled Registry for Quality Improvement.

    PubMed

    Yazdany, Jinoos; Bansback, Nick; Clowse, Megan; Collier, Deborah; Law, Karen; Liao, Katherine P; Michaud, Kaleb; Morgan, Esi M; Oates, James C; Orozco, Catalina; Reimold, Andreas; Simard, Julia F; Myslinski, Rachel; Kazi, Salahuddin

    2016-12-01

    The Rheumatology Informatics System for Effectiveness (RISE) is a national electronic health record (EHR)-enabled registry. RISE passively collects data from EHRs of participating practices, provides advanced quality measurement and data analytic capacities, and fulfills national quality reporting requirements. Here we report the registry's architecture and initial data, and we demonstrate how RISE is being used to improve the quality of care. RISE is a certified Centers for Medicare and Medicaid Services Qualified Clinical Data Registry, allowing collection of data without individual patient informed consent. We analyzed data between October 1, 2014 and September 30, 2015 to characterize initial practices and patients captured in RISE. We also analyzed medication use among rheumatoid arthritis (RA) patients and performance on several quality measures. Across 55 sites, 312 clinicians contributed data to RISE; 72% were in group practice, 21% in solo practice, and 7% were part of a larger health system. Sites contributed data on 239,302 individuals. Among the subset with RA, 34.4% of patients were taking a biologic or targeted synthetic disease-modifying antirheumatic drug (DMARD) at their last encounter, and 66.7% were receiving a nonbiologic DMARD. Examples of quality measures include that 55.2% had a disease activity score recorded, 53.6% a functional status score, and 91.0% were taking a DMARD in the last year. RISE provides critical infrastructure for improving the quality of care in rheumatology and is a unique data source to generate new knowledge. Data validation and mapping are ongoing and RISE is available to the research and clinical communities to advance rheumatology. © 2016, American College of Rheumatology.

  18. TRIAD: The Translational Research Informatics and Data Management Grid

    PubMed Central

    Payne, P.; Ervin, D.; Dhaval, R.; Borlawsky, T.; Lai, A.

    2011-01-01

    Objective: Multi-disciplinary and multi-site biomedical research programs frequently require infrastructures capable of enabling the collection, management, analysis, and dissemination of heterogeneous, multi-dimensional, and distributed data and knowledge collections spanning organizational boundaries. We report on the design and initial deployment of an extensible biomedical informatics platform that is intended to address such requirements. Methods: A common approach to distributed data, information, and knowledge management needs in the healthcare and life science settings is the deployment and use of a service-oriented architecture (SOA). Such SOA technologies provide for strongly-typed, semantically annotated, and stateful data and analytical services that can be combined into data and knowledge integration and analysis “pipelines.” Using this overall design pattern, we have implemented and evaluated an extensible SOA platform for clinical and translational science applications known as the Translational Research Informatics and Data-management grid (TRIAD). TRIAD is a derivative and extension of the caGrid middleware and has an emphasis on supporting agile “working interoperability” between data, information, and knowledge resources. Results: Based upon initial verification and validation studies conducted in the context of a collection of driving clinical and translational research problems, we have been able to demonstrate that TRIAD achieves agile “working interoperability” between distributed data and knowledge sources. Conclusion: Informed by our initial verification and validation studies, we believe TRIAD provides an example instance of a lightweight and readily adoptable approach to the use of SOA technologies in the clinical and translational research setting. Furthermore, our initial use cases illustrate the importance and efficacy of enabling “working interoperability” in heterogeneous biomedical environments. PMID:23616879

  19. TRIAD: The Translational Research Informatics and Data Management Grid.

    PubMed

    Payne, P; Ervin, D; Dhaval, R; Borlawsky, T; Lai, A

    2011-01-01

    Multi-disciplinary and multi-site biomedical research programs frequently require infrastructures capable of enabling the collection, management, analysis, and dissemination of heterogeneous, multi-dimensional, and distributed data and knowledge collections spanning organizational boundaries. We report on the design and initial deployment of an extensible biomedical informatics platform that is intended to address such requirements. A common approach to distributed data, information, and knowledge management needs in the healthcare and life science settings is the deployment and use of a service-oriented architecture (SOA). Such SOA technologies provide for strongly-typed, semantically annotated, and stateful data and analytical services that can be combined into data and knowledge integration and analysis "pipelines." Using this overall design pattern, we have implemented and evaluated an extensible SOA platform for clinical and translational science applications known as the Translational Research Informatics and Data-management grid (TRIAD). TRIAD is a derivative and extension of the caGrid middleware and has an emphasis on supporting agile "working interoperability" between data, information, and knowledge resources. Based upon initial verification and validation studies conducted in the context of a collection of driving clinical and translational research problems, we have been able to demonstrate that TRIAD achieves agile "working interoperability" between distributed data and knowledge sources. Informed by our initial verification and validation studies, we believe TRIAD provides an example instance of a lightweight and readily adoptable approach to the use of SOA technologies in the clinical and translational research setting. Furthermore, our initial use cases illustrate the importance and efficacy of enabling "working interoperability" in heterogeneous biomedical environments.

  20. Virtual rehabilitation--benefits and challenges.

    PubMed

    Burdea, G C

    2003-01-01

    To discuss the advantages and disadvantages of rehabilitation applications of virtual reality. VR can be used as an enhancement to conventional therapy for patients with conditions ranging from musculoskeletal problems, to stroke-induced paralysis, to cognitive deficits. This approach is called "VR-augmented rehabilitation." Alternately, VR can replace conventional interventions altogether, in which case the rehabilitation is "VR-based." If the intervention is done at a distance, then it is called "telerehabilitation." Simulation exercises for post-stroke patients have been developed using a "teacher object" approach or a video game approach. Simulations for musculoskeletal patients use virtual replicas of rehabilitation devices (such as a rubber ball, power putty, or peg board). Phobia-inducing virtual environments are prescribed for patients with cognitive deficits. VR-augmented rehabilitation has been shown effective for stroke patients in the chronic phase of the disease. VR-based rehabilitation has improved patients with fear of flying, Vietnam syndrome, and fear of heights, as well as chronic stroke patients. Telerehabilitation interventions using VR have improved musculoskeletal and post-stroke patients, although less data are available at this time. Virtual reality presents significant advantages when applied to rehabilitation of patients with varied conditions. These advantages include patient motivation, adaptability and variability based on patient baseline, transparent data storage, online remote data access, economy of scale, and reduced medical costs. Challenges in VR use for rehabilitation relate to lack of computer skills on the part of therapists, lack of support infrastructure, expensive equipment (initially), inadequate communication infrastructure (for telerehabilitation in rural areas), and patient safety concerns.

  1. Community-driven computational biology with Debian Linux.

    PubMed

    Möller, Steffen; Krabbenhöft, Hajo Nils; Tille, Andreas; Paleino, David; Williams, Alan; Wolstencroft, Katy; Goble, Carole; Holland, Richard; Belhachemi, Dominique; Plessy, Charles

    2010-12-21

    The Open Source movement and its technologies are popular in the bioinformatics community because they provide freely available tools and resources for research. In order to feed the steady demand for updates on software and associated data, a service infrastructure is required for sharing and providing these tools to heterogeneous computing environments. The Debian Med initiative provides ready and coherent software packages for medical informatics and bioinformatics. These packages can be used together in Taverna workflows via the UseCase plugin to manage execution on local or remote machines. If such packages are available in cloud computing environments, the underlying hardware and the analysis pipelines can be shared along with the software. Debian Med closes the gap between developers and users. It provides a simple method for offering new releases of software and data resources, thus provisioning a local infrastructure for computational biology. For geographically distributed teams it can ensure that they are working with the same versions of tools, under the same conditions. This contributes to the world-wide networking of researchers.
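
    One practical aspect of the "same versions, same conditions" claim is checking package versions across a team's machines. Below is a small illustrative sketch that queries dpkg on a Debian/Ubuntu host; the package names are examples, and the script is not part of Debian Med itself.

    ```python
    # Verify which versions of (example) Debian Med packages are installed,
    # assuming a Debian/Ubuntu host where dpkg-query is available.
    import subprocess
    from typing import Optional

    PACKAGES = ["ncbi-blast+", "emboss"]  # example package names

    def installed_version(package: str) -> Optional[str]:
        try:
            out = subprocess.run(
                ["dpkg-query", "-W", "-f=${Version}", package],
                capture_output=True, text=True, check=True)
            return out.stdout.strip()
        except (subprocess.CalledProcessError, FileNotFoundError):
            return None  # package missing, or not a dpkg-based system

    for pkg in PACKAGES:
        print(pkg, installed_version(pkg) or "MISSING")
    ```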

  2. Team building: electronic management-clinical translational research (eM-CTR) systems.

    PubMed

    Cecchetti, Alfred A; Parmanto, Bambang; Vecchio, Marcella L; Ahmad, Sjarif; Buch, Shama; Zgheib, Nathalie K; Groark, Stephen J; Vemuganti, Anupama; Romkes, Marjorie; Sciurba, Frank; Donahoe, Michael P; Branch, Robert A

    2009-12-01

    Classical drug exposure-response studies in clinical pharmacology represent the quintessential prototype for bench-to-bedside clinical translational research. A fundamental premise of this approach is for a multidisciplinary team of researchers to design and execute complex, in-depth mechanistic studies conducted in relatively small groups of subjects. The infrastructure support for this genre of clinical research is not well served by scaling down the infrastructure used for large Phase III clinical trials. We describe a novel, integrated strategy whose focus is to support and manage a study using an Information Hub, Communication Hub, and Data Hub design. This design is illustrated by an application to a series of varied projects sponsored by Special Clinical Centers of Research in chronic obstructive pulmonary disease at the University of Pittsburgh. In contrast to classical informatics support, it is readily scalable to large studies. Our experience suggests that the cultural consequences of research group self-empowerment are not only economically efficient but also transformative to the research process.

  3. Enhanced networked server management with random remote backups

    NASA Astrophysics Data System (ADS)

    Kim, Song-Kyoo

    2003-08-01

    This paper focuses on a model of available server management in network environments. The (remote) backup servers are connected via a VPN (Virtual Private Network) and replace broken main servers immediately. A virtual private network is a way of using public network infrastructure to connect long-distance servers within a single network infrastructure. The servers can be represented as "machines," and the system then deals with unreliable main machines and random auxiliary spare (remote backup) machines. When the system performs mandatory routine maintenance, auxiliary machines are used as backups during idle periods. Unlike other existing models, in this enhanced model the availability of the auxiliary machines changes with each activation. Analytically tractable results are obtained using several mathematical techniques, and the results are demonstrated in the framework of optimized networked server allocation problems.
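
    To make the setup concrete, here is a toy Monte Carlo sketch of an unreliable main server backed by a spare whose availability varies per activation. The paper derives analytical results; this simulation, with invented parameters, only illustrates the model's ingredients.

    ```python
    # Toy simulation: a main server fails at random; while it is under
    # repair, a remote spare covers for it, but the spare's availability
    # is redrawn on every activation (the paper's "enhanced" feature).
    import random

    def simulate(hours=10_000, fail_p=0.01, repair_hours=5, seed=1):
        rng = random.Random(seed)
        repair_left = 0     # hours until the main server is repaired
        outage = 0          # hours with neither main nor spare available
        for _ in range(hours):
            if repair_left == 0 and rng.random() < fail_p:
                repair_left = repair_hours          # main server breaks
            if repair_left > 0:
                # the spare's availability differs on each activation
                if rng.random() >= rng.uniform(0.6, 0.95):
                    outage += 1
                repair_left -= 1
        return 1 - outage / hours

    print(f"estimated availability: {simulate():.4f}")
    ```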

  4. Liberating Virtual Machines from Physical Boundaries through Execution Knowledge

    DTIC Science & Technology

    2015-12-01

    trivial infrastructures such as VM distribution networks, clients need to wait for an extended period of time before launching a VM. In cloud settings...hardware support. MobiDesk [28] efficiently supports virtual desktops in mobile environments by decou- pling the user’s workload from host systems and...experiment set-up. VMs are migrated between a pair of source and destination hosts, which are connected through a backend 10 Gbps network for

  5. Mapping, Awareness, and Virtualization Network Administrator Training Tool (MAVNATT) Architecture and Framework

    DTIC Science & Technology

    2015-06-01

    unit may setup and teardown the entire tactical infrastructure multiple times per day. This tactical network administrator training is a critical...language and runs on Linux and Unix based systems. All provisioning is based around the Nagios Core application, a powerful backend solution for network...start up a large number of virtual machines quickly. CORE supports the simulation of fixed and mobile networks. CORE is open-source, written in Python

  6. Can openEHR archetypes be used in a national context? The Danish archetype proof-of-concept project.

    PubMed

    Bernstein, Knut; Tvede, Ida; Petersen, Jan; Bredegaard, Kirsten

    2009-01-01

    Semantic interoperability and secondary use of data are important informatics challenges in modern healthcare. Connected Digital Health Denmark is investigating whether the openEHR reference model, archetypes, and templates could be used for representing and exchanging clinical content specifications and could become a candidate for a national logical infrastructure for semantic interoperability. The Danish archetype proof-of-concept project has tried out some elements of the openEHR methodology in cooperation with regions and vendors. The project has pointed out benefits and challenges of using archetypes, and has identified barriers that need to be addressed in the next steps.

  7. A Roadmap for caGrid, an Enterprise Grid Architecture for Biomedical Research

    PubMed Central

    Saltz, Joel; Hastings, Shannon; Langella, Stephen; Oster, Scott; Kurc, Tahsin; Payne, Philip; Ferreira, Renato; Plale, Beth; Goble, Carole; Ervin, David; Sharma, Ashish; Pan, Tony; Permar, Justin; Brezany, Peter; Siebenlist, Frank; Madduri, Ravi; Foster, Ian; Shanbhag, Krishnakant; Mead, Charlie; Hong, Neil Chue

    2012-01-01

    caGrid is a middleware system which combines the Grid computing, service-oriented architecture, and model-driven architecture paradigms to support the development of interoperable data and analytical resources and the federation of such resources in a Grid environment. The functionality provided by caGrid is an essential and integral component of the cancer Biomedical Informatics Grid (caBIG™) program. This program was established by the National Cancer Institute as a nationwide effort to develop enabling informatics technologies for collaborative, multi-institutional biomedical research, with the overarching goal of accelerating translational cancer research. Although the main application domain for caGrid is cancer research, the infrastructure provides a generic framework that can be employed in other biomedical research and healthcare domains. The development of caGrid is an ongoing effort, adding new functionality and improvements based on feedback and use cases from the community. This paper provides an overview of potential future architecture and tooling directions and areas of improvement for caGrid and caGrid-like systems. This summary is based on discussions at a roadmap workshop held in February with participants from the biomedical research, Grid computing, and high performance computing communities. PMID:18560123

  8. A roadmap for caGrid, an enterprise Grid architecture for biomedical research.

    PubMed

    Saltz, Joel; Hastings, Shannon; Langella, Stephen; Oster, Scott; Kurc, Tahsin; Payne, Philip; Ferreira, Renato; Plale, Beth; Goble, Carole; Ervin, David; Sharma, Ashish; Pan, Tony; Permar, Justin; Brezany, Peter; Siebenlist, Frank; Madduri, Ravi; Foster, Ian; Shanbhag, Krishnakant; Mead, Charlie; Chue Hong, Neil

    2008-01-01

    caGrid is a middleware system which combines the Grid computing, service-oriented architecture, and model-driven architecture paradigms to support the development of interoperable data and analytical resources and the federation of such resources in a Grid environment. The functionality provided by caGrid is an essential and integral component of the cancer Biomedical Informatics Grid (caBIG) program. This program was established by the National Cancer Institute as a nationwide effort to develop enabling informatics technologies for collaborative, multi-institutional biomedical research, with the overarching goal of accelerating translational cancer research. Although the main application domain for caGrid is cancer research, the infrastructure provides a generic framework that can be employed in other biomedical research and healthcare domains. The development of caGrid is an ongoing effort, adding new functionality and improvements based on feedback and use cases from the community. This paper provides an overview of potential future architecture and tooling directions and areas of improvement for caGrid and caGrid-like systems. This summary is based on discussions at a roadmap workshop held in February with participants from the biomedical research, Grid computing, and high performance computing communities.

  9. Incorporating collaboratory concepts into informatics in support of translational interdisciplinary biomedical research

    PubMed Central

    Lee, E. Sally; McDonald, David W.; Anderson, Nicholas; Tarczy-Hornoch, Peter

    2008-01-01

    Due to its complex nature, modern biomedical research has become increasingly interdisciplinary and collaborative. Although a necessity, interdisciplinary biomedical collaboration is difficult. There is, however, a growing body of literature on the study and fostering of collaboration in fields such as computer supported cooperative work (CSCW) and information science (IS). These studies of collaboration provide insight into how to potentially alleviate the difficulties of interdisciplinary collaborative research. We therefore undertook a cross-cutting study of science and engineering collaboratories to identify emergent themes. We review many relevant collaboratory concepts: (a) general collaboratory concepts across many domains: communication, common workspace and coordination, and data sharing and management; (b) specific collaboratory concepts of particular biomedical relevance: data integration and analysis, security structure, metadata and data provenance, and interoperability and data standards; (c) environmental factors that support collaboratories: administrative and management structure, technical support, and available funding as critical environmental factors; and (d) future considerations for biomedical collaboration: appropriate training and long-term planning. In our opinion, the collaboratory concepts we discuss can guide the planning and design of future collaborative infrastructure by biomedical informatics researchers to alleviate some of the difficulties of interdisciplinary biomedical collaboration. PMID:18706852

  10. The Function Biomedical Informatics Research Network Data Repository

    PubMed Central

    Keator, David B.; van Erp, Theo G.M.; Turner, Jessica A.; Glover, Gary H.; Mueller, Bryon A.; Liu, Thomas T.; Voyvodic, James T.; Rasmussen, Jerod; Calhoun, Vince D.; Lee, Hyo Jong; Toga, Arthur W.; McEwen, Sarah; Ford, Judith M.; Mathalon, Daniel H.; Diaz, Michele; O’Leary, Daniel S.; Bockholt, H. Jeremy; Gadde, Syam; Preda, Adrian; Wible, Cynthia G.; Stern, Hal S.; Belger, Aysenil; McCarthy, Gregory; Ozyurt, Burak; Potkin, Steven G.

    2015-01-01

    The Function Biomedical Informatics Research Network (FBIRN) developed methods and tools for conducting multi-scanner functional magnetic resonance imaging (fMRI) studies. Method and tool development were based on two major goals: 1) to assess the major sources of variation in fMRI studies conducted across scanners, including instrumentation, acquisition protocols, challenge tasks, and analysis methods, and 2) to provide a distributed network infrastructure and an associated federated database to host and query large, multi-site, fMRI and clinical datasets. In the process of achieving these goals the FBIRN test bed generated several multi-scanner brain imaging data sets to be shared with the wider scientific community via the BIRN Data Repository (BDR). The FBIRN Phase 1 dataset consists of a traveling subject study of 5 healthy subjects, each scanned on 10 different 1.5 to 4 Tesla scanners. The FBIRN Phase 2 and Phase 3 datasets consist of subjects with schizophrenia or schizoaffective disorder along with healthy comparison subjects scanned at multiple sites. In this paper, we provide concise descriptions of FBIRN’s multi-scanner brain imaging data sets and details about the BIRN Data Repository instance of the Human Imaging Database (HID) used to publicly share the data. PMID:26364863

  11. Towards a Framework for Evaluating Mobile Mental Health Apps.

    PubMed

    Chan, Steven; Torous, John; Hinton, Ladson; Yellowlees, Peter

    2015-12-01

    Mobile phones are ubiquitous in society and owned by a majority of psychiatric patients, including those with severe mental illness. Their versatility as a platform can extend mental health services in the areas of communication, self-monitoring, self-management, diagnosis, and treatment. However, the efficacy and reliability of publicly available applications (apps) have yet to be demonstrated. Numerous articles have noted the need for rigorous evaluation of the efficacy and clinical utility of smartphone apps, which are largely unregulated. Professional clinical organizations do not provide guidelines for evaluating mobile apps. Guidelines and frameworks are needed to evaluate medical apps. Numerous frameworks and evaluation criteria exist from the engineering and informatics literature, as well as interdisciplinary organizations in similar fields such as telemedicine and healthcare informatics. We propose criteria for both patients and providers to use in assessing not just smartphone apps, but also wearable devices and smartwatch apps for mental health. Apps can be evaluated by their usefulness, usability, and integration and infrastructure. Apps can be categorized by their usability in one or more stages of a mental health provider's workflow. Ultimately, leadership is needed to develop a framework for describing apps, and guidelines are needed for both patients and mental health providers.

  12. Building better connections: the National Library of Medicine and public health.

    PubMed

    Humphreys, Betsy L

    2007-07-01

    The paper describes the expansion of the public health programs and services of the National Library of Medicine (NLM) in the 1990s and provides the context in which NLM's public health outreach programs arose and exist today. Although NLM has always had collections and services relevant to public health, the US public health workforce made relatively little use of the library's information services and programs in the twentieth century. In the 1990s, intensified emphases on outreach to health professionals, building national information infrastructure, and promoting health data standards provided NLM with new opportunities to reach the public health community. A seminal conference cosponsored by NLM in 1995 produced an agenda for improving public health access to and use of advanced information technology and electronic information services. NLM actively pursued this agenda by developing new services and outreach programs and promoting public health informatics initiatives. Historical analysis is presented. NLM took advantage of a propitious environment to increase visibility and understanding of public health information challenges and opportunities. The library helped create partnerships that produced new information services, outreach initiatives, informatics innovations, and health data policies that benefit the public health workforce and the diverse populations it serves.

  13. Enabling long-term oceanographic research: Changing data practices, information management strategies and informatics

    NASA Astrophysics Data System (ADS)

    Baker, Karen S.; Chandler, Cynthia L.

    2008-09-01

    Interdisciplinary global ocean science requires new ways of thinking about data and data management. With new data policies and growing technological capabilities, datasets of increasing variety and complexity are being made available digitally and data management is coming to be recognized as an integral part of scientific research. To meet the changing expectations of scientists collecting data and of data reuse by others, collaborative strategies involving diverse teams of information professionals are developing. These changes are stimulating the growth of information infrastructures that support multi-scale sampling, data repositories, and data integration. Two examples of oceanographic projects incorporating data management in partnership with science programs are discussed: the Palmer Station Long-Term Ecological Research program (Palmer LTER) and the United States Joint Global Ocean Flux Study (US JGOFS). Lessons learned from a decade of data management within these communities provide an experience base from which to develop information management strategies—short-term and long-term. Ocean Informatics provides one example of a conceptual framework for managing the complexities inherent to sharing oceanographic data. Elements are introduced that address the economies-of-scale and the complexities-of-scale pertinent to a broader vision of information management and scientific research.

  14. Clinical Research Informatics for Big Data and Precision Medicine.

    PubMed

    Weng, C; Kahn, M G

    2016-11-10

    To reflect on the notable events and significant developments in Clinical Research Informatics (CRI) in the year of 2015 and discuss near-term trends impacting CRI. We selected key publications that highlight not only important recent advances in CRI but also notable events likely to have significant impact on CRI activities over the next few years or longer, and consulted the discussions in relevant scientific communities and an online living textbook for modern clinical trials. We also related the new concepts with old problems to improve the continuity of CRI research. The highlights in CRI in 2015 include the growing adoption of electronic health records (EHR), the rapid development of regional, national, and global clinical data research networks for using EHR data to integrate scalable clinical research with clinical care and generate robust medical evidence. Data quality, integration, and fusion, data access by researchers, study transparency, results reproducibility, and infrastructure sustainability are persistent challenges. The advances in Big Data Analytics and Internet technologies together with the engagement of citizens in sciences are shaping the global clinical research enterprise, which is getting more open and increasingly stakeholder-centered, where stakeholders include patients, clinicians, researchers, and sponsors.

  15. Clinical Research Informatics for Big Data and Precision Medicine

    PubMed Central

    Kahn, M. G.

    2016-01-01

    Objectives: To reflect on the notable events and significant developments in Clinical Research Informatics (CRI) in the year of 2015 and discuss near-term trends impacting CRI. Methods: We selected key publications that highlight not only important recent advances in CRI but also notable events likely to have significant impact on CRI activities over the next few years or longer, and consulted the discussions in relevant scientific communities and an online living textbook for modern clinical trials. We also related the new concepts with old problems to improve the continuity of CRI research. Results: The highlights in CRI in 2015 include the growing adoption of electronic health records (EHR), the rapid development of regional, national, and global clinical data research networks for using EHR data to integrate scalable clinical research with clinical care and generate robust medical evidence. Data quality, integration, and fusion, data access by researchers, study transparency, results reproducibility, and infrastructure sustainability are persistent challenges. Conclusion: The advances in Big Data Analytics and Internet technologies together with the engagement of citizens in sciences are shaping the global clinical research enterprise, which is getting more open and increasingly stakeholder-centered, where stakeholders include patients, clinicians, researchers, and sponsors. PMID:27830253

  16. The distributed agent-based approach in the e-manufacturing environment

    NASA Astrophysics Data System (ADS)

    Sękala, A.; Kost, G.; Dobrzańska-Danikiewicz, A.; Banaś, W.; Foit, K.

    2015-11-01

    A deficient and incoherent flow of information from the production department causes unplanned downtime and failures of machines and their equipment, which in turn results in a production planning process based on incorrect and out-of-date information. As a consequence, all of these factors create additional difficulties for decision-making. These concern, among other things, the coordination of the components of a distributed system and the provision of access to required information, thereby generating unnecessary costs. The use of agent technology significantly speeds up the flow of information within the virtual enterprise. This paper proposes a multi-agent approach to the integration of processes within the virtual enterprise concept. The presented concept was elaborated to investigate possible solutions for the transmission of information in the production system, taking into account the self-organization of its constituent components. It thus links the concept of a multi-agent system with a production information management system based on the idea of e-manufacturing. The paper presents the resulting scheme, which should serve as the basis for elaborating an informatics model of the target virtual system; the computer system itself is intended to be developed next.
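
    As a rough illustration of the multi-agent idea, the sketch below has machine agents publish status messages that a planning agent reacts to. The agent names and message shapes are invented; the paper proposes a concept, not this implementation.

    ```python
    # Minimal sketch: resource agents publish machine status on a shared
    # bus, and a planning agent reacts to failures. Purely illustrative.
    import queue

    bus: "queue.Queue[dict]" = queue.Queue()  # shared message bus

    def machine_agent(machine_id: str, status: str) -> None:
        bus.put({"from": machine_id, "status": status})

    def planner_agent() -> None:
        while not bus.empty():
            msg = bus.get()
            if msg["status"] == "failure":
                print(f"re-planning around {msg['from']}")
            else:
                print(f"{msg['from']} ok, schedule unchanged")

    machine_agent("press-1", "ok")
    machine_agent("mill-2", "failure")
    planner_agent()
    ```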

  17. Virtualization of open-source secure web services to support data exchange in a pediatric critical care research network

    PubMed Central

    Sward, Katherine A; Newth, Christopher JL; Khemani, Robinder G; Cryer, Martin E; Thelen, Julie L; Enriquez, Rene; Shaoyu, Su; Pollack, Murray M; Harrison, Rick E; Meert, Kathleen L; Berg, Robert A; Wessel, David L; Shanley, Thomas P; Dalton, Heidi; Carcillo, Joseph; Jenkins, Tammara L; Dean, J Michael

    2015-01-01

    Objectives: To examine the feasibility of deploying a virtual web service for sharing data within a research network, and to evaluate the impact on data consistency and quality. Material and Methods: Virtual machines (VMs) encapsulated an open-source, semantically and syntactically interoperable secure web service infrastructure along with a shadow database. The VMs were deployed to 8 Collaborative Pediatric Critical Care Research Network Clinical Centers. Results: Virtual web services could be deployed in hours. The interoperability of the web services reduced format misalignment from 56% to 1%, and 99% of the data transferred consistently using the data dictionary while 1% needed human curation. Conclusions: Use of virtualized open-source secure web service technology could enable direct electronic abstraction of data from hospital databases for research purposes. PMID:25796596
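
    A shared data dictionary of the kind credited with reducing format misalignment can be imagined as a set of per-field rules checked before transfer. The sketch below is illustrative only; the field definitions are invented and are not the network's actual dictionary.

    ```python
    # Validate records against a (toy) shared data dictionary before
    # transfer, so format misalignment is caught at the source site.
    DATA_DICTIONARY = {
        "admit_weight_kg": float,       # expected type
        "sex": {"M", "F", "U"},         # allowed code set
    }

    def validate(record: dict) -> list:
        errors = []
        for field, rule in DATA_DICTIONARY.items():
            value = record.get(field)
            if isinstance(rule, set):
                if value not in rule:
                    errors.append(f"{field}: {value!r} not in {sorted(rule)}")
            elif not isinstance(value, rule):
                errors.append(f"{field}: expected {rule.__name__}, got {value!r}")
        return errors

    print(validate({"admit_weight_kg": "4.1", "sex": "M"}))
    # -> ["admit_weight_kg: expected float, got '4.1'"]
    ```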

  18. Building a Generic Virtual Research Environment Framework for Multiple Earth and Space Science Domains and a Diversity of Users.

    NASA Astrophysics Data System (ADS)

    Wyborn, L. A.; Fraser, R.; Evans, B. J. K.; Friedrich, C.; Klump, J. F.; Lescinsky, D. T.

    2017-12-01

    Virtual Research Environments (VREs) are now part of academic infrastructures. Online research workflows can be orchestrated whereby data are accessed from multiple external repositories, with processing taking place on public or private clouds and centralised supercomputers, using a mixture of user codes and well-used community software and libraries. VREs enable distributed members of research teams to actively work together to share data, models, tools, software, workflows, best practices, infrastructures, etc. These environments and their components are increasingly able to support the needs of undergraduate teaching. External to the research sector, they can also be reused by citizen scientists, and be repurposed for industry users to help accelerate the diffusion, and hence enable the translation, of research innovations. The Virtual Geophysics Laboratory (VGL) in Australia was started in 2012, built through a collaboration between CSIRO, the National Computational Infrastructure (NCI) and Geoscience Australia, with support funding from the Australian Government Department of Education. VGL comprises three main modules that provide an interface enabling users first to select their required data, then to choose a tool to process that data, and finally to access compute infrastructure for execution. VGL was initially built to give a specific set of researchers in government agencies access to specific data sets and a limited number of tools. Over the years it has evolved into a multi-purpose Earth science platform with access to an increased variety of data (e.g., Natural Hazards, Geochemistry), a broader range of software packages, and an increasing diversity of compute infrastructures. This expansion has been possible because of the approach of loosely coupling data, tools and compute resources via interfaces that are built on international standards and accessed as network-enabled services wherever possible. Although VGL was originally built for researchers who were not fussy about general usability, an increasing emphasis on User Interfaces (UIs) and stability will lead to increased uptake in the education and industry sectors. Simultaneously, improvements are being added to facilitate access to data and tools by experienced researchers who want direct access to both data and flexible workflows.

  19. California's state oral health infrastructure: opportunities for improvement and funding.

    PubMed

    Diringer, Joel; Phipps, Kathy R

    2012-01-01

    California has virtually no statewide dental public health infrastructure leaving the state without leadership, a surveillance program, an oral health plan, oral health promotion and disease prevention programs, and federal funding. Based on a literature review and interviews with 15 oral health officials nationally, the paper recommends hiring a state dental director with public health experience, developing a state oral health plan, and seeking federal and private funding to support an office of oral health.

  20. Virtual shelves in a digital library: a framework for access to networked information sources.

    PubMed

    Patrick, T B; Springer, G K; Mitchell, J A; Sievert, M E

    1995-01-01

    Develop a framework for collections-based access to networked information sources that addresses the problem of location-dependent access to information sources. This framework uses the metaphor of a virtual shelf. A virtual shelf is a general-purpose server that is dedicated to a particular information subject class; the identifier of such a server identifies its subject class. Location-independent call numbers, based on standard vocabulary codes, are assigned to information sources. The call numbers are first mapped to the location-independent identifiers of virtual shelves. When access to an information resource is required, a location directory provides a second mapping of these location-independent server identifiers to actual network locations. The framework has been implemented in two different systems: one based on the Open Software Foundation/Distributed Computing Environment and the other on the World Wide Web. This framework applies traditional methods of library classification and cataloging in new ways. It is compatible with the two traditional styles of selecting information: searching and browsing. Traditional methods may be combined with new paradigms of information searching that will be able to take advantage of the special properties of digital information. Cooperation between the library and information science community and the informatics community can provide a means for the continuing application of the knowledge and techniques of library science to the new problems of networked information sources.
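
    The two-level mapping at the heart of this framework can be shown in a few lines. In the hypothetical sketch below the call number, shelf identifier, and URL are all invented; the point is that relocating a shelf only requires updating the location directory.

    ```python
    # Two-level resolution: call number -> virtual shelf ID -> network
    # location. All identifiers below are invented for illustration.
    CALL_NUMBER_TO_SHELF = {
        "WK-810/diabetes-guideline": "shelf:endocrinology",
    }
    LOCATION_DIRECTORY = {
        "shelf:endocrinology": "https://shelf7.example.org/endocrinology",
    }

    def resolve(call_number: str) -> str:
        # First mapping: location-independent call number -> shelf ID.
        shelf = CALL_NUMBER_TO_SHELF[call_number]
        # Second mapping: shelf ID -> current network location. Moving
        # the shelf to a new host only updates LOCATION_DIRECTORY.
        return LOCATION_DIRECTORY[shelf]

    print(resolve("WK-810/diabetes-guideline"))
    ```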

  1. EVER-EST: a virtual research environment for Earth Sciences

    NASA Astrophysics Data System (ADS)

    Marelli, Fulvio; Albani, Mirko; Glaves, Helen

    2016-04-01

    There is an increasing requirement for researchers to work collaboratively using common resources whilst being geographically dispersed. By creating a virtual research environment (VRE) using a service-oriented architecture (SOA) tailored to the needs of Earth Science (ES) communities, the EVER-EST project will provide a range of both generic and domain-specific data management services to support a dynamic approach to collaborative research. EVER-EST will provide the means to overcome existing barriers to the sharing of Earth Science data and information, allowing research teams to discover, access, share and process heterogeneous data, algorithms, results and experiences within and across their communities, including domains beyond Earth Science. Researchers will be able to seamlessly manage both the data involved in their computationally intensive disciplines and the scientific methods applied in their observations and modelling, which lead to the specific results that need to be attributable, validated and shared both within the community and more widely, e.g. in the form of scholarly communications. Central to the EVER-EST approach is the concept of the Research Object (RO), which provides a semantically rich mechanism to aggregate related resources about a scientific investigation so that they can be shared together using a single unique identifier. Although several e-laboratories are incorporating the research object concept in their infrastructure, the EVER-EST VRE will be the first infrastructure to leverage the concept of Research Objects and their application in observational rather than experimental disciplines. Development of the EVER-EST VRE will leverage the results of several previous projects which have produced state-of-the-art technologies for scientific data management and curation, as well as those which have developed models, techniques and tools for the preservation of scientific methods and their implementation in computational forms such as scientific workflows. The EVER-EST data processing infrastructure will be based on a Cloud Computing approach, in which new applications can be integrated using "virtual machines" that have their own specifications (disk size, processor speed, operating system, etc.) and run on shared private (physical deployment over local hardware) or commercial Cloud infrastructures. The EVER-EST e-infrastructure will be validated by four virtual research communities (VRCs) covering different multidisciplinary Earth Science domains, including ocean monitoring, natural hazards, land monitoring and risk management (volcanoes and seismicity). Each VRC will use the virtual research environment according to its own specific requirements for data, software, best practice and community engagement. This user-centric approach will allow an assessment to be made of the capability of the proposed solution to satisfy the heterogeneous needs of a variety of Earth Science communities for more effective collaboration, and higher efficiency and creativity in research. EVER-EST is funded by the European Commission's H2020 programme for three years starting in October 2015. The project is led by the European Space Agency (ESA) and involves some of the major European Earth Science data providers/users, including NERC, DLR, INGV, CNR and SatCEN.
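
    The Research Object concept, one identifier aggregating related resources, can be sketched as a simple data structure. The identifier and resource names below are invented, and the real RO model is considerably richer (semantic annotations, provenance, and so on).

    ```python
    # Toy Research Object: a single identifier bundling related resources.
    # All values are invented; real ROs carry rich semantic metadata.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class ResearchObject:
        identifier: str                                       # shared by the bundle
        resources: List[str] = field(default_factory=list)    # data, code, docs
        annotations: Dict[str, str] = field(default_factory=dict)

    ro = ResearchObject("doi:10.0000/example-ro")             # placeholder DOI
    ro.resources += ["data/ocean_sst.nc", "workflow/detect_anomaly.cwl"]
    ro.annotations["creator"] = "Ocean Monitoring VRC"
    print(ro)
    ```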

  2. A knowledge infrastructure for occupational safety and health.

    PubMed

    van Dijk, Frank J H; Verbeek, Jos H; Hoving, Jan L; Hulshof, Carel T J

    2010-12-01

    Occupational Safety and Health (OSH) professionals should use scientific evidence to support their decisions in policy and practice. Although examples from practice show that progress has been made in evidence-based decision making, the challenge remains to improve and extend the facilities that support knowledge translation in practice. A knowledge infrastructure that supports OSH practice should include scientific research, systematic reviews, practice guidelines, and other tools for professionals, such as easily accessible virtual libraries and databases that provide knowledge, quality tools, and good learning materials. A good infrastructure connects these facilities with each other and with practice. Training and education in the use of evidence are needed for OSH professionals to improve effectiveness and efficiency. New initiatives show that occupational health can profit from intensified international collaboration to establish a well-functioning knowledge infrastructure.

  3. UkrVO astronomical WEB services

    NASA Astrophysics Data System (ADS)

    Mazhaev, A.

    2017-02-01

    Ukraine Virtual Observatory (UkrVO) has been a member of the International Virtual Observatory Alliance (IVOA) since 2011. The virtual observatory (VO) is not a magic solution to all problems of data storage and processing, but it provides certain standards for building the infrastructure of an astronomical data center. The astronomical databases help data mining and offer users easy access to observation metadata, images within the celestial sphere, and results of image processing. The astronomical web services (AWS) of UkrVO give users handy tools for data selection from large astronomical catalogues for a relatively small region of interest in the sky. Examples of AWS usage are shown.
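
    Region-of-interest selection of the kind these services provide amounts to a cone search over a catalogue. The toy sketch below uses a flat-sky approximation over invented entries; a real VO cone search service follows the IVOA protocol and handles spherical geometry properly.

    ```python
    # Toy cone search: keep catalogue objects within a radius of a sky
    # position, using a small-angle flat-sky approximation.
    import math

    CATALOGUE = [  # (name, ra_deg, dec_deg) -- invented entries
        ("obj-1", 150.10, 2.20),
        ("obj-2", 150.40, 2.25),
        ("obj-3", 201.00, -10.00),
    ]

    def cone_select(ra0: float, dec0: float, radius_deg: float):
        hits = []
        cos_dec = math.cos(math.radians(dec0))  # RA compression at this dec
        for name, ra, dec in CATALOGUE:
            d2 = ((ra - ra0) * cos_dec) ** 2 + (dec - dec0) ** 2
            if d2 <= radius_deg ** 2:
                hits.append(name)
        return hits

    print(cone_select(150.2, 2.2, 0.3))  # -> ['obj-1', 'obj-2']
    ```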

  4. International practices on climate adaptation in transportation : findings from a virtual review

    DOT National Transportation Integrated Search

    2015-01-01

    The Federal Highway Administration (FHWA) conducted an international review to study how international transportation agencies are addressing issues related to adapting highway infrastructure to the impacts of climate change. The review involved tran...

  5. Audited credential delegation: a usable security solution for the virtual physiological human toolkit.

    PubMed

    Haidar, Ali N; Zasada, Stefan J; Coveney, Peter V; Abdallah, Ali E; Beckles, Bruce; Jones, Mike A S

    2011-06-06

    We present applications of audited credential delegation (ACD), a usable security solution for authentication, authorization and auditing in distributed virtual physiological human (VPH) project environments that removes the use of digital certificates from end-users' experience. Current security solutions are based on public key infrastructure (PKI). While PKI offers strong security for VPH projects, it suffers from serious usability shortcomings in terms of end-user acquisition and management of credentials which deter scientists from exploiting distributed VPH environments. By contrast, ACD supports the use of local credentials. Currently, a local ACD username-password combination can be used to access grid-based resources while Shibboleth support is underway. Moreover, ACD provides seamless and secure access to shared patient data, tools and infrastructure, thus supporting the provision of personalized medicine for patients, scientists and clinicians participating in e-health projects from a local to the widest international scale.
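
    The ACD pattern, local username/password authentication plus audited delegation with certificates kept away from the end user, might be caricatured as follows. This is an invented illustration, not the actual ACD implementation; a production system would use a salted key-derivation function rather than a bare hash.

    ```python
    # Illustrative only: local-credential login with an audit trail, and a
    # delegation step in which the platform (not the user) would hold the
    # grid certificate. Not the real ACD code.
    import hashlib, hmac, time

    USERS = {"alice": hashlib.sha256(b"local-password").hexdigest()}
    AUDIT_LOG = []  # every authentication and delegation is recorded

    def login(user: str, password: str) -> bool:
        digest = hashlib.sha256(password.encode()).hexdigest()
        ok = hmac.compare_digest(USERS.get(user, ""), digest)
        AUDIT_LOG.append((time.time(), user, "login", ok))
        return ok

    def delegate(user: str, action: str) -> None:
        # In ACD the platform acts on the user's behalf with its own
        # credentials; here we only record the audited delegation.
        AUDIT_LOG.append((time.time(), user, f"delegated:{action}", True))

    if login("alice", "local-password"):
        delegate("alice", "submit-simulation")
    print(AUDIT_LOG)
    ```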

  6. Audited credential delegation: a usable security solution for the virtual physiological human toolkit

    PubMed Central

    Haidar, Ali N.; Zasada, Stefan J.; Coveney, Peter V.; Abdallah, Ali E.; Beckles, Bruce; Jones, Mike A. S.

    2011-01-01

    We present applications of audited credential delegation (ACD), a usable security solution for authentication, authorization and auditing in distributed virtual physiological human (VPH) project environments that removes the use of digital certificates from end-users' experience. Current security solutions are based on public key infrastructure (PKI). While PKI offers strong security for VPH projects, it suffers from serious usability shortcomings in terms of end-user acquisition and management of credentials which deter scientists from exploiting distributed VPH environments. By contrast, ACD supports the use of local credentials. Currently, a local ACD username–password combination can be used to access grid-based resources while Shibboleth support is underway. Moreover, ACD provides seamless and secure access to shared patient data, tools and infrastructure, thus supporting the provision of personalized medicine for patients, scientists and clinicians participating in e-health projects from a local to the widest international scale. PMID:22670214

  7. A Chemocentric Informatics Approach to Drug Discovery: Identification and Experimental Validation of Selective Estrogen Receptor Modulators as ligands of 5-Hydroxytryptamine-6 Receptors and as Potential Cognition Enhancers

    PubMed Central

    Hajjo, Rima; Setola, Vincent; Roth, Bryan L.; Tropsha, Alexander

    2012-01-01

    We have devised a chemocentric informatics methodology for drug discovery integrating independent approaches to mining biomolecular databases. As a proof of concept, we have searched for novel putative cognition enhancers. First, we generated Quantitative Structure-Activity Relationship (QSAR) models of compounds binding to 5-hydroxytryptamine-6 receptor (5-HT6R), a known target for cognition enhancers, and employed these models for virtual screening to identify putative 5-HT6R actives. Second, we queried chemogenomics data from the Connectivity Map (http://www.broad.mit.edu/cmap/) with the gene expression profile signatures of Alzheimer’s disease patients to identify compounds putatively linked to the disease. Thirteen common hits were tested in 5-HT6R radioligand binding assays and ten were confirmed as actives. Four of them were known selective estrogen receptor modulators that were never reported as 5-HT6R ligands. Furthermore, nine of the confirmed actives were reported elsewhere to have memory-enhancing effects. The approaches discussed herein can be used broadly to identify novel drug-target-disease associations. PMID:22537153
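
    The integration step that produced the thirteen common hits is, at its core, an intersection of two independently derived compound lists. A minimal sketch, with invented compound names:

    ```python
    # Intersect virtual-screening hits with compounds surfaced by the
    # gene-expression query; the shared names become assay candidates.
    qsar_hits = {"raloxifene", "compound-x", "compound-y"}   # QSAR screen
    cmap_hits = {"raloxifene", "compound-y", "compound-z"}   # expression query

    common = sorted(qsar_hits & cmap_hits)  # candidates for binding assays
    print(common)  # -> ['compound-y', 'raloxifene']
    ```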

  8. Cognition, interaction and ageing: an Internet workshops exploratory study.

    PubMed

    Xavier, André; Sales, Márcia; Ramos, Luiz; Anção, Meide; Sigulem, Daniel

    2004-01-01

    Gerontology is increasingly recognized as an interdisciplinary and functional field of knowledge. Geriatrics, one of its branches, aims to make longevity with health possible. World population ageing is accompanied by important economic and social inequalities for older people, who are more likely to be affected by physical and/or cognitive deficiencies than younger groups. With the purpose of minimizing these deficiencies, Internet workshops were conducted with a group of retired senior persons. This research was developed to bring together principles of human-computer interaction, informatics, accessibility and gerontology in order to promote digital inclusion for this growing population and a methodology for virtual cognitive rehabilitation.

  9. Telemedicine and distributed medical intelligence.

    PubMed

    Warner, D; Tichenor, J M; Balch, D C

    1996-01-01

    Recent trends in health care informatics and telemedicine indicate that systems are being developed with a primary focus on technology and business, not on the process of medicine itself. The authors present a new model of health care information, distributed medical intelligence, which promotes the development of an integrative medical communication system addressing the process of providing expert medical knowledge to the point of need. The model incorporates audio, video, high-resolution still images, and virtual reality applications into an integrated medical communications network. Three components of the model (care portals, Docking Station, and the bridge) are described. The implementation of this model at the East Carolina University School of Medicine is also outlined.

  10. Galaxy CloudMan: delivering cloud compute clusters.

    PubMed

    Afgan, Enis; Baker, Dannon; Coraor, Nate; Chapman, Brad; Nekrutenko, Anton; Taylor, James

    2010-12-21

    Widespread adoption of high-throughput sequencing has greatly increased the scale and sophistication of computational infrastructure needed to perform genomic research. An alternative to building and maintaining local infrastructure is "cloud computing", which, in principle, offers on demand access to flexible computational infrastructure. However, cloud computing resources are not yet suitable for immediate "as is" use by experimental biologists. We present a cloud resource management system that makes it possible for individual researchers to compose and control an arbitrarily sized compute cluster on Amazon's EC2 cloud infrastructure without any informatics requirements. Within this system, an entire suite of biological tools packaged by the NERC Bio-Linux team (http://nebc.nerc.ac.uk/tools/bio-linux) is available for immediate consumption. The provided solution makes it possible, using only a web browser, to create a completely configured compute cluster ready to perform analysis in less than five minutes. Moreover, we provide an automated method for building custom deployments of cloud resources. This approach promotes reproducibility of results and, if desired, allows individuals and labs to add or customize an otherwise available cloud system to better meet their needs. The expected knowledge and associated effort with deploying a compute cluster in the Amazon EC2 cloud is not trivial. The solution presented in this paper eliminates these barriers, making it possible for researchers to deploy exactly the amount of computing power they need, combined with a wealth of existing analysis software, to handle the ongoing data deluge.
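
    The underlying capability CloudMan wraps behind its web interface, programmatically composing a compute cluster on EC2, can be sketched with boto3. This is not CloudMan's own code or API; the AMI, key pair, and instance type below are placeholders, and running it requires AWS credentials.

    ```python
    # Sketch of programmatic cluster provisioning on EC2 with boto3.
    # Placeholders throughout; CloudMan itself hides this behind a web UI.
    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")

    instances = ec2.create_instances(
        ImageId="ami-00000000000000000",   # placeholder machine image
        InstanceType="m5.large",
        MinCount=1,
        MaxCount=4,                        # e.g. head node + 3 workers
        KeyName="my-keypair",              # placeholder key pair
        UserData="#!/bin/bash\n# bootstrap: join cluster, mount storage\n",
    )
    print([i.id for i in instances])
    ```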

  11. Developing translational research infrastructure and capabilities associated with cancer clinical trials.

    PubMed

    Hall, Jacqueline A; Brown, Robert

    2013-09-27

    The integration of molecular information in clinical decision making is becoming a reality. These changes are shaping the way clinical research is conducted, and as reality sets in, the challenges in conducting, managing and organising multi-disciplinary research become apparent. Clinical trials provide a platform to conduct translational research (TR) within the context of high quality clinical data accrual. Integrating TR objectives in trials allows the execution of pivotal studies that provide clinical evidence for biomarker-driven treatment strategies, targeting early drug development trials to a homogeneous and well defined patient population, supports the development of companion diagnostics and provides an opportunity for deepening our understanding of cancer biology and mechanisms of drug action. To achieve these goals within a clinical trial, developing translational research infrastructure and capabilities (TRIC) plays a critical catalytic role for translating preclinical data into successful clinical research and development. TRIC represents a technical platform, dedicated resources and access to expertise promoting high quality standards, logistical and operational support and unified streamlined procedures under an appropriate governance framework. TRIC promotes integration of multiple disciplines including biobanking, laboratory analysis, molecular data, informatics, statistical analysis and dissemination of results which are all required for successful TR projects and scientific progress. Such a supporting infrastructure is absolutely essential in order to promote high quality robust research, avoid duplication and coordinate resources. Lack of such infrastructure, we would argue, is one reason for the limited effect of TR in clinical practice beyond clinical trials.

  12. Galaxy CloudMan: delivering cloud compute clusters

    PubMed Central

    2010-01-01

    Background: Widespread adoption of high-throughput sequencing has greatly increased the scale and sophistication of computational infrastructure needed to perform genomic research. An alternative to building and maintaining local infrastructure is “cloud computing”, which, in principle, offers on demand access to flexible computational infrastructure. However, cloud computing resources are not yet suitable for immediate “as is” use by experimental biologists. Results: We present a cloud resource management system that makes it possible for individual researchers to compose and control an arbitrarily sized compute cluster on Amazon’s EC2 cloud infrastructure without any informatics requirements. Within this system, an entire suite of biological tools packaged by the NERC Bio-Linux team (http://nebc.nerc.ac.uk/tools/bio-linux) is available for immediate consumption. The provided solution makes it possible, using only a web browser, to create a completely configured compute cluster ready to perform analysis in less than five minutes. Moreover, we provide an automated method for building custom deployments of cloud resources. This approach promotes reproducibility of results and, if desired, allows individuals and labs to add or customize an otherwise available cloud system to better meet their needs. Conclusions: The expected knowledge and associated effort with deploying a compute cluster in the Amazon EC2 cloud is not trivial. The solution presented in this paper eliminates these barriers, making it possible for researchers to deploy exactly the amount of computing power they need, combined with a wealth of existing analysis software, to handle the ongoing data deluge. PMID:21210983

  13. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    NASA Astrophysics Data System (ADS)

    Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde

    2014-06-01

    The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.
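
    Dynamically launching a worker VM of the kind described, prior to Puppet configuration and Torque integration, can be sketched with the openstacksdk library. The cloud name, image, flavor, and network IDs below are placeholders, not the facility's actual values.

    ```python
    # Sketch: boot one worker VM on an OpenStack cloud with openstacksdk.
    # The described facility's scripts additionally ran Puppet and joined
    # the node to a dynamic Torque queue; placeholders throughout.
    import openstack

    conn = openstack.connect(cloud="research-cloud")  # entry in clouds.yaml

    server = conn.compute.create_server(
        name="atlas-worker-01",
        image_id="IMAGE-UUID-PLACEHOLDER",    # e.g. a Scientific Linux image
        flavor_id="FLAVOR-UUID-PLACEHOLDER",
        networks=[{"uuid": "NETWORK-UUID-PLACEHOLDER"}],
    )
    # Block until the instance is reachable, then hand off to config mgmt.
    server = conn.compute.wait_for_server(server)
    print(server.status)  # ACTIVE once the node is up
    ```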

  14. Nursing Informatics Certification Worldwide: History, Pathway, Roles, and Motivation

    PubMed Central

    Cummins, M. R.; Gundlapalli, A. V.; Murray, P.; Park, H.-A.; Lehmann, C. U.

    2016-01-01

    Introduction: Official recognition and certification for informatics professionals are essential aspects of workforce development. Objective: To describe the history, pathways, and nuances of certification in nursing informatics across the globe; compare and contrast those with board certification in clinical informatics for physicians. Methods: (1) A review of the representative literature on informatics certification and related competencies for nurses and physicians, and relevant websites for nursing informatics associations and societies worldwide; (2) similarities and differences between certification processes for nurses and physicians; and (3) perspectives on roles for nursing informatics professionals in healthcare. Results: The literature search for ‘nursing informatics certification’ yielded few results in PubMed; Google Scholar yielded a large number of citations that extended to magazines and other non-peer reviewed sources. Worldwide, there are several nursing informatics associations, societies, and workgroups dedicated to nursing informatics associated with medical/health informatics societies. A formal certification program for nursing informatics appears to be available only in the United States. This certification was established in 1992, in concert with the formation and definition of nursing informatics as a specialty practice of nursing by the American Nurses Association. Although informatics is inherently interprofessional, certification pathways for nurses and physicians have developed separately, following long-standing professional structures, training, and pathways aligned with clinical licensure and direct patient care. There is substantial similarity with regard to the skills and competencies required for nurses and physicians to obtain informatics certification in their respective fields. Nurses may apply for and complete a certification examination if they have experience in the field, regardless of formal training. Increasing numbers of informatics nurses are pursuing certification. Conclusions: The pathway to certification is clear and well-established for U.S. based informatics nurses. The motivation for obtaining and maintaining nursing informatics certification appears to be stronger for nurses who do not have an advanced informatics degree. The primary difference between nursing and physician certification pathways relates to the requirement of formal training and level of informatics practice. Nurse informatics certification requires no formal education or training and verifies knowledge and skill at a more basic level. Physician informatics certification validates informatics knowledge and skill at a more advanced level; currently this requires documentation of practice and experience in clinical informatics and in the future will require successful completion of an accredited two-year fellowship in clinical informatics. For the profession of nursing, a graduate degree in nursing or biomedical informatics validates specialty knowledge at a level more comparable to the physician certification. As the field of informatics and its professional organization structures mature, a common certification pathway may be appropriate. Nurses, physicians, and other healthcare professionals with informatics training and certification are needed to contribute their expertise in clinical operations, teaching, research, and executive leadership. PMID:27830261

  15. Dynamic Virtual Credit Card Numbers

    NASA Astrophysics Data System (ADS)

    Molloy, Ian; Li, Jiangtao; Li, Ninghui

    Theft of stored credit card information is an increasing threat to e-commerce. We propose a dynamic virtual credit card number scheme that reduces the damage caused by stolen credit card numbers. A user can use an existing credit card account to generate multiple virtual credit card numbers that are either usable for a single transaction or are tied with a particular merchant. We call the scheme dynamic because the virtual credit card numbers can be generated without online contact with the credit card issuers. These numbers can be processed without changing any of the infrastructure currently in place; the only changes will be at the end points, namely, the card users and the card issuers. We analyze the security requirements for dynamic virtual credit card numbers, discuss the design space, propose a scheme using HMAC, and prove its security under the assumption the underlying function is a PRF.
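
    The general idea, deriving per-merchant, per-transaction numbers from an account secret with a keyed MAC, can be sketched with Python's hmac module. This follows the spirit of the scheme, not the paper's exact construction; real card numbers would also need BIN prefixes and a Luhn check digit.

    ```python
    # Toy HMAC-based virtual card number: a per-account secret key plus a
    # merchant ID and counter yield a number the issuer can recompute and
    # verify offline. Illustrative only; not the paper's construction.
    import hmac, hashlib

    def virtual_card_number(key: bytes, merchant: str, counter: int) -> str:
        msg = f"{merchant}:{counter}".encode()
        mac = hmac.new(key, msg, hashlib.sha256).digest()
        # Truncate the MAC to 16 digits (a real scheme must also preserve
        # BIN ranges and the Luhn check digit).
        digits = int.from_bytes(mac[:8], "big") % 10**16
        return f"{digits:016d}"

    key = b"per-account-secret-key"
    print(virtual_card_number(key, "merchant-42", counter=1))
    print(virtual_card_number(key, "merchant-42", counter=2))  # differs
    ```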

  16. Development of Armenian-Georgian Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Mickaelian, Areg; Kochiashvili, Nino; Astsatryan, Hrach; Harutyunian, Haik; Magakyan, Tigran; Chargeishvili, Ketevan; Natsvlishvili, Rezo; Kukhianidze, Vasil; Ramishvili, Giorgi; Sargsyan, Lusine; Sinamyan, Parandzem; Kochiashvili, Ia; Mikayelyan, Gor

    2009-10-01

    The Armenian-Georgian Virtual Observatory (ArGVO) project is the first initiative in the world to create a regional VO infrastructure based on national VO projects and a regional Grid. The Byurakan and Abastumani Astrophysical Observatories have been scientific partners since 1946, following the establishment of the Byurakan observatory. The Armenian VO project (ArVO) has been under development since 2005 and is a part of the International Virtual Observatory Alliance (IVOA). It is based on the Digitized First Byurakan Survey (DFBS, the digitized version of the famous Markarian survey) and other Armenian archival data. Similarly, the Georgian VO will be created to serve as a research environment to utilize the digitized Georgian plate archives. Therefore, one of the main goals for the creation of the regional VO is the digitization of the large number of plates preserved at the plate stacks of these two observatories; the total amount is more than 100,000 plates. Observational programs of high importance have been selected, and some 3000 plates will be digitized during the next two years, with priority defined by the usefulness of the material for future science projects, such as searches for new objects, optical identifications of radio, IR, and X-ray sources, and studies of variability and proper motions. Once the digitized material is available in VO standards, a VO database will be active through the regional Grid infrastructure. This partnership is being carried out in the framework of ISTC project A-1606, "Development of Armenian-Georgian Grid Infrastructure and Applications in the Fields of High Energy Physics, Astrophysics and Quantum Physics".

  17. Nanoinformatics workshop report: Current resources, community needs, and the proposal of a collaborative framework for data sharing and information integration.

    PubMed

    Harper, Stacey L; Hutchison, James E; Baker, Nathan; Ostraat, Michele; Tinkle, Sally; Steevens, Jeffrey; Hoover, Mark D; Adamick, Jessica; Rajan, Krishna; Gaheen, Sharon; Cohen, Yoram; Nel, Andre; Cachau, Raul E; Tuominen, Mark

    2013-01-01

    The quantity of information on nanomaterial properties and behavior continues to grow rapidly. Without a concerted effort to collect, organize and mine disparate information coming out of current research efforts, the value and effective use of this information will be limited at best. Data will not be translated to knowledge. At worst, erroneous conclusions will be drawn and future research may be misdirected. Nanoinformatics can be a powerful approach to enhance the value of global information in nanoscience and nanotechnology. Much progress has been made through grassroots efforts in nanoinformatics resulting in a multitude of resources and tools for nanoscience researchers. In 2012, the nanoinformatics community believed it was important to critically evaluate and refine currently available nanoinformatics approaches in order to best inform the science and support the future of predictive nanotechnology. The Greener Nano 2012: Nanoinformatics Tools and Resources Workshop brought together informatics groups with materials scientists active in nanoscience research to evaluate and reflect on the tools and resources that have recently emerged in support of predictive nanotechnology. The workshop goals were to establish a better understanding of current nanoinformatics approaches and to clearly define immediate and projected informatics infrastructure needs of the nanotechnology community. The theme of nanotechnology environmental health and safety (nanoEHS) was used to provide real-world, concrete examples on how informatics can be utilized to advance our knowledge and guide nanoscience. The benefit here is that the same properties that impact the performance of products could also be the properties that inform EHS. From a decision management standpoint, the dual use of such data should be considered a priority. Key outcomes include a proposed collaborative framework for data collection, data sharing and information integration.

  18. Bridging Informatics and Earth Science: a Look at Gregory Leptoukh's Contributions

    NASA Technical Reports Server (NTRS)

    2012-01-01

    With the tragic passing this year of Gregory Leptoukh, the Earth and Space Sciences community lost a tireless participant in--and advocate for--science informatics. Throughout his career at NASA, Dr. Leptoukh established a theme of bridging the gulf between the informatics and science communities. Nowhere is this more evident than in his leadership in the development of Giovanni (GES DISC Interactive Online Visualization ANd aNalysis Infrastructure). Giovanni is an online tool that serves to hide the often-complex technical details of data format and structure, making science data easier to explore and use by Earth scientists. To date, Giovanni has been acknowledged as a contributor in some 500 scientific articles. In recent years, Leptoukh concentrated his efforts on multi-sensor data inter-comparison, merging, and fusion. This work exposed several challenges at the intersection of data and science. One of these was the ease with which a naive user might generate spurious comparisons, a potential hazard that was the genesis of the Multi-sensor Data Synergy Advisor (MDSA). The MDSA uses semantic ontologies and inference rules to organize knowledge about dataset quality and other salient characteristics in order to advise users on potential caveats for comparing or merging two datasets. Recently, Leptoukh also led the development of AeroStat, an online Giovanni instance to investigate aerosols via statistics from station and satellite comparisons and merged maps of data from more than one instrument. AeroStat offers a neural net based bias adjustment to harmonize the data by removing systematic offsets between datasets before merging. These examples exhibit Leptoukh's talent for adopting advanced computer technologies in the service of making science data more accessible to researchers. In this, he set an example that is at once both vital and challenging for the ESSI community to emulate.
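
    The offset-removal idea behind such bias adjustment can be illustrated on synthetic data. AeroStat itself used a neural-net approach; the simple mean-offset correction below is a stand-in that shows only the basic principle of harmonizing two datasets before merging.

```python
import numpy as np

# Synthetic aerosol optical depth "truth" observed by two sensors, where
# sensor B carries a systematic +0.04 offset relative to sensor A.
rng = np.random.default_rng(0)
truth = rng.uniform(0.05, 0.6, 200)
sensor_a = truth + rng.normal(0.00, 0.02, 200)
sensor_b = truth + rng.normal(0.04, 0.02, 200)

# Estimate the systematic offset on collocated samples and remove it
# before merging (a crude stand-in for AeroStat's neural-net adjustment).
offset = np.mean(sensor_b - sensor_a)
sensor_b_adj = sensor_b - offset
merged = 0.5 * (sensor_a + sensor_b_adj)

print(f"estimated offset: {offset:.3f}")
```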

  19. A proposed national research and development agenda for population health informatics: summary recommendations from a national expert workshop

    PubMed Central

    Lasser, Elyse C; Yasnoff, William A; Loonsk, John; Advani, Aneel; Lehmann, Harold P; Chin, David C; Weiner, Jonathan P

    2017-01-01

    Objective: The Johns Hopkins Center for Population Health IT hosted a 1-day symposium sponsored by the National Library of Medicine to help develop a national research and development (R&D) agenda for the emerging field of population health informatics (PopHI). Material and Methods: The symposium provided a venue for national experts to brainstorm, identify, discuss, and prioritize the top challenges and opportunities in the PopHI field, as well as R&D areas to address these. Results: This manuscript summarizes the findings of the PopHI symposium. The symposium participants’ recommendations have been categorized into 13 overarching themes, including policy alignment, data governance, sustainability and incentives, and standards/interoperability. Discussion: The proposed consensus-based national agenda for PopHI consisted of 18 priority recommendations grouped into 4 broad goals: (1) Developing a standardized collaborative framework and infrastructure, (2) Advancing technical tools and methods, (3) Developing a scientific evidence and knowledge base, and (4) Developing an appropriate framework for policy, privacy, and sustainability. There was a substantial amount of agreement between all the participants on the challenges and opportunities for PopHI as well as on the actions that needed to be taken to address these. Conclusion: PopHI is a rapidly growing field that has emerged to address the population dimension of the Triple Aim. The proposed PopHI R&D agenda is comprehensive and timely, but should be considered only a starting-point, given that ongoing developments in health policy, population health management, and informatics are very dynamic, suggesting that the agenda will require constant monitoring and updating. PMID:27018264

  20. Building the capacity to build capacity in e-health in sub-Saharan Africa: the KwaZulu-Natal experience.

    PubMed

    Mars, Maurice

    2012-01-01

    Sub-Saharan Africa has a disproportionate burden of disease and an extreme shortage of health workers. There are already too few doctors to train doctors in specialities and sub-specialties. E-health is seen as a possible solution through distance education, telemedicine, and computerized health information systems, but there are few people trained in e-health. We describe 12 years of experience at the University of KwaZulu-Natal (UKZ-N) in education and training in postgraduate medical disciplines, medical informatics, and telemedicine. MEDICAL EDUCATION: Videoconferencing of seminars and grand rounds to regional training hospitals commenced in 2001 and has grown to 40 h of interactive conferencing taking place weekly during academic terms, involving over 33,000 participants in 2010. Videoconferenced sessions are directly recorded to DVD, and DVDs are sent to other medical schools in Africa that do not have the infrastructure to directly connect. E-HEALTH EDUCATION: Students and academic staff were initially sent to the United States for training in medical informatics, and workshops were held in South Africa for people from sub-Saharan Africa. This led to the development of postgraduate academic programs in medical informatics and telemedicine at UKZ-N. African students were then brought to UKZ-N for training. The model was later changed so that students and staff remain based at their home universities, with the aim of building capacity in the staff at partner institutions so that they can in time offer their own e-health academic programs. CONCLUSIONS: The need for capacity development in all aspects of e-health in sub-Saharan Africa is great and innovative solutions are required.

  1. Bridging Informatics and Earth Science: a Look at Gregory Leptoukh's Contributions

    NASA Astrophysics Data System (ADS)

    Lynnes, C.

    2012-12-01

    With the tragic passing this year of Gregory Leptoukh, the Earth and Space Sciences community lost a tireless participant in--and advocate for--science informatics. Throughout his career at NASA, Dr. Leptoukh established a theme of bridging the gulf between the informatics and science communities. Nowhere is this more evident than in his leadership in the development of Giovanni (GES DISC Interactive Online Visualization ANd aNalysis Infrastructure). Giovanni is an online tool that serves to hide the often-complex technical details of data format and structure, making science data easier to explore and use by Earth scientists. To date, Giovanni has been acknowledged as a contributor in some 500 scientific articles. In recent years, Leptoukh concentrated his efforts on multi-sensor data inter-comparison, merging, and fusion. This work exposed several challenges at the intersection of data and science. One of these was the ease with which a naive user might generate spurious comparisons, a potential hazard that was the genesis of the Multi-sensor Data Synergy Advisor (MDSA). The MDSA uses semantic ontologies and inference rules to organize knowledge about dataset quality and other salient characteristics in order to advise users on potential caveats for comparing or merging two datasets. Recently, Leptoukh also led the development of AeroStat, an online Giovanni instance to investigate aerosols via statistics from station and satellite comparisons and merged maps of data from more than one instrument. AeroStat offers a neural net based bias adjustment to "harmonize" the data by removing systematic offsets between datasets before merging. These examples exhibit Leptoukh's talent for adopting advanced computer technologies in the service of making science data more accessible to researchers. In this, he set an example that is at once both vital and challenging for the ESSI community to emulate.

  2. Nanoinformatics workshop report: Current resources, community needs, and the proposal of a collaborative framework for data sharing and information integration

    PubMed Central

    Harper, Stacey L; Hutchison, James E; Baker, Nathan; Ostraat, Michele; Tinkle, Sally; Steevens, Jeffrey; Hoover, Mark D; Adamick, Jessica; Rajan, Krishna; Gaheen, Sharon; Cohen, Yoram; Nel, Andre; Cachau, Raul E; Tuominen, Mark

    2014-01-01

    The quantity of information on nanomaterial properties and behavior continues to grow rapidly. Without a concerted effort to collect, organize and mine disparate information coming out of current research efforts, the value and effective use of this information will be limited at best. Data will not be translated to knowledge. At worst, erroneous conclusions will be drawn and future research may be misdirected. Nanoinformatics can be a powerful approach to enhance the value of global information in nanoscience and nanotechnology. Much progress has been made through grassroots efforts in nanoinformatics resulting in a multitude of resources and tools for nanoscience researchers. In 2012, the nanoinformatics community believed it was important to critically evaluate and refine currently available nanoinformatics approaches in order to best inform the science and support the future of predictive nanotechnology. The Greener Nano 2012: Nanoinformatics Tools and Resources Workshop brought together informatics groups with materials scientists active in nanoscience research to evaluate and reflect on the tools and resources that have recently emerged in support of predictive nanotechnology. The workshop goals were to establish a better understanding of current nanoinformatics approaches and to clearly define immediate and projected informatics infrastructure needs of the nanotechnology community. The theme of nanotechnology environmental health and safety (nanoEHS) was used to provide real-world, concrete examples on how informatics can be utilized to advance our knowledge and guide nanoscience. The benefit here is that the same properties that impact the performance of products could also be the properties that inform EHS. From a decision management standpoint, the dual use of such data should be considered a priority. Key outcomes include a proposed collaborative framework for data collection, data sharing and information integration. PMID:24454543

  3. Building the Capacity to Build Capacity in e-Health in Sub-Saharan Africa: The KwaZulu-Natal Experience

    PubMed Central

    2012-01-01

    Background: Sub-Saharan Africa has a disproportionate burden of disease and an extreme shortage of health workers. There are already too few doctors to train doctors in specialities and sub-specialties. E-health is seen as a possible solution through distance education, telemedicine, and computerized health information systems but there are few people trained in e-health. We describe 12 years of experience at the University of KwaZulu-Natal (UKZ-N) in education and training in postgraduate medical disciplines, medical informatics, and telemedicine. Medical Education: Videoconferencing of seminars and grand rounds to regional training hospitals commenced in 2001 and has grown to 40 h of interactive conferencing taking place weekly during academic terms involving over 33,000 participants in 2010. Videoconferenced sessions are directly recorded to DVD and DVDs are sent to other medical schools in Africa that do not have the infrastructure to directly connect. E-health Education: Students and academic staff were initially sent to the United States for training in medical informatics and workshops were held in South Africa for people from sub-Saharan Africa. This led to the development of postgraduate academic programs in medical informatics and telemedicine at UKZ-N. African students were then brought to UKZ-N for training. The model was later changed so that students and staff remain based at their home universities, with the aim of building capacity in the staff at partner institutions so that they can in time offer their own e-health academic programs. Conclusions: The need for capacity development in all aspects of e-health in sub-Saharan Africa is great and innovative solutions are required. PMID:22150714

  4. A Comprehensive Availability Modeling and Analysis of a Virtualized Servers System Using Stochastic Reward Nets

    PubMed Central

    Kim, Dong Seong; Park, Jong Sou

    2014-01-01

    It is important to assess the availability of virtualized systems in IT business infrastructures. Previous work on availability modeling and analysis of virtualized systems used a simplified configuration and assumption in which only one virtual machine (VM) runs on a virtual machine monitor (VMM) hosted on a physical server. In this paper, we show a comprehensive availability model using stochastic reward nets (SRN). The model takes into account (i) the detailed failure and recovery behaviors of multiple VMs, (ii) various other failure modes and corresponding recovery behaviors (e.g., hardware faults, failure and recovery due to Mandelbugs and aging-related bugs), and (iii) dependencies between subcomponents (e.g., between physical host failure and the VMM) in a virtualized servers system. We also present numerical analyses of steady-state availability, downtime in hours per year, transaction loss, and sensitivity. The model yields a new finding on how to increase system availability by judiciously combining software rejuvenation at both the VM and VMM levels. PMID:25165732
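
    A full SRN model is well beyond a short example, but the underlying computation reduces to solving a continuous-time Markov chain for its steady-state distribution. The sketch below does this for a deliberately simplified three-state VM/VMM model; the failure and repair rates are assumed numbers, not values from the paper.

```python
import numpy as np

# States: 0 = VM and VMM up, 1 = VM down (VMM up), 2 = VMM down.
# Rates (per hour) are illustrative assumptions only.
lam_vm, mu_vm = 1 / 720.0, 1 / 2.0      # VM failure / repair
lam_vmm, mu_vmm = 1 / 2880.0, 1 / 4.0   # VMM failure / repair

# Generator matrix Q: rows sum to zero.
Q = np.array([
    [-(lam_vm + lam_vmm), lam_vm,  lam_vmm],
    [mu_vm,              -mu_vm,   0.0    ],
    [mu_vmm,              0.0,    -mu_vmm ],
])

# Solve pi @ Q = 0 with sum(pi) = 1 by replacing the last column with ones.
A = Q.copy()
A[:, -1] = 1.0
b = np.zeros(3)
b[-1] = 1.0
pi = np.linalg.solve(A.T, b)

availability = pi[0]  # only state 0 delivers service in this toy model
print(f"steady-state availability: {availability:.6f}")
print(f"downtime: {(1 - availability) * 8760:.1f} h/year")
```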

  5. Phenomenology tools on cloud infrastructures using OpenStack

    NASA Astrophysics Data System (ADS)

    Campos, I.; Fernández-del-Castillo, E.; Heinemeyer, S.; Lopez-Garcia, A.; Pahlen, F.; Borges, G.

    2013-04-01

    We present a new environment for computations in particle physics phenomenology employing recent developments in cloud computing. In this environment users can create and manage "virtual" machines on which the phenomenology codes/tools can be deployed easily in an automated way. We analyze the performance of this environment based on "virtual" machines versus the utilization of physical hardware, and in this way provide a qualitative result for the influence of the host operating system on the performance of a representative set of applications for phenomenology calculations.
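
    A minimal sketch of creating and booting such a "virtual" machine programmatically with the openstacksdk Python client; the cloud, image, and flavor names below are placeholders, and real clouds may additionally require a network or key pair to be specified.

```python
import openstack

# Connect using credentials from clouds.yaml; the cloud name is an assumption.
conn = openstack.connect(cloud="pheno-cloud")

# Image and flavor names are hypothetical placeholders.
image = conn.compute.find_image("pheno-worker-base")
flavor = conn.compute.find_flavor("m1.medium")

server = conn.compute.create_server(
    name="pheno-vm-01",
    image_id=image.id,
    flavor_id=flavor.id,
)

# Block until the instance reaches ACTIVE (or raise on error/timeout).
server = conn.compute.wait_for_server(server)
print(server.status)
```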

  6. An Applied Ontology to the Mid-Atlantic Cable: Historical to Modern Informatics Considerations from a Math Perspective

    NASA Astrophysics Data System (ADS)

    Frink, K.; Branch, B. D.; Raskin, R.

    2009-12-01

    As early as the era of the trans-Atlantic cable, scientists in various fields worked to address the global human need for communication on a global basis. The trans-Atlantic cable is one of the earliest forms of global communication, and it may be the first evidence of informatics needs, in which science, data, and engineering were combined across disciplines to advance the world standard of living. This work investigates what applied ontology may have been consistent with the thought patterns of the experts who conducted informatics arguably without computers, ontologies, or a cyberinfrastructure. In a modern context, an applied ontology may best represent the body of intentional learning, research, and collaboration among scientists toward achieving a human goal. If such intentional, non-partisan work could achieve a solution such as the trans-Atlantic cable, climate change research may likewise benefit from intentional collaborative ontologies and systems of multi-user knowledge bases or expert informatics systems. Historical notes: Bruce C. Heezen (1924-1977), the American geologist famous for mapping the Mid-Atlantic Ridge in the 1950s, died in 1977 aboard the NR-1 submarine on a cruise to study the Mid-Atlantic Ridge near Iceland. Marie Tharp, whose academic background comprised a bachelor's degree in English, a master's degree in geology from the University of Michigan, and a mathematics degree from the University of Tulsa, worked at the Lamont-Doherty Earth Observatory of Columbia University. The history of the digital divide in that era touches on the availability of information; the mathematical issues of the day included the lack of communication and assessment, and quantities such as the cable's weight per meter and the ships' weight capacity had to be reckoned even though the maximum depth of the Atlantic Ocean was unknown at the time. The scientific community's difficulty in addressing climate change likewise stems largely from language barriers among humans.

  7. AstroCloud, a Cyber-Infrastructure for Astronomy Research: Data Archiving and Quality Control

    NASA Astrophysics Data System (ADS)

    He, B.; Cui, C.; Fan, D.; Li, C.; Xiao, J.; Yu, C.; Wang, C.; Cao, Z.; Chen, J.; Yi, W.; Li, S.; Mi, L.; Yang, S.

    2015-09-01

    AstroCloud is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences) (Cui et al. 2014). To archive astronomical data in China, we present the implementation of the astronomical data archiving system (ADAS). Data archiving and quality control are the infrastructure underpinning AstroCloud. Throughout the entire data life cycle, the archiving system standardizes data, transfers data, logs observational data, archives ambient data, and stores these data and metadata in a database. Quality control covers the whole process and all aspects of data archiving.
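
    A toy sketch of the archive-and-log step described above, using Python's built-in sqlite3; the table layout and field names are invented for illustration and are not the AstroCloud/ADAS schema.

```python
import sqlite3

# Invented archive table: one row per logged observation, with a QC status
# that the quality-control stage updates once its checks pass.
conn = sqlite3.connect("adas.db")
conn.execute("""CREATE TABLE IF NOT EXISTS observation (
    obs_id    TEXT PRIMARY KEY,
    telescope TEXT,
    obs_date  TEXT,
    file_path TEXT,
    qc_status TEXT)""")

conn.execute(
    "INSERT OR REPLACE INTO observation VALUES (?, ?, ?, ?, ?)",
    ("20150901-0001", "LAMOST", "2015-09-01",
     "/archive/20150901/0001.fits", "pending"),
)
conn.commit()

# Quality control marks the record after validation.
conn.execute("UPDATE observation SET qc_status = 'passed' WHERE obs_id = ?",
             ("20150901-0001",))
conn.commit()
```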

  8. Health Professionals' Views of Informatics Education

    PubMed Central

    Staggers, Nancy; Gassert, Carole A.; Skiba, Diane J.

    2000-01-01

    Health care leaders emphasize the need to include information technology and informatics concepts in formal education programs, yet integration of informatics into health educational programs has progressed slowly. The AMIA 1999 Spring Congress was held to address informatics educational issues across health professions, including the educational needs in the various health professions, goals for health informatics education, and implementation strategies to achieve these goals. This paper presents the results from AMIA work groups focused on informatics education for non-informatics health professionals. In the categories of informatics needs, goals, and strategies, conference attendees suggested elements in these areas: educational responsibilities for faculty and students, organizational responsibilities, core computer skills and informatics knowledge, how to learn informatics skills, and resources required to implement educational strategies. PMID:11062228

  9. Evaluating satisfaction of patients with stutter regarding the tele-speech therapy method and infrastructure.

    PubMed

    Eslami Jahromi, Maryam; Ahmadian, Leila

    2018-07-01

    Investigating the required infrastructure for the implementation of telemedicine and the satisfaction of target groups improves the acceptance of this technology and facilitates the delivery of healthcare services. The aim of this study was to assess the satisfaction of patients with stutter concerning the therapeutic method and the infrastructure used to receive tele-speech therapy services. This descriptive-analytical study was conducted on all patients with stutter aged between 14 and 39 years at Jahrom Social Welfare Bureau (n = 30). The patients underwent speech therapy sessions through video conferencing with Skype. Data were collected by a researcher-made questionnaire, whose content validity was confirmed by three medical informatics specialists. Data were analyzed using SPSS version 19. The mean and standard deviation of patient satisfaction scores concerning the infrastructure and the tele-speech therapy method were 3.15 ± 0.52 and 3.49 ± 0.52, respectively. No significant relationship was found between the patients' satisfaction and their gender, education level, or age (p > 0.05). The results showed that the number of speech therapy sessions did not affect the overall satisfaction of the patients (p > 0.05), but the number of therapeutic sessions had a direct relationship with their satisfaction with the infrastructure used for tele-speech therapy (p < 0.05). The present study showed that patients were satisfied with tele-speech therapy. According to most patients, the low speed of the Internet connection in the country was a major challenge in receiving tele-speech therapy. The results suggest that healthcare planners and policy makers invest in increasing bandwidth to improve the success rate of telemedicine programs. Copyright © 2018 Elsevier B.V. All rights reserved.
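
    The abstract does not state which statistical test produced these p-values; assuming a rank correlation, the reported association between session count and infrastructure satisfaction could be checked as below, on entirely invented data.

```python
from scipy import stats

# Synthetic stand-in data: number of therapy sessions per patient and
# their satisfaction score with the infrastructure (all values invented).
sessions     = [4, 6, 6, 8, 10, 12, 12, 14]
satisfaction = [2.8, 3.0, 3.2, 3.1, 3.4, 3.5, 3.6, 3.7]

rho, p = stats.spearmanr(sessions, satisfaction)
print(f"Spearman rho={rho:.2f}, p={p:.3f}")  # p < 0.05 -> significant association
```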

  10. Integration of research infrastructures and ecosystem models toward development of predictive ecology

    NASA Astrophysics Data System (ADS)

    Luo, Y.; Huang, Y.; Jiang, J.; MA, S.; Saruta, V.; Liang, G.; Hanson, P. J.; Ricciuto, D. M.; Milcu, A.; Roy, J.

    2017-12-01

    The past two decades have witnessed rapid development in sensor technology. Built upon this sensor development, large research infrastructure facilities, such as the National Ecological Observatory Network (NEON) and FLUXNET, have been established. By networking different kinds of sensors and other data collections at many locations all over the world, those facilities generate large volumes of ecological data every day. The big data from those facilities offer an unprecedented opportunity for advancing our understanding of ecological processes, educating teachers and students, supporting decision-making, and testing ecological theory. The big data from the major research infrastructure facilities also provide a foundation for developing predictive ecology. Indeed, the capability to predict future changes in our living environment and natural resources is critical to decision making in a world where the past is no longer a clear guide to the future. We are living in a period marked by rapid climate change, profound alteration of biogeochemical cycles, unsustainable depletion of natural resources, and deterioration of air and water quality. Projecting changes in future ecosystem services to society thus becomes essential not only for science but also for policy making. We will use this panel format to outline major opportunities and challenges in integrating research infrastructure and ecosystem models toward developing predictive ecology. We will also show results from an interactive model-experiment system - the Ecological Platform for Assimilating Data into models (EcoPAD) - that has been implemented at the Spruce and Peatland Responses Under Climatic and Environmental change (SPRUCE) experiment in Northern Minnesota and at the Montpellier Ecotron, France. EcoPAD is developed by integrating web technology, eco-informatics, data assimilation techniques, and ecosystem modeling. It is designed to streamline data transfer seamlessly from research infrastructure facilities to model simulation, data assimilation, and ecological forecasting.
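
    Data assimilation in a system like EcoPAD is far richer than a single update step, but the core idea can be shown as a scalar Kalman-style analysis on synthetic numbers: a model forecast is pulled toward a sensor observation in proportion to their relative uncertainties. All quantities below are invented for illustration.

```python
# Minimal scalar Kalman-style update, a stand-in for the much richer
# data-assimilation machinery described for EcoPAD; numbers are synthetic.
x_prior, p_prior = 2.0, 0.5 ** 2   # model forecast (e.g., CO2 flux) and variance
y_obs, r_obs = 2.6, 0.3 ** 2       # sensor observation and its error variance

k = p_prior / (p_prior + r_obs)           # Kalman gain: weight on the observation
x_post = x_prior + k * (y_obs - x_prior)  # analysis (posterior) state
p_post = (1.0 - k) * p_prior              # analysis variance shrinks

print(f"gain={k:.2f}, posterior state={x_post:.2f}, posterior var={p_post:.3f}")
```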

  11. On Informatics Diagnostics and Informatics Therapeutics - Good Medical Informatics Research Is Needed Here.

    PubMed

    Haux, Reinhold

    2017-01-01

    In the era of digitization, some new procedures play an increasing role in diagnosis as well as in therapy: informatics diagnostics and informatics therapeutics. Challenges for such procedures are described. It is discussed when research on such diagnostics and therapeutics can be regarded as good research. Examples are mentioned for informatics diagnostics and informatics therapeutics that are based on health-enabling technologies.

  12. The Veterans Health Administration: quality, value, accountability, and information as transforming strategies for patient-centered care.

    PubMed

    Perlin, Jonathan B; Kolodner, Robert M; Roswell, Robert H

    2005-01-01

    The Veterans Health Administration is the United States' largest integrated health system. Once disparaged as a bureaucracy providing mediocre care, the Department of Veterans Affairs (VA) reinvented itself during the past decade through a policy shift mandating structural and organizational change, rationalization of resource allocation, explicit measurement and accountability for quality and value, and development of an information infrastructure supporting the needs of patients, clinicians, and administrators. Today, the VA is recognized for leadership in clinical informatics and performance improvement, cares for more patients with proportionally fewer resources, and sets national benchmarks in patient satisfaction and for 18 indicators of quality in disease prevention and treatment.

  13. The Veterans Health Administration: quality, value, accountability, and information as transforming strategies for patient-centered care.

    PubMed

    Perlin, Jonathan B; Kolodner, Robert M; Roswell, Robert H

    2004-11-01

    The Veterans Health Administration is the United States' largest integrated health system. Once disparaged as a bureaucracy providing mediocre care, the Department of Veterans Affairs (VA) reinvented itself during the past decade through a policy shift mandating structural and organizational change, rationalization of resource allocation, explicit measurement and accountability for quality and value, and development of an information infrastructure supporting the needs of patients, clinicians, and administrators. Today, the VA is recognized for leadership in clinical informatics and performance improvement, cares for more patients with proportionally fewer resources, and sets national benchmarks in patient satisfaction and for 18 indicators of quality in disease prevention and treatment.

  14. The radiology digital dashboard: effects on report turnaround time.

    PubMed

    Morgan, Matthew B; Branstetter, Barton F; Lionetti, David M; Richardson, Jeremy S; Chang, Paul J

    2008-03-01

    As radiology departments transition to near-complete digital information management, work flows and their supporting informatics infrastructure are becoming increasingly complex. Digital dashboards can integrate separate computerized information systems and summarize key work flow metrics in real time to facilitate informed decision making. A PACS-integrated digital dashboard function designed to alert radiologists to their unsigned report queue status, coupled with an actionable link to the report signing application, resulted in a 24% reduction in the time between transcription and report finalization. The dashboard was well received by radiologists who reported high usage for signing reports. Further research is needed to identify and evaluate other potentially useful work flow metrics for inclusion in a radiology clinical dashboard.
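
    The turnaround-time metric tracked by such a dashboard is, at bottom, a timestamp difference aggregated across reports. A sketch with pandas; the column names and data are invented, not the system's actual schema.

```python
import pandas as pd

# Synthetic report timestamps; "transcribed_at"/"finalized_at" are assumed names.
reports = pd.DataFrame({
    "transcribed_at": pd.to_datetime(["2008-01-02 09:00", "2008-01-02 10:30"]),
    "finalized_at":   pd.to_datetime(["2008-01-02 11:15", "2008-01-03 08:00"]),
})

# Turnaround time between transcription and report finalization, in hours.
tat_hours = (reports["finalized_at"]
             - reports["transcribed_at"]).dt.total_seconds().div(3600)
print(tat_hours.describe())  # summary a dashboard might display in real time
```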

  15. Timely, Granular, and Actionable: Informatics in the Public Health 3.0 Era.

    PubMed

    Wang, Y Claire; DeSalvo, Karen

    2018-07-01

    Ensuring the conditions for all people to be healthy, though always the core mission of public health, has evolved in its approaches in response to changing epidemiology and challenges. In the Public Health 3.0 era, multisectoral efforts are essential in addressing not only infectious or noncommunicable diseases but also upstream social determinants of health. In this article, we argue that actionable, geographically granular, and timely intelligence is an essential infrastructure for the protection of our health today. Even though local and state efforts are key, there are substantial federal roles in accelerating data access, connecting existing data systems, providing guidance, incentivizing nonproprietary analytic tools, and coordinating measures that matter most.

  16. Geo-Engineering through Internet Informatics (GEMINI)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watney, W. Lynn; Doveton, John H.; Victorine, John R.

    GEMINI will resolve reservoir parameters that control well performance; characterize subtle reservoir properties important in understanding and modeling hydrocarbon pore volume and fluid flow; expedite recognition of bypassed, subtle, and complex oil and gas reservoirs at regional and local scale; differentiate commingled reservoirs; build an integrated geologic and engineering model based on real-time, iterative solutions to evaluate reservoir management options for improved recovery; provide practical tools to assist the geoscientist, engineer, and petroleum operator in making their tasks more efficient and effective; enable evaluations to be made at different scales, ranging from individual well, through lease and field, to play and region (scalable information infrastructure); and provide training and technology transfer to evaluate capabilities of the client.

  17. Implementing pharmacogenomics decision support across seven European countries: The Ubiquitous Pharmacogenomics (U-PGx) project.

    PubMed

    Blagec, Kathrin; Koopmann, Rudolf; Crommentuijn-van Rhenen, Mandy; Holsappel, Inge; van der Wouden, Cathelijne H; Konta, Lidija; Xu, Hong; Steinberger, Daniela; Just, Enrico; Swen, Jesse J; Guchelaar, Henk-Jan; Samwald, Matthias

    2018-02-09

    Clinical pharmacogenomics (PGx) has the potential to make pharmacotherapy safer and more effective by utilizing genetic patient data for drug dosing and selection. However, widespread adoption of PGx depends on its successful integration into routine clinical care through clinical decision support tools, which is often hampered by insufficient or fragmented infrastructures. This paper describes the setup and implementation of a unique multimodal, multilingual clinical decision support intervention consisting of digital, paper-, and mobile-based tools that are deployed across implementation sites in seven European countries participating in the Ubiquitous PGx (U-PGx) project. © The Author(s) 2018. Published by Oxford University Press on behalf of the American Medical Informatics Association.

  18. LHCb experience with running jobs in virtual machines

    NASA Astrophysics Data System (ADS)

    McNab, A.; Stagni, F.; Luzzi, C.

    2015-12-01

    The LHCb experiment has been running production jobs in virtual machines since 2013 as part of its DIRAC-based infrastructure. We describe the architecture of these virtual machines and the steps taken to replicate the WLCG worker node environment expected by user and production jobs. This relies on the uCernVM system for providing root images for virtual machines. We use the CernVM-FS distributed filesystem to supply the root partition files, the LHCb software stack, and the bootstrapping scripts necessary to configure the virtual machines for us. Using this approach, we have been able to minimise the amount of contextualisation which must be provided by the virtual machine managers. We explain the process by which the virtual machine is able to receive payload jobs submitted to DIRAC by users and production managers, and how this differs from payloads executed within conventional DIRAC pilot jobs on batch queue based sites. We describe our operational experiences in running production on VM based sites managed using Vcycle/OpenStack, Vac, and HTCondor Vacuum. Finally we show how our use of these resources is monitored using Ganglia and DIRAC.

  19. A game based virtual campus tour

    NASA Astrophysics Data System (ADS)

    Razia Sulthana, A.; Arokiaraj Jovith, A.; Saveetha, D.; Jaithunbi, A. K.

    2018-04-01

    The aim of the application is to create a virtual reality game whose purpose is to showcase the facilities of SRM University in an entertaining manner. The virtual prototype of the institution is deployed in a game engine, which lets students look over the infrastructure and thereby reduces resource utilization; time and money are the resources of concern today. The virtual campus application assists the end user even from a remote location. The virtual world simulates the exact location, and hence the effect is created: it virtually transports the user to the university with the help of a VR headset. This is a dynamic application wherein the user can move in any direction. The VR headset provides an interface for gyroscope input, which is used to start and stop movement. Virtual Campus is size efficient, occupies minimal space, and scales across mobile devices. This gaming application helps end users explore the campus while having fun. It is a user-friendly application that supports users worldwide.

  20. A service for the application of data quality information to NASA earth science satellite records

    NASA Astrophysics Data System (ADS)

    Armstrong, E. M.; Xing, Z.; Fry, C.; Khalsa, S. J. S.; Huang, T.; Chen, G.; Chin, T. M.; Alarcon, C.

    2016-12-01

    A recurring demand in working with satellite-based earth science data records is the need to apply data quality information. Such quality information is often contained within the data files as an array of "flags", but can also be represented by more complex quality descriptions, such as combinations of bit flags or other ancillary variables that can be applied as thresholds to the geophysical variable of interest. For example, with Level 2 granules from the Group for High Resolution Sea Surface Temperature (GHRSST) project, up to 6 independent variables can be used to screen the sea surface temperature measurements on a pixel-by-pixel basis. Quality screening of Level 3 data from the Soil Moisture Active Passive (SMAP) instrument can become even more complex, involving 161 unique bit states or conditions a user can screen for. The application of quality information is often a laborious process for the user until they understand the implications of all the flags and bit conditions, and it requires iterative approaches using custom software. The Virtual Quality Screening Service, a NASA ACCESS project, is addressing these issues and concerns. The project has developed an infrastructure to expose, apply, and extract quality screening information, building on known and proven NASA components for data extraction, subset-by-value, data discovery, and exposure of granule-based quality information to the user. Further sharing of results through well-defined URLs and web service specifications has also been implemented. The presentation will focus on an overall description of the technologies and informatics principles employed by the project. Examples of implementations of the end-to-end web service for quality screening with GHRSST and SMAP granules will be demonstrated.
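
    Pixel-level screening of this kind boils down to bitwise tests against a packed quality word. A sketch with numpy; the bit positions here are illustrative only, not the actual GHRSST or SMAP bit definitions.

```python
import numpy as np

# Hypothetical Level-2 granule: geophysical values plus a packed quality word.
sst = np.array([291.2, 289.7, 302.4, 288.1])
quality = np.array([0b0000, 0b0010, 0b0101, 0b0000], dtype=np.uint16)

CLOUD_BIT = 1 << 1  # illustrative bit positions, not a real flag spec
LAND_BIT = 1 << 2

# Mask any pixel whose quality word has a cloud or land bit set.
bad = (quality & (CLOUD_BIT | LAND_BIT)) != 0
screened = np.where(bad, np.nan, sst)
print(screened)  # pixels 1 and 2 are masked to NaN
```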

  1. Tools and Methods for Teaching Informatics at School: An Advanced Logo Course.

    ERIC Educational Resources Information Center

    Nikolov, Rumen

    1992-01-01

    Describes a course in educational informatics for preservice teachers and students in educational software development that emphasizes the use of LOGO, and summarizes course modules that cover tools and methods for teaching informatics, informatics curriculum design, introducing the basic notions of informatics, integrating informatics into the…

  2. Proof of concept : examining characteristics of roadway infrastructure in various 3D visualization modes.

    DOT National Transportation Integrated Search

    2015-02-01

    Utilizing enhanced visualization in transportation planning and design gained popularity in the last decade. This work aimed at demonstrating the concept of utilizing a highly immersive, virtual reality simulation engine for creating dynamic, inter...

  3. Career Paths of Pathology Informatics Fellowship Alumni.

    PubMed

    Rudolf, Joseph W; Garcia, Christopher A; Hanna, Matthew G; Williams, Christopher L; Balis, Ulysses G; Pantanowitz, Liron; Tuthill, J Mark; Gilbertson, John R

    2018-01-01

    The alumni of today's Pathology Informatics and Clinical Informatics fellowships fill diverse roles in academia, large health systems, and industry. The evolving training tracks and curriculum of Pathology Informatics fellowships have been well documented. However, less attention has been given to the posttraining experiences of graduates from informatics training programs. Here, we examine the career paths of subspecialty fellowship-trained pathology informaticians. Alumni from four Pathology Informatics fellowship training programs were contacted for their voluntary participation in the study. We analyzed various components of training, and the subsequent career paths of Pathology Informatics fellowship alumni using data extracted from alumni provided curriculum vitae. Twenty-three out of twenty-seven alumni contacted contributed to the study. A majority had completed undergraduate study in science, technology, engineering, and math fields and combined track training in anatomic and clinical pathology. Approximately 30% (7/23) completed residency in a program with an in-house Pathology Informatics fellowship. Most completed additional fellowships (15/23) and many also completed advanced degrees (10/23). Common primary posttraining appointments included chief medical informatics officer (3/23), director of Pathology Informatics (10/23), informatics program director (2/23), and various roles in industry (3/23). Many alumni also provide clinical care in addition to their informatics roles (14/23). Pathology Informatics alumni serve on a variety of institutional committees, participate in national informatics organizations, contribute widely to scientific literature, and more than half (13/23) have obtained subspecialty certification in Clinical Informatics to date. Our analysis highlights several interesting phenomena related to the training and career trajectory of Pathology Informatics fellowship alumni. We note the long training track alumni complete in preparation for their careers. We believe flexible training pathways combining informatics and clinical training may help to alleviate the burden. We highlight the importance of in-house Pathology Informatics fellowships in promoting interest in informatics among residents. We also observe the many important leadership roles in academia, large community health systems, and industry available to early career alumni and believe this reflects a strong market for formally trained informaticians. We hope this analysis will be useful as we continue to develop the informatics fellowships to meet the future needs of our trainees and discipline.

  4. A medical informatics perspective on health informatics 3.0. Findings from the Yearbook 2011 section on health informatics 3.0.

    PubMed

    Ruch, P

    2011-01-01

    To summarize current advances of the so-called Web 3.0 and emerging trends of the semantic web, we provide a synopsis of the articles selected for the IMIA Yearbook 2011, from which we attempt to derive a synthetic overview of today's and future activities in the field. While the state of research in the field is illustrated by a set of fairly heterogeneous studies, it is possible to identify significant clusters. While the most salient challenge and obsessional target of the semantic web remains its ambition to simply interconnect all available information, it is interesting to observe the developments of complementary research fields such as information sciences and text analytics. The combined expression power and virtually unlimited data aggregation skills of Web 3.0 technologies make it a disruptive instrument to discover new biomedical knowledge. In parallel, such an unprecedented situation creates new threats for patients participating in large-scale genetic studies, as Wjst demonstrates how various data sets can be coupled to re-identify anonymous genetic information. The best paper selection of articles on decision support shows examples of excellent research on methods concerning original development of core semantic web techniques as well as transdisciplinary achievements, as exemplified by literature-based analytics. This selected set of scientific investigations also demonstrates the need for computerized applications to transform the biomedical data overflow into more operational clinical knowledge, with potential threats to confidentiality directly associated with such advances. Altogether these papers support the idea that more elaborate computer tools, likely to combine heterogeneous text and data contents, should soon emerge for the benefit of both experimentalists and, hopefully, clinicians.

  5. Red Hat Enterprise Virtualization - KVM-based infrastructure services at BNL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cortijo, D.

    2011-06-14

    Over the past 18 months, BNL has moved a large percentage of its Linux-based servers and services into a Red Hat Enterprise Virtualization (RHEV) environment. This presentation will address our approach to virtualization, critical decision points, and a discussion of our implementation. Specific topics will include an overview of hardware and software requirements, networking, and storage; discussion of the decision to adopt the Red Hat solution over competing products (VMware, Xen, etc.); details on some of the features of RHEV, both current and on the roadmap; a review of performance and reliability gains since deployment completion; the path forward for RHEV at BNL; and caveats and potential problems.

  6. Pathology Informatics Essentials for Residents: A Flexible Informatics Curriculum Linked to Accreditation Council for Graduate Medical Education Milestones.

    PubMed

    Henricks, Walter H; Karcher, Donald S; Harrison, James H; Sinard, John H; Riben, Michael W; Boyer, Philip J; Plath, Sue; Thompson, Arlene; Pantanowitz, Liron

    2017-01-01

    Context: Recognition of the importance of informatics to the practice of pathology has surged. Training residents in pathology informatics has been a daunting task for most residency programs in the United States because faculty often lacks experience and training resources. Nevertheless, developing resident competence in informatics is essential for the future of pathology as a specialty. Objective: To develop and deliver a pathology informatics curriculum and instructional framework that guides pathology residency programs in training residents in critical pathology informatics knowledge and skills, and meets Accreditation Council for Graduate Medical Education Informatics Milestones. Design: The College of American Pathologists, Association of Pathology Chairs, and Association for Pathology Informatics formed a partnership and expert work group to identify critical pathology informatics training outcomes and to create a highly adaptable curriculum and instructional approach, supported by a multiyear change management strategy. Results: Pathology Informatics Essentials for Residents (PIER) is a rigorous approach for educating all pathology residents in important pathology informatics knowledge and skills. PIER includes an instructional resource guide and toolkit for incorporating informatics training into residency programs that vary in needs, size, settings, and resources. PIER is available at http://www.apcprods.org/PIER (accessed April 6, 2016). Conclusions: PIER is an important contribution to informatics training in pathology residency programs. PIER introduces pathology trainees to broadly useful informatics concepts and tools that are relevant to practice. PIER provides residency program directors with a means to implement a standardized informatics training curriculum, to adapt the approach to local program needs, and to evaluate resident performance and progress over time.

  7. WC WAVE - Integrating Diverse Hydrological-Modeling Data and Services Into an Interoperable Geospatial Infrastructure

    NASA Astrophysics Data System (ADS)

    Hudspeth, W. B.; Baros, S.; Barrett, H.; Savickas, J.; Erickson, J.

    2015-12-01

    WC WAVE (Western Consortium for Watershed Analysis, Visualization and Exploration) is a collaborative research project between the states of Idaho, Nevada, and New Mexico that is funded under the National Science Foundation's Experimental Program to Stimulate Competitive Research (EPSCoR). The goal of the project is to understand and document the effects of climate change on interactions between precipitation, vegetation growth, soil moisture and other landscape properties. These interactions are modeled within a framework we refer to as a virtual watershed (VW), a computer infrastructure that simulates watershed dynamics by linking scientific modeling, visualization, and data management components into a coherent whole. Developed and hosted at the Earth Data Analysis Center, University of New Mexico, the virtual watershed has a number of core functions which include: a) streamlined access to data required for model initialization and boundary conditions; b) the development of analytic scenarios through interactive visualization of available data and the storage of model configuration options; c) coupling of hydrological models through the rapid assimilation of model outputs into the data management system for access and use by sequent models. The WC-WAVE virtual watershed accomplishes these functions by provision of large-scale vector and raster data discovery, subsetting, and delivery via Open Geospatial Consortium (OGC) and REST web service standards. Central to the virtual watershed is the design and use of an innovative array of metadata elements that permits the stepwise coupling of diverse hydrological models (e.g. ISNOBAL, PRMS, CASiMiR) and input data to rapidly assess variation in outcomes under different climatic conditions. We present details on the architecture and functionality of the virtual watershed, results from three western U.S. watersheds, and discuss the realized benefits to watershed science of employing this integrated solution.
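
    A sketch of what a client-side subset request against such an OGC endpoint could look like, using a WCS 2.0 GetCoverage key-value-pair query; the base URL and coverage identifier below are hypothetical, not the actual virtual-watershed service.

```python
import requests

# Hypothetical virtual-watershed OGC endpoint and coverage name.
base = "https://vwp-dev.example.org/ows"
params = {
    "service": "WCS",
    "version": "2.0.1",
    "request": "GetCoverage",
    "coverageId": "isnobal_swe_20150401",          # assumed coverage id
    "subset": ["Lat(43.0,43.5)", "Long(-116.4,-115.9)"],  # spatial subset
    "format": "image/tiff",
}

resp = requests.get(base, params=params, timeout=60)
resp.raise_for_status()

with open("swe_subset.tif", "wb") as fh:
    fh.write(resp.content)  # subsetted raster ready for model initialization
```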

  8. Moving toward a United States strategic plan in primary care informatics: a White Paper of the Primary Care Informatics Working Group, American Medical Informatics Association.

    PubMed

    Little, David R; Zapp, John A; Mullins, Henry C; Zuckerman, Alan E; Teasdale, Sheila; Johnson, Kevin B

    2003-01-01

    The Primary Care Informatics Working Group (PCIWG) of the American Medical Informatics Association (AMIA) has identified the absence of a national strategy for primary care informatics. Under PCIWG leadership, major national and international societies have come together to create the National Alliance for Primary Care Informatics (NAPCI), to promote a connection between the informatics community and the organisations that support primary care. The PCIWG clinical practice subcommittee has recognised the necessity of a global needs assessment, and proposed work in point-of-care technology, clinical vocabularies, and ambulatory electronic medical record development. Educational needs include a consensus statement on informatics competencies, recommendations for curriculum and teaching methods, and methodologies to evaluate their effectiveness. The research subcommittee seeks to define a primary care informatics research agenda, and to support and disseminate informatics research throughout the primary care community. The AMIA board of directors has enthusiastically endorsed the conceptual basis for this White Paper.

  9. An Approach for All in Pharmacy Informatics Education.

    PubMed

    Fox, Brent I; Flynn, Allen; Clauson, Kevin A; Seaton, Terry L; Breeden, Elizabeth

    2017-03-25

    Computerization is transforming health care. All clinicians are users of health information technology (HIT). Understanding fundamental principles of informatics, the field focused on information needs and uses, is essential if HIT is going to support improved patient outcomes. Informatics education for clinicians is a national priority. Additionally, some informatics experts are needed to bring about innovations in HIT. A common approach to pharmacy informatics education has been slow to develop. Meanwhile, accreditation standards for informatics in pharmacy education continue to evolve. A gap remains in the implementation of informatics education for all pharmacy students and it is unclear what expert informatics training should cover. In this article, we propose the first of two complementary approaches to informatics education in pharmacy: to incorporate fundamental informatics education into pharmacy curricula for all students. The second approach, to train those students interested in becoming informatics experts to design, develop, implement, and evaluate HIT, will be presented in a subsequent issue of the Journal.

  10. An Approach for All in Pharmacy Informatics Education

    PubMed Central

    Flynn, Allen; Clauson, Kevin A.; Seaton, Terry L.; Breeden, Elizabeth

    2017-01-01

    Computerization is transforming health care. All clinicians are users of health information technology (HIT). Understanding fundamental principles of informatics, the field focused on information needs and uses, is essential if HIT is going to support improved patient outcomes. Informatics education for clinicians is a national priority. Additionally, some informatics experts are needed to bring about innovations in HIT. A common approach to pharmacy informatics education has been slow to develop. Meanwhile, accreditation standards for informatics in pharmacy education continue to evolve. A gap remains in the implementation of informatics education for all pharmacy students and it is unclear what expert informatics training should cover. In this article, we propose the first of two complementary approaches to informatics education in pharmacy: to incorporate fundamental informatics education into pharmacy curricula for all students. The second approach, to train those students interested in becoming informatics experts to design, develop, implement, and evaluate HIT, will be presented in a subsequent issue of the Journal. PMID:28381898

  11. CFCC: A Covert Flows Confinement Mechanism for Virtual Machine Coalitions

    NASA Astrophysics Data System (ADS)

    Cheng, Ge; Jin, Hai; Zou, Deqing; Shi, Lei; Ohoussou, Alex K.

    Normally, virtualization technology is adopted to construct the infrastructure of a cloud computing environment. Resources are managed and organized dynamically through virtual machine (VM) coalitions in accordance with the requirements of applications. Enforcing mandatory access control (MAC) on the VM coalitions will greatly improve the security of VM-based cloud computing. However, existing MAC models lack a mechanism to confine covert flows, and eliminating covert channels under them is hard. In this paper, we propose a covert flows confinement mechanism for virtual machine coalitions (CFCC), which introduces dynamic conflicts of interest based on the activity history of VMs, each of which is attached with a label. The proposed mechanism can be used to confine the covert flows between VMs in different coalitions. We implement a prototype system, evaluate its performance, and show that our mechanism is practical.
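
    The dynamic conflict-of-interest idea can be sketched as a Chinese-Wall-style check over each VM's flow history: once a VM has touched a coalition in some conflict class, it may not flow to a different coalition in the same class. The labels, class assignments, and API below are invented for illustration, not the CFCC implementation.

```python
# Minimal history-based conflict-of-interest check in the spirit of CFCC;
# all names here are hypothetical.
CONFLICT_CLASS = {"bank-A": "banking", "bank-B": "banking", "grid-C": "energy"}

history: dict[str, dict[str, str]] = {}  # VM id -> {conflict class: owner seen}

def may_join(vm: str, owner: str) -> bool:
    """Admit a VM to a coalition unless its flow history already includes a
    *different* owner from the same conflict-of-interest class."""
    cls = CONFLICT_CLASS[owner]
    seen = history.setdefault(vm, {})
    if cls in seen and seen[cls] != owner:
        return False  # would open a covert flow across the wall
    seen[cls] = owner
    return True

print(may_join("vm1", "bank-A"))  # True  - first 'banking' contact
print(may_join("vm1", "bank-B"))  # False - conflicting coalition, denied
print(may_join("vm1", "grid-C"))  # True  - unrelated conflict class
```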

  12. Examining Tensions That Affect the Evaluation of Technology in Health Care: Considerations for System Decision Makers From the Perspective of Industry and Evaluators.

    PubMed

    Desveaux, Laura; Shaw, James; Wallace, Ross; Bhattacharyya, Onil; Bhatia, R Sacha; Jamieson, Trevor

    2017-12-08

    Virtual technologies have the potential to mitigate a range of challenges for health care systems. Despite the widespread use of mobile devices in everyday life, they currently have a limited role in health service delivery and clinical care. Efforts to integrate the fast-paced consumer technology market with health care delivery exposes tensions among patients, providers, vendors, evaluators, and system decision makers. This paper explores the key tensions between the high bar for evidence prior to market approval that guides health care regulatory decisions and the "fail fast" reality of the technology industry. We examine three core tensions: balancing user needs versus system needs, rigor versus responsiveness, and the role of pre- versus postmarket evidence generation. We use these to elaborate on the structure and appropriateness of evaluation mechanisms for virtual care solutions. Virtual technologies provide a foundation for personalized, patient-centered medicine on the user side, coupled with a broader understanding of impact on the system side. However, mechanisms for stakeholder discussion are needed to clarify the nature of the health technology marketplace and the drivers of evaluation priorities. ©Laura Desveaux, James Shaw, Ross Wallace, Onil Bhattacharyya, R Sacha Bhatia, Trevor Jamieson. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 08.12.2017.

  13. The Italian Cloud-based brokering Infrastructure to sustain Interoperability for Operative Hydrology

    NASA Astrophysics Data System (ADS)

    Boldrini, E.; Pecora, S.; Bussettini, M.; Bordini, F.; Nativi, S.

    2015-12-01

    This work presents the informatics platform carried out to implement the National Hydrological Operative Information System of Italy. In particular, the presentation will focus on the governing aspects of the cloud infrastructure and the brokering software that make it possible to sustain the hydrology data flow between heterogeneous user clients and data providers. The Institute for Environmental Protection and Research, ISPRA (Istituto Superiore per la Protezione e la Ricerca Ambientale), in collaboration with the Regional Agency for Environmental Protection in the Emilia-Romagna region, ARPA-ER (Agenzia Regionale per la Prevenzione e l´Ambiente dell´Emilia-Romagna), and CNR-IIA (National Research Council of Italy), designed and developed an innovative platform for the discovery and access of hydrological data coming from 19 Italian administrative regions and 2 Italian autonomous provinces, in near real time. ISPRA has deployed and governs this system. The presentation will introduce and discuss the technological barriers to interoperability, as well as social and policy ones. The adopted solutions will be described, outlining the sustainability challenges and benefits.

  14. Community-driven computational biology with Debian Linux

    PubMed Central

    2010-01-01

    Background The Open Source movement and its technologies are popular in the bioinformatics community because they provide freely available tools and resources for research. In order to feed the steady demand for updates on software and associated data, a service infrastructure is required for sharing and providing these tools to heterogeneous computing environments. Results The Debian Med initiative provides ready and coherent software packages for medical informatics and bioinformatics. These packages can be used together in Taverna workflows via the UseCase plugin to manage execution on local or remote machines. If such packages are available in cloud computing environments, the underlying hardware and the analysis pipelines can be shared along with the software. Conclusions Debian Med closes the gap between developers and users. It provides a simple method for offering new releases of software and data resources, thus provisioning a local infrastructure for computational biology. For geographically distributed teams it can ensure they are working on the same versions of tools, in the same conditions. This contributes to the world-wide networking of researchers. PMID:21210984

  15. PESI - a taxonomic backbone for Europe.

    PubMed

    de Jong, Yde; Kouwenberg, Juliana; Boumans, Louis; Hussey, Charles; Hyam, Roger; Nicolson, Nicola; Kirk, Paul; Paton, Alan; Michel, Ellinor; Guiry, Michael D; Boegh, Phillip S; Pedersen, Henrik Ærenlund; Enghoff, Henrik; von Raab-Straube, Eckhard; Güntsch, Anton; Geoffroy, Marc; Müller, Andreas; Kohlbecker, Andreas; Berendsohn, Walter; Appeltans, Ward; Arvanitidis, Christos; Vanhoorne, Bart; Declerck, Joram; Vandepitte, Leen; Hernandez, Francisco; Nash, Róisín; Costello, Mark John; Ouvrard, David; Bezard-Falgas, Pascale; Bourgoin, Thierry; Wetzel, Florian Tobias; Glöckler, Falko; Korb, Günther; Ring, Caroline; Hagedorn, Gregor; Häuser, Christoph; Aktaç, Nihat; Asan, Ahmet; Ardelean, Adorian; Borges, Paulo Alexandre Vieira; Dhora, Dhimiter; Khachatryan, Hasmik; Malicky, Michael; Ibrahimov, Shaig; Tuzikov, Alexander; De Wever, Aaike; Moncheva, Snejana; Spassov, Nikolai; Chobot, Karel; Popov, Alexi; Boršić, Igor; Sfenthourakis, Spyros; Kõljalg, Urmas; Uotila, Pertti; Olivier, Gargominy; Dauvin, Jean-Claude; Tarkhnishvili, David; Chaladze, Giorgi; Tuerkay, Michael; Legakis, Anastasios; Peregovits, László; Gudmundsson, Gudmundur; Ólafsson, Erling; Lysaght, Liam; Galil, Bella Sarah; Raimondo, Francesco M; Domina, Gianniantonio; Stoch, Fabio; Minelli, Alessandro; Spungis, Voldermars; Budrys, Eduardas; Olenin, Sergej; Turpel, Armand; Walisch, Tania; Krpach, Vladimir; Gambin, Marie Therese; Ungureanu, Laurentia; Karaman, Gordan; Kleukers, Roy M J C; Stur, Elisabeth; Aagaard, Kaare; Valland, Nils; Moen, Toril Loennechen; Bogdanowicz, Wieslaw; Tykarski, Piotr; Węsławski, Jan Marcin; Kędra, Monika; M de Frias Martins, Antonio; Abreu, António Domingos; Silva, Ricardo; Medvedev, Sergei; Ryss, Alexander; Šimić, Smiljka; Marhold, Karol; Stloukal, Eduard; Tome, Davorin; Ramos, Marian A; Valdés, Benito; Pina, Francisco; Kullander, Sven; Telenius, Anders; Gonseth, Yves; Tschudin, Pascal; Sergeyeva, Oleksandra; Vladymyrov, Volodymyr; Rizun, Volodymyr Bohdanovych; Raper, Chris; Lear, Dan; Stoev, Pavel; Penev, Lyubomir; Rubio, Ana Casino; Backeljau, Thierry; Saarenmaa, Hannu; Ulenberg, Sandrine

    2015-01-01

    Reliable taxonomy underpins communication in all of biology, not least nature conservation and sustainable use of ecosystem resources. The flexibility of taxonomic interpretations, however, presents a serious challenge for end-users of taxonomic concepts. Users need standardised and continuously harmonised taxonomic reference systems, as well as high-quality and complete taxonomic data sets, but these are generally lacking for non-specialists. The solution is in dynamic, expertly curated web-based taxonomic tools. The Pan-European Species-directories Infrastructure (PESI) worked to solve this key issue by providing a taxonomic e-infrastructure for Europe. It strengthened the relevant social (expertise) and information (standards, data and technical) capacities of five major community networks on taxonomic indexing in Europe, which is essential for proper biodiversity assessment and monitoring activities. The key objectives of PESI were: 1) standardisation in taxonomic reference systems, 2) enhancement of the quality and completeness of taxonomic data sets and 3) creation of integrated access to taxonomic information. This paper describes the results of PESI and its future prospects, including the involvement in major European biodiversity informatics initiatives and programs.

  16. PESI - a taxonomic backbone for Europe

    PubMed Central

    Kouwenberg, Juliana; Boumans, Louis; Hussey, Charles; Hyam, Roger; Nicolson, Nicola; Kirk, Paul; Paton, Alan; Michel, Ellinor; Guiry, Michael D.; Boegh, Phillip S.; Pedersen, Henrik Ærenlund; Enghoff, Henrik; von Raab-Straube, Eckhard; Güntsch, Anton; Geoffroy, Marc; Müller, Andreas; Kohlbecker, Andreas; Berendsohn, Walter; Appeltans, Ward; Arvanitidis, Christos; Vanhoorne, Bart; Declerck, Joram; Vandepitte, Leen; Hernandez, Francisco; Nash, Róisín; Costello, Mark John; Ouvrard, David; Bezard-Falgas, Pascale; Bourgoin, Thierry; Wetzel, Florian Tobias; Glöckler, Falko; Korb, Günther; Ring, Caroline; Hagedorn, Gregor; Häuser, Christoph; Aktaç, Nihat; Asan, Ahmet; Ardelean, Adorian; Borges, Paulo Alexandre Vieira; Dhora, Dhimiter; Khachatryan, Hasmik; Malicky, Michael; Ibrahimov, Shaig; Tuzikov, Alexander; De Wever, Aaike; Moncheva, Snejana; Spassov, Nikolai; Chobot, Karel; Popov, Alexi; Boršić, Igor; Sfenthourakis, Spyros; Kõljalg, Urmas; Uotila, Pertti; Olivier, Gargominy; Dauvin, Jean-Claude; Tarkhnishvili, David; Chaladze, Giorgi; Tuerkay, Michael; Legakis, Anastasios; Peregovits, László; Gudmundsson, Gudmundur; Ólafsson, Erling; Lysaght, Liam; Galil, Bella Sarah; Raimondo, Francesco M.; Domina, Gianniantonio; Stoch, Fabio; Minelli, Alessandro; Spungis, Voldermars; Budrys, Eduardas; Olenin, Sergej; Turpel, Armand; Walisch, Tania; Krpach, Vladimir; Gambin, Marie Therese; Ungureanu, Laurentia; Karaman, Gordan; Kleukers, Roy M.J.C.; Stur, Elisabeth; Aagaard, Kaare; Valland, Nils; Moen, Toril Loennechen; Bogdanowicz, Wieslaw; Tykarski, Piotr; Węsławski, Jan Marcin; Kędra, Monika; M. de Frias Martins, Antonio; Abreu, António Domingos; Silva, Ricardo; Medvedev, Sergei; Ryss, Alexander; Šimić, Smiljka; Marhold, Karol; Stloukal, Eduard; Tome, Davorin; Ramos, Marian A.; Valdés, Benito; Pina, Francisco; Kullander, Sven; Telenius, Anders; Gonseth, Yves; Tschudin, Pascal; Sergeyeva, Oleksandra; Vladymyrov, Volodymyr; Rizun, Volodymyr Bohdanovych; Raper, Chris; Lear, Dan; Stoev, Pavel; Penev, Lyubomir; Rubio, Ana Casino; Backeljau, Thierry; Saarenmaa, Hannu; Ulenberg, Sandrine

    2015-01-01

    Abstract Background Reliable taxonomy underpins communication in all of biology, not least nature conservation and sustainable use of ecosystem resources. The flexibility of taxonomic interpretations, however, presents a serious challenge for end-users of taxonomic concepts. Users need standardised and continuously harmonised taxonomic reference systems, as well as high-quality and complete taxonomic data sets, but these are generally lacking for non-specialists. The solution is in dynamic, expertly curated web-based taxonomic tools. The Pan-European Species-directories Infrastructure (PESI) worked to solve this key issue by providing a taxonomic e-infrastructure for Europe. It strengthened the relevant social (expertise) and information (standards, data and technical) capacities of five major community networks on taxonomic indexing in Europe, which is essential for proper biodiversity assessment and monitoring activities. The key objectives of PESI were: 1) standardisation in taxonomic reference systems, 2) enhancement of the quality and completeness of taxonomic data sets and 3) creation of integrated access to taxonomic information. New information This paper describes the results of PESI and its future prospects, including the involvement in major European biodiversity informatics initiatives and programs. PMID:26491393

  17. The Function Biomedical Informatics Research Network Data Repository.

    PubMed

    Keator, David B; van Erp, Theo G M; Turner, Jessica A; Glover, Gary H; Mueller, Bryon A; Liu, Thomas T; Voyvodic, James T; Rasmussen, Jerod; Calhoun, Vince D; Lee, Hyo Jong; Toga, Arthur W; McEwen, Sarah; Ford, Judith M; Mathalon, Daniel H; Diaz, Michele; O'Leary, Daniel S; Jeremy Bockholt, H; Gadde, Syam; Preda, Adrian; Wible, Cynthia G; Stern, Hal S; Belger, Aysenil; McCarthy, Gregory; Ozyurt, Burak; Potkin, Steven G

    2016-01-01

    The Function Biomedical Informatics Research Network (FBIRN) developed methods and tools for conducting multi-scanner functional magnetic resonance imaging (fMRI) studies. Method and tool development were based on two major goals: 1) to assess the major sources of variation in fMRI studies conducted across scanners, including instrumentation, acquisition protocols, challenge tasks, and analysis methods, and 2) to provide a distributed network infrastructure and an associated federated database to host and query large, multi-site, fMRI and clinical data sets. In the process of achieving these goals the FBIRN test bed generated several multi-scanner brain imaging data sets to be shared with the wider scientific community via the BIRN Data Repository (BDR). The FBIRN Phase 1 data set consists of a traveling subject study of 5 healthy subjects, each scanned on 10 different 1.5 to 4 T scanners. The FBIRN Phase 2 and Phase 3 data sets consist of subjects with schizophrenia or schizoaffective disorder along with healthy comparison subjects scanned at multiple sites. In this paper, we provide concise descriptions of FBIRN's multi-scanner brain imaging data sets and details about the BIRN Data Repository instance of the Human Imaging Database (HID) used to publicly share the data.

  18. From information technology to informatics: the information revolution in dental education.

    PubMed

    Schleyer, Titus K; Thyvalikakath, Thankam P; Spallek, Heiko; Dziabiak, Michael P; Johnson, Lynn A

    2012-01-01

    The capabilities of information technology (IT) have advanced precipitously in the last fifty years. Many of these advances have enabled new and beneficial applications of IT in dental education. However, conceptually, IT use in dental schools is only in its infancy. Challenges and opportunities abound for improving how we support clinical care, education, and research with IT. In clinical care, we need to move electronic dental records beyond replicating paper, connect information on oral health to that on systemic health, facilitate collaborative care through teledentistry, and help clinicians apply evidence-based dentistry and preventive management strategies. With respect to education, we should adopt an evidence-based approach to IT use for teaching and learning, share effective educational content and methods, leverage technology-mediated changes in the balance of power between faculty and students, improve technology support for clinical teaching, and build an information infrastructure centered on learners and organizations. In research, opportunities include reusing clinical care data for research studies, helping advance computational methods for research, applying generalizable research tools in dentistry, and reusing research data and scientific workflows. In the process, we transition from a focus on IT (the mere technical aspects of applying computer technology) to one on informatics: the what, how, and why of managing information.

  19. From Information Technology to Informatics: The Information Revolution in Dental Education

    PubMed Central

    Schleyer, Titus K.; Thyvalikakath, Thankam P.; Spallek, Heiko; Dziabiak, Michael P.; Johnson, Lynn A.

    2014-01-01

    The capabilities of information technology (IT) have advanced precipitously in the last fifty years. Many of these advances have enabled new and beneficial applications of IT in dental education. However, conceptually, IT use in dental schools is only in its infancy. Challenges and opportunities abound for improving how we support clinical care, education, and research with IT. In clinical care, we need to move electronic dental records beyond replicating paper, connect information on oral health to that on systemic health, facilitate collaborative care through teledentistry, and help clinicians apply evidence-based dentistry and preventive management strategies. With respect to education, we should adopt an evidence-based approach to IT use for teaching and learning, share effective educational content and methods, leverage technology-mediated changes in the balance of power between faculty and students, improve technology support for clinical teaching, and build an information infrastructure centered on learners and organizations. In research, opportunities include reusing clinical care data for research studies, helping advance computational methods for research, applying generalizable research tools in dentistry, and reusing research data and scientific workflows. In the process, we transition from a focus on IT—the mere technical aspects of applying computer technology—to one on informatics: the what, how, and why of managing information. PMID:22262557

  20. Pathology Informatics Essentials for Residents: A flexible informatics curriculum linked to Accreditation Council for Graduate Medical Education milestones

    PubMed Central

    Henricks, Walter H; Karcher, Donald S; Harrison, James H; Sinard, John H; Riben, Michael W; Boyer, Philip J; Plath, Sue; Thompson, Arlene; Pantanowitz, Liron

    2016-01-01

    Context: Recognition of the importance of informatics to the practice of pathology has surged. Training residents in pathology informatics has been a daunting task for most residency programs in the United States because faculty often lacks experience and training resources. Nevertheless, developing resident competence in informatics is essential for the future of pathology as a specialty. Objective: The objective of the study is to develop and deliver a pathology informatics curriculum and instructional framework that guides pathology residency programs in training residents in critical pathology informatics knowledge and skills and meets Accreditation Council for Graduate Medical Education Informatics Milestones. Design: The College of American Pathologists, Association of Pathology Chairs, and Association for Pathology Informatics formed a partnership and expert work group to identify critical pathology informatics training outcomes and to create a highly adaptable curriculum and instructional approach, supported by a multiyear change management strategy. Results: Pathology Informatics Essentials for Residents (PIER) is a rigorous approach for educating all pathology residents in important pathology informatics knowledge and skills. PIER includes an instructional resource guide and toolkit for incorporating informatics training into residency programs that vary in needs, size, settings, and resources. PIER is available at http://www.apcprods.org/PIER (accessed April 6, 2016). Conclusions: PIER is an important contribution to informatics training in pathology residency programs. PIER introduces pathology trainees to broadly useful informatics concepts and tools that are relevant to practice. PIER provides residency program directors with a means to implement a standardized informatics training curriculum, to adapt the approach to local program needs, and to evaluate resident performance and progress over time. PMID:27563486

  1. Virtual shelves in a digital library: a framework for access to networked information sources.

    PubMed Central

    Patrick, T B; Springer, G K; Mitchell, J A; Sievert, M E

    1995-01-01

    OBJECTIVE: Develop a framework for collections-based access to networked information sources that addresses the problem of location-dependent access to information sources. DESIGN: This framework uses a metaphor of a virtual shelf. A virtual shelf is a general-purpose server that is dedicated to a particular information subject class. The identifier of one of these servers identifies its subject class. Location-independent call numbers are assigned to information sources. Call numbers are based on standard vocabulary codes. The call numbers are first mapped to the location-independent identifiers of virtual shelves. When access to an information resource is required, a location directory provides a second mapping of these location-independent server identifiers to actual network locations. RESULTS: The framework has been implemented in two different systems. One system is based on the Open Software Foundation/Distributed Computing Environment and the other is based on the World Wide Web. CONCLUSIONS: This framework applies in new ways traditional methods of library classification and cataloging. It is compatible with two traditional styles of selecting information: searching and browsing. Traditional methods may be combined with new paradigms of information searching that will be able to take advantage of the special properties of digital information. Cooperation between the library and information science community and the informatics community can provide a means for a continuing application of the knowledge and techniques of library science to the new problems of networked information sources. PMID:8581554
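
    The two-step resolution in the DESIGN section lends itself to a minimal sketch: a call number is first mapped to a location-independent shelf identifier, and only at access time is the shelf resolved to a network location through a directory. The class codes, shelf identifiers, and URLs below are hypothetical placeholders.

```python
# Mapping 1: location-independent call numbers (standard vocabulary codes)
# to virtual-shelf identifiers (one server per subject class).
CALL_TO_SHELF = {
    "C04.588": "shelf:neoplasms",        # e.g., a MeSH tree number
    "C14.280": "shelf:heart-diseases",
}

# Mapping 2: a location directory from shelf identifiers to current
# network locations, consulted only when access is required.
SHELF_DIRECTORY = {
    "shelf:neoplasms": "https://shelf1.example.org/",
    "shelf:heart-diseases": "https://shelf2.example.org/",
}

def resolve(call_number: str) -> str:
    """Resolve a call number to an actual network location in two steps."""
    shelf = CALL_TO_SHELF[call_number]   # location-independent step
    return SHELF_DIRECTORY[shelf]        # location-dependent step

print(resolve("C04.588"))  # -> https://shelf1.example.org/
```

    Because only the directory changes when a shelf moves, call numbers stay stable while network locations evolve.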

  2. Roadmap for Developing of Brokering as a Component of EarthCube

    NASA Astrophysics Data System (ADS)

    Pearlman, J.; Khalsa, S. S.; Browdy, S.; Duerr, R. E.; Nativi, S.; Parsons, M. A.; Pearlman, F.; Robinson, E. M.

    2012-12-01

    The goal of NSF's EarthCube is to create a sustainable infrastructure that enables the sharing of all geosciences data, information, and knowledge in an open, transparent and inclusive manner. Key to achieving the EarthCube vision is establishing a process that will guide the evolution of the infrastructure through community engagement and appropriate investment so that the infrastructure is embraced and utilized by the entire geosciences community. In this presentation we describe a roadmap, developed through the EarthCube Brokering Concept Award, for an evolutionary process of infrastructure and interoperability development. All geoscience communities already have, to a greater or lesser degree, elements of an information infrastructure in place. These elements include resources such as data archives, catalogs, and portals as well as vocabularies, data models, protocols, best practices and other community conventions. What is necessary now is a process for consolidating these diverse infrastructure elements into an overall infrastructure that provides easy discovery, access and utilization of resources across disciplinary boundaries. This process of consolidation will be achieved by creating "interfaces," what we call "brokers," between systems. Brokers connect disparate systems without imposing new burdens upon those systems, and enable the infrastructure to adjust to new technical developments and scientific requirements as they emerge. Robust cyberinfrastructure will arise only when social, organizational, and cultural issues are resolved in tandem with the creation of technology-based services. This is best done through use-case-driven requirements and agile, iterative development methods. It is important to start by solving real (not hypothetical) information access and use problems via small pilot projects that develop capabilities targeted to specific communities. These pilots can then grow into larger prototypes addressing intercommunity problems working towards a full-scale socio-technical infrastructure vision. Brokering, as a critical capability for connecting systems, evolves over time through more connections and increased functionality. This adaptive process allows for continual evaluation as to how well science-driven use cases are being met. Several NSF infrastructure projects are underway and beginning to shape the next generation of information sharing. There is a near term, and possibly unique, opportunity to increase the impact and interconnectivity of these projects, and further improve science research collaboration through brokering. Brokering has been demonstrated to be an essential part of a robust, adaptive infrastructure, but critical questions of governance and detailed implementation remain. Our roadmap proposes the expansion of brokering pilots into fully operational prototypes that work with the broader science and informatics communities to answer these questions, connect existing and emerging systems, and evolve the EarthCube infrastructure.
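
    A minimal sketch of the brokering pattern described above: the broker adapts one discovery request to each provider's native interface and normalizes the results, so neither the client nor the providers have to change. All class names, interfaces, and results here are hypothetical placeholders, not EarthCube components.

```python
class CatalogA:
    """One existing system with its own native search API."""
    def search(self, keyword):
        return [{"title": "Dataset A1", "keyword": keyword}]

class CatalogB:
    """A second system with a different, incompatible API."""
    def find_records(self, q):
        return [("Dataset B7", q)]

class Broker:
    """Mediates a single discovery request across heterogeneous catalogs."""
    def __init__(self):
        self.adapters = [
            lambda kw: CatalogA().search(kw),
            # Normalize CatalogB's tuples into the common record shape.
            lambda kw: [{"title": t, "keyword": k}
                        for t, k in CatalogB().find_records(kw)],
        ]

    def discover(self, keyword):
        results = []
        for adapt in self.adapters:
            results.extend(adapt(keyword))  # each adapter hides one native API
        return results

print(Broker().discover("sea ice"))
```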

  3. Current Status of Nursing Informatics Education in Korea.

    PubMed

    Jeon, Eunjoo; Kim, Jeongeun; Park, Hyeoun-Ae; Lee, Ji-Hyun; Kim, Jungha; Jin, Meiling; Ahn, Shinae; Jun, Jooyeon; Song, Healim; On, Jeongah; Jung, Hyesil; Hong, Yeong Joo; Yim, Suran

    2016-04-01

    This study presents the current status of nursing informatics education, the content covered in nursing informatics courses, the faculty efficacy, and the barriers to and additional supports for teaching nursing informatics in Korea. A set of questionnaires consisting of an 18-item questionnaire for nursing informatics education, a 6-item questionnaire for faculty efficacy, and 2 open-ended questions for barriers and additional supports were sent to 204 nursing schools via email and the postal service. Nursing schools offering nursing informatics were further asked to send their syllabuses. The subjects taught were analyzed using nursing informatics competency categories and other responses were tallied using descriptive statistics. A total of 72 schools (35.3%) responded to the survey, of which 38 reported that they offered nursing informatics courses in their undergraduate nursing programs. Nursing informatics courses at 11 schools were taught by a professor with a degree in nursing informatics. Computer technology was the most frequently taught subject (27 schools), followed by information systems used for practice (25 schools). The faculty efficacy was 3.76 ± 0.86 (out of 5). The most frequently reported barrier to teaching nursing informatics (n = 9) was lack of awareness of the importance of nursing informatics. Training and educational opportunities were the most requested additional support. Nursing informatics education has increased during the last decade in Korea. However, the proportions of faculty with degrees in nursing informatics and number of schools offering nursing informatics courses have not increased much. Thus, a greater focus is needed on training faculty and developing the courses.

  4. Current Status of Nursing Informatics Education in Korea

    PubMed Central

    Jeon, Eunjoo; Kim, Jeongeun; Lee, Ji-Hyun; Kim, Jungha; Jin, Meiling; Ahn, Shinae; Jun, Jooyeon; Song, Healim; On, Jeongah; Jung, Hyesil; Hong, Yeong Joo; Yim, Suran

    2016-01-01

    Objectives This study presents the current status of nursing informatics education, the content covered in nursing informatics courses, the faculty efficacy, and the barriers to and additional supports for teaching nursing informatics in Korea. Methods A set of questionnaires consisting of an 18-item questionnaire for nursing informatics education, a 6-item questionnaire for faculty efficacy, and 2 open-ended questions for barriers and additional supports were sent to 204 nursing schools via email and the postal service. Nursing schools offering nursing informatics were further asked to send their syllabuses. The subjects taught were analyzed using nursing informatics competency categories and other responses were tallied using descriptive statistics. Results A total of 72 schools (35.3%) responded to the survey, of which 38 reported that they offered nursing informatics courses in their undergraduate nursing programs. Nursing informatics courses at 11 schools were taught by a professor with a degree in nursing informatics. Computer technology was the most frequently taught subject (27 schools), followed by information systems used for practice (25 schools). The faculty efficacy was 3.76 ± 0.86 (out of 5). The most frequently reported barrier to teaching nursing informatics (n = 9) was lack of awareness of the importance of nursing informatics. Training and educational opportunities were the most requested additional support. Conclusions Nursing informatics education has increased during the last decade in Korea. However, the proportions of faculty with degrees in nursing informatics and number of schools offering nursing informatics courses have not increased much. Thus, a greater focus is needed on training faculty and developing the courses. PMID:27200224

  5. Leveraging health information technology to achieve the "triple aim" of healthcare reform.

    PubMed

    Sheikh, Aziz; Sood, Harpreet S; Bates, David W

    2015-07-01

    To investigate experiences with leveraging health information technology (HIT) to improve patient care and population health, and reduce healthcare expenditures. In-depth qualitative interviews with federal government employees, health policy, HIT and medico-legal experts, health providers, physicians, purchasers, payers, patient advocates, and vendors from across the United States. The authors undertook 47 interviews. There was a widely shared belief that Health Information Technology for Economic and Clinical Health (HITECH) had catalyzed the creation of a digital infrastructure, which was being used in innovative ways to improve quality of care and curtail costs. There were, however, major concerns about the poor usability of electronic health records (EHRs), their limited ability to support multi-disciplinary care, and major difficulties with health information exchange, which undermined efforts to deliver integrated patient-centered care. Proposed strategies for enhancing the benefits of HIT included federal stimulation of competition by mandating vendors to open up their application program interfaces, incenting development of low-cost consumer informatics tools, and promoting Congressional review of the Health Insurance Portability and Accountability Act (HIPAA) to optimize the balance between data privacy and reuse. Many underscored the need to "kick the legs from underneath the fee-for-service model" and replace it with a data-driven reimbursement system that rewards high-quality care. The HITECH Act has stimulated unprecedented, multi-stakeholder interest in HIT. Early experiences indicate that the resulting digital infrastructure is being used to improve quality of care and curtail costs. Reform efforts are, however, severely limited by problems with usability, limited interoperability, and the persistence of the fee-for-service paradigm; addressing these issues therefore needs to be the federal government's main policy target.

  6. Software-Defined Network Solutions for Science Scenarios: Performance Testing Framework and Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Settlemyer, Bradley; Kettimuthu, R.; Boley, Josh

    High-performance scientific workflows utilize supercomputers, scientific instruments, and large storage systems. Their execution requires fast setup of a small number of dedicated network connections across the geographically distributed facility sites. We present Software-Defined Network (SDN) solutions consisting of site daemons that use dpctl, Floodlight, ONOS, or OpenDaylight controllers to set up these connections. The development of these SDN solutions could be quite disruptive to the infrastructure, while requiring close coordination among multiple sites; in addition, the large number of possible controller and device combinations to investigate could make the infrastructure unavailable to regular users for extended periods of time. In response, we develop a Virtual Science Network Environment (VSNE) using virtual machines, Mininet, and custom scripts that support the development, testing, and evaluation of SDN solutions, without the constraints and expenses of multi-site physical infrastructures; furthermore, the chosen solutions can be directly transferred to production deployments. By complementing VSNE with a physical testbed, we conduct targeted performance tests of various SDN solutions to help choose the best candidates. In addition, we propose a switching response method to assess the setup times and throughput performances of different SDN solutions, and present experimental results that show their advantages and limitations.
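
    A minimal sketch of a VSNE-style virtual testbed using Mininet's Python API: two emulated "sites", each a host behind its own switch, attached to an external SDN controller. It assumes Mininet is installed, a controller (e.g., Floodlight or OpenDaylight) is already listening on 127.0.0.1:6653, and the script is run as root; the topology is illustrative, not the paper's setup.

```python
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topo import Topo

class TwoSiteTopo(Topo):
    """Two emulated sites, each a host behind its own switch."""
    def build(self):
        s1, s2 = self.addSwitch('s1'), self.addSwitch('s2')
        h1, h2 = self.addHost('h1'), self.addHost('h2')
        self.addLink(h1, s1)
        self.addLink(h2, s2)
        self.addLink(s1, s2)   # inter-site link; paths installed by the controller

net = Mininet(topo=TwoSiteTopo(),
              controller=lambda name: RemoteController(name, ip='127.0.0.1',
                                                       port=6653))
net.start()
net.pingAll()   # crude check that the controller set up end-to-end connectivity
net.stop()
```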

  7. Implementation Issues of Virtual Desktop Infrastructure and Its Case Study for a Physician's Round at Seoul National University Bundang Hospital.

    PubMed

    Yoo, Sooyoung; Kim, Seok; Kim, Taegi; Kim, Jon Soo; Baek, Rong-Min; Suh, Chang Suk; Chung, Chin Youb; Hwang, Hee

    2012-12-01

    The cloud computing-based virtual desktop infrastructure (VDI) allows access to computing environments with no limitations in terms of time or place such that it can permit the rapid establishment of a mobile hospital environment. The objective of this study was to investigate the empirical issues to be considered when establishing a virtual mobile environment using VDI technology in a hospital setting and to examine the utility of the technology with an Apple iPad during a physician's rounds as a case study. Empirical implementation issues were derived from a 910-bed tertiary national university hospital that recently launched a VDI system. During the physicians' rounds, we surveyed patient satisfaction levels with the VDI-based mobile consultation service with the iPad and the relationship between these levels of satisfaction and hospital revisits, hospital recommendations, and the hospital brand image. Thirty-five inpatients (including their next-of-kin) and seven physicians participated in the survey. Implementation issues pertaining to the VDI system arose with regard to the high-availability system architecture, wireless network infrastructure, and screen resolution of the system. Other issues were related to privacy and security, mobile device management, and user education. When the system was used in rounds, patients and their next-of-kin expressed high satisfaction levels, and a positive relationship was noted as regards patients' decisions to revisit the hospital and whether the use of the VDI system improved the brand image of the hospital. Mobile hospital environments have the potential to benefit both physicians and patients. The issues related to the implementation of the VDI system discussed here should be examined in advance for its successful adoption and implementation.

  8. Implementation Issues of Virtual Desktop Infrastructure and Its Case Study for a Physician's Round at Seoul National University Bundang Hospital

    PubMed Central

    Yoo, Sooyoung; Kim, Seok; Kim, Taegi; Kim, Jon Soo; Baek, Rong-Min; Suh, Chang Suk; Chung, Chin Youb

    2012-01-01

    Objectives The cloud computing-based virtual desktop infrastructure (VDI) allows access to computing environments with no limitations in terms of time or place such that it can permit the rapid establishment of a mobile hospital environment. The objective of this study was to investigate the empirical issues to be considered when establishing a virtual mobile environment using VDI technology in a hospital setting and to examine the utility of the technology with an Apple iPad during a physician's rounds as a case study. Methods Empirical implementation issues were derived from a 910-bed tertiary national university hospital that recently launched a VDI system. During the physicians' rounds, we surveyed patient satisfaction levels with the VDI-based mobile consultation service with the iPad and the relationship between these levels of satisfaction and hospital revisits, hospital recommendations, and the hospital brand image. Thirty-five inpatients (including their next-of-kin) and seven physicians participated in the survey. Results Implementation issues pertaining to the VDI system arose with regard to the high-availability system architecture, wireless network infrastructure, and screen resolution of the system. Other issues were related to privacy and security, mobile device management, and user education. When the system was used in rounds, patients and their next-of-kin expressed high satisfaction levels, and a positive relationship was noted as regards patients' decisions to revisit the hospital and whether the use of the VDI system improved the brand image of the hospital. Conclusions Mobile hospital environments have the potential to benefit both physicians and patients. The issues related to the implementation of the VDI system discussed here should be examined in advance for its successful adoption and implementation. PMID:23346476

  9. Linking multiple biodiversity informatics platforms with Darwin Core Archives

    PubMed Central

    2014-01-01

    Abstract We describe an implementation of the Darwin Core Archive (DwC-A) standard that allows for the exchange of biodiversity information contained within the Scratchpads virtual research environment with external collaborators. Using this single archive file Scratchpad users can expose taxonomies, specimen records, species descriptions and a range of other data to a variety of third-party aggregators and tools (currently Encyclopedia of Life, eMonocot Portal, CartoDB, and the Common Data Model) for secondary use. This paper describes our technical approach to dynamically building and validating Darwin Core Archives for the 600+ Scratchpad user communities, which can be used to serve the diverse data needs of all of our content partners. PMID:24723785
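
    A Darwin Core Archive is, at its core, a zip file holding delimited data plus a meta.xml descriptor that tells consumers how to parse it. The following is a minimal sketch of packaging an occurrence table this way; the descriptor is simplified relative to the full DwC text guidelines, and the file names and data are toy placeholders.

```python
import zipfile

# Simplified meta.xml: one core occurrence file, tab-delimited, with a
# header row, an id column, and a scientificName column.
META = """<archive xmlns="http://rs.tdwg.org/dwc/text/">
  <core rowType="http://rs.tdwg.org/dwc/terms/Occurrence"
        fieldsTerminatedBy="\\t" linesTerminatedBy="\\n" ignoreHeaderLines="1">
    <files><location>occurrence.txt</location></files>
    <id index="0"/>
    <field index="1" term="http://rs.tdwg.org/dwc/terms/scientificName"/>
  </core>
</archive>
"""

ROWS = "id\tscientificName\n1\tPuma concolor\n"  # toy data

# The archive itself is just a zip of the descriptor plus the data file(s).
with zipfile.ZipFile("dwca.zip", "w") as z:
    z.writestr("meta.xml", META)
    z.writestr("occurrence.txt", ROWS)
```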

  10. caGrid 1.0: An Enterprise Grid Infrastructure for Biomedical Research

    PubMed Central

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Phillips, Joshua; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2008-01-01

    Objective To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. Design An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including 1) discovery, 2) integrated and large-scale data analysis, and 3) coordinated study. Measurements The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications. Results The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: https://cabig.nci.nih.gov/workspaces/Architecture/caGrid. Conclusions While caGrid 1.0 is designed to address use cases in cancer research, the requirements associated with discovery, analysis and integration of large scale data, and coordinated studies are common in other biomedical fields. In this respect, caGrid 1.0 is the realization of a framework that can benefit the entire biomedical community. PMID:18096909

  11. Sharing Data and Analytical Resources Securely in a Biomedical Research Grid Environment

    PubMed Central

    Langella, Stephen; Hastings, Shannon; Oster, Scott; Pan, Tony; Sharma, Ashish; Permar, Justin; Ervin, David; Cambazoglu, B. Barla; Kurc, Tahsin; Saltz, Joel

    2008-01-01

    Objectives To develop a security infrastructure to support controlled and secure access to data and analytical resources in a biomedical research Grid environment, while facilitating resource sharing among collaborators. Design A Grid security infrastructure, called Grid Authentication and Authorization with Reliably Distributed Services (GAARDS), is developed as a key architecture component of the NCI-funded cancer Biomedical Informatics Grid (caBIG™). The GAARDS is designed to support in a distributed environment 1) efficient provisioning and federation of user identities and credentials; 2) group-based access control support with which resource providers can enforce policies based on community accepted groups and local groups; and 3) management of a trust fabric so that policies can be enforced based on required levels of assurance. Measurements GAARDS is implemented as a suite of Grid services and administrative tools. It provides three core services: Dorian for management and federation of user identities, Grid Trust Service for maintaining and provisioning a federated trust fabric within the Grid environment, and Grid Grouper for enforcing authorization policies based on both local and Grid-level groups. Results The GAARDS infrastructure is available as a stand-alone system and as a component of the caGrid infrastructure. More information about GAARDS can be accessed at http://www.cagrid.org. Conclusions GAARDS provides a comprehensive system to address the security challenges associated with environments in which resources may be located at different sites, requests to access the resources may cross institutional boundaries, and user credentials are created, managed, and revoked dynamically in a decentralized manner. PMID:18308979

  12. caGrid 1.0: an enterprise Grid infrastructure for biomedical research.

    PubMed

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Phillips, Joshua; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2008-01-01

    To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including 1) discovery, 2) integrated and large-scale data analysis, and 3) coordinated study. The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications. The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: https://cabig.nci.nih.gov/workspaces/Architecture/caGrid. While caGrid 1.0 is designed to address use cases in cancer research, the requirements associated with discovery, analysis and integration of large scale data, and coordinated studies are common in other biomedical fields. In this respect, caGrid 1.0 is the realization of a framework that can benefit the entire biomedical community.

  13. Health system informatics.

    PubMed

    Felkey, B G

    1997-02-01

    The application of informatics in a health system in general and to pharmacy in particular is discussed. Informatics is the use of information technology to enhance the quality of care, facilitate accountability, and assist in cost containment. Tying the pieces of health care into a seamless system using informatics principles yields a more rational approach to caregiving. A four-layer hierarchy of information systems can be found in any health system: layer 1, the foundational layer formed by a transaction-processing system; 2, the management information system; 3, decision support; and 4, advanced informatics applications such as expert systems. Other industries appear to be ahead of health care in investing in informatics applications. Pharmacy is one of the key health care professions that must adopt informatics. A stepwise structure for pharmacy informatics has been proposed; it consists of establishing a relationship with the patient, establishing a database, listing and ranking problems, choosing among alternatives, and planning and monitoring. Informatics should be approached by determining where the department is going strategically. Informatics standards will be needed. Pharmacists will need to use informatics to enhance their worth on the health care team and to improve patient care.
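
    The four-layer hierarchy above can be encoded as a simple classification, useful when inventorying a health system's applications; a minimal sketch follows. The layer labels come from the abstract, while the example systems are hypothetical.

```python
from enum import IntEnum

class InformaticsLayer(IntEnum):
    TRANSACTION_PROCESSING = 1   # foundational layer
    MANAGEMENT_INFORMATION = 2
    DECISION_SUPPORT = 3
    ADVANCED_APPLICATIONS = 4    # e.g., expert systems

# Hypothetical inventory of systems classified by layer.
inventory = {
    "pharmacy dispensing log": InformaticsLayer.TRANSACTION_PROCESSING,
    "monthly utilization reports": InformaticsLayer.MANAGEMENT_INFORMATION,
    "drug-interaction alerts": InformaticsLayer.DECISION_SUPPORT,
    "dosing expert system": InformaticsLayer.ADVANCED_APPLICATIONS,
}

for system, layer in sorted(inventory.items(), key=lambda kv: kv[1]):
    print(f"layer {layer.value}: {system}")
```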

  14. Engaging adolescents in a computer-based weight management program: avatars and virtual coaches could help.

    PubMed

    LeRouge, Cynthia; Dickhut, Kathryn; Lisetti, Christine; Sangameswaran, Savitha; Malasanos, Toree

    2016-01-01

    This research focuses on the potential ability of animated avatars (a digital representation of the user) and virtual agents (a digital representation of a coach, buddy, or teacher) to deliver computer-based interventions for adolescents' chronic weight management. An exploration of the acceptance and desire of teens to interact with avatars and virtual agents for self-management and behavioral modification was undertaken. The utilized approach was inspired by community-based participatory research. Data were collected in two phases: Phase 1) focus groups with teens, provider interviews, and parent interviews; and Phase 2) mid-range prototype assessment by teens and providers. Data from all stakeholder groups expressed great interest in avatars and virtual agents assisting self-management efforts. Adolescents felt the avatars and virtual agents could: 1) reinforce guidance and support, 2) fit within their lifestyle, and 3) help set future goals, particularly after witnessing the effect of their current behavior(s) on the projected physical appearance (external and internal organs) of avatars. Teens wanted 2 virtual characters: a virtual agent to act as a coach or teacher and an avatar (extension of themselves) to serve as a "buddy" for empathic support and guidance and as a surrogate for rewards. Preferred modalities for use include both mobile devices to accommodate access and desktop to accommodate preferences for maximum screen real estate to support virtualization of functions that are more contemplative and complex (e.g., goal setting). Adolescents expressed a desire for limited co-user access, which they could regulate. Data revealed certain barriers and facilitators that could affect adoption and use. The current study extends the support of teens, parents, and providers for adding avatars or virtual agents to traditional computer-based interactions. Data support the desire for a personal relationship with a virtual character, consistent with previous studies. The study provides a foundation for further work in the area of avatar-driven motivational interviewing. This study provides evidence supporting the inclusion of avatars and virtual agents, designed using participatory approaches, in the continuum of care. Doing so may increase the probability of engagement and long-term retention of overweight and obese adolescent users, and it suggests expanding current chronic care models toward more comprehensive, socio-technical representations.

  15. First field trial of Virtual Network Operator oriented network on demand (NoD) service provisioning over software defined multi-vendor OTN networks

    NASA Astrophysics Data System (ADS)

    Li, Yajie; Zhao, Yongli; Zhang, Jie; Yu, Xiaosong; Chen, Haoran; Zhu, Ruijie; Zhou, Quanwei; Yu, Chenbei; Cui, Rui

    2017-01-01

    A Virtual Network Operator (VNO) is a provider and reseller of network services from other telecommunications suppliers. These providers are categorized as virtual because they do not own the underlying telecommunication infrastructure. In terms of business operation, a VNO can provide customers with personalized services by leasing network infrastructure from traditional network providers. The unique business modes of VNOs have led to the emergence of network on demand (NoD) services. Conventional network provisioning involves a series of manual operations and configurations, which makes it slow and costly. Considering the advantages of Software Defined Networking (SDN), this paper proposes a novel NoD service provisioning solution to satisfy the private-network needs of VNOs. The solution is first verified in real software-defined multi-domain optical networks with multi-vendor OTN equipment. With the proposed solution, an NoD service can be deployed via online web portals in near-real time. It reinvents the customer experience and redefines how network services are delivered to customers via an online self-service portal. Ultimately, this means a customer will be able to simply go online, click a few buttons, and have new services almost instantaneously.
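
    From the customer's side, a self-service NoD order reduces to a single API call to the portal. The sketch below illustrates that interaction shape only; the endpoint URL, payload fields, and token are hypothetical placeholders, not the paper's actual interface.

```python
import requests

# Hypothetical order: a point-to-point private line between two sites.
ORDER = {
    "service": "private-line",
    "endpoints": ["site-A", "site-B"],
    "bandwidth_gbps": 10,
    "duration_hours": 24,
}

resp = requests.post(
    "https://portal.example.net/api/v1/nod/orders",   # hypothetical portal URL
    json=ORDER,
    headers={"Authorization": "Bearer <token>"},      # placeholder credential
    timeout=10,
)
resp.raise_for_status()
print(resp.json())   # e.g., an order id to poll until the circuit is active
```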

  16. The internal challenges of medical informatics.

    PubMed

    Gell, G

    1997-03-01

    Haux's [7] basic assumption that the object of medical informatics is: "... to assure and to improve the quality of healthcare as well as the quality of research and education in medicine and in the health sciences ..." is taken as a starting point to discuss the three main topics: What is the meaning of medical informatics (i.e. what should be the main activities of medical informatics to bring maximum benefit to medicine)? What are the achievements and failures of medical informatics today (again considering the impact on the quality of healthcare)? What are the main challenges? Concerning the definition of medical informatics it is argued that one should not hide the link to basic informatics and, for that matter, to computers, completely behind abstract definitions. After an analysis of the purposes of the definition of a discipline, a differentiated definition of the scope of medical informatics, rather general when concerning the field of scientific interest, more focused when concerning the practical (constructive) applications, is proposed. Contrasting Haux's chapter on achievements of medical informatics we concentrate on and analyse unfulfilled promises of medical informatics to derive lessons for the future and to propose 'generic' (or core) tasks of medical informatics to meet the challenges of the future. A set of 'internal challenges' of medical informatics to change priorities and attitudes within the discipline is put forward to enable medical informatics to meet the 'external challenges' listed by Haux.

  17. Enterprise Cloud Architecture for Chinese Ministry of Railway

    NASA Astrophysics Data System (ADS)

    Shan, Xumei; Liu, Hefeng

    Enterprises like the PRC Ministry of Railways (MOR) face various challenges, ranging from highly distributed computing environments to low legacy-system utilization; cloud computing is increasingly regarded as a workable solution to address this. This article describes a full-scale cloud solution with Intel Tashi as the virtual machine infrastructure layer, Hadoop HDFS as the computing platform, and a self-developed SaaS interface, gluing the virtual machines and HDFS together with the Xen hypervisor. As a result, on-demand application and deployment of computing tasks are addressed for MOR's real working scenarios at the end of the article.

  18. Scientific Use Cases for the Virtual Atomic and Molecular Data Center

    NASA Astrophysics Data System (ADS)

    Dubernet, M. L.; Aboudarham, J.; Ba, Y. A.; Boiziot, M.; Bottinelli, S.; Caux, E.; Endres, C.; Glorian, J. M.; Henry, F.; Lamy, L.; Le Sidaner, P.; Møller, T.; Moreau, N.; Rénié, C.; Roueff, E.; Schilke, P.; Vastel, C.; Zwoelf, C. M.

    2014-12-01

    The VAMDC Consortium is a worldwide consortium that federates interoperable Atomic and Molecular databases through an e-science infrastructure. The contained data are of the highest scientific quality and are crucial for many applications: astrophysics, atmospheric physics, fusion, plasma and lighting technologies, health, etc. In this paper we present astrophysical scientific use cases involving the VAMDC e-infrastructure. These cover very different applications, such as: (i) modeling the spectra of interstellar objects using the myXCLASS software tool implemented in the Common Astronomy Software Applications package (CASA) or using the CASSIS software tool, in its stand-alone version or implemented in the Herschel Interactive Processing Environment (HIPE); (ii) the use of Virtual Observatory tools accessing VAMDC databases; (iii) access to VAMDC from the Paris solar BASS2000 portal; (iv) the combination of tools and databases from the APIS service (Auroral Planetary Imaging and Spectroscopy); (v) the combination of heterogeneous data for application to the interstellar medium with the SPECTCOL tool.

  19. Applied virtual reality at the Research Triangle Institute

    NASA Technical Reports Server (NTRS)

    Montoya, R. Jorge

    1994-01-01

    Virtual Reality (VR) is a way for humans to use computers in visualizing, manipulating and interacting with large geometric data bases. This paper describes a VR infrastructure and its application to marketing, modeling, architectural walk-through, and training problems. VR integration techniques used in these applications are based on a uniform approach which promotes portability and reusability of developed modules. For each problem, a 3D object data base is created using data captured by hand or electronically. The object's realism is enhanced through either procedural or photo textures. The virtual environment is created and populated with the data base using software tools which also support interactions with and immersivity in the environment. These capabilities are augmented by other sensory channels such as voice recognition, 3D sound, and tracking. Four applications are presented: a virtual furniture showroom, virtual reality models of the North Carolina Global TransPark, a walk through the Dresden Frauenkirche, and the maintenance training simulator for the National Guard.

  20. Different tracks for pathology informatics fellowship training: Experiences of and input from trainees in a large multisite fellowship program

    PubMed Central

    Levy, Bruce P.; McClintock, David S.; Lee, Roy E.; Lane, William J.; Klepeis, Veronica E.; Baron, Jason M.; Onozato, Maristela L.; Kim, JiYeon; Brodsky, Victor; Beckwith, Bruce; Kuo, Frank; Gilbertson, John R.

    2012-01-01

    Background: Pathology Informatics is a new field; a field that is still defining itself even as it begins the formalization, accreditation, and board certification process. At the same time, Pathology itself is changing in a variety of ways that impact informatics, including subspecialization and an increased use of data analysis. In this paper, we examine how these changes impact both the structure of Pathology Informatics fellowship programs and the fellows’ goals within those programs. Materials and Methods: As part of our regular program review process, the fellows evaluated the value and effectiveness of our existing fellowship tracks (Research Informatics, Clinical Two-year Focused Informatics, Clinical One-year Focused Informatics, and Clinical 1 + 1 Subspecialty Pathology and Informatics). They compared their education, informatics background, and anticipated career paths and analyzed them for correlations between those parameters and the fellowship track chosen. All current and past fellows of the program were actively involved with the project. Results: Fellows’ anticipated career paths correlated very well with the specific tracks in the program. A small set of fellows (Clinical – one or two year – Focused Informatics tracks) anticipated clinical careers primarily focused in informatics (Director of Informatics). The majority of the fellows, however, anticipated a career practicing in a Pathology subspecialty, using their informatics training to enhance that practice (Clinical 1 + 1 Subspecialty Pathology and Informatics Track). Significantly, all fellows on this track reported they would not have considered a Clinical Two-year Focused Informatics track if it was the only track offered. The Research and the Clinical One-year Focused Informatics tracks each displayed unique value for different situations. Conclusions: It seems a “one size fits all” fellowship structure does not fit the needs of the majority of potential Pathology Informatics candidates. Increasingly, these fellowships must be able to accommodate the needs of candidates anticipating a wide range of Pathology Informatics career paths, be able to accommodate Pathology's increasingly subspecialized structure, and do this in a way that respects the multiple fellowships needed to become a subspecialty pathologist and informatician. This is further complicated as Pathology Informatics begins to look outward and takes its place in the growing, and still ill-defined, field of Clinical Informatics, a field that is not confined to just one medical specialty, to one way of practicing medicine, or to one way of providing patient care. PMID:23024889

  1. Toward a Blended Ontology: Applying Knowledge Systems to ...

    EPA Pesticide Factsheets

    Bionanomedicine and environmental research share a need for common terms and ontologies. This study applied knowledge systems, data mining, and bibliometrics to nano-scale ADME research from 1991 to 2011. The prominence of nano-ADME in environmental research began to exceed the publication rate in medical research in 2006. That trend appears to continue as a result of the growing number of products in commerce using nanotechnology, that is, a 5-fold growth in the number of countries with nanomaterials research centers. Funding for this research virtually did not exist prior to 2002, whereas today both medical and environmental research is funded globally. Key nanoparticle research began with pharmacology and therapeutic drug-delivery and contrast agents, but the advances have found utility in the environmental research community. As evidence, research on ultrafine aerosols and aquatic colloids increased 6-fold, indicating a new emphasis on environmental nanotoxicology. User-directed expert elicitation from the engineering and chemical/ADME domains can be combined with appropriate Boolean logic and queries to define the corpus of nanoparticle interest. The study combined pharmacological expertise and informatics to identify the corpus by building logical conclusions and observations. Publication-records informatics can lead to an enhanced understanding of the connectivity between fields, as well as help overcome the differences in ontology between the fields. The National Exposure Resea

  2. Monitoring and Discovery for Self-Organized Network Management in Virtualized and Software Defined Networks

    PubMed Central

    Valdivieso Caraguay, Ángel Leonardo; García Villalba, Luis Javier

    2017-01-01

    This paper presents the Monitoring and Discovery Framework of the Self-Organized Network Management in Virtualized and Software Defined Networks SELFNET project. This design takes into account the scalability and flexibility requirements needed by 5G infrastructures. In this context, the present framework focuses on gathering and storing the information (low-level metrics) related to physical and virtual devices, cloud environments, flow metrics, SDN traffic and sensors. Similarly, it provides the monitoring data as a generic information source in order to allow the correlation and aggregation tasks. Our design enables the collection and storing of information provided by all the underlying SELFNET sublayers, including the dynamically onboarded and instantiated SDN/NFV Apps, also known as SELFNET sensors. PMID:28362346
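
    A minimal sketch of gathering low-level host metrics and normalizing them into a generic record for later correlation and aggregation, in the spirit of the framework above. It uses the third-party psutil package; the record layout and the "sensor" label are assumptions, not the SELFNET schema.

```python
import time
import psutil

def collect():
    """Gather low-level host metrics into one generic, timestamped record."""
    net = psutil.net_io_counters()
    return {
        "ts": time.time(),
        "source": "host-monitor",            # e.g., one SELFNET-style sensor
        "metrics": {
            "cpu_percent": psutil.cpu_percent(interval=1),
            "mem_percent": psutil.virtual_memory().percent,
            "net_bytes_sent": net.bytes_sent,
            "net_bytes_recv": net.bytes_recv,
        },
    }

print(collect())   # a store/aggregator would consume records like this one
```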

  3. An element search ant colony technique for solving virtual machine placement problem

    NASA Astrophysics Data System (ADS)

    Srija, J.; Rani John, Rose; Kanaga, Grace Mary, Dr.

    2017-09-01

    Data centres in the cloud environment play a key role in providing infrastructure for ubiquitous computing, pervasive computing, mobile computing, etc. This computing paradigm tries to utilize the available resources in order to provide services. Hence, maximizing resource utilization while minimizing power consumption has become a challenging task for researchers. In this paper we propose a direct-guidance ant colony system for effectively mapping virtual machines to physical machines with maximal resource utilization and minimal power consumption. The proposed algorithm is compared with an existing ant colony approach to the virtual machine placement problem and proves to provide better results than the existing technique.
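
    A minimal ant-colony sketch for VM placement (a generic ACO, not the paper's direct-guidance variant): ants assign VMs to hosts with probability proportional to pheromone, feasible assignments are scored by how few hosts they power on (a crude proxy for power consumption), and pheromone is reinforced on the best assignment found. All demands and capacities are toy values.

```python
import random

VMS   = [2, 3, 1, 4, 2]          # CPU demand of each VM
HOSTS = [8, 8, 8]                # CPU capacity of each host
ANTS, ITERATIONS, EVAPORATION = 10, 50, 0.1

# pheromone[v][h]: learned desirability of placing VM v on host h
pheromone = [[1.0] * len(HOSTS) for _ in VMS]

def build_assignment():
    """One ant constructs a feasible VM-to-host plan, guided by pheromone."""
    load = [0.0] * len(HOSTS)
    plan = []
    for v, demand in enumerate(VMS):
        feasible = [h for h in range(len(HOSTS)) if load[h] + demand <= HOSTS[h]]
        if not feasible:
            return None, None
        weights = [pheromone[v][h] for h in feasible]
        h = random.choices(feasible, weights=weights)[0]
        load[h] += demand
        plan.append(h)
    return plan, load

def cost(load):
    # proxy for power consumption: number of powered-on hosts (fewer is better)
    return sum(1 for used in load if used > 0)

best_plan, best_cost = None, float("inf")
for _ in range(ITERATIONS):
    for _ in range(ANTS):
        plan, load = build_assignment()
        if plan is not None and cost(load) < best_cost:
            best_plan, best_cost = plan, cost(load)
    # evaporate everywhere, then reinforce the best-so-far assignment
    for v in range(len(VMS)):
        for h in range(len(HOSTS)):
            pheromone[v][h] *= (1 - EVAPORATION)
    if best_plan is not None:
        for v, h in enumerate(best_plan):
            pheromone[v][h] += 1.0 / best_cost

print("best placement:", best_plan, "hosts powered on:", best_cost)
```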

  4. Monitoring and Discovery for Self-Organized Network Management in Virtualized and Software Defined Networks.

    PubMed

    Caraguay, Ángel Leonardo Valdivieso; Villalba, Luis Javier García

    2017-03-31

    This paper presents the Monitoring and Discovery Framework of the Self-Organized Network Management in Virtualized and Software Defined Networks SELFNET project. This design takes into account the scalability and flexibility requirements needed by 5G infrastructures. In this context, the present framework focuses on gathering and storing the information (low-level metrics) related to physical and virtual devices, cloud environments, flow metrics, SDN traffic and sensors. Similarly, it provides the monitoring data as a generic information source in order to allow the correlation and aggregation tasks. Our design enables the collection and storing of information provided by all the underlying SELFNET sublayers, including the dynamically onboarded and instantiated SDN/NFV Apps, also known as SELFNET sensors.

  5. Brokering Capabilities for EarthCube - supporting Multi-disciplinary Earth Science Research

    NASA Astrophysics Data System (ADS)

    Jodha Khalsa, Siri; Pearlman, Jay; Nativi, Stefano; Browdy, Steve; Parsons, Mark; Duerr, Ruth; Pearlman, Francoise

    2013-04-01

    The goal of NSF's EarthCube is to create a sustainable infrastructure that enables the sharing of all geosciences data, information, and knowledge in an open, transparent and inclusive manner. Brokering of data and improvements in discovery and access are a key to data exchange and promotion of collaboration across the geosciences. In this presentation we describe an evolutionary process of infrastructure and interoperability development focused on participation of existing science research infrastructures and augmenting them for improved access. All geosciences communities already have, to a greater or lesser degree, elements of an information infrastructure in place. These elements include resources such as data archives, catalogs, and portals as well as vocabularies, data models, protocols, best practices and other community conventions. What is necessary now is a process for leveraging these diverse infrastructure elements into an overall infrastructure that provides easy discovery, access and utilization of resources across disciplinary boundaries. Brokers connect disparate systems with only minimal burdens upon those systems, and enable the infrastructure to adjust to new technical developments and scientific requirements as they emerge. Robust cyberinfrastructure will arise only when social, organizational, and cultural issues are resolved in tandem with the creation of technology-based services. This is a governance issue, but is facilitated by infrastructure capabilities that can impact the uptake of new interdisciplinary collaborations and exchange. Thus brokering must address both the cyberinfrastructure and computer technology requirements and also the social issues to allow improved cross-domain collaborations. This is best done through use-case-driven requirements and agile, iterative development methods. It is important to start by solving real (not hypothetical) information access and use problems via small pilot projects that develop capabilities targeted to specific communities. Brokering, as a critical capability for connecting systems, evolves over time through more connections and increased functionality. This adaptive process allows for continual evaluation as to how well science-driven use cases are being met. There is a near term, and possibly unique, opportunity through EarthCube and European e-Infrastructure projects to increase the impact and interconnectivity of projects. In the developments described in this presentation, brokering has been demonstrated to be an essential part of a robust, adaptive technical infrastructure, and demonstration and user scenarios can address both the governance and detailed implementation paths forward. The EarthCube Brokering roadmap proposes the expansion of brokering pilots into fully operational prototypes that work with the broader science and informatics communities to answer these questions, connect existing and emerging systems, and evolve the EarthCube infrastructure.

  6. The Use and Interpretation of Quasi-Experimental Studies in Medical Informatics

    PubMed Central

    Harris, Anthony D.; McGregor, Jessina C.; Perencevich, Eli N.; Furuno, Jon P.; Zhu, Jingkun; Peterson, Dan E.; Finkelstein, Joseph

    2006-01-01

    Quasi-experimental study designs, often described as nonrandomized, pre-post intervention studies, are common in the medical informatics literature. Yet little has been written about the benefits and limitations of the quasi-experimental approach as applied to informatics studies. This paper outlines a relative hierarchy and nomenclature of quasi-experimental study designs that is applicable to medical informatics intervention studies. In addition, the authors performed a systematic review of two medical informatics journals, the Journal of the American Medical Informatics Association (JAMIA) and the International Journal of Medical Informatics (IJMI), to determine the number of quasi-experimental studies published and how the studies are classified on the above-mentioned relative hierarchy. They hope that future medical informatics studies will implement higher level quasi-experimental study designs that yield more convincing evidence for causal links between medical informatics interventions and outcomes. PMID:16221933
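
    As a concrete, hypothetical example of analyzing one such design, the sketch below fits a segmented (interrupted time-series) regression to simulated pre-post intervention data; the monthly error counts and the effect size are invented purely for illustration.

        import numpy as np

        # Hypothetical monthly order-entry error counts, 12 months before and after
        # an informatics intervention (an interrupted time-series design).
        np.random.seed(0)
        months = np.arange(24)
        post = (months >= 12).astype(float)             # 1 after the intervention
        errors = 50 - 0.5 * months - 8 * post + np.random.normal(0, 2, 24)

        # Segmented regression: intercept, underlying trend, and level change at the
        # interruption; a level-change estimate near -8 supports an intervention effect.
        X = np.column_stack([np.ones(24), months, post])
        beta, *_ = np.linalg.lstsq(X, errors, rcond=None)
        print(f"baseline={beta[0]:.1f}, trend={beta[1]:.2f}, level change={beta[2]:.1f}")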

  7. The Health Information Technology Competencies Tool: Does It Translate for Nursing Informatics in the United States?

    PubMed

    Sipes, Carolyn; Hunter, Kathleen; McGonigle, Dee; West, Karen; Hill, Taryn; Hebda, Toni

    2017-12-01

    Information technology use in healthcare delivery mandates a prepared workforce. The initial Health Information Technology Competencies tool resulted from a 2-year transatlantic effort by experts from the US and European Union to identify approaches to develop skills and knowledge needed by healthcare workers. It was determined that competencies must be identified before strategies are established, resulting in a searchable database of more than 1000 competencies representing five domains, five skill levels, and more than 250 roles. Health Information Technology Competencies is available at no cost and supports role- or competency-based queries. Health Information Technology Competencies developers suggest its use for curriculum planning, job descriptions, and professional development. The Chamberlain College of Nursing informatics research team examined Health Information Technology Competencies for its possible application to our research and our curricular development, comparing it originally with the TIGER-based Assessment of Nursing Informatics Competencies and Nursing Informatics Competency Assessment of Level 3 and Level 4 tools, which examine informatics competencies at four levels of nursing practice. Additional analysis involved the 2015 Nursing Informatics: Scope and Standards of Practice. Informatics is a Health Information Technology Competencies domain, so clear delineation of nursing-informatics competencies was expected. Researchers found TIGER-based Assessment of Nursing Informatics Competencies and Nursing Informatics Competency Assessment of Level 3 and Level 4 differed from Health Information Technology Competencies 2016 in focus, definitions, ascribed competencies, and defined levels of expertise. When Health Information Technology Competencies 2017 was compared against the nursing informatics scope and standards, researchers found an increase in the number of informatics competencies but not to a significant degree. This is not surprising, given that Health Information Technology Competencies includes all healthcare workers, while the TIGER-based Assessment of Nursing Informatics Competencies and Nursing Informatics Competency Assessment of Level 3 and Level 4 tools and the American Nurses Association Nursing Informatics: Scope and Standards of Practice are nurse-specific. No clear cross-mapping across these tools and the standards of nursing informatics practice exists. Further examination and review are needed to translate Health Information Technology Competencies as a viable tool for nursing informatics use in the US.
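
    A role- or competency-based query of the kind described above can be pictured as filtering a table of (domain, level, role, skill) records. The records and field names in the sketch below are invented; they only illustrate the query style, not the actual Health Information Technology Competencies database.

        # Invented sample of competency records shaped like the searchable database
        # the abstract describes (five domains, five skill levels, 250+ roles).
        competencies = [
            {"domain": "Informatics", "level": 3, "role": "Nurse",
             "skill": "Evaluate clinical decision support output"},
            {"domain": "Informatics", "level": 1, "role": "Nurse",
             "skill": "Document care in the EHR"},
            {"domain": "Administration", "level": 4, "role": "Department Head",
             "skill": "Plan HIT budgets"},
        ]

        def query(role=None, domain=None, min_level=1):
            """Role- or competency-based lookup over the records."""
            return [c for c in competencies
                    if (role is None or c["role"] == role)
                    and (domain is None or c["domain"] == domain)
                    and c["level"] >= min_level]

        for c in query(role="Nurse", domain="Informatics", min_level=2):
            print(c["level"], c["skill"])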

  8. Case study: impact of technology investment on lead discovery at Bristol-Myers Squibb, 1998-2006.

    PubMed

    Houston, John G; Banks, Martyn N; Binnie, Alastair; Brenner, Stephen; O'Connell, Jonathan; Petrillo, Edward W

    2008-01-01

    We review strategic approaches taken over an eight-year period at BMS to implement new high-throughput approaches to lead discovery. Investments in compound management infrastructure and chemistry library production capability allowed significant growth in the size, diversity and quality of the BMS compound collection. Screening platforms were upgraded with robust automated technology to support miniaturized assay formats, while workflows and information handling technologies were streamlined for improved performance. These technology changes drove the need for a supporting organization in which critical engineering, informatics and scientific skills were more strongly represented. Taken together, these investments led to significant improvements in speed and productivity as well as a greater impact of screening campaigns on the initiation of new drug discovery programs.

  9. Using the LOINC Semantic Structure to Integrate Community-based Survey Items into a Concept-based Enterprise Data Dictionary to Support Comparative Effectiveness Research.

    PubMed

    Co, Manuel C; Boden-Albala, Bernadette; Quarles, Leigh; Wilcox, Adam; Bakken, Suzanne

    2012-01-01

    In designing informatics infrastructure to support comparative effectiveness research (CER), it is necessary to implement approaches for integrating heterogeneous data sources such as clinical data typically stored in clinical data warehouses and those that are normally stored in separate research databases. One strategy to support this integration is the use of a concept-oriented data dictionary with a set of semantic terminology models. The aim of this paper is to illustrate the use of the semantic structure of Clinical LOINC (Logical Observation Identifiers Names and Codes) in integrating community-based survey items into the Medical Entities Dictionary (MED) to support the integration of survey data with clinical data for CER studies.
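
    LOINC names observations along six semantic axes (Component, Property, Time, System, Scale, Method), and the integration strategy amounts to expressing each survey item along those axes so it can live beside clinical observations in the concept-oriented dictionary. The sketch below encodes one hypothetical survey item that way; the local code and the item itself are invented.

        from dataclasses import dataclass

        @dataclass
        class DictionaryConcept:
            """Concept-oriented entry modeled on Clinical LOINC's six semantic axes;
            the example survey item and local code are invented for illustration."""
            local_code: str
            component: str    # what is measured or asked
            property: str     # kind of quantity
            time_aspect: str
            system: str       # who or what is observed
            scale: str        # quantitative, ordinal, nominal, ...
            method: str       # how the observation was obtained

        survey_item = DictionaryConcept(
            local_code="MED-90210",
            component="Days of moderate physical activity per week",
            property="Num",
            time_aspect="1W",
            system="^Patient",
            scale="Qn",
            method="Community survey",
        )
        print(survey_item)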

  10. Consistent Chemical Mechanism from Collaborative Data Processing

    DOE PAGES

    Slavinskaya, Nadezda; Starcke, Jan-Hendrik; Abbasi, Mehdi; ...

    2016-04-01

    The numerical toolbox of the Process Informatics Model (PrIMe) is a mathematically rigorous and numerically efficient approach to the analysis and optimization of chemical systems. It handles heterogeneous data and is scalable to a large number of parameters. The Bound-to-Bound Data Collaboration module of the automated data-centric infrastructure of PrIMe was used for systematic uncertainty and data-consistency analyses of the H2/CO reaction model (73/17) and 94 experimental targets (ignition delay times). An empirical rule for evaluating shock tube experimental data is proposed. The initial results demonstrate clear benefits of the PrIMe methods for evaluating kinetic data quality and data consistency and for developing predictive kinetic models.
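
    The consistency question at the heart of Bound-to-Bound Data Collaboration is whether any point in parameter space satisfies every experimental bound at once. The sketch below reduces that idea to a single parameter with linear surrogate predictions; real PrIMe analyses use many parameters and higher-order surrogates, so this is only a schematic under invented numbers.

        # Each target bounds a surrogate prediction a*x + b; the dataset is
        # consistent only if some parameter value x satisfies every bound at once.
        targets = [
            # (slope a, intercept b, lower bound, upper bound)
            (2.0, 0.0, 1.0, 3.0),
            (1.0, 0.5, 0.8, 2.0),
            (4.0, -1.0, 0.0, 6.0),
        ]

        lo, hi = float("-inf"), float("inf")
        for a, b, lower, upper in targets:
            # Invert a*x + b in [lower, upper] into an interval for x (a > 0 here),
            # then intersect the intervals from all targets.
            lo = max(lo, (lower - b) / a)
            hi = min(hi, (upper - b) / a)

        print("consistent" if lo <= hi else "inconsistent", (lo, hi))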

  11. Leveraging EHR Data for Outcomes and Comparative Effectiveness Research in Oncology

    PubMed Central

    Harris, Marcelline R.; Buyuktur, Ayse G.; Clark, Patricia M.; An, Lawrence C.; Hanauer, David A.

    2012-01-01

    Along with the increasing adoption of electronic health records (EHRs) are expectations that data collected within EHRs will be readily available for outcomes and comparative effectiveness research. Yet the ability to effectively share and reuse data depends on implementing and configuring EHRs with these goals in mind from the beginning. Data sharing and integration must be planned both locally as well as nationally. The rich data transmission and semantic infrastructure developed by the National Cancer Institute (NCI) for research provides an excellent example of moving beyond paper-based paradigms and exploiting the power of semantically robust, network-based systems, and engaging both domain and informatics expertise. Similar efforts are required to address current challenges in sharing EHR data. PMID:22948276

  12. "Tactic": Traffic Aware Cloud for Tiered Infrastructure Consolidation

    ERIC Educational Resources Information Center

    Sangpetch, Akkarit

    2013-01-01

    Large-scale enterprise applications are deployed as distributed applications. These applications consist of many inter-connected components with heterogeneous roles and complex dependencies. Each component typically consumes 5-15% of the server capacity. Deploying each component as a separate virtual machine (VM) allows us to consolidate the…

  13. STEM Progress in Katrina's Wake

    ERIC Educational Resources Information Center

    Gonzales, Dana

    2008-01-01

    When Hurricane Katrina hit New Orleans in 2005, it caused a devastating impact on the Crescent City's public education system. The devastating storm and its aftermath completely wiped out the educational infrastructure of the New Orleans Public Schools, making one of the country's largest metropolitan school districts virtually disappear. Two…

  14. Chaos Breeds Life: Finding Opportunities for Library Advancement during a Period of Collection Schizophrenia.

    ERIC Educational Resources Information Center

    Neal, James G.

    1999-01-01

    Examines the changes that are affecting academic library collection development. Highlights include computer technology; digital information; networking; virtual reality; hypertext; fair use and copyrights; technological infrastructure; digital libraries; information policy; academic and scholarly publishing; and experiences at the Johns Hopkins…

  15. Metropolis revisited: the evolving role of librarians in informatics education for the health professions

    PubMed Central

    King, Samuel B.; Lapidus, Mariana

    2015-01-01

    Objective: The authors' goal was to assess changes in the role of librarians in informatics education from 2004 to 2013. This is a follow-up to “Metropolis Redux: The Unique Importance of Library Skills in Informatics,” a 2004 survey of informatics programs. Methods: An electronic survey was conducted in January 2013 and sent to librarians via the MEDLIB-L email discussion list, the library section of the American Association of Colleges of Pharmacy, the Medical Informatics Section of the Medical Library Association, the Information Technology Interest Group of the Association of College and Research Libraries/New England Region, and various library directors across the country. Results: Librarians from fifty-five institutions responded to the survey. Of these respondents, thirty-four included librarians in nonlibrary aspects of informatics training. Fifteen institutions have librarians participating in leadership positions in their informatics programs. Compared to the earlier survey, the role of librarians has evolved. Conclusions: Librarians possess skills that enable them to participate in informatics programs beyond a narrow library focus. Librarians currently perform significant leadership roles in informatics education. There are opportunities for librarian interdisciplinary collaboration in informatics programs. Implications: Informatics is much more than the study of technology. The information skills that librarians bring to the table enrich and broaden the study of informatics in addition to adding value to the library profession itself. PMID:25552939

  16. Metropolis revisited: the evolving role of librarians in informatics education for the health professions.

    PubMed

    King, Samuel B; Lapidus, Mariana

    2015-01-01

    The authors' goal was to assess changes in the role of librarians in informatics education from 2004 to 2013. This is a follow-up to "Metropolis Redux: The Unique Importance of Library Skills in Informatics," a 2004 survey of informatics programs. An electronic survey was conducted in January 2013 and sent to librarians via the MEDLIB-L email discussion list, the library section of the American Association of Colleges of Pharmacy, the Medical Informatics Section of the Medical Library Association, the Information Technology Interest Group of the Association of College and Research Libraries/New England Region, and various library directors across the country. Librarians from fifty-five institutions responded to the survey. Of these respondents, thirty-four included librarians in nonlibrary aspects of informatics training. Fifteen institutions have librarians participating in leadership positions in their informatics programs. Compared to the earlier survey, the role of librarians has evolved. Librarians possess skills that enable them to participate in informatics programs beyond a narrow library focus. Librarians currently perform significant leadership roles in informatics education. There are opportunities for librarian interdisciplinary collaboration in informatics programs. Informatics is much more than the study of technology. The information skills that librarians bring to the table enrich and broaden the study of informatics in addition to adding value to the library profession itself.

  17. EVER-EST: European Virtual Environment for Research in Earth Science Themes

    NASA Astrophysics Data System (ADS)

    Glaves, H.; Albani, M.

    2016-12-01

    EVER-EST is an EC Horizon 2020 project whose goal is to develop a Virtual Research Environment (VRE) providing a state-of-the-art solution that allows Earth scientists to preserve their work and publications for reference and future reuse, and to share them with others. The availability of such a solution, based on an innovative concept and a state-of-the-art technology infrastructure, will considerably enhance how Earth scientists work together within their own institution and across other organisations, regions and countries. The concept of Research Objects (ROs), used in the Earth sciences for the first time, will form the backbone of the EVER-EST VRE infrastructure. ROs will enhance the ability to preserve, re-use and share entire scientific workflows, or individual parts of them, and all the resources related to a specific scientific investigation. These ROs may also be used as part of the scholarly publication process. EVER-EST builds on technologies developed during almost 15 years of research on Earth science data management infrastructures. The EVER-EST VRE Service Oriented Architecture is being meticulously designed to best accommodate the requirements of a wide range of Earth science communities and use cases: the focus is on common requirements and on minimising the level of complexity in the EVER-EST VRE to ensure future sustainability within the user communities beyond the end of the project. The EVER-EST VRE will be validated through its customisation and deployment by four Virtual Research Communities (VRCs) from different Earth science disciplines and will support enhanced interaction between data providers and scientists in the Earth science domain. User communities will range from bio-marine researchers (Sea Monitoring use case), to common foreign and security policy institutions and stakeholders (Land Monitoring for Security use case), natural hazards forecasting systems (Natural Hazards use case), and disaster and risk management teams (Supersites use case). The EVER-EST project will coordinate and collaborate with other relevant initiatives worldwide, mainly through the Research Data Alliance (RDA) Virtual Research Environments interest group (VRE-IG).
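
    A Research Object can be pictured as a manifest that aggregates the datasets, workflows and papers of one investigation, plus annotations, so the bundle can be preserved, shared and reused as a unit. The sketch below shows such a manifest as a plain dictionary; the identifiers, URIs and field names are invented, not the EVER-EST schema.

        import json

        # Hypothetical Research Object manifest: one bundle per investigation,
        # aggregating its resources so the whole package travels together.
        research_object = {
            "id": "ro:sea-monitoring-2016-001",
            "creator": "Hypothetical VRC researcher",
            "aggregates": [
                {"type": "dataset",  "uri": "https://example.org/data/sst-2016.nc"},
                {"type": "workflow", "uri": "https://example.org/wf/chlorophyll.cwl"},
                {"type": "paper",    "uri": "https://doi.org/10.xxxx/example"},
            ],
            "annotations": [
                {"about": "workflow", "text": "Step 2 needs regional masking."}
            ],
        }
        print(json.dumps(research_object, indent=2))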

  18. Application of the dynamically allocated virtual clustering management system to emulated tactical network experimentation

    NASA Astrophysics Data System (ADS)

    Marcus, Kelvin

    2014-06-01

    The U.S. Army Research Laboratory (ARL) has built a "Network Science Research Lab" to support research that aims to improve the ability to analyze, predict, design, and govern complex systems that interweave the social/cognitive, information, and communication network genres. Researchers at ARL and the Network Science Collaborative Technology Alliance (NS-CTA), a collaborative research alliance funded by ARL, conducted experimentation to determine whether automated network monitoring tools and task-aware agents deployed within an emulated tactical wireless network could potentially increase the retrieval of relevant data from heterogeneous distributed information nodes. ARL and NS-CTA required the capability to perform this experimentation over clusters of heterogeneous nodes with emulated wireless tactical networks, where each node could contain different operating systems, application sets, and physical hardware attributes. Researchers utilized the Dynamically Allocated Virtual Clustering Management System (DAVC) to address each of the infrastructure support requirements necessary for conducting their experimentation. The DAVC is an experimentation infrastructure that provides the means to dynamically create, deploy, and manage virtual clusters of heterogeneous nodes within a cloud computing environment, based upon resource utilization such as CPU load, available RAM, and hard disk space. The DAVC uses 802.1Q Virtual LANs (VLANs) to prevent experimentation crosstalk and to allow for complex private networks. Clusters created by the DAVC system can be utilized for software development, experimentation, and integration with existing hardware and software. The goal of this paper is to explore how ARL and the NS-CTA leveraged the DAVC to create, deploy, and manage multiple experimentation clusters to support their experimentation goals.
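
    The resource-aware placement such a system performs when deploying a virtual cluster can be sketched as choosing, for each requested VM, the physical host with the most headroom, then tagging the VM's traffic with an 802.1Q VLAN to isolate the experiment. The host data, scoring rule, and API below are invented for illustration; they are not the DAVC's actual interfaces.

        # Invented host inventory with free-resource figures.
        hosts = [
            {"name": "host-a", "cpu_free": 0.35, "ram_free_gb": 12, "disk_free_gb": 200},
            {"name": "host-b", "cpu_free": 0.70, "ram_free_gb": 48, "disk_free_gb": 500},
            {"name": "host-c", "cpu_free": 0.55, "ram_free_gb": 24, "disk_free_gb": 100},
        ]

        def place(vm_ram_gb, vm_disk_gb, vlan_id):
            """Pick the feasible host with the most CPU headroom and reserve resources."""
            candidates = [h for h in hosts
                          if h["ram_free_gb"] >= vm_ram_gb and h["disk_free_gb"] >= vm_disk_gb]
            best = max(candidates, key=lambda h: h["cpu_free"])
            best["ram_free_gb"] -= vm_ram_gb
            best["disk_free_gb"] -= vm_disk_gb
            # Tag the VM's traffic with an 802.1Q VLAN so experiments cannot cross-talk.
            return {"host": best["name"], "vlan": vlan_id}

        # A three-node virtual cluster on one experiment-private VLAN.
        cluster = [place(vm_ram_gb=8, vm_disk_gb=40, vlan_id=101) for _ in range(3)]
        print(cluster)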

  19. Mash-up of techniques between data crawling/transfer, data preservation/stewardship and data processing/visualization technologies on a science cloud system designed for Earth and space science: a report of successful operation and science projects of the NICT Science Cloud

    NASA Astrophysics Data System (ADS)

    Murata, K. T.

    2014-12-01

    Data-intensive, or data-centric, science is the 4th paradigm, after observational and/or experimental science (the 1st paradigm), theoretical science (the 2nd paradigm) and numerical science (the 3rd paradigm). A science cloud is an infrastructure for this 4th methodology. The NICT Science Cloud is designed for big-data sciences of Earth, space and other fields, based on modern informatics and information technologies [1]. Data flow on the cloud passes through the following three techniques: (1) data crawling and transfer, (2) data preservation and stewardship, and (3) data processing and visualization. Original tools and applications for these techniques have been designed and implemented. We mash up these tools and applications on the NICT Science Cloud to build customized systems for each project. In this paper, we discuss science data processing through these three steps. For big-data science, data file deployment on a distributed storage system should be well designed in order to save storage cost and transfer time. We developed a high-bandwidth virtual remote storage system (HbVRS) and data-crawling tools, NICTY/DLA and the Wide-area Observation Network Monitoring (WONM) system, respectively. Data files are saved on the cloud storage system according to both the data preservation policy and the data processing plan. The storage system is built on distributed file system middleware (Gfarm: Grid Datafarm). It is effective since disaster recovery (DR) and parallel data processing are carried out simultaneously without moving big data from storage to storage. Data files are managed via our Web application, WSDBank (World Science Data Bank). The big data on the cloud are processed via Pwrake, a workflow tool with high-bandwidth I/O. There are several visualization tools on the cloud: VirtualAurora for the magnetosphere and ionosphere, VDVGE for Google Earth, STICKER for urban environment data and STARStouch for multi-disciplinary data. There are 30 projects running on the NICT Science Cloud for Earth and space science; in 2013, 56 refereed papers were published. Finally, we introduce a couple of successful results in Earth and space science obtained with these three techniques on the NICT Science Cloud. [1] http://sc-web.nict.go.jp
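
    The three-step flow above -- crawl/transfer, preserve, process/visualize -- can be mashed up into a single pipeline, which is the paper's central point. The sketch below uses placeholder function bodies to show the shape of that pipeline; the real system uses NICTY/DLA for crawling, Gfarm for replicated storage, and Pwrake for parallel workflows.

        def crawl(url):
            """Step 1 (placeholder): fetch one observation file from a remote site."""
            return {"source": url, "payload": "raw observation bytes"}

        def preserve(record, replicas=2):
            """Step 2 (placeholder): a distributed file system would write the
            replicas across sites, serving both disaster recovery and parallelism."""
            name = record["source"].split("/")[-1]
            return [f"gfarm://site-{i}/{name}" for i in range(replicas)]

        def process(paths):
            """Step 3 (placeholder): workflows run where the replicas already live,
            so big files never move between storage systems."""
            return f"visualization built from {len(paths)} replicas"

        record = crawl("https://example.org/obs/aurora-20141201.dat")
        paths = preserve(record)
        print(process(paths))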

  20. Informatics Competencies for Nursing and Healthcare Leaders

    PubMed Central

    Westra, Bonnie L.; Delaney, Connie W.

    2008-01-01

    Historically, educational preparation did not address informatics competencies; thus, managers, administrators, and executives may not be prepared to use or lead change in the use of health information technologies. A number of resources for informatics competencies exist; however, a comprehensive list addressing the unique knowledge and skills required in the role of a manager or administrator was not found. The purpose of this study was to develop informatics competencies for nursing leaders. A synthesis of the literature and a Delphi approach using three rounds of surveys with an expert panel resulted in the identification of informatics competencies for nursing leaders that address computer skills, informatics knowledge, and informatics skills. PMID:18998803

  1. The Anesthesiologist-Informatician: A Survey of Physicians Board-Certified in Both Anesthesiology and Clinical Informatics.

    PubMed

    Poterack, Karl A; Epstein, Richard H; Dexter, Franklin

    2018-03-12

    All 36 physicians board-certified in both anesthesiology and clinical informatics as of January 1, 2016, were surveyed via e-mail, with 26 responding. Although most (25/26) generally expressed satisfaction with the clinical informatics boards, and view informatics expertise as important to anesthesiology, most (24/26) thought it unlikely or highly unlikely that substantial numbers of anesthesiology residents would pursue clinical informatics fellowships. Anesthesiologists wishing to qualify for the clinical informatics board examination under the practice pathway need to devote a substantial amount of their work time to informatics. There currently are options outside of formal fellowship training for acquiring the knowledge needed to pass the examination.

  2. Building a Culture of Health Informatics Innovation and Entrepreneurship: A New Frontier.

    PubMed

    Househ, Mowafa; Alshammari, Riyad; Almutairi, Mariam; Jamal, Amr; Alshoaib, Saleh

    2015-01-01

    Entrepreneurship and innovation within the health informatics (HI) scientific community are relatively sluggish when compared to other disciplines such as computer science and engineering. Healthcare in general, and the health informatics scientific community specifically, needs to embrace more innovative and entrepreneurial practices. In this paper, we explore the concepts of innovation and entrepreneurship as they apply to the health informatics scientific community. We also outline several strategies to improve the culture of innovation and entrepreneurship within the health informatics scientific community, such as: (I) incorporating innovation and entrepreneurship in health informatics education; (II) creating strong linkages with industry and healthcare organizations; (III) supporting national health innovation and entrepreneurship competitions; (IV) creating a culture of innovation and entrepreneurship within healthcare organizations; (V) developing health informatics policies that support innovation and entrepreneurship based on internationally recognized standards; and (VI) developing a health informatics entrepreneurship ecosystem. With these changes, we conclude that health innovation and entrepreneurship may become more readily accepted over the long term within the health informatics scientific community.

  3. Quantitative Investigation of the Technologies That Support Cloud Computing

    ERIC Educational Resources Information Center

    Hu, Wenjin

    2014-01-01

    Cloud computing is dramatically shaping modern IT infrastructure. It virtualizes computing resources, provides elastic scalability, serves as a pay-as-you-use utility, simplifies the IT administrators' daily tasks, enhances the mobility and collaboration of data, and increases user productivity. We focus on providing generalized black-box…

  4. Multi-Dimensional Optimization for Cloud Based Multi-Tier Applications

    ERIC Educational Resources Information Center

    Jung, Gueyoung

    2010-01-01

    Emerging trends toward cloud computing and virtualization have been opening new avenues to meet enormous demands of space, resource utilization, and energy efficiency in modern data centers. By being allowed to host many multi-tier applications in consolidated environments, cloud infrastructure providers enable resources to be shared among these…

  5. A Web-Based Virtual Classroom System Model

    ERIC Educational Resources Information Center

    Adewale, Olumide S.; Ibam, Emmanuel O.; Alese, B. K.

    2012-01-01

    The population of students all over the world is growing without a proportionate increase in teaching/learning resources/infrastructure. There is also much quest for learning in an environment that provides equal opportunities to all learners. The need to provide an equal opportunity learning environment that will hitherto improve the system of…

  6. Working Group on Virtual Data Integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Dean N.

    2016-03-07

    This report is the outcome of a workshop commissioned by the U.S. Department of Energy’s (DOE) Climate and Environmental Sciences Division (CESD) to examine current and future data infrastructure requirements foundational for achieving CESD scientific mission goals in advancing a robust, predictive understanding of Earth’s climate and environmental systems.

  7. Time to Discover NFM

    ERIC Educational Resources Information Center

    Schaffer, Greg

    2007-01-01

    Managing resource usage and data delivery with virtualization devices is a staple of many of today's data infrastructures. By breaking the traditional direct physical access and inserting an abstraction layer, what one sees is what he/she gets, but the mechanics of delivery may be quite different. The reason for the increase in virtualization…

  8. Commentary: The Tyranny of Time and the Reality Principle

    ERIC Educational Resources Information Center

    Gersten, Russell

    2016-01-01

    Each of the five articles in this special issue gets "into the weeds" in terms of studying actual classroom or school implementation of evidence-based or promising practices. Virtually all confront the issue of logistics and establishing an infrastructure for ensuring adequate implementation. In general, those studies that ask teachers…

  9. Biomedical informatics and translational medicine.

    PubMed

    Sarkar, Indra Neil

    2010-02-26

    Biomedical informatics involves a core set of methodologies that can provide a foundation for crossing the "translational barriers" associated with translational medicine. To this end, the fundamental aspects of biomedical informatics (e.g., bioinformatics, imaging informatics, clinical informatics, and public health informatics) may be essential in helping improve the ability to bring basic research findings to the bedside, evaluate the efficacy of interventions across communities, and enable the assessment of the eventual impact of translational medicine innovations on health policies. Here, a brief description is provided for a selection of key biomedical informatics topics (Decision Support, Natural Language Processing, Standards, Information Retrieval, and Electronic Health Records) and their relevance to translational medicine. Based on contributions and advancements in each of these topic areas, the article proposes that biomedical informatics practitioners ("biomedical informaticians") can be essential members of translational medicine teams.

  10. Publication trends in the medical informatics literature: 20 years of "Medical Informatics" in MeSH

    PubMed Central

    2009-01-01

    Background The purpose of this study is to identify publication output, and research areas, as well as descriptively and quantitatively characterize the field of medical informatics through publication trend analysis over a twenty year period (1987–2006). Methods A bibliometric analysis of medical informatics citations indexed in Medline was performed using publication trends, journal frequency, impact factors, MeSH term frequencies and characteristics of citations. Results There were 77,023 medical informatics articles published during this 20 year period in 4,644 unique journals. The average annual article publication growth rate was 12%. The 50 identified medical informatics MeSH terms are rarely assigned together to the same document and are almost exclusively paired with a non-medical informatics MeSH term, suggesting a strong interdisciplinary trend. Trends in citations, journals, and MeSH categories of medical informatics output for the 20-year period are summarized. Average impact factor scores and weighted average impact factor scores increased over the 20-year period with two notable growth periods. Conclusion There is a steadily growing presence and increasing visibility of medical informatics literature over the years. Patterns in research output that seem to characterize the historic trends and current components of the field of medical informatics suggest it may be a maturing discipline, and highlight specific journals in which the medical informatics literature appears most frequently, including general medical journals as well as informatics-specific journals. PMID:19159472

  11. The vacuum platform

    NASA Astrophysics Data System (ADS)

    McNab, A.

    2017-10-01

    This paper describes GridPP’s Vacuum Platform for managing virtual machines (VMs), which has been used to run production workloads for WLCG and other HEP experiments. The platform provides a uniform interface between VMs and the sites they run at, whether the site is organised as an Infrastructure-as-a-Service cloud system such as OpenStack, or an Infrastructure-as-a-Client system such as Vac. The paper describes our experience in using this platform, in developing and operating VM lifecycle managers Vac and Vcycle, and in interacting with VMs provided by LHCb, ATLAS, ALICE, CMS, and the GridPP DIRAC service to run production workloads.
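
    The lifecycle-manager behaviour described here (for Vac and Vcycle) can be reduced to a reconciliation loop: keep a target number of VMs running and backfill as payloads finish. The states, target count, and API in the sketch below are illustrative assumptions, not GridPP's actual interfaces.

        import itertools

        TARGET = 3                      # desired number of running VMs (invented)
        counter = itertools.count(1)
        vms = {}                        # vm_id -> lifecycle state

        def start_vm():
            vm_id = f"vm-{next(counter):03d}"
            vms[vm_id] = "running"
            return vm_id

        def reconcile():
            """Start replacement VMs until the running count reaches the target."""
            running = [v for v, state in vms.items() if state == "running"]
            for _ in range(TARGET - len(running)):
                start_vm()

        reconcile()                     # boots vm-001..vm-003
        vms["vm-001"] = "finished"      # a payload completed
        reconcile()                     # the manager backfills to the target
        print(vms)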

  12. Health impact assessment needs in south-east Asian countries.

    PubMed Central

    Caussy, Deoraj; Kumar, Priti; Than Sein, U.

    2003-01-01

    A situation analysis was undertaken to assess impediments to health impact assessment (HIA) in the South-East Asia Region of WHO (SEARO). The countries of the region were assessed on the policy framework and procedures for HIA, existing infrastructure required to support HIA, the capacity for undertaking HIA, and the potential for intersectoral collaboration. The findings show that environmental impact assessment (EIA) is being used implicitly as a substitute for HIA, which is not explicitly or routinely conducted in virtually all countries of the Region. Therefore, policy, infrastructure, capacity, and intersectoral collaboration need strengthening for the routine implementation of HIA. PMID:12894329

  13. A survey of public health and consumer health informatics programmes and courses in Canadian universities and colleges.

    PubMed

    Arocha, Jose F; Hoffman-Goetz, Laurie

    2012-12-01

    As information technology becomes more widely used by people for health-care decisions, training in consumer and public health informatics will be important for health practitioners working directly with the public. Using information from 74 universities and colleges across Canada, we searched websites and online calendars for programmes (undergraduate, graduate) regarding the availability and scope of education in programmes, courses and topics geared to public health and/or consumer health informatics. Of the 74 institutions searched, 31 provided some content relevant to health informatics (HI) and 8 institutions offered full HI-related programmes. Of these 8 HI programmes, only 1 course was identified with content relevant to public health informatics and 1 with content about consumer health informatics. Some institutions (n = 22) - which do not offer HI-degree programmes - provide health informatics-related courses, including one on consumer health informatics. We found few programmes, courses or topic areas within courses in Canadian universities and colleges that focus on consumer or public health informatics education. Given the increasing emphasis on personal responsibility for health and health-care decision-making, skills training for health professionals who help consumers navigate the Internet should be considered in health informatics education.

  14. Advanced 3-dimensional planning in neurosurgery.

    PubMed

    Ferroli, Paolo; Tringali, Giovanni; Acerbi, Francesco; Schiariti, Marco; Broggi, Morgan; Aquino, Domenico; Broggi, Giovanni

    2013-01-01

    During the past decades, medical applications of virtual reality technology have been developing rapidly, ranging from a research curiosity to a commercially and clinically important area of medical informatics and technology. With the aid of new technologies, the user is able to process large amounts of data sets to create accurate and almost realistic reconstructions of anatomic structures and related pathologies. As a result, a 3-dimensional (3-D) representation is obtained, and surgeons can explore the brain for planning or training. Further improvements, such as a feedback system, increase the interaction between users and models by creating a virtual environment. Its use for advanced 3-D planning in neurosurgery is described. Different systems of medical image volume rendering have been used and analyzed for advanced 3-D planning: one is a commercial "ready-to-go" system (Dextroscope, Bracco, Volume Interaction, Singapore), whereas the others are open-source-based software (3-D Slicer, FSL, and FreeSurfer). Different neurosurgeons at our institution experienced how advanced 3-D planning before surgery allowed them to facilitate and increase their understanding of the complex anatomic and pathological relationships of the lesion. They all agreed that the preoperative experience of virtually planning the approach was helpful during the operative procedure. Virtual reality for advanced 3-D planning in neurosurgery has achieved considerable realism as a result of the available processing power of modern computers. Although it has been found useful to facilitate the understanding of complex anatomic relationships, further effort is needed to increase the quality of the interaction between the user and the model.

  15. PearlTrees web-based interface for teaching informatics in the radiology residency

    NASA Astrophysics Data System (ADS)

    Licurse, Mindy Y.; Cook, Tessa S.

    2014-03-01

    Radiology and imaging informatics education have rapidly evolved over the past few decades. With the increasing recognition that future growth and maintenance of radiology practices will rely heavily on radiologists with fundamentally sound informatics skills, the onus falls on radiology residency programs to properly implement and execute an informatics curriculum. In addition, the American Board of Radiology may choose to include even more informatics on the new board examinations. However, the resources available for didactic teaching and guidance, especially at the introductory level, are widespread and varied. Given the breadth of informatics, a centralized web-based interface designed to serve as an adjunct to standardized informatics curricula, as well as a stand-alone resource for other interested audiences, is desirable. We present the development of a curriculum using PearlTrees, an existing web interface based on the concept of a visual interest graph that allows users to collect, organize, and share any URL they find online, as well as to upload photos and other documents. For our purpose, the group of "pearls" includes informatics concepts linked by appropriate hierarchical relationships. The curriculum was developed using a combination of our institution's current informatics fellowship curriculum, the Practical Imaging Informatics textbook [1], and other useful online resources. Once the initial interface and curriculum have been developed and publicized, we anticipate that involvement by the informatics community will help promote collaborations and foster mentorships at all career levels.

  16. Design and Implement of Astronomical Cloud Computing Environment In China-VO

    NASA Astrophysics Data System (ADS)

    Li, Changhua; Cui, Chenzhou; Mi, Linying; He, Boliang; Fan, Dongwei; Li, Shanshan; Yang, Sisi; Xu, Yunfei; Han, Jun; Chen, Junyi; Zhang, Hailong; Yu, Ce; Xiao, Jian; Wang, Chuanjun; Cao, Zihuang; Fan, Yufeng; Liu, Liang; Chen, Xiao; Song, Wenming; Du, Kangyu

    2017-06-01

    The astronomy cloud computing environment is a cyberinfrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Based on virtualization technology, the environment was designed and implemented by the China-VO team. It consists of five distributed nodes across the mainland of China. Astronomers can obtain computing and storage resources in this cloud computing environment. Through this environment, astronomers can easily search and analyze astronomical data collected by different telescopes and data centers, and avoid large-scale dataset transportation.

  17. Virtual Induction Loops Based on Cooperative Vehicular Communications

    PubMed Central

    Gramaglia, Marco; Bernardos, Carlos J.; Calderon, Maria

    2013-01-01

    Induction loop detectors have become the most utilized sensors in traffic management systems. The gathered traffic data is used to improve traffic efficiency (i.e., warning users about congested areas or planning new infrastructures). Despite their usefulness, their deployment and maintenance costs are expensive. Vehicular networks are an emerging technology that can support novel strategies for ubiquitous and more cost-effective traffic data gathering. In this article, we propose and evaluate VIL (Virtual Induction Loop), a simple and lightweight traffic monitoring system based on cooperative vehicular communications. The proposed solution has been experimentally evaluated through simulation using real vehicular traces. PMID:23348033
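
    A virtual induction loop replaces the buried wire with a check on cooperative position beacons: a vehicle is counted when two consecutive beacons straddle the loop's reference position. The message format, positions, and reference point in the sketch below are invented for illustration; they are not VIL's actual protocol.

        LOOP_POSITION_M = 500.0   # invented reference point along the road

        def count_crossings(beacons_by_vehicle):
            """beacons_by_vehicle: {vehicle_id: [positions in metres, time-ordered]}"""
            crossings = 0
            for positions in beacons_by_vehicle.values():
                for before, after in zip(positions, positions[1:]):
                    if before < LOOP_POSITION_M <= after:   # crossed between beacons
                        crossings += 1
                        break
            return crossings

        traces = {
            "car-1": [480.0, 495.0, 510.0],   # crosses the virtual loop
            "car-2": [430.0, 460.0, 490.0],   # has not reached it yet
        }
        print(count_crossings(traces), "vehicles crossed")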

  18. The development, deployment, and impact of the virtual observatory, Part II

    NASA Astrophysics Data System (ADS)

    Hanisch, R. J.

    2015-06-01

    This is the second special issue of Astronomy and Computing devoted to the Virtual Observatory, and we again see a combination of papers covering various aspects of the VO, from infrastructure to applications to programmatics. The critical role of data models is described by Louys, and the method by which applications communicate amongst each other through the Simple Applications Messaging Protocol (SAMP) is described by Taylor et al. Demleitner et al. explain the client interfaces to the VO registry, that is, how application developers can query the registry for information about VO-compliant data collections and services.

  19. Long distance education for croatian nurses with open source software.

    PubMed

    Radenovic, Aleksandar; Kalauz, Sonja

    2006-01-01

    The Croatian Nursing Informatics Association (CNIA) was established as a result of continuing work on promoting nursing informatics in Croatia. The main goals of CNIA are to promote nursing informatics and to educate nurses about nursing informatics and the use of information technology in the nursing process. At the start of its work, CNIA developed three nursing informatics courses, all designed for long-distance education with open source software. The courses are: A - 'From Data to Wisdom', B - 'Introduction to Nursing Informatics' and C - 'Nursing Informatics I'. Courses A and B are prerequisites for course C. The technology used to implement these online courses is based on Claroline, a free, open source Learning Management System (LMS) and collaborative online learning platform. Each course is divided into two modules/days: on the first module/day, participants receive classical classroom instruction, and on the second day they continue with e-learning from home. These are the first nursing informatics courses in Croatia, and also the first long-distance education offered to nurses there.

  20. The nursing informatics workforce: who are they and what do they do?

    PubMed

    Murphy, Judy

    2011-01-01

    Nursing informatics has evolved into an integral part of health care delivery and a differentiating factor in the selection, implementation, and evaluation of health IT that supports safe, high-quality, patient-centric care. New nursing informatics workforce data reveal changing dynamics in clinical experience, job responsibilities, applications, barriers to success, information, and compensation and benefits. In addition to the more traditional informatics nurse role, a new position has begun to emerge in the health care C-suite with the introduction of the chief nursing informatics officer (CNIO). The CNIO is the senior informatics nurse guiding the implementation and optimization of HIT systems for an organization. With their fused clinical and informatics background, informatics nurses and CNIOs are uniquely positioned to help with "meaningful use" initiatives which are so important to changing the face of health care in the United States.

  1. Desiderata for Healthcare Integrated Data Repositories Based on Architectural Comparison of Three Public Repositories

    PubMed Central

    Huser, Vojtech; Cimino, James J.

    2013-01-01

    Integrated data repositories (IDRs) are indispensable tools for numerous biomedical research studies. We compare three large IDRs (Informatics for Integrating Biology and the Bedside (i2b2), HMO Research Network’s Virtual Data Warehouse (VDW) and Observational Medical Outcomes Partnership (OMOP) repository) in order to identify common architectural features that enable efficient storage and organization of large amounts of clinical data. We define three high-level classes of underlying data storage models and we analyze each repository using this classification. We look at how a set of sample facts is represented in each repository and conclude with a list of desiderata for IDRs that deal with the information storage model, terminology model, data integration and value-sets management. PMID:24551366

  2. Desiderata for healthcare integrated data repositories based on architectural comparison of three public repositories.

    PubMed

    Huser, Vojtech; Cimino, James J

    2013-01-01

    Integrated data repositories (IDRs) are indispensable tools for numerous biomedical research studies. We compare three large IDRs (Informatics for Integrating Biology and the Bedside (i2b2), HMO Research Network's Virtual Data Warehouse (VDW) and Observational Medical Outcomes Partnership (OMOP) repository) in order to identify common architectural features that enable efficient storage and organization of large amounts of clinical data. We define three high-level classes of underlying data storage models and we analyze each repository using this classification. We look at how a set of sample facts is represented in each repository and conclude with a list of desiderata for IDRs that deal with the information storage model, terminology model, data integration and value-sets management.
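
    One architectural axis such comparisons turn on is a wide, one-row-per-patient table versus an entity-attribute-value (EAV) layout like i2b2's fact table, where each observation is one coded row and new variables need no schema change. The sketch below contrasts the two; the LOINC codes are real, but the columns and values are invented for illustration.

        # Wide layout: one row per patient, one column per variable.
        wide_row = {"patient_id": 42, "systolic_bp": 128, "hba1c": 6.9}

        # The same facts in EAV form: one row per observation, coded against a
        # terminology, so new variables require no schema change.
        eav_facts = [
            {"patient_id": 42, "concept": "LOINC:8480-6", "value": 128},  # systolic BP
            {"patient_id": 42, "concept": "LOINC:4548-4", "value": 6.9},  # HbA1c
        ]

        def eav_lookup(facts, patient_id, concept):
            return [f["value"] for f in facts
                    if f["patient_id"] == patient_id and f["concept"] == concept]

        print(wide_row["systolic_bp"], eav_lookup(eav_facts, 42, "LOINC:8480-6"))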

  3. Paradigm Shift or Annoying Distraction

    PubMed Central

    Spallek, H.; O’Donnell, J.; Clayton, M.; Anderson, P.; Krueger, A.

    2010-01-01

    Web 2.0 technologies, known as social media, social technologies or Web 2.0, have emerged into the mainstream. As they grow, these new technologies have the opportunity to influence the methods and procedures of many fields. This paper focuses on the clinical implications of the growing Web 2.0 technologies. Five developing trends are explored: information channels, augmented reality, location-based mobile social computing, virtual worlds and serious gaming, and collaborative research networks. Each trend is discussed based on their utilization and pattern of use by healthcare providers or healthcare organizations. In addition to explorative research for each trend, a vignette is presented which provides a future example of adoption. Lastly each trend lists several research challenge questions for applied clinical informatics. PMID:23616830

  4. Street Level Hydrology: An Urban Application of the WRF-Hydro Framework in Denver, Colorado

    NASA Astrophysics Data System (ADS)

    Read, L.; Hogue, T. S.; Salas, F. R.; Gochis, D.

    2015-12-01

    Urban flood modeling at the watershed scale carries unique challenges in routing complexity, data resolution, social and political issues, and land surface - infrastructure interactions. The ability to accurately trace and predict the flow of water through the urban landscape enables better emergency response management, floodplain mapping, and data for future urban infrastructure planning and development. These services are of growing importance as urban population is expected to continue increasing by 1.84% per year for the next 25 years, increasing the vulnerability of urban regions to damages and loss of life from floods. Although a range of watershed-scale models have been applied in specific urban areas to examine these issues, there is a trend towards national scale hydrologic modeling enabled by supercomputing resources to understand larger system-wide hydrologic impacts and feedbacks. As such it is important to address how urban landscapes can be represented in large scale modeling processes. The current project investigates how coupling terrain and infrastructure routing can improve flow prediction and flooding events over the urban landscape. We utilize the WRF-Hydro modeling framework and a high-resolution terrain routing grid with the goal of compiling standard data needs necessary for fine scale urban modeling and dynamic flood forecasting in the urban setting. The city of Denver is selected as a case study, as it has experienced several large flooding events in the last five years and has an urban annual population growth rate of 1.5%, one of the highest in the U.S. Our work highlights the hydro-informatic challenges associated with linking channel networks and drainage infrastructure in an urban area using the WRF-Hydro modeling framework and high resolution urban models for short-term flood prediction.

  5. Toward a science of learning systems: a research agenda for the high-functioning Learning Health System.

    PubMed

    Friedman, Charles; Rubin, Joshua; Brown, Jeffrey; Buntin, Melinda; Corn, Milton; Etheredge, Lynn; Gunter, Carl; Musen, Mark; Platt, Richard; Stead, William; Sullivan, Kevin; Van Houweling, Douglas

    2015-01-01

    The capability to share data, and harness its potential to generate knowledge rapidly and inform decisions, can have transformative effects that improve health. The infrastructure to achieve this goal at scale--marrying technology, process, and policy--is commonly referred to as the Learning Health System (LHS). Achieving an LHS raises numerous scientific challenges. The National Science Foundation convened an invitational workshop to identify the fundamental scientific and engineering research challenges to achieving a national-scale LHS. The workshop was planned by a 12-member committee and ultimately engaged 45 prominent researchers spanning multiple disciplines over 2 days in Washington, DC on 11-12 April 2013. The workshop participants collectively identified 106 research questions organized around four system-level requirements that a high-functioning LHS must satisfy. The workshop participants also identified a new cross-disciplinary integrative science of cyber-social ecosystems that will be required to address these challenges. The intellectual merit and potential broad impacts of the innovations that will be driven by investments in an LHS are of great potential significance. The specific research questions that emerged from the workshop, alongside the potential for diverse communities to assemble to address them through a 'new science of learning systems', create an important agenda for informatics and related disciplines.

  6. MIMI: multimodality, multiresource, information integration environment for biomedical core facilities.

    PubMed

    Szymanski, Jacek; Wilson, David L; Zhang, Guo-Qiang

    2009-10-01

    The rapid expansion of biomedical research has brought substantial scientific and administrative data management challenges to modern core facilities. Scientifically, a core facility must be able to manage experimental workflow and the corresponding set of large and complex scientific data. It must also disseminate experimental data to relevant researchers in a secure and expedient manner that facilitates collaboration and provides support for data interpretation and analysis. Administratively, a core facility must be able to manage the scheduling of its equipment and to maintain a flexible and effective billing system to track material, resource, and personnel costs and charge for services to sustain its operation. It must also have the ability to regularly monitor the usage and performance of its equipment and to provide summary statistics on resources spent on different categories of research. To address these informatics challenges, we introduce a comprehensive system called MIMI (multimodality, multiresource, information integration environment) that integrates the administrative and scientific support of a core facility into a single web-based environment. We report the design, development, and deployment experience of a baseline MIMI system at an imaging core facility and discuss the general applicability of such a system in other types of core facilities. These initial results suggest that MIMI will be a unique, cost-effective approach to addressing the informatics infrastructure needs of core facilities and similar research laboratories.
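
    The administrative half of such a system -- equipment scheduling tied to billing rates -- can be sketched as a booking function that rejects overlapping reservations and accrues usage charges. The instruments, rates, and fields below are invented for illustration; they are not MIMI's actual data model.

        from datetime import datetime, timedelta

        RATE_PER_HOUR = {"micro-CT": 85.0, "7T-MRI": 400.0}   # invented rates
        bookings = []

        def book(instrument, user, start, hours):
            """Reserve an instrument slot and derive the usage charge for billing."""
            end = start + timedelta(hours=hours)
            for b in bookings:  # reject overlapping reservations on the same instrument
                if b["instrument"] == instrument and b["start"] < end and start < b["end"]:
                    raise ValueError("slot already taken")
            bookings.append({"instrument": instrument, "user": user,
                             "start": start, "end": end,
                             "charge": hours * RATE_PER_HOUR[instrument]})

        book("micro-CT", "lab-wilson", datetime(2009, 10, 5, 9), hours=2)
        print(sum(b["charge"] for b in bookings if b["user"] == "lab-wilson"))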

  7. Crowd Sourcing to Improve Urban Stormwater Management

    NASA Astrophysics Data System (ADS)

    Minsker, B. S.; Band, L. E.; Heidari Haratmeh, B.; Law, N. L.; Leonard, L. N.; Rai, A.

    2017-12-01

    Over half of the world's population currently lives in urban areas, a number predicted to grow to 60 percent by 2030. Urban areas face unprecedented and growing challenges that threaten society's long-term wellbeing, including poverty; chronic health problems; widespread pollution and resource degradation; and increased natural disasters. These are "wicked" problems involving "systems of systems" that require unprecedented information sharing and collaboration across disciplines and organizational boundaries. Cities are recognizing that the increasing stream of data and information ("Big Data"), informatics, and modeling can support rapid advances on these challenges. Nonetheless, information technology solutions can only be effective in addressing these challenges through deeply human and systems perspectives. A stakeholder-driven approach ("crowd sourcing") is needed to develop urban systems that address multiple needs, such as parks that capture and treat stormwater while improving human and ecosystem health and wellbeing. We have developed informatics- and Cloud-based collaborative methods that enable crowd sourcing of green stormwater infrastructure (GSI: rain gardens, bioswales, trees, etc.) design and management. The methods use machine learning, social media data, and interactive design tools (called IDEAS-GI) to identify locations and features of GSI that perform best on a suite of objectives, including life cycle cost, stormwater volume reduction, and air pollution reduction. Insights will be presented on GI features that best meet stakeholder needs and are therefore most likely to improve human wellbeing and be well maintained.

  8. Accessing and integrating data and knowledge for biomedical research.

    PubMed

    Burgun, A; Bodenreider, O

    2008-01-01

    To review the issues that have arisen with the advent of translational research in terms of the integration of data and knowledge, and to survey current efforts to address these issues. Using examples from the biomedical literature, we identified new trends in biomedical research and their impact on bioinformatics. We analyzed the requirements for effective knowledge repositories and studied issues in the integration of biomedical knowledge. New diagnostic and therapeutic approaches based on gene expression patterns have brought about new issues in the statistical analysis of data, and new workflows are needed to support translational research. Interoperable data repositories based on standard annotations, infrastructures and services are needed to support the pooling and meta-analysis of data, as well as their comparison to earlier experiments. High-quality, integrated ontologies and knowledge bases serve as a source of prior knowledge used in combination with traditional data mining techniques and contribute to the development of more effective data analysis strategies. As biomedical research evolves from traditional clinical and biological investigations towards omics sciences and translational research, specific needs have emerged, including integrating data collected in research studies with patient clinical data, linking omics knowledge with medical knowledge, modeling the molecular basis of diseases, and developing tools that support in-depth analysis of research data. As such, translational research illustrates the need to bridge the gap between bioinformatics and medical informatics, and opens new avenues for biomedical informatics research.

  9. IT infrastructure in the era of imaging 3.0.

    PubMed

    McGinty, Geraldine B; Allen, Bibb; Geis, J Raymond; Wald, Christoph

    2014-12-01

    Imaging 3.0 is a blueprint for the future of radiology modeled after the description of Web 3.0 as "more connected, more open, and more intelligent." Imaging 3.0 involves radiologists' using their expertise to manage all aspects of imaging care to improve patient safety and outcomes and to deliver high-value care. IT tools are critical elements and drivers of success as radiologists embrace the concepts of Imaging 3.0. Organized radiology, specifically the ACR, is the natural convener and resource for the development of this Imaging 3.0 toolkit. The ACR's new Imaging 3.0 Informatics Committee is actively working to develop the informatics tools radiologists need to improve efficiency, deliver more value, and provide quantitative ways to demonstrate their value in new health care delivery and payment systems. This article takes each step of the process of delivering high-value Imaging 3.0 care and outlines the tools available as well as additional resources available to support practicing radiologists. From the moment when imaging is considered through the delivery of a meaningful and actionable report that is communicated to the referring clinician and, when appropriate, to the patient, Imaging 3.0 IT tools will enable radiologists to position themselves as vital constituents in cost-effective, high-value health care.

  10. Virtual Manufacturing Techniques Designed and Applied to Manufacturing Activities in the Manufacturing Integration and Technology Branch

    NASA Technical Reports Server (NTRS)

    Shearrow, Charles A.

    1999-01-01

    One of the identified goals of EM3 is to implement virtual manufacturing by the end of the year 2000. To realize this goal of a true virtual manufacturing enterprise, the initial development of a machinability database and the supporting infrastructure must be completed. This will consist of containing the existing EM-NET problems and developing machine, tooling, and common materials databases. To integrate the virtual manufacturing enterprise with normal day-to-day operations, a parallel virtual manufacturing machinability database, virtual manufacturing database, virtual manufacturing paradigm, implementation/integration procedure, and testable verification models must be constructed. The common and virtual machinability databases will include the four distinct areas of machine tools, available tooling, common machine tool loads, and a materials database. The machine tools database will include the machine envelope, special machine attachments, tooling capacity, location within NASA-JSC or with a contractor, and availability/scheduling. The tooling database will include available standard tooling, custom in-house tooling, tool properties, and availability. The common materials database will include material thickness ranges, strengths, types, and their availability. The virtual manufacturing databases will consist of virtual machines and virtual tooling directly related to the common and machinability databases. The items to be completed are the design and construction of the machinability databases, the virtual manufacturing paradigm for NASA-JSC, an implementation timeline, a VNC model of one bridge mill, and troubleshooting of existing software and hardware problems with EN4NET. The final step of this virtual manufacturing project will be to integrate other production sites into the databases, bringing JSC's EM3 into a position of becoming a clearinghouse for NASA's digital manufacturing needs and creating a true virtual manufacturing enterprise.
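
    The linked machine, tooling, and materials databases enumerated above can be pictured as relational tables joined on availability, as in the sketch below; the columns and sample rows are invented for illustration, not the EM3 schema.

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
        CREATE TABLE machine (id INTEGER PRIMARY KEY, name TEXT, envelope_mm TEXT,
                              location TEXT, available INTEGER);
        CREATE TABLE tooling (id INTEGER PRIMARY KEY, machine_id INTEGER,
                              description TEXT, available INTEGER);
        CREATE TABLE material (id INTEGER PRIMARY KEY, type TEXT,
                               thickness_mm REAL, strength_mpa REAL);
        """)
        db.execute("INSERT INTO machine VALUES (1, 'bridge mill', '2000x1000x500', 'JSC Bldg 9', 1)")
        db.execute("INSERT INTO tooling VALUES (1, 1, '12mm carbide end mill', 1)")

        # A virtual-manufacturing query: which available machines have available tooling?
        rows = db.execute("""
        SELECT machine.name, tooling.description FROM machine
        JOIN tooling ON tooling.machine_id = machine.id
        WHERE machine.available = 1 AND tooling.available = 1
        """).fetchall()
        print(rows)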

  11. Informatics can identify systemic sclerosis (SSc) patients at risk for scleroderma renal crisis.

    PubMed

    Redd, Doug; Frech, Tracy M; Murtaugh, Maureen A; Rhiannon, Julia; Zeng, Qing T

    2014-10-01

    Electronic medical records (EMR) provide an ideal opportunity for the detection, diagnosis, and management of systemic sclerosis (SSc) patients within the Veterans Health Administration (VHA). The objective of this project was to use informatics to identify potential SSc patients in the VHA who were on prednisone, in order to inform an outreach project to prevent scleroderma renal crisis (SRC). The electronic medical data for this study came from the Veterans Informatics and Computing Infrastructure (VINCI). For natural language processing (NLP) analysis, a set of retrieval criteria was developed for documents expected to have a high correlation with SSc. Two annotators reviewed the retrieved documents and assembled a single adjudicated set of ratings, from which a support vector machine (SVM) based document classifier was trained. Any patient having at least one document positively classified for SSc was considered positive for SSc, and each mention of prednisone ≥10 mg in the clinical documents was reviewed to determine whether it was an active medication on the prescription list. In the VHA, 4272 patients had a diagnosis of SSc as determined by the presence of an ICD-9 code. Of these patients, 1118 (21%) were on prednisone ≥10 mg. Of these, 26 had a concurrent diagnosis of hypertension and thus should not have been on prednisone. Using NLP, an additional 16,522 patients were identified as possible SSc cases, highlighting that cases of SSc unidentified by ICD-9 coding may exist in the VHA. A 10-fold cross validation of the classifier resulted in a precision (positive predictive value) of 0.814, a recall (sensitivity) of 0.973, and an F-measure of 0.873. Our study demonstrated that current clinical practice in the VHA includes the potentially dangerous use of prednisone for veterans with SSc. The present study also suggests that there may be many undetected cases of SSc and that NLP can successfully identify these patients. Copyright © 2014 Elsevier Ltd. All rights reserved.
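
    The classification step described above follows a standard supervised text-classification recipe. A minimal sketch, assuming TF-IDF features and scikit-learn (the study's actual feature engineering is not reproduced here):

        # Minimal sketch of an SVM document classifier with 10-fold cross
        # validation, reporting precision, recall, and F-measure as above.
        # The two-document corpus is a placeholder; a real run needs enough
        # adjudicated documents to fill 10 folds.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.svm import LinearSVC
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_validate

        documents = ["...clinical note text...", "...another note..."]
        labels = [1, 0]  # 1 = document positive for SSc

        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
        scores = cross_validate(clf, documents, labels, cv=10,
                                scoring=("precision", "recall", "f1"))
        for metric in ("precision", "recall", "f1"):
            print(metric, scores[f"test_{metric}"].mean())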

  12. High-throughput neuroimaging-genetics computational infrastructure

    PubMed Central

    Dinov, Ivo D.; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Hobel, Sam; Vespa, Paul; Woo Moon, Seok; Van Horn, John D.; Franco, Joseph; Toga, Arthur W.

    2014-01-01

    Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining, and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate, and disseminate novel scientific methods, computational resources, and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval, and aggregation. Computational processing involves the software, hardware, and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical, and phenotypic data and metadata. Data mining refers to the process of automatically extracting data features, characteristics, and associations that are not readily visible through human exploration of the raw dataset. Results interpretation includes scientific visualization, community validation, and reproducibility of findings. In this manuscript, we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at the University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database that stores the complete provenance of the raw and derived data and metadata. In addition, the institute provides a large number of software tools for image and shape analysis, mathematical modeling, genomic sequence processing, and scientific visualization. A unique feature of this architecture is the Pipeline environment, which integrates data management, processing, transfer, and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating, and disseminating complex protocols that utilize diverse suites of software tools and web services. These pipeline workflows are represented as portable XML objects, which transfer the execution instructions and user specifications from the client user machine to remote pipeline servers for distributed computing. Using Alzheimer's and Parkinson's data, we provide several examples of translational applications using this infrastructure. PMID:24795619
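
    The workflow portability described above rests on a simple idea: the client serializes the protocol as XML and ships it to a remote pipeline server for execution. A minimal parsing sketch follows; the element and attribute names are hypothetical, not the actual LONI Pipeline schema.

        # Hypothetical portable XML workflow object: modules and their
        # connections, addressed to a remote execution server.
        import xml.etree.ElementTree as ET

        workflow_xml = """
        <pipeline name="demo-analysis">
          <module id="1" tool="skull_strip" server="remote-pipeline-host"/>
          <module id="2" tool="segment" server="remote-pipeline-host"/>
          <connect from="1" to="2"/>
        </pipeline>
        """

        root = ET.fromstring(workflow_xml)
        for module in root.findall("module"):
            print("module", module.get("id"), module.get("tool"),
                  "on", module.get("server"))
        for edge in root.findall("connect"):
            print("connect", edge.get("from"), "->", edge.get("to"))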

  13. Biomedical informatics training at the University of Wisconsin-Madison.

    PubMed

    Severtson, D J; Pape, L; Page, C D; Shavlik, J W; Phillips, G N; Flatley Brennan, P

    2007-01-01

    The purpose of this paper is to describe biomedical informatics training at the University of Wisconsin-Madison (UW-Madison). We reviewed biomedical informatics training, research, and faculty/trainee participation at UW-Madison. There are three primary approaches to training: 1) the Computation & Informatics in Biology & Medicine Training Program, 2) formal biomedical informatics training offered by various campus departments, and 3) individualized programs. Training at UW-Madison embodies the features of effective biomedical informatics training recommended by the American College of Medical Informatics, delineated as: 1) curricula that integrate experiences among computational sciences and application domains, 2) individualized and interdisciplinary cross-training among a diverse cadre of trainees to develop key competencies they do not initially possess, 3) participation in research and development activities, and 4) exposure to a range of basic informational and computational sciences. The three biomedical informatics training approaches immerse students in multidisciplinary training and education supported by faculty trainers who participate in collaborative research across departments. Training is provided across a range of disciplines and is available at different training stages. Biomedical informatics training at UW-Madison illustrates how a large research university, with multiple departments across biological, computational, and health fields, can provide effective and productive biomedical informatics training via multiple training approaches.

  14. Assessing the impact of transitions from centralised to decentralised water solutions on existing infrastructures – Integrated city-scale analysis with VIBe

    PubMed Central

    Sitzenfrei, Robert; Möderl, Michael; Rauch, Wolfgang

    2013-01-01

    Traditional urban water management relies on centrally organised infrastructure, the most important elements being the drainage network and the water distribution network. To meet upcoming challenges such as climate change, the rapid growth or shrinkage of cities, and water scarcity, water infrastructure needs to be more flexible, adaptable, and sustainable (e.g., sustainable urban drainage systems, SUDS; water sensitive urban design, WSUD; low impact development, LID; best management practice, BMP). The common feature of all these solutions is the shift from centralised to decentralised approaches in urban water management. This shift opens up a variety of technical and socio-economic issues, but until now, a comprehensive assessment of its impact has not been made. This absence is most likely attributable to the lack of case studies, and the availability of adequate models is usually limited by the time- and cost-intensive preparation phase. Thus, the results of such analyses are based on a few cases and can hardly be transferred to other boundary conditions. VIBe (Virtual Infrastructure Benchmarking) is a tool for the stochastic generation of urban water systems at the city scale for case study research. With the generated data sets, an integrated city-scale analysis can be performed. With this approach, we are able to draw conclusions regarding the technical effect of the transition from existing central to decentralised urban water systems. In addition, it is shown how virtual data sets can assist with the model building process. A simple model to predict the shear stress performance due to changes in dry weather flow production is developed and tested. PMID:24210508
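
    As a rough illustration of virtual benchmarking, the sketch below stochastically generates many simple sewer reaches and estimates how removing half of the dry weather flow (e.g., through decentralisation) reduces a crude shear-stress proxy. The hydraulic simplifications, parameter ranges, and assumed velocity are illustrative assumptions only, not VIBe's model.

        # Illustrative Monte Carlo sketch: generate virtual reaches, then
        # compare a shear-stress proxy before and after flow reduction.
        import random

        GAMMA = 9810.0  # unit weight of water, N/m^3

        def shear_stress(flow_m3s, slope, width_m=0.3):
            # crude proxy: tau = gamma * R * S, with hydraulic radius from a
            # rectangular section and an assumed flow velocity of 0.6 m/s
            depth = max(flow_m3s / (width_m * 0.6), 1e-4)
            radius = (width_m * depth) / (width_m + 2 * depth)
            return GAMMA * radius * slope

        random.seed(1)
        reductions = []
        for _ in range(1000):                       # 1000 virtual reaches
            slope = random.uniform(0.001, 0.01)
            dwf = random.uniform(0.002, 0.02)       # dry weather flow, m^3/s
            before = shear_stress(dwf, slope)
            after = shear_stress(dwf * 0.5, slope)  # half the flow removed
            reductions.append(1 - after / before)

        print("mean shear-stress reduction:", sum(reductions) / len(reductions))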

  15. Building sustainable multi-functional prospective electronic clinical data systems.

    PubMed

    Randhawa, Gurvaneet S; Slutsky, Jean R

    2012-07-01

    A better alignment in the goals of the biomedical research enterprise and the health care delivery system can help fill the large gaps in our knowledge of the impact of clinical interventions on patient outcomes in the real world. There are several initiatives underway to align the research priorities of patients, providers, researchers, and policy makers. These include Agency for Healthcare Research and Quality (AHRQ)-supported projects to build flexible prospective clinical electronic data infrastructure that meets the needs of these diverse users. AHRQ has previously supported the creation of 2 distributed research networks as a new approach to conducting comparative effectiveness research (CER) while protecting a patient's confidential information and the proprietary needs of a clinical organization. It has applied its experience in building these networks when directing the American Recovery and Reinvestment Act funds for CER to support new clinical electronic infrastructure projects that can be used for several purposes, including CER, quality improvement, clinical decision support, and disease surveillance. In addition, AHRQ has funded a new Electronic Data Methods forum to advance methods in clinical informatics, research analytics, and governance by actively engaging investigators from the American Recovery and Reinvestment Act-funded projects and external stakeholders.

  16. Wyoming Landscape Conservation Initiative data management and integration

    USGS Publications Warehouse

    Latysh, Natalie; Bristol, R. Sky

    2011-01-01

    Six Federal agencies, two State agencies, and two local entities formally support the Wyoming Landscape Conservation Initiative (WLCI) and work together on a landscape scale to manage fragile habitats and wildlife resources amidst growing energy development in southwest Wyoming. The U.S. Geological Survey (USGS) was tasked with implementing targeted research and providing scientific information about southwest Wyoming to inform the development of WLCI habitat enhancement and restoration projects conducted by land management agencies. Many WLCI researchers and decisionmakers representing the Bureau of Land Management, U.S. Fish and Wildlife Service, the State of Wyoming, and others have overwhelmingly expressed the need for a stable, robust infrastructure to promote sharing of data resources produced by multiple entities, including metadata adequately describing the datasets. Descriptive metadata facilitates use of the datasets by users unfamiliar with the data. Agency representatives advocate development of common data handling and distribution practices among WLCI partners to enhance availability of comprehensive and diverse data resources for use in scientific analyses and resource management. The USGS Core Science Informatics (CSI) team is developing and promoting data integration tools and techniques across USGS and partner entity endeavors, including a data management infrastructure to aid WLCI researchers and decisionmakers.

  17. Beyond basic citation—What to identify, when, and why

    NASA Astrophysics Data System (ADS)

    Parsons, M. A.

    2015-12-01

    Persistent identifiers (and locators) have emerged as a critical component in designing and implementing information systems and networks. This is especially evident in the use of the Digital Object Identifier in association with formal bibliographic citation of literature and, increasingly, of data sets. Indeed, the principles and methods of data citation have been a hot topic in the informatics community over the last decade or so. To date, the focus has typically been on closely linking data sets to associated literature and generally emulating bibliographic-style citation. To design a sustainable, trusted data infrastructure, however, requires us to unambiguously reference many things in many ways, be they data, software, instruments, methods, or people. Design of this infrastructure also requires us to consider the entire data lifecycle and when important elements come into play and need to be identified. This paper advocates an "ecological" model of data sharing that takes a more holistic perspective than many traditional data publication approaches. It explores a variety of use cases around what elements of an information ecosystem need to be unambiguously identified and located, at what point in the data production process, and to what explicit purpose.

  18. Twenty Years of Society of Medical Informatics of B&H and the Journal Acta Informatica Medica

    PubMed Central

    Masic, Izet

    2012-01-01

    In 2012, the Health/Medical informatics profession celebrates five jubilees in Bosnia and Herzegovina: a) Thirty-five years from the introduction of the first automatic manipulation of data; b) Twenty-five years from establishing the Society for Medical Informatics BiH; c) Twenty years from establishing the scientific and professional journal of the Society for Medical Informatics of Bosnia and Herzegovina „Acta Informatica Medica“; d) Twenty years from establishing the first Cathedra for Medical Informatics at biomedical faculties in Bosnia and Herzegovina; and e) Ten years from the introduction of “Distance learning” in the medical curriculum. All five mentioned activities in the area of Medical informatics had special importance and made an appropriate contribution to the development of Health/Medical informatics in Bosnia and Herzegovina. PMID:23322947

  19. Twenty years of society of medical informatics of b&h and the journal acta informatica medica.

    PubMed

    Masic, Izet

    2012-03-01

    In 2012, the Health/Medical informatics profession celebrates five jubilees in Bosnia and Herzegovina: a) Thirty-five years from the introduction of the first automatic manipulation of data; b) Twenty-five years from establishing the Society for Medical Informatics BiH; c) Twenty years from establishing the scientific and professional journal of the Society for Medical Informatics of Bosnia and Herzegovina "Acta Informatica Medica"; d) Twenty years from establishing the first Cathedra for Medical Informatics at biomedical faculties in Bosnia and Herzegovina; and e) Ten years from the introduction of "Distance learning" in the medical curriculum. All five mentioned activities in the area of Medical informatics had special importance and made an appropriate contribution to the development of Health/Medical informatics in Bosnia and Herzegovina.

  20. The Chief Clinical Informatics Officer (CCIO)

    PubMed Central

    Sengstack, Patricia; Thyvalikakath, Thankam Paul; Poikonen, John; Middleton, Blackford; Payne, Thomas; Lehmann, Christoph U

    2016-01-01

    Introduction: The emerging operational role of the “Chief Clinical Informatics Officer” (CCIO) remains heterogeneous with individuals deriving from a variety of clinical settings and backgrounds. The CCIO is defined in title, responsibility, and scope of practice by local organizations. The term encompasses the more commonly used Chief Medical Informatics Officer (CMIO) and Chief Nursing Informatics Officer (CNIO) as well as the rarely used Chief Pharmacy Informatics Officer (CPIO) and Chief Dental Informatics Officer (CDIO). Background: The American Medical Informatics Association (AMIA) identified a need to better delineate the knowledge, education, skillsets, and operational scope of the CCIO in an attempt to address the challenges surrounding the professional development and the hiring processes of CCIOs. Discussion: An AMIA task force developed knowledge, education, and operational skillset recommendations for CCIOs focusing on the common core aspect and describing individual differences based on Clinical Informatics focus. The task force concluded that while the role of the CCIO currently is diverse, a growing body of Clinical Informatics and increasing certification efforts are resulting in increased homogeneity. The task force advised that 1.) To achieve a predictable and desirable skillset, the CCIO must complete clearly defined and specified Clinical Informatics education and training. 2.) Future education and training must reflect the changing body of knowledge and must be guided by changing day-to-day informatics challenges. Conclusion: A better defined and specified education and skillset for all CCIO positions will motivate the CCIO workforce and empower them to perform the job of a 21st century CCIO. Formally educated and trained CCIOs will provide a competitive advantage to their respective enterprise by fully utilizing the power of Informatics science. PMID:27081413

  1. The Chief Clinical Informatics Officer (CCIO): AMIA Task Force Report on CCIO Knowledge, Education, and Skillset Requirements.

    PubMed

    Kannry, Joseph; Sengstack, Patricia; Thyvalikakath, Thankam Paul; Poikonen, John; Middleton, Blackford; Payne, Thomas; Lehmann, Christoph U

    2016-01-01

    The emerging operational role of the "Chief Clinical Informatics Officer" (CCIO) remains heterogeneous with individuals deriving from a variety of clinical settings and backgrounds. The CCIO is defined in title, responsibility, and scope of practice by local organizations. The term encompasses the more commonly used Chief Medical Informatics Officer (CMIO) and Chief Nursing Informatics Officer (CNIO) as well as the rarely used Chief Pharmacy Informatics Officer (CPIO) and Chief Dental Informatics Officer (CDIO). The American Medical Informatics Association (AMIA) identified a need to better delineate the knowledge, education, skillsets, and operational scope of the CCIO in an attempt to address the challenges surrounding the professional development and the hiring processes of CCIOs. An AMIA task force developed knowledge, education, and operational skillset recommendations for CCIOs focusing on the common core aspect and describing individual differences based on Clinical Informatics focus. The task force concluded that while the role of the CCIO currently is diverse, a growing body of Clinical Informatics and increasing certification efforts are resulting in increased homogeneity. The task force advised that 1.) To achieve a predictable and desirable skillset, the CCIO must complete clearly defined and specified Clinical Informatics education and training. 2.) Future education and training must reflect the changing body of knowledge and must be guided by changing day-to-day informatics challenges. A better defined and specified education and skillset for all CCIO positions will motivate the CCIO workforce and empower them to perform the job of a 21st century CCIO. Formally educated and trained CCIOs will provide a competitive advantage to their respective enterprise by fully utilizing the power of Informatics science.

  2. Challenge and Opportunity: Educating Liberia's War Affected Children in Primary Grades

    ERIC Educational Resources Information Center

    Weah, Wokie

    2004-01-01

    Liberia's emergence from 15 years of tremendous upheaval has left it with an unsettled domestic security situation, a disrupted formal economy and a virtually destroyed national infrastructure. Faced with a burgeoning school-age population and the growing demand for education, Liberia's new National Transitional Government--composed of rebel,…

  3. Identity federation in OpenStack - an introduction to hybrid clouds

    NASA Astrophysics Data System (ADS)

    Denis, Marek; Castro Leon, Jose; Ormancey, Emmanuel; Tedesco, Paolo

    2015-12-01

    We are evaluating the cloud identity federation available in the OpenStack ecosystem, which allows on-premises deployments to burst into remote clouds using local identities (i.e., domain accounts). Further enhancements to identity federation are a clear path toward hybrid cloud architectures - virtualized infrastructures layered across independent private and public clouds.
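
    In OpenStack, identity federation is configured in Keystone through JSON mapping rules that translate attributes asserted by a remote identity provider into local users and groups. A minimal sketch follows; the group and domain names are illustrative placeholders, not a recommended production mapping.

        # Minimal sketch of a Keystone federation mapping: the asserted
        # REMOTE_USER attribute becomes the local user name ("{0}" refers to
        # the first remote match), and the user lands in a placeholder group.
        import json

        mapping = {
            "rules": [
                {
                    "local": [
                        {"user": {"name": "{0}"}},
                        {"group": {"name": "federated_users",
                                   "domain": {"name": "Default"}}}
                    ],
                    "remote": [
                        {"type": "REMOTE_USER"}
                    ]
                }
            ]
        }
        print(json.dumps(mapping, indent=2))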

  4. 76 FR 35872 - Commission Information Collection Activities (Ferc-603); Comment Request; Submitted for OMB Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-20

    .... Critical infrastructure means existing and proposed systems and assets, whether physical or virtual, the....ferc.gov/help/submission-guide.asp . To file the document electronically, access the Commission's Web... using the ``eLibrary'' link. For user assistance, contact [email protected] or toll-free at...

  5. Virtual Office Hours as Cyberinfrastructure: The Case Study of Instant Messaging

    ERIC Educational Resources Information Center

    Balayeva, Jeren; Quan-Haase, Anabel

    2009-01-01

    Although out-of-class communication enhances students' learning experience, students' use of office hours has been limited. As the learning infrastructures of the social sciences and humanities have undergone a range of changes since the diffusion of digital networks, new opportunities emerge to increase out-of-class communication. Hence, it is…

  6. A Role-Playing Virtual World for Web-Based Application Courses

    ERIC Educational Resources Information Center

    Depradine, Colin

    2007-01-01

    With the rapid development of the information communication and technology (ICT) infrastructure in the Caribbean, there is an increasing demand for skilled software developers to meet the ICT needs of the region. Consequently, the web-based applications course offered at the University of the West Indies, has been redeveloped. One major part of…

  7. The Impact of New Information Technology on Bureaucratic Organizational Culture

    ERIC Educational Resources Information Center

    Givens, Mark A.

    2011-01-01

    Virtual work environments (VWEs) have been used in the private sector for more than a decade, but the United States Marine Corps (USMC), as a whole, has not yet taken advantage of associated benefits. The USMC construct parallels the bureaucratic organizational culture and uses an antiquated information technology (IT) infrastructure. During an…

  8. The Importance of Distributed Broadband Networks to Academic Biomedical Research and Education Programs

    ERIC Educational Resources Information Center

    Yellowlees, Peter M.; Hogarth, Michael; Hilty, Donald M.

    2006-01-01

    Objective: This article highlights the importance of distributed broadband networks as part of the core infrastructure necessary to deliver academic research and education programs. Method: The authors review recent developments in the field and present the University of California, Davis, environment as a case study of a future virtual regional…

  9. Using Amazon Web Services (AWS) to enable real-time, remote sensing of biophysical and anthropogenic conditions in green infrastructure systems in Philadelphia, an ultra-urban application of the Internet of Things (IoT)

    NASA Astrophysics Data System (ADS)

    Montalto, F. A.; Yu, Z.; Soldner, K.; Israel, A.; Fritch, M.; Kim, Y.; White, S.

    2017-12-01

    Urban stormwater utilities are increasingly using decentralized "green" stormwater infrastructure (GSI) systems to capture stormwater and achieve compliance with regulations. Because environmental conditions and designs vary across GSI facilities, monitoring of GSI systems under a range of conditions is essential. Conventional monitoring efforts can be costly because in-field data logging requires high data transmission rates. The Internet of Things (IoT) can be used to collect, store, and publish GSI monitoring data more cost-effectively. Using 3G mobile networks, a cloud-based database was built on an Amazon Web Services (AWS) EC2 virtual machine to store and publish data collected with environmental sensors deployed in the field. This database can store multi-dimensional time series data, as well as photos and other observations logged by citizen scientists through a public-engagement mobile app, via a new Application Programming Interface (API). Also on the AWS EC2 virtual machine, a real-time QAQC flagging algorithm was developed to validate the sensor data streams.
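
    A real-time QAQC flagging step of the kind mentioned above can be as simple as plausibility-range and spike tests applied to each incoming reading. A minimal sketch, with illustrative thresholds:

        # Minimal QAQC flagging sketch: each reading is checked against a
        # plausibility range and a simple spike test. Thresholds are
        # illustrative placeholders, not the project's actual rules.
        def qaqc_flags(readings, low=0.0, high=500.0, max_jump=50.0):
            """Return one flag per reading: 'ok', 'range', or 'spike'."""
            flags = []
            previous = None
            for value in readings:
                if not (low <= value <= high):
                    flags.append("range")
                elif previous is not None and abs(value - previous) > max_jump:
                    flags.append("spike")
                else:
                    flags.append("ok")
                previous = value
            return flags

        # e.g., depth-sensor readings (mm) arriving from the field
        print(qaqc_flags([12.0, 13.1, 260.0, 13.4, -5.0]))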

  10. High-Performance Integrated Virtual Environment (HIVE) Tools and Applications for Big Data Analysis.

    PubMed

    Simonyan, Vahan; Mazumder, Raja

    2014-09-30

    The High-performance Integrated Virtual Environment (HIVE) is a high-throughput cloud-based infrastructure developed for the storage and analysis of genomic and associated biological data. HIVE consists of a web-accessible interface for authorized users to deposit, retrieve, share, annotate, compute and visualize Next-generation Sequencing (NGS) data in a scalable and highly efficient fashion. The platform contains a distributed storage library and a distributed computational powerhouse linked seamlessly. Resources available through the interface include algorithms, tools and applications developed exclusively for the HIVE platform, as well as commonly used external tools adapted to operate within the parallel architecture of the system. HIVE is composed of a flexible infrastructure, which allows for simple implementation of new algorithms and tools. Currently, available HIVE tools include sequence alignment and nucleotide variation profiling tools, metagenomic analyzers, phylogenetic tree-building tools using NGS data, clone discovery algorithms, and recombination analysis algorithms. In addition to tools, HIVE also provides knowledgebases that can be used in conjunction with the tools for NGS sequence and metadata analysis.

  11. High-Performance Integrated Virtual Environment (HIVE) Tools and Applications for Big Data Analysis

    PubMed Central

    Simonyan, Vahan; Mazumder, Raja

    2014-01-01

    The High-performance Integrated Virtual Environment (HIVE) is a high-throughput cloud-based infrastructure developed for the storage and analysis of genomic and associated biological data. HIVE consists of a web-accessible interface for authorized users to deposit, retrieve, share, annotate, compute and visualize Next-generation Sequencing (NGS) data in a scalable and highly efficient fashion. The platform contains a distributed storage library and a distributed computational powerhouse linked seamlessly. Resources available through the interface include algorithms, tools and applications developed exclusively for the HIVE platform, as well as commonly used external tools adapted to operate within the parallel architecture of the system. HIVE is composed of a flexible infrastructure, which allows for simple implementation of new algorithms and tools. Currently, available HIVE tools include sequence alignment and nucleotide variation profiling tools, metagenomic analyzers, phylogenetic tree-building tools using NGS data, clone discovery algorithms, and recombination analysis algorithms. In addition to tools, HIVE also provides knowledgebases that can be used in conjunction with the tools for NGS sequence and metadata analysis. PMID:25271953

  12. A Computational framework for telemedicine.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foster, I.; von Laszewski, G.; Thiruvathukal, G. K.

    1998-07-01

    Emerging telemedicine applications require the ability to exploit diverse and geographically distributed resources. High-speed networks are used to integrate advanced visualization devices, sophisticated instruments, large databases, archival storage devices, PCs, workstations, and supercomputers. This form of telemedical environment is similar to networked virtual supercomputers, also known as metacomputers. Metacomputers are already being used in many scientific application areas. In this article, we analyze the requirements necessary for a telemedical computing infrastructure and compare them with the requirements found in a typical metacomputing environment. We will show that metacomputing environments can be used to enable a more powerful and unified computational infrastructure for telemedicine. The Globus metacomputing toolkit can provide the necessary low-level mechanisms to enable a large-scale telemedical infrastructure. The Globus toolkit components are designed in a modular fashion and can be extended to support the specific requirements of telemedicine.

  13. Sustainability Through Technology Licensing and Commercialization: Lessons Learned from the TRIAD Project

    PubMed Central

    Payne, Philip R.O.

    2014-01-01

    Ongoing transformation of the funding climate for healthcare research programs housed in academic and non-profit research organizations has led to a new (or renewed) emphasis on the pursuit of non-traditional sustainability models. This need is often particularly acute in the context of data management and sharing infrastructure developed under the auspices of such research initiatives. One option for achieving sustainability of such infrastructure is the pursuit of technology licensing and commercialization, in an effort to establish public-private or equivalent partnerships that sustain and even expand upon the development and dissemination of research-oriented data management and sharing technologies. However, the critical success factors for technology licensing and commercialization efforts are often unknown to individuals outside of the private sector, making this type of endeavor challenging for investigators in academic and non-profit settings. In response to this gap in knowledge, this article reviews a number of generalizable lessons learned from an effort undertaken at The Ohio State University to commercialize a prototypical research-oriented data management and sharing infrastructure, known as the Translational Research Informatics and Data Management (TRIAD) Grid. It is important to note that the specific emphasis of these lessons learned is on the early stages of moving a technology from the research setting into a private-sector entity; as such, they are particularly relevant to academic investigators interested in pursuing such activities. PMID:25848609

  14. From Dyadic Ties to Information Infrastructures: Care-Coordination between Patients, Providers, Students and Researchers. Contribution of the Health Informatics Education Working Group.

    PubMed

    Purkayastha, S; Price, A; Biswas, R; Jai Ganesh, A U; Otero, P

    2015-08-13

    This paper shares how an effective merging of local and online networks in low-resource regions can supplement and strengthen the local practice of patient-centered care through an online digital infrastructure powered by all stakeholders in healthcare. User Driven Health Care offers the dynamic integration of patient values and evidence-based solutions for improved medical communication in medical care. This paper conceptualizes patient care-coordination through the lens of engaged stakeholders using digital infrastructure tools to integrate information technology. We distinguish this lens from the prevalent conceptualization of dyadic ties between clinician and patient, patient and nurse, or clinician and nurse, and offer the holistic integration of all stakeholder inputs, in the clinic and augmented by online communication in a multi-national setting. We analyze an instance of user-driven health care (UDHC), a network of providers, patients, students, and researchers working together to help manage patient care. The network currently focuses on patients from low- and middle-income countries (LMICs), but the provider network is global in reach. We describe UDHC and its opportunities and challenges in care-coordination to reduce costs, bring equity, improve care quality, and share evidence. UDHC has resulted in coordinated, globally supported local care, affecting multiple facets of medical practice. Sharing information resources among providers with disparate knowledge results in better understanding by patients, unique and challenging cases for students, innovative community-based research, and discovery learning for all.

  15. The Electronic Data Methods (EDM) forum for comparative effectiveness research (CER).

    PubMed

    Holve, Erin; Segal, Courtney; Lopez, Marianne Hamilton; Rein, Alison; Johnson, Beth H

    2012-07-01

    AcademyHealth convened the Electronic Data Methods (EDM) Forum to collect, synthesize, and share lessons from eleven projects that are building infrastructure and using electronic clinical data for comparative effectiveness research (CER) and patient-centered outcomes research (PCOR). This paper provides a brief review of the participating projects and a framework of common challenges. EDM Forum staff conducted a text review of the relevant grant programs' funding opportunity announcements, projects' research plans, and available information on projects' websites. Additional information was obtained from presentations provided by each project; phone calls with project principal investigators, affiliated partners, and staff from the Agency for Healthcare Research and Quality (AHRQ); and six site visits. Projects participating in the EDM Forum are building infrastructure and developing innovative strategies to address a set of methodological, data, and informatics challenges, identified here in a common framework. The eleven networks represent more than 20 states and include a range of partnership models. Projects vary substantially in size, from 11,000 to more than 7.5 million individuals. Nearly all of the AHRQ priority populations and conditions are addressed. In partnership with the projects, the EDM Forum is focused on identifying and sharing lessons learned to advance the national dialogue on the use of electronic clinical data to conduct CER and PCOR. These efforts have the shared goal of addressing challenges in traditional research studies and data sources, and aim to build infrastructure and generate evidence to support a learning health care system that can improve patient outcomes.

  16. A virtual therapeutic environment with user projective agents.

    PubMed

    Ookita, S Y; Tokuda, H

    2001-02-01

    Today, we see the Internet as more than just an information infrastructure: it is also a socializing place and a safe outlet for inner feelings. Many personalities develop apart from real-world life due to the Internet's anonymous environment. Virtual world interactions are bringing about new psychological illnesses ranging from net addiction to technostress, as well as online personality disorders and conflicts among the multiple identities that exist in the virtual world. Presently, there are no standard therapy models for the virtual environment, and there are very few therapeutic environments or tools made especially for virtual therapy. The goal of our research is to provide a therapy model and middleware tools for psychologists to use in virtual therapeutic environments. We propose the Cyber Therapy Model and Projective Agents, a tool used in the therapeutic environment. To evaluate the effectiveness of the tool, we created a prototype system, called the Virtual Group Counseling System, a therapeutic environment that allows users to participate in group counseling through the eyes of their Projective Agent. Projective Agents inherit the user's personality traits. During virtual group counseling, the user's Projective Agent interacts and collaborates with others to support the user's recovery and psychological growth. The prototype system provides a simulation environment in which psychologists can adjust parameters and customize their own simulation environment. The model and tool are a first attempt at simulating personalities that may exist only online, and they provide data for observation.

  17. A Foundation for Enterprise Imaging: HIMSS-SIIM Collaborative White Paper.

    PubMed

    Roth, Christopher J; Lannum, Louis M; Persons, Kenneth R

    2016-10-01

    Care providers today routinely obtain valuable clinical multimedia with mobile devices, scope cameras, ultrasound, and many other modalities at the point of care. Image capture and storage workflows may be heterogeneous across an enterprise, and as a result, they often are not well incorporated in the electronic health record. Enterprise Imaging refers to a set of strategies, initiatives, and workflows implemented across a healthcare enterprise to consistently and optimally capture, index, manage, store, distribute, view, exchange, and analyze all clinical imaging and multimedia content to enhance the electronic health record. This paper is intended to introduce Enterprise Imaging as an important initiative to clinical and informatics leadership, and outline its key elements of governance, strategy, infrastructure, common multimedia content, acquisition workflows, enterprise image viewers, and image exchange services.

  18. Using the LOINC Semantic Structure to Integrate Community-based Survey Items into a Concept-based Enterprise Data Dictionary to Support Comparative Effectiveness Research

    PubMed Central

    Co, Manuel C.; Boden-Albala, Bernadette; Quarles, Leigh; Wilcox, Adam; Bakken, Suzanne

    2012-01-01

    In designing informatics infrastructure to support comparative effectiveness research (CER), it is necessary to implement approaches for integrating heterogeneous data sources, such as clinical data typically stored in clinical data warehouses and data normally stored in separate research databases. One strategy to support this integration is the use of a concept-oriented data dictionary with a set of semantic terminology models. The aim of this paper is to illustrate the use of the semantic structure of Clinical LOINC (Logical Observation Identifiers Names and Codes) in integrating community-based survey items into the Medical Entities Dictionary (MED) to support the integration of survey data with clinical data for CER studies. PMID:24199059
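
    Clinical LOINC names concepts along six semantic axes (component, property, time aspect, system, scale, method), which is what makes it usable as an integration scaffold for survey items. A minimal sketch of a concept entry keyed on those axes follows; the survey item and axis values are illustrative, not actual MED or LOINC content.

        # Minimal sketch of a concept-oriented dictionary entry keyed on the
        # six LOINC semantic axes; all values are illustrative placeholders.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class LoincStyleConcept:
            component: str    # what is measured or asked
            property: str     # kind of quantity or finding
            time_aspect: str  # point in time vs. interval
            system: str       # who or what is observed
            scale: str        # quantitative, ordinal, nominal, ...
            method: str       # how the observation was obtained

        survey_item = LoincStyleConcept(
            component="Days of moderate physical activity per week",
            property="Find",           # a finding
            time_aspect="Pt",          # point in time
            system="^Patient",
            scale="Qn",
            method="Community survey",
        )
        print(survey_item)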

  19. Methods for examining data quality in healthcare integrated data repositories.

    PubMed

    Huser, Vojtech; Kahn, Michael G; Brown, Jeffrey S; Gouripeddi, Ramkiran

    2018-01-01

    This paper summarizes the content of a workshop focused on data quality. The first speaker (VH) described the data quality infrastructure and data quality evaluation methods currently in place within the Observational Health Data Sciences and Informatics (OHDSI) consortium. The speaker described in detail a data quality tool called Achilles Heel and the latest developments for extending this tool. Interim results of an ongoing data quality study within the OHDSI consortium were also presented. The second speaker (MK) described lessons learned and new data quality checks developed by the PEDSnet pediatric research network. The last two speakers (JB, RG) described tools developed by the Sentinel Initiative and the University of Utah's service-oriented framework. Throughout, and in a closing discussion, the workshop considered how data quality assessment can be advanced by combining the best features of each network.
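
    Data quality tools of the kind discussed in the workshop typically run batteries of completeness and plausibility rules over a repository and emit flags. A minimal sketch (not Achilles Heel itself), with illustrative thresholds and toy data:

        # Minimal data-quality check sketch: completeness and plausibility
        # rules over a patient table. Thresholds and data are illustrative.
        import pandas as pd

        patients = pd.DataFrame({
            "person_id": [1, 2, 3, 4],
            "year_of_birth": [1950, 2030, None, 1985],  # 2030 is implausible
            "gender": ["F", "M", "M", None],
        })

        issues = []
        null_rate = patients["year_of_birth"].isna().mean()
        if null_rate > 0.05:  # completeness rule
            issues.append(f"year_of_birth missing in {null_rate:.0%} of rows")
        bad_years = patients.query("year_of_birth > 2025 or year_of_birth < 1880")
        if not bad_years.empty:  # plausibility rule
            issues.append(f"{len(bad_years)} implausible year_of_birth value(s)")

        for issue in issues:
            print("DQ flag:", issue)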

  20. The impact of Life Science Identifier on informatics data.

    PubMed

    Martin, Sean; Hohman, Moses M; Liefeld, Ted

    2005-11-15

    Since the Life Science Identifier (LSID) data identification and access standard made its official debut in late 2004, several organizations have begun to use LSIDs to simplify the methods used to uniquely name, reference and retrieve distributed data objects and concepts. In this review, the authors build on introductory work that describes the LSID standard by documenting how five early adopters have incorporated the standard into their technology infrastructure and by outlining several common misconceptions and difficulties related to LSID use, including the impact of the byte identity requirement for LSID-identified objects and the opacity recommendation for use of the LSID syntax. The review describes several shortcomings of the LSID standard, such as the lack of a specific metadata standard, along with solutions that could be addressed in future revisions of the specification.
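
    An LSID is a URN of the form urn:lsid:authority:namespace:object, with an optional trailing revision; the opacity recommendation discussed in the review argues against reading meaning into these parts beyond resolution. A minimal parsing sketch, with an illustrative identifier:

        # Minimal sketch of splitting an LSID into its standard parts
        # (urn:lsid:authority:namespace:object[:revision]); the example
        # identifier is illustrative.
        def parse_lsid(lsid: str) -> dict:
            parts = lsid.split(":")
            if len(parts) < 5 or parts[0].lower() != "urn" or parts[1].lower() != "lsid":
                raise ValueError(f"not an LSID: {lsid}")
            return {
                "authority": parts[2],
                "namespace": parts[3],
                "object": parts[4],
                "revision": parts[5] if len(parts) > 5 else None,
            }

        print(parse_lsid("urn:lsid:example.org:specimens:12345:2"))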
