Sample records for warehousing

  1. Data warehousing with Oracle

    NASA Astrophysics Data System (ADS)

    Shahzad, Muhammad A.

    1999-02-01

    With the emergence of data warehousing, decision support systems have evolved considerably. At the core of these warehousing systems lies a good database management system. The database server used for data warehousing is responsible for providing robust data management, scalability, high-performance query processing, and integration with other servers. Oracle, a pioneer among warehousing servers, provides a wide range of features for facilitating data warehousing. This paper reviews data warehousing, first introducing the concept and then the features of Oracle servers for implementing a data warehouse.

  2. Data warehousing in molecular biology.

    PubMed

    Schönbach, C; Kowalski-Saunders, P; Brusic, V

    2000-05-01

    In the business and healthcare sectors data warehousing has provided effective solutions for information usage and knowledge discovery from databases. However, data warehousing applications in the biological research and development (R&D) sector are lagging far behind. The fuzziness and complexity of biological data represent a major challenge in data warehousing for molecular biology. By combining experiences in other domains with our findings from building a model database, we have defined the requirements for data warehousing in molecular biology.

  3. 7 CFR 250.14 - Warehousing, distribution and storage of donated foods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    .... (iv) All initial data regarding the cost of the current warehousing and distribution system and the... 7 Agriculture 4 2013-01-01 2013-01-01 false Warehousing, distribution and storage of donated foods... General Operating Provisions § 250.14 Warehousing, distribution and storage of donated foods. (a...

  4. 7 CFR 250.14 - Warehousing, distribution and storage of donated foods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    .... (iv) All initial data regarding the cost of the current warehousing and distribution system and the... 7 Agriculture 4 2014-01-01 2014-01-01 false Warehousing, distribution and storage of donated foods... General Operating Provisions § 250.14 Warehousing, distribution and storage of donated foods. (a...

  5. 7 CFR 250.14 - Warehousing, distribution and storage of donated foods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    .... (iv) All initial data regarding the cost of the current warehousing and distribution system and the... 7 Agriculture 4 2012-01-01 2012-01-01 false Warehousing, distribution and storage of donated foods... General Operating Provisions § 250.14 Warehousing, distribution and storage of donated foods. (a...

  6. [Medical data warehousing as a generator of system component for decision support in health care].

    PubMed

    Catibusić, Sulejman; Hadzagić-Catibusić, Feriha; Zubcević, Smail

    2004-01-01

    The role of data warehousing in providing strategic information to decision makers is growing significantly. Many health institutions have data warehouse implementations under development or already in production. This article is intended to improve general understanding of data warehousing requirements from the point of view of end-users as well as of the information system. To that end, it describes the advantages of and arguments for implementation, techniques and methods of data warehousing, the data warehouse foundation, and the exploration of information as the final product of the data warehousing process.

  7. 27 CFR 6.44 - Free warehousing.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... warehousing. The furnishing of free warehousing by delaying delivery of distilled spirits, wine, or malt... extended, is the furnishing of a service or thing of value within the meaning of the Act. ...

  8. Enhancements for a Dynamic Data Warehousing and Mining System for Large-Scale HSCB Data

    DTIC Science & Technology

    2016-04-21

    Intelligent Automation Incorporated. Progress Report No. 1, Enhancements for a Dynamic Data Warehousing and Mining System for Large-Scale HSCB Data.

  9. Data warehousing leads to improved business performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, R.

    1995-09-01

    Data warehousing is emerging as one of the most significant trends in information technology (IT) during the 1990s. According to William H. Inmon, sometimes referred to as the father of data warehousing, a data warehouse is a subject-oriented, integrated, nonvolatile, time-variant collection of data organized to support management needs. Data warehousing can: provide integrated, historical and operational data; integrate disparate application systems; and organize and store data for informational, analytical processing. Data warehousing offers an opportunity to address today's problems of realizing a return on the massive investments being made in acquiring and managing E and P data. Effective implementations require an understanding of the business benefits being sought and an adaptive, flexible IT architecture for supporting the processes and technologies involved. As national E and P data archives continue to emerge and complement existing data reserves within E and P companies, expect to see increased use of data warehousing to merge these two environments.
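
    As a rough illustration of the "nonvolatile, time-variant" part of Inmon's definition quoted above, the following Python sketch loads operational snapshots into a warehouse table append-only, stamping each row with its load date so history is preserved rather than overwritten. The table and column names are invented for the example and are not drawn from the record.

      import sqlite3
      from datetime import date

      con = sqlite3.connect(":memory:")
      con.execute("""CREATE TABLE well_production_history (
                         well_id TEXT, output_bbl REAL, load_date TEXT)""")

      def load_snapshot(rows, load_date):
          # Append-only load: prior rows are never updated or deleted,
          # so every historical snapshot stays queryable (time-variant).
          con.executemany(
              "INSERT INTO well_production_history VALUES (?, ?, ?)",
              [(well, output, load_date.isoformat()) for well, output in rows])

      load_snapshot([("W-1", 120.0), ("W-2", 95.5)], date(1995, 8, 1))
      load_snapshot([("W-1", 118.2), ("W-2", 97.0)], date(1995, 9, 1))

      # History of a single well across load dates
      for row in con.execute(
              "SELECT load_date, output_bbl FROM well_production_history "
              "WHERE well_id = 'W-1' ORDER BY load_date"):
          print(row)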

  10. Warehousing Operations.

    ERIC Educational Resources Information Center

    Marine Corps Inst., Washington, DC.

    Developed as part of the Marine Corps Institute (MCI) correspondence training program, this course on warehousing operations is designed to provide instruction in the procedures used in warehousing operations. Introductory materials include specific information for MCI students and a study guide (guidelines to complete the course). The 22-hour…

  11. Spatial dynamics of warehousing and distribution in California : METRANS UTC draft 15-27.

    DOT National Transportation Integrated Search

    2017-01-01

    The purpose of this research is to document and analyze the location patterns of warehousing and distribution activity in California. The growth of California's warehousing and distribution (W&D) activities and their spatial patterns is affected by s...

  12. 7 CFR 250.14 - Warehousing, distribution and storage of donated foods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... warehousing and distributing commodities under their current system with the cost of comparable services under... warehousing and distribution services, the distributing agency shall indicate this in its cost comparison... NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND POLICIES-FOOD DISTRIBUTION DONATION OF...

  13. 7 CFR 250.14 - Warehousing, distribution and storage of donated foods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... warehousing and distributing commodities under their current system with the cost of comparable services under... warehousing and distribution services, the distributing agency shall indicate this in its cost comparison... NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND POLICIES-FOOD DISTRIBUTION DONATION OF...

  14. Warehoused Apartments/Warehoused People.

    ERIC Educational Resources Information Center

    Coalition for the Homeless, New York, NY.

    Between 45,000 and 90,000 habitable New York City apartments are being kept deliberately vacant ("warehoused") by the speculators who own them. Most of these apartments have reasonable rents, affordable by middle- and low-income families. Meanwhile, the housing crisis for poor New Yorkers has grown steadily worse. As many as 75,000…

  15. 29 CFR 780.209 - Packing, storage, warehousing, and sale of nursery products.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Packing, storage, warehousing, and sale of nursery products... FAIR LABOR STANDARDS ACT Agriculture as It Relates to Specific Situations Nursery and Landscaping Operations § 780.209 Packing, storage, warehousing, and sale of nursery products. Employees of a grower of...

  16. A virtual intranet and data-warehousing for healthcare co-operation.

    PubMed

    Kerkri, E M; Quantin, C; Grison, T; Allaert, F A; Tchounikine, A; Yétongnon, K

    2001-01-01

    As a patient's medical data is disseminated across different health structures, developing a medical or epidemiological patient-oriented data warehouse has some specific requirements compared with data-warehousing projects within a single healthcare structure. The difference is that the healthcare structures involved in a patient-oriented data warehouse project require some consideration of the confidentiality of the patient data and of the activities of the healthcare structures. Building a data-warehousing system at a regional level, for example in cancerology, requires the participation of all concerned health structures, as well as of different health professionals. The heterogeneity of the sources of medical data has to be taken into account when choosing between several organizational configurations of the data warehousing system. On top of data warehousing, we propose a concept of Virtual Intranet, which provides a solution to the problem of medical information security arising from heterogeneous sources.

  17. Enhancements for a Dynamic Data Warehousing and Mining System for Large-scale HSCB Data

    DTIC Science & Technology

    2016-08-29

    Intelligent Automation Incorporated. Monthly Report No. 5, Enhancements for a Dynamic Data Warehousing and Mining System for Large-scale HSCB Data. Reporting Period: July 20, 2016 – Aug 19, 2016. Contract No. N00014-16-P-3014.

  18. Enhancements for a Dynamic Data Warehousing and Mining System for Large-scale HSCB Data

    DTIC Science & Technology

    2016-07-20

    Intelligent Automation Incorporated. Monthly Report No. 4, Enhancements for a Dynamic Data Warehousing and Mining System for Large-Scale HSCB Data. ...including Top Videos, Top Users, Top Words, and Top Languages, and also applied NER to the text associated with YouTube posts. We have also developed UI for...

  19. Enhancements for a Dynamic Data Warehousing and Mining System for Large-Scale HSCB Data

    DTIC Science & Technology

    2016-07-20

    Intelligent Automation Incorporated. Monthly Report No. 4, Enhancements for a Dynamic Data Warehousing and Mining System for Large-Scale HSCB Data. ...including Top Videos, Top Users, Top Words, and Top Languages, and also applied NER to the text associated with YouTube posts. We have also developed UI for...

  20. Enhancements for a Dynamic Data Warehousing and Mining System for Large-Scale Human Social Cultural Behavioral (HSBC) Data

    DTIC Science & Technology

    2016-09-26

    Intelligent Automation Incorporated. Enhancements for a Dynamic Data Warehousing and Mining System for Large-Scale Human Social Cultural Behavioral (HSBC) Data, Contract No. N00014-16-P-3014. ...Representative Media Gallery View. We apply Scraawl's NER algorithm to the text associated with YouTube posts, which classifies the named entities into...

  1. Transportation and Warehousing Sector (NAICS 48-49)

    EPA Pesticide Factsheets

    Find EPA regulatory information for the transportation and warehousing sector, including NESHAPs for RICE and gasoline dispensing facilities, effluent guidelines, power wash discharges, and border and port compliance.

  2. 19 CFR 144.1 - Merchandise eligible for warehousing.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... may be entered for warehousing except for perishable merchandise and explosive substances (other than firecrackers). Dangerous and highly flammable merchandise, though not classified as explosive, shall not be...

  3. Clinical data warehousing for evidence based decision making.

    PubMed

    Narra, Lekha; Sahama, Tony; Stapleton, Peta

    2015-01-01

    Large volumes of heterogeneous health data silos pose a big challenge when exploring for information to allow evidence based decision making and to ensure quality outcomes. In this paper, we present a proof of concept for adopting data warehousing technology to aggregate and analyse disparate health data in order to understand the impact of various lifestyle factors on obesity. We present a practical model for data warehousing, with a detailed explanation, which can be adopted similarly for studying various other health issues.
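
    A minimal sketch of the kind of aggregation this record describes, assuming hypothetical source layouts (a clinical extract with BMI and a separate lifestyle-survey silo) rather than the authors' actual model: two disparate sources are merged on a patient identifier and summarised by lifestyle factor.

      import pandas as pd

      # Two hypothetical, heterogeneous source extracts (names are illustrative only).
      clinical = pd.DataFrame({"patient_id": [1, 2, 3, 4],
                               "bmi": [31.2, 24.5, 28.9, 35.0]})
      lifestyle = pd.DataFrame({"patient_id": [1, 2, 3, 4],
                                "activity_level": ["low", "high", "low", "low"]})

      # Integrate the silos on a shared key, then analyse a lifestyle factor vs. obesity.
      merged = clinical.merge(lifestyle, on="patient_id")
      merged["obese"] = merged["bmi"] >= 30
      print(merged.groupby("activity_level")["obese"].mean())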

  4. Visual Modelling of Data Warehousing Flows with UML Profiles

    NASA Astrophysics Data System (ADS)

    Pardillo, Jesús; Golfarelli, Matteo; Rizzi, Stefano; Trujillo, Juan

    Data warehousing involves complex processes that transform source data through several stages to deliver suitable information ready to be analysed. Though many techniques for visual modelling of data warehouses from the static point of view have been devised, only few attempts have been made to model the data flows involved in a data warehousing process. Besides, each attempt was mainly aimed at a specific application, such as ETL, OLAP, what-if analysis, data mining. Data flows are typically very complex in this domain; for this reason, we argue, designers would greatly benefit from a technique for uniformly modelling data warehousing flows for all applications. In this paper, we propose an integrated visual modelling technique for data cubes and data flows. This technique is based on UML profiling; its feasibility is evaluated by means of a prototype implementation.

  5. Considerations on bringing warehoused HCV patients into active care following interferon-free, direct-acting antiviral drug approval.

    PubMed

    Palak, Aleksandra; Livoti, Christine; Audibert, Céline

    2017-05-01

    Until recently, lack of efficacious and tolerable hepatitis C virus (HCV) treatments prompted patient warehousing until better treatment options became available. We investigated whether the introduction of ledipasvir/sofosbuvir precipitated patient return to clinics, thereby changing HCV clinic dynamics. Online questionnaire responses indicated the volume of HCV patients followed, the proportion of warehoused patients and those who were proactively offered new options, methods for identifying and contacting patients, and insurance authorization/reimbursement-related information. Of 168 practices surveyed, 19% indicated no patient warehousing in the previous 3 years; 81% had warehoused 40% of patients; 92% were able to handle their patient load; and 82% had not changed practices to accommodate more HCV patients in the previous 12 months. Of the 35% of patients who were ledipasvir/sofosbuvir-eligible, 50% already completed/are completing therapy, 21% were not treated due to insurance denial, and 19% were awaiting responses from insurance companies. Launch of a new treatment did not overburden HCV practices. Patients eligible to receive new treatments were being treated, but pre-authorization processes and reimbursement denials reduced the numbers of treated patients.

  6. Motor Freight Transportation and Warehousing Survey 1995

    DOT National Transportation Integrated Search

    1997-02-01

    This report presents the results from the 1995 Motor Freight Transportation and : Warehousing Survey. This annual sample survey represents all employer firms : with one or more establishments that are primarily engaged in providing : commercial motor...

  7. Application of the medical data warehousing architecture EPIDWARE to epidemiological follow-up: data extraction and transformation.

    PubMed

    Kerkri, E; Quantin, C; Yetongnon, K; Allaert, F A; Dusserre, L

    1999-01-01

    In this paper, we present an application of EPIDWARE, a medical data warehousing architecture, to our epidemiological follow-up project. The aim of this project is to extract and regroup information from various information systems for epidemiological studies. We give a description of the requirements of the epidemiological follow-up project, such as the anonymity of medical data and the data file linkage procedure. We introduce the concept of a Data Warehousing Architecture. The particularities of data extraction and transformation are presented and discussed.
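
    The record mentions anonymity of medical data and a data file linkage procedure without giving details. The sketch below shows one generic way such linkage is often done, keyed hashing of identifying fields so records from different systems can be matched without exchanging identities; the field names and secret key are illustrative assumptions, not EPIDWARE's actual procedure.

      import hashlib, hmac

      SECRET_KEY = b"shared-linkage-key"   # hypothetical key agreed by the participating sites

      def linkage_code(last_name, birth_date):
          """Derive an anonymous linkage code from identifying fields."""
          msg = f"{last_name.strip().upper()}|{birth_date}".encode()
          return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()

      # Records exported by two information systems, identities replaced by the code.
      hospital_record = {"link": linkage_code("Dupont", "1950-03-14"), "diagnosis": "C50"}
      registry_record = {"link": linkage_code("DUPONT ", "1950-03-14"), "follow_up": "alive"}

      # The epidemiological warehouse can regroup the two records without names.
      print(hospital_record["link"] == registry_record["link"])   # True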

  8. 77 FR 62260 - Niles America Wintech, Inc., Warehousing Division, a Valeo Company, Including On-Site Leased...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-12

    ..., Adecco Employment Services, Winchester, KY; Niles America Wintech, Inc., Assembly and Testing Division, a... former workers of Niles America Wintech, Inc., Warehousing Division and Assembly and Testing Division...

  9. The cornerstone of data warehousing for government applications

    NASA Technical Reports Server (NTRS)

    Kenbeek, Doug; Rothschild, Jack

    1996-01-01

    The purpose of this paper is to discuss data warehousing storage issues and the impact of EMC open storage technology in meeting the myriad challenges government organizations face when building a Decision Support/Data Warehouse system.

  10. Structural and geographic shifts in the Washington warehousing industry : transportation impacts for the Green River Valley.

    DOT National Transportation Integrated Search

    2009-07-01

    Establishment level employment data indicate that the warehousing industry has experienced rapid growth and : restructuring since 1998. This restructuring has resulted in geographic shifts at the national, regional, and local scales. : Uneven growth ...

  11. Warehousing Competency Profile. Apprenticeship Training.

    ERIC Educational Resources Information Center

    Alberta Learning, Edmonton. Apprenticeship and Industry Training.

    This document presents information about the apprenticeship training program of Alberta, Canada, in general and the warehousing program in particular. The first part of the document discusses the following items: Alberta's apprenticeship and industry training system; the apprenticeship and industry training committee structure; local…

  12. Data Warehousing: Beyond Disaggregation.

    ERIC Educational Resources Information Center

    Rudner, Lawrence M.; Boston, Carol

    2003-01-01

    Discusses data warehousing, which provides information more fully responsive to local, state, and federal data needs. Such a system allows educators to generate reports and analyses that supply information, provide accountability, explore relationships among different kinds of data, and inform decision-makers. (Contains one figure and eight…

  13. 21 CFR 211.142 - Warehousing procedures.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 4 2013-04-01 2013-04-01 false Warehousing procedures. 211.142 Section 211.142 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) DRUGS: GENERAL CURRENT GOOD MANUFACTURING PRACTICE FOR FINISHED PHARMACEUTICALS Holding and Distribution § 211...

  14. 21 CFR 211.142 - Warehousing procedures.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 4 2012-04-01 2012-04-01 false Warehousing procedures. 211.142 Section 211.142 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) DRUGS: GENERAL CURRENT GOOD MANUFACTURING PRACTICE FOR FINISHED PHARMACEUTICALS Holding and Distribution § 211...

  15. 21 CFR 211.142 - Warehousing procedures.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 4 2011-04-01 2011-04-01 false Warehousing procedures. 211.142 Section 211.142 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) DRUGS: GENERAL CURRENT GOOD MANUFACTURING PRACTICE FOR FINISHED PHARMACEUTICALS Holding and Distribution § 211...

  16. 21 CFR 211.142 - Warehousing procedures.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 4 2010-04-01 2010-04-01 false Warehousing procedures. 211.142 Section 211.142 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) DRUGS: GENERAL CURRENT GOOD MANUFACTURING PRACTICE FOR FINISHED PHARMACEUTICALS Holding and Distribution § 211...

  17. 21 CFR 211.142 - Warehousing procedures.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 4 2014-04-01 2014-04-01 false Warehousing procedures. 211.142 Section 211.142 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) DRUGS: GENERAL CURRENT GOOD MANUFACTURING PRACTICE FOR FINISHED PHARMACEUTICALS Holding and Distribution § 211...

  18. 21 CFR 110.93 - Warehousing and distribution.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...) FOOD FOR HUMAN CONSUMPTION CURRENT GOOD MANUFACTURING PRACTICE IN MANUFACTURING, PACKING, OR HOLDING... microbial contamination as well as against deterioration of the food and the container. ... 21 Food and Drugs 2 2010-04-01 2010-04-01 false Warehousing and distribution. 110.93 Section 110...

  19. Using data warehousing and OLAP in public health care.

    PubMed

    Hristovski, D; Rogac, M; Markota, M

    2000-01-01

    The paper describes the possibilities of using data warehousing and OLAP technologies in public health care in general and then our own experience with these technologies gained during the implementation of a data warehouse of outpatient data at the national level. Such a data warehouse serves as a basis for advanced decision support systems based on statistical, OLAP or data mining methods. We used OLAP to enable interactive exploration and analysis of the data. We found out that data warehousing and OLAP are suitable for the domain of public health and that they enable new analytical possibilities in addition to the traditional statistical approaches.
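
    As a toy illustration of the interactive, OLAP-style exploration described above (not the authors' national outpatient warehouse; the dimensions and figures are invented), a pivot with margin totals gives a roll-up of visit counts by region and year.

      import pandas as pd

      # Hypothetical outpatient fact table: one row per aggregated visit count.
      visits = pd.DataFrame({
          "region": ["East", "East", "West", "West"],
          "year":   [1998, 1999, 1998, 1999],
          "visits": [1200, 1350, 900, 980],
      })

      # Roll-up / drill-down style view: totals by region, by year, and overall.
      cube = visits.pivot_table(values="visits", index="region", columns="year",
                                aggfunc="sum", margins=True, margins_name="Total")
      print(cube)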

  20. Using data warehousing and OLAP in public health care.

    PubMed Central

    Hristovski, D.; Rogac, M.; Markota, M.

    2000-01-01

    The paper describes the possibilities of using data warehousing and OLAP technologies in public health care in general and then our own experience with these technologies gained during the implementation of a data warehouse of outpatient data at the national level. Such a data warehouse serves as a basis for advanced decision support systems based on statistical, OLAP or data mining methods. We used OLAP to enable interactive exploration and analysis of the data. We found out that data warehousing and OLAP are suitable for the domain of public health and that they enable new analytical possibilities in addition to the traditional statistical approaches. PMID:11079907

  1. Criteria for Centralized Warehousing Procedures in Public School Districts. Summary Report.

    ERIC Educational Resources Information Center

    Forsythe, Ralph A.; Thomson, Leland A.

    This survey of opinions of architects, certified public accountants, and educators (who have written concerning, shown leadership in, or have specialized knowledge about warehousing) covers the planning, organizing, material handling, and paper processing of presently operated school district central warehouses. All recommendations concerning…

  2. 77 FR 58112 - Notice of Intent To Prepare an Environmental Assessment (EA) for the Proposed Conveyance of Land...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-19

    ..., including warehousing and distribution; research and development; technology manufacturing; food processing... warehousing and distribution; research and development; technology manufacturing; food processing and... defense manufacturing, sensor manufacturing, or medical devices; (iv) Food/Agriculture--such as wine, food...

  3. Data warehousing as a basis for web-based documentation of data mining and analysis.

    PubMed

    Karlsson, J; Eklund, P; Hallgren, C G; Sjödin, J G

    1999-01-01

    In this paper we present a case study for data warehousing intended to support data mining and analysis. We also describe a prototype for data retrieval. Further we discuss some technical issues related to a particular choice of a patient record environment.

  4. 22 CFR 124.14 - Exports to warehouses or distribution points outside the United States.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...., contracts) between U.S. persons and foreign persons for the warehousing and distribution of defense articles.... (b) Required information. Proposed warehousing and distribution agreements (and amendments thereto... military nomenclature, the Federal stock number, nameplate data, and any control numbers under which the...

  5. 22 CFR 124.14 - Exports to warehouses or distribution points outside the United States.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...., contracts) between U.S. persons and foreign persons for the warehousing and distribution of defense articles.... (b) Required information. Proposed warehousing and distribution agreements (and amendments thereto... military nomenclature, the Federal stock number, nameplate data, and any control numbers under which the...

  6. 22 CFR 124.14 - Exports to warehouses or distribution points outside the United States.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...., contracts) between U.S. persons and foreign persons for the warehousing and distribution of defense articles.... (b) Required information. Proposed warehousing and distribution agreements (and amendments thereto... military nomenclature, the Federal stock number, nameplate data, and any control numbers under which the...

  7. Intelligent Information Retrieval and Web Mining Architecture Using SOA

    ERIC Educational Resources Information Center

    El-Bathy, Naser Ibrahim

    2010-01-01

    This dissertation provides a solution to a very specific problem instance in the area of data mining, data warehousing, and service-oriented architecture in the publishing and newspaper industries. The research question focuses on the integration of data mining and data warehousing. The research problem focuses on the development of…

  8. IMPACTS OF TECHNOLOGICAL CHANGES IN WAREHOUSING, PHASE I.

    ERIC Educational Resources Information Center

    HAMILTON, PHYLLIS D.; KINCAID, HARRY V.

    The objectives of this study were (1) to determine the availability, nature, and reliability of data on the rapid change in the warehousing function in industry and (2) to provide a basis for decisions concerning the desirability and feasibility of conducting subsequent studies. Three major sources of information on California, Oregon, Washington,…

  9. We've Got Plenty of Data, Now How Can We Use It?

    ERIC Educational Resources Information Center

    Weiler, Jeffrey K.; Mears, Robert L.

    1999-01-01

    To mine a large store of school data, a new technology (variously termed data warehousing, data marts, online analytical processing, and executive information systems) is emerging. Data warehousing helps school districts extract and restructure desired data from automated systems and create new databases designed to enhance analytical and…

  10. 76 FR 10328 - Grant of Authority for Subzone Status; Vestas Nacelles America, Inc. (Wind Turbine Nacelles, Hubs...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-24

    ... Status; Vestas Nacelles America, Inc. (Wind Turbine Nacelles, Hubs, Blades and Towers), Brighton, Denver...-purpose subzone at the wind turbine nacelle, hub, blade and tower manufacturing and warehousing facilities... status for activity related to the manufacturing and warehousing of wind turbine nacelles, hubs, blades...

  11. Marketing and Distributive Education Curriculum Guide: Transportation and Warehousing.

    ERIC Educational Resources Information Center

    Northern Illinois Univ., DeKalb. Dept. of Business Education and Administration Services.

    Designed to be used with the General Marketing Curriculum Planning Guide (ED 156 860), this guide is intended to provide the curriculum coordinator with a basis for planning a comprehensive program in the field of marketing for transportation and warehousing. It contains job competency sheets in eight instructional areas: (1) communications, (2)…

  12. Development of a statewide transportation data warehousing and mining system under the Louisiana Transportation Information System (LATIS) Program.

    DOT National Transportation Integrated Search

    2009-04-01

    The objectives of this study are to assess whether introducing a data warehousing/data mining system in Louisiana would be feasible and beneficial. The study sets out to identify the features of the most suitable system for the state as well as to ou...

  13. 12 CFR 584.2-1 - Prescribed services and activities of savings and loan holding companies.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... warehousing of such real estate loans, except that such a company or subsidiary shall not invest in a loan... installed therein), including brokerage and warehousing of such chattel paper; (iii) Loans, with or without... other multiple holding companies and affiliates thereof: (i) Data processing; (ii) Credit information...

  14. 12 CFR 584.2-1 - Prescribed services and activities of savings and loan holding companies.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... warehousing of such real estate loans, except that such a company or subsidiary shall not invest in a loan... installed therein), including brokerage and warehousing of such chattel paper; (iii) Loans, with or without... other multiple holding companies and affiliates thereof: (i) Data processing; (ii) Credit information...

  15. 12 CFR 584.2-1 - Prescribed services and activities of savings and loan holding companies.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... warehousing of such real estate loans, except that such a company or subsidiary shall not invest in a loan... installed therein), including brokerage and warehousing of such chattel paper; (iii) Loans, with or without... other multiple holding companies and affiliates thereof: (i) Data processing; (ii) Credit information...

  16. A Realistic Data Warehouse Project: An Integration of Microsoft Access[R] and Microsoft Excel[R] Advanced Features and Skills

    ERIC Educational Resources Information Center

    King, Michael A.

    2009-01-01

    Business intelligence derived from data warehousing and data mining has become one of the most strategic management tools today, providing organizations with long-term competitive advantages. Business school curriculums and popular database textbooks cover data warehousing, but the examples and problem sets typically are small and unrealistic. The…

  17. 75 FR 5283 - Foreign-Trade Zone 123 - Denver, Colorado, Application for Subzone, Vestas Nacelles America, Inc...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-02

    ..., Colorado, Application for Subzone, Vestas Nacelles America, Inc. (Wind Turbine Nacelles, Hubs, Blades and...-purpose subzone status for the wind turbine nacelle, hub, blade and tower manufacturing and warehousing... warehousing of wind turbines and related parts (up to 1,560 nacelles and hubs, 4,200 blades, and 1,100 towers...

  18. Teradata University Network: A No Cost Web-Portal for Teaching Database, Data Warehousing, and Data-Related Subjects

    ERIC Educational Resources Information Center

    Jukic, Nenad; Gray, Paul

    2008-01-01

    This paper describes the value that information systems faculty and students in classes dealing with database management, data warehousing, decision support systems, and related topics, could derive from the use of the Teradata University Network (TUN), a free comprehensive web-portal. A detailed overview of TUN functionalities and content is…

  19. Information integration for a sky survey by data warehousing

    NASA Astrophysics Data System (ADS)

    Luo, A.; Zhang, Y.; Zhao, Y.

    The virtualization service of the data system for the sky survey LAMOST is very important for astronomers. The service needs to integrate information from data collections, catalogs, and references, and to support simple federation of a set of distributed files and associated metadata. Data warehousing has been in existence for several years and has demonstrated superiority over traditional relational database management systems by providing novel indexing schemes that support efficient on-line analytical processing (OLAP) of large databases. Relational database systems such as Oracle now support the warehouse capability, including extensions to the SQL language to support OLAP operations, and a number of metadata management tools have been created. The information integration of LAMOST by applying data warehousing aims to effectively provide data and knowledge on-line.
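
    A minimal sketch of the "simple federation of a set of distributed files and associated metadata" the record calls for, with invented file names standing in for LAMOST catalogs: each file is read, tagged with its provenance, and combined into one queryable table.

      import glob
      import pandas as pd

      def federate(pattern="catalog_*.csv"):
          """Combine distributed catalog files into a single table, keeping provenance."""
          parts = []
          for path in glob.glob(pattern):
              part = pd.read_csv(path)
              part["source_file"] = path      # associated metadata: where each row came from
              parts.append(part)
          return pd.concat(parts, ignore_index=True) if parts else pd.DataFrame()

      # combined = federate()                 # e.g. catalog_north.csv, catalog_south.csv
      # print(combined.groupby("source_file").size())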

  20. Getting Started with Data Warehousing: The First in a Series on How to Manage Data Efficiently

    ERIC Educational Resources Information Center

    Mills, Lane B.

    2008-01-01

    These days, "data-driven decision making" is on every school district's buzzword bingo game board. Accountability pressures and lean budgets make translating data into information a major focus of school systems that are trying to improve district outcomes in all areas. As such, data warehousing has become an essential district tool. Historically…

  1. Navigating legal constraints in clinical data warehousing: a case study in personalized medicine.

    PubMed

    Jefferys, Benjamin R; Nwankwo, Iheanyi; Neri, Elias; Chang, David C W; Shamardin, Lev; Hänold, Stefanie; Graf, Norbert; Forgó, Nikolaus; Coveney, Peter

    2013-04-06

    Personalized medicine relies in part upon comprehensive data on patient treatment and outcomes, both for analysis leading to improved models that provide the basis for enhanced treatment, and for direct use in clinical decision-making. A data warehouse is an information technology for combining and standardizing multiple databases. Data warehousing of clinical data is constrained by many legal and ethical considerations, owing to the sensitive nature of the data being stored. We describe an unconstrained clinical data warehousing architecture, some of the legal constraints that have led us to reconsider this architecture, and the legal and technical solutions to these constraints developed for the clinical data warehouse in the personalized medicine project p-medicine. We also propose some changes to the legal constraints that will further enable clinical research.

  2. The epiphany of data warehousing technologies in the pharmaceutical industry.

    PubMed

    Barrett, J S; Koprowski, S P

    2002-03-01

    The highly competitive pharmaceutical industry has seen many external changes to its landscape as companies consume each other increasing their pipelines while removing redundant functions and processes. Internally, companies have sought to streamline the discovery and development phases in an attempt to improve candidate selection and reduce the time to regulatory filing. In conjunction with efforts to screen and develop more compounds faster and more efficiently, database management systems (DBMS) have been developed for numerous groups supporting various R&D efforts. An outgrowth of DBMS evolution has been the birth of data warehousing. Often confused with DBMS, data warehousing provides a conduit for data residing across platforms, networks, and in different data structures. Through the use of metadata, the warehouse establishes connectivity of varied data stores (operational detail data, ODD) and permits identification of data ownership, location and transaction history. This evolution has closely mirrored and in some ways been driven by the electronic submission (formerly CANDA). The integration of the electronic submissions and document management with R&D data warehousing initiatives should provide a platform by which companies can address compliance with 21 CFR Part 11. Now more than ever "corporate memory" is being extended to the data itself. The when, why and how of successes and failures are constantly being probed by R&D management teams. The volume of information being generated by today's pharmaceutical companies requires mining of historical data on a routine basis. Data warehousing represents a core technology to assist in this endeavor. New initiatives in this field address the necessity of data portals through which warehouse data can be web-enabled and exploited by diverse data customers both internal and external to the company. The epiphany of data warehousing technologies within the pharmaceutical industry has begun and promises to change the way in which companies process and provide data to regulatory agencies. The improvements in drug discovery and reduction in development timelines remain to be seen but would seem to be rational if such technology is fully exploited.

  3. Data Foundry: Data Warehousing and Integration for Scientific Data Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Musick, R.; Critchlow, T.; Ganesh, M.

    2000-02-29

    Data warehousing is an approach for managing data from multiple sources by representing them with a single, coherent point of view. Commercial data warehousing products have been produced by companies such as Red Brick, IBM, Brio, Andyne, Ardent, NCR, Information Advantage, Informatica, and others. Other companies have chosen to develop their own in-house data warehousing solution using relational databases, such as those sold by Oracle, IBM, Informix and Sybase. The typical approaches include federated systems and mediated data warehouses, each of which, to some extent, makes use of a series of source-specific wrapper and mediator layers to integrate the data into a consistent format which is then presented to users as a single virtual data store. These approaches are successful when applied to traditional business data because the data format used by the individual data sources tends to be rather static. Therefore, once a data source has been integrated into a data warehouse, there is relatively little work required to maintain that connection. However, that is not the case for all data sources. Data sources from scientific domains tend to regularly change their data model, format and interface. This is problematic because each change requires the warehouse administrator to update the wrapper, mediator, and warehouse interfaces to properly read, interpret, and represent the modified data source. Furthermore, the data that scientists require to carry out research is continuously changing as their understanding of a research question develops, or as their research objectives evolve. The difficulty and cost of these updates effectively limits the number of sources that can be integrated into a single data warehouse, or makes an approach based on warehousing too expensive to consider.
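
    The passage above describes wrapper and mediator layers that present heterogeneous sources in one consistent format. The sketch below is a generic illustration of that idea, not Data Foundry code: two invented source formats each get a small wrapper, and a mediator exposes them through a single schema.

      # Hypothetical source records in two different native formats.
      source_a = [{"gene": "TP53", "organism": "human"}]
      source_b = [("BRCA1", "Homo sapiens")]

      def wrap_source_a(rec):
          # Wrapper: translate source A's dict format into the warehouse schema.
          return {"symbol": rec["gene"], "species": rec["organism"]}

      def wrap_source_b(rec):
          # Wrapper: translate source B's tuple format into the same schema.
          symbol, species = rec
          return {"symbol": symbol, "species": species}

      def mediator():
          # Mediator: present all sources as a single virtual data store.
          yield from (wrap_source_a(r) for r in source_a)
          yield from (wrap_source_b(r) for r in source_b)

      for record in mediator():
          print(record)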

  4. Promotion bureau warehouse system design. Case study in University of AA

    NASA Astrophysics Data System (ADS)

    Parwati, N.; Qibtiyah, M.

    2017-12-01

    The warehouse is one of the important parts of an industrial operation. With a good warehousing system, an industry can improve the effectiveness of its performance so that the company's profits can continue to increase; a poorly organized warehouse system, by contrast, risks reducing that effectiveness. In this research, the object of study was the warehousing system in the promotion bureau of University AA. To improve the effectiveness of the warehousing system, the warehouse layout was designed by assigning categories of goods, based on the flow of goods into and out of the warehouse, using the ABC analysis method. In addition, an information system was designed to assist in controlling the system and to support the demand from every bureau and department in the university.
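
    A small sketch of the ABC analysis step mentioned above, under the common convention (assumed here, since the record does not give the cut-offs) that items covering roughly the first 80% of cumulative movement are class A, the next 15% class B, and the remainder class C.

      # Hypothetical monthly flow (units in and out) per stored item.
      flows = {"brochures": 500, "banners": 120, "pens": 900,
               "booth kits": 40, "posters": 260}

      total = sum(flows.values())
      cumulative = 0.0
      classes = {}
      for item, qty in sorted(flows.items(), key=lambda kv: kv[1], reverse=True):
          cumulative += qty / total
          classes[item] = "A" if cumulative <= 0.80 else "B" if cumulative <= 0.95 else "C"

      print(classes)   # class A items get the most accessible warehouse locations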

  5. Data Model Performance in Data Warehousing

    NASA Astrophysics Data System (ADS)

    Rorimpandey, G. C.; Sangkop, F. I.; Rantung, V. P.; Zwart, J. P.; Liando, O. E. S.; Mewengkang, A.

    2018-02-01

    Data warehouses have become increasingly important in organizations that have large amounts of data. A data warehouse is not a product but part of a solution for the decision support system in those organizations. The data model is the starting point for designing and developing data warehouse architectures; thus, the data model needs stable interfaces and must remain consistent over a long period of time. The aim of this research is to determine which data model in data warehousing has the best performance. The research method is descriptive analysis, which has three main tasks: data collection and organization, analysis of data, and interpretation of data. The results, examined with a statistical analysis method, show that there is no statistical difference among the data models used in data warehousing; organizations can therefore utilize any of the four proposed data models when designing and developing a data warehouse.

  6. Navigating legal constraints in clinical data warehousing: a case study in personalized medicine

    PubMed Central

    Jefferys, Benjamin R.; Nwankwo, Iheanyi; Neri, Elias; Chang, David C. W.; Shamardin, Lev; Hänold, Stefanie; Graf, Norbert; Forgó, Nikolaus; Coveney, Peter

    2013-01-01

    Personalized medicine relies in part upon comprehensive data on patient treatment and outcomes, both for analysis leading to improved models that provide the basis for enhanced treatment, and for direct use in clinical decision-making. A data warehouse is an information technology for combining and standardizing multiple databases. Data warehousing of clinical data is constrained by many legal and ethical considerations, owing to the sensitive nature of the data being stored. We describe an unconstrained clinical data warehousing architecture, some of the legal constraints that have led us to reconsider this architecture, and the legal and technical solutions to these constraints developed for the clinical data warehouse in the personalized medicine project p-medicine. We also propose some changes to the legal constraints that will further enable clinical research. PMID:24427531

  7. A review of genomic data warehousing systems.

    PubMed

    Triplet, Thomas; Butler, Gregory

    2014-07-01

    To facilitate the integration and querying of genomics data, a number of generic data warehousing frameworks have been developed. They differ in their design and capabilities, as well as their intended audience. We provide a comprehensive and quantitative review of those genomic data warehousing frameworks in the context of large-scale systems biology. We reviewed in detail four genomic data warehouses (BioMart, BioXRT, InterMine and PathwayTools) freely available to the academic community. We quantified 20 aspects of the warehouses, covering the accuracy of their responses, their computational requirements and development efforts. Performance of the warehouses was evaluated under various hardware configurations to help laboratories optimize hardware expenses. Each aspect of the benchmark may be dynamically weighted by scientists using our online tool BenchDW (http://warehousebenchmark.fungalgenomics.ca/benchmark/) to build custom warehouse profiles and tailor our results to their specific needs.
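
    The record states that each benchmarked aspect can be weighted to build a custom warehouse profile; the sketch below shows that weighting arithmetic in its simplest form, with invented scores and weights rather than the published benchmark data.

      # Hypothetical per-aspect scores (0-1) for two of the reviewed frameworks.
      scores = {
          "BioMart":   {"accuracy": 0.9, "hardware_cost": 0.6, "dev_effort": 0.7},
          "InterMine": {"accuracy": 0.8, "hardware_cost": 0.8, "dev_effort": 0.5},
      }
      # A lab that cares mostly about accuracy might choose weights like these.
      weights = {"accuracy": 0.6, "hardware_cost": 0.2, "dev_effort": 0.2}

      for name, aspects in scores.items():
          profile = sum(weights[a] * v for a, v in aspects.items())
          print(name, round(profile, 3))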

  8. Data warehousing: toward knowledge management.

    PubMed

    Shams, K; Farishta, M

    2001-02-01

    With rapid changes taking place in the practice and delivery of health care, decision support systems have assumed an increasingly important role. More and more health care institutions are deploying data warehouse applications as decision support tools for strategic decision making. By making the right information available at the right time to the right decision makers in the right manner, data warehouses empower employees to become knowledge workers with the ability to make the right decisions and solve problems, creating strategic leverage for the organization. Health care management must plan and implement a data warehousing strategy using a best practice approach. Through the power of data warehousing, health care management can negotiate better managed care contracts based on the ability to provide accurate data on case mix and resource utilization. Management can also save millions of dollars through the implementation of clinical pathways, better resource utilization, and changing physician behavior to best practices based on evidence-based medicine.

  9. Towards to an Oncology Database (ONCOD) using a data warehousing approach

    PubMed Central

    Wang, Xiaoming; Liu, Lili; Fackenthal, James; Chang, Paul; Newstead, Gilliam; Chmura, Steven; Foster, Ian; Olopade, Olufunmilayo I

    2012-01-01

    While data warehousing approaches have been increasingly adopted in the biomedical informatics community for individualized data integration, effectively dealing with data integration, access, and application remains a challenging issue. In this report, focusing on oncology data, we describe how to use an established data warehouse system, named TRAM, to provide a data mart layer to address this issue. Our effort has resulted in a twofold achievement: 1) a model data mart tailored to facilitate oncology data integration and application (ONCOD), and 2) a flexible system architecture that has potential to be customized to support other data marts for various major medical fields. PMID:22779060

  10. Demonstration of Hadoop-GIS: A Spatial Data Warehousing System Over MapReduce.

    PubMed

    Aji, Ablimit; Sun, Xiling; Vo, Hoang; Liu, Qioaling; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel; Wang, Fusheng

    2013-11-01

    The proliferation of GPS-enabled devices and the rapid improvement of scientific instruments have resulted in massive amounts of spatial data in the last decade. Support of high performance spatial queries on large volumes of data has become increasingly important in numerous fields, which requires a scalable and efficient spatial data warehousing solution, as existing approaches exhibit scalability limitations and efficiency bottlenecks for large scale spatial applications. In this demonstration, we present Hadoop-GIS - a scalable and high performance spatial query system over MapReduce. Hadoop-GIS provides an efficient spatial query engine to process spatial queries, data and space based partitioning, and query pipelines that parallelize queries implicitly on MapReduce. Hadoop-GIS also provides an expressive, SQL-like spatial query language for workload specification. We will demonstrate how spatial queries are expressed in spatially extended SQL queries, and submitted through a command line/web interface for execution. Parallel to our system demonstration, we explain the system architecture and details on how queries are translated to MapReduce operators, optimized, and executed on Hadoop. In addition, we will showcase how the system can be used to support two representative real world use cases: large scale pathology analytical imaging, and geo-spatial data warehousing.
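
    As a toy illustration of the space-based partitioning idea described above (not Hadoop-GIS itself, and with invented coordinates), points are assigned to grid tiles so that a per-tile aggregation could be parallelised, MapReduce-style, one partition at a time.

      from collections import defaultdict

      TILE = 10.0                      # grid cell size; an arbitrary choice for this sketch

      def tile_of(x, y):
          """Space-based partitioning: map a point to its grid tile."""
          return (int(x // TILE), int(y // TILE))

      points = [(3.2, 7.9), (12.5, 4.1), (13.0, 6.6), (25.4, 18.3)]

      # "Map" step: group points by tile; each tile could be processed independently.
      partitions = defaultdict(list)
      for x, y in points:
          partitions[tile_of(x, y)].append((x, y))

      # "Reduce" step: a simple per-tile aggregate (point count).
      print({tile: len(pts) for tile, pts in partitions.items()})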

  11. Data warehousing as a tool for quality management in oncology.

    PubMed

    Hölzer, S; Tafazzoli, A G; Altmann, U; Wächter, W; Dudeck, J

    1999-01-01

    At present, physicians are constrained by their limited ability to integrate and understand the growing amount of electronic medical information. To handle, extract, integrate, analyse and take advantage of the gathered information regarding the quality of patient care, the concept of a data warehouse seems especially interesting in medicine. Medical data warehousing allows physicians to take advantage of all the operational data they have been collecting over the years. Our purpose is to build a data warehouse in order to use all available information about cancer patients. We think that with the sensible use of this tool, there are economic benefits for society and an improvement in the quality of medical care for patients.

  12. 49 CFR 33.20 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... warehousing, ports, services, equipment and facilities, such as transportation carrier shop and repair.... Health resources means drugs, biological products, medical devices, materials, facilities, health...

  13. 49 CFR 33.20 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... warehousing, ports, services, equipment and facilities, such as transportation carrier shop and repair.... Health resources means drugs, biological products, medical devices, materials, facilities, health...

  14. 49 CFR 33.20 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... warehousing, ports, services, equipment and facilities, such as transportation carrier shop and repair.... Health resources means drugs, biological products, medical devices, materials, facilities, health...

  15. Demonstration of Hadoop-GIS: A Spatial Data Warehousing System Over MapReduce

    PubMed Central

    Aji, Ablimit; Sun, Xiling; Vo, Hoang; Liu, Qioaling; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel; Wang, Fusheng

    2016-01-01

    The proliferation of GPS-enabled devices and the rapid improvement of scientific instruments have resulted in massive amounts of spatial data in the last decade. Support of high performance spatial queries on large volumes of data has become increasingly important in numerous fields, which requires a scalable and efficient spatial data warehousing solution, as existing approaches exhibit scalability limitations and efficiency bottlenecks for large scale spatial applications. In this demonstration, we present Hadoop-GIS – a scalable and high performance spatial query system over MapReduce. Hadoop-GIS provides an efficient spatial query engine to process spatial queries, data and space based partitioning, and query pipelines that parallelize queries implicitly on MapReduce. Hadoop-GIS also provides an expressive, SQL-like spatial query language for workload specification. We will demonstrate how spatial queries are expressed in spatially extended SQL queries, and submitted through a command line/web interface for execution. Parallel to our system demonstration, we explain the system architecture and details on how queries are translated to MapReduce operators, optimized, and executed on Hadoop. In addition, we will showcase how the system can be used to support two representative real world use cases: large scale pathology analytical imaging, and geo-spatial data warehousing. PMID:27617325

  16. 7 CFR 735.3 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., optical, or similar means, including, but not limited to, electronic data interchange, advanced... lawfully engaged in the business of storing or handling agricultural products. Warehousing activities and...

  17. 7 CFR 735.3 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., optical, or similar means, including, but not limited to, electronic data interchange, advanced... lawfully engaged in the business of storing or handling agricultural products. Warehousing activities and...

  18. The design of the automated control system for warehouse equipment under radio-electronic manufacturing

    NASA Astrophysics Data System (ADS)

    Kapulin, D. V.; Chemidov, I. V.; Kazantsev, M. A.

    2017-01-01

    In the paper, the aspects of design, development and implementation of an automated control system for warehousing within the manufacturing process of the radio-electronic enterprise JSC «Radiosvyaz» are discussed. The architecture of the automated warehouse control system proposed in the paper consists of a server connected to two physically separated information networks: the network with a database server, which stores information about the orders for picking, and the network with the automated storage and retrieval system. This principle satisfies the requirements for differentiation of access and ensures that information safety and security requirements are met. The efficiency of the developed automated solutions in terms of optimizing the warehouse's logistic characteristics is also evaluated.

  19. 29 CFR 697.1 - Industry definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... and transportation. This industry shall include the transportation of passengers and cargo by water or... streets, catchments, dams, and any other structure. (f) Retailing, wholesaling and warehousing. This...

  20. 29 CFR 697.1 - Industry definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... and transportation. This industry shall include the transportation of passengers and cargo by water or... streets, catchments, dams, and any other structure. (f) Retailing, wholesaling and warehousing. This...

  1. 29 CFR 697.1 - Industry definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... and transportation. This industry shall include the transportation of passengers and cargo by water or... streets, catchments, dams, and any other structure. (f) Retailing, wholesaling and warehousing. This...

  2. Multiagent data warehousing and multiagent data mining for cerebrum/cerebellum modeling

    NASA Astrophysics Data System (ADS)

    Zhang, Wen-Ran

    2002-03-01

    An algorithm named Neighbor-Miner is outlined for multiagent data warehousing and multiagent data mining. The algorithm is defined in an evolving dynamic environment with autonomous or semiautonomous agents. Instead of mining frequent itemsets from customer transactions, the new algorithm discovers new agents and mines agent associations in first-order logic from agent attributes and actions. While the Apriori algorithm uses frequency as an a priori threshold, the new algorithm uses agent similarity as a priori knowledge. The concept of agent similarity leads to the notions of agent cuboid, orthogonal multiagent data warehousing (MADWH), and multiagent data mining (MADM). Based on agent similarities and action similarities, Neighbor-Miner is proposed and illustrated in a MADWH/MADM approach to cerebrum/cerebellum modeling. It is shown that (1) semiautonomous neurofuzzy agents can be identified for uniped locomotion and gymnastic training based on attribute relevance analysis; (2) new agents can be discovered and agent cuboids can be dynamically constructed in an orthogonal MADWH, which resembles an evolving cerebrum/cerebellum system; and (3) dynamic motion laws can be discovered as association rules in first order logic. Although examples in legged robot gymnastics are used to illustrate the basic ideas, the new approach is generally suitable for a broad category of data mining tasks where knowledge can be discovered collectively by a set of agents from a geographically or geometrically distributed but relevant environment, especially in scientific and engineering data environments.
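
    The record describes using agent similarity, rather than an itemset frequency threshold, as the a priori knowledge for grouping agents. The sketch below shows only that generic similarity step (cosine similarity over attribute vectors with an assumed threshold); it is not the Neighbor-Miner algorithm itself.

      import math

      def cosine(u, v):
          dot = sum(a * b for a, b in zip(u, v))
          norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
          return dot / norm if norm else 0.0

      # Hypothetical agent attribute vectors (e.g. joint angle / action features).
      agents = {"hip": [0.9, 0.1, 0.4], "knee": [0.8, 0.2, 0.5], "wrist": [0.1, 0.9, 0.2]}
      THRESHOLD = 0.9   # assumed similarity threshold playing the role Apriori gives to frequency

      names = list(agents)
      pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
               if cosine(agents[a], agents[b]) >= THRESHOLD]
      print(pairs)      # candidate neighbouring agents to group into a cuboid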

  3. 10 CFR 217.20 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... warehousing, ports, services, equipment and facilities, such as transportation carrier shop and repair... from: (1) A natural disaster; or (2) An accidental or human-caused event. Health resources means drugs...

  4. 10 CFR 217.20 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... warehousing, ports, services, equipment and facilities, such as transportation carrier shop and repair... from: (1) A natural disaster; or (2) An accidental or human-caused event. Health resources means drugs...

  5. 10 CFR 217.20 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... warehousing, ports, services, equipment and facilities, such as transportation carrier shop and repair... from: (1) A natural disaster; or (2) An accidental or human-caused event. Health resources means drugs...

  6. Transportation Annual Survey 1998

    DOT National Transportation Integrated Search

    1999-12-22

    The purpose of this annual survey is to provide national estimates of revenue, expenses, and vehicle fleet inventories for commercial motor freight transportation and public warehousing service industries. The United States Code, Title 13, authorizes...

  7. 76 FR 27935 - Small Business Size Standards: Transportation and Warehousing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-13

    ... within NAICS Industry Group 4883 (Support Activities for Water Transportation), in this rule, SBA also... for Water 488310, 488320, 48830, 488390. Transportation. 4884 Support Activities for Road 488410... 27936

  8. 19 CFR 151.51 - Sampling requirements.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... warehousing at the port of first arrival, they shall be sampled for assay and moisture purposes in accordance... of any available sample, knowledge of prior importations of similar materials, and other data, the...

  9. 1992 census of transportation, communications, and utilities : subject series : miscellaneous subjects

    DOT National Transportation Integrated Search

    1996-08-01

    Provides statistics on sources of revenue from passenger, motor freight, water, and air transportation as well as pipelines, marinas, public warehousing, travel agencies, cable, radio, telephone, electric utilities, and construction.

  10. RMP Guidance for Warehouses - Appendix C: Technical Assistance

    EPA Pesticide Factsheets

    Resources to assist warehousing facilities in complying include the Chemical Emergency Preparedness and Prevention Office website, EPCRA/Superfund/RCRA/CAA hotline, OSHA publications and training program, and Institute of Chemical Engineers publications.

  11. RMP Guidance for Chemical Distributors - Appendix C: Technical Assistance

    EPA Pesticide Factsheets

    Resources available to assist warehousing facilities include the Office of Emergency Management website, EPCRA/Superfund/RCRA/CAA hotline, OSHA website and documents and training program, and American Institute of Chemical Engineers publications.

  12. Remediation System Evaluation, FCX Statesville Superfund Site

    EPA Pesticide Factsheets

    The FCX property was an agriculture distribution center that formulated, repackaged, and warehoused pesticides and fertilizers. The former Burlington Industries property to the north and upgradient of the FCX property was a textile facility.

  13. Data warehousing in disease management programs.

    PubMed

    Ramick, D C

    2001-01-01

    Disease management programs offer the benefits of lower disease occurrence, improved patient care, and lower healthcare costs. In such programs, the key mechanism used to identify individuals at risk for targeted diseases is the data warehouse. This article surveys recent warehousing techniques from HMOs to map out critical issues relating to the preparation, design, and implementation of a successful data warehouse. Discussions of scope, data cleansing, and storage management illustrate warehouse preparation and design; data implementation options are contrasted. Examples are provided of data warehouse execution in disease management programs that identify members with preexisting illnesses, as well as those exhibiting high-risk conditions. The proper deployment of successful data warehouses in disease management programs benefits both the organization and the member: organizations benefit from decreased medical costs, and members benefit from an improved quality of life through disease-specific care.

  14. Cancer surveillance using data warehousing, data mining, and decision support systems.

    PubMed

    Forgionne, G A; Gangopadhyay, A; Adya, M

    2000-08-01

    This article discusses how data warehousing, data mining, and decision support systems can reduce the national cancer burden or the oral complications of cancer therapies, especially as related to oral and pharyngeal cancers. An information system is presented that will deliver the necessary information technology to clinical, administrative, and policy researchers and analysts in an effective and efficient manner. The system will deliver the technology and knowledge that users need to readily: (1) organize relevant claims data, (2) detect cancer patterns in general and special populations, (3) formulate models that explain the patterns, and (4) evaluate the efficacy of specified treatments and interventions with the formulations. Such a system can be developed through a proven adaptive design strategy, and the implemented system can be tested on State of Maryland Medicaid data (which includes women, minorities, and children).
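
    The four user tasks listed above can be sketched as a simple analysis pipeline over a toy claims table. The sketch below is a hypothetical outline in Python/pandas, not the system described in the article; its column names, "pattern" definition, and efficacy comparison are invented for illustration.

    ```python
    # Toy walk-through of the four steps: organize claims, detect patterns,
    # formulate a model, evaluate treatment efficacy. All data and column
    # names are invented for illustration.
    import pandas as pd

    claims = pd.DataFrame({
        "patient_id":  [1, 2, 3, 4],
        "population":  ["general", "special", "special", "general"],
        "oral_cancer": [0, 1, 1, 0],
        "treatment":   ["A", "A", "B", "B"],
        "outcome":     [0.8, 0.6, 0.4, 0.9],
    })

    # (1) organize relevant claims data
    organized = claims.sort_values("patient_id")

    # (2) detect cancer patterns in general and special populations
    pattern = organized.groupby("population")["oral_cancer"].mean()

    # (3) formulate a (trivial) model that summarizes the pattern
    model = {"special_rate": pattern["special"], "general_rate": pattern["general"]}

    # (4) evaluate the efficacy of specified treatments with the formulation
    efficacy = organized.groupby("treatment")["outcome"].mean()

    print(pattern, model, efficacy, sep="\n")
    ```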

  15. A similarity-based data warehousing environment for medical images.

    PubMed

    Teixeira, Jefferson William; Annibal, Luana Peixoto; Felipe, Joaquim Cezar; Ciferri, Ricardo Rodrigues; Ciferri, Cristina Dutra de Aguiar

    2015-11-01

    A core issue of the decision-making process in the medical field is to support the execution of analytical (OLAP) similarity queries over images in data warehousing environments. In this paper, we focus on this issue. We propose imageDWE, a non-conventional data warehousing environment that enables the storage of intrinsic features taken from medical images in a data warehouse and supports OLAP similarity queries over them. To comply with this goal, we introduce the concept of perceptual layer, which is an abstraction used to represent an image dataset according to a given feature descriptor in order to enable similarity search. Based on this concept, we propose the imageDW, an extended data warehouse with dimension tables specifically designed to support one or more perceptual layers. We also detail how to build an imageDW and how to load image data into it. Furthermore, we show how to process OLAP similarity queries composed of a conventional predicate and a similarity search predicate that encompasses the specification of one or more perceptual layers. Moreover, we introduce an index technique to improve the OLAP query processing over images. We carried out performance tests over a data warehouse environment that consolidated medical images from exams of several modalities. The results demonstrated the feasibility and efficiency of our proposed imageDWE to manage images and to process OLAP similarity queries. The results also demonstrated that the use of the proposed index technique guaranteed a great improvement in query processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
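
    A minimal sketch of the kind of OLAP similarity query described above, combining a conventional predicate (exam modality) with a similarity predicate evaluated over a perceptual layer (one feature vector per image). The in-memory fact rows, Euclidean metric, and query radius are assumptions for illustration, not the imageDWE implementation.

    ```python
    # Combine a conventional predicate with a similarity (range) predicate
    # over per-image feature vectors. Schema, metric, and radius are
    # hypothetical.
    import math

    # Hypothetical fact rows: (image_id, modality, feature_vector)
    image_facts = [
        (1, "MRI", [0.12, 0.80, 0.33]),
        (2, "MRI", [0.10, 0.78, 0.30]),
        (3, "CT",  [0.90, 0.05, 0.41]),
    ]

    def euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def similarity_query(query_vec, modality, radius):
        """Conventional predicate (modality) AND similarity predicate (range)."""
        return [img_id for img_id, mod, vec in image_facts
                if mod == modality and euclidean(vec, query_vec) <= radius]

    print(similarity_query([0.11, 0.79, 0.31], modality="MRI", radius=0.05))
    # [1, 2]
    ```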

  16. This Issue: Correlates of a Defective School.

    ERIC Educational Resources Information Center

    Gilman, David Alan

    1992-01-01

    Describes correlates of defective schools: perks for very few; faulty communication; adult-centered programs; special interest group indulgence; poor professional relationships; personnel warehousing; incompetent consultants; literal interpretation of technicalities; imperial leadership; intimate relationships among personnel; incoherent…

  17. Warehousing Structured and Unstructured Data for Data Mining.

    ERIC Educational Resources Information Center

    Miller, L. L.; Honavar, Vasant; Barta, Tom

    1997-01-01

    Describes an extensible object-oriented view system that supports the integration of both structured and unstructured data sources in either the multidatabase or data warehouse environment. Discusses related work and data mining issues. (AEF)

  18. Use These Seven Checklists to Maintain Firm Control over Business Procedures.

    ERIC Educational Resources Information Center

    Scebra, J. Boyd

    1983-01-01

    Checklist for evaluating school management covers (1) budgeting, revenues, expenditures; (2) accounting and payroll; (3) purchasing and warehousing; (4) debts and captial outlay; (5) insurance; (6) property control; and (7) school activity funds. (JBM)

  19. The Impact of Technology on Users and the Workplace.

    ERIC Educational Resources Information Center

    Chan, Susy S.

    1999-01-01

    Identifies four trends in corporate information technology and applies them to the academic workplace and institutional research. Trends are (1) knowledge management, (2) enterprise resource planning, (3) data warehousing, and (4) electronic commerce. (Author/DB)

  20. 48 CFR 247.301-70 - Definition.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... logistics services. Some examples of logistics services are the management of transportation, demand forecasting, information management, inventory maintenance, warehousing, and distribution. [65 FR 50145, Aug..., DEPARTMENT OF DEFENSE CONTRACT MANAGEMENT TRANSPORTATION Transportation in Supply Contracts 247.301-70...

  1. 48 CFR 247.301-70 - Definition.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... logistics services. Some examples of logistics services are the management of transportation, demand forecasting, information management, inventory maintenance, warehousing, and distribution. [65 FR 50145, Aug..., DEPARTMENT OF DEFENSE CONTRACT MANAGEMENT TRANSPORTATION Transportation in Supply Contracts 247.301-70...

  2. 48 CFR 247.301-70 - Definition.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... logistics services. Some examples of logistics services are the management of transportation, demand forecasting, information management, inventory maintenance, warehousing, and distribution. [65 FR 50145, Aug..., DEPARTMENT OF DEFENSE CONTRACT MANAGEMENT TRANSPORTATION Transportation in Supply Contracts 247.301-70...

  3. 48 CFR 247.301-70 - Definition.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... logistics services. Some examples of logistics services are the management of transportation, demand forecasting, information management, inventory maintenance, warehousing, and distribution. [65 FR 50145, Aug..., DEPARTMENT OF DEFENSE CONTRACT MANAGEMENT TRANSPORTATION Transportation in Supply Contracts 247.301-70...

  4. 48 CFR 247.301-70 - Definition.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... logistics services. Some examples of logistics services are the management of transportation, demand forecasting, information management, inventory maintenance, warehousing, and distribution. [65 FR 50145, Aug..., DEPARTMENT OF DEFENSE CONTRACT MANAGEMENT TRANSPORTATION Transportation in Supply Contracts 247.301-70...

  5. Data warehousing methods and processing infrastructure for brain recovery research.

    PubMed

    Gee, T; Kenny, S; Price, C J; Seghier, M L; Small, S L; Leff, A P; Pacurar, A; Strother, S C

    2010-09-01

    In order to accelerate translational neuroscience with the goal of improving clinical care it has become important to support rapid accumulation and analysis of large, heterogeneous neuroimaging samples and their metadata from both normal control and patient groups. We propose a multi-centre, multinational approach to accelerate the data mining of large samples and facilitate data-led clinical translation of neuroimaging results in stroke. Such data-driven approaches are likely to have an early impact on clinically relevant brain recovery while we simultaneously pursue the much more challenging model-based approaches that depend on a deep understanding of the complex neural circuitry and physiological processes that support brain function and recovery. We present a brief overview of three (potentially converging) approaches to neuroimaging data warehousing and processing that aim to support these diverse methods for facilitating prediction of cognitive and behavioral recovery after stroke, or other types of brain injury or disease.

  6. Metadata-Driven SOA-Based Application for Facilitation of Real-Time Data Warehousing

    NASA Astrophysics Data System (ADS)

    Pintar, Damir; Vranić, Mihaela; Skočir, Zoran

    Service-oriented architecture (SOA) has already been widely recognized as an effective paradigm for achieving integration of diverse information systems. SOA-based applications can cross boundaries of platforms, operating systems and proprietary data standards, commonly through the use of Web Services technology. On the other hand, metadata is also commonly cited as a potential integration tool, since standardized metadata objects can provide useful information about the specifics of unknown information systems with which one wishes to communicate, an approach commonly called "model-based integration". This paper presents the results of research on the possible synergy between these two integration facilitators. This is accomplished with a vertical example of a metadata-driven SOA-based business process that provides ETL (Extraction, Transformation and Loading) and metadata services to a data warehousing system in need of real-time ETL support.
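
    The sketch below illustrates the general metadata-driven ETL idea under stated assumptions: a standardized metadata record describes an otherwise unknown source system and drives generic extract, transform and load steps. The metadata fields, mapping rules, and in-memory warehouse are hypothetical; the paper's system exposes such steps as Web Services rather than local functions.

    ```python
    # A metadata record describes the source; generic ETL code interprets it.
    # All names and rules are invented for illustration.
    source_metadata = {
        "source_name": "billing_system_A",
        "fields": {"CUST": "customer_id", "AMT": "amount_eur"},
        "transform": {"amount_eur": lambda v: round(float(v), 2)},
    }

    def extract(raw_rows):
        return raw_rows  # stand-in for a Web Service call to the source system

    def transform(rows, meta):
        out = []
        for row in rows:
            mapped = {meta["fields"][k]: v for k, v in row.items() if k in meta["fields"]}
            for col, fn in meta["transform"].items():
                mapped[col] = fn(mapped[col])
            out.append(mapped)
        return out

    def load(rows, warehouse):
        warehouse.extend(rows)

    warehouse = []
    raw = [{"CUST": "42", "AMT": "107.50"}]
    load(transform(extract(raw), source_metadata), warehouse)
    print(warehouse)  # [{'customer_id': '42', 'amount_eur': 107.5}]
    ```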

  7. Sting_RDB: a relational database of structural parameters for protein analysis with support for data warehousing and data mining.

    PubMed

    Oliveira, S R M; Almeida, G V; Souza, K R R; Rodrigues, D N; Kuser-Falcão, P R; Yamagishi, M E B; Santos, E H; Vieira, F D; Jardine, J G; Neshich, G

    2007-10-05

    An effective strategy for managing protein databases is to provide mechanisms to transform raw data into consistent, accurate and reliable information. Such mechanisms will greatly reduce operational inefficiencies and improve one's ability to better handle scientific objectives and interpret the research results. To achieve this challenging goal for the STING project, we introduce Sting_RDB, a relational database of structural parameters for protein analysis with support for data warehousing and data mining. In this article, we highlight the main features of Sting_RDB and show how a user can explore it for efficient and biologically relevant queries. Considering its importance for molecular biologists, effort has been made to advance Sting_RDB toward data quality assessment. To the best of our knowledge, Sting_RDB is one of the most comprehensive data repositories for protein analysis, now also capable of providing its users with a data quality indicator. This paper differs from our previous study in many aspects. First, we introduce Sting_RDB, a relational database with mechanisms for efficient and relevant queries using SQL. Sting_RDB evolved from the earlier, text (flat file)-based database, in which data consistency and integrity were not guaranteed. Second, we provide support for data warehousing and mining. Third, the data quality indicator was introduced. Finally, and probably most importantly, complex queries that could not be posed on a text-based database are now easily implemented. Further details are accessible at the Sting_RDB demo web page: http://www.cbi.cnptia.embrapa.br/StingRDB.
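
    As an illustration of the kind of SQL query such a relational design enables, the sketch below builds a small in-memory table and filters residues on two structural parameters. The table name, columns, and values are hypothetical stand-ins, not the actual Sting_RDB schema.

    ```python
    # Hypothetical structural-parameter table queried with SQL (sqlite3).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE residue_params (
            pdb_id TEXT, chain TEXT, residue_no INTEGER,
            accessibility REAL, curvature REAL
        )
    """)
    conn.executemany(
        "INSERT INTO residue_params VALUES (?, ?, ?, ?, ?)",
        [("1ABC", "A", 10, 35.2, 0.12),
         ("1ABC", "A", 11, 2.1, 0.45),
         ("2XYZ", "B", 5, 60.8, 0.05)],
    )

    # The kind of multi-parameter filter a flat text file would not support well:
    rows = conn.execute("""
        SELECT pdb_id, chain, residue_no
        FROM residue_params
        WHERE accessibility > 30 AND curvature < 0.2
        ORDER BY pdb_id
    """).fetchall()
    print(rows)  # [('1ABC', 'A', 10), ('2XYZ', 'B', 5)]
    ```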

  8. 76 FR 67132 - Foreign-Trade Zone 177-Evansville, IN; Application for Manufacturing Authority; Hoosier Stamping...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-31

    ... within Site 8 of FTZ 177. The facility is used for the manufacturing, testing, warehousing, packaging...: pneumatic tires, tubes, rolled rim rings, semi-pneumatic tires, herring-bone tires, welding wires and bolts...

  9. Inland empire logistics GIS mapping project.

    DOT National Transportation Integrated Search

    2009-01-01

    The Inland Empire has experienced exponential growth in the area of warehousing and distribution facilities within the last decade and it seems that it will continue way into the future. Where are these facilities located? How large are the facilitie...

  10. 26 CFR 1.501(e)-1 - Cooperative hospital service organizations.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... illustrated by the following example. Example. An organization performs industrial engineering services on a...-hospitals), warehousing, billing and collection, food, clinical (including radiology), industrial engineering (including the installation, maintenance and repair of biomedical and similar equipment...

  11. 26 CFR 1.501(e)-1 - Cooperative hospital service organizations.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... illustrated by the following example. Example. An organization performs industrial engineering services on a...-hospitals), warehousing, billing and collection, food, clinical (including radiology), industrial engineering (including the installation, maintenance and repair of biomedical and similar equipment...

  12. 26 CFR 1.501(e)-1 - Cooperative hospital service organizations.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... illustrated by the following example. Example. An organization performs industrial engineering services on a...-hospitals), warehousing, billing and collection, food, clinical (including radiology), industrial engineering (including the installation, maintenance and repair of biomedical and similar equipment...

  13. 26 CFR 1.501(e)-1 - Cooperative hospital service organizations.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... illustrated by the following example. Example. An organization performs industrial engineering services on a...-hospitals), warehousing, billing and collection, food, clinical (including radiology), industrial engineering (including the installation, maintenance and repair of biomedical and similar equipment...

  14. 26 CFR 1.501(e)-1 - Cooperative hospital service organizations.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... illustrated by the following example. Example. An organization performs industrial engineering services on a...-hospitals), warehousing, billing and collection, food, clinical (including radiology), industrial engineering (including the installation, maintenance and repair of biomedical and similar equipment...

  15. Design and Data Management System at Johnson Space Center, Houston, Texas

    NASA Technical Reports Server (NTRS)

    Aronoff, Raymond

    2002-01-01

    The Design and Data Management System (DDMS) Project is a cooperative effort between Engineering and Information Systems whose overall objective is to move toward an integrated approach to collecting, managing, warehousing and accessing its engineering design data.

  16. Automated Information System for School Food Services.

    ERIC Educational Resources Information Center

    Hazarika, Panna; Galligan, Stephen

    1982-01-01

    Controlling warehousing operations and food inventory, administering school cafeteria activity, and measuring the profitability of food service operations are identified as food service administrative problems. A comprehensive school food services information system developed to address these problems is described. (Author/MLF)

  17. Chemical Safety Alert: Identifying Chemical Reactivity Hazards Preliminary Screening Method

    EPA Pesticide Factsheets

    Introduces small-to-medium-sized facilities to a method developed by Center for Chemical Process Safety (CCPS), based on a series of twelve yes-or-no questions to help determine hazards in warehousing, repackaging, blending, mixing, and processing.

  18. 12 CFR 614.4170 - General.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... lender. Procedures shall require: (i) The procurement of periodic operating data essential for maintaining control, for the proper analysis of such data, and prompt action as needed; (ii) Inspections... insurance, margin requirements, warehousing, and the prompt exercise of legal options to preserve the lender...

  19. 12 CFR 614.4170 - General.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... lender. Procedures shall require: (i) The procurement of periodic operating data essential for maintaining control, for the proper analysis of such data, and prompt action as needed; (ii) Inspections... insurance, margin requirements, warehousing, and the prompt exercise of legal options to preserve the lender...

  20. 7 CFR 251.10 - Miscellaneous provisions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... program at the State and local level. State and local costs must be identified separately. The data must... warehousing practices, inventory controls, approval of distribution sites, reporting and recordkeeping... in which the maintenance-of-effort requirement became effective, whichever is later. (i) Data...

  1. 7 CFR 251.10 - Miscellaneous provisions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... program at the State and local level. State and local costs must be identified separately. The data must... warehousing practices, inventory controls, approval of distribution sites, reporting and recordkeeping... in which the maintenance-of-effort requirement became effective, whichever is later. (i) Data...

  2. 7 CFR 251.10 - Miscellaneous provisions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... program at the State and local level. State and local costs must be identified separately. The data must... warehousing practices, inventory controls, approval of distribution sites, reporting and recordkeeping... in which the maintenance-of-effort requirement became effective, whichever is later. (i) Data...

  3. 7 CFR 251.10 - Miscellaneous provisions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... program at the State and local level. State and local costs must be identified separately. The data must... warehousing practices, inventory controls, approval of distribution sites, reporting and recordkeeping... in which the maintenance-of-effort requirement became effective, whichever is later. (i) Data...

  4. 12 CFR 614.4170 - General.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... lender. Procedures shall require: (i) The procurement of periodic operating data essential for maintaining control, for the proper analysis of such data, and prompt action as needed; (ii) Inspections... insurance, margin requirements, warehousing, and the prompt exercise of legal options to preserve the lender...

  5. 12 CFR 614.4170 - General.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... lender. Procedures shall require: (i) The procurement of periodic operating data essential for maintaining control, for the proper analysis of such data, and prompt action as needed; (ii) Inspections... insurance, margin requirements, warehousing, and the prompt exercise of legal options to preserve the lender...

  6. 78 FR 43118 - Allegations of Anticompetitive Behavior in Satellite Industry

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-19

    ... Inquiry. SUMMARY: The Federal Communications Commission (Commission) seeks comment on whether, and, if so... Commission seeks comment on whether FSS operators are warehousing satellite orbital locations and frequency assignments, and preventing competitors from purchasing capacity on incumbent-owned satellites. DATES...

  7. Data Mining.

    ERIC Educational Resources Information Center

    Benoit, Gerald

    2002-01-01

    Discusses data mining (DM) and knowledge discovery in databases (KDD), taking the view that KDD is the larger view of the entire process, with DM emphasizing the cleaning, warehousing, mining, and visualization of knowledge discovery in databases. Highlights include algorithms; users; the Internet; text mining; and information extraction.…

  8. A preliminary demonstration of "virtual warehousing" and cross-docking technique with active RFID combined with asset tracking equipment.

    DOT National Transportation Integrated Search

    2010-11-01

    The University of Denver's Intermodal Transportation Institute and System Planning Corporation's GlobalTrak system have successfully demonstrated the integration of GPS tracking and active RFID monitoring of simulated cargo of pallet and cart...

  9. 47 CFR 52.105 - Warehousing.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... the Service Management System database without having an actual toll free subscriber for whom those... database; or (2) The Responsible Organization does not have an identified toll free subscriber agreeing to... database shall serve as that Responsible Organization's certification that there is an identified toll free...

  10. 47 CFR 52.105 - Warehousing.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... the Service Management System database without having an actual toll free subscriber for whom those... database; or (2) The Responsible Organization does not have an identified toll free subscriber agreeing to... database shall serve as that Responsible Organization's certification that there is an identified toll free...

  11. 47 CFR 52.105 - Warehousing.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... the Service Management System database without having an actual toll free subscriber for whom those... database; or (2) The Responsible Organization does not have an identified toll free subscriber agreeing to... database shall serve as that Responsible Organization's certification that there is an identified toll free...

  12. 47 CFR 52.105 - Warehousing.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the Service Management System database without having an actual toll free subscriber for whom those... database; or (2) The Responsible Organization does not have an identified toll free subscriber agreeing to... database shall serve as that Responsible Organization's certification that there is an identified toll free...

  13. 47 CFR 52.105 - Warehousing.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... the Service Management System database without having an actual toll free subscriber for whom those... database; or (2) The Responsible Organization does not have an identified toll free subscriber agreeing to... database shall serve as that Responsible Organization's certification that there is an identified toll free...

  14. The prevalence of short sleep duration by industry and occupation in the National Health Interview Survey.

    PubMed

    Luckhaupt, Sara E; Tak, SangWoo; Calvert, Geoffrey M

    2010-02-01

    To explore whether employment in industries likely to have non-standard work schedules (e.g., manufacturing and service) and occupations with long work-weeks (e.g., managerial/professional, sales, and transportation) is associated with an increased risk of short sleep duration. Cross-sectional epidemiologic survey. Household-based face-to-face survey of civilian, non-institutionalized US residents. Sample adults interviewed for the National Health Interview Survey in 1985 or 1990 (N = 74,734) or between 2004 and 2007 (N = 110,422). Most analyses focused on civilian employed workers interviewed between 2004 and 2007 (N = 66,099). N/A. The weighted prevalence of self-reported short sleep duration, defined as ≤ 6 h per day, among civilian employed workers from 2004-2007 was 29.9%. Among industry categories, the prevalence of short sleep duration was greatest for management of companies and enterprises (40.5%), followed by transportation/warehousing (37.1%) and manufacturing (34.8%). Occupational categories with the highest prevalence included production occupations in the transportation/warehousing industry, and installation, maintenance, and repair occupations in both the transportation/warehousing industry and the manufacturing industry. In the combined sample from 1985 and 1990, 24.2% of workers reported short sleep duration; the prevalence of short sleep duration was significantly lower during this earlier time period compared to 2004-2007 for 7 of 8 industrial sectors. Self-reported short sleep duration among US workers varies by industry and occupation, and has increased over the past two decades. These findings suggest the need for further exploration of the relationship between work and sleep, and development of targeted interventions for specific industry/occupation groups.
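
    The industry figures above are survey-weighted prevalences; the sketch below shows the basic calculation (weighted share of workers reporting ≤ 6 h of sleep, by industry) on invented records and weights. A real NHIS analysis would also account for the complex sample design when estimating variances.

    ```python
    # Weighted prevalence of short sleep (<= 6 h) by industry; data invented.
    from collections import defaultdict

    # (industry, sleep_hours, survey_weight)
    workers = [
        ("transportation/warehousing", 6, 1200.0),
        ("transportation/warehousing", 8, 900.0),
        ("manufacturing", 5, 1100.0),
        ("manufacturing", 7, 1300.0),
    ]

    def weighted_prevalence(records, cutoff=6):
        num = defaultdict(float)
        den = defaultdict(float)
        for industry, hours, weight in records:
            den[industry] += weight
            if hours <= cutoff:
                num[industry] += weight
        return {industry: num[industry] / den[industry] for industry in den}

    print(weighted_prevalence(workers))
    # approx. {'transportation/warehousing': 0.571, 'manufacturing': 0.458}
    ```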

  15. A Survey Report of School Plant Management for Escambia County, Florida.

    ERIC Educational Resources Information Center

    Florida State Dept. of Education, Tallahassee.

    This report analyzes data collected by survey teams concerned with maintenance and operation of school plants in relation to organization, administration, budgeting, expenditures, purchasing, staffing, warehousing and distribution, maintenance shops, administrative practices, performance standards, and efficiency. The basic purposes of a…

  16. 27 CFR 18.11 - Meaning of terms.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... under 27 CFR part 19, excluding alcohol fuel plants, for producing, warehousing, or processing distilled... unfermented mixture of juice, pulp, skins, and seeds prepared from fruit, berries, or grapes. High-proof..., exclusive of pulp, skins, or seeds. Person. An individual, trust, estate, partnership, association, company...

  17. 27 CFR 18.11 - Meaning of terms.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... under 27 CFR part 19, excluding alcohol fuel plants, for producing, warehousing, or processing distilled... unfermented mixture of juice, pulp, skins, and seeds prepared from fruit, berries, or grapes. High-proof..., exclusive of pulp, skins, or seeds. Person. An individual, trust, estate, partnership, association, company...

  18. 27 CFR 18.11 - Meaning of terms.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... under 27 CFR part 19, excluding alcohol fuel plants, for producing, warehousing, or processing distilled... unfermented mixture of juice, pulp, skins, and seeds prepared from fruit, berries, or grapes. High-proof..., exclusive of pulp, skins, or seeds. Person. An individual, trust, estate, partnership, association, company...

  19. 27 CFR 18.11 - Meaning of terms.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... under 27 CFR part 19, excluding alcohol fuel plants, for producing, warehousing, or processing distilled... unfermented mixture of juice, pulp, skins, and seeds prepared from fruit, berries, or grapes. High-proof..., exclusive of pulp, skins, or seeds. Person. An individual, trust, estate, partnership, association, company...

  20. 27 CFR 18.11 - Meaning of terms.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... under 27 CFR part 19, excluding alcohol fuel plants, for producing, warehousing, or processing distilled... unfermented mixture of juice, pulp, skins, and seeds prepared from fruit, berries, or grapes. High-proof..., exclusive of pulp, skins, or seeds. Person. An individual, trust, estate, partnership, association, company...

  1. Development of a statewide transportation data warehousing and mining system under the louisiana transportation information system (LATIS) program.

    DOT National Transportation Integrated Search

    2008-06-01

    More jurisdictions including states and metropolitan areas are establishing traffic management centers to assist in reducing congestion. To a lesser extent, these centers are helpful in providing information that assists engineers in making such adju...

  2. 41 CFR 101-30.703 - Program objectives.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Regulations System FEDERAL PROPERTY MANAGEMENT REGULATIONS SUPPLY AND PROCUREMENT 30-FEDERAL CATALOG SYSTEM 30... the Federal catalog system data base; and (e) Phasing out of the Government supply system those items... management, and warehousing costs; then following through to eliminate the items from agency catalog systems...

  3. 41 CFR 101-30.703 - Program objectives.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Regulations System FEDERAL PROPERTY MANAGEMENT REGULATIONS SUPPLY AND PROCUREMENT 30-FEDERAL CATALOG SYSTEM 30... the Federal catalog system data base; and (e) Phasing out of the Government supply system those items... management, and warehousing costs; then following through to eliminate the items from agency catalog systems...

  4. 41 CFR 101-30.703 - Program objectives.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Regulations System FEDERAL PROPERTY MANAGEMENT REGULATIONS SUPPLY AND PROCUREMENT 30-FEDERAL CATALOG SYSTEM 30... the Federal catalog system data base; and (e) Phasing out of the Government supply system those items... management, and warehousing costs; then following through to eliminate the items from agency catalog systems...

  5. 41 CFR 101-30.703 - Program objectives.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Regulations System FEDERAL PROPERTY MANAGEMENT REGULATIONS SUPPLY AND PROCUREMENT 30-FEDERAL CATALOG SYSTEM 30... the Federal catalog system data base; and (e) Phasing out of the Government supply system those items... management, and warehousing costs; then following through to eliminate the items from agency catalog systems...

  6. 41 CFR 101-30.703 - Program objectives.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Regulations System FEDERAL PROPERTY MANAGEMENT REGULATIONS SUPPLY AND PROCUREMENT 30-FEDERAL CATALOG SYSTEM 30... the Federal catalog system data base; and (e) Phasing out of the Government supply system those items... management, and warehousing costs; then following through to eliminate the items from agency catalog systems...

  7. Computers Help Technicians Become Managers.

    ERIC Educational Resources Information Center

    Instructional Innovator, 1984

    1984-01-01

    Briefly describes the Academy of Advanced Traffic's use of the Numerax electronic tariff library in financial management, business logistics management, and warehousing courses to familiarize future traffic managers with time saving computer-based information systems that will free them to become integral members of their company's decision-making…

  8. 77 FR 74186 - Ocean Transportation Intermediary License Applicants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-13

    ... Logistics, Inc. (NVO & OFF), 7685 NW. 80th Terrace, Medley, FL 33166, Officers: Hugo E. Martinez, Secretary.... Echevarria, President, Application Type: New NVO & OFF License, Global Wide Logistics, Inc. (NVO), 1937 Davis... Member, Application Type: QI Change. Matson Logistics Warehousing, Inc. (NVO & OFF), 1855 Gateway...

  9. Data Warehousing: How To Make Your Statistics Meaningful.

    ERIC Educational Resources Information Center

    Flaherty, William

    2001-01-01

    Examines how one school district found a way to turn data collection from a disparate mountain of statistics into more useful information by using their Instructional Decision Support System. System software is explained as is how the district solved some data management challenges. (GR)

  10. Using IT to improve quality at NewYork-Presbyterian Hospital: a requirements-driven strategic planning process.

    PubMed

    Kuperman, Gilad J; Boyer, Aurelia; Cole, Curt; Forman, Bruce; Stetson, Peter D; Cooper, Mary

    2006-01-01

    At NewYork-Presbyterian Hospital, we are committed to the delivery of high quality care. We have implemented a strategic planning process to determine the information technology initiatives that will best help us improve quality. The process began with the creation of a Clinical Quality and IT Committee. The Committee identified 2 high priority goals that would enable demonstrably high quality care: 1) excellence at data warehousing, and 2) optimal use of automated clinical documentation to capture encounter-related quality and safety data. For each high priority goal, a working group was created to develop specific recommendations. The Data Warehousing subgroup has recommended the implementation of an architecture management process and an improved ability for users to get access to aggregate data. The Structured Documentation subgroup is establishing recommendations for a documentation template creation process. The strategic planning process at times is slow, but assures that the organization is focusing on the information technology activities most likely to lead to improved quality.

  11. A modeling of dynamic storage assignment for order picking in beverage warehousing with Drive-in Rack system

    NASA Astrophysics Data System (ADS)

    Hadi, M. Z.; Djatna, T.; Sugiarto

    2018-04-01

    This paper develops a dynamic storage assignment model to solve the storage assignment problem (SAP) for beverage order picking in a drive-in rack warehousing system, determining an appropriate storage location and space for each beverage product dynamically so that the performance of the system can be improved. The study constructs a graph model to represent drive-in rack storage positions, then combines association rule mining, class-based storage policies and an arrangement rule algorithm to determine an appropriate storage location and arrangement for each product according to dynamic customer orders. The performance of the proposed model is measured in terms of rule adjacency accuracy, travel distance for the picking process, and the probability that a product expires, using a Last Come First Serve (LCFS) queue approach. Finally, the proposed model is implemented through computer simulation and its performance is compared with that of other storage assignment methods. The results indicate that the proposed model outperforms the other storage assignment methods.
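
    A minimal sketch of the class-based storage piece of such a model: SKUs are ranked by picking frequency and assigned to classes A/B/C, with class A lanes closest to the depot to cut travel distance. The order data, class cut-offs, and lane distances are invented; the paper's model additionally combines association rules and an LCFS queue to estimate expiry risk in drive-in racks.

    ```python
    # Class-based (ABC) storage assignment driven by picking frequency.
    # All SKUs, cut-offs, and distances are hypothetical.
    order_frequency = {"cola": 120, "tea": 45, "soda": 80, "juice": 10}
    lane_distance = {"A": 5, "B": 15, "C": 30}  # metres from depot, per class

    def class_based_assignment(freq, cutoffs=(0.25, 0.5)):
        """Rank SKUs by frequency; top quarter -> A, next quarter -> B, rest -> C."""
        ranked = sorted(freq, key=freq.get, reverse=True)
        n = len(ranked)
        assignment = {}
        for i, sku in enumerate(ranked):
            share = (i + 1) / n
            assignment[sku] = "A" if share <= cutoffs[0] else "B" if share <= cutoffs[1] else "C"
        return assignment

    assignment = class_based_assignment(order_frequency)
    travel = {sku: lane_distance[cls] for sku, cls in assignment.items()}
    print(assignment)  # {'cola': 'A', 'soda': 'B', 'tea': 'C', 'juice': 'C'}
    print(travel)      # {'cola': 5, 'soda': 15, 'tea': 30, 'juice': 30}
    ```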

  12. A Queriable Repository for HST Telemetry Data, a Case Study in using Data Warehousing for Science and Engineering

    NASA Astrophysics Data System (ADS)

    Pollizzi, J. A.; Lezon, K.

    The Hubble Space Telescope (HST) generates on the order of 7,000 telemetry values, many of which are sampled at 1Hz, and with several hundred parameters being sampled at 40Hz. Such data volumes would quickly tax even the largest of processing facilities. Yet the ability to access the telemetry data in a variety of ways, and in particular, using ad hoc (i.e., no a priori fixed) queries, is essential to assuring the long term viability and usefulness of this instrument. As part of the recent NASA initiative to re-engineer HST's ground control systems, a concept arose to apply newly available data warehousing technologies to this problem. The Space Telescope Science Institute was engaged to develop a pilot to investigate the technology and to create a proof-of-concept testbed that could be demonstrated and evaluated for operational use. This paper describes this effort and its results.

  13. Using IT to Improve Quality at NewYork-Presbyterian Hospital: A Requirements-Driven Strategic Planning Process

    PubMed Central

    Kuperman, Gilad J.; Boyer, Aurelia; Cole, Curt; Forman, Bruce; Stetson, Peter D.; Cooper, Mary

    2006-01-01

    At NewYork-Presbyterian Hospital, we are committed to the delivery of high quality care. We have implemented a strategic planning process to determine the information technology initiatives that will best help us improve quality. The process began with the creation of a Clinical Quality and IT Committee. The Committee identified 2 high priority goals that would enable demonstrably high quality care: 1) excellence at data warehousing, and 2) optimal use of automated clinical documentation to capture encounter-related quality and safety data. For each high priority goal, a working group was created to develop specific recommendations. The Data Warehousing subgroup has recommended the implementation of an architecture management process and an improved ability for users to get access to aggregate data. The Structured Documentation subgroup is establishing recommendations for a documentation template creation process. The strategic planning process at times is slow, but assures that the organization is focusing on the information technology activities most likely to lead to improved quality. PMID:17238381

  14. Role of data warehousing in healthcare epidemiology.

    PubMed

    Wyllie, D; Davies, J

    2015-04-01

    Electronic storage of healthcare data, including individual-level risk factors for both infectious and other diseases, is increasing. These data can be integrated at hospital, regional and national levels. Data sources that contain risk factor and outcome information for a wide range of conditions offer the potential for efficient epidemiological analysis of multiple diseases. Opportunities may also arise for monitoring healthcare processes. Integrating diverse data sources presents epidemiological, practical, and ethical challenges. For example, diagnostic criteria, outcome definitions, and ascertainment methods may differ across the data sources. Data volumes may be very large, requiring sophisticated computing technology. Given the large populations involved, perhaps the most challenging aspect is how informed consent can be obtained for the development of integrated databases, particularly when it is not easy to demonstrate their potential. In this article, we discuss some of the ups and downs of recent projects as well as the potential of data warehousing for antimicrobial resistance monitoring. Copyright © 2015. Published by Elsevier Ltd.

  15. 77 FR 4006 - Foreign-Trade Zone 45-Portland, Oregon; Expansion of Manufacturing Authority; Epson Portland, Inc...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-26

    ..., Oregon; Expansion of Manufacturing Authority; Epson Portland, Inc. (Inkjet Ink Manufacturing); Portland... manufacturing (injection molding, assembly, finishing), warehousing and distribution of inkjet printer cartridges. The current request involves the production of ink for inkjet printer cartridges using foreign...

  16. 77 FR 10943 - Small Business Size Standards: Transportation and Warehousing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-24

    ... assistance programs, SBA establishes small business size definitions (referred to as size standards) for... business concern * * *'' Sec. 3(a)(2)(C)(ii)(II) [emphasis added]. Third, SBA's existing definitions of... establishes distinct definitions to determine which businesses are deemed small businesses. The Small Business...

  17. An assessment of the common carrier shipping environment

    Treesearch

    F. E. Ostrem; W. D. Godshall

    1979-01-01

    An assessment of available data and information describing the common carrier shipping environment was conducted. The assesment included the major shipping hazards of shock, vibration, impact, temperature, and humidity associated with the handling, transportation, and warehousing operations of typical distribution cycles. Previous environmental studies and current data...

  18. External Corrosion of Tinplate Ration Food Cans Under Tropical Field Storage

    DTIC Science & Technology

    1987-04-01

    ...can with the subsequent risk of spoilage, nor detracts from the cosmetic appearance of the can. Hartwell (1956) states that the obvious standard for... not satisfactory, where climatic conditions are conducive to rust formation, and where warehousing conditions are poor. Beyer (1965) reported that...

  19. Mix Master

    ERIC Educational Resources Information Center

    Waters, John K.

    2008-01-01

    Data integration is one of the single most challenging tasks any district can face. Fortunately for school districts throughout the country with data scattered in disparate systems, an open specification known as the Schools Interoperability Framework (SIF) is mitigating that challenge. SIF has emerged as a cornerstone of K-12 data warehousing,…

  20. Educational Publishing: Experiences from Asia and the Pacific.

    ERIC Educational Resources Information Center

    United Nations Educational, Scientific and Cultural Organization, Bangkok (Thailand). Asian Centre for Educational Innovation for Development.

    This resource book on educational publishing presents examples of evaluation and planning; try-out procedures; the production process; and warehousing and distribution, all reinforced by examples of systems and structures and case studies which were presented at the 1985 Manila and Tonga Seminars. Part one, Planning, Try-out and Evaluation of…

  1. The Role of the Quality Enhancement Plan in Engendering a Culture of Assessment

    ERIC Educational Resources Information Center

    Loughman, Thomas P.; Hickson, Joyce; Sheeks, Gina L.; Hortman, J. William

    2008-01-01

    During the past two decades, colleges and universities have used best practices from corporate management such as total quality management, strategic planning, management by objectives, benchmarking, data warehousing, and performance indicators. Many institutions of higher learning now have adopted comprehensive and multifaceted approaches to…

  2. 29 CFR 779.208 - Auxiliary activities which are “related activities.”

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Auxiliary activities which are ârelated activities.â 779...; Enterprise Coverage Related Activities § 779.208 Auxiliary activities which are “related activities.” As... activities, such as central office and warehousing activities and bookkeeping, auditing, purchasing...

  3. Experiential Learning Using QlikView Business Intelligence Software

    ERIC Educational Resources Information Center

    Podeschi, R. J.

    2015-01-01

    This paper reports on the use of QlikView business intelligence software for use in a Business Intelligence (BI) course within an undergraduate information systems program. The course provides students with concepts related to data warehousing, data mining, visualizations, and software tools to provide business intelligence solutions for decision…

  4. Trade Related Reading Packets for Disabled Readers.

    ERIC Educational Resources Information Center

    Davis, Beverly; Woodruff, Nancy S.

    Six trade-related reading packets for disabled readers are provided for these trades: assemblers, baking, building maintenance, data entry, interior landscaping, and warehousing. Each packet stresses from 9 to 14 skills. Those skills common to most packets include context clues, fact or opinion, details, following directions, main idea,…

  5. 29 CFR 779.209 - Vertical activities which are “related activities.”

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... structure such as the manufacturing, warehousing, and retailing of a particular product or products.” Where... business activities, such as a wholesaling and retailing or manufacturing and retailing, are interrelated... to carry out a business purpose of the manufacturing plant, retailing and manufacturing will be...

  6. 29 CFR 779.209 - Vertical activities which are “related activities.”

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... structure such as the manufacturing, warehousing, and retailing of a particular product or products.” Where... business activities, such as a wholesaling and retailing or manufacturing and retailing, are interrelated... to carry out a business purpose of the manufacturing plant, retailing and manufacturing will be...

  7. 29 CFR 779.209 - Vertical activities which are “related activities.”

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... structure such as the manufacturing, warehousing, and retailing of a particular product or products.” Where... business activities, such as a wholesaling and retailing or manufacturing and retailing, are interrelated... to carry out a business purpose of the manufacturing plant, retailing and manufacturing will be...

  8. 29 CFR 779.209 - Vertical activities which are “related activities.”

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... structure such as the manufacturing, warehousing, and retailing of a particular product or products.” Where... business activities, such as a wholesaling and retailing or manufacturing and retailing, are interrelated... to carry out a business purpose of the manufacturing plant, retailing and manufacturing will be...

  9. 29 CFR 779.209 - Vertical activities which are “related activities.”

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... structure such as the manufacturing, warehousing, and retailing of a particular product or products.” Where... business activities, such as a wholesaling and retailing or manufacturing and retailing, are interrelated... to carry out a business purpose of the manufacturing plant, retailing and manufacturing will be...

  10. 22 CFR 202.5 - Approval of programs, projects and services.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... and taxes; (iv) The supplies will be treated as a supplementary resource; (v) The supplies will be... supplies will be received, unloaded, warehoused, and transported cost-free to points of distribution; (3... assumed by the agency for the noncommercial distribution of the supplies free of cost to the persons...

  11. 75 FR 64699 - Grant of Authority for Subzone Status; VF Corporation (Apparel, Footwear and Luggage Distribution...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-20

    ... DEPARTMENT OF COMMERCE Foreign-Trade Zones Board [Order No. 1714] Grant of Authority for Subzone Status; VF Corporation (Apparel, Footwear and Luggage Distribution), Martinsville, VA Pursuant to its... authority for subzone status for activity related to apparel, footwear and luggage warehousing and...

  12. 75 FR 38077 - Grant of Authority for Subzone Status; Abercrombie & Fitch (Footwear and Apparel Distribution...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-01

    ... DEPARTMENT OF COMMERCE Foreign-Trade Zones Board [Order No. 1687] Grant of Authority for Subzone Status; Abercrombie & Fitch (Footwear and Apparel Distribution); New Albany, OH Pursuant to its authority... footwear and apparel warehousing and distribution at the facility of Abercrombie & Fitch, located in New...

  13. Enhancements for a Dynamic Data Warehousing and Mining System for Large-scale HSCB Data

    DTIC Science & Technology

    2016-05-27

    Progress report front matter from Intelligent Automation Incorporated; the table of contents lists "Work Performed within This Reporting Period" (including "Top K User and...") and "Work to be Performed in the Next Reporting Period."

  14. Failed Hopes of Education: Revisiting the Relevancy of Education as a Method of Diminishing Recidivism

    ERIC Educational Resources Information Center

    McElreath, David H.; Doss, Daniel Adrian; Jensen, Carl; Mallory, Stephen; Wigginton, Michael; Lyons, Terry; Williamson, Lorri C.; McElreath, Leisa S.

    2018-01-01

    This article describes how, generally, the majority of inmates will recidivate again within five years of being released from incarceration. Recidivism represents cyclical criminality that affects all American communities. Despite substantial expenditures toward the warehousing of inmates within the corrections system, less emphasisis directed…

  15. Development of a statewide transportation data warehousing and mining system under the Louisiana transportation information system (LATIS) program : technical summary report.

    DOT National Transportation Integrated Search

    2009-04-01

    Considerable amounts of data are collected by metropolitan and state authorities that are either not used or accessed and analyzed with difficulty. This situation has been exacerbated by the recent increase in information from Intelligent Transpo...

  16. Data-Driven Decision-Making: It's a Catch-Up Game

    ERIC Educational Resources Information Center

    Briggs, Linda L.

    2006-01-01

    Having an abundance of data residing in individual silos across campus, but little decision-ready information, is a typical scenario at many institutions. One problem is that the terms "data warehousing" and "business intelligence" refer to very different things, although the two often go hand-in-hand. "Data…

  17. Developing a Statewide Student Tracking Tool

    ERIC Educational Resources Information Center

    Pai, Wendell; Moschos, Marina; Detlev, Angela; Robinson, Ophelia; Lanneau, Sumi

    2008-01-01

    This chapter describes the development of a state-level student tracking system that enables the state and institutions to follow various cohort types of students across institutional boundaries. The system was developed by the Policy Research and Data Warehousing section of the State Council of Higher Education for Virginia (SCHEV). SCHEV is the…

  18. Data Warehousing: An Aid to Decision-Making

    ERIC Educational Resources Information Center

    Pare, Roland; Elovitz, Leonard H.

    2005-01-01

    Educators are constantly looking for the magic bullet--that intervention which will miraculously result in higher achievement scores for students, as well as happier and more productive teachers. The pressure for improvement has caused education to become a "bandwagon" industry. Therefore, educational leaders adopt the latest fad in hopes that it…

  19. 24 CFR 576.107 - HMIS component.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...) Integrating and warehousing data, including development of a data warehouse for use in aggregating data from... the costs of contributing data to the HMIS designated by the Continuum of Care for the area, including..., phone service, and high-speed data transmission necessary to operate or contribute data to the HMIS...

  20. 24 CFR 576.107 - HMIS component.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...) Integrating and warehousing data, including development of a data warehouse for use in aggregating data from... the costs of contributing data to the HMIS designated by the Continuum of Care for the area, including..., phone service, and high-speed data transmission necessary to operate or contribute data to the HMIS...

  1. 24 CFR 576.107 - HMIS component.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...) Integrating and warehousing data, including development of a data warehouse for use in aggregating data from... the costs of contributing data to the HMIS designated by the Continuum of Care for the area, including..., phone service, and high-speed data transmission necessary to operate or contribute data to the HMIS...

  2. 12 CFR 329.103 - Premiums.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., including shipping, warehousing, packaging, and handling costs) does not exceed $10 for a deposit of less... the balance in a demand deposit account and the duration of the account balance shall not be considered the payment of interest on a demand deposit account and shall not be subject to the limitations in...

  3. 12 CFR 329.103 - Premiums.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., including shipping, warehousing, packaging, and handling costs) does not exceed $10 for a deposit of less... the balance in a demand deposit account and the duration of the account balance shall not be considered the payment of interest on a demand deposit account and shall not be subject to the limitations in...

  4. Data Warehousing: Too Much Information

    ERIC Educational Resources Information Center

    Brown, Justine

    2006-01-01

    Over the past five years, the No Child Left Behind Act, with its mandates for collecting and documenting student achievement statistics, has fueled a surge in the volume of data collected by schools, districts, and state education departments. This information explosion has gone hand in hand with technology's growing affordability and…

  5. 77 FR 70481 - Fasco, A Division of Regal Beloit Corporation, Including On-Site Leased Workers From Penmac...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-26

    ... distribution of electric motors, as well as engineering, customer service and information technology (IT... portion of the supply of engineering services (or like or directly competitive services) to a foreign..., Missouri, who were engaged in employment related to the supply of warehousing, distribution, engineering...

  6. 76 FR 2713 - Croscill Acquisition, LLC, Currently Known as Croscill Home, LLC, Plant No. 8, Oxford, NC...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-14

    ..., Currently Known as Croscill Home, LLC, Plant No. 8, Oxford, NC; Amended Certification Regarding Eligibility..., LLC, formerly doing business as Royal Home Fashions, a subsidiary of Croscill, Inc., Plant No. 8...). The workers are engaged in the supply of warehousing and distribution services of household products...

  7. 75 FR 17125 - Foreign-Trade Zone 157-Casper, Wyoming, Application for Expansion

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-05

    ...): Proposed Site 2 (984 acres) Casper Logistics Hub, located adjacent to and northeast of the airport at 6... Industrial Ranch, LLC and the Casper Logistics Hub. The site will be used to provide logistics, warehousing and distribution services to area businesses. No specific manufacturing authority is being requested...

  8. 76 FR 87 - Grant of Authority for Subzone Status; Skechers USA, LLC (Distribution of Footwear); Moreno...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-03

    ... Status; Skechers USA, LLC (Distribution of Footwear); Moreno Valley, California Pursuant to its authority... distribution facility of Skechers USA, LLC, located in Moreno Valley, California, (FTZ Docket 5- 2008, filed 2... activity related to footwear warehousing and distribution at the facility of Skechers USA, LLC, located in...

  9. 75 FR 56991 - Grant of Authority for Subzone Status Michelin North America, Inc. (Tire Distribution and Wheel...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-17

    ... Status Michelin North America, Inc. (Tire Distribution and Wheel Assembly) Baltimore, MD Pursuant to its... warehouse/distribution and wheel assembly facility of Michelin North America, Inc., located in Elkton, MD... tire accessories warehousing and distribution and wheel assembly at the facility of Michelin North...

  10. 77 FR 28851 - Foreign-Trade Zone 126-Reno, NV; Notification of Proposed Production Activity; Brightpoint North...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-16

    ...; Notification of Proposed Production Activity; Brightpoint North America L.P. (Cell Phone Kitting and... for cell phone kitting, warehousing and distribution operations. Production under FTZ procedures could... procedures that apply to cell phone kits (duty free) for the foreign status inputs noted below. Customs...

  11. Real-Time Data Warehousing and On-Line Analytical Processing at Aberdeen Test Center’s Distributed Center

    DTIC Science & Technology

    2005-12-01

    Data collected via on-board instrumentation (VxWorks-based computer). Each instrument produces a continuous time history record of up to 250... data in multidimensional hierarchies and views. UGC 2005. Institute a high performance data warehouse: PostgreSQL 7.4 installed on a dedicated filesystem.

  12. Factors Influencing BI Data Collection Strategies: An Empirical Investigation

    ERIC Educational Resources Information Center

    Ramakrishnan, Thiagarajan

    2010-01-01

    The purpose of this dissertation is to examine the external factors that influence an organization's business intelligence (BI) data collection strategy when mediated by BI attributes. In this dissertation, data warehousing strategies are used as the basis on which to frame the exploration of BI data collection strategies. The attributes include…

  13. Data-Driven Decision-Making: Data Pioneers

    ERIC Educational Resources Information Center

    Briggs, Linda L.

    2006-01-01

    Everyone on your campus needs information, and if your institution is like most schools, you have plenty of it to share. But which types of data warehousing and business intelligence systems you choose, and how accessible, usable, and meaningful those tools make all of that information, remain the big questions for many technologists and…

  14. Operating Hours Based Inventory Management.

    DTIC Science & Technology

    1986-12-01

    forecasting can be based on expert opinion. The Delphi technique is one such method of forecasting which uses a group of decision makers with a... errors occur. If large amounts of material are procured and warehoused, there could be a greater chance that the material will no longer be needed

  15. Data-Driven Decision Making: The "Other" Data

    ERIC Educational Resources Information Center

    Villano, Matt

    2007-01-01

    Data is a daily reality for school systems. Between standardized tests and tools from companies that offer data warehousing services, educators and district superintendents alike are up to their eyeballs in facts and figures about student performance that they can use as the basis for curricular decisions. Still, there is more to assessment than…

  16. InterMine Webservices for Phytozome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, Joseph; Hayes, David; Goodstein, David

    2014-01-10

    A data warehousing framework for biological information provides a useful infrastructure for providers and users of genomic data. For providers, the infrastructure gives them a consistent mechanism for extracting raw data. For users, the web services supported by the software allow them to make either simple and common, or complex and unique, queries of the data.

  17. Garbage in, Garbage Stays: How ERPs Could Improve Our Data-Quality Issues

    ERIC Educational Resources Information Center

    Riccardi, Richard I.

    2009-01-01

    As universities begin to implement business intelligence tools such as end-user reporting, data warehousing, and dashboard indicators, data quality becomes an even greater and more public issue. With automated tools taking nightly snapshots of the database, the faulty data grow exponentially, propagating as another layer of the data warehouse.…

  18. The Prevalence of Short Sleep Duration by Industry and Occupation in the National Health Interview Survey

    PubMed Central

    Luckhaupt, Sara E.; Tak, SangWoo; Calvert, Geoffrey M.

    2010-01-01

    Study Objectives: To explore whether employment in industries likely to have non-standard work schedules (e.g., manufacturing and service) and occupations with long work-weeks (e.g., managerial/ professional, sales, and transportation) is associated with an increased risk of short sleep duration. Design: Cross-sectional epidemiologic survey. Setting: Household-based face-to-face survey of civilian, non-institutionalized US residents. Participants: Sample adults interviewed for the National Health Interview Survey in 1985 or 1990 (N = 74,734) or between 2004 and 2007 (N = 110,422). Most analyses focused on civilian employed workers interviewed between 2004 and 2007 (N = 66,099). Interventions: N/A Measurements and Results: The weighted prevalence of self-reported short sleep duration, defined as ≤6 h per day, among civilian employed workers from 2004-2007 was 29.9%. Among industry categories, the prevalence of short sleep duration was greatest for management of companies and enterprises (40.5%), followed by transportation/warehousing (37.1%) and manufacturing (34.8%). Occupational categories with the highest prevalence included production occupations in the transportation/warehousing industry, and installation, maintenance, and repair occupations in both the transportation/warehousing industry and the manufacturing industry. In the combined sample from 1985 and 1990, 24.2% of workers reported short sleep duration; the prevalence of short sleep duration was significantly lower during this earlier time period compared to 2004–2007 for 7 of 8 industrial sectors. Conclusions: Self-reported short sleep duration among US workers varies by industry and occupation, and has increased over the past two decades. These findings suggest the need for further exploration of the relationship between work and sleep, and development of targeted interventions for specific industry/occupation groups. Citation: Luckhaupt SE; Tak S; Calvert GM. The prevalence of short sleep duration by industry and occupation in the National Health Interview Survey. SLEEP 2010;33(2):149-159 PMID:20175398

  19. P19-S Managing Proteomics Data from Data Generation and Data Warehousing to Central Data Repository and Journal Reviewing Processes

    PubMed Central

    Thiele, H.; Glandorf, J.; Koerting, G.; Reidegeld, K.; Blüggel, M.; Meyer, H.; Stephan, C.

    2007-01-01

    In today’s proteomics research, a variety of techniques, instruments, and bioinformatics tools are necessary to manage the large amount of heterogeneous data, with automatic quality control, to produce reliable and comparable results. Therefore, a data-processing pipeline is mandatory for data validation and comparison in a data-warehousing system. The proteome bioinformatics platform ProteinScape has been proven to cover these needs. The reprocessing of HUPO BPP participants’ MS data was done within ProteinScape. The reprocessed information was transferred into the global data repository PRIDE. ProteinScape as a data-warehousing system covers two main aspects: archiving relevant data of the proteomics workflow and information extraction functionality (protein identification, quantification and generation of biological knowledge). As a strategy for automatic data validation, different protein search engines are integrated. Result analysis is performed using a decoy database search strategy, which allows the measurement of the false-positive identification rate. Peptide identifications across different workflows, different MS techniques, and different search engines are merged to obtain a quality-controlled protein list. The proteomics identifications database (PRIDE), as a public data repository, is an archiving system where data are finally stored and no longer changed by further processing steps. Data submission to PRIDE is open to proteomics laboratories generating protein and peptide identifications. An export tool has been developed for transferring all relevant HUPO BPP data from ProteinScape into PRIDE using the PRIDE.xml format. The EU-funded ProDac project will coordinate the development of software tools covering international standards for the representation of proteomics data. The implementation of data submission pipelines and systematic data collection in public standards–compliant repositories will cover all aspects, from the generation of MS data in each laboratory to the conversion of all the annotating information and identifications to a standardized format. Such datasets can be used in the course of publishing in scientific journals.
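
    The decoy-database strategy mentioned in this record estimates the false-positive rate by searching a decoy (reversed or shuffled) database alongside the real one. The following is only a minimal sketch of that arithmetic with invented scores; it is not ProteinScape's implementation.

        # Minimal sketch of decoy-based false discovery rate (FDR) estimation.
        # Scores, cutoff, and the decoy/target ratio formula are illustrative
        # assumptions, not the ProteinScape workflow described above.

        def estimate_fdr(matches, score_cutoff):
            """matches: list of (score, is_decoy) peptide-spectrum matches."""
            accepted = [(s, d) for s, d in matches if s >= score_cutoff]
            decoys = sum(1 for _, is_decoy in accepted if is_decoy)
            targets = len(accepted) - decoys
            if targets == 0:
                return 1.0
            # A decoy hit is assumed to be as likely as a false target hit,
            # so the decoy count approximates the number of false positives.
            return decoys / targets

        matches = [(92, False), (88, False), (85, True), (80, False), (77, True), (60, False)]
        print(estimate_fdr(matches, score_cutoff=75))   # 2 decoys / 3 targets ~= 0.67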

  20. Something old, something new: data warehousing in the digital age

    NASA Astrophysics Data System (ADS)

    Maguire, Rob; Woolf, Andrew

    2015-04-01

    The implications of digital transformation for Earth science data managers are significant: big data, internet of things, new sources of third-party observations. This at a time when many are struggling to deal with half a century of legacy data infrastructure since the International Geophysical Year. While data management best practice has evolved over this time, large-scale migration activities are rare, with processes and applications instead built up around a plethora of different technologies and approaches. It is perhaps more important than ever, before embarking on major investments in new technologies, to consider the benefits first of 'catching up' with mature best-practice. Data warehousing, as an architectural formalism, was developed in the 1990s as a response to the growing challenges in corporate environments of assembling, integrating, and quality controlling large amounts of data from multiple sources and for multiple purposes. A layered architecture separates transactional data, integration and staging areas, the warehouse itself, and analytical 'data marts', with optimised ETL (Extract, Transform, Load) processes used to promote data through the layers. The data warehouse, together with associated techniques of 'master data management' and 'business intelligence', provides a classic foundation for 'enterprise information management' ("an integrative discipline for structuring, describing and governing information assets across organizational and technological boundaries to improve efficiency, promote transparency and enable business insight", Gartner). The Australian Bureau of Meteorology, like most Earth-science agencies, maintains a large amount of observation data in a variety of systems and architectures. These data assets evolve over decades, usually for operational, rather than information management, reasons. Consequently there can be inconsistency in architectures and technologies. We describe our experience with two major data assets: the Australian Water Resource Information System (AWRIS) and the Australian Data Archive for Meteorology (ADAM). These maintain the national archive of hydrological and climate data. We are undertaking a migration of AWRIS from a 'software-centric' system to a 'data-centric' warehouse, with significant benefits in performance, scalability, and maintainability. As well, the architecture supports the use of conventional BI tools for product development and visualisation. We have also experimented with a warehouse ETL replacement for custom tsunameter ingest code in ADAM, with considerable success. Our experience suggests that there is benefit to be gained through adoption by science agencies of professional IT best practice that is mature in industry but may have been overlooked by scientific information practitioners. In the case of data warehousing, the practice requires a change of perspective from a focus on code development to a focus on data. It will continue to be relevant in the 'digital age' as vendors increasingly support integrated warehousing and 'big data' platforms.
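
    The layered architecture described in this record (staging area, warehouse, analytical data marts, with ETL processes promoting data between layers) can be illustrated with a toy pipeline. The table names, sentinel value, and cleansing rule below are invented for illustration; they are not taken from AWRIS or ADAM.

        # Toy ETL promotion through staging -> warehouse -> data mart layers.
        # Table names and the quality-control rule are illustrative only.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE staging_obs (station TEXT, obs_time TEXT, value REAL);
            CREATE TABLE warehouse_obs (station TEXT, obs_time TEXT, value REAL);
            CREATE TABLE mart_daily_mean (station TEXT, obs_date TEXT, mean_value REAL);
        """)

        # Extract: raw observations land in the staging area as-is.
        db.executemany("INSERT INTO staging_obs VALUES (?, ?, ?)",
                       [("A", "2015-01-01T00:00", 1.0),
                        ("A", "2015-01-01T12:00", 3.0),
                        ("A", "2015-01-01T06:00", -9999.0)])  # sentinel for 'missing'

        # Transform + Load: promote only quality-controlled rows into the warehouse.
        db.execute("""INSERT INTO warehouse_obs
                      SELECT station, obs_time, value FROM staging_obs
                      WHERE value > -9999.0""")

        # Data mart: a pre-aggregated layer optimised for analysis and BI tools.
        db.execute("""INSERT INTO mart_daily_mean
                      SELECT station, substr(obs_time, 1, 10), AVG(value)
                      FROM warehouse_obs GROUP BY station, substr(obs_time, 1, 10)""")

        print(db.execute("SELECT * FROM mart_daily_mean").fetchall())
        # [('A', '2015-01-01', 2.0)]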

  1. Report on the Warehousing and Distribution Functions of the Division of Supply and Property Management.

    ERIC Educational Resources Information Center

    Richardson, William M.; Baacke, Clifford M.

    The centralized warehouse concept as utilized by Montgomery County (Maryland) Public Schools is examined in this report detailing the operation of the warehouse facility and distribution system. After an executive summary, an addendum detailing comments by the staff of the Office of the Associate Superintendent for Supportive Services conflicting…

  2. An Empirical Study of Logistics Organization, Electronic Linkage, and Performance

    DTIC Science & Technology

    1993-01-01

    utilization of transportation resources, and improved quality management. Researchers have proposed an information technology (IT) implementation model for... coordination of (1) facility structure, (2) forecasting and order management, (3) transportation, (4) inventory, and (5) warehousing and packaging. The

  3. 15 CFR Appendix C to Part 30 - Summary of Exemptions and Exclusions From EEI Filing

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... or transshipped and exported directly from the port of arrival never having made entry into the United States. If entry for consumption or warehousing in the United States is made, then an EEI is... foreign country. Goods that were imported under bond for processing and re-exportation are not covered by...

  4. 15 CFR Appendix C to Part 30 - Summary of Exemptions and Exclusions From EEI Filing

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... or transshipped and exported directly from the port of arrival never having made entry into the United States. If entry for consumption or warehousing in the United States is made, then an EEI is... foreign country. Goods that were imported under bond for processing and re-exportation are not covered by...

  5. Studying the Impact of Federal and State Changes in Student Aid Policy at the Campus Level.

    ERIC Educational Resources Information Center

    Fenske, Robert H.; Dillon, Kathryn A.; Porter, John D.

    1997-01-01

    Argues that shifts in government policies can produce unintended consequences for needy students and the institutions they attend, and illustrates how campus units can cooperate to examine the impact of these changes through creation of longitudinal databases and data warehousing techniques. Describes the approach used and results of a study at…

  6. Developing Data Systems To Support the Analysis and Development of Large-Scale, On-Line Assessment.

    ERIC Educational Resources Information Center

    Yu, Chong Ho

    Today many data warehousing systems are data rich, but information poor. Extracting useful information from an ocean of data to support administrative, policy, and instructional decisions becomes a major challenge to both database designers and measurement specialists. This paper focuses on the development of a data processing system that…

  7. 77 FR 48959 - Foreign-Trade Zone 133-Quad-Cities, Iowa/Illinois Application for Reorganization Under...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-15

    ... Cities Industrial Center, 200 East 90th Street, Davenport, Iowa; Site 2 (33 acres)--Rock Island Arsenal, 1775 East Street, Rock Island, Illinois; Site 3 (55 acres)--Modern Warehousing, 801 1st Street East..., Mercer, Rock Island and Warren Counties, Illinois as well as Cedar, Clinton, Des Moines, Dubuque, Henry...

  8. Enhancements for a Dynamic Data Warehousing and Mining System for Large-scale HSCB Data

    DTIC Science & Technology

    2016-06-20

    Contents excerpt: YouTube Data Collection; Scraawl YouTube Data Collection API Development; Scraawl UI Development for YouTube Searches; Current... following tasks: developed automated YouTube data collection capabilities. We have developed automated YouTube video metadata collection capabilities and

  9. A Proposal for Kelly Criterion-Based Lossy Network Compression

    DTIC Science & Technology

    2016-03-01

    warehousing and data mining techniques for cyber security. New York (NY): Springer; 2007. p. 83–108. 34. Münz G, Li S, Carle G. Traffic anomaly...p. 188–196. 48. Kim NU, Park MW, Park SH, Jung SM, Eom JH, Chung TM. A study on effective hash-based load balancing scheme for parallel NIDS. In

  10. Robots Spur Software That Lends a Hand

    NASA Technical Reports Server (NTRS)

    2014-01-01

    While building a robot to assist astronauts in space, Johnson Space Center worked with partners to develop robot reasoning and interaction technology. The partners created Robonaut 1, which led to Robonaut 2, and the work also led to patents now held by Universal Robotics in Nashville, Tennessee. The NASA-derived technology is available for use in warehousing, mining, and more.

  11. Warehousing Human Beings: A Review of the New York State Correctional System.

    ERIC Educational Resources Information Center

    New York State Advisory Committee to the U.S. Commission on Civil Rights, New York.

    In 1970, the New York Advisory Committee to the United States Commission on Civil Rights undertook a study of the State Department of Correctional Services. Using information obtained from observations and from interviews with officials, staff, and inmates, the investigation focused upon the impact of the system on minorities and women. In the…

  12. 76 FR 50268 - Croscill Acquisition, LLC, Currently Known as Croscill Home, LLC, Plant No. 8, Including On-Site...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-12

    ..., Currently Known as Croscill Home, LLC, Plant No. 8, Including On-Site Leased Workers From Ex-Cell Home... Fashions, a subsidiary of Croscill, Inc., Plant No. 8, Oxford, North Carolina. The notice was published in... January 14, 2011 (76 FR 2713). The workers are engaged in warehousing and distribution services of...

  13. The Impact of Supply Chain Business Processes on Competitive Advantage and Organizational Performance

    DTIC Science & Technology

    2012-03-22

    Manager, Vice President (VP) Distribution & Fulfillment, Transportation Manager, VP of Supply Chain Management, Production Manager, Director of... Logistics/Transportation/Distribution (75%), and Supply/Purchasing/Procurement (25%) were identified as functions that best describe the respondents... manufacturing industry (50%), one respondent represented the wholesale trade (12.5%), the retail trade (12.5%), and the transportation and warehousing

  14. Recommended Financial Plan for the Construction of a Permanent Campus for San Joaquin Delta College.

    ERIC Educational Resources Information Center

    Bortolazzo, Julio L.

    The financial plan for the San Joaquin Delta College (California) permanent campus is presented in a table showing the gross square footage, the unit cost (including such fixed equipment as workbenches, laboratory tables, etc.), and the estimated total cost for each department. The unit costs per square foot vary from $18.00 for warehousing to…

  15. Internal Drivers of External Flexibility: A Detailed Analysis

    DTIC Science & Technology

    2007-08-14

    example, order processing within a supplier’s firm is a competence. Meeting customer demand by providing a consistent delivery schedule is a capability... Focus interview on the following logistics areas: (a) order processing, (b) inventory, (c) transportation, (d) warehousing, materials handling... demands. In logistics, superior service depends upon order processing (Byrne and Markham 1991); quality of contact personnel (Innis and LaLonde 1994

  16. 'Keep Them Students Busy': 'Warehoused' or Taught Skills to Achieve?

    ERIC Educational Resources Information Center

    Cornish, Carlene

    2018-01-01

    RPA (Raising of Participation Age) legislation re-positioned all youth in England to participate in post-16 education and training, the ultimate aim to develop 'human capital'. However, how does RPA play out in practice with previously NEET (not in education, employment or training) and so-called disengaged youth engaged on a Level 1…

  17. Near Real-Time Data Warehousing Using State-of-the-Art ETL Tools

    NASA Astrophysics Data System (ADS)

    Jörg, Thomas; Dessloch, Stefan

    Data warehouses are traditionally refreshed in a periodic manner, most often on a daily basis. Thus, there is some delay between a business transaction and its appearance in the data warehouse. The most recent data is trapped in the operational sources where it is unavailable for analysis. For timely decision making, today's business users ask for ever fresher data.
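
    A common way to approach the freshness problem described in this record is incremental loading: instead of a full periodic rebuild, only rows changed since the last load are propagated. A minimal sketch follows, with a hypothetical last_modified column standing in for whatever change-data-capture mechanism a given ETL tool actually provides.

        # Incremental (near real-time) refresh: copy only rows changed since the
        # previous load. Table names and the last_modified column are assumptions.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE source_orders (id INTEGER PRIMARY KEY, amount REAL, last_modified TEXT);
            CREATE TABLE dw_orders (id INTEGER PRIMARY KEY, amount REAL);
        """)
        db.executemany("INSERT INTO source_orders VALUES (?, ?, ?)",
                       [(1, 10.0, "2024-01-01T08:00"), (2, 20.0, "2024-01-01T09:30")])

        def refresh(db, last_load_time):
            """Propagate rows modified after last_load_time into the warehouse table."""
            changed = db.execute(
                "SELECT id, amount FROM source_orders WHERE last_modified > ?",
                (last_load_time,)).fetchall()
            db.executemany("INSERT OR REPLACE INTO dw_orders (id, amount) VALUES (?, ?)",
                           changed)
            return len(changed)

        print(refresh(db, "2024-01-01T09:00"))                   # 1 row propagated
        print(db.execute("SELECT * FROM dw_orders").fetchall())  # [(2, 20.0)]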

  18. The physiology analysis system: an integrated approach for warehousing, management and analysis of time-series physiology data.

    PubMed

    McKenna, Thomas M; Bawa, Gagandeep; Kumar, Kamal; Reifman, Jaques

    2007-04-01

    The physiology analysis system (PAS) was developed as a resource to support the efficient warehousing, management, and analysis of physiology data, particularly, continuous time-series data that may be extensive, of variable quality, and distributed across many files. The PAS incorporates time-series data collected by many types of data-acquisition devices, and it is designed to free users from data management burdens. This Web-based system allows both discrete (attribute) and time-series (ordered) data to be manipulated, visualized, and analyzed via a client's Web browser. All processes occur on a server, so that the client does not have to download data or any application programs, and the PAS is independent of the client's computer operating system. The PAS contains a library of functions, written in different computer languages that the client can add to and use to perform specific data operations. Functions from the library are sequentially inserted into a function chain-based logical structure to construct sophisticated data operators from simple function building blocks, affording ad hoc query and analysis of time-series data. These features support advanced mining of physiology data.
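
    The function-chain idea described in this record, in which sophisticated data operators are built by applying simple function building blocks in sequence, can be sketched as follows. The function names and the heart-rate example are illustrative only, not the PAS function library.

        # Sketch of a function-chain operator over a time series: simple building
        # blocks are composed in order to form a more complex data operator.

        def drop_missing(series):
            return [x for x in series if x is not None]

        def moving_average(window):
            def op(series):
                return [sum(series[i:i + window]) / window
                        for i in range(len(series) - window + 1)]
            return op

        def threshold_alarm(limit):
            def op(series):
                return [x > limit for x in series]
            return op

        def run_chain(series, chain):
            for op in chain:
                series = op(series)
            return series

        heart_rate = [72, None, 75, 78, 90, 120, 118, None, 95]
        chain = [drop_missing, moving_average(3), threshold_alarm(100)]
        print(run_chain(heart_rate, chain))
        # [False, False, False, True, True] -> alarms where the 3-sample mean > 100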

  19. Any information, anywhere, anytime for the warfighter

    NASA Astrophysics Data System (ADS)

    Lazaroff, Mark B.; Sage, Philip A.

    1997-06-01

    The objective of the DARPA battlefield awareness data dissemination (BADD) program is to deliver battlefield awareness information to the warfighter -- anywhere, anytime. BADD is an advanced concept technology demonstration (ACTD) to support proof of concept technology demonstrations and experiments with a goal of introducing new technology to support the operational needs and acceptance of the warfighter. BADD's information management technology provides a 'smart' push of information to the users by providing information subscription services implemented via user-generated profiles. The system also provides services for warfighter pull or 'reach-back' of information via ad hoc query support. The high bandwidth delivery of information via the Global Broadcast System (GBS) satellites enables users to receive battlefield awareness information virtually anywhere. Very similar goals have been established for data warehousing technology -- that is, deliver the right information, to the right user, at the right time so that effective decisions can be made. In this paper, we examine the BADD Phase II architecture and underlying information management technology in the context of data warehousing technology and a data warehouse reference architecture. In particular, we focus on the BADD segment that PSR is building, the Interface to Information Sources (I2S).

  20. Supply Management Analysis of the Chilean Navy Acquisition System

    DTIC Science & Technology

    2014-12-01

    Reference: Armada de Chile (1986). Manual de Logistica de la Armada de Chile [Manual of Logistics of the Chilean Navy]. Chile... Topics listed include transportation; quality control; demand and supply planning; receiving, materials handling, and storage; material or inventory control... order purchasing; production planning, scheduling, and control; warehousing and distribution; shipping; outbound transportation; customer

  1. Logistics Control Facility: A Normative Model for Total Asset Visibility in the Air Force Logistics System

    DTIC Science & Technology

    1994-09-01

    Issue: Computers, information systems, and communication systems are being increasingly used in transportation, warehousing, order processing, materials... inventory levels, reduced order processing times, reduced order processing costs, and increased customer satisfaction. While purchasing and transportation... process, the speed with which orders are processed would increase significantly. Lowering the order processing time in turn lowers the lead time, which in

  2. Federal Logistics Information System. FLIS Procedures Manual Publications. Volume 15.

    DTIC Science & Technology

    1995-01-01

    A function in FLIS which provides for the processing of adjustments/revisions to established item identifications and characteristics in the FLIS Data Base. Item Logistics... the materiel management functions for assigned items. Mechanization of Warehousing and Shipment Processing (MOWASP): a uniform data system designed

  3. Airmobile Shelter Analysis. Volume 2

    DTIC Science & Technology

    1994-02-01

    effort are 10 shelter functions: billets, command and control, administration, maintenance shops, warehousing, medical, kitchens, dining halls... military inventory. The Harvest Eagle family of shelters contains three pole-supported tents: the GP Medium and GP Large tents, and the M-1948 kitchen... [Figure 2: Hierarchy of Shelter Categories.] The M-1948 kitchen has been replaced by

  4. U.S. Geological Survey

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The Hydrologic Instrumentation Facility (HIF) at Stennis Space Center is a unique high-tech facility that provides hydrologic instrumentation support to the U. S. Geological Survey and other federal agencies worldwide. The HIF has the responsibility for warehousing, testing, evaluating, designing, repairing, and calibrating numerous pieces of hydrologic instrumentation, which is used in studying water on the surface, in the soil, and in the atmosphere of the Earth.

  5. A Homegrown Design for Data Warehousing: A District Customizes Its Own Process for Generating Detailed Information about Students in Real Time

    ERIC Educational Resources Information Center

    Thompson, Terry J.; Gould, Karen J.

    2005-01-01

    In recent years the Metropolitan School District of Wayne Township in Indianapolis has been awash in data. In attempts to improve levels of student achievement, the authors collected all manner of statistical details about students and schools and attempted to perform data analysis as part of the school improvement process. The authors were never…

  6. Data Warehousing at the Marine Corps Institute

    DTIC Science & Technology

    2003-09-01

    applications exists for several reasons. It allows for data to be extracted from many sources, be “cleaned”, and stored into one large data facility... exists. Key individuals at MCI, or the so-called “knowledge workers”, will be educated, and try to brainstorm possible data relationships that can... They include querying and reporting, On-Line Analytical Processing (OLAP) and statistical analysis, and data mining. 1. Queries and Reports: The

  7. Management Review: Progress and Challenges at the Defense Logistics Agency.

    DTIC Science & Technology

    1986-04-01

    with safety and worklife problems (warehousing schemes, replacement or improvement of equipment, loading dock shelters, and employee orientation systems)... balances. Accuracy of DCASR Contingent Liability Records: the contingent liability record is one of the more important records maintained by DCASRs because... needed for making management decisions and for certifying to the accuracy of ULO balances. Problems in Data Reported to... Based on our interviews with

  8. Defending against Attribute-Correlation Attacks in Privacy-Aware Information Brokering

    NASA Astrophysics Data System (ADS)

    Li, Fengjun; Luo, Bo; Liu, Peng; Squicciarini, Anna C.; Lee, Dongwon; Chu, Chao-Hsien

    Nowadays, increasing needs for information sharing arise due to extensive collaborations among organizations. Organizations desire to provide data access to their collaborators while preserving full control over the data and comprehensive privacy of their users. A number of information systems have been developed to provide efficient and secure information sharing. However, most of the solutions proposed so far are built atop conventional data warehousing or distributed database technologies.

  9. Multidimensional Data Modeling for Business Process Analysis

    NASA Astrophysics Data System (ADS)

    Mansmann, Svetlana; Neumuth, Thomas; Scholl, Marc H.

    The emerging area of business process intelligence attempts to enhance the analytical capabilities of business process management systems by employing data warehousing and mining technologies. This paper presents an approach to re-engineering the business process modeling in conformity with the multidimensional data model. Since the business process and the multidimensional model are driven by rather different objectives and assumptions, there is no straightforward solution to converging these models.

  10. Data Stream Mining

    NASA Astrophysics Data System (ADS)

    Gaber, Mohamed Medhat; Zaslavsky, Arkady; Krishnaswamy, Shonali

    Data mining is concerned with the process of computationally extracting hidden knowledge structures represented in models and patterns from large data repositories. It is an interdisciplinary field of study that has its roots in databases, statistics, machine learning, and data visualization. Data mining has emerged as a direct outcome of the data explosion that resulted from the success in database and data warehousing technologies over the past two decades (Fayyad, 1997,Fayyad, 1998,Kantardzic, 2003).

  11. Application service provider (ASP) financial models for off-site PACS archiving

    NASA Astrophysics Data System (ADS)

    Ratib, Osman M.; Liu, Brent J.; McCoy, J. Michael; Enzmann, Dieter R.

    2003-05-01

    For the replacement of its legacy Picture Archiving and Communication Systems (approx. annual workload of 300,000 procedures), UCLA Medical Center has evaluated and adopted an off-site data-warehousing solution based on an ASP financial model with a one-time single payment per study archived. Different financial models for long-term data archive services were compared to the traditional capital/operational costs of on-site digital archives. Total cost of ownership (TCO), including direct and indirect expenses and savings, was compared for each model. Financial parameters considered included the logistic and operational advantages and disadvantages of ASP models versus traditional archiving systems. Our initial analysis demonstrated that the traditional linear ASP business model for data storage was unsuitable for large institutions. The overall cost markedly exceeds the TCO of an in-house archive infrastructure (when support and maintenance costs are included). We demonstrated, however, that non-linear ASP pricing models can be cost-effective alternatives for large-scale data storage, particularly if they are based on a scalable off-site data-warehousing service and the prices are adapted to the specific size of a given institution. The added value of ASP is that it does not require iterative data migrations from legacy media to new storage media at regular intervals.
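
    The distinction drawn in this record between a linear per-study ASP price and a non-linear (volume-tiered) model can be made concrete with a small total-cost-of-ownership comparison. All dollar figures and the tier structure below are invented for illustration and are not UCLA's actual costs.

        # Illustrative TCO comparison of archive pricing models over several years.
        # All prices, tiers, and in-house cost figures are hypothetical.

        YEARS = 5
        STUDIES_PER_YEAR = 300_000          # annual workload quoted in the record

        def linear_asp(price_per_study=3.00):
            return YEARS * STUDIES_PER_YEAR * price_per_study

        def tiered_asp():
            # Non-linear model: the marginal price drops as annual volume grows.
            tiers = [(100_000, 3.00), (100_000, 2.00), (float("inf"), 1.00)]
            cost_per_year, remaining = 0.0, STUDIES_PER_YEAR
            for size, price in tiers:
                taken = min(remaining, size)
                cost_per_year += taken * price
                remaining -= taken
            return YEARS * cost_per_year

        def in_house(capital=1_500_000, support_per_year=250_000):
            return capital + YEARS * support_per_year

        print(f"linear ASP : ${linear_asp():,.0f}")   # $4,500,000
        print(f"tiered ASP : ${tiered_asp():,.0f}")   # $3,000,000
        print(f"in-house   : ${in_house():,.0f}")     # $2,750,000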

  12. Comprehensive Reproductive System Care Program - Clinical Breast Care Project (CRSCP-CBCP)

    DTIC Science & Technology

    2006-09-01

    ... on those results. Principal Investigator: Craig D. Shriver, COL MC (award W81XWH-05-2-0053). Summary of the methodology of the project. The five pillars... exploration of warehoused data from an individual patient. An application prototype has been developed to enable users to access clinical or experimental

  13. Cold-Chain Logistics: A Study of the Department of the Defense OCONUS Pre-Pandemic Influenza Vaccine Distribution Network

    DTIC Science & Technology

    2007-12-01

    Authors: LT Daniel “Travis” Jones and LT Christopher “Craig” Tecmire. The views expressed in this thesis are... merchandise, and not necessarily in the warehousing and the internal logistics associated with the production. The typical logistics manager of the 70’s and

  14. Bioinformatics strategies in life sciences: from data processing and data warehousing to biological knowledge extraction.

    PubMed

    Thiele, Herbert; Glandorf, Jörg; Hufnagel, Peter

    2010-05-27

    With the large variety of Proteomics workflows, as well as the large variety of instruments and data-analysis software available, researchers today face major challenges validating and comparing their Proteomics data. Here we present a new generation of the ProteinScape bioinformatics platform, now enabling researchers to manage Proteomics data from data generation and data warehousing to a central data repository with a strong focus on the improved accuracy, reproducibility and comparability demanded by many researchers in the field. It addresses scientists' current needs in proteomics identification, quantification and validation. But producing large protein lists is not the end point in Proteomics, where one ultimately aims to answer specific questions about the biological condition or disease model of the analyzed sample. In this context, a new tool has been developed at the Spanish Centro Nacional de Biotecnologia Proteomics Facility termed PIKE (Protein Information and Knowledge Extractor) that allows researchers to control, filter and access specific information from genomics and proteomic databases, to understand the role and relationships of the proteins identified in the experiments. Additionally, an EU-funded project, ProDac, has coordinated systematic data collection in public standards-compliant repositories like PRIDE. This will cover all aspects from generating MS data in the laboratory, assembling the whole annotation information and storing it together with identifications in a standardised format.

  15. Data integration and warehousing: coordination between newborn screening and related public health programs.

    PubMed

    Therrell, Bradford L

    2003-01-01

    At birth, patient demographic and health information begin to accumulate in varied databases. There are often multiple sources of the same or similar data. New public health programs are often created without considering data linkages. Recently, newborn hearing screening (NHS) programs and immunization programs have virtually ignored the existence of newborn dried blood spot (DBS) newborn screening databases containing similar demographic data, creating data duplication in their 'new' systems. Some progressive public health departments are developing data warehouses of basic, recurrent patient information, and linking these databases to other health program databases where programs and services can benefit from such linkages. Demographic data warehousing saves time (and money) by eliminating duplicative data entry and reducing the chances of data errors. While newborn screening data are usually the first data available, they should not be the only data source considered for early data linkage or for populating a data warehouse. Birth certificate information should also be considered along with other data sources for infants who may not have received newborn screening or who may have been born outside of the jurisdiction and not have birth certificate information locally available. The newborn screening serial number provides a convenient identification number for use in the DBS program and for linking with other systems. As a minimum, data linkages should exist between newborn dried blood spot screening, newborn hearing screening, immunizations, birth certificates and birth defect registries.

  16. Use of synthesized data to support complex ad-hoc queries in an enterprise information warehouse: a diabetes use case.

    PubMed

    Rogers, Patrick; Erdal, Selnur; Santangelo, Jennifer; Liu, Jianhua; Schuster, Dara; Kamal, Jyoti

    2008-11-06

    The Ohio State University Medical Center (OSUMC) Information Warehouse (IW) is a comprehensive data warehousing facility incorporating operational, clinical, and biological data sets from multiple enterprise systems. It is common for users of the IW to request complex ad-hoc queries that often require significant intervention by data analysts. In response to this challenge, we have designed a workflow that leverages synthesized data elements to support such queries in a more timely, efficient manner.

  17. Energy cost reduction in retailing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The handbook was developed to help retail store owners cut the cost of energy in their businesses. It shows how to recognize and act on energy waste in interior and outdoor lighting, space heating, air conditioning and ventilation, general maintenance, warehousing, delivery, and refrigeration. Energy use in retail stores is significant because of the importance of environmental control, the role of lighting in merchandising, and long hours of operation. A 20 to 30% net cost reduction is possible by applying the recommendations in this handbook.

  18. 'There is no turning back'--is AFIP facing demise?

    PubMed

    Seckinger, Daniel

    2005-08-01

    The U.S. Department of Defense has recommended that Walter Reed Army Medical Center, home of the Armed Forces Institute of Pathology, be relocated and that the AFIP's constituent parts be eliminated or, in the case of its tissue repository, warehoused. CAP past president Daniel Seckinger, MD, chairman of the board of the American Registry of Pathology, testified July 7 before the Defense Base Closure and Realignment Commission at its public hearing on military base closings. His statement appears here.

  19. Environmental Assessment of Interim Flight Training Authority at Airfields in the Northeast

    DTIC Science & Technology

    2006-09-01

    [Table excerpt: employment shares by industry, including transportation and warehousing, and utilities; information; finance, insurance, real estate, and rental and leasing; and professional, scientific, management, administrative, and waste services.] ...the Classified Supervisor of the Johnson Newspaper Corp., a corporation duly organized and existing under the laws of the State of New York, and

  20. Web-based multimedia information retrieval for clinical application research

    NASA Astrophysics Data System (ADS)

    Cao, Xinhua; Hoo, Kent S., Jr.; Zhang, Hong; Ching, Wan; Zhang, Ming; Wong, Stephen T. C.

    2001-08-01

    We described a web-based data warehousing method for retrieving and analyzing neurological multimedia information. The web-based method supports convenient access, effective search and retrieval of clinical textual and image data, and on-line analysis. To improve the flexibility and efficiency of multimedia information query and analysis, a three-tier, multimedia data warehouse for epilepsy research has been built. The data warehouse integrates clinical multimedia data related to epilepsy from disparate sources and archives them into a well-defined data model.

  1. Abstracting data warehousing issues in scientific research.

    PubMed

    Tews, Cody; Bracio, Boris R

    2002-01-01

    This paper presents the design and implementation of the Idaho Biomedical Data Management System (IBDMS). This system preprocesses biomedical data from the IMPROVE (Improving Control of Patient Status in Critical Care) library via an Open Database Connectivity (ODBC) connection. The ODBC connection allows for local and remote simulations to access filtered, joined, and sorted data using the Structured Query Language (SQL). The tool is capable of providing an overview of available data in addition to user-defined data subsets for verification of models of the human respiratory system.

  2. Data key to quest for quality.

    PubMed

    Chang, Florence S; Nielsen, Jon; Macias, Charles

    2013-11-01

    Late-binding data warehousing reduces the time it takes to obtain data needed to make crucial decisions. Late binding refers to when and how tightly data from the source applications are bound to the rules and vocabularies that make it useful. In some cases, data can be seen in real time. In historically paper-driven environments where data-driven decisions may be a new concept, buy-in from clinicians, physicians, and hospital leaders is key to success in using data to improve outcomes.
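
    Late binding, as described in this record, means raw source values are stored as-is and bound to rules or vocabularies only when a question is asked. The following is a minimal sketch of the idea; the source codes and the mapping table are invented for illustration.

        # Late-binding sketch: raw source codes are stored unchanged; the mapping
        # to a reporting vocabulary is applied at query time, so it can change
        # without reloading the warehouse. The mapping itself is a made-up example.

        raw_events = [
            {"patient": 1, "source": "lab", "code": "GLU-H"},
            {"patient": 2, "source": "emr", "code": "R73.9"},
            {"patient": 3, "source": "lab", "code": "GLU-N"},
        ]

        # Binding applied late, at analysis time; editing this dict re-binds old data.
        binding = {"GLU-H": "hyperglycemia", "R73.9": "hyperglycemia", "GLU-N": "normal"}

        def query(concept):
            return [e["patient"] for e in raw_events if binding.get(e["code"]) == concept]

        print(query("hyperglycemia"))   # [1, 2]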

  3. [Informatics data quality and management].

    PubMed

    Feng, Rung-Chuang

    2009-06-01

    While the quality of data affects every aspect of business, it is frequently overlooked in terms of customer data integration, data warehousing, business intelligence and enterprise applications. Regardless of which data terms are used, a high level of data quality is a critical base condition essential to satisfy user needs and facilitate the development of effective applications. In this paper, the author introduces methods, a management framework, and the major factors involved in data quality assessment. The author also integrates expert opinions to develop data quality assessment tools.

  4. The Hydrologic Instrumentation Facility of the U.S. Geological Survey

    USGS Publications Warehouse

    Wagner, C.R.; Jeffers, Sharon

    1984-01-01

    The U.S. Geological Survey Water Resources Division has improved support to the agency's field offices by the consolidation of all instrumentation support services in a single facility. This facility, known as the Hydrologic Instrumentation Facility (HIF), is located at the National Space Technology Laboratory, Mississippi, about 50 miles east of New Orleans, Louisiana. The HIF is responsible for design and development, testing, evaluation, procurement, warehousing, distribution and repair of a variety of specialized hydrologic instrumentation. The centralization has resulted in more efficient and effective support of the Survey's hydrologic programs. (USGS)

  5. Problem Areas in Data Warehousing and Data Mining in a Surgical Clinic

    PubMed Central

    Tusch, Guenter; Mueller, Margarete; Rohwer-Mensching, Katrin; Heiringhoff, Karlheinz; Klempnauer, Juergen

    2001-01-01

    Hospitals and clinics have taken advantage of information systems to streamline many clinical and administrative processes. However, the potential of health care information technology as a source of data for clinical and administrative decision support has not been fully explored. In response to pressure for timely information, many hospitals are developing clinical data warehouses. This paper attempts to identify problem areas in the process of developing a data warehouse to support data mining in surgery. Based on the experience from a data warehouse in surgery several solutions are discussed.

  6. Internet-based data warehousing

    NASA Astrophysics Data System (ADS)

    Boreisha, Yurii

    2001-10-01

    In this paper, we consider the process of data warehouse creation and population using the latest Internet and database access technologies. The logical three-tier model is applied. This approach allows the development of an enterprise schema by analyzing the various processes in the organization, and extracting the relevant entities and relationships from them. Integration with local schemas and population of the data warehouse is done through the corresponding user, business, and data services components. The hierarchy of these components is used to hide the complexity of online analytical processing from data warehouse users.
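
    The three-tier idea in this record (user, business, and data services layers that shield warehouse users from the underlying complexity) can be sketched as three small classes. The class names, methods, and sample data are illustrative assumptions, not the paper's actual components.

        # Three-tier sketch: a data service talks to storage, a business service
        # applies integration rules, and a user service exposes a simple report call.

        class DataService:
            def __init__(self):
                self._rows = [("north", 120.0), ("south", 80.0), ("north", 40.0)]

            def fetch_sales(self):
                return list(self._rows)

        class BusinessService:
            def __init__(self, data_service):
                self.data = data_service

            def sales_by_region(self):
                totals = {}
                for region, amount in self.data.fetch_sales():
                    totals[region] = totals.get(region, 0.0) + amount
                return totals

        class UserService:
            def __init__(self, business_service):
                self.business = business_service

            def report(self):
                return {r: round(v, 2) for r, v in self.business.sales_by_region().items()}

        print(UserService(BusinessService(DataService())).report())
        # {'north': 160.0, 'south': 80.0}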

  7. A Conceptual Model for Multidimensional Analysis of Documents

    NASA Astrophysics Data System (ADS)

    Ravat, Franck; Teste, Olivier; Tournier, Ronan; Zurlfluh, Gilles

    Data warehousing and OLAP are mainly used for the analysis of transactional data. Nowadays, with the evolution of Internet, and the development of semi-structured data exchange format (such as XML), it is possible to consider entire fragments of data such as documents as analysis sources. As a consequence, an adapted multidimensional analysis framework needs to be provided. In this paper, we introduce an OLAP multidimensional conceptual model without facts. This model is based on the unique concept of dimensions and is adapted for multidimensional document analysis. We also provide a set of manipulation operations.

  8. Managing data quality in an existing medical data warehouse using business intelligence technologies.

    PubMed

    Eaton, Scott; Ostrander, Michael; Santangelo, Jennifer; Kamal, Jyoti

    2008-11-06

    The Ohio State University Medical Center (OSUMC) Information Warehouse (IW) is a comprehensive data warehousing facility that provides data integration, management, mining, training, and development services to a diversity of customers across the clinical, education, and research sectors of the OSUMC. Providing accurate and complete data is a must for these purposes. In order to monitor the data quality of targeted data sets, an online scorecard has been developed to allow visualization of the critical measures of data quality in the Information Warehouse.
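
    A scorecard like the one described in this record typically tracks simple per-column measures such as completeness and validity. Below is a minimal sketch; the sample rows and rules are invented and do not reflect the OSUMC IW scorecard.

        # Minimal data-quality scorecard: completeness and validity per column.

        rows = [
            {"mrn": "001", "dob": "1970-04-01", "sex": "F"},
            {"mrn": "002", "dob": None,         "sex": "M"},
            {"mrn": "003", "dob": "1985-13-40", "sex": "U"},
        ]

        def completeness(rows, column):
            filled = sum(1 for r in rows if r.get(column) not in (None, ""))
            return filled / len(rows)

        def validity(rows, column, is_valid):
            present = [r[column] for r in rows if r.get(column)]
            return sum(1 for v in present if is_valid(v)) / len(present) if present else 1.0

        def plausible_date(s):
            parts = s.split("-")
            return len(parts) == 3 and 1 <= int(parts[1]) <= 12 and 1 <= int(parts[2]) <= 31

        print(f"dob completeness: {completeness(rows, 'dob'):.0%}")              # 67%
        print(f"dob validity    : {validity(rows, 'dob', plausible_date):.0%}")  # 50%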

  9. Solutions for medical databases optimal exploitation.

    PubMed

    Branescu, I; Purcarea, V L; Dobrescu, R

    2014-03-15

    The paper discusses the methods to apply OLAP techniques for multidimensional databases that leverage the existing, performance-enhancing technique, known as practical pre-aggregation, by making this technique relevant to a much wider range of medical applications, as a logistic support to the data warehousing techniques. The transformations have practically low computational complexity and they may be implemented using standard relational database technology. The paper also describes how to integrate the transformed hierarchies in current OLAP systems, transparently to the user and proposes a flexible, "multimodel" federated system for extending OLAP querying to external object databases.
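
    Practical pre-aggregation, as used in this record, trades storage for query speed by materializing summaries at a coarser level of a dimension hierarchy so that higher-level queries never touch the detail rows. A toy sketch follows; the table names and figures are invented.

        # Practical pre-aggregation sketch: materialise a month-level summary so
        # that month and year queries roll up from the summary, not the detail.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE admissions (ward TEXT, day TEXT, n INTEGER);
            CREATE TABLE agg_admissions_month (ward TEXT, month TEXT, n INTEGER);
        """)
        db.executemany("INSERT INTO admissions VALUES (?, ?, ?)",
                       [("cardiology", "2014-03-01", 4),
                        ("cardiology", "2014-03-02", 6),
                        ("surgery",    "2014-03-01", 3)])

        # Pre-aggregate once (e.g., during the nightly load) ...
        db.execute("""INSERT INTO agg_admissions_month
                      SELECT ward, substr(day, 1, 7), SUM(n)
                      FROM admissions GROUP BY ward, substr(day, 1, 7)""")

        # ... then answer coarser queries from the summary table alone.
        print(db.execute("""SELECT substr(month, 1, 4) AS year, SUM(n)
                            FROM agg_admissions_month GROUP BY year""").fetchall())
        # [('2014', 13)]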

  10. Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce.

    PubMed

    Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel

    2013-08-01

    Support of high performance queries on large volumes of spatial data becomes increasingly important in many application domains, including geospatial problems in numerous fields, location based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive scale spatial data is due to the proliferation of cost effective and ubiquitous positioning technologies, development of high resolution imaging technologies, and contribution from a large number of community users. There are two major challenges for managing and querying massive spatial data to support spatial queries: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS - a scalable and high performance spatial data warehousing system for running large scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments have demonstrated the high efficiency of Hadoop-GIS on query response and high scalability to run on commodity clusters. Our comparative experiments have shown that the performance of Hadoop-GIS is on par with parallel SDBMS and outperforms SDBMS for compute-intensive queries. Hadoop-GIS is available as a set of libraries for processing spatial queries, and as an integrated software package in Hive.
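
    The core idea behind the partition-based parallel spatial querying described in this record (tile the space, process tiles independently, then reconcile objects that cross tile boundaries) can be sketched in a few lines. This illustrates the general technique only; it is not Hadoop-GIS's RESQUE engine or its Hive syntax.

        # Sketch of partition-based spatial querying: points are bucketed into grid
        # tiles so a containment query only inspects candidate tiles.
        from collections import defaultdict

        TILE = 10.0   # grid cell size (illustrative)

        def tile_of(x, y):
            return (int(x // TILE), int(y // TILE))

        def partition(points):
            grid = defaultdict(list)
            for p in points:
                grid[tile_of(*p)].append(p)
            return grid

        def points_in_box(grid, xmin, ymin, xmax, ymax):
            hits = []
            for tx in range(int(xmin // TILE), int(xmax // TILE) + 1):
                for ty in range(int(ymin // TILE), int(ymax // TILE) + 1):
                    # Each tile could be scanned by a separate map task in parallel.
                    for x, y in grid.get((tx, ty), []):
                        if xmin <= x <= xmax and ymin <= y <= ymax:
                            hits.append((x, y))
            return hits

        grid = partition([(1, 1), (12, 5), (25, 25), (14, 9)])
        print(points_in_box(grid, 0, 0, 15, 10))   # [(1, 1), (12, 5), (14, 9)]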

  11. Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce

    PubMed Central

    Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel

    2013-01-01

    Support of high performance queries on large volumes of spatial data becomes increasingly important in many application domains, including geospatial problems in numerous fields, location based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive scale spatial data is due to the proliferation of cost effective and ubiquitous positioning technologies, development of high resolution imaging technologies, and contribution from a large number of community users. There are two major challenges for managing and querying massive spatial data to support spatial queries: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS – a scalable and high performance spatial data warehousing system for running large scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments have demonstrated the high efficiency of Hadoop-GIS on query response and high scalability to run on commodity clusters. Our comparative experiments have shown that the performance of Hadoop-GIS is on par with parallel SDBMS and outperforms SDBMS for compute-intensive queries. Hadoop-GIS is available as a set of libraries for processing spatial queries, and as an integrated software package in Hive. PMID:24187650

  12. Risk Factors, Health Behaviors, and Injury Among Adults Employed in the Transportation, Warehousing, and Utilities Super Sector

    PubMed Central

    Helmkamp, James C.; Lincoln, Jennifer E.; Sestito, John; Wood, Eric; Birdsey, Jan; Kiefer, Max

    2015-01-01

    Background The TWU super sector is engaged in the movement of passengers and cargo, warehousing of goods, and the delivery of services. The purpose of this study is to describe employee self-reported personal risk factors, health behaviors and habits, disease and chronic conditions, and employer-reported nonfatal injury experiences of workers in the TWU super sector. Methods National Health Interview Survey (NHIS) data for 1997–2007, grouped into six morbidity and disability categories and three age groups, were reviewed. Demographic characteristics and prevalence estimates are reported for workers in the TWU super sector and the entire U.S. workforce, and compared with national adult population data from the NHIS. Bureau of Labor Statistics employer-reported TWU injury data from 2003 to 2007 were also reviewed. Results An average of 8.3 million workers were employed annually in the TWU super sector. TWU workers 65 or older reported the highest prevalence of hypertension (49%) across all industry sectors, but the 20% prevalence is notable among middle-aged workers (25–64). TWU workers had the highest prevalence of obesity (28%), compared to workers in all other industry sectors. Female TWU workers experienced the highest number of lost workdays (6.5) in the past year across all TWU demographic groups. Conclusions Self-reported high proportions of chronic conditions including hypertension and heart disease combined with elevated levels of being overweight and obese, and lack of physical activity—particularly among TWU's oldest workers—can meaningfully inform wellness strategies and interventions focused on this demographic group. PMID:23255331

  13. Guidelines and Data to Support Plans for Reallocating Food during Crisis Relocation. Regional Appendix. FEMA Region VI,

    DTIC Science & Technology

    1982-09-01

    [Table excerpt: food distributors by location, e.g., Poplar Bluff, MO; Jonesboro Grocer Co., Jonesboro, AR; Malone and Hyde Inc., Memphis, TN; Wetterau Foods Inc., Scott...] ...five states in FEMA Region VI (Louisiana, Oklahoma, and Texas) are largely self-sufficient, warehousing between 75% and 97% of their food supplies... within their own borders. New Mexico and Arkansas are heavily dependent on out-of-state warehouses for food shipments, with less than one-third of their

  14. New Challenges in Information Integration

    NASA Astrophysics Data System (ADS)

    Haas, Laura M.; Soffer, Aya

    Information integration is the cornerstone of modern business informatics. It is a pervasive problem; rarely is a new application built without an initial phase of gathering and integrating information. Information integration comes in a wide variety of forms. Historically, two major approaches were recognized: data federation and data warehousing. Today, we need new approaches, as information integration becomes more dynamic, while coping with growing volumes of increasingly dirty and diverse data. At the same time, information integration must be coupled more tightly with the applications and the analytics that will leverage the integrated results, to make the integration process more tractable and the results more consumable.

  15. PRMS Data Warehousing Prototype

    NASA Technical Reports Server (NTRS)

    Guruvadoo, Eranna K.

    2001-01-01

    Project and Resource Management System (PRMS) is a web-based, mid-level management tool developed at KSC to provide a unified enterprise framework for Project and Mission management. The addition of a data warehouse as a strategic component to the PRMS is investigated through the analysis, design, and implementation processes of a data warehouse prototype. As a proof of concept, a demonstration of the prototype with its OLAP technology for multidimensional data analysis is made. The results of the data analysis and the design constraints are discussed. The prototype can be used to motivate interest and support for an operational data warehouse.

  16. Holographic optical disc

    NASA Astrophysics Data System (ADS)

    Zhou, Gan; An, Xin; Pu, Allen; Psaltis, Demetri; Mok, Fai H.

    1999-11-01

    The holographic disc is a high capacity, disk-based data storage device that can provide the performance for next generation mass data storage needs. With a projected capacity approaching 1 terabit on a single 12 cm platter, the holographic disc has the potential to become a highly efficient storage hardware for data warehousing applications. The high readout rate of holographic disc makes it especially suitable for generating multiple, high bandwidth data streams such as required for network server computers. Multimedia applications such as interactive video and HDTV can also potentially benefit from the high capacity and fast data access of holographic memory.

  17. PRMS Data Warehousing Prototype

    NASA Technical Reports Server (NTRS)

    Guruvadoo, Eranna K.

    2002-01-01

    Project and Resource Management System (PRMS) is a web-based, mid-level management tool developed at KSC to provide a unified enterprise framework for Project and Mission management. The addition of a data warehouse as a strategic component to the PRMS is investigated through the analysis, design, and implementation processes of a data warehouse prototype. As a proof of concept, a demonstration of the prototype with its OLAP technology for multidimensional data analysis is made. The results of the data analysis and the design constraints are discussed. The prototype can be used to motivate interest and support for an operational data warehouse.

  18. Data warehousing as a healthcare business solution.

    PubMed

    Scheese, R

    1998-02-01

    Because of the trend toward consolidation in the healthcare field, many organizations have massive amounts of data stored in various information systems organizationwide, but access to the data by end users may be difficult. Healthcare organizations are being pressured to provide managers easy access to the data needed for critical decision making. One solution many organizations are turning to is implementing decision-support data warehouses. A data warehouse instantly delivers information directly to end users, freeing healthcare information systems staff for strategic operations. If designed appropriately, data warehouses can be a cost-effective tool for business analysis and decision support.

  19. Solutions for medical databases optimal exploitation

    PubMed Central

    Branescu, I; Purcarea, VL; Dobrescu, R

    2014-01-01

    The paper discusses the methods to apply OLAP techniques for multidimensional databases that leverage the existing, performance-enhancing technique known as practical pre-aggregation, by making this technique relevant to a much wider range of medical applications, as logistical support for data warehousing techniques. The transformations have low computational complexity in practice and may be implemented using standard relational database technology. The paper also describes how to integrate the transformed hierarchies into current OLAP systems, transparently to the user, and proposes a flexible, "multimodel" federated system for extending OLAP querying to external object databases. PMID:24653769

  20. Cloud Based Web 3d GIS Taiwan Platform

    NASA Astrophysics Data System (ADS)

    Tsai, W.-F.; Chang, J.-Y.; Yan, S. Y.; Chen, B.

    2011-09-01

    This article presents the status of the web 3D GIS platform, which has been developed at the National Applied Research Laboratories. The purpose is to develop a global earth observation 3D GIS platform for applications to disaster monitoring and assessment in Taiwan. For quick response to preliminary and detailed assessment after a natural disaster occurs, the web 3D GIS platform is useful for accessing, transferring, integrating, displaying, and analyzing large volumes of multi-scale data following the international OGC standard. The framework of the cloud service for data warehousing management and efficiency enhancement using VMware is illustrated in this article.

  1. The Stanford Data Miner: a novel approach for integrating and exploring heterogeneous immunological data.

    PubMed

    Siebert, Janet C; Munsil, Wes; Rosenberg-Hasson, Yael; Davis, Mark M; Maecker, Holden T

    2012-03-28

    Systems-level approaches are increasingly common in both murine and human translational studies. These approaches employ multiple high information content assays. As a result, there is a need for tools to integrate heterogeneous types of laboratory and clinical/demographic data, and to allow the exploration of that data by aggregating and/or segregating results based on particular variables (e.g., mean cytokine levels by age and gender). Here we describe the application of standard data warehousing tools to create a novel environment for user-driven upload, integration, and exploration of heterogeneous data. The system presented here currently supports flow cytometry and immunoassays performed in the Stanford Human Immune Monitoring Center, but could be applied more generally. Users upload assay results contained in platform-specific spreadsheets of a defined format, and clinical and demographic data in spreadsheets of flexible format. Users then map sample IDs to connect the assay results with the metadata. An OLAP (on-line analytical processing) data exploration interface allows filtering and display of various dimensions (e.g., Luminex analytes in rows, treatment group in columns, filtered on a particular study). Statistics such as mean, median, and N can be displayed. The views can be expanded or contracted to aggregate or segregate data at various levels. Individual-level data is accessible with a single click. The result is a user-driven system that permits data integration and exploration in a variety of settings. We show how the system can be used to find gender-specific differences in serum cytokine levels, and compare them across experiments and assay types. We have used the tools and techniques of data warehousing, including open-source business intelligence software, to support investigator-driven data integration and mining of diverse immunological data.
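
    As a rough illustration of the aggregate-and-segregate exploration described above, the following hedged sketch reproduces a comparable pivot (mean, median, and N of an analyte by gender and age group) using pandas rather than the Data Miner's actual OLAP stack; the column names and values are invented.

      # Hedged sketch: the kind of OLAP-style aggregation/segregation described
      # above (e.g. mean cytokine level by gender and age group), reproduced with
      # pandas.  Column names and values are invented for illustration only.
      import pandas as pd

      data = pd.DataFrame({
          "analyte":   ["IL-6", "IL-6", "IL-6", "IL-6", "TNF-a", "TNF-a"],
          "gender":    ["F", "M", "F", "M", "F", "M"],
          "age_group": ["<40", "<40", ">=40", ">=40", "<40", "<40"],
          "value":     [3.2, 2.1, 4.8, 3.9, 1.1, 1.4],
      })

      # Rows: analyte; columns: (gender, age group); cells: mean, median and N,
      # mirroring the expand/contract views of the exploration interface.
      cube = pd.pivot_table(
          data,
          values="value",
          index="analyte",
          columns=["gender", "age_group"],
          aggfunc=["mean", "median", "count"],
      )
      print(cube)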

  2. Classification of time series patterns from complex dynamic systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, J.C.; Rao, N.

    1998-07-01

    An increasing availability of high-performance computing and data storage media at decreasing cost is making possible the proliferation of large-scale numerical databases and data warehouses. Numeric warehousing enterprises on the order of hundreds of gigabytes to terabytes are a reality in many fields such as finance, retail sales, process systems monitoring, biomedical monitoring, surveillance and transportation. Large-scale databases are becoming more accessible to larger user communities through the internet, web-based applications and database connectivity. Consequently, most researchers now have access to a variety of massive datasets. This trend will probably only continue to grow over the next several years. Unfortunately, the availability of integrated tools to explore, analyze and understand the data warehoused in these archives is lagging far behind the ability to gain access to the same data. In particular, locating and identifying patterns of interest in numerical time series data is an increasingly important problem for which there are few available techniques. Temporal pattern recognition poses many interesting problems in classification, segmentation, prediction, diagnosis and anomaly detection. This research focuses on the problem of classification or characterization of numerical time series data. Highway vehicles and their drivers are examples of complex dynamic systems (CDS) which are being used by transportation agencies for field testing to generate large-scale time series datasets. Tools for effective analysis of numerical time series in databases generated by highway vehicle systems are not yet available, or have not been adapted to the target problem domain. However, analysis tools from similar domains may be adapted to the problem of classification of numerical time series data.
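
    The report frames the problem of characterizing numerical time series; the following minimal sketch, which is not the report's method, shows one elementary way such a classification can be performed (nearest-neighbour matching on fixed-length series), with invented data.

      # Hedged sketch: a minimal nearest-neighbour classifier for fixed-length
      # numerical time series.  It only illustrates the characterization problem
      # described above; it is not the method used in the report, and the data
      # are invented.
      import numpy as np

      def classify_1nn(train_series, train_labels, query):
          """Assign the label of the training series closest in Euclidean distance."""
          dists = [np.linalg.norm(np.asarray(s) - np.asarray(query)) for s in train_series]
          return train_labels[int(np.argmin(dists))]

      train = [[0, 1, 2, 3], [3, 2, 1, 0], [0, 2, 4, 6]]
      labels = ["accelerating", "decelerating", "accelerating"]
      print(classify_1nn(train, labels, [1, 2, 3, 4]))  # -> "accelerating"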

  3. An i2b2-based, generalizable, open source, self-scaling chronic disease registry

    PubMed Central

    Quan, Justin; Ortiz, David M; Bousvaros, Athos; Ilowite, Norman T; Inman, Christi J; Marsolo, Keith; McMurry, Andrew J; Sandborg, Christy I; Schanberg, Laura E; Wallace, Carol A; Warren, Robert W; Weber, Griffin M; Mandl, Kenneth D

    2013-01-01

    Objective Registries are a well-established mechanism for obtaining high quality, disease-specific data, but are often highly project-specific in their design, implementation, and policies for data use. In contrast to the conventional model of centralized data contribution, warehousing, and control, we design a self-scaling registry technology for collaborative data sharing, based upon the widely adopted Integrating Biology & the Bedside (i2b2) data warehousing framework and the Shared Health Research Information Network (SHRINE) peer-to-peer networking software. Materials and methods Focusing our design around creation of a scalable solution for collaboration within multi-site disease registries, we leverage the i2b2 and SHRINE open source software to create a modular, ontology-based, federated infrastructure that provides research investigators full ownership and access to their contributed data while supporting permissioned yet robust data sharing. We accomplish these objectives via web services supporting peer-group overlays, group-aware data aggregation, and administrative functions. Results The 56-site Childhood Arthritis & Rheumatology Research Alliance (CARRA) Registry and 3-site Harvard Inflammatory Bowel Diseases Longitudinal Data Repository now utilize i2b2 self-scaling registry technology (i2b2-SSR). This platform, extensible to federation of multiple projects within and between research networks, encompasses >6000 subjects at sites throughout the USA. Discussion We utilize the i2b2-SSR platform to minimize technical barriers to collaboration while enabling fine-grained control over data sharing. Conclusions The implementation of i2b2-SSR for the multi-site, multi-stakeholder CARRA Registry has established a digital infrastructure for community-driven research data sharing in pediatric rheumatology in the USA. We envision i2b2-SSR as a scalable, reusable solution facilitating interdisciplinary research across diseases. PMID:22733975

  4. Storage assignment optimization in a multi-tier shuttle warehousing system

    NASA Astrophysics Data System (ADS)

    Wang, Yanyan; Mou, Shandong; Wu, Yaohua

    2016-03-01

    The current mathematical models for the storage assignment problem are generally established based on the traveling salesman problem (TSP), which has been widely applied in the conventional automated storage and retrieval system (AS/RS). However, the previous mathematical models in conventional AS/RS do not match multi-tier shuttle warehousing systems (MSWS) because the characteristics of parallel retrieval in multiple tiers and progressive vertical movement destroy the foundation of the TSP. In this study, a two-stage open queuing network model in which shuttles and a lift are regarded as servers at different stages is proposed to analyze system performance in terms of the shuttle waiting period (SWP) and lift idle period (LIP) during transaction cycle time. A mean arrival time difference matrix for pairwise stock keeping units (SKUs) is presented to determine the mean waiting time and queue length to optimize the storage assignment problem on the basis of SKU correlation. The decomposition method is applied to analyze the interactions among outbound task time, SWP, and LIP. The ant colony clustering algorithm is designed to determine storage partitions using clustering items. In addition, goods are assigned for storage according to the rearranging permutation and the combination of storage partitions in a 2D plane. This combination is derived based on the analysis results of the queuing network model and on three basic principles. The storage assignment method and its entire optimization algorithm, as applied in an MSWS, are verified through a practical engineering project conducted in the tobacco industry. The application results show that the total SWP and LIP can be reduced effectively to improve the utilization rates of all devices and to increase the throughput of the distribution center.
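
    To illustrate the correlation-based grouping idea behind the mean arrival time difference matrix, the hedged sketch below computes pairwise mean arrival-time differences between SKUs and groups strongly related SKUs into partitions with a simple greedy rule; it stands in for, and is not, the paper's ant colony clustering algorithm, and all data are invented.

      # Hedged sketch: grouping SKUs by the mean arrival-time difference of their
      # outbound orders, as a proxy for SKU correlation.  A greedy threshold rule
      # stands in for the paper's ant colony clustering; all data are invented.
      import numpy as np

      def mean_arrival_diff(times_a, times_b):
          """Mean absolute difference between order arrival times of two SKUs."""
          return float(np.mean([abs(a - b) for a in times_a for b in times_b]))

      def greedy_partitions(arrival_times, threshold):
          """Group SKUs whose pairwise mean arrival-time difference is small."""
          partitions = []
          for sku in arrival_times:
              for part in partitions:
                  if all(mean_arrival_diff(arrival_times[sku], arrival_times[m]) <= threshold
                         for m in part):
                      part.append(sku)
                      break
              else:
                  partitions.append([sku])
          return partitions

      # Toy example: arrival times (minutes) of outbound orders per SKU.
      orders = {"A": [5, 12, 30], "B": [6, 14, 28], "C": [100, 140], "D": [110, 150]}
      print(greedy_partitions(orders, threshold=25.0))  # e.g. [['A', 'B'], ['C', 'D']]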

  5. The Stanford Data Miner: a novel approach for integrating and exploring heterogeneous immunological data

    PubMed Central

    2012-01-01

    Background Systems-level approaches are increasingly common in both murine and human translational studies. These approaches employ multiple high information content assays. As a result, there is a need for tools to integrate heterogeneous types of laboratory and clinical/demographic data, and to allow the exploration of that data by aggregating and/or segregating results based on particular variables (e.g., mean cytokine levels by age and gender). Methods Here we describe the application of standard data warehousing tools to create a novel environment for user-driven upload, integration, and exploration of heterogeneous data. The system presented here currently supports flow cytometry and immunoassays performed in the Stanford Human Immune Monitoring Center, but could be applied more generally. Results Users upload assay results contained in platform-specific spreadsheets of a defined format, and clinical and demographic data in spreadsheets of flexible format. Users then map sample IDs to connect the assay results with the metadata. An OLAP (on-line analytical processing) data exploration interface allows filtering and display of various dimensions (e.g., Luminex analytes in rows, treatment group in columns, filtered on a particular study). Statistics such as mean, median, and N can be displayed. The views can be expanded or contracted to aggregate or segregate data at various levels. Individual-level data is accessible with a single click. The result is a user-driven system that permits data integration and exploration in a variety of settings. We show how the system can be used to find gender-specific differences in serum cytokine levels, and compare them across experiments and assay types. Conclusions We have used the tools and techniques of data warehousing, including open-source business intelligence software, to support investigator-driven data integration and mining of diverse immunological data. PMID:22452993

  6. An i2b2-based, generalizable, open source, self-scaling chronic disease registry.

    PubMed

    Natter, Marc D; Quan, Justin; Ortiz, David M; Bousvaros, Athos; Ilowite, Norman T; Inman, Christi J; Marsolo, Keith; McMurry, Andrew J; Sandborg, Christy I; Schanberg, Laura E; Wallace, Carol A; Warren, Robert W; Weber, Griffin M; Mandl, Kenneth D

    2013-01-01

    Registries are a well-established mechanism for obtaining high quality, disease-specific data, but are often highly project-specific in their design, implementation, and policies for data use. In contrast to the conventional model of centralized data contribution, warehousing, and control, we design a self-scaling registry technology for collaborative data sharing, based upon the widely adopted Integrating Biology & the Bedside (i2b2) data warehousing framework and the Shared Health Research Information Network (SHRINE) peer-to-peer networking software. Focusing our design around creation of a scalable solution for collaboration within multi-site disease registries, we leverage the i2b2 and SHRINE open source software to create a modular, ontology-based, federated infrastructure that provides research investigators full ownership and access to their contributed data while supporting permissioned yet robust data sharing. We accomplish these objectives via web services supporting peer-group overlays, group-aware data aggregation, and administrative functions. The 56-site Childhood Arthritis & Rheumatology Research Alliance (CARRA) Registry and 3-site Harvard Inflammatory Bowel Diseases Longitudinal Data Repository now utilize i2b2 self-scaling registry technology (i2b2-SSR). This platform, extensible to federation of multiple projects within and between research networks, encompasses >6000 subjects at sites throughout the USA. We utilize the i2b2-SSR platform to minimize technical barriers to collaboration while enabling fine-grained control over data sharing. The implementation of i2b2-SSR for the multi-site, multi-stakeholder CARRA Registry has established a digital infrastructure for community-driven research data sharing in pediatric rheumatology in the USA. We envision i2b2-SSR as a scalable, reusable solution facilitating interdisciplinary research across diseases.

  7. Risk factors, health behaviors, and injury among adults employed in the transportation, warehousing, and utilities super sector.

    PubMed

    Helmkamp, James C; Lincoln, Jennifer E; Sestito, John; Wood, Eric; Birdsey, Jan; Kiefer, Max

    2013-05-01

    The TWU super sector is engaged in the movement of passengers and cargo, warehousing of goods, and the delivery of services. The purpose of this study is to describe employee self-reported personal risk factors, health behaviors and habits, disease and chronic conditions, and employer-reported nonfatal injury experiences of workers in the TWU super sector. National Health Interview Survey (NHIS) data for 1997-2007, grouped into six morbidity and disability categories and three age groups, were reviewed. Demographic characteristics and prevalence estimates are reported for workers in the TWU super sector and the entire U.S. workforce, and compared with national adult population data from the NHIS. Bureau of Labor Statistics employer-reported TWU injury data from 2003 to 2007 were also reviewed. An average of 8.3 million workers were employed annually in the TWU super sector. TWU workers 65 or older reported the highest prevalence of hypertension (49%) across all industry sectors, but the 20% prevalence is notable among middle-aged workers (25-64). TWU workers had the highest prevalence of obesity (28%), compared to workers in all other industry sectors. Female TWU workers experienced the highest number of lost workdays (6.5) in the past year across all TWU demographic groups. Self-reported high proportions of chronic conditions including hypertension and heart disease, combined with elevated levels of being overweight and obese and a lack of physical activity, particularly among TWU's oldest workers, can meaningfully inform wellness strategies and interventions focused on this demographic group. Am. J. Ind. Med. 56:556-568, 2013. © 2012 Wiley Periodicals, Inc.

  8. Modeling biology using relational databases.

    PubMed

    Peitzsch, Robert M

    2003-02-01

    There are several different methodologies that can be used for designing a database schema; no single methodology is best for all occasions. This unit demonstrates two different techniques for designing relational tables and discusses when each should be used. The two techniques presented are (1) traditional Entity-Relationship (E-R) modeling and (2) a hybrid method that combines aspects of data warehousing and E-R modeling. The method of choice depends on (1) how well the information and all its inherent relationships are understood, (2) what types of questions will be asked, (3) how many different types of data will be included, and (4) how much data exists.
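
    As a hedged sketch of the hybrid, warehousing-style modeling mentioned above (as opposed to a pure E-R design), the example below builds a miniature star schema with one fact table and two dimension tables in SQLite; the table and column names are invented for illustration.

      # Hedged sketch: a miniature star schema of the kind a hybrid
      # warehousing/E-R design might use for assay measurements.  Table and
      # column names are invented for illustration only.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
          CREATE TABLE dim_gene   (gene_id   INTEGER PRIMARY KEY, symbol TEXT);
          CREATE TABLE dim_sample (sample_id INTEGER PRIMARY KEY, tissue TEXT, collected DATE);
          -- Central fact table referencing the dimensions; one row per measurement.
          CREATE TABLE fact_expression (
              gene_id   INTEGER REFERENCES dim_gene(gene_id),
              sample_id INTEGER REFERENCES dim_sample(sample_id),
              level     REAL
          );
      """)
      conn.execute("INSERT INTO dim_gene VALUES (1, 'TP53')")
      conn.execute("INSERT INTO dim_sample VALUES (1, 'liver', '2003-01-15')")
      conn.execute("INSERT INTO fact_expression VALUES (1, 1, 7.4)")

      # Typical warehouse-style question: mean expression level per tissue.
      for row in conn.execute("""
              SELECT s.tissue, AVG(f.level)
              FROM fact_expression f JOIN dim_sample s USING (sample_id)
              GROUP BY s.tissue"""):
          print(row)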

  9. Storage Information Management System (SIMS) Spaceflight Hardware Warehousing at Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Kubicko, Richard M.; Bingham, Lindy

    1995-01-01

    Goddard Space Flight Center (GSFC) on-site and leased warehouses contain thousands of items of ground support equipment (GSE) and flight hardware including spacecraft, scaffolding, computer racks, stands, holding fixtures, test equipment, spares, etc. The control of these warehouses, and the management, accountability, and control of the items within them, is accomplished by the Logistics Management Division. To facilitate this management and tracking effort, the Logistics and Transportation Management Branch is developing a system to provide warehouse personnel, property owners, and managers with storage and inventory information. This paper will describe that PC-based system and address how it will improve GSFC warehouse and storage management.

  10. Business intelligence tools for radiology: creating a prototype model using open-source tools.

    PubMed

    Prevedello, Luciano M; Andriole, Katherine P; Hanson, Richard; Kelly, Pauline; Khorasani, Ramin

    2010-04-01

    Digital radiology departments could benefit from the ability to integrate and visualize data (e.g. information reflecting complex workflow states) from all of their imaging and information management systems in one composite presentation view. Leveraging data warehousing tools developed in the business world may be one way to achieve this capability. Collectively, the practice of managing the information available in such a data repository is known as Business Intelligence (BI). This paper describes the concepts used in Business Intelligence, their importance to modern radiology, and the steps used in the creation of a prototype model of a data warehouse for BI using open-source tools.

  11. The clinical value of large neuroimaging data sets in Alzheimer's disease.

    PubMed

    Toga, Arthur W

    2012-02-01

    Rapid advances in neuroimaging and cyberinfrastructure technologies have brought explosive growth in the Web-based warehousing, availability, and accessibility of imaging data on a variety of neurodegenerative and neuropsychiatric disorders and conditions. There has been a prolific development and emergence of complex computational infrastructures that serve as repositories of databases and provide critical functionalities such as sophisticated image analysis algorithm pipelines and powerful three-dimensional visualization and statistical tools. The statistical and operational advantages of collaborative, distributed team science in the form of multisite consortia push this approach in a diverse range of population-based investigations. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. Optimisation of logistics processes of energy grass collection

    NASA Astrophysics Data System (ADS)

    Bányai, Tamás.

    2010-05-01

    The collection of energy grass is a logistics-intensive process [1]. The optimal design and control of transportation and collection subprocesses is a critical point of the supply chain. To avoid decisions based solely on experience and intuition, the optimisation and analysis of collection processes using mathematical models and methods is the scientifically sound approach. Within the frame of this work, the author focuses on the optimisation possibilities of the collection processes, especially from the point of view of transportation and related warehousing operations. The optimisation methods developed in the literature [2] take into account harvesting processes, county-specific yields, transportation distances, erosion constraints, machinery specifications, and other key variables, but the possibility of multiple collection points and multi-level collection was not considered. The possible uses of energy grass are very wide (energy production, biogas and bioalcohol production, paper and textile industry, industrial fibre material, fodder, biological soil protection [3], etc.), so not only a single-level but also a multi-level collection system with multiple collection and production facilities has to be taken into consideration. The input parameters of the optimisation problem are the following: the total amount of energy grass to be harvested in each region; specific facility costs of collection, warehousing and production units; specific costs of transportation resources; pre-scheduling of the harvesting process; specific transportation and warehousing costs; and pre-scheduling of the processing of energy grass at each facility (exclusive warehousing). The model makes the following assumptions: (1) cooperative relations exist among processing and production facilities; (2) capacity constraints are not ignored; (3) the cost function of transportation is non-linear; (4) driver conditions are ignored. The objective function of the optimisation is the maximisation of profit, that is, the maximisation of the difference between revenue and cost. The objective function trades off the income of the assigned transportation demands against the logistics costs. The constraints are the following: (1) the free capacity of the assigned transportation resource is greater than the requested capacity of the transportation demand; (2) the calculated arrival time of the transportation resource at the harvesting place is not later than the requested arrival time; (3) the calculated arrival time of the transportation demand at the processing and production facility is not later than the requested arrival time; (4) one transportation demand is assigned to one transportation resource and one resource is assigned to one transportation demand. The decision variables of the optimisation problem are the set of scheduling variables and the assignment of resources to transportation demands. The evaluation parameters of the optimised system are the following: total cost of the collection process; utilisation of transportation resources and warehouses; and efficiency of production and/or processing facilities. The multidimensional heuristic optimisation method is based on a genetic algorithm, while the routing sequence is optimised by an ant colony algorithm. The optimal routes are calculated with the aid of the ant colony algorithm as a subroutine of the global optimisation method, and the optimal assignment is given by the genetic algorithm. One important part of the mathematical method is the sensitivity analysis of the objective function, which shows the influence of the different input parameters. Acknowledgements: This research was implemented within the frame of the project entitled "Development and operation of the Technology and Knowledge Transfer Centre of the University of Miskolc", with support from the European Union and co-funding from the European Social Fund. References: [1] P. R. Daniel: The Economics of Harvesting and Transporting Corn Stover for Conversion to Fuel Ethanol: A Case Study for Minnesota. University of Minnesota, Department of Applied Economics. 2006. http://ideas.repec.org/p/ags/umaesp/14213.html [2] T. G. Douglas, J. Brendan, D. Erin & V.-D. Becca: Energy and Chemicals from Native Grasses: Production, Transportation and Processing Technologies Considered in the Northern Great Plains. University of Minnesota, Department of Applied Economics. 2006. http://ideas.repec.org/p/ags/umaesp/13838.html [3] Homepage of energy grass. www.energiafu.hu
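
    The hedged sketch below illustrates the profit-style objective and two of the stated constraints (resource capacity and requested arrival time) for a single candidate assignment of transportation demands to resources; the cost figures, the assumed form of the non-linear transport cost, and the field names are invented and do not reproduce the author's genetic/ant colony implementation.

      # Hedged sketch: evaluate profit (revenue minus warehousing and transport
      # costs) for one candidate assignment, with capacity and arrival-time
      # feasibility checks.  Cost figures, the nonlinear transport-cost form, and
      # field names are invented; this is not the author's implementation.

      def feasible(demand, resource):
          return (resource["free_capacity"] >= demand["quantity"]
                  and resource["arrival_time"] <= demand["requested_arrival"])

      def profit(assignment, revenue_per_ton, warehouse_cost_per_ton):
          total = 0.0
          for demand, resource in assignment:
              if not feasible(demand, resource):
                  return float("-inf")  # infeasible assignments are discarded
              transport_cost = resource["cost_per_km"] * demand["distance_km"] ** 1.2  # assumed nonlinearity
              total += (revenue_per_ton - warehouse_cost_per_ton) * demand["quantity"] - transport_cost
          return total

      demand = {"quantity": 12.0, "distance_km": 40.0, "requested_arrival": 10.0}
      resource = {"free_capacity": 15.0, "arrival_time": 8.0, "cost_per_km": 2.5}
      print(profit([(demand, resource)], revenue_per_ton=35.0, warehouse_cost_per_ton=5.0))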

  13. Warehouse stocking optimization based on dynamic ant colony genetic algorithm

    NASA Astrophysics Data System (ADS)

    Xiao, Xiaoxu

    2018-04-01

    In view of the varied orders handled by FAW (First Automotive Works) International Logistics Co., Ltd., the SLP method is used to optimize the layout of the warehousing units in the enterprise, thereby optimizing warehouse logistics and improving the speed of outbound order processing. In addition, the relevant intelligent algorithms for optimizing the stocking route problem are analyzed. The ant colony algorithm and the genetic algorithm, which have good applicability, are studied in depth. The parameters of the ant colony algorithm are optimized by the genetic algorithm, which improves the performance of the ant colony algorithm. A typical path optimization problem model is taken as an example to demonstrate the effectiveness of the parameter optimization.
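
    A hedged sketch of the parameter-tuning idea follows: a simple genetic-style loop varies ant colony parameters (pheromone weight, heuristic weight, evaporation rate) and keeps the fittest candidates. The evaluation function here is a stub standing in for an actual ant colony run on the stocking-route problem; every value is invented.

      # Hedged sketch: tuning ant colony parameters (alpha, beta, evaporation
      # rate) with a genetic-style loop.  run_aco() is a stub standing in for a
      # real ant colony run on the stocking-route problem; all values are
      # invented for illustration.
      import random

      def run_aco(params):
          alpha, beta, rho = params
          return -abs(alpha - 1.0) - abs(beta - 3.0) - abs(rho - 0.3)  # stub fitness

      def mutate(params, scale=0.1):
          return tuple(max(0.01, p + random.uniform(-scale, scale)) for p in params)

      def tune(generations=50, pop_size=12):
          population = [(random.uniform(0.1, 5), random.uniform(0.1, 5), random.uniform(0.05, 0.9))
                        for _ in range(pop_size)]
          for _ in range(generations):
              population.sort(key=run_aco, reverse=True)   # selection: keep the fittest half
              parents = population[: pop_size // 2]
              population = parents + [mutate(random.choice(parents))
                                      for _ in range(pop_size - len(parents))]
          return max(population, key=run_aco)

      random.seed(0)
      print(tune())  # parameters close to the stub optimum (1.0, 3.0, 0.3)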

  14. Evolution Model and Simulation of Profit Model of Agricultural Products Logistics Financing

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Wu, Yan

    2018-03-01

    The warehousing business in agricultural products logistics finance mainly involves three parties: agricultural production and processing enterprises, third-party logistics enterprises, and financial institutions. To enable all three parties to achieve a win-win outcome, the article first derives the replicator dynamics and evolutionarily stable strategies governing the three parties' participation in the business. It then uses the NetLogo simulation platform and a multi-agent modeling and simulation approach to build an evolutionary game simulation model, runs the model under different revenue parameters, and finally analyzes the simulation results. The aim is to achieve mutually beneficial tripartite participation in the agricultural products logistics financing and warehousing business, thereby promoting the smooth operation of agricultural products logistics.
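
    As a hedged sketch of the replicator dynamics mentioned above, the example below iterates a discrete-time replicator update for a single population (say, the third-party logistics enterprises deciding whether to participate); the payoff values are invented and do not reproduce the paper's tripartite payoff structure.

      # Hedged sketch: a discrete-time replicator-dynamics update for one
      # population choosing between "participate" and "abstain".  Payoff values
      # are invented and do not reproduce the paper's tripartite payoff matrix.

      def replicator_step(x, payoff_participate, payoff_abstain, dt=0.1):
          """x: share of the population currently participating (0..1)."""
          mean_payoff = x * payoff_participate + (1 - x) * payoff_abstain
          return x + dt * x * (payoff_participate - mean_payoff)

      x = 0.2
      for _ in range(200):
          # Assumed payoffs: participation pays 1.5, abstaining pays 1.0.
          x = replicator_step(x, payoff_participate=1.5, payoff_abstain=1.0)
      print(round(x, 3))  # converges toward full participation under these payoffs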

  15. The application of volume-outcome contouring in data warehousing.

    PubMed

    Studnicki, James; Berndt, Donald J; Luther, Stephen L; Fisher, John W

    2004-01-01

    Despite a compelling body of published research on the nature of provider volume and clinical outcomes, healthcare executives and policymakers have not managed to develop and implement systems that are useful in directing patients to higher volume providers via selective referral or avoidance. A specialized data warehouse application, utilizing hospital discharge data linked to physician biographical information, allows detailed analysis of physician and hospital volume and the resulting pattern (contour) of related outcomes such as mortality, complications, and medical errors. The approach utilizes a historical repository of hospital discharge data in which the outcomes of interest, important patient characteristics and risk factors used in severity-adjusting of the outcomes are derived from the coding structure of the data.
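
    A hedged sketch of the basic volume-outcome tabulation such an application supports: group discharge records into provider volume bands and compare an outcome rate across bands. The column names and records below are invented.

      # Hedged sketch: mortality rate by provider volume band, computed from a
      # tiny invented set of discharge records.
      import pandas as pd

      discharges = pd.DataFrame({
          "physician_id": [1, 1, 1, 1, 2, 2, 3, 3, 3, 3, 3, 3],
          "died":         [0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0],
      })

      # Annual procedure volume per physician, then a coarse low/high volume band.
      volume = discharges.groupby("physician_id").size().rename("volume")
      discharges = discharges.join(volume, on="physician_id")
      discharges["volume_band"] = pd.cut(discharges["volume"], bins=[0, 3, 100], labels=["low", "high"])

      # Outcome contour: mortality rate by volume band.
      print(discharges.groupby("volume_band", observed=True)["died"].mean())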

  16. Beliefs and Attitudes Associated with ERP Adoption Behaviours: A Grounded Theory Study from IT Manager and End-user Perspectives

    NASA Astrophysics Data System (ADS)

    Arunthari, Santipat; Hasan, Helen

    Davenport (1998, p. 121) defines an Enterprise Resource Planning (ERP) system as an enterprise system that promises seamless integration of all information flowing through a company, including financial and accounting information, human resource information, supply chain information, and customer information. ERP systems came on the scene in the early 1990s as a response to the proliferation of standalone business applications servicing these separate information needs in most large organisations. Enterprise-wide projects, such as data warehousing, that required integrated approaches to organisational operations and information management were inhibited by a proliferation of incompatible off-the-shelf packages, in-house developments, and aging legacy systems.

  17. Real-Time Pattern Recognition - An Industrial Example

    NASA Astrophysics Data System (ADS)

    Fitton, Gary M.

    1981-11-01

    Rapid advancements in cost-effective sensors and microcomputers are now making practical the on-line implementation of pattern-recognition-based systems for a variety of industrial applications requiring high processing speeds. One major application area for real-time pattern recognition is in the sorting of packaged/cartoned goods at high speed for automated warehousing and return goods cataloging. While there are many OCR and bar code readers available to perform these functions, it is often impractical to use such codes (package too small, adverse esthetics, poor print quality), and an approach which recognizes an item by its graphic content alone is desirable. This paper describes a specific application within the tobacco industry, that of sorting returned cigarette goods by brand and size.

  18. Merging OLTP and OLAP - Back to the Future

    NASA Astrophysics Data System (ADS)

    Lehner, Wolfgang

    When the terms "Data Warehousing" and "Online Analytical Processing" were coined in the 1990s by Kimball, Codd, and others, there was an obvious need for separating data and workload for operational transactional-style processing and decision-making implying complex analytical queries over large and historic data sets. Large data warehouse infrastructures have been set up to cope with the special requirements of analytical query answering for multiple reasons: For example, analytical thinking heavily relies on predefined navigation paths to guide the user through the data set and to provide different views on different aggregation levels.Multi-dimensional queries exploiting hierarchically structured dimensions lead to complex star queries at a relational backend, which could hardly be handled by classical relational systems.

  19. An Optimization Model for Scheduling Problems with Two-Dimensional Spatial Resource Constraint

    NASA Technical Reports Server (NTRS)

    Garcia, Christopher; Rabadi, Ghaith

    2010-01-01

    Traditional scheduling problems involve determining temporal assignments for a set of jobs in order to optimize some objective. Some scheduling problems also require the use of limited resources, which adds another dimension of complexity. In this paper we introduce a spatial resource-constrained scheduling problem that can arise in assembly, warehousing, cross-docking, inventory management, and other areas of logistics and supply chain management. This scheduling problem involves a two-dimensional rectangular area as a limited resource. Each job, in addition to having temporal requirements, has a width and a height and utilizes a certain amount of space inside the area. We propose an optimization model for scheduling the jobs while respecting all temporal and spatial constraints.
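
    The hedged sketch below shows the core feasibility test implied by the spatial resource constraint: two jobs conflict only if their time windows overlap and their rectangular footprints overlap inside the shared area. It does not reproduce the paper's optimization model, and the job fields are invented.

      # Hedged sketch: two jobs conflict only if they overlap in time AND their
      # rectangular footprints overlap in the shared area.  Job fields are
      # invented; this is not the paper's optimization model.

      def overlaps_1d(a_start, a_len, b_start, b_len):
          return a_start < b_start + b_len and b_start < a_start + a_len

      def jobs_conflict(a, b):
          in_time = overlaps_1d(a["start"], a["duration"], b["start"], b["duration"])
          in_space = (overlaps_1d(a["x"], a["width"],  b["x"], b["width"]) and
                      overlaps_1d(a["y"], a["height"], b["y"], b["height"]))
          return in_time and in_space

      job_a = {"start": 0, "duration": 5, "x": 0, "y": 0, "width": 4, "height": 3}
      job_b = {"start": 3, "duration": 4, "x": 2, "y": 1, "width": 4, "height": 3}
      print(jobs_conflict(job_a, job_b))  # True: overlapping in both time and space

      job_b["x"] = 10  # move job_b elsewhere in the area
      print(jobs_conflict(job_a, job_b))  # False: same time window, disjoint space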

  20. Medical data mining: knowledge discovery in a clinical data warehouse.

    PubMed Central

    Prather, J. C.; Lobach, D. F.; Goodwin, L. K.; Hales, J. W.; Hage, M. L.; Hammond, W. E.

    1997-01-01

    Clinical databases have accumulated large quantities of information about patients and their medical conditions. Relationships and patterns within this data could provide new medical knowledge. Unfortunately, few methodologies have been developed and applied to discover this hidden knowledge. In this study, the techniques of data mining (also known as Knowledge Discovery in Databases) were used to search for relationships in a large clinical database. Specifically, data accumulated on 3,902 obstetrical patients were evaluated for factors potentially contributing to preterm birth using exploratory factor analysis. Three factors were identified by the investigators for further exploration. This paper describes the processes involved in mining a clinical database including data warehousing, data query and cleaning, and data analysis. PMID:9357597
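
    As a hedged sketch of the exploratory factor analysis step described above, the example below fits a one-factor model to a small invented matrix of risk-factor variables with scikit-learn; it is illustrative only and unrelated to the study's actual data.

      # Hedged sketch: exploratory factor analysis on an invented matrix of
      # patient risk-factor variables.  Variable roles in the comments are
      # illustrative only.
      import numpy as np
      from sklearn.decomposition import FactorAnalysis

      rng = np.random.default_rng(0)
      n = 200
      latent = rng.normal(size=n)                       # one hidden factor
      X = np.column_stack([
          latent + rng.normal(scale=0.3, size=n),       # e.g. prior preterm birth
          latent + rng.normal(scale=0.3, size=n),       # e.g. low pre-pregnancy weight
          rng.normal(size=n),                           # unrelated variable
      ])

      fa = FactorAnalysis(n_components=1, random_state=0)
      fa.fit(X)
      print(np.round(fa.components_, 2))  # first two loadings large, third near zero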

  1. A U.S. Partnership with India and Poland to Track Acute Chemical Releases to Serve Public Health

    PubMed Central

    Ruckart, Perri Zeitz; Orr, Maureen; Pałaszewska-Tkacz, Anna; Dewan, Aruna; Kapil, Vikas

    2009-01-01

    We describe a collaborative effort between the U.S., India, and Poland to track acute chemical releases during 2005–2007. In all three countries, fixed facility events were more common than transportation-related events; manufacturing and transportation/warehousing were the most frequently involved industries; and equipment failure and human error were the primary contributing factors. The most commonly released nonpetroleum substances were ammonia (India), carbon monoxide (U.S.) and mercury (Poland). More events in India (54%) resulted in victims compared with Poland (15%) and the U.S. (9%). The pilot program showed it is possible to successfully conduct international surveillance of acute hazardous substances releases with careful interpretation of the findings. PMID:19826549

  2. Semantic Technologies for Re-Use of Clinical Routine Data.

    PubMed

    Kreuzthaler, Markus; Martínez-Costa, Catalina; Kaiser, Peter; Schulz, Stefan

    2017-01-01

    Routine patient data in electronic patient records are only partly structured, and an even smaller segment is coded, mainly for administrative purposes. Large parts are only available as free text. Transforming this content into a structured and semantically explicit form is a prerequisite for querying and information extraction. The core of the system architecture presented in this paper is based on SAP HANA in-memory database technology using the SAP Connected Health platform for data integration as well as for clinical data warehousing. A natural language processing pipeline analyses unstructured content and maps it to a standardized vocabulary within a well-defined information model. The resulting semantically standardized patient profiles are used for a broad range of clinical and research application scenarios.

  3. Biomedical data integration in computational drug design and bioinformatics.

    PubMed

    Seoane, Jose A; Aguiar-Pulido, Vanessa; Munteanu, Cristian R; Rivero, Daniel; Rabunal, Juan R; Dorado, Julian; Pazos, Alejandro

    2013-03-01

    In recent years, in the post-genomic era, more and more data is being generated by biological high-throughput technologies, such as proteomics and transcriptomics. This omics data can be very useful, but the real challenge is to analyze all this data, as a whole, after integrating it. Biomedical data integration enables making queries to different, heterogeneous and distributed biomedical data sources. Data integration solutions can be very useful not only in the context of drug design, but also in biomedical information retrieval, clinical diagnosis, systems biology, etc. In this review, we analyze the most common approaches to biomedical data integration, such as federated databases, data warehousing, multi-agent systems and semantic technology, as well as the solutions developed using these approaches in the past few years.

  4. Study on Full Supply Chain Quality and Safetytraceability Systems For Cereal And Oilproducts

    NASA Astrophysics Data System (ADS)

    Liu, Shihong; Zheng, Huoguo; Meng, Hong; Hu, Haiyan; Wu, Jiangshou; Li, Chunhua

    The global food industry and governments in many countries are putting increasing emphasis on the establishment of food traceability systems. Food traceability has become an effective tool in food safety management. Addressing the major quality problems of cereal and oil products in the production, processing, warehousing, distribution, and other links of the supply chain, this paper first proposes a new traceability framework that combines the information flow with critical control points and quality indicators. It then introduces the traceability database design and data access mode used to realize the framework. In practice, code design for tracing goods is a challenging task, so this paper puts forward a code system based on the UCC/EAN-128 standard. Middleware and electronic terminal design are also briefly introduced to complete the traceability system for cereal and oil products.

  5. The balanced scorecard: an integrative approach to performance evaluation.

    PubMed

    Oliveira, J

    2001-05-01

    In addition to strict financial outcomes, healthcare financial managers should assess intangible assets that affect the organization's bottom line, such as clinical processes, staff skills, and patient satisfaction and loyalty. The balanced scorecard, coupled with data-warehousing capabilities, offers a way to measure an organization's performance against its strategic objectives while focusing on building capabilities to achieve these objectives. The balanced scorecard examines performance related to finance, human resources, internal processes, and customers. Because the balanced scorecard requires substantial amounts of data, it is necessary to establish an organizational data warehouse of clinical, operational, and financial data that can be used in decision support. Because it presents indicators that managers and staff can influence directly by their actions, the balanced-scorecard approach to performance measurement encourages behavioral changes aimed at achieving corporate strategies.

  6. Empowering Mayo Clinic Individualized Medicine with Genomic Data Warehousing

    PubMed Central

    Horton, Iain; Lin, Yaxiong; Reed, Gay; Wiepert, Mathieu

    2017-01-01

    Individualized medicine enables better diagnoses and treatment decisions for patients and promotes research in understanding the molecular underpinnings of disease. Linking individual patient’s genomic and molecular information with their clinical phenotypes is crucial to these efforts. To address this need, the Center for Individualized Medicine at Mayo Clinic has implemented a genomic data warehouse and a workflow management system to bring data from institutional electronic health records and genomic sequencing data from both clinical and research bioinformatics sources into the warehouse. The system is the foundation for Mayo Clinic to build a suite of tools and interfaces to support various clinical and research use cases. The genomic data warehouse is positioned to play a key role in enhancing the research capabilities and advancing individualized patient care at Mayo Clinic. PMID:28829408

  7. Empowering Mayo Clinic Individualized Medicine with Genomic Data Warehousing.

    PubMed

    Horton, Iain; Lin, Yaxiong; Reed, Gay; Wiepert, Mathieu; Hart, Steven

    2017-08-22

    Individualized medicine enables better diagnoses and treatment decisions for patients and promotes research in understanding the molecular underpinnings of disease. Linking individual patient's genomic and molecular information with their clinical phenotypes is crucial to these efforts. To address this need, the Center for Individualized Medicine at Mayo Clinic has implemented a genomic data warehouse and a workflow management system to bring data from institutional electronic health records and genomic sequencing data from both clinical and research bioinformatics sources into the warehouse. The system is the foundation for Mayo Clinic to build a suite of tools and interfaces to support various clinical and research use cases. The genomic data warehouse is positioned to play a key role in enhancing the research capabilities and advancing individualized patient care at Mayo Clinic.

  8. Knowledge Management in Sensor Enabled Online Services

    NASA Astrophysics Data System (ADS)

    Smyth, Dominick; Cappellari, Paolo; Roantree, Mark

    The Future Internet has as its vision the development of improved features and usability for services, applications, and content. In many cases, services can be provided automatically through the use of monitors or sensors. This means that web-generated sensor data becomes available not only to the companies that own the sensors but also to the domain users who generate the data and to the information and knowledge workers who harvest the output. The goal is to improve the service through better use of the information it provides. Applications and services range across climate, traffic, health, and sports event monitoring. In this paper, we present the WSW system, which harvests web sensor data to provide additional and, in some cases, more accurate information using an analysis of both live and warehoused information.

  9. Occupational Safety and Health in the Temporary Services Industry: A Model for a Community-University Partnership.

    PubMed

    Bonney, Tessa; Forst, Linda; Rivers, Samara; Love, Marsha; Pratap, Preethi; Bell, Tim; Fulkerson, Sean

    2017-08-01

    Workers in the temporary staffing industry face hazardous working conditions and have a high risk of occupational injury. This project brought together local workers' centers and university investigators to build a corps of Occupational Health Promoters (OHPs) and to test a survey tool and recruitment methods to identify hazards and raise awareness among workers employed by temporary staffing companies. OHPs interviewed ninety-eight workers employed by thirty-three temporary agencies and forty-nine client companies, working mainly in shipping and packing, manufacturing, and warehousing sectors. Surveys identified workplace hazards. OHPs reported two companies to OSHA, resulting in several citations. Partners reported greater understanding of occupational safety and health challenges for temporary workers and continue to engage in training, peer education, and coalition building.

  10. Development of a public health reporting data warehouse: lessons learned.

    PubMed

    Rizi, Seyed Ali Mussavi; Roudsari, Abdul

    2013-01-01

    Data warehouse projects are perceived to be risky and prone to failure due to many organizational and technical challenges. However, the often iterative and lengthy process of implementing a data warehouse at the enterprise level provides an opportunity for formative evaluation of these solutions. This paper describes lessons learned from the successful development and implementation of the first phase of an enterprise data warehouse to support public health surveillance at the British Columbia Centre for Disease Control. An iterative, prototyping approach to development; overcoming the technical challenges of extracting and integrating data from large-scale clinical and ancillary systems; a novel approach to record linkage; flexible and reusable modeling of clinical data; and securing senior management support at the right time were the main factors that contributed to the success of the data warehousing project.

  11. Critical Issues in Evaluating National-Level Health Data Warehouses in LMICs: Kenya Case Study.

    PubMed

    Gesicho, Milka B; Babic, Ankica; Were, Martin C

    2017-01-01

    Low- and middle-income countries (LMICs) are beginning to adopt national health data warehouses (NHDWs) for making strategic decisions and for improving health outcomes. Given the numerous challenges likely to be faced in establishing NHDWs in LMICs, it is prudent that evaluations of these data warehouses (DWs) are carried out in order to identify and mitigate critical issues that arise. When critical issues are not identified, DWs are prone to suboptimal implementation with compromised outcomes. Although several publications exist on evaluating DWs, evaluations specific to health data warehouses are scanty, with almost none evaluating NHDWs, particularly in LMICs. This paper uses a systematic approach guided by an evaluation framework to identify critical issues to be considered in evaluating Kenya's NHDW.

  12. Biomedical data integration - capturing similarities while preserving disparities.

    PubMed

    Bianchi, Stefano; Burla, Anna; Conti, Costanza; Farkash, Ariel; Kent, Carmel; Maman, Yonatan; Shabo, Amnon

    2009-01-01

    One of the challenges of healthcare data processing, analysis and warehousing is the integration of data gathered from disparate and diverse data sources. Promoting the adoption of worldwide accepted information standards along with common terminologies and the use of technologies derived from semantic web representation, is a suitable path to achieve that. To that end, the HL7 V3 Reference Information Model (RIM) [1] has been used as the underlying information model coupled with the Web Ontology Language (OWL) [2] as the semantic data integration technology. In this paper we depict a biomedical data integration process and demonstrate how it was used for integrating various data sources, containing clinical, environmental and genomic data, within Hypergenes, a European Commission funded project exploring the Essential Hypertension [3] disease model.

  13. What Is Spatio-Temporal Data Warehousing?

    NASA Astrophysics Data System (ADS)

    Vaisman, Alejandro; Zimányi, Esteban

    In recent years, extending OLAP (On-Line Analytical Processing) systems with spatial and temporal features has attracted the attention of the GIS (Geographic Information Systems) and database communities. However, there is no commonly agreed definition of what a spatio-temporal data warehouse is and what functionality such a data warehouse should support. Further, the solutions proposed in the literature vary considerably in the kind of data that can be represented as well as the kind of queries that can be expressed. In this paper we present a conceptual framework for defining spatio-temporal data warehouses using an extensible data type system. We also define a taxonomy of different classes of queries of increasing expressive power, and show how to express such queries using an extension of the tuple relational calculus with aggregate functions.

  14. Landsat View: Ontario, California

    NASA Image and Video Library

    2017-12-08

    Thirty-five miles due east of downtown Los Angeles lies the city of Ontario, California. In 1881 two Canadian brothers established the town, naming it after their native city. By 1891 Ontario, Calif., was incorporated as a city. The farming-based economy (olives, citrus, dairy) of the city helped it grow to 20,000 by the 1960s. Subsequently, warehousing and freight trafficking took over as the major industry and the city’s population was over 160,000 by 2010. The L.A./Ontario International Airport is now America’s 15th busiest cargo airport. In these natural color Landsat 5 images, the massive growth of the city between 1985 and 2010 can be seen. The airport, found in the southwest portion of the images, added a number of runways and large warehousing structures now dominate the once rural areas surrounding the airport. In these images vegetation is green and brown and urban structures are bright white and gray. (Note there is a large dry riverbed in the northeast corner that is also bright white, but its nonlinear appearance sets it apart visually). NASA and the U.S. Department of the Interior through the U.S. Geological Survey (USGS) jointly manage Landsat, and the USGS preserves a 40-year archive of Landsat images that is freely available over the Internet. The next Landsat satellite, now known as the Landsat Data Continuity Mission (LDCM) and later to be called Landsat 8, is scheduled for launch in 2013. In honor of Landsat’s 40th anniversary in July 2012, the USGS released the LandsatLook viewer – a quick, simple way to go forward and backward in time, pulling images of anywhere in the world out of the Landsat archive.

  15. POPcorn: An Online Resource Providing Access to Distributed and Diverse Maize Project Data.

    PubMed

    Cannon, Ethalinda K S; Birkett, Scott M; Braun, Bremen L; Kodavali, Sateesh; Jennewein, Douglas M; Yilmaz, Alper; Antonescu, Valentin; Antonescu, Corina; Harper, Lisa C; Gardiner, Jack M; Schaeffer, Mary L; Campbell, Darwin A; Andorf, Carson M; Andorf, Destri; Lisch, Damon; Koch, Karen E; McCarty, Donald R; Quackenbush, John; Grotewold, Erich; Lushbough, Carol M; Sen, Taner Z; Lawrence, Carolyn J

    2011-01-01

    The purpose of the online resource presented here, POPcorn (Project Portal for corn), is to enhance accessibility of maize genetic and genomic resources for plant biologists. Currently, many online locations are difficult to find, some are best searched independently, and individual project websites often degrade over time, sometimes disappearing entirely. The POPcorn site makes available (1) a centralized, web-accessible resource to search and browse descriptions of ongoing maize genomics projects, (2) a single, stand-alone tool that uses web services and minimal data warehousing to search for sequence matches in online resources of diverse offsite projects, and (3) a set of tools that enables researchers to migrate their data to the long-term model organism database for maize genetic and genomic information: MaizeGDB. Examples demonstrating POPcorn's utility are provided herein.

  16. POPcorn: An Online Resource Providing Access to Distributed and Diverse Maize Project Data

    PubMed Central

    Cannon, Ethalinda K. S.; Birkett, Scott M.; Braun, Bremen L.; Kodavali, Sateesh; Jennewein, Douglas M.; Yilmaz, Alper; Antonescu, Valentin; Antonescu, Corina; Harper, Lisa C.; Gardiner, Jack M.; Schaeffer, Mary L.; Campbell, Darwin A.; Andorf, Carson M.; Andorf, Destri; Lisch, Damon; Koch, Karen E.; McCarty, Donald R.; Quackenbush, John; Grotewold, Erich; Lushbough, Carol M.; Sen, Taner Z.; Lawrence, Carolyn J.

    2011-01-01

    The purpose of the online resource presented here, POPcorn (Project Portal for corn), is to enhance accessibility of maize genetic and genomic resources for plant biologists. Currently, many online locations are difficult to find, some are best searched independently, and individual project websites often degrade over time—sometimes disappearing entirely. The POPcorn site makes available (1) a centralized, web-accessible resource to search and browse descriptions of ongoing maize genomics projects, (2) a single, stand-alone tool that uses web Services and minimal data warehousing to search for sequence matches in online resources of diverse offsite projects, and (3) a set of tools that enables researchers to migrate their data to the long-term model organism database for maize genetic and genomic information: MaizeGDB. Examples demonstrating POPcorn's utility are provided herein. PMID:22253616

  17. A systematic review of occupational health and safety interventions with economic analyses.

    PubMed

    Tompa, Emile; Dolinschi, Roman; de Oliveira, Claire; Irvin, Emma

    2009-09-01

    We reviewed the occupational health and safety intervention literature to synthesize evidence on financial merits of such interventions. A literature search included journal databases, existing systematic reviews, and studies identified by content experts. Studies meeting inclusion criteria were assessed for quality. Evidence was synthesized within industry-intervention type clusters. We found strong evidence that ergonomic and other musculoskeletal injury prevention interventions in manufacturing and warehousing are worth undertaking in terms of their financial merits. We also found strong evidence that multisector disability management interventions are worth undertaking. While the economic evaluation of interventions in this literature warrants further expansion, we found a sufficient number of studies to identify strong, moderate, and limited evidence in certain industry-intervention clusters. The review also provided insights into how the methodological quality of economic evaluations in this literature could be improved.

  18. Hierarchical content-based image retrieval by dynamic indexing and guided search

    NASA Astrophysics Data System (ADS)

    You, Jane; Cheung, King H.; Liu, James; Guo, Linong

    2003-12-01

    This paper presents a new approach to content-based image retrieval by using dynamic indexing and guided search in a hierarchical structure, and extending data mining and data warehousing techniques. The proposed algorithms include: a wavelet-based scheme for multiple image feature extraction, the extension of a conventional data warehouse and an image database to an image data warehouse for dynamic image indexing, an image data schema for hierarchical image representation and dynamic image indexing, a statistically based feature selection scheme to achieve flexible similarity measures, and a feature component code to facilitate query processing and guide the search for the best matching. A series of case studies are reported, which include a wavelet-based image color hierarchy, classification of satellite images, tropical cyclone pattern recognition, and personal identification using multi-level palmprint and face features.
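
    To illustrate the wavelet-based feature extraction mentioned above, the hedged sketch below performs one level of a 2-D Haar decomposition and uses the energy of each sub-band as a coarse feature vector; it is a simplified stand-in for the paper's multi-feature scheme, with an invented toy image.

      # Hedged sketch: one level of a 2-D Haar wavelet decomposition and the
      # energy of each sub-band as a coarse image feature vector.  A simplified
      # stand-in for the paper's full indexing scheme; the image is a toy array.
      import numpy as np

      def haar_level1(img):
          """img: 2-D array with even dimensions; returns (LL, LH, HL, HH) sub-bands."""
          a, b = img[:, 0::2], img[:, 1::2]           # column pairs
          lo, hi = (a + b) / 2.0, (a - b) / 2.0       # horizontal pass
          c, d = lo[0::2, :], lo[1::2, :]
          e, f = hi[0::2, :], hi[1::2, :]
          ll, lh = (c + d) / 2.0, (c - d) / 2.0       # vertical pass on the low band
          hl, hh = (e + f) / 2.0, (e - f) / 2.0       # vertical pass on the high band
          return ll, lh, hl, hh

      def wavelet_energy_features(img):
          return [float(np.mean(band ** 2)) for band in haar_level1(img)]

      image = np.arange(64, dtype=float).reshape(8, 8)   # toy "image"
      print(np.round(wavelet_energy_features(image), 2))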

  19. A simulated annealing approach for redesigning a warehouse network problem

    NASA Astrophysics Data System (ADS)

    Khairuddin, Rozieana; Marlizawati Zainuddin, Zaitul; Jiun, Gan Jia

    2017-09-01

    Nowadays, several companies are considering downsizing their distribution networks in ways that involve consolidation or phase-out of some of their current warehousing facilities, due to increasing competition, mounting cost pressure, and the advantages of economies of scale. Consequently, changes in the economic situation after a certain period of time require an adjustment of the network model in order to obtain the optimal cost under current economic conditions. This paper aims to develop a mixed-integer linear programming model for a two-echelon warehouse network redesign problem with a capacitated plant and uncapacitated warehouses. The main contribution of this study is considering a capacity constraint for existing warehouses. A simulated annealing algorithm is proposed to tackle the proposed model. The numerical results showed that the proposed model and solution method were practical.
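
    The hedged sketch below shows a generic simulated annealing loop over binary keep-open decisions for a small set of warehouses, with a toy cost of fixed costs plus a penalty for uncovered demand; the costs and capacities are invented and the sketch does not reproduce the paper's mixed-integer model or its exact algorithm settings.

      # Hedged sketch: simulated annealing over binary "keep this warehouse open"
      # decisions with a toy cost function (fixed costs plus a shortfall penalty).
      # Costs, capacities and parameters are invented; this is not the paper's
      # MILP formulation or its exact simulated annealing design.
      import math, random

      FIXED_COST = [100, 120, 90, 150]          # annual cost of keeping each site open
      CAPACITY   = [40, 55, 35, 70]             # capacity of each site
      DEMAND     = 120                          # total demand that must be covered

      def cost(open_sites):
          capacity = sum(c for c, o in zip(CAPACITY, open_sites) if o)
          shortfall = max(0, DEMAND - capacity)
          return sum(f for f, o in zip(FIXED_COST, open_sites) if o) + 50 * shortfall

      def anneal(steps=5000, t0=200.0, cooling=0.999):
          state = [1, 1, 1, 1]
          temp = t0
          for _ in range(steps):
              candidate = state[:]
              i = random.randrange(len(candidate))
              candidate[i] = 1 - candidate[i]               # flip one open/close decision
              delta = cost(candidate) - cost(state)
              if delta <= 0 or random.random() < math.exp(-delta / temp):
                  state = candidate
              temp *= cooling
          return state, cost(state)

      random.seed(1)
      print(anneal())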

  20. Laying the Groundwork for Enterprise-Wide Medical Language Processing Services: Architecture and Process

    PubMed Central

    Chen, Elizabeth S.; Maloney, Francine L.; Shilmayster, Eugene; Goldberg, Howard S.

    2009-01-01

    A systematic and standard process for capturing information within free-text clinical documents could facilitate opportunities for improving quality and safety of patient care, enhancing decision support, and advancing data warehousing across an enterprise setting. At Partners HealthCare System, the Medical Language Processing (MLP) services project was initiated to establish a component-based architectural model and processes to facilitate putting MLP functionality into production for enterprise consumption, promote sharing of components, and encourage reuse. Key objectives included exploring the use of an open-source framework called the Unstructured Information Management Architecture (UIMA) and leveraging existing MLP-related efforts, terminology, and document standards. This paper describes early experiences in defining the infrastructure and standards for extracting, encoding, and structuring clinical observations from a variety of clinical documents to serve enterprise-wide needs. PMID:20351830

  1. Laying the groundwork for enterprise-wide medical language processing services: architecture and process.

    PubMed

    Chen, Elizabeth S; Maloney, Francine L; Shilmayster, Eugene; Goldberg, Howard S

    2009-11-14

    A systematic and standard process for capturing information within free-text clinical documents could facilitate opportunities for improving quality and safety of patient care, enhancing decision support, and advancing data warehousing across an enterprise setting. At Partners HealthCare System, the Medical Language Processing (MLP) services project was initiated to establish a component-based architectural model and processes to facilitate putting MLP functionality into production for enterprise consumption, promote sharing of components, and encourage reuse. Key objectives included exploring the use of an open-source framework called the Unstructured Information Management Architecture (UIMA) and leveraging existing MLP-related efforts, terminology, and document standards. This paper describes early experiences in defining the infrastructure and standards for extracting, encoding, and structuring clinical observations from a variety of clinical documents to serve enterprise-wide needs.

  2. Anchor Modeling

    NASA Astrophysics Data System (ADS)

    Regardt, Olle; Rönnbäck, Lars; Bergholtz, Maria; Johannesson, Paul; Wohed, Petia

    Maintaining and evolving data warehouses is a complex, error prone, and time consuming activity. The main reason for this state of affairs is that the environment of a data warehouse is in constant change, while the warehouse itself needs to provide a stable and consistent interface to information spanning extended periods of time. In this paper, we propose a modeling technique for data warehousing, called anchor modeling, that offers non-destructive extensibility mechanisms, thereby enabling robust and flexible management of changes in source systems. A key benefit of anchor modeling is that changes in a data warehouse environment only require extensions, not modifications, to the data warehouse. This ensures that existing data warehouse applications will remain unaffected by the evolution of the data warehouse, i.e. existing views and functions will not have to be modified as a result of changes in the warehouse model.
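
    A minimal sketch of the non-destructive extension idea, using SQLite from Python: each attribute lives in its own table keyed by the anchor's surrogate identity, so a new source field is absorbed by creating a new attribute table rather than altering anything already deployed. The table and naming conventions here are illustrative, not the notation defined in the paper.

      import sqlite3

      con = sqlite3.connect(":memory:")
      cur = con.cursor()

      # Anchor: one table per entity, holding only a surrogate identity.
      cur.execute("CREATE TABLE AC_Actor (AC_ID INTEGER PRIMARY KEY)")

      # Attribute: one table per historised attribute, keyed by the anchor.
      cur.execute("""CREATE TABLE AC_NAM_Actor_Name (
                       AC_ID INTEGER REFERENCES AC_Actor(AC_ID),
                       Name TEXT NOT NULL,
                       ValidFrom TEXT NOT NULL,
                       PRIMARY KEY (AC_ID, ValidFrom))""")

      cur.execute("INSERT INTO AC_Actor DEFAULT VALUES")
      cur.execute("INSERT INTO AC_NAM_Actor_Name VALUES (1, 'Greta Garbo', '1905-09-18')")

      # Later the source system adds a new field. The warehouse is extended with a
      # NEW attribute table; existing tables and views are left untouched.
      cur.execute("""CREATE TABLE AC_GEN_Actor_Gender (
                       AC_ID INTEGER REFERENCES AC_Actor(AC_ID),
                       Gender TEXT NOT NULL,
                       ValidFrom TEXT NOT NULL,
                       PRIMARY KEY (AC_ID, ValidFrom))""")
      con.commit()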

  3. Community archiving of imaging studies

    NASA Astrophysics Data System (ADS)

    Fritz, Steven L.; Roys, Steven R.; Munjal, Sunita

    1996-05-01

    The quantity of image data created in a large radiology practice has long been a challenge for available archiving technology. Traditional methods of archiving the large quantity of films generated in radiology have relied on warehousing in remote sites, with courier delivery of film files for historical comparisons. A digital community archive, accessible via a wide area network, represents a feasible solution to the problem of archiving digital images from a busy practice. In addition, it affords a physician caring for a patient access to imaging studies performed at a variety of healthcare institutions without the need to repeat studies. Security problems include both network security issues in the WAN environment and access control for patient, physician and imaging center. The key obstacle to developing a community archive is currently political. Reluctance to participate in a community archive can be reduced by appropriate design of the access mechanisms.

  4. Runtime support for parallelizing data mining algorithms

    NASA Astrophysics Data System (ADS)

    Jin, Ruoming; Agrawal, Gagan

    2002-03-01

    With recent technological advances, shared memory parallel machines have become more scalable, and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object based interface for specifying a data mining algorithm. We show how our runtime system can apply any of the techniques we have developed starting from a common specification of the algorithm.
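
    The contrast between full replication and full locking can be sketched in a few lines of Python threading code. This is a didactic illustration of the reduction-object idea on a simple counting reduction, not the authors' runtime system, and Python's global interpreter lock means it does not demonstrate real speedups.

      import threading
      from collections import Counter

      def count_full_replication(chunks):
          """Full replication: every thread updates its own private Counter (no locks);
          the per-thread results are merged in a final reduction step."""
          per_thread = [Counter() for _ in chunks]

          def work(idx, chunk):
              for item in chunk:
                  per_thread[idx][item] += 1

          threads = [threading.Thread(target=work, args=(i, c)) for i, c in enumerate(chunks)]
          for t in threads:
              t.start()
          for t in threads:
              t.join()
          merged = Counter()
          for partial in per_thread:
              merged.update(partial)          # reduction/merge of the replicated objects
          return merged

      def count_full_locking(chunks):
          """Full locking: one shared Counter protected by a single lock on every update."""
          shared, lock = Counter(), threading.Lock()

          def work(chunk):
              for item in chunk:
                  with lock:
                      shared[item] += 1

          threads = [threading.Thread(target=work, args=(c,)) for c in chunks]
          for t in threads:
              t.start()
          for t in threads:
              t.join()
          return shared

      chunks = [["a", "b", "a"], ["b", "c"], ["a", "c", "c"]]
      print(count_full_replication(chunks) == count_full_locking(chunks))   # True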

  5. Quantitative analysis and feature recognition in 3-D microstructural data sets

    NASA Astrophysics Data System (ADS)

    Lewis, A. C.; Suh, C.; Stukowski, M.; Geltmacher, A. B.; Spanos, G.; Rajan, K.

    2006-12-01

    A three-dimensional (3-D) reconstruction of an austenitic stainless-steel microstructure was used as input for an image-based finite-element model to simulate the anisotropic elastic mechanical response of the microstructure. The quantitative data-mining and data-warehousing techniques used to correlate regions of high stress with critical microstructural features are discussed. Initial analysis of elastic stresses near grain boundaries due to mechanical loading revealed low overall correlation with their location in the microstructure. However, the use of data-mining and feature-tracking techniques to identify high-stress outliers revealed that many of these high-stress points are generated near grain boundaries and grain edges (triple junctions). These techniques also allowed for the differentiation between high stresses due to boundary conditions of the finite volume reconstructed, and those due to 3-D microstructural features.

  6. Logistics support economy and efficiency through consolidation and automation

    NASA Technical Reports Server (NTRS)

    Savage, G. R.; Fontana, C. J.; Custer, J. D.

    1985-01-01

    An integrated logistics support system, which would provide routine access to space and be cost-competitive as an operational space transportation system, was planned and implemented to support the NSTS program launch-on-time goal of 95 percent. A decision was made to centralize the Shuttle logistics functions in a modern facility that would provide office and training space and an efficient warehouse area. In this warehouse, the emphasis is on automation of the storage and retrieval function, while utilizing state-of-the-art warehousing and inventory management technology. This consolidation, together with the automation capabilities being provided, will allow for more effective utilization of personnel and improved responsiveness. In addition, this facility will be the prime support for the fully integrated logistics support of the operations era NSTS and reduce the program's management, procurement, transportation, and supply costs in the operations era.

  7. Applications of biomechanics for prevention of work-related musculoskeletal disorders.

    PubMed

    Garg, Arun; Kapellusch, Jay M

    2009-01-01

    This paper summarises applications of biomechanical principles and models in industry to control musculoskeletal disorders of the low back and upper extremity. Applications of 2-D and 3-D biomechanical models to estimate compressive force on the low back, the strength requirements of jobs, application of guidelines for overhead work and application of strain index and threshold limit value to address distal upper extremity musculoskeletal disorders are presented. Several case studies applied in the railroad industry, manufacturing, healthcare and warehousing are presented. Finally, future developments needed for improved biomechanical applications in industry are discussed. The information presented will be of value to practising ergonomists to recognise how biomechanics has played a significant role in identifying causes of musculoskeletal disorders and controlling them in the workplace. In particular, the information presented will help practising ergonomists with how physical stresses can be objectively quantified.

  8. Pedestrian worker fatalities in workplace locations, Australia, 2000-2010.

    PubMed

    Kitching, Fiona; Jones, Christopher B; Ibrahim, Joseph E; Ozanne-Smith, Joan

    2014-01-01

    Pedestrian deaths of workers in Australian workplaces (1 July 2000-31 December 2010) are described using coronial and safety authority fatality databases. One hundred and fifteen deaths were identified, with the majority male (93%) and aged over 50 years (59%). Four industries predominated (85% of deaths): Agriculture, Forestry and Fishing (31%), Construction (29%), Transport, Postal and Warehousing (16%) and Manufacturing (10%). Similarly, three occupations dominated: Farmers (28%), Labourers (27%) and Machinery Operators and Drivers (25%). Common circumstantial factors (reversing machines or vehicles, driver also the pedestrian, driver's vision impeded and working accompanied) occurred in the Construction, Transport and Manufacturing industries, providing collaborative opportunities for prevention. Deaths occurring in the Agriculture industry showed different circumstantial factors, likely needing different solutions. While some effective countermeasures are known, workplace pedestrian fatalities continue to occur. Prevention strategies are needed to share known information across industries and to produce data enhancements and new knowledge.

  9. BigMouth: a multi-institutional dental data repository

    PubMed Central

    Walji, Muhammad F; Kalenderian, Elsbeth; Stark, Paul C; White, Joel M; Kookal, Krishna K; Phan, Dat; Tran, Duong; Bernstam, Elmer V; Ramoni, Rachel

    2014-01-01

    Few oral health databases are available for research and the advancement of evidence-based dentistry. In this work we developed a centralized data repository derived from electronic health records (EHRs) at four dental schools participating in the Consortium of Oral Health Research and Informatics. A multi-stakeholder committee developed a data governance framework that encouraged data sharing while allowing control of contributed data. We adopted the i2b2 data warehousing platform and mapped data from each institution to a common reference terminology. We realized that dental EHRs urgently need to adopt common terminologies. While all used the same treatment code set, only three of the four sites used a common diagnostic terminology, and there were wide discrepancies in how medical and dental histories were documented. BigMouth was successfully launched in August 2012 with data on 1.1 million patients, and made available to users at the contributing institutions. PMID:24993547

  10. Data needs for policy research on state-level health insurance markets.

    PubMed

    Simon, Kosali

    2008-01-01

    Private and public health insurance provision in the United States operates against a backdrop of 50 different regulatory environments in addition to federal rules. Through creative use of available data, a large body of research has contributed to our understanding of public policy in state health insurance markets. This research plays an important role as recent trends suggest states are taking the lead in health care reform. However, several important questions have not been answered due to lack of data. This paper identifies some of these areas, and discusses how the Agency for Healthcare Research and Quality could push the research agenda in state health insurance policy further by augmenting the market-level data available to researchers. As states consider new forms of regulation and assistance for their insurance markets, there is increased need for better warehousing and maintenance of policy databases.

  11. Eastern Rensselaer County Community Warehouse. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-02-01

    The Eastern Rensselaer County Community Warehouse (ERCCW) project was conducted to determine if reuse is a feasible waste-management technology. The project had three phases: determining the feasibility of economically collecting, warehousing, refurbishing, and selling salvageable items from transfer stations, individuals, and businesses located in a rural area; preparing an operational plan; implementing the plan. Project findings suggest that, with proper management, a warehouse for waste reuse is feasible and can be self-sustaining. It also found that reuse results in the short-term saving of landfill capacity, and can provide low-cost goods to residents of rural areas. The study concludes that reuse, through retailing, is a viable waste-management practice. The report has been prepared as a how-to guide for municipalities, organizations, and individuals interested in reducing landfill waste and in establishing a reuse/recycling facility. Questions to consider, resources, and an overview of the ERCCW project are provided.

  12. Effects of bench step exercise intervention on work ability in terms of cardiovascular risk factors and oxidative stress: a randomized controlled study.

    PubMed

    Ohta, Masanori; Eguchi, Yasumasa; Inoue, Tomohiro; Honda, Toru; Morita, Yusaku; Konno, Yoshimasa; Yamato, Hiroshi; Kumashiro, Masaharu

    2015-01-01

    Work ability is partly determined by physical and mental fitness. Bench step exercise can be practiced anywhere at any time. The aim of this study was to determine the effects of a bench step exercise on work ability by examining cardiovascular risk factors and oxidative stress. Thirteen volunteers working in a warehousing industry comprised the bench step exercise group (n=7) and the control group (n=6). The participants in the step exercise group were encouraged to practice the step exercise at home for 16 weeks. The step exercise improved glucose metabolism and antioxidative capacity and increased work ability by reducing absences from work and improving the prognosis of work ability. The improvement in work ability was related to a reduction in oxidative stress. These results suggest that a bench step exercise may improve work ability by reducing cardiovascular risk factors and oxidative stress.

  13. Banking biological collections: data warehousing, data mining, and data dilemmas in genomics and global health policy.

    PubMed

    Blatt, R J R

    2000-01-01

    While DNA databases may offer the opportunity to (1) assess population-based prevalence of specific genes and variants, (2) simplify the search for molecular markers, (3) improve targeted drug discovery and development for disease management, (4) refine strategies for disease prevention, and (5) provide the data necessary for evidence-based decision-making, serious scientific and social questions remain. Whether samples are identified, coded, or anonymous, biological banking raises profound ethical and legal issues pertaining to access, informed consent, privacy and confidentiality of genomic information, civil liberties, patenting, and proprietary rights. This paper provides an overview of key policy issues and questions pertaining to biological banking, with a focus on developments in specimen collection, transnational distribution, and public health and academic-industry research alliances. It highlights the challenges posed by the commercialization of genomics, and proposes the need for harmonization of biological banking policies.

  14. Designing a Clinical Data Warehouse Architecture to Support Quality Improvement Initiatives.

    PubMed

    Chelico, John D; Wilcox, Adam B; Vawdrey, David K; Kuperman, Gilad J

    2016-01-01

    Clinical data warehouses, initially directed towards clinical research or financial analyses, are evolving to support quality improvement efforts, and must now address the quality improvement life cycle. In addition, data that are needed for quality improvement often do not reside in a single database, requiring easier methods to query data across multiple disparate sources. We created a virtual data warehouse at NewYork Presbyterian Hospital that allowed us to bring together data from several source systems throughout the organization. We also created a framework to match the maturity of a data request in the quality improvement life cycle to the proper tools needed for each request. As projects progress through the Define, Measure, Analyze, Improve, Control stages of quality improvement, there is a proper matching of resources to the data needs at each step. We describe the analysis and design creating a robust model for applying clinical data warehousing to quality improvement.

  15. Integrating Hospital Administrative Data to Improve Health Care Efficiency and Outcomes: “The Socrates Story”

    PubMed Central

    Lawrence, Justin; Delaney, Conor P.

    2013-01-01

    Evaluation of health care outcomes has become increasingly important as we strive to improve quality and efficiency while controlling cost. Many groups feel that analysis of large datasets will be useful in optimizing resource utilization; however, the ideal blend of clinical and administrative data points has not been developed. Hospitals and health care systems have several tools to measure cost and resource utilization, but the data are often housed in disparate systems that are not integrated and do not permit multisystem analysis. Systems Outcomes and Clinical Resources AdministraTive Efficiency Software (SOCRATES) is a novel data merging, warehousing, analysis, and reporting technology, which brings together disparate hospital administrative systems generating automated or customizable risk-adjusted reports. Used in combination with standardized enhanced care pathways, SOCRATES offers a mechanism to improve the quality and efficiency of care, with the ability to measure real-time changes in outcomes. PMID:24436649

  16. Hymenoptera Genome Database: integrating genome annotations in HymenopteraMine

    PubMed Central

    Elsik, Christine G.; Tayal, Aditi; Diesh, Colin M.; Unni, Deepak R.; Emery, Marianne L.; Nguyen, Hung N.; Hagen, Darren E.

    2016-01-01

    We report an update of the Hymenoptera Genome Database (HGD) (http://HymenopteraGenome.org), a model organism database for insect species of the order Hymenoptera (ants, bees and wasps). HGD maintains genomic data for 9 bee species, 10 ant species and 1 wasp, including the versions of genome and annotation data sets published by the genome sequencing consortiums and those provided by NCBI. A new data-mining warehouse, HymenopteraMine, based on the InterMine data warehousing system, integrates the genome data with data from external sources and facilitates cross-species analyses based on orthology. New genome browsers and annotation tools based on JBrowse/WebApollo provide easy genome navigation, and viewing of high throughput sequence data sets and can be used for collaborative genome annotation. All of the genomes and annotation data sets are combined into a single BLAST server that allows users to select and combine sequence data sets to search. PMID:26578564

  17. BioStar models of clinical and genomic data for biomedical data warehouse design

    PubMed Central

    Wang, Liangjiang; Ramanathan, Murali

    2008-01-01

    Biomedical research is now generating large amounts of data, ranging from clinical test results to microarray gene expression profiles. The scale and complexity of these datasets give rise to substantial challenges in data management and analysis. It is highly desirable that data warehousing and online analytical processing technologies can be applied to biomedical data integration and mining. The major difficulty probably lies in the task of capturing and modelling diverse biological objects and their complex relationships. This paper describes multidimensional data modelling for biomedical data warehouse design. Since the conventional models such as star schema appear to be insufficient for modelling clinical and genomic data, we develop a new model called BioStar schema. The new model can capture the rich semantics of biomedical data and provide greater extensibility for the fast evolution of biological research methodologies. PMID:18048122

  18. Techniques for integrating -omics data

    PubMed Central

    Akula, Siva Prasad; Miriyala, Raghava Naidu; Thota, Hanuman; Rao, Allam Appa; Gedela, Srinubabu

    2009-01-01

    The challenge for -omics research is to tackle the problem of fragmentation of knowledge by integrating several sources of heterogeneous information into a coherent entity. It is widely recognized that successful data integration is one of the keys to improving productivity from stored data. Through proper data integration tools and algorithms, researchers may uncover relationships that enable them to make better and faster decisions. Data integration is essential for the present -omics community, because -omics data are currently spread worldwide in a wide variety of formats. These formats can be integrated and migrated across platforms through different techniques, and one of the techniques often used is XML. XML provides a document markup language that is easier to learn, retrieve, store and transmit, and it is semantically richer than HTML. Here, we describe biowarehousing, database federation and controlled vocabularies, highlighting the application of XML to store, migrate and validate -omics data. PMID:19255651
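
    As a toy illustration of XML-mediated integration, the Python sketch below merges two hypothetical -omics fragments that use different element names for the same gene and re-emits the integrated record as XML; the element names, attributes and vocabulary are assumptions for illustration only.

      import xml.etree.ElementTree as ET

      # Two hypothetical -omics fragments that name the same gene differently.
      transcriptomics = ET.fromstring(
          "<experiment><gene symbol='TP53' fold_change='2.1'/></experiment>")
      proteomics = ET.fromstring(
          "<run><protein gene='TP53' abundance='0.8'/></run>")

      # Integrate into one record keyed by gene symbol (a tiny controlled vocabulary).
      merged = {}
      for g in transcriptomics.iter("gene"):
          merged.setdefault(g.get("symbol"), {})["fold_change"] = g.get("fold_change")
      for p in proteomics.iter("protein"):
          merged.setdefault(p.get("gene"), {})["abundance"] = p.get("abundance")

      # Re-emit the integrated view as XML, ready to migrate or validate elsewhere.
      out = ET.Element("integrated")
      for symbol, values in merged.items():
          ET.SubElement(out, "gene", symbol=symbol, **values)
      print(ET.tostring(out, encoding="unicode"))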

  19. ASSET Queries: A Set-Oriented and Column-Wise Approach to Modern OLAP

    NASA Astrophysics Data System (ADS)

    Chatziantoniou, Damianos; Sotiropoulos, Yannis

    Modern data analysis has given birth to numerous grouping constructs and programming paradigms, way beyond the traditional group by. Applications such as data warehousing, web log analysis, streams monitoring and social networks understanding necessitated the use of data cubes, grouping variables, windows and MapReduce. In this paper we review the associated set (ASSET) concept and discuss its applicability in both continuous and traditional data settings. Given a set of values B, an associated set over B is just a collection of annotated data multisets, one for each b ∈ B. The goal is to efficiently compute aggregates over these data sets. An ASSET query consists of repeated definitions of associated sets and aggregates of these, possibly correlated, resembling a spreadsheet document. We review systems implementing ASSET queries both in continuous and persistent contexts and argue for associated sets' analytical abilities and optimization opportunities.
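
    The associated-set idea can be made concrete with a small Python sketch: for each b in a base set B we collect the rows associated with b by some predicate, then aggregate over each collection, spreadsheet-style. The product/sales example and the correlation predicate are invented for illustration and are not taken from the paper.

      def associated_sets(base_values, data, predicate):
          """For each b in B, collect the annotated multiset of rows associated with b."""
          assoc = {b: [] for b in base_values}
          for row in data:
              for b in base_values:
                  if predicate(b, row):
                      assoc[b].append(row)
          return assoc

      # Toy data: B = products with launch days; rows are sales (product, day, amount).
      products = [("p1", 10), ("p2", 20)]
      sales = [("p1", 12, 5.0), ("p2", 25, 9.0), ("p1", 30, 3.0)]

      # The associated set of a product holds every sale of that product made
      # within 7 days of its launch.
      assoc = associated_sets(
          products, sales,
          lambda b, row: row[0] == b[0] and 0 <= row[1] - b[1] <= 7)

      # Aggregate over each associated set.
      launch_window_revenue = {b[0]: sum(r[2] for r in assoc[b]) for b in products}
      print(launch_window_revenue)   # {'p1': 5.0, 'p2': 9.0}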

  20. Techniques for integrating -omics data.

    PubMed

    Akula, Siva Prasad; Miriyala, Raghava Naidu; Thota, Hanuman; Rao, Allam Appa; Gedela, Srinubabu

    2009-01-01

    The challenge for -omics research is to tackle the problem of fragmentation of knowledge by integrating several sources of heterogeneous information into a coherent entity. It is widely recognized that successful data integration is one of the keys to improving productivity from stored data. Through proper data integration tools and algorithms, researchers may uncover relationships that enable them to make better and faster decisions. Data integration is essential for the present -omics community, because -omics data are currently spread worldwide in a wide variety of formats. These formats can be integrated and migrated across platforms through different techniques, and one of the techniques often used is XML. XML provides a document markup language that is easier to learn, retrieve, store and transmit, and it is semantically richer than HTML. Here, we describe biowarehousing, database federation and controlled vocabularies, highlighting the application of XML to store, migrate and validate -omics data.

  1. Integration, warehousing, and analysis strategies of Omics data.

    PubMed

    Gedela, Srinubabu

    2011-01-01

    "-Omics" is a current suffix for numerous types of large-scale biological data generation procedures, which naturally demand the development of novel algorithms for data storage and analysis. With next generation genome sequencing burgeoning, it is pivotal to decipher a coding site on the genome, a gene's function, and information on transcripts next to the pure availability of sequence information. To explore a genome and downstream molecular processes, we need umpteen results at the various levels of cellular organization by utilizing different experimental designs, data analysis strategies and methodologies. Here comes the need for controlled vocabularies and data integration to annotate, store, and update the flow of experimental data. This chapter explores key methodologies to merge Omics data by semantic data carriers, discusses controlled vocabularies as eXtensible Markup Languages (XML), and provides practical guidance, databases, and software links supporting the integration of Omics data.

  2. Telematic integration of health data: a practicable contribution.

    PubMed

    Guerriero, Lorenzo; Ferdeghini, Ezio M; Viola, Silvia R; Porro, Ivan; Testi, Angela; Bedini, Remo

    2011-09-01

    The patients' clinical and healthcare data should virtually be available everywhere, both to provide a more efficient and effective medical approach to their pathologies, as well as to make public healthcare decision makers able to verify the efficacy and efficiency of the adopted healthcare processes. Unfortunately, customised solutions adopted by many local Health Information Systems in Italy make it difficult to share the stored data outside their own environment. In the last years, worldwide initiatives have aimed to overcome such sharing limitation. An important issue during the passage towards standardised, integrated information systems is the possible loss of previously collected data. The herein presented project realises a suitable architecture able to guarantee reliable, automatic, user-transparent storing and retrieval of information from both modern and legacy systems. The technical and management solutions provided by the project avoid data loss and overlapping, and allow data integration and organisation suitable for data-mining and data-warehousing analysis.

  3. Integrating hospital administrative data to improve health care efficiency and outcomes: "the socrates story".

    PubMed

    Lawrence, Justin; Delaney, Conor P

    2013-03-01

    Evaluation of health care outcomes has become increasingly important as we strive to improve quality and efficiency while controlling cost. Many groups feel that analysis of large datasets will be useful in optimizing resource utilization; however, the ideal blend of clinical and administrative data points has not been developed. Hospitals and health care systems have several tools to measure cost and resource utilization, but the data are often housed in disparate systems that are not integrated and do not permit multisystem analysis. Systems Outcomes and Clinical Resources AdministraTive Efficiency Software (SOCRATES) is a novel data merging, warehousing, analysis, and reporting technology, which brings together disparate hospital administrative systems generating automated or customizable risk-adjusted reports. Used in combination with standardized enhanced care pathways, SOCRATES offers a mechanism to improve the quality and efficiency of care, with the ability to measure real-time changes in outcomes.

  4. Biological data warehousing system for identifying transcriptional regulatory sites from gene expressions of microarray data.

    PubMed

    Tsou, Ann-Ping; Sun, Yi-Ming; Liu, Chia-Lin; Huang, Hsien-Da; Horng, Jorng-Tzong; Tsai, Meng-Feng; Liu, Baw-Juine

    2006-07-01

    Identification of transcriptional regulatory sites plays an important role in the investigation of gene regulation. For this propose, we designed and implemented a data warehouse to integrate multiple heterogeneous biological data sources with data types such as text-file, XML, image, MySQL database model, and Oracle database model. The utility of the biological data warehouse in predicting transcriptional regulatory sites of coregulated genes was explored using a synexpression group derived from a microarray study. Both of the binding sites of known transcription factors and predicted over-represented (OR) oligonucleotides were demonstrated for the gene group. The potential biological roles of both known nucleotides and one OR nucleotide were demonstrated using bioassays. Therefore, the results from the wet-lab experiments reinforce the power and utility of the data warehouse as an approach to the genome-wide search for important transcription regulatory elements that are the key to many complex biological systems.
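
    A minimal Python sketch of one ingredient mentioned above, the search for over-represented (OR) oligonucleotides in the promoters of a synexpression group: k-mers are counted in the co-regulated promoters and compared against background sequences by a simple enrichment ratio. The ratio test, the pseudocount and the toy sequences are assumptions for illustration; the paper's statistical treatment is not reproduced here.

      from collections import Counter

      def kmer_counts(sequences, k=6):
          counts = Counter()
          for seq in sequences:
              seq = seq.upper()
              for i in range(len(seq) - k + 1):
                  counts[seq[i:i + k]] += 1
          return counts

      def over_represented(promoters, background, k=6, ratio=2.0):
          """Flag k-mers whose relative frequency in the co-regulated promoters exceeds
          `ratio` times their (pseudocounted) frequency in background sequences."""
          fg, bg = kmer_counts(promoters, k), kmer_counts(background, k)
          fg_total = sum(fg.values()) or 1
          bg_total = sum(bg.values()) or 1
          hits = {}
          for kmer, n in fg.items():
              enrichment = (n / fg_total) / ((bg.get(kmer, 0) + 1) / (bg_total + 1))
              if enrichment >= ratio:
                  hits[kmer] = round(enrichment, 2)
          return hits

      promoters  = ["ATGCGTACGTTTGACGT", "CCGTACGTTTGAAAT"]   # synexpression group (toy)
      background = ["ATATATATATATATAT", "GGGGCCCCGGGGCCCC"]   # background set (toy)
      print(over_represented(promoters, background, k=6))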

  5. Designing a Clinical Data Warehouse Architecture to Support Quality Improvement Initiatives

    PubMed Central

    Chelico, John D.; Wilcox, Adam B.; Vawdrey, David K.; Kuperman, Gilad J.

    2016-01-01

    Clinical data warehouses, initially directed towards clinical research or financial analyses, are evolving to support quality improvement efforts, and must now address the quality improvement life cycle. In addition, data that are needed for quality improvement often do not reside in a single database, requiring easier methods to query data across multiple disparate sources. We created a virtual data warehouse at NewYork Presbyterian Hospital that allowed us to bring together data from several source systems throughout the organization. We also created a framework to match the maturity of a data request in the quality improvement life cycle to the proper tools needed for each request. As projects progress through the Define, Measure, Analyze, Improve, Control stages of quality improvement, there is a proper matching of resources to the data needs at each step. We describe the analysis and design creating a robust model for applying clinical data warehousing to quality improvement. PMID:28269833

  6. Energy use and life cycle greenhouse gas emissions of drones for commercial package delivery

    DOE PAGES

    Stolaroff, Joshuah K.; Samaras, Constantine; O'Neill, Emma R.; ...

    2018-02-13

    Here, the use of automated, unmanned aerial vehicles (drones) to deliver commercial packages is poised to become a new industry, significantly shifting energy use in the freight sector. Here we find the current practical range of multi-copters to be about 4 km with current battery technology, requiring a new network of urban warehouses or waystations as support. We show that, although drones consume less energy per package-km than delivery trucks, the additional warehouse energy required and the longer distances traveled by drones per package greatly increase the life-cycle impacts. Still, in most cases examined, the impacts of package delivery by small drone are lower than ground-based delivery. Results suggest that, if carefully deployed, drone-based delivery could reduce greenhouse gas emissions and energy use in the freight sector. To realize the environmental benefits of drone delivery, regulators and firms should focus on minimizing extra warehousing and limiting the size of drones.

  7. Power Distribution Analysis For Electrical Usage In Province Area Using Olap (Online Analytical Processing)

    NASA Astrophysics Data System (ADS)

    Samsinar, Riza; Suseno, Jatmiko Endro; Widodo, Catur Edi

    2018-02-01

    The distribution network is the part of the power grid closest to the customers of electric service providers such as PT. PLN. The dispatching center of a power grid company is also the data center of the power grid, where a great amount of operating information is gathered. The valuable information contained in these data means a lot for power grid operating management. The techniques of data warehousing and online analytical processing (OLAP) have been used to manage and analyze this large volume of data. The online analytical information system built on the data warehouse delivers its results through chart and query reporting. The chart reporting consists of the load distribution chart over time, the distribution chart by area, the substation region chart and the electric load usage chart. The results of the OLAP process show the development of the electric load distribution, provide analysis of electric power consumption, and offer an alternative way of presenting information related to peak load.
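
    A small Python/pandas sketch of the kind of slice-and-dice reporting described above; the fact table, area and substation names, and load figures are invented placeholders, and pandas stands in for whatever OLAP engine the system actually uses.

      import pandas as pd

      # Hypothetical fact table of hourly load readings (values are placeholders).
      facts = pd.DataFrame({
          "area":       ["North", "North", "South", "South", "North", "South"],
          "substation": ["N-01",  "N-02",  "S-01",  "S-01",  "N-01",  "S-02"],
          "hour":       [18, 18, 18, 19, 19, 19],
          "load_mw":    [42.0, 37.5, 51.2, 55.0, 44.1, 48.3],
      })

      # Load-distribution chart: total load by area over time ("slice and dice").
      by_area_hour = facts.pivot_table(index="hour", columns="area",
                                       values="load_mw", aggfunc="sum")

      # Drill down from one area to its substations.
      north_detail = (facts[facts["area"] == "North"]
                      .groupby(["hour", "substation"])["load_mw"].sum())

      # Peak-load report: the hour with the highest total system load.
      peak_hour = facts.groupby("hour")["load_mw"].sum().idxmax()
      print(by_area_hour, north_detail, peak_hour, sep="\n")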

  8. Risks endemic to long-haul trucking in North America: strategies to protect and promote driver well-being.

    PubMed

    Apostolopoulos, Yorghos; Lemke, Michael; Sönmez, Sevil

    2014-01-01

    Long-haul truck drivers in North America function in a work context marked by excess physical and psychological workload, erratic schedules, disrupted sleep patterns, extreme time pressures, and these factors' far-reaching consequences. These work-induced stressors are connected with excess risk for cardiometabolic disease, certain cancers, and musculoskeletal and sleep disorders, as well as highway crashes, which in turn exert enormous financial burdens on trucking and warehousing companies, governments and healthcare systems, along with working people within the sector. This article: 1) delineates the unique work environment of long-haul truckers, describing their work characteristics and duties; (2) discusses the health hazards of long-haul trucking that impact drivers, the general population, and trucking enterprises, examining how this work context induces, sustains, and exacerbates these hazards; and (3) proposes comprehensive, multi-level strategies with potential to protect and promote the health, safety, and well-being of truckers, while reducing adverse consequences for companies and highway safety.

  9. Stormwater Pollution Prevention Plan - TA-60 Material Recycling Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandoval, Leonard Frank

    This Storm Water Pollution Prevention Plan (SWPPP) was developed in accordance with the provisions of the Clean Water Act (33 U.S.C. §§1251 et seq., as amended), and the Multi-Sector General Permit for Storm Water Discharges Associated with Industrial Activity (U.S. EPA, June 2015) issued by the U.S. Environmental Protection Agency (EPA) for the National Pollutant Discharge Elimination System (NPDES) and using the industry specific permit requirements for Sector P-Land Transportation and Warehousing as a guide. This SWPPP applies to discharges of stormwater from the operational areas of the TA-60 Material Recycling Facility at Los Alamos National Laboratory. Los Alamos National Laboratory (also referred to as LANL or the “Laboratory”) is owned by the Department of Energy (DOE), and is operated by Los Alamos National Security, LLC (LANS). Throughout this document, the term “facility” refers to the TA-60 Material Recycling Facility. The current permit expires at midnight on June 4, 2020.

  10. Stormwater Pollution Prevention Plan - TA-60 Asphalt Batch Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandoval, Leonard Frank

    This Storm Water Pollution Prevention Plan (SWPPP) was developed in accordance with the provisions of the Clean Water Act (33 U.S.C. §§1251 et seq., as amended), and the Multi-Sector General Permit for Storm Water Discharges Associated with Industrial Activity (U.S. EPA, June 2015) issued by the U.S. Environmental Protection Agency (EPA) for the National Pollutant Discharge Elimination System (NPDES) and using the industry specific permit requirements for Sector P-Land Transportation and Warehousing as a guide. This SWPPP applies to discharges of stormwater from the operational areas of the TA-60-01 Asphalt Batch Plant at Los Alamos National Laboratory. Los Alamos National Laboratory (also referred to as LANL or the “Laboratory”) is owned by the Department of Energy (DOE), and is operated by Los Alamos National Security, LLC (LANS). Throughout this document, the term “facility” refers to the TA-60 Asphalt Batch Plant and associated areas. The current permit expires at midnight on June 4, 2020.

  11. Non-Fatal Occupational Falls on the Same Level

    PubMed Central

    Yeoh, Han T.; Lockhart, Thurmon E.; Wu, Xuefang

    2012-01-01

    The purpose of this study was to describe antecedents and characteristics of same level fall injuries. Fall incidents and costs were compiled from the Bureau of Labor Statistics and other sources from 2006–2010. This study indicated that over 29% of “fall on same level” injuries resulted in 31 or more workdays lost. The major source of injury was “floors, walkways or ground surfaces” and the most affected body parts were the lower extremities and the trunk. In regards to gender and age, female workers had the highest risk of falls, while advancing age coincided with an increase in incidence rates. Overall, workers in the health care and social assistance industry, the transportation and warehousing industry, and the accommodation and food services industry had the highest risk for “fall on same level” injuries. Furthermore, the overall compensation cost increased 25% from 2006–2009. Along with existing evidence, these results may facilitate the design and implementation of preventative measures in the workplace and potentially reduce fall-related compensation costs. PMID:23216368

  12. Energy use and life cycle greenhouse gas emissions of drones for commercial package delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stolaroff, Joshuah K.; Samaras, Constantine; O'Neill, Emma R.

    Here, the use of automated, unmanned aerial vehicles (drones) to deliver commercial packages is poised to become a new industry, significantly shifting energy use in the freight sector. Here we find the current practical range of multi-copters to be about 4 km with current battery technology, requiring a new network of urban warehouses or waystations as support. We show that, although drones consume less energy per package-km than delivery trucks, the additional warehouse energy required and the longer distances traveled by drones per package greatly increase the life-cycle impacts. Still, in most cases examined, the impacts of package delivery by small drone are lower than ground-based delivery. Results suggest that, if carefully deployed, drone-based delivery could reduce greenhouse gas emissions and energy use in the freight sector. To realize the environmental benefits of drone delivery, regulators and firms should focus on minimizing extra warehousing and limiting the size of drones.

  13. Same-level fall injuries in US workplaces by age group, gender, and industry.

    PubMed

    Scott, Kenneth A; Fisher, Gwenith G; Barón, Anna E; Tompa, Emile; Stallones, Lorann; DiGuiseppi, Carolyn

    2018-02-01

    As the workforce ages, occupational injuries from falls on the same level will increase. Some industries may be more affected than others. We conducted a cross-sectional study using data from the Bureau of Labor Statistics to estimate same-level fall injury incidence rates by age group, gender, and industry for four sectors: 1) healthcare and social assistance; 2) manufacturing; 3) retail; and 4) transportation and warehousing. We calculated rate ratios and rate differences by age group and gender. Same-level fall injury incidence rates increase with age in all four sectors. However, patterns of rate ratios and rate differences vary by age group, gender, and industry. Younger workers, men, and manufacturing workers generally have lower rates. Variation in incidence rates suggests there are unrealized opportunities to prevent same-level fall injuries. Interventions should be evaluated for their effectiveness at reducing injuries, avoiding gender- or age-discrimination and improving work ability. © 2017 Wiley Periodicals, Inc.
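
    For readers unfamiliar with the two comparison measures, the short Python sketch below computes incidence rates per 10,000 full-time-equivalent workers (the usual 20,000,000-hour basis) and the corresponding rate ratio and rate difference; the case counts and hours are hypothetical, not BLS figures.

      def incidence_rate(cases, hours_worked, per=20_000_000):
          """Injuries per 10,000 full-time-equivalent workers (2,000 hours each)."""
          return cases / hours_worked * per

      # Hypothetical counts for one industry sector (illustration only).
      older   = incidence_rate(cases=180, hours_worked=60_000_000)   # workers 55+
      younger = incidence_rate(cases=120, hours_worked=90_000_000)   # workers 25-34

      rate_ratio      = older / younger      # relative comparison (2.25)
      rate_difference = older - younger      # absolute excess per 10,000 FTE (about 33.3)
      print(round(rate_ratio, 2), round(rate_difference, 1))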

  14. Decision support and data warehousing tools boost competitive advantage.

    PubMed

    Waldo, B H

    1998-01-01

    The ability to communicate across the care continuum is fast becoming an integral component of the successful health enterprise. As integrated delivery systems are formed and patient care delivery is restructured, health care professionals must be able to distribute, access, and evaluate information across departments and care settings. The Aberdeen Group, a computer and communications research and consulting organization, believes that "the single biggest challenge for next-generation health care providers is to improve on how they consolidate and manage information across the continuum of care. This involves building a strategic warehouse of clinical and financial information that can be shared and leveraged by health care professionals, regardless of the location or type of care setting" (Aberdeen Group, Inc., 1997). The value and importance of data and systems integration are growing. Organizations that create a strategy and implement DSS tools to provide decision-makers with the critical information they need to face the competition and maintain quality and costs will have the advantage.

  15. Energy use and life cycle greenhouse gas emissions of drones for commercial package delivery.

    PubMed

    Stolaroff, Joshuah K; Samaras, Constantine; O'Neill, Emma R; Lubers, Alia; Mitchell, Alexandra S; Ceperley, Daniel

    2018-02-13

    The use of automated, unmanned aerial vehicles (drones) to deliver commercial packages is poised to become a new industry, significantly shifting energy use in the freight sector. Here we find the current practical range of multi-copters to be about 4 km with current battery technology, requiring a new network of urban warehouses or waystations as support. We show that, although drones consume less energy per package-km than delivery trucks, the additional warehouse energy required and the longer distances traveled by drones per package greatly increase the life-cycle impacts. Still, in most cases examined, the impacts of package delivery by small drone are lower than ground-based delivery. Results suggest that, if carefully deployed, drone-based delivery could reduce greenhouse gas emissions and energy use in the freight sector. To realize the environmental benefits of drone delivery, regulators and firms should focus on minimizing extra warehousing and limiting the size of drones.

  16. On Robust Methodologies for Managing Public Health Care Systems

    PubMed Central

    Nimmagadda, Shastri L.; Dreher, Heinz V.

    2014-01-01

    The authors focus on ontology-based multidimensional data warehousing and mining methodologies, addressing various issues in organizing, reporting and documenting diabetic cases and their associated ailments, including causalities. Map and other diagnostic data views, depicting similarity and comparison of attributes extracted from warehouses, are used for understanding the ailments based on gender, age, geography, food habits and other hereditary event attributes. In addition to rigor in data mining and visualization, an added focus is on the value of interpreting data views, from processed full-bodied diagnosis to subsequent prescription and appropriate medications. The proposed methodology is a robust back-end application for web-based patient-doctor consultations and e-Health care management systems, through which billions of dollars spent on medical services can be saved, in addition to improving quality of life and the average life span of a person. Government health departments and agencies, private and government medical practitioners, and social welfare organizations are typical users of these systems. PMID:24445953

  17. Storage and retrieval of medical images from data warehouses

    NASA Astrophysics Data System (ADS)

    Tikekar, Rahul V.; Fotouhi, Farshad A.; Ragan, Don P.

    1995-11-01

    As our applications continue to become more sophisticated, the demand for more storage continues to rise. Hence many businesses are looking toward data warehousing technology to satisfy their storage needs. A warehouse is different from a conventional database and hence deserves a different approach to storing data that might be retrieved at a later point in time. In this paper we look at the problem of storing and retrieving medical image data from a warehouse. We regard the warehouse as a pyramid with fast storage devices at the top and slower storage devices at the bottom. Our approach is to store the most needed information abstract at the top of the pyramid and more detailed, storage-consuming data toward the bottom of the pyramid. This information is linked for browsing purposes. Similarly, during retrieval the user is given a sample representation with the option to browse the detailed data, and, as required, more and more details are made available.
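
    A minimal in-memory Python sketch of the pyramid placement and browsing idea; the tier names, detail levels and payloads are assumptions for illustration, and a real implementation would of course sit on actual storage devices.

      TIERS = ["ram_cache", "ssd", "disk_array", "tape_library"]   # fast -> slow

      class PyramidStore:
          def __init__(self):
              self.levels = {t: {} for t in TIERS}

          def put(self, study_id, representations):
              """`representations` maps a detail level (0 = thumbnail/abstract,
              higher = more detail) to the image payload for that level."""
              for detail, payload in sorted(representations.items()):
                  tier = TIERS[min(detail, len(TIERS) - 1)]
                  self.levels[tier][(study_id, detail)] = payload

          def browse(self, study_id, max_detail):
              """Yield successively more detailed versions, walking down the pyramid."""
              for detail in range(max_detail + 1):
                  tier = TIERS[min(detail, len(TIERS) - 1)]
                  payload = self.levels[tier].get((study_id, detail))
                  if payload is not None:
                      yield detail, tier, payload

      store = PyramidStore()
      store.put("CT-001", {0: b"64x64 thumbnail", 1: b"512x512 preview", 3: b"full series"})
      for detail, tier, _ in store.browse("CT-001", max_detail=3):
          print(detail, tier)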

  18. A Dimensional Bus model for integrating clinical and research data.

    PubMed

    Wade, Ted D; Hum, Richard C; Murphy, James R

    2011-12-01

    Many clinical research data integration platforms rely on the Entity-Attribute-Value model because of its flexibility, even though it presents problems in query formulation and execution time. The authors sought more balance in these traits. Borrowing concepts from Entity-Attribute-Value and from enterprise data warehousing, the authors designed an alternative called the Dimensional Bus model and used it to integrate electronic medical record, sponsored study, and biorepository data. Each type of observational collection has its own table, and the structure of these tables varies to suit the source data. The observational tables are linked to the Bus, which holds provenance information and links to various classificatory dimensions that amplify the meaning of the data or facilitate its query and exposure management. The authors implemented a Bus-based clinical research data repository with a query system that flexibly manages data access and confidentiality, facilitates catalog search, and readily formulates and compiles complex queries. The design provides a workable way to manage and query mixed schemas in a data warehouse.
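
    A sketch of the idea in Python with SQLite: each observational collection keeps its own table structure, while a shared Bus row carries provenance and links to classificatory dimensions, so cross-cutting questions go through the Bus. All table and column names here are invented for illustration and are not the schema described in the paper.

      import sqlite3

      con = sqlite3.connect(":memory:")
      cur = con.cursor()

      # The Bus carries provenance and links to shared classificatory dimensions;
      # every observational collection keeps its own table shape but points at the Bus.
      cur.executescript("""
      CREATE TABLE dim_source  (source_id INTEGER PRIMARY KEY, name TEXT);
      CREATE TABLE dim_consent (consent_id INTEGER PRIMARY KEY, label TEXT);
      CREATE TABLE bus (
          bus_id     INTEGER PRIMARY KEY,
          source_id  INTEGER REFERENCES dim_source(source_id),
          consent_id INTEGER REFERENCES dim_consent(consent_id),
          loaded_at  TEXT
      );
      CREATE TABLE labs    (bus_id INTEGER REFERENCES bus(bus_id), patient_id TEXT,
                            test TEXT, value REAL, unit TEXT);
      CREATE TABLE biorepo (bus_id INTEGER REFERENCES bus(bus_id), patient_id TEXT,
                            specimen_type TEXT, freezer TEXT);
      """)

      # A cross-cutting question answered through the Bus rather than through the
      # individual collections: which sources hold data usable under consent type 1?
      for row in cur.execute("""
          SELECT s.name, COUNT(*)
          FROM bus b JOIN dim_source s ON s.source_id = b.source_id
          WHERE b.consent_id = 1
          GROUP BY s.name"""):
          print(row)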

  19. Booly: a new data integration platform.

    PubMed

    Do, Long H; Esteves, Francisco F; Karten, Harvey J; Bier, Ethan

    2010-10-13

    Data integration is an escalating problem in bioinformatics. We have developed a web tool and warehousing system, Booly, that features a simple yet flexible data model coupled with the ability to perform powerful comparative analysis, including the use of Boolean logic to merge datasets together, and an integrated aliasing system to decipher differing names of the same gene or protein. Furthermore, Booly features a collaborative sharing system and a public repository so that users can retrieve new datasets while contributors can easily disseminate new content. We illustrate the uses of Booly with several examples including: the versatile creation of homebrew datasets, the integration of heterogeneous data to identify genes useful for comparing avian and mammalian brain architecture, and generation of a list of Food and Drug Administration (FDA) approved drugs with possible alternative disease targets. The Booly paradigm for data storage and analysis should facilitate integration between disparate biological and medical fields and result in novel discoveries that can then be validated experimentally. Booly can be accessed at http://booly.ucsd.edu.
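
    The Boolean merging and alias resolution described above can be mimicked with ordinary Python set operations, as in the hypothetical mini-example below; the gene symbols, alias table and datasets are illustrative and are not drawn from Booly's actual content.

      # Alias table mapping alternative gene names to one canonical symbol
      # (symbols and datasets here are invented for illustration).
      aliases = {"FGF8": "FGF8", "AIGF": "FGF8", "SHH": "SHH", "HHG1": "SHH"}

      def canon(genes):
          return {aliases.get(g, g) for g in genes}

      avian_brain  = canon({"FGF8", "HHG1", "PAX6"})
      mammal_brain = canon({"AIGF", "SHH", "FOXP2"})
      drug_targets = canon({"SHH", "FOXP2", "EGFR"})

      conserved  = avian_brain & mammal_brain              # AND: present in both datasets
      either     = avian_brain | mammal_brain              # OR: present in either dataset
      candidates = conserved & drug_targets                # chained Boolean merge
      print(conserved, either, candidates)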

  20. Booly: a new data integration platform

    PubMed Central

    2010-01-01

    Background Data integration is an escalating problem in bioinformatics. We have developed a web tool and warehousing system, Booly, that features a simple yet flexible data model coupled with the ability to perform powerful comparative analysis, including the use of Boolean logic to merge datasets together, and an integrated aliasing system to decipher differing names of the same gene or protein. Furthermore, Booly features a collaborative sharing system and a public repository so that users can retrieve new datasets while contributors can easily disseminate new content. Results We illustrate the uses of Booly with several examples including: the versatile creation of homebrew datasets, the integration of heterogeneous data to identify genes useful for comparing avian and mammalian brain architecture, and generation of a list of Food and Drug Administration (FDA) approved drugs with possible alternative disease targets. Conclusions The Booly paradigm for data storage and analysis should facilitate integration between disparate biological and medical fields and result in novel discoveries that can then be validated experimentally. Booly can be accessed at http://booly.ucsd.edu. PMID:20942966

  1. Hymenoptera Genome Database: integrating genome annotations in HymenopteraMine.

    PubMed

    Elsik, Christine G; Tayal, Aditi; Diesh, Colin M; Unni, Deepak R; Emery, Marianne L; Nguyen, Hung N; Hagen, Darren E

    2016-01-04

    We report an update of the Hymenoptera Genome Database (HGD) (http://HymenopteraGenome.org), a model organism database for insect species of the order Hymenoptera (ants, bees and wasps). HGD maintains genomic data for 9 bee species, 10 ant species and 1 wasp, including the versions of genome and annotation data sets published by the genome sequencing consortiums and those provided by NCBI. A new data-mining warehouse, HymenopteraMine, based on the InterMine data warehousing system, integrates the genome data with data from external sources and facilitates cross-species analyses based on orthology. New genome browsers and annotation tools based on JBrowse/WebApollo provide easy genome navigation, and viewing of high throughput sequence data sets and can be used for collaborative genome annotation. All of the genomes and annotation data sets are combined into a single BLAST server that allows users to select and combine sequence data sets to search. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  2. Modeling of ETL-Processes and Processed Information in Clinical Data Warehousing.

    PubMed

    Tute, Erik; Steiner, Jochen

    2018-01-01

    Literature describes a big potential for reuse of clinical patient data, and a clinical data warehouse (CDWH) is a means for that. The objective was to provide a modeling approach that supports management and maintenance of the processes extracting, transforming and loading (ETL) data into CDWHs, and that eases reuse of metadata among regular IT management, the CDWH and secondary data users. An expert survey and a literature review were used to identify requirements and existing modeling techniques. An ETL modeling technique was developed by extending existing modeling techniques, and was evaluated by modeling an existing ETL process as an example and by a second expert survey. Nine experts participated in the first survey. The literature review yielded 15 included publications, and six existing modeling techniques were identified. A modeling technique extending 3LGM2 and combining it with openEHR information models was developed and evaluated; seven experts participated in the evaluation. The developed approach can help in management and maintenance of ETL processes and could serve as an interface between regular IT management, the CDWH and secondary data users.

  3. Life cycle of a data warehousing project in healthcare.

    PubMed

    Verma, R; Harper, J

    2001-01-01

    Hill Physicians Medical Group (and its medical management firm, PriMed Management) early on recognized the need for a data warehouse. Management demanded that data from many sources be integrated, cleansed, and formatted. As a first step, an operational data store (ODS) was built and populated with data from the main transactional system; encounter data were added. The ODS has served its purpose well and has whetted management's appetite for more information and faster, more reliable access, all in one location. PriMed hired Annams Systems Consulting (Annams) for this effort. A team was formed, made up of consultants from Annams and members of PriMed's information services (IS) team. The "classical" approach is being taken: enhancing the ODS, which is largely normalized in structure, and integrating data from various sources, along with enforcing business rules. The team is designing and implementing data marts and a "star schema" style of data modeling--a useful tool for management to evaluate results before investing further.

  4. Adding Data Management Services to Parallel File Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandt, Scott

    2015-03-04

    The objective of this project, called DAMASC for “Data Management in Scientific Computing”, is to coalesce data management with parallel file system management to present a declarative interface to scientists for managing, querying, and analyzing extremely large data sets efficiently and predictably. Managing extremely large data sets is a key challenge of exascale computing. The overhead, energy, and cost of moving massive volumes of data demand designs where computation is close to storage. In current architectures, compute/analysis clusters access data in a physically separate parallel file system and largely leave it to the scientist to reduce data movement. Over the past decades the high-end computing community has adopted middleware with multiple layers of abstractions and specialized file formats such as NetCDF-4 and HDF5. These abstractions provide a limited set of high-level data processing functions, but have inherent functionality and performance limitations: middleware that provides access to the highly structured contents of scientific data files stored in the (unstructured) file systems can only optimize to the extent that file system interfaces permit, and the highly structured formats of these files often impede native file system performance optimizations. We are developing Damasc, an enhanced high-performance file system with native rich data management services. Damasc will enable efficient queries and updates over files stored in their native byte-stream format while retaining the inherent performance of file system data storage via declarative queries and updates over views of underlying files. Damasc has four key benefits for the development of data-intensive scientific code: (1) applications can use important data-management services, such as declarative queries, views, and provenance tracking, that are currently available only within database systems; (2) the use of these services becomes easier, as they are provided within a familiar file-based ecosystem; (3) common optimizations, e.g., indexing and caching, are readily supported across several file formats, avoiding effort duplication; and (4) performance improves significantly, as data processing is integrated more tightly with data storage. Our key contributions are: SciHadoop, which explores changes to MapReduce assumptions by taking advantage of the semantics of structured data while preserving MapReduce's failure and resource management; DataMods, which extends common abstractions of parallel file systems so they become programmable, can be extended to natively support a variety of data models, and can be hooked into emerging distributed runtimes such as Stanford's Legion; and Miso, which combines Hadoop and relational data warehousing to minimize time to insight, taking into account the overhead of ingesting data into data warehousing.

  5. Coronary heart disease prevalence and occupational structure in U.S. metropolitan areas: a multilevel analysis.

    PubMed

    Michimi, Akihiko; Ellis-Griffith, Gregory; Nagy, Christine; Peterson, Tina

    2013-05-01

    This research explored the link between coronary heart disease (CHD) prevalence and metropolitan-area level occupational structure among 137 metropolitan/micropolitan statistical areas (MMSA) in the United States. Using data from the 2006-2008 Behavioral Risk Factor Surveillance System and 2007 County Business Patterns, logistic mixed models were developed to estimate CHD prevalence between MMSAs, controlling for individual-level socioeconomic characteristics and various types of occupational structure. Results showed that CHD prevalence was lower in MMSAs whose economies were dominated by 'tourism and resort' and 'the quaternary sector' and higher in MMSAs dominated by 'manufacturing', 'transportation and warehousing', and 'mining'. MMSA-level effects on CHD were found, with 'tourism and resort' and 'the quaternary sector' associated with lower risk and 'mining' with higher risk of CHD. Although these effects prevailed in many MMSAs, some MMSAs did not fit this pattern. Additional analysis indicated a possible link between metropolitan population loss and higher CHD prevalence, especially in the coal mining region of the Appalachian Mountains. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Discovering Knowledge from AIS Database for Application in VTS

    NASA Astrophysics Data System (ADS)

    Tsou, Ming-Cheng

    The widespread use of the Automatic Identification System (AIS) has had a significant impact on maritime technology. AIS enables the Vessel Traffic Service (VTS) not only to offer commonly known functions such as identification, tracking and monitoring of vessels, but also to provide rich real-time information that is useful for marine traffic investigation, statistical analysis and theoretical research. However, due to the rapid accumulation of AIS observation data, the VTS platform is often unable to absorb and analyze it quickly and effectively. Traditional observation and analysis methods are becoming less suitable for the modern AIS generation of VTS. In view of this, we applied the same data mining technique used for business intelligence discovery (in Customer Relationship Management (CRM) business marketing) to the analysis of AIS observation data. This recasts the marine traffic problem as a business-marketing problem and integrates technologies such as Geographic Information Systems (GIS), database management systems, data warehousing and data mining to facilitate the discovery of hidden and valuable information in a huge amount of observation data. Consequently, this provides marine traffic managers with a useful strategic planning resource.

  7. Optimization of RFID network planning using Zigbee and WSN

    NASA Astrophysics Data System (ADS)

    Hasnan, Khalid; Ahmed, Aftab; Badrul-aisham, Bakhsh, Qadir

    2015-05-01

    Everyone wants ease in their lives, and radio frequency identification (RFID) wireless technology is used to make life easier. RFID technology increases productivity, accuracy and convenience in service delivery across the supply chain. It is used for various applications such as preventing automobile theft, collecting tolls without stopping, eliminating checkout lines at grocery stores, managing traffic, hospital management, corporate campuses and airports, mobile asset tracking, warehousing, tracking library books, and tracking a wealth of assets in supply chain management. The efficiency of RFID can be enhanced by integrating it with wireless sensor networks (WSN), Zigbee mesh networks and the Internet of Things (IoT). The proposed system provides identification, sensing and a real-time locating system (RTLS) for items in an indoor heterogeneous region. The system gives richer real-time information on objects' characteristics, location and environmental parameters such as temperature, noise and humidity. RTLS reduces human error, optimizes inventory management, and increases productivity and information accuracy in an indoor heterogeneous network. The power consumption and data transmission rate of the system can be minimized by using a low-power hardware design.

  8. The Prospect of Internet of Things and Big Data Analytics in Transportation System

    NASA Astrophysics Data System (ADS)

    Noori Hussein, Waleed; Kamarudin, L. M.; Hussain, Haider N.; Zakaria, A.; Badlishah Ahmed, R.; Zahri, N. A. H.

    2018-05-01

    The Internet of Things (IoT), the emerging technology that describes how data, people and interconnected physical objects act on communicated information, and big data analytics have been adopted by diverse domains for varying purposes. Manufacturing, agriculture, banking, oil and gas, healthcare, retail, hospitality, and food services are a few of the sectors that have adopted and massively utilized IoT and big data analytics. The transportation industry is also an early adopter, with significant effects on its processes of shipment tracking, freight monitoring, and transparent warehousing. This is recorded in countries such as England, Singapore, Portugal, and Germany, while Malaysia is currently assessing the potential and researching a purpose-driven adoption and implementation. This paper, based on a review of related literature, presents a summary of the inherent prospects of adopting IoT and big data analytics in the Malaysian transportation system. An efficient and safe port environment, predictive maintenance and remote management, and a boundary-less software platform and connected ecosystem, among others, are the inherent benefits of IoT and big data analytics for the Malaysian transportation system.

  9. Extending the NIF DISCO framework to automate complex workflow: coordinating the harvest and integration of data from diverse neuroscience information resources

    PubMed Central

    Marenco, Luis N.; Wang, Rixin; Bandrowski, Anita E.; Grethe, Jeffrey S.; Shepherd, Gordon M.; Miller, Perry L.

    2014-01-01

    This paper describes how DISCO, the data aggregator that supports the Neuroscience Information Framework (NIF), has been extended to play a central role in automating the complex workflow required to support and coordinate the NIF’s data integration capabilities. The NIF is an NIH Neuroscience Blueprint initiative designed to help researchers access the wealth of data related to the neurosciences available via the Internet. A central component is the NIF Federation, a searchable database that currently contains data from 231 data and information resources regularly harvested, updated, and warehoused in the DISCO system. In the past several years, DISCO has greatly extended its functionality and has evolved to play a central role in automating the complex, ongoing process of harvesting, validating, integrating, and displaying neuroscience data from a growing set of participating resources. This paper provides an overview of DISCO’s current capabilities and discusses a number of the challenges and future directions related to the process of coordinating the integration of neuroscience data within the NIF Federation. PMID:25018728

  10. The high cost of the international aging prisoner crisis: well-being as the common denominator for action.

    PubMed

    Maschi, Tina; Viola, Deborah; Sun, Fei

    2013-08-01

    The aging prisoner crisis continues to gain international attention as the high human, social, and economic costs of warehousing older adults with complex physical, mental health, and social care needs in prison continue to rise. According to the United Nations, older adults and the seriously and terminally ill are considered special needs populations subject to special international health and social practice and policy considerations. We argue that older adults in prison have unique individual and social developmental needs that result from life course exposure to cumulative risk factors compounded by prison conditions that accelerate their aging. We position these factors in a social context model of human development and well-being and present a review of international human rights guidelines that pertain to promoting health and well-being for those aging in custody. The study concludes with promising practices and recommendations regarding their potential to reduce the high direct and indirect economic costs associated with mass confinement of older adults, many of whom need specialized long-term care that global correctional systems are inadequately equipped to provide.

  11. Multi-Level Bitmap Indexes for Flash Memory Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kesheng; Madduri, Kamesh; Canon, Shane

    2010-07-23

    Due to their low access latency, high read speed, and power-efficient operation, flash memory storage devices are rapidly emerging as an attractive alternative to traditional magnetic storage devices. However, tests show that the most efficient indexing methods are not able to take advantage of the flash memory storage devices. In this paper, we present a set of multi-level bitmap indexes that can effectively take advantage of flash storage devices. These indexing methods use coarsely binned indexes to answer queries approximately, and then use finely binned indexes to refine the answers. Our new methods read significantly lower volumes of data at the expense of an increased disk access count, thus taking full advantage of the improved read speed and low access latency of flash devices. To demonstrate the advantage of these new indexes, we measure their performance on a number of storage systems using a standard data warehousing benchmark called the Set Query Benchmark. We observe that multi-level strategies on flash drives are up to 3 times faster than traditional indexing strategies on magnetic disk drives.
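
    A toy two-level binned bitmap index in the spirit of this record (not the authors' code, and with made-up data and bin edges): coarse bins answer a range query approximately, and a finer level plus a check against the raw values refines the edge bin.

        # Toy two-level binned bitmap index: coarse bins give an approximate answer,
        # finer bins shrink the candidate set that still has to be checked.
        from bisect import bisect_right

        def build_bins(values, edges):
            """One bitmap (here: a Python set of row ids) per bin."""
            bins = [set() for _ in range(len(edges) + 1)]
            for rowid, v in enumerate(values):
                bins[bisect_right(edges, v)].add(rowid)
            return bins

        values       = [3, 17, 42, 55, 61, 78, 91, 99]
        coarse_edges = [50]                 # 2 coarse bins
        fine_edges   = [25, 50, 75]         # 4 fine bins
        coarse, fine = build_bins(values, coarse_edges), build_bins(values, fine_edges)

        # Query: values < 60.
        hits_coarse = coarse[0]                                # bin [.., 50) fully qualifies
        edge_bin    = fine[2]                                  # bin [50, 75) partially overlaps
        candidates  = {r for r in edge_bin if values[r] < 60}  # refine step reads raw values
        print(sorted(hits_coarse | candidates))                # -> row ids 0-3, i.e. 3, 17, 42, 55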

  12. Breaking the Curse of Cardinality on Bitmap Indexes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kesheng; Stockinger, Kurt

    2008-04-04

    Bitmap indexes are known to be efficient for ad-hoc range queries that are common in data warehousing and scientific applications. However, they suffer from the curse of cardinality, that is, their efficiency deteriorates as attribute cardinalities increase. A number of strategies have been proposed, but none of them addresses the problem adequately. In this paper, we propose a novel binned bitmap index that greatly reduces the cost to answer queries, and therefore breaks the curse of cardinality. The key idea is to augment the binned index with an Order-preserving Bin-based Clustering (OrBiC) structure. This data structure significantly reduces the I/O operations needed to resolve records that cannot be resolved with the bitmaps. To further improve the proposed index structure, we also present a strategy to create single-valued bins for frequent values. This strategy reduces index sizes and improves query processing speed. Overall, the binned indexes with OrBiC greatly improve query processing speed and are 3-25 times faster than the best available indexes for high-cardinality data.
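
    A small sketch of the order-preserving clustering idea under assumed toy data: values are laid out in bin order, so the candidate check for a partially covered bin reads one contiguous slice rather than scattered records.

        # Sketch of the OrBiC idea described above (toy data, not the authors' code):
        # cluster values on disk in bin order so candidate checks stay contiguous.
        from bisect import bisect_right

        values = [61, 3, 99, 42, 55, 17, 91, 78]
        edges  = [25, 50, 75]                      # 4 bins

        # Build the order-preserving clustering: rows sorted by (bin, value).
        order     = sorted(range(len(values)),
                           key=lambda r: (bisect_right(edges, values[r]), values[r]))
        clustered = [values[r] for r in order]     # what OrBiC would lay out contiguously
        starts    = {}
        for pos, r in enumerate(order):
            starts.setdefault(bisect_right(edges, values[r]), pos)

        # Query values < 60: bins 0 and 1 qualify outright; bin 2 ([50, 75)) needs a
        # check that now reads only clustered[starts[2] : starts[3]].
        lo, hi  = starts[2], starts.get(3, len(clustered))
        checked = [v for v in clustered[lo:hi] if v < 60]
        answer  = clustered[:lo] + checked
        print(answer)    # -> [3, 17, 42, 55]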

  13. Stormwater Pollution Prevention Plan for the TA-60-01 Heavy Equipment Shop, Los Alamos National Laboratory, Revision 3, January 2018

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burgin, Jillian Elizabeth

    This Storm Water Pollution Prevention Plan (SWPPP) was developed in accordance with the provisions of the Clean Water Act (33 U.S.C. §§1251 et seq., as amended), and the Multi-Sector General Permit for Storm Water Discharges Associated with Industrial Activity (U.S. EPA, June 2015) issued by the U.S. Environmental Protection Agency (EPA) for the National Pollutant Discharge Elimination System (NPDES), using the industry-specific permit requirements for Sector P (Land Transportation and Warehousing) as a guide. This SWPPP applies to discharges of stormwater from the operational areas of the TA-60-01 Heavy Equipment Shop at Los Alamos National Laboratory. Los Alamos National Laboratory (also referred to as LANL or the “Laboratory”) is owned by the Department of Energy (DOE), and is operated by Los Alamos National Security, LLC (LANS). Throughout this document, the term “facility” refers to the TA-60-01 Heavy Equipment Shop and associated areas. The current permit expires at midnight on June 4, 2020.

  14. Data management for geospatial vulnerability assessment of interdependencies in US power generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shih, C.Y.; Scown, C.D.; Soibelman, L.

    2009-09-15

    Critical infrastructures maintain our society's stability, security, and quality of life. These systems are also interdependent, which means that the disruption of one infrastructure system can significantly impact the operation of other systems. Because of the heavy reliance on electricity production, it is important to assess possible vulnerabilities. Determining the source of these vulnerabilities can provide insight for risk management and emergency response efforts. This research uses data warehousing and visualization techniques to explore the interdependencies between coal mines, rail transportation, and electric power plants. By merging geospatial and nonspatial data, we are able to model the potential impacts of a disruption to one or more mines, rail lines, or power plants, and visually display the results using a geographical information system. A scenario involving a severe earthquake in the New Madrid Seismic Zone is used to demonstrate the capabilities of the model when given input in the form of a potentially impacted area. This type of interactive analysis can help decision makers to understand the vulnerabilities of the coal distribution network and the potential impact it can have on electricity production.

  15. BigMouth: a multi-institutional dental data repository.

    PubMed

    Walji, Muhammad F; Kalenderian, Elsbeth; Stark, Paul C; White, Joel M; Kookal, Krishna K; Phan, Dat; Tran, Duong; Bernstam, Elmer V; Ramoni, Rachel

    2014-01-01

    Few oral health databases are available for research and the advancement of evidence-based dentistry. In this work we developed a centralized data repository derived from electronic health records (EHRs) at four dental schools participating in the Consortium of Oral Health Research and Informatics. A multi-stakeholder committee developed a data governance framework that encouraged data sharing while allowing control of contributed data. We adopted the i2b2 data warehousing platform and mapped data from each institution to a common reference terminology. We realized that dental EHRs urgently need to adopt common terminologies. While all used the same treatment code set, only three of the four sites used a common diagnostic terminology, and there were wide discrepancies in how medical and dental histories were documented. BigMouth was successfully launched in August 2012 with data on 1.1 million patients, and made available to users at the contributing institutions. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  16. Extending the NIF DISCO framework to automate complex workflow: coordinating the harvest and integration of data from diverse neuroscience information resources.

    PubMed

    Marenco, Luis N; Wang, Rixin; Bandrowski, Anita E; Grethe, Jeffrey S; Shepherd, Gordon M; Miller, Perry L

    2014-01-01

    This paper describes how DISCO, the data aggregator that supports the Neuroscience Information Framework (NIF), has been extended to play a central role in automating the complex workflow required to support and coordinate the NIF's data integration capabilities. The NIF is an NIH Neuroscience Blueprint initiative designed to help researchers access the wealth of data related to the neurosciences available via the Internet. A central component is the NIF Federation, a searchable database that currently contains data from 231 data and information resources regularly harvested, updated, and warehoused in the DISCO system. In the past several years, DISCO has greatly extended its functionality and has evolved to play a central role in automating the complex, ongoing process of harvesting, validating, integrating, and displaying neuroscience data from a growing set of participating resources. This paper provides an overview of DISCO's current capabilities and discusses a number of the challenges and future directions related to the process of coordinating the integration of neuroscience data within the NIF Federation.

  17. Map Matching and Real World Integrated Sensor Data Warehousing (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burton, E.

    2014-02-01

    The inclusion of interlinked temporal and spatial elements within integrated sensor data enables a tremendous degree of flexibility when analyzing multi-component datasets. The presentation illustrates how to warehouse, process, and analyze high-resolution integrated sensor datasets to support complex system analysis at the entity and system levels. The example cases presented utilize in-vehicle sensor system data to assess vehicle performance, integrating a map matching algorithm that links vehicle data to roads to demonstrate the enhanced analysis made possible by interlinking data elements. Furthermore, in addition to the flexibility provided, the examples presented illustrate concepts of maintaining proprietary operational information (Fleet DNA) and the privacy of study participants (Transportation Secure Data Center) while producing widely distributed data products. Should real-time operational data be logged at high resolution across multiple infrastructure types, map matched to their associated infrastructure, and distributed employing a similar approach, dependencies between urban infrastructure components could be better understood. This understanding is especially crucial for the cities of the future, where transportation will rely more on grid infrastructure to support its energy demands.
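
    A very small map-matching sketch under assumed coordinates (not the Fleet DNA or Transportation Secure Data Center code): each GPS sample is linked to the nearest road segment by point-to-segment distance, which is the interlinking step the presentation describes.

        # Minimal map-matching sketch: assign each GPS point to its nearest road segment.
        import math

        roads = {                                   # segment id -> ((x1, y1), (x2, y2))
            "main_st": ((0.0, 0.0), (10.0, 0.0)),
            "oak_ave": ((5.0, -5.0), (5.0, 5.0)),
        }

        def dist_to_segment(p, a, b):
            (px, py), (ax, ay), (bx, by) = p, a, b
            dx, dy = bx - ax, by - ay
            t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
            return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

        def match(point):
            return min(roads, key=lambda rid: dist_to_segment(point, *roads[rid]))

        gps_trace = [(1.2, 0.3), (4.9, 1.8), (5.1, 4.0)]
        for p in gps_trace:
            print(p, "->", match(p))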

  18. Hiring discrimination against people with disabilities under the ADA: characteristics of employers.

    PubMed

    McMahon, Brian T; Rumrill, Philip D; Roessler, Richard; Hurley, Jessica E; West, Steven L; Chan, Fong; Carlson, Linnea

    2008-06-01

    This article describes findings from a causal comparative study of the characteristics of employers against whom allegations of hiring discrimination were filed with the U.S. Equal Employment Opportunity Commission (EEOC) under Title I of the Americans with Disabilities Act (ADA) between 1992 and 2005. Employer characteristics derived from 19,527 closed Hiring allegations are compared and contrasted to 259,680 closed allegations aggregated from six other prevalent forms of discrimination including Discharge and Constructive Discharge, Reasonable Accommodation, Disability Harassment and Intimidation, and Terms and Conditions of Employment. Tests of Proportion distributed as chi-square are used to form comparisons along a variety of factors including industry classification, size of workforce, and location. As compared to non-hiring allegations, hiring allegations were more likely to be filed against employers with 15-100 employees, in the West U.S. Census region, or in industries including educational services; public administration; transportation and warehousing; professional, scientific, and technical services; agriculture, forestry, fishing, and hunting; and construction. More outreach regarding ADA responsibilities appears indicated for those employers who share the aforementioned characteristics.

  19. Semantic web data warehousing for caGrid.

    PubMed

    McCusker, James P; Phillips, Joshua A; González Beltrán, Alejandra; Finkelstein, Anthony; Krauthammer, Michael

    2009-10-01

    The National Cancer Institute (NCI) is developing caGrid as a means for sharing cancer-related data and services. As more data sets become available on caGrid, we need effective ways of accessing and integrating this information. Although the data models exposed on caGrid are semantically well annotated, it is currently up to the caGrid client to infer relationships between the different models and their classes. In this paper, we present a Semantic Web-based data warehouse (Corvus) for creating relationships among caGrid models. This is accomplished through the transformation of semantically-annotated caBIG Unified Modeling Language (UML) information models into Web Ontology Language (OWL) ontologies that preserve those semantics. We demonstrate the validity of the approach by Semantic Extraction, Transformation and Loading (SETL) of data from two caGrid data sources, caTissue and caArray, as well as alignment and query of those sources in Corvus. We argue that semantic integration is necessary for integration of data from distributed web services and that Corvus is a useful way of accomplishing this. Our approach is generalizable and of broad utility to researchers facing similar integration challenges.

  20. Work-related musculoskeletal disorder surveillance using the Washington state workers' compensation system: Recent declines and patterns by industry, 1999-2013.

    PubMed

    Marcum, Jennifer; Adams, Darrin

    2017-05-01

    Work-related musculoskeletal disorders (WMSDs) are common and place large economic and social burdens on workers and their communities. We describe recent WMSD trends and patterns of WMSD incidence among the Washington worker population by industry. We used Washington State's workers' compensation compensable claims from 1999 to 2013 to describe incidence and cost of WMSD claims by body part and diagnosis, and to identify high-risk industries. WMSD claim rates declined by an estimated annual 5.4% (95% CI: 5.0-5.9%) in Washington State from 1999 to 2013, but WMSDs continue to account for over 40% of all compensable claims. High risk industries identified were Construction; Transportation and Warehousing; Health Care and Social Assistance; and Manufacturing. As documented in other North American contexts, this study describes an important decline in the incidence of WMSDs. The Washington State workers' compensation system provides a rich data source for the surveillance of WMSDs. © Published 2017. This article is a U.S. Government work and is in the public domain in the USA.

  1. Stormwater Pollution Prevention Plan - TA-60 Roads and Grounds Facility and Associated Sigma Mesa Staging Area

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandoval, Leonard Frank

    This Storm Water Pollution Prevention Plan (SWPPP) was developed in accordance with the provisions of the Clean Water Act (33 U.S.C. §§1251 et seq., as amended), and the Multi-Sector General Permit for Storm Water Discharges Associated with Industrial Activity (U.S. EPA, June 2015) issued by the U.S. Environmental Protection Agency (EPA) for the National Pollutant Discharge Elimination System (NPDES), using the industry-specific permit requirements for Sector P (Land Transportation and Warehousing) as a guide. This SWPPP applies to discharges of stormwater from the operational areas of the TA-60 Roads and Grounds and Associated Sigma Mesa Staging Area at Los Alamos National Laboratory. Los Alamos National Laboratory (also referred to as LANL or the “Laboratory”) is owned by the Department of Energy (DOE), and is operated by Los Alamos National Security, LLC (LANS). Throughout this document, the term “facility” refers to the TA-60 Roads and Grounds and Associated Sigma Mesa Staging Area. The current permit expires at midnight on June 4, 2020.

  2. Ontology-based data integration between clinical and research systems.

    PubMed

    Mate, Sebastian; Köpcke, Felix; Toddenroth, Dennis; Martin, Marcus; Prokosch, Hans-Ulrich; Bürkle, Thomas; Ganslandt, Thomas

    2015-01-01

    Data from the electronic medical record comprise numerous structured but uncoded elements, which are not linked to standard terminologies. Reuse of such data for secondary research purposes has gained in importance recently. However, the identification of relevant data elements and the creation of database jobs for extraction, transformation and loading (ETL) are challenging: With current methods such as data warehousing, it is not feasible to efficiently maintain and reuse semantically complex data extraction and transformation routines. We present an ontology-supported approach to overcome this challenge by making use of abstraction: Instead of defining ETL procedures at the database level, we use ontologies to organize and describe the medical concepts of both the source system and the target system. Instead of using unique, specifically developed SQL statements or ETL jobs, we define declarative transformation rules within ontologies and illustrate how these constructs can then be used to automatically generate SQL code to perform the desired ETL procedures. This demonstrates how a suitable level of abstraction may not only aid the interpretation of clinical data, but can also foster the reutilization of methods for unlocking it.
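
    A minimal sketch of the declarative-rule idea, with assumed concept names, table names, and rule format (not the authors' ontology constructs): transformation rules are stored as data and the ETL SQL is generated from them rather than hand-written.

        # Sketch of generating ETL SQL from declarative mappings instead of
        # hand-writing each statement (all names below are illustrative assumptions).
        RULES = [
            # (target concept, source table, source column, condition)
            ("Diagnosis.ICD10", "emr_diag", "icd_code",   "icd_version = 10"),
            ("Lab.Creatinine",  "emr_labs", "result_num", "loinc = '2160-0'"),
        ]

        def rule_to_sql(target, table, column, condition):
            concept = target.replace(".", "_").lower()
            return (f"INSERT INTO warehouse_fact (concept, value)\n"
                    f"  SELECT '{concept}', {column} FROM {table} WHERE {condition};")

        for rule in RULES:
            print(rule_to_sql(*rule))
            print()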

  3. Ensembl BioMarts: a hub for data retrieval across taxonomic space.

    PubMed

    Kinsella, Rhoda J; Kähäri, Andreas; Haider, Syed; Zamora, Jorge; Proctor, Glenn; Spudich, Giulietta; Almeida-King, Jeff; Staines, Daniel; Derwent, Paul; Kerhornou, Arnaud; Kersey, Paul; Flicek, Paul

    2011-01-01

    For a number of years the BioMart data warehousing system has proven to be a valuable resource for scientists seeking a fast and versatile means of accessing the growing volume of genomic data provided by the Ensembl project. The launch of the Ensembl Genomes project in 2009 complemented the Ensembl project by utilizing the same visualization, interactive and programming tools to provide users with a means for accessing genome data from a further five domains: protists, bacteria, metazoa, plants and fungi. The Ensembl and Ensembl Genomes BioMarts provide a point of access to the high-quality gene annotation, variation data, functional and regulatory annotation and evolutionary relationships from genomes spanning the taxonomic space. This article aims to give a comprehensive overview of the Ensembl and Ensembl Genomes BioMarts as well as some useful examples and a description of current data content and future objectives. Database URLs: http://www.ensembl.org/biomart/martview/; http://metazoa.ensembl.org/biomart/martview/; http://plants.ensembl.org/biomart/martview/; http://protists.ensembl.org/biomart/martview/; http://fungi.ensembl.org/biomart/martview/; http://bacteria.ensembl.org/biomart/martview/.

  4. A Dimensional Bus model for integrating clinical and research data

    PubMed Central

    Hum, Richard C; Murphy, James R

    2011-01-01

    Objectives Many clinical research data integration platforms rely on the Entity–Attribute–Value model because of its flexibility, even though it presents problems in query formulation and execution time. The authors sought more balance in these traits. Materials and Methods Borrowing concepts from Entity–Attribute–Value and from enterprise data warehousing, the authors designed an alternative called the Dimensional Bus model and used it to integrate electronic medical record, sponsored study, and biorepository data. Each type of observational collection has its own table, and the structure of these tables varies to suit the source data. The observational tables are linked to the Bus, which holds provenance information and links to various classificatory dimensions that amplify the meaning of the data or facilitate its query and exposure management. Results The authors implemented a Bus-based clinical research data repository with a query system that flexibly manages data access and confidentiality, facilitates catalog search, and readily formulates and compiles complex queries. Conclusion The design provides a workable way to manage and query mixed schemas in a data warehouse. PMID:21856687

  5. Systematic analysis of snake neurotoxins' functional classification using a data warehousing approach.

    PubMed

    Siew, Joyce Phui Yee; Khan, Asif M; Tan, Paul T J; Koh, Judice L Y; Seah, Seng Hong; Koo, Chuay Yeng; Chai, Siaw Ching; Armugam, Arunmozhiarasi; Brusic, Vladimir; Jeyaseelan, Kandiah

    2004-12-12

    Sequence annotations, functional and structural data on snake venom neurotoxins (svNTXs) are scattered across multiple databases and literature sources. Sequence annotations and structural data are available in the public molecular databases, while functional data are almost exclusively available in the published articles. There is a need for a specialized svNTXs database that contains NTX entries, which are organized, well annotated and classified in a systematic manner. We have systematically analyzed svNTXs and classified them using structure-function groups based on their structural, functional and phylogenetic properties. Using conserved motifs in each phylogenetic group, we built an intelligent module for the prediction of structural and functional properties of unknown NTXs. We also developed an annotation tool to aid the functional prediction of newly identified NTXs as an additional resource for the venom research community. We created a searchable online database of NTX proteins sequences (http://research.i2r.a-star.edu.sg/Templar/DB/snake_neurotoxin). This database can also be found under Swiss-Prot Toxin Annotation Project website (http://www.expasy.org/sprot/).

  6. Heterogeneous Biomedical Database Integration Using a Hybrid Strategy: A p53 Cancer Research Database

    PubMed Central

    Bichutskiy, Vadim Y.; Colman, Richard; Brachmann, Rainer K.; Lathrop, Richard H.

    2006-01-01

    Complex problems in life science research give rise to multidisciplinary collaboration, and hence, to the need for heterogeneous database integration. The tumor suppressor p53 is mutated in close to 50% of human cancers, and a small drug-like molecule with the ability to restore native function to cancerous p53 mutants is a long-held medical goal of cancer treatment. The Cancer Research DataBase (CRDB) was designed in support of a project to find such small molecules. As a cancer informatics project, the CRDB involved small molecule data, computational docking results, functional assays, and protein structure data. As an example of the hybrid strategy for data integration, it combined the mediation and data warehousing approaches. This paper uses the CRDB to illustrate the hybrid strategy as a viable approach to heterogeneous data integration in biomedicine, and provides a design method for those considering similar systems. More efficient data sharing implies increased productivity, and, hopefully, improved chances of success in cancer research. (Code and database schemas are freely downloadable, http://www.igb.uci.edu/research/research.html.) PMID:19458771

  7. Assessing a cross-border logistics policy using a performance measurement system framework: the case of Hong Kong and the Pearl River Delta region

    NASA Astrophysics Data System (ADS)

    Wong, David W. C.; Choy, K. L.; Chow, Harry K. H.; Lin, Canhong

    2014-06-01

    For the most rapidly growing economic entity in the world, China, a new logistics operation called the indirect cross-border supply chain model has recently emerged. The primary idea of this model is to reduce logistics costs by storing goods at a bonded warehouse with low storage cost in certain Chinese regions, such as the Pearl River Delta (PRD). This research proposes a performance measurement system (PMS) framework to assess the direct and indirect cross-border supply chain models. The PMS covers four categories, including cost, time, quality and flexibility, in the assessment of the performance of the direct and indirect models. Furthermore, a survey was conducted to investigate the logistics performance of third party logistics providers (3PLs) in the PRD region, including Guangzhou, Shenzhen and Hong Kong. The proposed PMS framework allows 3PLs to accurately pinpoint the weaknesses and strengths of their current operations policy across the four major performance measurement categories. Hence, this helps 3PLs further enhance competitiveness and operational efficiency through better resource allocation in warehousing and transportation.

  8. Adapting Experiential Learning to Develop Problem-Solving Skills in Deaf and Hard-of-Hearing Engineering Students.

    PubMed

    Marshall, Matthew M; Carrano, Andres L; Dannels, Wendy A

    2016-10-01

    Individuals who are deaf and hard-of-hearing (DHH) are underrepresented in science, technology, engineering, and mathematics (STEM) professions, and this may be due in part to their level of preparation in the development and retention of mathematical and problem-solving skills. An approach was developed that incorporates experiential learning and best practices of STEM instruction to give first-year DHH students enrolled in a postsecondary STEM program the opportunity to develop problem-solving skills in real-world scenarios. Using an industrial engineering laboratory that provides manufacturing and warehousing environments, students were immersed in real-world scenarios in which they worked on teams to address prescribed problems encountered during the activities. The highly structured, Plan-Do-Check-Act approach commonly used in industry was adapted for the DHH student participants to document and communicate the problem-solving steps. Students who experienced the intervention realized a 14.6% improvement in problem-solving proficiency compared with a control group, and this gain was retained at 6 and 12 months, post-intervention. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  9. Ontology-Based Data Integration between Clinical and Research Systems

    PubMed Central

    Mate, Sebastian; Köpcke, Felix; Toddenroth, Dennis; Martin, Marcus; Prokosch, Hans-Ulrich

    2015-01-01

    Data from the electronic medical record comprise numerous structured but uncoded elements, which are not linked to standard terminologies. Reuse of such data for secondary research purposes has gained in importance recently. However, the identification of relevant data elements and the creation of database jobs for extraction, transformation and loading (ETL) are challenging: With current methods such as data warehousing, it is not feasible to efficiently maintain and reuse semantically complex data extraction and transformation routines. We present an ontology-supported approach to overcome this challenge by making use of abstraction: Instead of defining ETL procedures at the database level, we use ontologies to organize and describe the medical concepts of both the source system and the target system. Instead of using unique, specifically developed SQL statements or ETL jobs, we define declarative transformation rules within ontologies and illustrate how these constructs can then be used to automatically generate SQL code to perform the desired ETL procedures. This demonstrates how a suitable level of abstraction may not only aid the interpretation of clinical data, but can also foster the reutilization of methods for unlocking it. PMID:25588043

  10. Occupation and chronic bronchitis among Chinese women

    PubMed Central

    Krstev, Srmena; Ji, Bu-Tian; Shu, Xiao-Ou; Gao, Yu-Tang; Blair, Aaron; Lubin, Jay; Vermeulen, Roel; Dosemeci, Mustafa; Zheng, Wei; Rothman, Nathaniel; Chow, Wong-Ho

    2011-01-01

    Objective To examine the association between occupation and chronic bronchitis among a cross-section of Chinese women who participated in the Shanghai Women’s Health Study (SWHS). Methods Cases were 4,873 women who self-reported a physician-diagnosed bronchitis during adulthood. Controls were 9,746 women randomly selected from SWHS participants and matched to the cases by year of birth and age at diagnosis. Lifetime occupational histories were obtained. Logistic regressions were used to evaluate the association between chronic bronchitis and occupation, adjusting for smoking, education, family income, and concurrent asthma. Results We observed excess prevalence of bronchitis for textile occupation (OR=1.09; 1.01–1.18) and industry (OR=1.11; 1.04–1.25), welders (OR=1.40; 1.01–1.92), packing and baling workers (OR=1.39; 1.15–1.68), and warehousing industry (OR=1.58; 1.08–2.30). We also identified several new associations that may warrant further exploration and confirmation, including employment in some metal fabrication industries, postal and telecommunication industry, and a few white collar occupations and industries. Conclusions Our study indicates that the risk of chronic bronchitis among women may be increased in some occupations and industries. PMID:18188083

  11. Archival storage solutions for PACS

    NASA Astrophysics Data System (ADS)

    Chunn, Timothy

    1997-05-01

    While there are many, one of the inhibitors to the widespread diffusion of PACS systems has been the lack of robust, cost-effective digital archive storage solutions. Moreover, an automated Nearline solution is key to a central, sharable data repository, enabling many applications such as PACS, telemedicine and teleradiology, and information warehousing and data mining for research such as patient outcome analysis. Selecting the right solution depends on a number of factors: capacity requirements, write and retrieval performance requirements, scalability in capacity and performance, configuration architecture and flexibility, subsystem availability and reliability, security requirements, system cost, achievable benefits and cost savings, investment protection, strategic fit and more. This paper addresses many of these issues. It compares and positions optical disk and magnetic tape technologies, which are the predominant archive mediums today. Price and performance comparisons will be made at different archive capacities, plus the effect of file size on storage system throughput will be analyzed. The concept of automated migration of images from high performance, high cost storage devices to high capacity, low cost storage devices will be introduced as a viable way to minimize overall storage costs for an archive. The concept of access density will also be introduced and applied to the selection of the most cost effective archive solution.

  12. Using ZFIN: Data Types, Organization, and Retrieval.

    PubMed

    Van Slyke, Ceri E; Bradford, Yvonne M; Howe, Douglas G; Fashena, David S; Ramachandran, Sridhar; Ruzicka, Leyla

    2018-01-01

    The Zebrafish Model Organism Database (ZFIN; zfin.org) was established in 1994 as the primary genetic and genomic resource for the zebrafish research community. Some of the earliest records in ZFIN were for people and laboratories. Since that time, services and data types provided by ZFIN have grown considerably. Today, ZFIN provides the official nomenclature for zebrafish genes, mutants, and transgenics and curates many data types including gene expression, phenotypes, Gene Ontology, models of human disease, orthology, knockdown reagents, transgenic constructs, and antibodies. Ontologies are used throughout ZFIN to structure these expertly curated data. An integrated genome browser provides genomic context for genes, transgenics, mutants, and knockdown reagents. ZFIN also supports a community wiki where the research community can post new antibody records and research protocols. Data in ZFIN are accessible via web pages, download files, and the ZebrafishMine (zebrafishmine.org), an installation of the InterMine data warehousing software. Searching for data at ZFIN utilizes both parameterized search forms and a single box search for searching or browsing data quickly. This chapter aims to describe the primary ZFIN data and services, and provide insight into how to use and interpret ZFIN searches, data, and web pages.

  13. [Organization and technology in the catering sector].

    PubMed

    Tinarelli, Arnaldo

    2014-01-01

    The catering industry is a service characterized by a contract between customer and supplier. In institutional catering, the customer is the public administration; in private catering, the customer is a private party. The annual catering trade is worth about 6.74 billion euros, equally distributed between the health sector (hospitals, nursing homes), the school sector and the business sector (workplace food service), with the participation of nearly 1,200 firms and 70,000 workers. Major services include off-premises catering (food prepared away from the location where it is served) and on-premises catering (meals prepared and served at the same place). Several tools and machines are used during warehousing and food refrigeration operations, and during the preparation, cooking, packaging and transport of meals. In this sector, injuries, which are rarely serious or fatal, have shown a downward trend in recent years. On the contrary, the number of occupational diseases shows an upward trend. In the near future, firms should become global outsourcers, able to provide other services such as cleaning, transport and maintenance. In addition, they should invest in innovation: from tool and machinery technology to work organization, and from factory layout to safety and health in the workplace.

  14. Technical description of RODS: a real-time public health surveillance system.

    PubMed

    Tsui, Fu-Chiang; Espino, Jeremy U; Dato, Virginia M; Gesteland, Per H; Hutman, Judith; Wagner, Michael M

    2003-01-01

    This report describes the design and implementation of the Real-time Outbreak and Disease Surveillance (RODS) system, a computer-based public health surveillance system for early detection of disease outbreaks. Hospitals send RODS data from clinical encounters over virtual private networks and leased lines using the Health Level 7 (HL7) message protocol. The data are sent in real time. RODS automatically classifies the registration chief complaint from the visit into one of seven syndrome categories using Bayesian classifiers. It stores the data in a relational database, aggregates the data for analysis using data warehousing techniques, applies univariate and multivariate statistical detection algorithms to the data, and alerts users when the algorithms identify anomalous patterns in the syndrome counts. RODS also has a Web-based user interface that supports temporal and spatial analyses. RODS processes sales of over-the-counter health care products in a similar manner but receives such data in batch mode on a daily basis. RODS was used during the 2002 Winter Olympics and currently operates in two states, Pennsylvania and Utah. It has been and continues to be a resource for implementing, evaluating, and applying new methods of public health surveillance.
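
    A toy naive Bayes chief-complaint classifier in the spirit of the RODS description, trained on a few made-up complaints; it is not the RODS classifier and does not use its seven syndrome categories.

        # Toy naive Bayes chief-complaint classifier (tiny fabricated training data).
        from collections import Counter, defaultdict
        import math

        train = [
            ("cough and fever",       "respiratory"),
            ("shortness of breath",   "respiratory"),
            ("vomiting and diarrhea", "gastrointestinal"),
            ("nausea abdominal pain", "gastrointestinal"),
        ]

        prior = Counter(label for _, label in train)
        words = defaultdict(Counter)
        for text, label in train:
            words[label].update(text.split())
        vocab = {w for c in words.values() for w in c}

        def classify(complaint):
            def score(label):
                total = sum(words[label].values())
                s = math.log(prior[label] / len(train))
                for w in complaint.split():                      # Laplace smoothing
                    s += math.log((words[label][w] + 1) / (total + len(vocab)))
                return s
            return max(prior, key=score)

        print(classify("fever and cough"))       # -> respiratory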

  15. WATCHMAN: A Data Warehouse Intelligent Cache Manager

    NASA Technical Reports Server (NTRS)

    Scheuermann, Peter; Shim, Junho; Vingralek, Radek

    1996-01-01

    Data warehouses store large volumes of data which are used frequently by decision support applications. Such applications involve complex queries. Query performance in such an environment is critical because decision support applications often require interactive query response time. Because data warehouses are updated infrequently, it becomes possible to improve query performance by caching sets retrieved by queries in addition to query execution plans. In this paper we report on the design of WATCHMAN, an intelligent cache manager for sets retrieved by queries, which is particularly well suited for the data warehousing environment. Our cache manager employs two novel, complementary algorithms for cache replacement and for cache admission. WATCHMAN aims at minimizing query response time, and its cache replacement policy swaps out entire retrieved sets of queries instead of individual pages. The cache replacement and admission algorithms make use of a profit metric, which considers for each retrieved set its average rate of reference, its size, and the execution cost of the associated query. We report on a performance evaluation based on the TPC-D and Set Query benchmarks. These experiments show that WATCHMAN achieves a substantial performance improvement in a decision support environment when compared to a traditional LRU replacement algorithm.
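
    A back-of-the-envelope sketch of the profit metric described above, with invented cache entries: each retrieved set is scored by reference rate times query execution cost divided by size, and whole sets with the lowest profit are evicted first.

        # Sketch of a profit-based cache replacement policy (not the WATCHMAN code):
        # profit = reference rate * execution cost / size; evict lowest profit first.
        def profit(entry):
            return entry["ref_rate"] * entry["exec_cost"] / entry["size"]

        cache = {
            "q_sales_by_region": {"ref_rate": 0.9, "exec_cost": 40.0, "size": 8.0},
            "q_daily_detail":    {"ref_rate": 0.2, "exec_cost": 15.0, "size": 50.0},
            "q_top_customers":   {"ref_rate": 0.6, "exec_cost": 25.0, "size": 2.0},
        }

        def evict_until(cache, capacity):
            """Drop whole retrieved sets, lowest profit first, until they fit."""
            victims = sorted(cache, key=lambda k: profit(cache[k]))
            while sum(e["size"] for e in cache.values()) > capacity and victims:
                cache.pop(victims.pop(0))
            return cache

        print(evict_until(cache, capacity=20.0).keys())   # q_daily_detail is evicted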

  16. Depth of manual dismantling analysis: a cost-benefit approach.

    PubMed

    Achillas, Ch; Aidonis, D; Vlachokostas, Ch; Karagiannidis, A; Moussiopoulos, N; Loulos, V

    2013-04-01

    This paper presents a decision support tool for manufacturers and recyclers towards end-of-life strategies for waste electrical and electronic equipment. A mathematical formulation based on the cost-benefit analysis concept is herein analytically described in order to determine the parts and/or components of an obsolete product that should be either non-destructively recovered for reuse or recycled. The framework optimally determines the depth of disassembly for a given product, taking into account economic considerations. On this basis, it embeds all relevant cost elements to be included in the decision-making process, such as recovered materials and (depreciated) parts/components, labor costs, energy consumption, equipment depreciation, quality control and warehousing. This tool can be part of the strategic decision-making process in order to maximize profitability or minimize end-of-life management costs. A case study to demonstrate the model's applicability is presented for a typical electronic product in terms of structure and material composition. Taking into account the market values of the pilot product's components, manual disassembly is shown to be profitable, with the marginal revenues from recovered reusable materials estimated at 2.93-23.06 €, depending on the level of disassembly. Copyright © 2013 Elsevier Ltd. All rights reserved.
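
    A small sketch of the cost-benefit choice of disassembly depth, using invented cumulative revenues and costs rather than the paper's cost elements: the depth with the largest net revenue is selected.

        # Sketch of choosing the disassembly depth that maximizes net revenue
        # (illustrative numbers only; the paper's cost model is far richer).
        levels = [
            # depth, cumulative revenue from recovered parts (EUR), cumulative cost (EUR)
            (0, 0.00,  0.00),
            (1, 4.10,  1.20),    # e.g. casing removed
            (2, 12.60, 5.40),    # boards and connectors recovered
            (3, 26.80, 9.70),    # full dismantling
        ]

        best = max(levels, key=lambda lvl: lvl[1] - lvl[2])
        print(f"optimal depth: {best[0]}, net revenue: {best[1] - best[2]:.2f} EUR")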

  17. Semantic web data warehousing for caGrid

    PubMed Central

    McCusker, James P; Phillips, Joshua A; Beltrán, Alejandra González; Finkelstein, Anthony; Krauthammer, Michael

    2009-01-01

    The National Cancer Institute (NCI) is developing caGrid as a means for sharing cancer-related data and services. As more data sets become available on caGrid, we need effective ways of accessing and integrating this information. Although the data models exposed on caGrid are semantically well annotated, it is currently up to the caGrid client to infer relationships between the different models and their classes. In this paper, we present a Semantic Web-based data warehouse (Corvus) for creating relationships among caGrid models. This is accomplished through the transformation of semantically-annotated caBIG® Unified Modeling Language (UML) information models into Web Ontology Language (OWL) ontologies that preserve those semantics. We demonstrate the validity of the approach by Semantic Extraction, Transformation and Loading (SETL) of data from two caGrid data sources, caTissue and caArray, as well as alignment and query of those sources in Corvus. We argue that semantic integration is necessary for integration of data from distributed web services and that Corvus is a useful way of accomplishing this. Our approach is generalizable and of broad utility to researchers facing similar integration challenges. PMID:19796399

  18. Medical Big Data Warehouse: Architecture and System Design, a Case Study: Improving Healthcare Resources Distribution.

    PubMed

    Sebaa, Abderrazak; Chikh, Fatima; Nouicer, Amina; Tari, AbdelKamel

    2018-02-19

    The huge increase in medical devices and clinical applications which generate enormous data has raised a big issue in managing, processing, and mining this massive amount of data. Indeed, traditional data warehousing frameworks cannot be effective when managing the volume, variety, and velocity of current medical applications. As a result, several data warehouses face many issues over medical data and many challenges need to be addressed. New solutions have emerged and Hadoop is one of the best examples; it can be used to process these streams of medical data. However, without an efficient system design and architecture, this performance will not be significant and valuable for medical managers. In this paper, we provide a short review of the literature about research issues of traditional data warehouses and we present some important Hadoop-based data warehouses. In addition, a Hadoop-based architecture and a conceptual data model for designing a medical Big Data warehouse are given. In our case study, we provide implementation details of a big data warehouse based on the proposed architecture and data model on the Apache Hadoop platform to ensure an optimal allocation of health resources.

  19. Screening of the aerodynamic and biophysical properties of barley malt

    NASA Astrophysics Data System (ADS)

    Ghodsvali, Alireza; Farzaneh, Vahid; Bakhshabadi, Hamid; Zare, Zahra; Karami, Zahra; Mokhtarian, Mohsen; Carvalho, Isabel. S.

    2016-10-01

    An understanding of the aerodynamic and biophysical properties of barley malt is necessary for the appropriate design of equipment for the handling, shipping, dehydration, grading, sorting and warehousing of this strategic crop. Malting is a complex biotechnological process that includes steeping, germination and, finally, the dehydration of cereal grains under controlled temperature and humidity conditions. In this investigation, the biophysical properties of barley malt were predicted using two models: an artificial neural network and response surface methodology. Steeping time and germination time were selected as the independent variables, and 1,000-kernel weight, kernel density and terminal velocity were selected as the dependent variables (responses). The obtained outcomes showed that the artificial neural network model, with a logarithmic sigmoid activation function, presents more precise results than the response surface model in the prediction of the aerodynamic and biophysical properties of produced barley malt. This model presented the best result with 8 nodes in the hidden layer, and significant correlation coefficient values of 0.783, 0.767 and 0.991 were obtained for the responses 1,000-kernel weight, kernel density, and terminal velocity, respectively. The outcomes indicated that this novel technique could be successfully applied in quantitative and qualitative monitoring within the malting process.
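
    A sketch of the kind of network the abstract describes, i.e. one hidden layer of eight nodes with a logistic sigmoid activation mapping steeping and germination times to the three responses; the training data here are synthetic stand-ins, not the study's measurements.

        # One hidden layer of 8 nodes with a logistic (sigmoid) activation, trained on
        # fabricated stand-in data for the three responses.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.uniform([24, 72], [72, 144], size=(60, 2))      # steeping h, germination h
        y = np.column_stack([
            35 + 0.1 * X[:, 0] + rng.normal(0, 1, 60),          # stand-in 1,000-kernel weight
            1.2 - 0.002 * X[:, 1] + rng.normal(0, 0.02, 60),    # stand-in kernel density
            6 + 0.01 * X.sum(axis=1) + rng.normal(0, 0.1, 60),  # stand-in terminal velocity
        ])

        model = MLPRegressor(hidden_layer_sizes=(8,), activation="logistic",
                             max_iter=5000, random_state=0).fit(X, y)
        print(model.predict([[48, 96]]))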

  20. PeptideDepot: flexible relational database for visual analysis of quantitative proteomic data and integration of existing protein information.

    PubMed

    Yu, Kebing; Salomon, Arthur R

    2009-12-01

    Recently, dramatic progress has been achieved in expanding the sensitivity, resolution, mass accuracy, and scan rate of mass spectrometers able to fragment and identify peptides through MS/MS. Unfortunately, this enhanced ability to acquire proteomic data has not been accompanied by a concomitant increase in the availability of flexible tools allowing users to rapidly assimilate, explore, and analyze this data and adapt to various experimental workflows with minimal user intervention. Here we fill this critical gap by providing a flexible relational database called PeptideDepot for organization of expansive proteomic data sets, collation of proteomic data with available protein information resources, and visual comparison of multiple quantitative proteomic experiments. Our software design, built upon the synergistic combination of a MySQL database for safe warehousing of proteomic data with a FileMaker-driven graphical user interface for flexible adaptation to diverse workflows, enables proteomic end-users to directly tailor the presentation of proteomic data to the unique analysis requirements of the individual proteomics lab. PeptideDepot may be deployed as an independent software tool or integrated directly with our high throughput autonomous proteomic pipeline used in the automated acquisition and post-acquisition analysis of proteomic data.

  1. Revealing biological information using data structuring and automated learning.

    PubMed

    Mohorianu, Irina; Moulton, Vincent

    2010-11-01

    The intermediary steps between a biological hypothesis, concretized in the input data, and meaningful results, validated using biological experiments, commonly employ bioinformatics tools. Starting with storage of the data and ending with a statistical analysis of the significance of the results, every step in a bioinformatics analysis has been intensively studied and the resulting methods and models patented. This review summarizes the bioinformatics patents that have been developed mainly for the study of genes, and points out the universal applicability of bioinformatics methods to other related studies such as RNA interference. More specifically, we overview the steps undertaken in the majority of bioinformatics analyses, highlighting, for each, various approaches that have been developed to reveal details from different perspectives. First we consider data warehousing, the first task that has to be performed efficiently, optimizing the structure of the database, in order to facilitate both the subsequent steps and the retrieval of information. Next, we review data mining, which occupies the central part of most bioinformatics analyses, presenting patents concerning differential expression, unsupervised and supervised learning. Last, we discuss how networks of interactions of genes or other players in the cell may be created, which help draw biological conclusions and have been described in several patents.

  2. Using detailed inter-network simulation and model abstraction to investigate and evaluate joint battlespace infosphere (JBI) support technologies

    NASA Astrophysics Data System (ADS)

    Green, David M.; Dallaire, Joel D.; Reaper, Jerome H.

    2004-08-01

    The Joint Battlespace Infosphere (JBI) program is performing a technology investigation into global communications, data mining and warehousing, and data fusion technologies by focusing on techniques and methodologies that support twenty-first century military distributed collaboration. Advancement of these technologies is vitally important if military decision makers are to have the right data, in the right format, at the right time and place to support making the right decisions within available timelines. A quantitative understanding of individual and combinational effects arising from the application of technologies within a framework is presently far too complex to evaluate at more than a cursory depth. In order to facilitate quantitative analysis under these circumstances, the Distributed Information Enterprise Modeling and Simulation (DIEMS) team was formed to apply modeling and simulation (M&S) techniques to help in addressing JBI analysis challenges. The DIEMS team has been tasked with utilizing collaborative distributed M&S architectures to quantitatively evaluate JBI technologies and tradeoffs. This paper first presents a high level view of the DIEMS project. Once this approach has been established, a more concentrated view of the detailed communications simulation techniques used in generating the underlying support data sets is presented.

  3. Using routinely gathered data to evaluate locally led service improvements.

    PubMed

    Stoddart, Gilly; Gale, Robert; Peat, Chantelle; McInnes, Sarah

    2011-07-01

    Background Between 2009 and 2010 NHS Ealing tested the feasibility of (a) combining data from more than one data domain at the same time to quantify patient movement across the primary care/acute hospital boundary, and (b) establishing online analyses so they can be constantly updated with near real-time data to compare different subsets of patients. The reports allowed us to see changes in hospital admissions before and after referral to community matrons of patients with complex conditions from one practice-based commissioning (PBC) group, and changes in hospital bed-days of all patients from one practice or PBC group during a complex intervention designed to assist inter-disciplinary collaboration. Results The teams leading the projects found that the reports gave them confidence in the projects and helped to influence local policy. Discussion GP consortia need to evaluate complex service improvements in order to contain costs and improve quality. They will find such reports helpful for giving ongoing feedback, and this may help to keep people engaged. Present plans for data warehousing in London do not have this ability: they do not combine data from across the whole health economy, and are focused either on claims validation or risk stratification.

  4. Relational-database model for improving quality assurance and process control in a composite manufacturing environment

    NASA Astrophysics Data System (ADS)

    Gentry, Jeffery D.

    2000-05-01

    A relational database is a powerful tool for collecting and analyzing the vast amounts of interrelated data associated with the manufacture of composite materials. A relational database contains many individual database tables that store data that are related in some fashion. Manufacturing process variables as well as quality assurance measurements can be collected and stored in database tables indexed according to lot numbers, part type or individual serial numbers. Relationships between the manufacturing process and product quality can then be correlated over a wide range of product types and process variations. This paper presents details on how relational databases are used to collect, store, and analyze process variables and quality assurance data associated with the manufacture of advanced composite materials. Important considerations are covered, including how the various types of data are organized and how relationships between the data are defined. Employing relational database techniques to establish correlative relationships between process variables and quality assurance measurements is then explored. Finally, the benefits of database techniques such as data warehousing, data mining and web-based client/server architectures are discussed in the context of composite material manufacturing.
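
    To make the table-and-key idea concrete, here is a small, hedged sketch using Python's built-in sqlite3 module; the table and column names (lots, process, qa, cure_temp_c, and so on) are invented for illustration and are not taken from the paper.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Three related tables, all keyed by lot number.
con.executescript("""
CREATE TABLE lots(lot_id TEXT PRIMARY KEY, part_type TEXT, cure_date TEXT);
CREATE TABLE process(lot_id TEXT REFERENCES lots(lot_id),
                     cure_temp_c REAL, cure_pressure_kpa REAL);
CREATE TABLE qa(lot_id TEXT REFERENCES lots(lot_id),
                void_content_pct REAL, tensile_strength_mpa REAL);
""")
con.execute("INSERT INTO lots VALUES ('L001', 'wing-skin', '2000-01-15')")
con.execute("INSERT INTO process VALUES ('L001', 177.0, 620.0)")
con.execute("INSERT INTO qa VALUES ('L001', 1.2, 830.0)")

# Join process variables with quality measurements across lots for correlation.
rows = con.execute("""
    SELECT l.part_type, p.cure_temp_c, q.void_content_pct
    FROM lots l JOIN process p USING(lot_id) JOIN qa q USING(lot_id)
""").fetchall()
print(rows)
```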

  5. BIOZON: a system for unification, management and analysis of heterogeneous biological data.

    PubMed

    Birkland, Aaron; Yona, Golan

    2006-02-15

    Integration of heterogeneous data types is a challenging problem, especially in biology, where the number of databases and data types increase rapidly. Amongst the problems that one has to face are integrity, consistency, redundancy, connectivity, expressiveness and updatability. Here we present a system (Biozon) that addresses these problems, and offers biologists a new knowledge resource to navigate through and explore. Biozon unifies multiple biological databases consisting of a variety of data types (such as DNA sequences, proteins, interactions and cellular pathways). It is fundamentally different from previous efforts as it uses a single extensive and tightly connected graph schema wrapped with hierarchical ontology of documents and relations. Beyond warehousing existing data, Biozon computes and stores novel derived data, such as similarity relationships and functional predictions. The integration of similarity data allows propagation of knowledge through inference and fuzzy searches. Sophisticated methods of query that span multiple data types were implemented and first-of-a-kind biological ranking systems were explored and integrated. The Biozon system is an extensive knowledge resource of heterogeneous biological data. Currently, it holds more than 100 million biological documents and 6.5 billion relations between them. The database is accessible through an advanced web interface that supports complex queries, "fuzzy" searches, data materialization and more, online at http://biozon.org.
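
    The abstract describes a single tightly connected graph of typed documents and relations; the sketch below, using the networkx package, shows one way such a heterogeneous graph with derived (computed) relations might look. All node identifiers, relation names and the similarity score are hypothetical, not Biozon's actual schema.

```python
import networkx as nx

# Toy heterogeneous graph of typed documents and relations.
g = nx.MultiDiGraph()
g.add_node("prot:P01", kind="protein")
g.add_node("dna:D42", kind="dna_sequence")
g.add_node("path:glycolysis", kind="pathway")
g.add_edge("dna:D42", "prot:P01", relation="encodes")
g.add_edge("prot:P01", "path:glycolysis", relation="member_of")
# A derived relation, e.g. computed sequence similarity, stored alongside raw data.
g.add_edge("prot:P01", "prot:P99", relation="similar_to", score=0.82)

# Simple cross-type query: pathways reachable from a DNA document via its protein.
for _, prot, d in g.out_edges("dna:D42", data=True):
    if d["relation"] == "encodes":
        pathways = [v for _, v, e in g.out_edges(prot, data=True)
                    if e["relation"] == "member_of"]
        print(prot, "->", pathways)
```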

  6. Mining free-text medical records for companion animal enteric syndrome surveillance.

    PubMed

    Anholt, R M; Berezowski, J; Jamal, I; Ribble, C; Stephen, C

    2014-03-01

    Large amounts of animal health care data are present in veterinary electronic medical records (EMRs) and present an opportunity for companion animal disease surveillance. Veterinary patient records are largely free-text without clinical coding or a fixed vocabulary. Text mining, a computer and information technology application, is needed to identify cases of interest and to add structure to the otherwise unstructured data. In this study, EMRs were extracted from the veterinary management programs of 12 participating veterinary practices and stored in a data warehouse. Using commercially available text-mining software (WordStat™), we developed a categorization dictionary that could be used to automatically classify and extract enteric syndrome cases from the warehoused electronic medical records. The diagnostic accuracy of the text-miner for retrieving cases of enteric syndrome was measured against human reviewers who independently categorized a random sample of 2500 cases as enteric syndrome positive or negative. Compared to the reviewers, the text-miner retrieved cases with enteric signs with a sensitivity of 87.6% (95%CI, 80.4-92.9%) and a specificity of 99.3% (95%CI, 98.9-99.6%). Automatic and accurate detection of enteric syndrome cases provides an opportunity for community surveillance of enteric pathogens in companion animals. Copyright © 2014 Elsevier B.V. All rights reserved.
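
    Sensitivity and specificity against a human-reviewed reference standard, as reported above, reduce to simple confusion-matrix arithmetic. The sketch below computes both from toy 0/1 labels; the labels and counts are invented and unrelated to the study's 2500-record sample.

```python
def diagnostic_accuracy(text_miner, reviewers):
    """Compare text-miner labels against reviewer labels (both lists of 0/1)."""
    tp = sum(m and r for m, r in zip(text_miner, reviewers))
    tn = sum((not m) and (not r) for m, r in zip(text_miner, reviewers))
    fp = sum(m and (not r) for m, r in zip(text_miner, reviewers))
    fn = sum((not m) and r for m, r in zip(text_miner, reviewers))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Toy labels: 1 = enteric syndrome positive, 0 = negative.
miner     = [1, 1, 0, 0, 1, 0, 0, 0, 1, 0]
reviewers = [1, 1, 0, 0, 0, 0, 0, 0, 1, 1]
print(diagnostic_accuracy(miner, reviewers))  # (0.75, 0.833...)
```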

  7. Motor vehicle manufacturing and prostate cancer.

    PubMed

    Brown, D A; Delzell, E

    2000-07-01

    The purpose of this investigation was to evaluate the relation between employment in motor vehicle manufacturing (MVM) and fatal prostate cancer. The study included 322 prostate cancer deaths occurring in 1973 through 1987 and 1,285 controls, selected from a cohort of 126,100 male MVM workers. Men employed in casting operations had an odds ratio of 1.5 (95% CI = 1.1-2.0). The association was consistent across casting facilities and was attributable primarily to work in core and mold making (OR = 1.5, 95% CI = 1.1-2.2) and metal melting and pouring jobs (OR = 1.9, 95% CI = 1.0-3.6). Other results included ORs of 1.9 (95% CI = 1.0-3.7) for warehousing and distribution operations and 2.1 (95% CI = 1.2-3.7) for electric and electronic equipment manufacturing. The latter two associations exhibited little internal consistency. The relationships seen in this study were weak and may have been due to chance. Core and mold making and metal melting and pouring foundry operations entail potential exposure to metal dusts and fumes, to polycyclic aromatic hydrocarbons (PAHs), and to other chemicals. However, associations between these exposures and prostate cancer have not been reported consistently, nor have other studies of foundry workers consistently noted an excess of prostate cancer. Copyright 2000 Wiley-Liss, Inc.

  8. Lignin, mitochondrial family, and photorespiratory transporter classification as case studies in using co-expression, co-response, and protein locations to aid in identifying transport functions

    PubMed Central

    Tohge, Takayuki; Fernie, Alisdair R.

    2014-01-01

    Whole genome sequencing and the relative ease of transcript profiling have facilitated the collection and data warehousing of immense quantities of expression data. However, a substantial proportion of genes are not yet functionally annotated, a problem which is particularly acute for transport proteins. In Arabidopsis, for example, only a minor fraction of the estimated 700 intracellular transporters have been identified at the molecular genetic level. Furthermore, it is only within the last couple of years that critical genes, such as those encoding the final transport step required for the long-distance transport of sucrose and the first transporter of the core photorespiratory pathway, have been identified. Here we describe how transcriptional coordination between genes of known function and non-annotated genes allows the identification of putative transporters, on the premise that co-expressed genes tend to be functionally related. We additionally expand this approach to include phenotypic information from other levels of cellular organization, such as proteomic and metabolomic data, and provide case studies wherein this approach has successfully been used to fill knowledge gaps in important metabolic pathways and physiological processes. PMID:24672529
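
    A minimal sketch of the co-expression premise, assuming synthetic expression profiles: genes whose profiles correlate strongly with a transporter of known function are flagged as candidates. Gene indices, sample counts and the noise level are illustrative only and do not reproduce the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 40
# Hypothetical expression profiles across 40 conditions.
known_transporter = rng.normal(size=n_samples)
unannotated = rng.normal(size=(500, n_samples))
unannotated[7] = known_transporter + rng.normal(scale=0.2, size=n_samples)  # truly co-expressed

# Pearson correlation of every unannotated gene with the known transporter.
r = np.array([np.corrcoef(known_transporter, g)[0, 1] for g in unannotated])
candidates = np.argsort(-np.abs(r))[:5]
print("top co-expressed candidate genes:", candidates, r[candidates].round(2))
```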

  9. Spatial-temporal clustering of companion animal enteric syndrome: detection and investigation through the use of electronic medical records from participating private practices.

    PubMed

    Anholt, R M; Berezowski, J; Robertson, C; Stephen, C

    2015-09-01

    There is interest in the potential of companion animal surveillance to provide data to improve pet health and to provide early warning of environmental hazards to people. We implemented a companion animal surveillance system in Calgary, Alberta and the surrounding communities. Informatics technologies automatically extracted electronic medical records from participating veterinary practices and identified cases of enteric syndrome in the warehoused records. The data were analysed using time-series analyses and a retrospective space-time permutation scan statistic. We identified a seasonal pattern of reports of occurrences of enteric syndromes in companion animals and four statistically significant clusters of enteric syndrome cases. The cases within each cluster were examined and information about the animals involved (species, age, sex), their vaccination history, possible exposure or risk behaviour history, information about disease severity, and the aetiological diagnosis was collected. We then assessed whether the cases within the cluster were unusual and if they represented an animal or public health threat. There was often insufficient information recorded in the medical record to characterize the clusters by aetiology or exposures. Space-time analysis of companion animal enteric syndrome cases found evidence of clustering. Collection of more epidemiologically relevant data would enhance the utility of practice-based companion animal surveillance.

  10. Evaluation and redesign of manual material handling in a vaccine production centre's warehouse.

    PubMed

    Torres, Yaniel; Viña, Silvio

    2012-01-01

    This study was conducted in a warehouse at a vaccine production centre where improvements to existing storage and working conditions were sought through the construction of a new refrigerated store section (2-8 °C). Warehousing tasks were videotaped and ergonomics analysis tools were used to assess the risk of developing musculoskeletal disorders (MSDs). Specifically, these tools were the Rapid Entire Body Assessment (REBA) and the NIOSH lifting equation. The current plant layout was sketched and analyzed to find possible targets for improvement through the application of general workspace design and ergonomics principles. Seven of the eight postures evaluated with REBA had a total score between 8 and 10, indicating high risk, and only one was at a medium risk level. Nine of the eleven manual material handling tasks analyzed with the NIOSH equation had a Lifting Index between 1.14 and 1.80, and two had a recommended weight limit of 0 kg, indicating a need for job redesign. Solutions included the redesign of shelves, the design of a two-step stair and a height-adjustable trolley; changes in work methods were also proposed, introducing a two-worker lifting strategy and job rotation, and, finally, a restructuring of the plant layout was completed.
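
    For readers unfamiliar with the NIOSH lifting equation, the sketch below computes a recommended weight limit (RWL) and Lifting Index (LI) using the metric multipliers as commonly published (LC = 23 kg, HM = 25/H, VM = 1 - 0.003|V - 75|, DM = 0.82 + 4.5/D, AM = 1 - 0.0032A). The frequency and coupling multipliers, and the example lift itself, are assumed values, not data from this study.

```python
def rwl(h_cm, v_cm, d_cm, a_deg, fm, cm, lc=23.0):
    """Recommended Weight Limit (kg), revised NIOSH lifting equation (metric form).
    fm and cm are taken from the published frequency and coupling tables."""
    hm = 1.0 if h_cm <= 25 else 25.0 / h_cm          # horizontal multiplier
    vm = 1.0 - 0.003 * abs(v_cm - 75.0)              # vertical multiplier
    dm = 1.0 if d_cm <= 25 else 0.82 + 4.5 / d_cm    # travel-distance multiplier
    am = 1.0 - 0.0032 * a_deg                        # asymmetry multiplier
    return lc * hm * vm * dm * am * fm * cm

def lifting_index(load_kg, *args, **kwargs):
    return load_kg / rwl(*args, **kwargs)

# Illustrative (hypothetical) warehouse lift: 15 kg box, hands 40 cm out,
# 30 cm above the floor, lifted 100 cm, 45 degree twist; fm/cm from the tables.
li = lifting_index(15.0, h_cm=40, v_cm=30, d_cm=100, a_deg=45, fm=0.88, cm=0.95)
print(f"Lifting Index = {li:.2f}")  # values above 1.0 indicate elevated MSD risk
```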

  11. Teaching with Space: K-6 Aviation, Space and Technology Resource Guide

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Teaching with Space permits easy and quick identification of resources you will find most beneficial. This guide captures the essence of resources with applicability across the elementary curriculum. Specific product reviews and suggested uses in the classroom are provided to enable informed decision-making. Materials from NASA and the Federal Aviation Administration may be obtained in limited quantities at no cost from public domain sources when available. Pricing in this guide is based on duplication, warehousing, and overhead costs associated with distributing these items. Although this resource guide is a prototype guide distributed on a limited basis, we trust you will find it useful in locating quality instructional resources. Your suggestions and comments are most welcome, and will receive the fullest consideration as we work to expand and validate this guide for national distribution. Based on teacher criteria for quality, educational soundness, compatibility with the curriculum, ease of use, and affordability, the guide will be updated as new resources become available, and in response to teacher feedback. You may provide us with additional items for consideration at any time. We also are planning to develop a resource guide for middle and high school teachers, and your input is welcome for that effort too. This guide is just one way that space can help you in the classroom.

  12. Precision Agriculture Design Method Using a Distributed Computing Architecture on Internet of Things Context.

    PubMed

    Ferrández-Pastor, Francisco Javier; García-Chamizo, Juan Manuel; Nieto-Hidalgo, Mario; Mora-Martínez, José

    2018-05-28

    The Internet of Things (IoT) has opened productive ways to cultivate soil with the use of low-cost hardware (sensors/actuators) and communication (Internet) technologies. Remote equipment and crop monitoring, predictive analytics, weather forecasting for crops, and smart logistics and warehousing are some examples of these new opportunities. Nevertheless, farmers are agriculture experts but usually do not have experience with IoT applications. Users of IoT applications must participate in their design to improve integration and use. In this work, different industrial agricultural facilities are analysed with farmers and growers to design new functionalities based on the deployment of IoT paradigms. A user-centred design model is used to obtain knowledge and experience in the process of introducing technology into agricultural applications. Internet of Things paradigms are used as resources to facilitate decision making. IoT architecture, operating rules and smart processes are implemented using a distributed model based on edge and fog computing paradigms. A communication architecture is proposed using these technologies. The aim is to help farmers develop smart systems in both current and new facilities. Decision trees to automate the installation, designed by the farmer, can be easily deployed using the method proposed in this document.
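
    As a hedged sketch of the kind of farmer-designed decision rule the authors describe deploying at the edge, the snippet below evaluates a tiny irrigation decision tree locally against one set of sensor readings; the field names and thresholds are assumptions, not part of the paper's architecture.

```python
# Hypothetical readings pushed by field sensors to an edge node.
reading = {"soil_moisture_pct": 18.0, "air_temp_c": 31.5, "rain_forecast_mm": 0.4}

# A decision tree agreed with the grower, encoded as nested rules and
# evaluated locally (edge/fog) so irrigation does not depend on the cloud.
def irrigation_decision(r, moisture_threshold=22.0, rain_threshold=2.0):
    if r["soil_moisture_pct"] >= moisture_threshold:
        return "idle"
    if r["rain_forecast_mm"] >= rain_threshold:
        return "wait-for-rain"
    return "irrigate-30min" if r["air_temp_c"] > 28 else "irrigate-15min"

print(irrigation_decision(reading))   # -> irrigate-30min
```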

  13. Towards More Nuanced Classification of NGOs and Their Services to Improve Integrated Planning across Disaster Phases

    PubMed Central

    Towe, Vivian L.; Acosta, Joie D.; Chandra, Anita

    2017-01-01

    Nongovernmental organizations (NGOs) are being integrated into U.S. strategies to expand the services that are available during health security threats like disasters. Identifying better ways to classify NGOs and their services could optimize disaster planning. We surveyed NGOs about the types of services they provided during different disaster phases. Survey responses were used to categorize NGO services as core—critical to fulfilling their organizational mission—or adaptive—services implemented during a disaster based on community need. We also classified NGOs as being core or adaptive types of organizations by calculating the percentage of each NGO’s services classified as core. Service types classified as core were mainly social services, while adaptive service types were those typically relied upon during disasters (e.g., warehousing, food services, etc.). In total, 120 NGOs were classified as core organizations, meaning they mainly provided the same services across disaster phases, while 100 NGOs were adaptive organizations, meaning their services changed. Adaptive NGOs were eight times more likely to report routinely participating in disaster planning as compared to core NGOs. One reason for this association may be that adaptive NGOs are more aware of the changing needs in their communities across disaster phases because of their involvement in disaster planning. PMID:29160810

  14. PeptideDepot: Flexible Relational Database for Visual Analysis of Quantitative Proteomic Data and Integration of Existing Protein Information

    PubMed Central

    Yu, Kebing; Salomon, Arthur R.

    2010-01-01

    Recently, dramatic progress has been achieved in expanding the sensitivity, resolution, mass accuracy, and scan rate of mass spectrometers able to fragment and identify peptides through tandem mass spectrometry (MS/MS). Unfortunately, this enhanced ability to acquire proteomic data has not been accompanied by a concomitant increase in the availability of flexible tools allowing users to rapidly assimilate, explore, and analyze this data and adapt to a variety of experimental workflows with minimal user intervention. Here we fill this critical gap by providing a flexible relational database called PeptideDepot for organization of expansive proteomic data sets, collation of proteomic data with available protein information resources, and visual comparison of multiple quantitative proteomic experiments. Our software design, built upon the synergistic combination of a MySQL database for safe warehousing of proteomic data with a FileMaker-driven graphical user interface for flexible adaptation to diverse workflows, enables proteomic end-users to directly tailor the presentation of proteomic data to the unique analysis requirements of the individual proteomics lab. PeptideDepot may be deployed as an independent software tool or integrated directly with our High Throughput Autonomous Proteomic Pipeline (HTAPP) used in the automated acquisition and post-acquisition analysis of proteomic data. PMID:19834895

  15. EnsMart: A Generic System for Fast and Flexible Access to Biological Data

    PubMed Central

    Kasprzyk, Arek; Keefe, Damian; Smedley, Damian; London, Darin; Spooner, William; Melsopp, Craig; Hammond, Martin; Rocca-Serra, Philippe; Cox, Tony; Birney, Ewan

    2004-01-01

    The EnsMart system (www.ensembl.org/EnsMart) provides a generic data warehousing solution for fast and flexible querying of large biological data sets and integration with third-party data and tools. The system consists of a query-optimized database and interactive, user-friendly interfaces. EnsMart has been applied to Ensembl, where it extends its genomic browser capabilities, facilitating rapid retrieval of customized data sets. A wide variety of complex queries, on various types of annotations, for numerous species are supported. These can be applied to many research problems, ranging from SNP selection for candidate gene screening, through cross-species evolutionary comparisons, to microarray annotation. Users can group and refine biological data according to many criteria, including cross-species analyses, disease links, sequence variations, and expression patterns. Both tabulated list data and biological sequence output can be generated dynamically, in HTML, text, Microsoft Excel, and compressed formats. A wide range of sequence types, such as cDNA, peptides, coding regions, UTRs, and exons, with additional upstream and downstream regions, can be retrieved. The EnsMart database can be accessed via a public Web site, or through a Java application suite. Both implementations and the database are freely available for local installation, and can be extended or adapted to 'non-Ensembl' data sets. PMID:14707178

  16. An agent-based approach to modelling the effects of extreme events on global food prices

    NASA Astrophysics Data System (ADS)

    Schewe, Jacob; Otto, Christian; Frieler, Katja

    2015-04-01

    Extreme climate events such as droughts or heat waves affect agricultural production in major food producing regions and therefore can influence the price of staple foods on the world market. There is evidence that recent dramatic spikes in grain prices were at least partly triggered by actual and/or expected supply shortages. The reaction of the market to supply changes is, however, highly nonlinear and depends on complex and interlinked processes such as warehousing, speculation, and export restrictions. Here we present for the first time an agent-based modelling framework that accounts, in simplified terms, for these processes and makes it possible to estimate the reaction of world food prices to supply shocks on a short (monthly) timescale. We test the basic model using observed historical supply, demand, and price data for wheat as a major food grain. Further, we illustrate how the model can be used in conjunction with biophysical crop models to assess the effect of future changes in extreme event regimes on the volatility of food prices. In particular, the explicit representation of storage dynamics makes it possible to investigate the potentially nonlinear interaction between simultaneous extreme events in different food producing regions, or between several consecutive events in the same region, both of which may occur more frequently under future global warming.
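
    The sketch below is a deliberately stylised toy, not the authors' agent-based framework: a single storage buffer absorbs part of a one-month supply shock, and a simple inverse-demand rule translates the resulting shortfall into a price spike. All parameters (demand, capacity, elasticity, shock size) are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
months, demand, p0, elasticity = 60, 100.0, 200.0, 0.3
stock, capacity = 20.0, 20.0          # warehouse buffer (units)
prices = []
for t in range(months):
    harvest = rng.normal(100.0, 3.0)
    if t == 30:
        harvest -= 40.0               # extreme-event supply shock
    available = harvest + stock
    sold = min(demand, available)
    stock = min(available - sold, capacity)   # storage absorbs surpluses
    # Stylised inverse-demand curve: scarcity raises the spot price.
    prices.append(p0 * (demand / sold) ** (1.0 / elasticity))
print(f"pre-shock price {prices[29]:.0f}, shock-month price {prices[30]:.0f}")
```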

  17. Geospatial decision support framework for critical infrastructure interdependency assessment

    NASA Astrophysics Data System (ADS)

    Shih, Chung Yan

    Critical infrastructures, such as telecommunications, energy, banking and finance, transportation, water systems and emergency services, are the foundations of modern society. There is a heavy dependence on critical infrastructures at multiple levels within the supply chain of any good or service. Any disruption in the supply chain may cause a profound cascading effect on other critical infrastructures. A 1997 report by the President's Commission on Critical Infrastructure Protection states that a serious interruption in freight rail service would bring the coal mining industry to a halt within approximately two weeks and the availability of electric power could be reduced in a matter of one to two months. Therefore, this research aimed to represent and assess the interdependencies between coal supply, transportation and energy production. A geospatial decision support framework was proposed and applied to analyze interdependency-related disruption impacts. By utilizing the data warehousing approach, geospatial and non-geospatial data were retrieved, integrated and analyzed based on the transportation model and geospatial disruption analysis developed in the research. The results showed that by utilizing this framework, disruption impacts can be estimated at various levels (e.g., power plant, county, state) for preventative or emergency response efforts. The information derived from the framework can be used for data mining analysis (e.g., assessing transportation mode usage or finding alternative coal suppliers).

  18. Superfund record of decision amendment (EPA Region 5): H. Brown Company, Inc., Grand Rapids, MI, February 25, 1998

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    This decision document amends the September 29, 1995, Record of Decision (ROD) Amendment for the H. Brown Co., Inc. site, in Walker, Michigan. The major components of the selected remedy include: Consolidating contaminated surface soil and sediment requiring cleanup onto the H. Brown property (2200 Turner Avenue N.W.); Redevelopment of the site, by private parties, with warehousing facilities constructed above the contaminated soil; A cover system comprised of clean fill to develop appropriate grades and elevations, concrete slab foundations, asphalt parking areas, and landscaped areas; Long-term maintenance of the cover system to ensure that the cover will continue to prevent direct contact with contaminated soil and minimize infiltration of precipitation; Long-term monitoring of the shallow and intermediate aquifers to monitor the effectiveness of the remedy; Monitoring and/or treatment of landfill gas; Restricting the use of the land and the groundwater; Demolishing on-site buildings to accommodate redevelopment; and Cleanup standards for soil will remain the same as in the 1992 ROD. The purpose of this ROD Amendment is to facilitate the re-development of the H. Brown Co., Inc. Site, and if re-development does not occur or proves to be unsuccessful then the remedy selected in the September 29, 1995 ROD Amendment will be implemented.

  19. An RFID-based luggage and passenger tracking system for airport security control applications

    NASA Astrophysics Data System (ADS)

    Vastianos, George E.; Kyriazanos, Dimitris M.; Kountouriotis, Vassilios I.; Thomopoulos, Stelios C. A.

    2014-06-01

    Market analysis studies of recent years have shown a steady and significant increase in the usage of RFID technology. Key factors for this growth were the decreased costs of passive RFIDs and their improved performance compared to other identification technologies. Beyond the benefits of RFID technologies in supply chains, warehousing, and traditional inventory and asset management applications, RFID has proven itself worth exploiting at both the experimental and commercial levels in other sectors, such as healthcare, transport and security. In the security sector, airport security is one of the biggest challenges. Airports are extremely busy public places and thus prime targets for terrorism, with aircraft, passengers, crew and airport infrastructure all subject to terrorist attacks. Within this labyrinth of security challenges, the long-range detection capability of passive UHF RFID technology can be turned into a very important tracking tool that may overcome the limitations of barcode tracking in the current airport security control chain. The Integrated Systems Lab of NCSR Demokritos has developed an RFID-based luggage and passenger tracking system within the TASS (FP7-SEC-2010-241905) EU research project. This paper describes application scenarios of the system categorized according to the structured nature of the environment, presents the system architecture, and reports evaluation results extracted from measurements with a group of different mass-production GEN2 UHF RFID tags that are widely available on the world market.

  20. Rapid prototyping strategy for a surgical data warehouse.

    PubMed

    Tang, S-T; Huang, Y-F; Hsiao, M-L; Yang, S-H; Young, S-T

    2003-01-01

    Healthcare processes typically generate an enormous volume of patient information. This information largely represents unexploited knowledge, since current hospital operational systems (e.g., HIS, RIS) are not suitable for knowledge exploitation. Data warehousing provides an attractive method for solving these problems, but the process is very complicated. This study presents a novel strategy for effectively implementing a healthcare data warehouse. This study adopted the rapid prototyping (RP) method, which involves intensive interactions. System developers and users were closely linked throughout the life cycle of the system development. The presence of iterative RP loops meant that the system requirements were increasingly integrated and problems were gradually solved, such that the prototype system evolved into the final operational system. The results were analyzed by monitoring the series of iterative RP loops. First a definite workflow for ensuring data completeness was established, taking a patient-oriented viewpoint when collecting the data. Subsequently the system architecture was determined for data retrieval, storage, and manipulation. This architecture also clarifies the relationships among the novel system and legacy systems. Finally, a graphic user interface for data presentation was implemented. Our results clearly demonstrate the potential for adopting an RP strategy in the successful establishment of a healthcare data warehouse. The strategy can be modified and expanded to provide new services or support new application domains. The design patterns and modular architecture used in the framework will be useful in solving problems in different healthcare domains.

  1. A systematic review of workplace ergonomic interventions with economic analyses.

    PubMed

    Tompa, Emile; Dolinschi, Roman; de Oliveira, Claire; Amick, Benjamin C; Irvin, Emma

    2010-06-01

    This article reports on a systematic review of workplace ergonomic interventions with economic evaluations. The review sought to answer the question: "what is the credible evidence that incremental investment in ergonomic interventions is worth undertaking?" Past efforts to synthesize evidence from this literature have focused on effectiveness, whereas this study synthesizes evidence on the cost-effectiveness/financial merits of such interventions. Through a structured journal database search, 35 intervention studies were identified in nine industrial sectors. A qualitative synthesis approach, known as best evidence synthesis, was used rather than a quantitative approach because of the diversity of study designs and statistical analyses found across studies. Evidence on the financial merits of interventions was synthesized by industrial sector. In the manufacturing and warehousing sector strong evidence was found in support of the financial merits of ergonomic interventions from a firm perspective. In the administrative support and health care sectors moderate evidence was found, in the transportation sector limited evidence, and in remaining sectors insufficient evidence. Most intervention studies focus on effectiveness. Few consider their financial merits. Amongst the few that do, several had exemplary economic analyses, although more than half of the studies had low quality economic analyses. This may be due to the low priority given to economic analysis in this literature. Often only a small part of the overall evaluation of many studies focused on evaluating their cost-effectiveness.

  2. Pattern recognition and image processing for environmental monitoring

    NASA Astrophysics Data System (ADS)

    Siddiqui, Khalid J.; Eastwood, DeLyle

    1999-12-01

    Pattern recognition (PR) and signal/image processing methods are among the most powerful tools currently available for noninvasively examining spectroscopic and other chemical data for environmental monitoring. Using spectral data, these systems have found a variety of applications employing analytical techniques for chemometrics such as gas chromatography and fluorescence spectroscopy. An advantage of PR approaches is that they make no a priori assumptions regarding the structure of the patterns. However, a majority of these systems rely on human judgment for parameter selection and classification. A PR problem is considered as a composite of four subproblems: pattern acquisition, feature extraction, feature selection, and pattern classification. One of the basic issues in PR approaches is to determine and measure the features useful for successful classification. Selection of features that contain the most discriminatory information is important because the cost of pattern classification is directly related to the number of features used in the decision rules. The state of spectral techniques as applied to environmental monitoring is reviewed. A spectral pattern classification system combining the above components and automatic decision-theoretic approaches for classification is developed. It is shown how such a system can be used for analysis of large data sets, warehousing, and interpretation. In a preliminary test, the classifier was used to classify synchronous UV-vis fluorescence spectra of relatively similar petroleum oils with reasonable success.
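
    A compact, hedged illustration of the acquisition / feature selection / classification chain on synthetic "spectra", using scikit-learn's SelectKBest and a nearest-neighbour classifier; the data, the informative band and the parameter choices are assumptions for the example, not the system described in the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
# Toy "spectra": 60 samples x 200 wavelengths, two oil classes that
# differ only in a narrow band (features 50-59).
X = rng.normal(size=(60, 200))
y = np.repeat([0, 1], 30)
X[y == 1, 50:60] += 1.5

# Feature selection keeps the most discriminatory bands before classification,
# mirroring the feature extraction/selection -> classification chain.
clf = make_pipeline(SelectKBest(f_classif, k=10), KNeighborsClassifier(n_neighbors=3))
print(cross_val_score(clf, X, y, cv=5).mean())
```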

  3. Bringing Web 2.0 to bioinformatics.

    PubMed

    Zhang, Zhang; Cheung, Kei-Hoi; Townsend, Jeffrey P

    2009-01-01

    Enabling deft data integration from numerous, voluminous and heterogeneous data sources is a major bioinformatic challenge. Several approaches have been proposed to address this challenge, including data warehousing and federated databasing. Yet despite the rise of these approaches, integration of data from multiple sources remains problematic and toilsome. These two approaches follow a user-to-computer communication model for data exchange, and do not facilitate a broader concept of data sharing or collaboration among users. In this report, we discuss the potential of Web 2.0 technologies to transcend this model and enhance bioinformatics research. We propose a Web 2.0-based Scientific Social Community (SSC) model for the implementation of these technologies. By establishing a social, collective and collaborative platform for data creation, sharing and integration, we promote a pipeline of web services for computer-to-computer data exchange as users add value. This pipeline aims to simplify data integration and creation, to realize automatic analysis, and to facilitate reuse and sharing of data. SSC can foster collaboration and harness collective intelligence to create and discover new knowledge. In addition to its research potential, we also describe its potential role as an e-learning platform in education. We discuss lessons from information technology, predict the next generation of the Web (Web 3.0), and describe its potential impact on the future of bioinformatics studies.

  4. Non-fatal occupational falls on the same level.

    PubMed

    Yeoh, Han T; Lockhart, Thurmon E; Wu, Xuefang

    2013-01-01

    The purpose of this study was to describe antecedents and characteristics of same level fall injuries. Fall incidents and costs were compiled from the Bureau of Labor Statistics and other sources from 2006-2010. This study indicated that over 29% of 'fall on same level' injuries resulted in 31 or more workdays lost. The major source of injury was 'floors, walkways or ground surfaces', and the most affected body parts were the lower extremities and the trunk. With regard to gender and age, female workers had the highest risk of falls, while advancing age coincided with an increase in incidence rates. Overall, workers in the healthcare and social assistance industry, the transportation and warehousing industry, and the accommodation and food services industry had the highest risk for 'fall on same level' injuries. Furthermore, the overall compensation cost increased by 25% from 2006-2009. Along with existing evidence, these results may facilitate the design and implementation of preventative measures in the workplace and potentially reduce fall-related compensation costs. This research presents a unique and detailed analysis of non-fatal 'fall on same level' injuries in a large population of workers from various private industries in the USA. This information can be used to prioritise designing and implementing preventive measures and to provide workers with the understanding of risk factors associated with falls in the workplace.

  5. Construction and management of ARDS/sepsis registry with REDCap.

    PubMed

    Pang, Xiaoqing; Kozlowski, Natascha; Wu, Sulong; Jiang, Mei; Huang, Yongbo; Mao, Pu; Liu, Xiaoqing; He, Weiqun; Huang, Chaoyi; Li, Yimin; Zhang, Haibo

    2014-09-01

    The study aimed to construct and manage an acute respiratory distress syndrome (ARDS)/sepsis registry that can be used for data warehousing and clinical research. The workflow methodology and software solution of research electronic data capture (REDCap) was used to construct the ARDS/sepsis registry. Clinical data from ARDS and sepsis patients registered to the intensive care unit (ICU) of our hospital formed the registry. These data were converted to the electronic case report form (eCRF) format used in REDCap by trained medical staff. Data validation, quality control, and database management were conducted to ensure data integrity. The clinical data of 67 patients registered to the ICU between June 2013 and December 2013 were analyzed. Of the 67 patients, 45 (67.2%) were classified as sepsis, 14 (20.9%) as ARDS, and eight (11.9%) as sepsis-associated ARDS. The patients' information, comprising demographic characteristics, medical history, clinical interventions, daily assessment, clinical outcome, and follow-up data, was properly managed and safely stored in the ARDS/sepsis registry. Data efficiency was guaranteed by performing data collection and data entry twice weekly and every two weeks, respectively. The ARDS/sepsis database that we constructed and manage with REDCap in the ICU can provide a solid foundation for translational research on the clinical data of interest, and a model for development of other medical registries in the future.

  6. Combining knowledge discovery from databases (KDD) and case-based reasoning (CBR) to support diagnosis of medical images

    NASA Astrophysics Data System (ADS)

    Stranieri, Andrew; Yearwood, John; Pham, Binh

    1999-07-01

    The development of data warehouses for the storage and analysis of very large corpora of medical image data represents a significant trend in health care and research. Amongst other benefits, the trend toward warehousing enables the use of techniques for automatically discovering knowledge from large and distributed databases. In this paper, we present an application design for knowledge discovery from databases (KDD) techniques that enhance the performance of the problem-solving strategy known as case-based reasoning (CBR) for the diagnosis of radiological images. The problem of diagnosing abnormality of the cervical spine is used to illustrate the method. The design of a case-based medical image diagnostic support system has three essential characteristics. The first is a case representation that comprises textual descriptions of the image, visual features that are known to be useful for indexing images, and additional visual features to be discovered by data mining many existing images. The second characteristic of the approach presented here involves the development of a case base that comprises an optimal number and distribution of cases. The third characteristic involves the automatic discovery, using KDD techniques, of adaptation knowledge to enhance the performance of the case-based reasoner. Together, the three characteristics of our approach can overcome real-time efficiency obstacles that otherwise militate against the use of CBR in the domain of medical image analysis.
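
    The retrieval step of CBR can be illustrated with a few lines of nearest-neighbour search over case feature vectors; the features, labels and distance metric below are placeholders, not the paper's case representation.

```python
import numpy as np

# Hypothetical case base: each case is (feature vector, diagnosis).
# Features might mix data-mined visual descriptors with coded text descriptors.
case_features = np.array([[0.9, 0.1, 0.3],
                          [0.2, 0.8, 0.5],
                          [0.85, 0.2, 0.4]])
case_labels = ["normal", "abnormal", "normal"]

def retrieve(query, k=2):
    """Return the k most similar past cases (simple Euclidean retrieval)."""
    d = np.linalg.norm(case_features - query, axis=1)
    idx = np.argsort(d)[:k]
    return [(case_labels[i], float(d[i])) for i in idx]

# New cervical-spine image summarised by the same three features.
print(retrieve(np.array([0.88, 0.15, 0.35])))
```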

  7. Understanding data requirements of retrospective studies.

    PubMed

    Shenvi, Edna C; Meeker, Daniella; Boxwala, Aziz A

    2015-01-01

    Usage of data from electronic health records (EHRs) in clinical research is increasing, but there is little empirical knowledge of the data needed to support the multiple types of research these sources enable. This study seeks to characterize the types and patterns of data usage from EHRs for clinical research. We analyzed the data requirements of over 100 retrospective studies by mapping the selection criteria and study variables to data elements of two standard data dictionaries, one from the healthcare domain and the other from the clinical research domain. We also contacted study authors to validate our results. The majority of variables mapped to one or to both of the two dictionaries. Studies used an average of 4.46 (range 1-12) data element types in the selection criteria and 6.44 (range 1-15) in the study variables. The most frequently used items (e.g., procedure, condition, medication) are often available in coded form in EHRs. Study criteria were frequently complex, with 49 of 104 studies involving relationships between data elements and 22 of the studies using aggregate operations for data variables. Author responses supported these findings. The high proportion of mapped data elements demonstrates the significant potential for clinical data warehousing to facilitate clinical research. Unmapped data elements illustrate the difficulty in developing a complete data dictionary. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  8. Users' information-seeking behavior on a medical library Website

    PubMed Central

    Rozic-Hristovski, Anamarija; Hristovski, Dimitar; Todorovski, Ljupco

    2002-01-01

    The Central Medical Library (CMK) at the Faculty of Medicine, University of Ljubljana, Slovenia, started to build a library Website that included a guide to library services and resources in 1997. The evaluation of Website usage plays an important role in its maintenance and development. Analyzing and exploring regularities in the visitors' behavior can be used to enhance the quality and facilitate delivery of information services, identify visitors' interests, and improve the server's performance. The analysis of the CMK Website users' navigational behavior was carried out by analyzing the Web server log files. These files contained information on all user accesses to the Website and provided a great opportunity to learn more about the behavior of visitors to the Website. The majority of the available tools for Web log file analysis provide a predefined set of reports showing the access count and the transferred bytes grouped along several dimensions. In addition to the reports mentioned above, the authors wanted to be able to perform interactive exploration and ad hoc analysis and discover trends in a user-friendly way. Because of that, we developed our own solution for exploring and analyzing the Web logs based on data warehousing and online analytical processing technologies. The analytical solution we developed proved successful, so it may find further application in the field of Web log file analysis. We will apply the findings of the analysis to restructuring the CMK Website. PMID:11999179
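
    An OLAP-style roll-up of parsed log records along two dimensions can be sketched with pandas as follows; the log fields and visitor categories are hypothetical, and this is not the authors' warehousing solution.

```python
import pandas as pd

# A few parsed access-log records (hypothetical fields).
log = pd.DataFrame({
    "date":    ["2002-03-01", "2002-03-01", "2002-03-02", "2002-03-02"],
    "section": ["databases", "journals", "databases", "opening-hours"],
    "visitor": ["student", "staff", "student", "student"],
    "bytes":   [1200, 800, 950, 300],
})

# OLAP-style roll-up: hit counts and transferred bytes along two dimensions.
cube = pd.pivot_table(log, index="section", columns="visitor",
                      values="bytes", aggfunc=["count", "sum"], fill_value=0)
print(cube)
```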

  9. Clinical Bioinformatics: challenges and opportunities

    PubMed Central

    2012-01-01

    Background Network Tools and Applications in Biology (NETTAB) Workshops are a series of meetings focused on the most promising and innovative ICT tools and to their usefulness in Bioinformatics. The NETTAB 2011 workshop, held in Pavia, Italy, in October 2011 was aimed at presenting some of the most relevant methods, tools and infrastructures that are nowadays available for Clinical Bioinformatics (CBI), the research field that deals with clinical applications of bioinformatics. Methods In this editorial, the viewpoints and opinions of three world CBI leaders, who have been invited to participate in a panel discussion of the NETTAB workshop on the next challenges and future opportunities of this field, are reported. These include the development of data warehouses and ICT infrastructures for data sharing, the definition of standards for sharing phenotypic data and the implementation of novel tools to implement efficient search computing solutions. Results Some of the most important design features of a CBI-ICT infrastructure are presented, including data warehousing, modularity and flexibility, open-source development, semantic interoperability, integrated search and retrieval of -omics information. Conclusions Clinical Bioinformatics goals are ambitious. Many factors, including the availability of high-throughput "-omics" technologies and equipment, the widespread availability of clinical data warehouses and the noteworthy increase in data storage and computational power of the most recent ICT systems, justify research and efforts in this domain, which promises to be a crucial leveraging factor for biomedical research. PMID:23095472

  10. Optimal Chunking of Large Multidimensional Arrays for Data Warehousing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Otoo, Ekow J; Otoo, Ekow J.; Rotem, Doron

    2008-02-15

    Very large multidimensional arrays are commonly used in data-intensive scientific computations as well as in on-line analytical processing applications, referred to as MOLAP. The storage organization of such arrays on disks is done by partitioning the large global array into fixed-size sub-arrays called chunks or tiles that form the units of data transfer between disk and memory. Typical queries involve the retrieval of sub-arrays in a manner that accesses all chunks that overlap the query results. An important metric of storage efficiency is the expected number of chunks retrieved over all such queries. The question that immediately arises is "what shapes of array chunks give the minimum expected number of chunks over a query workload?" The problem of optimal chunking was first introduced by Sarawagi and Stonebraker, who gave an approximate solution. In this paper we develop exact mathematical models of the problem and provide exact solutions using steepest descent and geometric programming methods. Experimental results, using synthetic and real-life workloads, show that our solutions are consistently within 2.0 percent of the true number of chunks retrieved for any number of dimensions. In contrast, the approximate solution of Sarawagi and Stonebraker can deviate considerably from the true result with increasing number of dimensions and may also lead to suboptimal chunk shapes.
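
    Under the common assumption that a query's start offset is uniformly distributed within a chunk, the expected number of chunks touched along one dimension is 1 + (q - 1)/c for query length q and chunk length c, and the expected total is the product over dimensions. The sketch below uses this to brute-force a good chunk shape for a toy workload; it is not the paper's steepest-descent or geometric-programming solution, and the candidate side lengths, budget and workload are invented.

```python
from itertools import product
from math import prod

def expected_chunks(chunk_shape, query_shape):
    """Expected number of chunks a range query touches when its start
    offset is uniformly distributed: per dimension E = 1 + (q - 1) / c."""
    return prod(1.0 + (q - 1) / c for c, q in zip(chunk_shape, query_shape))

def best_chunk_shape(candidate_lengths, workload, budget):
    """Brute-force search over candidate chunk side lengths whose product
    (chunk volume, in elements) stays within a fixed budget.
    `workload` is a list of (query_shape, probability) pairs."""
    best = None
    for shape in product(candidate_lengths, repeat=len(workload[0][0])):
        if prod(shape) > budget:
            continue
        cost = sum(p * expected_chunks(shape, q) for q, p in workload)
        if best is None or cost < best[0]:
            best = (cost, shape)
    return best

# Hypothetical 2-D workload: row slabs and square sub-arrays, equally likely.
workload = [((1, 1000), 0.5), ((200, 200), 0.5)]
print(best_chunk_shape([10, 20, 50, 100, 200], workload, budget=10000))
```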

  11. Stormwater Pollution Prevention Plan for the TA-60-02 Salvage Warehouse, Los Alamos National Laboratory, Revision 3, January 2018

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burgin, Jillian Elizabeth

    This Storm Water Pollution Prevention Plan (SWPPP) was developed in accordance with the provisions of the Clean Water Act (33 U.S.C. §§1251 et seq., as amended), and the Multi-Sector General Permit for Storm Water Discharges Associated with Industrial Activity (U.S. EPA, June 2015) issued by the U.S. Environmental Protection Agency (EPA) for the National Pollutant Discharge Elimination System (NPDES), using the industry-specific permit requirements for Sector P (Land Transportation and Warehousing) as a guide. The applicable stormwater discharge permit is EPA General Permit Registration Number NMR053915, held by Los Alamos National Security (LANS) (U.S. EPA, June 2015). Contents of the June 4, 2015 Multi-Sector General Permit can be viewed at: https://www.epa.gov/sites/production/files/2015-10/documents/msgp2015_finalpermit.pdf This SWPPP applies to discharges of stormwater from the operational areas of the TA-60-02 Salvage and Warehouse facility at Los Alamos National Laboratory. Los Alamos National Laboratory (also referred to as LANL or the “Laboratory”) is owned by the Department of Energy (DOE) and is operated by Los Alamos National Security, LLC (LANS). Throughout this document, the term “facility” refers to the TA-60-02 Salvage/Warehouse and associated areas. The current permit expires at midnight on June 4, 2020. A copy of the facility NOI and the LANS Delegation of Authority Letter are located in Appendix C of this SWPPP.

  12. Assessing Space and Satellite Environment and System Security

    NASA Astrophysics Data System (ADS)

    Haith, G.; Upton, S.

    Satellites and other spacecraft are key assets and critical vulnerabilities in our communications, surveillance and defense infrastructure. Despite their strategic importance, there are significant gaps in our real-time knowledge of satellite security. One reason is the lack of infrastructure and applications to filter and process the overwhelming amounts of relevant data. Some efforts are addressing this challenge by fusing the data gathered from ground-, air- and space-based sensors to detect and categorize anomalous situations. The aim is to provide decision support for Space Situational Awareness (SSA) and Defensive Counterspace (DCS). Most results have not yielded estimates of the impact and cost of a given situation or suggested courses of action (level 3 data fusion). This paper describes an effort to provide high-level data fusion for SSA/DCS through two complementary thrusts: threat scenario simulation with Automatic Red Teaming (ART), and historical data warehousing and mining. ART uses stochastic search algorithms (e.g., evolutionary algorithms) to evolve strategies in agent-based simulations. ART provides techniques to formally specify anomalous-condition scenarios envisioned by subject matter experts and to explore alternative scenarios. The simulation data can then support impact estimates and course-of-action evaluations. The data mining thrust has focused on finding correlations between subsystem anomalies on MightySat II and publicly available space weather data. This paper describes the ART approach, some potential correlations discovered between satellite subsystem anomalies and space weather events, and future work planned on the project.

  13. Moby and Moby 2: creatures of the deep (web).

    PubMed

    Vandervalk, Ben P; McCarthy, E Luke; Wilkinson, Mark D

    2009-03-01

    Facile and meaningful integration of data from disparate resources is the 'holy grail' of bioinformatics. Some resources have begun to address this problem by providing their data using Semantic Web standards, specifically the Resource Description Framework (RDF) and the Web Ontology Language (OWL). Unfortunately, adoption of Semantic Web standards has been slow overall, and even in cases where the standards are being utilized, interconnectivity between resources is rare. In response, we have seen the emergence of centralized 'semantic warehouses' that collect public data from third parties, integrate it, translate it into OWL/RDF and provide it to the community as a unified and queryable resource. One limitation of the warehouse approach is that queries are confined to the resources that have been selected for inclusion. A related problem, perhaps of greater concern, is that the majority of bioinformatics data exists in the 'Deep Web'-that is, the data does not exist until an application or analytical tool is invoked, and therefore does not have a predictable Web address. The inability to utilize Uniform Resource Identifiers (URIs) to address this data is a barrier to its accessibility via URI-centric Semantic Web technologies. Here we examine 'The State of the Union' for the adoption of Semantic Web standards in the health care and life sciences domain by key bioinformatics resources, explore the nature and connectivity of several community-driven semantic warehousing projects, and report on our own progress with the CardioSHARE/Moby-2 project, which aims to make the resources of the Deep Web transparently accessible through SPARQL queries.
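
    A hedged example of the kind of SPARQL query such semantic warehouses and SHARE-style systems accept, issued here with the SPARQLWrapper package; the endpoint URL, prefixes and predicates are purely hypothetical and do not describe CardioSHARE/Moby-2 itself.

```python
# A minimal sketch using the SPARQLWrapper package against a hypothetical
# SPARQL endpoint and vocabulary; names below are illustrative only.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://example.org/sparql")   # hypothetical endpoint
endpoint.setQuery("""
    PREFIX ex: <http://example.org/vocab#>
    SELECT ?protein ?pathway WHERE {
        ?protein ex:participatesIn ?pathway .
        ?pathway ex:label "apoptosis" .
    } LIMIT 10
""")
endpoint.setReturnFormat(JSON)
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["protein"]["value"], row["pathway"]["value"])
```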

  14. [Benzimidazole and its derivatives--from fungicides to designer drugs. A new occupational and environmental hazards].

    PubMed

    Lutz, Piotr

    2012-01-01

    Benzimidazole and benzimidazole derivatives play an important role in controlling various fungal pathogens. The benzimidazoles are also used to treat nematode and trematode infections in humans and animals. They act by binding to microtubules and stopping hyphal growth; they also bind to spindle microtubules and block nuclear division. The most popular fungicide is carbendazim. The fungicide is used to control plant diseases in cereals and fruits. Laboratory studies have shown that carbendazim causes infertility and damages the testes of laboratory animals. Other benzimidazole derivatives are used as preservatives in paint, textile, papermaking, the leather industry, and warehousing practices, as well as preservatives of fruits. Occupational exposure to benzimidazole may occur through inhalation of and dermal contact with these compounds at workplaces where benzimidazole is used or produced. Some of the benzimidazoles are common environmental pollutants. They are often found in food and fruit products. Some of the benzimidazoles, like astemizole or esomeprazole, have found applications in diverse therapeutic areas. Despite the clear advantages afforded by the use of benzimidazole derivatives, they also pose hazards. The most hazardous, however, are new illegally synthesized psychoactive drugs known as designer drugs. Some of them, like nitazene, etonitazene or clonitazene, belong to the benzimidazole derivatives. Laboratory animal studies revealed that etonitazene produced very similar effects in the central nervous system to those observed after morphine administration. Considering etonitazene's properties, it seems reasonable to expect that long-term exposure to other benzimidazole derivatives may result in drug abuse and the development of drug dependence.

  15. Relationship of condom strength to failure during use.

    PubMed

    1980-10-01

    Less-than-ideal environmental conditions, especially in developing countries with tropical or desert climates, prolonged storage times because of unpredictable supply and distribution, and inexperience with warehousing and logistics causing haphazard turnover of stocks can accelerate deterioration of condoms and render them unsuitable for use. As condom strength standards have never been related directly to failure during use, a Program for the Introduction and Adaptation of Contraceptive Technology (PIACT) study, in collaboration with Planned Parenthood of Seattle-King County, Washington, was conducted to determine the actual relationship between condom strength and failure during use (see July 1980 issue of Contraception). The study found that: 1) air burst test parameters can effectively and sensitively measure changes in condom strength; 2) condoms produced by Western industrial standards exceed by a wide margin the minimum strength required for effective use; and 3) stored condoms should not necessarily be thrown out if they are uniform in strength, even though they fall below accepted standards for new condoms. The study also brought out the issue of condom packaging. The potent deteriorating effect of ultraviolet light on condoms is well-known, and it is therefore suggested that condoms be packaged in foil or opaque laminates on both sides. A separate study requested by the U.S. Agency for International Development investigating the relationship between the 2 tests for condom strength (air burst standards as used in the PIACT study and tensile strength measurements) showed that air burst data and tensile strength parameters closely reflected the same characteristics, thus providing support for the use of air burst strength measurements for predicting useful life of stored condoms.

  16. Two phase genetic algorithm for vehicle routing and scheduling problem with cross-docking and time windows considering customer satisfaction

    NASA Astrophysics Data System (ADS)

    Baniamerian, Ali; Bashiri, Mahdi; Zabihi, Fahime

    2018-03-01

    Cross-docking is a new warehousing policy in logistics that is widely used all over the world and has attracted much research attention in the last decade. In the literature, economic aspects have often been studied, while one of the most significant factors for succeeding in the competitive global market is improving the quality of customer service and focusing on customer satisfaction. In this paper, we introduce a vehicle routing and scheduling problem with cross-docking and time windows in a three-echelon supply chain that considers customer satisfaction. A set of homogeneous vehicles collect products from suppliers and, after a consolidation process in the cross-dock, immediately deliver them to customers. A mixed integer linear programming model is presented for this problem to minimize transportation cost and early/tardy deliveries, with scheduling of inbound and outbound vehicles to increase customer satisfaction. A two-phase genetic algorithm (GA) is developed for the problem. To investigate the performance of the algorithm, it was compared with exact and lower-bound solutions in small and large-size instances, respectively. Results show at least 86.6% customer satisfaction with the proposed method, whereas customer satisfaction in the classical model is at most 33.3%. Numerical results show that the proposed two-phase algorithm achieves optimal solutions in small-size instances. In large-size instances, it achieves better solutions with a smaller gap from the lower bound in less computational time than the classic GA.
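
    One ingredient of such models is an earliness/tardiness penalty per customer time window, which also yields a satisfaction measure (fraction of on-time deliveries). The sketch below shows only that component, with invented arrival times, windows and cost weights; it is not the paper's fitness function or GA.

```python
def delivery_penalty(arrival, window, early_cost=1.0, late_cost=3.0):
    """Earliness/tardiness penalty for one customer; zero inside the time window."""
    start, end = window
    if arrival < start:
        return early_cost * (start - arrival)
    if arrival > end:
        return late_cost * (arrival - end)
    return 0.0

# A candidate outbound schedule evaluated against customer time windows (hours).
arrivals = [8.5, 11.2, 15.7]
windows  = [(8.0, 9.0), (10.0, 11.0), (14.0, 16.0)]
satisfied = sum(delivery_penalty(a, w) == 0 for a, w in zip(arrivals, windows))
total_penalty = sum(delivery_penalty(a, w) for a, w in zip(arrivals, windows))
print(satisfied / len(arrivals), total_penalty)   # fraction satisfied, penalty term
```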

  17. Short sleep duration among workers--United States, 2010.

    PubMed

    2012-04-27

    Insufficient sleep can have serious and sometimes fatal consequences for fatigued workers and others around them. For example, an estimated 20% of vehicle crashes are linked to drowsy driving. The National Sleep Foundation recommends that healthy adults sleep 7-9 hours per day. To assess the prevalence of short sleep duration among workers, CDC analyzed data from the 2010 National Health Interview Survey (NHIS). The analysis compared sleep duration by age group, race/ethnicity, sex, marital status, education, and employment characteristics. Overall, 30.0% of civilian employed U.S. adults (approximately 40.6 million workers) reported an average sleep duration of ≤6 hours per day. The prevalence of short sleep duration (≤6 hours per day) varied by industry of employment (range: 24.1%-41.6%), with a significantly higher rate of short sleep duration among workers in manufacturing (34.1%) compared with all workers combined. Among all workers, those who usually worked the night shift had a much higher prevalence of short sleep duration (44.0%, representing approximately 2.2 million night shift workers) than those who worked the day shift (28.8%, representing approximately 28.3 million day shift workers). An especially high prevalence of short sleep duration was reported by night shift workers in the transportation and warehousing (69.7%) and health-care and social assistance (52.3%) industries. Targeted interventions, such as evidence-based shift system designs that improve sleep opportunities and evidence-based training programs on sleep and working hours tailored for managers and employees, should be implemented to protect the health and safety of workers, their coworkers, and the public.

  18. Development of SRS.php, a Simple Object Access Protocol-based library for data acquisition from integrated biological databases.

    PubMed

    Barbosa-Silva, A; Pafilis, E; Ortega, J M; Schneider, R

    2007-12-11

    Data integration has become an important task for biological database providers. The current model for data exchange among different sources simplifies the way users access distinct information. The evolution of data representation from HTML to XML enabled programs, instead of humans, to interact with biological databases. We present here SRS.php, a PHP library that can interact with the Sequence Retrieval System (SRS) data integration platform. The library has been written using SOAP definitions and permits programmatic communication with SRS through web services. Interaction takes place by invoking the methods described in the WSDL and exchanging XML messages. The functions currently available in the library provide access to specific data stored in any of 90 different databases (such as UNIPROT, KEGG and GO) using the same query syntax. Including the described functions in PHP scripts turns them into web-service clients of the SRS server. The functions allow one to query the whole content of any SRS database, to list specific records in these databases, to retrieve specific fields from the records, and to link any record between any pair of linked databases. The case study presented illustrates the use of the library to retrieve information about entries of a Plant Defense Mechanisms database. The Plant Defense Mechanisms database is currently under development, and SRS.php is proposed as the means of data acquisition for the warehousing tasks related to its setup and maintenance.
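    The same WSDL-described operations can in principle be called from any SOAP-capable language. The sketch below shows the general pattern using the Python zeep client; the endpoint URL, operation name and query string are placeholders rather than the actual SRS.php interface.

```python
# Hedged sketch: calling a WSDL-described SOAP service in the style of SRS.
# The WSDL location and operation name below are hypothetical placeholders.
from zeep import Client

client = Client("http://example.org/srs/service?wsdl")   # hypothetical endpoint
# An SRS-style request: target database plus a query in SRS query syntax.
entries = client.service.getEntries("UNIPROT", "[uniprot-Description:defensin*]")
for entry in entries:
    print(entry)
```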

  19. A weight based genetic algorithm for selecting views

    NASA Astrophysics Data System (ADS)

    Talebian, Seyed H.; Kareem, Sameem A.

    2013-03-01

    A data warehouse is a technology designed to support decision making. It is built by extracting large amounts of data from different operational systems, transforming the data into a consistent form and loading it into a central repository. The type of query in a data warehouse environment differs from that in operational systems. In contrast to operational systems, the analytical queries issued against data warehouses involve summarization of large volumes of data and therefore normally take a long time to answer. On the other hand, these queries must be answered quickly so that managers can make decisions in as short a time as possible. As a result, improving query performance is an essential need in this environment. One of the most popular ways to do this is to use pre-computed query results: whenever a new query is submitted by the user, instead of computing it on the fly over a large underlying database, pre-computed results, or views, are used to answer it. Although the ideal option would be to pre-compute and store all possible views, in practice this is not feasible because of disk space constraints and the overhead of view updates. We therefore need to select a subset of possible views to store on disk. Selecting the right subset of views is an important challenge in data warehousing. In this paper we propose a Weight-Based Genetic Algorithm (WBGA) for solving the view selection problem with two objectives.
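    To make the selection problem concrete, the sketch below solves a toy instance with a simple greedy benefit-per-space heuristic; this is a stand-in for illustration only, not the WBGA proposed in the paper, and the benefits, sizes and space budget are invented.

```python
# Hedged sketch: pick materialized views under a storage budget, greedily by
# benefit per unit of space. Numbers are illustrative only.

def select_views(candidates, space_budget):
    """candidates: dict view_name -> (query_benefit, storage_size)."""
    chosen, used = [], 0.0
    ranked = sorted(candidates.items(),
                    key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
    for name, (benefit, size) in ranked:
        if used + size <= space_budget:
            chosen.append(name)
            used += size
    return chosen

views = {"sales_by_region": (90.0, 40.0),
         "sales_by_month": (60.0, 25.0),
         "sales_by_product": (30.0, 50.0)}
print(select_views(views, space_budget=70.0))   # -> ['sales_by_month', 'sales_by_region']
```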

  20. Customer and household matching: resolving entity identity in data warehouses

    NASA Astrophysics Data System (ADS)

    Berndt, Donald J.; Satterfield, Ronald K.

    2000-04-01

    The data preparation and cleansing tasks necessary to ensure high quality data are among the most difficult challenges faced in data warehousing and data mining projects. The extraction of source data, transformation into new forms, and loading into a data warehouse environment are all time consuming tasks that can be supported by methodologies and tools. This paper focuses on the problem of record linkage or entity matching, tasks that can be very important in providing high quality data. Merging two or more large databases into a single integrated system is a difficult problem in many industries, especially in the wake of acquisitions. For example, managing customer lists can be challenging when duplicate entries, data entry problems, and changing information conspire to make data quality an elusive target. Common tasks with regard to customer lists include customer matching to reduce duplicate entries and household matching to group customers. These often O(n²) problems can consume significant resources, both in computing infrastructure and human oversight, and the goal of high accuracy in the final integrated database can be difficult to assure. This paper distinguishes between attribute corruption and entity corruption, discussing the various impacts on quality. A metajoin operator is proposed and used to organize past and current entity matching techniques. Finally, a logistic regression approach to implementing the metajoin operator is discussed and illustrated with an example. The metajoin can be used to determine whether two records match, don't match, or require further evaluation by human experts. Properly implemented, the metajoin operator could allow the integration of individual databases with greater accuracy and lower cost.
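    The logistic-regression idea behind the proposed metajoin can be illustrated with a tiny sketch: pairwise similarity features are combined into a match probability, and a middle band is routed to human review. The feature weights below are invented for illustration, not fitted coefficients from the paper.

```python
# Hedged sketch of logistic-regression record matching with a review band.
# Weights, bias and thresholds are illustrative, not estimated values.
import math

def match_probability(features, weights, bias=-4.0):
    score = bias + sum(w * features[name] for name, w in weights.items())
    return 1.0 / (1.0 + math.exp(-score))

weights = {"name_similarity": 3.5, "address_similarity": 2.5, "zip_match": 1.5}
pair = {"name_similarity": 0.9, "address_similarity": 0.8, "zip_match": 1.0}

p = match_probability(pair, weights)
if p > 0.9:
    decision = "match"
elif p < 0.1:
    decision = "non-match"
else:
    decision = "send to human review"
print(round(p, 3), decision)
```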

  1. Warehousing re-annotated cancer genes for biomarker meta-analysis.

    PubMed

    Orsini, M; Travaglione, A; Capobianco, E

    2013-07-01

    Translational research in cancer genomics assigns a fundamental role to bioinformatics in support of candidate gene prioritization with regard to both biomarker discovery and target identification for drug development. Efforts in both such directions rely on the existence and constant update of large repositories of gene expression data and omics records obtained from a variety of experiments. Users who interactively interrogate such repositories may have problems in retrieving sample fields that present limited associated information, due for instance to incomplete entries or sometimes unusable files. Cancer-specific data sources present similar problems. Given that source integration usually improves data quality, one of the objectives is keeping the computational complexity sufficiently low to allow an optimal assimilation and mining of all the information. In particular, the scope of integrating intraomics data can be to improve the exploration of gene co-expression landscapes, while the scope of integrating interomics sources can be that of establishing genotype-phenotype associations. Both integrations are relevant to cancer biomarker meta-analysis, as the proposed study demonstrates. Our approach is based on re-annotating cancer-specific data available at the EBI's ArrayExpress repository and building a data warehouse aimed at biomarker discovery and validation studies. Cancer genes are organized by tissue, with biomedical and clinical evidence combined to increase reproducibility and consistency of results. For better comparative evaluation, multiple queries have been designed to efficiently address all types of experiments and platforms, and allow for retrieval of sample-related information, such as cell line, disease state and clinical aspects. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  2. Using Informatics and the Electronic Medical Record to Describe Antimicrobial Use in the Clinical Management of Diarrhea Cases at 12 Companion Animal Practices

    PubMed Central

    Anholt, R. Michele; Berezowski, John; Ribble, Carl S.; Russell, Margaret L.; Stephen, Craig

    2014-01-01

    Antimicrobial drugs may be used to treat diarrheal illness in companion animals. It is important to monitor antimicrobial use to better understand trends and patterns in antimicrobial resistance. There is no monitoring of antimicrobial use in companion animals in Canada. To explore how the use of electronic medical records could contribute to the ongoing, systematic collection of antimicrobial use data in companion animals, anonymized electronic medical records were extracted from 12 participating companion animal practices and warehoused at the University of Calgary. We used the pre-diagnostic, clinical features of diarrhea as the case definition in this study. Using text-mining technologies, cases of diarrhea were described by each of the following variables: diagnostic laboratory tests performed, the etiological diagnosis and antimicrobial therapies. The ability of the text miner to accurately describe the cases for each of the variables was evaluated. It could not reliably classify cases in terms of diagnostic tests or etiological diagnosis; a manual review of a random sample of 500 diarrhea cases determined that 88/500 (17.6%) of the target cases underwent diagnostic testing of which 36/88 (40.9%) had an etiological diagnosis. Text mining, compared to a human reviewer, could accurately identify cases that had been treated with antimicrobials with high sensitivity (92%, 95% confidence interval, 88.1%–95.4%) and specificity (85%, 95% confidence interval, 80.2%–89.1%). Overall, 7400/15,928 (46.5%) of pets presenting with diarrhea were treated with antimicrobials. Some temporal trends and patterns of the antimicrobial use are described. The results from this study suggest that informatics and the electronic medical records could be useful for monitoring trends in antimicrobial use. PMID:25057893
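    The reported accuracy figures come from comparing the text miner against the human reviewer. As a small illustration (with invented counts, not the study's confusion matrix), sensitivity and specificity can be computed as follows:

```python
# Hedged sketch: sensitivity and specificity from a confusion matrix.
# The counts are invented for illustration.
def sensitivity_specificity(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity(tp=230, fn=20, tn=212, fp=38)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```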

  3. Towards data warehousing and mining of protein unfolding simulation data.

    PubMed

    Berrar, Daniel; Stahl, Frederic; Silva, Candida; Rodrigues, J Rui; Brito, Rui M M; Dubitzky, Werner

    2005-10-01

    The prediction of protein structure and the precise understanding of protein folding and unfolding processes remain among the greatest challenges in structural biology and bioinformatics. Computer simulations based on molecular dynamics (MD) are at the forefront of the effort to gain a deeper understanding of these complex processes. Currently, these MD simulations are usually on the order of tens of nanoseconds, generate a large amount of conformational data and are computationally expensive. More and more groups run such simulations and generate a myriad of data, which raises new challenges in managing and analyzing these data. Because of the vast range of proteins researchers want to study and simulate, the computational effort needed to generate data, the large data volumes involved, and the different types of analyses scientists need to perform, it is desirable to provide a public repository allowing researchers to pool and share protein unfolding data. To adequately organize, manage, and analyze the data generated by unfolding simulation studies, we designed a data warehouse system that is embedded in a grid environment to facilitate the seamless sharing of available computer resources and thus enable many groups to share complex molecular dynamics simulations on a more regular basis. To gain insight into the conformational fluctuations and stability of the monomeric forms of the amyloidogenic protein transthyretin (TTR), molecular dynamics unfolding simulations of the monomer of human TTR have been conducted. Trajectory data and meta-data of the wild-type (WT) protein and the highly amyloidogenic variant L55P-TTR represent the test case for the data warehouse. Web and grid services, especially pre-defined data mining services that can run on or 'near' the data repository of the data warehouse, are likely to play a pivotal role in the analysis of molecular dynamics unfolding data.

  4. Uncovering the historic environmental hazards of urban brownfields.

    PubMed

    Litt, Jill S; Burke, Thomas A

    2002-12-01

    In Baltimore, over 1,000 vacant industrial sites persist across its urban landscape, yet little is known about the potential environmental health risks that may undermine future cleanup and redevelopment activities and the health of those in communities near these sites. This study examined the characteristics of urban brownfield properties in southeast Baltimore, Maryland, and screened sites for their potential environmental hazards. In addition, demographic and health data were evaluated to profile the social and health status of those in brownfield communities. The results show that brownfields in southeast Baltimore represent a range of historic operations, including metal smelting, oil refining, warehousing, and transportation, as well as paints, plastics, and metals manufacturing. The screening method identified a range of substances associated with these properties, including heavy metals, chlorinated hydrocarbons, and polycyclic aromatic hydrocarbons, all of which are suspected or recognized toxicants, and many of which are persistent in the environment. Spatially, these sites are concentrated in white, working class neighborhoods in which poverty levels exceed and educational attainment lags behind state and national averages. Moreover, these sites are concentrated in communities in which excess mortality rates due to respiratory disease, cancer, and heart disease exist when compared to the city, state, and national averages. This investigation demonstrated the usefulness of historic archives, real estate records, regulatory files, and national hazard-tracking systems based on standard industrial classification (SIC) to screen brownfield properties for their hazard potential. This analysis provides the foundation for further site monitoring and testing, cleanup and redevelopment priority setting, risk management strategies, and neighborhood planning, and it illustrates the need for increased health surveillance and disease prevention strategies in affected communities.

  5. Examining urban brownfields through the public health "macroscope".

    PubMed

    Litt, Jill S; Tran, Nga L; Burke, Thomas A

    2002-04-01

    Efforts to cope with the legacy of our industrial cities--blight, poverty, environmental degradation, ailing communities--have galvanized action across the public and private sectors to move vacant industrial land, also referred to as brownfields, to productive use; to curb sprawling development outside urban areas; and to reinvigorate urban communities. Such efforts, however, may be proceeding without thorough investigations into the environmental health and safety risks associated with industrial brownfields properties and the needs of affected neighborhoods. We describe an approach to characterize vacant and underused industrial and commercial properties in Southeast Baltimore and the health and well being of communities living near these properties. The screening algorithm developed to score and rank properties in Southeast Baltimore (n= 182) showed that these sites are not benign. The historical data revealed a range of hazardous operations, including metal smelting, oil refining, warehousing, and transportation, as well as paints, plastics, and metals manufacturing. The data also identified hazardous substances linked to these properties, including heavy metals, solvents, polycyclic aromatic hydrocarbons, plasticizers, and insecticides, all of which are suspected or recognized toxicants and many of which are persistent in the environment. The health analysis revealed disparities across Southeast Baltimore communities, including excess deaths from respiratory illness (lung cancer, chronic obstructive pulmonary disease, influenza, and pneumonia), total cancers, and a "leading cause of death" index and a spatial and statistical relationship between environmentally degraded brownfields areas and at-risk communities. Brownfields redevelopment is a key component of our national efforts to address environmental justice and health disparities across urban communities and is critical to urban revitalization. Incorporating public health into brownfields-related cleanup and land-use decisions will increase the odds for successful neighborhood redevelopment and long-term public health benefits.

  6. Examining urban brownfields through the public health "macroscope".

    PubMed Central

    Litt, Jill S; Tran, Nga L; Burke, Thomas A

    2002-01-01

    Efforts to cope with the legacy of our industrial cities--blight, poverty, environmental degradation, ailing communities--have galvanized action across the public and private sectors to move vacant industrial land, also referred to as brownfields, to productive use; to curb sprawling development outside urban areas; and to reinvigorate urban communities. Such efforts, however, may be proceeding without thorough investigations into the environmental health and safety risks associated with industrial brownfields properties and the needs of affected neighborhoods. We describe an approach to characterize vacant and underused industrial and commercial properties in Southeast Baltimore and the health and well being of communities living near these properties. The screening algorithm developed to score and rank properties in Southeast Baltimore (n= 182) showed that these sites are not benign. The historical data revealed a range of hazardous operations, including metal smelting, oil refining, warehousing, and transportation, as well as paints, plastics, and metals manufacturing. The data also identified hazardous substances linked to these properties, including heavy metals, solvents, polycyclic aromatic hydrocarbons, plasticizers, and insecticides, all of which are suspected or recognized toxicants and many of which are persistent in the environment. The health analysis revealed disparities across Southeast Baltimore communities, including excess deaths from respiratory illness (lung cancer, chronic obstructive pulmonary disease, influenza, and pneumonia), total cancers, and a "leading cause of death" index and a spatial and statistical relationship between environmentally degraded brownfields areas and at-risk communities. Brownfields redevelopment is a key component of our national efforts to address environmental justice and health disparities across urban communities and is critical to urban revitalization. Incorporating public health into brownfields-related cleanup and land-use decisions will increase the odds for successful neighborhood redevelopment and long-term public health benefits. PMID:11929727

  7. Using informatics and the electronic medical record to describe antimicrobial use in the clinical management of diarrhea cases at 12 companion animal practices.

    PubMed

    Anholt, R Michele; Berezowski, John; Ribble, Carl S; Russell, Margaret L; Stephen, Craig

    2014-01-01

    Antimicrobial drugs may be used to treat diarrheal illness in companion animals. It is important to monitor antimicrobial use to better understand trends and patterns in antimicrobial resistance. There is no monitoring of antimicrobial use in companion animals in Canada. To explore how the use of electronic medical records could contribute to the ongoing, systematic collection of antimicrobial use data in companion animals, anonymized electronic medical records were extracted from 12 participating companion animal practices and warehoused at the University of Calgary. We used the pre-diagnostic, clinical features of diarrhea as the case definition in this study. Using text-mining technologies, cases of diarrhea were described by each of the following variables: diagnostic laboratory tests performed, the etiological diagnosis and antimicrobial therapies. The ability of the text miner to accurately describe the cases for each of the variables was evaluated. It could not reliably classify cases in terms of diagnostic tests or etiological diagnosis; a manual review of a random sample of 500 diarrhea cases determined that 88/500 (17.6%) of the target cases underwent diagnostic testing of which 36/88 (40.9%) had an etiological diagnosis. Text mining, compared to a human reviewer, could accurately identify cases that had been treated with antimicrobials with high sensitivity (92%, 95% confidence interval, 88.1%-95.4%) and specificity (85%, 95% confidence interval, 80.2%-89.1%). Overall, 7400/15,928 (46.5%) of pets presenting with diarrhea were treated with antimicrobials. Some temporal trends and patterns of the antimicrobial use are described. The results from this study suggest that informatics and the electronic medical records could be useful for monitoring trends in antimicrobial use.

  8. BioWarehouse: a bioinformatics database warehouse toolkit

    PubMed Central

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David WJ; Tenenbaum, Jessica D; Karp, Peter D

    2006-01-01

    Background This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. Conclusion BioWarehouse embodies significant progress on the database integration problem for bioinformatics. PMID:16556315

  9. BioWarehouse: a bioinformatics database warehouse toolkit.

    PubMed

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David W J; Tenenbaum, Jessica D; Karp, Peter D

    2006-03-23

    This article addresses the problem of interoperation of heterogeneous bioinformatics databases. We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. BioWarehouse embodies significant progress on the database integration problem for bioinformatics.
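    The enzyme-coverage question answered above is exactly the kind of multi-database SQL query a warehouse makes possible. The sketch below runs an analogous query over a toy in-memory schema; the table and column names are invented and do not reflect the actual BioWarehouse schema.

```python
# Hedged sketch: find enzyme activities with no sequence in any integrated source,
# using a toy stand-in schema in SQLite (not the real BioWarehouse schema).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE enzyme_activity (ec_number TEXT PRIMARY KEY, name TEXT);
CREATE TABLE protein_sequence (id INTEGER PRIMARY KEY, ec_number TEXT, source_db TEXT);
INSERT INTO enzyme_activity VALUES ('1.1.1.1', 'alcohol dehydrogenase'),
                                   ('4.2.1.17', 'enoyl-CoA hydratase'),
                                   ('2.7.7.99', 'hypothetical transferase');
INSERT INTO protein_sequence VALUES (1, '1.1.1.1', 'UniProt'),
                                    (2, '4.2.1.17', 'GenBank');
""")

# Enzyme activities lacking any sequence record in the integrated sources.
orphans = con.execute("""
    SELECT a.ec_number, a.name
    FROM enzyme_activity a
    LEFT JOIN protein_sequence s ON s.ec_number = a.ec_number
    WHERE s.id IS NULL
""").fetchall()
print(orphans)   # -> [('2.7.7.99', 'hypothetical transferase')]
```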

  10. Depth of manual dismantling analysis: A cost–benefit approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Achillas, Ch., E-mail: c.achillas@ihu.edu.gr; Aidonis, D.; Vlachokostas, Ch.

    Highlights: ► A mathematical modeling tool for OEMs. ► The tool can be used by OEMs, recyclers of electr(on)ic equipment or regulators of WEEE management systems. ► The tool uses cost–benefit analysis to determine the optimal depth of product disassembly. ► The reusable materials and the quantity of metals and plastics recycled can be quantified in an easy-to-comprehend manner. - Abstract: This paper presents a decision support tool for manufacturers and recyclers addressing end-of-life strategies for waste electrical and electronic equipment. A mathematical formulation based on the cost-benefit analysis concept is described analytically in order to determine the parts and/or components of an obsolete product that should be either non-destructively recovered for reuse or recycled. The framework optimally determines the depth of disassembly for a given product, taking economic considerations into account. On this basis, it embeds all relevant cost elements in the decision-making process, such as recovered materials and (depreciated) parts/components, labor costs, energy consumption, equipment depreciation, quality control and warehousing. This tool can be part of the strategic decision-making process in order to maximize profitability or minimize end-of-life management costs. A case study demonstrating the model's applicability is presented for a typical electronic product in terms of structure and material composition. Taking into account the market values of the pilot product's components, manual disassembly is shown to be profitable, with the marginal revenues from recovered reusable materials estimated at 2.93–23.06 €, depending on the level of disassembly.
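    The cost-benefit logic can be illustrated with a toy rule: keep disassembling one level deeper only while the marginal revenue of what is recovered exceeds the marginal cost of recovering it. The per-level figures below are invented, not taken from the paper's case study.

```python
# Hedged sketch of a depth-of-disassembly decision by marginal cost-benefit.
# Per-level revenues and costs (in euros) are illustrative only.

def optimal_depth(levels):
    """levels: list of (marginal_revenue_eur, marginal_cost_eur) per disassembly level."""
    depth, profit = 0, 0.0
    for revenue, cost in levels:
        if revenue - cost <= 0:
            break
        depth += 1
        profit += revenue - cost
    return depth, profit

levels = [(12.0, 4.0), (8.5, 5.0), (3.0, 6.5)]   # deeper levels pay off less
print(optimal_depth(levels))                      # -> (2, 11.5)
```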

  11. Gut Microbiota Dysbiosis as Risk and Premorbid Factors of IBD and IBS Along the Childhood-Adulthood Transition.

    PubMed

    Putignani, Lorenza; Del Chierico, Federica; Vernocchi, Pamela; Cicala, Michele; Cucchiara, Salvatore; Dallapiccola, Bruno

    2016-02-01

    Gastrointestinal disorders, although clinically heterogeneous, share pathogenic mechanisms, including genetic susceptibility, impaired gut barrier function, altered microbiota, and environmental triggers (infections, social and behavioral factors, epigenetic control, and diet). Gut microbiota has been studied for inflammatory bowel disease (IBD) and irritable bowel syndrome (IBS) in either children or adults, while modifiable gut microbiota features, acting as risk and premorbid factors along the childhood-adulthood transition, have not been thoroughly investigated so far. Indeed, the relationship between variations of the entire host/microbiota/environmental scenario and clinical phenotypes is still not fully understood. In this respect, tracking gut dysbiosis grading may help deciphering host phenotype-genotype associations and microbiota shifts in an integrated top-down omics-based approach within large-scale pediatric and adult case-control cohorts. Large-scale gut microbiota signatures and host inflammation patterns may be integrated with dietary habits, under genetic and epigenetic constraints, providing gut dysbiosis profiles acting as risk predictors of IBD or IBS in preclinical cases. Tracking dysbiosis supports new personalized/stratified IBD and IBS prevention programmes, generating Decision Support System tools. They include (1) high risk or flare-up recurrence -omics-based dysbiosis profiles; (2) microbial and molecular biomarkers of health and disease; (3) -omics-based pipelines for laboratory medicine diagnostics; (4) health apps for self-management of score-based dietary profiles, which can be shared with clinicians for nutritional habit and lifestyle amendment; (5) -omics profiling data warehousing and public repositories for IBD and IBS profile consultation. Dysbiosis-related indexes can represent novel laboratory and clinical medicine tools preventing or postponing the disease, finally interfering with its natural history.

  12. The design and implementation of an open-source, data-driven cohort recruitment system: the Duke Integrated Subject Cohort and Enrollment Research Network (DISCERN).

    PubMed

    Ferranti, Jeffrey M; Gilbert, William; McCall, Jonathan; Shang, Howard; Barros, Tanya; Horvath, Monica M

    2012-06-01

    Failure to reach research subject recruitment goals is a significant impediment to the success of many clinical trials. Implementation of health-information technology has allowed retrospective analysis of data for cohort identification and recruitment, but few institutions have also leveraged real-time streams to support such activities. Duke Medicine has deployed a hybrid solution, The Duke Integrated Subject Cohort and Enrollment Research Network (DISCERN), that combines both retrospective warehouse data and clinical events contained in prospective Health Level 7 (HL7) messages to immediately alert study personnel of potential recruits as they become eligible. DISCERN analyzes more than 500000 messages daily in service of 12 projects. Users may receive results via email, text pages, or on-demand reports. Preliminary results suggest DISCERN's unique ability to reason over both retrospective and real-time data increases study enrollment rates while reducing the time required to complete recruitment-related tasks. The authors have introduced a preconfigured DISCERN function as a self-service feature for users. The DISCERN framework is adoptable primarily by organizations using both HL7 message streams and a data warehouse. More efficient recruitment may exacerbate competition for research subjects, and investigators uncomfortable with new technology may find themselves at a competitive disadvantage in recruitment. DISCERN's hybrid framework for identifying real-time clinical events housed in HL7 messages complements the traditional approach of using retrospective warehoused data. DISCERN is helpful in instances when the required clinical data may not be loaded into the warehouse and thus must be captured contemporaneously during patient care. Use of an open-source tool supports generalizability to other institutions at minimal cost.
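    A minimal sketch of the real-time half of such a hybrid pipeline is shown below, using the python-hl7 package to screen an HL7 v2 result message against a recruitment criterion; the message, the eligibility rule and the alerting are simplified placeholders, not DISCERN's actual logic.

```python
# Hedged sketch: screen a prospective HL7 v2 ORU message for a study criterion.
# The message content and the eligibility rule are hypothetical examples.
import hl7

raw = "\r".join([
    "MSH|^~\\&|LAB|HOSP|EHR|HOSP|202401011200||ORU^R01|MSG0001|P|2.3",
    "PID|1||123456^^^HOSP^MR||DOE^JANE",
    "OBX|1|NM|2345-7^GLUCOSE^LN||212|mg/dL|70-110|H|||F",
])

msg = hl7.parse(raw)
for obx in msg.segments("OBX"):
    test_id = str(obx[3]).split("^")[0]          # OBX-3: observation identifier
    value = float(str(obx[5]))                   # OBX-5: observation value
    if test_id == "2345-7" and value > 200:      # hypothetical eligibility rule
        mrn = str(msg.segment("PID")[3]).split("^")[0]
        print(f"potential recruit: MRN {mrn}, glucose {value} mg/dL")
```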

  13. Did I Tell You That? Ethical Issues Related to Using Computational Methods to Discover Non-Disclosed Patient Characteristics

    PubMed Central

    Cato, Kenrick D; Bockting, Walter; Larson, Elaine

    2016-01-01

    Background Widespread availability of large data sets through warehousing of electronic health records coupled with increasingly sophisticated information technology and related statistical methods offer great potential for a variety of applications for health and disease surveillance, developing predictive models and advancing decision support for clinicians. However, use of such ‘big data’ mining and discovery techniques has also raised ethical issues such as how to balance privacy and autonomy with the wider public benefits of data sharing. More specifically, electronic data are being increasingly used to identify individual characteristics which can be useful for clinical prediction and management, but that were not previously disclosed to a clinician. This process in computer parlance is called electronic phenotyping, and has a number of ethical implications. Approach Using the Belmont Report’s principles of respect for persons, beneficence, and justice as a framework, we examined the ethical issues posed by electronic phenotyping. Findings Ethical issues identified include the ability of the patient to consent for the use of their information, the ability to suppress pediatric information, ensuring that the potential benefits justify the risks of harm to patients, and acknowledging that the clinician’s biases or stereotypes, conscious or unintended, may also become a factor in the therapeutic interaction. We illustrate these issues with two vignettes, using the person characteristic of gender minority status (i.e., transgender identity) and the health history characteristic of substance abuse. Conclusion Big data mining has the potential to uncover patient characteristics previously obscured which can provide clinicians with beneficial clinical information. Hence, ethical guidelines must be updated to ensure that electronic phenotyping supports the principles of respect for persons, beneficence, and justice. PMID:27534587

  14. An Assessment of Direct and Indirect Economic Losses of Climatic Extreme Events

    NASA Astrophysics Data System (ADS)

    Otto, C.; Willner, S. N.; Wenz, L.; Levermann, A.

    2015-12-01

    The risk of extreme weather events such as storms, heat extremes, and floods has already risen due to anthropogenic climate change and is likely to increase further under future global warming. Additionally, the structure of the global economy has changed substantially in recent decades. In the process of globalization, local economies have become more and more interwoven, forming a complex network. Together with a trend towards lean production, this has resulted in a strong dependency of local manufacturers on global supply and value-added chains, which may render the economic network more vulnerable to climatic extremes; outages of local manufacturers trigger indirect losses, which spread along supply chains and can even outstrip direct losses. Accordingly, a comprehensive climate risk assessment should consider these inter-linkages. Here, we present acclimate, an agent-based dynamic damage propagation model. Its agents are production and consumption sites, which are interlinked by economic flows, accounting for the complexity as well as the heterogeneity of the global supply network. Assessing the economic response on the timescale of the adverse event, the model permits studying the temporal and spatial evolution of indirect production losses during the disaster and in the subsequent recovery phase of the economy. In this study, we focus on the dynamic economic resilience, defined here as the ratio of direct to total losses. Under this definition the resilience of the system under consideration is low if indirect losses are high. We find and assess a nonlinear dependence of the resilience on the disaster size. Further, we analyze the influence of the network structure on resilience and discuss the potential of warehousing as an adaptation option.
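    Stated as a formula (a restatement of the definition above, with symbols chosen here for illustration), the dynamic economic resilience is the share of total losses that is direct:

```latex
R \;=\; \frac{L_{\mathrm{direct}}}{L_{\mathrm{direct}} + L_{\mathrm{indirect}}}, \qquad 0 < R \le 1,
```

    so R close to 1 means indirect losses are small relative to direct losses, while a small R signals strong loss amplification through the supply network.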

  15. ThaleMine: A Warehouse for Arabidopsis Data Integration and Discovery.

    PubMed

    Krishnakumar, Vivek; Contrino, Sergio; Cheng, Chia-Yi; Belyaeva, Irina; Ferlanti, Erik S; Miller, Jason R; Vaughn, Matthew W; Micklem, Gos; Town, Christopher D; Chan, Agnes P

    2017-01-01

    ThaleMine (https://apps.araport.org/thalemine/) is a comprehensive data warehouse that integrates a wide array of genomic information of the model plant Arabidopsis thaliana. The data collection currently includes the latest structural and functional annotation from the Araport11 update, the Col-0 genome sequence, RNA-seq and array expression, co-expression, protein interactions, homologs, pathways, publications, alleles, germplasm and phenotypes. The data are collected from a wide variety of public resources. Users can browse gene-specific data through Gene Report pages, identify and create gene lists based on experiments or indexed keywords, and run GO enrichment analysis to investigate the biological significance of selected gene sets. Developed by the Arabidopsis Information Portal project (Araport, https://www.araport.org/), ThaleMine uses the InterMine software framework, which builds well-structured data, and provides powerful data query and analysis functionality. The warehoused data can be accessed by users via graphical interfaces, as well as programmatically via web-services. Here we describe recent developments in ThaleMine including new features and extensions, and discuss future improvements. InterMine has been broadly adopted by the model organism research community including nematode, rat, mouse, zebrafish, budding yeast, the modENCODE project, as well as being used for human data. ThaleMine is the first InterMine developed for a plant model. As additional new plant InterMines are developed by the legume and other plant research communities, the potential of cross-organism integrative data analysis will be further enabled. © The Author 2016. Published by Oxford University Press on behalf of Japanese Society of Plant Physiologists. All rights reserved. For permissions, please email: journals.permissions@oup.com.
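    Programmatic access goes through the standard InterMine web-service clients. The sketch below uses the Python intermine package; the service URL suffix, the queried class, the view fields and the example gene are assumptions made for illustration and may differ from the live ThaleMine data model.

```python
# Hedged sketch: querying an InterMine instance (here assumed to be ThaleMine)
# with the intermine Python client. Endpoint suffix and fields are assumptions.
from intermine.webservice import Service

service = Service("https://apps.araport.org/thalemine/service")  # assumed endpoint
query = service.new_query("Gene")
query.add_view("primaryIdentifier", "symbol")
query.add_constraint("symbol", "=", "FLC")          # hypothetical gene of interest

for row in query.rows(size=10):
    print(row["primaryIdentifier"], row["symbol"])
```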

  16. Hubble Space Telescope on-line telemetry archive for monitoring scientific instruments

    NASA Astrophysics Data System (ADS)

    Miebach, Manfred P.

    2002-12-01

    A major milestone in an effort to update the aging Hubble Space Telescope (HST) ground system was completed when HST operations were switched to a new ground system, a project called the "Vision 2000 Control Center System (CCS)", at the time of the third Servicing Mission in December 1999. A major CCS subsystem is the Space Telescope Engineering Data Store, the design of which is based on modern Data Warehousing technology. In fact, the Data Warehouse (DW) as implemented in the CCS Ground System that operates and monitors the Hubble Space Telescope represents the first use of a commercial Data Warehouse to manage engineering data. By the end of February 2002, the process of populating the Data Warehouse with HST historical telemetry data had been completed, providing access to HST engineering data for a period of over 12 years with a current data volume of 2.8 Terabytes. This paper describes hands-on experience from an end-user perspective, using the CCS system capabilities, including the Data Warehouse as an HST engineering telemetry archive. The Engineering Team at the Space Telescope Science Institute uses HST telemetry extensively for monitoring the Scientific Instruments, in particular for spacecraft anomaly resolution, Scientific Instrument trending, and improvements in Instrument operational efficiency. The overall idea is to maximize the science output of the space observatory. Furthermore, the CCS provides a powerful feature to build, save, and recall real-time display pages customized to specific subsystems and operational scenarios. Engineering teams use the real-time monitoring capabilities intensively during Servicing Missions and real-time commanding to handle anomaly situations, while the Flight Operations Team (FOT) monitors the spacecraft around the clock.

  17. Mental disorders in Australian prisoners: a comparison with a community sample.

    PubMed

    Butler, Tony; Andrews, Gavin; Allnutt, Stephen; Sakashita, Chika; Smith, Nadine E; Basson, John

    2006-03-01

    The plight of those with mental health problems and the possible role of prisons in "warehousing" these individuals have received considerable media and political attention. Prisoners are generally excluded from community-based surveys, and to date no studies have compared prisoners to the community. The objective was to examine whether excess psychiatric morbidity exists in prisoners compared to the general community after adjusting for demographics. Prison data were obtained from a consecutive sample of reception prisoners admitted into the state's correctional system in 2001 (n = 916). Community data were obtained from the 1997 Australian National Survey of Mental Health and Wellbeing (n = 8168). Mental health diagnoses were obtained using the Composite International Diagnostic Interview and a number of other screening measures. Weighting was used in calculating the 12-month prevalence estimates to control for demographic differences between the two samples. Logistic regression adjusting for age, sex and education was used to compare the prison and community samples. The 12-month prevalence of any psychiatric illness was 80% in prisoners and 31% in the community. Substantially more psychiatric morbidity was detected among prisoners than in the community group after accounting for demographic differences, particularly symptoms of psychosis (OR = 11.8, 95% CI 7.5-18.7), substance use disorders (OR = 11.4, 95% CI 9.7-13.6) and personality disorders (OR = 8.6, 95% CI 7.2-10.3). Mental functioning and disability scores were worse for prisoners than for the community sample, except for physical health. This study found an overrepresentation of psychiatric morbidity in the prisoner population. Identifying the causes of this excess requires further investigation.

  18. Modelling and genetic algorithm based optimisation of inverse supply chain

    NASA Astrophysics Data System (ADS)

    Bányai, T.

    2009-04-01

    The design and control of recycling systems for products with environmental risk have been discussed worldwide for a long time. The main reasons to address this subject are the following: reduction of waste volume, intensification of materials recycling, closing the loop, use of fewer resources, and reduction of environmental risk [1, 2]. The development of recycling systems is based on the integrated use of technological and logistic resources and know-how [3]. The financial viability of recycling systems is partly determined by the recovery, disassembly and remanufacturing options of the used products [4, 5, 6], but their investment and operating costs are characterised by high logistics costs caused by geographically wide collection systems with several collection levels and a large number of operation points in the inverse supply chain. The reduction of these costs is a popular area of logistics research. This research includes the design and implementation of comprehensive environmental waste and recycling programmes to suit business strategies (global system), the design and supply of all equipment for production-line collection (external system), and the design of logistics processes to suit economic and ecological requirements (external system) [7]. To the knowledge of the author, there has been no research work on supply chain design whose purpose is the logistics-oriented optimisation of an inverse supply chain in the case of a non-linear total cost function consisting not only of operation costs but also of environmental risk costs. The antecedent of this research is that the author has taken part in several research projects in the fields of the closed-loop economy ("Closing the loop of electr(on)ic products and domestic appliances from product planning to end-of-life technologies"), environmentally friendly disassembly (Concept for logistical and environmental disassembly technologies) and the design of recycling systems for household appliances (Recycling of household appliances with emphasis on reuse options). The purpose of this paper is to present a possible method for avoiding unnecessary environmental risk and land use caused by an unnecessarily large supply chain in the collection systems of recycling processes. In the first part of the paper the author presents the mathematical model of recycling-related collection systems (applied especially to waste electric and electronic products), and in the second part a genetic algorithm based optimisation method is demonstrated, by the aid of which it is possible to determine the optimal structure of the inverse supply chain with respect to economic, ecological and logistic objective functions. The model of the inverse supply chain is based on a multi-level, hierarchical collection system. In this static model it is assumed that technical conditions are constant. The total costs consist of three parts: total infrastructure costs, total material handling costs and environmental risk costs. The infrastructure-related costs depend only on the specific fixed costs and the specific unit costs of the operation points (collection, pre-treatment, treatment, recycling and reuse plants). The costs of warehousing and transportation are represented by the material handling related costs. 
The most important factors determining the level of environmental risk cost are the number of products recycled (treated or reused) out of time, the number of supply chain objects and the length of transportation routes. The objective function is the minimisation of the total cost subject to the constraints. Although a great deal of research has discussed supply chain design [8], most of it concentrates on linear cost functions. In this model, non-linear cost functions are used. The non-linear cost functions and the potentially high number of objects in the inverse supply chain led to the problem of choosing a suitable solution method. The problem cannot be solved by analytical methods, so a genetic algorithm based heuristic optimisation method was chosen to find the optimal solution. The input parameters of the optimisation are the following: specific fixed, unit and environmental risk costs of the collection points of the inverse supply chain, specific warehousing and transportation costs, and environmental risk costs of transportation. The output parameters are the following: the number of objects at the different hierarchical levels of the collection system, infrastructure costs, logistics costs, and the environmental risk costs arising from used infrastructure, transportation and the number of products recycled out of time. The next step of the research work was the application of the above-mentioned method. The developed application makes it possible to define the input parameters of the real system, to display the chosen optimal solution graphically for the given input parameters, to display the cost structure of the optimal solution graphically, and to set the parameters of the algorithm (e.g. number of individuals, operators and termination conditions). The sensitivity analysis of the objective function and the test results showed that the structure of the inverse supply chain depends on the proportion of the specific costs. In particular, the proportion of the specific environmental risk costs influences the structure of the system and the number of objects at each hierarchical level of the collection system. The sensitivity analysis of the total cost function was performed for three cases. In the first case the effect of the proportion of specific infrastructure and logistics costs was analysed. If the infrastructure costs are significantly lower than the total costs of warehousing and transportation, then almost all objects of the first hierarchical level of collection (collection directly from the users) are set up. For the opposite proportion of costs, the first level of collection is not necessary, because it can be replaced by more expensive transportation directly to the objects of the second or lower hierarchical levels. In the second case the effect of the proportion of logistics and environmental risk costs was analysed. The analysis led to the following result: if the logistics costs are significantly higher than the total environmental risk costs, then, because of the constant infrastructure costs, the preference for logistics operations depends on the proportion of the environmental risk costs caused by out-of-time recycled products and by transportation. In the third case the effect of the proportion of infrastructure and environmental risk costs was examined. 
If the infrastructure costs are significantly lower than the environmental risk costs, then almost all objects of the first hierarchical level of collection (collection directly from the users) are set up. For the opposite proportion of costs, the first collection phase is shifted towards the last hierarchical level of the supply chain to avoid very high infrastructure set-up and operating costs. The advantages of the presented model and solution method can be summarised as follows: the model makes it possible to decide the structure of the inverse supply chain (which objects to open or close); it reduces infrastructure cost, especially for supply chains with high specific fixed costs; it reduces the environmental risk cost by finding an optimal balance between the number of objects in the system and the number of products recycled out of time; and it reduces logistics costs by determining the optimal quantitative parameters of the material flow operations. The future of this research work is the use of differentiated lead times, which makes it possible to take the above-mentioned non-linear infrastructure, transportation, warehousing and environmental risk costs into consideration for a given product portfolio segmented by lead time. This publication was supported by the National Office for Research and Technology within the frame of the Pázmány Péter programme. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Office for Research and Technology. Literature: [1] H. F. Lund: McGraw-Hill Recycling Handbook. McGraw-Hill. 2000. [2] P. T. Williams: Waste Treatment and Disposal. John Wiley and Sons Ltd. 2005. [3] M. Christopher: Logistics & Supply Chain Management: Creating Value-Adding Networks. Pearson Education. [4] A. Gungor, S. M. Gupta: Issues in environmentally conscious manufacturing and product recovery: a survey. Computers & Industrial Engineering. Volume 36. Issue 4. 1999. pp. 811-853. [5] H. C. Zhang, T. C. Kuo, H. Lu, S. H. Huang: Environmentally conscious design and manufacturing: A state-of-the-art survey. Journal of Manufacturing Systems. Volume 16. Issue 5. 1997. pp. 352-371. [6] P. Veerakamolmal, S. Gupta: Design for Disassembly, Reuse, and Recycling. Green Electronics/Green Bottom Line. 2000. pp. 69-82. [7] A. Rushton, P. Croucher, P. Baker: The Handbook of Logistics and Distribution Management. Kogan Page Limited. 2006. [8] H. Stadtler, C. Kilger: Supply Chain Management and Advanced Planning: Concepts, Models, Software, and Case Studies. Springer. 2005.
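    As a purely illustrative sketch of the kind of non-linear total cost such an optimisation evaluates for each candidate network (functional forms and numbers invented, not the paper's model), see below; a GA would minimise a function of this shape over which objects to open and how much flow to route through them.

```python
# Hedged sketch: non-linear total cost of a candidate inverse-supply-chain design,
# combining infrastructure, material handling and environmental risk terms.
# All coefficients and exponents are illustrative only.

def total_cost(opened_sites, throughput_t, late_products):
    infrastructure = sum(5000 + 40 * throughput_t[s] ** 0.8 for s in opened_sites)
    handling = sum(12 * throughput_t[s] + 0.02 * throughput_t[s] ** 1.5
                   for s in opened_sites)
    env_risk = 90 * late_products + 3 * len(opened_sites)
    return infrastructure + handling + env_risk

throughput = {"c1": 120.0, "c2": 75.0}            # tonnes handled per opened site
print(round(total_cost(["c1", "c2"], throughput, late_products=10), 1))
```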

  19. GeneLab Analysis Working Group Kick-Off Meeting

    NASA Technical Reports Server (NTRS)

    Costes, Sylvain V.

    2018-01-01

    Meeting agenda: goals for the GeneLab AWG; the GL vision; review of the GeneLab AWG charter; timeline and milestones for 2018; logistics (monthly meeting, workshop, internship, ASGSR); introduction of the team leads and the goals of each group; introduction of all members; Q/A. Topics include a three-tier client strategy to democratize data; physiological changes, pathway enrichment, differential expression, normalization, processing metadata, and reproducibility; and data federation/integration with heterogeneous external bioinformatics databases. The GLDS currently serves over 100 omics investigations to the biomedical community via open access. In order to expand the scope of metadata record searches via the GLDS, we designed a metadata warehouse that collects and updates metadata records from external systems housing similar data. To demonstrate the capabilities of federated search and retrieval of these data, we imported metadata records from three open-access data systems into the GLDS metadata warehouse: NCBI's Gene Expression Omnibus (GEO), EBI's PRoteomics IDEntifications (PRIDE) repository, and the Metagenomics Analysis server (MG-RAST). Each of these systems defines metadata for omics data sets differently. One solution to bridge such differences is to employ a common object model (COM) to which each system's representation of metadata can be mapped. Warehoused metadata records are then transformed at ETL time to this single, common representation. Queries generated via the GLDS are then executed against the warehouse, and matching records are shown in the COM representation (Fig. 1). While this approach is relatively straightforward to implement, the volume of data in the omics domain presents challenges in dealing with latency and currency of records. Furthermore, until now there has been no coordinated, federated search for and retrieval of these kinds of data across other open-access systems that would let users conduct biological meta-investigations using data from a variety of sources. Such meta-investigations are key to corroborating findings from many kinds of assays and translating them into systems biology knowledge and, eventually, therapeutics.
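    The common-object-model step can be sketched as a small ETL mapping: each external repository's record is translated into one shared set of fields. The field names below are invented for illustration and do not reflect the actual GLDS warehouse schema.

```python
# Hedged sketch: map heterogeneous repository metadata to a common object model.
# COMMON_FIELDS and the per-source mappings are hypothetical placeholders.
COMMON_FIELDS = ("accession", "title", "organism", "assay_type", "source")

def to_common_model(record, source):
    if source == "GEO":
        mapped = {"accession": record["geo_accession"], "title": record["title"],
                  "organism": record["organism_ch1"], "assay_type": record["type"]}
    elif source == "PRIDE":
        mapped = {"accession": record["projectAccession"], "title": record["projectTitle"],
                  "organism": record["species"], "assay_type": "proteomics"}
    else:
        raise ValueError(f"no mapping for source {source!r}")
    mapped["source"] = source
    return {field: mapped.get(field) for field in COMMON_FIELDS}

geo = {"geo_accession": "GSE00001", "title": "Spaceflight vs ground control",
       "organism_ch1": "Mus musculus", "type": "Expression profiling by array"}
print(to_common_model(geo, "GEO"))
```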

  20. Benchmarking distributed data warehouse solutions for storing genomic variant information

    PubMed Central

    Wiewiórka, Marek S.; Wysakowicz, Dawid P.; Okoniewski, Michał J.

    2017-01-01

    Abstract Genomic-based personalized medicine encompasses storing, analysing and interpreting genomic variants as its central issues. At a time when thousands of patients' sequenced exomes and genomes are becoming available, there is a growing need for efficient database storage and querying. The answer could be the application of modern distributed storage systems and query engines. However, the application of such systems to large genomic variant databases has not yet been sufficiently explored in the literature. To investigate the effectiveness of modern columnar storage [column-oriented Database Management System (DBMS)] and query engines, we have developed a prototypic genomic variant data warehouse, populated with large generated volumes of genomic variants and phenotypic data. Next, we benchmarked the performance of a number of combinations of distributed storage formats and query engines on a set of SQL queries that address biological questions essential for both research and medical applications. In addition, a non-distributed, analytical database (MonetDB) has been used as a baseline. Comparison of query execution times confirms that distributed data warehousing solutions outperform classic relational DBMSs. Moreover, pre-aggregation and further denormalization of data, which reduce the number of distributed join operations, improve query performance by several orders of magnitude. Most of the distributed back-ends offer good performance for complex analytical queries, while the Optimized Row Columnar (ORC) format paired with Presto and Parquet with Spark 2 provide, on average, the lowest execution times. Apache Kudu, on the other hand, is the only solution that guarantees sub-second performance for simple genome range queries returning a small subset of data, where low-latency response is expected, while still offering decent performance for analytical queries. In summary, research and clinical applications that require the storage and analysis of variants from thousands of samples can benefit from the scalability and performance of distributed data warehouse solutions. Database URL: https://github.com/ZSI-Bio/variantsdwh PMID:29220442
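    One of the benchmarked query patterns, a genome-range selection, can be sketched with Spark SQL over a Parquet-backed variant table as below; the file path, column names and coordinates are placeholders, not the schema or queries used in the benchmark.

```python
# Hedged sketch: a genome-range query over a Parquet variant table with Spark SQL.
# The path, columns and coordinates are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("variant-range-query").getOrCreate()
variants = spark.read.parquet("/data/variants.parquet")     # hypothetical location
variants.createOrReplaceTempView("variants")

hits = spark.sql("""
    SELECT sample_id, chrom, pos, ref, alt
    FROM variants
    WHERE chrom = '17' AND pos BETWEEN 41196312 AND 41277500
""")
hits.show(10)
```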

  1. Food Prices and Climate Extremes: A Model of Global Grain Price Variability with Storage

    NASA Astrophysics Data System (ADS)

    Otto, C.; Schewe, J.; Frieler, K.

    2015-12-01

    Extreme climate events such as droughts, floods, or heat waves affect agricultural production in major cropping regions and therefore impact the world market prices of staple crops. In the last decade, crop prices exhibited two very prominent price peaks in 2007-2008 and 2010-2011, threatening food security especially for poorer countries that are net importers of grain. There is evidence that these spikes in grain prices were at least partly triggered by actual supply shortages and the expectation of bad harvests. However, the response of the market to supply shocks is nonlinear and depends on complex and interlinked processes such as warehousing, speculation, and trade policies. Quantifying the contributions of such different factors to short-term price variability remains difficult, not least because many existing models ignore the role of storage which becomes important on short timescales. This in turn impedes the assessment of future climate change impacts on food prices. Here, we present a simple model of annual world grain prices that integrates grain stocks into the supply and demand functions. This firstly allows us to model explicitly the effect of storage strategies on world market price, and thus, for the first time, to quantify the potential contribution of trade policies to price variability in a simple global framework. Driven only by reported production and by long-term demand trends of the past ca. 40 years, the model reproduces observed variations in both the global storage volume and price of wheat. We demonstrate how recent price peaks can be reproduced by accounting for documented changes in storage strategies and trade policies, contrasting and complementing previous explanations based on different mechanisms such as speculation. Secondly, we show how the integration of storage allows long-term projections of grain price variability under climate change, based on existing crop yield scenarios.
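    A minimal way to write down the storage component described above (symbols chosen here for illustration, not the paper's notation) is a stock balance together with a price set by inverse demand on the quantity released to the market:

```latex
S_{t+1} \;=\; S_t + Q_t - C_t, \qquad P_t \;=\; D^{-1}(C_t),
```

    where S_t is the carry-over stock, Q_t the harvest, C_t the quantity released for consumption, and D^{-1} the inverse demand function; the storage or trade policy determines C_t and hence how production shocks translate into price spikes.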

  2. Improving data management and dissemination in web based information systems by semantic enrichment of descriptive data aspects

    NASA Astrophysics Data System (ADS)

    Gebhardt, Steffen; Wehrmann, Thilo; Klinger, Verena; Schettler, Ingo; Huth, Juliane; Künzer, Claudia; Dech, Stefan

    2010-10-01

    The German-Vietnamese water-related information system for the Mekong Delta (WISDOM) project supports business processes in Integrated Water Resources Management in Vietnam. Multiple disciplines bring together earth- and ground-based observation themes, such as environmental monitoring, water management, demographics, economy, information technology, and infrastructural systems. This paper introduces the components of the web-based WISDOM system, including the data, logic and presentation tiers. It focuses on the data models upon which the database management system is built, including techniques for tagging or linking metadata with the stored information. The model also uses ordered groupings of spatial, thematic and temporal reference objects to semantically tag datasets to enable fast data retrieval, such as finding all data in a specific administrative unit belonging to a specific theme. A spatial database extension is used with the PostgreSQL database. This object-relational database was chosen over a purely relational one to tie spatial objects to tabular data, improving the retrieval of census and observational data at regional, provincial, and local levels. While the spatial database is less suited to processing raster data, a "work-around" was built into WISDOM to permit efficient management of both raster and vector data. The data model also incorporates styling aspects of the spatial datasets through styled layer descriptions (SLD) and web mapping service (WMS) layer specifications, allowing retrieval of rendered maps. Metadata elements of the spatial data are based on the ISO19115 standard. XML-structured information of the SLD and metadata is stored in an XML database. The data models and the data management system are robust for managing the large quantity of spatial objects, sensor observations, census and document data. The operational WISDOM information system prototype contains modules for data management, automatic data integration, and web services for data retrieval, analysis, and distribution. The graphical user interfaces facilitate metadata cataloguing, data warehousing, web sensor data analysis and thematic mapping.
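    The kind of "all data for an administrative unit and theme" retrieval described above can be sketched against a PostGIS-enabled PostgreSQL database as below. Table and column names are hypothetical, not the actual WISDOM schema; the snippet assumes psycopg2 is installed.

      # Hypothetical spatial-plus-thematic retrieval sketch using psycopg2 and
      # PostGIS; the schema ("dataset", "admin_unit") is illustrative only.
      import psycopg2

      conn = psycopg2.connect("dbname=wisdom user=reader")
      with conn, conn.cursor() as cur:
          cur.execute("""
              SELECT d.id, d.title, d.theme
              FROM dataset d
              JOIN admin_unit a ON ST_Intersects(d.geom, a.geom)
              WHERE a.name = %s AND d.theme = %s
          """, ("Can Tho", "water management"))
          for dataset_id, title, theme in cur.fetchall():
              print(dataset_id, title, theme)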

  3. Data modeling and processing in deregulated power system

    NASA Astrophysics Data System (ADS)

    Xu, Lin

    The introduction of open electricity markets and the fast pace of changes brought by modern information technology bring both opportunities and challenges to the power industry. Vast quantities of data are generated by the underlying physical system and the business operations. Fast and low-cost communications allow the data to be more widely accessed. For electric utilities, it is becoming clear that data and information are vital assets. Proper management and modeling of these assets is as essential to the engineering of the power system as is the underlying physical system. This dissertation introduces several new methods to address information modeling and data processing concerns in the new utility environment. Presently, legacy information systems in the industry do not make adequate use of the data produced. Hence, a new information infrastructure using data warehousing (a data integration technology used for decision support) is proposed for novel management and utilization of data. Detailed examples and discussion are given on schema building and extract, transform and load (ETL) strategies for power-system-specific data. The benefits of this approach are shown through a new viewpoint of state estimation. Inaccurate grid information, especially topology information, can be a major detriment to energy market traders' ability to make appropriate bids. A two-stage DC state estimation algorithm is presented to provide them with a simpler data viewpoint to make knowledgeable trading decisions. Numerical results show how the results of a DC state estimator can be accurately made available to all concerned. Additionally, the proposed communication and information infrastructure allows for new formulations and solutions to traditional power problems. In this vein, a new distributed communication model of the power system using the publisher/subscriber paradigm is presented and simulated. The simulation results prove its feasibility and show it has adequate performance under today's communication technology. Based on this model, a new state estimation algorithm, which decentralizes computations and minimizes communication overhead, is derived using a set of overlapping areas to cover the entire network. Numerical experiments show that it is efficient, robust, and has accuracy comparable to conventional full-network state estimation.
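    For readers unfamiliar with DC state estimation, the sketch below shows the plain weighted-least-squares formulation that such algorithms build on; it is not the two-stage or distributed algorithm described above, and the three-bus network data are invented.

      # Textbook weighted-least-squares DC state estimation sketch (illustrative
      # only; not the dissertation's two-stage or distributed algorithm).
      import numpy as np

      # Measurement model: z = H * theta + noise, theta = bus voltage angles,
      # with bus 0 as the slack/reference (H has columns for buses 1 and 2 only).
      H = np.array([
          [-10.0,  0.0],   # P(0->1) = B01 * (theta0 - theta1), theta0 = 0
          [  0.0, -5.0],   # P(0->2) = B02 * (theta0 - theta2)
          [  8.0, -8.0],   # P(1->2) = B12 * (theta1 - theta2)
      ])
      z = np.array([0.52, 0.31, 0.07])      # measured flows (p.u., invented)
      W = np.diag([1e4, 1e4, 1e2])          # measurement weights (1/variance)

      # Solve the normal equations (H^T W H) theta = H^T W z.
      theta = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
      print("estimated angles (rad):", theta)
      print("estimated flows (p.u.):", H @ theta)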

  4. An archiving system for Planetary Mapping Data - Availability of derived information and knowledge in Planetary Science!

    NASA Astrophysics Data System (ADS)

    Nass, A.

    2017-12-01

    Since the late 1950s, a huge number of planetary missions have started to explore our solar system. The data resulting from this robotic exploration and remote sensing vary in data type, resolution and target. After data preprocessing and referencing, the released data are available for the community on different portals and archiving systems, e.g. PDS or PSA. One major usage for these data is mapping, i.e. the extraction and filtering of information by combining and visualizing different kinds of base data. Mapping itself is conducted either for mission planning (e.g. identification of landing sites) or fundamental research (e.g. reconstruction of surfaces). The mapping results for mission planning are directly managed within the mission teams. The derived data for fundamental research - also describable as maps, diagrams, or analysis results - are mainly project-based and exclusively available in scientific papers. Within the last years, first steps have been taken to ensure a sustainable use of these derived data by finding an archiving system comparable to the data portals, i.e. reusable, well-documented, and sustainable. For the implementation, three tasks are essential, two of which have been addressed in the past: (1) Comparability and interoperability have been made possible by standardized recommendations for visual, textual, and structural description of mapping data. (2) Interoperability between users, information systems and graphic systems is made possible by templates and guidelines for digital GIS-based mapping. These two steps have been applied, e.g., within recent mapping projects for the Dawn mission. The third task hasn't been implemented thus far: establishing an easily discoverable and accessible platform that holds already acquired information and published mapping results for future investigations or mapping projects. An archive like this would support the scientific community significantly by a constant rise of knowledge and understanding, building on recent discussions within Information Science and Management, and Data Warehousing. This contribution describes the necessary map archive components that have to be considered for an efficient establishment and user-oriented accessibility. It will be described how already existing developments could be used, and which components still have to be developed.

  5. Perceptions of a HIV testing message targeted for at-risk adults with low functional health literacy

    NASA Astrophysics Data System (ADS)

    Hunter, Susan L.

    This study analyses warehoused data collected by Georgia State University and Centers for Disease Control and Prevention (GSU/CDC) researchers after developing an HIV testing message for urban adults with low functional health literacy. It expands previous work by examining data collected when 202 primarily African-American homeless clients of an urban community-based organization (CBO) reviewed both the low-literacy brochure (Wallace et al., 2006) and a standard HIV brochure (Georgia Department of Human Resources, 1997). Participants' health literacy was assessed using 2 measures: the Rapid Estimate of Adult Literacy in Medicine or REALM (Davis, Crouch, Long & Green) and the Test of Functional Health Literacy Assessment or TOFHLA (Nurss, Parker & Baker, 2001). HIV risk was determined using an interview questionnaire developed by the research group (Belcher, Deming, Hunter & Wallace, 2005) which allowed participants to self-report recent alcohol and drug use, sexual behavior, sexually transmitted disease (STD) history and exposure to abuse and sexual coercion. Open-ended response questions regarding readability, understanding, main message, and importance for each brochure provided the qualitative data. This analysis confirms previous work showing accessibility, readability, cultural sensitivity and user-friendly formatting are important when attempting to engage at-risk adults with varying levels of functional health literacy in an HIV testing message. The visual aspects of the brochure can be essential in capturing the reader's attention and should be relevant to the target audience (Wallace, Deming, Hunter, Belcher & Choi, 2006). Mono-colored graphics may be perceived as dated and irrelevant or, worse yet, threatening to some readers. Whenever possible, culturally appropriate color photos of people depicting relevant content should replace excess text, and difficult medical terms should be eliminated. Wording on the cover and within the brochure should be used to focus the reader on a single main message. These data also show that many participants considered the quantity of information just as important. For reasons not elucidated here, many respondents equated quantity of information with message quality. Based on these results, it is important to further clarify how much information is enough to maintain legitimacy and the reader's attention while simultaneously avoiding confusing mixed messages.

  6. Work-related fatal motor vehicle traffic crashes: Matching of 2010 data from the Census of Fatal Occupational Injuries and the Fatality Analysis Reporting System.

    PubMed

    Byler, Christen; Kesy, Laura; Richardson, Scott; Pratt, Stephanie G; Rodríguez-Acosta, Rosa L

    2016-07-01

    Motor vehicle traffic crashes (MVTCs) remain the leading cause of work-related fatal injuries in the United States, with crashes on public roadways accounting for 25% of all work-related deaths in 2012. In the United States, the Bureau of Labor Statistics (BLS) Census of Fatal Occupational Injuries (CFOI) provides accurate counts of fatal work injuries based on confirmation of work relationship from multiple sources, while the National Highway Traffic Safety Administration (NHTSA) Fatality Analysis Reporting System (FARS) provides detailed data on fatal MVTCs based on police reports. Characterization of fatal work-related MVTCs is currently limited by data sources that lack either data on potential risk factors (CFOI) or work-relatedness confirmation and employment characteristics (FARS). BLS and the National Institute for Occupational Safety and Health (NIOSH) collaborated to analyze a merged data file created by BLS using CFOI and FARS data. A matching algorithm was created to link 2010 data from CFOI and FARS using date of incident and other case characteristics, allowing for flexibility in variables to address coding discrepancies. Using the matching algorithm, 953 of the 1044 CFOI "Highway" cases (91%) for 2010 were successfully linked to FARS. Further analysis revealed systematic differences between cases identified as work-related by both systems and by CFOI alone. Among cases identified as work-related by CFOI alone, the fatally-injured worker was considerably more likely to have been employed outside the transportation and warehousing industry or transportation-related occupations, and to have been the occupant of a vehicle other than a heavy truck. This study is the first step of a collaboration between BLS, NHTSA, and NIOSH to improve the completeness and quality of data on fatal work-related MVTCs. It has demonstrated the feasibility and value of matching data on fatal work-related traffic crashes from CFOI and FARS. The results will lead to improvements in CFOI and FARS case capture, while also providing researchers with a better description of fatal work-related MVTCs than would be available from the two data sources separately. Copyright © 2016. Published by Elsevier Ltd.
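    A simplified sketch of the kind of date-plus-characteristics matching described above is shown below; the field names, tolerance rule and records are illustrative assumptions, not the BLS/NIOSH matching algorithm.

      # Illustrative record-linkage sketch: match fatality records from two files
      # on date of incident plus other case characteristics, with some tolerance
      # to absorb coding discrepancies. Fields and rules are assumptions, not the
      # actual CFOI/FARS algorithm.
      from datetime import date

      cfoi = [{"id": "C1", "date": date(2010, 3, 14), "state": "TX", "age": 45, "sex": "M"}]
      fars = [{"id": "F9", "date": date(2010, 3, 14), "state": "TX", "age": 44, "sex": "M"}]

      def is_match(a, b, age_tolerance=1):
          """Exact match on date, state and sex; small tolerance on age."""
          return (a["date"] == b["date"] and a["state"] == b["state"]
                  and a["sex"] == b["sex"]
                  and abs(a["age"] - b["age"]) <= age_tolerance)

      links = [(a["id"], b["id"]) for a in cfoi for b in fars if is_match(a, b)]
      print(links)   # [('C1', 'F9')]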

  7. Challenges and strategies for implementing genomic services in diverse settings: experiences from the Implementing GeNomics In pracTicE (IGNITE) network.

    PubMed

    Sperber, Nina R; Carpenter, Janet S; Cavallari, Larisa H; J Damschroder, Laura; Cooper-DeHoff, Rhonda M; Denny, Joshua C; Ginsburg, Geoffrey S; Guan, Yue; Horowitz, Carol R; Levy, Kenneth D; Levy, Mia A; Madden, Ebony B; Matheny, Michael E; Pollin, Toni I; Pratt, Victoria M; Rosenman, Marc; Voils, Corrine I; W Weitzel, Kristen; Wilke, Russell A; Ryanne Wu, R; Orlando, Lori A

    2017-05-22

    To realize potential public health benefits from genetic and genomic innovations, understanding how best to implement the innovations into clinical care is important. The objective of this study was to synthesize data on challenges identified by six diverse projects that are part of a National Human Genome Research Institute (NHGRI)-funded network focused on implementing genomics into practice and strategies to overcome these challenges. We used a multiple-case study approach with each project considered as a case and qualitative methods to elicit and describe themes related to implementation challenges and strategies. We describe challenges and strategies in an implementation framework and typology to enable consistent definitions and cross-case comparisons. Strategies were linked to challenges based on expert review and shared themes. Three challenges were identified by all six projects, and strategies to address these challenges varied across the projects. One common challenge was to increase the relative priority of integrating genomics within the health system electronic health record (EHR). Four projects used data warehousing techniques to accomplish the integration. The second common challenge was to strengthen clinicians' knowledge and beliefs about genomic medicine. To overcome this challenge, all projects developed educational materials and conducted meetings and outreach focused on genomic education for clinicians. The third challenge was engaging patients in the genomic medicine projects. Strategies to overcome this challenge included use of mass media to spread the word, actively involving patients in implementation (e.g., a patient advisory board), and preparing patients to be active participants in their healthcare decisions. This is the first collaborative evaluation focusing on the description of genomic medicine innovations implemented in multiple real-world clinical settings. Findings suggest that strategies to facilitate integration of genomic data within existing EHRs and educate stakeholders about the value of genomic services are considered important for effective implementation. Future work could build on these findings to evaluate which strategies are optimal under what conditions. This information will be useful for guiding translation of discoveries to clinical care, which, in turn, can provide data to inform continual improvement of genomic innovations and their applications.

  8. T3SEdb: data warehousing of virulence effectors secreted by the bacterial Type III Secretion System.

    PubMed

    Tay, Daniel Ming Ming; Govindarajan, Kunde Ramamoorthy; Khan, Asif M; Ong, Terenze Yao Rui; Samad, Hanif M; Soh, Wei Wei; Tong, Minyan; Zhang, Fan; Tan, Tin Wee

    2010-10-15

    Effectors of the Type III Secretion System (T3SS) play a pivotal role in establishing and maintaining pathogenicity in the host, and therefore the identification of these effectors is important in understanding virulence. However, the effectors display a high level of sequence diversity, making their identification a difficult process. There is a need to collate and annotate existing effector sequences in public databases to enable systematic analyses of these sequences for development of models for screening and selection of putative novel effectors from bacterial genomes that can be validated by a smaller number of key experiments. Herein, we present T3SEdb http://effectors.bic.nus.edu.sg/T3SEdb, a specialized database of annotated T3SS effector (T3SE) sequences containing 1089 records from 46 bacterial species compiled from the literature and public protein databases. Procedures have been defined for i) comprehensive annotation of the experimental status of effectors, ii) submission and curation review of records by users of the database, and iii) the regular update of existing and new T3SEdb records. Keyword fielded and sequence searches (BLAST, regular expression) are supported for both experimentally verified and hypothetical T3SEs. More than 171 clusters of T3SEs were detected based on sequence identity comparisons (intra-cluster difference up to ~60%). Owing to this high level of sequence diversity of T3SEs, the T3SEdb provides a large number of experimentally known effector sequences with wide species representation for creation of effector predictors. We created a reliable effector prediction tool, integrated into the database, to demonstrate the application of the database for such endeavours. T3SEdb is the first specialised database reported for T3SS effectors, enriched with manual annotations that facilitated systematic construction of a reliable prediction model for identification of novel effectors. The T3SEdb represents a platform for inclusion of additional annotations of metadata for future developments of sophisticated effector prediction models for screening and selection of putative novel effectors from bacterial genomes/proteomes that can be validated by a small number of key experiments.
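    Identity-based clustering of the kind mentioned above can be illustrated with the greedy toy sketch below; the naive identity function, threshold and sequences are simplified assumptions, not the T3SEdb clustering pipeline.

      # Toy greedy clustering of sequences by pairwise identity (illustrative
      # only). Identity is computed naively over aligned positions.
      def identity(a, b):
          n = min(len(a), len(b))
          matches = sum(1 for x, y in zip(a[:n], b[:n]) if x == y)
          return matches / max(len(a), len(b))

      def cluster(sequences, threshold=0.40):
          """Assign each sequence to the first cluster whose representative it
          matches at >= threshold identity, else start a new cluster."""
          clusters = []   # list of (representative, members)
          for seq in sequences:
              for rep, members in clusters:
                  if identity(seq, rep) >= threshold:
                      members.append(seq)
                      break
              else:
                  clusters.append((seq, [seq]))
          return clusters

      seqs = ["MKLTSA", "MKLTQA", "GGHRRD", "MELTSA"]
      for rep, members in cluster(seqs):
          print(rep, members)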

  9. Quality assurance testing of acoustic doppler current profiler transform matrices

    USGS Publications Warehouse

    Armstrong, Brandy; Fulford, Janice M.; Thibodeaux, Kirk G.

    2015-01-01

    The U.S. Geological Survey (USGS) Hydrologic Instrumentation Facility (HIF) is nationally responsible for the design, testing, evaluation, repair, calibration, warehousing, and distribution of hydrologic instrumentation in use within the USGS Water Mission Area (WMA). The HIF's Hydraulic Laboratory has begun routine quality assurance (QA) testing and documenting the performance of every USGS WMA acoustic Doppler current profiler (ADCP) used for making velocity and discharge measurements. All existing ADCPs are being registered and tracked in a database maintained by the HIF, and are called in for QA checks in the HIF's Hydraulic Laboratory on a 3-year cycle. All new ADCPs purchased directly from the manufacturer, as well as ADCPs sent to the HIF or the manufacturer for repair, are being registered and tracked in the database and QA checked in the laboratory before being placed into service. Meters failing the QA check are sent directly to the manufacturer for repairs and rechecked by the HIF, or removed from service. Although this QA program is specific to the SonTek and Teledyne RD Instruments ADCPs most commonly used within the WMA, it is the intent of the USGS Office of Surface Water and the HIF to expand this program to include all bottom-tracking ADCPs as they become available and more widely used throughout the WMA. As part of the HIF QA process, instruments are inspected for physical damage, the instrument must pass the ADCP diagnostic self-check tests, the temperature probe must be within ± 2 degrees Celsius of a National Institute of Standards and Technology traceable reference thermometer, and the distance made good over a fixed distance must meet the manufacturer's specifications (+/-0.25% or +/-1% difference). The transform matrix is tested by conducting distance-made-good (DMG) tests comparing the straight-line distance from bottom tracking to the measured tow-track distance. The DMG test is conducted on each instrument twice in the forward and reverse directions (4 tows) at four orientations (16 total tows): with beam 1 oriented 0 degrees to the towing direction; turned 45 degrees to the towing direction; turned 90 degrees to the towing direction; and turned 135 degrees to the towing direction. All QA data files and summary results are archived. This paper documents methodology, participation and preliminary results of WMA ADCP QA testing.
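    The percent-difference acceptance check implied by the tolerances above can be sketched as follows; the distances in the example are hypothetical.

      # Illustrative distance-made-good (DMG) check: compare the bottom-track
      # straight-line distance with the measured tow-track distance and test the
      # percent difference against a tolerance (values here are hypothetical).
      def dmg_percent_difference(bottom_track_m, tow_track_m):
          return 100.0 * (bottom_track_m - tow_track_m) / tow_track_m

      def passes(bottom_track_m, tow_track_m, tolerance_pct):
          return abs(dmg_percent_difference(bottom_track_m, tow_track_m)) <= tolerance_pct

      # Example tow: 100.00 m tow track vs 99.82 m bottom track, 0.25% tolerance.
      diff = dmg_percent_difference(99.82, 100.00)
      print(f"difference = {diff:.2f}%, pass = {passes(99.82, 100.00, 0.25)}")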

  10. inTB - a data integration platform for molecular and clinical epidemiological analysis of tuberculosis

    PubMed Central

    2013-01-01

    Background Tuberculosis is currently the second highest cause of death from infectious diseases worldwide. The emergence of multi- and extensive drug resistance is threatening to make tuberculosis incurable. There is growing evidence that the genetic diversity of Mycobacterium tuberculosis may have important clinical consequences. Therefore, combining genetic, clinical and socio-demographic data is critical to understand the epidemiology of this infectious disease, and how virulence and other phenotypic traits evolve over time. This requires dedicated bioinformatics platforms, capable of integrating and enabling analyses of this heterogeneous data. Results We developed inTB, a web-based system for integrated warehousing and analysis of clinical, socio-demographic and molecular data for Mycobacterium sp. isolates. As a database it can organize and display data from any of the standard genotyping methods (SNP, MIRU-VNTR, RFLP and spoligotype), as well as an extensive array of clinical and socio-demographic variables that are used in multiple countries to characterize the disease. Through the inTB interface it is possible to insert and download data, browse the database and search specific parameters. New isolates are automatically classified into strains according to an internal reference, and data uploaded or typed in is checked for internal consistency. As an analysis framework, the system provides simple, point-and-click analysis tools that allow multiple types of data plotting, as well as simple ways to download data for external analysis. Individual trees for each genotyping method are available, as well as a super tree combining all of them. The integrative nature of inTB grants the user the ability to generate trees for filtered subsets of data crossing molecular and clinical/socio-demographic information. inTB is built on open source software, can be easily installed locally and easily adapted to other diseases. Its design allows for use by research laboratories, hospitals or public health authorities. The full source code as well as ready-to-use packages are available at http://www.evocell.org/inTB. Conclusions To the best of our knowledge, this is the only system capable of integrating different types of molecular data with clinical and socio-demographic data, empowering researchers and clinicians with easy-to-use analysis tools that were not possible before. PMID:24001185

  11. Breeding and Genetics Symposium: really big data: processing and analysis of very large data sets.

    PubMed

    Cole, J B; Newman, S; Foertter, F; Aguilar, I; Coffey, M

    2012-03-01

    Modern animal breeding data sets are large and getting larger, due in part to recent availability of high-density SNP arrays and cheap sequencing technology. High-performance computing methods for efficient data warehousing and analysis are under development. Financial and security considerations are important when using shared clusters. Sound software engineering practices are needed, and it is better to use existing solutions when possible. Storage requirements for genotypes are modest, although full-sequence data will require greater storage capacity. Storage requirements for intermediate and results files for genetic evaluations are much greater, particularly when multiple runs must be stored for research and validation studies. The greatest gains in accuracy from genomic selection have been realized for traits of low heritability, and there is increasing interest in new health and management traits. The collection of sufficient phenotypes to produce accurate evaluations may take many years, and high-reliability proofs for older bulls are needed to estimate marker effects. Data mining algorithms applied to large data sets may help identify unexpected relationships in the data, and improved visualization tools will provide insights. Genomic selection using large data requires a lot of computing power, particularly when large fractions of the population are genotyped. Theoretical improvements have made possible the inversion of large numerator relationship matrices, permitted the solving of large systems of equations, and produced fast algorithms for variance component estimation. Recent work shows that single-step approaches combining BLUP with a genomic relationship (G) matrix have similar computational requirements to traditional BLUP, and the limiting factor is the construction and inversion of G for many genotypes. A naïve algorithm for creating G for 14,000 individuals required almost 24 h to run, but custom libraries and parallel computing reduced that to 15 m. Large data sets also create challenges for the delivery of genetic evaluations that must be overcome in a way that does not disrupt the transition from conventional to genomic evaluations. Processing time is important, especially as real-time systems for on-farm decisions are developed. The ultimate value of these systems is to decrease time-to-results in research, increase accuracy in genomic evaluations, and accelerate rates of genetic improvement.
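    As an illustration of the G-matrix construction whose computational cost is discussed above, the small numpy sketch below follows VanRaden's first method. The genotype matrix is tiny and invented; this is not the custom library referred to in the text, where tens of thousands of genotyped individuals drive the cost.

      # Small sketch of a genomic relationship matrix (VanRaden method 1):
      # G = Z Z' / (2 * sum(p_j * (1 - p_j))), where Z is the allele-count matrix
      # centred by twice the allele frequencies. The 4 x 3 genotype matrix below
      # is invented for illustration.
      import numpy as np

      M = np.array([[0, 1, 2],      # allele counts (0/1/2) for 4 individuals
                    [1, 1, 0],      # at 3 SNP markers
                    [2, 2, 1],
                    [0, 1, 1]], dtype=float)

      p = M.mean(axis=0) / 2.0               # estimated allele frequencies per SNP
      Z = M - 2.0 * p                        # centre by expected allele count
      denom = 2.0 * np.sum(p * (1.0 - p))    # scale to the base-population scale
      G = Z @ Z.T / denom

      print(np.round(G, 3))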

  12. MiMiR – an integrated platform for microarray data sharing, mining and analysis

    PubMed Central

    Tomlinson, Chris; Thimma, Manjula; Alexandrakis, Stelios; Castillo, Tito; Dennis, Jayne L; Brooks, Anthony; Bradley, Thomas; Turnbull, Carly; Blaveri, Ekaterini; Barton, Geraint; Chiba, Norie; Maratou, Klio; Soutter, Pat; Aitman, Tim; Game, Laurence

    2008-01-01

    Background Despite considerable efforts within the microarray community for standardising data format, content and description, microarray technologies present major challenges in managing, sharing, analysing and re-using the large amount of data generated locally or internationally. Additionally, it is recognised that inconsistent and low quality experimental annotation in public data repositories significantly compromises the re-use of microarray data for meta-analysis. MiMiR, the Microarray data Mining Resource was designed to tackle some of these limitations and challenges. Here we present new software components and enhancements to the original infrastructure that increase accessibility, utility and opportunities for large scale mining of experimental and clinical data. Results A user friendly Online Annotation Tool allows researchers to submit detailed experimental information via the web at the time of data generation rather than at the time of publication. This ensures the easy access and high accuracy of meta-data collected. Experiments are programmatically built in the MiMiR database from the submitted information and details are systematically curated and further annotated by a team of trained annotators using a new Curation and Annotation Tool. Clinical information can be annotated and coded with a clinical Data Mapping Tool within an appropriate ethical framework. Users can visualise experimental annotation, assess data quality, download and share data via a web-based experiment browser called MiMiR Online. All requests to access data in MiMiR are routed through a sophisticated middleware security layer thereby allowing secure data access and sharing amongst MiMiR registered users prior to publication. Data in MiMiR can be mined and analysed using the integrated EMAAS open source analysis web portal or via export of data and meta-data into Rosetta Resolver data analysis package. Conclusion The new MiMiR suite of software enables systematic and effective capture of extensive experimental and clinical information with the highest MIAME score, and secure data sharing prior to publication. MiMiR currently contains more than 150 experiments corresponding to over 3000 hybridisations and supports the Microarray Centre's large microarray user community and two international consortia. The MiMiR flexible and scalable hardware and software architecture enables secure warehousing of thousands of datasets, including clinical studies, from microarray and potentially other -omics technologies. PMID:18801157

  13. MiMiR--an integrated platform for microarray data sharing, mining and analysis.

    PubMed

    Tomlinson, Chris; Thimma, Manjula; Alexandrakis, Stelios; Castillo, Tito; Dennis, Jayne L; Brooks, Anthony; Bradley, Thomas; Turnbull, Carly; Blaveri, Ekaterini; Barton, Geraint; Chiba, Norie; Maratou, Klio; Soutter, Pat; Aitman, Tim; Game, Laurence

    2008-09-18

    Despite considerable efforts within the microarray community for standardising data format, content and description, microarray technologies present major challenges in managing, sharing, analysing and re-using the large amount of data generated locally or internationally. Additionally, it is recognised that inconsistent and low quality experimental annotation in public data repositories significantly compromises the re-use of microarray data for meta-analysis. MiMiR, the Microarray data Mining Resource was designed to tackle some of these limitations and challenges. Here we present new software components and enhancements to the original infrastructure that increase accessibility, utility and opportunities for large scale mining of experimental and clinical data. A user friendly Online Annotation Tool allows researchers to submit detailed experimental information via the web at the time of data generation rather than at the time of publication. This ensures the easy access and high accuracy of meta-data collected. Experiments are programmatically built in the MiMiR database from the submitted information and details are systematically curated and further annotated by a team of trained annotators using a new Curation and Annotation Tool. Clinical information can be annotated and coded with a clinical Data Mapping Tool within an appropriate ethical framework. Users can visualise experimental annotation, assess data quality, download and share data via a web-based experiment browser called MiMiR Online. All requests to access data in MiMiR are routed through a sophisticated middleware security layer thereby allowing secure data access and sharing amongst MiMiR registered users prior to publication. Data in MiMiR can be mined and analysed using the integrated EMAAS open source analysis web portal or via export of data and meta-data into Rosetta Resolver data analysis package. The new MiMiR suite of software enables systematic and effective capture of extensive experimental and clinical information with the highest MIAME score, and secure data sharing prior to publication. MiMiR currently contains more than 150 experiments corresponding to over 3000 hybridisations and supports the Microarray Centre's large microarray user community and two international consortia. The MiMiR flexible and scalable hardware and software architecture enables secure warehousing of thousands of datasets, including clinical studies, from microarray and potentially other -omics technologies.

  14. The Prevalence of Exposure to Workplace Secondhand Smoke in the United States: 2010 to 2015.

    PubMed

    Dai, Hongying; Hao, Jianqiang

    2017-11-01

    To compare changes in exposure to workplace secondhand smoke (SHS) by industry of employment and occupation from 2010 to 2015. Data were collected from the 2010 and 2015 National Health Interview Survey. Weighted estimates of the prevalence of exposure to workplace SHS among currently working nonsmokers in 2010 (n = 12 627) and 2015 (n = 16 399) were compared. The prevalence of exposure to workplace SHS among currently working nonsmokers was 10.0% in 2015 and 9.5% in 2010. Exposure to workplace SHS is disproportionally high among male workers, young workers, non-Hispanic blacks, Hispanics, workers with low education and low income, and workers residing in the Southern United States. Tobacco control policies have effectively reduced exposure to workplace SHS in a few white-collar and service job categories, but blue-collar workers continue to have a high prevalence of exposure to workplace SHS. From 2010 to 2015, "transportation and warehousing industries" had the largest increase in SHS exposure (13.3%-21.5%, p value = .004) and "arts, entertainment, and recreation industries" had the largest decline in prevalence of exposure to SHS (20.1%-11.5%, p value = .01). In the multivariate analysis, workers with service (aOR = 1.4, p < .0001) and blue-collar occupations (aOR = 2.5, p < .0001) had a significantly higher prevalence of exposure to workplace SHS than those with white-collar occupations. Disparities of SHS exposure by industry, occupation, and social demographic class continue to exist. Blue-collar workers, especially those working in "transportation and construction industries," along with young workers and workers in high-risk social classes, are priority groups for future workplace SHS prevention. An estimated 12.6 million working nonsmokers were regularly exposed to SHS at work in 2015. We compared the changes in prevalence of exposure to workplace SHS from 2010 to 2015 by social demographic class, industry of employment and occupation. Our findings could help inform policymakers and health practitioners in establishing stronger smoke-free air laws and conducting education campaigns to reduce the exposure to workplace SHS, especially among certain industries and occupations with a disproportionally high prevalence of exposure to workplace SHS. © The Author 2016. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  15. Cost/CYP: a bottom line that helps keep CSM projects cost-efficient.

    PubMed

    1985-01-01

    In contraceptive social marketing (CSM), the objective is social good, but project managers also need to run a tight ship, trimming costs, allocating scarce funds, and monitoring their program's progress. One way CSM managers remain cost-conscious is through the concept of couple-years-of-protection (CYP). Devised 2 decades ago as an administrative tool to compare the effects of different contraceptive methods, CYP's uses have multiplied to include assessing program output and cost effectiveness. Some of the factors affecting cost/CYP are a project's age, sales volume, management efficiency, and product prices and line. These factors are interconnected. The cost/CYP figures given here do not include outlays for commodities. While the Agency for International Development's commodity costs alter slightly with each new purchase contract, the agency reports that a condom costs about 4 cents (US), an oral contraceptive (OC) cycle about 12 cents, and a spermicidal tablet about 7 cents. CSM projects have relatively high start-up costs. Within a project's first 2 years, expenses must cover such marketing activities as research, packaging, warehousing, and heavy promotion. As a project ages, sales should grow, producing revenues that gradually amortize these initial costs. The Nepal CSM project provides an example of how cost/CYP can improve as a program ages. In 1978, the year sales began, the project's cost/CYP was about $84. For some time the project struggled to get its products to its target market, but it gradually overcame several major hurdles. The acquisition of jeeps eased distribution, and adding another condom brand increased sales still more, bringing the cost/CYP down to $8.30 in 1981. With further sales increases and resulting revenues, the cost/CYP dropped to just over $7 in 1983. When the sales volume becomes large enough, CSM projects can achieve economies of scale, which greatly improves cost-efficiency. Fixed costs shrink as a proportion of total expenditures. Good project management goes hand-in-hand with increasing sales. Cost/CYP is a powerful tool, but some project strategies alter its meaning. Some projects have lowered net costs by selling products at high prices. This dilutes the social marketing credo of getting low-cost products to those in need. When this occurs, cost/CYP undergoes an identity crisis, for it no longer measures a purely social objective.
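    To make the arithmetic concrete, the small sketch below shows a cost-per-CYP calculation; the conversion factors, sales figures and program cost are hypothetical placeholders, not the figures from the projects described.

      # Illustrative cost-per-CYP calculation. The conversion factors (units of a
      # method assumed to provide one couple-year of protection) and the program
      # figures below are hypothetical placeholders, not the article's data.
      UNITS_PER_CYP = {"condom": 120, "oc_cycle": 13}   # assumed conversion factors

      def couple_years_of_protection(units_sold):
          return sum(units_sold[m] / UNITS_PER_CYP[m] for m in units_sold)

      def cost_per_cyp(program_cost, units_sold):
          return program_cost / couple_years_of_protection(units_sold)

      sales = {"condom": 600_000, "oc_cycle": 65_000}   # hypothetical annual sales
      print(round(cost_per_cyp(80_000.0, sales), 2))    # program cost / CYP delivered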

  16. inTB - a data integration platform for molecular and clinical epidemiological analysis of tuberculosis.

    PubMed

    Soares, Patrícia; Alves, Renato J; Abecasis, Ana B; Penha-Gonçalves, Carlos; Gomes, M Gabriela M; Pereira-Leal, José B

    2013-08-30

    Tuberculosis is currently the second highest cause of death from infectious diseases worldwide. The emergence of multi- and extensive drug resistance is threatening to make tuberculosis incurable. There is growing evidence that the genetic diversity of Mycobacterium tuberculosis may have important clinical consequences. Therefore, combining genetic, clinical and socio-demographic data is critical to understand the epidemiology of this infectious disease, and how virulence and other phenotypic traits evolve over time. This requires dedicated bioinformatics platforms, capable of integrating and enabling analyses of this heterogeneous data. We developed inTB, a web-based system for integrated warehousing and analysis of clinical, socio-demographic and molecular data for Mycobacterium sp. isolates. As a database it can organize and display data from any of the standard genotyping methods (SNP, MIRU-VNTR, RFLP and spoligotype), as well as an extensive array of clinical and socio-demographic variables that are used in multiple countries to characterize the disease. Through the inTB interface it is possible to insert and download data, browse the database and search specific parameters. New isolates are automatically classified into strains according to an internal reference, and data uploaded or typed in is checked for internal consistency. As an analysis framework, the system provides simple, point-and-click analysis tools that allow multiple types of data plotting, as well as simple ways to download data for external analysis. Individual trees for each genotyping method are available, as well as a super tree combining all of them. The integrative nature of inTB grants the user the ability to generate trees for filtered subsets of data crossing molecular and clinical/socio-demographic information. inTB is built on open source software, can be easily installed locally and easily adapted to other diseases. Its design allows for use by research laboratories, hospitals or public health authorities. The full source code as well as ready-to-use packages are available at http://www.evocell.org/inTB. To the best of our knowledge, this is the only system capable of integrating different types of molecular data with clinical and socio-demographic data, empowering researchers and clinicians with easy-to-use analysis tools that were not possible before.

  17. Supporting Research at NASA's Goddard Space Flight Center Through Focused Education and Outreach Programs

    NASA Astrophysics Data System (ADS)

    Ireton, F.; Closs, J.

    2003-12-01

    NASA research scientists work closely with Science Systems and Applications, Inc. (SSAI) personnel at Goddard Space Flight Center (GSFC) on a large variety of education and public outreach (E/PO) initiatives. This work includes assistance in conceptualizing E/PO plans, then carrying through in the development of materials, publication, cataloging, warehousing, and product distribution. For instance, outreach efforts on the Terra, Aqua, and Aura (still in development) EOS missions, as well as planetary and visualization programs, have been coordinated by SSAI employees. E/PO support includes convening and taking part in sessions at professional meetings and workshops. Also included is the coordination of exhibits at professional meetings such as the AGU, AAAS, and AMS, and educational meetings such as the National Science Teachers Association. Other E/PO efforts include the development and staffing of booths; arranging for booth space and furnishings; shipping of exhibition materials and products; and assembling, stocking, and disassembling of booths. E/PO personnel work with organizations external to NASA such as the Smithsonian museums, Library of Congress, U.S. Geological Survey, and associations or societies such as the AGU, American Chemical Society, and National Science Teachers Association to develop products and programs that enhance NASA mission E/PO efforts or to provide NASA information for use in their programs. At GSFC, E/PO personnel coordinate the efforts of the education and public outreach sub-committees in support of the Space and Earth Sciences Data Analysis (SESDA) contract within the GSFC Earth Sciences Directorate. The committee acts as a forum for improving communication and coordination among related Earth science education projects, and strives to unify the representation of these programs among the science and education communities. To facilitate these goals, a Goddard Earth Sciences Directorate Education and Outreach Portal has been developed to provide a repository and clearinghouse for upcoming education events, and a speaker's bureau. The committees are planning a series of workshops in the near future to expand participation, and further leverage respective Earth science education and outreach efforts through cooperative work with other NASA centers. Founded in 1977 as a minority, women-owned business, SSAI has a staff that includes a large and varied pool of scientists and E/PO employees covering a broad range of training and talents. SSAI provides support on a number of NASA-related projects at Goddard Space Flight Center (GSFC) in Greenbelt, Maryland, ranging from science research to data acquisition, storage, and distribution.

  18. The role of digital health in making progress toward Sustainable Development Goal (SDG) 3 in conflict-affected populations.

    PubMed

    Asi, Yara M; Williams, Cynthia

    2018-06-01

    The progress of the Millennium Development Goals (MDGs) shows that sustained global action can achieve success. Despite the unprecedented achievements in health and education, more than one billion people, many of them in conflict-affected areas, were unable to reap the benefits of the MDG gains. The recently developed Sustainable Development Goals (SDGs) are even more ambitious than their predecessor. SDG 3 prioritizes health and well-being for all ages in specific areas such as maternal mortality, communicable diseases, mental health, and the healthcare workforce. However, without a shift in the approach used for conflict-affected areas, the world's most vulnerable people risk being left behind in global development yet again. We must engage in meaningful discussions about employing innovative strategies to address health challenges in fragile, low-resource, and often remote settings. In this paper, we will argue that to meet the ambitious health goals of SDG 3, digital health can help to bridge healthcare gaps in conflict-affected areas. First, we describe the health needs of populations in conflict-affected environments, and how they overlap with the SDG 3 targets. Secondly, we discuss how digital health can address the unique needs of conflict-affected areas. Finally, we evaluate the various challenges in deploying digital technologies in fragile environments, and discuss potential policy solutions. Persons in conflict-affected areas may benefit from the diffusive nature of digital health tools. Innovations using cellular technology or cloud-based solutions overcome physical barriers. Additionally, many of the targets of SDG 3 could see significant progress if efficacious education and outreach efforts were supported, and digital health in the form of mHealth and telehealth offers a relatively low-resource platform for these initiatives. Lastly, lack of data collection, especially in conflict-affected or otherwise fragile states, was one of the primary limitations of the MDGs. Greater investment in data collection efforts, supported by digital health technologies, is necessary if SDG 3 targets are to be measured and progress assessed. Standardized EMR systems as well as context-specific data warehousing efforts will assist in collecting and managing accurate data. Stakeholders such as patients, providers, and NGOs must be proactive and collaborative in their efforts for continuous progress toward SDG 3. Digital health can assist in these inter-organizational communication efforts. The SDGs are complex, ambitious, and comprehensive; even in the most stable environments, achieving full completion towards every goal will be difficult, and in conflict-affected environments, this challenge is much greater. By engaging in a collaborative framework and using the appropriate digital health tools, we can support humanitarian efforts to realize sustained progress in SDG 3 outcomes. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Improvement of logistics education from the point of view environmental management

    NASA Astrophysics Data System (ADS)

    Bányai, Á.

    2009-04-01

    The paper briefly presents the influence of environmental management on the improvement of the logistics education and research structure of the Department of Materials Handling and Logistics at the University of Miskolc, Hungary. The logistics, as an integrated science offers a very good possibility to demonstrate the effect of new innovative knowledge on the migration of the priorities of education and research of sciences. The importance of logistics in the field of recycling (or in wider sense in the field of environmental management) can be justified by the high proportion of logistic costs (as investment and operation costs) and these costs show that optimum logistic solutions are able to decrease the financial outcomes and lead to the establishment of a profitable system. Technological change constantly creates new demands on both education and research. The most important objective of the department is to create a unique logistics education in the country. For this reason the department offered up-to-date integrated knowledge at all level: undergraduate, master degree and PhD education. The integration of logistics means traditionally the joint use of technology of material handling, method of material flow, technology method of traffic, information technology, management sciences, production technology, marketing, market research, technology of services, mathematics and optimization, communication technology, system engineering, electronics and automation, mechatronics [1, 3]. The education and research portfolio of the department followed this tradition till 1993. The new lectures in the field of sustainability (logistics of recycling, logistics of quality management and recycling, closed loop economy, EU logistics or global logistics) became more and more important in the logistics education. The results of fast developments in closed loop economy, recycling, waste management, environmental protection are more and more used in the industry and this effected a revolutionary change in the education and research structure of logistics [2]. The European Community policy in the environment sectors aims at a high level of protection. Four principles were defined: the precautionary principle, the principle that preventive action should be taken, that environmental damages should as a priority be rectified at source and that the polluter should pay. All of these four principles have a very strong logistics background, especially in the field of import/export operations, traffic/transportation, inventory control, materials handling, fleet operations, customer service, supply chain management, distribution, strategic planning, warehousing, information systems of logistics, purchasing. These facts effect the development of different topics of logistics in each field of the education of the department: collection logistics of used products (especially WEEE), optimization of collection systems, design and control of disassembly systems, distribution of fractions of disassembled used products, design and control of recycling parks, possibilities of virtual networks in the field of recycling logistics, integration of logistics, recycling and total quality management, identification systems and recycling, etc. Within the framework of different supports our department has the opportunity to take part in European networks and research projects in the field of sustainability, environmental protection, recycling and closed loop economy. 
One of the biggest networks was developed within the framework of a Brite-Euram project entitled 'Closing the loop from the product design to the end of life technologies'. The importance of logistics is certified by the fact that this network defined the milestones of the improvement of an economically beneficial closed loop economy as quality aspects, communication and marketing, logistics and qualification. Within the frame of this project the logistics work focused on the improvement of technologies (disassembly, reuse, refurbishment, remanufacturing and recycling), collection systems, the development of the concept for collection logistics and pre-disassembly, and a market survey in waste management. The Regional Knowledge Centre of Mechatronics and Logistics Systems was established in 2005. The overall objective of the Knowledge Centre is to develop knowledge-intensive mechatronics and logistics systems at the leading edge of the world and to integrate the results in the economy and society through utilising the knowledge. The realisation of the objective requires the establishment and operation of a networking system of relations between those involved in sciences, the economy and society. The knowledge centre is a "knowledge integration tool" of the university in the field of mechanical engineering, and plays an important part in the intensification of the integration of the philosophy of sustainability into the related sciences. The program of the knowledge centre is focused on three well definable strategic fields, which are the vertical elements of the model. These are the R&D programs: world of products, materials and technologies, and integrated systems. The programs cover the implementation of seven, internationally competitive, application-oriented part tasks. These seven part tasks and the sustainability are closely related. The realisation of the part tasks through networking offers considerable results and economical-ecological benefits, both for the participants and the region. The activities include basic and applied research, experimental development, technology transfer, as well as education and training and preparing the new scientific generation. The horizontal elements of the model are given by the utilisation of knowledge that can be interpreted in different dimensions: technical/engineering, legal, sustainability, economic, and social. The program relies on the continuation of existing relations in networks, and its regional nature is embodied in the cooperation of the higher education institutes and companies of the three counties. This publication was supported by the National Office for Research and Technology within the frame of the Pázmány Péter programme. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Office for Research and Technology. Literature: [1] J. Cselényi, Gy. Fischer, J. Murvai, B. Mang: Typical models of the recycling logistics of worn out product. Proceedings of XIV. International Conference on Material Handling and Warehousing in Belgrade, 1996. pp. 138-143. [2] R. Knoth, M. Hoffmann, B. Kopacek, P. Kopacek: A logistic concept to improve the re-usability of electric and electronic equipment, Electronics and the Environment, 2001. Proceedings of the 2001 IEEE International Symposium. 2001. pp. 115-118. [3] L. Cser, B. Mang: Cleaner Technologies and Recycling in Hungary. Proceedings of Int. Workshop on Environmental Conscious Manufacturing in Hertogenbosch, The Netherlands, 1997. pp. 48-56.

  20. A Decade’s Experience With Quality Improvement in Cardiac Surgery Using the Veterans Affairs and Society of Thoracic Surgeons National Databases

    PubMed Central

    Grover, Frederick L.; Shroyer, A. Laurie W.; Hammermeister, Karl; Edwards, Fred H.; Ferguson, T. Bruce; Dziuban, Stanley W.; Cleveland, Joseph C.; Clark, Richard E.; McDonald, Gerald

    2001-01-01

    Objective To review the Department of Veteran Affairs (VA) and the Society of Thoracic Surgeons (STS) national databases over the past 10 years to evaluate their relative similarities and differences, to appraise their use as quality improvement tools, and to assess their potential to facilitate improvements in quality of cardiac surgical care. Summary Background Data The VA developed a mandatory risk-adjusted database in 1987 to monitor outcomes of cardiac surgery at all VA medical centers. In 1989 the STS developed a voluntary risk-adjusted database to help members assess quality and outcomes in their individual programs and to facilitate improvements in quality of care. Methods A short data form on every veteran operated on at each VA medical center is completed and transmitted electronically for analysis of unadjusted and risk-adjusted death and complications, as well as length of stay. Masked, confidential semiannual reports are then distributed to each program’s clinical team and the associated administrator. These reports are also reviewed by a national quality oversight committee. Thus, VA data are used both locally for quality improvement and at the national level with quality surveillance. The STS dataset (217 core fields and 255 extended fields) is transmitted for each patient semiannually to the Duke Clinical Research Institute (DCRI) for warehousing, analysis, and distribution. Site-specific reports are produced with regional and national aggregate comparisons for unadjusted and adjusted surgical deaths and complications, as well as length of stay for coronary artery bypass grafting (CABG), valvular procedures, and valvular/CABG procedures. Both databases use the logistic regression modeling approach. Data for key processes of care are also captured in both databases. Research projects are frequently carried out using each database. Results More than 74,000 and 1.6 million cardiac surgical patients have been entered into the VA and STS databases, respectively. Risk factors that predict surgical death for CABG are very similar in the two databases, as are the odds ratios for most of the risk factors. One major difference is that the VA is 99% male, the STS 71% male. Both databases have shown a significant reduction in the risk-adjusted surgical death rate during the past decade despite the fact that patients have presented with an increased risk factor profile. The ratio of observed to expected deaths decreased from 1.05 to 0.9 for the VA and from 1.5 to 0.9 for the STS. Conclusion It appears that the routine feedback of risk-adjusted data on local performance provided by these programs heightens awareness and leads to self-examination and self-assessment, which in turn improves quality and outcomes. This general quality improvement template should be considered for application in other settings beyond cardiac surgery. PMID:11573040
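    The observed-to-expected ratio reported by both databases can be illustrated as below: expected deaths are the sum of per-patient predicted probabilities from a logistic risk model, and the ratio compares observed deaths to that sum. The coefficients and patient records are invented for the example, not fitted VA or STS models.

      # Illustrative observed-to-expected (O/E) mortality ratio from a logistic
      # risk model; coefficients and patient data are invented placeholders.
      import math

      def predicted_risk(intercept, coefs, risk_factors):
          """Logistic model: p = 1 / (1 + exp(-(intercept + sum(beta * x))))."""
          score = intercept + sum(coefs[k] * v for k, v in risk_factors.items())
          return 1.0 / (1.0 + math.exp(-score))

      coefs = {"age_over_75": 0.9, "prior_cabg": 1.1, "ef_below_35": 0.8}
      patients = [
          {"age_over_75": 1, "prior_cabg": 0, "ef_below_35": 1, "died": 0},
          {"age_over_75": 0, "prior_cabg": 1, "ef_below_35": 0, "died": 1},
          {"age_over_75": 0, "prior_cabg": 0, "ef_below_35": 0, "died": 0},
      ]

      expected = sum(predicted_risk(-3.2, coefs,
                                    {k: p[k] for k in coefs}) for p in patients)
      observed = sum(p["died"] for p in patients)
      print(f"O/E ratio = {observed / expected:.2f}")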

  1. Building a Cloud Infrastructure for a Virtual Environmental Observatory

    NASA Astrophysics Data System (ADS)

    El-khatib, Y.; Blair, G. S.; Gemmell, A. L.; Gurney, R. J.

    2012-12-01

    Environmental science is often fragmented: data is collected by different organizations using mismatched formats and conventions, and models are misaligned and run in isolation. Cloud computing offers a lot of potential in the way of resolving such issues by supporting data from different sources and at various scales, and integrating models to create more sophisticated and collaborative software services. The Environmental Virtual Observatory pilot (EVOp) project, funded by the UK Natural Environment Research Council, aims to demonstrate how cloud computing principles and technologies can be harnessed to develop more effective solutions to pressing environmental issues. The EVOp infrastructure is a tailored one constructed from resources in both private clouds (owned and managed by us) and public clouds (leased from third-party providers). All system assets are accessible via a uniform web service interface in order to enable versatile and transparent resource management, and to support fundamental infrastructure properties such as reliability and elasticity. The abstraction that this 'everything as a service' principle brings also supports mashups, i.e. combining different web services (such as models) and data resources of different origins (in situ gauging stations, warehoused data stores, external sources, etc.). We adopt the RESTful style of web services in order to draw a clear line between client and server (i.e. cloud host) and also to keep the server completely stateless. This significantly improves the scalability of the infrastructure and enables easy infrastructure management. For instance, tasks such as load balancing and failure recovery are greatly simplified without the need for techniques such as advance resource reservation or shared block devices. Upon this infrastructure, we developed a web portal composed of a bespoke collection of web-based visualization tools to help bring out relationships or patterns within the data. The portal was designed for use without any programming prerequisites by stakeholders from different backgrounds such as scientists, policy makers, local communities, and the general public. The development of the portal was carried out using an iterative behaviour-driven approach. We have developed six distinct storyboards to determine the requirements of different users. From these, we identified two storyboards to implement during the pilot phase. The first explores flooding at a local catchment scale for farmers and the public. We simulate hydrological interactions to determine where saturated land-surface areas develop. Model parameter values representing catchment characteristics could be specified either explicitly (for domain specialists) or indirectly using one of several predefined land use scenarios (for less familiar audiences). The second storyboard investigates diffuse agricultural pollution at a national level, with regulators as users. We study the flux of nitrogen (N) and phosphorus (P) from land to rivers and coastal regions at various scales of drainage and reporting units. This is particularly useful to uncover the impact of existing policy instruments or risk from future environmental changes on the levels of N and P flux.
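    The stateless RESTful pattern described above can be sketched with a minimal service endpoint: every request carries all the inputs it needs and nothing is kept on the server between calls, so any replica can answer the next request. The route, parameters and toy "model" below are hypothetical, not the EVOp API; the sketch assumes Flask is installed.

      # Minimal sketch of a stateless RESTful model-run endpoint (illustrative,
      # not the EVOp API). Requires Flask (pip install flask).
      from flask import Flask, jsonify, request

      app = Flask(__name__)

      @app.route("/models/runoff/run", methods=["POST"])
      def run_model():
          params = request.get_json(force=True)   # e.g. {"rainfall_mm": 40, "soil_storage_mm": 25}
          runoff = max(0.0, params["rainfall_mm"] - params["soil_storage_mm"])
          # Nothing is stored between calls, so any replica can serve the next request.
          return jsonify({"runoff_mm": runoff})

      if __name__ == "__main__":
          app.run(port=8080)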

  2. Community Exposure to Lahar Hazards from Mount Rainier, Washington

    USGS Publications Warehouse

    Wood, Nathan J.; Soulard, Christopher E.

    2009-01-01

    Geologic evidence of past events and inundation modeling of potential events suggest that lahars associated with Mount Rainier, Washington, are significant threats to downstream development. To mitigate potential impacts of future lahars and educate at-risk populations, officials need to understand how communities are vulnerable to these fast-moving debris flows and which individuals and communities may need assistance in preparing for and responding to an event. To support local risk-reduction planning for future Mount Rainier lahars, this study documents the variations among communities in King, Lewis, Pierce, and Thurston Counties in the amount and types of developed land, human populations, economic assets, and critical facilities in a lahar-hazard zone. The lahar-hazard zone in this study is based on the behavior of the Electron Mudflow, a lahar that traveled along the Puyallup River approximately 500 years ago and was due to a slope failure on the west flank of Mount Rainier. This lahar-hazard zone contains 78,049 residents, of whom 11 percent are more than 65 years of age, 21 percent do not live in cities or unincorporated towns, and 39 percent of the households are renter occupied. The lahar-hazard zone contains 59,678 employees (4 percent of the four-county labor force) at 3,890 businesses that generate $16 billion in annual sales (4 and 7 percent, respectively, of totals in the four-county area) and tax parcels with a combined total value of $8.8 billion (2 percent of the study-area total). Employees in the lahar-hazard zone are primarily in businesses related to manufacturing, retail trade, transportation and warehousing, wholesale trade, and construction. Key road and rail corridors for the region are in the lahar-hazard zone, which could result in significant indirect economic losses for businesses that rely on these networks, such as the Port of Tacoma. Although occupancy values are not known for each site, the lahar-hazard zone contains numerous dependent-population facilities (for example, schools and child day-care centers), public venues (for example, religious organizations and hotels), and critical facilities (for example, police and fire stations). The lahar-hazard zone also includes high-volume tourist sites, such as Mount Rainier National Park and the Puyallup Fairgrounds. Community exposure to lahars associated with Mount Rainier varies considerably among the 27 communities and four counties: some may experience great losses that represent only a small portion of their community, while others may experience relatively small losses that devastate them. Among the 27 communities, the City of Puyallup has the highest number of people and assets in the lahar-hazard zone, whereas the communities of Carbonado, Fife, Orting, and Sumner have the highest percentages of people and assets in this zone. Based on a composite index, the cities of Puyallup, Sumner, and Fife have the highest combinations of the number and percentage of people and assets in lahar-prone areas.
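The report ranks communities with a composite index that combines the number and the percentage of people and assets in the hazard zone, but the abstract does not give its formula. The sketch below assumes a simple min-max normalized average of the two measures, with made-up values, purely to illustrate how such an index could be computed.

```python
# Hypothetical composite exposure index (the report's exact formula is not given):
# average of the min-max normalized resident count and percentage in the zone.
communities = {
    # name: (residents_in_zone, percent_of_community_in_zone) -- made-up values
    "Puyallup": (12000, 35.0),
    "Orting":   (4000, 95.0),
    "Fife":     (5000, 90.0),
}

def normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

names = list(communities)
counts = normalize([communities[n][0] for n in names])
percents = normalize([communities[n][1] for n in names])
index = {n: (c + p) / 2 for n, c, p in zip(names, counts, percents)}

# Rank communities from highest to lowest combined exposure.
for name, score in sorted(index.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```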

  3. Integrated modeling of agricultural scenarios (IMAS) to support pesticide action plans: the case of the Coulonge drinking water catchment area (SW France).

    PubMed

    Vernier, Françoise; Leccia-Phelpin, Odile; Lescot, Jean-Marie; Minette, Sébastien; Miralles, André; Barberis, Delphine; Scordia, Charlotte; Kuentz-Simonet, Vanessa; Tonneau, Jean-Philippe

    2017-03-01

    Non-point source pollution is a cause of major concern within the European Union. This is reflected in increasing public and political focus on a more sustainable use of pesticides, as well as a reduction in diffuse pollution. Climate change will likely lead to an even more intensive use of pesticides in the future, affecting agriculture in many ways. At the same time, the Water Framework Directive (WFD) and associated EU policies called for a "good" ecological and chemical status to be achieved for water bodies by the end of 2015, a deadline currently delayed to 2021-2027 because of the limited effectiveness of policies and the long recovery timescales of hydrosystems, especially groundwater systems. Water managers need appropriate and user-friendly tools to design agro-environmental policies. These tools should help them to evaluate the potential impacts of mitigation measures on water resources, more clearly define protected areas, and more efficiently distribute financial incentives to farmers who agree to implement alternative practices. At present, a number of reports point out that water managers do not use appropriate information from monitoring or models to make decisions and set environmental action plans. In this paper, we propose an integrated and collaborative approach to analyzing changes in land use, farming systems, and practices and to assessing their effects on agricultural pressure and pesticide transfers to waters. The integrated modeling of agricultural scenarios (IMAS) framework draws on a range of data and expert knowledge available within areas where a pesticide action plan can be defined to restore water quality (French "Grenelle law" catchment areas, French Water Development and Management Plan areas, etc.). A so-called "reference scenario" represents the current land use and pesticide-spraying practices in both conventional and organic farming. A number of alternative scenarios are then defined in cooperation with stakeholders, including socio-economic conditions for developing alternative agricultural systems or targeting mitigation measures. Our integrated assessment of these scenarios combines the calculation of spatialized environmental indicators with integrated bio-economic modeling. The latter is achieved by a combined use of Soil and Water Assessment Tool (SWAT) modeling with our own purpose-built land use generator module (Generator of Land Use version 2 (GenLU2)) and an economic model developed using the General Algebraic Modeling System (GAMS) for cost-effectiveness assessment. This integrated approach is applied to two embedded catchment areas (total area of 360,000 ha) within the Charente river basin (SW France). Our results show that it is possible to differentiate scenarios based on their effectiveness, represented either by the evolution of pressure (agro-environmental indicators) or by transfers to waters (pesticide concentrations). By analyzing the implementation costs borne by farmers, it is possible to identify the most cost-effective scenarios at sub-basin and other aggregated levels (WFD hydrological entities, sensitive areas). Relevant results and indicators are fed into a specifically designed database. Data warehousing is used to provide analyses and outputs at all thematic, temporal, or spatial aggregation levels defined by the stakeholders (type of crops, herbicides, WFD areas, years), using Spatial On-Line Analytical Processing (SOLAP) tools. The aim of this approach is to allow public policy makers to make more informed and reasoned decisions when managing sensitive areas and/or implementing mitigation measures.
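The kind of multi-dimensional roll-up that a SOLAP tool provides over the warehoused indicators can be sketched with a simple group-by over the stakeholder-chosen dimensions. The field names and values below are assumptions for illustration, not the actual IMAS schema.

```python
# Illustrative OLAP-style aggregation of pesticide-pressure indicators
# along thematic, spatial, and temporal dimensions (hypothetical schema).
import pandas as pd

indicators = pd.DataFrame({
    "year":           [2010, 2010, 2011, 2011],
    "wfd_area":       ["A1", "A2", "A1", "A2"],
    "crop":           ["maize", "wheat", "maize", "wheat"],
    "herbicide_g_ha": [850.0, 420.0, 790.0, 460.0],
})

# Roll up pressure to the level chosen by the stakeholder
# (here: crop within WFD area, per year).
rollup = (indicators
          .groupby(["year", "wfd_area", "crop"], as_index=False)["herbicide_g_ha"]
          .mean())
print(rollup)
```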

  4. Sharing information among existing data sources

    NASA Astrophysics Data System (ADS)

    Ashley, W. R., III

    1999-01-01

    The sharing of information between law enforcement agencies is a prerequisite for the success of all jurisdictions. A wealth of information resides in both the databases and infrastructures of local, state, and regional agencies. However, this information is often not available to the law enforcement professionals who require it. When the information is available, individual investigators must not only know that it exists but also where it resides and how to retrieve it. In many cases, these types of cross-jurisdictional communications are limited to personal relationships that result from telephone calls, faxes, and in some cases, e-mail. As criminal elements become more sophisticated and distributed, law enforcement agencies must begin to develop infrastructures and common sharing mechanisms that address a constantly evolving criminal threat. Historically, criminals have taken advantage of the lack of communication between law enforcement agencies. Examples of this are evident in the search for stolen property and in monetary dealings. Pawned property, cash transactions, and failure to supply child support are three common cross-jurisdictional crimes that could be better enforced by strengthening the lines of communication. Criminal behavior demonstrates that it is easier for offenders to profit from their actions by operating across separate jurisdictions. For example, stolen property is sold outside of the jurisdiction of its origin. In most cases, simply traveling a short distance to the adjoining county or municipality is sufficient to ensure that apprehension of the criminal or seizure of the stolen property is highly unlikely. In addition to the traditional burglar, fugitives often sell or pawn property to finance their continued evasion of the law. Rapid sharing of information would increase the ability of law enforcement personnel to track and capture fugitives as well as other criminals. As one example of combating this threat, the State of Florida recently acted on the need to share crucial investigative information across jurisdictional bounds by establishing a communications infrastructure for all of its law enforcement jurisdictions. The Criminal Justice Network (CJ-Net) is a statewide TCP/IP network dedicated to the sharing of law enforcement information. CJ-Net is managed and maintained by the Florida Department of Law Enforcement (FDLE) and provides open access and privileges to any criminal justice agency, including the state court and penitentiary systems. In addition to Florida, other states, such as North Carolina, are also beginning to implement common-protocol communication infrastructures and architectures in order to link local jurisdictions together throughout the state. The law enforcement domain is in an optimum situation for information-sharing technologies. Communication infrastructures are continually being established, and as such, action is required to use these networks to their full potential. Information technologies that are best suited for the law enforcement domain must be evaluated and implemented in a cost-effective manner. Unlike the Defense Department and other large federal agencies, individual jurisdictions at both the local and state level cannot afford to expend limited resources on research and development of prototype systems. Therefore, we must identify enabling technologies that have matured in related domains and transition them into law enforcement at a minimum cost. Crucial to this effort is the selection of the appropriate levels of information-sharing technologies to be inserted. Information-sharing technologies that are unproven or have extensive recurring costs are not suitable for this domain. Information-sharing technologies traditionally exist between two distinct polar bounds: the data warehousing approach and mediation across distributed heterogeneous data sources. These two ends of the spectrum represent extremely different philosophies in accomplishing the same goal. The following sections of this paper discuss information-sharing mechanisms and examine the effectiveness of each for the law enforcement domain. In each case, the author offers an opinion as to which approach would provide the most appropriate solution to the problem of effectively sharing criminal justice information.
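To make the contrast concrete: a warehouse copies and reconciles records into one central store ahead of time, whereas a mediator leaves data at the sources and reconciles results at query time. The sketch below illustrates the mediation end of the spectrum with invented source names and record layouts; it is not drawn from CJ-Net or any real system.

```python
# Minimal sketch of mediation across heterogeneous sources (hypothetical data):
# a query is fanned out to each jurisdiction's source at request time and the
# differently shaped results are normalized into one view on the fly.
from typing import Iterable

COUNTY_PAWN_RECORDS = [
    {"serial": "SN123", "item": "laptop", "county": "Adams"},
]
STATE_STOLEN_PROPERTY = [
    ("SN123", "laptop", "reported stolen 1999-03-02"),
]

def query_county(serial: str) -> Iterable[dict]:
    # One source exposes dict-shaped records.
    return (r for r in COUNTY_PAWN_RECORDS if r["serial"] == serial)

def query_state(serial: str) -> Iterable[dict]:
    # Another source exposes tuples; the mediator maps them to a common shape.
    return ({"serial": s, "item": i, "note": n}
            for s, i, n in STATE_STOLEN_PROPERTY if s == serial)

def mediated_search(serial: str) -> list:
    """Federated lookup across all registered sources, merged at query time."""
    results = []
    for source in (query_county, query_state):
        results.extend(source(serial))
    return results

print(mediated_search("SN123"))
```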

  5. Community exposure to tsunami hazards in California

    USGS Publications Warehouse

    Wood, Nathan J.; Ratliff, Jamie; Peters, Jeff

    2013-01-01

    Evidence of past events and modeling of potential events suggest that tsunamis are significant threats to low-lying communities on the California coast. To reduce potential impacts of future tsunamis, officials need to understand how communities are vulnerable to tsunamis and where targeted outreach, preparedness, and mitigation efforts may be warranted. Although a maximum tsunami-inundation zone based on multiple sources has been developed for the California coast, the populations and businesses in this zone have not been documented in a comprehensive way. To support tsunami preparedness and risk-reduction planning in California, this study documents the variations among coastal communities in the amounts, types, and percentages of developed land, human populations, and businesses in the maximum tsunami-inundation zone. The tsunami-inundation zone includes land in 94 incorporated cities, 83 unincorporated communities, and 20 counties on the California coast. According to 2010 U.S. Census Bureau data, this tsunami-inundation zone contains 267,347 residents (1 percent of the 20-county resident population), of whom 13 percent identify themselves as Hispanic or Latino, 14 percent identify themselves as Asian, 16 percent are more than 65 years of age, 12 percent live in unincorporated areas, and 51 percent of the households are renter occupied. Demographic attributes related to age, race, ethnicity, and household status of residents in tsunami-prone areas vary substantially among communities, with many communities exceeding these regional averages. The tsunami-inundation zone in several communities also has high numbers of residents in institutionalized and noninstitutionalized group quarters (for example, correctional facilities and military housing, respectively). Communities with relatively high values in the various demographic categories are identified throughout the report. The tsunami-inundation zone contains significant nonresidential populations based on economic data from Infogroup (2011), including 168,565 employees (2 percent of the 20-county labor force) at 15,335 businesses that generate approximately $30 billion in annual sales. Although the regional percentage of at-risk employees is low, certain communities, such as Belvedere, Alameda, and Crescent City, have high percentages of their local workforce in the tsunami-inundation zone. Employees in the tsunami-inundation zone are primarily in businesses associated with tourism (for example, accommodations, food services, and retail trade) and shipping (for example, transportation and warehousing, manufacturing, and wholesale trade), although the dominance of these sectors varies substantially among the 94 cities. Although the number of occupants is not known for each site, the tsunami-inundation zone contains numerous dependent-population facilities, such as schools and child daycare centers, which may have individuals with limited mobility. The tsunami-inundation zone includes a substantial number of facilities that provide community services, such as banks, religious organizations, and grocery stores, where local residents may be unaware of evacuation procedures if previous awareness efforts focused on home preparedness. There are also numerous recreational areas in the tsunami-inundation zone, such as amusement parks, marinas, city and county beaches, and State and national parks, which attract visitors who may not be aware of tsunami hazards or evacuation procedures. During peak summer months, estimated daily attendance at city and county beaches can be approximately six times larger than the total number of residents in the tsunami-inundation zone. Community exposure to tsunamis in California varies considerably: some communities may experience great losses that represent only a small part of their community, while others may experience relatively small losses that devastate them. Among the 94 incorporated communities and the remaining unincorporated areas of the 20 coastal counties, the communities of Alameda, Oakland, Long Beach, Los Angeles, Huntington Beach, and San Diego have the highest number of people and businesses in the tsunami-inundation zone. The communities of Belvedere, Alameda, Crescent City, Emeryville, Seal Beach, and Sausalito have the highest percentages of people and businesses in this zone. On the basis of a composite index, the cities of Alameda, Belvedere, Crescent City, Emeryville, Oakland, and Long Beach have the highest combinations of the number and percentage of people and businesses in tsunami-prone areas.
