Sample records for integral experiments databases

  1. State Experience Integrating Pollution Prevention Into Permits

    EPA Pesticide Factsheets

    This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Policy and Guidance Database available at www2.epa.gov/title-v-operating-permits/title-v-operating-permit-policy-and-guidance-document-index. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  2. Prototyping Visual Database Interface by Object-Oriented Language

    DTIC Science & Technology

    1988-06-01

    approach is to use object-oriented programming. Object-oriented languages are characterized by three criteria [Ref. 4:p. 1.2.1]: - encapsulation of...made it a sub-class of our DMWindow.Cls, which is discussed later in this chapter. This extension to the application had to be integrated with our... abnormal behaviors similar to Korth’s discussion of pitfalls in relational database design. Even extensions like GEM [Ref. 8] that are powerful and

  3. Simulation of Organic Magnetic Resonance Force Microscopy Experiments

    DTIC Science & Technology

    2006-12-01

    Citation of manufacturer’s or trade names does not constitute an official endorsement or approval of the use thereof. Destroy this report when...doubled. In both the 2-D and 3-D cases, we do not sum over a finite spin system but integrate over a spin density. In 3-D the integral is ∂Fx/∂x = −V...k To determine ∆fc given by equation 3, the integral in equation 2 must be performed. The integral over all sensitive slices is determined with an

  4. CMOS active pixel sensor type imaging system on a chip

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Nixon, Robert (Inventor)

    2011-01-01

    A single chip camera which includes an integrated image acquisition portion and control portion and which has double sampling/noise reduction capabilities thereon. Part of the integrated structure reduces the noise that is picked up during imaging.

  5. Acoustic Emission Weld Monitor System. Data Acquisition and Investigation

    DTIC Science & Technology

    1979-10-01

    improved weld integrity by allowing repairs to be performed on a pass by pass basis as the flaws occur rather than after the completion of a heavy...effect on weld integrity. Flaw confirmation was primarily accomplished through the use of radiographic inspection. Positive confirmation of porosity...Figures 14-21, the weld is represented by the horizontal dashed line. Transducer locations, derived from calibration files, are indicated by verti

  6. Budgetary and Programmatic Fluctuations during the System Development and Demonstration Phase: A Case Study of the Marine Corps H-1 Upgrade Program

    DTIC Science & Technology

    2007-12-01

    impact of economic change might include a closing factory, market manipulation, the signing of international trade treaties, or the global...Refinement System Integration System Demonstration Concept Decision BA C LRIP Full-Rate Production & Deployment System Development and Demonstration...BLOCK III Concept Exploration Component Advanced Development Concept and Technology Development System Integration System Demonstration Decision Review

  7. Correlation of Experimental and Theoretical Steady-State Spinning Motion for a Current Fighter Airplane Using Rotation-Balance Aerodynamic Data

    DTIC Science & Technology

    1978-07-01

    were input into the computer program. The program was numerically integrated with time by using a fourth-order Runge-Kutta integration algorithm with...equations of motion are numerically integrated to provide time histories of the aircraft spinning motion. A.2 EQUATIONS DEFINING THE FORCE AND MOMENT...by Cy or Cn. A.4 EQUATIONS FOR TRANSFERRING AERODYNAMIC DATA INPUTS TO THE PROPER HORIZONTAL CENTER OF GRAVITY
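
    The classical fourth-order Runge-Kutta scheme cited in this record can be sketched in a few lines. The step function below is a generic textbook RK4 integrator, not the AEDC program itself; the derivative function and state vector are placeholder illustrations:

```python
def rk4_step(f, t, y, h):
    """Advance state y by one step of size h with classical fourth-order Runge-Kutta.

    f(t, y) returns the time derivative of the state; y is a list of floats.
    """
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h,     [yi + h * ki for yi, ki in zip(y, k3)])
    # Weighted average of the four slope estimates (local error O(h^5)).
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
```

    Marching such a step forward in time is what produces the "time histories" of the spinning motion described in the record; the global error of the method is O(h^4).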

  8. Destroyer Engineered Operating Cycle (DDEOC), System Maintenance Analysis DDG-37 Class, Salt Water Circulating System SMA 37-106-256, Review of Experience

    DTIC Science & Technology

    1978-07-01

    horizontally mounted, single-end suction, single- stage centrifugal pumps. The rotating elements are mounted on the shaft of the driving motor, and the pump...annual open-and-inspect requirement for MIP E-17/296-21, MRC 21 A14V A. Industrial Facility Improvements -- None IMA Improvements -- None Integrated ...Circulating Pump, Warren Pumps, Inc., NAVSHIPS 347-3146, January 1959. 4. Technical Manual - Horizontal Close-Coupled Pumps Sea (Salt) Water

  9. Mobile radio interferometric geodetic systems

    NASA Technical Reports Server (NTRS)

    Macdoran, P. F.; Niell, A. E.; Ong, K. M.; Resch, G. M.; Morabito, D. D.; Claflin, E. S.; Lockhart, T. G.

    1978-01-01

    Operation of the Astronomical Radio Interferometric Earth Surveying (ARIES) in a proof of concept mode is discussed. Accuracy demonstrations over a short baseline, a 180 km baseline, and a 380 km baseline are documented. Use of ARIES in the Sea Slope Experiment of the National Geodetic Survey to study the apparent differences between oceanographic and geodetic leveling determinations of the sea surface along the Pacific Coast is described. Integration of the NAVSTAR Global Positioning System and a concept called SERIES (Satellite Emission Radio Interferometric Earth Surveying) is briefly reviewed.

  10. Halogen occultation experiment integrated test plan

    NASA Technical Reports Server (NTRS)

    Mauldin, L. E., III; Butterfield, A. J.

    1986-01-01

    The test program plan is presented for the Halogen Occultation Experiment (HALOE) instrument, which is being developed in-house at the Langley Research Center for the Upper Atmosphere Research Satellite (UARS). This comprehensive test program was developed to demonstrate that the HALOE instrument meets its performance requirements and maintains integrity through UARS flight environments. Each component, subsystem, and system level test is described in sufficient detail to allow development of the necessary test setups and test procedures. Additionally, the management system for implementing this test program is given. The HALOE instrument is a gas correlation radiometer that measures vertical distribution of eight upper atmospheric constituents: O3, HCl, HF, NO, CH4, H2O, NO2, and CO2.

  11. ARC-2010-ACD10-0052-035

    NASA Image and Video Library

    2010-03-20

    For Inspiration and Recognition of Science and Technology; FIRST Robotics Competition 2010 Silicon Valley Regional held at San Jose State University, San Jose, California. Evolution, School for Integrated Academics and Technology, Team 1834

  12. A Capstone Experience in Physics

    NASA Astrophysics Data System (ADS)

    Ba, Jean-Claude; Lott, Trina

    1997-04-01

    This is an integrated science course required for all AS/AA degree seeking students. It includes ethical issues in science, the scientific method, and interpretation of scientific results. This paper will present the work done by the only student enrolled in the course in Autumn Quarter 1996. This course is in its 2nd year at Columbus State Community College and may open the door to the development of more programs/courses that will introduce students from two-year colleges to the different steps of a research project. In the future such projects could be completed in a local company as part of an internship.

  13. Superpave in-situ stress/strain investigation--phase II : vol. I, summary report.

    DOT National Transportation Integrated Search

    2009-05-01

    The characterization of materials is an integral part of the overall effort to validate the Superpave system and to calibrate the performance prediction models for the environmental conditions observed in the Commonwealth of Pennsylvania.

  14. Superpave in-situ stress/strain investigation--phase II : vol. II, materials characterization.

    DOT National Transportation Integrated Search

    2009-05-01

    The characterization of materials is an integral part of the overall effort to validate the Superpave system and to calibrate the performance prediction models for the environmental conditions observed in the Commonwealth of Pennsylvania.

  15. Superpave in-situ stress/strain investigation--phase II : vol. IV, mechanistic analysis and implementation.

    DOT National Transportation Integrated Search

    2009-05-01

    The characterization of materials is an integral part of the overall effort to validate the Superpave system and to calibrate the performance prediction models for the environmental conditions observed in the Commonwealth of Pennsylvania.

  16. Superpave in-situ stress/strain investigation--phase II : vol. III, field data collection and summary.

    DOT National Transportation Integrated Search

    2009-05-01

    The characterization of materials is an integral part of the overall effort to validate the Superpave system and to calibrate the performance prediction models for the environmental conditions observed in the Commonwealth of Pennsylvania.

  17. Repetitive Series Interrupter II.

    DTIC Science & Technology

    1977-07-01

    nated by other authorized documents. The citation of trade names and names of manufacturers in this report is not to be construed as official... integrating inductor Magnet circuit load resistance Pulse-forming network load resistance Fault network load resistance Time delay between TUT fire and

  18. Current Capabilities and Planned Enhancements of SUSTAIN

    EPA Science Inventory

    Efforts have been under way by the U.S. Environmental Protection Agency (EPA) since 2003 to develop a decision-support system for placement of BMPs at strategic locations in urban watersheds. This system is called the System for Urban Stormwater Treatment and Analysis INtegration...

  19. Role of Hsp90 in Androgen-Refractory Prostate Cancer

    DTIC Science & Technology

    2010-03-01

    designed siRNA sequence using Integrated DNA Technologies RNAi online software tool (IDT, Coralville, IA). The sequence of siRNA specific for...proliferation and production of prostate-specific antigen in androgen-sensitive prostatic cancer cells, LNCaP, by dihydrotestosterone. Endocrinology 136

  20. Counterinsurgency in Vietnam, Afghanistan, and Iraq: A Critical Analysis

    DTIC Science & Technology

    2014-05-01

    took advantage of the corruption and inefficiency of the government to recruit the proletariat for membership in the trade unions....which might end with disappointing results. Integrity is an issue when dealing with the large quantities of emergency aid and funding available to

  1. Hardware Algorithm Implementation for Mission Specific Processing

    DTIC Science & Technology

    2008-03-01

    knowledge about the VLSI technology and understands VHDL, scripting, and integrating the script in the Cadence® software program or Modelsim®. The main...possible to have a trade-off between parallel and serial logic design for the circuit. Power can be saved by using parallelization, pipelining, or a

  2. Sun Tzu: Theorist for the Twenty-First Century

    DTIC Science & Technology

    2010-03-01

    instructing its Senior Leaders in the productiveness of this Strategic Thinking Model and ensure that future leaders are given the appropriate...the following suggestion: (1) Continue to integrate Sun Tzu’s noteworthy strategic theories in today’s campaign plans to win the conflicts against

  3. Deep Diving Cetacean Behavioral Response Study MED 09

    DTIC Science & Technology

    2009-09-30

    distribution may be affected by anthropogenic noise. The role of the SPAWAR Systems Center (SSC) Pacific team was to integrate the interdisciplinary...over 100 hours of data were collected on production sonobuoys for post test ambient noise data. CTD casts were taken at 56 sites, collecting

  4. Integrating In Vitro and In Silico Approaches to Assess Inter-individual Toxicokinetic Variability

    EPA Science Inventory

    This educational talk provided an introduction to what is currently known to contribute to differences in how various populations and life stages metabolize chemicals to which they are exposed. These differences will impact how different populations may be affected following chem...

  5. Insights and challenges to integrating data from diverse ecological networks

    USDA-ARS?s Scientific Manuscript database

    Many of the most dramatic and surprising effects of global change occur across large spatial extents, from regions to continents, that impact multiple ecosystem types across a range of interacting spatial and temporal scales. The ability of ecologists and interdisciplinary scientists to understand a...

  6. 14 CFR 27.1189 - Shutoff means.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...— (1) Lines, fittings, and components forming an integral part of an engine; (2) For oil systems for which all components of the system, including oil tanks, are fireproof or located in areas not subject to engine fire conditions; and (3) For reciprocating engine installations only, engine oil system...

  7. SU-E-T-161: SOBP Beam Analysis Using Light Output of Scintillation Plate Acquired by CCD Camera.

    PubMed

    Cho, S; Lee, S; Shin, J; Min, B; Chung, K; Shin, D; Lim, Y; Park, S

    2012-06-01

    To analyze Bragg-peak beams in an SOBP (spread-out Bragg peak) beam using a CCD (charge-coupled device) camera - scintillation screen system. We separated each Bragg-peak beam using the light output of a high sensitivity scintillation material acquired by a CCD camera and compared with Bragg-peak beams calculated by Monte Carlo simulation. In this study, the CCD camera - scintillation screen system was constructed with a high sensitivity scintillation plate (Gd2O2S:Tb), a right-angled prismatic PMMA phantom, and a Marlin F-201B IEEE-1394 CCD camera. The SOBP beam irradiated by the double scattering mode of a PROTEUS 235 proton therapy machine in NCC is 8 cm in width with a 13 g/cm2 range. The gain, dose rate, and current of this beam are 50, 2 Gy/min, and 70 nA, respectively. Also, we simulated the light output of the scintillation plate for the SOBP beam using the Geant4 toolkit. We evaluated the light output of the high sensitivity scintillation plate according to integration time (0.1 - 1.0 sec). The images of the CCD camera during the shortest integration time (0.1 sec) were acquired automatically and randomly, respectively. Bragg-peak beams in the SOBP beam were analyzed from the acquired images. Then, the SOBP beam used in this study was calculated with the Geant4 toolkit and Bragg-peak beams in the SOBP beam were obtained with the ROOT program. The SOBP beam consists of 13 Bragg-peak beams. The results of the experiment were compared with those of the simulation. We analyzed Bragg-peak beams in the SOBP beam using the light output of the scintillation plate acquired by the CCD camera and compared them with those of the Geant4 simulation. We are going to study SOBP beam analysis using a more effective image acquisition technique. © 2012 American Association of Physicists in Medicine.

  8. INTEGRATING SOURCE WATER PROTECTION AND DRINKING WATER TREATMENT: U.S. ENVIRONMENTAL PROTECTION AGENCY'S WATER SUPPLY AND WATER RESOURCES DIVISION

    EPA Science Inventory

    The U.S. Environmental Protection Agency's (EPA) Water Supply and Water Resources Division (WSWRD) is an internationally recognized water research organization established to assist in responding to public health concerns related to drinking water supplies. WSWRD has evolved from...

  9. Integration of a Miniaturized Conductivity Sensor into an Animal-Borne Instrument

    DTIC Science & Technology

    2013-09-30

    inductive sensors. However, there is a trade-off between size and accuracy. Decreasing size results in decreased accuracy. In addition, by...modified for easy integration into the existing SRDL. The CT package will then be integrated into the SRDL and tested in the lab. After the successful

  10. Fiber Optic Safeguards Sealing System

    DTIC Science & Technology

    1978-01-01

    ’ or trade names does not constitute an official endorsement or approval of the use thereof. Destroy this report when it is no longer needed. Do not...an integrity check of a seal than to photograph the seal’s fingerprints and to match positive/negative overlays. The seal identification time and

  11. Accessible integration of agriculture, groundwater, and economic models using the Open Modeling Interface (OpenMI): methodology and initial results

    USDA-ARS?s Scientific Manuscript database

    Policy for water resources impacts not only hydrological processes, but the closely intertwined economic and social processes dependent on them. Understanding these process interactions across domains is an important step in establishing effective and sustainable policy. Multidisciplinary integrated...

  12. A Comparison of the AFGL Flash, Draper Dart and AWS Haze Models with the Rand Wetta Model for Calculating Atmospheric Contrast Reduction.

    DTIC Science & Technology

    1982-03-01

    52 ILLUSTRATIONS Figure 1 Horizontal Visibility Profiles for Stair-Step and Exponential Extinction Coefficient...background reflectances. These values were then numerically integrated (via a combination of Simpson’s and Newton’s 3/8th rules) and compared with the
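
    Combining Simpson’s rule with Newton’s 3/8 rule, as this record describes, is a standard way to integrate equally spaced samples when the number of intervals may be odd: the 3/8 rule closes out the last three intervals and composite Simpson handles the (now even) remainder. A minimal sketch, with illustrative names not taken from the AFGL report:

```python
def integrate(y, h):
    """Integrate equally spaced samples y with spacing h, combining composite
    Simpson's rule with Newton's 3/8 rule when the interval count is odd."""
    n = len(y) - 1  # number of intervals
    if n < 1:
        return 0.0
    total = 0.0
    if n % 2 == 1:
        if n < 3:
            # single interval: fall back to the trapezoid rule
            return h * (y[0] + y[1]) / 2
        # Newton's 3/8 rule over the last three intervals
        total += 3 * h / 8 * (y[n - 3] + 3 * y[n - 2] + 3 * y[n - 1] + y[n])
        n -= 3
    # composite Simpson's rule over the remaining even number of intervals
    for i in range(0, n, 2):
        total += h / 3 * (y[i] + 4 * y[i + 1] + y[i + 2])
    return total
```

    Both rules are exact for cubic polynomials, so the combination loses no order of accuracy at the seam.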

  13. Air Force Roadmap 2006-2025

    DTIC Science & Technology

    2006-06-01

    systems. Cyberspace is the electronic medium of net-centric operations, communications systems, and computers, in which horizontal integration and online...will be interoperable, more robust, responsive, and able to support faster spacecraft initialization times. This Integrated Satellite Control... horizontally and vertically integrated information through machine-to-machine conversations enabled by a peer-based network of sensors, command

  14. The Impact of Turbulent Fluctuations on Light Propagation in a Controlled Environment

    DTIC Science & Technology

    2014-07-01

    Turbulent flows are an integral part of the natural environment. In the ocean, the mixing that accompanies turbulent flows is an important part of the...the vertical direction and | if Q for the horizontal directions. 2.1.2 Temperature Dissipation Rate - TD For the estimation of TD rates from the

  15. EFFECTS OF APPLIANCE TYPE AND OPERATING VARIABLES ON WOODSTOVE EMISSIONS: VOLUME I. REPORT AND APPENDICES A-C.

    EPA Science Inventory

    The report gives results of a project, in support of the Integrated Air Cancer Project (IACP), to provide data on the specific effects of appliance type and operating variables on woodstove emissions. Samples of particulate material and volatile organic compounds (VOCs) were colle...

  16. Integrated Palmer amaranth management in glufosinate-resistant cotton: I. Soil-inversion, high-residue cover crops and herbicide regimes

    USDA-ARS?s Scientific Manuscript database

    Amaranthus control in cotton can be difficult with the loss of glyphosate efficacy, especially in conservation tillage cropping systems. Research was conducted from 2006 to 2008 at the EV Smith Research Center, Shorter, Alabama, to determine the level of glyphosate-susceptible Amaranthus control provi...

  17. Performance of portland limestone cements : cements designed to be more sustainable that include up to 15% limestone addition.

    DOT National Transportation Integrated Search

    2013-11-01

    In 2009, ASTM and AASHTO permitted the use of up to 5% interground limestone in ordinary portland cement (OPC) as a part of ASTM : C150/AASHTO M85. When this project was initiated a new proposal was being discussed that would enable up to 15% intergr...

  18. The Challenges for Marketing Distance Education in Online Environment: An Integrated Approach

    ERIC Educational Resources Information Center

    Demiray, Ugur, Ed.; Sever, N. Serdar, Ed.

    2009-01-01

    The education system of our times has transformed greatly due to enormous developments in the IT field, ease in access to online resources by the individuals and the teachers adopting new technologies in their instructional strategies, be it for course design, development or delivery. The field of Distance and Online Education is experiencing…

  19. The Solution of Large Time-Dependent Problems Using Reduced Coordinates.

    DTIC Science & Technology

    1987-06-01

    numerical integration schemes for dynamic problems, the algorithm known as Newmark’s Method. The behavior of the Newmark scheme, as well as the basic...The horizontal displacements at the mid-height and the bottom of the building are shown in figure 4.13. The solution history illustrated is for a

  20. WSTIAC Quarterly, Volume 7, Number 2. Naval Ship and Ship Systems Needs for Early 21st Century

    DTIC Science & Technology

    2007-01-01

    Radar Suite Navy Enterprise Warfare System Affordable Future Fleet 2 Integrated Scalable Modular Open C4I Common Core B/L’s Command & Combatant Ship...discussed. System constraints, which force trade-offs in sensor design and in ultimate performance, are also covered. Time permitting, a projection of

  1. Scalability of Semi-Implicit Time Integrators for Nonhydrostatic Galerkin-based Atmospheric Models on Large Scale Cluster

    DTIC Science & Technology

    2011-01-01

    present performance statistics to explain the scalability behavior. Keywords-atmospheric models, time integrators, MPI, scalability, performance; I...across inter-element boundaries. Basis functions are constructed as tensor products of Lagrange polynomials ψi(x) = hα(ξ) ⊗ hβ(η) ⊗ hγ(ζ), where hα
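
    The tensor-product basis ψi(x) = hα(ξ)hβ(η)hγ(ζ) quoted in this record can be illustrated directly: evaluate 1-D Lagrange polynomials on a node set and multiply one per coordinate direction. The sketch below is a generic illustration with an assumed node set, not code from the paper:

```python
def lagrange(nodes, alpha, x):
    """Evaluate the 1-D Lagrange polynomial h_alpha(x) for the given nodes.

    h_alpha equals 1 at nodes[alpha] and 0 at every other node.
    """
    p = 1.0
    for j, xj in enumerate(nodes):
        if j != alpha:
            p *= (x - xj) / (nodes[alpha] - xj)
    return p

def basis_3d(nodes, a, b, g, xi, eta, zeta):
    """Tensor-product basis psi(xi, eta, zeta) = h_a(xi) * h_b(eta) * h_g(zeta)."""
    return lagrange(nodes, a, xi) * lagrange(nodes, b, eta) * lagrange(nodes, g, zeta)
```

    The Kronecker-delta property of the 1-D polynomials carries over to the product: ψ equals 1 at the tensor-product node indexed by (a, b, g) and 0 at every other node, which is what makes such bases convenient for element-based Galerkin methods.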

  2. Improving Emergency Medical Services (EMS) in the United States Through Improved and Centralized Federal Coordination

    DTIC Science & Technology

    2011-03-01

    guidance and actions towards that end service product. Also to be considered is that not only are the stakeholders independent in their needs for strategy...from http://www.firerescue1.com/fire-ems/articles/770081-Ex-DC-fire-chief-rerets-intergrating Doyle, J. (2008, April 13). San Francisco 911 misses

  3. Modeling Distributions of Non-Coherent Integration Sidelobes

    DTIC Science & Technology

    2010-03-01

    we can write C_n^2 = X†AX (4.3) where the matrix A is an outer product of the real coefficients derived from the j and “ † ” denotes...Desired C/A: In-phase only; Intrf; Intrf + Noise. © The MITRE Corporation. All rights reserved 5. NON-COHERENT INTEGRATION MODELING

  4. Characterizing phenolformaldehyde adhesive cure chemistry within the wood cell wall

    Treesearch

    Daniel J. Yelle; John Ralph

    2016-01-01

    Adhesive bonding of wood using phenol-formaldehyde remains the industrial standard in wood product bond durability. Not only does this adhesive infiltrate the cell wall, it also is believed to form primary bonds with wood cell wall polymers, particularly guaiacyl lignin. However, the mechanism by which phenol-formaldehyde adhesive integrally interacts and bonds to...

  5. Using an integrated moisture index to assess forest composition and productivity. Chapter 11.

    Treesearch

    Matthew Peters; Louis R. Iverson; Anantha M. Prasad

    2010-01-01

    The 834,000-acre Wayne National Forest, Ohio's only national forest, lies in the rolling foothills of the Appalachians in the state's southeast. Congress established the forest boundary in 1934 to prioritize land acquisition and ownership of forest lands in need of restoration. The forest is composed of both central hardwoods,...

  6. A Zonal Approach for the Solution of Coupled Euler and Potential Solutions of Flows with Complex Geometries.

    DTIC Science & Technology

    1987-06-01

    obtained from: A simple numerical integration scheme is employed to perform the integral in Equations (B2) and (B6) along the dividing streamline. A 11 4...angle of attack was small, the dividing streamline remained almost horizontal in this case. Results of a higher angle of attack case, in which the mesh

  7. Separating Belligerent Populations: Mitigating Ethno-Sectarian Conflict

    DTIC Science & Technology

    2008-05-22

    Princeton University Press, 2003), 132-33. 6 The very idea of reconciliation, much less any return to a state of peaceful, integrated coexistence...threaten the environment. Such conflicts also threaten international access to natural resources and the security of trade distribution infrastructure...criticize separation because it creates more economic and social problems. Lack of trade and economic opportunity in isolated ethnic enclaves results

  8. Multivariate Dynamical Modeling to Investigate Human Adaptation to Space Flight: Initial Concepts

    NASA Technical Reports Server (NTRS)

    Shelhamer, Mark; Mindock, Jennifer; Zeffiro, Tom; Krakauer, David; Paloski, William H.; Lumpkins, Sarah

    2014-01-01

    The array of physiological changes that occur when humans venture into space for long periods presents a challenge to future exploration. The changes are conventionally investigated independently, but a complete understanding of adaptation requires a conceptual basis founded in integrative physiology, aided by appropriate mathematical modeling. NASA is in the early stages of developing such an approach.

  9. Formula Gives Better Contact-Resistance Values

    NASA Technical Reports Server (NTRS)

    Lieneweg, Udo; Hannaman, David J.

    1988-01-01

    Lateral currents in contact strips taken into account. Four-terminal test structures added to integrated circuits to enable measurement of interfacial resistivities of contacts between thin conducting layers. Thin-film model simplified quasi-two-dimensional potential model that accounts adequately for complicated three-dimensional, nonuniform current densities. Effects of nonuniformity caused by lateral current flow in strips summarized in equivalent resistance Rs and voltage Vs.

  10. Delft-FEWS: A Decision Making Platform to Integrate Data, Models, and Algorithms for Large-Scale River Basin Water Management

    NASA Astrophysics Data System (ADS)

    Yang, T.; Welles, E.

    2017-12-01

    In this paper, we introduce a flood forecasting and decision making platform, named Delft-FEWS, which has been developed over the years at Delft Hydraulics and now at Deltares. The philosophy of Delft-FEWS is to provide water managers and operators with an open shell tool, which allows the integration of a variety of hydrological, hydraulics, river routing, and reservoir models with hydrometeorological forecast data. Delft-FEWS serves as a powerful tool for both basin-scale and national-scale water resources management. The essential novelty of Delft-FEWS is to change flood forecasting and water resources management from a single model or agency centric paradigm to an integrated framework, in which different models, data, algorithms, and stakeholders are strongly linked together. The paper will start with the challenges in water resources management, and the concept and philosophy of Delft-FEWS. It then details the data handling and the linkages of Delft-FEWS with different hydrological, hydraulic, and reservoir models. Last, several case studies and applications of Delft-FEWS will be demonstrated, including the National Weather Service and the Bonneville Power Administration in the USA, and a national application in the water board in the Netherlands.

  11. Sex therapy and mastectomy.

    PubMed

    Witkin, M H

    1975-01-01

    Because the emotional trauma associated with a mastectomy exceeds the physical trauma, the recovery of the woman is greatly affected by the response of her husband or lover. Sex therapy, therefore, involves the couple. The approach described here is aimed at assisting the couple to confront and integrate the mastectomy experience. The use of a prosthesis is discouraged during intercourse because it delays such confrontation; certain sex therapy exercises (body imagery and sensate focus) are usually recommended because they facilitate confrontation and acceptance. These, modified for the circumstances, are described. It is suggested that intercourse be attempted as early as possible, and that if physical weakness or psychological trepidation intervenes, the physical desire and caring of the husband be expressed nonetheless. The "professional" attitudes that psychotherapy is always indicated for mastectomy patients and that the proper role of the husband is matter-of-fact denial are rejected; emphasis is placed on the beneficial consequences of sharing of all emotions.

  12. New Dimensions in Microarchitecture Harnessing 3D Integration Technologies (BRIEFING CHARTS)

    DTIC Science & Technology

    2007-03-06

    Quad Core Bandwidth and Latency Boundaries General Purpose Processor Loads: Latency limited, Bandwidth limited. Processor load trade-off between I...delay No = number of ckts at 1V, do = ckt delay at 1V. From “3D Integration” Special Topic Session, W. Haensch, ISSCC ’07, 2/07 DARPA MTS March 6, 2007

  13. Environmental Acoustic Considerations for Passive Detection of Maritime Targets by Hydrophones in a Deep Ocean Trench

    DTIC Science & Technology

    2010-06-01

    Science and Technology. Available: http://cmst.curtin.edu.au/local/docs/products/actup_v2_2l_installation_user_guide.pdf (accessed 2 June 2010...noisecurve112(:,6)); %% Integrating Noise Level Trench A n2=0; Itot=0; phi_t=atan(D1/L1); m=1; while (phi(m,1)>phi_t) m=m+1; end

  14. ARC-2009-ACD09-0244-008

    NASA Image and Video Library

    2009-11-04

    A Nanosensor Device for Cellphone Integration and Chemical Sensing Network. iPhone with sensor chip, data acquisition board and sampling jet. (Note 4-4-2012: ‘High Sensitive, Low Power and Compact Nano Sensors for Trace Chemical Detection’ is the winner of the Government Invention of the Year Award 2012 (winning inventors Jing Li and Meyya Meyyappan, NASA/ARC, and Yijiang Lu, University of California Santa Cruz.)

  15. Guide to the Integration of Selected Concepts of Economics into the History Curriculum of Fort Worth Country Day School.

    ERIC Educational Resources Information Center

    Dixon, Ford

    This guide will help teachers of grades 6-12 integrate economics concepts into history courses. The developers believe that the language and theories of economics are more understandable, germane, and pertinent in the context of a history curriculum. The seven basic economic concepts taught are: the law of demand, the law of supply, private…

  16. Toward Improved Maintenance Training Programs: The Potentials for Training and Aiding the Technician.

    DTIC Science & Technology

    1981-07-01

    conditional, fault-isolation approach of the content expert, photographs of normal and abnormal symptoms... Data Base Requirements: the content-expert may...THE AUTOMATED INTEGRATION OF TRAINING AND AIDING INFORMATION FOR THE OPERATOR/TECHNICIAN Dr. Douglas Towne...Subsystem approach developed by the Air Force in the 1960’s...until this Third Biennial Conference for us to call a meeting devoted to integrate Human

  17. Protective Socket For Integrated Circuits

    NASA Technical Reports Server (NTRS)

    Wilkinson, Chris; Henegar, Greg

    1988-01-01

    Socket for integrated circuits (ICs) protects from excessive voltages and currents or from application of voltages and currents in wrong sequence during insertion or removal. Contains built-in switch that opens as IC is removed, disconnecting leads from signals and power. Also protects other components on circuit board from transients produced by insertion and removal of IC. Makes it unnecessary to turn off power to entire circuit board so other circuits on board continue to function.

  18. Advanced Techniques for Scene Analysis

    DTIC Science & Technology

    2010-06-01

    robustness prefers a bigger integration window to handle larger motions. The advantage of pyramidal implementation is that, while each motion vector dL...labeled SAR images. Now the previous algorithm leads to a more dedicated classifier for the particular target; however, our algorithm trades generality for...accuracy is traded for generality. 7.3.2 I-RELIEF Feature weighting transforms the original feature vector x into a new feature vector x′ by assigning each

  19. Hexagonal Data Base Study.

    DTIC Science & Technology

    1983-09-01

    Army position unless so designated by other authorized documents. The citation in this report of trade names of commercially available products does...in this report. Two aspects of the concept appeared to have been less than optimal. One was the trade-off between batch and real-time...suggested in section 4.1.2. Recode the new concept and compare the results with the present work. b. Integrate point and line data into the

  20. Airborne Navigation Remote Map Reader Evaluation.

    DTIC Science & Technology

    1986-03-01

    EVALUATION James C. Byrd, Integrated Controls/Displays Branch, Avionics Systems Division, Directorate of Avionics Engineering, March 1986 Final Report...Resolution 15 3.2 Accuracy 15 3.3 Symbology 15 3.4 Video Standard 18 3.5 Simulator Control Box 18 3.6 Software 18 3.7 Display Performance 21 3.8 Reliability 24...can be selected depending on the detail required and will automatically be presented at his present position. The French RMR uses a Flying Spot Scanner

  1. Aerospace Software Engineering for Advanced Systems Architectures (L’Ingenierie des Logiciels Pour les Architectures des Systemes Aerospatiaux)

    DTIC Science & Technology

    1993-11-01

    Eliezer N. Solomon, Steve Sedrel, Westinghouse Electronic Systems Group, P.O. Box 746, MS 432, Baltimore, Maryland 21203-0746, USA SUMMARY The United States...subset of the Joint Integrated Avionics Working Group (JIAWG)...NewAgentCollection, which has four parameters: Acceptor, of type Task._D... Performance...Published November 1993 Distribution and Availability on Back Cover AGARD-CP54 ADVISORY GROUP FOR AEROSPACE RESEARCH & DEVELOPMENT 7 RUE ANCELLE 92200

  2. A Study of the Ambulatory Care Quality Assurance Program at DeWitt Army Community Hospital, Fort Belvoir, Virginia

    DTIC Science & Technology

    1982-12-01

    34 "Integrated Approach Improves Quality Assurance, Risk Management Activities," Hospitals, (September 1, 1980), pp. 59-62. Rinaldi, Leena and Barbara...mode, etc.). (2) Trending as a method to determine abnormalities. (3) Tests of statistical significance (Chi-squared, T-Test, correlation). b. Develop a...dentists, nurses, etc.), such as age, type of medical training and degree, and practice of the physician. The structural approach assumes that

  3. A Study of the Ambulatory Care Quality Assurance Program at DeWitt Army Community Hospital, Fort Belvoir, Virginia

    DTIC Science & Technology

    1982-08-01

    Orlikoff, James E. and Gary B. Lanham. "Integrated Approach Improves Quality Assurance, Risk Management Activities," Hospitals, (September 1, 1980...deviation, mode, etc.). (2) Trending as a method to determine abnormalities. (3) Tests of statistical significance, i.e., Chi-squared, T-Test, correlation...dentists, nurses, etc.), such as age, type of medical training and degree, and practice of the physician. The "structural" approach assumes that given

  4. A High-Performance Reconfigurable Fabric for Cognitive Information Processing

    DTIC Science & Technology

    2010-12-01

    receives a data token from its control input (shown as a horizontal arrow above). The value of this data token is used to select an input port. The...dual of a merge. It receives a data token from its control input (shown as a horizontal arrow above). The value of this data token is used to select...Computer-Aided Design of Integrated Circuits and Systems, Vol. 26, No. 2, February 2007. [12] Cadence Design Systems. Clock Domain Crossing: Closing the

  5. The Department of Defense Statement on the Science and Technology Program by Mr. H. Mark Grove, Assistant Deputy Under Secretary of Defense for Research and Advanced Technology Before the Defense Subcommittee of the Committee on Appropriations of the United States House of Representatives, 97th Congress, Second Session,

    DTIC Science & Technology

    1982-06-16

    Technology Integration (AFTI) Program, (2) a nonmetallic composite helicopter fuselage, and (3) a new initiative to develop Short Take-Off and...range of passive sonobuoys is being extended and the performance of a high gain extended life deployed horizontal line array is being investigated

  6. The Effects of a Geomagnetic Storm on Thermospheric Circulation.

    DTIC Science & Technology

    1987-01-01

    frequency. ρ air density. σ_P, σ_H Pedersen and Hall conductivities. Σ_P height-integrated Pedersen conductivity. horizontal viscous stress. east...equations need to be expanded upon. The energy density is ρ(C_pT + ½V²). The horizontal viscous stress, including molecular and...with Z=0 at 80 km and Z=14.4 at 450 km for a total of 49 levels each 0.3 of a scale height apart. Also, the horizontal wind velocity, gas energy

  7. A Secure and Reliable High-Performance Field Programmable Gate Array for Information Processing

    DTIC Science & Technology

    2012-03-01

    receives a data token from its control input (shown as a horizontal arrow above). The value of this data token is used to select an input port. The input...dual of a merge. It receives a data token from its control input (shown as a horizontal arrow above). The value of this data token is used to select...Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 26, No. 2, February 2007. [12] Cadence Design Systems, “Clock Domain

  8. Rotating shielded crane system

    DOEpatents

    Commander, John C.

    1988-01-01

    A rotating, radiation shielded crane system for use in a high radiation test cell, comprises a radiation shielding wall, a cylindrical ceiling made of radiation shielding material and a rotatable crane disposed above the ceiling. The ceiling rests on an annular ledge integrally attached to the inner surface of the shielding wall. Removable plugs in the ceiling provide access for the crane from the top of the ceiling into the test cell. A seal is provided at the interface between the inner surface of the shielding wall and the ceiling.

  9. Use of Electromyogram Information to Improve Human Operator Performance.

    DTIC Science & Technology

    1979-12-01

    [Figure residue from Fig. 2: Time Period Integrator]...experimental results, and shows some of the different approaches that were used in analyzing the data. The chapter is in three parts. First, scores are fit...double vision, eye surgery, best corrected vision less than 20/20, abnormal depth perception, or decreased visual field? a. Yes b. No...10. Have

  10. Rotor Dynamic Inflow Derivatives and Time Constants from Various Inflow Models.

    DTIC Science & Technology

    1980-12-01

    fore-and-aft rotor diameter for the case of horizontal flight. It is possible to determine from the blade twist both the geometric and equivalent...17, the flat-wake theory represents a limiting case where all the vortices transferred to the slipstream of a rotor, moving horizontally at a...[Figure 9: Effects of the integration increment on percent error, for increments 0.0-0.5]

  11. IRIS Toxicological Review of Dichloromethane (Methylene ...

    EPA Pesticide Factsheets

    EPA has finalized the Toxicological Review of Dichloromethane (Methylene Chloride): In support of the Integrated Risk Information System (IRIS). Now final, this assessment may be used by EPA’s program and regional offices to inform decisions to protect human health. This document presents background information and justification for the Integrated Risk Information System (IRIS) Summary of the hazard and dose-response assessment of dichloromethane. IRIS Summaries may include oral reference dose (RfD) and inhalation reference concentration (RfC) values for chronic and other exposure durations, and a carcinogenicity assessment. Internet/NCEA web site

  12. Mapping continental-scale biomass burning and smoke palls from the space shuttle

    NASA Technical Reports Server (NTRS)

    Lulla, Kamlesh; Helfert, Michael

    1992-01-01

    Space shuttle photographs have been used to map the areal extent of Amazonian smoke palls associated with biomass burning. Areas covered with smoke have increased from approximately 300,000 sq km to continental-size smoke palls of approximately 3,000,000 sq km. The smoke palls interpreted from the STS-48 data indicate that this phenomenon is persistent. Astronaut observations of such dynamic and vital environmental phenomena indicate the possibility of integrating the earth observation capabilities of all space platforms in future modeling of the earth's dynamic processes.

  13. Portable chemical detection system with integrated preconcentrator

    DOEpatents

    Baumann, Mark J.; Brusseau, Charles A.; Hannum, David W.; Linker, Kevin L.

    2005-12-27

    A portable system for the detection of chemical particles such as explosive residue utilizes a metal fiber substrate that may either be swiped over a subject or placed in a holder in a collection module which can shoot a jet of gas at the subject to dislodge residue, and then draw the air containing the residue into the substrate. The holder is then placed in a detection module, which resistively heats the substrate to evolve the particles, and provides a gas flow to move the particles to a miniature detector in the module.

  14. Water Leakage Diagnosis in Metro Tunnels by Integration of Laser Point Cloud and Infrared Thermal Imaging

    NASA Astrophysics Data System (ADS)

    Yu, P.; Wu, H.; Liu, C.; Xu, Z.

    2018-04-01

    Diagnosis of water leakage in metro tunnels is of great significance to metro tunnel construction and the safety of metro operation. A method that integrates laser scanning and infrared thermal imaging is proposed for the diagnosis of water leakage. The diagnosis is divided into two parts: extraction of water leakage geometry information and extraction of water leakage attribute information. First, suspected water leakage regions are obtained by threshold segmentation of the tunnel point cloud, and real water leakage is confirmed by auxiliary interpretation of the infrared thermal images. Then, the shape of the isotherm outline is characterized by its Centroid Distance Function to determine the type of water leakage. Similarly, the location of leakage silt and the direction of a crack are calculated from the coordinates of feature points on the Centroid Distance Function. Finally, a section of a metro tunnel in Shanghai was selected as the case area for an experiment, and the results show that the proposed method can diagnose water leakage completely and accurately.
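    The Centroid Distance Function used to characterize an isotherm outline can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation, and the contour data are hypothetical:

```python
import math

def centroid_distance_function(contour):
    """Distance from the contour's centroid to each boundary vertex.

    `contour` is a list of (x, y) points tracing an outline; the resulting
    1-D signature characterizes the outline's shape (nearly constant for a
    round leakage patch, strongly varying for an elongated crack).
    """
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    return [math.hypot(x - cx, y - cy) for x, y in contour]

# Hypothetical outlines: a circular patch and a 4:1 elongated (crack-like) one.
circle = [(math.cos(t / 10), math.sin(t / 10)) for t in range(63)]
crack = [(4 * math.cos(t / 10), math.sin(t / 10)) for t in range(63)]

round_ratio = max(centroid_distance_function(circle)) / min(centroid_distance_function(circle))
crack_ratio = max(centroid_distance_function(crack)) / min(centroid_distance_function(crack))
# round_ratio stays near 1, while crack_ratio is near the aspect ratio 4,
# so a simple max/min ratio on the signature already separates the two shapes.
```

    The max/min ratio is one of several scalar features one could read off the signature; the paper additionally uses feature-point coordinates on the function to locate silt and orient cracks.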

  15. A Study to Determine the Cost Advantage of Establishing an Internal Champus Partnership with Civilian Providers for the Delivery of Mental Health Services in the Catchment Area of the Naval Medical Clinic Annapolis, MD

    DTIC Science & Technology

    1992-06-12

    DoD Instruction 6010.12, it is the policy of DoD that the Partnership Program be utilized to integrate civilian and military health care resources (2...care programs, such as PPO's, as alternative approaches for delivering mental health services due to their cost containment potential (Trauner, 32...government must fill the role of both payer and broker. The CHAMPUS Partnership Program represents an innovative attempt at approaching a system of managed

  16. The Method of Multiple Spatial Planning Basic Map

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Fang, C.

    2018-04-01

    The "Provincial Space Plan Pilot Program" issued in December 2016 called for integrating the existing space management and control information platforms of the various departments into a single spatial planning information management platform that unifies basic data, target indicators, spatial coordinates, and technical specifications. Such a platform would provide decision support for plan preparation, digital monitoring and evaluation of plan implementation, parallel review and approval of investment and construction projects by the departments involved, and improved efficiency of administrative approval. The space planning system should delimit control boundaries for the development of production, living, and ecological space, with use controls enforced. On the one hand, the functional orientation of each kind of planning space must be clarified; on the other hand, "multi-plan compliance" across the various space plans must be achieved. Integrating multiple spatial plans requires a unified, standardized basic map (geographic database and technical specification) to divide territory into urban, agricultural, and ecological space and to provide technical support for refining the space control zoning of the relevant plans. The article analyzes the main technical problems of the spatial datum, land-use classification standards, base map planning, and the basic planning platform, drawing on geographic-conditions census results for the preparation of spatial planning maps and on pilot applications of "multi-plan integration" in Heilongjiang and Hainan.

  17. Management of CAD/CAM information: Key to improved manufacturing productivity

    NASA Technical Reports Server (NTRS)

    Fulton, R. E.; Brainin, J.

    1984-01-01

    A key element to improved industry productivity is effective management of CAD/CAM information. To stimulate advancements in this area, a joint NASA/Navy/industry project designated Integrated Programs for Aerospace-Vehicle Design (IPAD) is underway with the goal of raising aerospace industry productivity through advancement of technology to integrate and manage information involved in the design and manufacturing process. The project complements traditional NASA/DOD research to develop aerospace design technology and the Air Force's Integrated Computer-Aided Manufacturing (ICAM) program to advance CAM technology. IPAD research is guided by an Industry Technical Advisory Board (ITAB) composed of over 100 representatives from aerospace and computer companies.

  18. Integration of system identification and robust controller designs for flexible structures in space

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Lew, Jiann-Shiun

    1990-01-01

    An approach is developed using experimental data to identify a reduced-order model and its model error for a robust controller design. There are three steps involved in the approach. First, an approximately balanced model is identified using the Eigensystem Realization Algorithm (ERA). Second, the model error is calculated and described in the frequency domain in terms of the H(infinity) norm. Third, a pole placement technique in combination with an H(infinity) control method is applied to design a controller for the considered system. A set of experimental data from an existing setup, namely the Mini-Mast system, is used to illustrate and verify the approach.
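    The first step, identification via the Eigensystem Realization Algorithm, can be illustrated with a minimal single-input single-output sketch. This is not the paper's code; the impulse response is a synthetic first-order test signal:

```python
import numpy as np

def era(markov, order, rows=10, cols=10):
    """Minimal Eigensystem Realization Algorithm sketch (SISO, discrete time).

    `markov` holds impulse-response samples h[0], h[1], ...; returns (A, B, C)
    of an order-`order` state-space realization. Illustrative only; in real
    use `order` is chosen from the gap in the Hankel singular values.
    """
    H0 = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :order], s[:order], Vt[:order, :]
    root_s = np.sqrt(s)
    A = (U.T @ H1 @ Vt.T) / np.outer(root_s, root_s)  # S^-1/2 U' H1 V S^-1/2
    B = (root_s[:, None] * Vt)[:, :1]                 # first column of S^1/2 V'
    C = (U * root_s)[:1, :]                           # first row of U S^1/2
    return A, B, C

# Synthetic test signal: impulse response h[k] = 0.9**k of a first-order system.
h = [0.9 ** k for k in range(25)]
A, B, C = era(h, order=1)
# The identified model recovers the pole at 0.9, and C @ B reproduces h[0] = 1.
```

    Because the realization comes from the SVD of the Hankel matrix, the identified model is approximately balanced, which is exactly the property the abstract relies on for the subsequent error bound.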

  19. Close to real life. [solving for transonic flow about lifting airfoils using supercomputers

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Bailey, F. Ron

    1988-01-01

    NASA's Numerical Aerodynamic Simulation (NAS) facility for CFD modeling of highly complex aerodynamic flows employs as its basic hardware two Cray-2s, an ETA-10 Model Q, an Amdahl 5880 mainframe computer that furnishes both support processing and access to 300 Gbytes of disk storage, several minicomputers and superminicomputers, and a Thinking Machines 16,000-device 'connection machine' processor. NAS, which was the first supercomputer facility to standardize operating-system and communication software on all processors, has done important Space Shuttle aerodynamics simulations and will be critical to the configurational refinement of the National Aerospace Plane and its integrated powerplant, which will involve complex, high temperature reactive gasdynamic computations.

  20. Advanced Gas Turbine (AGT) powertrain system initial development report

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The powertrain consists of a single shaft regenerated gas turbine engine utilizing ceramic hot section components, coupled to a split differential gearbox with an available variable stator torque converter and an available Ford integral overdrive four-speed automatic transmission. Predicted fuel economy using gasoline fuel over the combined federal driving cycle (CFDC) is 15.3 km/l, which represents a 59% improvement over the spark-ignition-powered baseline vehicle. Using DF2 fuel, CFDC mileage estimates are 17.43 km/l. Zero to 96.6 km/hr acceleration time is 11.9 seconds with a four-second acceleration distance of 21.0 m. The ceramic radial turbine rotor is discussed along with the control system for the powertrain.

  1. Integrated Systems Biology Approach for Ovarian Cancer Biomarker Discovery — EDRN Public Portal

    Cancer.gov

    The overall objective is to validate serum protein markers for early diagnosis of ovarian cancer with the ultimate goal being to develop a multiparametric panel consisting of 2-4 novel markers with 10 known markers for phase 3 analysis. In phase 1, we will screen for markers able to pass a threshold of 98% specificity and 30% sensitivity in a cohort of 300 women. Markers that pass phase 1 validation will be investigated in a phase 2 PRoBE cohort with a 98% specificity and 70% sensitivity cut-off. Finally, markers that pass phase 2 validation will be evaluated in EDRN CVC laboratory specimens with a cut-off of > 98% specificity and 90% sensitivity.
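    The specificity/sensitivity cut-offs described above amount to simple counts over cases and controls. A toy sketch with hypothetical marker scores (not EDRN data):

```python
def sensitivity_specificity(scores, labels, threshold):
    """Treat score >= threshold as a positive call; labels: 1 = case, 0 = control."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical marker values: 5 cancer cases followed by 10 healthy controls.
scores = [0.90, 0.80, 0.75, 0.40, 0.30,
          0.20, 0.10, 0.15, 0.05, 0.30,
          0.25, 0.10, 0.80, 0.20, 0.10]
labels = [1] * 5 + [0] * 10
sens, spec = sensitivity_specificity(scores, labels, threshold=0.70)
# sens = 3/5 = 0.6 and spec = 9/10 = 0.9: this toy marker clears the 30%
# sensitivity bar but fails the 98% specificity requirement of phase 1.
```

    In practice the threshold itself is tuned on the training cohort so that specificity is pinned at 98%, and sensitivity at that operating point is what each phase compares against its cut-off.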

  2. Book Reviews

    NASA Astrophysics Data System (ADS)

    Horner, Joseph L.

    1987-04-01

    Progress in the fields of integrated optics and fiber optics is continuing at a rapid pace. Recognizing this trend, the goal of the author is to provide an introductory textbook on time-harmonic electromagnetic theory, with an emphasis on optical rather than microwave technologies. The book is appropriate for an upper-level undergraduate or graduate course. Each chapter includes examples and problems. The book focuses on several areas of prime importance to integrated optics. These include dielectric waveguide analysis, coupled-mode theory, Bragg scattering, and prism coupling. There is very little coverage of active components such as electro-optic modulators and switches. The author assumes the reader has a working knowledge of vector calculus and is familiar with Maxwell's equations.

  3. An explicit canopy BRDF model and inversion. [Bidirectional Reflectance Distribution Function

    NASA Technical Reports Server (NTRS)

    Liang, Shunlin; Strahler, Alan H.

    1992-01-01

    Based on a rigorous canopy radiative transfer equation, the multiple scattering radiance is approximated by asymptotic theory, and the single scattering radiance calculation, which requires a numerical integration because the hotspot effect is taken into account, is simplified. A new formulation is presented to obtain a more exact angular dependence of the sky radiance distribution. The unscattered solar radiance and single scattering radiance are calculated exactly, and the multiple scattering is approximated by the delta two-stream atmospheric radiative transfer model. The numerical results show that the parametric canopy model is very accurate, especially when the viewing angles are smaller than 55 deg. The Powell algorithm is used to retrieve biospheric parameters from ground-measured multiangle observations.

  4. Interactive Teaching as a Recruitment and Training Tool for K-12 Science Teachers

    NASA Astrophysics Data System (ADS)

    Rosenberg, J. L.

    2004-12-01

    The Science, Technology, Engineering, and Mathematics Teacher Preparation (STEMTP) program at the University of Colorado has been designed to recruit and train prospective K-12 science teachers while improving student learning through interactive teaching. The program has four key goals: (1) recruit undergraduate students into K-12 science education, (2) provide these prospective teachers with hands-on experience in an interactive teaching pedagogy, (3) create an integrated program designed to support (educationally, socially, and financially) and engage these prospective science teachers up until they obtain licensure and/or their master's degree in education, and (4) improve student learning in large introductory science classes. Currently there are 31 students involved in the program and a total of 72 students have been involved in the year and a half it has been in existence. I will discuss the design of the STEMTP program, the success in recruiting K-12 science teachers, and the effect on student learning in a large lecture class of implementing interactive learning pedagogies by involving these prospective K-12 science teachers. J. L. Rosenberg would like to acknowledge the NSF Astronomy and Astrophysics Fellowship for support for this work. The course transformation project is also supported by grants from the National Science Foundation.

  5. Methodology evaluation: Effects of independent verification and integration on one class of application

    NASA Technical Reports Server (NTRS)

    Page, J.

    1981-01-01

    The effects of an independent verification and integration (V and I) methodology on one class of application are described. Resource profiles are discussed. The development environment is reviewed. Seven measures are presented to test the hypothesis that V and I improve the development and product. The V and I methodology provided: (1) a decrease in requirements ambiguities and misinterpretation; (2) no decrease in design errors; (3) no decrease in the cost of correcting errors; (4) a decrease in the cost of system and acceptance testing; (5) an increase in early discovery of errors; (6) no improvement in the quality of software put into operation; and (7) a decrease in productivity and an increase in cost.

  6. An evaluation of the first four LANDSAT-D thematic mapper reflective sensors for monitoring vegetation: A comparison with other satellite sensor systems

    NASA Technical Reports Server (NTRS)

    Tucker, C. J.

    1978-01-01

    The first four LANDSAT-D thematic mapper sensors were evaluated and compared to: the return beam vidicon (RBV) and multispectral scanner (MSS) sensors from LANDSATS 1, 2, and 3; Colvocoresses' proposed 'operational LANDSAT' three band system; and the French SPOT three band system using simulation/integration techniques and in situ collected spectral reflectance data. Sensors were evaluated by their ability to discriminate vegetation biomass, chlorophyll concentration, and leaf water content. The thematic mapper and SPOT bands were found to be superior in a spectral resolution context to the other three sensor systems for vegetational applications. Significant improvements are expected for most vegetational analyses from LANDSAT-D thematic mapper and SPOT imagery over MSS and RBV imagery.

  7. Data acquisition and path selection decision making for an autonomous roving vehicle. [laser pointing control system for vehicle guidance

    NASA Technical Reports Server (NTRS)

    Shen, C. N.; Yerazunis

    1979-01-01

    The feasibility of using range/pointing angle data such as might be obtained by a laser rangefinder for the purpose of terrain evaluation in the 10-40 meter range on which to base the guidance of an autonomous rover was investigated. The decision procedure of the rapid estimation scheme for the detection of discrete obstacles has been modified to reinforce the detection ability. With the introduction of the logarithmic scanning scheme and obstacle identification scheme, previously developed algorithms are combined to demonstrate the overall performance of the integrated route designation system using the laser rangefinder. In an attempt to cover a greater range, 30 m to 100 m, the problem of estimating gradients in the presence of pointing angle noise at middle range is investigated.

  8. Development of multidisciplinary nanotechnology undergraduate education program at the University of Rochester Integrated Nanosystems Center

    NASA Astrophysics Data System (ADS)

    Lukishova, Svetlana G.; Bigelow, Nicholas P.; D'Alessandris, Paul D.

    2017-08-01

    Supported by a U.S. National Science Foundation educational grant, a coherent educational program in nanoscience and nanoengineering was created at the University of Rochester (UR), based on the resources of the Institute of Optics and the Integrated Nanosystems Center. The main achievements of this program are (1) developing a curriculum and offering the Certificate in Nanoscience and Nanoengineering program (15 students have been awarded the Certificate and approximately 10 other students are working in this direction); (2) creating a reproducible model of collaboration in nanotechnology between a university with state-of-the-art, expensive experimental facilities and a nearby two-year community college (CC), with the participation of the local Monroe Community College (MCC): 52 MCC students carried out two labs at the UR on atomic force microscopy and on photolithography in a clean room; (3) developing reproducible hands-on experiments on nanophotonics ("mini-labs"), learning materials, and pedagogical methods to educate students with diverse backgrounds, including freshmen and non-STEM-major CC students. These mini-labs on nanophotonics were also introduced in some Institute of Optics classes. For the Certificate program, UR students must take three courses: Nanometrology Laboratory (a new course) and two other courses selected from a list of several. Students must also carry out a one-semester research or design project in the field of nanoscience and nanoengineering.

  9. Sensors, Volume 1, Fundamentals and General Aspects

    NASA Astrophysics Data System (ADS)

    Grandke, Thomas; Ko, Wen H.

    1996-12-01

    'Sensors' is the first self-contained series to deal with the whole area of sensors. It describes general aspects, technical and physical fundamentals, construction, function, applications and developments of the various types of sensors. This volume deals with the fundamentals and common principles of sensors and covers the wide areas of principles, technologies, signal processing, and applications. Contents include: Sensor Fundamentals, e.g. Sensor Parameters, Modeling, Design and Packaging; Basic Sensor Technologies, e.g. Thin and Thick Films, Integrated Magnetic Sensors, Optical Fibres and Integrated Optics, Ceramics and Oxides; Sensor Interfaces, e.g. Signal Processing, Multisensor Signal Processing, Smart Sensors, Interface Systems; Sensor Applications, e.g. Automotive: On-board Sensors, Traffic Surveillance and Control, Home Appliances, Environmental Monitoring, etc. This volume is an indispensable reference work and textbook for both specialists and newcomers, researchers and developers.

  10. The unified database for the fixed target experiment BM@N

    NASA Astrophysics Data System (ADS)

    Gertsenberger, K. V.

    2016-09-01

    The article describes the developed database designed as the comprehensive data storage of the fixed target experiment BM@N [1] at the Joint Institute for Nuclear Research (JINR) in Dubna. The structure and purposes of the BM@N facility are briefly presented. The scheme of the unified database and its parameters are described in detail. The use of the BM@N database, implemented on the PostgreSQL database management system (DBMS), allows one to provide user access to the actual information of the experiment. The interfaces developed for access to the database are also presented: one was implemented as a set of C++ classes to access the data without SQL statements, the other is a Web interface available on the Web page of the BM@N experiment.
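    The pattern this abstract describes, a relational run catalogue fronted by accessor classes that hide the SQL, can be sketched in miniature. SQLite stands in for PostgreSQL here, and the table, column, and class names are illustrative assumptions, not the actual BM@N schema:

```python
import sqlite3

# Illustrative, simplified run-metadata table; the real system uses
# PostgreSQL and a richer schema (runs, detector geometries, calibrations).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE run_info (
        run_number  INTEGER PRIMARY KEY,
        beam        TEXT,
        energy_gev  REAL,
        event_count INTEGER
    )""")
conn.execute("INSERT INTO run_info VALUES (101, 'd', 4.0, 250000)")

class RunInfo:
    """Thin accessor that hides SQL from analysis code, in the spirit of
    the C++ interface classes mentioned above (names are hypothetical)."""
    def __init__(self, conn, run_number):
        row = conn.execute(
            "SELECT beam, energy_gev, event_count FROM run_info "
            "WHERE run_number = ?", (run_number,)).fetchone()
        if row is None:
            raise KeyError("run %d not found" % run_number)
        self.beam, self.energy_gev, self.event_count = row

run = RunInfo(conn, 101)
# Analysis code then reads run.beam, run.energy_gev, run.event_count
# without ever composing an SQL statement itself.
```

    Keeping the SQL behind one accessor layer is what lets the same catalogue serve both programmatic (C++/Python) clients and a web front end.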

  11. The Unified Database for BM@N experiment data handling

    NASA Astrophysics Data System (ADS)

    Gertsenberger, Konstantin; Rogachevsky, Oleg

    2018-04-01

    The article describes the developed Unified Database designed as a comprehensive relational data storage for the BM@N experiment at the Joint Institute for Nuclear Research in Dubna. The BM@N experiment, which is one of the main elements of the first stage of the NICA project, is a fixed target experiment at extracted Nuclotron beams of the Laboratory of High Energy Physics (LHEP JINR). The structure and purposes of the BM@N setup are briefly presented. The article considers the scheme of the Unified Database, its attributes and implemented features in detail. The use of the developed BM@N database provides correct multi-user access to actual information of the experiment for data processing. It stores information on the experiment runs, detectors and their geometries, and the different configuration, calibration and algorithm parameters used in offline data processing. User interfaces, an important part of any database, are also presented.

  12. Database usage and performance for the Fermilab Run II experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonham, D.; Box, D.; Gallas, E.

    2004-12-01

    The Run II experiments at Fermilab, CDF and D0, have extensive database needs covering many areas of their online and offline operations. Delivering data to users and processing farms worldwide has represented major challenges to both experiments. The range of applications employing databases includes calibration (conditions), trigger information, run configuration, run quality, luminosity, data management, and others. Oracle is the primary database product being used for these applications at Fermilab, and some of its advanced features have been employed, such as table partitioning and replication. There is also experience with open source database products such as MySQL for secondary databases used, for example, in monitoring. Tools employed for monitoring the operation and diagnosing problems are also described.

  13. On-Line Database of Vibration-Based Damage Detection Experiments

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.; Doebling, Scott W.; Kholwad, Tina D.

    2000-01-01

    This paper describes a new, on-line bibliographic database of vibration-based damage detection experiments. Publications in the database discuss experiments conducted on actual structures as well as those conducted with simulated data. The database can be searched and sorted in many ways, and it provides photographs of test structures when available. It currently contains 100 publications, which is estimated to be about 5-10% of the number of papers written to date on this subject. Additional entries are forthcoming. This database is available for public use on the Internet at the following address: http://sdbpappa-mac.larc.nasa.gov. Click on the link named "dd_experiments.fp3" and then type "guest" as the password. No user name is required.

  14. Database Systems and Oracle: Experiences and Lessons Learned

    ERIC Educational Resources Information Center

    Dunn, Deborah

    2005-01-01

    In a tight job market, IT professionals with database experience are likely to be in great demand. Companies need database personnel who can help improve access to and security of data. The events of September 11 have increased business' awareness of the need for database security, backup, and recovery procedures. It is our responsibility to…

  15. Intelligent communication assistant for databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakobson, G.; Shaked, V.; Rowley, S.

    1983-01-01

    An intelligent communication assistant for databases, called FRED (front end for databases) is explored. FRED is designed to facilitate access to database systems by users of varying levels of experience. FRED is a second generation of natural language front-ends for databases and intends to solve two critical interface problems existing between end-users and databases: connectivity and communication problems. The authors report their experiences in developing software for natural language query processing, dialog control, and knowledge representation, as well as the direction of future work. 10 references.
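A minimal sketch of what a natural-language database front-end does, mapping a restricted class of questions onto SQL, is shown below; the grammar, schema, and data are invented for illustration and are not FRED's.

```python
import re
import sqlite3

# Invented schema and data for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary REAL)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                 [("Ada", "research", 90.0), ("Bob", "sales", 60.0)])

def answer(question):
    """Translate one narrow question pattern into SQL and run it."""
    m = re.match(r"who works in (\w+)", question.lower())
    if m:
        rows = conn.execute(
            "SELECT name FROM employees WHERE dept = ?", (m.group(1),)
        ).fetchall()
        return [r[0] for r in rows]
    # A real front-end would fall back to dialog control here.
    raise ValueError("question not understood")

print(answer("Who works in research?"))
```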

  16. Ethics across the computer science curriculum: privacy modules in an introductory database course.

    PubMed

    Appel, Florence

    2005-10-01

    This paper describes the author's experience of infusing an introductory database course with privacy content, and the on-going project entitled Integrating Ethics Into the Database Curriculum, that evolved from that experience. The project, which has received funding from the National Science Foundation, involves the creation of a set of privacy modules that can be implemented systematically by database educators throughout the database design thread of an undergraduate course.

  17. TRAC-PF1 code verification with data from the OTIS test facility. [Once-Through Integral System]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Childerson, M.T.; Fujita, R.K.

    1985-01-01

    A computer code (TRAC-PF1/MOD1) developed for predicting transient thermal and hydraulic integral nuclear steam supply system (NSSS) response was benchmarked. Post-small-break loss-of-coolant accident (LOCA) data from a scaled experimental facility, designated the Once-Through Integral System (OTIS), were obtained for the Babcock and Wilcox NSSS and compared to TRAC predictions. The OTIS tests provided a challenging small-break LOCA data set for TRAC verification. The major phases of a small-break LOCA observed in the OTIS tests included pressurizer draining and loop saturation, intermittent reactor coolant system circulation, boiler-condenser mode, and the initial stages of refill. The TRAC code was successful in predicting OTIS loop conditions (system pressures and temperatures) after modification of the steam generator model. In particular, the code predicted both pool- and auxiliary-feedwater-initiated boiler-condenser mode heat transfer.

  18. A Computer Program for the Computation of Running Gear Temperatures Using Green's Function

    NASA Technical Reports Server (NTRS)

    Koshigoe, S.; Murdock, J. W.; Akin, L. S.; Townsend, D. P.

    1996-01-01

    A new technique has been developed to study two-dimensional heat transfer problems in gears. This technique consists of transforming the heat equation into a line integral equation with the use of Green's theorem. The equation is then expressed in terms of eigenfunctions that satisfy the Helmholtz equation, and their corresponding eigenvalues, for an arbitrarily shaped region of interest. The eigenfunctions are obtained by solving an integral equation. Once the eigenfunctions are found, the temperature is expanded in terms of the eigenfunctions with unknown time-dependent coefficients that can be solved for by using Runge-Kutta methods. The time integration is extremely efficient; therefore, any changes in the time-dependent coefficients or source terms in the boundary conditions do not impose a great computational burden on the user. The method is demonstrated by applying it to a sample gear tooth. Temperature histories at representative surface locations are given.
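The time-integration step described above can be sketched as follows: each eigenfunction's coefficient obeys a linear ODE driven by its eigenvalue and a source term, advanced here with classical fourth-order Runge-Kutta. The eigenvalue and source below are made-up placeholders, not values from the gear problem.

```python
import math

def rk4_step(f, t, c, h):
    """One classical Runge-Kutta step for dc/dt = f(t, c)."""
    k1 = f(t, c)
    k2 = f(t + h / 2, c + h / 2 * k1)
    k3 = f(t + h / 2, c + h / 2 * k2)
    k4 = f(t + h, c + h * k3)
    return c + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate_mode(lam, source, c0, t_end, n_steps):
    """Advance one modal coefficient c_k(t), dc/dt = -lam*c + source(t)."""
    h, t, c = t_end / n_steps, 0.0, c0
    for _ in range(n_steps):
        c = rk4_step(lambda t, c: -lam * c + source(t), t, c, h)
        t += h
    return c

# Pure decay (zero source) should track exp(-lam * t) very closely.
c = integrate_mode(lam=2.0, source=lambda t: 0.0, c0=1.0, t_end=1.0, n_steps=100)
print(abs(c - math.exp(-2.0)))
```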

  19. Results of tests in the NASA/LaRC 31-inch CFHT on an 0.010-scale model (32-OT) of the space shuttle configuration 3 to determine the RCS jet flowfield interaction effects on aerodynamic characteristics (IA60/0A105), volume 2

    NASA Technical Reports Server (NTRS)

    Thornton, D. E.

    1974-01-01

    Tests were conducted in the NASA Langley Research Center 31-inch continuous-flow hypersonic wind tunnel from 14 February to 22 February 1974 to determine RCS jet interaction effects on the hypersonic aerodynamic and stability and control characteristics prior to RTLS abort separation. The model used was a 0.010-scale replica of the space shuttle vehicle configuration 3. Hypersonic stability data were obtained from tests at Mach 10.3 and a dynamic pressure of 150 psf for the integrated orbiter and external tank and for the orbiter alone. RCS modes of pitch, yaw, and roll at free-flight dynamic pressure simulations of 7, 20, and 50 psf were investigated. The effects of speedbrake, bodyflap, elevon, and aileron deflections were also investigated.

  20. Short Pulse Laser Absorption and Energy Partition at Relativistic Laser Intensities

    NASA Astrophysics Data System (ADS)

    Ping, Yuan

    2005-10-01

    We present the first absorption measurements at laser intensities between 10^17 and 10^20 W/cm^2 using an integrating sphere and a suite of diagnostics that measures scale length, hot electrons, and laser harmonics. A much-enhanced absorption in the regime of relativistic electron heating was observed. Furthermore, we present measurements of the partitioning of absorbed laser energy into thermal and non-thermal electrons when illuminating solid targets from 10^17 to 10^19 W/cm^2. This was measured using a sub-picosecond x-ray streak camera interfaced to a dual-crystal von Hámos spectrograph, a spherical-crystal x-ray imaging spectrometer, an electron spectrometer, and an optical spectrometer. Our data suggest an intensity-dependent energy-coupling transition, with a greater portion of the energy going into non-thermal electrons that rapidly transition to thermal electrons. The details of these experimental results and modeling simulations will be presented.

  1. Longwall Guidance and Control Development

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The longwall guidance and control (G&C) system was evaluated to determine which systems and subsystems lent themselves to automatic control in the mining of coal. The upper coal/shale interface was identified as the reference for a vertical G&C system, with two sensors (the natural background and the sensitized pick) being used to locate and track this boundary. In order to ensure a relatively smooth recession surface (roof and floor of the excavated seam), a last-and-present-cut measuring instrument (acoustic sensor) was used. Potentiometers were used to measure elevations of the shearer arms. The integration of these components comprised the vertical control system (pitch control). Yaw and roll control were incorporated into a face alignment system, which was designed to keep the coal face normal to its external boundaries. Numerous tests, in the laboratory and in the field, have confirmed the feasibility of automatic horizon control as well as of face alignment.

  2. Interaction Effects of Social Isolation and Peripheral Work Position on Risk of Disability Pension: A Prospective Study of Swedish Women and Men.

    PubMed

    Gustafsson, Klas; Marklund, Staffan; Aronsson, Gunnar; Wikman, Anders; Floderus, Birgitta

    2015-01-01

    The study examines various combinations of levels of social isolation in private life and peripheral work position as predictors of disability pension (DP). A second aim was to test the potential interaction effects (above additivity) of social isolation and peripheral work position on the future risk of DP, and to provide results for men and women by age. The study was based on a sample of 45,567 women and men from the Swedish population who had been interviewed between 1992 and 2007. Further information on DP and diagnoses was obtained from the Swedish Social Insurance Agency's database (1993-2011). The studied predictors were related to DP using Cox proportional hazards regression. The analyses were stratified by sex and age (20-39 years, 40-64 years), with control for selected confounders. Increased risks of DP were found for most combinations of social isolation and peripheral work position in all strata. The hazard ratios (HRs) for joint exposure to a high degree of social isolation and a peripheral work position were particularly strong among men aged 20-39 (HR 5.70; 95% CI 3.74-8.69) and women aged 20-39 (HR 4.07; 95% CI 2.99-5.56). An interaction effect from combined exposure was found for women in both age groups, as well as a tendency in the same direction among young men. However, after confounder control the effects did not reach significance. Individuals who were socially isolated and in a peripheral work position had an increased risk of future DP. The fact that an interaction effect was found among women indicates that a combination of social isolation and peripheral work position may reinforce adverse health effects. There was no evidence that a peripheral work position can be compensated by a high degree of social integration in private life.

  3. Planform: an application and database of graph-encoded planarian regenerative experiments.

    PubMed

    Lobo, Daniel; Malone, Taylor J; Levin, Michael

    2013-04-15

    Understanding the mechanisms governing the regeneration capabilities of many organisms is a fundamental interest in biology and medicine. An ever-increasing number of manipulation and molecular experiments are attempting to discover a comprehensive model for regeneration, with the planarian flatworm being one of the most important model species. Despite much effort, no comprehensive, constructive, mechanistic models exist yet, and it is now clear that computational tools are needed to mine this huge dataset. However, until now there has been no database of regenerative experiments, and the current genotype-phenotype ontologies and databases are based on textual descriptions, which are not understandable by computers. To overcome these difficulties, we present here Planform (Planarian formalization), a manually curated database and software tool for planarian regenerative experiments, based on a mathematical graph formalism. The database contains more than a thousand experiments from the main publications in the planarian literature. The software tool provides the user with a graphical interface to easily interact with and mine the database. The presented system is a valuable resource for the regeneration community and, more importantly, will pave the way for the application of novel artificial intelligence tools to extract knowledge from this dataset. The database and software tool are freely available at http://planform.daniel-lobo.com.
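The idea of encoding an experiment as a graph rather than as free text can be illustrated with a minimal sketch; the region names, links, and operations below are invented and far simpler than the actual Planform formalism.

```python
# A morphology encoded as regions (nodes) and adjacency links (edges),
# so that a program, not just a human reader, can interpret it.
morphology = {
    "regions": {"head": {"organs": ["eyes", "brain"]},
                "trunk": {"organs": ["pharynx"]},
                "tail": {"organs": []}},
    "links": [("head", "trunk"), ("trunk", "tail")],
}

def regions_adjacent(m, a, b):
    """True if two regions touch in the encoded morphology."""
    return (a, b) in m["links"] or (b, a) in m["links"]

def amputate(m, region):
    """Record an amputation experiment as a graph transformation."""
    kept = {r: v for r, v in m["regions"].items() if r != region}
    links = [(a, b) for a, b in m["links"] if region not in (a, b)]
    return {"regions": kept, "links": links}

fragment = amputate(morphology, "head")
print(sorted(fragment["regions"]))
```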

  4. Integrated Controlling System and Unified Database for High Throughput Protein Crystallography Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaponov, Yu.A.; Igarashi, N.; Hiraki, M.

    2004-05-12

    An integrated controlling system and a unified database for high-throughput protein crystallography experiments have been developed. The main stages of protein crystallography experiments (purification, crystallization, crystal harvesting, data collection, data processing) were integrated into the software under development. All information necessary to perform protein crystallography experiments is stored in a MySQL relational database (except raw X-ray data, which are stored on a central data server). The database contains four mutually linked hierarchical trees describing protein crystals, data collection of protein crystals, and experimental data processing. A database editor was designed and developed. The editor supports basic database functions to view, create, modify, and delete user records in the database. Two search engines were implemented: direct search of necessary information in the database and object-oriented search. The system is based on TCP/IP secure UNIX sockets with four predefined sending and receiving behaviors, which support communications between all connected servers and clients with remote control functions (creating and modifying data for experimental conditions, data acquisition, viewing experimental data, and performing data processing). Two secure login schemes were designed and developed: a direct method (using the developed Linux clients with a secure connection) and an indirect method (using a secure SSL connection with secure X11 support from any operating system with X-terminal and SSH support). Part of the system has been implemented on a new MAD beamline, NW12, at the Photon Factory Advanced Ring for general user experiments.
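The mutually linked hierarchical tables described above can be sketched with Python's sqlite3 standing in for the MySQL server; the schema below (crystal → data collection → processing) is a simplified invention, not the actual database design.

```python
import sqlite3

# Three linked tables forming a hierarchy, navigated with a single join.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE crystal    (id INTEGER PRIMARY KEY, protein TEXT);
    CREATE TABLE collection (id INTEGER PRIMARY KEY,
                             crystal_id INTEGER REFERENCES crystal(id),
                             beamline TEXT);
    CREATE TABLE processing (id INTEGER PRIMARY KEY,
                             collection_id INTEGER REFERENCES collection(id),
                             resolution REAL);
    INSERT INTO crystal    VALUES (1, 'lysozyme');
    INSERT INTO collection VALUES (1, 1, 'NW12');
    INSERT INTO processing VALUES (1, 1, 1.8);
""")

# Walk the hierarchy from protein to processing result.
row = conn.execute("""
    SELECT c.protein, col.beamline, p.resolution
    FROM crystal c
    JOIN collection col ON col.crystal_id = c.id
    JOIN processing p   ON p.collection_id = col.id
""").fetchone()
print(row)
```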

  5. How Do You Like Your Science, Wet or Dry? How Two Lab Experiences Influence Student Understanding of Science Concepts and Perceptions of Authentic Scientific Practice

    PubMed Central

    Munn, Maureen; Knuth, Randy; Van Horne, Katie; Shouse, Andrew W.; Levias, Sheldon

    2017-01-01

    This study examines how two kinds of authentic research experiences related to smoking behavior—genotyping human DNA (wet lab) and using a database to test hypotheses about factors that affect smoking behavior (dry lab)—influence students’ perceptions and understanding of scientific research and related science concepts. The study used pre and post surveys and a focus group protocol to compare students who conducted the research experiences in one of two sequences: genotyping before database and database before genotyping. Students rated the genotyping experiment to be more like real science than the database experiment, in spite of the fact that they associated more scientific tasks with the database experience than genotyping. Independent of the order of completing the labs, students showed gains in their understanding of science concepts after completion of the two experiences. There was little change in students’ attitudes toward science pre to post, as measured by the Scientific Attitude Inventory II. However, on the basis of their responses during focus groups, students developed more sophisticated views about the practices and nature of science after they had completed both research experiences, independent of the order in which they experienced them. PMID:28572181

  6. The Fabric for Frontier Experiments Project at Fermilab

    NASA Astrophysics Data System (ADS)

    Kirby, Michael

    2014-06-01

    The FabrIc for Frontier Experiments (FIFE) project is a new, far-reaching initiative within the Fermilab Scientific Computing Division to drive the future of computing services for experiments at FNAL and elsewhere. It is a collaborative effort between computing professionals and experiment scientists to produce an end-to-end, fully integrated set of services for computing on the grid and clouds, managing data, accessing databases, and collaborating within experiments. FIFE includes 1) easy to use job submission services for processing physics tasks on the Open Science Grid and elsewhere; 2) an extensive data management system for managing local and remote caches, cataloging, querying, moving, and tracking the use of data; 3) custom and generic database applications for calibrations, beam information, and other purposes; 4) collaboration tools including an electronic log book, speakers bureau database, and experiment membership database. All of these aspects will be discussed in detail. FIFE sets the direction of computing at Fermilab experiments now and in the future, and therefore is a major driver in the design of computing services worldwide.

  7. CB Database: A change blindness database for objects in natural indoor scenes.

    PubMed

    Sareen, Preeti; Ehinger, Krista A; Wolfe, Jeremy M

    2016-12-01

    Change blindness has been a topic of interest in the cognitive sciences for decades. Change detection experiments are frequently used for studying various research topics such as attention and perception. However, creating change detection stimuli is tedious, and there is no open repository of such stimuli using natural scenes. We introduce the Change Blindness (CB) Database with object changes in 130 colored images of natural indoor scenes. The size and eccentricity of each change are provided, as well as reaction-time data from a baseline experiment. In addition, we have two specialized satellite databases that are subsets of the 130 images. In one set, changes are seen in rooms or in mirrors in those rooms (Mirror Change Database). In the other, changes occur in a room or out a window (Window Change Database). Both sets have controlled background, change size, and eccentricity. The CB Database is intended to provide researchers with a stimulus set of natural scenes with defined stimulus parameters that can be used for a wide range of experiments. The CB Database can be found at http://search.bwh.harvard.edu/new/CBDatabase.html.

  8. Enhanced DIII-D Data Management Through a Relational Database

    NASA Astrophysics Data System (ADS)

    Burruss, J. R.; Peng, Q.; Schachter, J.; Schissel, D. P.; Terpstra, T. B.

    2000-10-01

    A relational database is being used to serve data about DIII-D experiments. The database is optimized for queries across multiple shots, allowing rapid data mining by SQL-literate researchers. The relational database relates different experiments and datasets, thus providing a big picture of DIII-D operations. Users are encouraged to add their own tables to the database. Summary physics quantities about DIII-D discharges are collected and stored in the database automatically. Metadata about code runs, MDSplus usage, and visualization tool usage are collected, stored in the database, and later analyzed to improve computing. The database may be accessed through programming languages such as C, Java, and IDL, or through ODBC-compliant applications such as Excel and Access. A database-driven web page also provides a convenient means of viewing database quantities through the World Wide Web. Demonstrations will be given at the poster.
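The cross-shot query style described above might look like the following sketch, with sqlite3 standing in for the production database; the summary table and its columns are hypothetical, not the actual DIII-D schema.

```python
import sqlite3

# Hypothetical per-shot summary table of physics quantities.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE shot_summary (shot INTEGER, ip_mamps REAL, h_mode INTEGER)"
)
conn.executemany("INSERT INTO shot_summary VALUES (?, ?, ?)",
                 [(1000, 1.2, 1), (1001, 0.8, 0), (1002, 1.5, 1)])

# Data mining across many shots is one SQL statement rather than a
# loop over per-shot files.
shots = [s for (s,) in conn.execute(
    "SELECT shot FROM shot_summary "
    "WHERE h_mode = 1 AND ip_mamps > 1.0 ORDER BY shot"
)]
print(shots)
```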

  9. In search of the emotional face: anger versus happiness superiority in visual search.

    PubMed

    Savage, Ruth A; Lipp, Ottmar V; Craig, Belinda M; Becker, Stefanie I; Horstmann, Gernot

    2013-08-01

    Previous research has provided inconsistent results regarding visual search for emotional faces, yielding evidence for either anger superiority (i.e., more efficient search for angry faces) or happiness superiority effects (i.e., more efficient search for happy faces), suggesting that these results do not reflect on emotional expression, but on emotion (un-)related low-level perceptual features. The present study investigated possible factors mediating anger/happiness superiority effects; specifically search strategy (fixed vs. variable target search; Experiment 1), stimulus choice (Nimstim database vs. Ekman & Friesen database; Experiments 1 and 2), and emotional intensity (Experiment 3 and 3a). Angry faces were found faster than happy faces regardless of search strategy using faces from the Nimstim database (Experiment 1). By contrast, a happiness superiority effect was evident in Experiment 2 when using faces from the Ekman and Friesen database. Experiment 3 employed angry, happy, and exuberant expressions (Nimstim database) and yielded anger and happiness superiority effects, respectively, highlighting the importance of the choice of stimulus materials. Ratings of the stimulus materials collected in Experiment 3a indicate that differences in perceived emotional intensity, pleasantness, or arousal do not account for differences in search efficiency. Across three studies, the current investigation indicates that prior reports of anger or happiness superiority effects in visual search are likely to reflect on low-level visual features associated with the stimulus materials used, rather than on emotion. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  10. Evolution of Database Replication Technologies for WLCG

    NASA Astrophysics Data System (ADS)

    Baranowski, Zbigniew; Lobato Pardavila, Lorena; Blaszczyk, Marcin; Dimitrov, Gancho; Canali, Luca

    2015-12-01

    In this article we summarize several years of experience with database replication technologies used at WLCG and provide a short review of the available Oracle technologies and their key characteristics. One of the notable changes and improvements in this area in the recent past has been the introduction of Oracle GoldenGate as a replacement for Oracle Streams. We report on the preparation and later upgrades for remote replication done in collaboration with ATLAS and Tier 1 database administrators, including the experience of running Oracle GoldenGate in production. Moreover, we report on another key technology in this area: Oracle Active Data Guard, which has been adopted in several of the mission-critical use cases for database replication between the online and offline databases of the LHC experiments.

  11. Database for propagation models

    NASA Astrophysics Data System (ADS)

    Kantak, Anil V.

    1991-07-01

    A propagation researcher or a systems engineer who intends to use the results of a propagation experiment is generally faced with various database tasks, such as selecting the computer software and hardware and writing the programs to pass the data through the models of interest. This task is repeated every time a new experiment is conducted or the same experiment is carried out at a different location, generating different data. Thus the users of these data have to spend a considerable portion of their time learning how to implement the computer hardware and software towards the desired end. This situation could be eased considerably if an easily accessible propagation database were created that held all the accepted (standardized) propagation phenomena models approved by the propagation research community. The handling of data would also become easier for the user. Such a database can only stimulate the growth of propagation research if it is available to all researchers, so that the results of an experiment conducted by one researcher can be examined independently by another without different hardware and software being used. The database may be made flexible so that researchers need not be confined to its contents. Another way in which the database may help researchers is that they will not have to document the software and hardware tools used in their research, since the propagation research community will already know the database. The following sections show a possible database construction, as well as properties of the database for propagation research.

  12. 77 FR 6535 - Notice of Intent To Seek Approval To Collect Information

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-08

    ... information from participants: Contact information, affiliation, and database searching experience... and fax numbers, and email address. Six questions are asked regarding: database searching experience...

  13. Workstation Analytics in Distributed Warfighting Experimentation: Results from Coalition Attack Guidance Experiment 3A

    DTIC Science & Technology

    2014-06-01

    central location. Each of the SQLite databases is converted and stored in one MySQL database and the pcap files are parsed to extract call information...from the specific communications applications used during the experiment. This extracted data is then stored in the same MySQL database. With all...rhythm of the event. Figure 3 demonstrates the application usage over the course of the experiment for the EXDIR. As seen, the EXDIR spent the majority
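The consolidation step described in the snippet, merging per-workstation SQLite files into one central database, can be sketched as follows, with in-memory sqlite3 databases standing in for both the node files and the central MySQL server, and an invented event schema.

```python
import sqlite3

def make_node_db(events):
    """Stand-in for one workstation's SQLite logging database."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE app_usage (ts REAL, app TEXT)")
    db.executemany("INSERT INTO app_usage VALUES (?, ?)", events)
    return db

nodes = [make_node_db([(1.0, "chat"), (2.0, "map")]),
         make_node_db([(1.5, "chat")])]

# Central store keeps a node identifier alongside each merged row.
central = sqlite3.connect(":memory:")
central.execute("CREATE TABLE app_usage (node INTEGER, ts REAL, app TEXT)")
for node_id, db in enumerate(nodes):
    rows = db.execute("SELECT ts, app FROM app_usage").fetchall()
    central.executemany("INSERT INTO app_usage VALUES (?, ?, ?)",
                        [(node_id, ts, app) for ts, app in rows])

total = central.execute("SELECT COUNT(*) FROM app_usage").fetchone()[0]
print(total)
```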

  14. The Fabric for Frontier Experiments Project at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirby, Michael

    2014-01-01

    The FabrIc for Frontier Experiments (FIFE) project is a new, far-reaching initiative within the Fermilab Scientific Computing Division to drive the future of computing services for experiments at FNAL and elsewhere. It is a collaborative effort between computing professionals and experiment scientists to produce an end-to-end, fully integrated set of services for computing on the grid and clouds, managing data, accessing databases, and collaborating within experiments. FIFE includes 1) easy to use job submission services for processing physics tasks on the Open Science Grid and elsewhere, 2) an extensive data management system for managing local and remote caches, cataloging, querying, moving, and tracking the use of data, 3) custom and generic database applications for calibrations, beam information, and other purposes, 4) collaboration tools including an electronic log book, speakers bureau database, and experiment membership database. All of these aspects will be discussed in detail. FIFE sets the direction of computing at Fermilab experiments now and in the future, and therefore is a major driver in the design of computing services worldwide.

  15. A Web-Based Multi-Database System Supporting Distributed Collaborative Management and Sharing of Microarray Experiment Information

    PubMed Central

    Burgarella, Sarah; Cattaneo, Dario; Masseroli, Marco

    2006-01-01

    We developed MicroGen, a multi-database Web based system for managing all the information characterizing spotted microarray experiments. It supports information gathering and storing according to the Minimum Information About Microarray Experiments (MIAME) standard. It also allows easy sharing of information and data among all multidisciplinary actors involved in spotted microarray experiments. PMID:17238488

  16. Development of Speckle Interferometry Algorithm and System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shamsir, A. A. M.; Jafri, M. Z. M.; Lim, H. S.

    2011-05-25

    Electronic speckle pattern interferometry (ESPI) is a whole-field, non-destructive measurement method widely used in industry, for example in the detection of defects on metal bodies, the detection of defects in integrated circuits in digital electronic components, and in the preservation of priceless artwork. In this research field, the method is widely used to develop algorithms and to develop new laboratory setups for implementing speckle pattern interferometry. In speckle interferometry, an optically rough test surface is illuminated with an expanded laser beam, creating a laser speckle pattern in the space surrounding the illuminated region. The speckle pattern is optically mixed with a second coherent light field that is either another speckle pattern or a smooth light field. This produces an interferometric speckle pattern that is detected by a sensor to track the change of the speckle pattern due to the applied force. In this project, an experimental setup of ESPI is proposed to analyze a stainless steel plate using a 632.8 nm (red) laser.

  17. Computer considerations for real time simulation of a generalized rotor model

    NASA Technical Reports Server (NTRS)

    Howe, R. M.; Fogarty, L. E.

    1977-01-01

    Scaled equations were developed to meet requirements for real-time computer simulation of the rotor systems research aircraft. These equations form the basis for consideration of both digital and hybrid mechanization for real-time simulation. For all-digital simulation, estimates of the required speed in terms of equivalent operations per second are developed based on the complexity of the equations and the required integration frame rates. For both conventional hybrid simulation and hybrid simulation using time-shared analog elements, the amount of required equipment is estimated, along with a consideration of the dynamic errors. Conventional hybrid mechanization using analog simulation of those rotor equations which involve rotor-spin frequencies (this constitutes the bulk of the equations) requires too much analog equipment. Hybrid simulation using time-sharing techniques for the analog elements appears possible with a reasonable amount of analog equipment. All-digital simulation with affordable general-purpose computers is not possible because of speed limitations, but specially configured digital computers do have the required speed and constitute the recommended approach.
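The sizing argument for all-digital simulation reduces to multiplying the operation count per integration frame by the frame rate; the numbers below are invented placeholders, not the report's figures.

```python
# Back-of-envelope sketch: required speed = (operations per integration
# frame) x (integration frames per second). Both numbers are invented.
ops_per_frame = 12_000   # operations to evaluate the rotor equations once
frame_rate_hz = 400      # frame rate needed to resolve rotor-spin dynamics
required_ops_per_sec = ops_per_frame * frame_rate_hz
print(required_ops_per_sec)
```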

  18. Time-Critical Database Conditions Data-Handling for the CMS Experiment

    NASA Astrophysics Data System (ADS)

    De Gruttola, Michele; Di Guida, Salvatore; Innocente, Vincenzo; Pierro, Antonio

    2011-08-01

    Automatic, synchronous, and reliable population of the condition database is critical for the correct operation of the online selection as well as of the offline reconstruction and data analysis. We describe here the system put in place in the CMS experiment to automate the processes that populate the database centrally and make condition data promptly available, both online for the high-level trigger and offline for reconstruction. The data are "dropped" by the users into a dedicated service, which synchronizes them and takes care of writing them into the online database. They are then automatically streamed to the offline database, and hence are immediately accessible offline worldwide. This mechanism was used intensively during 2008 and 2009 operation with cosmic-ray challenges and first LHC collision data, and many improvements have been made since. The experience of these first years of operation is discussed in detail.

  19. How Do You Like Your Science, Wet or Dry? How Two Lab Experiences Influence Student Understanding of Science Concepts and Perceptions of Authentic Scientific Practice.

    PubMed

    Munn, Maureen; Knuth, Randy; Van Horne, Katie; Shouse, Andrew W; Levias, Sheldon

    2017-01-01

    This study examines how two kinds of authentic research experiences related to smoking behavior-genotyping human DNA (wet lab) and using a database to test hypotheses about factors that affect smoking behavior (dry lab)-influence students' perceptions and understanding of scientific research and related science concepts. The study used pre and post surveys and a focus group protocol to compare students who conducted the research experiences in one of two sequences: genotyping before database and database before genotyping. Students rated the genotyping experiment to be more like real science than the database experiment, in spite of the fact that they associated more scientific tasks with the database experience than genotyping. Independent of the order of completing the labs, students showed gains in their understanding of science concepts after completion of the two experiences. There was little change in students' attitudes toward science pre to post, as measured by the Scientific Attitude Inventory II. However, on the basis of their responses during focus groups, students developed more sophisticated views about the practices and nature of science after they had completed both research experiences, independent of the order in which they experienced them. © 2017 M. Munn et al. CBE—Life Sciences Education © 2017 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  20. Transition Documentation on a Three-Element High-Lift Configuration at High Reynolds Numbers--Database. [conducted in the Langley Low Turbulence Pressure Tunnel

    NASA Technical Reports Server (NTRS)

    Bertelrud, Arild; Johnson, Sherylene; Anders, J. B. (Technical Monitor)

    2002-01-01

    A 2-D (two-dimensional) high-lift system experiment was conducted in August of 1996 in the Low Turbulence Pressure Tunnel at NASA Langley Research Center, Hampton, VA. The purpose of the experiment was to obtain transition measurements on a three-element high-lift system for CFD (computational fluid dynamics) code validation studies. A transition database has been created using the data from this experiment. The present report details how the hot-film data and the related pressure data are organized in the database. Data processing codes to access the data in an efficient and reliable manner are described, and limited examples are given on how to access the database and store acquired information.

  1. Alternatives to relational databases in precision medicine: Comparison of NoSQL approaches for big data storage using supercomputers

    NASA Astrophysics Data System (ADS)

    Velazquez, Enrique Israel

    Improvements in medical and genomic technologies have dramatically increased the production of electronic data over the last decade. As a result, data management is rapidly becoming a major determinant, and urgent challenge, for the development of Precision Medicine. Although successful data management is achievable using Relational Database Management Systems (RDBMS), exponential data growth is a significant contributor to failure scenarios. Growing amounts of data can also be observed in other sectors, such as economics and business, which, together with the previous facts, suggests that alternative database approaches (NoSQL) may soon be required for efficient storage and management of big databases. However, this hypothesis has been difficult to test in the Precision Medicine field, since alternative database architectures are complex to assess and means to integrate heterogeneous electronic health records (EHR) with dynamic genomic data are not easily available. In this dissertation, we present a novel set of experiments for identifying NoSQL database approaches that enable effective data storage and management in Precision Medicine using patients' clinical and genomic information from The Cancer Genome Atlas (TCGA). The first experiment draws on performance and scalability from biologically meaningful queries with differing complexity and database sizes. The second experiment measures performance and scalability in database updates without schema changes. The third experiment assesses performance and scalability in database updates with schema modifications due to dynamic data. We have identified two NoSQL approaches, based on Cassandra and Redis, which seem to be the ideal database management systems for our precision medicine queries in terms of performance and scalability. We present NoSQL approaches and show how they can be used to manage clinical and genomic big data. Our research is relevant to public health since we are focusing on one of the main challenges to the development of Precision Medicine and, consequently, investigating a potential solution to the progressively increasing demands on health care.
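
    The two storage models named above differ mainly in how a patient/variant record is keyed. The following toy sketch contrasts a Cassandra-style wide row (one partition per patient, one column per variant) with a Redis-style flat key-value index; the patient IDs, variant names and genotypes are invented for illustration and are not drawn from the TCGA schema.

```python
# Cassandra-style wide row: one partition per patient, one column per variant.
cassandra_like = {
    "patient-001": {"TP53:c.215C>G": "0/1", "BRCA1:c.68_69del": "0/0"},
    "patient-002": {"TP53:c.215C>G": "1/1"},
}

# Redis-style key-value index: a flat key encodes patient and variant.
redis_like = {
    "patient-001:TP53:c.215C>G": "0/1",
    "patient-001:BRCA1:c.68_69del": "0/0",
    "patient-002:TP53:c.215C>G": "1/1",
}

def carriers(variant):
    """Wide-row query: which patients carry at least one alternate allele?"""
    return sorted(p for p, row in cassandra_like.items()
                  if row.get(variant, "0/0") != "0/0")

print(carriers("TP53:c.215C>G"))  # ['patient-001', 'patient-002']
```

    The wide-row layout favors "all variants for one patient" scans, while the flat index favors direct single-genotype lookups; which is "ideal" depends on the query mix, which is what the dissertation's experiments measure.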

  2. Visualizing the semantic content of large text databases using text maps

    NASA Technical Reports Server (NTRS)

    Combs, Nathan

    1993-01-01

    A methodology for generating text map representations of the semantic content of text databases is presented. Text maps provide a graphical metaphor for conceptualizing and visualizing the contents and data interrelationships of large text databases. Described are a set of experiments conducted against the TIPSTER corpora of Wall Street Journal articles. These experiments provide an introduction to current work in the representation and visualization of documents by way of their semantic content.

  3. Development of the geometry database for the CBM experiment

    NASA Astrophysics Data System (ADS)

    Akishina, E. P.; Alexandrov, E. I.; Alexandrov, I. N.; Filozova, I. A.; Friese, V.; Ivanov, V. V.

    2018-01-01

    The paper describes the current state of the Geometry Database (Geometry DB) for the CBM experiment. The main purpose of this database is to provide convenient tools for: (1) managing the geometry modules; (2) assembling various versions of the CBM setup as a combination of geometry modules and additional files. The CBM users of the Geometry DB may use both GUI (Graphical User Interface) and API (Application Programming Interface) tools for working with it.

  4. A reference system for animal biometrics: application to the northern leopard frog

    USGS Publications Warehouse

    Petrovska-Delacretaz, D.; Edwards, A.; Chiasson, J.; Chollet, G.; Pilliod, D.S.

    2014-01-01

    Reference systems and public databases are available for human biometrics, but to our knowledge nothing is available for animal biometrics. This is surprising because animals are not required to give their agreement to be in a database. This paper proposes a reference system and database for the northern leopard frog (Lithobates pipiens). Both are available for reproducible experiments. Results of both open set and closed set experiments are given.

  5. Reflecting on the challenges of building a rich interconnected metadata database to describe the experiments of phase six of the coupled climate model intercomparison project (CMIP6) for the Earth System Documentation Project (ES-DOC) and anticipating the opportunities that tooling and services based on rich metadata can provide.

    NASA Astrophysics Data System (ADS)

    Pascoe, C. L.

    2017-12-01

    The Coupled Model Intercomparison Project (CMIP) has coordinated climate model experiments involving multiple international modelling teams since 1995. This has led to a better understanding of past, present, and future climate. The 2017 sixth phase of the CMIP process (CMIP6) consists of a suite of common experiments, and 21 separate CMIP-Endorsed Model Intercomparison Projects (MIPs), making a total of 244 separate experiments. Precise descriptions of the suite of CMIP6 experiments have been captured in a Common Information Model (CIM) database by the Earth System Documentation Project (ES-DOC). The database contains descriptions of forcings, model configuration requirements, ensemble information and citation links, as well as text descriptions and information about the rationale for each experiment. The database was built from statements about the experiments found in the academic literature, the MIP submissions to the World Climate Research Programme (WCRP), WCRP summary tables, and correspondence with the principal investigators for each MIP. The database was collated using spreadsheets, which are archived in the ES-DOC Github repository and then rendered on the ES-DOC website. A diagrammatic view of the workflow of building the database of experiment metadata for CMIP6 is shown in the attached figure. The CIM provides the formalism to collect detailed information from diverse sources in a standard way across all the CMIP6 MIPs. The ES-DOC documentation acts as a unified reference for CMIP6 information to be used both by data producers and consumers. This is especially important given the federated nature of the CMIP6 project. Because the CIM allows forcing constraints and other experiment attributes to be referred to by more than one experiment, we can streamline the process of collecting information from modelling groups about how they set up their models for each experiment. End users of the climate model archive will be able to ask questions enabled by the interconnectedness of the metadata, such as "Which MIPs make use of experiment A?" and "Which experiments use forcing constraint B?".

  6. EVLncRNAs: a manually curated database for long non-coding RNAs validated by low-throughput experiments.

    PubMed

    Zhou, Bailing; Zhao, Huiying; Yu, Jiafeng; Guo, Chengang; Dou, Xianghua; Song, Feng; Hu, Guodong; Cao, Zanxia; Qu, Yuanxu; Yang, Yuedong; Zhou, Yaoqi; Wang, Jihua

    2018-01-04

    Long non-coding RNAs (lncRNAs) play important functional roles in various biological processes. Early databases were utilized to deposit all lncRNA candidates produced by high-throughput experimental and/or computational techniques to facilitate classification, assessment and validation. As more lncRNAs are validated by low-throughput experiments, several databases were established for experimentally validated lncRNAs. However, these databases are small in scale (containing only a few hundred lncRNAs) and specific in their focuses (plants, diseases or interactions). Thus, it is highly desirable to have a comprehensive dataset for experimentally validated lncRNAs as a central repository for all of their structures, functions and phenotypes. Here, we established EVLncRNAs by curating lncRNAs validated by low-throughput experiments (up to 1 May 2016) and integrating specific databases (lncRNAdb, LncRNADisease, Lnc2Cancer and PLNIncRBase) with additional functional and disease-specific information not covered previously. The current version of EVLncRNAs contains 1543 lncRNAs from 77 species, which is 2.9 times larger than the current largest database for experimentally validated lncRNAs. Seventy-four percent of the lncRNA entries are partially or completely new compared with all existing experimentally validated databases. The established database allows users to browse, search and download as well as to submit experimentally validated lncRNAs. The database is available at http://biophy.dzu.edu.cn/EVLncRNAs. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  7. EVLncRNAs: a manually curated database for long non-coding RNAs validated by low-throughput experiments

    PubMed Central

    Zhao, Huiying; Yu, Jiafeng; Guo, Chengang; Dou, Xianghua; Song, Feng; Hu, Guodong; Cao, Zanxia; Qu, Yuanxu

    2018-01-01

    Abstract Long non-coding RNAs (lncRNAs) play important functional roles in various biological processes. Early databases were utilized to deposit all lncRNA candidates produced by high-throughput experimental and/or computational techniques to facilitate classification, assessment and validation. As more lncRNAs are validated by low-throughput experiments, several databases were established for experimentally validated lncRNAs. However, these databases are small in scale (containing only a few hundred lncRNAs) and specific in their focuses (plants, diseases or interactions). Thus, it is highly desirable to have a comprehensive dataset for experimentally validated lncRNAs as a central repository for all of their structures, functions and phenotypes. Here, we established EVLncRNAs by curating lncRNAs validated by low-throughput experiments (up to 1 May 2016) and integrating specific databases (lncRNAdb, LncRNADisease, Lnc2Cancer and PLNIncRBase) with additional functional and disease-specific information not covered previously. The current version of EVLncRNAs contains 1543 lncRNAs from 77 species, which is 2.9 times larger than the current largest database for experimentally validated lncRNAs. Seventy-four percent of the lncRNA entries are partially or completely new compared with all existing experimentally validated databases. The established database allows users to browse, search and download as well as to submit experimentally validated lncRNAs. The database is available at http://biophy.dzu.edu.cn/EVLncRNAs. PMID:28985416

  8. Experimental Database with Baseline CFD Solutions: 2-D and Axisymmetric Hypersonic Shock-Wave/Turbulent-Boundary-Layer Interactions

    NASA Technical Reports Server (NTRS)

    Marvin, Joseph G.; Brown, James L.; Gnoffo, Peter A.

    2013-01-01

    A database compilation of hypersonic shock-wave/turbulent boundary layer experiments is provided. The experiments selected for the database are either 2D or axisymmetric, and include both compression-corner and impinging-type SWTBL interactions. The strength of the interactions ranges from attached, to incipient separation, to fully separated flows. The experiments were chosen based on criteria that ensure the quality of the datasets, their relevance to NASA's missions, and their usefulness for validation and uncertainty assessment of CFD Navier-Stokes predictive methods, both now and in the future. The emphasis in the selected datasets is on surface pressures and surface heating throughout the interaction, but they also include some wall shear stress distributions and flowfield profiles. Included, for selected cases, are example CFD grids and setup information, along with surface pressure and wall heating results from simulations using current NASA real-gas Navier-Stokes codes, by which future CFD investigators can compare and evaluate physics modeling improvements and validation and uncertainty assessments of future CFD code developments. The experimental database is presented in tabulated form in the Appendices describing each experiment. The database is also provided as computer-readable ASCII files located on a companion DVD.

  9. Applying AN Object-Oriented Database Model to a Scientific Database Problem: Managing Experimental Data at Cebaf.

    NASA Astrophysics Data System (ADS)

    Ehlmann, Bryon K.

    Current scientific experiments are often characterized by massive amounts of very complex data and the need for complex data analysis software. Object-oriented database (OODB) systems have the potential of improving the description of the structure and semantics of this data and of integrating the analysis software with the data. This dissertation results from research to enhance OODB functionality and methodology to support scientific databases (SDBs) and, more specifically, to support a nuclear physics experiments database for the Continuous Electron Beam Accelerator Facility (CEBAF). This research to date has identified a number of problems related to the practical application of OODB technology to the conceptual design of the CEBAF experiments database and other SDBs: the lack of a generally accepted OODB design methodology, the lack of a standard OODB model, the lack of a clear conceptual level in existing OODB models, and the limited support in existing OODB systems for many common object relationships inherent in SDBs. To address these problems, the dissertation describes an Object-Relationship Diagram (ORD) and an Object-oriented Database Definition Language (ODDL) that provide tools that allow SDB design and development to proceed systematically and independently of existing OODB systems. These tools define multi-level, conceptual data models for SDB design, which incorporate a simple notation for describing common types of relationships that occur in SDBs. ODDL allows these relationships and other desirable SDB capabilities to be supported by an extended OODB system. A conceptual model of the CEBAF experiments database is presented in terms of ORDs and the ODDL to demonstrate their functionality and use and provide a foundation for future development of experimental nuclear physics software using an OODB approach.

  10. Toward Data-Driven Radiology Education-Early Experience Building Multi-Institutional Academic Trainee Interpretation Log Database (MATILDA).

    PubMed

    Chen, Po-Hao; Loehfelm, Thomas W; Kamer, Aaron P; Lemmon, Andrew B; Cook, Tessa S; Kohli, Marc D

    2016-12-01

    The residency review committee of the Accreditation Council of Graduate Medical Education (ACGME) collects data on resident exam volume and sets minimum requirements. However, this data is not made readily available, and the ACGME does not share their tools or methodology. It is therefore difficult to assess the integrity of the data and determine if it truly reflects relevant aspects of the resident experience. This manuscript describes our experience creating a multi-institutional case log, incorporating data from three American diagnostic radiology residency programs. Each of the three sites independently established automated query pipelines from the various radiology information systems in their respective hospital groups, thereby creating a resident-specific database. Then, the three institutional resident case log databases were aggregated into a single centralized database schema. Three hundred thirty residents and 2,905,923 radiologic examinations over a 4-year span were catalogued using 11 ACGME categories. Our experience highlights big data challenges including internal data heterogeneity and external data discrepancies faced by informatics researchers.
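
    The aggregation step described above, mapping each site's export onto one shared schema and then querying across institutions, can be sketched with the standard-library `sqlite3` module. The table layout, site names and ACGME category labels below are invented for illustration and are not the actual MATILDA schema.

```python
import sqlite3

# Central schema that every institutional export is mapped onto
# (hypothetical table and column names, for illustration only).
central = sqlite3.connect(":memory:")
central.execute("""CREATE TABLE exam_log (
    institution TEXT, resident_id TEXT, acgme_category TEXT, n_exams INTEGER)""")

# Each site's query pipeline exports rows already mapped onto the
# shared ACGME categories, so heterogeneity is resolved before loading.
site_exports = [
    ("site_a", "res-01", "CT", 120), ("site_a", "res-01", "MRI", 45),
    ("site_b", "res-77", "CT", 98),  ("site_c", "res-42", "US", 150),
]
central.executemany("INSERT INTO exam_log VALUES (?, ?, ?, ?)", site_exports)

# Cross-institutional totals per ACGME category.
rows = central.execute("""SELECT acgme_category, SUM(n_exams)
                          FROM exam_log GROUP BY acgme_category
                          ORDER BY acgme_category""").fetchall()
print(rows)  # [('CT', 218), ('MRI', 45), ('US', 150)]
```

    Pushing the category mapping into each site's pipeline, rather than the central database, is one way to contain the internal data heterogeneity the authors mention.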

  11. Developing Visualization Support System for Teaching/Learning Database Normalization

    ERIC Educational Resources Information Center

    Folorunso, Olusegun; Akinwale, AdioTaofeek

    2010-01-01

    Purpose: In tertiary institution, some students find it hard to learn database design theory, in particular, database normalization. The purpose of this paper is to develop a visualization tool to give students an interactive hands-on experience in database normalization process. Design/methodology/approach: The model-view-controller architecture…

  12. Multiculturalism as an element of Lublin's tourism product

    NASA Astrophysics Data System (ADS)

    Rodzoś, Jolanta; Szczęsna, Joanna

    2012-01-01

    Taking into account both the cultural resources and the demand for a tourist offer with elements of cultural heritage, it can be stated that creating an integrated tourism product based on Lublin's multicultural character is possible and needed. Traces of the existence of various ethnic, national, and religious groups are clear and vivid and may become the basis of an interesting offer for tourists. They are at the same time original and unique enough to become the trademark of the city. The realization of such a product can make Lublin the center of historical multiculturalism. The product could become Lublin's distinctive feature on the Polish and European map. The addressees of such a product could be tourists but also Lublin's citizens themselves, for whom it would be a great opportunity to learn about the past of their city. Multicultural heritage makes it possible to create an offer that will help tourists engage actively in the cognitive process of discovering the city. Taking part in a cultural-religious event of a particular cultural group, staying in a stylish hotel, or having a meal in a restaurant offering traditional cuisine will engage tourists emotionally and offer an opportunity to experience reality in a new way. This means of presenting reality is needed these days. There is a great need for active methods of presenting history, traditions, and customs. The Lublin of today offers too many traditional means of presentation, in which tourists are just passive observers and listeners. Broadening the current offer will not only promote Lublin's multicultural heritage but will also become a chance to create a new image of its tourism.

  13. Supersonic and hypersonic shock/boundary-layer interaction database

    NASA Technical Reports Server (NTRS)

    Settles, Gary S.; Dodson, Lori J.

    1994-01-01

    An assessment is given of existing shock-wave/turbulent boundary-layer interaction experiments having sufficient quality to guide turbulence modeling and code validation efforts. Although the focus of this work is hypersonic, experiments at Mach numbers as low as 3 were considered. The principal means of identifying candidate studies was a computerized search of the AIAA Aerospace Database. Several hundred candidate studies were examined and over 100 of these were subjected to a rigorous set of acceptance criteria for inclusion in the database. Nineteen experiments were found to meet these criteria, of which only seven were in the hypersonic regime (M greater than 5).

  14. A database for the analysis of immunity genes in Drosophila: PADMA database.

    PubMed

    Lee, Mark J; Mondal, Ariful; Small, Chiyedza; Paddibhatla, Indira; Kawaguchi, Akira; Govind, Shubha

    2011-01-01

    While microarray experiments generate voluminous data, discerning trends that support an existing or alternative paradigm is challenging. To synergize hypothesis building and testing, we designed the Pathogen Associated Drosophila MicroArray (PADMA) database for easy retrieval and comparison of microarray results from immunity-related experiments (www.padmadatabase.org). PADMA also allows biologists to upload their microarray-results and compare it with datasets housed within PADMA. We tested PADMA using a preliminary dataset from Ganaspis xanthopoda-infected fly larvae, and uncovered unexpected trends in gene expression, reshaping our hypothesis. Thus, the PADMA database will be a useful resource to fly researchers to evaluate, revise, and refine hypotheses.

  15. Ontology based heterogeneous materials database integration and semantic query

    NASA Astrophysics Data System (ADS)

    Zhao, Shuai; Qian, Quan

    2017-10-01

    Materials digital data, high-throughput experiments and high-throughput computations are regarded as the three key pillars of materials genome initiatives. With the fast growth of materials data, the integration and sharing of data have become urgent, and this has gradually become a hot topic of materials informatics. Due to the lack of semantic description, it is difficult to integrate data deeply at the semantic level when adopting conventional heterogeneous database integration approaches such as federated databases or data warehouses. In this paper, a semantic integration method is proposed that creates a semantic ontology by extracting the database schema semi-automatically. Other heterogeneous databases are integrated into the ontology by means of relational algebra and the rooted graph. Based on the integrated ontology, semantic queries can be performed using SPARQL. In the experiments, two well-known first-principles computation databases, OQMD and the Materials Project, are used as the integration targets, which shows the availability and effectiveness of our method.
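
    To make the idea of a semantic query concrete, the toy sketch below builds a tiny in-memory triple store and matches SPARQL-like patterns against it, where `None` plays the role of a SPARQL variable. The material names and properties are invented examples, not data drawn from OQMD or the Materials Project, and a real system would of course use an RDF store rather than Python tuples.

```python
# Toy in-memory triple store: (subject, predicate, object) facts.
triples = {
    ("Fe2O3", "hasBandGap", "2.2"),
    ("Fe2O3", "computedBy", "DFT"),
    ("TiO2", "hasBandGap", "3.0"),
    ("TiO2", "computedBy", "DFT"),
}

def match(pattern):
    """Match a (s, p, o) pattern; None acts like a SPARQL variable."""
    s, p, o = pattern
    return sorted(t for t in triples
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o))

# Analogue of: SELECT ?m WHERE { ?m :hasBandGap ?g }
print([t[0] for t in match((None, "hasBandGap", None))])  # ['Fe2O3', 'TiO2']
```

    The point of ontology-based integration is that once heterogeneous schemas are mapped onto one set of predicates like `hasBandGap`, a single pattern query spans all the source databases.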

  16. Management system for the SND experiments

    NASA Astrophysics Data System (ADS)

    Pugachev, K.; Korol, A.

    2017-09-01

    A new management system for the SND detector experiments (at the VEPP-2000 collider in Novosibirsk) has been developed. We describe here the interaction between a user and the SND databases. These databases contain experiment configuration, conditions and metadata. The new system is designed in a client-server architecture. It has several logical layers corresponding to the users' roles. A new template engine has been created. A web application has been implemented using the Node.js framework. At present the application provides: showing and editing the configuration; showing experiment metadata and the experiment conditions data index; and showing the SND log (prototype).

  17. Combined experiment Phase 2 data characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, M.S.; Shipley, D.E.; Young, T.S.

    1995-11-01

    The National Renewable Energy Laboratory's "Combined Experiment" has yielded a large quantity of experimental data on the operation of a downwind horizontal axis wind turbine under field conditions. To fully utilize this valuable resource and identify particular episodes of interest, a number of databases were created that characterize individual data events and rotational cycles over a wide range of parameters. Each of the 59 five-minute data episodes collected during Phase II of the Combined Experiment has been characterized by the mean, minimum, maximum, and standard deviation of all data channels, except the blade surface pressures. Inflow condition, aerodynamic force coefficient, and minimum leading edge pressure coefficient databases have also been established, characterizing each of nearly 21,000 blade rotational cycles. In addition, a number of tools have been developed for searching these databases for particular episodes of interest. Due to their extensive size, only a portion of the episode characterization databases are included in an appendix, and examples of the cycle characterization databases are given. The search tools are discussed and the FORTRAN or C code for each is included in appendices.
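
    The episode characterization described above, reducing each data channel to its mean, minimum, maximum, and standard deviation, is straightforward to sketch with the standard-library `statistics` module. The channel names and values below are made up for illustration; the original tools were written in FORTRAN or C.

```python
import statistics

# One hypothetical five-minute episode: a few samples per data channel.
episode = {
    "wind_speed_mps": [7.1, 8.4, 6.9, 7.8],
    "rotor_rpm": [71.9, 72.1, 72.0, 72.0],
}

def characterize(channels):
    """Reduce each channel to the four summary statistics used above."""
    return {name: {"mean": statistics.fmean(vals),
                   "min": min(vals),
                   "max": max(vals),
                   "std": statistics.stdev(vals)}
            for name, vals in channels.items()}

summary = characterize(episode)
print(round(summary["wind_speed_mps"]["mean"], 2))  # 7.55
```

    Storing only these per-channel summaries is what makes it practical to search thousands of episodes and rotational cycles for conditions of interest without re-reading the raw time series.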

  18. New Dimensions for the Online Catalog: The Dartmouth College Library Experience [and] TOC/DOC at Caltech: Evolution of Citation Access Online [and] Locally Loaded Databases in Arizona State University's Online Catalog Using the CARL System.

    ERIC Educational Resources Information Center

    Klemperer, Katharina; And Others

    1989-01-01

    Each of three articles describes an academic library's online catalog that includes locally created databases. Topics covered include database and software selection; systems design and development; database producer negotiations; problems encountered during implementation; database loading; training and documentation; and future plans. (CLB)

  19. The radiopurity.org material database

    NASA Astrophysics Data System (ADS)

    Cooley, J.; Loach, J. C.; Poon, A. W. P.

    2018-01-01

    The database at http://www.radiopurity.org is the world's largest public database of material radiopurity measurements. These measurements are used by members of the low-background physics community to build experiments that search for neutrinos, neutrinoless double-beta decay, WIMP dark matter, and other exciting physics. This paper summarizes the current status and future plans of this database.

  20. An Overview of the Object Protocol Model (OPM) and the OPM Data Management Tools.

    ERIC Educational Resources Information Center

    Chen, I-Min A.; Markowitz, Victor M.

    1995-01-01

    Discussion of database management tools for scientific information focuses on the Object Protocol Model (OPM) and data management tools based on OPM. Topics include the need for new constructs for modeling scientific experiments, modeling object structures and experiments in OPM, queries and updates, and developing scientific database applications…

  1. Long Duration Exposure Facility (LDEF) optical systems SIG summary and database

    NASA Astrophysics Data System (ADS)

    Bohnhoff-Hlavacek, Gail

    1992-09-01

    The main objectives of the Long Duration Exposure Facility (LDEF) Optical Systems Special Investigative Group (SIG) Discipline are to develop a database of experimental findings on LDEF optical systems and elements hardware, and provide an optical system overview. Unlike the electrical and mechanical disciplines, the optics effort relies primarily on the testing of hardware at the various principal investigator's laboratories, since minimal testing of optical hardware was done at Boeing. This is because all space-exposed optics hardware are part of other individual experiments. At this time, all optical systems and elements testing by experiment investigator teams is not complete, and in some cases has hardly begun. Most experiment results to date, document observations and measurements that 'show what happened'. Still to come from many principal investigators is a critical analysis to explain 'why it happened' and future design implications. The original optical system related concerns and the lessons learned at a preliminary stage in the Optical Systems Investigations are summarized. The design of the Optical Experiments Database and how to acquire and use the database to review the LDEF results are described.

  2. Long Duration Exposure Facility (LDEF) optical systems SIG summary and database

    NASA Technical Reports Server (NTRS)

    Bohnhoff-Hlavacek, Gail

    1992-01-01

    The main objectives of the Long Duration Exposure Facility (LDEF) Optical Systems Special Investigative Group (SIG) Discipline are to develop a database of experimental findings on LDEF optical systems and elements hardware, and provide an optical system overview. Unlike the electrical and mechanical disciplines, the optics effort relies primarily on the testing of hardware at the various principal investigator's laboratories, since minimal testing of optical hardware was done at Boeing. This is because all space-exposed optics hardware are part of other individual experiments. At this time, all optical systems and elements testing by experiment investigator teams is not complete, and in some cases has hardly begun. Most experiment results to date, document observations and measurements that 'show what happened'. Still to come from many principal investigators is a critical analysis to explain 'why it happened' and future design implications. The original optical system related concerns and the lessons learned at a preliminary stage in the Optical Systems Investigations are summarized. The design of the Optical Experiments Database and how to acquire and use the database to review the LDEF results are described.

  3. Web application for detailed real-time database transaction monitoring for CMS condition data

    NASA Astrophysics Data System (ADS)

    de Gruttola, Michele; Di Guida, Salvatore; Innocente, Vincenzo; Pierro, Antonio

    2012-12-01

    In the upcoming LHC era, databases have become an essential part of the experiments collecting data from the LHC, in order to safely store, and consistently retrieve, the wide amount of data produced by different sources. In the CMS experiment at CERN, all this information is stored in ORACLE databases, allocated on several servers, both inside and outside the CERN network. In this scenario, the task of monitoring the different databases is a crucial database administration issue, since different information may be required depending on users' tasks such as data transfer, inspection, planning and security. We present here a web application based on a Python web framework and Python modules for data mining purposes. To customize the GUI we record traces of user interactions, which are used to build use case models. In addition, the application detects errors in database transactions (for example, identifying mistakes made by users, application failures, unexpected network shutdowns or Structured Query Language (SQL) statement errors) and provides warning messages from the different users' perspectives. Finally, in order to fulfill the requirements of the CMS experiment community, and to keep up with new developments in Web client tools, our application was further developed and new features were deployed.
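
    The error-detection step, catching a failed SQL statement and turning it into a user-facing warning, can be sketched with the standard-library `sqlite3` module. This is a minimal illustration in the spirit of the monitoring application described above; the function name, table and message format are invented, and the real system targets ORACLE rather than SQLite.

```python
import sqlite3

def run_monitored(conn, sql):
    """Execute a statement, classifying the outcome as 'ok' or 'warning'."""
    try:
        return ("ok", conn.execute(sql).fetchall())
    except sqlite3.Error as exc:
        # a real monitor would log this and alert the affected users
        return ("warning", f"SQL statement error: {exc}")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transfers (run INTEGER, status TEXT)")
conn.execute("INSERT INTO transfers VALUES (1, 'done')")

status, result = run_monitored(conn, "SELECT * FROM transfers")
print(status)  # ok
status, result = run_monitored(conn, "SELEC * FROM transfers")  # typo in SQL
print(status)  # warning
```

    Classifying failures at the statement level like this is what allows the application to present different warning messages to different user roles rather than a single undifferentiated error log.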

  4. Crosstalk between intracellular and extracellular signals regulating interneuron production, migration and integration into the cortex.

    PubMed

    Peyre, Elise; Silva, Carla G; Nguyen, Laurent

    2015-01-01

    During embryogenesis, cortical interneurons are generated by ventral progenitors located in the ganglionic eminences of the telencephalon. They travel along multiple tangential paths to populate the cortical wall. As they reach this structure they undergo intracortical dispersion to settle in their final destination. At the cellular level, migrating interneurons are highly polarized cells that extend and retract processes using dynamic remodeling of the microtubule and actin cytoskeleton. Different levels of molecular regulation contribute to interneuron migration. These include: (1) extrinsic guidance cues distributed along migratory streams that are sensed and integrated by migrating interneurons; (2) intrinsic genetic programs driven by specific transcription factors that grant specification and set the timing of migration for different subtypes of interneurons; (3) adhesion molecules and cytoskeletal elements/regulators that transduce molecular signals into coherent movement. These levels of molecular regulation must be properly integrated by interneurons to allow their migration in the cortex. The aim of this review is to summarize our current knowledge of the interplay between microenvironmental signals and cell-autonomous programs that drive cortical interneuron production, tangential migration, and integration in the developing cerebral cortex.

  5. Syk Mediates BCR- and CD40-Signaling Integration during B Cell Activation

    PubMed Central

    Ying, Haiyan; Li, Zhenping; Yang, Lifen; Zhang, Jian

    2010-01-01

    CD40 is essential for optimal B cell activation. It has been shown that CD40 stimulation can augment BCR-induced B cell responses, but the molecular mechanism(s) by which CD40 regulates BCR signaling is poorly understood. In this report, we attempted to characterize the signaling synergy between BCR- and CD40-mediated pathways during B cell activation. We found that spleen tyrosine kinase (Syk) is involved in CD40 signaling, and is synergistically activated in B cells in response to BCR/CD40 costimulation. CD40 stimulation alone also activates B cell linker (BLNK), Bruton tyrosine kinase (Btk), and Vav-2 downstream of Syk, and significantly enhances BCR-induced formation of a complex consisting of Vav-2, Btk, BLNK, and phospholipase C-gamma2 (PLC-γ2), leading to activation of extracellular signal-regulated kinase (ERK), p38 mitogen-activated protein kinase, Akt, and NF-κB required for optimal B cell activation. Therefore, our data suggest that CD40 can strengthen the BCR-signaling pathway and quantitatively modify BCR signaling during B cell activation. PMID:21074890

  6. Blended Wing Body Concept Development with Open Rotor Engine Integration

    NASA Technical Reports Server (NTRS)

    Pitera, David M.; DeHaan, Mark; Brown, Derrell; Kawai, Ronald T.; Hollowell, Steve; Camacho, Peter; Bruns, David; Rawden, Blaine K.

    2011-01-01

    The purpose of this study is to perform a systems analysis of a Blended Wing Body (BWB) open rotor concept at the conceptual design level. This concept will be utilized to estimate overall noise and fuel burn performance, leveraging recent test data. This study will also investigate the challenge of propulsion airframe installation of an open rotor engine on a BWB configuration. Open rotor engines have unique problems relative to turbofans. The rotors are open, exposed to flow conditions outside of the engine. The flow field in which the rotors are immersed may be faster than the free stream flow, and it may not be uniform; both of these characteristics could increase noise and decrease performance. The rotors can also change the flow conditions imposed on aircraft surfaces. At high-power conditions such as takeoff and climb-out, the stream tube of air that goes through the rotors contracts rapidly, causing the boundary layer on the body's upper surface to pass through an adverse pressure gradient, which could result in separated airflow. The BWB/open rotor configuration must be designed to mitigate these problems.

  7. Snow Ecology

    NASA Astrophysics Data System (ADS)

    Jones, H. G.; Pomeroy, J. W.; Walker, D. A.; Hoham, R. W.

    2001-01-01

    In this volume, a multidisciplinary group of acknowledged experts fully integrate the physical, chemical, and biological sciences to provide a complete understanding of the interrelationships between snow structure and life. This volume opens a new perspective on snow cover as a habitat for organisms under extreme environmental conditions and as a key factor in the ecology of much of the Earth's surface. The contributors describe the fundamental physical and small-scale chemical processes that characterize the evolution of snow and their influence on the life cycles of true snow organisms and the biota of cold regions with extended snow cover. The book further expands on the role of snow in the biosphere through the study of the relationship between snow and climate and the paleo-ecological evidence for the influence of past snow regimes on plant communities. Snow Ecology will serve as a main textbook for advanced courses in biology, ecology, geography, environmental science, and earth science in which an important component is devoted to the study of the cryosphere. It will also be useful as a reference text for graduate students, researchers, and professionals at academic institutions and in government and nongovernmental agencies with environmental concerns.

  8. Cytotoxicity and mitogenicity assays with real-time and label-free monitoring of human granulosa cells with an impedance-based signal processing technology integrating micro-electronics and cell biology.

    PubMed

    Oktem, Ozgur; Bildik, Gamze; Senbabaoglu, Filiz; Lack, Nathan A; Akin, Nazli; Yakar, Feridun; Urman, Defne; Guzel, Yilmaz; Balaban, Basak; Iwase, Akira; Urman, Bulent

    2016-04-01

    A recently developed technology (xCelligence) integrating micro-electronics and cell biology allows real-time, uninterrupted and quantitative analysis of cell proliferation, viability and cytotoxicity by measuring the electrical impedance of the cell population in the wells without using any labeling agent. In this study we investigated if this system is a suitable model to analyze the effects of mitogenic (FSH) and cytotoxic (chemotherapy) agents with different toxicity profiles on human granulosa cells in comparison to conventional methods of assessing cell viability, DNA damage, apoptosis and steroidogenesis. The system generated the real-time growth curves of the cells, and determined their doubling times, mean cell indices and generated dose-response curves after exposure to cytotoxic and mitogenic stimuli. It accurately predicted the gonadotoxicity of the drugs and distinguished less toxic agents (5-FU and paclitaxel) from more toxic ones (cisplatin and cyclophosphamide). This platform can be a useful tool for specific end-point assays in reproductive toxicology. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Production of D-tagatose and bioethanol from onion waste by an integrating bioprocess.

    PubMed

    Kim, Ho Myeong; Song, Younho; Wi, Seung Gon; Bae, Hyeun-Jong

    2017-10-20

    The rapid increase of agricultural waste is a burgeoning problem, and considerable efforts are being made by numerous researchers to convert it into a high-value resource material. Onion waste is one of the biggest issues in a world of dwindling resources. In this study, the potential of onion juice residue (OJR) for producing a valuable rare sugar or bioethanol was evaluated. Purified Paenibacillus polymyxa L-arabinose isomerase (PPAI) has a molecular weight of approximately 53 kDa, and exhibits maximal activity at 30°C and pH 7.5 in the presence of 0.8 mM Mn2+. PPAI can produce 0.99 g D-tagatose from 10 g OJR. To present another application for OJR, we produced 1.56 g bioethanol from 10 g OJR through a bioconversion and fermentation process. These results indicate that PPAI can be used for producing rare sugars in an industrial setting, and that OJR can be converted to D-tagatose and bioethanol. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. [Integration and expression of porcine endogenous retrovirus in the immortal cell line of Banna Minipig Inbred Line-Mesenchymal Stem Cells].

    PubMed

    Yu, Ping; Liu, Jin; Zhang, Li; Li, Shrng-Fu; Bu, Hong; Li, You-Ping; Cheng, Jing-Qui; Lu, Yan-Rong; Long, Dan

    2005-11-01

    To detect the integration and expression of porcine endogenous retrovirus (PERV) in the immortal cell line of Banna Minipig Inbred Line-Mesenchymal Stem Cells (BMI-MSCs). DNA and total RNA of the immortal cell line of BMI-MSCs were extracted, and PCR and RT-PCR were performed to detect the PERV gag, pol and env genes; the type of PERV was also determined. The gag, pol and env genes were all detected in the primary culture and in the immortal cell line (passage 150 and passage 180) of BMI-MSCs, and the PERV types present were PERV-A and PERV-B. Functional expression of PERV gag and pol mRNA was also detected. In this laboratory, PERV was not lost during pig inbreeding, nor during long-term culture of pig cells in vitro. PERV has integrated into the genome of its natural host, and viral mRNA can be effectively expressed. It is therefore essential to evaluate the possibility of xenozoonoses in pig-to-human xenotransplantation.

  11. Influence of rumen protozoa on methane emission in ruminants: a meta-analysis approach.

    PubMed

    Guyader, J; Eugène, M; Nozière, P; Morgavi, D P; Doreau, M; Martin, C

    2014-11-01

    A meta-analysis was conducted to evaluate the effects of protozoa concentration on methane emission from ruminants. A database was built from 59 publications reporting data from 76 in vivo experiments. The experiments included in the database recorded methane production and rumen protozoa concentration measured on the same groups of animals. Quantitative data such as diet chemical composition, rumen fermentation and microbial parameters, and qualitative information such as methane mitigation strategies were also collected. In the database, 31% of the experiments reported a concomitant reduction of both protozoa concentration and methane emission (g/kg dry matter intake). Nearly all of these experiments tested lipids as methane mitigation strategies. By contrast, 21% of the experiments reported a variation in methane emission without changes in protozoa numbers, indicating that methanogenesis is also regulated by other mechanisms not involving protozoa. Experiments that used chemical compounds as an antimethanogenic treatment belonged to this group. The relationship between methane emission and protozoa concentration was studied with a variance-covariance model, with experiment as a fixed effect. The experiments included in the analysis had a within-experiment variation of protozoa concentration higher than 5.3 log10 cells/ml, corresponding to the average s.e.m. of the database for this variable. To detect potential interfering factors for the relationship, the influence of several qualitative and quantitative secondary factors was tested. This meta-analysis showed a significant linear relationship between methane emission and protozoa concentration: methane (g/kg dry matter intake) = -30.7 + 8.14 × protozoa (log10 cells/ml), with 28 experiments (91 treatments), residual mean square error = 1.94 and adjusted R² = 0.90. The proportion of butyrate in the rumen positively influenced the least square means of this relationship.
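    The fitted relationship reported above can be evaluated directly; the regression coefficients are taken from the abstract, while the protozoa concentration used below is purely illustrative.

```python
def methane_emission(log10_protozoa):
    """Predicted methane emission (g/kg dry matter intake) from the
    reported regression: CH4 = -30.7 + 8.14 * log10(protozoa, cells/ml)."""
    return -30.7 + 8.14 * log10_protozoa

# An illustrative rumen concentration of 10^6 cells/ml (log10 = 6)
print(round(methane_emission(6.0), 2))  # 18.14
```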

  12. NASA Cold Land Processes Experiment (CLPX 2002/03): Ground-based and near-surface meteorological observations

    Treesearch

    Kelly Elder; Don Cline; Angus Goodbody; Paul Houser; Glen E. Liston; Larry Mahrt; Nick Rutter

    2009-01-01

    A short-term meteorological database has been developed for the Cold Land Processes Experiment (CLPX). This database includes meteorological observations from stations designed and deployed exclusively for CLPX as well as observations available from other sources located in the small regional study area (SRSA) in north-central Colorado. The measured weather parameters...

  13. Quantifying Precision and Availability of Location Memory in Everyday Pictures and Some Implications for Picture Database Design

    ERIC Educational Resources Information Center

    Lansdale, Mark W.; Oliff, Lynda; Baguley, Thom S.

    2005-01-01

    The authors investigated whether memory for object locations in pictures could be exploited to address known difficulties of designing query languages for picture databases. M. W. Lansdale's (1998) model of location memory was adapted to 4 experiments observing memory for everyday pictures. These experiments showed that location memory is…

  14. Tufts Health Sciences Database: Lessons, Issues, and Opportunities.

    ERIC Educational Resources Information Center

    Lee, Mary Y.; Albright, Susan A.; Alkasab, Tarik; Damassa, David A.; Wang, Paul J.; Eaton, Elizabeth K.

    2003-01-01

    Describes a seven-year experience with developing the Tufts Health Sciences Database, a database-driven information management system that combines the strengths of a digital library, content delivery tools, and curriculum management. Identifies major effects on teaching and learning. Also addresses issues of faculty development, copyright and…

  15. The Biological Macromolecule Crystallization Database and NASA Protein Crystal Growth Archive

    PubMed Central

    Gilliland, Gary L.; Tung, Michael; Ladner, Jane

    1996-01-01

    The NIST/NASA/CARB Biological Macromolecule Crystallization Database (BMCD), NIST Standard Reference Database 21, contains crystal data and crystallization conditions for biological macromolecules. The database entries include data abstracted from published crystallographic reports. Each entry consists of information describing the biological macromolecule crystallized and crystal data and the crystallization conditions for each crystal form. The BMCD serves as the NASA Protein Crystal Growth Archive in that it contains protocols and results of crystallization experiments undertaken in microgravity (space). These database entries report the results, whether successful or not, from NASA-sponsored protein crystal growth experiments in microgravity and from microgravity crystallization studies sponsored by other international organizations. The BMCD was designed as a tool to assist x-ray crystallographers in the development of protocols to crystallize biological macromolecules, those that have previously been crystallized, and those that have not been crystallized. PMID:11542472

  16. Standardization of XML Database Exchanges and the James Webb Space Telescope Experience

    NASA Technical Reports Server (NTRS)

    Gal-Edd, Jonathan; Detter, Ryan; Jones, Ron; Fatig, Curtis C.

    2007-01-01

    Personnel from the National Aeronautics and Space Administration (NASA) James Webb Space Telescope (JWST) Project have been working with various standards communities, such as the Object Management Group (OMG) and the Consultative Committee for Space Data Systems (CCSDS), to assist in the definition of a common eXtensible Markup Language (XML) database exchange format. The CCSDS and OMG standards are intended for the exchange of core command and telemetry information, not for all the database information needed to exercise a NASA space mission. The mission-specific database, containing all the information needed for a space mission, is translated from/to the standard using a translator. The standard is meant to provide a system that encompasses 90% of the information needed for command and telemetry processing. This paper will discuss standardization of the XML database exchange format, the tools used, and the JWST experience, as well as future work with both commercial and government XML standards groups.

  17. A database of charged cosmic rays

    NASA Astrophysics Data System (ADS)

    Maurin, D.; Melot, F.; Taillet, R.

    2014-09-01

    Aims: This paper gives a description of a new online database and associated online tools (data selection, data export, plots, etc.) for charged cosmic-ray measurements. The experimental setups (type, flight dates, techniques) from which the data originate are included in the database, along with the references to all relevant publications. Methods: The database relies on the MySQL5 engine. The web pages and queries are based on PHP, AJAX and the jquery, jquery.cluetip, jquery-ui, and table-sorter third-party libraries. Results: In this first release, we restrict ourselves to Galactic cosmic rays with Z ≤ 30 and a kinetic energy per nucleon up to a few tens of TeV/n. This corresponds to more than 200 different sub-experiments (i.e., different experiments, or data from the same experiment flying at different times) in as many publications. Conclusions: We set up a cosmic-ray database (CRDB) and provide tools to sort and visualise the data. New data can be submitted, providing the community with a collaborative tool to archive past and future cosmic-ray measurements. http://lpsc.in2p3.fr/crdb; Contact: crdatabase@lpsc.in2p3.fr
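    The data-selection tools described above amount to filtered queries over the measurement tables. The sketch below imitates that step with the standard-library sqlite3 module standing in for the MySQL5 backend; the schema, column names, and values are hypothetical.

```python
import sqlite3

# A toy stand-in for the CRDB data-selection step; SQLite replaces the
# MySQL backend and the rows below are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (species TEXT, z INTEGER, ekn_gev REAL, flux REAL)")
conn.executemany("INSERT INTO data VALUES (?, ?, ?, ?)", [
    ("H",  1, 10.0, 0.5),
    ("He", 2, 100.0, 0.1),
    ("Zr", 40, 10.0, 0.01),
])

# First-release scope: Galactic cosmic rays with Z <= 30
selected = conn.execute(
    "SELECT species, z FROM data WHERE z <= 30 ORDER BY z"
).fetchall()
print(selected)  # [('H', 1), ('He', 2)]
```

    The real web interface wraps such queries in PHP/AJAX pages and adds export and plotting on top of the result set.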

  18. Second chronological supplement to the Carcinogenic Potency Database: standardized results of animal bioassays published through December 1984 and by the National Toxicology Program through May 1986.

    PubMed Central

    Gold, L S; Slone, T H; Backman, G M; Magaw, R; Da Costa, M; Lopipero, P; Blumenthal, M; Ames, B N

    1987-01-01

    This paper is the second chronological supplement to the Carcinogenic Potency Database, published earlier in this journal (1,2,4). We report here results of carcinogenesis bioassays published in the general literature between January 1983 and December 1984, and in Technical Reports of the National Cancer Institute/National Toxicology Program between January 1983 and May 1986. This supplement includes results of 525 long-term, chronic experiments of 199 test compounds, and reports the same information about each experiment in the same plot format as the earlier papers: e.g., the species and strain of test animal, the route and duration of compound administration, dose level and other aspects of experimental protocol, histopathology and tumor incidence, TD50 (carcinogenic potency) and its statistical significance, dose response, author's opinion about carcinogenicity, and literature citation. We refer the reader to the 1984 publications for a description of the numerical index of carcinogenic potency (TD50), a guide to the plot of the database, and a discussion of the sources of data, the rationale for the inclusion of particular experiments and particular target sites, and the conventions adopted in summarizing the literature. The three plots of the database are to be used together, since results of experiments published in earlier plots are not repeated. Taken together, the three plots include results for more than 3500 experiments on 975 chemicals. Appendix 14 is an index to all chemicals in the database and indicates which plot(s) each chemical appears in. PMID:3691431

  19. Evaluation of Database Coverage: A Comparison of Two Methodologies.

    ERIC Educational Resources Information Center

    Tenopir, Carol

    1982-01-01

    Describes experiment which compared two techniques used for evaluating and comparing database coverage of a subject area, e.g., "bibliography" and "subject profile." Differences in time, cost, and results achieved are compared by applying techniques to field of volcanology using two databases, Geological Reference File and GeoArchive. Twenty…

  20. A rotorcraft flight database for validation of vision-based ranging algorithms

    NASA Technical Reports Server (NTRS)

    Smith, Phillip N.

    1992-01-01

    A helicopter flight test experiment was conducted at the NASA Ames Research Center to obtain a database consisting of video imagery and accurate measurements of camera motion, camera calibration parameters, and true range information. The database was developed to allow verification of monocular passive range estimation algorithms for use in the autonomous navigation of rotorcraft during low altitude flight. The helicopter flight experiment is briefly described. Four data sets representative of the different helicopter maneuvers and the visual scenery encountered during the flight test are presented. These data sets will be made available to researchers in the computer vision community.

  1. Time-critical Database Condition Data Handling in the CMS Experiment During the First Data Taking Period

    NASA Astrophysics Data System (ADS)

    Cavallari, Francesca; de Gruttola, Michele; Di Guida, Salvatore; Govi, Giacomo; Innocente, Vincenzo; Pfeiffer, Andreas; Pierro, Antonio

    2011-12-01

    Automatic, synchronous and reliable population of the condition databases is critical for the correct operation of the online selection as well as of the offline reconstruction and analysis of data. In this complex infrastructure, monitoring and fast detection of errors is a very challenging task. In this paper, we describe the CMS experiment system to process and populate the Condition Databases and make condition data promptly available both online for the high-level trigger and offline for reconstruction. The data are automatically collected using centralized jobs or are "dropped" by the users in dedicated services (offline and online drop-box), which synchronize them and take care of writing them into the online database. Then they are automatically streamed to the offline database, and thus are immediately accessible offline worldwide. The condition data are managed by different users using a wide range of applications. In normal operation the database monitor is used to provide simple timing information and the history of all transactions for all database accounts, and in the case of faults it is used to return simple error messages and more complete debugging information.

  2. Impact assessment of climate change on tourism in the Pacific small islands based on the database of long-term high-resolution climate ensemble experiments

    NASA Astrophysics Data System (ADS)

    Watanabe, S.; Utsumi, N.; Take, M.; Iida, A.

    2016-12-01

    This study aims to develop a new approach to assess the impact of climate change on the small oceanic islands in the Pacific. In the new approach, the change in the probabilities of various situations was projected, taking into account the spread of projections derived from ensemble simulations, instead of projecting only the most probable situation. We utilized the database for Policy Decision making for Future climate change (d4PDF), a long-term, high-resolution database composed of the results of 100 ensemble climate experiments. A new methodology, Multi Threshold Ensemble Assessment (MTEA), was developed using the d4PDF in order to assess the impact of climate change. We focused on the impact of climate change on tourism because it has played an important role in the economy of the Pacific Islands. The Yaeyama Region, one of the tourist destinations in Okinawa, Japan, was selected as the case study site. Two kinds of impact were assessed: the change in the probability of extreme climate phenomena, and tourist satisfaction associated with weather. The d4PDF ensemble experiments and a questionnaire survey conducted by a local government were used for the assessment. The results indicated that the strength of extreme events would increase, whereas their probability of occurrence would decrease. This change should increase the number of clear days, which could contribute to improved tourist satisfaction.

  3. Publishing Your Database on CD-ROM for Profit: The FISHLIT and NISC Experience.

    ERIC Educational Resources Information Center

    Crampton, Margaret

    1995-01-01

    Details the development of the FISHLIT bibliographic database at the JLB Smith Institute of Ichthyology Library at Rhodes University (South Africa), and the subsequent CD-ROM publication of the database by NISC (National Information Services Corporation). Discusses the advantages of CD-ROM publication, costs and information service provision,…

  4. Performance related issues in distributed database systems

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    The key elements of research performed during the year-long effort of this project are: investigate the effects of heterogeneity in distributed real-time systems; study the requirements of TRAC towards building a heterogeneous database system; study the effects of performance modeling on distributed database performance; and experiment with an ORACLE-based heterogeneous system.

  5. Naval Ship Database: Database Design, Implementation, and Schema

    DTIC Science & Technology

    2013-09-01

    incoming data. The solution allows database users to store and analyze data collected by navy ships in the Royal Canadian Navy (RCN). The data...understanding RCN jargon and common practices on a typical RCN vessel. This experience led to the development of several error detection methods to...data to be stored in the database. Mr. Massel has also collected data pertaining to day to day activities on RCN vessels that has been imported into

  6. A Brief Review of RNA–Protein Interaction Database Resources

    PubMed Central

    Yi, Ying; Zhao, Yue; Huang, Yan; Wang, Dong

    2017-01-01

    RNA–Protein interactions play critical roles in various biological processes. By collecting and analyzing the RNA–Protein interactions and binding sites from experiments and predictions, RNA–Protein interaction databases have become an essential resource for the exploration of the transcriptional and post-transcriptional regulatory network. Here, we briefly review several widely used RNA–Protein interaction database resources developed in recent years to provide a guide to these databases. The content and major functions of each database are presented. These brief descriptions help users quickly choose the database containing the information they are interested in. In short, these RNA–Protein interaction database resources are continually updated, and their current state reflects the effort to identify and analyze the large number of RNA–Protein interactions. PMID:29657278

  7. Experience in running relational databases on clustered storage

    NASA Astrophysics Data System (ADS)

    Gaspar Aparicio, Ruben; Potocky, Miroslav

    2015-12-01

    For the past eight years, the CERN IT Database group has based its backend storage on a NAS (Network-Attached Storage) architecture, providing database access via the NFS (Network File System) protocol. In the last two and a half years, our storage has evolved from a scale-up architecture to a scale-out one. This paper describes our setup and a set of functionalities providing key features to other services, such as Database on Demand [1] or the CERN Oracle backup and recovery service. It also outlines a possible evolution path that storage for databases could follow.

  8. Online Analytical Processing (OLAP): A Fast and Effective Data Mining Tool for Gene Expression Databases

    PubMed Central

    2005-01-01

    Gene expression databases contain a wealth of information, but current data mining tools are limited in their speed and effectiveness in extracting meaningful biological knowledge from them. Online analytical processing (OLAP) can be used as a supplement to cluster analysis for fast and effective data mining of gene expression databases. We used Analysis Services 2000, a product that ships with SQLServer2000, to construct an OLAP cube that was used to mine a time series experiment designed to identify genes associated with resistance of soybean to the soybean cyst nematode, a devastating pest of soybean. The data for these experiments is stored in the soybean genomics and microarray database (SGMD). A number of candidate resistance genes and pathways were found. Compared to traditional cluster analysis of gene expression data, OLAP was more effective and faster in finding biologically meaningful information. OLAP is available from a number of vendors and can work with any relational database management system through OLE DB. PMID:16046824
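    The OLAP approach described above boils down to fast aggregation along chosen dimensions of the expression data. The sketch below shows a toy roll-up in that spirit using the standard-library sqlite3 module rather than Analysis Services; gene names, time points, and expression ratios are invented for illustration.

```python
import sqlite3

# A toy roll-up along the time dimension of a gene x time expression table,
# in the spirit of an OLAP cube query; all values here are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE expr (gene TEXT, hours INTEGER, ratio REAL)")
conn.executemany("INSERT INTO expr VALUES (?, ?, ?)", [
    ("geneA", 6, 2.0), ("geneA", 12, 4.0),
    ("geneB", 6, 1.0), ("geneB", 12, 1.5),
])

# Aggregate away the time dimension: mean expression ratio per gene
per_gene = dict(conn.execute(
    "SELECT gene, AVG(ratio) FROM expr GROUP BY gene"
).fetchall())
print(per_gene)  # {'geneA': 3.0, 'geneB': 1.25}
```

    An OLAP engine precomputes such aggregates across many dimension combinations, which is what makes interactive slicing of large expression cubes fast.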

  9. Monitoring tools of COMPASS experiment at CERN

    NASA Astrophysics Data System (ADS)

    Bodlak, M.; Frolov, V.; Huber, S.; Jary, V.; Konorov, I.; Levit, D.; Novy, J.; Salac, R.; Tomsa, J.; Virius, M.

    2015-12-01

    This paper briefly introduces the data acquisition system of the COMPASS experiment and mainly focuses on the part responsible for monitoring the nodes of the newly developed data acquisition system of this experiment. COMPASS is a high-energy particle physics experiment with a fixed target, located at the SPS of the CERN laboratory in Geneva, Switzerland. The hardware of the data acquisition system has been upgraded to use FPGA cards that are responsible for data multiplexing and event building. The software counterpart of the system includes several processes deployed in a heterogeneous network environment. Two processes, namely the Message Logger and the Message Browser, take care of monitoring. These tools handle messages generated by nodes in the system. While the Message Logger collects and saves messages to the database, the Message Browser serves as a graphical interface over the database containing these messages. For better performance, certain database optimizations have been used. Lastly, results of performance tests are presented.

  10. Quantifying precision and availability of location memory in everyday pictures and some implications for picture database design.

    PubMed

    Lansdale, Mark W; Oliff, Lynda; Baguley, Thom S

    2005-06-01

    The authors investigated whether memory for object locations in pictures could be exploited to address known difficulties of designing query languages for picture databases. M. W. Lansdale's (1998) model of location memory was adapted to 4 experiments observing memory for everyday pictures. These experiments showed that location memory is quantified by 2 parameters: a probability that memory is available and a measure of its precision. Availability is determined by controlled attentional processes, whereas precision is mostly governed by picture composition beyond the viewer's control. Additionally, participants' confidence judgments were good predictors of availability but were insensitive to precision. This research suggests that databases using location memory are feasible. The implications of these findings for database design and for further research and development are discussed. (c) 2005 APA

  11. The Native Plant Propagation Protocol Database: 16 years of sharing information

    Treesearch

    R. Kasten Dumroese; Thomas D. Landis

    2016-01-01

    The Native Plant Propagation Protocol Database was launched in 2001 to provide an online mechanism for sharing information about growing native plants. It relies on plant propagators to upload their protocols (detailed directions for growing particular native plants) so that others may benefit from their experience. Currently the database has nearly 3000 protocols and...

  12. Computer Cataloging of Electronic Journals in Unstable Aggregator Databases: The Hong Kong Baptist University Library Experience.

    ERIC Educational Resources Information Center

    Li, Yiu-On; Leung, Shirley W.

    2001-01-01

    Discussion of aggregator databases focuses on a project at the Hong Kong Baptist University library to integrate full-text electronic journal titles from three unstable aggregator databases into its online public access catalog (OPAC). Explains the development of the electronic journal computer program (EJCOP) to generate MARC records for…

  13. An Efficient Method for the Retrieval of Objects by Topological Relations in Spatial Database Systems.

    ERIC Educational Resources Information Center

    Lin, P. L.; Tan, W. H.

    2003-01-01

    Presents a new method to improve the performance of query processing in a spatial database. Experiments demonstrated that performance of database systems can be improved because both the number of objects accessed and number of objects requiring detailed inspection are much less than those in the previous approach. (AEF)

  14. GeNNet: an integrated platform for unifying scientific workflows and graph databases for transcriptome data analysis

    PubMed Central

    Gadelha, Luiz; Ribeiro-Alves, Marcelo; Porto, Fábio


  15. GeNNet: an integrated platform for unifying scientific workflows and graph databases for transcriptome data analysis.

    PubMed

    Costa, Raquel L; Gadelha, Luiz; Ribeiro-Alves, Marcelo; Porto, Fábio

    2017-01-01

    There are many steps in analyzing transcriptome data, from the acquisition of raw data to the selection of a subset of representative genes that explain a scientific hypothesis. The data produced can be represented as networks of interactions among genes, and these may additionally be integrated with other biological databases, such as protein-protein interactions, transcription factors and gene annotation. However, the results of these analyses remain fragmented, imposing difficulties either for subsequent inspection of results or for meta-analysis through the incorporation of new related data. Integrating databases and tools into scientific workflows, orchestrating their execution, and managing the resulting data and its respective metadata are challenging tasks. Additionally, a great amount of effort is required to run in silico experiments and to structure and compose the information as needed for analysis. Different programs may need to be applied and different files are produced during the experiment cycle. In this context, the availability of a platform supporting experiment execution is paramount. We present GeNNet, an integrated transcriptome analysis platform that unifies scientific workflows with graph databases to select genes relevant to the biological system under evaluation. It includes GeNNet-Wf, a scientific workflow that pre-loads biological data, pre-processes raw microarray data and conducts a series of analyses including normalization, differential expression inference, clustering and gene set enrichment analysis. A user-friendly web interface, GeNNet-Web, allows for setting parameters, executing, and visualizing the results of GeNNet-Wf executions. To demonstrate the features of GeNNet, we performed case studies with data retrieved from GEO, in particular a single-factor experiment analyzed in different scenarios. As a result, we obtained differentially expressed genes whose biological functions were then analyzed.
The results are integrated into GeNNet-DB, a database of genes, clusters, experiments and their properties and relationships. The resulting graph database is explored with queries that demonstrate the expressiveness of this data model for reasoning about gene interaction networks. GeNNet is the first platform to integrate the analytical process of transcriptome data with graph databases. It provides a comprehensive set of tools that would otherwise be challenging for non-expert users to install and use. Developers can add new functionality to components of GeNNet. The derived data allow for testing previous hypotheses about an experiment and exploring new ones through the interactive graph database environment. It enables the analysis of data from human, rhesus macaque, mouse and rat Affymetrix platforms. GeNNet is available as an open source platform at https://github.com/raquele/GeNNet and can be retrieved as a software container with the command docker pull quelopes/gennet. PMID:28695067

  16. Research on high availability architecture of SQL and NoSQL

    NASA Astrophysics Data System (ADS)

    Wang, Zhiguo; Wei, Zhiqiang; Liu, Hao

    2017-03-01

    With the advent of the era of big data, both the amount and the importance of data have increased dramatically. SQL databases continue to improve in performance and scalability, but more and more companies tend to adopt NoSQL databases, because NoSQL databases have a simpler data model and greater extensibility than SQL databases. Almost all database designers, for SQL and NoSQL systems alike, aim to improve performance and ensure availability through a reasonable architecture that reduces the effects of software and hardware failures, so that they can provide a better experience for their customers. In this paper, I mainly discuss the architectures of MySQL, MongoDB, and Redis, which are highly available and have been deployed in practical application environments, and design a hybrid architecture.
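    The core availability idea discussed above, a primary that replicates writes and a replica that takes over on failure, can be sketched in a few lines. This is a toy illustration only, not tied to the actual MySQL, MongoDB, or Redis mechanisms; all class and node names are invented.

```python
# Toy replicated key-value store: writes go to every live node, and
# reads fail over to the first node that is still up.

class Node:
    def __init__(self, name):
        self.name = name
        self.up = True
        self.data = {}

class ReplicatedStore:
    def __init__(self, nodes):
        self.nodes = nodes  # nodes[0] is the initial primary

    def primary(self):
        # Failover rule: the first live node acts as primary.
        for node in self.nodes:
            if node.up:
                return node
        raise RuntimeError("no node available")

    def write(self, key, value):
        # Replicate the write to every live node.
        for node in self.nodes:
            if node.up:
                node.data[key] = value

    def read(self, key):
        return self.primary().data[key]

store = ReplicatedStore([Node("primary"), Node("replica-1"), Node("replica-2")])
store.write("user:1", "alice")
store.nodes[0].up = False     # simulate a primary failure
print(store.read("user:1"))   # the read is served by replica-1
```

    Real systems add consensus, replication lag, and client rerouting on top of this basic shape; the sketch shows only why replication plus failover preserves availability.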

  17. Home Literacy Experiences and Early Childhood Disability: A Descriptive Study Using the National Household Education Surveys (NHES) Program Database

    ERIC Educational Resources Information Center

    Breit-Smith, Allison; Cabell, Sonia Q.; Justice, Laura M.

    2010-01-01

    Purpose: The present article illustrates how the National Household Education Surveys (NHES; U.S. Department of Education, 2009) database might be used to address questions of relevance to researchers who are concerned with literacy development among young children. Following a general description of the NHES database, a study is provided that…

  18. Evolution of the use of relational and NoSQL databases in the ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Barberis, D.

    2016-09-01

    The ATLAS experiment used for many years a large database infrastructure based on Oracle to store several different types of non-event data: time-dependent detector configuration and conditions data, calibrations and alignments, configurations of Grid sites, catalogues for data management tools, job records for distributed workload management tools, run and event metadata. The rapid development of "NoSQL" databases (structured storage services) in the last five years allowed an extended and complementary usage of traditional relational databases and new structured storage tools in order to improve the performance of existing applications and to extend their functionalities using the possibilities offered by the modern storage systems. The trend is towards using the best tool for each kind of data, separating for example the intrinsically relational metadata from payload storage, and records that are frequently updated and benefit from transactions from archived information. Access to all components has to be orchestrated by specialised services that run on front-end machines and shield the user from the complexity of data storage infrastructure. This paper describes this technology evolution in the ATLAS database infrastructure and presents a few examples of large database applications that benefit from it.

  19. Construction of Database for Pulsating Variable Stars

    NASA Astrophysics Data System (ADS)

    Chen, B. Q.; Yang, M.; Jiang, B. W.

    2011-07-01

    A database for pulsating variable stars has been constructed so that Chinese astronomers can study variable stars conveniently. The database currently includes about 230,000 variable stars in the Galactic bulge, the LMC and the SMC observed by the MACHO (MAssive Compact Halo Objects) and OGLE (Optical Gravitational Lensing Experiment) projects. The software used for the construction is LAMP, i.e., Linux+Apache+MySQL+PHP. A web page is provided to search the photometric data and the light curve in the database by the right ascension and declination of the object. More data will be incorporated into the database.

  20. The Application and Future of Big Database Studies in Cardiology: A Single-Center Experience.

    PubMed

    Lee, Kuang-Tso; Hour, Ai-Ling; Shia, Ben-Chang; Chu, Pao-Hsien

    2017-11-01

    As medical research techniques and quality have improved, it has become apparent that cardiovascular problems could be better resolved by stricter experimental design. In practice, substantial time and resources must be expended to fulfill the requirements of high-quality studies, and many worthy ideas and hypotheses could not be verified or proven due to ethical or economic limitations. In recent years, new and varied applications of databases have received increasing attention. Important information on issues such as rare cardiovascular diseases, women's heart health, post-marketing analysis of different medications, or a combination of clinical and regional cardiac features can be obtained through rigorous statistical methods. However, all databases have limitations. A key requirement for creating and correctly conducting this kind of research is a reliable process for analyzing and interpreting these cardiology databases.

  1. NCBI GEO: mining tens of millions of expression profiles--database and tools update.

    PubMed

    Barrett, Tanya; Troup, Dennis B; Wilhite, Stephen E; Ledoux, Pierre; Rudnev, Dmitry; Evangelista, Carlos; Kim, Irene F; Soboleva, Alexandra; Tomashevsky, Maxim; Edgar, Ron

    2007-01-01

    The Gene Expression Omnibus (GEO) repository at the National Center for Biotechnology Information (NCBI) archives and freely disseminates microarray and other forms of high-throughput data generated by the scientific community. The database has a minimum information about a microarray experiment (MIAME)-compliant infrastructure that captures fully annotated raw and processed data. Several data deposit options and formats are supported, including web forms, spreadsheets, XML and Simple Omnibus Format in Text (SOFT). In addition to data storage, a collection of user-friendly web-based interfaces and applications are available to help users effectively explore, visualize and download the thousands of experiments and tens of millions of gene expression patterns stored in GEO. This paper provides a summary of the GEO database structure and user facilities, and describes recent enhancements to database design, performance, submission format options, data query and retrieval utilities. GEO is accessible at http://www.ncbi.nlm.nih.gov/geo/
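    The SOFT format mentioned above is a simple line-oriented text format. As a hedged sketch, the reader for its three metadata line types (entity `^`, attribute `!`, column description `#`) might look like the following; the sample content is invented and real SOFT files carry much more, including the data table itself.

```python
# Minimal reader for SOFT-style metadata lines. The record content
# below is a made-up illustration, not real GEO data.

SOFT_SAMPLE = """\
^SAMPLE = GSM0000
!Sample_title = example profile
!Sample_organism_ch1 = Homo sapiens
#ID_REF = probe identifier
#VALUE = normalized signal
"""

def parse_soft(text):
    record = {"entity": None, "attributes": {}, "columns": {}}
    for line in text.splitlines():
        if not line.strip():
            continue
        marker, rest = line[0], line[1:]
        key, _, value = rest.partition("=")
        key, value = key.strip(), value.strip()
        if marker == "^":                 # entity line (e.g. a sample)
            record["entity"] = (key, value)
        elif marker == "!":               # attribute line
            record["attributes"][key] = value
        elif marker == "#":               # column description
            record["columns"][key] = value
    return record

rec = parse_soft(SOFT_SAMPLE)
print(rec["entity"])  # ('SAMPLE', 'GSM0000')
```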

  2. Fracture toughness testing on ferritic alloys using the electropotential technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, F.H.; Wire, G.L.

    1981-06-11

    Fracture toughness measurements as done conventionally require large specimens (5 x 5 x 2.5 cm), which would be prohibitively expensive to irradiate over the fluence and temperature ranges required for first wall design. To overcome this difficulty, a single-specimen technique for J-integral fracture toughness measurements on miniature specimens (1.6 cm OD x 0.25 cm thick) was developed. Comparisons with specimens three times as thick show that the derived J(1c) is constant, validating the specimen for first wall applications. The electropotential technique was used to obtain continuous crack extension measurements, allowing a ductile fracture resistance curve to be constructed from a single specimen. The irradiation test volume required for fracture toughness measurements using both miniature specimens and single-specimen J measurements was reduced by a factor of 320, making it possible to perform a systematic exploration of irradiation temperature and dose variables as required for qualification of HT-9 and 9Cr-1Mo base metal and welds for first wall application. Fracture toughness test results for HT-9 and 9Cr-1Mo from 25 to 539°C are presented to illustrate the single-specimen technique.

  3. Controllable growth of polyaniline nanowire arrays on hierarchical macro/mesoporous graphene foams for high-performance flexible supercapacitors

    NASA Astrophysics Data System (ADS)

    Yu, Pingping; Zhao, Xin; Li, Yingzhi; Zhang, Qinghua

    2017-01-01

    A free-standing, hierarchically macro/mesoporous flexible graphene foam has been constructed by rational integration of well-dispersed graphene oxide sheets and amino-modified polystyrene (PS) spheres through a facile "templating and embossing" technique. The three-dimensional (3D) macro/mesoporous flexible graphene foam not only inherits the uniform porous structure of graphene foam, but also contains hierarchical macro/mesopores on the struts, created by sacrificing the PS spheres and by KOH activation, which provide rapid pathways for ionic and electronic transport and hence a high specific capacitance. Vertical polyaniline (PANI) nanowire arrays are then uniformly deposited onto the hierarchical macro/mesoporous graphene foam (fRGO-F/PANI) by a simple in situ polymerization, yielding a high specific capacitance of 939 F g-1. Thanks to the synergistic function of the 3D bicontinuous hierarchical porous structure of the graphene foam and the effective immobilization of PANI nanowires on the struts, the assembled symmetric supercapacitor with fRGO-F/PANI electrodes exhibits a maximum energy density of 20.9 Wh kg-1 and a maximum power density of 103.2 kW kg-1. Moreover, it also displays excellent cyclic stability, with 88.7% retention after 5000 cycles.

  4. An in-situ synthesis of Ag/AgCl/TiO2/hierarchical porous magnesian material and its photocatalytic performance

    PubMed Central

    Yang, Lu; Wang, Fazhou; Shu, Chang; Liu, Peng; Zhang, Wenqin; Hu, Shuguang

    2016-01-01

    The absorption ability and photocatalytic activity of photocatalytic materials play important roles in improving pollutant removal. Herein, we report a new kind of photocatalytic material, synthesized by simultaneously designing a hierarchical porous magnesian (PM) substrate and modifying it with a TiO2 catalyst. In particular, the PM substrate can be facilely prepared by controlling its crystal phase (Phase 5, Mg3Cl(OH)5·4H2O), while Ag/AgCl modification of TiO2 can be achieved by in situ ion exchange between Ag+ and the above crystal phase. Physicochemical analysis shows that the Ag/AgCl/TiO2/PM material has a higher visible and ultraviolet light absorption response and excellent gas absorption performance compared to the controls. This suggests that the Ag/AgCl/TiO2/PM material can produce more efficient photocatalytic effects: its photocatalytic reaction rate was 5.21 and 30.57 times higher than that of TiO2/PM and of TiO2 on a nonporous magnesian substrate, respectively. Thus, this material and its integrated synthesis method could provide a novel strategy for the high-efficiency application and modification of TiO2 photocatalysts in the engineering field. PMID:26883972

  5. Crosstalk between intracellular and extracellular signals regulating interneuron production, migration and integration into the cortex

    PubMed Central

    Peyre, Elise; Silva, Carla G.; Nguyen, Laurent

    2015-01-01

    During embryogenesis, cortical interneurons are generated by ventral progenitors located in the ganglionic eminences of the telencephalon. They travel along multiple tangential paths to populate the cortical wall. As they reach this structure they undergo intracortical dispersion to settle in their final destination. At the cellular level, migrating interneurons are highly polarized cells that extend and retract processes using dynamic remodeling of the microtubule and actin cytoskeletons. Different levels of molecular regulation contribute to interneuron migration. These include: (1) Extrinsic guidance cues distributed along migratory streams that are sensed and integrated by migrating interneurons; (2) Intrinsic genetic programs driven by specific transcription factors that grant specification and set the timing of migration for different subtypes of interneurons; (3) Adhesion molecules and cytoskeletal elements/regulators that transduce molecular signals into coherent movement. These levels of molecular regulation must be properly integrated by interneurons to allow their migration in the cortex. The aim of this review is to summarize our current knowledge of the interplay between microenvironmental signals and cell autonomous programs that drive cortical interneuron production, tangential migration, and integration in the developing cerebral cortex. PMID:25926769

  6. Promoting North-South partnership in space data use and applications: Case study - East African countries space programs/projects new- concepts in document management

    NASA Astrophysics Data System (ADS)

    Mlimandago, S.

    This research paper presents several simple new concepts in document management for space projects and programs, which can be applied in both developing and developed countries. These concepts have been applied in Tanzania, Kenya and Uganda and have been found to produce very good results using simple procedures. The integral project based its document management approach from the outset on electronic document sharing and archiving. The main objective of the new concepts was to provide faster and wider availability of the most current space information to all parties, rather than to create a paperless office. Implementing the new approach required capturing documents in an appropriate and simple electronic format at the source, establishing new procedures for project-wide information sharing, and deploying a new generation of simple web-based tools. Key success factors were the early adoption of Internet technologies and simple procedures for improved information flow, concepts that can be applied in both developed and developing countries.

  7. Construction of the Database for Pulsating Variable Stars

    NASA Astrophysics Data System (ADS)

    Chen, Bing-Qiu; Yang, Ming; Jiang, Bi-Wei

    2012-01-01

    A database for pulsating variable stars has been constructed to support the study of variable stars in China. The database includes about 230,000 variable stars in the Galactic bulge, the LMC and the SMC, observed over a roughly 10-year period by the MACHO (MAssive Compact Halo Objects) and OGLE (Optical Gravitational Lensing Experiment) projects. The software used for the construction is LAMP, i.e., Linux+Apache+MySQL+PHP. A web page is provided for searching the photometric data and light curves in the database by the right ascension and declination of an object. Because of the flexibility of this database, more up-to-date data on variable stars can be incorporated conveniently.

  8. RISE: a database of RNA interactome from sequencing experiments

    PubMed Central

    Gong, Jing; Shao, Di; Xu, Kui

    2018-01-01

    Abstract We present RISE (http://rise.zhanglab.net), a database of RNA Interactome from Sequencing Experiments. RNA-RNA interactions (RRIs) are essential for RNA regulation and function. RISE provides a comprehensive collection of RRIs that mainly come from recent transcriptome-wide sequencing-based experiments like PARIS, SPLASH, LIGR-seq, and MARIO, as well as targeted studies like RIA-seq, RAP-RNA and CLASH. It also includes interactions aggregated from other primary databases and publications. The RISE database currently contains 328,811 RNA-RNA interactions mainly in human, mouse and yeast. While most existing RNA databases mainly contain interactions of miRNA targeting, notably, more than half of the RRIs in RISE are among mRNAs and long non-coding RNAs. We compared different RRI datasets in RISE and found limited overlaps in interactions resolved by different techniques and in different cell lines. This may reflect both technology-specific preferences and the dynamic nature of RRIs. We also analyzed the basic features of the human and mouse RRI networks and found that they tend to be scale-free, small-world, hierarchical and modular. The analysis may nominate important RNAs or RRIs for further investigation. Finally, RISE provides a Circos plot and several table views for integrative visualization, with extensive molecular and functional annotations to facilitate exploration of biological functions for any RRI of interest. PMID:29040625
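    The network features mentioned in the abstract (scale-free, hub-dominated structure) start from a degree count over the interaction edges. A minimal sketch on a toy interaction list, with invented RNA names, might look like this:

```python
# Degree statistics of a toy RNA-RNA interaction network; the highest-
# degree node is the candidate "hub". RNA names are invented.

from collections import Counter

edges = [("rna1", "rna2"), ("rna1", "rna3"), ("rna1", "rna4"),
         ("rna2", "rna3"), ("rna5", "rna1")]

degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

hub, hub_degree = degree.most_common(1)[0]
print(hub, hub_degree)  # rna1 4
```

    In a scale-free network the resulting degree distribution is heavy-tailed: a few hubs like `rna1` concentrate most interactions.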

  9. CDM analysis

    NASA Technical Reports Server (NTRS)

    Larson, Robert E.; Mcentire, Paul L.; Oreilly, John G.

    1993-01-01

    The C Data Manager (CDM) is an advanced tool for creating an object-oriented database and for processing queries related to objects stored in that database. The CDM source code was purchased and will be modified over the course of the Arachnid project. In this report, the modified CDM is referred to as MCDM. Using MCDM, a detailed series of experiments was designed and conducted on a Sun Sparcstation. The primary results and analysis of the CDM experiment are provided in this report. The experiments involved creating the Long-form Faint Source Catalog (LFSC) database and then analyzing it with respect to following: (1) the relationships between the volume of data and the time required to create a database; (2) the storage requirements of the database files; and (3) the properties of query algorithms. The effort focused on defining, implementing, and analyzing seven experimental scenarios: (1) find all sources by right ascension--RA; (2) find all sources by declination--DEC; (3) find all sources in the right ascension interval--RA1, RA2; (4) find all sources in the declination interval--DEC1, DEC2; (5) find all sources in the rectangle defined by--RA1, RA2, DEC1, DEC2; (6) find all sources that meet certain compound conditions; and (7) analyze a variety of query algorithms. Throughout this document, the numerical results obtained from these scenarios are reported; conclusions are presented at the end of the document.
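    The query scenarios above reduce to interval and rectangle tests over (RA, DEC) pairs. A self-contained sketch over an in-memory source list (the field names are illustrative, not the actual LFSC schema) is:

```python
# Scenario 5 from the list above: find all sources in the rectangle
# defined by RA1, RA2, DEC1, DEC2. Scenarios 3 and 4 are the
# one-dimensional special cases of the same predicate.

sources = [
    {"id": 1, "ra": 10.0, "dec": -5.0},
    {"id": 2, "ra": 12.5, "dec": 3.0},
    {"id": 3, "ra": 20.0, "dec": 3.5},
]

def in_interval(x, lo, hi):
    return lo <= x <= hi

def query_rectangle(sources, ra1, ra2, dec1, dec2):
    return [s for s in sources
            if in_interval(s["ra"], ra1, ra2)
            and in_interval(s["dec"], dec1, dec2)]

hits = query_rectangle(sources, 10.0, 15.0, 0.0, 5.0)
print([s["id"] for s in hits])  # [1]? no: source 1 fails the DEC test -> [2]
```

    The timing experiments in the report compare algorithms for exactly this kind of predicate when the source catalog is indexed rather than scanned linearly.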

  10. COMBREX-DB: an experiment centered database of protein function: knowledge, predictions and knowledge gaps.

    PubMed

    Chang, Yi-Chien; Hu, Zhenjun; Rachlin, John; Anton, Brian P; Kasif, Simon; Roberts, Richard J; Steffen, Martin

    2016-01-04

    The COMBREX database (COMBREX-DB; combrex.bu.edu) is an online repository of information related to (i) experimentally determined protein function, (ii) predicted protein function, (iii) relationships among proteins of unknown function and various types of experimental data, including molecular function, protein structure, and associated phenotypes. The database was created as part of the novel COMBREX (COMputational BRidges to EXperiments) effort aimed at accelerating the rate of gene function validation. It currently holds information on ∼ 3.3 million known and predicted proteins from over 1000 completely sequenced bacterial and archaeal genomes. The database also contains a prototype recommendation system for helping users identify those proteins whose experimental determination of function would be most informative for predicting function for other proteins within protein families. The emphasis on documenting experimental evidence for function predictions, and the prioritization of uncharacterized proteins for experimental testing distinguish COMBREX from other publicly available microbial genomics resources. This article describes updates to COMBREX-DB since an initial description in the 2011 NAR Database Issue. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  11. The Starlite Project

    DTIC Science & Technology

    1990-09-01

    conflicts. The current prototyping tool also provides a multiversion data object control mechanism. From a series of experiments, we found that the...performance of a multiversion distributed database system is quite sensitive to the size of read-sets and write-sets of transactions. A multiversion database...510-512. (18) Son, S. H. and N. Haghighi, "Performance Evaluation of Multiversion Database Systems," Sixth IEEE International Conference on Data

  12. Hand-held computer operating system program for collection of resident experience data.

    PubMed

    Malan, T K; Haffner, W H; Armstrong, A Y; Satin, A J

    2000-11-01

    To describe a system for recording resident experience involving hand-held computers running the Palm Operating System (3Com, Inc., Santa Clara, CA). Hand-held personal computers (PCs) are popular, easy to use, inexpensive, portable, and can share data with other operating systems. Residents in our program carry individual hand-held database computers to record Residency Review Committee (RRC)-reportable patient encounters. Each resident's data are transferred to a single central relational database compatible with Microsoft Access (Microsoft Corporation, Redmond, WA). Patient data entry and subsequent transfer to a central database are accomplished with commercially available software that requires minimal computer expertise to implement and maintain. The central database can then be used for statistical analysis or to create required RRC resident experience reports. As a result, the data collection and transfer process takes less time for residents and the program director alike than paper-based or central-computer-based systems. The system of collecting resident encounter data using hand-held computers with the Palm Operating System is easy to use, relatively inexpensive, accurate, and secure. The user-friendly system provides prompt, complete, and accurate data, enhancing the education of residents while facilitating the job of the program director.
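    The central step described above, merging per-resident records into one relational store and summarizing them for reports, can be sketched with an in-memory SQLite database. This is an illustration of the pattern only; the table layout and procedure codes are invented, not the actual Access schema.

```python
# Merge per-resident encounter uploads into a central relational
# database and produce an aggregate report, as in the hand-held-to-
# central-database workflow described above.

import sqlite3

central = sqlite3.connect(":memory:")
central.execute("""CREATE TABLE encounters
                   (resident TEXT, procedure_code TEXT, n INTEGER)""")

# Records as they might arrive from each resident's hand-held device.
uploads = {
    "resident_a": [("D001", 3), ("D002", 1)],
    "resident_b": [("D001", 2)],
}
for resident, rows in uploads.items():
    central.executemany(
        "INSERT INTO encounters VALUES (?, ?, ?)",
        [(resident, code, n) for code, n in rows])

# Report-style summary: total encounters per procedure across residents.
totals = dict(central.execute(
    "SELECT procedure_code, SUM(n) FROM encounters GROUP BY procedure_code"))
print(totals)  # {'D001': 5, 'D002': 1}
```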

  13. Construction of a nasopharyngeal carcinoma 2D/MS repository with Open Source XML database--Xindice.

    PubMed

    Li, Feng; Li, Maoyu; Xiao, Zhiqiang; Zhang, Pengfei; Li, Jianling; Chen, Zhuchu

    2006-01-11

    Many proteomics initiatives require integration of all information with uniform criteria, from collection of samples and data display to publication of experimental results. The integration and exchange of these data, which differ in format and structure, pose a great challenge. XML technology shows promise in handling this task due to its simplicity and flexibility. Nasopharyngeal carcinoma (NPC) is one of the most common cancers in southern China and Southeast Asia, with marked geographic and racial differences in incidence. Although some cancer proteome databases exist, there is still no NPC proteome database. The raw NPC proteome experiment data were captured into one XML document with the Human Proteome Markup Language (HUP-ML) editor and imported into the native XML database Xindice. The 2D/MS repository of the NPC proteome was constructed with Apache, PHP and Xindice to provide access to the database via the Internet. On our website, two methods, keyword query and click query, are provided to access the entries of the NPC proteome database. Our 2D/MS repository can be used to share the raw NPC proteomics data that are generated from gel-based proteomics experiments. The database, as well as the PHP source code for constructing users' own proteome repositories, can be accessed at http://www.xyproteomics.org/.
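    The keyword-query idea over XML records can be sketched with only the standard library. The element names below are invented for illustration, not actual HUP-ML, and a native XML database like Xindice would use XPath over stored collections rather than an in-memory tree:

```python
# Keyword query over a toy XML proteome record: return the ids of
# 2D-gel spots whose protein name contains the keyword.

import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<experiments>
  <spot id="s1"><protein>keratin</protein><mass>58000</mass></spot>
  <spot id="s2"><protein>annexin A1</protein><mass>38700</mass></spot>
</experiments>
""")

def keyword_query(root, keyword):
    return [spot.get("id") for spot in root.findall("spot")
            if keyword in spot.findtext("protein", "")]

print(keyword_query(doc, "annexin"))  # ['s2']
```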

  14. Review and assessment of turbulence models for hypersonic flows

    NASA Astrophysics Data System (ADS)

    Roy, Christopher J.; Blottner, Frederick G.

    2006-10-01

    Accurate aerodynamic prediction is critical for the design and optimization of hypersonic vehicles. Turbulence modeling remains a major source of uncertainty in the computational prediction of aerodynamic forces and heating for these systems. The first goal of this article is to update the previous comprehensive review of hypersonic shock/turbulent boundary-layer interaction experiments published in 1991 by Settles and Dodson (Hypersonic shock/boundary-layer interaction database. NASA CR 177577, 1991). In their review, Settles and Dodson developed a methodology for assessing experiments appropriate for turbulence model validation and critically surveyed the existing hypersonic experiments. We limit the scope of our current effort by considering only two-dimensional (2D)/axisymmetric flows in the hypersonic flow regime where calorically perfect gas models are appropriate. We extend the prior database of recommended hypersonic experiments (on four 2D and two 3D shock-interaction geometries) by adding three new geometries. The first two geometries, the flat plate/cylinder and the sharp cone, are canonical, zero-pressure gradient flows which are amenable to theory-based correlations, and these correlations are discussed in detail. The third geometry added is the 2D shock impinging on a turbulent flat plate boundary layer. The current 2D hypersonic database for shock-interaction flows thus consists of nine experiments on five different geometries. The second goal of this study is to review and assess the validation usage of various turbulence models on the existing experimental database. Here we limit the scope to one- and two-equation turbulence models where integration to the wall is used (i.e., we omit studies involving wall functions). A methodology for validating turbulence models is given, followed by an extensive evaluation of the turbulence models on the current hypersonic experimental database. 
A total of 18 one- and two-equation turbulence models are reviewed, and results of turbulence model assessments for the six models that have been extensively applied to the hypersonic validation database are compiled and presented in graphical form. While some of the turbulence models do provide reasonable predictions for the surface pressure, the predictions for surface heat flux are generally poor, and often in error by a factor of four or more. In the vast majority of the turbulence model validation studies we review, the authors fail to adequately address the numerical accuracy of the simulations (i.e., discretization and iterative error) and the sensitivities of the model predictions to freestream turbulence quantities or near-wall y+ mesh spacing. We recommend new hypersonic experiments be conducted which (1) measure not only surface quantities but also mean and fluctuating quantities in the interaction region and (2) provide careful estimates of both random experimental uncertainties and correlated bias errors for the measured quantities and freestream conditions. For the turbulence models, we recommend that a wide-range of turbulence models (including newer models) be re-examined on the current hypersonic experimental database, including the more recent experiments. Any future turbulence model validation efforts should carefully assess the numerical accuracy and model sensitivities. In addition, model corrections (e.g., compressibility corrections) should be carefully examined for their effects on a standard, low-speed validation database. Finally, as new experiments or direct numerical simulation data become available with information on mean and fluctuating quantities, they should be used to improve the turbulence models and thus increase their predictive capability.

  15. Databases for LDEF results

    NASA Technical Reports Server (NTRS)

    Bohnhoff-Hlavacek, Gail

    1992-01-01

    One of the objectives of the team supporting the LDEF Systems and Materials Special Investigative Groups is to develop databases of experimental findings. These databases identify the hardware flown, summarize results and conclusions, and provide a system for acknowledging investigators, tracing sources of data, and recording future design suggestions. To date, databases covering the optical experiments and the thermal control materials (chromic acid anodized aluminum, silverized Teflon blankets, and paints) have been developed at Boeing. We used the FileMaker Pro software, the database manager for the Macintosh computer produced by the Claris Corporation. It is a flat, text-retrievable database that provides access to the data via an intuitive user interface, without tedious programming. Though this software is available only for the Macintosh computer at this time, copies of the databases can be saved to a format that is readable on a personal computer as well, and the data can be exported to more powerful relational databases. This paper describes the capabilities and use of the LDEF databases and explains how to get copies of the databases for your own research.

  16. Third chronological supplement to the carcinogenic potency database: standardized results of animal bioassays published through December 1986 and by the National Toxicology Program through June 1987.

    PubMed Central

    Gold, L S; Slone, T H; Backman, G M; Eisenberg, S; Da Costa, M; Wong, M; Manley, N B; Rohrbach, L; Ames, B N

    1990-01-01

This paper is the third chronological supplement to the Carcinogenic Potency Database that first appeared in this journal in 1984. We report here results of carcinogenesis bioassays published in the general literature between January 1985 and December 1986, and in Technical Reports of the National Toxicology Program between June 1986 and June 1987. This supplement includes results of 337 long-term, chronic experiments on 121 compounds, and reports the same information about each experiment in the same plot format as the earlier papers, e.g., the species and strain of animal, the route and duration of compound administration, dose level and other aspects of experimental protocol, histopathology and tumor incidence, TD50 (carcinogenic potency) and its statistical significance, dose response, opinion of the author about carcinogenicity, and literature citation. The reader should refer to the 1984 publication for a guide to the plot of the database, a complete description of the numerical index of carcinogenic potency, and a discussion of the sources of data, the rationale for the inclusion of particular experiments and particular target sites, and the conventions adopted in summarizing the literature. The four plots of the database are to be used together, as results published earlier are not repeated. In all, the four plots include results for approximately 4000 experiments on 1050 chemicals. Appendix 14 of this paper is an alphabetical index to all chemicals in the database and indicates which plot(s) each chemical appears in. A combined plot of all results from the four separate papers, ordered alphabetically by chemical, is available from the first author in printed form or on computer tape or diskette. PMID:2351123

  17. Initial experiences with building a health care infrastructure based on Java and object-oriented database technology.

    PubMed

    Dionisio, J D; Sinha, U; Dai, B; Johnson, D B; Taira, R K

    1999-01-01

A multi-tiered telemedicine system based on Java and object-oriented database technology has yielded a number of practical insights into their effectiveness and suitability as implementation bases for a health care infrastructure. The advantages and drawbacks of their use, as seen within the context of the telemedicine system's development, are discussed. Overall, these technologies deliver on their early promise, with a few remaining issues due primarily to their relative newness.

  18. Integration of Narrative Processing, Data Fusion, and Database Updating Techniques in an Automated System.

    DTIC Science & Technology

    1981-10-29

    are implemented, respectively, in the files "W-Update," "W-combine" and RW-Copy," listed in the appendix. The appendix begins with a typescript of an...the typescript ) and the copying process (steps 45 and 46) are shown as human actions in the typescript , but can be performed easily by a "master...for Natural Language, M. Marcus, MIT Press, 1980. I 29 APPENDIX: DATABASE UPDATING EXPERIMENT 30 CONTENTS Typescript of an experiment in Rosie

  19. The HUPO PSI's molecular interaction format--a community standard for the representation of protein interaction data.

    PubMed

    Hermjakob, Henning; Montecchi-Palazzi, Luisa; Bader, Gary; Wojcik, Jérôme; Salwinski, Lukasz; Ceol, Arnaud; Moore, Susan; Orchard, Sandra; Sarkans, Ugis; von Mering, Christian; Roechert, Bernd; Poux, Sylvain; Jung, Eva; Mersch, Henning; Kersey, Paul; Lappe, Michael; Li, Yixue; Zeng, Rong; Rana, Debashis; Nikolski, Macha; Husi, Holger; Brun, Christine; Shanker, K; Grant, Seth G N; Sander, Chris; Bork, Peer; Zhu, Weimin; Pandey, Akhilesh; Brazma, Alvis; Jacq, Bernard; Vidal, Marc; Sherman, David; Legrain, Pierre; Cesareni, Gianni; Xenarios, Ioannis; Eisenberg, David; Steipe, Boris; Hogue, Chris; Apweiler, Rolf

    2004-02-01

    A major goal of proteomics is the complete description of the protein interaction network underlying cell physiology. A large number of small scale and, more recently, large-scale experiments have contributed to expanding our understanding of the nature of the interaction network. However, the necessary data integration across experiments is currently hampered by the fragmentation of publicly available protein interaction data, which exists in different formats in databases, on authors' websites or sometimes only in print publications. Here, we propose a community standard data model for the representation and exchange of protein interaction data. This data model has been jointly developed by members of the Proteomics Standards Initiative (PSI), a work group of the Human Proteome Organization (HUPO), and is supported by major protein interaction data providers, in particular the Biomolecular Interaction Network Database (BIND), Cellzome (Heidelberg, Germany), the Database of Interacting Proteins (DIP), Dana Farber Cancer Institute (Boston, MA, USA), the Human Protein Reference Database (HPRD), Hybrigenics (Paris, France), the European Bioinformatics Institute's (EMBL-EBI, Hinxton, UK) IntAct, the Molecular Interactions (MINT, Rome, Italy) database, the Protein-Protein Interaction Database (PPID, Edinburgh, UK) and the Search Tool for the Retrieval of Interacting Genes/Proteins (STRING, EMBL, Heidelberg, Germany).

  20. A dynamic appearance descriptor approach to facial actions temporal modeling.

    PubMed

    Jiang, Bihan; Valstar, Michel; Martinez, Brais; Pantic, Maja

    2014-02-01

    Both the configuration and the dynamics of facial expressions are crucial for the interpretation of human facial behavior. Yet to date, the vast majority of reported efforts in the field either do not take the dynamics of facial expressions into account, or focus only on prototypic facial expressions of six basic emotions. Facial dynamics can be explicitly analyzed by detecting the constituent temporal segments in Facial Action Coding System (FACS) Action Units (AUs)-onset, apex, and offset. In this paper, we present a novel approach to explicit analysis of temporal dynamics of facial actions using the dynamic appearance descriptor Local Phase Quantization from Three Orthogonal Planes (LPQ-TOP). Temporal segments are detected by combining a discriminative classifier for detecting the temporal segments on a frame-by-frame basis with Markov Models that enforce temporal consistency over the whole episode. The system is evaluated in detail over the MMI facial expression database, the UNBC-McMaster pain database, the SAL database, the GEMEP-FERA dataset in database-dependent experiments, in cross-database experiments using the Cohn-Kanade, and the SEMAINE databases. The comparison with other state-of-the-art methods shows that the proposed LPQ-TOP method outperforms the other approaches for the problem of AU temporal segment detection, and that overall AU activation detection benefits from dynamic appearance information.

  1. Database of proposed payloads and instruments for SEI missions

    NASA Technical Reports Server (NTRS)

    Barlow, N. G.

    1992-01-01

A database of all payloads and instruments proposed for lunar and Mars missions was compiled by the author for the Exploration Programs Office at NASA's Johnson Space Center. The database is an outgrowth of the document produced by C. J. Budney et al. at the Jet Propulsion Laboratory in 1991. The present database consists not only of payloads proposed for human exploratory missions of the Moon and Mars, but also experiments selected or proposed for robotic precursor missions such as Lunar Scout, Mars Observer, and MESUR. The database consists of two parts: a written payload description and a matrix that provides a breakdown of payload components. Each payload description consists of the following information: (1) the rationale for why the instrument or payload package is being proposed for operation on the Moon or Mars; (2) a description of how the instrument works; (3) a breakdown of the payload, providing detailed information about the mass, volume, power requirements, and data rates for the constituent pieces of the experiment; (4) estimates of the power consumption and data rate; (5) how the data will be returned to Earth and distributed to the scientific community; (6) any constraints on the location or conditions under which the instrument can or cannot operate; (7) what type of crew interaction (if any) is needed; (8) how the payload is to be delivered to the lunar or martian surface (along with alternative delivery options); (9) how long the instrument or payload package will take to set up; (10) what type of maintenance needs are anticipated for the experiment; (11) the stage of development of the instrument and the environmental conditions under which it has been tested; (12) any interface required by the instrument with the lander, a rover, an outpost, etc.; (13) information about how often the experiment will need to be resupplied with parts or consumables, if it is to be resupplied; (14) the name and affiliation of a contact person for the experiment; and (15) references where further information about the experiment can be found.

  2. MARC and Relational Databases.

    ERIC Educational Resources Information Center

    Llorens, Jose; Trenor, Asuncion

    1993-01-01

    Discusses the use of MARC format in relational databases and addresses problems of incompatibilities. A solution is presented that is in accordance with Open Systems Interconnection (OSI) standards and is based on experiences at the library of the Universidad Politecnica de Valencia (Spain). (four references) (EA)

  3. Kristin Munch | NREL

    Science.gov Websites

Expertise: scientific data management, database and data systems design, database clusters, storage systems integration, and distributed data analytics. She has used her experience in laboratory data management systems, lab

  4. GE781: a Monte Carlo package for fixed target experiments

    NASA Astrophysics Data System (ADS)

    Davidenko, G.; Funk, M. A.; Kim, V.; Kuropatkin, N.; Kurshetsov, V.; Molchanov, V.; Rud, S.; Stutte, L.; Verebryusov, V.; Zukanovich Funchal, R.

The Monte Carlo package for the fixed target experiment E781 at Fermilab, a third-generation charmed baryon experiment, is described. The package is based on GEANT 3.21, the ADAMO database, and DAFT input/output routines.

  5. [EXPERIENCE IN THE APPLICATION OF DATABASES ON BLOODSUCKING INSECTS IN ZOOLOGICAL STUDIES].

    PubMed

    Medvedev, S G; Khalikov, R G

    2016-01-01

The paper summarizes long-term experience in accumulating and organizing faunistic information by means of separate databases (DB) and information analytical systems (IAS), and discusses the prospects of presenting such information through modern multi-user information systems. The experience gained during the development and practical use of the PARHOST1 IAS for the study of the world flea fauna, and during work with personal databases created for the study of bloodsucking insects (lice and blackflies), is analyzed. Research collection material on the type series of 57 species and subspecies of fleas of the fauna of Russia was incorporated into a multi-user information retrieval system on the web portal of the Zoological Institute of the Russian Academy of Sciences. Following earlier investigations, the system allows the information to be deposited in its authentic form and then gradually transformed, i.e., unified and structured. To ensure continuity of database updating, the system supports work by operators with different degrees of competence.

  6. Monitoring product safety in the postmarketing environment.

    PubMed

    Sharrar, Robert G; Dieck, Gretchen S

    2013-10-01

    The safety profile of a medicinal product may change in the postmarketing environment. Safety issues not identified in clinical development may be seen and need to be evaluated. Methods of evaluating spontaneous adverse experience reports and identifying new safety risks include a review of individual reports, a review of a frequency distribution of a list of the adverse experiences, the development and analysis of a case series, and various ways of examining the database for signals of disproportionality, which may suggest a possible association. Regulatory agencies monitor product safety through a variety of mechanisms including signal detection of the adverse experience safety reports in databases and by requiring and monitoring risk management plans, periodic safety update reports and postauthorization safety studies. The United States Food and Drug Administration is working with public, academic and private entities to develop methods for using large electronic databases to actively monitor product safety. Important identified risks will have to be evaluated through observational studies and registries.

  7. Geographical and temporal distribution of basic research experiments in homeopathy.

    PubMed

    Clausen, Jürgen; van Wijk, Roeland; Albrecht, Henning

    2014-07-01

    The database HomBRex (Homeopathy Basic Research experiments) was established in 2002 to provide an overview of the basic research already done on homeopathy (http://www.carstens-stiftung.de/hombrex). By this means, it facilitates the exploration of the Similia Principle and the working mechanism of homeopathy. Since 2002, the total number of experiments listed has almost doubled. The current review reports the history of basic research in homeopathy as evidenced by publication dates and origin of publications. In July 2013, the database held 1868 entries. Most publications were reported from France (n = 267), followed by Germany (n = 246) and India (n = 237). In the last ten years, the number of publications from Brazil dramatically increased from n = 13 (before 2004) to n = 164 (compared to n = 251 published in France before 2004, and n = 16 between 2004 and 2013). The oldest database entry was from Germany (1832). Copyright © 2014 The Faculty of Homeopathy. Published by Elsevier Ltd. All rights reserved.

  8. RESIS-II: An Updated Version of the Original Reservoir Sedimentation Survey Information System (RESIS) Database

    USGS Publications Warehouse

    Ackerman, Katherine V.; Mixon, David M.; Sundquist, Eric T.; Stallard, Robert F.; Schwarz, Gregory E.; Stewart, David W.

    2009-01-01

    The Reservoir Sedimentation Survey Information System (RESIS) database, originally compiled by the Soil Conservation Service (now the Natural Resources Conservation Service) in collaboration with the Texas Agricultural Experiment Station, is the most comprehensive compilation of data from reservoir sedimentation surveys throughout the conterminous United States (U.S.). The database is a cumulative historical archive that includes data from as early as 1755 and as late as 1993. The 1,823 reservoirs included in the database range in size from farm ponds to the largest U.S. reservoirs (such as Lake Mead). Results from 6,617 bathymetric surveys are available in the database. This Data Series provides an improved version of the original RESIS database, termed RESIS-II, and a report describing RESIS-II. The RESIS-II relational database is stored in Microsoft Access and includes more precise location coordinates for most of the reservoirs than the original database but excludes information on reservoir ownership. RESIS-II is anticipated to be a template for further improvements in the database.

  9. NCBI GEO: mining millions of expression profiles--database and tools.

    PubMed

    Barrett, Tanya; Suzek, Tugba O; Troup, Dennis B; Wilhite, Stephen E; Ngau, Wing-Chi; Ledoux, Pierre; Rudnev, Dmitry; Lash, Alex E; Fujibuchi, Wataru; Edgar, Ron

    2005-01-01

    The Gene Expression Omnibus (GEO) at the National Center for Biotechnology Information (NCBI) is the largest fully public repository for high-throughput molecular abundance data, primarily gene expression data. The database has a flexible and open design that allows the submission, storage and retrieval of many data types. These data include microarray-based experiments measuring the abundance of mRNA, genomic DNA and protein molecules, as well as non-array-based technologies such as serial analysis of gene expression (SAGE) and mass spectrometry proteomic technology. GEO currently holds over 30,000 submissions representing approximately half a billion individual molecular abundance measurements, for over 100 organisms. Here, we describe recent database developments that facilitate effective mining and visualization of these data. Features are provided to examine data from both experiment- and gene-centric perspectives using user-friendly Web-based interfaces accessible to those without computational or microarray-related analytical expertise. The GEO database is publicly accessible through the World Wide Web at http://www.ncbi.nlm.nih.gov/geo.

  10. Database computing in HEP

    NASA Technical Reports Server (NTRS)

    Day, C. T.; Loken, S.; Macfarlane, J. F.; May, E.; Lifka, D.; Lusk, E.; Price, L. E.; Baden, A.; Grossman, R.; Qin, X.

    1992-01-01

    The major SSC experiments are expected to produce up to 1 Petabyte of data per year each. Once the primary reconstruction is completed by farms of inexpensive processors, I/O becomes a major factor in further analysis of the data. We believe that the application of database techniques can significantly reduce the I/O performed in these analyses. We present examples of such I/O reductions in prototypes based on relational and object-oriented databases of CDF data samples.

  11. Risk and Resiliency for Dementia: Comparison of Male and Female Veterans

    DTIC Science & Technology

    2017-09-01

    from the Veterans Health Administration (VHA) National Patient Care Database (NPCD) 2. Obtain data from the Veterans Health Administration (VHA...National Patient Care Database (NPCD): Months 6-12  In the second quarter, we submitted and received approval to receive data from the VHA NPCD  In...injury. We plan to capitalize on our prior experience working with the Veterans Health Administration National Patient Care Database . We will use data

  12. [Design and establishment of modern literature database about acupuncture Deqi].

    PubMed

    Guo, Zheng-rong; Qian, Gui-feng; Pan, Qiu-yin; Wang, Yang; Xin, Si-yuan; Li, Jing; Hao, Jie; Hu, Ni-juan; Zhu, Jiang; Ma, Liang-xiao

    2015-02-01

A search on acupuncture Deqi was conducted in four Chinese-language biomedical databases (CNKI, Wan-Fang, VIP and CBM) and in PubMed, using keywords such as "Deqi", "needle sensation", "needling feeling", "needle feel" and "obtaining qi". A "Modern Literature Database for Acupuncture Deqi" was then built with Microsoft SQL Server 2005 Express Edition; the contents, data types, information structure, and logical constraints of the system table fields are described. The database supports detailed queries on the general information of clinical trials, acupuncturists' experience, ancient medical works, comprehensive literature, etc. It lays a foundation for subsequent evaluation of the quality of the Deqi literature and for data mining of as yet undetected Deqi knowledge.
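As a rough illustration of the kind of schema and keyword query such a literature database involves, here is a minimal sketch. The table and field names are hypothetical (not those of the actual system), and SQLite stands in for the SQL Server edition named above:

```python
import sqlite3

# Hypothetical, simplified schema for a Deqi literature database;
# field names are illustrative assumptions, not the real system's.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE deqi_literature (
        id       INTEGER PRIMARY KEY,
        title    TEXT NOT NULL,
        source   TEXT,   -- e.g. CNKI, Wan-Fang, VIP, CBM, PubMed
        category TEXT,   -- clinical trial, expert experience, ancient work...
        keywords TEXT
    )
""")
conn.execute(
    "INSERT INTO deqi_literature (title, source, category, keywords) "
    "VALUES (?, ?, ?, ?)",
    ("Needle sensation during ST36 stimulation", "PubMed",
     "clinical trial", "Deqi; needling feeling"),
)

# Keyword query analogous to the searches described above.
rows = conn.execute(
    "SELECT title FROM deqi_literature WHERE keywords LIKE ?",
    ("%Deqi%",),
).fetchall()
print(rows[0][0])
```

Parameterized queries (the `?` placeholders) keep such a system safe against malformed keyword input.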

  13. [The future of clinical laboratory database management system].

    PubMed

    Kambe, M; Imidy, D; Matsubara, A; Sugimoto, Y

    1999-09-01

    To assess the present status of the clinical laboratory database management system, the difference between the Clinical Laboratory Information System and Clinical Laboratory System was explained in this study. Although three kinds of database management systems (DBMS) were shown including the relational model, tree model and network model, the relational model was found to be the best DBMS for the clinical laboratory database based on our experience and developments of some clinical laboratory expert systems. As a future clinical laboratory database management system, the IC card system connected to an automatic chemical analyzer was proposed for personal health data management and a microscope/video system was proposed for dynamic data management of leukocytes or bacteria.

  14. Animal Detection in Natural Images: Effects of Color and Image Database

    PubMed Central

    Zhu, Weina; Drewes, Jan; Gegenfurtner, Karl R.

    2013-01-01

The visual system has a remarkable ability to extract categorical information from complex natural scenes. In order to elucidate the role of low-level image features for the recognition of objects in natural scenes, we recorded saccadic eye movements and event-related potentials (ERPs) in two experiments, in which human subjects had to detect animals in previously unseen natural images. We used a new natural image database (ANID) that is free of some of the potential artifacts that have plagued the widely used COREL images. Color and grayscale images picked from the ANID and COREL databases were used. In all experiments, color images induced a greater N1 EEG component at earlier time points than grayscale images. We suggest that this influence of color in animal detection may be masked by later processes when measuring reaction times. The ERP results of go/nogo and forced choice tasks were similar to those reported earlier. The non-animal stimuli induced a larger N1 than animal stimuli in both the COREL and ANID databases. This result indicates that ultra-fast processing of animal images is possible irrespective of the particular database. With the ANID images, the difference between color and grayscale images is more pronounced than with the COREL images. The earlier use of the COREL images might have led to an underestimation of the contribution of color. Therefore, we conclude that the ANID image database is better suited for the investigation of the processing of natural scenes than other databases commonly used. PMID:24130744

  15. Using an international p53 mutation database as a foundation for an online laboratory in an upper level undergraduate biology class.

    PubMed

    Melloy, Patricia G

    2015-01-01

    A two-part laboratory exercise was developed to enhance classroom instruction on the significance of p53 mutations in cancer development. Students were asked to mine key information from an international database of p53 genetic changes related to cancer, the IARC TP53 database. Using this database, students designed several data mining activities to look at the changes in the p53 gene from a number of perspectives, including potential cancer-causing agents leading to particular changes and the prevalence of certain p53 variations in certain cancers. In addition, students gained a global perspective on cancer prevalence in different parts of the world. Students learned how to use the database in the first part of the exercise, and then used that knowledge to search particular cancers and cancer-causing agents of their choosing in the second part of the exercise. Students also connected the information gathered from the p53 exercise to a previous laboratory exercise looking at risk factors for cancer development. The goal of the experience was to increase student knowledge of the link between p53 genetic variation and cancer. Students also were able to walk a similar path through the website as a cancer researcher using the database to enhance bench work-based experiments with complementary large-scale database p53 variation information. © 2014 The International Union of Biochemistry and Molecular Biology.

  16. Introducing the CPL/MUW proteome database: interpretation of human liver and liver cancer proteome profiles by referring to isolated primary cells.

    PubMed

    Wimmer, Helge; Gundacker, Nina C; Griss, Johannes; Haudek, Verena J; Stättner, Stefan; Mohr, Thomas; Zwickl, Hannes; Paulitschke, Verena; Baron, David M; Trittner, Wolfgang; Kubicek, Markus; Bayer, Editha; Slany, Astrid; Gerner, Christopher

    2009-06-01

Interpretation of proteome data with a focus on biomarker discovery largely relies on comparative proteome analyses. Here, we introduce a database-assisted interpretation strategy based on proteome profiles of primary cells. Both 2-D-PAGE and shotgun proteomics are applied, and we obtain high data concordance with these two different techniques. When applying mass analysis of tryptic spot digests from 2-D gels of cytoplasmic fractions, we typically identify several hundred proteins. Using the same protein fractions, we usually identify more than a thousand proteins by shotgun proteomics. The data consistency obtained when comparing these independent data sets exceeds 99% of the proteins identified in the 2-D gels. Many characteristic differences in protein expression of different cells can thus be independently confirmed. Our self-designed SQL database (CPL/MUW - database of the Clinical Proteomics Laboratories at the Medical University of Vienna, accessible via www.meduniwien.ac.at/proteomics/database) facilitates (i) quality management of MS-based protein identification data, (ii) the detection of cell type-specific proteins and (iii) the detection of molecular signatures of specific functional cell states. Here, we demonstrate how the interpretation of proteome profiles obtained from human liver tissue and hepatocellular carcinoma tissue is assisted by the CPL/MUW database. We therefore suggest that the use of reference experiments supported by a tailored database may substantially facilitate the interpretation of proteome profiling experiments.
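The concordance figure quoted above is essentially a set-overlap calculation between two independent identification lists. A minimal sketch, with made-up protein names standing in for real identifications:

```python
# Sketch: concordance between two protein identification lists, as in the
# comparison of 2-D gel and shotgun results described above.
# The protein names below are illustrative, not data from the study.
gel_ids = {"ALB", "GAPDH", "HSP90AA1", "ACTB"}                    # 2-D gel
shotgun_ids = {"ALB", "GAPDH", "HSP90AA1", "ACTB", "VIM", "ANXA2"}  # shotgun

# Fraction of gel-identified proteins confirmed by shotgun proteomics.
concordance = len(gel_ids & shotgun_ids) / len(gel_ids)
print(f"{concordance:.0%}")
```

Because the shotgun list is typically much larger, the concordance is computed relative to the smaller (2-D gel) set, mirroring the "exceeds 99% of the proteins identified in the 2-D gels" phrasing above.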

  17. Experience with Multi-Tier Grid MySQL Database Service Resiliency at BNL

    NASA Astrophysics Data System (ADS)

    Wlodek, Tomasz; Ernst, Michael; Hover, John; Katramatos, Dimitrios; Packard, Jay; Smirnov, Yuri; Yu, Dantong

    2011-12-01

We describe the use of F5's BIG-IP smart switch technology (3600 Series and Local Traffic Manager v9.0) to provide load balancing and automatic fail-over to multiple Grid services (GUMS, VOMS) and their associated back-end MySQL databases. This resiliency is introduced in front of the external application servers and also for the back-end database systems, which is what makes it "multi-tier". The combination of solutions chosen to ensure high availability of the services, in particular the database replication and fail-over mechanism, is discussed in detail. The paper explains the design and configuration of the overall system, including virtual servers, machine pools, and health monitors (which govern routing), as well as the master-slave database scheme and fail-over policies and procedures. Pre-deployment planning and stress testing are outlined, integration of the systems with our Nagios-based facility monitoring and alerting is described, and the application characteristics of GUMS and VOMS that enable effective clustering are explained. We then summarize our practical experiences and real-world scenarios from operating a major US Grid center, and assess the applicability of our approach to other Grid services in the future.
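The fail-over logic a smart switch applies to a database pool can be sketched in a few lines: probe each pool member in priority order and route to the first healthy one. This is a conceptual sketch only; the host names and the `connect()` stub are hypothetical, not BNL's actual configuration:

```python
# Conceptual sketch of health-check-driven fail-over across a database pool.
# Host names are made up; connect() is a stand-in for a real MySQL client.
PRIMARY = "gums-db-master.example.org"
REPLICA = "gums-db-slave.example.org"

def connect(host):
    """Stand-in for a real connection attempt; here the primary is 'down'."""
    if host == PRIMARY:
        raise ConnectionError("primary unreachable")  # simulated outage
    return f"connected:{host}"

def connect_with_failover(hosts):
    """Try each pool member in order, returning the first live connection."""
    last_err = None
    for host in hosts:
        try:
            return connect(host)
        except ConnectionError as err:
            last_err = err  # health check failed; try the next pool member
    raise last_err  # no member of the pool was reachable

print(connect_with_failover([PRIMARY, REPLICA]))
```

In the real deployment this probing is done continuously by the BIG-IP health monitors rather than per-connection by the client, but the routing decision is the same.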

  18. Data bases for forest inventory in the North-Central Region.

    Treesearch

    Jerold T. Hahn; Mark H. Hansen

    1985-01-01

    Describes the data collected by the Forest Inventory and Analysis (FIA) Research Work Unit at the North Central Forest Experiment Station. Explains how interested parties may obtain information from the databases either through direct access or by special requests to the FIA database manager.

  19. Storing Data from Qweak--A Precision Measurement of the Proton's Weak Charge

    NASA Astrophysics Data System (ADS)

    Pote, Timothy

    2008-10-01

    The Qweak experiment will perform a precision measurement of the proton's parity violating weak charge at low Q-squared. The experiment will do so by measuring the asymmetry in parity-violating electron scattering. The proton's weak charge is directly related to the value of the weak mixing angle--a fundamental quantity in the Standard Model. The Standard Model makes a firm prediction for the value of the weak mixing angle and thus Qweak may provide insight into shortcomings in the SM. The Qweak experiment will run at Thomas Jefferson National Accelerator Facility in Newport News, VA. A database was designed to hold data directly related to the measurement of the proton's weak charge such as detector and beam monitor yield, asymmetry, and error as well as control structures such as the voltage across photomultiplier tubes and the temperature of the liquid hydrogen target. In order to test the database for speed and stability, it was filled with fake data that mimicked the data that Qweak is expected to collect. I will give a brief overview of the Qweak experiment and database design, and present data collected during these tests.

  20. Monitoring of small laboratory animal experiments by a designated web-based database.

    PubMed

    Frenzel, T; Grohmann, C; Schumacher, U; Krüll, A

    2015-10-01

Multi-parametric small animal experiments require, by their very nature, a sufficient (and often large) number of animals to obtain statistically significant results.(1) For this reason, database-backed systems are required to collect the experimental data and to support the later (re-)analysis of the information gained during the experiments. In particular, the monitoring of animal welfare is simplified by the inclusion of warning signals (for instance, a loss in body weight >20%). Digital patient charts have been developed for human patients but usually cannot fulfill the specific needs of animal experimentation. To address this problem, a unique web-based monitoring system using standard MySQL, PHP, and nginx has been created. PHP was used to create the HTML-based user interface and outputs in a variety of file formats, namely portable document format (PDF) or spreadsheet files. This article demonstrates the system's fundamental features and the easy and secure access it offers to the data from any place using a web browser. This information will help other researchers create their own databases in a similar way. The use of QR codes plays an important role in stress-free use of the database, and we demonstrate a way to easily identify all animals, samples, and data collected during the experiments. Specific ways to record animal irradiations and chemotherapy applications are shown. This tool allows the effective and detailed analysis of the large amounts of data collected in small animal experiments, supports proper statistical evaluation of the data, and provides excellent retrievable data storage. © The Author(s) 2015.
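The welfare warning mentioned above (body-weight loss >20%) reduces to a simple rule that a database trigger or application layer can evaluate on each new weight record. A minimal sketch, with the threshold and record layout as illustrative assumptions:

```python
# Sketch of the body-weight warning rule described above (>20% loss).
# The 20% threshold matches the example in the text; the function
# signature and gram units are illustrative assumptions.
def weight_warning(baseline_g, current_g, threshold=0.20):
    """Return True if the animal has lost more than `threshold` of its
    baseline body weight."""
    loss = (baseline_g - current_g) / baseline_g
    return loss > threshold

assert weight_warning(25.0, 19.0)       # 24% loss -> raise a warning
assert not weight_warning(25.0, 22.0)   # 12% loss -> no warning
```

In a web-based system like the one described, such a check would run whenever a new weight measurement is stored, flagging the animal's record for review.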

  1. The fifth plot of the Carcinogenic Potency Database: results of animal bioassays published in the general literature through 1988 and by the National Toxicology Program through 1989.

    PubMed Central

    Gold, L S; Manley, N B; Slone, T H; Garfinkel, G B; Rohrbach, L; Ames, B N

    1993-01-01

This paper is the fifth plot of the Carcinogenic Potency Database (CPDB) that first appeared in this journal in 1984 (1-5). We report here results of carcinogenesis bioassays published in the general literature between January 1987 and December 1988, and in technical reports of the National Toxicology Program between July 1987 and December 1989. This supplement includes results of 412 long-term, chronic experiments on 147 test compounds and reports the same information about each experiment in the same plot format as the earlier papers: the species and strain of test animal, the route and duration of compound administration, dose level and other aspects of experimental protocol, histopathology and tumor incidence, TD50 (carcinogenic potency) and its statistical significance, dose response, author's opinion about carcinogenicity, and literature citation. We refer the reader to the 1984 publications (1,5,6) for a guide to the plot of the database, a complete description of the numerical index of carcinogenic potency, and a discussion of the sources of data, the rationale for the inclusion of particular experiments and particular target sites, and the conventions adopted in summarizing the literature. The five plots of the database are to be used together, as results of individual experiments that were published earlier are not repeated. In all, the five plots include results of 4487 experiments on 1136 chemicals. Several analyses based on the CPDB that were published earlier are described briefly, and updated results based on all five plots are given for the following earlier analyses: the most potent TD50 value by species, reproducibility of bioassay results, positivity rates, and prediction between species. (ABSTRACT TRUNCATED AT 250 WORDS) PMID:8354183

  2. Experiment Management System for the SND Detector

    NASA Astrophysics Data System (ADS)

    Pugachev, K.

    2017-10-01

    We present a new experiment management system for the SND detector at the VEPP-2000 collider (Novosibirsk). A key component is access to the experimental databases (configuration, conditions and metadata). The system follows a client-server architecture, with users interacting through a web interface. The server side comprises several logical layers: user interface templates; template variable description and initialization; and implementation details. The templates are designed to require as little IT knowledge as possible. Experiment configuration, conditions and metadata are stored in a database. The server side is implemented in Node.js, a server-side JavaScript runtime, for which a new template engine has been designed. Part of the system is already in production, including templates for viewing and editing the first-level trigger and equipment configurations, and for viewing experiment metadata and the experiment conditions data index.

  3. CyanoBase: the cyanobacteria genome database update 2010.

    PubMed

    Nakao, Mitsuteru; Okamoto, Shinobu; Kohara, Mitsuyo; Fujishiro, Tsunakazu; Fujisawa, Takatomo; Sato, Shusei; Tabata, Satoshi; Kaneko, Takakazu; Nakamura, Yasukazu

    2010-01-01

    CyanoBase (http://genome.kazusa.or.jp/cyanobase) is the genome database for cyanobacteria, which are model organisms for photosynthesis. The database houses cyanobacteria species information, complete genome sequences, genome-scale experiment data, gene information, gene annotations and mutant information. In this version, we updated these datasets and improved the navigation and the visual display of the data views. In addition, a web service API now enables users to retrieve the data in various formats with other tools, seamlessly.

  4. Cleaning Data Helps Clean the Air

    ERIC Educational Resources Information Center

    Donalds, Kelley; Liu, Xiangrong

    2014-01-01

    In this project, students use a real-world, complex database and experience firsthand the consequences of inadequate data modeling. The U.S. Environmental Protection Agency created the database as part of a multimillion dollar data collection effort undertaken in order to set limits on air pollutants from electric power plants. First, students…

  5. High Throughput Experimental Materials Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zakutayev, Andriy; Perkins, John; Schwarting, Marcus

    The mission of the High Throughput Experimental Materials Database (HTEM DB) is to enable the discovery of new materials with useful properties by releasing large amounts of high-quality experimental data to the public. The HTEM DB contains information about materials obtained from high-throughput experiments at the National Renewable Energy Laboratory (NREL).

  6. Developing and Refining the Taiwan Birth Cohort Study (TBCS): Five Years of Experience

    ERIC Educational Resources Information Center

    Lung, For-Wey; Chiang, Tung-Liang; Lin, Shio-Jean; Shu, Bih-Ching; Lee, Meng-Chih

    2011-01-01

    The Taiwan Birth Cohort Study (TBCS) is the first nationwide birth cohort database in Asia designed to establish national norms of children's development. Several challenges during database development and data analysis were identified. Challenges include sampling methods, instrument development and statistical approach to missing data. The…

  7. Alternatives to relational database: comparison of NoSQL and XML approaches for clinical data storage.

    PubMed

    Lee, Ken Ka-Yin; Tang, Wai-Choi; Choi, Kup-Sze

    2013-04-01

    Clinical data are dynamic in nature, often arranged hierarchically and stored as free text and numbers. Effective management of clinical data and the transformation of the data into a structured format for data analysis are therefore challenging issues in electronic health record development. Despite the popularity of relational databases, the scalability of the NoSQL database model and the document-centric data structure of XML databases appear to be promising features for effective clinical data management. In this paper, three database approaches--NoSQL, XML-enabled and native XML--are investigated to evaluate their suitability for structured clinical data. The database query performance is reported, together with our experience in developing the databases. The results show that the NoSQL database is the best choice for query speed, whereas XML databases are advantageous in terms of scalability, flexibility and extensibility, which are essential to cope with the characteristics of clinical data. While NoSQL and XML technologies are relatively new compared with the conventional relational database, both demonstrate potential to become a key database technology for clinical data management as the technology further advances. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
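
The document-centric structure the abstract highlights can be illustrated with a minimal sketch. The record layout and field names below are hypothetical, and JSON stands in for an XML or NoSQL document:

```python
import json

# Hypothetical clinical record: hierarchical, mixing free text and numbers,
# which maps naturally onto a document-centric (XML/NoSQL-style) model.
record = {
    "patient_id": "P001",
    "encounters": [
        {
            "date": "2012-03-01",
            "notes": "Patient reports intermittent chest pain.",
            "vitals": {"systolic_bp": 138, "diastolic_bp": 86},
        },
        {
            "date": "2012-04-15",
            "notes": "Follow-up; symptoms resolved.",
            "vitals": {"systolic_bp": 124, "diastolic_bp": 79},
        },
    ],
}

# A document store persists the record as-is; a relational design would have
# to normalise it into patient, encounter and vitals tables joined by keys.
document = json.dumps(record)

# Query: latest systolic blood pressure, walking the document hierarchy.
loaded = json.loads(document)
latest = max(loaded["encounters"], key=lambda e: e["date"])
print(latest["vitals"]["systolic_bp"])  # 124
```

Keeping the hierarchy intact in a single document is the flexibility the paper credits to XML and NoSQL approaches, at the cost of the joins and constraints a relational schema would provide.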

  8. Recovery Act: An Integrated Experimental and Numerical Study: Developing a Reaction Transport Model that Couples Chemical Reactions of Mineral Dissolution/Precipitation with Spatial and Temporal Flow Variations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saar, Martin O.; Seyfried, Jr., William E.; Longmire, Ellen K.

    2016-06-24

    A total of 12 publications and 23 abstracts were produced as a result of this study. In particular, the compilation of a thermodynamic database utilizing consistent, current thermodynamic data is a major step toward accurately modeling multi-phase fluid interactions with solids. Existing databases designed for aqueous fluids did not mesh well with existing solid-phase databases, and the addition of a second liquid phase (CO2) magnifies the inconsistencies between aqueous and solid thermodynamic databases. Overall, the combination of high-temperature and high-pressure lab studies (task 1), using a purpose-built apparatus, and solid characterization (task 2), using XRCT and more developed technologies, allowed observation of dissolution and precipitation processes under CO2 reservoir conditions. These observations were combined with results from PIV experiments on multi-phase fluids (task 3) in typical flow-path geometries. The results of tasks 1, 2, and 3 were compiled and integrated into numerical models utilizing Lattice-Boltzmann simulations (task 4) to realistically model the physical processes, and were ultimately folded into the TOUGH2 code for reservoir-scale modeling (task 5). Compilation of the thermodynamic database assisted comparisons to the PIV experiments (task 3) and greatly improved the Lattice-Boltzmann (task 4) and TOUGH2 (task 5) simulations. The PIV work (task 3) and experimental apparatus (task 1) identified problem areas in the TOUGHREACT code, and additional lab experiments and coding work have been integrated into an improved numerical modeling code.

  9. 2DB: a Proteomics database for storage, analysis, presentation, and retrieval of information from mass spectrometric experiments.

    PubMed

    Allmer, Jens; Kuhlgert, Sebastian; Hippler, Michael

    2008-07-07

    The amount of information stemming from proteomics experiments involving (multi-dimensional) separation techniques, mass spectrometric analysis, and computational analysis is ever-increasing. Data from such an experimental workflow need to be captured, related and analyzed. Biological experiments within this scope produce heterogeneous data ranging from pictures of one- or two-dimensional protein maps and spectra recorded by tandem mass spectrometry to text-based identifications made by algorithms which analyze these spectra. Additionally, peptide and corresponding protein information needs to be displayed. In order to handle the large amount of data from computational processing of mass spectrometric experiments, automatic import scripts are available and the necessity for manual input to the database has been minimized. Information is stored in a generic format which abstracts from the specific software tools typically used in such an experimental workflow. The software is therefore capable of storing and cross-analysing results from many algorithms. A novel feature and a focus of this database is to facilitate protein identification by using peptides identified from mass spectrometry and to link this information directly to the respective protein maps. Additionally, our application employs spectral counting for quantitative presentation of the data. All information can be linked to hot spots on images to place the results into an experimental context. A summary of identified proteins, containing all relevant information per hot spot, is automatically generated, usually upon either a change in the underlying protein models or newly imported identifications. The supporting information for this report can be accessed in multiple ways using the user interface provided by the application. We present a proteomics database which aims to greatly reduce the evaluation time of results from mass spectrometric experiments and to enhance result quality by allowing consistent data handling. Import functionality, automatic protein detection, and summary creation act together to facilitate data analysis. In addition, supporting information for these findings is readily accessible via the graphical user interface provided. The database schema and the implementation, which can easily be installed on virtually any server, can be downloaded in the form of a compressed file from our project webpage.

  10. [Establishment of a regional pelvic trauma database in Hunan Province].

    PubMed

    Cheng, Liang; Zhu, Yong; Long, Haitao; Yang, Junxiao; Sun, Buhua; Li, Kanghua

    2017-04-28

    To establish a database for pelvic trauma in Hunan Province, and to start the work of a multicenter pelvic trauma registry.
 Methods: To establish the database, literature relevant to pelvic trauma was screened, experience from established trauma databases in China and abroad was reviewed, and the actual conditions of pelvic trauma rescue in Hunan Province were considered. The database was built on PostgreSQL and the Java programming language (version 1.6).
 Results: The complex procedure of pelvic trauma rescue was described structurally. The database covers general patient information, injury condition, prehospital rescue, condition on admission, in-hospital treatment, status on discharge, diagnosis, classification, complications, trauma scoring, and therapeutic effect. The database can be accessed through the internet via a browser/server architecture. Its functions include patient information management, data export, history query, progress reporting, video and image management, and personal information management.
 Conclusion: A whole-life-cycle pelvic trauma database has been successfully established for the first time in China. It is scientific, functional, practical, and user-friendly.
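
A hedged sketch of what such a registry schema might look like, using SQLite in place of the PostgreSQL backend described above. The table and column names are illustrative, derived from the content areas the abstract lists, not the actual schema:

```python
import sqlite3

# Hypothetical core tables for a pelvic trauma registry; SQLite stands in
# for the PostgreSQL backend described in the paper.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (
    patient_id   INTEGER PRIMARY KEY,
    name         TEXT,
    admitted_on  TEXT
);
CREATE TABLE trauma_record (
    record_id        INTEGER PRIMARY KEY,
    patient_id       INTEGER REFERENCES patient(patient_id),
    injury_condition TEXT,   -- injury condition
    prehospital      TEXT,   -- prehospital rescue
    classification   TEXT,   -- fracture classification
    trauma_score     REAL,   -- trauma scoring
    outcome          TEXT    -- therapeutic effect / status on discharge
);
""")
conn.execute("INSERT INTO patient VALUES (1, 'anonymised', '2016-09-01')")
conn.execute(
    "INSERT INTO trauma_record VALUES (1, 1, 'pelvic ring fracture', "
    "'ambulance', 'Tile B', 29.0, 'discharged')"
)

# A history query of the kind the web front end would issue.
row = conn.execute(
    "SELECT p.name, t.classification, t.trauma_score "
    "FROM trauma_record t JOIN patient p USING (patient_id)"
).fetchone()
print(row)  # ('anonymised', 'Tile B', 29.0)
```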

  11. RDFBuilder: a tool to automatically build RDF-based interfaces for MAGE-OM microarray data sources.

    PubMed

    Anguita, Alberto; Martin, Luis; Garcia-Remesal, Miguel; Maojo, Victor

    2013-07-01

    This paper presents RDFBuilder, a tool that enables RDF-based access to MAGE-ML-compliant microarray databases. We have developed a system that automatically transforms the MAGE-OM model and microarray data stored in the ArrayExpress database into RDF format. Additionally, the system automatically enables a SPARQL endpoint. This allows users to execute SPARQL queries for retrieving microarray data, either from specific experiments or from more than one experiment at a time. Our system optimizes response times by caching and reusing information from previous queries. In this paper, we describe our methods for achieving this transformation. We show that our approach is complementary to other existing initiatives, such as Bio2RDF, for accessing and retrieving data from the ArrayExpress database. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
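
For illustration, a query of the kind such a SPARQL endpoint could serve might look as follows. The prefix and predicate names are hypothetical, not RDFBuilder's actual vocabulary:

```python
# A client would POST this query text to the SPARQL endpoint and receive
# one result binding per matching experiment. The mage: namespace and the
# hasFactorValue predicate are illustrative assumptions.
query = """
PREFIX mage: <http://example.org/mage-om#>
SELECT ?experiment ?factorValue
WHERE {
  ?experiment a mage:Experiment ;
              mage:hasFactorValue ?factorValue .
}
LIMIT 10
"""
print("SELECT" in query)  # True
```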

  12. Experiences in the creation of an electromyography database to help hand amputated persons.

    PubMed

    Atzori, Manfredo; Gijsberts, Arjan; Heynen, Simone; Hager, Anne-Gabrielle Mittaz; Castellini, Claudio; Caputo, Barbara; Müller, Henning

    2012-01-01

    Currently, trans-radial amputees can perform only a few simple movements with prosthetic hands. This is mainly due to low control capability and the long training time required to learn to control them with surface electromyography (sEMG). This is in contrast with recent advances in mechatronics, thanks to which mechanical hands have multiple degrees of freedom and, in some cases, force control. To help improve the situation, we are building the NinaPro (Non-Invasive Adaptive Prosthetics) database, a database of about 50 hand and wrist movements recorded from many healthy subjects and, so far, a small number of amputated persons, which will help the community test and improve sEMG-based natural control systems for prosthetic hands. In this paper we describe the experimental experiences and practical aspects related to the data acquisition.

  13. Nonparametric Bayesian Modeling for Automated Database Schema Matching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferragut, Erik M; Laska, Jason A

    2015-01-01

    The problem of merging databases arises in many government and commercial applications. Schema matching, a common first step, identifies equivalent fields between databases. We introduce a schema matching framework that builds nonparametric Bayesian models for each field and compares them by computing the probability that a single model could have generated both fields. Our experiments show that our method is more accurate and faster than the existing instance-based matching algorithms in part because of the use of nonparametric Bayesian models.
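
The core idea above — scoring whether a single model could have generated both fields — can be sketched as a Bayesian model comparison. The snippet below uses a symmetric Dirichlet-multinomial marginal likelihood as a simple stand-in for the paper's nonparametric field models; the data and prior are illustrative:

```python
from math import lgamma
from collections import Counter

def log_marginal(counts, categories, alpha=1.0):
    """Log marginal likelihood of categorical counts under a symmetric
    Dirichlet(alpha) prior (a stand-in for a nonparametric field model)."""
    k = len(categories)
    n = sum(counts.values())
    out = lgamma(k * alpha) - lgamma(n + k * alpha)
    for c in categories:
        out += lgamma(counts.get(c, 0) + alpha) - lgamma(alpha)
    return out

def log_same_source(a, b):
    """log [ P(a, b | one model) / (P(a | own model) P(b | own model)) ].
    Positive values favour matching the two fields."""
    ca, cb = Counter(a), Counter(b)
    cats = sorted(set(a) | set(b))
    merged = ca + cb
    return (log_marginal(merged, cats)
            - log_marginal(ca, cats)
            - log_marginal(cb, cats))

# Two fields with similar value distributions vs. a dissimilar pair.
zip_a = ["02139", "02139", "10027", "60637"]
zip_b = ["02139", "10027", "10027", "60637"]
state = ["MA", "NY", "NY", "IL"]

print(log_same_source(zip_a, zip_b) > log_same_source(zip_a, state))  # True
```

The actual paper's models are nonparametric and its matching framework is richer; the point here is only the shape of the comparison: one merged model versus two independent ones.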

  14. CyanoBase: the cyanobacteria genome database update 2010

    PubMed Central

    Nakao, Mitsuteru; Okamoto, Shinobu; Kohara, Mitsuyo; Fujishiro, Tsunakazu; Fujisawa, Takatomo; Sato, Shusei; Tabata, Satoshi; Kaneko, Takakazu; Nakamura, Yasukazu

    2010-01-01

    CyanoBase (http://genome.kazusa.or.jp/cyanobase) is the genome database for cyanobacteria, which are model organisms for photosynthesis. The database houses cyanobacteria species information, complete genome sequences, genome-scale experiment data, gene information, gene annotations and mutant information. In this version, we updated these datasets and improved the navigation and the visual display of the data views. In addition, a web service API now enables users to retrieve the data in various formats with other tools, seamlessly. PMID:19880388

  15. Phased array-fed antenna configuration study

    NASA Technical Reports Server (NTRS)

    Crosswell, W. F.; Ball, D. E.; Taylor, R. C.

    1983-01-01

    The scope of this contract entails a configuration study for a phased-array-fed transmit antenna operating in the frequency band of 17.7 to 20.2 GHz. This initial contract provides a basis for understanding the design limitations and advantages of advanced phased-array and cluster feeds (both utilizing integral MMIC modules) illuminating folded reflector optics (both near-field and focused types). Design parametric analyses are performed using as constraints the objective secondary performance requirements of the Advanced Communications Technology Satellite (Table 1.0). The output of the study provides design information which serves as a database for future active phased-array-fed antenna studies, such as the detailed designs required to support the development of a ground-tested breadboard. In general, this study is significant because it provides the antenna community with an understanding of the basic principles which govern near-field phase-scanned feed effects on secondary reflector system performance. Although several articles have been written on analysis procedures and results for these systems, the authors of this report have observed phenomena of near-field antenna systems not previously documented. Because the physical justification for the exhibited performance is provided herein, the findings of this study add a new dimension to the available knowledge of the subject matter.

  16. Research into the use of integrated IR-weakening and smoke-screen jamming to resist IR imaging guided missiles

    NASA Astrophysics Data System (ADS)

    Wang, Long-tao; Jiang, Ning; Lv, Ming-shan

    2015-10-01

    With the emergence of anti-ship missiles capable of infrared imaging guidance, traditional single jamming measures, because of flaws in their jamming mechanisms, technical limitations, or unsuitable use, greatly reduce the survival probability of the warship in future naval battles. Integrated jamming combining IR weakening and smoke screens can not only interfere with the search and tracking of an IR imaging guidance system, but is also feasible to deploy in combination, achieving the best jamming effect. This conclusion has important practical significance for raising the antimissile capability of surface ships. With the development of guidance technology, infrared guidance has expanded from IR point-source homing to infrared imaging guidance, which has made breakthrough progress. An infrared imaging guidance system can use two-dimensional infrared image information of the target to achieve precise tracking, offering higher guidance precision, better concealment, stronger anti-interference ability, and the capacity to target the key parts of a ship. Traditional single-measure infrared smoke-screen jamming or infrared decoy flares cannot impose effective interference. Research into measures and means of effectively countering the threat of infrared imaging guided weapons, and thereby improving the antimissile ability of surface ships, is therefore an urgent need.

  17. Integrated metabonomic study of the effects of Guizhi Fuling capsule intervention on primary dysmenorrhea using RP-UPLC-MS complementary with HILIC-UPLC-MS technique.

    PubMed

    Lang, Lang; Meng, Zhaorui; Sun, Lan; Xiao, Wei; Zhao, Longshan; Xiong, Zhili

    2018-02-01

    Guizhi Fuling capsule (GFC), developed from the traditional Chinese prescription Guizhi Fuling Wan, has been commonly used for the treatment of primary dysmenorrhea (PD). However, its effective intervention mechanism in vivo has not been well elucidated. In this study, an integrated plasma metabonomic strategy based on RP-UPLC-MS coupled with the HILIC-UPLC-MS technique was developed to investigate the global therapeutic effects and intervention mechanisms of GFC on dysmenorrhea rats induced by oxytocin. Twenty potential biomarkers were identified, primarily related to sphingolipid metabolism, steroid hormone biosynthesis, glycerophospholipid metabolism, amino acid metabolism, lipid metabolism and energy metabolism. The results showed that GFC has therapeutic effects on rats with dysmenorrhea via the regulation of multiple metabolic pathways. Some new potential biomarkers associated with primary dysmenorrhea, such as phenylalanine, tryptophan, taurine, carnitine, betaine, creatine and creatinine, were discovered in this study for the first time. This study provides a metabonomic platform based on RP-UPLC-MS complementary to the HILIC-UPLC-MS technique to investigate both nonpolar and polar compounds, so as to obtain more comprehensive metabolite information, yield insight into the pathophysiology of PD, and assess the efficacy of GFC in PD rats. Copyright © 2017 John Wiley & Sons, Ltd.

  18. Exploring Genetic, Genomic, and Phenotypic Data at the Rat Genome Database

    PubMed Central

    Laulederkind, Stanley J. F.; Hayman, G. Thomas; Wang, Shur-Jen; Lowry, Timothy F.; Nigam, Rajni; Petri, Victoria; Smith, Jennifer R.; Dwinell, Melinda R.; Jacob, Howard J.; Shimoyama, Mary

    2013-01-01

    The laboratory rat, Rattus norvegicus, is an important model of human health and disease, and experimental findings in the rat have relevance to human physiology and disease. The Rat Genome Database (RGD, http://rgd.mcw.edu) is a model organism database that provides access to a wide variety of curated rat data including disease associations, phenotypes, pathways, molecular functions, biological processes and cellular components for genes, quantitative trait loci, and strains. We present an overview of the database followed by specific examples that can be used to gain experience in employing RGD to explore the wealth of functional data available for the rat. PMID:23255149

  19. LSE-Sign: A lexical database for Spanish Sign Language.

    PubMed

    Gutierrez-Sigut, Eva; Costello, Brendan; Baus, Cristina; Carreiras, Manuel

    2016-03-01

    The LSE-Sign database is a free online tool for selecting Spanish Sign Language stimulus materials to be used in experiments. It contains 2,400 individual signs taken from a recent standardized LSE dictionary, and a further 2,700 related nonsigns. Each entry is coded for a wide range of grammatical, phonological, and articulatory information, including handshape, location, movement, and non-manual elements. The database is accessible via a graphically based search facility which is highly flexible both in terms of the search options available and the way the results are displayed. LSE-Sign is available at the following website: http://www.bcbl.eu/databases/lse/.

  20. The Model Parameter Estimation Experiment (MOPEX): Its structure, connection to other international initiatives and future directions

    USGS Publications Warehouse

    Wagener, T.; Hogue, T.; Schaake, J.; Duan, Q.; Gupta, H.; Andreassian, V.; Hall, A.; Leavesley, G.

    2006-01-01

    The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrological models and in land surface parameterization schemes connected to atmospheric models. The MOPEX science strategy involves: database creation, a priori parameter estimation methodology development, parameter refinement or calibration, and the demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrological basins in the United States (US) and in other countries. This database is being continuously expanded to include basins from various hydroclimatic regimes throughout the world. MOPEX research has largely been driven by a series of international workshops that have brought interested hydrologists and land surface modellers together to exchange knowledge and experience in developing and applying parameter estimation techniques. With its focus on parameter estimation, MOPEX plays an important role in the international context of other initiatives such as GEWEX, HEPEX, PUB and PILPS. This paper outlines the MOPEX initiative, discusses its role in the scientific community, and briefly states future directions.

  1. Identifying providers of care to individuals with human immunodeficiency virus for a mail survey using a prescription tracking database.

    PubMed

    Bach, P B; Calhoun, E A; Bennett, C L

    1999-02-01

    Unlike cancer and other illnesses for which specialists provide the majority of care for affected individuals, care of those infected with human immunodeficiency virus (HIV) is provided by generalists and many different types of specialists. To assess the utility of a prescription tracking database in identifying low experience and high-experience providers of such care regardless of specialty, we mailed a survey to 1500 physicians identified as having written prescriptions for agents used in care of HIV-infected individuals in the year before the survey. We discovered that physicians who care for patients with acquired immunodeficiency syndrome (AIDS) in the United States come from a broad range of specialties and practice in a variety of settings. Self-report of experience with AIDS care in the prior year was strongly associated with the number of HIV-related prescriptions identified in the tracking information. Response rates were consistent with those of other surveys published in medical journals. This study suggests that prescription tracking databases can be used to identify the breadth of physician/subjects who provide care for patients with HIV infection.

  2. The GraVent DDT database

    NASA Astrophysics Data System (ADS)

    Boeck, Lorenz R.; Katzy, Peter; Hasslberger, Josef; Kink, Andreas; Sattelmayer, Thomas

    2016-09-01

    An open-access online platform containing data from experiments on deflagration-to-detonation transition conducted at the Institute of Thermodynamics, Technical University of Munich, has been developed and is accessible at http://www.td.mw.tum.de/ddt. The database provides researchers working on explosion dynamics with data for theoretical analyses and for the validation of numerical simulations.

  3. Building a Faculty Publications Database: A Case Study

    ERIC Educational Resources Information Center

    Tabaei, Sara; Schaffer, Yitzchak; McMurray, Gregory; Simon, Bashe

    2013-01-01

    This case study shares the experience of building an in-house faculty publications database that was spearheaded by the Touro College and University System library in 2010. The project began with the intention of contributing to the college by collecting the research accomplishments of our faculty and staff, thereby also increasing library…

  4. Improving agricultural knowledge management: The AgTrials experience

    PubMed Central

    Hyman, Glenn; Espinosa, Herlin; Camargo, Paola; Abreu, David; Devare, Medha; Arnaud, Elizabeth; Porter, Cheryl; Mwanzia, Leroy; Sonder, Kai; Traore, Sibiry

    2017-01-01

    Background: Opportunities to use data and information to address challenges in international agricultural research and development are expanding rapidly. The use of agricultural trial and evaluation data has enormous potential to improve crops and management practices. However, for a number of reasons, this potential has yet to be realized. This paper reports on the experience of the AgTrials initiative, an effort to build an online database of agricultural trials applying principles of interoperability and open access. Methods: Our analysis evaluates what worked and what did not work in the development of the AgTrials information resource. We analyzed data on our users and their interaction with the platform. We also surveyed our users to gauge their perceptions of the utility of the online database. Results: The study revealed barriers to participation and impediments to interaction, opportunities for improving agricultural knowledge management and a large potential for the use of trial and evaluation data.  Conclusions: Technical and logistical mechanisms for developing interoperable online databases are well advanced.  More effort will be needed to advance organizational and institutional work for these types of databases to realize their potential. PMID:28580127

  5. Quantitative comparison of microarray experiments with published leukemia related gene expression signatures.

    PubMed

    Klein, Hans-Ulrich; Ruckert, Christian; Kohlmann, Alexander; Bullinger, Lars; Thiede, Christian; Haferlach, Torsten; Dugas, Martin

    2009-12-15

    Multiple gene expression signatures derived from microarray experiments have been published in the field of leukemia research. Comparing these signatures with results from new experiments is useful both for verification and for interpretation of the results obtained. Currently, the percentage of overlapping genes is frequently used to compare published gene signatures against a signature derived from a new experiment. However, it has been shown that the percentage of overlapping genes is of limited use for comparing two experiments, due to the variability of gene signatures caused by different array platforms or assay-specific influencing parameters. Here, we present a robust approach for a systematic and quantitative comparison of published gene expression signatures with an exemplary query dataset. A database storing 138 leukemia-related published gene signatures was designed. Each gene signature was manually annotated with terms according to a leukemia-specific taxonomy. Two analysis steps are implemented to compare a new microarray dataset with the results from previous experiments stored and curated in the database. First, the global test method is applied to assess gene signatures and to constitute a ranking among them. In a subsequent analysis step, the focus is shifted from single gene signatures to chromosomal aberrations or molecular mutations as modeled in the taxonomy. Potentially interesting disease characteristics are detected based on the ranking of gene signatures associated with these aberrations stored in the database. Two example analyses are presented. An implementation of the approach is freely available as a web-based application. The presented approach helps researchers to systematically integrate the knowledge derived from numerous microarray experiments into the analysis of a new dataset. By means of example leukemia datasets, we demonstrate that this approach detects related experiments as well as related molecular mutations and may help to interpret new microarray data.
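
The baseline the authors argue against — comparing signatures by their percentage of overlapping genes — is simple to compute, which is part of its appeal despite its limitations across platforms. A minimal sketch with made-up gene symbols:

```python
def overlap_percentage(sig_a, sig_b):
    """Percentage of genes in signature A that also appear in signature B --
    the simple comparison the abstract says is of limited use across
    array platforms."""
    a, b = set(sig_a), set(sig_b)
    return 100.0 * len(a & b) / len(a)

# Hypothetical signatures; gene symbols are illustrative only.
new_experiment = ["FLT3", "NPM1", "CEBPA", "KIT", "WT1"]
published = ["FLT3", "NPM1", "RUNX1", "KIT"]

print(overlap_percentage(new_experiment, published))  # 60.0
```

Note the measure is asymmetric and ignores expression direction and effect size, which is why the paper replaces it with the global test applied to the full query dataset.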

  6. Remote collection and analysis of witness reports on flash floods

    NASA Astrophysics Data System (ADS)

    Gourley, Jonathan; Erlingis, Jessica; Smith, Travis; Ortega, Kiel; Hong, Yang

    2010-05-01

    Typically, flash floods are studied ex post facto in response to a major impact event. A complement to field investigations is developing a detailed database of flash flood events, including minor events and null reports (i.e., where heavy rain occurred but there was no flash flooding), based on public survey questions conducted in near-real time. The Severe Hazards Analysis and Verification Experiment (SHAVE) has been in operation at the National Severe Storms Laboratory (NSSL) in Norman, OK, USA during the summers since 2006. The experiment employs undergraduate students to analyse real-time products from weather radars, target specific regions within the conterminous US, and poll public residences and businesses regarding the occurrence and severity of hail, wind, tornadoes, and now flash floods. In addition to providing a rich learning experience for students, SHAVE has been successful in creating high-resolution datasets of severe hazards used for algorithm and model verification. This talk describes the criteria used to initiate the flash flood survey, the specific questions asked and information entered to the database, and then provides an analysis of results for flash flood data collected during the summer of 2008. It is envisioned that specific details provided by the SHAVE flash flood observation database will complement databases collected by operational agencies and thus lead to better tools to predict the likelihood of flash floods and ultimately reduce their impacts on society.

  7. Long-term chemical carcinogenesis experiments for identifying potential human cancer hazards: collective database of the National Cancer Institute and National Toxicology Program (1976-1991).

    PubMed Central

    Huff, J; Haseman, J

    1991-01-01

    The carcinogenicity database used for this paper originated in the late 1960s by the National Cancer Institute (NCI) and since 1978 has been continued and made more comprehensive by the National Toxicology Program (NTP). The extensive files contain, among other sets of information, detailed pathology data on more than 400 long-term (most often 24-month) chemical carcinogenesis studies, comprising nearly 1600 individual experiments having at least 10 million tissue sections that have been evaluated for toxicity and carcinogenicity.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:1820269

  8. Mammography status using patient self-reports and computerized radiology database.

    PubMed

    Thompson, B; Taylor, V; Goldberg, H; Mullen, M

    1999-10-01

This study sought to compare the self-reported mammography use of low-income women attending an inner-city public hospital with a computerized hospital database for tracking mammography use. A survey of all age-eligible women using the hospital's internal medicine clinic was done; responses were matched with the radiology database. We examined concordance between the two data sources. Concordance between self-report and the database was high (82%) for "ever had a mammogram at the hospital," but low (58%) when comparing the self-reported last mammogram with the information contained in the database. Disagreements existed between self-reports and the database. Because we sought to ensure that women would know exactly what a mammogram entailed by including a picture of a woman having a mammogram, it is possible that the women's responses were accurate, raising concerns that discrepancies might be present in the database. Physicians and staff must ensure that they understand the full history of a woman's experience with mammography before recommending for or against the procedure.
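The concordance figure reported above is simple percent agreement between two parallel yes/no sources. A minimal sketch of that calculation (the data below are hypothetical, not the study's):

```python
def concordance(self_report, database):
    """Percent agreement between two parallel lists of yes/no (1/0) answers."""
    if len(self_report) != len(database):
        raise ValueError("sources must cover the same respondents")
    agree = sum(a == b for a, b in zip(self_report, database))
    return 100.0 * agree / len(self_report)

# Toy data: 1 = "ever had a mammogram at the hospital", 0 = not
survey = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
records = [1, 1, 0, 0, 0, 1, 1, 1, 1, 1]
print(round(concordance(survey, records)))  # 80
```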

  9. Retrovirus Integration Database (RID): a public database for retroviral insertion sites into host genomes.

    PubMed

    Shao, Wei; Shan, Jigui; Kearney, Mary F; Wu, Xiaolin; Maldarelli, Frank; Mellors, John W; Luke, Brian; Coffin, John M; Hughes, Stephen H

    2016-07-04

The NCI Retrovirus Integration Database is a MySQL-based relational database created for storing and retrieving comprehensive information about retroviral integration sites, primarily, but not exclusively, for HIV-1. The database is accessible to the public for submission or extraction of data originating from experiments aimed at collecting information related to retroviral integration sites, including: the site of integration into the host genome, the virus family and subtype, the origin of the sample, gene exons/introns associated with integration, and proviral orientation. Information about the references from which the data were collected is also stored in the database. Tools are built into the website to map the integration sites to the UCSC genome browser, to plot the integration-site patterns on a chromosome, and to display provirus LTRs in their inserted genome sequence. The website is robust and user friendly, and allows users to query the database and analyze the data dynamically. https://rid.ncifcrf.gov ; or http://home.ncifcrf.gov/hivdrp/resources.htm .
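To make the relational design concrete, here is a deliberately simplified, hypothetical schema built only from the fields the abstract lists (site, virus, gene, orientation, reference); it is not the actual RID schema, and SQLite stands in for MySQL:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE integration_site (
        id INTEGER PRIMARY KEY,
        virus TEXT,        -- virus family/subtype, e.g. 'HIV-1'
        chromosome TEXT,
        position INTEGER,  -- site of integration in the host genome
        gene TEXT,         -- associated gene (NULL if intergenic)
        orientation TEXT,  -- proviral orientation: '+' or '-'
        reference TEXT     -- source publication (placeholder values below)
    )""")
con.executemany(
    "INSERT INTO integration_site (virus, chromosome, position, gene, orientation, reference)"
    " VALUES (?, ?, ?, ?, ?, ?)",
    [("HIV-1", "chr17", 41_000_000, "BRCA1", "+", "ref-a"),
     ("HIV-1", "chr19", 1_200_000, None, "-", "ref-b")])

# Typical extraction pattern: all HIV-1 sites that fall inside an annotated gene
rows = con.execute(
    "SELECT chromosome, position, gene FROM integration_site"
    " WHERE virus = 'HIV-1' AND gene IS NOT NULL").fetchall()
print(rows)  # [('chr17', 41000000, 'BRCA1')]
```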

  10. Examples of Use of SINBAD Database for Nuclear Data and Code Validation

    NASA Astrophysics Data System (ADS)

    Kodeli, Ivan; Žerovnik, Gašper; Milocco, Alberto

    2017-09-01

The SINBAD database currently contains compilations and evaluations of over 100 shielding benchmark experiments. The SINBAD database is widely used for code and data validation. Materials covered include: air, N, O, H2O, Al, Be, Cu, graphite, concrete, Fe, stainless steel, Pb, Li, Ni, Nb, SiC, Na, W, V and mixtures thereof. Over 40 organisations from 14 countries and 2 international organisations have contributed data and work in support of SINBAD. Examples of the use of the database in the scope of different international projects, such as the Working Party on Evaluation Cooperation of the OECD and the European Fusion Programme, demonstrate the merit and possible usage of the database for the validation of modern nuclear data evaluations and new computer codes.

  11. CLOUDCLOUD : general-purpose instrument monitoring and data managing software

    NASA Astrophysics Data System (ADS)

    Dias, António; Amorim, António; Tomé, António

    2016-04-01

An effective experiment depends on the ability to store and deliver data and information to all participating parties, regardless of their degree of involvement in the specific parts that make the experiment a whole. Fast, efficient and ubiquitous access to data increases visibility and discussion, such that the outcome will already have been reviewed several times, strengthening the conclusions. The CLOUD project aims at providing users with a general-purpose data acquisition, management and instrument monitoring platform that is fast, easy to use, lightweight and accessible to all participants of an experiment. This work is now implemented in the CLOUD experiment at CERN and will be fully integrated with the experiment as of 2016. Despite being used in an experiment of the scale of CLOUD, this software can also be used in experiments or monitoring stations of any size, from single computers to large networks of computers, to monitor any sort of instrument output without influencing the individual instrument's DAQ. Instrument data and metadata are stored and accessed via a specially designed database architecture, and any type of instrument output is accepted using our continuously growing parsing application. Multiple databases can be used to separate different data-taking periods, or a single database can be used if, for instance, an experiment is continuous. A simple web-based application gives the user total control over the monitored instruments and their data, allowing data visualization and download, upload of processed data, and the ability to edit existing instruments or add new instruments to the experiment. When in a network, new computers are immediately recognized and added to the system and are able to monitor instruments connected to them.
Automatic computer integration is achieved by a locally running Python-based parsing agent that communicates with a main server application, guaranteeing that all instruments assigned to that computer are monitored with parsing intervals as fast as milliseconds. This software (server + agents + interface + database) comes in easy, ready-to-use packages that can be installed on any operating system, including Android and iOS. This software is ideal for use in modular experiments or monitoring stations with large variability in instruments and measuring methods, or in large collaborations, where data require homogenization in order to be effectively transmitted to all involved parties. This work presents the software and provides a performance comparison with previously used monitoring systems in the CLOUD experiment at CERN.
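The polling-agent pattern described above can be sketched as follows. This is an illustrative toy, not the CLOUD code: a hypothetical `poll_instrument` function re-reads an instrument's output file on each poll and parses only the lines added since the previous poll:

```python
import pathlib
import tempfile

def poll_instrument(path, lines_seen, records):
    """Parse any 'timestamp,value' lines not seen on a previous poll."""
    lines = pathlib.Path(path).read_text().splitlines()
    for line in lines[lines_seen:]:
        ts, value = line.split(",")
        records.append((ts.strip(), float(value)))
    return len(lines)  # new high-water mark for the next poll

# Demo: an "instrument" output file that grows between polls
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False) as f:
    f.write("2016-04-01T12:00:00,3.2\n")
    name = f.name
records = []
seen = poll_instrument(name, 0, records)
with open(name, "a") as f:
    f.write("2016-04-01T12:00:01,3.4\n")
seen = poll_instrument(name, seen, records)
print(records)  # [('2016-04-01T12:00:00', 3.2), ('2016-04-01T12:00:01', 3.4)]
```

In a real agent the loop would run on a timer and forward the parsed records to the central server instead of accumulating them in a list.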

  12. Establishment and Assessment of Plasma Disruption and Warning Databases from EAST

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Robert, Granetz; Xiao, Bingjia; Li, Jiangang; Yang, Fei; Li, Junjun; Chen, Dalong

    2016-12-01

Disruption and disruption-warning databases for the EAST tokamak have been established by a disruption research group. The disruption database, based on Structured Query Language (SQL), comprises 41 disruption parameters, including current quench characteristics, EFIT equilibrium characteristics, kinetic parameters, halo currents, and vertical motion. Presently, most disruption databases are based on plasma experiments from non-superconducting tokamak devices. The purposes of the EAST databases are to determine disruption characteristics and statistics for the fully superconducting tokamak EAST, to elucidate the physics underlying tokamak disruptions, to explore the influence of disruptions on superconducting magnets, and to extrapolate toward future burning plasma devices. In order to quantitatively assess the usefulness of various plasma parameters for predicting disruptions, an SQL database similar to that of Alcator C-Mod has been created for EAST by compiling values for a number of proposed disruption-relevant parameters sampled from all plasma discharges in the 2015 campaign. Detailed statistical results and analysis of the two databases on the EAST tokamak are presented. Supported by the National Magnetic Confinement Fusion Science Program of China (No. 2014GB103000)

  13. Validation of the 'United Registries for Clinical Assessment and Research' [UR-CARE], a European Online Registry for Clinical Care and Research in Inflammatory Bowel Disease.

    PubMed

    Burisch, Johan; Gisbert, Javier P; Siegmund, Britta; Bettenworth, Dominik; Thomsen, Sandra Bohn; Cleynen, Isabelle; Cremer, Anneline; Ding, Nik John Sheng; Furfaro, Federica; Galanopoulos, Michail; Grunert, Philip Christian; Hanzel, Jurij; Ivanovski, Tamara Knezevic; Krustins, Eduards; Noor, Nurulamin; O'Morain, Neil; Rodríguez-Lago, Iago; Scharl, Michael; Tua, Julia; Uzzan, Mathieu; Ali Yassin, Nuha; Baert, Filip; Langholz, Ebbe

    2018-04-27

The 'United Registries for Clinical Assessment and Research' [UR-CARE] database is an initiative of the European Crohn's and Colitis Organisation [ECCO] to facilitate daily patient care and research studies in inflammatory bowel disease [IBD]. Herein, we sought to validate the database by using fictional case histories of patients with IBD that were to be entered by observers of varying experience in IBD. Nineteen observers entered five patient case histories into the database. After 6 weeks, all observers entered the same case histories again. For each case history, 20 key variables were selected to calculate the accuracy for each observer. We considered the database valid if ≥ 90% of the entered data were correct. The overall proportion of correctly entered data was calculated using a beta-binomial regression model to account for inter-observer variation, and compared to the expected level of validity. Re-test reliability was assessed using McNemar's test. For all case histories, the overall proportion of correctly entered items and their confidence intervals included the target of 90% (Case 1: 92% [88-94%]; Case 2: 87% [83-91%]; Case 3: 93% [90-95%]; Case 4: 97% [94-99%]; Case 5: 91% [87-93%]). These numbers did not differ significantly from those found 6 weeks later [McNemar's test p > 0.05]. The UR-CARE database appears to be feasible, valid and reliable as a tool, and easy to use regardless of prior user experience and level of clinical IBD experience. UR-CARE has the potential to enhance future European collaborations regarding clinical research in IBD.

  14. An expression database for roots of the model legume Medicago truncatula under salt stress

    PubMed Central

    2009-01-01

Background: Medicago truncatula is a model legume whose genome is currently being sequenced by an international consortium. Abiotic stresses such as salt stress limit plant growth and crop productivity, including those of legumes. We anticipate that studies on M. truncatula will shed light on other economically important legumes across the world. Here, we report the development of a database called MtED that contains gene expression profiles of the roots of M. truncatula based on time-course salt stress experiments using the Affymetrix Medicago GeneChip. Our hope is that MtED will provide information to assist in improving abiotic stress resistance in legumes. Description: The results of our microarray experiment with roots of M. truncatula under 180 mM sodium chloride were deposited in the MtED database. Additionally, sequence and annotation information regarding the microarray probe sets were included. MtED provides functional category analysis based on Gene and GeneBins Ontology, and other Web-based tools for querying and retrieving query results, browsing pathways and transcription factor families, showing metabolic maps, and comparing and visualizing expression profiles. Utilities such as mapping probe sets to the M. truncatula genome and in-silico PCR were implemented with the BLAT software suite and are also available through the MtED database. Conclusion: MtED was built in the PHP scripting language with a MySQL relational database system on a Linux server. It has an integrated Web interface, which facilitates ready examination and interpretation of the results of microarray experiments. It is intended to help in selecting gene markers to improve abiotic stress resistance in legumes. MtED is available at http://bioinformatics.cau.edu.cn/MtED/. PMID:19906315

  15. An expression database for roots of the model legume Medicago truncatula under salt stress.

    PubMed

    Li, Daofeng; Su, Zhen; Dong, Jiangli; Wang, Tao

    2009-11-11

Medicago truncatula is a model legume whose genome is currently being sequenced by an international consortium. Abiotic stresses such as salt stress limit plant growth and crop productivity, including those of legumes. We anticipate that studies on M. truncatula will shed light on other economically important legumes across the world. Here, we report the development of a database called MtED that contains gene expression profiles of the roots of M. truncatula based on time-course salt stress experiments using the Affymetrix Medicago GeneChip. Our hope is that MtED will provide information to assist in improving abiotic stress resistance in legumes. The results of our microarray experiment with roots of M. truncatula under 180 mM sodium chloride were deposited in the MtED database. Additionally, sequence and annotation information regarding the microarray probe sets were included. MtED provides functional category analysis based on Gene and GeneBins Ontology, and other Web-based tools for querying and retrieving query results, browsing pathways and transcription factor families, showing metabolic maps, and comparing and visualizing expression profiles. Utilities such as mapping probe sets to the M. truncatula genome and in-silico PCR were implemented with the BLAT software suite and are also available through the MtED database. MtED was built in the PHP scripting language with a MySQL relational database system on a Linux server. It has an integrated Web interface, which facilitates ready examination and interpretation of the results of microarray experiments. It is intended to help in selecting gene markers to improve abiotic stress resistance in legumes. MtED is available at http://bioinformatics.cau.edu.cn/MtED/.

  16. Lessons learned from process incident databases and the process safety incident database (PSID) approach sponsored by the Center for Chemical Process Safety.

    PubMed

    Sepeda, Adrian L

    2006-03-17

    Learning from the experiences of others has long been recognized as a valued and relatively painless process. In the world of process safety, this learning method is an essential tool since industry has neither the time and resources nor the willingness to experience an incident before taking corrective or preventative steps. This paper examines the need for and value of process safety incident databases that collect incidents of high learning value and structure them so that needed information can be easily and quickly extracted. It also explores how they might be used to prevent incidents by increasing awareness and by being a tool for conducting PHAs and incident investigations. The paper then discusses how the CCPS PSID meets those requirements, how PSID is structured and managed, and its attributes and features.

  17. Generic Entity Resolution in Relational Databases

    NASA Astrophysics Data System (ADS)

    Sidló, Csaba István

Entity Resolution (ER) covers the problem of identifying distinct representations of real-world entities in heterogeneous databases. We consider the generic formulation of ER problems (GER) with exact outcome. In practice, input data usually reside in relational databases and can grow to huge volumes. Yet, typical solutions described in the literature employ standalone, memory-resident algorithms. In this paper we utilize facilities of standard, unmodified relational database management systems (RDBMS) to enhance the efficiency of GER algorithms. We study and revise the problem formulation, and propose practical and efficient algorithms optimized for RDBMS external-memory processing. We outline a real-world scenario and demonstrate the advantage of the algorithms by performing experiments on insurance customer data.
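One common way to push ER work into the RDBMS, sketched here as an illustration rather than as the paper's algorithm, is to generate candidate pairs with a blocking self-join so that only records sharing a blocking key (a hypothetical zip-code column below) are ever compared; SQLite stands in for a full RDBMS:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer (id INTEGER, name TEXT, zip TEXT)")
con.executemany("INSERT INTO customer VALUES (?, ?, ?)", [
    (1, "J. Smith", "10001"),
    (2, "John Smith", "10001"),
    (3, "Jane Doe", "94105"),
])

# Blocking self-join: the database enumerates only within-block pairs,
# and a < condition on id avoids duplicates and self-pairs.
pairs = con.execute("""
    SELECT a.id, b.id
    FROM customer a JOIN customer b
      ON a.zip = b.zip AND a.id < b.id
""").fetchall()
print(pairs)  # [(1, 2)]
```

A subsequent (in-database or external) similarity step would then decide which candidate pairs actually refer to the same entity.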

  18. Algorithm to calculate proportional area transformation factors for digital geographic databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, R.

    1983-01-01

A computer technique is described for determining proportionate-area factors used to transform thematic data between large geographic areal databases. The number of calculations in the algorithm increases linearly with the number of segments in the polygonal definitions of the databases, and increases with the square root of the total number of chains. Experience is presented in calculating transformation factors for two national databases, the USGS Water Cataloging Unit outlines and DOT county boundaries, which consist of 2100 and 3100 polygons respectively. The technique facilitates using thematic data defined on various natural bases (watersheds, landcover units, etc.) in analyses involving economic and other administrative bases (states, counties, etc.), and vice versa.
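Once the intersection areas between the two polygon sets are known, applying proportionate-area factors is straightforward. The following sketch (hypothetical zones and values, not the report's code) shows the weighting step: the factor f[s][t] = area(s ∩ t) / area(s) redistributes a thematic value assumed to be uniformly spread over each source zone:

```python
# Hypothetical intersection areas (km^2) between watersheds and counties
intersections = {
    ("watershed_A", "county_1"): 30.0,
    ("watershed_A", "county_2"): 70.0,
    ("watershed_B", "county_2"): 50.0,
}
source_area = {"watershed_A": 100.0, "watershed_B": 50.0}
runoff = {"watershed_A": 200.0, "watershed_B": 80.0}  # thematic data per watershed

county_totals = {}
for (src, tgt), a in intersections.items():
    factor = a / source_area[src]  # proportionate-area transformation factor
    county_totals[tgt] = county_totals.get(tgt, 0.0) + factor * runoff[src]

print(county_totals)  # {'county_1': 60.0, 'county_2': 220.0}
```

Computing the intersection areas themselves is the expensive polygon-overlay step whose cost the abstract characterizes in terms of segments and chains.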

  19. General physicians: born or made? The use of a tracking database to answer medical workforce questions.

    PubMed

    Poole, P; McHardy, K; Janssen, A

    2009-07-01

The aim of the study was to use a tracking database to investigate the perceived influence of various factors on the career choices of New Zealand medical graduates, and specifically to examine whether experiences at medical school may affect a decision to become a general physician. Questionnaires were distributed to medical students in the current University of Auckland programme at entry and exit points. The surveys have been completed by two entry cohorts and one exit cohort since 2006. The response rates were 70% and 88% in the entry and exit groups, respectively. More than 75% of exiting students reported an interest in pursuing a career in general internal medicine. In 42%, this is a 'strong interest' in general medicine, compared with 23% in the entry cohort (P < 0.0001). There is a correlation between a positive experience in a clinical rotation and the reported level of interest in that specialty, with those indicating a good experience likely to specify career intentions in that area. Having a positive experience in a clinical rotation, positive role models and flexibility in training are the most influential factors affecting career decisions in Auckland medical students. Only 11% of study respondents reported that student loan burden has a significant influence on career decisions. Quality experiences on attachments seem essential for undergraduates to promote interest in general medicine. There is potential for curriculum design and clinical experiences to be formulated to promote the 'making' of these doctors. Tracking databases will assist in answering some of these questions.

  20. Experiences and Preferences for End-of-Life Care for Young Adults with Cancer and Their Informal Carers: A Narrative Synthesis.

    PubMed

    Ngwenya, Nothando; Kenten, Charlotte; Jones, Louise; Gibson, Faith; Pearce, Susie; Flatley, Mary; Hough, Rachael; Stirling, L Caroline; Taylor, Rachel M; Wong, Geoff; Whelan, Jeremy

    2017-06-01

    To review the qualitative literature on experiences of and preferences for end-of-life care of people with cancer aged 16-40 years (young adults) and their informal carers. A systematic review using narrative synthesis of qualitative studies using the 2006 UK Economic and Social Research Council research methods program guidance. Seven electronic bibliographic databases, two clinical trials databases, and three relevant theses databases were searched from January 2004 to October 2015. Eighteen articles were included from twelve countries. The selected studies included at least 5% of their patient sample within the age range 16-40 years. The studies were heterogeneous in their aims, focus, and sample, but described different aspects of end-of-life care for people with cancer. Positive experiences included facilitating adaptive coping and receiving palliative home care, while negative experiences were loss of "self" and nonfacilitative services and environment. Preferences included a family-centered approach to care, honest conversations about end of life, and facilitating normality. There is little evidence focused on the end-of-life needs of young adults. Analysis of reports including some young adults does not explore experience or preferences by age; therefore, it is difficult to identify age-specific issues clearly. From this review, we suggest that supportive interventions and education are needed to facilitate open and honest communication at an appropriate level with young people. Future research should focus on age-specific evidence about the end-of-life experiences and preferences for young adults with cancer and their informal carers.

  1. Supplement to the Carcinogenic Potency Database (CPDB): Results ofanimal bioassays published in the general literature through 1997 and bythe National Toxicology Program in 1997-1998

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gold, Lois Swirsky; Manley, Neela B.; Slone, Thomas H.

    2005-04-08

The Carcinogenic Potency Database (CPDB) is a systematic and unifying resource that standardizes the results of chronic, long-term animal cancer tests which have been conducted since the 1950s. The analyses include sufficient information on each experiment to permit research into many areas of carcinogenesis. Both qualitative and quantitative information is reported on positive and negative experiments that meet a set of inclusion criteria. A measure of carcinogenic potency, TD50 (daily dose rate in mg/kg body weight/day to induce tumors in half of test animals that would have remained tumor-free at zero dose), is estimated for each tissue-tumor combination reported. This article is the ninth publication of a chronological plot of the CPDB; it presents results on 560 experiments of 188 chemicals in mice, rats, and hamsters from 185 publications in the general literature updated through 1997, and from 15 Reports of the National Toxicology Program in 1997-1998. The test agents cover a wide variety of uses and chemical classes. The CPDB Web Site (http://potency.berkeley.edu/) presents the combined database of all published plots in a variety of formats as well as summary tables by chemical and by target organ, supplemental materials on dosing and survival, a detailed guide to using the plot formats, and documentation of methods and publications. The overall CPDB, including the results in this article, presents easily accessible results of 6153 experiments on 1485 chemicals from 1426 papers and 429 NCI/NTP (National Cancer Institute/National Toxicology Program) Technical Reports. A tab-separated format of the full CPDB for reading the data into spreadsheets or database applications is available on the Web Site.
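The TD50 definition above can be made concrete under a one-hit dose-response model, one of the models used in the CPDB literature (the slope value below is hypothetical): extra tumor risk over background is 1 - exp(-b*d), so the dose halving the tumor-free fraction is TD50 = ln(2) / b.

```python
import math

def td50_one_hit(b):
    """Dose (mg/kg body weight/day) at which half of the animals that would
    have remained tumor-free at zero dose develop tumors, assuming a
    one-hit model with extra risk 1 - exp(-b*d)."""
    return math.log(2) / b

b = 0.0139  # hypothetical potency slope fitted to a bioassay
print(round(td50_one_hit(b), 1))  # 49.9
```

Smaller TD50 values indicate greater carcinogenic potency, since less daily dose is needed to halve the tumor-free fraction.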

  2. The new interactive CESAR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, P.B.; Yatabe, M.

    1987-01-01

    In this report the Nuclear Criticality Safety Analytical Methods Resource Center describes a new interactive version of CESAR, a critical experiments storage and retrieval program available on the Nuclear Criticality Information System (NCIS) database at Lawrence Livermore National Laboratory. The original version of CESAR did not include interactive search capabilities. The CESAR database was developed to provide a convenient, readily accessible means of storing and retrieving code input data for the SCALE Criticality Safety Analytical Sequences and the codes comprising those sequences. The database includes data for both cross section preparation and criticality safety calculations. 3 refs., 1 tab.

  3. A database for spectral image quality

    NASA Astrophysics Data System (ADS)

    Le Moan, Steven; George, Sony; Pedersen, Marius; Blahová, Jana; Hardeberg, Jon Yngve

    2015-01-01

We introduce a new image database dedicated to multi-/hyperspectral image quality assessment. A total of nine scenes representing pseudo-flat surfaces of different materials (textile, wood, skin, etc.) were captured by means of a 160-band hyperspectral system with a spectral range between 410 and 1000 nm. Five spectral distortions were designed, applied to the spectral images and subsequently compared in a psychometric experiment, in order to provide a basis for applications such as the evaluation of spectral image difference measures. The database can be downloaded freely from http://www.colourlab.no/cid.

  4. ELISA-BASE: An Integrated Bioinformatics Tool for Analyzing and Tracking ELISA Microarray Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Amanda M.; Collett, James L.; Seurynck-Servoss, Shannon L.

ELISA-BASE is an open-source database for capturing, organizing and analyzing protein enzyme-linked immunosorbent assay (ELISA) microarray data. ELISA-BASE is an extension of the BioArray Software Environment (BASE) database system, which was developed for DNA microarrays. In order to make BASE suitable for protein microarray experiments, we developed several plugins for importing and analyzing quantitative ELISA microarray data. Most notably, our Protein Microarray Analysis Tool (ProMAT) for processing quantitative ELISA data is now available as a plugin to the database.

  5. Interactive, Automated Management of Icing Data

    NASA Technical Reports Server (NTRS)

    Levinson, Laurie H.

    2009-01-01

IceVal DatAssistant is software (see figure) that provides an automated, interactive solution for the management of data from research on aircraft icing. This software consists primarily of (1) a relational database component used to store ice shape and airfoil coordinates and associated data on operational and environmental test conditions and (2) a graphically oriented database access utility used to upload, download, process, and/or display data selected by the user. The relational database component consists of a Microsoft Access 2003 database file with nine tables containing data of different types. Included in the database are the data for all publicly releasable ice tracings with complete and verifiable test conditions from experiments conducted to date in the Glenn Research Center Icing Research Tunnel. Ice shapes from computational simulations with the corresponding conditions, performed utilizing the latest version of the LEWICE ice shape prediction code, are likewise included and are linked to the equivalent experimental runs. The database access component includes ten Microsoft Visual Basic 6.0 (VB) form modules and three VB support modules. Together, these modules enable uploading, downloading, processing, and display of all data contained in the database. This component also affords the capability to perform various database maintenance functions, for example, compacting the database or creating a new, fully initialized but empty database file.

  6. Chemical transformations of complex mixtures relevant to atmospheric processes: Laboratory and ambient studies

    NASA Astrophysics Data System (ADS)

    Samy, Shahryar (Shar)

The study of atmospheric chemistry and chemical transformations relevant to conditions in the ambient atmosphere requires the investigation of complex mixtures. In the atmosphere, complex mixtures (e.g. diesel emissions) are continually evolving as a result of physical and chemical transformations. This dissertation examines the transformations of modern diesel emissions (DE) in a series of experiments conducted at the European Outdoor Simulation Chamber (EUPHORE) in Valencia, Spain. Experimental design challenges are addressed, and the development of a NOx removal technology (denuder) is described, with results from the application of the newly developed NOx denuder in the most recent EUPHORE campaign (2006). In addition, the data from an ambient aerosol study that examines atmospheric transformation products are presented and discussed. Atmospheric transformations of DE and the associated secondary organic aerosol (SOA) production, along with chemical characterization of polar organic compounds (POC) in the EUPHORE experiments, provide valuable insight into the transformations of modern DE in environmentally relevant atmospheres. The greatest SOA production occurred in DE with toluene addition experiments (>40%), followed by DE with HCHO (for OH radical generation) experiments. A small amount of SOA (3%) was observed for DE in dark with N2O5 (for NO3 radical production) experiments. Distinct POC formation in light versus dark experiments suggests the role of OH-initiated reactions in these chamber atmospheres. A trend of increasing concentrations of dicarboxylic acids in light versus dark experiments was observed when evaluated on a compound-group basis. The production of diacids (as a compound group) is a consistent indicator of photochemical transformation in relation to studies in the ambient atmosphere.
The four toluene addition experiments in this study were performed at different [tol]o/[NOx]o ratios and displayed an average SOA yield (relative to toluene) of 5.3 +/- 1.6%, which is compared to past chamber studies that evaluated the impact of [tol]o/[NOx]o on SOA production in more simplified mixtures. Characterization of nitrated polycyclic aromatic hydrocarbons (NPAH, nitroarenes), which have been shown to be mutagenic and/or carcinogenic, was performed on time-integrated samples from the EUPHORE experiments. NPAH concentrations indicated that significant formation and/or degradation was taking place. An inter-experimental comparison showed that distinct gas-phase (2-nitronaphthalene) and particle-phase (2-nitrofluoranthene, 4-nitropyrene) NPAH production resulted in light versus dark experiments, and degradation, most likely due to photolysis, was observed for one of the most abundant NPAH (1-nitropyrene) in the ambient atmosphere. The evaluation of dark experiments in high- and low-NOx conditions revealed a significantly higher concentration of gas-phase NPAH (mostly due to 1-nitronaphthalene) in high-NOx experiments. Electrophilic nitration on chamber surfaces or sampling media cannot be ruled out as a possible mechanism for the elevated NPAH concentrations. Chapter 5 presents results from an aerosol sampling study at the Storm Peak Laboratory (SPL) (3210 m MSL, 40.45° N, 106.74° W) in the winter of 2007. The unique geographical character of SPL allows for extended observation/sampling of the free-tropospheric interface. Of 84 analytes included in the GC-MS method, over 50 individual water-extractable POC were present at concentrations greater than 0.1 ng m-3. Diurnal averages over the sampling period revealed a higher total concentration of POC at night, 211 ng m-3 (105-265 ng m-3), versus day, 160 ng m-3 (137-205 ng m-3), which suggests a more aged nighttime aerosol character. During a snow event (Jan. 11-13, 2007), the concentrations of daytime dicarboxylic acids, which may be considered atmospheric transformation products, were reduced. Lower actinic flux, reduced transport distance, and ice-crystal scavenging may explain this variability. Further evaluation of compound ratios (e.g. diacids to monoacids/levoglucosan) and the sampling-period dynamics was performed to delineate the diurnal aerosol character.

  7. THE NASA AMES POLYCYCLIC AROMATIC HYDROCARBON INFRARED SPECTROSCOPIC DATABASE: THE COMPUTED SPECTRA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauschlicher, C. W.; Ricca, A.; Boersma, C.

The astronomical emission features, formerly known as the unidentified infrared bands, are now commonly ascribed to polycyclic aromatic hydrocarbons (PAHs). The laboratory experiments and computational modeling done at the NASA Ames Research Center to create a collection of PAH IR spectra relevant to test and refine the PAH hypothesis have been assembled into a spectroscopic database. This database now contains over 800 PAH spectra spanning 2-2000 μm (5000-5 cm-1). These data are now available on the World Wide Web at www.astrochem.org/pahdb. This paper presents an overview of the computational spectra in the database and the tools developed to analyze and interpret astronomical spectra using the database. A description of the online and offline user tools available on the Web site is also presented.

  8. [Selected aspects of computer-assisted literature management].

    PubMed

    Reiss, M; Reiss, G

    1998-01-01

    We report our own experiences with a bibliographic database manager. Bibliographic database managers are used to manage information resources: specifically, to maintain a database of references and to create bibliographies and reference lists for written works. A database manager allows the user to enter summary information (a record) for articles, book sections, books, dissertations, conference proceedings, and so on. Other common features include the ability to import references from different sources, such as MEDLINE. The word-processing components generate reference lists and bibliographies in a variety of styles directly from a word-processor manuscript. The function and use of the software package EndNote 2 for Windows are described. Its advantages in fulfilling different requirements for citation style and the sort order of reference lists are emphasized.

  9. Remote collection and analysis of witness reports on flash floods

    NASA Astrophysics Data System (ADS)

    Gourley, J. J.; Erlingis, J. M.; Smith, T. M.; Ortega, K. L.; Hong, Y.

    2010-11-01

    Typically, flash floods are studied ex post facto in response to a major impact event. A complement to field investigations is developing a detailed database of flash flood events, including minor events and null reports (i.e., where heavy rain occurred but there was no flash flooding), based on public survey questions conducted in near-real time. The Severe Hazards Analysis and Verification Experiment (SHAVE) has been in operation at the National Severe Storms Laboratory (NSSL) in Norman, OK, USA during the summers since 2006. The experiment employs undergraduate students to analyse real-time products from weather radars, target specific regions within the conterminous US, and poll public residences and businesses regarding the occurrence and severity of hail, wind, tornadoes, and now flash floods. In addition to providing a rich learning experience for students, SHAVE has also been successful in creating high-resolution datasets of severe hazards used for algorithm and model verification. This paper describes the criteria used to initiate the flash flood survey, the specific questions asked and information entered into the database, and then provides an analysis of results for flash flood data collected during the summer of 2008. It is envisioned that specific details provided by the SHAVE flash flood observation database will complement databases collected by operational agencies (i.e., US National Weather Service Storm Data reports) and thus lead to better tools to predict the likelihood of flash floods and ultimately reduce their impacts on society.

  10. Effects of Solid Solution Strengthening Elements Mo, Re, Ru, and W on Transition Temperatures in Nickel-Based Superalloys with High γ'-Volume Fraction: Comparison of Experiment and CALPHAD Calculations

    NASA Astrophysics Data System (ADS)

    Ritter, Nils C.; Sowa, Roman; Schauer, Jan C.; Gruber, Daniel; Goehler, Thomas; Rettig, Ralf; Povoden-Karadeniz, Erwin; Koerner, Carolin; Singer, Robert F.

    2018-06-01

    We prepared 41 different superalloy compositions by an arc melting, casting, and heat treatment process. Alloy solid solution strengthening elements were added in graded amounts, and we measured the solidus, liquidus, and γ'-solvus temperatures of the samples by DSC. The γ'-phase fraction increased as the W, Mo, and Re contents were increased, and W showed the most pronounced effect. Ru decreased the γ'-phase fraction. Melting temperatures (i.e., solidus and liquidus) were increased by addition of Re, W, and Ru (the effect increased in that order). Addition of Mo decreased the melting temperature. W was effective as a strengthening element because it acted as a solid solution strengthener and increased the fraction of fine γ'-precipitates, thus improving precipitation strengthening. Experimentally determined values were compared with calculated values based on the CALPHAD software tools Thermo-Calc (databases: TTNI8 and TCNI6) and MatCalc (database ME-NI). The ME-NI database, which was specially adapted to the present investigation, showed good agreement. TTNI8 also showed good results. The TCNI6 database is suitable for computational design of complex nickel-based superalloys. However, a large deviation remained between the experiment results and calculations based on this database. It also erroneously predicted γ'-phase separations and failed to describe the Ru-effect on transition temperatures.

  11. Online database for documenting clinical pathology resident education.

    PubMed

    Hoofnagle, Andrew N; Chou, David; Astion, Michael L

    2007-01-01

    Training of clinical pathologists is evolving and must now address the 6 core competencies described by the Accreditation Council for Graduate Medical Education (ACGME), which include patient care. A substantial portion of the patient care performed by the clinical pathology resident takes place while the resident is on call for the laboratory, a practice that provides the resident with clinical experience and assists the laboratory in providing quality service to clinicians in the hospital and surrounding community. Documenting the educational value of these on-call experiences and providing evidence of competence is difficult for residency directors. An online database of these calls, entered by residents and reviewed by faculty, would provide a mechanism for documenting and improving the education of clinical pathology residents. With Microsoft Access we developed an online database that uses Active Server Pages and Secure Sockets Layer encryption to document calls to the clinical pathology resident. Using the data collected, we evaluated the efficacy of 3 interventions aimed at improving resident education. The database facilitated the documentation of more than 4,700 calls in the first 21 months it was online, provided archived resident-generated data to assist in serving clients, and demonstrated that 2 of the interventions aimed at improving resident education were successful. We have developed a secure online database, accessible from any computer with Internet access, that can be used to easily document clinical pathology resident education and competency.

  12. Lessons Learned from Managing a Petabyte

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becla, J

    2005-01-20

    The amount of data collected and stored by the average business doubles each year. Many commercial databases are already approaching hundreds of terabytes, and at this rate, will soon be managing petabytes. More data enables new functionality and capability, but the larger scale reveals new problems and issues hidden in "smaller" terascale environments. This paper presents some of these new problems along with implemented solutions in the framework of a petabyte dataset for a large High Energy Physics experiment. Through experience with two persistence technologies, a commercial database and a file-based approach, we expose format-independent concepts and issues prevalent at this new scale of computing.

  13. Understanding the Influence of Environment on Adults' Walking Experiences: A Meta-Synthesis Study.

    PubMed

    Dadpour, Sara; Pakzad, Jahanshah; Khankeh, Hamidreza

    2016-07-20

    The environment has an important impact on physical activity, especially walking. The relationship between the environment and walking is not the same as for other types of physical activity. This study seeks to comprehensively identify the environmental factors influencing walking and to show how those factors affect walking, drawing on the experiences of adults between the ages of 18 and 65. The current study is a meta-synthesis based on a systematic review. Seven databases of related disciplines were searched, including health, transportation, physical activity, architecture, and interdisciplinary databases. In addition to the databases, two journals were searched. Of the 11,777 papers identified, 10 met the eligibility criteria and quality for selection. Qualitative content analysis was used for analysis of the results. The four themes identified as influencing walking were "safety and security", "environmental aesthetics", "social relations", and "convenience and efficiency". "Convenience and efficiency" and "environmental aesthetics" could enhance the impact of "social relations" on walking in some aspects. In addition, "environmental aesthetics" and "social relations" could hinder the influence of "convenience and efficiency" on walking in some aspects. Given the results of the study, strategies are proposed to enhance the walking experience.

  14. Microsoft Enterprise Consortium: A Resource for Teaching Data Warehouse, Business Intelligence and Database Management Systems

    ERIC Educational Resources Information Center

    Kreie, Jennifer; Hashemi, Shohreh

    2012-01-01

    Data is a vital resource for businesses; therefore, it is important for businesses to manage and use their data effectively. Because of this, businesses value college graduates with an understanding of and hands-on experience working with databases, data warehouses and data analysis theories and tools. Faculty in many business disciplines try to…

  15. SPIRE Data-Base Management System

    NASA Technical Reports Server (NTRS)

    Fuechsel, C. F.

    1984-01-01

    The Spacelab Payload Integration and Rocket Experiment (SPIRE) data-base management system (DBMS) is based on a relational model of data bases. The data bases are typically used for engineering and mission analysis tasks and, unlike most commercially available systems, allow data items and data structures to be stored in forms suitable for direct analytical computation. The SPIRE DBMS is designed to support data requests from interactive users as well as applications programs.

  16. Making proteomics data accessible and reusable: Current state of proteomics databases and repositories

    PubMed Central

    Perez-Riverol, Yasset; Alpi, Emanuele; Wang, Rui; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2015-01-01

    Compared to other data-intensive disciplines such as genomics, public deposition and storage of MS-based proteomics data are still less developed due to, among other reasons, the inherent complexity of the data and the variety of data types and experimental workflows. In order to address this need, several public repositories for MS proteomics experiments have been developed, each with different purposes in mind. The most established resources are the Global Proteome Machine Database (GPMDB), PeptideAtlas, and the PRIDE database. Additionally, there are other useful (in many cases recently developed) resources such as ProteomicsDB, Mass Spectrometry Interactive Virtual Environment (MassIVE), Chorus, MaxQB, PeptideAtlas SRM Experiment Library (PASSEL), Model Organism Protein Expression Database (MOPED), and the Human Proteinpedia. In addition, the ProteomeXchange consortium has been recently developed to enable better integration of public repositories and the coordinated sharing of proteomics information, maximizing its benefit to the scientific community. Here, we will review each of the major proteomics resources independently and some tools that enable the integration, mining and reuse of the data. We will also discuss some of the major challenges and current pitfalls in the integration and sharing of the data. PMID:25158685

  17. Observations of HF backscatter decay rates from HAARP generated FAI

    NASA Astrophysics Data System (ADS)

    Bristow, William; Hysell, David

    2016-07-01

    Suitable experiments at the High-frequency Active Auroral Research Program (HAARP) facilities in Gakona, Alaska, create a region of ionospheric Field-Aligned Irregularities (FAI) that produces strong radar backscatter observed by the SuperDARN radar on Kodiak Island, Alaska. Creation of FAI in HF ionospheric modification experiments has been studied by a number of authors who have developed a rich theoretical background. The decay of the irregularities, however, has not been so widely studied, yet it has the potential for providing estimates of the parameters of natural irregularity diffusion, which are difficult to measure by other means. Hysell et al. [1996] demonstrated the use of the decay of radar scatter above the Sura heating facility to estimate irregularity diffusion. A large database of radar backscatter from HAARP generated FAI has been collected over the years. Experiments often cycled the heater power on and off in a way that allowed estimates of the FAI decay rate. The database has been examined to extract decay time estimates and diffusion rates over a range of ionospheric conditions. This presentation will summarize the database and the estimated diffusion rates, and will discuss the potential for targeted experiments for aeronomy measurements. Hysell, D. L., M. C. Kelley, Y. M. Yampolski, V. S. Beley, A. V. Koloskov, P. V. Ponomarenko, and O. F. Tyrnov, HF radar observations of decaying artificial field aligned irregularities, J. Geophys. Res., 101, 26,981, 1996.
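The core measurement idea above, estimating a decay time from backscatter power after the heater switches off, can be sketched in a few lines. This is an illustration only, not the authors' analysis code: the pure exponential model and the `decay_time` helper are assumptions made for the example.

```python
import math

# If backscatter power decays as P(t) = P0 * exp(-t / tau) after heater
# turn-off, then ln P is linear in t with slope -1/tau, so an ordinary
# least-squares line fit of ln(power) against time yields the decay time.

def decay_time(times, powers):
    """Estimate tau from (time, power) samples via a log-linear fit."""
    logs = [math.log(p) for p in powers]
    n = len(times)
    mean_t = sum(times) / n
    mean_l = sum(logs) / n
    slope = (sum((t - mean_t) * (l - mean_l) for t, l in zip(times, logs))
             / sum((t - mean_t) ** 2 for t in times))
    return -1.0 / slope

# Synthetic, noiseless data with tau = 2.0 s recovers the input decay time.
tau_true = 2.0
times = [0.0, 0.5, 1.0, 1.5, 2.0]
powers = [100.0 * math.exp(-t / tau_true) for t in times]
print(round(decay_time(times, powers), 3))  # → 2.0
```

With real radar data one would fit only the post-turn-off portion of each heater cycle and propagate the measurement noise into the tau estimate.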

  18. Observations of HF backscatter decay rates from HAARP generated FAI

    NASA Astrophysics Data System (ADS)

    Bristow, W. A.; Hysell, D. L.

    2016-12-01

    Suitable experiments at the High-frequency Active Auroral Research Program (HAARP) facilities in Gakona, Alaska, create a region of ionospheric Field-Aligned Irregularities (FAI) that produces strong radar backscatter observed by the SuperDARN radar on Kodiak Island, Alaska. Creation of FAI in HF ionospheric modification experiments has been studied by a number of authors who have developed a rich theoretical background. The decay of the irregularities, however, has not been so widely studied, yet it has the potential for providing estimates of the parameters of natural irregularity diffusion, which are difficult to measure by other means. Hysell et al. [1996] demonstrated the use of the decay of radar scatter above the Sura heating facility to estimate irregularity diffusion. A large database of radar backscatter from HAARP generated FAI has been collected over the years. Experiments often cycled the heater power on and off in a way that allowed estimates of the FAI decay rate. The database has been examined to extract decay time estimates and diffusion rates over a range of ionospheric conditions. This presentation will summarize the database and the estimated diffusion rates, and will discuss the potential for targeted experiments for aeronomy measurements. Hysell, D. L., M. C. Kelley, Y. M. Yampolski, V. S. Beley, A. V. Koloskov, P. V. Ponomarenko, and O. F. Tyrnov, HF radar observations of decaying artificial field aligned irregularities, J. Geophys. Res., 101, 26,981, 1996.

  19. Domain Regeneration for Cross-Database Micro-Expression Recognition

    NASA Astrophysics Data System (ADS)

    Zong, Yuan; Zheng, Wenming; Huang, Xiaohua; Shi, Jingang; Cui, Zhen; Zhao, Guoying

    2018-05-01

    In this paper, we investigate the cross-database micro-expression recognition problem, where the training and testing samples are from two different micro-expression databases. Under this setting, the training and testing samples would have different feature distributions and hence the performance of most existing micro-expression recognition methods may decrease greatly. To solve this problem, we propose a simple yet effective method called Target Sample Re-Generator (TSRG) in this paper. By using TSRG, we are able to re-generate the samples from the target micro-expression database, and the re-generated target samples share the same or similar feature distributions with the original source samples. For this reason, we can then use the classifier learned on the labeled source samples to accurately predict the micro-expression categories of the unlabeled target samples. To evaluate the performance of the proposed TSRG method, extensive cross-database micro-expression recognition experiments designed based on the SMIC and CASME II databases are conducted. Compared with recent state-of-the-art cross-database emotion recognition methods, the proposed TSRG achieves more promising results.

  20. A Database as a Service for the Healthcare System to Store Physiological Signal Data.

    PubMed

    Chang, Hsien-Tsung; Lin, Tsai-Huei

    2016-01-01

    Wearable devices that measure physiological signals to help develop self-health management habits have become increasingly popular in recent years. These records are conducive to follow-up health and medical care. In this study, based on the characteristics of the observed physiological signal records: (1) a large number of users, (2) a large amount of data, (3) low information variability, (4) data privacy authorization, and (5) data access by designated users, we wish to resolve physiological signal record-relevant issues by utilizing the advantages of the Database as a Service (DaaS) model. Storing a large amount of data using file patterns can reduce database load, allowing users to access data efficiently; the privacy control settings allow users to store data securely. The results of the experiment show that the proposed system has better database access performance than a traditional relational database, with a small difference in database volume, thus proving that the proposed system can improve data storage performance.
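The storage split described above — bulk samples in flat files, with the database holding only metadata and access authorizations — can be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation; the table layout and the `store_signal`/`grant_access`/`fetch_signal` helpers are invented for the example.

```python
import os
import sqlite3
import tempfile

def store_signal(db, data_dir, user_id, signal_name, samples):
    """Write raw samples to a file; record only its path in the database."""
    path = os.path.join(data_dir, f"{user_id}_{signal_name}.csv")
    with open(path, "w") as f:
        f.write(",".join(str(s) for s in samples))
    db.execute("INSERT INTO signals (user_id, name, path) VALUES (?, ?, ?)",
               (user_id, signal_name, path))
    return path

def grant_access(db, owner_id, grantee_id):
    """Authorize a designated user to read the owner's records."""
    db.execute("INSERT INTO grants (owner_id, grantee_id) VALUES (?, ?)",
               (owner_id, grantee_id))

def fetch_signal(db, requester_id, owner_id, signal_name):
    """Return samples only if the requester owns the data or holds a grant."""
    if requester_id != owner_id:
        row = db.execute(
            "SELECT 1 FROM grants WHERE owner_id = ? AND grantee_id = ?",
            (owner_id, requester_id)).fetchone()
        if row is None:
            raise PermissionError("no access grant")
    (path,) = db.execute(
        "SELECT path FROM signals WHERE user_id = ? AND name = ?",
        (owner_id, signal_name)).fetchone()
    with open(path) as f:
        return [float(x) for x in f.read().split(",")]

# Metadata lives in a small relational store; the heavy sample data does not.
data_dir = tempfile.mkdtemp()
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE signals (user_id TEXT, name TEXT, path TEXT)")
db.execute("CREATE TABLE grants (owner_id TEXT, grantee_id TEXT)")

store_signal(db, data_dir, "alice", "heart_rate", [62.0, 64.5, 63.0])
grant_access(db, "alice", "dr_bob")
print(fetch_signal(db, "dr_bob", "alice", "heart_rate"))
```

Because queries touch only small metadata rows, database load stays low regardless of how large the signal files grow, which is the performance effect the record reports.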

  1. Data Mining the Ogle-II I-band Database for Eclipsing Binary Stars

    NASA Astrophysics Data System (ADS)

    Ciocca, M.

    2013-08-01

    The OGLE I-band database is a searchable database of quality photometric data available to the public. During Phase 2 of the experiment, known as "OGLE-II", I-band observations were made over a period of approximately 1,000 days, resulting in over 10^10 measurements of more than 40 million stars. This was accomplished by using a filter with a passband near the standard Cousins Ic. The database of these observations is fully searchable using the MySQL database engine, and provides the magnitude measurements and their uncertainties. In this work, a program of data mining the OGLE I-band database was performed, resulting in the discovery of 42 previously unreported eclipsing binaries. Using the software package Peranso (Vanmunster 2011) to analyze the light curves obtained from OGLE-II, the eclipsing types, epochs, and periods of these eclipsing variables were determined to one part in 10^6. A preliminary attempt to model the physical parameters of these binaries was also performed, using the Binary Maker 3 software (Bradstreet and Steelman 2004).
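The basic light-curve operation behind such period analysis — folding observation times at a trial period so that eclipses line up — can be illustrated briefly. The study itself used Peranso for this; the `phase_fold` helper and the sample times below are invented for illustration.

```python
# Phase-folding: map each observation time onto orbital phase in [0, 1).
# Observations separated by whole multiples of the period land on the same
# phase, so a correct trial period stacks the eclipses on top of each other.

def phase_fold(times, period, epoch=0.0):
    """Return the phase of each observation time for a given trial period."""
    return [((t - epoch) / period) % 1.0 for t in times]

# A fictitious eclipsing binary with a 2.5-day period, times in days:
times = [100.0, 102.5, 105.0, 101.25]
phases = phase_fold(times, period=2.5, epoch=100.0)
print(phases)  # first three observations coincide at phase 0.0, last at 0.5
```

A period search then amounts to scanning trial periods and scoring how sharply the folded magnitudes cluster, which is what packages like Peranso automate.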

  2. An Object-Relational Ifc Storage Model Based on Oracle Database

    NASA Astrophysics Data System (ADS)

    Li, Hang; Liu, Hua; Liu, Yong; Wang, Yuan

    2016-06-01

    As building models become increasingly complicated, the level of collaboration across professionals is attracting more attention in the architecture, engineering and construction (AEC) industry. To accommodate this change, buildingSMART developed the Industry Foundation Classes (IFC) to facilitate interoperability between software platforms. However, IFC data are currently shared in the form of text files, which has drawbacks. In this paper, considering the object-based inheritance hierarchy of IFC and the storage features of different database management systems (DBMS), we propose a novel object-relational storage model that uses an Oracle database to store IFC data. Firstly, we establish the mapping rules between data types in the IFC specification and the Oracle database. Secondly, we design the IFC database according to the relationships among IFC entities. Thirdly, we parse the IFC file and extract the IFC data. Lastly, we store the IFC data in the corresponding tables of the IFC database. In our experiments, three different building models are selected to demonstrate the effectiveness of the storage model. A comparison of the experimental statistics shows that IFC data are lossless during data exchange.
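The parse-and-extract step can be sketched for the STEP-encoded data lines in which IFC files are serialized. This is a minimal illustration, not the authors' implementation: the regular expression handles only simple single-line instances, and the entity names and GUIDs below are made up.

```python
import re

# A STEP data line has the shape "#<id>=<ENTITY>(<attributes>);".
# Extracting the instance id and entity type lets each instance be routed
# to the table created for that entity type in the object-relational model.
LINE_RE = re.compile(r"#(\d+)\s*=\s*(IFC\w+)\s*\((.*)\);")

def parse_ifc_line(line):
    """Return (instance_id, entity_type, raw_attributes) for one data line."""
    m = LINE_RE.match(line.strip())
    if m is None:
        raise ValueError(f"not an IFC data line: {line!r}")
    return int(m.group(1)), m.group(2), m.group(3)

# Route parsed instances into per-entity buckets, standing in for the
# per-entity tables of the storage model described above.
tables = {}
sample = [
    "#1=IFCPROJECT('2O2Fr$t4X7Zf8NOew3FLOH',$,'Demo',$,$,$,$,$,$);",
    "#35=IFCWALL('3vB2YO$MX4xv5uCqZZG05x',$,'Wall-01',$,$,$,$,$,$);",
]
for line in sample:
    inst_id, entity, attrs = parse_ifc_line(line)
    tables.setdefault(entity, []).append(inst_id)
print(tables)
```

A production importer would additionally resolve the `#n` cross-references between instances and split the attribute list according to each entity's schema before issuing the actual `INSERT` statements.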

  3. A Database as a Service for the Healthcare System to Store Physiological Signal Data

    PubMed Central

    Lin, Tsai-Huei

    2016-01-01

    Wearable devices that measure physiological signals to help develop self-health management habits have become increasingly popular in recent years. These records are conducive to follow-up health and medical care. In this study, based on the characteristics of the observed physiological signal records: (1) a large number of users, (2) a large amount of data, (3) low information variability, (4) data privacy authorization, and (5) data access by designated users, we wish to resolve physiological signal record-relevant issues by utilizing the advantages of the Database as a Service (DaaS) model. Storing a large amount of data using file patterns can reduce database load, allowing users to access data efficiently; the privacy control settings allow users to store data securely. The results of the experiment show that the proposed system has better database access performance than a traditional relational database, with a small difference in database volume, thus proving that the proposed system can improve data storage performance. PMID:28033415

  4. Space Launch System Booster Separation Aerodynamic Database Development and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Chan, David T.; Pinier, Jeremy T.; Wilcox, Floyd J., Jr.; Dalle, Derek J.; Rogers, Stuart E.; Gomez, Reynaldo J.

    2016-01-01

    The development of the aerodynamic database for the Space Launch System (SLS) booster separation environment has presented many challenges because of the complex physics of the flow around three independent bodies due to proximity effects and jet interactions from the booster separation motors and the core stage engines. This aerodynamic environment is difficult to simulate in a wind tunnel experiment and also difficult to simulate with computational fluid dynamics. The database is further complicated by the high dimensionality of the independent variable space, which includes the orientation of the core stage, the relative positions and orientations of the solid rocket boosters, and the thrust levels of the various engines. Moreover, the clearance between the core stage and the boosters during the separation event is sensitive to the aerodynamic uncertainties of the database. This paper will present the development process for Version 3 of the SLS booster separation aerodynamic database and the statistics-based uncertainty quantification process for the database.

  5. EMEN2: An Object Oriented Database and Electronic Lab Notebook

    PubMed Central

    Rees, Ian; Langley, Ed; Chiu, Wah; Ludtke, Steven J.

    2013-01-01

    Transmission electron microscopy and associated methods such as single particle analysis, 2-D crystallography, helical reconstruction and tomography, are highly data-intensive experimental sciences, which also have substantial variability in experimental technique. Object-oriented databases present an attractive alternative to traditional relational databases for situations where the experiments themselves are continually evolving. We present EMEN2, an easy-to-use object-oriented database with a highly flexible infrastructure originally targeted for transmission electron microscopy and tomography, which has been extended to be adaptable for use in virtually any experimental science. It is a pure object-oriented database designed for easy adoption in diverse laboratory environments, and does not require professional database administration. It includes a full-featured, dynamic web interface in addition to APIs for programmatic access. EMEN2 installations currently support roughly 800 scientists worldwide with over 1/2 million experimental records and over 20 TB of experimental data. The software is freely available with complete source. PMID:23360752

  6. Human grasping database for activities of daily living with depth, color and kinematic data streams.

    PubMed

    Saudabayev, Artur; Rysbek, Zhanibek; Khassenova, Raykhan; Varol, Huseyin Atakan

    2018-05-29

    This paper presents a grasping database collected from multiple human subjects for activities of daily living in unstructured environments. The main strength of this database is the use of three different sensing modalities: color images from a head-mounted action camera, distance data from a depth sensor on the dominant arm, and upper-body kinematic data acquired from an inertial motion capture suit. A total of 3826 grasps were identified in the data collected during 9 hours of experiments. The grasps were grouped according to a hierarchical taxonomy into 35 different grasp types. The database contains information related to each grasp and associated sensor data acquired from the three sensor modalities. We also provide our data annotation software, written in Matlab, as an open-source tool. The size of the database is 172 GB. We believe this database can serve as a stepping stone for developing big data and machine learning techniques for grasping and manipulation, with potential applications in rehabilitation robotics and intelligent automation.

  7. Development, deployment and operations of ATLAS databases

    NASA Astrophysics Data System (ADS)

    Vaniachine, A. V.; Schmitt, J. G. v. d.

    2008-07-01

    In preparation for ATLAS data taking, a coordinated shift from development towards operations has occurred in ATLAS database activities. In addition to development and commissioning activities in databases, ATLAS is active in the development and deployment (in collaboration with the WLCG 3D project) of the tools that allow the worldwide distribution and installation of databases and related datasets, as well as the actual operation of this system on ATLAS multi-grid infrastructure. We describe development and commissioning of major ATLAS database applications for online and offline. We present the first scalability test results and ramp-up schedule over the initial LHC years of operations towards the nominal year of ATLAS running, when the database storage volumes are expected to reach 6.1 TB for the Tag DB and 1.0 TB for the Conditions DB. ATLAS database applications require robust operational infrastructure for data replication between online and offline at Tier-0, and for the distribution of the offline data to Tier-1 and Tier-2 computing centers. We describe ATLAS experience with Oracle Streams and other technologies for coordinated replication of databases in the framework of the WLCG 3D services.

  8. Experiences with the Application of Services Oriented Approaches to the Federation of Heterogeneous Geologic Data Resources

    NASA Astrophysics Data System (ADS)

    Cervato, C.; Fils, D.; Bohling, G.; Diver, P.; Greer, D.; Reed, J.; Tang, X.

    2006-12-01

    The federation of databases is not a new endeavor. Great strides have been made, e.g., in the health and astrophysics communities. Reviews of those successes indicate that they were able to leverage key cross-community core concepts. In its simplest implementation, a federation of databases with identical base schemas that can be extended to address individual efforts is relatively easy to accomplish. Efforts of groups like the Open Geospatial Consortium have shown methods to geospatially relate data between different sources. We present here a summary of CHRONOS's (http://www.chronos.org) experience with highly heterogeneous data. Our experience with the federation of very diverse databases shows that the wide variety of encoding options for items like locality, time scale, taxon ID, and other key parameters makes it difficult to effectively join data across them. However, the response to this is not to develop one large, monolithic database, which would suffer growing pains due to social, national, and operational issues, but rather to systematically develop the architecture that enables cross-resource (database, repository, tool, interface) interaction. CHRONOS has accomplished the major hurdle of federating small IT database efforts with service-oriented and XML-based approaches. The application of easy-to-use procedures that allow groups of all sizes to implement and experiment with searches across various databases and to use externally created tools is vital. We are sharing with the geoinformatics community the difficulties with application frameworks, user authentication, standards compliance, and data storage encountered in setting up web sites and portals for various science initiatives (e.g., ANDRILL, EARTHTIME). 
The ability to incorporate CHRONOS data, services, and tools into the existing framework of a group is crucial to the development of a model that supports and extends the vitality of the small- to medium-sized research effort that is essential for a vibrant scientific community. This presentation will directly address issues of portal development related to JSR-168 and other portal APIs, as well as issues related to both federated and local directory-based authentication. The application of service-oriented architecture in connection with REST-based approaches is vital to facilitate service use by both experienced and less experienced information technology groups. Applying these services with XML-based schemas allows for connection to third-party tools such as GIS-based tools and software designed to perform a specific scientific analysis. The connection of all these capabilities into a combined framework based on the standard XHTML Document Object Model and CSS 2.0 standards used in traditional web development will be demonstrated. CHRONOS also utilizes newer client techniques such as AJAX and cross-domain scripting along with traditional server-side database, application, and web servers. The combination of the various components of this architecture creates an environment based on open and free standards that allows for the discovery, retrieval, and integration of tools and data.

  9. Confronting Future Risks of Global Water Stress and Sustainability: Avoided Changes Versus Adaptive Actions

    NASA Astrophysics Data System (ADS)

    Schlosser, C. A.; Strzepek, K. M.; Gao, X.; Fant, C.; Paltsev, S.; Monier, E.; Sokolov, A. P.; Winchester, N.; Chen, H.; Kicklighter, D. W.; Ejaz, Q.

    2016-12-01

    We examine the fate of global water resources under a range of self-consistent socio-economic projections using the MIT Integrated Global System Model (IGSM) under a range of plausible mitigation and adaptation scenarios of development to the water-energy-land systems and against an assessment of the results from the UN COP-21 meeting. We assess the trends of an index of managed water stress as well as unmet water demands as simulated by the Water Resource System within the IGSM framework (IGSM-WRS). The WRS is forced by the simulations of the global climate response, variations in regional climate pattern changes, as well as the socio-economic drivers from the IGSM scenarios. We focus on the changes in water-stress metrics in the coming decades and going into the latter half of this century brought about by our projected climate and socio-economic changes, as well as the total (additional) populations affected by increased stress. We highlight selected basins to demonstrate sensitivities and interplay between supply and demand, the uncertainties in global climate sensitivity as well as regional climate change, and their implications to assessing and reducing water risks and the populations affected by water scarcity. We also evaluate the impact of explicitly representing irrigated land and water scarcity in an economy-wide model on food prices, bioenergy production and deforestation both with and without a global carbon policy. We highlight the importance of adaptive measures that will be required, worldwide, to meet surface-water shortfalls even under more aggressive and certainly under intermediate climate mitigation pathways - and further analyses is presented in this context quantifying risks averted and their associated costs. In addition, we also demonstrate that the explicit representation of irrigated land within this intergrated modeling frameowork has a small impact on food, bioenergy and deforestation outcomes within the scenarios considered. 
Nevertheless, globally speaking, the scenarios indicate that going into the latter half of the twenty-first century, approximately one-and-a-half billion additional people will experience at least moderately stressed water conditions worldwide, and of those, 1 billion will be living within regions under heavily stressed water conditions.
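
    The abstract's index of managed water stress is not specified in detail; a common proxy is the withdrawal-to-availability ratio, with conventional thresholds of 0.2 and 0.4 for moderate and heavy stress (these thresholds are an assumption here, not taken from the IGSM-WRS study). A minimal sketch:

```python
def stress_category(withdrawal_km3, runoff_km3):
    """Classify basin water stress by the withdrawal-to-availability ratio.

    Thresholds (>0.2 moderate, >0.4 heavy) follow a widely used
    convention and are assumptions, not the IGSM-WRS index definition.
    """
    if runoff_km3 <= 0:
        raise ValueError("runoff must be positive")
    ratio = withdrawal_km3 / runoff_km3
    if ratio > 0.4:
        return "heavily stressed"
    if ratio > 0.2:
        return "moderately stressed"
    return "unstressed"
```

    A basin withdrawing half its renewable runoff would fall into the heavily stressed class under these thresholds.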

  10. Research for and by Practitioners.

    ERIC Educational Resources Information Center

    Templin, Thomas J.; And Others

    1992-01-01

    Seven articles discuss research by and for practitioners. The topics include demystification of research for practitioners, experiences with helping teacher researchers, an application of a collaborative action research model, one health practitioner's experience, creating a dance research database, basic data analysis for nonresearchers, and why…

  11. Evaluation of personal digital assistant drug information databases for the managed care pharmacist.

    PubMed

    Lowry, Colleen M; Kostka-Rokosz, Maria D; McCloskey, William W

    2003-01-01

    Personal digital assistants (PDAs) are becoming a necessity for practicing pharmacists. They offer a time-saving and convenient way to obtain current drug information. Several software companies now offer general drug information databases for use on handheld computers. PDAs priced less than 200 US dollars often have limited memory capacity; therefore, the user must choose from a growing list of general drug information database options in order to maximize utility without exceeding memory capacity. This paper reviews the attributes of available general drug information software databases for the PDA. It provides information on the content, advantages, limitations, pricing, memory requirements, and accessibility of drug information software databases. Ten drug information databases were subjectively analyzed and evaluated based on information from the product's Web site, vendor Web sites, and from our experience. Some of these databases have attractive auxiliary features such as kinetics calculators, disease references, drug-drug and drug-herb interaction tools, and clinical guidelines, which may make them more useful to the PDA user. Not all drug information databases are equal with regard to content, author credentials, frequency of updates, and memory requirements. The user must therefore evaluate databases for completeness, currency, and cost effectiveness before purchase. In addition, consideration should be given to the ease of use and flexibility of individual programs.

  12. Convolutional Neural Network-Based Finger-Vein Recognition Using NIR Image Sensors

    PubMed Central

    Hong, Hyung Gil; Lee, Min Beom; Park, Kang Ryoung

    2017-01-01

    Conventional finger-vein recognition systems perform recognition based on the finger-vein lines extracted from the input images or image enhancement, and texture feature extraction from the finger-vein images. In these cases, however, the inaccurate detection of finger-vein lines lowers the recognition accuracy. In the case of texture feature extraction, the developer must experimentally decide on a form of the optimal filter for extraction considering the characteristics of the image database. To address this problem, this research proposes a finger-vein recognition method that is robust to various database types and environmental changes based on the convolutional neural network (CNN). In the experiments using the two finger-vein databases constructed in this research and the SDUMLA-HMT finger-vein database, which is an open database, the method proposed in this research showed a better performance compared to the conventional methods. PMID:28587269

  13. Convolutional Neural Network-Based Finger-Vein Recognition Using NIR Image Sensors.

    PubMed

    Hong, Hyung Gil; Lee, Min Beom; Park, Kang Ryoung

    2017-06-06

    Conventional finger-vein recognition systems perform recognition based on the finger-vein lines extracted from the input images or image enhancement, and texture feature extraction from the finger-vein images. In these cases, however, the inaccurate detection of finger-vein lines lowers the recognition accuracy. In the case of texture feature extraction, the developer must experimentally decide on a form of the optimal filter for extraction considering the characteristics of the image database. To address this problem, this research proposes a finger-vein recognition method that is robust to various database types and environmental changes based on the convolutional neural network (CNN). In the experiments using the two finger-vein databases constructed in this research and the SDUMLA-HMT finger-vein database, which is an open database, the method proposed in this research showed a better performance compared to the conventional methods.

  14. How can the research potential of the clinical quality databases be maximized? The Danish experience.

    PubMed

    Nørgaard, M; Johnsen, S P

    2016-02-01

    In Denmark, the need for monitoring of clinical quality and patient safety with feedback to the clinical, administrative and political systems has resulted in the establishment of a network of more than 60 publicly financed nationwide clinical quality databases. Although primarily devoted to monitoring and improving quality of care, the potential of these databases as data sources in clinical research is increasingly being recognized. In this review, we describe these databases focusing on their use as data sources for clinical research, including their strengths and weaknesses as well as future concerns and opportunities. The research potential of the clinical quality databases is substantial but has so far only been explored to a limited extent. Efforts related to technical, legal and financial challenges are needed in order to take full advantage of this potential. © 2016 The Association for the Publication of the Journal of Internal Medicine.

  15. An intermediary's perspective of online databases for local governments

    NASA Technical Reports Server (NTRS)

    Jack, R. F.

    1984-01-01

    Numerous public administration studies have indicated that local government agencies for a variety of reasons lack access to comprehensive information resources; furthermore, such entities are often unwilling or unable to share information regarding their own problem-solving innovations. The NASA/University of Kentucky Technology Applications Program devotes a considerable effort to providing scientific and technical information and assistance to local agencies, relying on its access to over 500 distinct online databases offered by 20 hosts. The author presents a subjective assessment, based on his own experiences, of several databases which may prove useful in obtaining information for this particular end-user community.

  16. Feeling the future: A meta-analysis of 90 experiments on the anomalous anticipation of random future events.

    PubMed

    Bem, Daryl; Tressoldi, Patrizio; Rabeyron, Thomas; Duggan, Michael

    2015-01-01

    In 2011, one of the authors (DJB) published a report of nine experiments in the Journal of Personality and Social Psychology purporting to demonstrate that an individual's cognitive and affective responses can be influenced by randomly selected stimulus events that do not occur until after his or her responses have already been made and recorded, a generalized variant of the phenomenon traditionally denoted by the term precognition. To encourage replications, all materials needed to conduct them were made available on request. We here report a meta-analysis of 90 experiments from 33 laboratories in 14 countries which yielded an overall effect greater than 6 sigma, z = 6.40, p = 1.2 × 10^-10, with an effect size (Hedges' g) of 0.09. A Bayesian analysis yielded a Bayes Factor of 5.1 × 10^9, greatly exceeding the criterion value of 100 for "decisive evidence" in support of the experimental hypothesis. When DJB's original experiments are excluded, the combined effect size for replications by independent investigators is 0.06, z = 4.16, p = 1.1 × 10^-5, and the BF value is 3,853, again exceeding the criterion for "decisive evidence." The number of potentially unretrieved experiments required to reduce the overall effect size of the complete database to a trivial value of 0.01 is 544, and seven of eight additional statistical tests support the conclusion that the database is not significantly compromised by either selection bias or by intense "p-hacking", the selective suppression of findings or analyses that failed to yield statistical significance. P-curve analysis, a recently introduced statistical technique, estimates the true effect size of the experiments to be 0.20 for the complete database and 0.24 for the independent replications, virtually identical to the effect size of DJB's original experiments (0.22) and the closely related "presentiment" experiments (0.21).
We discuss the controversial status of precognition and other anomalous effects collectively known as psi.
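
    The combined statistics reported above can be illustrated with two standard meta-analytic computations: Stouffer's method for pooling independent z-scores and an inverse-variance-weighted (fixed-effect) mean effect size. This is a generic sketch of those textbook formulas, not the authors' actual analysis pipeline:

```python
import math

def stouffer_z(zs):
    # Stouffer's method: combine k independent z-scores with
    # equal weights, Z = sum(z_i) / sqrt(k).
    return sum(zs) / math.sqrt(len(zs))

def fixed_effect_mean(effects, variances):
    # Fixed-effect meta-analytic mean: each study's effect size is
    # weighted by the inverse of its sampling variance, w_i = 1/v_i.
    weights = [1.0 / v for v in variances]
    return sum(w * g for w, g in zip(weights, effects)) / sum(weights)
```

    For example, four studies each reporting z = 1.0 combine to an overall Z = 2.0, and two equally precise studies with g = 0.2 and g = 0.4 average to 0.3.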

  17. The immune epitope database: a historical retrospective of the first decade.

    PubMed

    Salimi, Nima; Fleri, Ward; Peters, Bjoern; Sette, Alessandro

    2012-10-01

    As the amount of biomedical information available in the literature continues to increase, databases that aggregate this information continue to grow in importance and scope. The population of databases can occur either through fully automated text mining approaches or through manual curation by human subject experts. We here report our experiences in populating the National Institute of Allergy and Infectious Diseases sponsored Immune Epitope Database and Analysis Resource (IEDB, http://iedb.org), which was created in 2003, and as of 2012 captures the epitope information from approximately 99% of all papers published to date that describe immune epitopes (with the exception of cancer and HIV data). This was achieved using a hybrid model based on automated document categorization and extensive human expert involvement. This task required automated scanning of over 22 million PubMed abstracts followed by classification and curation of over 13 000 references, including over 7000 infectious disease-related manuscripts, over 1000 allergy-related manuscripts, roughly 4000 related to autoimmunity, and 1000 transplant/alloantigen-related manuscripts. The IEDB curation involves an unprecedented level of detail, capturing for each paper the actual experiments performed for each different epitope structure. Key to enabling this process was the extensive use of ontologies to ensure rigorous and consistent data representation as well as interoperability with other bioinformatics resources, including the Protein Data Bank, Chemical Entities of Biological Interest, and the NIAID Bioinformatics Resource Centers. A growing fraction of the IEDB data derives from direct submissions by research groups engaged in epitope discovery, and is being facilitated by the implementation of novel data submission tools. The present explosion of information contained in biological databases demands effective query and display capabilities to optimize the user experience. 
Accordingly, the development of original ways to query the database, on the basis of ontologically driven hierarchical trees, and display of epitope data in aggregate in a biologically intuitive yet rigorous fashion is now at the forefront of the IEDB efforts. We also highlight advances made in the realm of epitope analysis and predictive tools available in the IEDB. © 2012 The Authors. Immunology © 2012 Blackwell Publishing Ltd.

  18. An editor for pathway drawing and data visualization in the Biopathways Workbench.

    PubMed

    Byrnes, Robert W; Cotter, Dawn; Maer, Andreia; Li, Joshua; Nadeau, David; Subramaniam, Shankar

    2009-10-02

    Pathway models serve as the basis for much of systems biology. They are often built using programs designed for the purpose. Constructing new models generally requires simultaneous access to experimental data of diverse types, to databases of well-characterized biological compounds and molecular intermediates, and to reference model pathways. However, few if any software applications provide all such capabilities within a single user interface. The Pathway Editor is a program written in the Java programming language that allows de-novo pathway creation and downloading of LIPID MAPS (Lipid Metabolites and Pathways Strategy) and KEGG lipid metabolic pathways, and of measured time-dependent changes to lipid components of metabolism. Accessed through Java Web Start, the program downloads pathways from the LIPID MAPS Pathway database (Pathway) as well as from the LIPID MAPS web server http://www.lipidmaps.org. Data arises from metabolomic (lipidomic), microarray, and protein array experiments performed by the LIPID MAPS consortium of laboratories and is arranged by experiment. Facility is provided to create, connect, and annotate nodes and processes on a drawing panel with reference to database objects and time course data. Node and interaction layout as well as data display may be configured in pathway diagrams as desired. Users may extend diagrams, and may also read and write data and non-lipidomic KEGG pathways to and from files. Pathway diagrams in XML format, containing database identifiers referencing specific compounds and experiments, can be saved to a local file for subsequent use. The program is built upon a library of classes, referred to as the Biopathways Workbench, that convert between different file formats and database objects. An example of this feature is provided in the form of read/construct/write access to models in SBML (Systems Biology Markup Language) contained in the local file system. 
Inclusion of access to multiple experimental data types and of pathway diagrams within a single interface, automatic updating through connectivity to an online database, and a focus on annotation, including reference to standardized lipid nomenclature as well as common lipid names, supports the view that the Pathway Editor represents a significant, practicable contribution to current pathway modeling tools.

  19. The Global Tracheostomy Collaborative: one institution's experience with a new quality improvement initiative.

    PubMed

    Lavin, Jennifer; Shah, Rahul; Greenlick, Hannah; Gaudreau, Philip; Bedwell, Joshua

    2016-01-01

    Given the low frequency of adverse events after tracheostomy, individual institutions struggle to collect outcome data to generate effective quality improvement protocols. The Global Tracheostomy Collaborative (GTC) is a multi-institutional, multi-disciplinary organization that utilizes a prospective database to collect data on patients undergoing tracheostomy. We describe our institution's preliminary experience with this collaborative. It was hypothesized that entry into the database would be non-burdensome and could be easily and accurately initiated by skilled specialists at the time of tracheostomy placement and completed at time of patient discharge. Demographic, diagnostic, and outcome data on children undergoing tracheostomy at our institution from January 2013 to June 2015 were entered into the GTC database, a database collected and managed by REDCap (Research Electronic Data Capture). All data entry was performed by pediatric otolaryngology fellows and all post-operative updates were completed by a skilled tracheostomy nurse. Tracked outcomes included accidental decannulation, failed decannulation, tracheostomy tube obstruction, bleeding/tracheoinnominate fistula, and tracheocutaneous fistula. Data from 79 patients undergoing tracheostomy at our institution were recorded. Database entry was straightforward and entry of patient demographic information, medical comorbidities, surgical indications, and date of tracheostomy placement was completed in less than 5min per patient. The most common indication for surgery was facilitation of ventilation in 65 patients (82.3%). Average time from admission to tracheostomy was 62.6 days (range 0-246). Stomal breakdown was seen in 1 patient. A total of 72 patients were tracked to hospital discharge with 53 patients surviving (88.3%). No mortalities were tracheostomy-related. 
The Global Tracheostomy Collaborative is a multi-institutional, multi-disciplinary collaborative that collects data on patients undergoing tracheostomy. Our experience provides proof of concept that demographics and outcome data can be entered into the GTC database in a manner that is both accurate and not burdensome to those participating in data entry. In our tertiary care, pediatric academic medical center, tracheostomy continues to be a safe procedure with no major tracheostomy-related morbidities occurring in this patient population. Involvement with the GTC has shown opportunities for improvement in communication and coordination with other tracheostomy-related disciplines. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  20. The Deep Impact Network Experiment Operations Center Monitor and Control System

    NASA Technical Reports Server (NTRS)

    Wang, Shin-Ywan (Cindy); Torgerson, J. Leigh; Schoolcraft, Joshua; Brenman, Yan

    2009-01-01

    The Interplanetary Overlay Network (ION) software at JPL is an implementation of Delay/Disruption Tolerant Networking (DTN) which has been proposed as an interplanetary protocol to support space communication. The JPL Deep Impact Network (DINET) is a technology development experiment intended to increase the technical readiness of the JPL implemented ION suite. The DINET Experiment Operations Center (EOC) developed by JPL's Protocol Technology Lab (PTL) was critical in accomplishing the experiment. EOC, containing all end nodes of simulated spaces and one administrative node, exercised publish and subscribe functions for payload data among all end nodes to verify the effectiveness of data exchange over ION protocol stacks. A Monitor and Control System was created and installed on the administrative node as a multi-tiered internet-based Web application to support the Deep Impact Network Experiment by allowing monitoring and analysis of the data delivery and statistics from ION. This Monitor and Control System includes the capability of receiving protocol status messages, classifying and storing status messages into a database from the ION simulation network, and providing web interfaces for viewing the live results in addition to interactive database queries.
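
    A minimal sketch of the receive-classify-store step described above, using SQLite for the status database. The message fields and two-way classification are hypothetical stand-ins; ION's real status-message format and taxonomy differ:

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE status(node TEXT, kind TEXT, payload TEXT)")

def classify(msg):
    # Hypothetical two-way classification; ION's actual status-message
    # taxonomy is richer than this.
    return "alarm" if "error" in msg.get("event", "") else "telemetry"

# Simulated status messages from two DTN nodes (illustrative only).
messages = [
    {"node": "n1", "event": "bundle_forwarded", "bytes": 512},
    {"node": "n2", "event": "link_error"},
]
for m in messages:
    con.execute("INSERT INTO status VALUES (?, ?, ?)",
                (m["node"], classify(m), json.dumps(m)))

# A web interface would run interactive queries like this one.
alarms = con.execute("SELECT node FROM status WHERE kind='alarm'").fetchall()
```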

  1. Peptide Mass Fingerprinting of Egg White Proteins

    ERIC Educational Resources Information Center

    Alty, Lisa T.; LaRiviere, Frederick J.

    2016-01-01

    Use of advanced mass spectrometry techniques in the undergraduate setting has burgeoned in the past decade. However, relatively few undergraduate experiments examine the proteomics tools of protein digestion, peptide accurate mass determination, and database searching, also known as peptide mass fingerprinting. In this experiment, biochemistry…

  2. Nature-based experiences and health of cancer survivors.

    PubMed

    Ray, Heather; Jakubec, Sonya L

    2014-11-01

    Although exposure to, and interaction with, natural environments are recognized as health-promoting, little is understood about the use of nature contact in treatment and rehabilitation for cancer survivors. This narrative review summarizes the literature exploring the influence of nature-based experiences on survivor health. Key databases included CINAHL, EMBASE, Medline, Web of Science, PubMed, PsycArticles, ProQuest, and Cancerlit databases. Sixteen articles met inclusion criteria and were reviewed. Four major categories emerged: 1) Dragon boat racing may enhance breast cancer survivor quality of life, 2) Natural environment may counteract attentional fatigue in newly diagnosed breast cancer survivors, 3) Adventure programs provide a positive experience for children and adolescent survivors, fostering a sense of belonging and self-esteem, and 4) Therapeutic landscapes may decrease state-anxiety, improving survivor health. This review contributes to a better understanding of the therapeutic effects of nature-based experiences on cancer survivor health, providing a point of entry for future study. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Current experiments in elementary particle physics. Revision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galic, H.; Armstrong, F.E.; von Przewoski, B.

    1994-08-01

    This report contains summaries of 568 current and recent experiments in elementary particle physics. Experiments that finished taking data before 1988 are excluded. Included are experiments at BEPC (Beijing), BNL, CEBAF, CERN, CESR, DESY, FNAL, INS (Tokyo), ITEP (Moscow), IUCF (Bloomington), KEK, LAMPF, Novosibirsk, PNPI (St. Petersburg), PSI, Saclay, Serpukhov, SLAC, and TRIUMF, and also several underground and underwater experiments. Instructions are given for remote searching of the computer database (maintained under the SLAC/SPIRES system) that contains the summaries.

  4. Information access in a dual-task context: testing a model of optimal strategy selection.

    PubMed

    Wickens, C D; Seidler, K S

    1997-09-01

    Pilots were required to access information from a hierarchical aviation database by navigating under single-task conditions (Experiment 1) and when this task was time-shared with an altitude-monitoring task of varying bandwidth and priority (Experiment 2). In dual-task conditions, pilots had 2 viewports available, 1 always used for the information task and the other to be allocated to either task. Dual-task strategy, inferred from the decision of which task to allocate to the 2nd viewport, revealed that allocation was generally biased in favor of the monitoring task and was only partly sensitive to the difficulty of the 2 tasks and their relative priorities. Some dominant sources of navigational difficulties failed to adaptively influence selection strategy. The implications of the results are to provide tools for jumping to the top of the database, to provide 2 viewports into the common database, and to provide training as to the optimum viewport management strategy in a multitask environment.

  5. A Database of Woody Vegetation Responses to Elevated Atmospheric CO2 (NDP-072)

    DOE Data Explorer

    Curtis, Peter S [The Ohio State Univ., Columbus, OH (United States); Cushman, Robert M [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Brenkert, Antoinette L [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    1999-01-01

    To perform a statistically rigorous meta-analysis of research results on the response by woody vegetation to increased atmospheric CO2 levels, a multiparameter database of responses was compiled. Eighty-four independent CO2-enrichment studies, covering 65 species and 35 response parameters, met the necessary criteria for inclusion in the database: reporting mean response, sample size, and variance of the response (either as standard deviation or standard error). Data were retrieved from the published literature and unpublished reports. This numeric data package contains a 29-field data set of CO2-exposure experiment responses by woody plants (as both a flat ASCII file and a spreadsheet file), files listing the references to the CO2-exposure experiments and specific comments relevant to the data in the data set, and this documentation file (which includes SAS and Fortran codes to read the ASCII data file; SAS is a registered trademark of the SAS Institute, Inc., Cary, North Carolina 27511).
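
    The documentation file includes SAS and Fortran codes for reading the ASCII data file; the same flat-file parsing can be sketched in Python with the standard csv module. The field names and values below are hypothetical stand-ins for the actual 29-field layout:

```python
import csv
import io

# Hypothetical subset of fields standing in for the 29-field NDP-072 layout.
sample = io.StringIO(
    "species,parameter,mean_response,sample_size,std_error\n"
    "Pinus taeda,biomass,1.35,6,0.12\n"
    "Quercus alba,leaf_area,0.95,4,0.08\n"
)
records = list(csv.DictReader(sample))

# Responses expressed as elevated/ambient ratios > 1 indicate stimulation.
elevated = [r for r in records if float(r["mean_response"]) > 1.0]
```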

  6. Exploring Human Cognition Using Large Image Databases.

    PubMed

    Griffiths, Thomas L; Abbott, Joshua T; Hsu, Anne S

    2016-07-01

    Most cognitive psychology experiments evaluate models of human cognition using a relatively small, well-controlled set of stimuli. This approach stands in contrast to current work in neuroscience, perception, and computer vision, which have begun to focus on using large databases of natural images. We argue that natural images provide a powerful tool for characterizing the statistical environment in which people operate, for better evaluating psychological theories, and for bringing the insights of cognitive science closer to real applications. We discuss how some of the challenges of using natural images as stimuli in experiments can be addressed through increased sample sizes, using representations from computer vision, and developing new experimental methods. Finally, we illustrate these points by summarizing recent work using large image databases to explore questions about human cognition in four different domains: modeling subjective randomness, defining a quantitative measure of representativeness, identifying prior knowledge used in word learning, and determining the structure of natural categories. Copyright © 2016 Cognitive Science Society, Inc.

  7. Information access in a dual-task context: testing a model of optimal strategy selection

    NASA Technical Reports Server (NTRS)

    Wickens, C. D.; Seidler, K. S.

    1997-01-01

    Pilots were required to access information from a hierarchical aviation database by navigating under single-task conditions (Experiment 1) and when this task was time-shared with an altitude-monitoring task of varying bandwidth and priority (Experiment 2). In dual-task conditions, pilots had 2 viewports available, 1 always used for the information task and the other to be allocated to either task. Dual-task strategy, inferred from the decision of which task to allocate to the 2nd viewport, revealed that allocation was generally biased in favor of the monitoring task and was only partly sensitive to the difficulty of the 2 tasks and their relative priorities. Some dominant sources of navigational difficulties failed to adaptively influence selection strategy. The implications of the results are to provide tools for jumping to the top of the database, to provide 2 viewports into the common database, and to provide training as to the optimum viewport management strategy in a multitask environment.

  8. Initiation of a Database of CEUS Ground Motions for NGA East

    NASA Astrophysics Data System (ADS)

    Cramer, C. H.

    2007-12-01

    The Nuclear Regulatory Commission has funded the first stage of development of a database of central and eastern US (CEUS) broadband and accelerograph records, along the lines of the existing Next Generation Attenuation (NGA) database for active tectonic areas. This database will form the foundation of an NGA East project for the development of CEUS ground-motion prediction equations that include the effects of soils. This initial effort covers the development of a database design and the beginning of data collection to populate the database. It also includes some processing for important source parameters (Brune corner frequency and stress drop) and site parameters (kappa, Vs30). Besides collecting appropriate earthquake recordings and information, existing information about site conditions at recording sites will also be gathered, including geology and geotechnical information. The long-range goal of the database development is to complete the database and make it available in 2010. The database design is centered on CEUS ground motion information needs but is built on the Pacific Earthquake Engineering Research Center's (PEER) NGA experience. Documentation from the PEER NGA website was reviewed and relevant fields incorporated into the CEUS database design. CEUS database tables include ones for earthquake, station, component, record, and references. As was done for NGA, a CEUS ground-motion flat file of key information will be extracted from the CEUS database for use in attenuation relation development. A short report on the CEUS database and several initial design-definition files are available at https://umdrive.memphis.edu:443/xythoswfs/webui/_xy-7843974_docstore1. Comments and suggestions on the database design can be sent to the author. More details will be presented in a poster at the meeting.
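
    The table layout described above (earthquake, station, component, record, references) and the flat-file extraction can be sketched with SQLite; the columns and values here are hypothetical simplifications, not the actual CEUS schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Hypothetical minimal versions of three of the tables named in the design.
con.executescript("""
CREATE TABLE earthquake(eq_id INTEGER PRIMARY KEY, name TEXT, magnitude REAL);
CREATE TABLE station(sta_id INTEGER PRIMARY KEY, code TEXT, vs30 REAL);
CREATE TABLE record(rec_id INTEGER PRIMARY KEY,
                    eq_id REFERENCES earthquake,
                    sta_id REFERENCES station,
                    pga_g REAL);
""")
con.execute("INSERT INTO earthquake VALUES (1, 'EQ1', 4.5)")
con.execute("INSERT INTO station VALUES (1, 'STA1', 760.0)")
con.execute("INSERT INTO record VALUES (1, 1, 1, 0.05)")

# The "flat file" is one denormalized row per ground-motion record,
# joining source and site parameters to each recording.
flat = con.execute("""
    SELECT e.name, e.magnitude, s.code, s.vs30, r.pga_g
    FROM record r
    JOIN earthquake e USING (eq_id)
    JOIN station s USING (sta_id)
""").fetchall()
```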

  9. [Construction of transgenic tobacco expressing popW and analysis of its biological phenotype].

    PubMed

    Wang, Cui; Liu, Hongxia; Cao, Jing; Wang, Chao; Guo, Jianhua

    2014-04-01

    In a previous study, we cloned popW from Ralstonia solanacearum strain ZJ3721, encoding PopW, a new harpin protein. The prokaryotically expressed PopW can induce resistance to Tobacco mosaic virus (TMV), enhance growth and improve quality of tobacco when sprayed onto tobacco leaves. Here, we constructed an expression vector pB-popW by cloning popW into the binary vector pBI121 and transformed it into Agrobacterium tumefaciens strain EHA105 via the freeze-thaw method. Tobacco (Nicotiana tobacum cv. Xanthi nc.) transformation was conducted by infection of tobacco leaf discs with recombinant A. tumefaciens. After screening on MS medium containing kanamycin and PCR and RT-PCR analysis, 21 T3 lines were identified as positive transgenics. Genomic integration and expression of the transferred gene were determined by PCR and RT-PCR, and GUS staining analysis indicated that the protein expressed in transgenic tobacco was bioactive and exhibited different expression levels among lines. Disease bioassays showed that the transgenic tobacco had enhanced resistance to TMV, with biocontrol efficiency up to 54.25%. Transgenic tobacco also exhibited enhanced plant growth: the root length of 15 d old seedlings was 1.7 times longer than that of wild type tobacco, and 60 d after transplanting to pots, the height, fresh weight and dry weight of transgenic tobacco were 1.4, 1.7 and 1.8 times larger than those of wild type tobacco, respectively.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoak, T.E.; Klawitter, A.L.

    Fractured production trends in Piceance Basin Cretaceous-age Mesaverde Group gas reservoirs are controlled by subsurface structures. Because many of the subsurface structures are controlled by basement fault trends, a new interpretation of basement structure was performed using an integrated interpretation of Landsat Thematic Mapper (TM), side-looking airborne radar (SLAR), high altitude, false color aerial photography, gas and water production data, high-resolution aeromagnetic data, subsurface geologic information, and surficial fracture maps. This new interpretation demonstrates the importance of basement structures on the nucleation and development of overlying structures and associated natural fractures in the hydrocarbon-bearing section. Grand Valley, Parachute, Rulison, Plateau, Shire Gulch, White River Dome, Divide Creek and Wolf Creek fields all produce gas from fractured tight gas sand and coal reservoirs within the Mesaverde Group. Tectonic fracturing involving basement structures is responsible for development of permeability allowing economic production from the reservoirs. In this context, the significance of detecting natural fractures using the integrated fracture detection technique is critical to developing tight gas resources. Integration of data from widely-available, relatively inexpensive sources such as high-resolution aeromagnetics, remote sensing imagery analysis and regional geologic syntheses provides diagnostic data sets to incorporate into an overall methodology for targeting fractured reservoirs. The ultimate application of this methodology is the development and calibration of a potent exploration tool to predict subsurface fractured reservoirs and target areas for exploration drilling, and infill and step-out development programs.

  11. Manganese porphyrin decorated on DNA networks as quencher and mimicking enzyme for construction of ultrasensitive photoelectrochemistry aptasensor.

    PubMed

    Huang, Liaojing; Zhang, Li; Yang, Liu; Yuan, Ruo; Yuan, Yali

    2018-05-01

    In this work, the manganese porphyrin (MnPP) decorated on DNA networks could serve as quencher and mimicking enzyme to efficiently reduce the photocurrent of the photoactive material 3,4,9,10-perylene tetracarboxylic acid (PTCA), which was elaborately used to construct a novel label-free aptasensor for ultrasensitive detection of thrombin (TB) in a signal-off manner. The Au-doped PTCA (PTCA-PEI-Au), with outstanding membrane-forming and photoelectric properties, was modified on the electrode to acquire a strong initial photoelectrochemical (PEC) signal. Afterward, target-binding aptamer I (TBAI) was modified on the electrode to specifically recognize target TB, which could further combine with TBAII and single-stranded DNA P1-modified platinum nanoparticles (TBAII-PtNPs-P1) for immobilizing DNA networks with abundant MnPP. Ingeniously, the MnPP could not only directly quench the photocurrent of PTCA, but also act as a horseradish peroxidase (HRP)-mimicking enzyme to remarkably stimulate the deposition of benzo-4-chlorohexadienone (4-CD) on the electrode, further decreasing the photocurrent of PTCA and thereby yielding a definitively low photocurrent for detection of TB. As a result, the proposed PEC aptasensor showed excellent sensitivity with a low detection limit down to 3 fM, opening a new avenue for integrating two functions in one substance for ultrasensitive biological monitoring. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. The last deglacial retreat history of the East Antarctic Ice Sheet recorded in sediments from off the Wilkes Land Coast

    NASA Astrophysics Data System (ADS)

    Yokoyama, Y.; Yamane, M.; Miyairi, Y.; Suga, H.; Dunbar, R. B.; Ohkouchi, N.

    2017-12-01

    The timing of past ice sheet retreat from the Antarctic continent has been debated with regard to global sea-level changes since the Last Glacial Maximum (LGM), centered at around 20 ka. Exposure dating of glacial deposits using cosmogenic radionuclides (CRN) has been widely used to reconstruct the last deglacial history, though it cannot be applied where no ice-free coasts exist. One such location is Wilkes Land, where the East Antarctic Ice Sheet (EAIS) sits directly on the seafloor. Sediment cores were successfully retrieved off the Wilkes Land coast during Integrated Ocean Drilling Program (IODP) Expedition 318 (Escutia et al., 2011). A major obstacle to obtaining reliable chronologies for marine cores around Antarctica is the scarcity of carbonate materials such as foraminifera. Thus compound-specific radiocarbon analysis (CSRA) has been used, and we applied CSRA to the sediments obtained off the Wilkes Land coast. The CSRA targeted C16 and C16:1 fatty acids because of their high degradation rates; hence low concentrations of these compounds are expected. We found that major sedimentation has occurred since the beginning of the Holocene. The result is then compared with previously reported land-based CRN dates (e.g., Mackintosh et al., 2013; Yamane et al., 2011) to discuss the timing of EAIS retreat.

  13. Analysis of adverse events with Essure hysteroscopic sterilization reported to the Manufacturer and User Facility Device Experience database.

    PubMed

    Al-Safi, Zain A; Shavell, Valerie I; Hobson, Deslyn T G; Berman, Jay M; Diamond, Michael P

    2013-01-01

    The Manufacturer and User Facility Device Experience database may be useful for clinicians using a Food and Drug Administration-approved medical device to identify the occurrence of adverse events and complications. We sought to analyze and investigate reports associated with the Essure hysteroscopic sterilization system (Conceptus Inc., Mountain View, CA) using this database. Retrospective review of the Manufacturer and User Facility Device Experience database for events related to Essure hysteroscopic sterilization from November 2002 to February 2012 (Canadian Task Force Classification III). Online retrospective review. Online reports of patients who underwent Essure tubal sterilization. Essure tubal sterilization. Four hundred fifty-seven adverse events were reported in the study period. Pain was the most frequently reported event (217 events [47.5%]) followed by delivery catheter malfunction (121 events [26.4%]). Poststerilization pregnancy was reported in 61 events (13.3%), of which 29 were ectopic pregnancies. Other reported events included perforation (90 events [19.7%]), abnormal bleeding (44 events [9.6%]), and microinsert malposition (33 events [7.2%]). The evaluation and management of these events resulted in an additional surgical procedure in 270 cases (59.1%), of which 44 were hysterectomies. Sixty-one unintended poststerilization pregnancies were reported in the study period, of which 29 (47.5%) were ectopic gestations. Thus, ectopic pregnancy must be considered if a woman becomes pregnant after Essure hysteroscopic sterilization. Additionally, 44 women underwent hysterectomy after an adverse event reported to be associated with the use of the device. Copyright © 2013 AAGL. Published by Elsevier Inc. All rights reserved.

  14. Mysql Data-Base Applications for Dst-Like Physics Analysis

    NASA Astrophysics Data System (ADS)

    Tsenov, Roumen

    2004-07-01

    The data and analysis model developed and used in the HARP experiment for studying hadron production at the CERN Proton Synchrotron is discussed. Emphasis is put on the use of database (DB) back-ends for persistently storing and retrieving "alive" C++ objects encapsulating raw and reconstructed data. The concepts of a "Data Summary Tape" (DST) as a logical collection of DB-persistent data of different types, and of an "intermediate DST" (iDST) as a physical "tag" of a DST, are introduced. The iDST level of persistency allows powerful, DST-level analysis to be performed by applications running on an isolated machine (even a laptop) with no connection to the experiment's main data storage. Implementation of these concepts is considered.
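
    The DST/iDST idea above — persistent objects in a DB back-end, with a DST as a logical collection of keys — can be sketched in miniature. This is an illustrative Python sketch, not HARP's actual C++/DB implementation; the `RawEvent`, `RecoTrack`, and `ObjectStore` names are hypothetical, and sqlite3 plus pickle stand in for the experiment's real persistency layer.

```python
import pickle
import sqlite3

# Hypothetical stand-ins for the experiment's persistent event objects.
class RawEvent:
    def __init__(self, event_id, adc_counts):
        self.event_id = event_id
        self.adc_counts = adc_counts

class RecoTrack:
    def __init__(self, event_id, momentum):
        self.event_id = event_id
        self.momentum = momentum

class ObjectStore:
    """Persist arbitrary objects keyed by (kind, event_id)."""
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS objects "
            "(kind TEXT, event_id INTEGER, blob BLOB, "
            "PRIMARY KEY (kind, event_id))")

    def put(self, kind, obj):
        self.db.execute("INSERT OR REPLACE INTO objects VALUES (?, ?, ?)",
                        (kind, obj.event_id, pickle.dumps(obj)))

    def get(self, kind, event_id):
        row = self.db.execute(
            "SELECT blob FROM objects WHERE kind=? AND event_id=?",
            (kind, event_id)).fetchone()
        return pickle.loads(row[0]) if row else None

# A "DST" is then just a logical list of (kind, event_id) keys, and an
# "iDST" a small, portable materialization of those keys' objects that
# could travel to a disconnected machine for DST-level analysis.
store = ObjectStore()
store.put("raw", RawEvent(1, [10, 12, 9]))
store.put("reco", RecoTrack(1, 1.25))
dst = [("raw", 1), ("reco", 1)]
idst = [store.get(kind, eid) for kind, eid in dst]
```

    The design point being illustrated is that analysis code only ever touches the logical key list, so the same DST can be rematerialized from any store that honors the key scheme.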

  15. KID Project: an internet-based digital video atlas of capsule endoscopy for research purposes.

    PubMed

    Koulaouzidis, Anastasios; Iakovidis, Dimitris K; Yung, Diana E; Rondonotti, Emanuele; Kopylov, Uri; Plevris, John N; Toth, Ervin; Eliakim, Abraham; Wurm Johansson, Gabrielle; Marlicz, Wojciech; Mavrogenis, Georgios; Nemeth, Artur; Thorlacius, Henrik; Tontini, Gian Eugenio

    2017-06-01

    Capsule endoscopy (CE) has revolutionized small-bowel (SB) investigation. Computational methods can enhance diagnostic yield (DY); however, incorporating machine learning algorithms (MLAs) into CE reading is difficult because large amounts of image annotations are required for training. Current databases lack graphic annotations of pathologies and cannot be used. A novel database, KID, aims to provide a reference for research and development of medical decision support systems (MDSS) for CE. Open-source software was used for the KID database. Clinicians contribute anonymized, annotated CE images and videos. Graphic annotations are supported by an open-access annotation tool (Ratsnake). We detail an experiment based on the KID database, examining differences in SB lesion measurement between human readers and an MLA. The Jaccard Index (JI) was used to evaluate similarity between annotations by the MLA and human readers. The MLA performed best in measuring lymphangiectasias, with a JI of 81 ± 6 %. The other lesion types were: angioectasias (JI 64 ± 11 %), aphthae (JI 64 ± 8 %), chylous cysts (JI 70 ± 14 %), polypoid lesions (JI 75 ± 21 %), and ulcers (JI 56 ± 9 %). An MLA can perform as well as human readers in the measurement of SB angioectasias in white light (WL). Automated lesion measurement is therefore feasible. KID is currently the only open-source CE database developed specifically to aid development of MDSS. Our experiment demonstrates this potential.
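
    The Jaccard Index used above to score agreement between the algorithm's and the readers' lesion annotations is simply intersection over union of the two annotated regions. A minimal sketch (the toy pixel coordinates are invented for illustration):

```python
def jaccard_index(a, b):
    """Jaccard similarity between two annotated pixel sets:
    |A intersect B| / |A union B|."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty annotations agree by convention
    return len(a & b) / len(a | b)

# Toy annotations: each lesion pixel as an (x, y) tuple.
reader = {(1, 1), (1, 2), (2, 1), (2, 2)}
algorithm = {(1, 1), (1, 2), (2, 1), (3, 1)}
ji = jaccard_index(reader, algorithm)  # 3 shared / 5 total = 0.6
```

    A JI of 1.0 means the two outlines coincide exactly; the reported 81 ± 6 % for lymphangiectasias corresponds to a large pixel overlap between MLA and human annotations.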

  16. Human Thermal Model Evaluation Using the JSC Human Thermal Database

    NASA Technical Reports Server (NTRS)

    Bue, Grant; Makinen, Janice; Cognata, Thomas

    2012-01-01

    Human thermal modeling has considerable long-term utility to human space flight. Such models provide a tool to predict crew survivability in support of vehicle design and to evaluate crew response in untested space environments. It is to the benefit of any such model not only to collect relevant experimental data to correlate it against, but also to maintain an experimental standard or benchmark for future development in a readily and rapidly searchable, software-accessible format. The human thermal database project is intended to do just that: to collect relevant data from literature and experimentation and to store the data in a database structure for immediate and future use as a benchmark against which human thermal models can be judged, identifying model strengths and weaknesses, supporting model development and improved correlation, and statistically quantifying a model's predictive quality. The human thermal database developed at the Johnson Space Center (JSC) is intended to evaluate a set of widely used human thermal models. This set includes the Wissler human thermal model, a model that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. These models are statistically compared to the current database, which contains experiments on human subjects, primarily in air, from a literature survey ranging between 1953 and 2004 and from a suited experiment recently performed by the authors, for a quantitative study of the relative strength and predictive quality of the models.
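
    The statistical comparison of a model against benchmark measurements described above typically reduces to aggregate error statistics such as mean bias and RMSE over matched prediction/measurement pairs. A minimal sketch; the core-temperature numbers are invented for illustration and are not from the JSC database.

```python
import math

def bias_and_rmse(predicted, observed):
    """Mean bias and root-mean-square error of model predictions
    against benchmark measurements."""
    residuals = [p - o for p, o in zip(predicted, observed)]
    bias = sum(residuals) / len(residuals)
    rmse = math.sqrt(sum(r * r for r in residuals) / len(residuals))
    return bias, rmse

# Hypothetical core-temperature time series (deg C): model vs. subjects.
model = [36.8, 37.0, 37.3, 37.6]
measured = [36.9, 37.0, 37.2, 37.8]
bias, rmse = bias_and_rmse(model, measured)
```

    Bias exposes systematic over- or under-prediction, while RMSE quantifies overall predictive quality; computing both across every experiment in the database gives the kind of relative-strength ranking the abstract describes.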

  17. Building a Patient-Reported Outcome Metric Database: One Hospital's Experience.

    PubMed

    Rana, Adam J

    2016-06-01

    A number of provisions exist within the Patient Protection and Affordable Care Act that focus on improving the delivery of health care in the United States, including quality of care. From a total joint arthroplasty perspective, the issue of quality increasingly refers to quantifying patient-reported outcome metrics (PROMs). This article describes one hospital's experience in building and maintaining an electronic PROM database for a practice of 6 board-certified orthopedic surgeons. The surgeons advocated to and worked with the hospital to contract with a joint registry database company and hire a research assistant. They implemented a standardized process for all surgical patients to fill out patient-reported outcome questionnaires at designated intervals. To date, the group has collected patient-reported outcome metric data for >4500 cases. The data are frequently used in different venues at the hospital including orthopedic quality metric and research meetings. In addition, the results were used to develop an annual outcome report. The annual report is given to patients and primary care providers, and portions of it are being used in discussions with insurance carriers. Building an electronic database to collect PROMs is a group undertaking and requires a physician champion. A considerable amount of work needs to be done up front to make its introduction a success. Once established, a PROM database can provide a significant amount of information and data that can be effectively used in multiple capacities. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Human Thermal Model Evaluation Using the JSC Human Thermal Database

    NASA Technical Reports Server (NTRS)

    Cognata, T.; Bue, G.; Makinen, J.

    2011-01-01

    The human thermal database developed at the Johnson Space Center (JSC) is used to evaluate a set of widely used human thermal models. This database will facilitate a more accurate evaluation of human thermoregulatory response in a variety of situations, including those that might otherwise prove too dangerous for actual testing--such as extreme hot or cold splashdown conditions. This set includes the Wissler human thermal model, a model that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. These models are statistically compared to the current database, which contains experiments on human subjects, primarily in air, from a literature survey ranging between 1953 and 2004 and from a suited experiment recently performed by the authors, for a quantitative study of the relative strength and predictive quality of the models. Human thermal modeling has considerable long-term utility to human space flight. Such models provide a tool to predict crew survivability in support of vehicle design and to evaluate crew response in untested environments. It is to the benefit of any such model not only to collect relevant experimental data to correlate it against, but also to maintain an experimental standard or benchmark for future development in a readily and rapidly searchable, software-accessible format. The human thermal database project is intended to do just that: to collect relevant data from literature and experimentation and to store the data in a database structure for immediate and future use as a benchmark against which human thermal models can be judged, identifying model strengths and weaknesses, supporting model development and improved correlation, and statistically quantifying a model's predictive quality.

  19. Saudi anti-human cancer plants database (SACPD): A collection of plants with anti-human cancer activities

    PubMed Central

    Al-Zahrani, Ateeq Ahmed

    2018-01-01

    Several anticancer drugs have been developed from natural products such as plants. Successful experiments in inhibiting the growth of human cancer cell lines using Saudi plants were published over the last three decades. To date, there has been no Saudi anticancer plants database serving as a comprehensive source for the interesting data generated from these experiments. Therefore, there was a need to create a database to collect, organize, search and retrieve such data. As a result, the current paper describes the generation of the Saudi anti-human cancer plants database (SACPD). The database contains most of the reported information about the naturally growing Saudi anticancer plants. SACPD comprises the scientific and local names of 91 plant species that grow naturally in Saudi Arabia. These species belong to 38 different taxonomic families. In addition, 18 medicinal plant species representing 16 families, intensively sold in the local markets in Saudi Arabia, were added to the database. The website provides interesting details, including the plant part containing the anticancer bioactive compounds, plant locations and the cancer/cell type against which they exhibit their anticancer activity. Our survey revealed that breast, liver and leukemia were the most studied cancer cell lines in Saudi Arabia, with percentages of 27%, 19% and 15%, respectively. The current SACPD represents a nucleus around which more development efforts can expand to accommodate all future submissions about new Saudi plant species with anticancer activities. SACPD will provide an excellent starting point for researchers and pharmaceutical companies who are interested in developing new anticancer drugs. SACPD is available online at https://teeqrani1.wixsite.com/sapd PMID:29774137

  20. Complications of Electromechanical Morcellation Reported in the Manufacturer and User Facility Device Experience (MAUDE) Database.

    PubMed

    Naumann, R Wendel; Brown, Jubilee

    2015-01-01

    To evaluate adverse events associated with electromechanical morcellation as reported to the Manufacturer and User Facility Device Experience (MAUDE) database. Retrospective analysis of an established database (Canadian Task Force classification III). A search of the MAUDE database for terms associated with commercially available electromechanical morcellation devices was undertaken for events leading to injury or death between 2004 and 2014. Data, including the types of injury, need for conversion to open surgery, type of open surgery, and clinical outcomes, were extracted from the records. Over a 10-year period, 9 events associated with death and 215 events associated with patient injury or significant delay of the surgical procedure were recorded. These involved 137 device failures, 51 organ injuries, and the morcellation of 27 previously undiagnosed malignancies. Of the 9 deaths, 1 was associated with organ injury, and the other 8 were associated with morcellation of cancer. Of the 27 undiagnosed cancers, 5 were reported by the manufacturer, 8 were reported by the patient or family, 9 were reported by medical or news reports, 2 were reported by medical professionals, and 3 were due to litigation. Morcellation of an undiagnosed malignancy was first reported to the database in December 2013. The MAUDE database appears to detect perioperative events, such as device failures and organ injury at the time of surgery, but appears to be poor at detecting late events after surgery, such as the potential spread of cancer. Outcome registries are likely a more efficient means of tracking potential long-term adverse events associated with surgical devices. Copyright © 2015 AAGL. Published by Elsevier Inc. All rights reserved.

  1. Saudi anti-human cancer plants database (SACPD): A collection of plants with anti-human cancer activities.

    PubMed

    Al-Zahrani, Ateeq Ahmed

    2018-01-30

    Several anticancer drugs have been developed from natural products such as plants. Successful experiments in inhibiting the growth of human cancer cell lines using Saudi plants were published over the last three decades. To date, there has been no Saudi anticancer plants database serving as a comprehensive source for the interesting data generated from these experiments. Therefore, there was a need to create a database to collect, organize, search and retrieve such data. As a result, the current paper describes the generation of the Saudi anti-human cancer plants database (SACPD). The database contains most of the reported information about the naturally growing Saudi anticancer plants. SACPD comprises the scientific and local names of 91 plant species that grow naturally in Saudi Arabia. These species belong to 38 different taxonomic families. In addition, 18 medicinal plant species representing 16 families, intensively sold in the local markets in Saudi Arabia, were added to the database. The website provides interesting details, including the plant part containing the anticancer bioactive compounds, plant locations and the cancer/cell type against which they exhibit their anticancer activity. Our survey revealed that breast, liver and leukemia were the most studied cancer cell lines in Saudi Arabia, with percentages of 27%, 19% and 15%, respectively. The current SACPD represents a nucleus around which more development efforts can expand to accommodate all future submissions about new Saudi plant species with anticancer activities. SACPD will provide an excellent starting point for researchers and pharmaceutical companies who are interested in developing new anticancer drugs. SACPD is available online at https://teeqrani1.wixsite.com/sapd.

  2. Current Experiments in Particle Physics. 1996 Edition.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galic, Hrvoje

    2003-06-27

    This report contains summaries of current and recent experiments in Particle Physics. Included are experiments at BEPC (Beijing), BNL, CEBAF, CERN, CESR, DESY, FNAL, Frascati, ITEP (Moscow), JINR (Dubna), KEK, LAMPF, Novosibirsk, PNPI (St. Petersburg), PSI, Saclay, Serpukhov, SLAC, and TRIUMF, and also several proton decay and solar neutrino experiments. Excluded are experiments that finished taking data before 1991. Instructions are given for the World Wide Web (WWW) searching of the computer database (maintained under the SLAC-SPIRES system) that contains the summaries.

  3. Current experiments in elementary particle physics. Revised

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galic, H.; Wohl, C.G.; Armstrong, B.

    This report contains summaries of 584 current and recent experiments in elementary particle physics. Experiments that finished taking data before 1986 are excluded. Included are experiments at Brookhaven, CERN, CESR, DESY, Fermilab, Tokyo Institute of Nuclear Studies, Moscow Institute of Theoretical and Experimental Physics, KEK, LAMPF, Novosibirsk, Paul Scherrer Institut (PSI), Saclay, Serpukhov, SLAC, SSCL, and TRIUMF, and also several underground and underwater experiments. Instructions are given for remote searching of the computer database (maintained under the SLAC/SPIRES system) that contains the summaries.

  4. A Data Warehouse to Support Condition Based Maintenance (CBM)

    DTIC Science & Technology

    2005-05-01

    Application (VBA) code sequence to import the original MAST-generated CSV and then create a single output table in DBASE IV format. The DBASE IV format...database architecture (Oracle, Sybase, MS-SQL, etc). This design includes table definitions, comments, specification of table attributes, primary and foreign...built queries and applications. Needs the application developers to construct data views. No SQL programming experience. b. Power Database User - knows
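
    The CSV-to-single-table import step described in this snippet can be sketched generically. This is not the report's VBA/DBASE IV code; it is a hedged Python stand-in using the stdlib csv and sqlite3 modules, with an invented "MAST-style" sample record layout.

```python
import csv
import io
import sqlite3

# Hypothetical sample of a MAST-style CSV export.
raw_csv = io.StringIO(
    "serial,component,hours\n"
    "A100,gearbox,1250\n"
    "A101,gearbox,980\n")

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE maintenance (serial TEXT, component TEXT, hours INTEGER)")

# DictReader yields one dict per CSV row; named placeholders map
# each column header straight onto a table column.
reader = csv.DictReader(raw_csv)
db.executemany(
    "INSERT INTO maintenance VALUES (:serial, :component, :hours)",
    reader)

total = db.execute("SELECT SUM(hours) FROM maintenance").fetchone()[0]
```

    Loading everything into one flat table first, as the report does, keeps the import logic trivial; normalization into the warehouse schema can then be done with plain SQL.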

  5. Thermodynamic database for the Co-Pr system.

    PubMed

    Zhou, S H; Kramer, M J; Meng, F Q; McCallum, R W; Ott, R T

    2016-03-01

    In this article, we describe data on (1) compositions for both as-cast and heat-treated specimens, summarized in Table 1; (2) the determined enthalpy of mixing of the liquid phase, listed in Table 2; and (3) a thermodynamic database of the Co-Pr system in TDB format for the research article entitled "Chemical partitioning for the Co-Pr system: First-principles, experiments and energetic calculations to investigate the hard magnetic phase W."

  6. Information Literacy for Users at the National Medical Library of Cuba: Cochrane Library Course for the Search of Best Evidence for Clinical Decisions

    ERIC Educational Resources Information Center

    Santana Arroyo, Sonia; del Carmen Gonzalez Rivero, Maria

    2012-01-01

    The National Medical Library of Cuba is currently developing an information literacy program to train users in the use of biomedical databases. This paper describes the experience with the course "Cochrane Library: Evidence-Based Medicine," which aims to teach users how to make the best use of this database, as well as the evidence-based…

  7. Thermodynamic database for the Co-Pr system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, S. H.; Kramer, M. J.; Meng, F. Q.

    2016-01-21

    In this article, we describe data on (1) compositions for both as-cast and heat-treated specimens, summarized in Table 1; (2) the determined enthalpy of mixing of the liquid phase, listed in Table 2; and (3) a thermodynamic database of the Co-Pr system in TDB format for the research article entitled "Chemical partitioning for the Co-Pr system: First-principles, experiments and energetic calculations to investigate the hard magnetic phase W."

  8. Thermodynamic database for the Co-Pr system

    DOE PAGES

    Zhou, S. H.; Kramer, M. J.; Meng, F. Q.; ...

    2016-03-01

    In this article, we describe data on (1) compositions for both as-cast and heat-treated specimens, summarized in Table 1; (2) the determined enthalpy of mixing of the liquid phase, listed in Table 2; and (3) a thermodynamic database of the Co-Pr system in TDB format for the research article entitled ''Chemical partitioning for the Co-Pr system: First-principles, experiments and energetic calculations to investigate the hard magnetic phase W.''

  9. Quality Attribute-Guided Evaluation of NoSQL Databases: An Experience Report

    DTIC Science & Technology

    2014-10-18

    detailed technical evaluations of NoSQL databases specifically, and big data systems in general, that have become apparent during our study... big data, software systems [Agarwal 2011]. Internet-born organizations such as Google and Amazon are at the cutting edge of this revolution...Chang 2008], along with those of numerous other big data innovators, have made a variety of open source and commercial data management technologies

  10. Semantically enabled and statistically supported biological hypothesis testing with tissue microarray databases

    PubMed Central

    2011-01-01

    Background Although many biological databases are applying semantic web technologies, meaningful biological hypothesis testing cannot be easily achieved. Database-driven high-throughput genomic hypothesis testing requires both the capability to obtain semantically relevant experimental data and the capability to perform relevant statistical tests on the retrieved data. Tissue Microarray (TMA) data are semantically rich and contain many biologically important hypotheses awaiting high-throughput conclusions. Methods An application-specific ontology was developed for managing TMA and DNA microarray databases with semantic web technologies. Data were represented as Resource Description Framework (RDF) according to the framework of the ontology. Applications for hypothesis testing (Xperanto-RDF) for TMA data were designed and implemented by (1) formulating the syntactic and semantic structures of the hypotheses derived from TMA experiments, (2) formulating SPARQL queries to reflect the semantic structures of the hypotheses, and (3) performing statistical tests on the result sets returned by the SPARQL queries. Results When a user designs a hypothesis in Xperanto-RDF and submits it, the hypothesis can be tested against TMA experimental data stored in Xperanto-RDF. When we evaluated four previously validated hypotheses as an illustration, all the hypotheses were supported by Xperanto-RDF. Conclusions We demonstrated the utility of high-throughput biological hypothesis testing. We believe that preliminary investigation before performing highly controlled experiments can benefit from this approach. PMID:21342584
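
    The pipeline above — retrieve RDF triples matching a semantic pattern, then run a statistical test on the result set — can be illustrated without a triple store. This is a hedged pure-Python sketch, not Xperanto-RDF itself: triples are plain tuples, the `match` function plays the role of a SPARQL basic graph pattern, and all core/stain/outcome names are invented.

```python
# Triples as (subject, predicate, object); None acts as a wildcard,
# in the spirit of a SPARQL basic graph pattern.
triples = {
    ("core1", "hasStain", "positive"), ("core1", "hasOutcome", "relapse"),
    ("core2", "hasStain", "positive"), ("core2", "hasOutcome", "remission"),
    ("core3", "hasStain", "negative"), ("core3", "hasOutcome", "remission"),
}

def match(pattern, store):
    """Return all triples matching a (s, p, o) pattern with None wildcards."""
    s, p, o = pattern
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Which cores stain positive?" -- analogous to SELECT ?core WHERE { ... }
positive = {t[0] for t in match((None, "hasStain", "positive"), triples)}
relapsed = {t[0] for t in match((None, "hasOutcome", "relapse"), triples)}
# The overlap feeds the contingency table for a statistical test
# (e.g. Fisher's exact test) of stain-vs-outcome association.
overlap = positive & relapsed
```

    In the real system the pattern matching is done by a SPARQL engine over RDF, but the shape of the computation — semantic retrieval first, statistics second — is the same.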

  11. Making proteomics data accessible and reusable: current state of proteomics databases and repositories.

    PubMed

    Perez-Riverol, Yasset; Alpi, Emanuele; Wang, Rui; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2015-03-01

    Compared to other data-intensive disciplines such as genomics, public deposition and storage of MS-based proteomics data are still less developed due to, among other reasons, the inherent complexity of the data and the variety of data types and experimental workflows. In order to address this need, several public repositories for MS proteomics experiments have been developed, each with different purposes in mind. The most established resources are the Global Proteome Machine Database (GPMDB), PeptideAtlas, and the PRIDE database. Additionally, there are other useful (in many cases recently developed) resources such as ProteomicsDB, Mass Spectrometry Interactive Virtual Environment (MassIVE), Chorus, MaxQB, PeptideAtlas SRM Experiment Library (PASSEL), Model Organism Protein Expression Database (MOPED), and the Human Proteinpedia. In addition, the ProteomeXchange consortium has recently been developed to enable better integration of public repositories and the coordinated sharing of proteomics information, maximizing its benefit to the scientific community. Here, we will review each of the major proteomics resources independently, along with some tools that enable the integration, mining and reuse of the data. We will also discuss some of the major challenges and current pitfalls in the integration and sharing of the data. © 2014 The Authors. PROTEOMICS published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Home literacy experiences and early childhood disability: a descriptive study using the National Household Education Surveys (NHES) program database.

    PubMed

    Breit-Smith, Allison; Cabell, Sonia Q; Justice, Laura M

    2010-01-01

    The present article illustrates how the National Household Education Surveys (NHES; U.S. Department of Education, 2009) database might be used to address questions of relevance to researchers who are concerned with literacy development among young children. Following a general description of the NHES database, a study is provided that examines the extent to which parent-reported home literacy activities and child emergent literacy skills differ for children with (a) developmental disabilities versus those who are developing typically, (b) single disability versus multiple disabilities, and (c) speech-language disability only versus other types of disabilities. Four hundred and seventy-eight preschool-age children with disabilities and a typically developing matched sample (based on parent report) were identified in the 2005 administration of the Early Childhood Program Participation (ECPP) Survey in the NHES database. Parent responses to survey items were then compared between groups. After controlling for age and socioeconomic status, no significant differences were found in the frequency of home literacy activities for children with and without disabilities. Parents reported higher levels of emergent literacy skills for typically developing children relative to children with disabilities. These findings suggest the importance of considering the home literacy experiences and emergent literacy skills of young children with disabilities when making clinical recommendations.

  13. Immediate Dissemination of Student Discoveries to a Model Organism Database Enhances Classroom-Based Research Experiences

    PubMed Central

    Wiley, Emily A.; Stover, Nicholas A.

    2014-01-01

    Use of inquiry-based research modules in the classroom has soared over recent years, largely in response to national calls for teaching that provides experience with scientific processes and methodologies. To increase the visibility of in-class studies among interested researchers and to strengthen their impact on student learning, we have extended the typical model of inquiry-based labs to include a means for targeted dissemination of student-generated discoveries. This initiative required: 1) creating a set of research-based lab activities with the potential to yield results that a particular scientific community would find useful and 2) developing a means for immediate sharing of student-generated results. Working toward these goals, we designed guides for course-based research aimed to fulfill the need for functional annotation of the Tetrahymena thermophila genome, and developed an interactive Web database that links directly to the official Tetrahymena Genome Database for immediate, targeted dissemination of student discoveries. This combination of research via the course modules and the opportunity for students to immediately “publish” their novel results on a Web database actively used by outside scientists culminated in a motivational tool that enhanced students’ efforts to engage the scientific process and pursue additional research opportunities beyond the course. PMID:24591511

  14. Immediate dissemination of student discoveries to a model organism database enhances classroom-based research experiences.

    PubMed

    Wiley, Emily A; Stover, Nicholas A

    2014-01-01

    Use of inquiry-based research modules in the classroom has soared over recent years, largely in response to national calls for teaching that provides experience with scientific processes and methodologies. To increase the visibility of in-class studies among interested researchers and to strengthen their impact on student learning, we have extended the typical model of inquiry-based labs to include a means for targeted dissemination of student-generated discoveries. This initiative required: 1) creating a set of research-based lab activities with the potential to yield results that a particular scientific community would find useful and 2) developing a means for immediate sharing of student-generated results. Working toward these goals, we designed guides for course-based research aimed to fulfill the need for functional annotation of the Tetrahymena thermophila genome, and developed an interactive Web database that links directly to the official Tetrahymena Genome Database for immediate, targeted dissemination of student discoveries. This combination of research via the course modules and the opportunity for students to immediately "publish" their novel results on a Web database actively used by outside scientists culminated in a motivational tool that enhanced students' efforts to engage the scientific process and pursue additional research opportunities beyond the course.

  15. Glance Information System for ATLAS Management

    NASA Astrophysics Data System (ADS)

    Grael, F. F.; Maidantchik, C.; Évora, L. H. R. A.; Karam, K.; Moraes, L. O. F.; Cirilli, M.; Nessi, M.; Pommès, K.; ATLAS Collaboration

    2011-12-01

    The ATLAS Experiment is an international collaboration in which more than 37 countries, 172 institutes and laboratories, 2900 physicists, engineers, and computer scientists, plus 700 students participate. The management of this teamwork involves several aspects such as institute contributions, employment records, members' appointments, authors' lists, preparation and publication of papers, and speaker nominations. Previously, most of the information was accessible only to a limited group, and developers had to face problems such as different terminology, diverse data modeling, heterogeneous databases and differing user needs. Moreover, the systems were not designed to handle new requirements. Maintenance must remain an easy task given the experiment's long lifetime and the turnover of personnel. The Glance system, a generic mechanism for accessing any database, acts as an intermediate layer isolating the user from the particularities of each database. It retrieves, inserts and updates the database independently of its technology and modeling. Relying on Glance, a group of systems was built to support ATLAS management and operation: ATLAS Membership, ATLAS Appointments, ATLAS Speakers, ATLAS Analysis Follow-Up, ATLAS Conference Notes, ATLAS Thesis, ATLAS Traceability and DSS Alarms Viewer. This paper presents an overview of the Glance information framework and describes the privilege mechanism developed to grant different levels of access for each member and system.

  16. Empirical cost models for estimating power and energy consumption in database servers

    NASA Astrophysics Data System (ADS)

    Valdivia Garcia, Harold Dwight

    The explosive growth in the size of data centers, coupled with the widespread use of virtualization technology, has made power and energy consumption major concerns for data center administrators. Provisioning decisions must take into consideration not only target application performance but also the power demands and total energy consumption incurred by the hardware and software to be deployed at the data center. Failure to do so will result in damaged equipment, power outages, and inefficient operation. Since database servers comprise one of the most popular and important server applications deployed in such facilities, it becomes necessary to have accurate cost models that can predict the power and energy demands that each database workload will impose on the system. In this work we present an empirical methodology to estimate the power and energy cost of database operations. Our methodology uses multiple linear regression to derive accurate cost models that depend only on readily available statistics such as selectivity factors, tuple size, number of columns and relation cardinality. Moreover, our method does not need measurements of individual hardware components, but only the total power and energy consumption measured at the server. We have implemented our methodology and run experiments with several server configurations. Our experiments indicate that we can predict power and energy more accurately than alternative methods found in the literature.
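
    The abstract describes the regression-based cost models only at a high level. As a rough illustration of the general technique (not the thesis's actual model or measurements), a multiple linear regression on the kinds of statistics it names can be fit with ordinary least squares; every number and variable name below is invented for the sketch.

```python
import numpy as np

# Hypothetical training data: one row per observed query execution.
# Columns: selectivity factor, tuple size (bytes), number of columns,
# relation cardinality. All values are made up for illustration.
X = np.array([
    [0.10,  64,  4, 1e5],
    [0.25, 128,  8, 5e5],
    [0.50, 256, 12, 1e6],
    [0.75, 128,  8, 2e6],
    [0.90, 512, 16, 5e6],
])
# Hypothetical measured total server power draw (watts) for each run.
y = np.array([95.0, 110.0, 132.0, 148.0, 170.0])

# Fit: power ~ b0 + b1*selectivity + b2*tuple_size + b3*n_cols + b4*cardinality
A = np.column_stack([np.ones(len(X)), X])    # prepend an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares fit

predicted = A @ coef
print("coefficients:", coef)
print("max abs residual:", np.abs(predicted - y).max())
```

    A model of this shape can then estimate the power cost of a planned workload from catalog statistics alone, without per-component hardware instrumentation.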

  17. An emerging cyberinfrastructure for biodefense pathogen and pathogen-host data.

    PubMed

    Zhang, C; Crasta, O; Cammer, S; Will, R; Kenyon, R; Sullivan, D; Yu, Q; Sun, W; Jha, R; Liu, D; Xue, T; Zhang, Y; Moore, M; McGarvey, P; Huang, H; Chen, Y; Zhang, J; Mazumder, R; Wu, C; Sobral, B

    2008-01-01

    The NIAID-funded Biodefense Proteomics Resource Center (RC) provides storage, dissemination, visualization and analysis capabilities for the experimental data deposited by seven Proteomics Research Centers (PRCs). The data and their publication support researchers working to discover candidates for the next generation of vaccines, therapeutics and diagnostics against NIAID's Category A, B and C priority pathogens. The data include transcriptional profiles, protein profiles, protein structural data and host-pathogen protein interactions, in the context of the pathogen life cycle in vivo and in vitro. The database has stored and supported host or pathogen data derived from Bacillus, Brucella, Cryptosporidium, Salmonella, SARS, Toxoplasma, Vibrio and Yersinia, human tissue libraries, and mouse macrophages. These publicly available data cover diverse data types such as mass spectrometry, yeast two-hybrid (Y2H), gene expression profiles, X-ray and NMR determined protein structures and protein expression clones. The growing database covers over 23 000 unique genes/proteins from different experiments and organisms. All of the genes/proteins are annotated and integrated across experiments using UniProt Knowledgebase (UniProtKB) accession numbers. The web interface for the database enables searching, querying and downloading at the level of experiment, group and individual gene(s)/protein(s) via UniProtKB accession numbers or protein function keywords. The system is accessible at http://www.proteomicsresource.org/.

  18. Understanding the Influence of Environment on Adults’ Walking Experiences: A Meta-Synthesis Study

    PubMed Central

    Dadpour, Sara; Pakzad, Jahanshah; Khankeh, Hamidreza

    2016-01-01

    The environment has an important impact on physical activity, especially walking. The relationship between the environment and walking is not the same as for other types of physical activity. This study seeks to comprehensively identify the environmental factors influencing walking and to show how those environmental factors impact on walking using the experiences of adults between the ages of 18 and 65. The current study is a meta-synthesis based on a systematic review. Seven databases of related disciplines were searched, including health, transportation, physical activity, architecture, and interdisciplinary databases. In addition to the databases, two journals were searched. Of the 11,777 papers identified, 10 met the eligibility criteria and quality for selection. Qualitative content analysis was used for analysis of the results. The four themes identified as influencing walking were “safety and security”, “environmental aesthetics”, “social relations”, and “convenience and efficiency”. “Convenience and efficiency” and “environmental aesthetics” could enhance the impact of “social relations” on walking in some aspects. In addition, “environmental aesthetics” and “social relations” could hinder the influence of “convenience and efficiency” on walking in some aspects. Given the results of the study, strategies are proposed to enhance the walking experience. PMID:27447660

  19. New tools and methods for direct programmatic access to the dbSNP relational database.

    PubMed

    Saccone, Scott F; Quan, Jiaxi; Mehta, Gaurang; Bolze, Raphael; Thomas, Prasanth; Deelman, Ewa; Tischfield, Jay A; Rice, John P

    2011-01-01

    Genome-wide association studies often incorporate information from public biological databases in order to provide a biological reference for interpreting the results. The dbSNP database is an extensive source of information on single nucleotide polymorphisms (SNPs) for many different organisms, including humans. We have developed free software that will download and install a local MySQL implementation of the dbSNP relational database for a specified organism. We have also designed a system for classifying dbSNP tables in terms of common tasks we wish to accomplish using the database. For each task we have designed a small set of custom tables that facilitate task-related queries and provide entity-relationship diagrams for each task composed from the relevant dbSNP tables. In order to expose these concepts and methods to a wider audience we have developed web tools for querying the database and browsing documentation on the tables and columns to clarify the relevant relational structure. All web tools and software are freely available to the public at http://cgsmd.isi.edu/dbsnpq. Resources such as these for programmatically querying biological databases are essential for viably integrating biological information into genetic association experiments on a genome-wide scale.
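
    As a loose illustration of the kind of task-oriented querying described above, the sketch below stands in for the local MySQL dbSNP mirror with an in-memory SQLite table. The table name, columns, and rows are simplified placeholders invented for this example, not the real dbSNP schema.

```python
import sqlite3

# Illustrative stand-in for a local relational SNP store; the schema below
# is a hypothetical simplification, NOT the actual dbSNP table layout.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE snp_summary (
        snp_id    INTEGER PRIMARY KEY,  -- numeric part of an rs number
        chrom     TEXT,
        position  INTEGER,
        gene_hint TEXT                  -- nearby gene symbol (illustrative)
    )
""")
conn.executemany(
    "INSERT INTO snp_summary VALUES (?, ?, ?, ?)",
    [(1801133, "1", 11796321, "MTHFR"),
     (429358, "19", 44908684, "APOE"),
     (7412, "19", 44908822, "APOE")],   # toy rows, made-up coordinates
)

# A task-oriented query: all SNPs annotated to a gene, ordered by position.
rows = conn.execute(
    "SELECT snp_id, chrom, position FROM snp_summary "
    "WHERE gene_hint = ? ORDER BY position", ("APOE",)
).fetchall()
for snp_id, chrom, pos in rows:
    print(f"rs{snp_id}\tchr{chrom}:{pos}")
```

    The custom task tables the authors describe serve exactly this purpose: collapsing joins across many dbSNP tables into a small set of query-friendly views.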

  20. LEPER: Library of Experimental PhasE Relations

    NASA Astrophysics Data System (ADS)

    Davis, F.; Gordon, S.; Mukherjee, S.; Hirschmann, M.; Ghiorso, M.

    2006-12-01

    The Library of Experimental PhasE Relations (LEPER) seeks to compile published experimental determinations of magmatic phase equilibria and provide those data on the web with a searchable and downloadable interface. Compiled experimental data include the conditions and durations of experiments, the bulk compositions of experimental charges, the identity, compositions and proportions of phases observed, and, where available, estimates of experimental and analytical uncertainties. Also included are metadata such as the type of experimental device, capsule material, and method(s) of quantitative analysis. The database may be of use to practicing experimentalists as well as the wider Earth science community. Experimentalists may find the data useful for planning new experiments and will easily be able to compare their results to the full body of previous experimental data. Geologists may use LEPER to compare rocks sampled in the field with experiments performed on similar bulk compositions or with experiments that produced similar-composition product phases. Modelers may use LEPER to parameterize partial melting of various lithologies. One motivation for compiling LEPER is the calibration of updated and revised versions of MELTS; however, it is hoped that the availability of LEPER will facilitate formulation and calibration of additional thermodynamic or empirical models of magmatic phase relations and phase equilibria, geothermometers and more. Data entry for LEPER is presently under way: as of August 2006, >6200 experiments have been entered, chiefly from work published between 1997 and 2005. A prototype web interface has been written and beta release on the web is anticipated in Fall 2006. Eventually, experimentalists will be able to submit their new experimental data to the database via the web.
At present, the database contains only data pertaining to the phase equilibria of silicate melts, but extension to other experimental data involving other fluids or sub-solidus phase equilibria may be contemplated. Also, the data are at present limited to natural or near-natural systems, but in the future, extension to synthetic (i.e., CMAS, etc.) systems is also possible. Each would depend in part on whether there is community demand for such databases. A trace element adjunct to LEPER is presently in planning stages.

  1. Earth System Model Development and Analysis using FRE-Curator and Live Access Servers: On-demand analysis of climate model output with data provenance.

    NASA Astrophysics Data System (ADS)

    Radhakrishnan, A.; Balaji, V.; Schweitzer, R.; Nikonov, S.; O'Brien, K.; Vahlenkamp, H.; Burger, E. F.

    2016-12-01

    There are distinct phases in the development cycle of an Earth system model. During the model development phase, scientists make changes to code and parameters and require rapid access to results for evaluation. During the production phase, scientists may make an ensemble of runs with different settings and produce large quantities of output that must be further analyzed and quality controlled for scientific papers and submission to international projects such as the Climate Model Intercomparison Project (CMIP). During this phase, provenance is a key concern: being able to track back from outputs to inputs. We will discuss one of the paths taken at GFDL in delivering tools across this lifecycle, offering on-demand analysis of data by integrating the use of GFDL's in-house FRE-Curator, Unidata's THREDDS and NOAA PMEL's Live Access Servers (LAS). Experience over this lifecycle suggests that a major difficulty in developing analysis capabilities lies only partially in the scientific content; much of the effort is devoted to answering the questions "where is the data?" and "how do I get to it?". "FRE-Curator" is the name of a database-centric paradigm used at NOAA GFDL to ingest information about model runs into an RDBMS (the Curator database). The components of FRE-Curator are integrated into the Flexible Runtime Environment workflow and can be invoked during climate model simulation. The front end to FRE-Curator, known as the Model Development Database Interface (MDBI), provides in-house web-based access to GFDL experiments: metadata, analysis output and more. In order to provide on-demand visualization, MDBI uses the Live Access Server, a highly configurable web server designed to provide flexible access to geo-referenced scientific data, which makes use of OPeNDAP. 
Model output stored in GFDL's tape archive, the size of the database and experiments, and continuous model development initiatives with more dynamic configurations all add complexity and challenges to providing an on-demand visualization experience for GFDL users.

  2. Developing Teaching Skills in Physical Education.

    ERIC Educational Resources Information Center

    Siedentop, Daryl

    This textbook attempts to clarify the nature of teaching during the field experience or simulation of that experience for student teachers. The text takes a data-based approach to the development of teaching skills. It is divided into seven chapters. The first chapter, "Systematic Improvement of Teaching Skills," is a narrative…

  3. Administering a Web-Based Course on Database Technology

    ERIC Educational Resources Information Center

    de Oliveira, Leonardo Rocha; Cortimiglia, Marcelo; Marques, Luis Fernando Moraes

    2003-01-01

    This article presents a managerial experience with a web-based course on database technology for enterprise management. The course has been developed and managed by a Department of Industrial Engineering at a public university in Brazil. The project's managerial experiences are described, covering its conception stage where the Virtual Learning…

  4. Black African Parents' Experiences of an Educational Psychology Service

    ERIC Educational Resources Information Center

    Lawrence, Zena

    2014-01-01

    The evidence base that explores Black African parents' experiences of an Educational Psychology Service (EPS) is limited. This article describes an exploratory mixed methods research study, undertaken during 2009-2011, that explored Black African parents' engagement with a UK EPS. Quantitative data were gathered from the EPS preschool database and…

  5. A Web-based Alternative Non-animal Method Database for Safety Cosmetic Evaluations

    PubMed Central

    Kim, Seung Won; Kim, Bae-Hwan

    2016-01-01

    Animal testing was used traditionally in the cosmetics industry to confirm product safety, but has begun to be banned; alternative methods to replace animal experiments are either in development, or are being validated, worldwide. Research data related to test substances are critical for developing novel alternative tests. Moreover, safety information on cosmetic materials has neither been collected in a database nor shared among researchers. Therefore, it is imperative to build and share a database of safety information on toxicological mechanisms and pathways collected through in vivo, in vitro, and in silico methods. We developed the CAMSEC database (named after the research team; the Consortium of Alternative Methods for Safety Evaluation of Cosmetics) to fulfill this purpose. On the same website, our aim is to provide updates on current alternative research methods in Korea. The database will not be used directly to conduct safety evaluations, but researchers or regulatory individuals can use it to facilitate their work in formulating safety evaluations for cosmetic materials. We hope this database will help establish new alternative research methods to conduct efficient safety evaluations of cosmetic materials. PMID:27437094

  6. SGDB: a database of synthetic genes re-designed for optimizing protein over-expression.

    PubMed

    Wu, Gang; Zheng, Yuanpu; Qureshi, Imran; Zin, Htar Thant; Beck, Tyler; Bulka, Blazej; Freeland, Stephen J

    2007-01-01

    Here we present the Synthetic Gene Database (SGDB): a relational database that houses sequences and associated experimental information on synthetic (artificially engineered) genes from all peer-reviewed studies published to date. At present, the database comprises information from more than 200 published experiments. This resource not only provides reference material to guide experimentalists in designing new genes that improve protein expression, but also offers a dataset for analysis by bioinformaticians who seek to test ideas regarding the underlying factors that influence gene expression. The SGDB was built under the MySQL database management system. We also offer an XML schema for standardized data description of synthetic genes. Users can access the database at http://www.evolvingcode.net/codon/sgdb/index.php, or batch-download all information through XML files. Moreover, users may visually compare the coding sequences of a synthetic gene and its natural counterpart with an integrated web tool at http://www.evolvingcode.net/codon/sgdb/aligner.php, and discuss questions, findings and related information on an associated e-forum at http://www.evolvingcode.net/forum/viewforum.php?f=27.

  7. The development of a prototype intelligent user interface subsystem for NASA's scientific database systems

    NASA Technical Reports Server (NTRS)

    Campbell, William J.; Roelofs, Larry H.; Short, Nicholas M., Jr.

    1987-01-01

    The National Space Science Data Center (NSSDC) has initiated an Intelligent Data Management (IDM) research effort which has as one of its components the development of an Intelligent User Interface (IUI). The intent of the latter is to develop a friendly and intelligent user interface service based on expert systems and natural language processing technologies. The purpose is to support the large number of potential scientific and engineering users who need space and land related research and technical data but who have little or no experience with query languages or understanding of the information content or architecture of the databases involved. This technical memorandum presents a prototype Intelligent User Interface Subsystem (IUIS) using the Crustal Dynamics Project Database as a test bed for the implementation of CRUDDES (the Crustal Dynamics Expert System). The knowledge base has more than 200 rules and represents a single application view and the architectural view. Operational performance using CRUDDES has allowed nondatabase users to obtain useful information from the database previously accessible only to an expert database user or the database designer.

  8. A Web-based Alternative Non-animal Method Database for Safety Cosmetic Evaluations.

    PubMed

    Kim, Seung Won; Kim, Bae-Hwan

    2016-07-01

    Animal testing was used traditionally in the cosmetics industry to confirm product safety, but has begun to be banned; alternative methods to replace animal experiments are either in development, or are being validated, worldwide. Research data related to test substances are critical for developing novel alternative tests. Moreover, safety information on cosmetic materials has neither been collected in a database nor shared among researchers. Therefore, it is imperative to build and share a database of safety information on toxicological mechanisms and pathways collected through in vivo, in vitro, and in silico methods. We developed the CAMSEC database (named after the research team; the Consortium of Alternative Methods for Safety Evaluation of Cosmetics) to fulfill this purpose. On the same website, our aim is to provide updates on current alternative research methods in Korea. The database will not be used directly to conduct safety evaluations, but researchers or regulatory individuals can use it to facilitate their work in formulating safety evaluations for cosmetic materials. We hope this database will help establish new alternative research methods to conduct efficient safety evaluations of cosmetic materials.

  9. CHIP Demonstrator: Semantics-Driven Recommendations and Museum Tour Generation

    NASA Astrophysics Data System (ADS)

    Aroyo, Lora; Stash, Natalia; Wang, Yiwen; Gorgels, Peter; Rutledge, Lloyd

    The main objective of the CHIP project is to demonstrate how Semantic Web technologies can be deployed to provide personalized access to digital museum collections. We illustrate our approach with the digital database ARIA of the Rijksmuseum Amsterdam. For the semantic enrichment of the Rijksmuseum ARIA database we collaborated with the CATCH STITCH project to produce mappings to Iconclass, and with the MultimediaN E-culture project to produce the RDF/OWL of the ARIA and Adlib databases. The main focus of CHIP is on exploring the potential of applying adaptation techniques to provide personalized experience for the museum visitors both on the Web site and in the museum.

  10. The HyperLeda project en route to the astronomical virtual observatory

    NASA Astrophysics Data System (ADS)

    Golev, V.; Georgiev, V.; Prugniel, Ph.

    2002-07-01

    HyperLeda (Hyper-Linked Extragalactic Databases and Archives) is aimed at studying the evolution of galaxies, their kinematics and stellar populations, and the structure of the Local Universe. HyperLeda is involved in catalogue and software production, data mining and massive data processing. The products are served to the community through web mirrors. The development of HyperLeda is distributed among different sites and builds on the background experience of the LEDA and Hypercat databases. The HyperLeda project is focused both on the European iAstro collaboration and on serving as a unique database for studies of the physics of extragalactic objects.

  11. Test of the Behavioral Perspective Model in the Context of an E-Mail Marketing Experiment

    ERIC Educational Resources Information Center

    Sigurdsson, Valdimar; Menon, R. G. Vishnu; Sigurdarson, Johannes Pall; Kristjansson, Jon Skafti; Foxall, Gordon R.

    2013-01-01

    An e-mail marketing experiment based on the behavioral perspective model was conducted to investigate consumer choice. Conversion e-mails were sent to two groups from the same marketing database of registered consumers interested in children's books. The experiment was based on A-B-A-C-A and A-C-A-B-A withdrawal designs and consisted of sending B…

  12. Does filler database size influence identification accuracy?

    PubMed

    Bergold, Amanda N; Heaton, Paul

    2018-06-01

    Police departments increasingly use large photo databases to select lineup fillers using facial recognition software, but this technological shift's implications have been largely unexplored in eyewitness research. Database use, particularly if coupled with facial matching software, could enable lineup constructors to increase filler-suspect similarity and thus enhance eyewitness accuracy (Fitzgerald, Oriet, Price, & Charman, 2013). However, with a large pool of potential fillers, such technologies might theoretically produce lineup fillers too similar to the suspect (Fitzgerald, Oriet, & Price, 2015; Luus & Wells, 1991; Wells, Rydell, & Seelau, 1993). This research proposes a new factor, filler database size, as a lineup feature affecting eyewitness accuracy. In a facial recognition experiment, we select lineup fillers in a legally realistic manner using facial matching software applied to filler databases of 5,000, 25,000, and 125,000 photos, and find that larger databases are associated with higher objective similarity ratings between suspects and fillers and lower overall identification accuracy. In target-present lineups, witnesses viewing lineups created from the larger databases were less likely to make correct identifications and more likely to select known-innocent fillers. When the target was absent, database size was associated with a lower rate of correct rejections and a higher rate of filler identifications. Higher algorithmic similarity ratings were also associated with decreases in eyewitness identification accuracy. The results suggest that using facial matching software to select fillers from large photograph databases may reduce identification accuracy, and provide support for filler database size as a meaningful system variable. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  13. Collaborative development for setup, execution, sharing and analytics of complex NMR experiments.

    PubMed

    Irvine, Alistair G; Slynko, Vadim; Nikolaev, Yaroslav; Senthamarai, Russell R P; Pervushin, Konstantin

    2014-02-01

    Factory settings of NMR pulse sequences are rarely ideal for every scenario in which they are utilised. The optimisation of NMR experiments has for many years been performed locally, with implementations often specific to an individual spectrometer. Furthermore, these optimised experiments are normally retained solely for the use of an individual laboratory, spectrometer or even single user. Here we introduce a web-based service that provides a database for the deposition, annotation and optimisation of NMR experiments. The application uses a Wiki environment to enable the collaborative development of pulse sequences. It also provides a flexible mechanism to automatically generate NMR experiments from deposited sequences. Multidimensional NMR experiments on proteins and other macromolecules consume significant resources, in terms of both spectrometer time and the effort required to analyse the results. Systematic analysis of simulated experiments can enable optimal allocation of NMR resources for structural analysis of proteins. Our web-based application (http://nmrplus.org) provides all the necessary information, including auxiliaries (waveforms, decoupling sequences, etc.), for analysis of experiments by accurate numerical simulation of multidimensional NMR experiments. The online database of NMR experiments, together with a systematic evaluation of their sensitivity, provides a framework for selection of the most efficient pulse sequences. The development of such a framework provides a basis for the collaborative optimisation of pulse sequences by the NMR community, with the benefits of this collective effort being available to the whole community. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. James Webb Space Telescope XML Database: From the Beginning to Today

    NASA Technical Reports Server (NTRS)

    Gal-Edd, Jonathan; Fatig, Curtis C.

    2005-01-01

    The James Webb Space Telescope (JWST) Project has been defining, developing, and exercising the use of a common eXtensible Markup Language (XML) for the command and telemetry (C&T) database structure. JWST is the first large NASA space mission to use XML for databases. The JWST project started developing the concepts for the C&T database in 2002. The database will need to last at least 20 years, since it will be used beginning with flight software development, continuing through Observatory integration and test (I&T) and through operations. Also, a database tool kit has been provided to the 18 flight software development laboratories located in the United States, Europe, and Canada that allows the local users to create their own databases. Recently the JWST Project has been working with the Jet Propulsion Laboratory (JPL) and Object Management Group (OMG) XML Telemetry and Command Exchange (XTCE) personnel to provide all the information needed by JWST and JPL for exchanging database information using a standard XML structure. The lack of standardization requires custom ingest scripts for each ground system segment, increasing the cost of the total system. Providing a non-proprietary standard for the telemetry and command database definition format will allow dissimilar systems to communicate without the need for expensive mission-specific database tools and testing of the systems after the database translation. The various ground system components that would benefit from a standardized database are the telemetry and command systems, archives, simulators, and trending tools. JWST has successfully exchanged the XML database with the Eclipse, EPOCH, and ASIST ground systems, the Portable Spacecraft Simulator (PSS), a front-end system, and the Integrated Trending and Plotting System (ITPS). 
This paper will discuss how JWST decided to use XML, the barriers to a new concept, experiences utilizing the XML structure, exchanging databases with other users, and issues that have been experienced in creating databases for the C&T system.
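
    To make the idea of an XML-based C&T database concrete, here is a schematic sketch of defining and ingesting telemetry and command entries from XML. The element and attribute names are invented for illustration; they are not the actual XTCE schema or the JWST database format.

```python
import xml.etree.ElementTree as ET

# Hypothetical toy command-and-telemetry definition. Real missions would
# use a standardized schema such as XTCE; this structure is made up.
doc = """
<telemetryDatabase mission="DemoSat">
  <parameter name="BATT_VOLTAGE" type="float" units="V"/>
  <parameter name="MODE" type="enum" units="">
    <enumeration value="0" label="SAFE"/>
    <enumeration value="1" label="SCIENCE"/>
  </parameter>
  <command name="SET_MODE" opcode="0x1A">
    <argument name="mode" type="enum"/>
  </command>
</telemetryDatabase>
"""

# A ground-system ingest script reduces to a walk over the tree: the same
# file can feed a telemetry display, an archive, or a simulator.
root = ET.fromstring(doc)
params = {p.get("name"): p.get("type") for p in root.findall("parameter")}
commands = [c.get("name") for c in root.findall("command")]
print(params)
print(commands)
```

    The point the abstract makes is that once the definition lives in one non-proprietary XML form, each ground system component can parse it directly instead of maintaining its own mission-specific database format.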

  15. The Israeli National Genetic database: a 10-year experience.

    PubMed

    Zlotogora, Joël; Patrinos, George P

    2017-03-16

    The Israeli National and Ethnic Mutation database ( http://server.goldenhelix.org/israeli ) was launched in September 2006 on the ETHNOS software to include clinically relevant genomic variants reported among Jewish and Arab Israeli patients. In 2016, the database was reviewed and corrected according to ClinVar ( https://www.ncbi.nlm.nih.gov/clinvar ) and ExAC ( http://exac.broadinstitute.org ) database entries. The present article summarizes some key aspects of the development and continuous update of the database over a 10-year period, which could serve as a paradigm of successful database curation for other similar resources. In September 2016, there were 2444 entries in the database: 890 among Jews, 1376 among Israeli Arabs, and 178 among Palestinian Arabs, corresponding to an ~4× increase in data content compared to when originally launched. While the Israeli Arab population is much smaller than the Jewish population, the number of pathogenic variants causing recessive disorders reported in the database is higher among Arabs (934) than among Jews (648). Nevertheless, the number of pathogenic variants classified as founder mutations in the database is smaller among Arabs (175) than among Jews (192). In 2016, the entire database content was compared to that of other databases such as ClinVar and ExAC. We show that a significant difference in the percentage of pathogenic variants from the Israeli genetic database that were present in ExAC was observed between the Jewish population (31.8%) and the Israeli Arab population (20.6%). The Israeli genetic database was launched in 2006 on the ETHNOS software and has been available online ever since. It allows querying the database according to disorder and ethnicity; however, many other features are not available, in particular the possibility to search according to the name of the gene. 
In addition, due to the technical limitations of the previous ETHNOS software, new features and data are not included in the present online version of the database and upgrade is currently ongoing.

  16. Randomized Approaches for Nearest Neighbor Search in Metric Space When Computing the Pairwise Distance Is Extremely Expensive

    NASA Astrophysics Data System (ADS)

    Wang, Lusheng; Yang, Yong; Lin, Guohui

    Finding the closest object to a query in a database is a classical problem in computer science. For some modern biological applications, computing the similarity between two objects can be very time consuming. For example, it takes a long time to compute the edit distance between two whole chromosomes or the alignment cost of two 3D protein structures. In this paper, we study the nearest neighbor search problem in metric space, where the pairwise distances between objects in the database are known and we want to minimize the number of distances computed online between the query and objects in the database in order to find the closest object. We have designed two randomized approaches for indexing metric-space databases, where objects are described purely by their distances to each other. Analysis and experiments show that our approaches only need to compute distances to O(log n) objects in order to find the closest object, where n is the total number of objects in the database.
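
    The abstract gives no algorithmic details, but the general flavor of searching with precomputed pairwise distances can be sketched as an AESA-style randomized pivot search (an illustrative stand-in, not the authors' actual scheme): the triangle inequality gives a lower bound on each object's distance to the query, letting us discard objects without ever computing that distance.

```python
import random

def nn_search(points, query, dist, pairwise):
    """Find the nearest neighbor of `query`, counting online distance calls.

    `pairwise[i][j]` holds the precomputed distance between database
    objects i and j; `dist` is the (expensive) online distance function.
    """
    candidates = set(range(len(points)))
    best_i, best_d = None, float("inf")
    computed = 0
    while candidates:
        i = random.choice(tuple(candidates))   # randomized pivot choice
        d_qi = dist(query, points[i])
        computed += 1
        candidates.discard(i)
        if d_qi < best_d:
            best_i, best_d = i, d_qi
        # Triangle inequality: d(q,x) >= |d(q,i) - d(i,x)|. Any x whose
        # lower bound already reaches best_d cannot beat the current best.
        candidates = {x for x in candidates
                      if abs(d_qi - pairwise[i][x]) < best_d}
    return best_i, best_d, computed

# Toy metric space: points on a line with |a - b| as the distance.
pts = [0.0, 1.0, 2.5, 4.0, 7.0, 9.5]
pairwise = [[abs(a - b) for b in pts] for a in pts]
idx, d, n_computed = nn_search(pts, 3.9, lambda q, p: abs(q - p), pairwise)
print(idx, d, n_computed)   # nearest point is 4.0, at index 3
```

    On clustered data a single well-placed pivot prunes most of the candidate set, which is the intuition behind sublinear counts of online distance computations.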

  17. PHASE I MATERIALS PROPERTY DATABASE DEVELOPMENT FOR ASME CODES AND STANDARDS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Weiju; Lin, Lianshan

    2013-01-01

    To support the ASME Boiler and Pressure Vessel Code (BPVC) in the modern information era, development of a web-based materials property database has been initiated under the supervision of the ASME Committee on Materials. To achieve efficiency, the project draws heavily upon experience from development of the Gen IV Materials Handbook and the Nuclear System Materials Handbook. The effort is divided into two phases. Phase I is planned to deliver a materials data file warehouse that offers a repository for various files containing raw data and background information, and Phase II will provide a relational digital database with advanced features facilitating digital data processing and management. Population of the database will start with materials property data for nuclear applications and expand to data covering the entire ASME Codes and Standards, including the piping codes, as the database structure is continuously optimized. The ultimate goal of the effort is to establish a sound cyber infrastructure that supports ASME Codes and Standards development and maintenance.

  18. The Brain Database: A Multimedia Neuroscience Database for Research and Teaching

    PubMed Central

    Wertheim, Steven L.

    1989-01-01

    The Brain Database is an information tool designed to aid in the integration of clinical and research results in neuroanatomy and regional biochemistry. It can handle a wide range of data types, including natural images, 2- and 3-dimensional graphics, video, numeric data, and text. It is organized around three main entities: structures, substances, and processes. The database will support a wide variety of graphical interfaces; two sample interfaces have been made. This tool is intended to serve as one component of a system that would allow neuroscientists and clinicians 1) to represent clinical and experimental data within a common framework, 2) to compare results precisely between experiments and among laboratories, 3) to use computing tools as an aid in collaborative work, and 4) to contribute to a shared and accessible body of knowledge about the nervous system.

  19. US and foreign alloy cross-reference database

    NASA Technical Reports Server (NTRS)

    Springer, John M.; Morgan, Steven H.

    1991-01-01

    Marshall Space Flight Center and other NASA installations have a continuing requirement for materials data from other countries involved with the development of joint international Spacelab experiments and other hardware. This need includes collecting data for common alloys to ascertain composition, physical properties, specifications, and designations. This data is scattered throughout a large number of specification statements, standards, handbooks, and other technical literature, which makes a manual search both tedious and often limited in extent. In recognition of this problem, a computerized database of information on alloys was developed along with the software necessary to provide the desired functions to access this data. The intention was to produce an initial database covering aluminum alloys, along with the program to provide a user interface to the data, and then later to extend and refine the database to include other nonferrous and ferrous alloys.

  20. PARPs database: A LIMS (laboratory information management system) for protein-protein interaction data mining

    PubMed Central

    Droit, Arnaud; Hunter, Joanna M; Rouleau, Michèle; Ethier, Chantal; Picard-Cloutier, Aude; Bourgais, David; Poirier, Guy G

    2007-01-01

    Background In the "post-genome" era, mass spectrometry (MS) has become an important method for the analysis of proteins and the rapid advancement of this technique, in combination with other proteomics methods, results in an increasing amount of proteome data. This data must be archived and analysed using specialized bioinformatics tools. Description We herein describe "PARPs database," a data analysis and management pipeline for liquid chromatography tandem mass spectrometry (LC-MS/MS) proteomics. PARPs database is a web-based tool whose features include experiment annotation, protein database searching, protein sequence management, as well as data-mining of the peptides and proteins identified. Conclusion Using this pipeline, we have successfully identified several interactions of biological significance between PARP-1 and other proteins, namely RFC-1, 2, 3, 4 and 5. PMID:18093328

  1. Comparison of Stopping Power and Range Databases for Radiation Transport Study

    NASA Technical Reports Server (NTRS)

    Tai, H.; Bichsel, Hans; Wilson, John W.; Shinn, Judy L.; Cucinotta, Francis A.; Badavi, Francis F.

    1997-01-01

    The codes used to calculate stopping power and range for the space radiation shielding program at the Langley Research Center are based on the work of Ziegler, but with modifications. As more experience is gained from experiments at heavy ion accelerators, prudence dictates a reevaluation of the current databases. Numerical values of stopping power and range calculated from four different codes currently in use are presented for selected ions and materials in the energy domain suitable for space radiation transport. This study has found that, for most collision systems and intermediate particle energies, the codes generally agree to within 1 percent. However, greater discrepancies are seen for heavy systems, especially at low particle energies.

  2. The construction and assessment of a statistical model for the prediction of protein assay data.

    PubMed

    Pittman, J; Sacks, J; Young, S Stanley

    2002-01-01

    The focus of this work is the development of a statistical model for a bioinformatics database whose distinctive structure makes model assessment an interesting and challenging problem. The key components of the statistical methodology, including a fast approximation to the singular value decomposition and the use of adaptive spline modeling and tree-based methods, are described, and preliminary results are presented. These results are shown to compare favorably to selected results achieved using comparative methods. An attempt to determine the predictive ability of the model through the use of cross-validation experiments is discussed. In conclusion, a synopsis of the results of these experiments and their implications for the analysis of bioinformatics databases in general is presented.

  3. External access to ALICE controls conditions data

    NASA Astrophysics Data System (ADS)

    Jadlovský, J.; Jadlovská, A.; Sarnovský, J.; Jajčišin, Š.; Čopík, M.; Jadlovská, S.; Papcun, P.; Bielek, R.; Čerkala, J.; Kopčík, M.; Chochula, P.; Augustinus, A.

    2014-06-01

    ALICE Controls data produced by the commercial SCADA system WINCCOA is stored in an ORACLE database on the private experiment network. The SCADA system allows for basic access and processing of the historical data. More advanced analysis requires tools like ROOT and therefore needs a separate access method to the archives. The present scenario expects that detector experts create simple WINCCOA scripts which retrieve and store data in a form usable for further studies. This relatively simple procedure generates a lot of administrative overhead - users have to request the data, experts need to run the scripts, and the results have to be exported outside of the experiment network. The new mechanism profits from a database replica running on the CERN campus network. Access to this database is not restricted, and there is no risk of generating a heavy load affecting the operation of the experiment. The tools presented in this paper allow access to this data. Users can use web-based tools to generate requests consisting of the data identifiers and the period of time of interest. The administrators maintain full control over the data - an authorization and authentication mechanism helps to assign privileges to selected users and restrict access to certain groups of data. An advanced caching mechanism allows the user to profit from the presence of already processed data sets; this feature significantly reduces the time required for debugging, as the retrieval of raw data can last tens of minutes. A highly configurable client allows for information retrieval bypassing the interactive interface. This method is, for example, used by ALICE Offline to extract operational conditions after a run is completed. Last but not least, the software can easily be adapted to any underlying database structure and is therefore not limited to WINCCOA.

  4. Data-Driven User Feedback: An Improved Neurofeedback Strategy considering the Interindividual Variability of EEG Features.

    PubMed

    Han, Chang-Hee; Lim, Jeong-Hwan; Lee, Jun-Hak; Kim, Kangsan; Im, Chang-Hwan

    2016-01-01

    It has frequently been reported that some users of conventional neurofeedback systems can experience only a small portion of the total feedback range due to the large interindividual variability of EEG features. In this study, we proposed a data-driven neurofeedback strategy considering the individual variability of electroencephalography (EEG) features to permit users of the neurofeedback system to experience a wider range of auditory or visual feedback without a customization process. The main idea of the proposed strategy is to adjust the ranges of each feedback level using the density in the offline EEG database acquired from a group of individuals. Twenty-two healthy subjects participated in offline experiments to construct an EEG database, and five subjects participated in online experiments to validate the performance of the proposed data-driven user feedback strategy. Using the optimized bin sizes, the number of feedback levels that each individual experienced was significantly increased to 139% and 144% of the original results with uniform bin sizes in the offline and online experiments, respectively. Our results demonstrated that the use of our data-driven neurofeedback strategy could effectively increase the overall range of feedback levels that each individual experienced during neurofeedback training.
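
The density-based bin adjustment described above can be sketched as quantile binning (a plausible reading of the strategy, not the authors' exact procedure; the function names are illustrative): feedback-level boundaries are placed at population quantiles, so levels are narrow where the group EEG feature distribution is dense and wide where it is sparse, and each level is experienced roughly equally often.

```python
def density_bins(population_values, num_levels):
    """Split the population feature distribution into equal-mass bins:
    bin edges are nearest-rank quantiles of the group database."""
    vals = sorted(population_values)
    n = len(vals)
    return [vals[min(n - 1, (k * n) // num_levels)]
            for k in range(1, num_levels)]

def feedback_level(x, edges):
    """Map a live feature value to a feedback level 0..len(edges)."""
    level = 0
    for e in edges:
        if x >= e:
            level += 1
    return level
```

With uniform bins, a user whose feature values cluster in one region would see only a few levels; quantile edges spread the same values across the full range.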

  5. Data-Driven User Feedback: An Improved Neurofeedback Strategy considering the Interindividual Variability of EEG Features

    PubMed Central

    Lim, Jeong-Hwan; Lee, Jun-Hak; Kim, Kangsan

    2016-01-01

    It has frequently been reported that some users of conventional neurofeedback systems can experience only a small portion of the total feedback range due to the large interindividual variability of EEG features. In this study, we proposed a data-driven neurofeedback strategy considering the individual variability of electroencephalography (EEG) features to permit users of the neurofeedback system to experience a wider range of auditory or visual feedback without a customization process. The main idea of the proposed strategy is to adjust the ranges of each feedback level using the density in the offline EEG database acquired from a group of individuals. Twenty-two healthy subjects participated in offline experiments to construct an EEG database, and five subjects participated in online experiments to validate the performance of the proposed data-driven user feedback strategy. Using the optimized bin sizes, the number of feedback levels that each individual experienced was significantly increased to 139% and 144% of the original results with uniform bin sizes in the offline and online experiments, respectively. Our results demonstrated that the use of our data-driven neurofeedback strategy could effectively increase the overall range of feedback levels that each individual experienced during neurofeedback training. PMID:27631005

  6. ExpTreeDB: web-based query and visualization of manually annotated gene expression profiling experiments of human and mouse from GEO.

    PubMed

    Ni, Ming; Ye, Fuqiang; Zhu, Juanjuan; Li, Zongwei; Yang, Shuai; Yang, Bite; Han, Lu; Wu, Yongge; Chen, Ying; Li, Fei; Wang, Shengqi; Bo, Xiaochen

    2014-12-01

    Numerous public microarray datasets are valuable resources for the scientific communities. Several online tools have made great steps to use these data by querying related datasets with users' own gene signatures or expression profiles. However, dataset annotation and result exhibition still need to be improved. ExpTreeDB is a database that allows for queries on human and mouse microarray experiments from Gene Expression Omnibus with gene signatures or profiles. Compared with similar applications, ExpTreeDB pays more attention to dataset annotations and result visualization. We introduced a multiple-level annotation system to depict and organize original experiments. For example, a tamoxifen-treated cell line experiment is hierarchically annotated as 'agent→drug→estrogen receptor antagonist→tamoxifen'. Consequently, retrieved results are exhibited by an interactive tree-structured graphics, which provide an overview for related experiments and might enlighten users on key items of interest. The database is freely available at http://biotech.bmi.ac.cn/ExpTreeDB. Web site is implemented in Perl, PHP, R, MySQL and Apache. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
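
Hierarchical annotations such as 'agent→drug→estrogen receptor antagonist→tamoxifen' naturally form a tree. A minimal sketch of folding such path strings into a nested structure (illustrative only; ExpTreeDB itself is implemented in Perl, PHP, R, MySQL and Apache, and the second example path below is invented):

```python
def build_tree(paths, sep="→"):
    """Fold a list of hierarchical annotation strings into a nested dict,
    one dictionary level per path segment."""
    root = {}
    for path in paths:
        node = root
        for part in path.split(sep):
            node = node.setdefault(part, {})
    return root
```

Shared prefixes collapse into shared branches, which is what makes an overview tree of related experiments possible.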

  7. Review of the use of high potencies in basic research on homeopathy.

    PubMed

    Clausen, Jürgen; van Wijk, Roeland; Albrecht, Henning

    2011-10-01

    The HomBRex database includes details of about 1500 basic research experiments in homeopathy. A general overview of the experiments listed in the HomBRex database is presented, focusing on high dilutions and the different settings in which they were used. Though often criticised, many experiments with remedies diluted beyond Avogadro's number demonstrate specific effects. A total of 830 experiments employing high potencies was found; in 745 of these (90%), at least one positive result was reported. Animals represent the most often used model system (n=371), followed by plants (n=201), human material (n=92), bacteria and viruses (n=37) and fungi (n=32). Arsenicum album (Ars.) is the substance most often applied (n=101), followed by Sulphur (Sulph.) and Thuja (Thuj.) (n=65 and 48, respectively). Proving, prophylactic and therapeutic study designs have all been used and appear appropriate for homeopathy basic research using high dilutions. The basic research data set to support specific effects unique to high dilutions and opposite to those observed with low dilutions is, to date, insufficient. Copyright © 2011 The Faculty of Homeopathy. Published by Elsevier Ltd. All rights reserved.

  8. Development of a Searchable Database of Cryoablation Simulations for Use in Treatment Planning.

    PubMed

    Boas, F Edward; Srimathveeravalli, Govindarajan; Durack, Jeremy C; Kaye, Elena A; Erinjeri, Joseph P; Ziv, Etay; Maybody, Majid; Yarmohammadi, Hooman; Solomon, Stephen B

    2017-05-01

    To create and validate a planning tool for multiple-probe cryoablation, using simulations of ice ball size and shape for various ablation probe configurations, ablation times, and types of tissue ablated. Ice ball size and shape were simulated using the Pennes bioheat equation. Five thousand six hundred and seventy different cryoablation procedures were simulated, using 1-6 cryoablation probes and 1-2 cm spacing between probes. The resulting ice ball was measured along three perpendicular axes and recorded in a database. Simulated ice ball sizes were compared to gel experiments (26 measurements) and clinical cryoablation cases (42 measurements). The clinical cryoablation measurements were obtained from a HIPAA-compliant retrospective review of kidney and liver cryoablation procedures performed between January 2015 and February 2016. Finally, we created a web-based cryoablation planning tool, which uses the cryoablation simulation database to look up the probe spacing and ablation time that produce the desired ice ball shape and dimensions. The average absolute error between the simulated and experimentally measured ice balls was 1 mm in gel experiments and 4 mm in clinical cryoablation cases. The simulations accurately predicted the degree of synergy in multiple-probe ablations. The cryoablation simulation database covers a wide range of ice ball sizes and shapes up to 9.8 cm. Cryoablation simulations accurately predict ice ball size in multiple-probe ablations. The cryoablation database can be used to plan ablation procedures: given the desired ice ball size and shape, it will find the number and type of probes, the probe configuration and spacing, and the ablation time required.
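
For reference, the Pennes bioheat equation mentioned above, in its standard textbook form (the symbols here are the conventional ones, not taken from the paper):

```latex
\rho c \,\frac{\partial T}{\partial t}
  \;=\; \nabla \cdot \left( k \,\nabla T \right)
  \;+\; \omega_b \,\rho_b c_b \left( T_a - T \right)
  \;+\; Q_m
```

where \(\rho\), \(c\), and \(k\) are the tissue density, specific heat, and thermal conductivity, \(\omega_b\) the blood perfusion rate, \(\rho_b c_b\) the volumetric heat capacity of blood, \(T_a\) the arterial blood temperature, and \(Q_m\) the metabolic heat generation. In cryoablation modeling, the probe acts as a strong local heat sink driving \(T\) below freezing.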

  9. KID Project: an internet-based digital video atlas of capsule endoscopy for research purposes

    PubMed Central

    Koulaouzidis, Anastasios; Iakovidis, Dimitris K.; Yung, Diana E.; Rondonotti, Emanuele; Kopylov, Uri; Plevris, John N.; Toth, Ervin; Eliakim, Abraham; Wurm Johansson, Gabrielle; Marlicz, Wojciech; Mavrogenis, Georgios; Nemeth, Artur; Thorlacius, Henrik; Tontini, Gian Eugenio

    2017-01-01

    Background and aims  Capsule endoscopy (CE) has revolutionized small-bowel (SB) investigation. Computational methods can enhance diagnostic yield (DY); however, incorporating machine learning algorithms (MLAs) into CE reading is difficult as large amounts of image annotations are required for training. Current databases lack graphic annotations of pathologies and cannot be used. A novel database, KID, aims to provide a reference for research and development of medical decision support systems (MDSS) for CE. Methods  Open-source software was used for the KID database. Clinicians contribute anonymized, annotated CE images and videos. Graphic annotations are supported by an open-access annotation tool (Ratsnake). We detail an experiment based on the KID database, examining differences in SB lesion measurement between human readers and a MLA. The Jaccard Index (JI) was used to evaluate similarity between annotations by the MLA and human readers. Results  The MLA performed best in measuring lymphangiectasias with a JI of 81 ± 6 %. The other lesion types were: angioectasias (JI 64 ± 11 %), aphthae (JI 64 ± 8 %), chylous cysts (JI 70 ± 14 %), polypoid lesions (JI 75 ± 21 %), and ulcers (JI 56 ± 9 %). Conclusion  MLA can perform as well as human readers in the measurement of SB angioectasias in white light (WL). Automated lesion measurement is therefore feasible. KID is currently the only open-source CE database developed specifically to aid development of MDSS. Our experiment demonstrates this potential. PMID:28580415
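
The Jaccard index used above to compare MLA and reader annotations is simple to state: the size of the overlap of two regions divided by the size of their union. A minimal sketch over pixel-coordinate sets (illustrative; the KID evaluation pipeline itself is not detailed in this abstract):

```python
def jaccard_index(a, b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two annotation
    regions given as collections of (row, col) pixels.
    Defined as 1.0 when both regions are empty."""
    a, b = set(a), set(b)
    union = a | b
    return 1.0 if not union else len(a & b) / len(union)
```

A value of 0.81 (the lymphangiectasia result above) thus means the machine and human annotations overlapped on 81% of their combined area.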

  10. Experiments and Analysis on a Computer Interface to an Information-Retrieval Network.

    ERIC Educational Resources Information Center

    Marcus, Richard S.; Reintjes, J. Francis

    A primary goal of this project was to develop an interface that would provide direct access for inexperienced users to existing online bibliographic information retrieval networks. The experiment tested the concept of a virtual-system mode of access to a network of heterogeneous interactive retrieval systems and databases. An experimental…

  11. Chinese International Students' Experiences in American Higher Education Institutes: A Critical Review of the Literature

    ERIC Educational Resources Information Center

    Zhang-Wu, Qianqian

    2018-01-01

    Using database searches in ProQuest Sociology, Education Research Complete, ERIC, and Google Scholar, this landscape literature review provides research synthesis and analysis on research designs, underlying assumptions and findings of 21 recent peer-reviewed scholarly articles focusing on Chinese international students' experiences in American…

  12. The VERITAS Facility: A Virtual Environment Platform for Human Performance Research

    DTIC Science & Technology

    2016-01-01

    The IAE supports the audio environment that users experience during the course of an experiment. This includes environmental sounds, user-to… In the future, we are looking towards a database-based system that would use MySQL or an equivalent product to store the large data sets and provide standard…

  13. Age-of-Acquisition Effects in Visual Word Recognition: Evidence from Expert Vocabularies

    ERIC Educational Resources Information Center

    Stadthagen-Gonzalez, Hans; Bowers, Jeffrey S.; Damian, Markus F.

    2004-01-01

    Three experiments assessed the contributions of age-of-acquisition (AoA) and frequency to visual word recognition. Three databases were created from electronic journals in chemistry, psychology and geology in order to identify technical words that are extremely frequent in each discipline but acquired late in life. In Experiment 1, psychologists…

  14. In-Situ Visualization Experiments with ParaView Cinema in RAGE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kares, Robert John

    2015-10-15

    A previous paper described some numerical experiments performed using the ParaView/Catalyst in-situ visualization infrastructure deployed in the Los Alamos RAGE radiation-hydrodynamics code to produce images from a running large-scale 3D ICF simulation. One challenge of the in-situ approach apparent in these experiments was the difficulty of choosing parameters like isosurface values for the visualizations to be produced from the running simulation without the benefit of prior knowledge of the simulation results, and the resultant cost of recomputing in-situ generated images when parameters are chosen suboptimally. A proposed method of addressing this difficulty is to simply render multiple images at runtime with a range of possible parameter values to produce a large database of images, and to provide the user with a tool for managing the resulting database of imagery. Recently, ParaView/Catalyst has been extended to include such a capability via the so-called Cinema framework. Here I describe some initial experiments with the first delivery of Cinema and make some recommendations for future extensions of Cinema's capabilities.

  15. Research by retrieving experiments.

    PubMed

    Blagosklonny, Mikhail V

    2007-06-01

    Newton did not discover that apples fall: the information was available prior to his gravitational hypothesis. Hypotheses can be tested not only by performing experiments but also by retrieving experiments from the literature (via PubMed, for example). Here I show how disconnected facts from known data, if properly connected, can generate novel predictions testable in turn by other published data. With examples from cell cycle, aging, cancer and other fields of biology and medicine, I discuss how new knowledge was and will be derived from old information. Millions of experiments have been already performed to test unrelated hypotheses and the results of those experiments are available to 'test' your hypotheses too. But most data (99% by some estimates) remain unpublished, because they were negative, seemed of low priority, or did not fit the story. Yet for other investigators those data may be valuable. The well-known story of Franklin and Watson is a case in point. By making preliminary data widely available, 'data-owners' will benefit most, receiving the credit for otherwise unused results. If posted (pre-published) on searchable databases, these data may fuel thousands of projects without the need for repetitive experiments. Enormous 'pre-published' databases coupled with Google-like search engines can change the structure of scientific research, and shrinking funding will make this inevitable.

  16. The Experience of Caregivers Living with Cancer Patients: A Systematic Review and Meta-Synthesis

    PubMed Central

    LeSeure, Peeranuch; Chongkham-ang, Supaporn

    2015-01-01

    The objectives of this meta-synthesis were to: (1) explore the experience of caregivers who were caring for cancer patients, including their perceptions and responses to the situation; and (2) describe the context and the phenomena relevant to the experience. Five databases were used: CINAHL, MEDLINE, Academic Search, Science Direct, and a Thai database known as the Thai Library Integrated System (ThaiLIS). Three sets of the context of the experience and the phenomena relevant to the experience were described. The contexts were (1) having a hard time dealing with emotional devastation; (2) knowing that the caregiving job was laborious; and (3) knowing that I was not alone. The phenomena showed the progress of the caregivers’ thoughts and actions. A general phenomenon of the experience—balancing my emotion—applied to most of the caregivers, whereas more specific phenomena—keeping life as normal as possible and lifting life above the illness—were experienced by fewer caregivers. This review added a more thorough explanation of the issues involved in caregiving for cancer patients, and a more comprehensive description of the experience of caregiving was provided. The findings of this review can be used to guide clinical practice and policy formation in cancer patient care. PMID:26610573

  17. A carcinogenic potency database of the standardized results of animal bioassays

    PubMed Central

    Gold, Lois Swirsky; Sawyer, Charles B.; Magaw, Renae; Backman, Georganne M.; De Veciana, Margarita; Levinson, Robert; Hooper, N. Kim; Havender, William R.; Bernstein, Leslie; Peto, Richard; Pike, Malcolm C.; Ames, Bruce N.

    1984-01-01

    The preceding paper described our numerical index of carcinogenic potency, the TD50, and the statistical procedures adopted for estimating it from experimental data. This paper presents the Carcinogenic Potency Database, which includes results of about 3000 long-term, chronic experiments on 770 test compounds. Part II is a discussion of the sources of our data, the rationale for the inclusion of particular experiments and particular target sites, and the conventions adopted in summarizing the literature. Part III is a guide to the plot of results presented in Part IV. A number of appendices are provided to facilitate use of the database. The plot includes information about chronic cancer tests in mammals, such as dose and other aspects of experimental protocol, histopathology and tumor incidence, TD50 and its statistical significance, dose response, author's opinion, and literature reference. The plot readily permits comparisons of carcinogenic potency and many other aspects of cancer tests; it also provides quantitative information about negative tests. The range of carcinogenic potency is over 10 million-fold. PMID:6525996
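
As a hedged illustration of what a TD50 expresses (assuming the simple one-hit dose-response model; the companion paper cited above defines the actual estimation procedure, which is more involved): if the lifetime tumor probability at chronic dose rate d is P(d) = 1 − exp(−βd), then the TD50 is the dose rate at which half the animals develop tumors.

```python
import math

def td50_one_hit(beta):
    """TD50 under a one-hit model P(d) = 1 - exp(-beta * d):
    solving P(d) = 0.5 gives d = ln(2) / beta."""
    return math.log(2) / beta

def tumor_probability(dose, beta):
    """Lifetime tumor probability at a given chronic dose rate."""
    return 1.0 - math.exp(-beta * dose)
```

A small beta (weak carcinogen) gives a large TD50, which is how a single number can span the 10-million-fold potency range noted above.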

  18. On Applicability of Tunable Filter Bank Based Feature for Ear Biometrics: A Study from Constrained to Unconstrained.

    PubMed

    Chowdhury, Debbrota Paul; Bakshi, Sambit; Guo, Guodong; Sa, Pankaj Kumar

    2017-11-27

    In this paper, an overall framework is presented for person verification using ear biometrics, with a tunable filter bank as the local feature extractor. The tunable filter bank, based on a half-band polynomial of 14th order, extracts distinct features from ear images while maintaining its frequency selectivity property. To demonstrate the applicability of the tunable filter bank to ear biometrics, recognition tests have been performed on the available constrained databases AMI, WPUT, and IITD, and on the unconstrained database UERC. Experiments have been conducted applying the tunable-filter-based feature extractor to subparts of the ear, with four and six subdivisions of the ear image. Analyzing the experimental results, it has been found that the tunable filter succeeds in distinguishing ear features on par with the state-of-the-art features used for ear recognition. Accuracies of 70.58%, 67.01%, 81.98%, and 57.75% have been achieved on the AMI, WPUT, IITD, and UERC databases, respectively, using the Canberra distance as the underlying measure of separation. These performances indicate that the tunable filter is a candidate for recognizing humans from ear images.
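
The Canberra distance used as the separation measure above weights each coordinate difference by the coordinates' combined magnitude, making it sensitive to differences between small feature values. A minimal sketch following the usual definition (terms where both coordinates are zero are skipped, as in SciPy's implementation):

```python
def canberra_distance(x, y):
    """Canberra distance: sum over i of |x_i - y_i| / (|x_i| + |y_i|),
    skipping coordinates where both values are zero."""
    total = 0.0
    for xi, yi in zip(x, y):
        denom = abs(xi) + abs(yi)
        if denom:
            total += abs(xi - yi) / denom
    return total
```

Each term lies in [0, 1], so feature vectors are compared coordinate by coordinate on a common scale regardless of their absolute magnitudes.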

  19. Aerodynamic Database Development for the Hyper-X Airframe Integrated Scramjet Propulsion Experiments

    NASA Technical Reports Server (NTRS)

    Engelund, Walter C.; Holland, Scott D.; Cockrell, Charles E., Jr.; Bittner, Robert D.

    2000-01-01

    This paper provides an overview of the activities associated with the aerodynamic database being developed in support of NASA's Hyper-X scramjet flight experiments. Three flight tests are planned as part of the Hyper-X program. Each will utilize a small, nonrecoverable research vehicle with an airframe-integrated scramjet propulsion engine. The research vehicles will be individually rocket-boosted to the scramjet engine test points at Mach 7 and Mach 10. The research vehicles will then separate from the first-stage booster vehicle, and the scramjet engine test will be conducted prior to the terminal descent phase of the flight. An overview is provided of the activities associated with the development of the Hyper-X aerodynamic database, including wind tunnel test activities and parallel CFD analysis efforts for all phases of the Hyper-X flight tests. A brief summary of the Hyper-X research vehicle aerodynamic characteristics is provided, including the direct and indirect effects of airframe-integrated scramjet propulsion system operation on the basic airframe stability and control characteristics. Brief comments on the planned post-flight data analysis efforts are also included.

  20. Database and interactive monitoring system for the photonics and electronics of RPC Muon Trigger in CMS experiment

    NASA Astrophysics Data System (ADS)

    Wiacek, Daniel; Kudla, Ignacy M.; Pozniak, Krzysztof T.; Bunkowski, Karol

    2005-02-01

    The main task of the RPC (Resistive Plate Chamber) Muon Trigger monitoring system designed for the CMS (Compact Muon Solenoid) experiment (at the LHC at CERN, Geneva) is the visualization of data describing the structure of the electronic trigger system (e.g. geometry and imagery) and its processes, together with the automatic generation of files with VHDL source code used for programming the FPGA matrices. In the near future, the system will enable the analysis of the condition, operation, and efficiency of individual Muon Trigger elements, the registration of information about Muon Trigger devices, and the presentation of previously obtained results in an interactive presentation layer. A broad variety of database and programming concepts for the design of the Muon Trigger monitoring system is presented in this article. The structure and architecture of the system and its principle of operation are described. One of the ideas behind this system is to use object-oriented programming and design techniques to describe real electronic systems through abstract object models stored in a database, and to implement these models in Java.

  1. Feeling the future: A meta-analysis of 90 experiments on the anomalous anticipation of random future events

    PubMed Central

    Bem, Daryl; Tressoldi, Patrizio; Rabeyron, Thomas; Duggan, Michael

    2016-01-01

In 2011, one of the authors (DJB) published a report of nine experiments in the Journal of Personality and Social Psychology purporting to demonstrate that an individual’s cognitive and affective responses can be influenced by randomly selected stimulus events that do not occur until after his or her responses have already been made and recorded, a generalized variant of the phenomenon traditionally denoted by the term precognition. To encourage replications, all materials needed to conduct them were made available on request. We here report a meta-analysis of 90 experiments from 33 laboratories in 14 countries which yielded an overall effect greater than 6 sigma, z = 6.40, p = 1.2 × 10^-10, with an effect size (Hedges’ g) of 0.09. A Bayesian analysis yielded a Bayes Factor of 5.1 × 10^9, greatly exceeding the criterion value of 100 for “decisive evidence” in support of the experimental hypothesis. When DJB’s original experiments are excluded, the combined effect size for replications by independent investigators is 0.06, z = 4.16, p = 1.1 × 10^-5, and the BF value is 3,853, again exceeding the criterion for “decisive evidence.” The number of potentially unretrieved experiments required to reduce the overall effect size of the complete database to a trivial value of 0.01 is 544, and seven of eight additional statistical tests support the conclusion that the database is not significantly compromised by either selection bias or by intense “p-hacking” (the selective suppression of findings or analyses that failed to yield statistical significance). P-curve analysis, a recently introduced statistical technique, estimates the true effect size of the experiments to be 0.20 for the complete database and 0.24 for the independent replications, virtually identical to the effect size of DJB’s original experiments (0.22) and the closely related “presentiment” experiments (0.21). We discuss the controversial status of precognition and other anomalous effects collectively known as psi. PMID:26834996
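The inverse-variance pooling behind a combined effect size of this kind can be sketched generically. The per-study Hedges' g values and variances below are invented for illustration; they are not the meta-analysis data:

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance weighted combination of per-study effect sizes."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * g for w, g in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled effect
    z = pooled / se                     # z statistic for the pooled effect
    return pooled, se, z

# Hypothetical per-study effect sizes and variances (not the paper's data)
g_values = [0.12, 0.05, 0.10, 0.08]
variances = [0.004, 0.006, 0.005, 0.0045]
pooled, se, z = fixed_effect_meta(g_values, variances)
```

A random-effects model would additionally estimate between-study heterogeneity; the fixed-effect form above is only the simplest case.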

  2. Building a Database for a Quantitative Model

    NASA Technical Reports Server (NTRS)

    Kahn, C. Joseph; Kleinhammer, Roger

    2014-01-01

A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is its use of failure-estimate data. Models can easily contain a thousand Basic Events relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, hand entry also fails to link the Basic Events to their data sources. The best way to organize large amounts of data on a computer is with a database, but a model does not require a large, enterprise-level database with dedicated developers and administrators; a database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations applied to the data before it enters the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and the database links any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
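The linking idea described above can be sketched as follows. The source keys, failure rates, and the duty-cycle stressing factor are all hypothetical illustrations, not fields from the actual model:

```python
# Each Basic Event carries a metadata key that resolves to a data source
# record; a manipulation (here a hypothetical duty-cycle stressing factor)
# is applied on lookup.
data_sources = {
    "DS-001": {"citation": "Handbook A", "failure_rate": 1.0e-6},  # per hour
    "DS-002": {"citation": "Test report B", "failure_rate": 5.0e-7},
}

basic_events = [
    {"id": "BE-PUMP-01", "source_key": "DS-001", "duty_cycle": 0.25},
    {"id": "BE-VALVE-07", "source_key": "DS-002", "duty_cycle": 1.0},
]

def effective_rate(event):
    """Resolve the linked source and apply the stressing manipulation."""
    base = data_sources[event["source_key"]]["failure_rate"]
    return base * event["duty_cycle"]

rates = {e["id"]: effective_rate(e) for e in basic_events}
```

The point of the shared key is traceability: from any Basic Event one can recover both the cited source and every calculation applied to it.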

  3. Current experiments in elementary particle physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Wohl, C.G.; Armstrong, F.E.; Oyanagi, Y.; Dodder, D.C.

    1987-03-01

    This report contains summaries of 720 recent and current experiments in elementary particle physics (experiments that finished taking data before 1980 are excluded). Included are experiments at Brookhaven, CERN, CESR, DESY, Fermilab, Moscow Institute of Theoretical and Experimental Physics, Tokyo Institute of Nuclear Studies, KEK, LAMPF, Leningrad Nuclear Physics Institute, Saclay, Serpukhov, SIN, SLAC, and TRIUMF, and also experiments on proton decay. Instructions are given for searching online the computer database (maintained under the SLAC/SPIRES system) that contains the summaries. Properties of the fixed-target beams at most of the laboratories are summarized.

  4. FARME DB: a functional antibiotic resistance element database

    PubMed Central

    Wallace, James C.; Port, Jesse A.; Smith, Marissa N.; Faustman, Elaine M.

    2017-01-01

Antibiotic resistance (AR) is a major global public health threat but few resources exist that catalog AR genes outside of a clinical context. Current AR sequence databases are assembled almost exclusively from genomic sequences derived from clinical bacterial isolates and thus do not include many microbial sequences derived from environmental samples that confer resistance in functional metagenomic studies. These environmental metagenomic sequences often show little or no similarity to AR sequences from clinical isolates using standard classification criteria. In addition, existing AR databases provide no information about flanking sequences containing regulatory or mobile genetic elements. To help address this issue, we created an annotated database of DNA and protein sequences derived exclusively from environmental metagenomic sequences showing AR in laboratory experiments. Our Functional Antibiotic Resistant Metagenomic Element (FARME) database is a compilation of publicly available DNA sequences and predicted protein sequences conferring AR as well as regulatory elements, mobile genetic elements and predicted proteins flanking antibiotic resistance genes. FARME is the first database to focus on functional metagenomic AR gene elements and provides a resource to better understand AR in the 99% of bacteria which cannot be cultured and the relationship between environmental AR sequences and antibiotic resistance genes derived from cultured isolates. Database URL: http://staff.washington.edu/jwallace/farme PMID:28077567

  5. TFBSshape: a motif database for DNA shape features of transcription factor binding sites.

    PubMed

    Yang, Lin; Zhou, Tianyin; Dror, Iris; Mathelier, Anthony; Wasserman, Wyeth W; Gordân, Raluca; Rohs, Remo

    2014-01-01

    Transcription factor binding sites (TFBSs) are most commonly characterized by the nucleotide preferences at each position of the DNA target. Whereas these sequence motifs are quite accurate descriptions of DNA binding specificities of transcription factors (TFs), proteins recognize DNA as a three-dimensional object. DNA structural features refine the description of TF binding specificities and provide mechanistic insights into protein-DNA recognition. Existing motif databases contain extensive nucleotide sequences identified in binding experiments based on their selection by a TF. To utilize DNA shape information when analysing the DNA binding specificities of TFs, we developed a new tool, the TFBSshape database (available at http://rohslab.cmb.usc.edu/TFBSshape/), for calculating DNA structural features from nucleotide sequences provided by motif databases. The TFBSshape database can be used to generate heat maps and quantitative data for DNA structural features (i.e., minor groove width, roll, propeller twist and helix twist) for 739 TF datasets from 23 different species derived from the motif databases JASPAR and UniPROBE. As demonstrated for the basic helix-loop-helix and homeodomain TF families, our TFBSshape database can be used to compare, qualitatively and quantitatively, the DNA binding specificities of closely related TFs and, thus, uncover differential DNA binding specificities that are not apparent from nucleotide sequence alone.

  6. TFBSshape: a motif database for DNA shape features of transcription factor binding sites

    PubMed Central

    Yang, Lin; Zhou, Tianyin; Dror, Iris; Mathelier, Anthony; Wasserman, Wyeth W.; Gordân, Raluca; Rohs, Remo

    2014-01-01

    Transcription factor binding sites (TFBSs) are most commonly characterized by the nucleotide preferences at each position of the DNA target. Whereas these sequence motifs are quite accurate descriptions of DNA binding specificities of transcription factors (TFs), proteins recognize DNA as a three-dimensional object. DNA structural features refine the description of TF binding specificities and provide mechanistic insights into protein–DNA recognition. Existing motif databases contain extensive nucleotide sequences identified in binding experiments based on their selection by a TF. To utilize DNA shape information when analysing the DNA binding specificities of TFs, we developed a new tool, the TFBSshape database (available at http://rohslab.cmb.usc.edu/TFBSshape/), for calculating DNA structural features from nucleotide sequences provided by motif databases. The TFBSshape database can be used to generate heat maps and quantitative data for DNA structural features (i.e., minor groove width, roll, propeller twist and helix twist) for 739 TF datasets from 23 different species derived from the motif databases JASPAR and UniPROBE. As demonstrated for the basic helix-loop-helix and homeodomain TF families, our TFBSshape database can be used to compare, qualitatively and quantitatively, the DNA binding specificities of closely related TFs and, thus, uncover differential DNA binding specificities that are not apparent from nucleotide sequence alone. PMID:24214955

  7. New tools and methods for direct programmatic access to the dbSNP relational database

    PubMed Central

    Saccone, Scott F.; Quan, Jiaxi; Mehta, Gaurang; Bolze, Raphael; Thomas, Prasanth; Deelman, Ewa; Tischfield, Jay A.; Rice, John P.

    2011-01-01

    Genome-wide association studies often incorporate information from public biological databases in order to provide a biological reference for interpreting the results. The dbSNP database is an extensive source of information on single nucleotide polymorphisms (SNPs) for many different organisms, including humans. We have developed free software that will download and install a local MySQL implementation of the dbSNP relational database for a specified organism. We have also designed a system for classifying dbSNP tables in terms of common tasks we wish to accomplish using the database. For each task we have designed a small set of custom tables that facilitate task-related queries and provide entity-relationship diagrams for each task composed from the relevant dbSNP tables. In order to expose these concepts and methods to a wider audience we have developed web tools for querying the database and browsing documentation on the tables and columns to clarify the relevant relational structure. All web tools and software are freely available to the public at http://cgsmd.isi.edu/dbsnpq. Resources such as these for programmatically querying biological databases are essential for viably integrating biological information into genetic association experiments on a genome-wide scale. PMID:21037260
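A task-related query against such a local relational SNP store might look like the following sketch. The actual resource installs a MySQL copy of dbSNP; here Python's built-in sqlite3 stands in, and the simplified `snp` table is an assumption for illustration, not the real dbSNP schema:

```python
import sqlite3

# In-memory stand-in for a local relational SNP database with a
# hypothetical, heavily simplified table of SNP positions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE snp (rs_id TEXT, chrom TEXT, pos INTEGER)")
conn.executemany("INSERT INTO snp VALUES (?, ?, ?)", [
    ("rs123", "1", 1000), ("rs456", "2", 2000), ("rs789", "1", 1500),
])

# Task-style query: all SNPs in a region of chromosome 1
rows = conn.execute(
    "SELECT rs_id FROM snp WHERE chrom = ? AND pos BETWEEN ? AND ?",
    ("1", 900, 1600),
).fetchall()
```

Parameterized queries like these are exactly the kind of programmatic access that is hard to do against a web front end but trivial against a local relational copy.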

  8. A user-friendly phytoremediation database: creating the searchable database, the users, and the broader implications.

    PubMed

    Famulari, Stevie; Witz, Kyla

    2015-01-01

Designers, students, teachers, gardeners, farmers, landscape architects, architects, engineers, homeowners, and others have uses for the practice of phytoremediation. This research looks at the creation of a phytoremediation database designed for ease of use by non-scientific users as well as by students in an educational setting ( http://www.steviefamulari.net/phytoremediation ). During 2012, Environmental Artist & Professor of Landscape Architecture Stevie Famulari, with assistance from Kyla Witz, a landscape architecture student, created an online searchable database designed for high public accessibility. The database is a record of research on plant species that aid in the uptake of contaminants, including metals, organic materials, biodiesels & oils, and radionuclides. The database consists of multiple interconnected indexes categorized by common and scientific plant name, contaminant name, and contaminant type. It includes photographs, hardiness zones, specific plant qualities, full citations to the original research, and other relevant information intended to help those designing with phytoremediation search for potential plants that may address their site's needs. The objective of the terminology section is to remove uncertainty for inexperienced users and to clarify terms for a more user-friendly experience. Implications of the work, including education and ease of browsing, as well as use of the database in teaching, are discussed.
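The interconnected-index idea can be illustrated with a minimal sketch; the plant and contaminant entries below are generic textbook examples, not records from the database itself:

```python
# The same records are reachable through more than one index: here an index
# keyed by contaminant is built from a flat record list (entries invented).
records = [
    {"plant": "Helianthus annuus", "common": "sunflower",
     "contaminant": "lead", "type": "metal"},
    {"plant": "Brassica juncea", "common": "Indian mustard",
     "contaminant": "cadmium", "type": "metal"},
]

by_contaminant = {}
for r in records:
    by_contaminant.setdefault(r["contaminant"], []).append(r)

hits = by_contaminant.get("lead", [])
```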

  9. VaProS: a database-integration approach for protein/genome information retrieval.

    PubMed

    Gojobori, Takashi; Ikeo, Kazuho; Katayama, Yukie; Kawabata, Takeshi; Kinjo, Akira R; Kinoshita, Kengo; Kwon, Yeondae; Migita, Ohsuke; Mizutani, Hisashi; Muraoka, Masafumi; Nagata, Koji; Omori, Satoshi; Sugawara, Hideaki; Yamada, Daichi; Yura, Kei

    2016-12-01

Life science research now heavily relies on all sorts of databases for genome sequences, transcription, protein three-dimensional (3D) structures, protein-protein interactions, phenotypes and so forth. The knowledge accumulated by all the omics research is so vast that a computer-aided search of data is now a prerequisite for starting a new study. In addition, a combinatory search throughout these databases has a chance to extract new ideas and new hypotheses that can be examined by wet-lab experiments. By virtually integrating the related databases on the Internet, we have built a new web application that helps life science researchers retrieve the experts' knowledge stored in the databases and build new hypotheses about the research target. This web application, named VaProS, puts stress on the interconnection between the functional information of genome sequences and protein 3D structures, such as the structural effect of a gene mutation. In this manuscript, we present the notion of VaProS, the databases and tools that can be accessed without any knowledge of database locations and data formats, and the power of search exemplified in a quest for the molecular mechanisms of lysosomal storage disease. VaProS can be freely accessed at http://p4d-info.nig.ac.jp/vapros/.

  10. The Danish Fracture Database can monitor quality of fracture-related surgery, surgeons' experience level and extent of supervision.

    PubMed

    Andersen, Morten Jon; Gromov, Kiril; Brix, Michael; Troelsen, Anders

    2014-06-01

The importance of supervision and of surgeons' experience level for patient outcome has been demonstrated in both hip fracture and arthroplasty surgery. The aim of this study was to describe the surgeons' experience level and the extent of supervision for: 1) fracture-related surgery in general; 2) the three most frequent primary operations and reoperations; and 3) primary operations during and outside regular working hours. A total of 9,767 surgical procedures were identified from the Danish Fracture Database (DFDB). Procedures were grouped based on the surgeon's level of experience, extent of supervision, type (primary, planned secondary or reoperation), classification (AO Müller), and whether they were performed during or outside regular hours. Interns and junior residents combined performed 46% of all procedures. A total of 90% of surgeries by interns were performed under supervision, whereas 32% of operations by junior residents were unsupervised. Supervision was absent in 14-16% and 22-33% of the three most frequent primary procedures and reoperations when performed by interns and junior residents, respectively. The proportion of unsupervised procedures by junior residents grew from 30% during regular hours to 40% outside them (p < 0.001). Interns and junior residents together performed almost half of all fracture-related surgery. The extent of supervision was generally high; however, a third of the primary procedures performed by junior residents were unsupervised. The extent of unsupervised surgery performed by junior residents was significantly higher outside regular hours. Funding: not relevant. The Danish Fracture Database ("Dansk Frakturdatabase") was approved by the Danish Data Protection Agency ID: 01321.

  11. CVcat: An interactive database on cataclysmic variables

    NASA Astrophysics Data System (ADS)

    Kube, J.; Gänsicke, B. T.; Euchner, F.; Hoffmann, B.

    2003-06-01

CVcat is a database of published data on cataclysmic variables and related objects. Unlike existing online sources, it allows users to add data to the catalogue. The concept of an "open catalogue" approach is reviewed together with the experience from one year of public usage of CVcat. New concepts to be included in the upcoming AstroCat framework and the next CVcat implementation are presented. CVcat can be found at http://www.cvcat.org.

  12. Data warehousing in molecular biology.

    PubMed

    Schönbach, C; Kowalski-Saunders, P; Brusic, V

    2000-05-01

    In the business and healthcare sectors data warehousing has provided effective solutions for information usage and knowledge discovery from databases. However, data warehousing applications in the biological research and development (R&D) sector are lagging far behind. The fuzziness and complexity of biological data represent a major challenge in data warehousing for molecular biology. By combining experiences in other domains with our findings from building a model database, we have defined the requirements for data warehousing in molecular biology.

  13. The Tromso Infant Faces Database (TIF): Development, Validation and Application to Assess Parenting Experience on Clarity and Intensity Ratings.

    PubMed

    Maack, Jana K; Bohne, Agnes; Nordahl, Dag; Livsdatter, Lina; Lindahl, Åsne A W; Øvervoll, Morten; Wang, Catharina E A; Pfuhl, Gerit

    2017-01-01

Newborns and infants depend heavily on successfully communicating their needs, e.g., through crying and facial expressions. Although there is a growing interest in the mechanisms of, and possible influences on, the recognition of facial expressions in infants, no validated database of emotional infant faces has existed until now. In the present article we introduce a standardized and freely available face database containing Caucasian infant face images from 18 infants 4 to 12 months old. The development and validation of the Tromsø Infant Faces (TIF) database is presented in Study 1. Over 700 adults categorized the photographs into seven emotion categories (happy, sad, disgusted, angry, afraid, surprised, neutral) and rated their intensity, clarity and valence. In order to examine the relevance of TIF, we then present its first application in Study 2, investigating differences in emotion recognition across different stages of parenthood. We found a small gender effect, with women giving higher intensity and clarity ratings than men. Moreover, parents of young children rated the images as clearer than all the other groups, and parents rated "neutral" expressions as clearer and more intense. Our results suggest that caretaking experience provides an implicit advantage in the processing of emotional expressions in infant faces, especially for the more difficult, ambiguous expressions.

  14. An object model and database for functional genomics.

    PubMed

    Jones, Andrew; Hunt, Ela; Wastling, Jonathan M; Pizarro, Angel; Stoeckert, Christian J

    2004-07-10

Large-scale functional genomics analysis is now feasible and presents significant challenges in data analysis, storage and querying. Data standards are required to enable the development of public data repositories and to improve data sharing. There is an established data format for microarrays (microarray gene expression markup language, MAGE-ML) and a draft standard for proteomics (PEDRo). We believe that all types of functional genomics experiments should be annotated in a consistent manner, and we hope to open up new ways of comparing multiple datasets used in functional genomics. We have created a functional genomics experiment object model (FGE-OM), developed from the microarray model, MAGE-OM, and two models for proteomics, PEDRo and our own model (Gla-PSI, the Glasgow Proposal for the Proteomics Standards Initiative). FGE-OM comprises three namespaces representing (i) the parts of the model common to all functional genomics experiments; (ii) microarray-specific components; and (iii) proteomics-specific components. We believe that FGE-OM should initiate discussion about the contents and structure of the next version of MAGE and the future of proteomics standards. A prototype database called RNA And Protein Abundance Database (RAPAD), based on FGE-OM, has been implemented and populated with data from microbial pathogenesis. FGE-OM and the RAPAD schema are available from http://www.gusdb.org/fge.html, along with a set of more detailed diagrams. RAPAD can be accessed by registration at the site.
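The three-namespace layout described for FGE-OM might be sketched, very loosely, as a small class hierarchy: a common base plus technology-specific extensions. All class and attribute names here are hypothetical, not the model's actual definitions:

```python
# Common namespace: parts shared by all functional genomics experiments
class FunctionalGenomicsExperiment:
    def __init__(self, protocol, samples):
        self.protocol = protocol
        self.samples = samples

# Microarray-specific namespace
class MicroarrayExperiment(FunctionalGenomicsExperiment):
    def __init__(self, protocol, samples, array_design):
        super().__init__(protocol, samples)
        self.array_design = array_design

# Proteomics-specific namespace
class ProteomicsExperiment(FunctionalGenomicsExperiment):
    def __init__(self, protocol, samples, instrument):
        super().__init__(protocol, samples)
        self.instrument = instrument

exp = MicroarrayExperiment("two-colour", ["s1", "s2"], "chipA")
```

The benefit of the shared base is that queries over the common parts (protocols, samples) work identically for microarray and proteomics records.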

  15. An emerging cyberinfrastructure for biodefense pathogen and pathogen–host data

    PubMed Central

    Zhang, C.; Crasta, O.; Cammer, S.; Will, R.; Kenyon, R.; Sullivan, D.; Yu, Q.; Sun, W.; Jha, R.; Liu, D.; Xue, T.; Zhang, Y.; Moore, M.; McGarvey, P.; Huang, H.; Chen, Y.; Zhang, J.; Mazumder, R.; Wu, C.; Sobral, B.

    2008-01-01

The NIAID-funded Biodefense Proteomics Resource Center (RC) provides storage, dissemination, visualization and analysis capabilities for the experimental data deposited by seven Proteomics Research Centers (PRCs). The data and their publication support researchers working to discover candidates for the next generation of vaccines, therapeutics and diagnostics against NIAID's Category A, B and C priority pathogens. The data include transcriptional profiles, protein profiles, protein structural data and host–pathogen protein interactions, in the context of the pathogen life cycle in vivo and in vitro. The database has stored and supported host or pathogen data derived from Bacillus, Brucella, Cryptosporidium, Salmonella, SARS, Toxoplasma, Vibrio and Yersinia, human tissue libraries, and mouse macrophages. These publicly available data cover diverse data types such as mass spectrometry, yeast two-hybrid (Y2H), gene expression profiles, X-ray and NMR determined protein structures and protein expression clones. The growing database covers over 23 000 unique genes/proteins from different experiments and organisms. All of the genes/proteins are annotated and integrated across experiments using UniProt Knowledgebase (UniProtKB) accession numbers. The web-interface for the database enables searching, querying and downloading at the level of experiment, group and individual gene(s)/protein(s) via UniProtKB accession numbers or protein function keywords. The system is accessible at http://www.proteomicsresource.org/. PMID:17984082
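Integrating records across experiment types via a shared UniProtKB accession number amounts to a keyed merge; the accessions and values below are invented for illustration:

```python
# Two hypothetical per-experiment result sets, keyed by accession
mass_spec = {"P00001": {"peptides": 12}, "P00002": {"peptides": 3}}
y2h = {"P00001": {"partners": ["P00003"]}}

# Merge per accession: a protein seen in several experiments gets the
# union of the annotations; one seen in a single experiment keeps only those.
integrated = {}
for acc in set(mass_spec) | set(y2h):
    integrated[acc] = {**mass_spec.get(acc, {}), **y2h.get(acc, {})}
```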

  16. Adsorption structures and energetics of molecules on metal surfaces: Bridging experiment and theory

    NASA Astrophysics Data System (ADS)

    Maurer, Reinhard J.; Ruiz, Victor G.; Camarillo-Cisneros, Javier; Liu, Wei; Ferri, Nicola; Reuter, Karsten; Tkatchenko, Alexandre

    2016-05-01

Adsorption geometry and stability of organic molecules on surfaces are key parameters that determine the observable properties and functions of hybrid inorganic/organic systems (HIOSs). Despite many recent advances in precise experimental characterization and improvements in first-principles electronic structure methods, reliable databases of structures and energetics for large adsorbed molecules are largely absent. In this review, we present such a database for a range of molecules adsorbed on metal single-crystal surfaces. The systems we analyze include noble-gas atoms, conjugated aromatic molecules, carbon nanostructures, and heteroaromatic compounds adsorbed on five different metal surfaces. The overall objective is to establish a diverse benchmark dataset that enables an assessment of current and future electronic structure methods, and motivates further experimental studies that provide ever more reliable data. Specifically, the benchmark structures and energetics from experiment are here compared with the recently developed van der Waals (vdW) inclusive density-functional theory (DFT) method, DFT + vdWsurf. In comparison to 23 adsorption heights and 17 adsorption energies from experiment we find a mean average deviation of 0.06 Å and 0.16 eV, respectively. This confirms the DFT + vdWsurf method as an accurate and efficient approach to treat HIOSs. A detailed discussion identifies remaining challenges to be addressed in future development of electronic structure methods, for which the here presented benchmark database may serve as an important reference.
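The benchmark comparison of experimental and computed adsorption heights reduces to a simple mean-deviation calculation, sketched here with invented values (not the review's dataset):

```python
# Hypothetical experimental vs. computed adsorption heights, in Angstrom
exp_heights = [2.25, 3.01, 2.86, 3.30]
dft_heights = [2.31, 2.97, 2.90, 3.24]

# Mean absolute deviation between experiment and calculation
mad = sum(abs(e, ) if False else abs(e - c)
          for e, c in zip(exp_heights, dft_heights)) / len(exp_heights)
mad = sum(abs(e - c) for e, c in zip(exp_heights, dft_heights)) / len(exp_heights)
```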

  17. The experimental nuclear reaction data (EXFOR): Extended computer database and Web retrieval system

    DOE PAGES

    Zerkin, V. V.; Pritychenko, B.

    2018-02-04

The EXchange FORmat (EXFOR) experimental nuclear reaction database and the associated Web interface provide access to the wealth of low- and intermediate-energy nuclear reaction physics data. This resource is based on numerical data sets and bibliographical information of ~22,000 experiments since the beginning of nuclear science. The principles of the computer database organization, its extended contents and Web applications development are described. New capabilities for the data sets uploads, renormalization, covariance matrix, and inverse reaction calculations are presented in this paper. The EXFOR database, updated monthly, provides an essential support for nuclear data evaluation, application development, and research activities. Finally, it is publicly available at the websites of the International Atomic Energy Agency Nuclear Data Section, http://www-nds.iaea.org/exfor, the U.S. National Nuclear Data Center, http://www.nndc.bnl.gov/exfor, and the mirror sites in China, India and Russian Federation.

  18. The experimental nuclear reaction data (EXFOR): Extended computer database and Web retrieval system

    NASA Astrophysics Data System (ADS)

    Zerkin, V. V.; Pritychenko, B.

    2018-04-01

    The EXchange FORmat (EXFOR) experimental nuclear reaction database and the associated Web interface provide access to the wealth of low- and intermediate-energy nuclear reaction physics data. This resource is based on numerical data sets and bibliographical information of ∼22,000 experiments since the beginning of nuclear science. The principles of the computer database organization, its extended contents and Web applications development are described. New capabilities for the data sets uploads, renormalization, covariance matrix, and inverse reaction calculations are presented. The EXFOR database, updated monthly, provides an essential support for nuclear data evaluation, application development, and research activities. It is publicly available at the websites of the International Atomic Energy Agency Nuclear Data Section, http://www-nds.iaea.org/exfor, the U.S. National Nuclear Data Center, http://www.nndc.bnl.gov/exfor, and the mirror sites in China, India and Russian Federation.

  19. Concepts and data model for a co-operative neurovascular database.

    PubMed

    Mansmann, U; Taylor, W; Porter, P; Bernarding, J; Jäger, H R; Lasjaunias, P; Terbrugge, K; Meisel, J

    2001-08-01

The clinical management of neurovascular diseases is very complex, owing to the chronic character of the diseases, a long history of symptoms and diverse treatments. If patients are to benefit from treatment, then treatment decisions have to rely on reliable and accurate knowledge of the natural history of the disease and the various treatments. Recent developments in statistical methodology and experience from electronic patient records are used to establish an information infrastructure based on a centralized register. A protocol for collecting data on neurovascular diseases is described, together with the technical and logistical aspects of implementing the database. The database is designed as a co-operative tool for audit and research available to co-operating centres. When a database is linked to systematic patient follow-up, it can be used to study prognosis. Careful analysis of patient outcome is valuable for decision-making.

  20. ESO telbib: Linking In and Reaching Out

    NASA Astrophysics Data System (ADS)

    Grothkopf, U.; Meakins, S.

    2015-04-01

    Measuring an observatory's research output is an integral part of its science operations. Like many other observatories, ESO tracks scholarly papers that use observational data from ESO facilities and uses state-of-the-art tools to create, maintain, and further develop the Telescope Bibliography database (telbib). While telbib started out as a stand-alone tool mostly used to compile lists of papers, it has by now developed into a multi-faceted, interlinked system. The core of the telbib database is links between scientific papers and observational data generated by the La Silla Paranal Observatory residing in the ESO archive. This functionality has also been deployed for ALMA data. In addition, telbib reaches out to several other systems, including ESO press releases, the NASA ADS Abstract Service, databases at the CDS Strasbourg, and impact scores at Altmetric.com. We illustrate these features to show how the interconnected telbib system enhances the content of the database as well as the user experience.

  1. NCBI-compliant genome submissions: tips and tricks to save time and money.

    PubMed

    Pirovano, Walter; Boetzer, Marten; Derks, Martijn F L; Smit, Sandra

    2017-03-01

Genome sequences nowadays play a central role in molecular biology and bioinformatics. These sequences are shared with the scientific community through sequence databases. The sequence repositories of the International Nucleotide Sequence Database Collaboration (INSDC, comprising GenBank, ENA and DDBJ) are the largest in the world. Preparing an annotated sequence in such a way that it will be accepted by the database is challenging because many validation criteria apply. In our opinion, it is an undesirable situation that researchers who want to submit their sequence need either a lot of experience or help from partners to get the job done. To save valuable time and money, we list a number of recommendations for people who want to submit an annotated genome to a sequence database, as well as for tool developers, who could help to ease the process.

  2. PRODORIC2: the bacterial gene regulation database in 2018

    PubMed Central

    Dudek, Christian-Alexander; Hartlich, Juliane; Brötje, David; Jahn, Dieter

    2018-01-01

Bacteria adapt to changes in their environment via differential gene expression mediated by DNA binding transcriptional regulators. The PRODORIC2 database hosts one of the largest collections of DNA binding sites for prokaryotic transcription factors. It is the result of the thoroughly redesigned PRODORIC database. PRODORIC2 is more intuitive and user-friendly. Besides significant technical improvements, the new update offers more than 1000 new transcription factor binding sites and 110 new position weight matrices for genome-wide pattern searches with the Virtual Footprint tool. Moreover, binding sites deduced from high-throughput experiments were included. Data for 6 new bacterial species including bacteria of the Rhodobacteraceae family were added. Finally, a comprehensive collection of sigma- and transcription factor data for the nosocomial pathogen Clostridium difficile is now part of the database. PRODORIC2 is publicly available at http://www.prodoric2.de. PMID:29136200
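A genome-wide pattern search with a position weight matrix, of the kind the Virtual Footprint tool performs, can be sketched generically. The log-odds values below are invented for illustration and are not PRODORIC2 data:

```python
import math

# Hypothetical log-odds scores for A, C, G, T at each motif position
pwm = [
    {"A": 1.2, "C": -0.5, "G": -0.8, "T": -1.0},
    {"A": -0.9, "C": 1.1, "G": -0.7, "T": -0.6},
    {"A": -1.0, "C": -0.4, "G": 1.3, "T": -0.9},
]

def best_hit(seq, pwm):
    """Slide the matrix along the sequence; return (best score, offset)."""
    best = (-math.inf, -1)
    for i in range(len(seq) - len(pwm) + 1):
        window = seq[i:i + len(pwm)]
        score = sum(col[base] for col, base in zip(pwm, window))
        best = max(best, (score, i))
    return best

score, offset = best_hit("TTACGTT", pwm)
```

In a real scan, hits above a score threshold (rather than only the maximum) would be reported, on both strands.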

  3. The experimental nuclear reaction data (EXFOR): Extended computer database and Web retrieval system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zerkin, V. V.; Pritychenko, B.

The EXchange FORmat (EXFOR) experimental nuclear reaction database and the associated Web interface provide access to the wealth of low- and intermediate-energy nuclear reaction physics data. This resource is based on numerical data sets and bibliographical information of ~22,000 experiments since the beginning of nuclear science. The principles of the computer database organization, its extended contents and Web applications development are described. New capabilities for the data sets uploads, renormalization, covariance matrix, and inverse reaction calculations are presented in this paper. The EXFOR database, updated monthly, provides an essential support for nuclear data evaluation, application development, and research activities. Finally, it is publicly available at the websites of the International Atomic Energy Agency Nuclear Data Section, http://www-nds.iaea.org/exfor, the U.S. National Nuclear Data Center, http://www.nndc.bnl.gov/exfor, and the mirror sites in China, India and Russian Federation.

  4. Implementation of an interactive database interface utilizing HTML, PHP, JavaScript, and MySQL in support of water quality assessments in the Northeastern North Carolina Pasquotank Watershed

    NASA Astrophysics Data System (ADS)

    Guion, A., Jr.; Hodgkins, H.

    2015-12-01

The Center of Excellence in Remote Sensing Education and Research (CERSER) has implemented three research projects during the summer Research Experience for Undergraduates (REU) program, gathering water quality data for local waterways. The data had been compiled manually with pen and paper and then entered into a spreadsheet. With the spread of electronic devices capable of interacting with databases, this project pursued an electronic method of entering and manipulating the water quality data. It focused on the development of an interactive database to gather, display, and analyze data collected from local waterways. The database and entry form were built in MySQL on a PHP server, allowing participants to enter data from anywhere Internet access is available. The project then researched applying this data to Google Maps to provide labeling and information to users. The NIA server at http://nia.ecsu.edu is used to host the application for download and for storage of the databases. Water Quality Database Team members included the authors plus Derek Morris Jr., Kathryne Burton, and Mr. Jeff Wood as mentor.
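The kind of schema and insert statements such an entry form would drive can be sketched as follows. The project used PHP with MySQL; Python's built-in sqlite3 stands in here, and the table and field names are assumptions for illustration:

```python
import sqlite3

# Hypothetical water-quality table; a web entry form would execute
# parameterized INSERTs like the one below for each field sample.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE water_quality (
    station TEXT, sample_date TEXT, ph REAL, turbidity_ntu REAL)""")
conn.execute("INSERT INTO water_quality VALUES (?, ?, ?, ?)",
             ("Pasquotank-01", "2015-07-14", 7.2, 4.8))

row = conn.execute("SELECT station, ph FROM water_quality").fetchone()
```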

  5. A precipitation database of station-based daily and monthly measurements for West Africa: Overview, quality control and harmonization

    NASA Astrophysics Data System (ADS)

    Bliefernicht, Jan; Waongo, Moussa; Annor, Thompson; Laux, Patrick; Lorenz, Manuel; Salack, Seyni; Kunstmann, Harald

    2017-04-01

    West Africa is a data-sparse region. High-quality, long-term precipitation data are often not readily available for applications in hydrology, agriculture, meteorology and other fields. To close this gap, we use multiple data sources to develop a precipitation database with long-term daily and monthly time series. This database was compiled from 16 archives, including global databases such as the Global Historical Climatology Network (GHCN), databases from research projects (e.g. the AMMA database) and databases of the national meteorological services of some West African countries. The collection consists of more than 2000 precipitation gauges with measurements dating from 1850 to 2015. Due to erroneous measurements (e.g. temporal offsets, unit conversion errors), missing values and inconsistent meta-data, merging this precipitation dataset is not straightforward and requires thorough quality control and harmonization. To this end, we developed geostatistical algorithms for quality control of the individual databases and for harmonization into a joint database. The algorithms are based on a pairwise comparison of the correspondence of precipitation time series as a function of the distance between stations. They were tested on precipitation time series from gauges located in a rectangular domain covering Burkina Faso, Ghana, Benin and Togo. This harmonized and quality-controlled precipitation database was recently used for several applications, such as the validation of a high-resolution regional climate model and the bias correction of precipitation projections provided by the Coordinated Regional Climate Downscaling Experiment (CORDEX). In this presentation, we will give an overview of the novel daily and monthly precipitation database and the algorithms used for quality control and harmonization. We will also highlight the quality of global and regional archives (e.g. GHCN, GSOD, AMMA database) in comparison to the precipitation databases provided by the national meteorological services.
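
    The pairwise correlation-versus-distance check described above can be sketched as follows. The station names, coordinates, thresholds and the synthetic gamma-distributed rainfall series are all invented for illustration, not taken from the actual database:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily series for four hypothetical stations; nearby stations
# share a common rainfall signal, so correlation should decay with distance.
common = rng.gamma(2.0, 2.0, size=365)
stations = {
    "A": (0.0, 0.0), "B": (0.2, 0.1), "C": (2.5, 1.0), "D": (0.21, 0.1),
}
series = {
    name: common * (1 - d) + rng.gamma(2.0, 2.0, size=365) * d
    for (name, _), d in zip(stations.items(), (0.2, 0.25, 0.9, 0.3))
}
# Simulate a corrupted record: station D's series has a 30-day temporal offset.
series["D"] = np.roll(series["D"], 30)

def flag_suspect_pairs(stations, series, max_km=50.0, min_r=0.5):
    """Flag close station pairs whose correlation is implausibly low."""
    suspects = []
    names = list(stations)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            km = np.hypot(stations[a][0] - stations[b][0],
                          stations[a][1] - stations[b][1]) * 111.0  # deg -> km
            r = np.corrcoef(series[a], series[b])[0, 1]
            if km < max_km and r < min_r:
                suspects.append((a, b, round(float(r), 2)))
    return suspects

print(flag_suspect_pairs(stations, series))
```

    Station D is flagged against its close neighbours because the temporal offset destroys the correlation expected at short separation, while the distant station C is never tested against the nearby cluster.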

  6. Data Aggregation System: A system for information retrieval on demand over relational and non-relational distributed data sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ball, G.; Kuznetsov, V.; Evans, D.

    We present the Data Aggregation System (DAS), a system for information retrieval and aggregation from heterogeneous sources of relational and non-relational data for the Compact Muon Solenoid experiment at the CERN Large Hadron Collider. The experiment currently has a number of organically developed data sources, including front-ends to a number of different relational databases and non-database data services, which do not share common data structures or APIs (Application Programming Interfaces) and cannot at this stage be readily converged. DAS provides a single interface for querying all these services, a caching layer to speed up access to expensive underlying calls, and the ability to merge records from different data services pertaining to a single primary key.
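
    A minimal sketch of the merging-and-caching idea: two mock services (the names, fields and dataset identifiers are hypothetical, not the real DAS APIs) return records about the same datasets, and a query layer merges them on the primary key with a TTL cache in front of the expensive calls:

```python
import time

# Two mock back-ends; both describe datasets but with different fields
# and no shared API, like the heterogeneous services DAS federates.
def catalog_service():
    return [{"dataset": "/A/RAW", "events": 1_200_000},
            {"dataset": "/B/RAW", "events": 800_000}]

def location_service():
    return [{"dataset": "/A/RAW", "site": "T1_US_FNAL"},
            {"dataset": "/B/RAW", "site": "T2_UK_London"}]

_cache = {}

def query(key="dataset", ttl=60.0):
    """Merge records from all services on a primary key, behind a TTL cache."""
    hit = _cache.get(key)
    if hit and time.time() - hit[0] < ttl:
        return hit[1]  # serve cached result, skipping the expensive calls
    merged = {}
    for service in (catalog_service, location_service):
        for record in service():
            merged.setdefault(record[key], {}).update(record)
    result = list(merged.values())
    _cache[key] = (time.time(), result)
    return result

for row in query():
    print(row)
```

    Each merged row combines the fields of every service that knows about that primary key; repeated queries within the TTL are answered from the cache.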

  7. Experiment on building Sundanese lexical database based on WordNet

    NASA Astrophysics Data System (ADS)

    Dewi Budiwati, Sari; Nurani Setiawan, Novihana

    2018-03-01

    Sundanese is the second most widely spoken local language in Indonesia. It is now rarely used, since Indonesian serves as the national language in everyday conversation. We built a Sundanese lexical database based on WordNet and Indonesian WordNet as an alternative way to preserve the language as part of the local culture. WordNet was chosen because the Sundanese language has three levels of word delivery, called the language code of conduct. Web user participants were involved in this research to specify Sundanese semantic relations, and an expert linguist validated the relations. The merge methodology was implemented in this experiment. Some words have equivalents in WordNet, while others do not, since certain words do not exist in other cultures.

  8. Visual search for emotional expressions: Effect of stimulus set on anger and happiness superiority.

    PubMed

    Savage, Ruth A; Becker, Stefanie I; Lipp, Ottmar V

    2016-01-01

    Prior reports of preferential detection of emotional expressions in visual search have yielded inconsistent results, even for face stimuli that avoid obvious expression-related perceptual confounds. The current study investigated inconsistent reports of anger and happiness superiority effects using face stimuli drawn from the same database. Experiment 1 excluded procedural differences as a potential factor, replicating a happiness superiority effect in a procedure that previously yielded an anger superiority effect. Experiments 2a and 2b confirmed that image colour and poser gender did not account for prior inconsistent findings. Experiments 3a and 3b identified stimulus set as the critical variable, revealing happiness or anger superiority effects for two partially overlapping sets of face stimuli. The current results highlight the critical role of stimulus selection for the observation of happiness or anger superiority effects in visual search, even for face stimuli that avoid obvious expression-related perceptual confounds and are drawn from a single database.

  9. Physical Science Informatics: Providing Open Science Access to Microheater Array Boiling Experiment Data

    NASA Technical Reports Server (NTRS)

    McQuillen, John; Green, Robert D.; Henrie, Ben; Miller, Teresa; Chiaramonte, Fran

    2014-01-01

    The Physical Science Informatics (PSI) system is the next step in an effort to make NASA-sponsored flight data available to the scientific and engineering community, along with the general public. The experimental data, from six overall disciplines, including Combustion Science, Fluid Physics, Complex Fluids, Fundamental Physics, and Materials Science, will present some unique challenges. Besides data in textual or numerical format, large portions of both the raw and analyzed data for many of these experiments are digital images and video, imposing large data storage requirements. In addition, the accessible data will include experiment design and engineering data (including applicable drawings), any analytical or numerical models, publications, reports, patents, and any commercial products developed as a result of the research. The objectives of this paper are to present the preliminary layout (Figure 2) of Microheater Array Boiling Experiment (MABE) data within the PSI database, to obtain feedback on that layout, and to present the procedure for obtaining access to the database.

  10. The NIFFTE Data Acquisition System

    NASA Astrophysics Data System (ADS)

    Qu, Hai; Niffte Collaboration

    2011-10-01

    The Neutron Induced Fission Fragment Tracking Experiment (NIFFTE) will employ a novel, high granularity, pressurized Time Projection Chamber to measure fission cross-sections of the major actinides to high precision over a wide incident neutron energy range. These results will improve nuclear data accuracy and benefit the fuel cycle in the future. The NIFFTE data acquisition system (DAQ) has been designed and implemented on the prototype TPC. Lessons learned from engineering runs have been incorporated into some design changes that are being implemented before the next run cycle. A fully instrumented sextant of EtherDAQ cards (16 sectors, 496 channels) will be used for the next run cycle. The Maximum Integrated Data Acquisition System (MIDAS) has been chosen and customized to configure and run the experiment. It also meets the requirement for remote control and monitoring of the system. The integration of the MIDAS online database with the persistent PostgreSQL database has been implemented for experiment usage. The detailed design and current status of the DAQ system will be presented.

  11. Analyzing high energy physics data using database computing: Preliminary report

    NASA Technical Reports Server (NTRS)

    Baden, Andrew; Day, Chris; Grossman, Robert; Lifka, Dave; Lusk, Ewing; May, Edward; Price, Larry

    1991-01-01

    A proof-of-concept system is described for analyzing high energy physics (HEP) data using database computing. The system is designed to scale up to the size required for HEP experiments at the Superconducting Super Collider (SSC) laboratory. These experiments will require collecting and analyzing approximately 10 to 100 million 'events' per year from colliding proton beams. Each 'event' consists of a set of vectors with a total length of approximately one megabyte. This represents an increase of approximately 2 to 3 orders of magnitude in the amount of data accumulated by present HEP experiments. The system is called the HEPDBC System (High Energy Physics Database Computing System). At present, the Mark 0 HEPDBC System is complete and can analyze HEP experimental data approximately an order of magnitude faster than current production software on data sets of approximately 1 GB. The Mark 1 HEPDBC System is currently undergoing testing and is designed to analyze data sets 10 to 100 times larger.
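
    The database-computing approach, pushing event selection into an indexed store rather than looping over every event in analysis code, can be illustrated with a toy event table. The cut variables and thresholds below are invented for illustration, and SQLite stands in for a production database engine:

```python
import random
import sqlite3

random.seed(1)
conn = sqlite3.connect(":memory:")
# Each row summarizes one 'event'; the full ~1 MB vector payload would be
# stored separately, with only the cut variables indexed for fast selection.
conn.execute("CREATE TABLE events (id INTEGER, n_tracks INTEGER, missing_et REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(i, random.randint(2, 60), random.uniform(0.0, 120.0))
     for i in range(10_000)],
)
conn.execute("CREATE INDEX idx_cut ON events (missing_et)")

# A physics 'cut' expressed as SQL, instead of a scan in analysis code.
(n_selected,) = conn.execute(
    "SELECT COUNT(*) FROM events WHERE missing_et > 100 AND n_tracks >= 10"
).fetchone()
print(n_selected, "events pass the cut")
```

    Only the events passing the cut would then have their full vectors fetched for detailed analysis, which is where the order-of-magnitude speedup over scanning every record comes from.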

  12. Evaluation of the Persistent Issues in History Laboratory for Virtual Field Experience (PIH-LVFE)

    ERIC Educational Resources Information Center

    Brush, Thomas; Saye, John; Kale, Ugur; Hur, Jung Won; Kohlmeier, Jada; Yerasimou, Theano; Guo, Lijiang; Symonette, Simone

    2009-01-01

    The Persistent Issues in History Laboratory for Virtual Field Experience (PIH-LVFE) combines a database of video cases of authentic classroom practices with multiple resources and tools to enable pre-service social studies teachers to virtually observe teachers implementing problem-based learning activities. In this paper, we present the results…

  13. Computer-Based Testing in the Medical Curriculum: A Decade of Experiences at One School

    ERIC Educational Resources Information Center

    McNulty, John; Chandrasekhar, Arcot; Hoyt, Amy; Gruener, Gregory; Espiritu, Baltazar; Price, Ron, Jr.

    2011-01-01

    This report summarizes more than a decade of experiences with implementing computer-based testing across a 4-year medical curriculum. Practical considerations are given to the fields incorporated within an item database and their use in the creation and analysis of examinations, security issues in the delivery and integrity of examinations,…

  14. The Experiences of Students without Disabilities in Inclusive Physical Education Classrooms: A Review of Literature

    ERIC Educational Resources Information Center

    Ruscitti, Robert Joseph; Thomas, Scott Gordon; Bentley, Danielle Christine

    2017-01-01

    The purpose of this literature review was to analyse studies of the experiences of students without disabilities (SWOD) in inclusive physical education (PE) classes. The literature published from 1975 to 2015 was compiled from three online databases (PsycInfo, Physical Education Index and ERIC). Included literature met inclusion criteria focussed…

  15. Immediate Dissemination of Student Discoveries to a Model Organism Database Enhances Classroom-Based Research Experiences

    ERIC Educational Resources Information Center

    Wiley, Emily A.; Stover, Nicholas A.

    2014-01-01

    Use of inquiry-based research modules in the classroom has soared over recent years, largely in response to national calls for teaching that provides experience with scientific processes and methodologies. To increase the visibility of in-class studies among interested researchers and to strengthen their impact on student learning, we have…

  16. Large-scale silviculture experiments of western Oregon and Washington.

    Treesearch

    Nathan J. Poage; Paul D. Anderson

    2007-01-01

    We review 12 large-scale silviculture experiments (LSSEs) in western Washington and Oregon with which the Pacific Northwest Research Station of the USDA Forest Service is substantially involved. We compiled and arrayed information about the LSSEs as a series of matrices in a relational database, which is included on the compact disc published with this report and...

  17. Resourcing Lab Experiments for New Ventures: The Potential of a Start-Up Database

    ERIC Educational Resources Information Center

    Pietrobon, Alberto

    2009-01-01

    This article responds to "Laboratory experiments as a tool in the empirical economic analysis of high-expectation start-ups" by Martin Curley and Piero Formica, published in the December 2008 issue of "Industry and Higher Education". Curley and Formica introduce a new concept for high-expectation start-ups, involving the use of "laboratory…

  18. A development and integration of database code-system with a compilation of comparator, k0 and absolute methods for INAA using microsoft access

    NASA Astrophysics Data System (ADS)

    Hoh, Siew Sin; Rapie, Nurul Nadiah; Lim, Edwin Suh Wen; Tan, Chun Yuan; Yavar, Alireza; Sarmani, Sukiman; Majid, Amran Ab.; Khoo, Kok Siong

    2013-05-01

    Instrumental Neutron Activation Analysis (INAA) is often used to determine and calculate the elemental concentrations of a sample at The National University of Malaysia (UKM), typically in the Nuclear Science Programme, Faculty of Science and Technology. The objective of this study was to develop a database code-system based on Microsoft Access 2010 to help INAA users choose either the comparator method, the k0-method or the absolute method for calculating the elemental concentrations of a sample. This study also integrated k0data, Com-INAA, k0Concent, k0-Westcott and Abs-INAA to execute and complete the ECC-UKM database code-system. After the integration, a study was conducted to test the effectiveness of the ECC-UKM database code-system by comparing the concentrations between the experiments and the code-systems. 'Triple Bare Monitor' Zr-Au and Cr-Mo-Au were used in the k0Concent, k0-Westcott and Abs-INAA code-systems as monitors to determine the thermal to epithermal neutron flux ratio (f). The quantities involved in determining the concentration were the net peak area (Np), measurement time (tm), irradiation time (tirr), k-factor (k), thermal to epithermal neutron flux ratio (f), epithermal neutron flux distribution parameter (α) and detection efficiency (ɛp). For the Com-INAA code-system, the certified reference material IAEA-375 Soil was used to calculate the concentrations of elements in a sample. Other CRMs and SRMs were also used in this database code-system. Later, a verification process to examine the effectiveness of the Abs-INAA code-system was carried out by comparing sample concentrations between the code-system and experiment. The experimental concentration values produced by the ECC-UKM database code-system showed good accuracy.

  19. MOBBED: a computational data infrastructure for handling large collections of event-rich time series datasets in MATLAB

    PubMed Central

    Cockfield, Jeremy; Su, Kyungmin; Robbins, Kay A.

    2013-01-01

    Experiments to monitor human brain activity during active behavior record a variety of modalities (e.g., EEG, eye tracking, motion capture, respiration monitoring) and capture a complex environmental context leading to large, event-rich time series datasets. The considerable variability of responses within and among subjects in more realistic behavioral scenarios requires experiments to assess many more subjects over longer periods of time. This explosion of data requires better computational infrastructure to more systematically explore and process these collections. MOBBED is a lightweight, easy-to-use, extensible toolkit that allows users to incorporate a computational database into their normal MATLAB workflow. Although capable of storing quite general types of annotated data, MOBBED is particularly oriented to multichannel time series such as EEG that have event streams overlaid with sensor data. MOBBED directly supports access to individual events, data frames, and time-stamped feature vectors, allowing users to ask questions such as what types of events or features co-occur under various experimental conditions. A database provides several advantages not available to users who process one dataset at a time from the local file system. In addition to archiving primary data in a central place to save space and avoid inconsistencies, such a database allows users to manage, search, and retrieve events across multiple datasets without reading the entire dataset. The database also provides infrastructure for handling more complex event patterns that include environmental and contextual conditions. The database can also be used as a cache for expensive intermediate results that are reused in such activities as cross-validation of machine learning algorithms. MOBBED is implemented over PostgreSQL, a widely used open source database, and is freely available under the GNU general public license at http://visual.cs.utsa.edu/mobbed. 
Source and issue reports for MOBBED are maintained at http://vislab.github.com/MobbedMatlab/ PMID:24124417
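
    The kind of cross-dataset event query MOBBED supports can be sketched against a toy event table; SQLite stands in for PostgreSQL here, and the schema, datasets and timestamps are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Minimal stand-in for an event store: one row per annotated event,
# keyed by dataset, with a timestamp in seconds.
conn.execute("CREATE TABLE events (dataset TEXT, type TEXT, t REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", [
    ("sub01", "stimulus", 1.00), ("sub01", "button_press", 1.35),
    ("sub01", "stimulus", 4.00), ("sub02", "stimulus", 2.00),
    ("sub02", "button_press", 2.41),
])

# Find stimulus events followed by a button press within 0.5 s, across
# all datasets, without reading any raw sensor data.
pairs = conn.execute("""
    SELECT s.dataset, s.t, b.t
    FROM events s JOIN events b
      ON b.dataset = s.dataset AND b.type = 'button_press'
     AND b.t > s.t AND b.t - s.t <= 0.5
    WHERE s.type = 'stimulus'
""").fetchall()
print(pairs)
```

    Because only the event metadata is touched, such co-occurrence questions can be answered over many datasets without loading the multichannel time series themselves.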

  20. MOBBED: a computational data infrastructure for handling large collections of event-rich time series datasets in MATLAB.

    PubMed

    Cockfield, Jeremy; Su, Kyungmin; Robbins, Kay A

    2013-01-01

    Experiments to monitor human brain activity during active behavior record a variety of modalities (e.g., EEG, eye tracking, motion capture, respiration monitoring) and capture a complex environmental context leading to large, event-rich time series datasets. The considerable variability of responses within and among subjects in more realistic behavioral scenarios requires experiments to assess many more subjects over longer periods of time. This explosion of data requires better computational infrastructure to more systematically explore and process these collections. MOBBED is a lightweight, easy-to-use, extensible toolkit that allows users to incorporate a computational database into their normal MATLAB workflow. Although capable of storing quite general types of annotated data, MOBBED is particularly oriented to multichannel time series such as EEG that have event streams overlaid with sensor data. MOBBED directly supports access to individual events, data frames, and time-stamped feature vectors, allowing users to ask questions such as what types of events or features co-occur under various experimental conditions. A database provides several advantages not available to users who process one dataset at a time from the local file system. In addition to archiving primary data in a central place to save space and avoid inconsistencies, such a database allows users to manage, search, and retrieve events across multiple datasets without reading the entire dataset. The database also provides infrastructure for handling more complex event patterns that include environmental and contextual conditions. The database can also be used as a cache for expensive intermediate results that are reused in such activities as cross-validation of machine learning algorithms. MOBBED is implemented over PostgreSQL, a widely used open source database, and is freely available under the GNU general public license at http://visual.cs.utsa.edu/mobbed. 
Source and issue reports for MOBBED are maintained at http://vislab.github.com/MobbedMatlab/

  1. DBSecSys 2.0: a database of Burkholderia mallei and Burkholderia pseudomallei secretion systems.

    PubMed

    Memišević, Vesna; Kumar, Kamal; Zavaljevski, Nela; DeShazer, David; Wallqvist, Anders; Reifman, Jaques

    2016-09-20

    Burkholderia mallei and B. pseudomallei are the causative agents of glanders and melioidosis, respectively, diseases with high morbidity and mortality rates. B. mallei and B. pseudomallei are closely related genetically; B. mallei evolved from an ancestral strain of B. pseudomallei by genome reduction and adaptation to an obligate intracellular lifestyle. Although these two bacteria cause different diseases, they share multiple virulence factors, including bacterial secretion systems, which represent key components of bacterial pathogenicity. Despite recent progress, the secretion system proteins for B. mallei and B. pseudomallei, their pathogenic mechanisms of action, and host factors are not well characterized. We previously developed a manually curated database, DBSecSys, of bacterial secretion system proteins for B. mallei. Here, we report an expansion of the database with corresponding information about B. pseudomallei. DBSecSys 2.0 contains comprehensive literature-based and computationally derived information about B. mallei ATCC 23344 and literature-based and computationally derived information about B. pseudomallei K96243. The database contains updated information for 163 B. mallei proteins from the previous database and 61 additional B. mallei proteins, and new information for 281 B. pseudomallei proteins associated with 5 secretion systems, their 1,633 human- and murine-interacting targets, and 2,400 host-B. mallei interactions and 2,286 host-B. pseudomallei interactions. The database also includes information about 13 pathogenic mechanisms of action for B. mallei and B. pseudomallei secretion system proteins inferred from the available literature or computationally. Additionally, DBSecSys 2.0 provides details about 82 virulence attenuation experiments for 52 B. mallei secretion system proteins and 98 virulence attenuation experiments for 61 B. pseudomallei secretion system proteins. 
We updated the Web interface and data access layer to speed up users' searches for detailed information on orthologous proteins related to the secretion systems of the two pathogens. The updates of DBSecSys 2.0 provide unique capabilities to access comprehensive information about the secretion systems of B. mallei and B. pseudomallei. They enable studies and comparisons of corresponding proteins of these two closely related pathogens and their host-interacting partners. The database is available at http://dbsecsys.bhsai.org .

  2. T3SEdb: data warehousing of virulence effectors secreted by the bacterial Type III Secretion System.

    PubMed

    Tay, Daniel Ming Ming; Govindarajan, Kunde Ramamoorthy; Khan, Asif M; Ong, Terenze Yao Rui; Samad, Hanif M; Soh, Wei Wei; Tong, Minyan; Zhang, Fan; Tan, Tin Wee

    2010-10-15

    Effectors of Type III Secretion System (T3SS) play a pivotal role in establishing and maintaining pathogenicity in the host and therefore the identification of these effectors is important in understanding virulence. However, the effectors display high level of sequence diversity, therefore making the identification a difficult process. There is a need to collate and annotate existing effector sequences in public databases to enable systematic analyses of these sequences for development of models for screening and selection of putative novel effectors from bacterial genomes that can be validated by a smaller number of key experiments. Herein, we present T3SEdb http://effectors.bic.nus.edu.sg/T3SEdb, a specialized database of annotated T3SS effector (T3SE) sequences containing 1089 records from 46 bacterial species compiled from the literature and public protein databases. Procedures have been defined for i) comprehensive annotation of experimental status of effectors, ii) submission and curation review of records by users of the database, and iii) the regular update of T3SEdb existing and new records. Keyword fielded and sequence searches (BLAST, regular expression) are supported for both experimentally verified and hypothetical T3SEs. More than 171 clusters of T3SEs were detected based on sequence identity comparisons (intra-cluster difference up to ~60%). Owing to this high level of sequence diversity of T3SEs, the T3SEdb provides a large number of experimentally known effector sequences with wide species representation for creation of effector predictors. We created a reliable effector prediction tool, integrated into the database, to demonstrate the application of the database for such endeavours. T3SEdb is the first specialised database reported for T3SS effectors, enriched with manual annotations that facilitated systematic construction of a reliable prediction model for identification of novel effectors. 
The T3SEdb represents a platform for inclusion of additional annotations of metadata for future developments of sophisticated effector prediction models for screening and selection of putative novel effectors from bacterial genomes/proteomes that can be validated by a small number of key experiments.

  3. A database application for pre-processing, storage and comparison of mass spectra derived from patients and controls

    PubMed Central

    Titulaer, Mark K; Siccama, Ivar; Dekker, Lennard J; van Rijswijk, Angelique LCT; Heeren, Ron MA; Sillevis Smitt, Peter A; Luider, Theo M

    2006-01-01

    Background Statistical comparison of peptide profiles in biomarker discovery requires fast, user-friendly software for high-throughput data analysis. Important features are flexibility in changing input variables and statistical analysis of peptides that are differentially expressed between patient and control groups. In addition, integrating the mass spectrometry data with the results of other experiments, such as microarray analysis, and with information from other databases requires central storage of the profile matrix, where protein IDs can be added to peptide masses of interest. Results A new database application is presented to detect and identify significantly differentially expressed peptides in peptide profiles obtained from body fluids of patient and control groups. The presented modular software provides central storage of mass spectra and fast analysis. The software architecture consists of 4 pillars: 1) a Graphical User Interface written in Java, 2) a MySQL database, which contains all metadata, such as experiment numbers and sample codes, 3) an FTP (File Transfer Protocol) server to store all raw mass spectrometry files and processed data, and 4) the software package R, which is used for modular statistical calculations, such as the Wilcoxon-Mann-Whitney rank-sum test. Statistical analysis by the Wilcoxon-Mann-Whitney test in R demonstrates that peptide profiles of two patient groups, 1) breast cancer patients with leptomeningeal metastases and 2) prostate cancer patients in end-stage disease, can be distinguished from those of control groups. Conclusion The database application is capable of distinguishing patient Matrix-Assisted Laser Desorption Ionization (MALDI-TOF) peptide profiles from those of control groups using large datasets. The modular architecture of the application makes it possible to adapt it to also handle large data from MS/MS and Fourier Transform Ion Cyclotron Resonance (FT-ICR) mass spectrometry experiments. It is expected that the higher resolution and mass accuracy of FT-ICR mass spectrometry prevent the clustering of peaks of different peptides and allow the identification of differentially expressed proteins from the peptide profiles. PMID:16953879
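
    The rank-sum comparison that the R module performs can be sketched directly: the Mann-Whitney U statistic counts, over all patient/control pairs, how often the patient value exceeds the control value (ties counting one half). The intensity values below are invented for illustration:

```python
def mann_whitney_u(x, y):
    """U statistic for the Wilcoxon-Mann-Whitney rank-sum test:
    the number of (x, y) pairs with x > y, counting ties as 1/2."""
    return sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)

# Hypothetical intensities of one peptide mass in two groups.
patients = [10.2, 12.5, 11.8, 13.0, 12.1]
controls = [8.9, 9.4, 10.0, 9.1, 8.7]
u = mann_whitney_u(patients, controls)
print(f"U = {u} out of {len(patients) * len(controls)} pairs")
```

    Under the null hypothesis U is near n1*n2/2; values close to 0 or to n1*n2, as here, indicate that the two groups separate on this peptide. R's wilcox.test reports the same statistic together with a p-value.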

  4. A database application for pre-processing, storage and comparison of mass spectra derived from patients and controls.

    PubMed

    Titulaer, Mark K; Siccama, Ivar; Dekker, Lennard J; van Rijswijk, Angelique L C T; Heeren, Ron M A; Sillevis Smitt, Peter A; Luider, Theo M

    2006-09-05

    Statistical comparison of peptide profiles in biomarker discovery requires fast, user-friendly software for high-throughput data analysis. Important features are flexibility in changing input variables and statistical analysis of peptides that are differentially expressed between patient and control groups. In addition, integrating the mass spectrometry data with the results of other experiments, such as microarray analysis, and with information from other databases requires central storage of the profile matrix, where protein IDs can be added to peptide masses of interest. A new database application is presented to detect and identify significantly differentially expressed peptides in peptide profiles obtained from body fluids of patient and control groups. The presented modular software provides central storage of mass spectra and fast analysis. The software architecture consists of 4 pillars: 1) a Graphical User Interface written in Java, 2) a MySQL database, which contains all metadata, such as experiment numbers and sample codes, 3) an FTP (File Transfer Protocol) server to store all raw mass spectrometry files and processed data, and 4) the software package R, which is used for modular statistical calculations, such as the Wilcoxon-Mann-Whitney rank-sum test. Statistical analysis by the Wilcoxon-Mann-Whitney test in R demonstrates that peptide profiles of two patient groups, 1) breast cancer patients with leptomeningeal metastases and 2) prostate cancer patients in end-stage disease, can be distinguished from those of control groups. The database application is capable of distinguishing patient Matrix-Assisted Laser Desorption Ionization (MALDI-TOF) peptide profiles from those of control groups using large datasets. The modular architecture of the application makes it possible to adapt it to also handle large data from MS/MS and Fourier Transform Ion Cyclotron Resonance (FT-ICR) mass spectrometry experiments. It is expected that the higher resolution and mass accuracy of FT-ICR mass spectrometry prevent the clustering of peaks of different peptides and allow the identification of differentially expressed proteins from the peptide profiles.

  5. Processing SPARQL queries with regular expressions in RDF databases

    PubMed Central

    2011-01-01

    Background As the Resource Description Framework (RDF) data model is widely used for modeling and sharing many online bioinformatics resources, such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL - the W3C-recommended query language for RDF databases - has become an important language for querying bioinformatics knowledge bases. Moreover, due to the diversity of users' requests for extracting information from RDF data, as well as users' lack of knowledge about the exact value of each fact in the RDF databases, it is desirable to support SPARQL queries with regular expression patterns over RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only support matching over the paths in an RDF graph. Results In this paper, we propose a novel framework for supporting regular expression processing in SPARQL queries. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework to existing query optimizers. 3) We build a prototype of the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Conclusions Experiments with a full-blown RDF engine show that our framework outperforms the existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns. PMID:21489225
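
    The semantics of a SPARQL FILTER regex(...) clause can be illustrated with a naive scan over a toy triple store. The triples below are invented, and a real engine (or the framework the paper proposes) would use indexes and a cost model rather than this full scan:

```python
import re

# A toy RDF graph as (subject, predicate, object) triples (values hypothetical).
triples = [
    ("uniprot:P01308", "rdfs:label", "Insulin"),
    ("uniprot:P68871", "rdfs:label", "Hemoglobin subunit beta"),
    ("uniprot:P01308", "up:organism", "Homo sapiens"),
    ("uniprot:Q9XSJ4", "rdfs:label", "Alpha-enolase"),
]

def sparql_regex_filter(triples, predicate, pattern, flags=re.IGNORECASE):
    """Naive evaluation of:
        SELECT ?s WHERE { ?s <predicate> ?o . FILTER regex(?o, pattern, "i") }
    Every matching triple's subject is returned; the IGNORECASE flag mirrors
    SPARQL's "i" regex flag."""
    rx = re.compile(pattern, flags)
    return [s for s, p, o in triples if p == predicate and rx.search(o)]

print(sparql_regex_filter(triples, "rdfs:label", "^hemo"))
```

    The cost of this baseline grows with the size of the graph, since every candidate object string is tested against the pattern; that full scan is precisely what an index-backed framework tries to avoid.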

  6. Processing SPARQL queries with regular expressions in RDF databases.

    PubMed

    Lee, Jinsoo; Pham, Minh-Duc; Lee, Jihwan; Han, Wook-Shin; Cho, Hune; Yu, Hwanjo; Lee, Jeong-Hoon

    2011-03-29

    As the Resource Description Framework (RDF) data model is widely used for modeling and sharing many online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL - the W3C-recommended query language for RDF databases - has become an important language for querying bioinformatics knowledge bases. Moreover, given the diversity of users' requests for extracting information from RDF data, as well as users' incomplete knowledge of the exact value of each fact in the RDF databases, it is desirable to support SPARQL queries with regular expression patterns over RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most existing techniques for processing regular expressions are designed for querying a text corpus, or support matching only over the paths in an RDF graph. In this paper, we propose a novel framework for supporting regular expression processing in SPARQL queries. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model that allows the proposed framework to be adapted into existing query optimizers. 3) We build a prototype of the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Experiments with a full-blown RDF engine show that our framework outperforms existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns.
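    The FILTER regex construct at the heart of both records can be illustrated with a short sketch. The triples and the naive scan below are invented for illustration; they correspond to the baseline the authors improve on, not to the indexing framework the paper proposes.

```python
import re

# Toy in-memory triple store; subjects and labels are invented examples.
triples = [
    ("uniprot:P12345", "rdfs:label", "Aspartate aminotransferase"),
    ("uniprot:P04637", "rdfs:label", "Cellular tumor antigen p53"),
    ("uniprot:P68871", "rdfs:label", "Hemoglobin subunit beta"),
]

def sparql_regex_filter(triples, predicate, pattern, flags="i"):
    """Naive scan equivalent of:
       SELECT ?s WHERE { ?s <predicate> ?o . FILTER regex(?o, pattern, flags) }"""
    rx = re.compile(pattern, re.IGNORECASE if "i" in flags else 0)
    return [s for s, p, o in triples if p == predicate and rx.search(o)]

hits = sparql_regex_filter(triples, "rdfs:label", "hemoglobin")
print(hits)  # ['uniprot:P68871']
```

    A real engine cannot afford this per-triple scan, which is exactly why the paper introduces index support and a cost model for the regex predicate.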

  7. Data management and database framework for the MICE experiment

    NASA Astrophysics Data System (ADS)

    Martyniak, J.; Nebrensky, J. J.; Rajaram, D.; MICE Collaboration

    2017-10-01

    The international Muon Ionization Cooling Experiment (MICE), currently operating at the Rutherford Appleton Laboratory in the UK, is designed to demonstrate the principle of muon ionization cooling for application to a future Neutrino Factory or Muon Collider. We present the status of the framework for the movement and curation of both raw and reconstructed data. A raw data-mover has been designed to safely upload data files onto permanent tape storage as soon as they have been written out. The process has been automated, and checks have been built in to ensure the integrity of data at every stage of the transfer. The data processing framework has been recently redesigned in order to provide fast turnaround of reconstructed data for analysis. The automated reconstruction is performed on a dedicated machine in the MICE control room and any reprocessing is done at Tier-2 Grid sites. In conjunction with this redesign, a new reconstructed-data-mover has been designed and implemented. We also review the implementation of a robust database system that has been designed for MICE. The processing of data, whether raw or Monte Carlo, requires accurate knowledge of the experimental conditions. MICE has several complex elements ranging from beamline magnets to particle identification detectors to superconducting magnets. A Configuration Database, which contains information about the experimental conditions (magnet currents, absorber material, detector calibrations, etc.) at any given time, has been developed to ensure accurate and reproducible simulation and reconstruction. A fully replicated, hot-standby database system has been implemented with a firewall-protected read-write master running in the control room, and a read-only slave running at a different location. The actual database is hidden from end users by a Web Service layer, which provides platform- and programming-language-independent access to the data.
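    The integrity checks the raw data-mover performs "at every stage" can be sketched as a checksum-verified copy. The file name and the choice of SHA-256 are illustrative assumptions; MICE's actual mover and tape-storage interfaces are not reproduced here.

```python
import hashlib
import os
import shutil
import tempfile

def sha256sum(path, chunk=1 << 20):
    """Stream a file through SHA-256 in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def safe_copy(src, dst):
    """Copy src to dst, verifying checksums match before declaring success."""
    before = sha256sum(src)
    shutil.copyfile(src, dst)
    after = sha256sum(dst)
    if before != after:
        os.remove(dst)  # never leave a corrupt copy behind
        raise IOError("checksum mismatch, transfer rejected")
    return after

# Demo with a temporary file standing in for a raw data file.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "run1234.raw")  # hypothetical file name
    with open(src, "wb") as f:
        f.write(b"detector payload")
    digest = safe_copy(src, src + ".copy")
    print(digest[:8])
```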

  8. Development of a Searchable Database of Cryoablation Simulations for Use in Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boas, F. Edward, E-mail: boasf@mskcc.org; Srimathveeravalli, Govindarajan, E-mail: srimaths@mskcc.org; Durack, Jeremy C., E-mail: durackj@mskcc.org

    Purpose: To create and validate a planning tool for multiple-probe cryoablation, using simulations of ice ball size and shape for various ablation probe configurations, ablation times, and types of tissue ablated. Materials and Methods: Ice ball size and shape was simulated using the Pennes bioheat equation. Five thousand six hundred and seventy different cryoablation procedures were simulated, using 1–6 cryoablation probes and 1–2 cm spacing between probes. The resulting ice ball was measured along three perpendicular axes and recorded in a database. Simulated ice ball sizes were compared to gel experiments (26 measurements) and clinical cryoablation cases (42 measurements). The clinical cryoablation measurements were obtained from a HIPAA-compliant retrospective review of kidney and liver cryoablation procedures between January 2015 and February 2016. Finally, we created a web-based cryoablation planning tool, which uses the cryoablation simulation database to look up the probe spacing and ablation time that produce the desired ice ball shape and dimensions. Results: Average absolute error between the simulated and experimentally measured ice balls was 1 mm in gel experiments and 4 mm in clinical cryoablation cases. The simulations accurately predicted the degree of synergy in multiple-probe ablations. The cryoablation simulation database covers a wide range of ice ball sizes and shapes up to 9.8 cm. Conclusion: Cryoablation simulations accurately predict the ice ball size in multiple-probe ablations. The cryoablation database can be used to plan ablation procedures: given the desired ice ball size and shape, it will find the number and type of probes, probe configuration and spacing, and ablation time required.
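    A minimal sketch of the kind of simulation described: an explicit finite-difference solution of the Pennes bioheat equation in one dimension. All parameter values are rough generic tissue numbers, the probe temperature is an assumption, and the latent heat of freezing is ignored, so this is far simpler than the paper's 3-D simulations.

```python
# 1-D explicit finite differences for the Pennes bioheat equation:
#   rho*c*dT/dt = k*d2T/dx2 + w_b*rho_b*c_b*(T_a - T)
# Illustrative parameters only (soft tissue, blood perfusion term).
k, rho, c = 0.5, 1050.0, 3600.0                  # conductivity, density, heat capacity
wb, rho_b, cb, Ta = 5e-4, 1050.0, 3600.0, 37.0   # perfusion rate, blood properties
nx, dx, dt = 51, 1e-3, 0.05                      # 5 cm domain; dt < dx^2*rho*c/(2k) for stability
T = [37.0] * nx                                  # start at body temperature (deg C)

for _ in range(12000):                           # simulate 10 minutes of freezing
    T[0] = -40.0                                 # assumed cryoprobe surface temperature
    Tn = T[:]
    for i in range(1, nx - 1):
        cond = k * (T[i + 1] - 2 * T[i] + T[i - 1]) / dx**2
        perf = wb * rho_b * cb * (Ta - T[i])
        Tn[i] = T[i] + dt * (cond + perf) / (rho * c)
    T = Tn

iceball_mm = sum(1 for t in T if t < 0) * dx * 1000  # extent of sub-zero region
print(round(iceball_mm, 1))
```

    The real planning tool repeats such simulations for thousands of probe configurations and stores the resulting ice-ball dimensions for lookup.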

  9. Sixth plot of the carcinogenic potency database: results of animal bioassays published in the General Literature 1989 to 1990 and by the National Toxicology Program 1990 to 1993.

    PubMed Central

    Gold, L S; Manley, N B; Slone, T H; Garfinkel, G B; Ames, B N; Rohrbach, L; Stern, B R; Chow, K

    1995-01-01

    This paper presents two types of information from the Carcinogenic Potency Database (CPDB): (a) the sixth chronological plot of analyses of long-term carcinogenesis bioassays, and (b) an index to chemicals in all six plots, including a summary compendium of positivity and potency for each chemical (Appendix 14). The five earlier plots of the CPDB have appeared in this journal, beginning in 1984 (1-5). Including the plot in this paper, the CPDB reports results of 5002 experiments on 1230 chemicals. This paper includes bioassay results published in the general literature between January 1989 and December 1990, and in Technical Reports of the National Toxicology Program between January 1990 and June 1993. Analyses are included on 17 chemicals tested in nonhuman primates by the Laboratory of Chemical Pharmacology, National Cancer Institute. This plot presents results of 531 long-term, chronic experiments of 182 test compounds and includes the same information about each experiment in the same plot format as the earlier papers: the species and strain of test animal, the route and duration of compound administration, dose level and other aspects of experimental protocol, histopathology and tumor incidence, TD50 (carcinogenic potency) and its statistical significance, dose response, author's opinion about carcinogenicity, and literature citation. We refer the reader to the 1984 publications (1,6,7) for a detailed guide to the plot of the database, a complete description of the numerical index of carcinogenic potency, and a discussion of the sources of data, the rationale for the inclusion of particular experiments and particular target sites, and the conventions adopted in summarizing the literature. The six plots of the CPDB are to be used together since results of individual experiments that were published earlier are not repeated. Appendix 14 is designed to facilitate access to results on all chemicals. 
References to the published papers that are the source of experimental data are reported in each of the published plots. For readers using the CPDB extensively, a combined plot is available of all results from the six separate plot papers, ordered alphabetically by chemical; the combined plot in printed form or on computer tape or diskette is available from the first author. A SAS database is also available. PMID:8741772

  10. Implementation of equivalent domain integral method in the two-dimensional analysis of mixed mode problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Shivakumar, K. N.

    1989-01-01

    An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies is presented. The details of the method and its implementation are presented for isoparametric elements. The total and product integrals consist of the sum of an area (or domain) integral and line integrals on the crack faces. The line integrals vanish only when the crack faces are traction free and the loading is either pure mode I or pure mode II or a combination of both with only the square-root singular term in the stress field. The EDI method gave accurate values of the J-integrals for two mode I and two mixed mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented. The procedure that uses the symmetric and antisymmetric components of the stress and displacement fields to calculate the individual modes gave accurate values of the integrals for all problems analyzed. The EDI method, when applied to a problem of an interface crack between two different materials, showed that the mode I and mode II components are domain dependent while the total integral is not. This behavior is caused by the presence of the oscillatory part of the singularity in bimaterial crack problems. The EDI method thus shows behavior similar to the virtual crack closure method for bimaterial problems.
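    For traction-free crack faces the line integrals drop out and the EDI reduces to an area form. The expression below is the standard textbook form of the equivalent domain integral, reconstructed here rather than copied from the paper:

```latex
J = \int_{A} \left( \sigma_{ij}\,\frac{\partial u_j}{\partial x_1} - W\,\delta_{1i} \right) \frac{\partial q}{\partial x_i}\,\mathrm{d}A
```

    where W is the strain energy density, A is the annular domain around the crack tip, and q is a smooth weight function equal to 1 at the crack tip and 0 on the outer boundary of A.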

  11. Progress in gene targeting and gene therapy for retinitis pigmentosa

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrar, G.J.; Humphries, M.M.; Erven, A.

    1994-09-01

    Previously, we localized disease genes involved in retinitis pigmentosa (RP), an inherited retinal degeneration, close to the rhodopsin and peripherin genes on 3q and 6p. Subsequently, we and others identified mutations in these genes in RP patients. Currently, animal models for human retinopathies are being generated using gene targeting by homologous recombination in embryonic stem (ES) cells. Genomic clones for retinal genes including rhodopsin and peripherin have been obtained from a phage library carrying mouse DNA isogenic with the ES cell line (CC1.2). The peripherin clone has been sequenced to establish the genomic structure of the mouse gene. Targeting vectors for rhodopsin and peripherin, including a neomycin cassette for positive selection and thymidine kinase genes enabling selection against random integrants, are under construction. Progress in vector construction will be presented. Simultaneously, we are developing systems for delivery of gene therapies to retinal tissues utilizing replication-deficient adenovirus (Ad5). Efficacy of infection subsequent to various methods of intraocular injection and with varying viral titers is being assayed using an adenovirus construct containing a CMV promoter-LacZ fusion as reporter; the range of tissues infected and the level and duration of LacZ expression are monitored. Viral constructs with the LacZ reporter gene under the control of retinal-specific promoters such as rhodopsin and IRBP cloned into pXCJL.1 are under construction. An update on developments in photoreceptor cell-directed expression of virally delivered genes will be presented.

  12. Cytokine secretion from human peripheral blood mononuclear cells cultured in vitro with metal particles.

    PubMed

    Cachinho, Sandra C P; Pu, Fanrong; Hunt, John A

    2013-04-01

    The failure of implanted medical devices can be associated with changes in the production of cytokines by cells of the immune system. Cytokines released by peripheral blood mononuclear cells upon contact with metal particles were quantified to understand their role in implant integration and their importance as messengers in the recruitment of T-lymphocytes at the implantation site. Opsonization was utilised to understand the influence of serum proteins on particle-induced cytokine production and release. Different metal compositions were used in particulate form, Titanium (Ti), Titanium alloy (Ti6Al4V), and Stainless Steel 316L (SS), and were cultured in vitro with a mixed population of monocytes/macrophages and lymphocytes. The cells were also exposed to an exogenous stimulant mixture of phytohemagglutinin-P and interferon-gamma (IFN-γ) and to particles opsonized with human serum. Interleukins IL-1α, IL-1β, IL-2, IL-4, IL-6, IL-8, IFN-γ, and tumor necrosis factor-alpha (TNF-α) were investigated using enzyme-linked immunosorbent assay, as they are an indicator of the inflammation evoked by particulate metals. The experiments showed that metal particles induced higher amounts of IL-6 and IL-1 but very low amounts of TNF-α. T-lymphocyte activation was evaluated by the quantification of IL-2 and IFN-γ levels. The results showed that nonopsonized and opsonized metal particles did not induce the release of increased levels of IL-2 and IFN-γ. Copyright © 2013 Wiley Periodicals, Inc.

  13. Gateways to clinical trials.

    PubMed

    Bayés, M; Rabasseda, X; Prous, J R

    2007-12-01

    Gateways to Clinical Trials are a guide to the most recent clinical trials in current literature and congresses. The data in the following tables has been retrieved from the Clinical Trials Knowledge Area of Prous Science Integrity, the drug discovery and development portal, http://integrity.prous.com. This issue focuses on the following selection of drugs: 249553, 2-Methoxyestradiol; Abatacept, Adalimumab, Adefovir dipivoxil, Agalsidase beta, Albinterferon alfa-2b, Aliskiren fumarate, Alovudine, Amdoxovir, Amlodipine besylate/atorvastatin calcium, Amrubicin hydrochloride, Anakinra, AQ-13, Aripiprazole, AS-1404, Asoprisnil, Atacicept, Atrasentan; Belimumab, Bevacizumab, Bortezomib, Bosentan, Botulinum toxin type B, Brivaracetam; Catumaxomab, Cediranib, Cetuximab, cG250, Ciclesonide, Cinacalcet hydrochloride, Curcumin, Cypher; Darbepoetin alfa, Denosumab, Dihydrexidine; Eicosapentaenoic acid/docosahexaenoic acid, Entecavir, Erlotinib hydrochloride, Escitalopram oxalate, Etoricoxib, Everolimus, Ezetimibe; Febuxostat, Fenspiride hydrochloride, Fondaparinux sodium; Gefitinib, Ghrelin (human), GSK-1562902A; HSV-tk/GCV; Iclaprim, Imatinib mesylate, Imexon, Indacaterol, Insulinotropin, ISIS-112989; L-Alanosine, Lapatinib ditosylate, Laropiprant; Methoxy polyethylene glycol-epoetin-beta, Mipomersen sodium, Motexafin gadolinium; Natalizumab, Nimotuzumab; OSC, Ozarelix; PACAP-38, Paclitaxel nanoparticles, Parathyroid Hormone-Related Protein-(1-36), Pasireotide, Pegfilgrastim, Peginterferon alfa-2a, Peginterferon alfa-2b, Pemetrexed disodium, Pertuzumab, Picoplatin, Pimecrolimus, Pitavastatin calcium, Plitidepsin; Ranelic acid distrontium salt, Ranolazine, Recombinant human relaxin H2, Regadenoson, RFB4(dsFv)-PE38, RO-3300074, Rosuvastatin calcium; SIR-Spheres, Solifenacin succinate, Sorafenib, Sunitinib malate; Tadalafil, Talabostat, Taribavirin hydrochloride, Taxus, Temsirolimus, Teriparatide, Tiotropium bromide, Tipifarnib, Tirapazamine, Tocilizumab; UCN-01,
Ularitide, Uracil, Ustekinumab; V-260, Vandetanib, Vatalanib succinate, Vernakalant hydrochloride, Vorinostat; YM-155; Zileuton, Zoledronic acid monohydrate.

  14. Integration of LiDAR Data with Aerial Imagery for Estimating Rooftop Solar Photovoltaic Potentials in City of Cape Town

    NASA Astrophysics Data System (ADS)

    Adeleke, A. K.; Smit, J. L.

    2016-06-01

    Apart from the drive to reduce carbon dioxide emissions by carbon-intensive economies like South Africa, the recent spate of electricity load shedding across most parts of the country, including Cape Town, has left electricity consumers scampering for alternatives, so as to rely less on the national grid. Solar energy, which is abundantly available in most parts of Africa and regarded as a clean and renewable source of energy, makes it possible to generate electricity using photovoltaic technology. However, before time and financial resources are invested in rooftop solar photovoltaic systems in urban areas, it is important to evaluate the potential of the building rooftop intended to be used in harvesting the solar energy. This paper presents methodologies that make use of LiDAR data and other ancillary data, such as high-resolution aerial imagery, to automatically extract building rooftops in the City of Cape Town and evaluate their potential for solar photovoltaic systems. Two main processes were involved: (1) automatic extraction of building roofs using the integration of LiDAR data and aerial imagery, in order to derive their outlines and areal coverage; and (2) estimating the global solar radiation incident on each roof surface using an elevation model derived from the LiDAR data, in order to evaluate its solar photovoltaic potential. This resulted in a geodatabase, which can be queried to retrieve salient information about the viability of a particular building roof for solar photovoltaic installation.
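    The first step of the solar-radiation estimate, deriving surface orientation from the LiDAR elevation model, can be sketched with Horn's finite-difference slope on a 3x3 window. This is a generic GIS formula, not the paper's actual toolchain; aspect and shadowing, which a full solar model also needs, are omitted.

```python
import math

def horn_slope_deg(z, cell):
    """Slope in degrees of the centre cell of a 3x3 elevation window z
    (z[row][col], rows increasing northwards), via Horn's method."""
    dzdx = ((z[0][2] + 2*z[1][2] + z[2][2]) -
            (z[0][0] + 2*z[1][0] + z[2][0])) / (8 * cell)
    dzdy = ((z[2][0] + 2*z[2][1] + z[2][2]) -
            (z[0][0] + 2*z[0][1] + z[0][2])) / (8 * cell)
    return math.degrees(math.atan(math.hypot(dzdx, dzdy)))

flat = [[5.0] * 3 for _ in range(3)]   # level roof section
pitched = [[0.0, 1.0, 2.0]] * 3        # plane rising 1 m per 1 m cell
print(horn_slope_deg(flat, 1.0), horn_slope_deg(pitched, 1.0))
```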

  15. Palaeo sea-level and ice-sheet databases: problems, strategies and perspectives

    NASA Astrophysics Data System (ADS)

    Rovere, Alessio; Düsterhus, André; Carlson, Anders; Barlow, Natasha; Bradwell, Tom; Dutton, Andrea; Gehrels, Roland; Hibbert, Fiona; Hijma, Marc; Horton, Benjamin; Klemann, Volker; Kopp, Robert; Sivan, Dorit; Tarasov, Lev; Törnqvist, Torbjorn

    2016-04-01

    Databases of palaeoclimate data have driven many major developments in understanding the Earth system. The measurement and interpretation of palaeo sea-level and ice-sheet data that form such databases pose considerable challenges to the scientific communities that use them for further analyses. In this paper, we build on the experience of the PALSEA (PALeo constraints on SEA level rise) community, a working group within the PAGES (Past Global Changes) project, to describe the challenges and best strategies that can be adopted to build a self-consistent and standardised database of geological and geochemical data related to palaeo sea levels and ice sheets. Our aim in this paper is to identify key points that need attention and subsequent funding when undertaking the task of database creation. We conclude that any sea-level or ice-sheet database must be divided into three instances: i) measurement; ii) interpretation; iii) database creation. Measurements should include position, age, description of geological features, and quantification of uncertainties, all described as objectively as possible. Interpretation can be subjective, but it should always include uncertainties and all possible interpretations, without unjustified a priori exclusions. We propose that, in the creation of a database, an approach based on Accessibility, Transparency, Trust, Availability, Continued updating, Completeness and Communication of content (ATTAC3) must be adopted. It is also essential to consider the community structure that creates, and benefits from, a database. We conclude that funding sources should address not only the creation of original data in specific research-question-oriented projects, but also allow part of the funding to be used for IT-related and database-creation tasks, which are essential to guarantee accessibility and maintenance of the collected data.

  16. Privacy-preserving search for chemical compound databases.

    PubMed

    Shimizu, Kana; Nuida, Koji; Arai, Hiromi; Mitsunari, Shigeo; Attrapadung, Nuttapong; Hamada, Michiaki; Tsuda, Koji; Hirokawa, Takatsugu; Sakuma, Jun; Hanaoka, Goichiro; Asai, Kiyoshi

    2015-01-01

    Searching for similar compounds in a database is the most important process for in-silico drug screening. Since a query compound is an important starting point for a new drug, a query holder, who is afraid of the query being monitored by the database server, usually downloads all the records in the database and uses them in a closed network. However, a serious dilemma arises when the database holder also wants to output no information except for the search results, and such a dilemma prevents the use of many important data resources. In order to overcome this dilemma, we developed a novel cryptographic protocol that enables database searching while keeping both the query holder's privacy and the database holder's privacy. Generally, the application of cryptographic techniques to practical problems is difficult because versatile techniques are computationally expensive, while computationally inexpensive techniques can perform only trivial computation tasks. In this study, our protocol is successfully built only from an additive-homomorphic cryptosystem, which allows only addition to be performed on encrypted values but is computationally efficient compared with versatile techniques such as general-purpose multi-party computation. In an experiment searching ChEMBL, which consists of more than 1,200,000 compounds, the proposed method was 36,900 times faster in CPU time and 12,000 times as efficient in communication size compared with general-purpose multi-party computation. We proposed a novel privacy-preserving protocol for searching chemical compound databases. The proposed method, which scales easily to large databases, may help to accelerate drug discovery research by making full use of unused but valuable data that includes sensitive information.

  17. Privacy-preserving search for chemical compound databases

    PubMed Central

    2015-01-01

    Background Searching for similar compounds in a database is the most important process for in-silico drug screening. Since a query compound is an important starting point for a new drug, a query holder, who is afraid of the query being monitored by the database server, usually downloads all the records in the database and uses them in a closed network. However, a serious dilemma arises when the database holder also wants to output no information except for the search results, and such a dilemma prevents the use of many important data resources. Results In order to overcome this dilemma, we developed a novel cryptographic protocol that enables database searching while keeping both the query holder's privacy and the database holder's privacy. Generally, the application of cryptographic techniques to practical problems is difficult because versatile techniques are computationally expensive, while computationally inexpensive techniques can perform only trivial computation tasks. In this study, our protocol is successfully built only from an additive-homomorphic cryptosystem, which allows only addition to be performed on encrypted values but is computationally efficient compared with versatile techniques such as general-purpose multi-party computation. In an experiment searching ChEMBL, which consists of more than 1,200,000 compounds, the proposed method was 36,900 times faster in CPU time and 12,000 times as efficient in communication size compared with general-purpose multi-party computation. Conclusion We proposed a novel privacy-preserving protocol for searching chemical compound databases. The proposed method, which scales easily to large databases, may help to accelerate drug discovery research by making full use of unused but valuable data that includes sensitive information. PMID:26678650
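    The additive-homomorphic property both records rely on (addition on plaintexts via multiplication on ciphertexts) can be demonstrated with a toy Paillier cryptosystem. Paillier is one standard additive-homomorphic scheme, though the abstract does not say which scheme the authors used; the tiny primes and messages below are illustrative only.

```python
import math
import random

# Toy Paillier cryptosystem: multiplying ciphertexts decrypts to the sum of
# the plaintexts. Real deployments need ~2048-bit moduli, not 10-bit primes.
p, q = 1009, 1013                      # toy primes; n = p*q bounds the plaintexts
n = p * q
n2 = n * n
g = n + 1                              # standard choice that simplifies decryption
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)                   # modular inverse, valid because g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n

a, b = 1723, 40119                     # e.g. two private partial similarity scores
c_sum = (encrypt(a) * encrypt(b)) % n2 # addition performed under encryption
print(decrypt(c_sum))                  # 41842 == a + b
```

    The protocol in the paper composes exactly this kind of encrypted addition into similarity computations, so the server never sees the query fingerprint.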

  18. The DNA Data Bank of Japan launches a new resource, the DDBJ Omics Archive of functional genomics experiments.

    PubMed

    Kodama, Yuichi; Mashima, Jun; Kaminuma, Eli; Gojobori, Takashi; Ogasawara, Osamu; Takagi, Toshihisa; Okubo, Kousaku; Nakamura, Yasukazu

    2012-01-01

    The DNA Data Bank of Japan (DDBJ; http://www.ddbj.nig.ac.jp) maintains and provides archival, retrieval and analytical resources for biological information. The central DDBJ resource consists of public, open-access nucleotide sequence databases including raw sequence reads, assembly information and functional annotation. Database content is exchanged with EBI and NCBI within the framework of the International Nucleotide Sequence Database Collaboration (INSDC). In 2011, DDBJ launched two new resources: the 'DDBJ Omics Archive' (DOR; http://trace.ddbj.nig.ac.jp/dor) and BioProject (http://trace.ddbj.nig.ac.jp/bioproject). DOR is an archival database of functional genomics data generated by microarray and highly parallel new generation sequencers. Data are exchanged between the ArrayExpress at EBI and DOR in the common MAGE-TAB format. BioProject provides an organizational framework to access metadata about research projects and the data from the projects that are deposited into different databases. In this article, we describe major changes and improvements introduced to the DDBJ services, and the launch of two new resources: DOR and BioProject.

  19. Experience with ATLAS MySQL PanDA database service

    NASA Astrophysics Data System (ADS)

    Smirnov, Y.; Wlodek, T.; De, K.; Hover, J.; Ozturk, N.; Smith, J.; Wenaus, T.; Yu, D.

    2010-04-01

    The PanDA distributed production and analysis system has been in production use for ATLAS data processing and analysis since late 2005 in the US, and globally throughout ATLAS since early 2008. Its core architecture is based on a set of stateless web services served by Apache and backed by a suite of MySQL databases that are the repository for all PanDA information: active and archival job queues, dataset and file catalogs, site configuration information, monitoring information, system control parameters, and so on. This database system is one of the most critical components of PanDA, and has successfully delivered the functional and scaling performance required by PanDA, currently operating at a scale of half a million jobs per week, with much growth still to come. In this paper we describe the design and implementation of the PanDA database system, its architecture of MySQL servers deployed at BNL and CERN, backup strategy and monitoring tools. The system has been developed, thoroughly tested, and brought to production to provide highly reliable, scalable, flexible and available database services for ATLAS Monte Carlo production, reconstruction and physics analysis.

  20. Query by forms: User-oriented relational database retrieving system and its application in analysis of experiment data

    NASA Astrophysics Data System (ADS)

    Skotniczny, Zbigniew

    1989-12-01

    The Query by Forms (QbF) system is a user-oriented interactive tool for querying large relational databases with minimal query-definition cost. The system was developed under the assumption that the user's time and effort in defining the needed queries is the most severe bottleneck. The system may be applied to any Rdb/VMS database system and is recommended for specific information systems of any project where end-user queries cannot be foreseen. The tool is dedicated to specialists in an application domain who have to analyze data maintained in a database from any needed point of view, and who do not need to know commercial database languages. The paper presents the system as a compromise between functionality and usability. User-system communication via a menu-driven "tree-like" structure of screen-forms, which produces a query definition and its execution, is discussed in detail. Output of query results (printed reports and graphics) is also discussed. Finally, the paper shows an application of QbF to the HERA project.

  1. Combining computational models, semantic annotations and simulation experiments in a graph database

    PubMed Central

    Henkel, Ron; Wolkenhauer, Olaf; Waltemath, Dagmar

    2015-01-01

    Model repositories such as the BioModels Database, the CellML Model Repository or JWS Online are frequently accessed to retrieve computational models of biological systems. However, their storage concepts support only restricted types of queries, and not all data inside the repositories can be retrieved. In this article we present a storage concept that meets this challenge. It is grounded in a graph database, reflects the models' structure, incorporates semantic annotations and simulation descriptions, and ultimately connects different types of model-related data. The connections between heterogeneous model-related data and bio-ontologies enable efficient search via biological facts and grant access to new model features. The introduced concept notably improves access to computational models and associated simulations in a model repository. This has positive effects on tasks such as model search, retrieval, ranking, matching and filtering. Furthermore, our work for the first time enables CellML- and Systems Biology Markup Language-encoded models to be effectively maintained in one database. We show how these models can be linked via annotations and queried. Database URL: https://sems.uni-rostock.de/projects/masymos/ PMID:25754863

  2. Handwritten word preprocessing for database adaptation

    NASA Astrophysics Data System (ADS)

    Oprean, Cristina; Likforman-Sulem, Laurence; Mokbel, Chafic

    2013-01-01

    Handwriting recognition systems are typically trained using publicly available databases, where data have been collected in controlled conditions (image resolution, paper background, noise level,...). Since this is not often the case in real-world scenarios, classification performance can be affected when novel data is presented to the word recognition system. To overcome this problem, we present in this paper a new approach called database adaptation. It consists of processing one set (training or test) in order to adapt it to the other set (test or training, respectively). Specifically, two kinds of preprocessing, namely stroke thickness normalization and pixel intensity normalization are considered. The advantage of such approach is that we can re-use the existing recognition system trained on controlled data. We conduct several experiments with the Rimes 2011 word database and with a real-world database. We adapt either the test set or the training set. Results show that training set adaptation achieves better results than test set adaptation, at the cost of a second training stage on the adapted data. Accuracy of data set adaptation is increased by 2% to 3% in absolute value over no adaptation.
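    The pixel-intensity side of the adaptation can be illustrated with plain min-max stretching, mapping a word image onto a fixed intensity range before recognition. This is a generic normalization sketch; the exact transform used in the paper may differ.

```python
def normalize_intensity(img, lo=0, hi=255):
    """Min-max stretch a grayscale image (list of rows) to the range [lo, hi],
    so ink/background statistics are comparable across databases."""
    flat = [p for row in img for p in row]
    pmin, pmax = min(flat), max(flat)
    if pmax == pmin:                       # constant image: map everything to lo
        return [[lo for _ in row] for row in img]
    scale = (hi - lo) / (pmax - pmin)
    return [[round(lo + (p - pmin) * scale) for p in row] for row in img]

word = [[40, 60, 200], [50, 220, 210]]     # toy 2x3 grayscale word patch
print(normalize_intensity(word))
```

    Applying the same transform to training and test sets is what makes the two databases comparable; stroke-thickness normalization would be a separate morphological step.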

  3. Multiple Representations-Based Face Sketch-Photo Synthesis.

    PubMed

    Peng, Chunlei; Gao, Xinbo; Wang, Nannan; Tao, Dacheng; Li, Xuelong; Li, Jie

    2016-11-01

    Face sketch-photo synthesis plays an important role in law enforcement and digital entertainment. Most of the existing methods use only pixel intensities as the feature. Since face images can be described using features from multiple aspects, this paper presents a novel multiple-representations-based face sketch-photo synthesis method that adaptively combines multiple representations to represent an image patch. In particular, it combines multiple features from face images processed using multiple filters and deploys Markov networks to exploit the interacting relationships between neighboring image patches. The proposed framework can be solved using an alternating optimization strategy, and it normally converges in only five outer iterations in the experiments. Our experimental results on the Chinese University of Hong Kong (CUHK) face sketch database, celebrity photos, CUHK Face Sketch FERET Database, IIIT-D Viewed Sketch Database, and forensic sketches demonstrate the effectiveness of our method for face sketch-photo synthesis. In addition, cross-database and database-dependent style-synthesis evaluations demonstrate the generalizability of this novel method and suggest promising solutions for face identification in forensic science.

  4. The CMS DBS query language

    NASA Astrophysics Data System (ADS)

    Kuznetsov, Valentin; Riley, Daniel; Afaq, Anzar; Sekhri, Vijay; Guo, Yuyi; Lueking, Lee

    2010-04-01

    The CMS experiment has implemented a flexible and powerful system enabling users to find data within the CMS physics data catalog. The Dataset Bookkeeping Service (DBS) comprises a database and the services used to store and access metadata related to CMS physics data. To this, we have added a generalized query system in addition to the existing web and programmatic interfaces to the DBS. This query system is based on a query language that hides the complexity of the underlying database structure by discovering the join conditions between database tables. This provides a way of querying the system that is simple and straightforward for CMS data managers and physicists to use without requiring knowledge of the database tables or keys. The DBS Query Language uses the ANTLR tool to build the input query parser and tokenizer, followed by a query builder that uses a graph representation of the DBS schema to construct the SQL query sent to underlying database. We will describe the design of the query system, provide details of the language components and overview of how this component fits into the overall data discovery system architecture.
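    The join-condition discovery described above can be sketched as a shortest-path search over a graph whose nodes are tables and whose edges carry join keys. The miniature schema, table names, and key names below are invented for illustration and are not the actual DBS schema:

    ```python
    from collections import deque

    # Hypothetical miniature schema graph: nodes are tables, edges are join keys.
    # Table and key names are illustrative only, not the real DBS tables.
    SCHEMA = {
        "Dataset": {"Block": "dataset_id", "Tier": "tier_id"},
        "Block":   {"Dataset": "dataset_id", "File": "block_id"},
        "File":    {"Block": "block_id"},
        "Tier":    {"Dataset": "tier_id"},
    }

    def join_path(start, goal):
        """Breadth-first search for the shortest chain of join conditions."""
        queue = deque([(start, [])])
        seen = {start}
        while queue:
            table, path = queue.popleft()
            if table == goal:
                return path
            for neighbor, key in SCHEMA[table].items():
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append((neighbor, path + [(table, neighbor, key)]))
        return None

    def build_sql(select_table, where_table):
        """Turn the discovered path into a chain of JOIN clauses."""
        sql = f"SELECT * FROM {select_table}"
        for left, right, key in join_path(select_table, where_table):
            sql += f" JOIN {right} ON {left}.{key} = {right}.{key}"
        return sql

    print(build_sql("File", "Tier"))
    ```

    The point of the design is that the user names only the entities of interest; the query builder, not the user, knows the keys linking the intervening tables.
    
    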

  5. LMSD: LIPID MAPS structure database

    PubMed Central

    Sud, Manish; Fahy, Eoin; Cotter, Dawn; Brown, Alex; Dennis, Edward A.; Glass, Christopher K.; Merrill, Alfred H.; Murphy, Robert C.; Raetz, Christian R. H.; Russell, David W.; Subramaniam, Shankar

    2007-01-01

    The LIPID MAPS Structure Database (LMSD) is a relational database encompassing structures and annotations of biologically relevant lipids. Structures of lipids in the database come from four sources: (i) LIPID MAPS Consortium's core laboratories and partners; (ii) lipids identified by LIPID MAPS experiments; (iii) computationally generated structures for appropriate lipid classes; (iv) biologically relevant lipids manually curated from LIPID BANK, LIPIDAT and other public sources. All the lipid structures in LMSD are drawn in a consistent fashion. In addition to a classification-based retrieval of lipids, users can search LMSD using either text-based or structure-based search options. The text-based search implementation supports data retrieval by any combination of these data fields: LIPID MAPS ID, systematic or common name, mass, formula, category, main class, and subclass. The structure-based search, in conjunction with optional data fields, provides the capability to perform a substructure search or exact match for the structure drawn by the user. Search results, in addition to structure and annotations, also include relevant links to external databases. The LMSD is publicly available online. PMID:17098933
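    The AND-combined, multi-field text search described above can be sketched as follows. The field names mirror those listed in the abstract, but the records and the mass-tolerance handling are invented for illustration:

    ```python
    # Toy lipid records; field names follow the abstract, values are invented.
    RECORDS = [
        {"lm_id": "LMFA01010001", "common_name": "palmitic acid",
         "formula": "C16H32O2", "mass": 256.24, "category": "Fatty Acyls"},
        {"lm_id": "LMGP01010005", "common_name": "PC(16:0/16:0)",
         "formula": "C40H80NO8P", "mass": 733.56, "category": "Glycerophospholipids"},
    ]

    def search(records, mass_tolerance=0.01, **criteria):
        """AND-combine any subset of fields; 'mass' matches within a tolerance,
        text fields match case-insensitively as substrings."""
        hits = []
        for rec in records:
            ok = True
            for field, wanted in criteria.items():
                if field == "mass":
                    ok = abs(rec["mass"] - wanted) <= mass_tolerance
                else:
                    ok = wanted.lower() in str(rec[field]).lower()
                if not ok:
                    break
            if ok:
                hits.append(rec["lm_id"])
        return hits

    print(search(RECORDS, category="fatty", mass=256.24))
    ```
    
    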

  6. Missing Modality Transfer Learning via Latent Low-Rank Constraint.

    PubMed

    Ding, Zhengming; Shao, Ming; Fu, Yun

    2015-11-01

    Transfer learning is usually exploited to leverage previously well-learned source domain for evaluating the unknown target domain; however, it may fail if no target data are available in the training stage. This problem arises when the data are multi-modal. For example, the target domain is in one modality, while the source domain is in another. To overcome this, we first borrow an auxiliary database with complete modalities, then consider knowledge transfer across databases and across modalities within databases simultaneously in a unified framework. The contributions are threefold: 1) a latent factor is introduced to uncover the underlying structure of the missing modality from the known data; 2) transfer learning in two directions allows the data alignment between both modalities and databases, giving rise to a very promising recovery; and 3) an efficient solution with theoretical guarantees to the proposed latent low-rank transfer learning algorithm. Comprehensive experiments on multi-modal knowledge transfer with missing target modality verify that our method can successfully inherit knowledge from both auxiliary database and source modality, and therefore significantly improve the recognition performance even when test modality is inaccessible in the training stage.
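    The "latent low-rank constraint" in the title builds on latent low-rank representation. For orientation only, a generic latent low-rank objective (not necessarily the exact formulation of this paper) decomposes the observed data X as

    ```latex
    \min_{Z,\,L,\,E} \;\; \|Z\|_{*} + \|L\|_{*} + \lambda \|E\|_{1}
    \quad \text{s.t.} \quad X = XZ + LX + E
    ```

    where the nuclear norm \|\cdot\|_{*} encourages low rank, Z captures sample-side structure, L captures feature-side structure attributable to unobserved (latent) data, and E absorbs sparse errors.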

  7. Experience theory, or How desserts are like losses.

    PubMed

    Martin, Jolie M; Reimann, Martin; Norton, Michael I

    2016-11-01

    Although many experiments have explored risk preferences for money, few have systematically assessed risk preferences for everyday experiences. We propose a conceptual model and provide convergent evidence from 7 experiments to suggest that, in contrast to a typical "zero" reference point for choices on money, reference points for choices of experiences are set at more extreme outcomes, leading to concave utility for negative experiences but convex utility for positive experiences. As a result, people are more risk-averse for negative experiences such as disgusting foods-as for monetary gains-but more risk-seeking for positive experiences such as desserts-as for monetary losses. These risk preferences for experiences are robust to different methods of elicitation. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  8. Current experiments in elementary particle physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wohl, C.G.; Armstrong, F.E.; Trippe, T.G.

    1989-09-01

    This report contains summaries of 736 current and recent experiments in elementary particle physics (experiments that finished taking data before 1982 are excluded). Included are experiments at Brookhaven, CERN, CESR, DESY, Fermilab, Tokyo Institute of Nuclear Studies, Moscow Institute of Theoretical and Experimental Physics, Joint Institute for Nuclear Research (Dubna), KEK, LAMPF, Novosibirsk, PSI/SIN, Saclay, Serpukhov, SLAC, and TRIUMF, and also several underground experiments. Also given are instructions for searching online the computer database (maintained under the SLAC/SPIRES system) that contains the summaries. Properties of the fixed-target beams at most of the laboratories are summarized.

  9. Searching molecular structure databases with tandem mass spectra using CSI:FingerID

    PubMed Central

    Dührkop, Kai; Shen, Huibin; Meusel, Marvin; Rousu, Juho; Böcker, Sebastian

    2015-01-01

    Metabolites provide a direct functional signature of cellular state. Untargeted metabolomics experiments usually rely on tandem MS to identify the thousands of compounds in a biological sample. Today, the vast majority of metabolites remain unknown. We present a method for searching molecular structure databases using tandem MS data of small molecules. Our method computes a fragmentation tree that best explains the fragmentation spectrum of an unknown molecule. We use the fragmentation tree to predict the molecular structure fingerprint of the unknown compound using machine learning. This fingerprint is then used to search a molecular structure database such as PubChem. Our method is shown to improve on the competing methods for computational metabolite identification by a considerable margin. PMID:26392543
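    The last step of the pipeline above, matching a predicted fingerprint against database fingerprints, can be sketched with Tanimoto similarity over binary vectors. The fingerprints and candidate names below are toy data; the real system predicts fingerprints with machine learning and searches structures in PubChem:

    ```python
    def tanimoto(a, b):
        """Tanimoto similarity of two equal-length binary fingerprints."""
        both = sum(1 for x, y in zip(a, b) if x and y)
        either = sum(1 for x, y in zip(a, b) if x or y)
        return both / either if either else 0.0

    def rank_candidates(predicted, database):
        """Order candidate structures by similarity to the predicted fingerprint."""
        return sorted(database.items(),
                      key=lambda item: tanimoto(predicted, item[1]),
                      reverse=True)

    predicted = [1, 1, 0, 1, 0, 0, 1, 0]      # invented predicted fingerprint
    database = {                               # invented candidate fingerprints
        "candidate_A": [1, 1, 0, 1, 0, 0, 1, 0],
        "candidate_B": [1, 0, 0, 1, 0, 0, 1, 1],
        "candidate_C": [0, 0, 1, 0, 1, 1, 0, 1],
    }
    for name, fp in rank_candidates(predicted, database):
        print(name, round(tanimoto(predicted, fp), 2))
    ```
    
    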

  10. Conversion of a traditional image archive into an image resource on compact disc.

    PubMed Central

    Andrew, S M; Benbow, E W

    1997-01-01

    A traditional archive of pathology images organised on 35 mm slides was converted into a database of images stored on compact disc (CD-ROM), and textual descriptions were added to each image record. Students on a didactic pathology course found this resource useful as an aid to revision, despite relative computer illiteracy, and it is anticipated that students on a new problem based learning course, which incorporates experience with information technology, will benefit even more readily when they use the database as an educational resource. A text and image database on CD-ROM can be updated repeatedly, and the content manipulated to reflect the content and style of the courses it supports. PMID:9306931

  11. Implementing and maintaining a researchable database from electronic medical records: a perspective from an academic family medicine department.

    PubMed

    Stewart, Moira; Thind, Amardeep; Terry, Amanda L; Chevendra, Vijaya; Marshall, J Neil

    2009-11-01

    Electronic medical records (EMRs) are posited as a tool for improving practice, policy and research in primary healthcare. This paper describes the Deliver Primary Healthcare Information (DELPHI) Project at the Department of Family Medicine at the University of Western Ontario, focusing on its development, current status and research potential in order to share experiences with researchers in similar contexts. The project progressed through four stages: (a) participant recruitment, (b) EMR software modification and implementation, (c) database creation and (d) data quality assessment. Currently, the DELPHI database holds more than two years of high-quality, de-identified data from 10 practices, with 30,000 patients and nearly a quarter of a million encounters.

  12. Proceedings of the International Conference (7th) on Machine Learning Held in Austin, Texas on 21-23 June 1990

    DTIC Science & Technology

    1990-06-23

  13. Digital Image Quality And Interpretability: Database And Hardcopy Studies

    NASA Astrophysics Data System (ADS)

    Snyder, H. L.; Maddox, M. E.; Shedivy, D. I.; Turpin, J. A.; Burke, J. J.; Strickland, R. N.

    1982-02-01

    Two hundred fifty transparencies, displaying a new digital database consisting of 25 degraded versions (5 blur levels x 5 noise levels) of each of 10 digitized, first-generation positive transparencies, were used in two experiments involving 15 trained military photointerpreters. Each image is 86 mm square and represents 4096 x 4096 8-bit pixels. In the "interpretation" experiment, each photointerpreter (judge) spent approximately two days extracting essential elements of information (EEIs) from one degraded version of each scene at a constant Gaussian blur level (FWHM = 40, 84, or 322 µm). In the scaling experiment, each judge assigned a numerical value to each of the 250 images, according to its perceived position on a 10-point NATO-standardized scale (0 = useless through 9 = nearly perfect), to the nearest 0.1 unit. Eighty-eight of the 100 possible values were used by the judges, indicating that 62 categories, based on the Shannon-Wiener measure of information, are needed to scale these hardcopy images. The overall correlation between the scaling and interpretation results was 0.9. Though the main effect of blur was not statistically significant in the interpretation experiment, that of noise was significant, and all main factors (blur, noise, scene, order of battle) and most interactions were statistically significant in the scaling experiment.

  14. Seeking health information on the web: positive hypothesis testing.

    PubMed

    Kayhan, Varol Onur

    2013-04-01

    The goal of this study is to investigate positive hypothesis testing among consumers of health information when they search the Web. After demonstrating the extent of positive hypothesis testing using Experiment 1, we conduct Experiment 2 to test the effectiveness of two debiasing techniques. A total of 60 undergraduate students searched a tightly controlled online database developed by the authors to test the validity of a hypothesis. The database had four abstracts that confirmed the hypothesis and three abstracts that disconfirmed it. Findings of Experiment 1 showed that majority of participants (85%) exhibited positive hypothesis testing. In Experiment 2, we found that the recommendation technique was not effective in reducing positive hypothesis testing since none of the participants assigned to this server could retrieve disconfirming evidence. Experiment 2 also showed that the incorporation technique successfully reduced positive hypothesis testing since 75% of the participants could retrieve disconfirming evidence. Positive hypothesis testing on the Web is an understudied topic. More studies are needed to validate the effectiveness of the debiasing techniques discussed in this study and develop new techniques. Search engine developers should consider developing new options for users so that both confirming and disconfirming evidence can be presented in search results as users test hypotheses using search engines. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  15. The field campaigns of the European Tracer Experiment (ETEX). overview and results

    NASA Astrophysics Data System (ADS)

    Nodop, K.; Connolly, R.; Girardi, F.

    As part of the European Tracer Experiment (ETEX), two successful atmospheric experiments were carried out in October and November, 1994. Perfluorocarbon (PFC) tracers were released into the atmosphere in Monterfil, Brittany, and air samples were taken at 168 stations in 17 European countries for 72 h after the release. Upper air tracer measurements were made from three aircraft. During the first experiment a westerly air flow transported the tracer plume north-eastwards across Europe. During the second release the flow was eastwards. The results from the ground sampling network allowed the determination of the cloud evolution as far as Sweden, Poland and Bulgaria. This demonstrated that the PFT technique can be successfully applied in long-range tracer experiments up to 2000 km. Typical background concentrations of the tracer used are around 5-7 fl ℓ⁻¹ in ambient air. Concentrations in the plume ranged from 10 to above 200 fl ℓ⁻¹. The tracer release characteristics, the tracer concentrations at the ground and in upper air, the routine and additional meteorological observations at the ground level and in upper air, trajectories derived from constant-level balloons and the meteorological input fields for long-range transport models are assembled in the ETEX database. The ETEX database is accessible via the Internet. Here, an overview is given of the design of the experiment, the methods used and the data obtained.

  16. Success Probability Analysis for Shuttle Based Microgravity Experiments

    NASA Technical Reports Server (NTRS)

    Liou, Ying-Hsin Andrew

    1996-01-01

    Presented in this report are the results of data analysis of shuttle-based microgravity flight experiments. Potential factors were identified in the previous grant period, and in this period 26 factors were selected for data analysis. In this project, the degree of success was developed and used as the performance measure. 293 of the 391 experiments in the Lewis Research Center Microgravity Database were assigned degrees of success. The frequency analysis and the analysis of variance were conducted to determine the significance of the factors that affect experiment success.

  17. Experiment Document for 01-E077 Microgravity Investigation of Crew Reactions in 0-G (MICRO-G)

    NASA Technical Reports Server (NTRS)

    Newman, Dava J.

    2003-01-01

    The Experiment Document (ED) serves the following purposes: a) It provides a vehicle for Principal Investigators (PIs) to formally specify the requirements for performing their experiments. b) It provides a technical Statement of Work (SOW). c) It provides experiment investigators and hardware developers with a convenient source of information about Human Life Sciences (HLS) requirements for the development and/or integration of flight experiment projects. d) It is the primary source of experiment specifications for the HLS Research Program Office (RPO). Inputs from this document will be placed into a controlled database that will be used to generate other documents.

  18. GLIMS Glacier Database: Status and Challenges

    NASA Astrophysics Data System (ADS)

    Raup, B. H.; Racoviteanu, A.; Khalsa, S. S.; Armstrong, R.

    2008-12-01

    GLIMS (Global Land Ice Measurements from Space) is an international initiative to map the world's glaciers and to build a GIS database that is usable via the World Wide Web. The GLIMS programme includes 70 institutions and 25 Regional Centers (RCs), which analyze satellite imagery to map glaciers in their regions of expertise. The analysis results are collected at the National Snow and Ice Data Center (NSIDC) and ingested into the GLIMS Glacier Database. The database contains approximately 80 000 glacier outlines, half the estimated total on Earth. In addition, the database contains metadata on approximately 200 000 ASTER images acquired over glacierized terrain. Glacier data and the ASTER metadata can be viewed and searched via interactive maps at http://glims.org/. As glacier mapping with GLIMS has progressed, various hurdles have arisen that have required solutions. For example, the GLIMS community has formulated definitions for how to delineate glaciers with different complicated morphologies and how to deal with debris cover. Experiments have been carried out to assess the consistency of the database, and protocols have been defined for the RCs to follow in their mapping. Hurdles still remain. In June 2008, a workshop was convened in Boulder, Colorado to address issues such as mapping debris-covered glaciers, mapping ice divides, and performing change analysis using two different glacier inventories. This contribution summarizes the status of the GLIMS Glacier Database and steps taken to ensure high data quality.

  19. The Mouse Heart Attack Research Tool (mHART) 1.0 Database.

    PubMed

    DeLeon-Pennell, Kristine Y; Iyer, Rugmani Padmanabhan; Ma, Yonggang; Yabluchanskiy, Andriy; Zamilpa, Rogelio; Chiao, Ying Ann; Cannon, Presley; Cates, Courtney; Flynn, Elizabeth R; Halade, Ganesh V; de Castro Bras, Lisandra E; Lindsey, Merry L

    2018-05-18

    The generation of Big Data has enabled systems-level dissections into the mechanisms of cardiovascular pathology. Integration of genetic, proteomic, and pathophysiological variables across platforms and laboratories fosters discoveries through multidisciplinary investigations and minimizes unnecessary redundancy in research efforts. The Mouse Heart Attack Research Tool (mHART) consolidates a large dataset of over 10 years of experiments from a single laboratory for cardiovascular investigators to generate novel hypotheses and identify new predictive markers of progressive left ventricular remodeling following myocardial infarction (MI) in mice. We designed the mHART REDCap database using our own data to integrate cardiovascular community participation. We generated physiological, biochemical, cellular, and proteomic outputs from plasma and left ventricles obtained from post-MI and no MI (naïve) control groups. We included both male and female mice ranging in age from 3 to 36 months old. After variable collection, data underwent quality assessment for data curation (e.g. eliminate technical errors, check for completeness, remove duplicates, and define terms). Currently, mHART 1.0 contains >888,000 data points and includes results from >2,100 unique mice. Database performance was tested and an example provided to illustrate database utility. This report explains how the first version of the mHART database was established and provides researchers with a standard framework to aid in the integration of their data into our database or in the development of a similar database.

  20. International Space Station Science Information for Public Release on the NASA Web Portal

    NASA Technical Reports Server (NTRS)

    Robinson, Julie A.; Tate, Judy M.

    2009-01-01

    This document contains some of the payload and experiment descriptions related to life support and habitation; these describe experiments that have flown or are scheduled to fly on the International Space Station. It also gives instructions and descriptions of the fields that make up the database. The document is arranged in alphabetical order by payload.

  1. The Ongoing Impact of the U.S. Fast Reactor Integral Experiments Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess; Michael A. Pope; Harold F. McFarlane

    2012-11-01

    The creation of a large database of integral fast reactor physics experiments advanced nuclear science and technology in ways that were unachievable by less capital-intensive and operationally challenging approaches. The experiments enabled the compilation of integral physics benchmark data, validated (or not) analytical methods, and provided assurance for future reactor designs. The integral experiments performed at Argonne National Laboratory (ANL) represent decades of research performed to support fast reactor design and our understanding of neutronics behavior and reactor physics measurements. Experiments began in 1955 with the Zero Power Reactor No. 3 (ZPR-3) and terminated with the Zero Power Physics Reactor (ZPPR, originally the Zero Power Plutonium Reactor) in 1990 at the former ANL-West site in Idaho, which is now part of the Idaho National Laboratory (INL). Two additional critical assemblies, ZPR-6 and ZPR-9, operated at the ANL-East site in Illinois. A total of 128 fast reactor assemblies were constructed with these facilities [1]. The infrastructure and measurement capabilities are too expensive to be replicated in the modern era, making the integral database invaluable as the world pushes ahead with development of liquid metal cooled reactors.

  2. HOPE: An On-Line Piloted Handling Qualities Experiment Data Book

    NASA Technical Reports Server (NTRS)

    Jackson, E. B.; Proffitt, Melissa S.

    2010-01-01

    A novel on-line database for capturing most of the information obtained during piloted handling qualities experiments (either flight or simulated) is described. The Hyperlinked Overview of Piloted Evaluations (HOPE) web application is based on an open-source object-oriented Web-based front end (Ruby-on-Rails) that can be used with a variety of back-end relational database engines. The hyperlinked, on-line data book approach allows an easily-traversed way of looking at a variety of collected data, including pilot ratings, pilot information, vehicle and configuration characteristics, test maneuvers, and individual flight test cards and repeat runs. It allows for on-line retrieval of pilot comments, both audio and transcribed, as well as time history data retrieval and video playback. Pilot questionnaires are recorded as are pilot biographies. Simple statistics are calculated for each selected group of pilot ratings, allowing multiple ways to aggregate the data set (by pilot, by task, or by vehicle configuration, for example). Any number of per-run or per-task metrics can be captured in the database. The entire run metrics dataset can be downloaded in comma-separated text for further analysis off-line. It is expected that this tool will be made available upon request

  3. Multireference - Møller-Plesset Perturbation Theory Results on Levels and Transition Rates in Al-like Ions of Iron Group Elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santana, J A; Ishikawa, Y; Träbert, E

    2009-02-26

    Ground configuration and low-lying levels of Al-like ions contribute to a variety of laboratory and solar spectra, but the available information in databases is neither complete nor necessarily correct. We have performed multireference Møller-Plesset perturbation theory calculations that approach spectroscopic accuracy in order to check the information that databases hold on the 40 lowest levels of Al-like ions of iron group elements (K through Ge), and to provide input for the interpretation of concurrent experiments. Our results indicate problems in the database holdings on the lowest quartet levels in the lighter elements of the range studied. The results of our calculations of the decay rates of five long-lived levels (3s²3p ²P°3/2, the 3s3p² ⁴P°J levels, and 3s3p3d ⁴F°9/2) are compared with lifetime data from beam-foil, electron beam ion trap and heavy-ion storage ring experiments.

  4. Determination of Detection Limits and Quantitation Limits for Compounds in a Database of GC/MS by FUMI Theory

    PubMed Central

    Nakashima, Shinya; Hayashi, Yuzuru

    2016-01-01

    The aim of this paper is to propose a stochastic method for estimating the detection limits (DLs) and quantitation limits (QLs) of compounds registered in a database of a GC/MS system and to prove its validity with experiments. The approach described in ISO 11843 Part 7 is adopted here as a means of estimating DL and QL, and decafluorotriphenylphosphine (DFTPP) tuning and retention time locking are carried out to adjust the system. Coupled with the data obtained from the system adjustment experiments, the information (noise and signal of chromatograms and calibration curves) stored in the database is used for the stochastic estimation, dispensing with repeated measurements. For sixty-six pesticides, the DL values obtained by the ISO method were compared with those from the statistical approach, and the correlation between them was excellent, with a correlation coefficient of 0.865. The accuracy of the proposed method was also examined and found to be satisfactory. The samples used are commercial products of pesticide mixtures, and the uncertainty from sample preparation processes is not taken into account. PMID:27162706
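    FUMI theory's contribution is predicting the measurement standard deviation from chromatogram noise without repeated measurements; that step is not reproduced here. As a simplified stand-in, once a noise standard deviation (sigma) and a calibration slope (S) are in hand, the conventional calibration-based limits DL = 3.3·sigma/S and QL = 10·sigma/S can be sketched (the numeric inputs below are invented):

    ```python
    # Conventional sigma/slope limits; NOT the FUMI noise model itself.
    def detection_limit(sigma, slope):
        """DL = 3.3 * sigma / S (ICH-style convention)."""
        return 3.3 * sigma / slope

    def quantitation_limit(sigma, slope):
        """QL = 10 * sigma / S (ICH-style convention)."""
        return 10.0 * sigma / slope

    sigma = 120.0   # invented noise SD, peak-area units
    slope = 8.0e4   # invented calibration slope, area per (ng/mL)
    print(detection_limit(sigma, slope))     # concentration units, ng/mL
    print(quantitation_limit(sigma, slope))  # concentration units, ng/mL
    ```
    
    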

  5. The Chinese Lexicon Project: A megastudy of lexical decision performance for 25,000+ traditional Chinese two-character compound words.

    PubMed

    Tse, Chi-Shing; Yap, Melvin J; Chan, Yuen-Lai; Sze, Wei Ping; Shaoul, Cyrus; Lin, Dan

    2017-08-01

    Using a megastudy approach, we developed a database of lexical variables and lexical decision reaction times and accuracy rates for more than 25,000 traditional Chinese two-character compound words. Each word was responded to by about 33 native Cantonese speakers in Hong Kong. This resource provides a valuable adjunct to influential mega-databases, such as the Chinese single-character, English, French, and Dutch Lexicon Projects. Three analyses were conducted to illustrate the potential uses of the database. First, we compared the proportion of variance in lexical decision performance accounted for by six word frequency measures and established that the best predictor was Cai and Brysbaert's (PLoS One, 5, e10729, 2010) contextual diversity subtitle frequency. Second, we ran virtual replications of three previously published lexical decision experiments and found convergence between the original experiments and the present megastudy. Finally, we conducted item-level regression analyses to examine the effects of theoretically important lexical variables in our normative data. This is the first publicly available large-scale repository of behavioral responses pertaining to Chinese two-character compound word processing, which should be of substantial interest to psychologists, linguists, and other researchers.
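    An item-level regression like the one described, reaction time regressed on log word frequency, can be sketched with the closed-form ordinary least squares slope and intercept. The frequencies and reaction times below are invented toy data, not values from the database:

    ```python
    import math

    # Toy item-level data: word frequency and mean lexical decision RT (ms).
    freq = [1, 10, 100, 1000, 10000]
    rt   = [900.0, 840.0, 780.0, 720.0, 660.0]

    # Closed-form OLS of RT on log10(frequency).
    x = [math.log10(f) for f in freq]
    n = len(x)
    mx = sum(x) / n
    my = sum(rt) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, rt)) / \
            sum((xi - mx) ** 2 for xi in x)
    intercept = my - slope * mx
    print(slope, intercept)  # ms change in RT per tenfold frequency increase
    ```

    The negative slope reproduces the usual frequency effect: more frequent words are recognized faster.
    
    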

  6. Evaluating and improving pressure ulcer care: the VA experience with administrative data.

    PubMed

    Berlowitz, D R; Halpern, J

    1997-08-01

    A number of state initiatives are using databases originally developed for nursing home reimbursements to assess the quality of care. Since 1991 the Department of Veterans Affairs (VA; Washington, DC) has been using a long term care administrative database to calculate facility-specific rates of pressure ulcer development. This information is disseminated to all 140 long term care facilities as part of a quality assessment and improvement program. Assessments are performed on all long term care residents on April 1 and October 1, as well as at the time of admission or transfer to a long term care unit. Approximately 18,000 long term care residents are evaluated in each six-month period; the VA rate of pressure ulcer development is approximately 3.5%. Reports of the rates of pressure ulcer development are then disseminated to all facilities, generally within two months of the assessment date. The VA's more than five years' experience in using administrative data to assess outcomes for long term care highlights several important issues that should be considered when using outcome measures based on administrative data. These include the importance of carefully selecting the outcome measure, the need to consider the structure of the database, the role of case-mix adjustment, strategies for reporting rates to small facilities, and methods for information dissemination. Attention to these issues will help ensure that results from administrative databases lead to improvements in the quality of care.

  7. LHCb Conditions database operation assistance systems

    NASA Astrophysics Data System (ADS)

    Clemencic, M.; Shapoval, I.; Cattaneo, M.; Degaudenzi, H.; Santinelli, R.

    2012-12-01

    The Conditions Database (CondDB) of the LHCb experiment provides versioned, time dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger (HLT), reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues. The first system is a CondDB state tracking extension to the Oracle 3D Streams replication technology, to trap cases when the CondDB replication was corrupted. Second, an automated distribution system for the SQLite-based CondDB, providing also smart backup and checkout mechanisms for the CondDB managers and LHCb users respectively. And, finally, a system to verify and monitor the internal (CondDB self-consistency) and external (LHCb physics software vs. CondDB) compatibility. The former two systems are used in production in the LHCb experiment and have achieved the desired goal of higher flexibility and robustness for the management and operation of the CondDB. The latter has been fully designed and is currently moving into the implementation stage.

  8. ARIADNE: a Tracking System for Relationships in LHCb Metadata

    NASA Astrophysics Data System (ADS)

    Shapoval, I.; Clemencic, M.; Cattaneo, M.

    2014-06-01

    The data processing model of the LHCb experiment implies handling of an evolving set of heterogeneous metadata entities and relationships between them. The entities range from software and databases states to architecture specifications and software/data deployment locations. For instance, there is an important relationship between the LHCb Conditions Database (CondDB), which provides versioned, time dependent geometry and conditions data, and the LHCb software, which comprises the data processing applications (used for simulation, high level triggering, reconstruction and analysis of physics data). The evolution of CondDB and of the LHCb applications is a weakly-homomorphic process. It means that relationships between a CondDB state and LHCb application state may not be preserved across different database and application generations. These issues may lead to various kinds of problems in the LHCb production, varying from unexpected application crashes to incorrect data processing results. In this paper we present Ariadne - a generic metadata relationships tracking system based on the novel NoSQL Neo4j graph database. Its aim is to track and analyze many thousands of evolving relationships for cases such as the one described above, and several others, which would otherwise remain unmanaged and potentially harmful. The highlights of the paper include the system's implementation and management details, infrastructure needed for running it, security issues, first experience of usage in the LHCb production and potential of the system to be applied to a wider set of LHCb tasks.
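    Metadata relationship tracking of the kind described reduces to queries over a labeled directed graph. A toy sketch using a plain dictionary as the edge store (the real system uses Neo4j; the node and edge labels below are invented):

    ```python
    # Toy edge store: (source, target) -> relationship label.
    # Entity names are invented examples of application/database states.
    edges = {
        ("app-v1", "conddb-tag-A"): "compatible",
        ("app-v2", "conddb-tag-A"): "compatible",
        ("app-v2", "conddb-tag-B"): "compatible",
    }

    def compatible_tags(app):
        """Which database tags is this application version known to work with?"""
        return sorted(tag for (a, tag), rel in edges.items()
                      if a == app and rel == "compatible")

    def apps_for_tag(tag):
        """Which application versions can safely process data under this tag?"""
        return sorted(a for (a, t), rel in edges.items()
                      if t == tag and rel == "compatible")

    print(compatible_tags("app-v2"))
    print(apps_for_tag("conddb-tag-A"))
    ```

    A graph database replaces these linear scans with indexed traversals, which matters once the edge set grows to the "many thousands of evolving relationships" the paper targets.
    
    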

  9. APPLYING DATA MINING APPROACHES TO FURTHER UNDERSTANDING CHEMICAL EFFECTS ON BIOLOGICAL SYSTEMS.

    EPA Science Inventory

    Correlations of bioassays and toxicity cannot be assessed at the compound level with the current toxicity database. Further work is planned for gaining molecular-level knowledge from these experiments.

  10. POLLUX: a program for simulated cloning, mutagenesis and database searching of DNA constructs.

    PubMed

    Dayringer, H E; Sammons, S A

    1991-04-01

    Computer support for research in biotechnology has developed rapidly and has provided several tools to aid the researcher. This report describes the capabilities of new computer software developed in this laboratory to aid in the documentation and planning of experiments in molecular biology. The program, POLLUX, provides a graphical medium for the entry, editing and manipulation of DNA constructs and a textual format for the display and editing of construct descriptive data. Program operation and procedures are designed to mimic the actual laboratory experiments with respect to capability and the order in which they are performed. Flexible control over the content of the computer-generated displays and program facilities is provided by a mouse-driven menu interface. Programmed facilities for mutagenesis, simulated cloning and searching of the database from networked workstations are described.

  11. dictyExpress: a Dictyostelium discoideum gene expression database with an explorative data analysis web-based interface.

    PubMed

    Rot, Gregor; Parikh, Anup; Curk, Tomaz; Kuspa, Adam; Shaulsky, Gad; Zupan, Blaz

    2009-08-25

    Bioinformatics often leverages recent advances in computer science to support biologists in their scientific discovery process. Such efforts include the development of easy-to-use web interfaces to biomedical databases. Recent advancements in interactive web technologies require us to rethink the standard submit-and-wait paradigm, and craft bioinformatics web applications that share analytical and interactive power with their desktop relatives, while retaining simplicity and availability. We have developed dictyExpress, a web application that features a graphical, highly interactive explorative interface to our database that consists of more than 1000 Dictyostelium discoideum gene expression experiments. In dictyExpress, the user can select experiments and genes, perform gene clustering, view gene expression profiles across time, view gene co-expression networks, perform analyses of Gene Ontology term enrichment, and simultaneously display expression profiles for a selected gene in various experiments. Most importantly, these tasks are achieved through web applications whose components are seamlessly interlinked and immediately respond to events triggered by the user, thus providing a powerful explorative data analysis environment. dictyExpress is a precursor for a new generation of web-based bioinformatics applications with simple but powerful interactive interfaces that resemble those of modern desktop applications. While dictyExpress serves mainly the Dictyostelium research community, it is relatively easy to adapt it to other datasets. We propose that the design ideas behind dictyExpress will influence the development of similar applications for other model organisms.

  12. dictyExpress: a Dictyostelium discoideum gene expression database with an explorative data analysis web-based interface

    PubMed Central

    Rot, Gregor; Parikh, Anup; Curk, Tomaz; Kuspa, Adam; Shaulsky, Gad; Zupan, Blaz

    2009-01-01

    Background Bioinformatics often leverages recent advances in computer science to support biologists in their scientific discovery process. Such efforts include the development of easy-to-use web interfaces to biomedical databases. Recent advancements in interactive web technologies require us to rethink the standard submit-and-wait paradigm, and craft bioinformatics web applications that share analytical and interactive power with their desktop relatives, while retaining simplicity and availability. Results We have developed dictyExpress, a web application that features a graphical, highly interactive explorative interface to our database that consists of more than 1000 Dictyostelium discoideum gene expression experiments. In dictyExpress, the user can select experiments and genes, perform gene clustering, view gene expression profiles across time, view gene co-expression networks, perform analyses of Gene Ontology term enrichment, and simultaneously display expression profiles for a selected gene in various experiments. Most importantly, these tasks are achieved through web applications whose components are seamlessly interlinked and immediately respond to events triggered by the user, thus providing a powerful explorative data analysis environment. Conclusion dictyExpress is a precursor for a new generation of web-based bioinformatics applications with simple but powerful interactive interfaces that resemble those of modern desktop applications. While dictyExpress serves mainly the Dictyostelium research community, it is relatively easy to adapt it to other datasets. We propose that the design ideas behind dictyExpress will influence the development of similar applications for other model organisms. PMID:19706156

  13. Adolescents' and young adults' transition experiences when transferring from paediatric to adult care: a qualitative metasynthesis.

    PubMed

    Fegran, Liv; Hall, Elisabeth O C; Uhrenfeldt, Lisbeth; Aagaard, Hanne; Ludvigsen, Mette Spliid

    2014-01-01

    The objective of this study was to synthesize qualitative studies of how adolescents and young adults with chronic diseases experience the transition from paediatric to adult hospital care. The review is designed as a qualitative metasynthesis and follows Sandelowski and Barroso's guidelines for synthesizing qualitative research. Literature searches were conducted in the databases PubMed, Ovid, Scopus, Cumulative Index to Nursing and Allied Health Literature (CINAHL), ISI Web of Science, and Nordic and German databases covering the period from 1999 to November 2010. In addition, forward citation snowball searching was conducted in the databases Ovid, CINAHL, ISI Web of Science, Scopus and Google Scholar. Of the 1143 records screened, 18 studies were included. Inclusion criteria were qualitative studies in English, German or Nordic languages on adolescents' and young adults' transition experiences when transferring from paediatric to adult care. There was no age limit, provided the focus was on the actual transfer process and participants had a chronic somatic disease. The studies were appraised as suitable for inclusion using a published appraisal tool. Data were analyzed into metasummaries and a metasynthesis according to established guidelines for synthesis of qualitative research. Four themes illustrating experiences of loss of familiar surroundings and relationships, combined with insecurity and a feeling of being unprepared for what was ahead, were identified: facing changes in significant relationships, moving from a familiar to an unknown ward culture, being prepared for transfer and achieving responsibility. Young adults' transition experiences seem to be comparable across diagnoses. Feelings of not belonging and of being redundant during the transfer process are striking. Health care professionals' appreciation of young adults' need to be acknowledged and valued as competent collaborators in their own transfer is crucial, and may protect them from additional health problems during a vulnerable phase. Further research including participants across various cultures and health care systems is needed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. BloodSpot: a database of gene expression profiles and transcriptional programs for healthy and malignant haematopoiesis

    PubMed Central

    Bagger, Frederik Otzen; Sasivarevic, Damir; Sohi, Sina Hadi; Laursen, Linea Gøricke; Pundhir, Sachin; Sønderby, Casper Kaae; Winther, Ole; Rapin, Nicolas; Porse, Bo T.

    2016-01-01

    Research on human and murine haematopoiesis has resulted in a vast number of gene-expression data sets that can potentially answer questions regarding normal and aberrant blood formation. To researchers and clinicians with limited bioinformatics experience, these data have remained available, yet largely inaccessible. Current databases provide information about gene-expression but fail to answer key questions regarding co-regulation, genetic programs or effect on patient survival. To address these shortcomings, we present BloodSpot (www.bloodspot.eu), which includes and greatly extends our previously released database HemaExplorer, a database of gene expression profiles from FACS sorted healthy and malignant haematopoietic cells. A revised interactive interface simultaneously provides a plot of gene expression along with a Kaplan–Meier analysis and a hierarchical tree depicting the relationship between different cell types in the database. The database now includes 23 high-quality curated data sets relevant to normal and malignant blood formation and, in addition, we have assembled and built a unique integrated data set, BloodPool. BloodPool contains more than 2000 samples assembled from six independent studies on acute myeloid leukemia. Furthermore, we have devised a robust sample integration procedure that allows for sensitive comparison of user-supplied patient samples in a well-defined haematopoietic cellular space. PMID:26507857

  15. Challenges and Experiences of Building Multidisciplinary Datasets across Cultures

    NASA Astrophysics Data System (ADS)

    Jamiyansharav, K.; Laituri, M.; Fernandez-Gimenez, M.; Fassnacht, S. R.; Venable, N. B. H.; Allegretti, A. M.; Reid, R.; Baival, B.; Jamsranjav, C.; Ulambayar, T.; Linn, S.; Angerer, J.

    2017-12-01

    Efficient data sharing and management are key challenges for multidisciplinary scientific research. These challenges are further complicated by adding a multicultural component. We address the construction of a complex database for social-ecological analysis in Mongolia. Funded by the National Science Foundation (NSF) Dynamics of Coupled Natural and Human (CNH) Systems program, the Mongolian Rangelands and Resilience (MOR2) project focuses on the vulnerability of Mongolian pastoral systems to climate change and their adaptive capacity. The MOR2 study spans three years of fieldwork in 36 paired districts (Soum) from 18 provinces (Aimag) of Mongolia, covering the steppe, mountain forest steppe, desert steppe and eastern steppe ecological zones. Our project team is composed of hydrologists, social scientists, geographers, and ecologists. The MOR2 database includes multiple ecological, social, meteorological, geospatial and hydrological datasets, as well as archives of original data and surveys in multiple formats. Managing this complex database requires significant organizational skill, attention to detail and the ability to communicate with team members from diverse disciplines across multiple institutions in the US and Mongolia. We describe the database's rich content, organization, structure and complexity. We discuss lessons learned, best practices and recommendations for complex database management, sharing, and archiving in creating a cross-cultural, multi-disciplinary database.

  16. GestuRe and ACtion Exemplar (GRACE) video database: stimuli for research on manners of human locomotion and iconic gestures.

    PubMed

    Aussems, Suzanne; Kwok, Natasha; Kita, Sotaro

    2018-06-01

    Human locomotion is a fundamental class of events, and manners of locomotion (e.g., how the limbs are used to achieve a change of location) are commonly encoded in language and gesture. To our knowledge, there is no openly accessible database containing normed human locomotion stimuli. Therefore, we introduce the GestuRe and ACtion Exemplar (GRACE) video database, which contains 676 videos of actors performing novel manners of human locomotion (i.e., moving from one location to another in an unusual manner) and videos of a female actor producing iconic gestures that represent these actions. The usefulness of the database was demonstrated across four norming experiments. First, our database contains clear matches and mismatches between iconic gesture videos and action videos. Second, the male and female actors whose action videos best matched the gestures perform the same actions in very similar manners and different actions in highly distinct manners. Third, all the actions in the database are distinct from each other. Fourth, adult native English speakers were unable to describe the 26 different actions concisely, indicating that the actions are unusual. This normed stimulus set is useful for experimental psychologists working in language, gesture, visual perception, categorization, memory, and other related domains.

  17. JASPAR 2010: the greatly expanded open-access database of transcription factor binding profiles

    PubMed Central

    Portales-Casamar, Elodie; Thongjuea, Supat; Kwon, Andrew T.; Arenillas, David; Zhao, Xiaobei; Valen, Eivind; Yusuf, Dimas; Lenhard, Boris; Wasserman, Wyeth W.; Sandelin, Albin

    2010-01-01

    JASPAR (http://jaspar.genereg.net) is the leading open-access database of matrix profiles describing the DNA-binding patterns of transcription factors (TFs) and other proteins interacting with DNA in a sequence-specific manner. Its fourth major release is the largest expansion of the core database to date: the database now holds 457 non-redundant, curated profiles. The new entries include the first batch of profiles derived from ChIP-seq and ChIP-chip whole-genome binding experiments, and 177 yeast TF binding profiles. The introduction of a yeast division brings the convenience of JASPAR to an active research community. As binding models are refined by newer data, the JASPAR database now uses versioning of matrices: in this release, 12% of the older models were updated to improved versions. Classification of TF families has been improved by adopting a new DNA-binding domain nomenclature. A curated catalog of mammalian TFs is provided, extending the use of the JASPAR profiles to additional TFs belonging to the same structural family. The changes in the database set the system ready for more rapid acquisition of new high-throughput data sources. Additionally, three new special collections provide matrix profile data produced by recent alternative high-throughput approaches. PMID:19906716
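    As a rough illustration of how such matrix profiles are used in practice, a position frequency matrix (counts per base per position) can be converted into a log-odds position weight matrix and used to score candidate binding sites. The counts below are invented for illustration, not an actual JASPAR entry:

```python
import math

# Toy position frequency matrix (counts) for a 4-base site; the numbers
# are illustrative, not a real JASPAR profile.
pfm = {
    "A": [20,  1,  1, 18],
    "C": [ 1, 20,  1,  1],
    "G": [ 1,  1, 20,  1],
    "T": [ 1,  1,  1,  3],
}

def log_odds(pfm, background=0.25):
    """Convert counts to a log2-odds position weight matrix,
    assuming a uniform background base frequency."""
    length = len(next(iter(pfm.values())))
    totals = [sum(pfm[b][i] for b in "ACGT") for i in range(length)]
    return {b: [math.log2(pfm[b][i] / totals[i] / background)
                for i in range(length)]
            for b in "ACGT"}

def score(pwm, seq):
    """Score a sequence window of matrix length against the PWM."""
    return sum(pwm[base][i] for i, base in enumerate(seq))

pwm = log_odds(pfm)
print(score(pwm, "ACGA"))   # consensus-like site: high score
print(score(pwm, "TTTT"))   # off-consensus site: low score
```

    Real profiles additionally use pseudocounts and non-uniform backgrounds; the sketch shows only the scoring principle.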

  18. A comprehensive and scalable database search system for metaproteomics.

    PubMed

    Chatterjee, Sandip; Stupp, Gregory S; Park, Sung Kyu Robin; Ducom, Jean-Christophe; Yates, John R; Su, Andrew I; Wolan, Dennis W

    2016-08-16

    Mass spectrometry-based shotgun proteomics experiments rely on accurate matching of experimental spectra against a database of protein sequences. Existing computational analysis methods are limited in the size of their sequence databases, which severely restricts the proteomic sequencing depth and functional analysis of highly complex samples. The growing amount of public high-throughput sequencing data will only exacerbate this problem. We designed a broadly applicable metaproteomic analysis method (ComPIL) that addresses protein database size limitations. Our approach to overcome this significant limitation in metaproteomics was to design a scalable set of sequence databases assembled for optimal library querying speeds. ComPIL was integrated with a modified version of the search engine ProLuCID (termed "Blazmass") to permit rapid matching of experimental spectra. Proof-of-principle analysis of human HEK293 lysate with a ComPIL database derived from high-quality genomic libraries was able to detect nearly all of the same peptides as a search with a human database (~500x fewer peptides in the database), with a small reduction in sensitivity. We were also able to detect proteins from the adenovirus used to immortalize these cells. We applied our method to a set of healthy human gut microbiome proteomic samples and showed a substantial increase in the number of identified peptides and proteins compared to previous metaproteomic analyses, while retaining a high degree of protein identification accuracy and allowing for a more in-depth characterization of the functional landscape of the samples. The combination of ComPIL with Blazmass allows proteomic searches to be performed with database sizes much larger than previously possible. These large database searches can be applied to complex meta-samples with unknown composition or proteomic samples where unexpected proteins may be identified. 
The protein database, proteomic search engine, and the proteomic data files for the 5 microbiome samples characterized and discussed herein are open source and available for use and additional analysis.
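    The scalable-database idea can be sketched as hash-partitioning a peptide-to-protein index so that each lookup touches exactly one shard. This is a toy stand-in, not the actual ComPIL/Blazmass implementation; the peptide strings and accession ids are illustrative:

```python
import hashlib

N_SHARDS = 4

def shard_of(peptide):
    """Deterministically map a peptide sequence to a shard index."""
    digest = hashlib.md5(peptide.encode()).hexdigest()
    return int(digest, 16) % N_SHARDS

# Each shard maps peptide -> set of source protein accessions.
shards = [{} for _ in range(N_SHARDS)]

def index_peptide(peptide, protein_id):
    shards[shard_of(peptide)].setdefault(peptide, set()).add(protein_id)

def lookup(peptide):
    """Only one shard needs to be consulted per query."""
    return shards[shard_of(peptide)].get(peptide, set())

index_peptide("AEFVEVTK", "P00330")
index_peptide("AEFVEVTK", "Q9XYZ1")    # shared peptide, two proteins
index_peptide("LVNELTEFAK", "P02768")

print(sorted(lookup("AEFVEVTK")))
```

    Because the shard is a pure function of the peptide, shards can live on separate machines and grow independently, which is the property that lets the index scale far beyond a single-database search.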

  19. An integrated image matching algorithm and its application in the production of a lunar map based on Chang'E-2 images

    NASA Astrophysics Data System (ADS)

    Wang, F.; Ren, X.; Liu, J.; Li, C.

    2012-12-01

    An accurate topographic map is a requisite for nearly every phase of research on the lunar surface, as well as an essential tool for spacecraft mission planning and operation. Automatic image matching is a key component in this process that can ensure both quality and efficiency in the production of a digital topographic map covering the whole Moon. It also provides the basis for lunar photogrammetric block adjustment. Image matching is relatively easy when image texture is good. However, on lunar images with characteristics such as constantly changing lighting conditions, large rotation angles, sparse or homogeneous texture and low contrast, it becomes a difficult and challenging job. We therefore require a robust algorithm capable of dealing with lighting effects and image deformation. In order to obtain a comprehensive review of the currently dominant feature point extraction operators and test whether they are suitable for lunar images, we applied several operators, such as Harris, Forstner, Moravec and SIFT, to images from the Chang'E-2 spacecraft. We found that SIFT (Scale-Invariant Feature Transform) is a scale-invariant interest point detector that provides robustness against errors caused by image distortions from changes in scale, orientation or illumination. Meanwhile, its capability in detecting blob-like interest points suits the image characteristics of Chang'E-2. However, its unevenly distributed and less accurate matching results cannot meet the practical requirements of lunar photogrammetry. In contrast, high-precision corner detectors such as Harris, Forstner and Moravec are limited by their sensitivity to geometric rotation. Therefore, this paper proposes a least squares matching algorithm that combines the advantages of both local feature detectors and corner detectors. We tested this method at several sites. The accuracy assessment shows that the overall matching error is within 0.3 pixel and the matching reliability reaches 98%, which proves its robustness. This method has been successfully applied to over 700 scenes of lunar images covering the entire Moon, finding corresponding pixels in pairs of images from adjacent tracks and aiding automatic lunar image mosaicking. The completion of the 7 meter resolution lunar map shows the promise of this least squares matching algorithm in applications with large quantities of images to be processed.
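    The least squares matching principle can be illustrated in one dimension: estimate the sub-pixel shift between a template and a search window by iterating the linearized least-squares solution. This is a simplified sketch of the general technique (Lucas-Kanade style), not the paper's full algorithm, and the Gaussian test signal is invented:

```python
import numpy as np

def ls_match_shift(template, window, iters=10):
    """Estimate a sub-pixel shift d such that window(x) ~ template(x + d),
    by repeatedly solving the linearized least-squares normal equation."""
    x = np.arange(len(template), dtype=float)
    d = 0.0
    for _ in range(iters):
        shifted = np.interp(x + d, x, template)   # template resampled at x + d
        grad = np.gradient(shifted)               # Jacobian d(shifted)/dd
        residual = window - shifted
        d += np.sum(grad * residual) / np.sum(grad * grad)
    return d

# Synthetic test: a smooth feature and a sub-pixel-shifted copy of it.
x = np.arange(64, dtype=float)
template = np.exp(-0.5 * ((x - 30.0) / 4.0) ** 2)
true_shift = 1.4
window = np.exp(-0.5 * ((x - 30.0 + true_shift) / 4.0) ** 2)

print(ls_match_shift(template, window))   # converges toward true_shift
```

    The 2-D version used in photogrammetry solves for an affine geometric transform plus radiometric gain and offset in the same least-squares framework, but the iteration structure is the same.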

  20. Morphologies of Primary Silicon in Hypereutectic Al-Si Alloys: Phase-Field Simulation Supported by Key Experiments

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Wei, Ming; Zhang, Lijun; Du, Yong

    2016-04-01

    We realized a three-dimensional visualization of the morphology evolution and growth behavior of octahedral primary silicon in a hypereutectic Al-20 wt pct Si alloy during solidification, at real length scale, by utilizing phase-field simulation coupled with CALPHAD databases and supported by key experiments. Moreover, through two-dimensional cuts of the octahedral primary silicon at random angles, the different morphologies observed in experiments, including triangle, square, trapezoid, rhombus, pentagon, and hexagon, were well reproduced.

  1. GeneLab: Scientific Partnerships and an Open-Access Database to Maximize Usage of Omics Data from Space Biology Experiments

    NASA Technical Reports Server (NTRS)

    Reinsch, S. S.; Galazka, J..; Berrios, D. C; Chakravarty, K.; Fogle, H.; Lai, S.; Bokyo, V.; Timucin, L. R.; Tran, P.; Skidmore, M.

    2016-01-01

    NASA's mission includes expanding our understanding of biological systems to improve life on Earth and to enable long-duration human exploration of space. The GeneLab Data System (GLDS) is NASA's premier open-access omics data platform for biological experiments. GLDS houses standards-compliant, high-throughput sequencing and other omics data from spaceflight-relevant experiments. The GeneLab project at NASA Ames Research Center is developing the database, and is also partnering with spaceflight projects, through sharing or augmentation of experiment samples, to expand omics analyses on precious spaceflight samples. The partnerships ensure that the maximum amount of data is garnered from spaceflight experiments and made publicly available as rapidly as possible via the GLDS. GLDS Version 1.0 went online in April 2015. Software updates and new data releases occur at least quarterly. As of October 2016, the GLDS contains 80 datasets and has search and download capabilities. Version 2.0 is slated for release in September 2017 and will have expanded, integrated search capabilities leveraging other public omics databases (NCBI GEO, PRIDE, MG-RAST). Future versions in this multi-phase project will provide a collaborative platform for omics data analysis. Data from experiments that explore the biological effects of the spaceflight environment on a wide variety of model organisms are housed in the GLDS, including data from rodents, invertebrates, plants and microbes. Human datasets are currently limited to those with anonymized data (e.g., from cultured cell lines). GeneLab ensures prompt release and open access to high-throughput genomics, transcriptomics, proteomics, and metabolomics data from spaceflight and from ground-based simulations of microgravity, radiation or other space environment factors. The data are meticulously curated to assure that accurate experimental and sample processing metadata are included with each data set. GLDS download volumes indicate strong interest of the scientific community in these data. To date GeneLab has partnered with multiple experiments, including two plant (Arabidopsis thaliana) experiments, two mouse experiments, and several microbe experiments. In the rodent partnerships, GeneLab optimized protocols for maximum yield of RNA, DNA and protein from tissues harvested and preserved during the SpaceX-4 mission, as well as from tissues of mice that were frozen intact during spaceflight and later dissected on the ground. Analysis of GeneLab data will contribute fundamental knowledge of how the space environment affects biological systems, as well as yield terrestrial benefits resulting from mitigation strategies to prevent effects observed during exposure to space environments.

  2. Differentiating signals to make biological sense - A guide through databases for MS-based non-targeted metabolomics.

    PubMed

    Gil de la Fuente, Alberto; Grace Armitage, Emily; Otero, Abraham; Barbas, Coral; Godzien, Joanna

    2017-09-01

    Metabolite identification is one of the most challenging steps in metabolomics studies and reflects one of the greatest bottlenecks in the entire workflow. The success of this step determines the success of the entire research, therefore the quality at which annotations are given requires special attention. A variety of tools and resources are available to aid metabolite identification or annotation, offering different and often complementary functionalities. In preparation for this article, almost 50 databases were reviewed, from which 17 were selected for discussion, chosen for their online ESI-MS functionality. The general characteristics and functions of each database are discussed in turn, considering the advantages and limitations of each along with recommendations for optimal use of each tool, as derived from experiences encountered at the Centre for Metabolomics and Bioanalysis (CEMBIO) in Madrid. These databases were evaluated considering their utility in non-targeted metabolomics, including aspects such as identifier assignment, structural assignment and interpretation of results. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. The European Radiobiology Archives (ERA)--content, structure and use illustrated by an example.

    PubMed

    Gerber, G B; Wick, R R; Kellerer, A M; Hopewell, J W; Di Majo, V; Dudoignon, N; Gössner, W; Stather, J

    2006-01-01

    The European Radiobiology Archives (ERA), supported by the European Commission and the European Late Effect Project Group (EULEP), together with the US National Radiobiology Archives (NRA) and the Japanese Radiobiology Archives (JRA), have collected all information still available on long-term animal experiments, including some selected human studies. The archives consist of a database in Microsoft Access, a website, databases of references and information on the use of the database. At present, the archives contain a description of the exposure conditions, animal strains, etc. from approximately 350,000 individuals; data on survival and pathology are available from approximately 200,000 individuals. Care has been taken to render pathological diagnoses compatible among different studies and to allow the lumping of pathological diagnoses into more general classes. 'Forms' in Access with an underlying computer code facilitate the use of the database. This paper describes the structure and content of the archives and illustrates a possible analysis of such data.

  4. Local intensity area descriptor for facial recognition in ideal and noise conditions

    NASA Astrophysics Data System (ADS)

    Tran, Chi-Kien; Tseng, Chin-Dar; Chao, Pei-Ju; Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Lee, Tsair-Fwu

    2017-03-01

    We propose a local texture descriptor, the local intensity area descriptor (LIAD), which we apply to human facial recognition in ideal and noisy conditions. Each facial image is divided into small regions from which LIAD histograms are extracted and concatenated into a single feature vector to represent the facial image. The recognition is performed using a nearest neighbor classifier with histogram intersection and chi-square statistics as dissimilarity measures. Experiments were conducted with LIAD using the ORL database of faces (Olivetti Research Laboratory, Cambridge), the Face94 face database, the Georgia Tech face database, and the FERET database. The results demonstrated the improved accuracy of our proposed descriptor compared to conventional descriptors [local binary pattern (LBP), uniform LBP, local ternary pattern, histogram of oriented gradients, and local directional pattern]. Moreover, the proposed descriptor was less sensitive to noise and had low histogram dimensionality. Thus, it is expected to be a powerful texture descriptor that can be used for various computer vision problems.
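    The classification step described above, nearest neighbor search over descriptor histograms with histogram intersection or chi-square dissimilarity, can be sketched as follows. The 8-bin histograms are invented placeholders standing in for real LIAD feature vectors:

```python
import numpy as np

def hist_intersection_dissim(h1, h2):
    """1 minus the histogram intersection of two L1-normalized histograms."""
    return 1.0 - np.sum(np.minimum(h1, h2))

def chi_square_dissim(h1, h2, eps=1e-10):
    """Chi-square statistic between two histograms (eps avoids 0/0)."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def nearest_neighbor(query, gallery, labels, dissim):
    """Return the label of the gallery histogram closest to the query."""
    dists = [dissim(query, g) for g in gallery]
    return labels[int(np.argmin(dists))]

# Hypothetical 8-bin descriptor histograms for two enrolled faces.
a = np.array([.30, .20, .10, .10, .10, .10, .05, .05])
b = np.array([.05, .05, .10, .10, .10, .10, .20, .30])
gallery, labels = [a, b], ["person_A", "person_B"]

query = np.array([.28, .22, .10, .10, .10, .10, .05, .05])  # noisy copy of a
print(nearest_neighbor(query, gallery, labels, chi_square_dissim))
```

    In the full pipeline the query vector is a concatenation of per-region histograms, but the distance computations are exactly these.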

  5. Transactional Database Transformation and Its Application in Prioritizing Human Disease Genes

    PubMed Central

    Xiang, Yang; Payne, Philip R.O.; Huang, Kun

    2013-01-01

    Binary (0,1) matrices, commonly known as transactional databases, can represent many kinds of application data, including gene-phenotype data where “1” represents a confirmed gene-phenotype relation and “0” represents an unknown relation. It is natural to ask what information is hidden behind these “0”s and “1”s. Unfortunately, recent matrix completion methods, though very effective in many cases, are less likely to infer something interesting from these (0,1)-matrices. To answer this challenge, we propose IndEvi, a very succinct and effective algorithm to perform independent-evidence-based transactional database transformation. Each entry of a (0,1)-matrix is evaluated by “independent evidence” (maximal supporting patterns) extracted from the whole matrix for this entry. An entry's own value, whether 0 or 1, has no effect on its independent evidence. The experiment on a gene-phenotype database shows that our method is highly promising in ranking candidate genes and predicting unknown disease genes. PMID:21422495
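    IndEvi's maximal-supporting-pattern evidence is more elaborate than can be shown here, but its key idea, scoring an entry from the rest of the matrix while ignoring the entry's own value, can be illustrated with a much simpler shared-profile score. This is a stand-in for intuition only, not the IndEvi algorithm, and the matrix is a made-up example:

```python
import numpy as np

# Rows = genes, columns = phenotypes; 1 = confirmed relation, 0 = unknown.
M = np.array([
    [1, 1, 0, 0],   # gene0
    [1, 1, 1, 0],   # gene1
    [1, 0, 1, 0],   # gene2
    [0, 0, 0, 1],   # gene3
])

def evidence_score(M, g, p):
    """Score entry (g, p) by how strongly genes that share phenotypes
    with gene g are themselves linked to phenotype p.  The entry's own
    value is never consulted, mirroring the 'independent evidence' idea."""
    overlap = M @ M[g]              # shared-phenotype counts vs. gene g
    overlap[g] = 0                  # exclude gene g itself
    support = overlap * M[:, p]     # only genes actually linked to p count
    return support.sum() / max(overlap.sum(), 1)

# Rank gene0's unknown phenotype entries as disease-gene candidates.
scores = {p: evidence_score(M, 0, p) for p in range(4) if M[0, p] == 0}
print(max(scores, key=scores.get))
```

    Here phenotype 2 outranks phenotype 3 for gene0 because gene0's nearest profile neighbors (gene1, gene2) are both linked to phenotype 2, which is the kind of ranking behavior the abstract describes.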

  6. Current Status of NASDA Terminology Database

    NASA Astrophysics Data System (ADS)

    Kato, Akira

    2002-01-01

    The NASDA Terminology Database System provides English and Japanese terms, abbreviations, definitions and reference documents. Recent progress includes a service providing abbreviation data from the NASDA Home Page, and the publication of a revised NASDA bilingual dictionary. Our next efforts to improve the system are (1) to combine our data with the data of the NASA THESAURUS, (2) to add terms from new academic and engineering fields that have begun to intersect with space activities, and (3) to revise the NASDA Definition List. To combine our data with the NASA THESAURUS database we must consider the difference between the database concepts; further effort to select adequate terms is thus required. Terms must be added from other fields to deal with microgravity experiments, human factors and so on. Some examples of new terms to be added have been collected. To revise the NASDA terms definition list, the NASA and ESA definition lists were surveyed and a general concept for revising the NASDA definition list was proposed. I expect these activities will contribute to the IAA dictionary.

  7. The MAJORANA Parts Tracking Database

    NASA Astrophysics Data System (ADS)

    Abgrall, N.; Aguayo, E.; Avignone, F. T.; Barabash, A. S.; Bertrand, F. E.; Brudanin, V.; Busch, M.; Byram, D.; Caldwell, A. S.; Chan, Y.-D.; Christofferson, C. D.; Combs, D. C.; Cuesta, C.; Detwiler, J. A.; Doe, P. J.; Efremenko, Yu.; Egorov, V.; Ejiri, H.; Elliott, S. R.; Esterline, J.; Fast, J. E.; Finnerty, P.; Fraenkle, F. M.; Galindo-Uribarri, A.; Giovanetti, G. K.; Goett, J.; Green, M. P.; Gruszko, J.; Guiseppe, V. E.; Gusev, K.; Hallin, A. L.; Hazama, R.; Hegai, A.; Henning, R.; Hoppe, E. W.; Howard, S.; Howe, M. A.; Keeter, K. J.; Kidd, M. F.; Kochetov, O.; Konovalov, S. I.; Kouzes, R. T.; LaFerriere, B. D.; Leon, J. Diaz; Leviner, L. E.; Loach, J. C.; MacMullin, J.; Martin, R. D.; Meijer, S. J.; Mertens, S.; Miller, M. L.; Mizouni, L.; Nomachi, M.; Orrell, J. L.; O`Shaughnessy, C.; Overman, N. R.; Petersburg, R.; Phillips, D. G.; Poon, A. W. P.; Pushkin, K.; Radford, D. C.; Rager, J.; Rielage, K.; Robertson, R. G. H.; Romero-Romero, E.; Ronquest, M. C.; Shanks, B.; Shima, T.; Shirchenko, M.; Snavely, K. J.; Snyder, N.; Soin, A.; Suriano, A. M.; Tedeschi, D.; Thompson, J.; Timkin, V.; Tornow, W.; Trimble, J. E.; Varner, R. L.; Vasilyev, S.; Vetter, K.; Vorren, K.; White, B. R.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yakushev, E.; Young, A. R.; Yu, C.-H.; Yumatov, V.; Zhitnikov, I.

    2015-04-01

    The MAJORANA DEMONSTRATOR is an ultra-low background physics experiment searching for the neutrinoless double beta decay of 76Ge. The MAJORANA Parts Tracking Database is used to record the history of components used in the construction of the DEMONSTRATOR. The tracking implementation takes a novel approach based on the schema-free database technology CouchDB. Transportation, storage, and processes undergone by parts, such as machining or cleaning, are linked to part records. Parts tracking provides a significant logistics benefit and an important quality assurance reference during construction. In addition, the location history of parts provides an estimate of their exposure to cosmic radiation. A web application for data entry and a radiation exposure calculator have been developed as tools for achieving the extreme radio-purity required for this rare decay search.
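    The exposure estimate from a part's location history amounts to integrating a location-dependent cosmic-ray flux over residence times. A toy version of such a calculator is sketched below; the site names and flux values are invented for illustration and are not actual surface or underground rates:

```python
# Cosmic-ray exposure estimate from a part's location history.
# Flux values are arbitrary illustrative numbers (activation units/day);
# deep underground sites see a drastically reduced muon flux.
FLUX = {"surface_lab": 1.0, "shallow_storage": 0.1, "underground_lab": 1e-6}

def exposure(history):
    """history: list of (location, days) records in chronological order.
    Returns the accumulated exposure, summed over all residence periods."""
    return sum(FLUX[loc] * days for loc, days in history)

part_history = [
    ("surface_lab", 30),        # machining and cleaning above ground
    ("shallow_storage", 100),
    ("underground_lab", 365),   # installed in the low-background lab
]
print(exposure(part_history))   # dominated by the time spent above ground
```

    The real calculator works from the timestamped location records in the parts database, but the accounting is this same weighted sum, which is why minimizing above-ground residence time matters so much for radio-purity.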

  8. Building an Ontology-driven Database for Clinical Immune Research

    PubMed Central

    Ma, Jingming

    2006-01-01

    Clinical studies of the immune response usually generate a large amount of biomedical testing data over a certain period of time. User-friendly data management systems based on relational databases help immunologists and clinicians manage these data fully. On the other hand, the same biological assays, such as ELISPOT and flow cytometric assays, are involved in immunological experiments regardless of the study purpose. The reuse of biological knowledge is one of the driving forces behind ontology-driven data management. An ontology-driven database will therefore help to handle different clinical immune research studies and help immunologists and clinicians readily understand each other's immunological data. We discuss some outlines for building an ontology-driven data management system for clinical immune research (ODMim). PMID:17238637
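
    The knowledge-reuse idea can be sketched briefly: assay concepts are defined once in a shared vocabulary and reused across studies, so data from different trials remain comparable. The class and term names below are illustrative assumptions, not ODMim's actual schema.

```python
# A shared controlled vocabulary of assay concepts, reused across studies.
ASSAY_ONTOLOGY = {
    "ELISPOT": {"is_a": "immunoassay", "measures": "cytokine-secreting cells"},
    "flow_cytometry": {"is_a": "immunoassay", "measures": "cell populations"},
}

class Measurement:
    """A study result annotated with an ontology term (illustrative sketch)."""

    def __init__(self, study_id, assay_term, value):
        if assay_term not in ASSAY_ONTOLOGY:  # enforce the controlled vocabulary
            raise ValueError(f"unknown assay term: {assay_term}")
        self.study_id, self.assay_term, self.value = study_id, assay_term, value

    def describe(self):
        concept = ASSAY_ONTOLOGY[self.assay_term]
        return f"{self.assay_term} ({concept['is_a']}): {self.value}"
```

    Rejecting terms outside the ontology is what keeps data from separate clinical studies mutually intelligible, which is the point the abstract makes about reuse of biological knowledge.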

  9. A georeferenced Landsat digital database for forest insect-damage assessment

    NASA Technical Reports Server (NTRS)

    Williams, D. L.; Nelson, R. F.; Dottavio, C. L.

    1985-01-01

    In 1869, the gypsy moth caterpillar was introduced into the U.S. in connection with the experiments of a French scientist. Throughout the insect's period of establishment, gypsy moth populations have periodically increased to epidemic proportions. For programs concerned with preventing the insect's spread, it would be highly desirable to employ a survey technique that could provide timely, accurate, and standardized assessments at a reasonable cost. A project was therefore initiated to demonstrate the usefulness of satellite remotely sensed data for monitoring the insect defoliation of hardwood forests in Pennsylvania. A major effort within this project involved the development of a map-registered Landsat digital database. A complete description of the database developed is provided along with information regarding the employed data management system.

  10. Indexing Temporal XML Using FIX

    NASA Astrophysics Data System (ADS)

    Zheng, Tiankun; Wang, Xinjun; Zhou, Yingchun

    XML has become an important standard for the description and exchange of information. It is of practical significance to introduce temporal information on this basis, because time has become an important property in all walks of life. A database of this kind can track document history and recover information to its state at any earlier time, and is called a temporal XML database. We propose a new feature vector based on FIX, a feature-based XML index, and build an index on the temporal XML database using a B+ tree, denoted TFIX. We also put forward a new query algorithm upon it for temporal queries. Our experiments show that this index has better performance than other kinds of XML indexes. The index can satisfy all TXPath queries with depth up to K(>0).
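
    The core temporal-indexing idea can be sketched as follows: each XML node version carries a validity interval [start, end), and an ordered index answers "which versions were valid at time t?". The class below is a stand-in for the B+ tree used by TFIX (a sorted list, not the paper's feature-vector construction; all names are illustrative).

```python
import bisect

class TemporalIndex:
    """Toy ordered index over node-version validity intervals [start, end)."""

    def __init__(self):
        self._entries = []  # (start, end, node_id), kept sorted by start

    def insert(self, node_id, start, end):
        bisect.insort(self._entries, (start, end, node_id))

    def valid_at(self, t):
        # every candidate starts at or before t; keep those not yet expired
        i = bisect.bisect_right(self._entries, (t, float("inf"), ""))
        return [nid for s, e, nid in self._entries[:i] if t < e]
```

    A real B+ tree gives the same ordered access with logarithmic page-oriented lookups; the sorted-list version only shows the query shape a temporal XML index must support.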

  11. CORAL Server and CORAL Server Proxy: Scalable Access to Relational Databases from CORAL Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valassi, A.; /CERN; Bartoldus, R.

    The CORAL software is widely used at CERN by the LHC experiments to access the data they store on relational databases, such as Oracle. Two new components have recently been added to implement a model involving a middle-tier 'CORAL server' deployed close to the database and a tree of 'CORAL server proxies', providing data caching and multiplexing, deployed close to the client. A first implementation of the two new components, released in the summer of 2009, is now deployed in the ATLAS online system to read the data needed by the High Level Trigger, allowing the configuration of a farm of several thousand processes. This paper reviews the architecture of the software, its development status and its usage in ATLAS.
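
    The caching-and-multiplexing idea behind the proxy tier can be shown in a few lines. This is an illustrative sketch, not CORAL's actual API: many clients issue the same read-only query, but only a cache miss ever reaches the database.

```python
class CachingProxy:
    """Toy read-only caching proxy in front of a database backend."""

    def __init__(self, backend):
        self.backend = backend       # callable: query string -> result
        self.cache = {}
        self.backend_calls = 0       # how many queries actually hit the database

    def query(self, sql):
        if sql not in self.cache:
            self.backend_calls += 1  # only a cache miss is forwarded
            self.cache[sql] = self.backend(sql)
        return self.cache[sql]
```

    With a farm of several thousand trigger processes all requesting the same configuration data, collapsing identical queries like this is what keeps the load on the central database bounded.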

  12. Recent Experiments with INQUERY

    DTIC Science & Technology

    1995-11-01

    were conducted with a version of the INQUERY information retrieval system. INQUERY is based on the Bayesian inference network retrieval model. It is...corpus-based query expansion. For TREC, a subset of the adhoc document set was used to build the InFinder database. None of the...experiments that showed significant improvements in retrieval effectiveness when document rankings based on the entire document text are combined with

  13. Pathway Distiller - multisource biological pathway consolidation

    PubMed Central

    2012-01-01

    Background One method to understand and evaluate an experiment that produces a large set of genes, such as a gene expression microarray analysis, is to identify overrepresentation or enrichment for biological pathways. Because pathways are able to functionally describe the set of genes, much effort has been made to collect curated biological pathways into publicly accessible databases. When combining disparate databases, highly related or redundant pathways exist, making their consolidation into pathway concepts essential. This facilitates unbiased, comprehensive yet streamlined analysis of experiments that result in large gene sets. Methods After gene set enrichment finds representative pathways for large gene sets, pathways are consolidated into representative pathway concepts. Three complementary but different methods of pathway consolidation are explored. Enrichment Consolidation combines the set of pathways enriched for the signature gene list by iteratively combining enriched pathways with other pathways that have similar signature gene sets; Weighted Consolidation utilizes a Protein-Protein Interaction network-based gene-weighting approach that finds clusters of both enriched and non-enriched pathways limited to the experiments' resultant gene list; and finally the de novo Consolidation method uses several measurements of pathway similarity to find static pathway clusters independent of any given experiment. Results We demonstrate that the three consolidation methods provide unified yet different functional insights into a resultant gene set derived from a genome-wide profiling experiment. Results from the methods are presented, demonstrating their applications in biological studies and comparing them with a pathway web-based framework that also combines several pathway databases. Additionally, a web-based consolidation framework that encompasses all three methods discussed in this paper, Pathway Distiller (http://cbbiweb.uthscsa.edu/PathwayDistiller), is established to allow researchers access to the methods and example microarray data described in this manuscript, and the ability to analyze their own gene lists using our unique consolidation methods. Conclusions By combining several pathway systems, implementing different but complementary pathway consolidation methods, and providing a user-friendly web-accessible tool, we have enabled users to extract functional explanations of their genome-wide experiments. PMID:23134636
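
    The Enrichment Consolidation idea can be sketched as iterative merging of pathways whose gene sets overlap strongly. The merge criterion below (Jaccard similarity above a fixed threshold) is an assumed simplification; Pathway Distiller's actual criterion may differ.

```python
def jaccard(a, b):
    """Jaccard similarity of two gene sets."""
    return len(a & b) / len(a | b)

def consolidate(pathways, threshold=0.5):
    """pathways: dict of name -> set of genes; returns merged pathway concepts
    as (set of member pathway names, union of their genes) pairs."""
    concepts = [({name}, set(genes)) for name, genes in pathways.items()]
    merged = True
    while merged:                      # repeat until no pair can be merged
        merged = False
        for i in range(len(concepts)):
            for j in range(i + 1, len(concepts)):
                if jaccard(concepts[i][1], concepts[j][1]) >= threshold:
                    names = concepts[i][0] | concepts[j][0]
                    genes = concepts[i][1] | concepts[j][1]
                    concepts[i] = (names, genes)
                    del concepts[j]
                    merged = True
                    break
            if merged:
                break
    return concepts
```

    Two pathways sharing most of their signature genes collapse into one concept, while an unrelated pathway survives as its own concept, which is the streamlining effect the abstract describes.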

  14. Pathway Distiller - multisource biological pathway consolidation.

    PubMed

    Doderer, Mark S; Anguiano, Zachry; Suresh, Uthra; Dashnamoorthy, Ravi; Bishop, Alexander J R; Chen, Yidong

    2012-01-01

    One method to understand and evaluate an experiment that produces a large set of genes, such as a gene expression microarray analysis, is to identify overrepresentation or enrichment for biological pathways. Because pathways are able to functionally describe the set of genes, much effort has been made to collect curated biological pathways into publicly accessible databases. When combining disparate databases, highly related or redundant pathways exist, making their consolidation into pathway concepts essential. This facilitates unbiased, comprehensive yet streamlined analysis of experiments that result in large gene sets. After gene set enrichment finds representative pathways for large gene sets, pathways are consolidated into representative pathway concepts. Three complementary but different methods of pathway consolidation are explored. Enrichment Consolidation combines the set of pathways enriched for the signature gene list by iteratively combining enriched pathways with other pathways that have similar signature gene sets; Weighted Consolidation utilizes a Protein-Protein Interaction network-based gene-weighting approach that finds clusters of both enriched and non-enriched pathways limited to the experiments' resultant gene list; and finally the de novo Consolidation method uses several measurements of pathway similarity to find static pathway clusters independent of any given experiment. We demonstrate that the three consolidation methods provide unified yet different functional insights into a resultant gene set derived from a genome-wide profiling experiment. Results from the methods are presented, demonstrating their applications in biological studies and comparing them with a pathway web-based framework that also combines several pathway databases. Additionally, a web-based consolidation framework that encompasses all three methods discussed in this paper, Pathway Distiller (http://cbbiweb.uthscsa.edu/PathwayDistiller), is established to allow researchers access to the methods and example microarray data described in this manuscript, and the ability to analyze their own gene lists using our unique consolidation methods. By combining several pathway systems, implementing different but complementary pathway consolidation methods, and providing a user-friendly web-accessible tool, we have enabled users to extract functional explanations of their genome-wide experiments.

  15. Statistical analysis of microgravity experiment performance using the degrees of success scale

    NASA Technical Reports Server (NTRS)

    Upshaw, Bernadette; Liou, Ying-Hsin Andrew; Morilak, Daniel P.

    1994-01-01

    This paper describes an approach to identify factors that significantly influence microgravity experiment performance. Investigators developed the 'degrees of success' scale to provide a numerical representation of success. A degree of success was assigned to 293 microgravity experiments. Experiment information, including the degree-of-success rankings and the factors for analysis, was compiled into a database. Through an analysis of variance, nine significant factors in microgravity experiment performance were identified. The frequencies of these factors are presented along with the average degree of success at each level. A preliminary discussion of the relationship between the significant factors and the degree of success is presented.
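
    The analysis-of-variance step can be illustrated with the textbook one-way F statistic: scores are grouped by a factor's levels, and between-group variance is compared with within-group variance. This is a generic sketch with made-up numbers, not the authors' actual analysis or data.

```python
def one_way_anova_F(groups):
    """One-way ANOVA F statistic; groups is a list of score lists, one per
    factor level (e.g. degree-of-success scores split by a candidate factor)."""
    all_scores = [x for g in groups for x in g]
    n, k = len(all_scores), len(groups)
    grand_mean = sum(all_scores) / n
    # between-group sum of squares: how far each level mean sits from the grand mean
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # within-group sum of squares: scatter of scores around their own level mean
    ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ssb / (k - 1)) / (ssw / (n - k))
```

    A large F relative to the F distribution's critical value flags the factor as significantly associated with the degree of success, which is how the nine factors in the study would have been singled out.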

  16. National Institutes of Health Pathways to Prevention Workshop: Methods for Evaluating Natural Experiments in Obesity.

    PubMed

    Emmons, Karen M; Doubeni, Chyke A; Fernandez, Maria E; Miglioretti, Diana L; Samet, Jonathan M

    2018-06-05

    On 5 and 6 December 2017, the National Institutes of Health (NIH) convened the Pathways to Prevention Workshop: Methods for Evaluating Natural Experiments in Obesity to identify the status of methods for assessing natural experiments to reduce obesity, areas in which these methods could be improved, and research needs for advancing the field. This article considers findings from a systematic evidence review on methods for evaluating natural experiments in obesity, workshop presentations by experts and stakeholders, and public comment. Research gaps are identified, and recommendations related to 4 key issues are provided. Recommendations on population-based data sources and data integration include maximizing use and sharing of existing surveillance and research databases and ensuring significant effort to integrate and link databases. Recommendations on measurement include use of standardized and validated measures of obesity-related outcomes and exposures, systematic measurement of co-benefits and unintended consequences, and expanded use of validated technologies for measurement. Study design recommendations include improving guidance, documentation, and communication about methods used; increasing use of designs that minimize bias in natural experiments; and more carefully selecting control groups. Cross-cutting recommendations target activities that the NIH and other funders might undertake to improve the rigor of natural experiments in obesity, including training and collaboration on modeling and causal inference, promoting the importance of community engagement in the conduct of natural experiments, ensuring maintenance of relevant surveillance systems, and supporting extended follow-up assessments for exemplar natural experiments. To combat the significant public health threat posed by obesity, researchers should continue to take advantage of natural experiments. The recommendations in this report aim to strengthen evidence from such studies.

  17. The electronic Rothamsted Archive (e-RA), an online resource for data from the Rothamsted long-term experiments.

    PubMed

    Perryman, Sarah A M; Castells-Brooke, Nathalie I D; Glendining, Margaret J; Goulding, Keith W T; Hawkesford, Malcolm J; Macdonald, Andy J; Ostler, Richard J; Poulton, Paul R; Rawlings, Christopher J; Scott, Tony; Verrier, Paul J

    2018-05-15

    The electronic Rothamsted Archive, e-RA (www.era.rothamsted.ac.uk), provides a permanent managed database to both securely store and disseminate data from Rothamsted Research's long-term field experiments (since 1843) and meteorological stations (since 1853). Both historical and contemporary data are made available via this online database, which provides the scientific community with access to a unique continuous record of agricultural experiments and weather measured since the mid-19th century. Qualitative information, such as treatment and management practices, plans and soil information, accompanies the data and is made available on the e-RA website. e-RA was released externally to the wider scientific community in 2013 and this paper describes its development, content, curation and the access process for data users. Case studies illustrate the diverse applications of the data, including its original intended purposes and recent unforeseen applications. Usage monitoring demonstrates that the data are of increasing interest. Future developments, including adopting FAIR data principles, are proposed as the resource is increasingly recognised as a unique archive of data relevant to sustainable agriculture, agroecology and the environment.

  18. SAMSA2: a standalone metatranscriptome analysis pipeline.

    PubMed

    Westreich, Samuel T; Treiber, Michelle L; Mills, David A; Korf, Ian; Lemay, Danielle G

    2018-05-21

    Complex microbial communities are an area of growing interest in biology. Metatranscriptomics allows researchers to quantify microbial gene expression in an environmental sample via high-throughput sequencing. Metatranscriptomic experiments are computationally intensive because the experiments generate a large volume of sequence data and each sequence must be compared with reference sequences from thousands of organisms. SAMSA2 is an upgrade to the original Simple Annotation of Metatranscriptomes by Sequence Analysis (SAMSA) pipeline that has been redesigned for standalone use on a supercomputing cluster. SAMSA2 is faster due to the use of the DIAMOND aligner, and more flexible and reproducible because it uses local databases. SAMSA2 is available with detailed documentation, and example input and output files along with examples of master scripts for full pipeline execution. SAMSA2 is a rapid and efficient metatranscriptome pipeline for analyzing large RNA-seq datasets in a supercomputing cluster environment. SAMSA2 provides simplified output that can be examined directly or used for further analyses, and its reference databases may be upgraded, altered or customized to fit the needs of any experiment.
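
    A typical post-alignment step in such a pipeline is summarising which organisms the reads hit. The sketch below counts reads per organism from rows shaped like a tab-separated aligner output (query ID, then subject ID); the column layout and the organism-after-'|' convention are assumptions for illustration, not SAMSA2's actual output format.

```python
from collections import Counter

def organism_counts(alignment_rows):
    """alignment_rows: iterable of tab-separated lines with the subject ID in
    column 2; the organism name is assumed to follow a '|' in that field."""
    counts = Counter()
    for line in alignment_rows:
        subject = line.split("\t")[1]
        organism = subject.split("|")[-1]   # keep only the organism label
        counts[organism] += 1
    return counts
```

    Aggregations like this turn millions of per-read alignments into the compact per-organism (or per-function) tables that downstream statistical comparisons actually consume.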

  19. [Benefits of large healthcare databases for drug risk research].

    PubMed

    Garbe, Edeltraut; Pigeot, Iris

    2015-08-01

    Large electronic healthcare databases have become an important worldwide data resource for post-approval drug safety research. Signal generation methods and drug safety studies based on these data facilitate the prospective monitoring of drug safety after approval, as recently required by EU law and the German Medicines Act. Despite its large size, a single healthcare database may include too few patients for the study of rarely used drugs or the investigation of very rare drug risks. For that reason, efforts have been made in the United States to develop models that link data from different electronic healthcare databases for monitoring the safety of medicines after authorization: (i) the Sentinel Initiative and (ii) the Observational Medical Outcomes Partnership (OMOP). In July 2014, the pilot project Mini-Sentinel included a total of 178 million people from 18 different US databases. The merging of the data is based on a distributed data network with a common data model. In the European Network of Centres for Pharmacoepidemiology and Pharmacovigilance (ENCePP) there has been no comparable merging of data from different databases; however, first experiences have been gained in various EU drug safety projects. In Germany, the data of the statutory health insurance providers constitute the most important resource for establishing a large healthcare database. Their use for this purpose has so far been severely restricted by the Code of Social Law (Section 75, Book 10). A reform of this section is therefore urgently needed.

  20. Improving links between literature and biological data with text mining: a case study with GEO, PDB and MEDLINE.

    PubMed

    Névéol, Aurélie; Wilbur, W John; Lu, Zhiyong

    2012-01-01

    High-throughput experiments and bioinformatics techniques are creating an exploding volume of data that are becoming overwhelming to keep track of for biologists and researchers who need to access, analyze and process existing data. Much of the available data are being deposited in specialized databases, such as the Gene Expression Omnibus (GEO) for microarrays or the Protein Data Bank (PDB) for protein structures and coordinates. Data sets are also being described by their authors in publications archived in literature databases such as MEDLINE and PubMed Central. Currently, the curation of links between biological databases and the literature mainly relies on manual labour, which makes it a time-consuming and daunting task. Herein, we analysed the current state of link curation between GEO, PDB and MEDLINE. We found that the link curation is heterogeneous depending on the sources and databases involved, and that overlap between sources is low, <50% for PDB and GEO. Furthermore, we showed that text-mining tools can automatically provide valuable evidence to help curators broaden the scope of articles and database entries that they review. As a result, we made recommendations to improve the coverage of curated links, as well as the consistency of information available from different databases while maintaining high-quality curation. Database URLs: http://www.ncbi.nlm.nih.gov/PubMed, http://www.ncbi.nlm.nih.gov/geo/, http://www.rcsb.org/pdb/
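
    The text-mining evidence the abstract mentions often starts with spotting database accession numbers in article text. The patterns below are simplified approximations of real GEO series and PDB identifier formats, shown only to illustrate the approach, not the authors' actual system.

```python
import re

# Simplified accession patterns: GEO series IDs look like GSE followed by
# digits; PDB IDs are four characters starting with a digit 1-9.
GEO_SERIES = re.compile(r"\bGSE\d+\b")
PDB_ID = re.compile(r"\b[1-9][A-Za-z0-9]{3}\b")

def find_accessions(text):
    """Return candidate GEO and PDB accessions mentioned in free text."""
    return {"GEO": GEO_SERIES.findall(text),
            "PDB": PDB_ID.findall(text)}
```

    Candidate mentions found this way would still need validation against the target database, since a pattern as loose as the PDB one also matches ordinary four-character tokens; that disambiguation burden is one reason curated links remain valuable.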
